The new year brings some exciting news for our automotive customers developing advanced driver assistance systems (ADAS) with the RT‑Range. Following the completion of final checks in December, we have now released a firmware update with a time-saving and powerful new feature: Multiple Sensor Points.
The Multiple Sensor Points feature relieves the pressure on engineers by reducing the overall test time. It provides a quick and easy method of simultaneously validating up to 12 separate RADAR, LiDAR or vision-based sensors. These sensor types are already widely used in different ADAS solutions because they are the best way for vehicles to generate a real-time picture of their surroundings, both in city and inter-urban settings. But as the complexity of ADAS grows each year, so too does the number of sensors required to generate a reliable picture of what is going on around a vehicle.
The bad old days
In the past, this has meant collecting sensor data individually, then testing it to see if any targets were present in each sensor’s detection area. If they were, RT-Range S measurements could then be compared to the sensor’s own data for validation. While not a particularly challenging task for one or two sensors, it adds a delay to the process that increases significantly as more sensors need to be validated.
An easy way of working with sensors
The solution OxTS have created is simple but highly effective. During the configuration stage, the location and relative heading of each sensor are defined, along with its field-of-view angle. This enables the RT‑Range S to calculate each sensor's orientation in real time. A minimum and maximum detection range can then be added to create an annulus segment that represents the detection area of each sensor.
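Conceptually, the detection-area test reduces to a range check plus a bearing check. The sketch below is purely illustrative (the function name, parameters, and units are our own assumptions, not the RT-Range configuration format), but it shows how a point can be tested against an annulus segment defined by a sensor's position, heading, field-of-view angle, and minimum/maximum range:

```python
import math

def in_detection_area(sensor_x, sensor_y, heading_deg, fov_deg,
                      min_range, max_range, target_x, target_y):
    """Return True if the target point lies inside the sensor's
    annulus-segment detection area (2D sketch, degrees and metres)."""
    dx, dy = target_x - sensor_x, target_y - sensor_y
    rng = math.hypot(dx, dy)
    # Range check: inside the annulus between min and max range
    if not (min_range <= rng <= max_range):
        return False
    # Bearing check: within half the field-of-view either side of heading
    bearing = math.degrees(math.atan2(dy, dx))
    off_axis = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    return abs(off_axis) <= fov_deg / 2.0

# A forward-facing sensor with a 90-degree field of view, 1-50 m range:
print(in_detection_area(0, 0, 0, 90, 1, 50, 10, 0))   # directly ahead
print(in_detection_area(0, 0, 0, 90, 1, 50, 0, 10))   # off to the side
```

In practice the RT-Range performs this test against the target's perimeter polygon rather than a single point, but the same annulus-segment geometry applies.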
Once the configuration is uploaded to the RT-Range, it continuously tests each Sensor Point’s detection area for polygon-based targets and generates measurements relative to any it finds. This is one of the reasons why feedback during testing from engineers has been so positive; they’re instantly able to see real-time results and know that measurements generated for each Sensor Point are being logged internally and output via Ethernet and CAN.
Instant verification with new measurements
When a target is inside a detection area, three new measurements are created for each configured Sensor Point. The most obvious of these is the range from the sensor to the closest point on the target's perimeter polygon, which can be compared directly against the sensor's own data. The RT‑Range also calculates the percentage of the sensor's field of view (by width) occupied by that target, and the percentage of the target that is visible.
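The first of these measurements is a standard point-to-polygon distance. As a minimal sketch (our own illustrative code, not the RT-Range implementation), the range from the sensor to the closest point on a target's perimeter polygon can be found by projecting the sensor position onto each polygon edge and taking the minimum:

```python
import math

def range_to_polygon(sx, sy, polygon):
    """Minimum 2D distance from sensor position (sx, sy) to the
    perimeter of a target polygon given as a list of (x, y) vertices."""
    best = float("inf")
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]  # edge from vertex i to the next
        ex, ey = x2 - x1, y2 - y1
        seg_len_sq = ex * ex + ey * ey
        if seg_len_sq == 0.0:
            t = 0.0  # degenerate edge: both vertices coincide
        else:
            # Project the sensor onto the edge, clamped to the segment
            t = max(0.0, min(1.0, ((sx - x1) * ex + (sy - y1) * ey) / seg_len_sq))
        px, py = x1 + t * ex, y1 + t * ey
        best = min(best, math.hypot(sx - px, sy - py))
    return best

# Sensor at the origin, square target 2 m ahead:
square = [(2, -1), (4, -1), (4, 1), (2, 1)]
print(range_to_polygon(0, 0, square))  # closest perimeter point is (2, 0)
```

The field-of-view occupancy and target-visibility percentages build on the same polygon geometry, comparing the angular width the target subtends at the sensor against the sensor's field of view and the target's full outline.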
Knowing which target is visible to which sensor is important when validating sensors in multi-vehicle or complex scenarios such as crossing and junction tests. A pedestrian may be obscured behind a parked vehicle, or a vehicle may be partly obscured by a pedestrian, depending on their proximity to the sensor. The real-time visibility calculation performed by the RT‑Range provides a quick and easy way of seeing this for every sensor simultaneously. Combined with the other measurements, it gives engineers a powerful, time-saving new tool.
Find out more
Further details about Multiple Sensor Points can be found on the website.
Alternatively, contact us for more information or a quotation.