Sensor fusion is a term that’s becoming more prevalent in pretty much every sector that makes use of inertial navigation technology. In this blog, we’re digging into what it is, and how it helps improve navigation and localisation data quality. So, what is sensor fusion?
Sensor fusion: a definition
Sensor fusion is the act of taking data from two or more sensors that describe the same event, and combining it into a single dataset.
In most online discussions, the term refers specifically to the addition of new sensors to an existing setup – adding a LiDAR to an INS, for instance. But it's worth remembering that even a basic INS is fusing data from two sensors: it combines the data from the GNSS receivers built into the INS with the data from the inertial measurement unit (IMU) to create a single output.
Incidentally, this is why lots of literature about inertial navigation refers to a “position estimate” when talking about the position data the INS has produced. Technically, the data can never be guaranteed to be 100% accurate – it’s an estimate based on various factors. In reality, though, the data is incredibly accurate – especially if you’re working with an OxTS GNSS/INS, which is the product of more than 25 years of expertise fusing GNSS and IMU data for a wide range of applications.
What’s the benefit of sensor fusion?
There are two main benefits to sensor fusion. The first is making the final data output more reliable. Every type of sensor has different performance characteristics and different weaknesses. By combining data from more than one sensor, you can be more confident that the final output is accurate more of the time.
Consider a regular INS. The IMU inside is vulnerable to position drift over time – every IMU is. But GNSS isn’t vulnerable to drift. So, by fusing the two data streams together, you can use GNSS position updates to compensate for IMU drift and make your final data more reliable.
The relationship works the other way, too. GNSS might not drift like an IMU, but if your INS loses satellite signal – if it goes through a tunnel or under some trees, for instance – then the location data it gives you becomes very unreliable. The IMU, however, isn’t affected by any of that. So when the GNSS signal drops, the data from the IMU compensates for the outage and keeps your data output reliable.
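To make that two-way relationship concrete, here’s a deliberately simplified Python sketch – illustrative only, not OxTS code, and with all the numbers made up. An uncorrected accelerometer bias makes the dead-reckoned position drift, and periodic GNSS fixes pull the estimate back:

```python
# Illustrative only: 1D dead reckoning from an accelerometer,
# corrected by occasional GNSS position fixes.
dt = 0.01                    # IMU sample period: 100 Hz
pos, vel = 0.0, 0.0          # dead-reckoned position and velocity
bias = 0.02                  # small uncorrected accelerometer bias (m/s^2)

for step in range(10_000):   # 100 seconds; the vehicle is actually stationary
    measured_accel = 0.0 + bias      # IMU output = truth + bias
    vel += measured_accel * dt       # integrate once: velocity drifts
    pos += vel * dt                  # integrate twice: position drifts

    if step % 100 == 0:              # a 1 Hz GNSS fix, when available
        gnss_pos = 0.0               # GNSS doesn't drift over time
        # Crude blend towards the fix; a real INS instead weights the
        # two sources by their uncertainty, using a Kalman filter.
        pos = 0.8 * gnss_pos + 0.2 * pos
        vel *= 0.5
```

When the GNSS fix is unavailable (imagine skipping the `if` block for a stretch of steps), the IMU keeps producing a usable estimate – it just drifts until the next fix arrives.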
The second benefit of sensor fusion is that it increases the versatility of a localisation solution. These days, engineers are working on fusing all manner of sensor data together to create localisation solutions that give reliable data in a wide variety of environments. Sensors including LiDAR, camera-based odometry, and wheel speed sensors are all being fused with inertial data to overcome navigation challenges.
As a result, localisation solutions that fuse sensor data can function in a wider variety of environments. If you’re building a solution as your business’ product, that versatility opens up new potential markets; if you’re building a solution for use in the field, the added versatility gives you the ability to operate in more places.
To conclude, building a localisation solution that incorporates more than one sensor will give you a more robust and reliable product, capable of operating with or without GNSS.
How does sensor fusion work?
At its heart, most sensor fusion relies on a Kalman filter. A Kalman filter is an algorithm that allows a system to estimate a variable (such as position) based on a variety of other data points (such as data from an IMU, GNSS, and a wheel speed sensor). Crucially, the Kalman filter is given information about the reliability of each measurement, which allows it to disregard data that is likely to be erroneous.
Roughly speaking, it works like this:
- The filter uses an existing model of the vehicle it’s measuring to predict its position.
- A sensor sends data to the filter about the vehicle’s position, and the reliability (or uncertainty) of the measurement.
- The filter evaluates that data and decides whether to use it to update its prediction or to disregard it.
So the more sensors you have sending reliable data into the Kalman filter, the more likely it is that the filter’s prediction will be accurate.
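As a rough illustration of those three steps, here is a minimal one-dimensional Kalman filter in Python. It tracks a single position state, and the noise values and the simple gating test are assumptions made for this example – a real INS filter tracks many more states (velocity, attitude, sensor biases), but the predict/update cycle is the same:

```python
x = 0.0      # state estimate: position (m)
P = 1.0      # variance (uncertainty) of that estimate
Q = 0.01     # process noise: how much the motion model can be trusted

def predict():
    """Step 1: the motion model predicts the next state (here:
    'the vehicle stays where it is'), which grows our uncertainty."""
    global P
    P += Q

def update(z, R, gate=9.0):
    """Steps 2-3: fold in a measurement z with variance R,
    disregarding it if it's implausibly far from the prediction."""
    global x, P
    innovation = z - x           # how surprising is the measurement?
    S = P + R                    # combined uncertainty
    if innovation ** 2 / S > gate:
        return                   # likely erroneous: disregard it
    K = P / S                    # Kalman gain: weight on the measurement
    x += K * innovation
    P *= (1.0 - K)

predict()
update(z=1.2, R=0.25)            # e.g. a GNSS position, variance 0.25 m^2
```

The gate in `update()` is the “disregard” step: a measurement that lands implausibly far from the prediction, relative to the combined uncertainty, is thrown away rather than being allowed to corrupt the estimate.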
Before the data gets to the Kalman filter, though, it has to be translated into a common frame of reference. Different sensors might report the same movement in different ways, using different frames of reference. For instance, a wheel speed sensor might report movement as “forward velocity = 30 m/s” while a LiDAR might refer to the same movement as “x-axis velocity = 108 km/h”. To compare all the data, the sensor fusion engine needs to do four things (sketched in code after this list):
- Transform the data into a common frame of reference
- Transform it into a common unit of measurement
- Account for the different positions of the sensors on the vehicle (also known as a lever arm)
- Give each measurement a covariance (the uncertainty of the measurement)
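Here’s a rough Python sketch of those four steps for a single velocity measurement. The rotation, lever arm, and covariance values are placeholders invented for the example – this is not the GAD Interface API, just the shape of the pre-processing it takes care of:

```python
import numpy as np

# Hypothetical sensor installation parameters.
R_sensor_to_body = np.eye(3)           # rotation: sensor frame -> body frame
lever_arm = np.array([1.5, 0.0, 0.3])  # sensor's position on the vehicle (m)
omega = np.array([0.0, 0.0, 0.1])      # vehicle angular rate (rad/s)

# Steps 1-2: transform a LiDAR velocity into the body frame, in m/s.
v_lidar_kmh = np.array([108.0, 0.0, 0.0])
v_body = R_sensor_to_body @ (v_lidar_kmh / 3.6)   # 108 km/h -> 30 m/s

# Step 3: account for the lever arm. A sensor mounted away from the
# vehicle's reference point sees extra velocity whenever the vehicle
# rotates, so that contribution is subtracted out.
v_reference = v_body - np.cross(omega, lever_arm)

# Step 4: attach a covariance so the filter knows how much to trust it.
covariance = np.diag([0.1, 0.1, 0.2]) ** 2        # (m/s)^2
measurement = (v_reference, covariance)
```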
In an OxTS system, that is all controlled through our GAD Interface, which lets you define each of those parameters. Let’s now look at how sensor fusion is used in different industries.
Sensor fusion in automotive testing
In automotive testing, sensor fusion is primarily used in two environments: the open road, and indoors. On the open road, a combination of large distances and varying environments such as cities means that traditional GNSS navigation doesn’t always provide the accuracy required for compliance with standards such as NCAP. Sensor fusion improves the navigation output in problem areas to ensure test data is still accurate enough to be useful.
Indoor testing, of course, has to take place without any GNSS signal at all. In these environments, IMU data must be aided by data from one or more other sensors; otherwise, drift will render the final output almost useless. Sensor fusion techniques provide the INS with other sources of information that it can use to estimate its position.
Common sensors used:
- Wheel speed sensors
- UWB positioning systems such as Pozyx 2GAD
- LiDAR
Sensor fusion in autonomous navigation
Autonomous navigation takes many forms, from indoor warehousing robots to robotic fruit pickers. The common challenge across all of these environments is navigating with poor or non-existent GNSS signal. Sensor fusion provides the answer, taking data from perception sensors on the robot and using it to improve the robustness of the localisation solution.
Common sensors used:
- LiDAR
- Camera odometry
- Radar
Sensor fusion in georeferencing
Georeferencing is the act of combining position, navigation and timing information with the data from another survey sensor (LiDAR, radar, etc.), the idea being to provide each point of the survey with accurate localisation information. Without georeferencing, you wouldn’t be able to use the survey data to measure anything reliably, or to accurately state where something in the survey sits on the Earth.
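In its simplest form, georeferencing one survey point is a coordinate transformation: take the point in the sensor’s own frame and apply the position and orientation the INS reported at the moment the point was captured. A minimal Python sketch, with hypothetical values and the lever arm and timing details omitted:

```python
import numpy as np

# Navigation solution at the point's timestamp (hypothetical values).
R_world = np.eye(3)                              # vehicle attitude
t_world = np.array([601234.2, 5712345.8, 87.5])  # vehicle position, e.g. UTM (m)

# A raw LiDAR return, measured in the sensor's own frame (m).
p_sensor = np.array([12.4, -3.1, 1.8])

# Each survey point inherits the navigation solution at its timestamp:
p_world = R_world @ p_sensor + t_world
```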
Many surveys rely on GNSS to provide accurate position and timing data. However, surveys are not always performed in areas with good GNSS visibility, so to prevent position drift and preserve accuracy, other sources of localisation information are needed.
Just as in automotive testing, other sensors can be used to stabilise and improve position accuracy when the GNSS signal is disrupted by buildings, tunnels or tree cover. In fact, OxTS has recently developed a method for using LiDAR data to improve position accuracy in urban canyons.
At OxTS we’ve been working with LiDAR data for a while now. OxTS Georeferencer is a software tool that helps surveyors georeference LiDAR data, and using that expertise we’ve since developed OxTS LiDAR Inertial Odometry, or LIO. OxTS LIO uses distance information from a 360° field-of-view LiDAR sensor to calculate the relative velocity of a survey vehicle. That data is then used to constrain position drift in urban canyons, allowing the user to receive more accurate data for longer periods when GNSS conditions are tough.
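Conceptually – and this is the underlying idea, not the OxTS LIO algorithm itself – if you can register two consecutive LiDAR scans against one another, the translation between them divided by the scan interval is a velocity measurement that can be fed back into the filter:

```python
import numpy as np

# Conceptual sketch only, with made-up numbers: scan-to-scan
# registration (e.g. ICP) yields the motion between two scans.
dt = 0.1                                    # time between scans (s)
translation = np.array([2.95, 0.02, 0.0])   # alignment result (m)

# The implied velocity constrains position drift when GNSS is poor.
v_relative = translation / dt               # ~29.5 m/s forward
```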
Common sensors used:
- LiDAR (using OxTS LIO)
- Wheel speed sensors
- Radar
Sensor fusion for everyone
Historically, sensor fusion has been very complex for engineers to build and manage. At OxTS, though, we believe sensor fusion needs to be as easy as possible to maximise its benefits.
That’s why we created our GAD Interface, and our various out-of-the-box sensor fusion solutions including Pozyx 2GAD and OxTS LIO. Each one either takes care of the sensor fusion element of your project in its entirety, or gives you an environment in which to fuse your sensors with relative ease.
You can take a look at the blogs below to learn more about some of the things we’ve been up to – or talk to our team about your project and your questions. Just contact us using the form below and we’ll be in touch!