
AMR Navigation Spotlight – Perception and Mapping

Blogs September 26, 2024

Welcome to the sixth blog in our AMR navigation spotlight series, where we’ll be focusing on the sensors you can use for perception and mapping. Click here to read the previous blog in the series, which discusses the different ways your AMR’s control stack uses localisation data.

In the context of autonomous mobile robots, “perception” refers to the robot’s ability to sense its surrounding environment. That includes identifying surfaces and objects, and how far away they are. Once the perception module has detected these things, other parts of the AMR’s control stack can use that information to:

  1. Help the robot avoid an obstacle, if one has been detected.
  2. Identify features or objects relevant to a task, such as a charging station or a box that needs moving.
  3. Add the data to a map of the robot’s operational design domain (ODD).

Mapping is often talked about in the same breath as perception, since mapping relies on perception data to work. In the AMR world, “mapping” is the process of building a 2D or 3D model of the AMR’s ODD over time, using perception data. Unlike perception, mapping requires localisation data to work, whether that’s real-time position in a global frame or position relative to an origin point. That localisation piece allows the robot to ensure its map is accurate, and to situate the AMR within the map.

In this blog, we’re looking in a bit more detail at the sensors you can use for perception and mapping, and sharing an example of the sensors we used on our prototype AMR control system.

 

Using cameras as AMR perception sensors

There are lots of different types of cameras out there that you can use as perception sensors on your AMR, ranging from very cheap rolling shutter cameras to very expensive cameras that detect light outside the visible spectrum, such as UV or infra-red. Whatever the type, every camera produces an image of its field of view.

For your AMR to make use of that image, some level of processing will be needed. That might involve analysing the images to identify geometric features, including ones a human eye would struggle to pick out, and then comparing the position of those features across two images to measure movement. Or you might mount two cameras viewing the same area and process both sets of images to estimate how the objects they show are positioned in 3D space. More expensive cameras may include that processing, giving you an out-of-the-box camera ranging solution; cheaper cameras will require you to find (or build yourself, though that is very complicated!) a processing algorithm. There are lots of software toolboxes out there, like OpenCV, that provide this functionality and therefore let you get your camera integration off the ground more easily.
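To give a flavour of what that processing can look like, here’s a minimal sketch using OpenCV’s ORB feature detector to match features between two consecutive frames and measure how far they’ve shifted. The file names and parameter values are placeholders rather than a recommended setup.

```python
import cv2
import numpy as np

# Two consecutive frames from the camera (file names are placeholders).
frame_a = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
frame_b = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe features in each frame using ORB.
orb = cv2.ORB_create(nfeatures=500)
kp_a, des_a = orb.detectAndCompute(frame_a, None)
kp_b, des_b = orb.detectAndCompute(frame_b, None)

# Match features between the two frames, best matches first.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

# The pixel displacement of matched features gives a rough sense of how
# the scene (or the robot) has moved between the two frames.
shifts = [np.subtract(kp_b[m.trainIdx].pt, kp_a[m.queryIdx].pt)
          for m in matches[:50]]
print("Median pixel shift:", np.median(shifts, axis=0))
```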

 

LiDAR as an AMR perception sensor

LiDAR is one of the more common technologies used in ranging sensors for autonomous mobile robots. The simplest way to describe it is that it’s like radar or sonar, but with light. LiDAR works by transmitting pulses of laser light that reflect off the surfaces in an environment. The returning pulses are detected by a receiver, and the sensor uses the time between a pulse being transmitted and received to calculate the distance between the surface and the sensor.

Each returning pulse is known as a point, and by sending out millions of pulses per second the LiDAR builds up what’s known as a point cloud of the world around it.
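The distance calculation itself is straightforward. Here’s an illustrative sketch of the time-of-flight maths (the example return time is made up):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def lidar_range(time_of_flight_s: float) -> float:
    """Distance to a surface from a LiDAR pulse's round-trip time.

    The pulse travels out to the surface and back, so the one-way
    distance is half of (speed of light x time of flight).
    """
    return SPEED_OF_LIGHT * time_of_flight_s / 2.0

# A return detected 66.7 nanoseconds after transmission is roughly 10 m away.
print(f"{lidar_range(66.7e-9):.2f} m")
```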

Different types of LiDAR in use on an AMR

There are two types of LiDAR: a 3D sensor, which produces (as you might expect) a 3D point cloud of the environment; and a 2D sensor, which produces a point cloud that looks more like a floor plan. Whichever you use, the two most common setups on an AMR are either a spinning LiDAR mounted on top of the vehicle, or multiple sensors pointing in different directions. There is no right or wrong way to position your sensors; it depends entirely on the task your AMR is trying to achieve.

LiDAR is popular for a few reasons. Point clouds provide a lot of detail – a single cloud can contain millions of points – making the measurements they provide very reliable. LiDAR is also immune to many of the issues cameras face, since the sensor is “active” (it emits its own light source). The main concerns with LiDAR are highly reflective surfaces like water, snow, or glass. And from a purely practical standpoint, if you use a 2D LiDAR you need to ensure that it’s positioned at the right height to detect whatever you want it to detect!

To balance all this good news, LiDAR is the most expensive technology on the list. A 3D LiDAR that offers good enough performance for autonomous navigation could cost between £5,000 and £8,000. So it’s worth considering whether your application really needs a 3D LiDAR, whether a 2D sensor is enough, or whether you can use a different type of sensor altogether.

 

Radar, sonar and ultrasound perception sensors

Radar, sonar, and ultrasound are also used as perception sensors on AMR platforms, alongside LiDAR and cameras. As we’ve said already, they all work on the same principle as LiDAR: the sensor sends a signal out, the signal bounces off an object and returns, and the sensor uses the time of flight to calculate how far away the object is. To summarise:

  • Radar uses radio waves. It works over longer ranges than LiDAR and works better in rainy and dusty conditions, though it doesn’t provide the same level of detail.
  • Sonar uses sound, and is used underwater – so it’s not really seen outside autonomous boats or submarines.
  • Ultrasound also uses sound, but specifically high-frequency sounds and is used mostly over land. Because the sounds are high-frequency, ultrasound is generally limited to low-distance applications (it’s a common technology in parking sensors, for instance).
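To illustrate why ultrasound suits short ranges while LiDAR doesn’t mind long ones, here’s a quick sketch comparing the round-trip times for the same 5 m range. The numbers are illustrative rather than taken from any particular sensor.

```python
SPEED_OF_SOUND_AIR = 343.0      # m/s, roughly, at 20 degrees C
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def round_trip_time(distance_m: float, propagation_speed: float) -> float:
    """Time for a pulse to reach an object and return to the sensor."""
    return 2.0 * distance_m / propagation_speed

# The same 5 m range costs ~29 ms per ping for ultrasound but only ~33 ns
# for LiDAR, which is one reason ultrasound is kept to short-range jobs
# like parking sensors.
print(f"Ultrasound: {round_trip_time(5.0, SPEED_OF_SOUND_AIR) * 1e3:.1f} ms")
print(f"LiDAR:      {round_trip_time(5.0, SPEED_OF_LIGHT) * 1e9:.1f} ns")
```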

 

Using perception sensors to build maps for AMRs

Since perception sensors capture information about the world around them, it’s only natural that the data they generate is used to build maps of the world around an AMR. Those maps can be used for path planning, as we talked about in our previous blog, helping the AMR get where it needs to go while avoiding obstacles. The map produced by the AMR can also be useful in its own right – for instance, a map produced by a rover moving around a construction site could be used for site and infrastructure inspection.

Mapping is also one of the reasons LiDAR technology is so popular in AMRs, as the point clouds produced by LiDAR are a great starting point for any map-building algorithm or software – but there are other map formats that AMRs can and do use, including occupancy grid maps, semantic maps, and topological maps.
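As a flavour of the simplest of those formats, here’s a minimal occupancy grid sketch: the environment is divided into cells, and each cell is marked occupied when a perception point falls inside it. The grid size, resolution, and points are illustrative placeholders.

```python
import numpy as np

RESOLUTION = 0.1   # metres per cell
GRID_SIZE = 200    # cells per side, i.e. a 20 m x 20 m area
grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.uint8)  # 0 = free/unknown, 1 = occupied

def mark_occupied(x_m: float, y_m: float) -> None:
    """Mark the cell containing a detected point (map frame, metres) as occupied."""
    col = int(x_m / RESOLUTION)
    row = int(y_m / RESOLUTION)
    if 0 <= row < GRID_SIZE and 0 <= col < GRID_SIZE:
        grid[row, col] = 1

# Points from a perception sensor, already transformed into the map frame.
for x, y in [(2.0, 3.5), (2.1, 3.5), (2.2, 3.6)]:
    mark_occupied(x, y)

print("Occupied cells:", int(grid.sum()))
```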

Of course, it’s important to note that while perception sensors do much of the work of generating map data, accurate localisation data is absolutely critical for good mapping. Without localisation data, there’s no way for your AMR to relate the map it’s made to the world around it, which means it can’t use the map for navigation, path planning, or obstacle avoidance. Accurate (and, arguably more important, repeatable) localisation data also matters if you want to understand how an environment changes over time, because you need to know exactly where your AMR was when it took each snapshot of its surroundings. The OxTS xRED3000 GNSS/INS is designed to provide AMRs with accurate localisation data.

 

xRED3000 GNSS/INS by OxTS

 

In case you missed it, you can read more about localisation for AMRs in our first spotlight article.

There are generally two options for mapping: doing it in real time, or using a pre-surveyed map.

 

Real-time mapping

If your localisation data is good enough, and you have enough processing power on board, you can combine range perception data gathered by your LiDAR with your localisation data to build a map in real time. It’s one of the benefits of using something like an OxTS INS – it makes real-time mapping very simple.
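As a rough illustration of what “combining” means here (not the OxTS implementation), a 2D LiDAR scan can be projected into the map frame using the pose reported by your localisation source. The poses and ranges below are made up for the example.

```python
import numpy as np

def scan_to_map_frame(ranges_m, angles_rad, pose):
    """Project a 2D LiDAR scan into the map frame using the AMR's pose.

    ranges_m, angles_rad : per-beam range and bearing in the sensor frame
    pose                 : (x, y, heading) of the sensor in the map frame,
                           e.g. from a GNSS/INS solution
    """
    x, y, heading = pose
    ranges = np.asarray(ranges_m)
    angles = np.asarray(angles_rad) + heading
    return np.column_stack((x + ranges * np.cos(angles),
                            y + ranges * np.sin(angles)))

# One scan taken while the robot is at (5.0, 2.0), facing 90 degrees.
scan_points = scan_to_map_frame([1.0, 1.2, 1.5], [-0.1, 0.0, 0.1], (5.0, 2.0, np.pi / 2))
print(scan_points)
```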

If your localisation data isn’t quite good enough for this, then you might want to turn to something like SLAM. Standing for simultaneous localisation and mapping, SLAM does exactly what it says it does. From an initial known starting point, the robot moves through the environment, estimating its pose using odometry data from a sensor such as a LiDAR, an INS, or a wheel speed sensor. As the perception sensors detect walls and other features, the map is built up. Periodically the AMR attempts to localise itself against that map, to counteract the drift that builds up when navigating on odometry data alone. That cyclical process of creating a map and localising the robot within it improves both the accuracy of the map and the accuracy of the AMR’s position.
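To make the drift problem concrete, here’s a simple dead-reckoning sketch showing how a pose is propagated from odometry alone. Each step’s small errors accumulate, which is exactly what SLAM’s periodic map-based localisation is there to correct; the step values are illustrative.

```python
import math

def integrate_odometry(pose, distance_m, heading_change_rad):
    """Dead-reckon the next pose (x, y, heading) from one odometry step."""
    x, y, heading = pose
    heading += heading_change_rad          # apply the measured heading change
    x += distance_m * math.cos(heading)    # move forward along the new heading
    y += distance_m * math.sin(heading)
    return (x, y, heading)

# Drive forward 0.5 m per step while turning slightly left. Any bias in the
# measured distance or heading change compounds with every step.
pose = (0.0, 0.0, 0.0)
for _ in range(4):
    pose = integrate_odometry(pose, 0.5, 0.05)
print(pose)
```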

SLAM is popular in AMRs because it’s a pretty simple solution that doesn’t rely on high-quality localisation data. However, it requires some serious hardware in order to run – so it’s not suitable for every use case – and if you have the right quality of localisation data, you can get better mapping results than SLAM provides, with less computing power required.

 

Using pre-loaded map data

The other option for an AMR is to feed it an existing map of the area, based on an existing survey. This method often allows you to use a more accurate map, since you can survey the AMR’s ODD more precisely than a real-time mapping algorithm can. It also means your AMR doesn’t have to ‘learn’ the environment it’s in before operating.

At OxTS, we have a point cloud georeferencing tool called (wait for it) OxTS Georeferencer. It combines 3D LiDAR data and OxTS INS data to create highly accurate and fully georeferenced point clouds which can then be fed into AMRs for navigation.

Of course, the AMR will need a way of localising itself within that map in order to use it successfully.

 

Perception in action

In our last blog we introduced you to our prototype AMR control stack, which we’ve been using at OxTS HQ to hone our sensor fusion and AMR interfacing capabilities. As part of the project, we wanted to test obstacle detection using perception sensors. 

The goal was to create a system that would slow and stop the robot if an obstacle was detected in front of it. To keep things simple, the robot didn’t do any real-time mapping; instead, the path the robot was travelling on was pre-calculated. We chose two perception sensors to evaluate: 2D LiDAR and a stereo camera. 

Each sensor achieves its goal slightly differently. The 2D LiDAR gives us angle and distance measurements between the sensor and any obstacles it detects. The stereo camera actually contains two cameras, each offset to one side, and we used OpenCV’s image processing tools to analyse the two images and create a depth map, which shows how far each pixel of the image is from the robot.
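For readers curious what that looks like in code, here’s a hedged sketch of the general approach using OpenCV’s block-matching stereo. This isn’t our exact pipeline or parameters, and the focal length and baseline are placeholder values you’d replace with your own camera’s calibration.

```python
import cv2
import numpy as np

# Rectified left/right images from the stereo camera (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo: OpenCV compares patches of the two images to find
# the horizontal offset (disparity) of each pixel.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Convert disparity to depth: depth = focal length (px) * baseline (m) / disparity.
FOCAL_LENGTH_PX = 700.0   # placeholder calibration value
BASELINE_M = 0.12         # placeholder distance between the two lenses
with np.errstate(divide="ignore"):
    depth_m = np.where(disparity > 0, FOCAL_LENGTH_PX * BASELINE_M / disparity, np.inf)

print(f"Nearest point in view: {depth_m.min():.2f} m")
```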

For both sensors, once we had the range data the AMR worked out whether any obstacles were within the threshold range (meaning they were too close to the robot). If they were, the robot would slow down at a rate proportionate to its distance from the object – the nearer the object, the faster the deceleration. If the obstacle was too close, the AMR would stop altogether.
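One simple way to express that behaviour (not our exact control logic, and the thresholds and speeds below are illustrative) is to scale the speed command linearly with the distance to the nearest obstacle:

```python
def speed_command(nearest_obstacle_m: float, max_speed: float = 1.0,
                  slow_threshold_m: float = 2.0, stop_threshold_m: float = 0.5) -> float:
    """Scale the AMR's speed by how close the nearest obstacle is.

    Beyond slow_threshold_m the robot drives at full speed; inside
    stop_threshold_m it stops; in between, speed scales linearly with
    distance, so the nearer the obstacle, the harder the robot slows.
    """
    if nearest_obstacle_m <= stop_threshold_m:
        return 0.0
    if nearest_obstacle_m >= slow_threshold_m:
        return max_speed
    fraction = (nearest_obstacle_m - stop_threshold_m) / (slow_threshold_m - stop_threshold_m)
    return max_speed * fraction

for d in (3.0, 1.5, 0.4):
    print(f"Obstacle at {d} m -> speed {speed_command(d):.2f} m/s")
```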

Although both worked, we chose to go with the stereo camera in the end. It had a clearer view of, and more information about, the area in front of the robot, which was its main direction of travel. The 2D LiDAR, by comparison, had a limited detection range and only scanned at a fixed height around the AMR. It did, however, have a wider field of view, so if you had an AMR that was expected to move in different directions without turning, it might be a viable option – you’d have to weigh that against the cost of multiple stereo cameras.

 

Happy mapping and perception

That’s all we have time for in this blog! We hope you’ve found it useful – especially our own example of using perception data for obstacle detection and avoidance.

 

Autonomous Robot Navigation Solution Brief

AMRs need a robust robot localisation solution; a tool that not only records the position and orientation of the robot, but also operates both indoors and outdoors.

This solution brief steps through the aspects we recommend our customers consider when deciding on their source of localisation for their autonomous mobile robots.

Read the solution brief to learn how the right robot localisation solution can help your AMR project, including the key questions you need to ask yourself before embarking on a project.

AMR Solution Brief

We hope you enjoyed this blog and it’s helped you if you’re just starting out on your AMR journey.

If you’d like to learn more about what we can currently do for AMR engineers, view our application page.

Alternatively, if you’ve got a specific project that you’d like to talk to us about, contact us using the form below. We’re always excited to discuss the latest and greatest robotics projects.

Keep an eye out for the next blog in our series: an introduction to planning and control.


