
AMR Navigation Spotlight – Beyond Localisation Data

Blogs September 10, 2024

Welcome to the fifth blog in our AMR navigation spotlight series, where we’ll be focusing on what an AMR actually uses localisation data for. Click here to read the previous blog in the series, which discusses how to interface your data with your robotic control stack.

Localisation data is only ever a means to an end for your autonomous mobile robot. But what does an AMR actually use this data for? That’s what we’ll be covering in this blog.

 

5 modules of a robotic control stack that require localisation data

Generally speaking, however your AMR operates, the control stack will have five main jobs that require localisation data, each done by a distinct module:

  1. Localising – working out where the robot is.
  2. Path planning – working out how to get from where it is to its goal.
  3. Actuator control – deciding how to move the robot to follow the path.
  4. Obstacle perception – spotting obstacles and how far away they are.
  5. Decision making – how to react to various situations to ensure the safety of the AMR and the environment around it.

Each of those jobs relies on localisation data, though the way it uses that data might be slightly different.
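To make the division of labour concrete, here is a minimal sketch of those five modules wired into one control loop. All function names and interfaces here are illustrative assumptions, not a real OxTS or ROS2 API:

```python
# Hypothetical sketch of the five-module control loop described above.
# Each function stands in for one module of the control stack.

def localise(sensor_reading):
    """1. Localising: turn a raw sensor reading into an (x, y, heading) pose."""
    return (sensor_reading["x"], sensor_reading["y"], sensor_reading["heading"])

def plan_path(pose, goal):
    """2. Path planning: here just a trivial straight line from pose to goal."""
    return [pose[:2], goal]

def perceive(scan):
    """4. Obstacle perception: distance (metres) to the nearest obstacle."""
    return min(scan)

def decide(obstacle_distance, stop_threshold=0.5):
    """5. Decision making: stop if an obstacle is closer than the threshold."""
    return "stop" if obstacle_distance < stop_threshold else "follow_path"

def control_step(sensor_reading, scan, goal):
    pose = localise(sensor_reading)   # where am I?
    path = plan_path(pose, goal)      # how do I get to the goal?
    distance = perceive(scan)         # is anything in the way?
    action = decide(distance)         # how should I react?
    return action, path               # 3. actuator control acts on these
```

Notice that the pose produced by the localisation module feeds every later stage, which is exactly the point of this blog.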

To recap, in a robotics context, localisation data refers to:

  • Position data, whether in a global coordinate frame such as lat/lon, or in a local coordinate frame (i.e. the AMR’s position is measured relative to a defined origin point such as its charging station).
  • Orientation data, which tells the AMR which way it’s facing and whether it’s tilted on its x, y, or z axis. 
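Together, those two items make up a pose. As a rough illustration (the field names are our own, not a standard message definition), a localisation message might look like this:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Illustrative localisation message: position plus orientation.

    Position is in a local frame, in metres from a defined origin such
    as the charging station; roll, pitch, and yaw give the tilt about
    the x, y, and z axes respectively, in radians.
    """
    x: float
    y: float
    z: float
    roll: float
    pitch: float
    yaw: float

def to_local(global_xy, origin_xy):
    """Express a global (x, y) position relative to a local origin."""
    return (global_xy[0] - origin_xy[0], global_xy[1] - origin_xy[1])
```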

It should hopefully be obvious how the localisation module uses localisation data – it takes input from sensors such as an INS, runs it through a decoder or driver to translate it into a format the rest of the control stack can use, and transmits the data to the rest of the stack. It can also take perception data and use that to improve the robustness of its localisation data (for instance, using LiDAR to calculate odometry with a solution like OxTS LIO). Let’s now dive into the other areas of the control stack and see how they use localisation data.

 

Pointcloud created using the OxTS LIO (LiDAR Inertial Odometry) tool

 

How the perception module uses localisation data

The perception module of your control stack has two functions: spotting obstacles so the AMR can avoid a collision, and perceiving the environment around the AMR for mapping purposes. It’s that second job – mapping – that uses localisation data.  

An example of this is a 2D map made by passing data from a LiDAR scanner through a simultaneous localisation and mapping (SLAM) algorithm. The LiDAR data helps the algorithm create the map – but it needs localisation data in order to situate the AMR within that map. Specifically, in SLAM the AMR needs localisation data to generate its starting point in the map, and occasionally during the journey to minimise the drift that creeps in during SLAM mapping. The generated map is then used to plan a safe path through the environment. Which brings us to…
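That occasional drift correction can be as simple as nudging the SLAM pose estimate towards an absolute fix from the INS. A minimal sketch, assuming 2D poses and a hand-picked fusion weight (a real system would weight by the confidence of each estimate):

```python
# Hypothetical drift correction: blend a drifting SLAM pose estimate
# with an absolute INS fix. Interfaces and the weight are assumptions.

def correct_drift(slam_pose, ins_pose, weight=0.2):
    """Nudge a SLAM (x, y) pose estimate towards an INS fix.

    weight is how much to trust the INS fix on this update:
    1.0 snaps straight to the INS pose, 0.0 ignores it entirely.
    """
    return tuple(s + weight * (i - s) for s, i in zip(slam_pose, ins_pose))
```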

 

Using localisation data for path planning

Path planning is the ability to self-calculate a safe route from the robot’s current or starting pose to a goal pose. There are lots of path planning algorithms out there; some are designed to find the fastest route, some to find the safest. Though the perception module might spot an obstacle, it’s the path-planning function that calculates a route around it.

Whatever algorithm your AMR uses for path planning, it needs to know three things: where the robot is (both at the start of the path and as it moves along it), where the goal is, and the layout of the space it’s in.

As we’ve just said, you can use localisation data from your AMR’s sensors to build a map of the environment. Alternatively, you can survey your environment to create a digital map which can be given to the robot. An example of this would be a georeferenced 3D pointcloud of a warehouse, created using OxTS Georeferencer.
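However the map is produced, the planner’s job is the same: search it for a free route from the start pose to the goal. As a toy stand-in for a production planner such as A*, here is a breadth-first search over an occupancy grid (the grid representation is our own illustrative assumption):

```python
from collections import deque

def plan(grid, start, goal):
    """Breadth-first path search on an occupancy grid (0 = free, 1 = occupied).

    Returns a list of (row, col) cells from start to goal, or None if
    the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}       # records how each cell was reached
    while frontier:
        cell = frontier.popleft()
        if cell == goal:            # walk backwards to rebuild the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None
```

Note how the three inputs listed above appear directly as the function’s arguments: the map (`grid`), where the robot is (`start`), and where the goal is (`goal`).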

 

Actuator controllers and localisation data

Actuator controllers are responsible for ensuring that the AMR stays on its path and moves towards its goal. There are a few different ways actuator controllers might work: they might use pulse width modulation (PWM) to control the motors directly, or include an additional layer that allows the interface to use forward and steering velocity commands which are then translated into PWM.

The position and orientation data from an INS are vital for telling the robot whether it is still on the path, and if not how to get back to the path.
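A minimal sketch of that velocity-command layer, assuming a 2D pose and a hand-tuned proportional gain (real controllers such as pure pursuit are more sophisticated):

```python
import math

def steer_to_path(pose, target, k_heading=1.5, forward_speed=0.5):
    """Simple proportional path-following controller (illustrative only).

    pose is (x, y, heading) with heading in radians; target is the
    (x, y) waypoint to head for. Returns (forward_velocity,
    steering_velocity) commands of the kind a lower layer might
    translate into PWM signals for the motors.
    """
    x, y, heading = pose
    desired = math.atan2(target[1] - y, target[0] - x)
    # Wrap the heading error into [-pi, pi] before applying the gain.
    error = math.atan2(math.sin(desired - heading),
                       math.cos(desired - heading))
    return forward_speed, k_heading * error
```

The controller is driven entirely by the INS pose: if the orientation estimate drifts, the steering command drifts with it, which is why data quality matters so much here.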

 

Localisation data in decision making

Decision makers determine how the robot should behave under certain conditions. For instance, if an object is less than 1 cm from the robot, the robot should stop. Alternatively, if the robot has deviated from its path by a certain amount, it should stop and recalculate its route. 
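The two example rules above can be written as a tiny rule-based decision maker. The thresholds and return values are illustrative assumptions, not a real product interface:

```python
def decide(obstacle_distance_m, path_deviation_m,
           stop_distance_m=0.01, max_deviation_m=0.5):
    """Toy rule-based decision maker mirroring the examples above.

    Stop if an obstacle is within stop_distance_m (here 1 cm);
    stop and replan if the robot has drifted more than
    max_deviation_m off its path; otherwise carry on.
    """
    if obstacle_distance_m < stop_distance_m:
        return "stop"
    if path_deviation_m > max_deviation_m:
        return "stop_and_replan"
    return "continue"
```

Both rules consume localisation data: obstacle distance comes from perception fused with the robot’s pose, and path deviation is computed by comparing the current pose against the planned path.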

Obviously, localisation data is vital for any decisions that involve the robot’s position or orientation.  And in every AMR control system that uses localisation data, the higher the data quality, the better your control will be. A greater level of accuracy will allow your robot to control its actuators more precisely, and higher quality map data will lead to more precise path planning. A greater confidence in position accuracy also allows your decision-making to be more exact, allowing your robot to stay safer or continue operating.

 

The OxTS AMR prototype

The OxTS prototype

In this blog we want to introduce you to our prototype AMR control system. In order to test our sensor fusion capabilities and develop our AMR expertise, we have built a control system that sits on a Clearpath Jackal UGV. Our control stack includes several ROS2 modules running on a Jetson Nano, and an OxTS AV200 GNSS/INS, wheel encoder, and an ArUco marker aiding system for localisation.

The ROS2 modules we’ve used include an obstacle detector, a path-following controller, a decision maker, and a system monitor for visualisation of the AMR’s system status, path, and pose in real time. We’re excited to show you around our prototype and its modules in more detail in later blogs – but for now, we hope that this has helped you get your head around how your AMR can use the localisation data your sensors gather.

 

Autonomous Robot Navigation Solution Brief

AMRs need a robust robot localisation solution; a tool that not only records the position and orientation of the robot, but also operates both indoors and outdoors.

This solution brief steps through the aspects we recommend our customers consider when deciding on their source of localisation for their autonomous mobile robots.

Read the solution brief to learn how the right robot localisation solution can help your AMR project, including the key questions you need to ask yourself before embarking on a project.

AMR Solution Brief

We hope you enjoyed this blog and it’s helped you if you’re just starting out on your AMR journey.

If you’d like to learn more about what we can currently do for AMR engineers, view our application page.

Alternatively, if you’ve got a specific project that you’d like to talk to us about, contact us using the form below. We’re always excited to discuss the latest and greatest robotics projects.

Keep an eye out for the next blog in our series: an introduction to perception and mapping.


