Welcome to the ninth and final blog in our AMR navigation spotlight series, where we’ll be showing off our prototype control system. Click here to read the previous blog in the series, which discusses decision making and safety.
If you’ve read every spotlight blog in this series, then firstly, thank you. We hope you’ve found it useful and an enjoyable read. Along the way, we’ve covered just about every aspect of autonomous mobile robot navigation and localisation. In this blog, we’re finishing off the series by taking a closer look at our own AMR prototype control system.
Why did we make our own AMR prototype control system?
As you may have spotted by now, OxTS isn’t primarily a robot-building company. We’re an inertial navigation company, specialising in making GNSS-aided inertial navigation systems (INS) for a wide range of applications. One of those applications is acting as the heart of a localisation solution for autonomous mobile robots.
With that in mind, we decided to put ourselves into our customers’ shoes. Our main goal was to learn how best to integrate an inertial navigation system into an AMR’s control stack, so we could support our customers who are trying to do the same. Along the way, though, we also wanted to learn as much as possible about automated robotics and sensor fusion, so we could better understand the wider world our customers live in.
An overview of the prototype
As we mentioned back in blog 5 (beyond localisation), our control system sits on a Clearpath Jackal UGV. In terms of the hardware, it includes a Jetson Nano running the control software, a wheel speed sensor, two Raspberry Pi cameras, and an OxTS AV200 INS.
The software controlling the robot is modular in design – each element of the control stack is its own ROS2 package. We did things this way so that we could add more sophisticated modules in future tests, or more easily adapt the robot to specific tasks.
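To make that modular idea concrete, here’s a minimal sketch of what one of those packages could look like as a ROS2 node. The topic names, message types, and placeholder control logic are illustrative assumptions, not the exact interfaces our prototype uses:

```python
# Minimal sketch of one module in a ROS2 control stack.
# Topic names (/route, /cmd_vel) and message choices are illustrative assumptions.
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Path
from geometry_msgs.msg import Twist


class ControllerNode(Node):
    def __init__(self):
        super().__init__('controller')
        # Each module only talks to the rest of the stack via topics,
        # so it can be swapped out without touching the other packages.
        self.route_sub = self.create_subscription(Path, 'route', self.on_route, 10)
        self.cmd_pub = self.create_publisher(Twist, 'cmd_vel', 10)
        self.route = None
        self.timer = self.create_timer(0.1, self.step)  # 10 Hz control loop

    def on_route(self, msg: Path):
        self.route = msg

    def step(self):
        if self.route is None:
            return
        cmd = Twist()
        cmd.linear.x = 0.2  # placeholder: a real controller would track the route
        self.cmd_pub.publish(cmd)


def main():
    rclpy.init()
    node = ControllerNode()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

Because each module exposes nothing but topics, swapping in a more sophisticated controller later only means replacing this one package.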
We also chose to make our robot an SAE Level 3 autonomous platform. The robot follows a pre-specified path, and can only do so under the supervision of a human operator (though the human doesn’t control the robot). We made this choice for a couple of reasons:
- We wanted to maximise the safety of our build – our operator can step in and take control of the robot if its behaviour starts to deviate from what we expect (there’s a sketch of how that kind of override can work just after this list).
- We wanted to create a robot that matched the levels of autonomy we see in the real world.
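In practice, that kind of supervised operation can be as simple as a command multiplexer: the autonomous stack and the operator both publish velocity commands, and the operator’s commands win whenever they’re fresh. The sketch below is purely illustrative – topic names and the timeout are assumptions, and off-the-shelf packages such as twist_mux solve the same problem – so it isn’t our exact implementation:

```python
# Illustrative operator-override mux: teleop commands take priority over
# autonomous commands for a short window after the operator last sent one.
# Topic names and the timeout are assumptions, not our prototype's real interface.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

OVERRIDE_TIMEOUT_S = 1.0  # how long an operator command stays "fresh"


class CommandMux(Node):
    def __init__(self):
        super().__init__('command_mux')
        self.create_subscription(Twist, 'cmd_vel_auto', self.on_auto, 10)
        self.create_subscription(Twist, 'cmd_vel_teleop', self.on_teleop, 10)
        self.out_pub = self.create_publisher(Twist, 'cmd_vel', 10)
        self.last_teleop_time = None

    def _teleop_active(self) -> bool:
        if self.last_teleop_time is None:
            return False
        elapsed = (self.get_clock().now() - self.last_teleop_time).nanoseconds * 1e-9
        return elapsed < OVERRIDE_TIMEOUT_S

    def on_teleop(self, msg: Twist):
        self.last_teleop_time = self.get_clock().now()
        self.out_pub.publish(msg)      # the operator always gets through

    def on_auto(self, msg: Twist):
        if not self._teleop_active():  # autonomy only when not overridden
            self.out_pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(CommandMux())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```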
And, finally, we designed our control system to work with any OxTS INS – because of course we would – and any wheeled robot. We figured that would make sure whatever we learned from the prototype was applicable to as many of our customers and prospects as possible.
The modules in our prototype
We’ve already covered a few of them in earlier blogs:
- The decision-making module, which decides what actions the robot should take, is covered in blog 8.
- The controller, which takes a route and translates that into commands for the robot, is covered in blog 7.
- The object detection module, which uses perception sensors to identify obstacles, is covered in blog 6.
There’s one more module that we briefly alluded to in blog 8, which we’ll talk more about here: the system monitor.
The system monitor
The system monitor is a collection of tools used to remotely monitor the status of the robot, and for an operator to issue commands to the system. Even a fully automated robot would need some manner of system monitor. The ability for a human to take over control of a robot is a useful failsafe in many use cases, and the status monitoring allows operators to spot and troubleshoot any issues preventing the robot from operating. This would be especially important if you were managing a fleet of robots from a remote location.
Our robot communicates with us via a WiFi module that creates a local network. Once our laptops are connected to that network, they can exchange ROS2 messages with the robot.
Here’s a screenshot of what our system monitor looks like, running on a laptop:
Technically, there are two applications here. The first is the black box in the top left called “OxTS Full System Monitor”. That’s a very basic application we built ourselves that allows us to issue commands to the decision-making module. You can see that we’ve got the ability to start and stop the robot, ignore an obstacle, and also to reset the INS, which was useful for our testing methodology. It also has a readout of the robot’s status and its velocities.
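Buttons like those can be as simple as publishing small ROS2 messages that the decision-making module listens for. Here’s a minimal sketch of that “issue a command” side, assuming a hypothetical std_msgs/String command topic rather than our actual interface:

```python
# Minimal sketch of the command-issuing side of a system monitor.
# The command topic name and message type are assumptions for illustration.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String


class CommandPanel(Node):
    def __init__(self):
        super().__init__('command_panel')
        self.cmd_pub = self.create_publisher(String, 'operator_command', 10)

    def send(self, command: str):
        # e.g. 'START', 'STOP', 'IGNORE_OBSTACLE', 'RESET_INS'
        msg = String()
        msg.data = command
        self.cmd_pub.publish(msg)
        self.get_logger().info(f'Sent command: {command}')


def main():
    rclpy.init()
    panel = CommandPanel()
    panel.send('STOP')  # in a GUI, each button would call send() with its command
    rclpy.spin_once(panel, timeout_sec=0.5)
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```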
The larger application is RViz, a visualisation tool that comes with ROS. It allows us to visualise the robot’s progress along its path in 3D, and gives us a live feed of the cameras so we can see what the robot sees (which is how we decide whether to press the “ignore obstacle” button in the monitoring application). The arrows on the screen are the waypoints in the path, with the white one being the next waypoint the robot must reach; as you can see from the GIF, in this example we’re just getting the robot to go forwards and backwards. The thing that looks like a collection of brightly coloured straws is a representation of the INS, and allows us to monitor its position and orientation in 3D space (each coloured rod represents an axis).
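The waypoint arrows in that view don’t need anything exotic: RViz will draw whatever you publish as visualization_msgs markers. Here’s a rough sketch of publishing a handful of waypoints as arrow markers; the frame, topic name, and coordinates are all assumptions for illustration:

```python
# Rough sketch: publish waypoints as arrow markers so RViz can draw them.
# Frame id, topic name, and the hard-coded waypoints are illustrative assumptions.
import rclpy
from rclpy.node import Node
from visualization_msgs.msg import Marker, MarkerArray


class WaypointMarkers(Node):
    def __init__(self, waypoints):
        super().__init__('waypoint_markers')
        self.pub = self.create_publisher(MarkerArray, 'waypoint_markers', 10)
        self.waypoints = waypoints
        self.create_timer(1.0, self.publish_markers)

    def publish_markers(self):
        array = MarkerArray()
        for i, (x, y) in enumerate(self.waypoints):
            m = Marker()
            m.header.frame_id = 'map'
            m.header.stamp = self.get_clock().now().to_msg()
            m.id = i
            m.type = Marker.ARROW
            m.action = Marker.ADD
            m.pose.position.x = x
            m.pose.position.y = y
            m.scale.x, m.scale.y, m.scale.z = 0.5, 0.1, 0.1
            m.color.a = 1.0
            # Colour one waypoint white to mark the next target (index 0 here,
            # purely for illustration).
            m.color.r, m.color.g, m.color.b = (1.0, 1.0, 1.0) if i == 0 else (0.2, 0.6, 1.0)
            array.markers.append(m)
        self.pub.publish(array)


def main():
    rclpy.init()
    rclpy.spin(WaypointMarkers([(0.0, 0.0), (2.0, 0.0), (4.0, 1.0)]))
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```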
Obviously, our system monitor is rudimentary since we’re just working on a prototype. A professional version could provide a single place to see the positions of an entire fleet of robots, see what each one is up to at any given moment, and issue custom instructions to any of them if needed.
Setting up the test run
We decided to run the test around one of the barns at our Oxfordshire offices. We wanted to find out whether our robot could:
- Start up indoors with no GNSS signal
- Navigate indoors with no GNSS signal
- Navigate outdoors using just GNSS signal
- Navigate using GNSS signal and ArUco markers at the same time
- Transition successfully between each of those different modes of navigation
The first step was to map the visual markers for our ArUco marker localisation solution. Since the robot uses the markers to identify its location in a GNSS-denied area, we had to make sure that the positions of the markers themselves were known to a high degree of accuracy. We did that by surveying each marker using a total station and recording its position in global lat/lon/alt coordinates.
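The output of that survey is essentially a lookup table from marker ID to surveyed global position. Here’s a minimal sketch of loading such a table, assuming a simple CSV layout (id, lat, lon, alt) rather than our actual format:

```python
# Illustrative loader for a surveyed ArUco marker map.
# The CSV layout (id, lat, lon, alt) is an assumption, not our exact format.
import csv
from dataclasses import dataclass


@dataclass
class SurveyedMarker:
    marker_id: int
    lat_deg: float   # latitude in decimal degrees
    lon_deg: float   # longitude in decimal degrees
    alt_m: float     # altitude in metres


def load_marker_map(path: str) -> dict[int, SurveyedMarker]:
    markers = {}
    with open(path, newline='') as f:
        for row in csv.DictReader(f):
            m = SurveyedMarker(
                marker_id=int(row['id']),
                lat_deg=float(row['lat']),
                lon_deg=float(row['lon']),
                alt_m=float(row['alt']),
            )
            markers[m.marker_id] = m
    return markers


if __name__ == '__main__':
    # e.g. markers.csv with a header line of:  id,lat,lon,alt
    marker_map = load_marker_map('markers.csv')
    print(f'Loaded {len(marker_map)} surveyed markers')
```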
With that done, we needed to map out the route we wanted our robot to follow. We did that by manually driving the robot around the barn, using the INS to collect localisation data for each waypoint. The INS used a combination of ArUco markers and GNSS signal to log the position of each waypoint in the same global lat/lon/alt coordinates the ArUco markers were surveyed in. The waypoints were then saved to a .csv file and given to the robot.
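Because the waypoints are stored as global lat/lon/alt, a controller working in metres needs them projected into a local frame at some point. Here’s a rough sketch of that step, assuming a hypothetical route.csv and a simple local-tangent-plane approximation (fine over a site the size of a barn); it isn’t our exact pipeline:

```python
# Sketch: load lat/lon/alt waypoints from a CSV and project them into a local
# east/north frame around the first waypoint. The CSV layout and the simple
# tangent-plane approximation are illustrative assumptions.
import csv
import math

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius


def load_waypoints(path: str) -> list[tuple[float, float, float]]:
    with open(path, newline='') as f:
        return [(float(r['lat']), float(r['lon']), float(r['alt']))
                for r in csv.DictReader(f)]


def to_local_en(waypoints):
    """Convert lat/lon waypoints to east/north metres relative to the first one."""
    lat0, lon0, _ = waypoints[0]
    lat0_rad = math.radians(lat0)
    local = []
    for lat, lon, _alt in waypoints:
        east = math.radians(lon - lon0) * EARTH_RADIUS_M * math.cos(lat0_rad)
        north = math.radians(lat - lat0) * EARTH_RADIUS_M
        local.append((east, north))
    return local


if __name__ == '__main__':
    wps = load_waypoints('route.csv')  # hypothetical file of lat,lon,alt rows
    print(to_local_en(wps)[:3])
```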
And that was it: after that, we put the robot in its starting position, told it that it was free to move, and off it went!
The results of the test run
Here’s a map showing the robot’s progress around the barn. Green shows our target route, red the route the simulated robot followed, and blue the route the physical robot followed:
We allowed the robot a maximum deviation of 0.5 m from its target route, and, based on our visual inspections, the deviation was usually far smaller than that. Best of all, the results were fairly repeatable. We spent a morning doing laps of our barn for the entertainment of the rest of our colleagues at OxTS, and each time the route was very similar. Given the importance of repeatability in autonomous navigation, this was a big plus.
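The deviation check itself boils down to a cross-track error calculation: measure the perpendicular distance from the robot’s current position to the nearest segment of the target route and compare it against the tolerance. Here’s a rough sketch in a local east/north frame; the 0.5 m figure is the only number taken from our test, everything else is illustrative:

```python
# Rough sketch of a cross-track deviation check against a target route.
# Works in a local east/north frame (metres); the 0.5 m tolerance matches our test.
import math

MAX_DEVIATION_M = 0.5


def point_to_segment(p, a, b) -> float:
    """Perpendicular distance from point p to segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    abx, aby = bx - ax, by - ay
    seg_len_sq = abx * abx + aby * aby
    if seg_len_sq == 0.0:
        return math.hypot(px - ax, py - ay)
    # Clamp the projection of p onto the segment to [0, 1]
    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / seg_len_sq))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)


def cross_track_error(position, route) -> float:
    """Smallest distance from the current position to any segment of the route."""
    return min(point_to_segment(position, route[i], route[i + 1])
               for i in range(len(route) - 1))


if __name__ == '__main__':
    route = [(0.0, 0.0), (5.0, 0.0), (5.0, 5.0)]   # illustrative waypoints
    error = cross_track_error((2.0, 0.3), route)
    print(f'deviation = {error:.2f} m, within tolerance: {error <= MAX_DEVIATION_M}')
```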
Was it perfect? Absolutely not. This was our first foray into developing our own control system that uses an OxTS INS for localisation, so there are plenty of things we could – and would – do differently next time.
Over time, it would be great to add more modules to make a more sophisticated robot. An obstacle avoidance module, for example, as well as smart path planning and a module that allows the robot to work as part of a group (or swarm, to use the much cooler and more accurate term) of AMRs. But for now, we’re really happy with what we’ve done. We demonstrated to ourselves that OxTS technology can be successfully integrated into a functional AMR control stack, enabling the robot to navigate indoors, outdoors, and between the two environments.
Importantly, it’s also opened the eyes of our engineering and R&D teams to the wider world of robotics. Not only has it given us great ideas for future products, it’s also given us a new appreciation for the world our customers live in. And all of that, we hope, will come together to help our customers do awesome things with AMRs in the future.
Autonomous Robot Navigation Solution Brief
AMRs need a robust robot localisation solution: a tool that not only records the position and orientation of the robot, but also operates both indoors and outdoors.
This solution brief steps through the aspects we recommend our customers consider when deciding on their source of localisation for their autonomous mobile robots.
Read the solution brief to learn how the right robot localisation solution can help your AMR project, including the key questions you need to ask yourself before embarking on a project.
We hope you enjoyed this blog series and it’s helped you if you’re just starting out on your AMR journey.
If you’d like to learn more about what we can currently do for AMR engineers, view our application page.
Alternatively, if you’ve got a specific project that you’d like to talk to us about, contact us using the form below. We’re always excited to discuss the latest and greatest robotics projects.