Welcome to the eighth blog in our AMR navigation spotlight series, where we’ll be focusing on AMR decision making and safety. Click here to read the previous blog in the series, which discusses planning and control mechanisms.
In this spotlight series we’ve mentioned lots of different elements of the robot control stack – perception, navigation, and path planning, to name just a few. Now it’s time to look at the module which is arguably the brain of the whole operation – the decision making module.
What is robot decision making and safety?
The decision making module contains the high-level intelligence and logic that determines how your autonomous mobile robot will behave in various circumstances. Generally speaking, the module contains a single framework that checks for specific conditions, and then tells the robot how to respond based on whether those conditions are present or not.
The framework will govern everything that your robot does, whether that’s simple path following, obstacle recognition, or even picking flowers. It will draw on data gathered by various other modules, and can be as simple or as complex as you need.
To help you relate this to the other things we’ve written about: in OxTS’ prototype control stack, the decision making framework sits between the controller and the robot’s actuators. The controller comes up with the commands to move the robot to its next waypoint, the decision making module decides whether it’s safe to move, and only then does the robot move. You can sequence the different elements of the process differently, but we chose this order for two reasons. First, we wanted to ensure that all our decision making conditions were met before actuation – that’s why the module doesn’t sit after the actuators. Second, we wanted to make sure the robot could never do anything other than what the decision maker had decided – that’s why it’s the very last thing processed before actuation.
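To make that ordering concrete, here’s a minimal Python sketch of the flow – the class and function names are our own placeholders, not the actual OxTS interfaces:

```python
# Illustrative sketch only: controller -> decision maker -> actuators.
class Controller:
    def next_command(self):
        # Propose a motion command towards the next waypoint (dummy values).
        return {"vx": 0.5, "yaw_rate": 0.0}

class DecisionMaker:
    def evaluate(self, command, safe_to_move):
        # Only pass the command through if the conditions are met;
        # otherwise replace it with a stop command.
        return command if safe_to_move else {"vx": 0.0, "yaw_rate": 0.0}

class Actuators:
    def apply(self, command):
        print(f"actuating: {command}")

controller, decision_maker, actuators = Controller(), DecisionMaker(), Actuators()

# One pass of the loop: nothing reaches the actuators except the decision maker's output.
command = controller.next_command()
command = decision_maker.evaluate(command, safe_to_move=True)
actuators.apply(command)
```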
AMR Decision Making and Safety vs AI
Much is being written at the moment about AI and its potential in robotic control systems. To help you relate AMR decision making and AI, you can think of AI as a way of creating a complete control framework for an AMR automatically, and potentially of continually updating that framework in response to new events. That being said, much of what is touted as AI is in fact an extremely complex framework that does a better job of mimicking intelligence than less advanced frameworks, but which has still been created by a human and which has no (or very limited) capacity for self-editing.
Why is decision making important for autonomous mobile robots?
Simply put, decision making puts the ‘A’ in ‘AMR’. It’s what allows a robot to execute tasks and react to events without human input. The decision making module also makes the robot safer, as long as the framework includes elements designed to help the robot respond to risks such as obstacles or deviating from its path. In fact, good decision making theory dictates that safety should be the primary design consideration of any AMR control framework. After all, the worst thing that could happen is that the AMR causes damage to a person, to other property, or to itself.
The challenge with decision making is that the designer has to try and capture all the conditions where a decision will be needed, whether that’s to maintain safe operation or to get a job done. It requires a bit of imagination and creativity, as well as multiple heads working together, to refine a robotic control framework that can handle any scenario.
Types of decision making framework
As you’d expect, there are different ways to build a decision-making framework for an AMR. We’ll step through them now. If you’ve ever worked with physics simulations or in video game programming, you may recognise some of these systems – they are the same. The principles apply whether you’re helping a robot decide how to react to an unexpected obstacle, how a particle should react in a simulation, or how an NPC should behave in a video game.
We’ll be demonstrating all these systems using a very simple example of decision making logic. In reality, your logic will be far more complex – but for educational purposes, simplicity is better.
Framework 1: finite state machines
Finite state machines (or just state machines) consist of a number of states, represented as circles, with arrows depicting the specific conditions needed to move from one state to another.
In AMRs, the states are actions taken by the robot, whereas the conditions are usually inputs from other systems or a user. Consider this example:
The robot’s starting state is “start up”. To move from that to the next state of “continue to follow the path”, the user has to tell the robot that it is free to begin moving. From that state, there are three possible conditions:
- No obstacle is detected: the robot continues to follow the path.
- An obstacle is detected: the robot stops.
- The user tells the robot to stop: the robot stops.
From the state of “stop robot”, the user has to tell the robot it is free to move before it will begin following its path again. As we said, this is a simple example; in reality, once your robot has stopped like this, you would include other conditions that enable it to start following the path again, such as ‘recalculating route’.
While state machines are relatively simple to build, they can get very complicated for complex situations where there are lots of different conditions and states.
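If it helps to see this in code, here’s a minimal Python sketch of the example state machine – the states and conditions are the ones from the example above, and everything else is just illustrative:

```python
# Minimal sketch of the example state machine (illustrative only).
def next_state(state, user_says_go, user_says_stop, obstacle_detected):
    """Return the next state given the current state and the conditions."""
    if state == "start_up":
        return "follow_path" if user_says_go else "start_up"
    if state == "follow_path":
        if obstacle_detected or user_says_stop:
            return "stop"
        return "follow_path"          # no obstacle: keep following the path
    if state == "stop":
        return "follow_path" if user_says_go else "stop"
    raise ValueError(f"unknown state: {state}")

# Example: the robot is following its path and an obstacle appears.
state = "follow_path"
state = next_state(state, user_says_go=False, user_says_stop=False, obstacle_detected=True)
print(state)   # -> "stop"
```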
Framework 2: decision trees
This is the system that will look most familiar, we imagine. A decision tree is made up of action nodes (represented by rectangles), decision nodes (represented by circles), and branches (represented by arrows). Here’s the same logic from the previous section, transformed into a decision tree:
One of the major benefits of a decision tree is that it allows you to create a hierarchy of decisions that govern your robot’s operation. Unsurprisingly, we’ve put the user at the top – the robot can’t do anything until the user tells it that it’s free to start moving. After that, it checks to see if there’s an obstacle, and only starts following the path if there is no obstacle. The decision tree is run at set intervals – there’s no need to add multiple “obstacle detected” decision nodes down the tree, for instance.
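Here’s the same logic as a minimal Python sketch – a hierarchy of checks evaluated top-down at a set interval (the function and variable names are just placeholders):

```python
# The decision tree as a top-down hierarchy of checks (illustrative only).
def run_decision_tree(user_says_go, obstacle_detected):
    if not user_says_go:           # top of the hierarchy: the user's command
        return "stop"
    if obstacle_detected:          # next: the obstacle check
        return "stop"
    return "follow_path"           # no obstacle: keep following the path

# The tree is re-run at set intervals, e.g. every control cycle:
for tick in range(3):
    action = run_decision_tree(user_says_go=True, obstacle_detected=(tick == 2))
    print(tick, action)            # follow_path, follow_path, stop
```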
Hopefully, you’ll agree that the decision tree presents the same logic as the state machine but in a simpler way. That means that it’s easier to create more complex logic using decision trees. But if the complexity rises, there’s a third framework you can use: the behaviour tree.
Framework 3: behaviour trees
Behaviour trees look very similar to decision trees, but they work quite differently. While they still have a hierarchy of nodes that govern the behaviour of your robot, they contain different types of nodes that allow you to control the flow of decision-making within your robot. Let’s take a look at our example logic, in a behaviour tree – with a few bits added to demonstrate all the concepts:
We have here three different components:
- The starting node at the top (with the ø symbol) is simply where the tree begins.
- We then have a flow control node (also known as a composite node), which is a node that can have one or more children. There are two types: sequence nodes (with the → symbol) and selector nodes (with the ? symbol).
- Underneath the flow control nodes we have leaf nodes. Leaf nodes have no children, and they are either queries, or actions the robot carries out.
All three types of node can have three states: running, success, or failure. And just like the decision tree, once the robot has run down the tree it will repeat the process at defined intervals.
Sequence vs selector nodes
These nodes both process the child nodes under them from left to right, but they react to the results from those child nodes in different ways.
Sequence nodes will only return a success result if all of their child nodes return a success result. If any child node returns a failure result, the sequence node will stop processing child nodes and return a fail result.
Selector nodes, however, will return a success result once they get their first success result from a child node, and will stop processing the rest of the child nodes once they do.
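If you prefer code, here’s a minimal sketch of the two flow control node types in Python. The class names and the three status values follow common behaviour tree convention rather than any particular library:

```python
# Minimal behaviour-tree flow control nodes (illustrative, not a specific library).
SUCCESS, FAILURE, RUNNING = "success", "failure", "running"

class Sequence:
    """Succeeds only if every child succeeds, processing left to right;
    stops and returns failure as soon as any child fails."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != SUCCESS:
                return status          # FAILURE (or RUNNING) halts the sequence
        return SUCCESS

class Selector:
    """Succeeds as soon as any child succeeds, processing left to right;
    only returns failure if every child fails."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != FAILURE:
                return status          # SUCCESS (or RUNNING) short-circuits
        return FAILURE
```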
So, let’s run down our behaviour tree:
- First we have our start node. Easy!
- Next we have a sequence node. That means that it will run each of the child nodes under it, left to right, and won’t stop until all of its child nodes return a successful result.
- The first child node is “Continue to follow the path” – so the robot sets off.
- Once the robot is moving, that first node returns a “success” result – so the sequence node moves on to processing its second child: the selector node.
- The selector node will process each of its children from left to right – looking for obstacles, or a stop command. If any of those return a successful result, so too does the selector node.
- If that happens, the sequence node now processes its third child, and stops the robot. Otherwise, the robot keeps moving.
So, the tree ensures that the robot can’t move until it’s had a command, and then ensures that once it’s moving, it will stop if it detects an obstacle, deviates from its path, or reaches its goal. Neat, right?
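And here’s a sketch of that example tree built with the Sequence and Selector classes from the previous sketch – the leaf queries and actions are dummy placeholders, so a real robot would query its sensors instead:

```python
# Assumes the SUCCESS/FAILURE values and Sequence/Selector classes sketched above.
class Leaf:
    """A leaf node wrapping a single query or action."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()

# Dummy leaves for this tick: the robot is moving and nothing has gone wrong.
follow_path    = Leaf(lambda: SUCCESS)   # action: keep moving along the path
obstacle_seen  = Leaf(lambda: FAILURE)   # query: no obstacle detected
stop_commanded = Leaf(lambda: FAILURE)   # query: no stop command
off_path       = Leaf(lambda: FAILURE)   # query: still within path tolerance
goal_reached   = Leaf(lambda: FAILURE)   # query: not at the goal yet
stop_robot     = Leaf(lambda: SUCCESS)   # action: bring the robot to a halt

root = Sequence([
    follow_path,
    Selector([obstacle_seen, stop_commanded, off_path, goal_reached]),
    stop_robot,
])

# Tick the whole tree at a defined interval, as described above. Here the
# selector finds no reason to stop, so the sequence ends early and the robot
# keeps moving.
print(root.tick())   # -> "failure"
```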
This ability to combine (and nest) control flow nodes, each governing different decisions and actions, makes behaviour trees great for designing really complex behaviours.
The OxTS prototype decision maker
For our own robot, we elected to use a decision tree. This is in line with the famous KISS principle – “keep it simple, stupid!” for the uninitiated. We wanted to design a hierarchical series of conditions the robot had to meet before it could follow a path; a behaviour tree was more complex than we needed for that. And a finite state machine wasn’t sophisticated enough; it would have required a very complex machine to do what we wanted.
Here it is in all its glory:
We started with the three actions the robot could take: “follow path”, “stop”, and “slow down”. Then, we wanted to specify when the robot should take each of these actions. The tree runs through all the safety checks we want the robot to make before it can safely move – those are the decision nodes (there’s a code sketch of this logic after the list):
- At the top, of course, is the human command telling the robot it can move – we never want the robot to set off without being told it can do so.
- Then we want the system monitor to be running, so that an observer (whether a human or a monitoring system) can detect any issues with the robot.
- Next, the robot checks whether it is still on the right path (the tolerance will allow it to stray by a predefined amount – perhaps a few centimetres, depending on the application).
- Then, the robot checks whether there is an obstacle nearby. If there is, it stops.
- If there are no objects near the robot, it then checks to see if any objects are on the horizon. If there are, the robot slows down; if there aren’t, it follows the path.
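Here’s the promised sketch of that logic in Python. The checks mirror the list above, but the function and variable names are illustrative – this isn’t the actual prototype code:

```python
# Illustrative sketch of the prototype's decision tree.
def decide(human_command_to_move, system_monitor_running,
           on_path, obstacle_near, obstacle_on_horizon):
    if not human_command_to_move:   # never set off without being told to
        return "stop"
    if not system_monitor_running:  # an observer must be able to spot issues
        return "stop"
    if not on_path:                 # outside the path-following tolerance
        return "stop"
    if obstacle_near:               # something close by: stop immediately
        return "stop"
    if obstacle_on_horizon:         # something ahead: slow down, keep going
        return "slow down"
    return "follow path"            # all checks passed

print(decide(True, True, True, False, True))   # -> "slow down"
```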
We designed the prototype to run all its modules in parallel. That means that when the robot needs to check that it’s on the path, or whether an obstacle is nearby, the data is ready for it – it doesn’t have to pause while the calculations are done.
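As a rough illustration of that idea, here’s a minimal Python sketch of a perception thread keeping a shared snapshot up to date while the decision maker simply reads the latest values – all the names here are ours, not the prototype’s:

```python
# Illustrative only: a background module keeps shared data fresh so the
# decision maker never has to wait for a calculation.
import threading, time

latest = {"obstacle_near": False, "on_path": True}   # shared snapshot
lock = threading.Lock()

def perception_thread():
    while True:
        with lock:
            latest["obstacle_near"] = False   # a real module would query sensors here
        time.sleep(0.05)

threading.Thread(target=perception_thread, daemon=True).start()

# The decision maker just reads whatever is current when it ticks.
time.sleep(0.1)
with lock:
    snapshot = dict(latest)
print("obstacle near?", snapshot["obstacle_near"])
```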
Building the logic took a lot of testing. We ran the logic through software simulations to validate that the system would give the right outputs under all the various conditions the robot might encounter. Once that was successful, we deployed it to the robot for live testing; the logic was simple enough that we didn’t really need to run it through a virtual simulation of the robot.
And that brings our blog on decision making to a close! We hope you’ve found it helpful in planning your own decision making logic.
Autonomous Robot Navigation Solution Brief
AMRs need a robust robot localisation solution; a tool that not only records the position and orientation of the robot, but also operates both indoors and outdoors.
This solution brief steps through the aspects we recommend our customers consider when deciding on their source of localisation for their autonomous mobile robots.
Read the solution brief to learn how the right robot localisation solution can help your AMR project, including the key questions you need to ask yourself before embarking on a project.
We hope you enjoyed this blog and it’s helped you if you’re just starting out on your AMR journey.
If you’d like to learn more about what we can currently do for AMR engineers, view our application page.
Alternatively, if you’ve got a specific project that you’d like to talk to us about, contact us using the form below. We’re always excited to discuss the latest and greatest robotics projects.
Keep an eye out for the next blog in our series which summarises all of the AMR Navigation spotlight blogs we’ve posted this year!