
Autonomous Systems

Development of an autonomous delivery system.

Autonomous systems are expected to have a significant impact on many aspects of our society. One of them, the transportation of goods and people, is already undergoing a profound transformation toward autonomous delivery. Combine is contributing to the HUGO Delivery Robot by Berge AB by developing advanced positioning systems to tackle a major challenge in the field – the area close to the destination, where small-scale adjustments and precise solutions are required for timely and efficient delivery. 

The idea of ordering takeaway from a local restaurant and having it quickly delivered by autonomous robots not only sounds like sci-fi come to life, it also promises gains in efficiency and safety. So why are our streets not yet filled with R2-D2's nimble cousins whizzing about with culinary delights or important packages? Unfortunately, many challenges remain to be overcome, and localization is the most important among them. Solving it requires accurate position and orientation data, a task we also face with the HUGO Delivery Robot.

There are many different sensors that can be used for localization, each with its own advantages and drawbacks. The most crucial difference is the type of error each can produce. The good news is that instead of causing overwhelming chaos, the differing nature of these error types can be used for cross-verification, like a high-frequency version of the cross-examination popularized by police procedurals, with sensors as criminals ratting each other out.


IMU Sensors

IMU sensors are so common that they can be found even in an ordinary phone or smartwatch. They generally measure rotational velocity, linear acceleration and, via Hall-effect sensors, the magnetic field. All of these measurements come down to capacitance and resistance changes at the silicon level. As depicted in the image below, the sensor structure is dominated by loosely suspended inertial masses, which shift their positions in response to sensor motion. By capturing the resulting changes in capacitance of the conductive wall elements, both linear acceleration and angular velocity can be measured.

Nesterenko, Tamara & Koleda, Aleksei & Barbin, E. S. (2018). Integrated microelectromechanical gyroscope under shock loads. IOP Conference Series: Materials Science and Engineering, 289, 012003. doi:10.1088/1757-899X/289/1/012003.

On the other hand, measuring the yaw, pitch and roll angles is trickier than measuring acceleration and angular velocity. The image below illustrates the problem: since the angle measurement depends on the Earth's magnetic field B, an IMU sensor placed near other magnetic field sources, such as electric motors, might end up monitoring the motor's condition instead of measuring angles.

Mathias Schubert, Philipp Kühne, Vanya Darakchieva, and Tino Hofmann, ”Optical Hall effect—model description: tutorial,” J. Opt. Soc. Am. A 33, 1553-1568 (2016).

Since an IMU measures acceleration and angular velocity, its data must be integrated over time to determine position and orientation. As one might guess, no sensor is free of error, and these errors accumulate during integration, causing the orientation data to drift over time, as depicted in the graph below. A common first countermeasure is to merge the data with another source characterized by a different error type using a Kalman Filter, a proven approach with a wealth of field testing and literature.

I. Sa and P. Corke, 100Hz Onboard Vision for Quadrotor State Estimation, Australasian Conference on Robotics and Automation, 2012.
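As a minimal sketch of the drift problem, double-integrating even a tiny constant accelerometer bias shows how quickly the error grows. The bias value below is hypothetical, chosen only for illustration:

```python
import numpy as np

# Double-integrate a constant accelerometer bias (a hypothetical 0.01 m/s^2)
# to show how integration turns a tiny sensor error into large position drift.
dt = 0.01                     # 100 Hz sample rate
t = np.arange(0.0, 60.0, dt)  # one minute of samples
bias = 0.01                   # constant accelerometer bias in m/s^2

measured_accel = np.full_like(t, bias)  # true acceleration is zero

velocity = np.cumsum(measured_accel) * dt  # first integration
position = np.cumsum(velocity) * dt        # second integration

# Analytically the drift is 0.5 * bias * t^2: about 18 m after one minute,
# from an error far too small to notice in any single sample.
print(f"position drift after 60 s: {position[-1]:.1f} m")
```

Because an acceleration bias grows quadratically with time after double integration, IMU-only positioning becomes unusable within seconds unless it is corrected by an external source.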

As highly accurate IMUs can be prohibitively expensive for a robot fleet, reaching well into the 10k SEK range, a more sensible option is to buy low-end 9-axis sensors with an internal fusion algorithm. However, these sensors are sensitive to geographical location, since they use the Earth's magnetic field for error correction; typically, moving them by more than 100 km requires recalibration. Since we are not considering fighter jets or long-distance cargo, however, they are a perfect choice for delivery robots servicing a fixed local area.

Wheel Encoders

Wheel encoders are devices that measure the position or velocity of wheels. They produce very clean data, as can be seen in the image below.

Gougeon, Olivier & Beheshti, Mohammad H. (2017). 2D Navigation with a Differential Wheeled Unmanned Ground Vehicle. doi:10.13140/RG.2.2.20876.16006.

Since the wheels always slip a little (especially in skid-steer vehicles), this data also drifts, at a rate determined by surface friction. This error is harder to handle than the IMU error because the surface friction constant is very difficult to determine with cameras or other cheap sensors. A surface friction map of the environment would be an ideal, yet impractical, solution. Another approach could involve machine learning on a large-scale database of surface images and friction constants. As a more straightforward alternative, the wheel encoder data can simply be fed into the Kalman Filter and merged with the other data sources.
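The slip problem is easy to see in a standard differential-drive odometry update. The geometry values below are illustrative defaults, not the robot's actual parameters:

```python
import math

def update_pose(x, y, theta, ticks_left, ticks_right,
                ticks_per_rev=2048, wheel_radius=0.05, wheel_base=0.3):
    """Advance a differential-drive pose estimate from one encoder reading."""
    # Distance rolled by each wheel since the previous update.
    d_left = 2 * math.pi * wheel_radius * ticks_left / ticks_per_rev
    d_right = 2 * math.pi * wheel_radius * ticks_right / ticks_per_rev

    d_center = (d_left + d_right) / 2.0        # forward travel
    d_theta = (d_right - d_left) / wheel_base  # heading change

    # Any wheel slip shows up here as phantom travel: the encoder counts
    # rotation, not actual motion over the ground, so the estimate drifts.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta

# One full revolution on both wheels: straight-ahead travel of one wheel
# circumference, 2 * pi * 0.05 m, roughly 0.31 m.
x, y, theta = update_pose(0.0, 0.0, 0.0, 2048, 2048)
```

Note that the update only ever sees wheel rotation; a wheel spinning on ice produces exactly the same ticks as a wheel gripping asphalt, which is why the friction constant matters so much.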

GPS Receiver

GPS is a ubiquitous technology inseparable from our daily lives. A GPS receiver needs signals from at least four satellites of the same constellation to derive a position fix: three ranges for trilateration, plus a fourth to correct the receiver's clock. Since the clock resolution of the receiver is limited, it can only measure the delays to the different satellites very coarsely. The error characteristics of the receiver are shown in the error graph below. If we consider the world as a grid of 5×5 meter squares, a low-end GPS receiver can only determine which square it is located in. Indoors, the resolution can degrade to about 50 meters in each direction, since the satellite signals are attenuated when propagating through materials like steel or concrete.
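To see why a limited clock translates into meter-scale position cells, consider the timing arithmetic. The two clock resolutions below are illustrative assumptions, not specifications of any particular receiver:

```python
# The coarseness of the delay measurement translates directly into position
# error: the signal travels at the speed of light, so every nanosecond of
# clock uncertainty is roughly 0.3 m of range uncertainty.
C = 299_792_458.0  # speed of light in m/s

range_errors = {}
for label, clock_resolution_ns in [("low-end receiver", 10.0),
                                   ("high-end receiver", 1.0)]:
    range_errors[label] = C * clock_resolution_ns * 1e-9  # meters

print(range_errors)  # ~3 m vs ~0.3 m of range error per pseudorange
```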


Kalman Filter

As mentioned above, the Kalman Filter is used to combine the advantages of individual sensors while suppressing their differing errors. Data that is accurate but imprecise can thus be combined with data that is precise but inaccurate to obtain the best of both worlds, a process known as sensor fusion. For example, the precise but inaccurate IMU data is corrected by the accurate but imprecise GPS position, so that the drift can be reset whenever the robot moves into a different 5×5 meter grid cell or the absolute drift exceeds 2.5 meters on average.

2.5 meters might seem like a very poor spatial resolution at first. However, when combined with the highly accurate data of wheel encoders, it is possible to achieve a positioning accuracy of 5 centimeters with a heading accuracy of 0.5 degrees for an autonomous robot. The results of applying a Kalman Filter to another fusion system are shown in the graph below. 

Wong, Alexander. (2019). Low-Cost Visual/Inertial Hybrid Motion Capture System for Wireless 3D Controllers. 
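The predict-correct cycle behind this fusion can be sketched in one dimension: precise-but-drifting odometry drives the prediction, while an accurate-but-coarse GPS fix drives the correction. All noise levels and the odometry bias below are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

dt, n_steps = 0.1, 600
true_velocity = 1.0      # robot drives at a constant 1 m/s
odom_bias = 0.02         # odometry over-reads velocity slightly -> drift

x_est, p_est = 0.0, 1.0  # position estimate and its variance
q = 0.01                 # process noise added per prediction step
r = 2.5 ** 2             # GPS measurement variance (~2.5 m std dev)

true_pos = 0.0
for k in range(n_steps):
    true_pos += true_velocity * dt

    # Predict: dead-reckon forward with the (biased) odometry velocity;
    # left alone, this drifts by odom_bias * t, i.e. 1.2 m after a minute.
    x_est += (true_velocity + odom_bias) * dt
    p_est += q

    # Correct: blend in a coarse GPS fix once per second. The Kalman gain
    # weighs the two sources by their variances.
    if k % 10 == 0:
        gps = true_pos + rng.normal(0.0, 2.5)
        gain = p_est / (p_est + r)
        x_est += gain * (gps - x_est)
        p_est *= 1.0 - gain

print(f"true position {true_pos:.2f} m, fused estimate {x_est:.2f} m")
```

The key property is that the fused error stays bounded: the odometry error would grow without limit, but each GPS correction pulls the estimate back toward an absolute, drift-free reference.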

Point Cloud Data

Many other measures can help increase accuracy without the need for expensive sensors. In our case, it was point cloud data generated by a stereo infrared camera. Stereo cameras are based on the same principle as human vision, in that they derive depth data by comparing the images produced by two different image sensors. Since the distance between the image sensors is a known parameter, frames can easily be processed to generate a point cloud (or depth data).
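The depth calculation itself is a one-liner once the cameras are calibrated. The focal length and baseline below are assumed values, not those of the actual robot's camera:

```python
import numpy as np

# Stereo depth from disparity: with focal length f (in pixels) and baseline
# b (distance between the two sensors, in meters), a feature matched with
# pixel disparity d lies at depth Z = f * b / d.
f_px = 600.0       # focal length in pixels (assumed)
baseline_m = 0.05  # 5 cm between the two image sensors (assumed)

disparity_px = np.array([60.0, 30.0, 10.0])  # matched-feature pixel offsets
depth_m = f_px * baseline_m / disparity_px   # -> 0.5 m, 1.0 m, 3.0 m
```

The inverse relationship explains why stereo depth is precise up close and coarse at range: a one-pixel matching error barely matters at 60 px of disparity but is a large relative error at 10 px.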

In our specific case, the major problem was the rotation of the robot around its center axis. During this motion, the tires fold under the wheels and act like a spring. When the spring force exceeds the robot's weight, the robot jumps by a few millimeters, causing a displacement that is not detected by the wheel encoders and is treated as noise by the IMU (because it is an impact rather than a continuous motion).

To solve this, features in the point cloud were used. The algorithm searches for flat surfaces in the point cloud and compares the differences between the sensor data (IMU and wheel encoder) and the position and angle of the flat surface to correct the robot's heading and position data.
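A minimal sketch of the flat-surface idea, assuming a plain least-squares plane fit (the production algorithm is not detailed here): the normal of a detected wall provides an absolute angular reference that the drifting sensors can be checked against. The wall geometry below is hypothetical:

```python
import numpy as np

def fit_plane_normal(points):
    """Least-squares plane normal for an Nx3 array of points.

    After centering, the singular vector with the smallest singular value
    is the direction of least variance, i.e. the plane normal.
    """
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    return normal / np.linalg.norm(normal)

# Hypothetical wall segment lying in the x-z plane: its normal points along
# the y axis, so the normal's direction in the ground plane can serve as a
# heading reference that, unlike the IMU or encoders, does not drift.
wall = np.array([[x, 0.0, z] for x in np.linspace(0.0, 2.0, 5)
                 for z in np.linspace(0.0, 1.0, 5)])
normal = fit_plane_normal(wall)
heading_ref = np.arctan2(normal[1], normal[0])  # angle in the ground plane
```

In practice the points would come from the stereo point cloud with noise, so a robust variant (e.g. fitting only inlier points) would be needed, but the angular reference works the same way.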

This method did not produce a perfect result, but it improved our position and orientation data sufficiently while only marginally increasing the CPU load. These improvements point toward efficient implementations and accurate localization, promising to soon bring a friendly R2-D2 clone to your doorstep bearing delicious food!
