Build your own DIY Robot

Doug Blanding
7 min read · Feb 25, 2022

Using ROS to build an Autonomous Robot

Lego robot. Photo by Jason Leung on Unsplash

I've always been intrigued by mechanical things. I love to tinker with them and get to know how they work. Over the course of my career, I was fortunate to find employment as a mechanical design engineer, so I got paid to do what I love to do. But now that I'm retired, I have some spare time available to keep on doing this fun stuff as a hobby.

Robotic vacuum cleaners are here now, and self-driving cars seem to be just around the corner. What an exciting time this is to be able to tinker with the technology that makes these things possible! For example, a couple of years ago we purchased a Shark Ninja robotic vacuum cleaner. Initially, we set it loose to vacuum the whole house. After bumping around and cleaning the house a few times, keeping track of where it had run into things, it was able to construct a pretty good map. At that point, I was invited to view the map in the phone app and label the rooms. So now, when my wife asks me to vacuum the kitchen, I can delegate the job to the Ninja simply by tapping Kitchen in the app. The robot is then able to:

  • Plan an efficient route from its home base to the kitchen
  • Systematically vacuum the entire kitchen floor
  • Return to its home base
  • Transfer its dirt to the home base and get its battery recharged
  • Save a map showing the area cleaned

So now I am wondering… How hard can this be? What would it take to build my own autonomously navigating robot?

After doing a little research, I learned that there is actually quite a lot to this. But I also learned about ROS (which stands for Robot Operating System). ROS is an open source software system that takes care of most of the details of building an autonomously navigating robot. It's actually huge. But you don't have to know every detail in order to put its awesome capabilities to use controlling your robot. Eventually, I decided to bite the bullet and start learning ROS in earnest.

The ROS documentation is actually pretty good. It shows you how to install ROS and has lots of beginner tutorials on its capabilities. But, as I mentioned, ROS is huge, and it can be daunting, especially if you're new to it. Although ROS has been under development for over a decade and is actually pretty robust, I read that it runs best on Linux, and not just any distribution: it prefers Ubuntu, and specifically an LTS (Long Term Support) release. It seems the ROS developers have worked out a release schedule that stays in lock step with the Ubuntu LTS release schedule. The latest version of ROS (ROS Noetic) is intended to run on Ubuntu 20.04 LTS. Being a newbie myself, I decided to stick pretty close to this guidance: I installed Ubuntu 20.04 LTS on my Raspberry Pi 3B+ and Ubuntu MATE 20.04 LTS Desktop on my desktop computer.

But you don't actually have to do all this in order to start dabbling with ROS. Truth be told, you can start learning ROS without installing anything on your computer. I started learning ROS through the Robot Ignite Academy, which offers online courses that let you learn ROS in your web browser: their servers run ROS and display the results in your browser. It's pretty painless and easy to get started, but eventually, if you're like me, you will want to let go of this and install ROS on your own computer.

I completed two courses at the Robot Ignite Academy:

  1. ROS Basics in 5 Days
  2. ROS Navigation in 5 Days

Don’t let the names mislead you. Getting through these courses in 5 days is actually pretty ambitious. It took me a lot longer. But once I completed them, I was ready to install ROS on my local machine and undertake my DIY Robot Project.

Basic robot with Differential Drive wheel configuration

Now, let’s have a look at the actual physical robot. As my goal is to keep this as simple as possible, I’ve decided to use a Differential Drive wheel configuration, which, by the way, is exactly the same configuration my robot vacuum uses. Nothing fancy. If the right and left wheels are both rotating at the same speed, the robot will travel straight forward (or backward). If the left wheel is turning slower than the right wheel, the robot will veer to the left. So turning is controlled by controlling the differential speed between the left and right wheels, hence the name. Some other details:

  • The motors have integral encoders, allowing me to keep track of the speed of the left and right wheels.
  • An RPLidar A1 (~$100) is used to measure the distance to surrounding objects over a full 360 degrees.
  • A Bosch BNO055 IMU is used primarily to keep track of the robot’s orientation.
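To make the differential-drive idea concrete, here is a quick sketch of the arithmetic in Python. The wheel spacing used below is just a placeholder, not my robot's actual dimension:

```python
# Differential-drive forward kinematics: body velocity from wheel speeds.
# wheel_base (the distance between the two wheels) is a made-up value here.

def diff_drive_velocity(v_left, v_right, wheel_base=0.3):
    """Return (linear, angular) body velocity in m/s and rad/s.

    v_left, v_right: ground speeds of the left and right wheels (m/s).
    Positive angular velocity is a counterclockwise (leftward) turn.
    """
    linear = (v_right + v_left) / 2.0          # average of the two wheels
    angular = (v_right - v_left) / wheel_base  # speed difference drives the turn
    return linear, angular

# Equal wheel speeds: the robot drives straight (zero angular velocity).
print(diff_drive_velocity(0.2, 0.2))
# Left wheel slower than the right: the robot veers to the left.
print(diff_drive_velocity(0.1, 0.2))
```

The same two lines of math, run in reverse, tell you what each wheel speed must be to achieve a commanded linear and angular velocity.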

ROS Communication: nodes, messages & topics

As you come up the ROS learning curve, one of the first things you'll learn is the ROS paradigm for inter-process communication. In ROS, an individual program is registered as a node. Nodes send data to each other as messages over named channels called topics. So, as an example, I will write a node in Python (or C++, my choice) that listens to the individual ticks coming from the left and right encoders and publishes the accumulated tick counts as messages on two topics, /right_ticks and /left_ticks, at a rate of 10 Hz. Another node, /odometry_publisher, will subscribe to these topics and use the right and left tick data to calculate where the robot has moved. The /odometry_publisher node, as its name suggests, will then publish its best, up-to-date estimate of where the robot is. This estimate is known as the robot's pose (consisting of both the robot's position in x, y, and z and its orientation), and it is published on a topic called /odom.
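The arithmetic inside a node like /odometry_publisher boils down to dead reckoning. Here is a minimal sketch in plain Python, leaving out the rospy subscribe/publish boilerplate; the tick resolution and wheel geometry numbers are made up for illustration:

```python
import math

# Dead-reckoning pose update from encoder tick deltas -- the math a node
# like /odometry_publisher performs each cycle. The wheel geometry and
# ticks-per-revolution below are hypothetical values, not my robot's.
TICKS_PER_REV = 360
WHEEL_RADIUS = 0.035   # meters
WHEEL_BASE = 0.3       # meters, distance between the wheels
METERS_PER_TICK = 2 * math.pi * WHEEL_RADIUS / TICKS_PER_REV

def update_pose(x, y, theta, d_left_ticks, d_right_ticks):
    """Advance the pose (x, y, theta) by one interval of tick deltas."""
    d_left = d_left_ticks * METERS_PER_TICK
    d_right = d_right_ticks * METERS_PER_TICK
    d_center = (d_left + d_right) / 2.0        # distance the robot body moved
    d_theta = (d_right - d_left) / WHEEL_BASE  # change in heading
    # Move along the average heading over the interval.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# Equal tick counts on both wheels: straight-line travel, no rotation.
print(update_pose(0.0, 0.0, 0.0, 1000, 1000))
```

At 10 Hz, the real node would run this update on each new pair of tick messages and publish the resulting pose on /odom.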

ROS — Graphical Visualization Tools

RVIZ

Beyond the inter-process messaging framework, ROS also provides a very slick tool called RVIZ for visualizing what is going on with all the messages flying around, sometimes at rates up to 50 Hz.

Gazebo

Earlier, I mentioned that you can get into ROS without even installing it on your computer. Well, now I’m going to tell you that you don’t even need to have a robot. Gazebo is a graphical simulation environment that will allow you to go a long way in learning ROS without ever needing to get your hands dirty on any nasty old mechanical stuff. Yuck!

There is a nice Introduction to ROS written by @Sebastian.

Built In ROS Algorithms

Once I had written the nodes to read the wheel motor encoders and send speed commands to the wheel motors, ROS took care of the rest. Both the Lidar and the IMU come with ROS nodes already written and (mostly) working. ROS also provides a sensor fusion node that combines the data from the /odom and /imu topics, judiciously keeping the best of each and ignoring the noise.
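ROS's robot_localization package does this fusion with an extended Kalman filter, which is well beyond a few lines of code. But as a simplified illustration of the idea, here is a complementary filter for heading alone; the blend factor is an arbitrary choice for this sketch:

```python
# Simplified stand-in for sensor fusion of heading. ROS's
# robot_localization package uses an extended Kalman filter; this
# complementary filter just captures the flavor: mostly trust the
# smoothly-integrated odometry heading, but let the IMU's absolute
# heading continually correct its slow drift.
# ALPHA is an arbitrary blend factor chosen for this sketch.

ALPHA = 0.98  # weight given to the odometry-derived heading

def fuse_heading(odom_heading, imu_heading, alpha=ALPHA):
    """Blend the wheel-odometry heading with the IMU heading (radians)."""
    return alpha * odom_heading + (1.0 - alpha) * imu_heading

# If odometry has drifted to 0.10 rad while the IMU reads 0.00 rad,
# the fused estimate is pulled gently back toward the IMU:
print(fuse_heading(0.10, 0.0))
```

Run repeatedly, the small IMU correction at each cycle keeps the heading estimate from wandering the way raw wheel odometry does.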

ROS Navigation Stack: Gmapping & SLAM

Autonomous navigation is where ROS really takes over and does the whole job. The ROS Navigation Stack refers to a suite of nodes that does all the work of bringing in the data from the wheel odometry, IMU, and Lidar and combining it all intelligently to:

  • Generate a global map of the robot’s environment.
  • Generate a local map accounting for obstructions not on the global map.
  • Plan the best path around any obstacles to a goal pose.
  • Send velocity commands to the wheel motors, driving along the path to the goal.
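That last step, sending velocity commands that drive along the path, reduces at its very simplest to steering toward the next waypoint. The navigation stack's local planners (DWA, for example) are far more sophisticated than this; the toy proportional controller below, with made-up gains, just shows the shape of the computation:

```python
import math

# Toy "drive toward the next waypoint" controller. ROS's local planners
# (e.g. DWA) do much more; the speed and gain here are made-up values.
LINEAR_SPEED = 0.2   # m/s, constant cruise speed
TURN_GAIN = 1.5      # proportional gain on heading error

def velocity_command(x, y, theta, goal_x, goal_y):
    """Return (linear, angular) velocity steering toward (goal_x, goal_y)."""
    heading_to_goal = math.atan2(goal_y - y, goal_x - x)
    # Wrap the heading error into (-pi, pi] so we always turn the short way.
    error = math.atan2(math.sin(heading_to_goal - theta),
                       math.cos(heading_to_goal - theta))
    return LINEAR_SPEED, TURN_GAIN * error

# Waypoint directly to the robot's left: positive (counterclockwise) turn.
print(velocity_command(0.0, 0.0, 0.0, 0.0, 1.0))
```

The resulting (linear, angular) pair is exactly the shape of the velocity message the stack publishes to the wheel motors, which the differential-drive math then converts into left and right wheel speeds.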

Eventually, I was able to complete the project. Below is a split screen video showing the ROS RVIZ window on the left and the actual robot on the right.

  • At the start of the video, a final goal position and orientation are specified in RVIZ.
  • Once the goal is specified, the ROS navigation stack (on board the robot) computes a path from its start position to the goal. (This path is shown in light green.)
  • The robot then begins to follow the prescribed path from its current position to the goal position.
  • Along the way, the robot’s Lidar is running, showing (in red) any obstructions detected.
  • Any obstructions detected (by Lidar) within a 2 meter square area around the robot's current position are shown on a local map in hot pink. The robot steers clear of these obstructions.

On the left screen, a goal pose is set. The robot then calculates its path (faint green line) and drives autonomously to the goal, avoiding any collisions along the way.

I hope that you have found this story interesting, at the very least. And if you decide to undertake a robot project of your own, I hope that you have learned something that will be helpful as you make the journey.

Doug Blanding

Retired mechanical design engineer with an interest in robots, CAD, Python, Linux, small computers, and microcontrollers.