A Quick Primer on Robotics
The word ‘robot’, to describe a human-like machine, was coined in a 1920 Czech play called R.U.R. (Rossum’s Universal Robots) by Karel Čapek.
Mechanical men have been part of our cultural zeitgeist since then. Robots are different from androids in that they are mechanical, whereas androids are at least partly organic.
When you hear the word ‘robot’ you may immediately picture Robby the Robot from Forbidden Planet, or maybe the Robot from Lost in Space. That is the typical depiction: a mechanical version of ourselves. They can be benevolent, evil or just misunderstood.
When I think of robots, I picture industrial robots, because I have worked with them. Industrial robots have been around since 1962. So, guess what? They are as old as I am!
These industrial robots take different forms, but what they all do is automate a task. These robots are programmed to do the same task over and over again. They may be stand-alone applications, like a welding robot, or they might be part of some larger automation.
They were created to automate tasks that were dirty and dangerous for humans. They are very good at repetitive tasks. Welding a seam, drilling a hole, or inserting a chip into a circuit board.
Traditionally, these robots have been programmed at the level of individual movements. The program tells the robot to move a certain distance in the X, Y or Z dimension at a certain speed, and then to do something, like open or close the gripper.
Industrial robots, in this way, are actually quite dumb. They just repeat these movements over and over. They are not thinking. They don’t understand their environment. If something changes, they cannot adapt to new feedback. All they do is execute discrete movement and action commands in space and time.
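To make that concrete, here is a minimal sketch of what that kind of program looks like. The RobotArm class, its move_to and gripper commands, and the coordinates are all invented for illustration; real controllers use their own proprietary motion languages, but the shape is the same: a fixed list of motions and actions, repeated with no feedback.

```python
# A hypothetical pick-and-place cycle. The RobotArm class and its
# move_to/gripper commands are invented for illustration; real
# controllers use their own motion languages, but the shape is the same.

class RobotArm:
    def move_to(self, x, y, z, speed_mm_s):
        print(f"Move to ({x}, {y}, {z}) at {speed_mm_s} mm/s")

    def gripper(self, state):
        print(f"Gripper: {state}")

arm = RobotArm()

for cycle in range(3):                           # in production this loops forever
    arm.move_to(250, 0, 120, speed_mm_s=400)     # hover above the pick point
    arm.move_to(250, 0, 20, speed_mm_s=100)      # lower onto the part
    arm.gripper("close")                         # grab it
    arm.move_to(250, 0, 120, speed_mm_s=400)     # lift
    arm.move_to(-100, 300, 120, speed_mm_s=400)  # carry to the place point
    arm.move_to(-100, 300, 25, speed_mm_s=100)   # lower
    arm.gripper("open")                          # release
```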
There is a whole catalogue of software that provides these commands for these robots; it is called control software. The actual mechanics of doing the work come from hydraulics or electric motors built into the robots to provide the ‘animus’. There’s a whole science of control systems in robot design devoted to removing mechanical chatter, vibration and other mechanical forces that add variability to the movements.
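For a taste of what that control-systems layer does underneath, here is a toy proportional-integral-derivative (PID) loop, a classic technique for smoothly driving a motor toward a target. The gains, the loop rate and the crude physics are all made up for illustration.

```python
# A toy PID loop driving one robot joint toward a target angle.
# The gains and the simplistic physics are invented for illustration;
# real controllers are tuned per axis and run at much higher rates.

kp, ki, kd = 2.0, 0.5, 0.3   # proportional, integral, derivative gains
dt = 0.01                    # control loop period, seconds

target = 90.0                # desired joint angle, degrees
position = 0.0               # current joint angle
velocity = 0.0
integral = 0.0
prev_error = target - position

for step in range(1000):     # simulate ten seconds of control
    error = target - position
    integral += error * dt
    derivative = (error - prev_error) / dt
    prev_error = error

    torque = kp * error + ki * integral + kd * derivative

    # Crude stand-in for the physics: torque accelerates the joint,
    # and a little friction damps it.
    velocity += torque * dt
    velocity *= 0.98
    position += velocity * dt

print(f"Joint settled near {position:.1f} degrees")
```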
The way industrial robots are deployed goes something like this:
- An organization has a repetitive task that they want to automate.
- They buy a robot arm that is capable of the task; in fact, they probably design the production line from the start to use those robotic arms.
- They install the robot physically – typically these are bolted into the floor for stability.
- They have an engineer program in the tasks using the control software.
- They go through various levels of testing to make sure it’s doing what it’s supposed to do.
- They turn on the line and it helps build a run of 10,000 cars or a million motherboards.
- At the end of the model run they retire it or retask it to a new line.
If you’ve ever seen robots on a car production line, you’ll notice that the form factor is an arm. That’s the most common type of industrial robot. They are bolted to the floor, and they use some sort of tool to do some sort of task.
What are the advantages of using robots?
Most people might say that they are cheaper than humans. That is part of the equation but usually a minor consideration. The real benefits are that robots can work in environments that are hostile to humans, they can do repetitive tasks for extended periods of time, and they can be more reliable and more precise at those tasks.
They also can be quite robust. A robot can move and manipulate large, heavy objects that a human cannot.
What are the challenges?
These types of dumb robots are not very flexible. They are set up for one specific task; they can’t do anything else without being moved and reprogrammed. That’s why the most common application is high-volume repetitive tasks. They are an investment. That industrial robot arm is going to cost a couple hundred thousand bucks.
So really, the traditional industrial robot is just another machine in the factory.
…
Now we pause for a story.
Early in my career I implemented a business system at a robotic arm manufacturer. I remember how eerie it was to come into the plant in the mornings and see the robot arms going through their testing. With no humans around, they would be doing their repetitive dances: arms whirling around, picking up boxes and stacking them, then unstacking them.
It was quite surreal.
But what about these new robots that we see that look like dogs or bipedal, like people?
Yeah, those are cool. This is a situation where, again, we always knew the math around how to make these form factors but didn’t have the processing power to enable it. With recent advances in edge computers and sensors we can now program the complexity of these more familiar movements into the machine.
It turns out that walking around and balancing is a really complex set of feedback and control mechanisms. It’s a hard problem. But the real question is why. Why are we obsessed with making an electro-mechanical thing that moves around like us or our pets? What’s the point? What’s the application?
We’re in a situation where the technology is evolving faster than the business case. We will see adoption where it makes sense. We are not at that inflection point yet. Robots are a technology that has been on the verge of mass adoption for two decades. It’s a solution in search of its problem.
Different applications of robots are emerging due to what is known as a convergence of forces. First, labor is getting more expensive and harder to find. Second, in an application of Moore’s Law, hardware is getting cheaper and more powerful at a very fast rate – (I’ll let the quants argue over whether that is exponential or arithmetic) – everything electronic is cheaper and faster every year.
Better. Faster. Smarter. Cheaper. Every year.
These quadruped and bipedal robots are an example of an application area that is waiting on the cost/functionality inflection point. Despite advances and cool YouTube videos, there is a challenge in cost justification. Sure, we can make them now, but why? To what end?
There is another interesting space where I used to work that is also right on the edge of adoption.
Autonomous Mobile Robots. Most of what are known as “Autonomous Mobile Robots” use wheels, like a car, to get around. These are the ones you see delivering food, running errands or grabbing items in a warehouse.
Let’s unpack that “Autonomous and Mobile” part. Obviously, ‘mobile’ is the ability to move around. No extension cord. No tether.
And they do it by themselves – that’s the autonomous part. They navigate on their own.
How do they find their way around? How do they know where they are?
They navigate by using sensors that ‘look’ around at the environment for cues and then comparing what they see to a digital map.
Typically these robots use a combination of LIDAR and cameras. LIDAR works like radar, but it bounces beams of light off of the environment so the robot can ‘see’ what’s around it. The cameras use object recognition to ‘see’ objects as well. Then the control software tells the robot what to do when it sees something.
This is why a self-driving car stops when a pedestrian steps in front of it. It sees the person with its sensors and the control program tells it to stop or go around.
This is the same way the robots navigate around a warehouse without running into people or other robots.
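Here is a heavily simplified sketch of that ‘see something, stop’ logic, with a made-up stand-in for the LIDAR driver: the robot looks at the range readings in the cone ahead of it and halts if anything is closer than a safety margin.

```python
import random

# Hypothetical stand-in for a LIDAR driver: returns (angle_deg, distance_m)
# pairs for one sweep. A real sensor driver would supply these readings.
def read_lidar_scan():
    return [(angle, random.uniform(0.3, 10.0)) for angle in range(0, 360, 2)]

SAFETY_DISTANCE_M = 1.0   # stop if anything is inside this range
FRONT_CONE_DEG = 30       # only worry about +/- 30 degrees straight ahead

def obstacle_ahead(scan):
    for angle, distance in scan:
        # Treat 0 degrees as straight ahead; fold 350 degrees into -10, etc.
        bearing = angle if angle <= 180 else angle - 360
        if abs(bearing) <= FRONT_CONE_DEG and distance < SAFETY_DISTANCE_M:
            return True
    return False

if obstacle_ahead(read_lidar_scan()):
    print("Something is too close ahead: stop and wait, or plan a way around")
else:
    print("Path is clear: keep driving")
```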
But how do they know where they are?
There are different ways for the robots to figure this out.
First, autonomous mobile robots require wireless communication of some sort – they are connected to a network over the air. They can use the network to triangulate where they are, but that is typically not accurate enough. They can also use GPS, but that isn’t quite accurate enough either.
What they actually use are ‘waypoints’. What’s a waypoint? A waypoint is a physical or digital marker or beacon that tells the robot where it is. For instance, in the warehouse the robot may read a barcode at the end of a shelf with its camera and know exactly where it is.
By constantly reading waypoints and combining them with the other sensor, network and GPS data, the robots can place themselves on the digital map. Once they know where they are on the map, they can route themselves to the next place they need to be.
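Here is a minimal sketch of that last step, with a made-up waypoint table and warehouse grid: the robot reads a barcode to fix its position on the map, then runs a simple breadth-first search over the open aisles to route itself to its next stop. Real systems use richer maps and planners, but the idea is the same.

```python
from collections import deque

# Hypothetical waypoint table: barcode -> (row, col) on the warehouse grid.
WAYPOINTS = {"SHELF-A1": (0, 0), "SHELF-B3": (2, 4), "DOCK-1": (4, 1)}

# 0 = open aisle, 1 = shelving the robot cannot drive through.
GRID = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def route(start, goal):
    """Breadth-first search over the grid; returns the list of cells to drive."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])
                    and GRID[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # no route found

# The robot's camera just read this barcode, so now it knows where it is.
here = WAYPOINTS["SHELF-A1"]
goal = WAYPOINTS["SHELF-B3"]
print(route(here, goal))
```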
You can see how this convergence of technology is starting to approximate what we humans do.
Let’s say I want to drive to Starbucks to get a coffee. What am I going to do? I’m going to back out of my driveway. I’m going to check to see that no people are walking and no cars are coming, then pull out. When I get to the first intersection, I’m going to stop if the light is red, and when it turns green I’ll turn right, again checking to make sure no one is in the way.
That’s pretty much what an autonomous mobile robot is doing.
Remember, robots have been around for a long time, but the sensor and compute capacity is getting cheaper and more powerful every year – and that will start opening possibilities that were out of reach before.
If you then layer on Artificial Intelligence and machine learning, the robots are going to be able to get better by learning.
Then they are going to get really good.
And the only question we will be left with is the ‘Why?’