In an earlier blog post, I mentioned that we had 3 interns working at our Applied Research and Innovation Lab at Pier 9. A few weeks ago, we held our end-of-internship presentations. I have already shared the story of Jack Reinke and Brice Dudley's presentation on non-traditional arm/hand casts as well as Charlott Vallon's presentation on teaching a robot via demonstration.
Here is another story — also related to robots and demonstration.
Today, robots are well suited to repetitive tasks but poorly suited to dealing with changes in their environments.
One way to address this is to equip robots with sensors, like cameras and scanners, so that they can survey the conditions around them and react accordingly; however, programming a robot to do just that is complicated.
Today, a robot is programmed in one of two ways. You can write code, where you have no spatial awareness, so it's easy to make a mistake like flipping your Z axis and sending the arm in the wrong direction. Or you can move the real robot around with the teach pendant and save waypoints, which means wrestling with the pendant's challenging user interface, having the physical cell already set up, and still risking a crash into something if you make a mistake.
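To make that concrete, here is a minimal sketch of what the code-based route can look like. The `Robot` class, its method names, and the pose format are made-up stand-ins rather than any particular vendor's API; the point is that every pose is typed in as raw numbers.

```python
# Minimal sketch of code-based waypoint programming. The Robot class below is
# a mock stand-in (not a real vendor API); a pose is (x, y, z, roll, pitch, yaw)
# in meters and radians.

class Robot:
    """Mock robot that just logs commands, standing in for a real driver."""
    def move_linear(self, pose):
        print(f"moving to {pose}")
    def close_gripper(self):
        print("gripper closed")
    def open_gripper(self):
        print("gripper opened")

# Every waypoint is raw numbers typed in by hand.
PICK_APPROACH = (0.50, 0.20, 0.30, 3.14, 0.0, 0.0)   # hover above the box
PICK_GRASP    = (0.50, 0.20, 0.12, 3.14, 0.0, 0.0)   # descend to grasp height
PLACE_POSE    = (0.10, -0.40, 0.12, 3.14, 0.0, 0.0)  # drop-off location

def pick_and_place(robot):
    """Run one fixed pick-and-place cycle through the hard-coded waypoints."""
    robot.move_linear(PICK_APPROACH)
    robot.move_linear(PICK_GRASP)
    robot.close_gripper()
    robot.move_linear(PLACE_POSE)
    robot.open_gripper()

pick_and_place(Robot())
# Nothing here knows where the box actually is: if the box moves, or if a Z
# value gets a flipped sign, you only find out when the arm goes the wrong way.
```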
Demonstrating in VR is a more intuitive way to understand what's going on with the robot in all three dimensions, and it's a more intuitive interface than the teach pendant because you directly manipulate the robot instead of working through a mathematical abstraction like Euler angles or individual joint values. It's arguably like other graphical programming interfaces: the code and math are embodied in the rigged model and the physics engine, so as you move the robot, the computer calculates all of the angles, joint poses, and waypoints that need to be sent to the robot, and as an end user you never need to see that lower-level detail. You can treat the robot cell as a system and program higher-order operations, like "pick up a box," since the computer can recalculate the necessary geometry whenever the box's starting pose changes.
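As a rough illustration of what such a higher-order operation might look like, here is a sketch that derives grasp waypoints from the box's pose instead of hard-coding them, using plain NumPy 4x4 transforms. The `pick_up_box` function and its fixed offsets are assumptions for illustration, not the interns' actual implementation.

```python
import numpy as np

def translation(x, y, z):
    """4x4 homogeneous transform for a pure translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def pick_up_box(box_pose):
    """Given the box's pose as a 4x4 transform in the robot's base frame,
    return waypoints for a simple top-down grasp.

    The offsets are expressed in the *box* frame, so if the box moves or
    rotates, the waypoints move with it and nothing needs reprogramming.
    """
    approach_offset = translation(0.0, 0.0, 0.20)   # 20 cm above the box
    grasp_offset    = translation(0.0, 0.0, 0.02)   # just above its top face
    return [box_pose @ approach_offset, box_pose @ grasp_offset]

# Example: the box sits 60 cm in front of the robot, 10 cm to the side.
box_pose = translation(0.60, -0.10, 0.0)
for waypoint in pick_up_box(box_pose):
    print(np.round(waypoint[:3, 3], 3))   # positions an IK solver would target
```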
So what if we just allowed a human, who comes pre-equipped with sensors in the form of eyes, ears, nose, and so on, to operate a robot remotely? It's a form of telepresence. And instead of programming a robot through code, what if we simply had the robot mimic a human's movements as the way to learn what to do? Furthermore, the learning could happen via demonstration in virtual reality (VR) before the robot and its physical environment are put into action.
In VR, you can confirm that your robot is moving the way you want without risking collisions with your equipment. You can debug safely while looking very closely at joints that would be dangerous to put your face near on a physical robot. And suppose your factory is still being built, or you haven't decided which kind of robot to use? In VR, you can work inside a model of what your factory will look like when construction is done and start prototyping your production line, or you can test out different robots and see which has the best size and reach for your space.
Traditional robot arm programming is harder than you might imagine. Take picking up a box as an example. A human just knows how to pick up a box, as shown below, but teaching the robot requires you to define the coordinates, set up several waypoints, and then run the robot. And worst of all, you need to reprogram the robot every time the box's location changes.
Wouldn't it be easier if we could just move the robot and have it remember that movement instead of programming it? And wouldn't this be even easier if we could do it in VR instead of physically moving the actual robot? In VR, it is more intuitive to teach a robot. It's like playing a video game. You just grasp the robot in VR and demonstrate how it should move to pick up the object. Then we record this motion and its relationship to the target object, as sketched below. The nice thing is that we only need to teach it once.
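Here is one way that recording step could look, again just a sketch: each gripper pose sampled during the VR demonstration is stored relative to the object's frame rather than the world frame. The function names and the 4x4-matrix pose format are my own assumptions for illustration.

```python
import numpy as np

def translation(x, y, z):
    """4x4 homogeneous transform for a pure translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def record_demonstration(object_pose, gripper_poses):
    """Store the demonstrated motion relative to the target object.

    object_pose   : 4x4 pose of the object during the VR demo
    gripper_poses : 4x4 gripper poses sampled as the user drags the
                    virtual robot through the task
    """
    object_inv = np.linalg.inv(object_pose)
    # T_relative = inverse(T_object) @ T_gripper: the motion as seen from the box
    return [object_inv @ T for T in gripper_poses]

# Made-up demo: box at x = 0.5 m, gripper approaches from 25 cm above, then 2 cm.
demo = record_demonstration(
    translation(0.5, 0.0, 0.0),
    [translation(0.5, 0.0, 0.25), translation(0.5, 0.0, 0.02)],
)
print([np.round(T[:3, 3], 2).tolist() for T in demo])
# [[0.0, 0.0, 0.25], [0.0, 0.0, 0.02]]
```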
In the real world, the robot computes the transformation from the previously taught target pose (from the VR simulation) to the current target pose (from a tracker) and remaps the recorded trajectory onto the desired task. Therefore, no matter how the object's location changes, the robot knows how to pick it up based on what we taught it in VR.
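In transform terms, my reading of that step is: if a waypoint was taught as T_wp while the object sat at T_taught, and the tracker now reports the object at T_new, the remapped waypoint is T_new * inverse(T_taught) * T_wp. A small self-contained sketch, with notation and helper names of my own:

```python
import numpy as np

def translation(x, y, z):
    """4x4 homogeneous transform for a pure translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def remap_waypoint(T_taught_object, T_new_object, T_taught_waypoint):
    """Map a waypoint taught in VR onto the object's current pose.

    If the waypoint was demonstrated while the object sat at T_taught_object,
    and the tracker now reports the object at T_new_object, the equivalent
    waypoint is  T_new_object @ inv(T_taught_object) @ T_taught_waypoint.
    """
    return T_new_object @ np.linalg.inv(T_taught_object) @ T_taught_waypoint

# Taught in VR: box at (0.5, 0.2, 0), grasp waypoint 2 cm above it.
T_taught_object   = translation(0.5, 0.2, 0.0)
T_taught_waypoint = translation(0.5, 0.2, 0.02)

# At run time the tracker finds the box somewhere else entirely.
T_new_object   = translation(0.3, -0.1, 0.0)
T_new_waypoint = remap_waypoint(T_taught_object, T_new_object, T_taught_waypoint)
print(np.round(T_new_waypoint[:3, 3], 3))   # approximately [0.3, -0.1, 0.02]
```

The grasp waypoint follows the box wherever it goes, which is why the demonstration only has to be taught once.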
After we program each robot's desired motion by demonstration, we can let the robots themselves decide which one is the best candidate for the task. In this video, you can see the box being randomly placed on the table. By considering reachability and manipulability, the VR system decides which robot picks up the box. Imagine a future where you have multiple robots, and you don't need to manually assign and program the task for each one; the system arranges the work and gets it done for you.
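One way such a selection could work is sketched below. The scoring rule here is my own illustrative choice, not necessarily the lab's: filter out robots that can't reach the box at all, then pick the one with the highest manipulability (Yoshikawa's sqrt(det(J J^T)) measure) at the grasp configuration. The `solve_ik` and `jacobian` methods are hypothetical interfaces each robot model is assumed to expose.

```python
import numpy as np

def manipulability(jacobian):
    """Yoshikawa's manipulability measure: sqrt(det(J @ J.T)).
    Larger values mean the arm can move more freely at that configuration."""
    return float(np.sqrt(np.linalg.det(jacobian @ jacobian.T)))

def choose_robot(robots, box_pose):
    """Pick the robot best placed to grab the box.

    Each robot is assumed to expose two hypothetical methods:
      solve_ik(pose) -> joint angles that reach the pose, or None
      jacobian(q)    -> 6xN Jacobian at joint configuration q
    """
    best_robot, best_score = None, -1.0
    for robot in robots:
        q = robot.solve_ik(box_pose)
        if q is None:
            continue                      # box is out of this robot's reach
        score = manipulability(robot.jacobian(q))
        if score > best_score:
            best_robot, best_score = robot, score
    return best_robot
```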
The future looks promising for programming robots and getting them to augment human abilities.
Thanks, Fereshteh and Hsien-Chung. It's great that we don't need to fear our robot overlords in the future but welcome them as our able assistants instead.
A moving learning experience is alive in the lab.