The Autodesk Applied Research Lab pursues a broad scope of inquiry, ranging from Advanced Robotics, the Internet of Things, and Machine Learning to Climate Change and the Future of Work. The team builds real-world prototypes to understand how cutting-edge technology will develop and how those developments will affect Autodesk and the world at large.
Team member and Senior Research Engineer Heather Kerrick filed this report based on her recent work.
Let's face it, robots are clueless.
They do not inherently know where they are or what is near them. So left to their own devices, robots can crash into walls, equipment, humans and even themselves when moving from point A to point B — completely oblivious to the impending collision. Today, the person programming the robot has to specify the robot's location relative to everything it needs to interact with, and everything it needs to avoid, before operating the machinery. But the moment anything moves or shifts in the space, all of that robot code needs to be updated. Bummer.
This limitation makes robots especially impractical in dynamic environments and around humans. But what if the robot did know where it was in a room and could keep an eye on everything and everyone moving by? Motion tracking, a technology typically used in entertainment to animate digital characters from the motion of a human actor, makes it possible to track robots, equipment, and humans and to visualize them all in a single shared digital environment. With all of this information in one place, a robot could correctly locate the items it needed to use and adjust its path to maneuver around obstacles.
The Autodesk Applied Research Lab has a Vicon Vantage 5 system, the kind of motion capture setup typically used in Hollywood, in which multiple cameras track the locations of reflective markers to drive real-time, responsive 3D models in software such as Autodesk MotionBuilder or Dynamo. A Kinect handles basic skeletal tracking without markers, so a person can wander into the space and still be tracked without wearing a special suit.
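Putting the marker-based and markerless data in one digital environment means expressing both sources in a common coordinate frame. The sketch below is an illustration rather than the lab's actual code: it shows how Kinect joint positions might be re-expressed in the Vicon/world frame once a calibration transform is known. The transform, marker dictionary, and joint dictionary are all placeholder assumptions.

```python
import numpy as np

# Hypothetical calibration: a rigid transform that maps points from the Kinect's
# camera frame into the Vicon/world coordinate frame. In practice this would be
# measured once, e.g. by capturing a shared calibration object with both systems.
T_KINECT_TO_WORLD = np.eye(4)  # placeholder identity transform

def kinect_joint_to_world(joint_xyz):
    """Re-express a Kinect joint position (meters, camera frame) in the shared world frame."""
    p = np.append(np.asarray(joint_xyz, dtype=float), 1.0)  # homogeneous coordinates
    return (T_KINECT_TO_WORLD @ p)[:3]

def build_shared_scene(vicon_markers, kinect_joints):
    """Merge both tracking sources into one dictionary of world-frame positions.

    vicon_markers: {name: (x, y, z)} already in the world frame.
    kinect_joints: {joint_name: (x, y, z)} in the Kinect camera frame.
    """
    scene = {name: np.asarray(p, dtype=float) for name, p in vicon_markers.items()}
    scene.update({f"human/{j}": kinect_joint_to_world(p) for j, p in kinect_joints.items()})
    return scene
```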
This system allows us to visualize and monitor the real-time positions of a Universal Robots UR10, humans (in this case, me), and other tracked objects in a single software environment.
This is an ongoing project built on a combination of Autodesk Dynamo, Autodesk Maya, and Autodesk MotionBuilder. The distance between the robot and a nearby human is monitored in real time, information that could feed future obstacle-avoidance code.
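To give a concrete sense of what that monitoring involves, here is a minimal per-frame separation check. The point names and coordinates are invented for illustration; in the lab, these positions would come from the live Vicon and Kinect streams each frame.

```python
import numpy as np

# Hypothetical tracked positions in the shared world frame (meters).
robot_points = {          # points along the UR10 arm (e.g. tool flange, elbow)
    "tool":  np.array([0.60, 0.20, 0.90]),
    "elbow": np.array([0.35, 0.10, 1.10]),
}
human_joints = {          # tracked human joints
    "head":       np.array([1.40, 0.00, 1.70]),
    "right_hand": np.array([0.95, 0.15, 1.00]),
}

def minimum_separation(robot_pts, human_pts):
    """Smallest distance between any tracked robot point and any tracked human joint."""
    return min(
        float(np.linalg.norm(r - h))
        for r in robot_pts.values()
        for h in human_pts.values()
    )

print(f"robot-human separation: {minimum_separation(robot_points, human_joints):.2f} m")
```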
The motion capture data was also successfully streamed into Dynamo, and with the development of additional Dynamo packages, it will be possible to directly control the robot from the data.
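For context on what directly controlling the robot can look like at the lowest level, the sketch below sends a raw URScript command to the UR10 controller's script interface over a plain TCP socket. This bypasses Dynamo entirely and is only one possible path, not the lab's workflow; the robot's IP address and the example joint angles are placeholders.

```python
import socket

ROBOT_IP = "192.168.1.10"   # assumption: the UR10 controller's address on the lab network
URSCRIPT_PORT = 30002       # UR secondary client interface accepts URScript commands

def send_urscript(command):
    """Send a single URScript line to the UR controller."""
    with socket.create_connection((ROBOT_IP, URSCRIPT_PORT), timeout=2.0) as s:
        s.sendall((command + "\n").encode("ascii"))

# Example: move to a joint configuration (radians) derived from the tracked target.
joints = [0.0, -1.57, 1.57, -1.57, -1.57, 0.0]
send_urscript(f"movej({joints}, a=1.0, v=0.5)")
```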
The next step of the team's research will involve increasing the accuracy of the digital model by incorporating the construction model of the lab and improving the human tracking; both efforts will improve the quality of subsequent robot control code. The team is developing additional Dynamo packages to control the robot directly from the captured data and is exploring whether Python scripting in Maya and the MotionBuilder SDK, both of which were initially designed for media and entertainment uses, can be applied to obstacle avoidance, or whether a new workflow will need to be developed on top of the existing software tools.
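Whichever tool ends up hosting that obstacle-avoidance logic, one simple starting point is to scale the robot's commanded speed with the measured robot-human separation. The sketch below is purely illustrative; the thresholds are arbitrary example values, not safety-rated limits.

```python
# Illustrative safety logic only: scale the robot's speed based on the measured
# robot-human separation, stopping entirely inside a protective distance.
STOP_DISTANCE = 0.5   # meters: halt motion inside this radius
SLOW_DISTANCE = 1.5   # meters: begin slowing down inside this radius

def speed_scale(separation_m):
    """Return a 0..1 speed multiplier given the current robot-human separation."""
    if separation_m <= STOP_DISTANCE:
        return 0.0
    if separation_m >= SLOW_DISTANCE:
        return 1.0
    # Linear ramp between the stop and slow thresholds.
    return (separation_m - STOP_DISTANCE) / (SLOW_DISTANCE - STOP_DISTANCE)
```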
Thanks, Heather.
Autodesk makes software that allows places, things, and media to be designed, made, and used. Robots are currently integral to the way many things are made. Right now, robots act in isolation. Perhaps with research like this, robots will work right alongside humans. It won't be a case of "robots took our jobs," but "robots helped us with our jobs" instead.
Motion capture is alive in the lab.