As I mentioned in my Project Photofly at TED post, we demonstrated the ability to create a 3D model of a person's head by having 14 cameras take a picture all at once. Project Photofly does not require such sophistication, but this approach makes the capture easier because the person does not have to sit still for as long.
There's quite a story behind this.
To get a computer to take 14 pictures at once, you need software to control the cameras. We started with a package from Breeze software, which allowed us to auto-focus each camera and snap the picture. Eventually we abandoned it because the program kept crashing and our support requests went unanswered.
Instead, our software developer, Eddy Kuo, used the Canon Camera Software Development Kit (SDK). The SDK exposes Application Programming Interfaces (APIs) covering the same functionality as the Breeze software, but without the crashes. Eddy wrote a program to auto-focus each camera and take the shot.
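For the curious, here is a minimal sketch of what a control program of that kind can look like against the Canon EDSDK's C API. The camera enumeration and the kEdsCameraCommand_TakePicture call are documented SDK entry points; the loop structure, the bare-bones error handling, and the omission of event handlers and image downloads are my own simplification for illustration, not Eddy's actual code.

```cpp
// Sketch: enumerate the connected Canon cameras through the EDSDK
// and ask each one to focus and shoot. Event handling and image
// download are omitted; errors simply bail out.
#include "EDSDK.h"

int main() {
    if (EdsInitializeSDK() != EDS_ERR_OK) return 1;

    EdsCameraListRef cameraList = NULL;
    EdsUInt32 count = 0;
    EdsGetCameraList(&cameraList);
    EdsGetChildCount(cameraList, &count);   // e.g. the 14 rig cameras

    for (EdsUInt32 i = 0; i < count; ++i) {
        EdsCameraRef camera = NULL;
        EdsGetChildAtIndex(cameraList, i, &camera);
        EdsOpenSession(camera);

        // With the lens switched to AF, TakePicture focuses and fires --
        // the "auto-focus each camera and take the shot" step in the post.
        EdsSendCommand(camera, kEdsCameraCommand_TakePicture, 0);

        EdsCloseSession(camera);
        EdsRelease(camera);
    }

    EdsRelease(cameraList);
    EdsTerminateSDK();
    return 0;
}
```

Note that a loop like this triggers the cameras one after another over USB, which is exactly the sequential behavior described next and the reason a more direct trigger was wired up.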
The wiring of these cameras could be its own story. If each camera is connected over its own USB connection, it takes one sixth of a second to fire, and the shots happen sequentially, so with 14 cameras the person would have to sit still for about 2.33 seconds. With a more direct, hard-wired trigger, the cameras fire within milliseconds of one another, so the person only has to sit still for about 1 second. The wiring involved connecting the computer, via USB, to a circuit board and then wiring the 14 cameras to the board. Instead of running on batteries, the cameras draw their power from a second cable that is also wired to the circuit board.
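The arithmetic behind those two numbers is simple. In this small sketch, the one-sixth-of-a-second-per-shot and roughly one-second figures come from the setup above; everything else is just the multiplication:

```cpp
#include <cstdio>

int main() {
    const int cameras = 14;
    const double usbSecondsPerShot = 1.0 / 6.0;  // each USB-triggered camera fires in ~1/6 s

    // Sequential USB triggering: the shots queue up one after another.
    const double sequentialHold = cameras * usbSecondsPerShot;  // ~2.33 s of sitting still

    // Hard-wired triggering: all cameras fire within milliseconds of each other,
    // so the subject only has to hold still for the ~1 s capture window.
    const double wiredHold = 1.0;

    printf("USB, one camera at a time: %.2f s\n", sequentialHold);
    printf("Hard-wired, all at once:   %.2f s\n", wiredHold);
    return 0;
}
```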
Although we started with the circuit board in a cardboard box, we realized we could do better than that. Technical Evangelist Gonzalo Martinez used Inventor Fusion to design a housing for the circuit board, complete with proper screw holes and all, and printed it on one of our 3D printers in the Autodesk Gallery. The printer happened to be loaded with red plastic, so the finished housing came to be known as "the red box."
We are using 14 of the 16 available connections, though only 12 were hooked up when I took this picture.
To hold the cameras in place, Autodesk Gallery Technical Specialist Jeff Clayton welded together some special rigging, and we attached the cameras to it with clamps. The lights remove any shadows from the person's face.
In our initial testing we realized that when a person takes the pictures individually, he or she looks through the viewfinder each time, makes sure the subject is in view, and then snaps the picture. With an automated process, the computer takes all of the pictures at once, even if the subject is not centered in every shot. Setting the chair at the right height for each person therefore became an issue, so that the subject's head does not get cropped out of any of the shots.
All of this came together just hours before we had to take it all apart and ship it to TED. The TED conference runs through March 2. We hope to take what we learn from trying this at TED and stage it as an Autodesk Gallery exhibit soon thereafter. The gallery is open to the public on Wednesdays from 12 pm to 5 pm, and admission is free. Visit us.
Saying cheese is alive in the lab.