Software Architect Ben Cochran is our guest author today. He filed this report.
Technology always moves forward, but we (users of technology) tend to repeat ourselves. In the early days of photography, before digital CCDs, cameras used film coated with a light-sensitive chemical. When the chemical on the film was exposed to light, it changed color. At first, the chemical process was very slow; the first permanent chemical photograph required an 8-hour exposure (source: wikipedia.org). At the time this must have been incredibly exciting, but it was not very useful for subjects that move, like people.
As the technology improved, the chemical film became faster, and it became possible to take photographs of people. We have all seen early photos of people posing very still with a relaxed frown. For some, the frown may have indicated a respectful, stoic demeanor, but the reality was that in the mid-19th century exposure times still required the subject to stand very still. A relaxed frown is easier to maintain than a smile. Subjects would frequently lean against a table or a posing stand to keep from moving (source: wikipedia.org).
Today technologies like Project Photofly and laser scans provide depth and model data. Once again the capture time is slower and more complex than a single image. Collecting the data needed to build a rich model requires several pictures of a subject or multiple passes from a laser. This is great for subjects that do not move, like buildings. Just as in the early days of photography, one of the first things we tried was creating a model of ourselves. When I looked at the model of Scott Sheppard in his blog article Prepare for upcoming Project Photofly 2.0 by watching some videos, I took note of his relaxed frown. It reminded me of all the old photographs I had seen from the 19th century. In fact, when we take pictures of people for Project Photofly, we tell them to sit down or stand against a wall in a relaxed position, look forward, and not move. We then take several photos. If the subject is still, Project Photofly can find the same point on the subject's face in multiple photos and recreate a 3D model of the subject.
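To give a feel for that "same point in multiple photos" step, here is a minimal sketch in Python using OpenCV feature matching. This is not Project Photofly's actual code or pipeline, just an illustration of the general photogrammetry idea; the image file names are placeholders.

```python
# A minimal sketch (not Photofly's actual pipeline): finding candidate
# "same points" in two photos of a still subject using OpenCV ORB features.
# "photo_01.jpg" and "photo_02.jpg" are placeholder file names.
import cv2

img1 = cv2.imread("photo_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_02.jpg", cv2.IMREAD_GRAYSCALE)

# Detect distinctive keypoints and compute descriptors in each photo.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors between the two photos; each good match is the same
# physical point on the subject seen from two camera positions.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"Found {len(matches)} candidate point correspondences")
# With correspondences like these across many photos, a photogrammetry
# solver can estimate the camera positions and triangulate the matched
# points into a 3D model -- which is why the subject must hold still.
```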
While technology moves forward, we tend to repeat ourselves. One of the first things we want to capture with a new technology is ourselves, just as we did more than 150 years ago. It is not a leap to say that in the future, as reality capture and motion capture technologies develop, we will be capturing fast-moving 3D models. Someday we will capture 3D models of sporting events, our kids in the school play, or even a smile.
References:
- http://www.photoguru.tv/media/PhotoGuru.tv/GB_The_History_of_Photography.html
- http://en.wikipedia.org/wiki/Photography
- http://en.wikipedia.org/wiki/File:Daguerreotype_tintype_photographer_model_studio_table_brady_stand_cast_iron_portrait_photos.jpg
- http://www.loc.gov/pictures/item/2008680388/
Thanks, Ben. The more things change, the more things stay the same.
Observing the conventional unconventional is alive in the lab.
P.S. As I compared the photo of Abraham Lincoln and a photo of myself, I noted that our heads are not at the same angle. As it turns out, Software Director Keshav Sahoo, who had me sit still while he took 55 pictures, did not happen to take a picture of me at the exact same angle. No worries though, I can rotate my 3D model to the same angle, grab a screen shot, and Photoshop my head onto Lincoln's body. This adds a whole new dimension to Photoshopping.
OK, I spent about 2 minutes on this, but you get the idea. If I had put more Photoshop time into it, I could have looked really presidential. Actually, you can make a reasonable 3D model with 10 to 20 photos. Keshav took 55 photos of me as a test of accuracy versus photo count. That took longer, but the end result is great. Guess who's smiling now?