"Lie to me, you know that I know you too well
So don't you lie to me, lie to me
I watch you from afar, crying up your sleeve
When they turn away, so they'll still believe
You don't need no one, but I'm the only one that sees how you're torn apart
Stop you're breaking my heart
Who's the crybaby now?"
— Utopia, "Crybaby," Oblivion, 1984
Here's the setup. Author Mickey McManus is a visiting research fellow with Autodesk. He wrote an article for Autodesk's internal newsletter called the POV Dispatch. For more background, see POV Dispatch: Mickey McManus: There Be Dragons (Let's Go Over and Pet Them). Mickey's article is being shared in 3 parts. Here is part 2.
Introduction
I've just gotten my feet on the ground on the shores of Autodesk Island. Every day seems to be a chance for something to pop out of a bush or fly overhead to surprise me as I explore this new place. But soon I'd like to set sail towards the undiscovered territory that is forming at the intersection of advanced research inside Autodesk and the future of design, technology, and business emerging in the outside world. So I thought I'd use my second POV Dispatch article to play out a mini essay around a topic that seems to be resolving itself into a "wicked problem" to be addressed in the near future.
For this essay, I'll set up the parameters, talk about my current thinking, and end with the big questions I've got at the moment. Just like the explorers of old, I'm grappling with how to draw a map for this new frontier and puzzling out where dragons might be lurking over the horizon. I'm just starting out, so if this topic strikes your interest or you have a different perspective or insight you'd like to share, I'd love some help with navigation.
Wicked Topic #2: Simulating the Unexpectable (or Lie to Me)
Nate Silver — in his book on building forecasts and finding the signal amidst the noise — points out that some things can be modeled, and some just can't. He uses the weather as an example of an area where we've gotten a little better over time at prediction by throwing more computation and ever more subtle models at the problem. Science and real data from ever more powerful sensors and algorithms gave a better prediction and more time to respond for Hurricane Sandy than for Hurricane Katrina. For the other kind of problem, Nate uses earthquakes as an example — a challenge that hits particularly close to home for those Autodeskers who recently experienced the South Napa Quake. In this field, we just don't have enough samples (because humanity's historical record of earthquakes is far shorter than geological time) or a good enough grasp of how the planet works to predict much of anything. The devastation in Japan — with nuclear plants that were supposedly built for the worst case imaginable — proved that Mother Nature (or Saint Murphy) had a bigger imagination and a few more tricks up her sleeve than we could account for. Will "always on" simulations that can give non-experts a false sense of expertise and security actually (and ironically) blind us to the unexpected and the unexpectable?
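To make the sparse-samples point concrete, here is a toy Monte Carlo sketch (mine, not Silver's, with made-up numbers): if extreme events follow a heavy-tailed distribution, a few decades of records will almost never contain anything close to the long-run worst case, so a model fit to that record can look confident while missing the tail entirely.

```python
# Illustrative only: invented distribution and numbers, not real seismic data.
import random

random.seed(42)

def yearly_extreme():
    # Pareto-like heavy tail: most years are mild, a few are catastrophic.
    return random.paretovariate(1.5)

LONG_RUN_YEARS = 10_000   # stand-in for geological time
RECORD_YEARS = 30         # roughly the span of a modern instrumented record
TRIALS = 1_000

# The "true" worst case over a very long run.
long_run_worst = max(yearly_extreme() for _ in range(LONG_RUN_YEARS))

# How often does a short record even hint at an event of that size?
misses = 0
for _ in range(TRIALS):
    record_worst = max(yearly_extreme() for _ in range(RECORD_YEARS))
    if record_worst < 0.5 * long_run_worst:
        misses += 1

print(f"Worst event over {LONG_RUN_YEARS} simulated years: {long_run_worst:.1f}")
print(f"{RECORD_YEARS}-year records that never see even half of it: {misses / TRIALS:.0%}")
```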
When we talk about simulations we often talk about how they are valuable at helping us model possible future outcomes. They are powerful "what if?" tools. In one of my past lives I worked on a major logistics research effort. One of the most powerful elements of the system was something called "Mechanical Murph." Its job was to be a random expression of Murphy's Law during war-games. It worked by throwing wrenches and shaking out spark plug wires, sending mice and toddlers and never-before-seen weather patterns into the system. Murph was part machine and part human; its collaborators included ex-generals, science fiction futurists, and, yes, quite a few algorithmic robots. It couldn't flag entirely unexpectable futures beyond blind chance, but it could poke at things in ways that stimulated new pathways.
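Purely as a thought experiment (the actual Mechanical Murph is long gone, and every name, failure mode, and number below is invented), a minimal modern version of that kind of disruptor might look something like this:

```python
# A hypothetical "Mechanical Murph"-style disruptor. Everything here is
# invented for illustration; this is not the original system.
import random

random.seed(7)

FAILURE_MODES = [
    "supply truck breaks down",
    "spark plug wire shaken loose",
    "mouse chews through a cable",
    "freak weather grounds all flights",
]

def mechanical_murph(state, chance=0.15):
    """With some probability, inject a disruption the planners never asked for."""
    if random.random() < chance:
        event = random.choice(FAILURE_MODES)
        state["disruptions"].append(event)
        state["capacity"] *= 0.8   # each mishap degrades throughput a bit
    return state

# Run a 30-day war-game and let Murph do his worst.
state = {"capacity": 1.0, "disruptions": []}
for day in range(30):
    state = mechanical_murph(state)

print(f"Remaining capacity after the war-game: {state['capacity']:.2f}")
print("Murph's contributions:", state["disruptions"] or "a suspiciously quiet month")
```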
The rise of products like self-driving Internet of Things cars, which use machine intelligence and network effects to optimize flow, is providing us with hints that some "what if?" scenarios may challenge us in new and surprising ways. Google's car has spurred discussions of ethics and morality. For example, is it better to save that busload of high school kids or that one toddler on the side of the road? For all of human history, we have safely been able to leave those questions to philosophy courses, Star Trek movies, and courts of law. But now we may suddenly have some actual control over these sorts of "what if?" scenarios by embedding answers (or at least responses to them) right into the final products themselves. This is a new kind of dragon that has never darkened our skies before. Do we really want to know the "right" answer to questions like the one about the school bus, or would we rather lie to ourselves a little and let chance take its course? Who will take responsibility for the choices made when we can now look so much more clearly into possible futures?
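To see why "embedding answers into the product" is more than a figure of speech, consider this deliberately oversimplified, hypothetical sketch (the structure and weights are invented; no real vehicle planner works this way): the moment a planner ranks outcomes with a single number, somebody has answered the school-bus question, whether or not they meant to.

```python
# Hypothetical and oversimplified: the point is only that a ranking function
# quietly encodes an ethical position.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    occupants_at_risk: int
    bystanders_at_risk: int

def expected_harm(o: Outcome, bystander_weight: float = 1.0) -> float:
    # This one line is where the philosophy-course question gets "answered."
    return o.occupants_at_risk + bystander_weight * o.bystanders_at_risk

options = [
    Outcome("swerve toward the shoulder", occupants_at_risk=0, bystanders_at_risk=1),
    Outcome("brake hard and stay in lane", occupants_at_risk=2, bystanders_at_risk=0),
]

choice = min(options, key=expected_harm)
print("Planner picks:", choice.description)
```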
Which brings me to another angle on this particular topic. Should simulations lie to us? Will they stop us from bringing epic things into the world if we believe our (often daunting) simulations too much? Consider a classic example of planning gone wrong: the Sydney Opera House, often cited as a case study for exactly this problem. It was a grand vision that was finished ten years late and more than 14 times over budget. Would it ever have been built if the real cost and time had been known at the early stages of the project? And if not, wouldn't that have been a bad thing?
Most people who are entrepreneurs or who work with entrepreneurial souls know that they lie to themselves — if they didn't, they often wouldn't be able to get up in the morning. They don't always want to break the placebo effect of their beliefs by admitting that this time they have no possible way out of a given mess (or at least, no way that they can see at the time). I wonder if there are dragons waiting over the horizon if we succeed in making simulation an always-on "ambient utility" without taking the value of this placebo effect into account. If a designer or team can always see every "what if?" that a computer can imagine, or even every "what if?" that a combination of people and machines can deduce in some sort of mixed dialogue, will they just decide that an extremely ambitious and complex project isn't worth the cost, time, money, effort, emotion, etc.? That could be a tragedy, because even the best simulation in the world can't predict what might happen when other breakthroughs interact, collapsing a wicked problem or changing its nature. Sometimes entrepreneurs survive and thrive just by moving forward, and being open to improvisation and pivoting as new pathways open up. Will our customers be able to get out of their strange attractor/island of stability and go sailing off into the unknown in the hopes of finding another island beyond the horizon? Or will the fire-breathing dragon staring them in the face, in the form of a massively believable simulation, stop them in their tracks? If we have more interconnected feedback loops and cybernetic systems that are not only non-linear but also non-deterministic, will we have the right ways of simulating emergence beyond actually building the damn thing and running it through reality?
Finally, will we want to build willful blindness, or at least adjustable blindness, into our future Autodesk offerings? Simulations that let people lie to themselves just a little bit, so they can still build epic things that shouldn't, or couldn't, have been possible at the inception of the project? Even robots designed to collaborate end up lying to each other as they play out successive cycles and learn how to win resources. It seems to be a trait that has evolved as a side effect of collaboration. There are dragons here that we may have to get past, if only because the currents of infinite computing are so strong that there may be no way to turn back against the tide.
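What might "adjustable blindness" even look like? Here is a hypothetical sketch (the dial, the names, and the numbers are all mine, not any real or planned offering): the simulation still computes the full spread of outcomes, but the presentation layer lets a team choose how much of the grim tail to stare at.

```python
# Hypothetical "adjustable blindness" dial over simulated project-cost outcomes.
import statistics

def summarize(outcomes, optimism=0.0):
    """optimism=0.0 shows everything; optimism=0.3 quietly drops the worst 30%."""
    shown = sorted(outcomes)[: int(len(outcomes) * (1.0 - optimism)) or 1]
    return {
        "median_cost": statistics.median(shown),
        "worst_shown": max(shown),
        "scenarios_hidden": len(outcomes) - len(shown),
    }

# Simulated cost multiples (1.0 = on budget); imagine thousands of runs.
runs = [0.9, 1.1, 1.3, 1.6, 2.2, 3.0, 4.5, 7.0, 9.5, 14.0]

print("Clear-eyed:", summarize(runs, optimism=0.0))
print("Lying a little:", summarize(runs, optimism=0.3))
```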
Conclusion
I'm not sure I have any conclusions just yet. These are the dragons I see on the horizon as we start to draw the map for the trip to the next frontier. I don't think they are a definitive list of challenges ahead, but I hope, at the very least, they've sparked a response that you'll want to discuss so we can get the hell off this island (I was promised a three-hour tour).
Thanks Mickey. You can reach Mickey at [email protected].
Truth seeking is alive in the lab.