To catalyze long-term thinking about the industries we serve, Autodesk brings in external experts with different perspectives and points of view. With this in mind, the Visiting Fellows program was established. The Autodesk Visiting Fellows Program recruits senior-level, industry-shaping talent to help light Autodesk's future path. Although fellows have rich sectoral expertise, they tend to focus on defining and pursuing cross-industry issues emerging at the intersection of our traditional markets and technologies. The fellows program asks the question: How can Autodesk better serve its customers given a changing future?
Our newest fellow is Yaël Eisenstat. Yaël has spent 18 years working at the intersection of ethics, technology, security, and government policy. She has served as an intelligence officer, a national security advisor to Vice President Biden, a diplomat, Facebook's Global Head of Elections Integrity Operations, a corporate social responsibility strategist at ExxonMobil, and the head of a global risk firm. She is currently also a policy advisor to the Center for Humane Technology, headed by Tristan Harris, and sits on the Council on Foreign Relations. Her work has appeared in publications such as the New York Times, TIME, WIRED, Quartz, and The Huffington Post. Yaël has appeared on BBC World News, CNN, CBS News, PBS, and C-SPAN, in policy forums, and on a number of podcasts. In 2017, Forbes recognized Yaël by including her on its list of "40 Women to Watch Over 40." Her full bio and press pieces can be found on her website.
In response to my Welcome and What Would You Ask an Ethics/Technology/Security/Policy Expert? post, here's a question posed to Yaël by Autodesk employee Jay Dougherty:
Q: Does technology like Artificial Intelligence have a politics? Or is it Realpolitik?
For example, if our solutions are used to rapidly design a pre-fabricated, generatively-designed bridge that speeds up an infrastructure project, reducing congestion and global warming, but that design is a low-clearance bridge that prevents public buses from reaching a certain road (think Robert Moses), implicitly creating an exclusion policy while also squeezing out an on-site union job, are we still complicit in creating the physical conditions of wealth inequality and contributing to deadlocked identity politics?
Should a publicly-traded global company care? Can we afford not to at the dawn of AI? Responsibility or obligation?
I discussed the question with Yaël, and here is a brief summary of her response.
A: I was very excited to get this question. I have been exploring issues of responsibility in the tech industry and intend to delve deeper into this very question during my fellowship at Autodesk. Answering every element of the question thoroughly would require much more time, but on the first part, whether AI has a "politics," I shared a few thoughts in my WIRED piece about how human biases and politics get programmed into the "machines."
To get the discussion started on the broader question (and my answer will certainly evolve as my fellowship unfolds), I would note that none of the answers around responsibility vs. obligation are binary. Technologies are neither all good nor all bad for humanity. Even with the best of intentions, problems often arise from unintended consequences rooted in blind spots, unchecked biases and assumptions, groupthink, and a lack of diversity. The example in this question reminds me of a story I was told while working in Zimbabwe in 1999, about a village project that went awry. The women of the village had to spend six hours each day retrieving fresh water for their families because they had to walk around a large body of water. Westerners, with the best of intentions, came in and built a bridge across it. The six-hour walk was reduced to minutes. Happy with their accomplishment, the Westerners departed.
As soon as the Westerners left, the women of the village destroyed the bridge. It turned out that they cherished this time: not only did they get to socialize among themselves while walking to fetch the water, they also relished the time away from the men in the village. Reducing the time for this task not only eliminated their social time; it also freed them up to assume other duties that had formerly been done by the men.
Although this is a seemingly obvious and oft-repeated tale of foreign "do-gooders" making mistakes overseas, it holds valuable lessons, chief among them that it is important to bring all stakeholders to the table and have an inclusive dialog about possible outcomes and impacts. Decisions about what is "good" should be made by those affected by a technology, not solely by those who design and supply it. In honest attempts to satisfy customers, companies sometimes overlook the consequences for parts of society that are not their customers.
Publicly-traded companies, which traditionally balance the needs of customers, employees, and investors, would do well to also consider the second- and third-order impacts of their technology on society. In internal discussions about the design, production, and sale of their technology, the questions that are uncomfortable, or that seem unimportant at the time, may wind up being the most important. Companies should not be afraid to have their underlying biases and assumptions challenged, as these very challenges could lead to seemingly minor design or production changes that benefit a broader base in the end. Society is paying attention to how companies behave and to the real-world effects their technologies have.
In my first two weeks here, I have already seen one potential benefit of automation that I hope proves true: if technology and automation free humans from being bogged down in mundane tasks, more time could be spent thinking critically about the broader consequences of technology and how to promote the greater good.
Just as unfettered capitalism can lead to societal harm, so too can unfettered innovation. It shouldn't be up to the government alone to protect citizens. We all share in the success of our country and in the impacts we have on the world, and the private sector certainly has a role to play.
At this point, I'd like to add that with regard to the bridge example, Autodesk solutions keep humans in the loop. Generative design is a partner in the design process, not a replacement for the designer. A designer would specify the clearance necessary for a public bus to pass safely under the overpass, and other requirements for throughput, cost, and sustainability could be included as well. Using artificial intelligence, generative design would present the bridge designer with a set of candidate designs, each scored against the requirements, and the designer would then pick the best one. In that sense, Autodesk solutions have no politics, as decisions are still made by designers. In the case of Robert Moses, the problem was with the designer, not the technology.
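To make that constrain-and-score loop concrete, here is a minimal sketch in Python. It is purely illustrative and not an Autodesk API: the design names, clearance threshold, and scoring weights are all invented for the example. The point is structural: hard constraints like bus clearance filter candidates outright, soft requirements rank what remains, and a human designer makes the final call.

```python
# A minimal, hypothetical sketch of a generative design workflow that
# keeps the designer in the loop. Not an Autodesk API; all values are
# illustrative assumptions.

from dataclasses import dataclass

@dataclass
class BridgeDesign:
    name: str
    clearance_m: float      # vertical clearance under the bridge (meters)
    cost_musd: float        # estimated cost, millions USD
    throughput_vph: float   # vehicles per hour
    co2_tonnes: float       # embodied carbon

MIN_BUS_CLEARANCE_M = 4.4   # hard constraint: buses must pass safely (assumed value)

def score(d: BridgeDesign) -> float:
    """Weighted score of the soft requirements; higher is better.
    Weights and normalizers are illustrative, not real engineering values."""
    return (0.5 * d.throughput_vph / 2000
            - 0.3 * d.cost_musd / 50
            - 0.2 * d.co2_tonnes / 10000)

def rank_candidates(candidates):
    # Hard constraints are non-negotiable: a low-clearance design is
    # rejected outright rather than traded off against cost savings.
    feasible = [d for d in candidates if d.clearance_m >= MIN_BUS_CLEARANCE_M]
    return sorted(feasible, key=score, reverse=True)

candidates = [
    BridgeDesign("A", clearance_m=5.1, cost_musd=42, throughput_vph=1800, co2_tonnes=9000),
    BridgeDesign("B", clearance_m=3.2, cost_musd=30, throughput_vph=2100, co2_tonnes=7500),  # too low for buses
    BridgeDesign("C", clearance_m=4.6, cost_musd=48, throughput_vph=2000, co2_tonnes=8200),
]

# The designer reviews the ranked, feasible options and makes the final choice.
for d in rank_candidates(candidates):
    print(f"{d.name}: score={score(d):.3f}")
```

Notice that because the clearance requirement is a hard constraint rather than a weighted term, no amount of cost savings can "buy back" a bridge too low for a bus. Setting that constraint, and choosing among the surviving designs, remains the designer's responsibility, which is exactly where the politics lives.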
Autodesk has always been an automation company. Today, more than ever, that means helping our customers automate their design and make processes. We help them embrace the future of making, where they can do more (e.g., efficiency, performance, quality) with less (e.g., energy, raw materials, time, waste of human potential) and realize the opportunity for better (e.g., innovation, user experience, return on investment). There are many paths to better. Acting responsibly provides the opportunity for better for customers, employees, investors, and society alike.
Corporate responsibility is alive in the lab.