Enterprise AI

In the months following the failed Apollo 13 mission, investigators discovered that a seemingly benign event two years earlier was the root cause of this near national disaster. Engineers handling one of two oxygen tanks built for the service module accidentally let one slip and fall. The total distance of this fall: two inches. I once dropped my iPhone from my seat at a hockey game and watched helplessly as it fell 15 feet toward the cement floor. Miraculously, it landed at just the right angle and survived. Apollo 13 wasn’t so lucky.
In a fateful moment years before launch, at the North American Aviation plant in Downey, California, a simple slip of just two inches created enough structural damage to set in motion a series of failures that nearly killed three astronauts. An article by Jim Banke on Space.com summarized the situation pointedly: “No one knew it, but when Apollo 13 lifted off, it carried the makings of a small bomb inside its service module.”
As enterprises race to unlock the potential benefits of artificial intelligence (AI), they are focused on vetting use cases and quickly moving to scale, but often with limited attention to the unique risks such systems introduce. In effect, enterprises may be blasting off with the makings of a small bomb inside their AI programs.
The performance of traditional enterprise software, once implemented, is typically measured by a simple question: “Is the system up?” Having stable and secure access is the primary metric, and governance processes and controls have evolved over the decades to manage the known risks to this state of performance. The very nature of AI, however, renders traditional IT operating and risk models less relevant, if not dangerously out of touch. Unlike traditional software, AI is not implemented but applied, and at every stage of the data science process, novel risks emerge about which we, as an industry, have limited knowledge.
Championing an AI initiative without fluency in the risks of AI is, arguably, akin to handing car keys to an 11-year-old and suggesting a joyride. The latter sounds patently absurd (who would be that irresponsible?), and yet this is the current state of enterprise AI. Two issues drive this heightened risk:
First, the “operate” model for traditional IT systems does not fit here, yet it is still commonly misapplied. As noted above, AI must be dynamically monitored, trained and retrained as part of an iterative journey, whereas software is procured, installed and maintained as part of a predictable program with clear boundaries. This difference has implications for governance, controls, resources, key performance indicators (KPIs) and the continuous operations required.