We will not wake up dead tomorrow
Reports of our death from Artificial Intelligence (AI) are greatly exaggerated
We are not going to wake up dead tomorrow. We all will someday. Just not tomorrow. So, while it may sell advertising for news outlets and drive likes or upvotes on social media, the hair-on-fire concern over the supposedly foregone conclusion of AI's domination of humanity is premature.
I write this not as a data engineer, data scientist, or full-stack developer, but as an innovation researcher and strategist who has guided technology and R&D initiatives for companies for over two decades and participated in the tech industry’s evolution. I have had the rare privilege of helping release new-to-the-world Internet mapping and e-commerce platforms, and B2B and B2C online businesses, into the wild. That moment in time -- when the internet was new to much of the world -- was supposed to spell the looming end of the old tech world. Today, we have a new technology cluster 80 years in the making called AI (at least for marketing purposes, for the moment). It is heralded as the end of the economy as we know it, and we are encouraged to fear this change.
Horse&%^$.
Yes, 100%, the world will change in good and bad ways because of AI in the next century. Yes, with absolute certainty, there are jobs today that will disappear in a decade or two because of AI. New ones will also emerge, because companies are not prepared to adopt AI today and because change creates opportunity. We have some work to do to get the world’s population ready for this, but we have time.
Today, we cannot even agree on which flavor(s) of AI will survive the competition for ‘best’ technology innovation. Fortunately, technology adoption follows a pattern. From the discovery of fire to the curvature of the earth, to nuclear energy, to AI, the pattern holds. That pattern creates space for us to adapt to the coming change. Should game-changing, world-shaking technology be regulated? Of course, but not with the intent of locking it away. It’s out of the box and cannot be returned there. Innovators will innovate and bad actors will, well, innovate. As they always have. We need to monitor and nurture the evolution of this emerging set of sophisticated capabilities so that industry and society evolve with it.
The regulatory reaction to novel ideas is nothing new. My research on online gaming and other disruptions to the economic status quo has shown the same pattern of social reaction to innovation -- a new idea reaches a critical point of public visibility, having been around for a WHILE before that (AI, since roughly 1943). In response, we (firms lobbying governments to gain short-term competitive advantage 'in the public interest') seek to regulate what we can't beat; when that approach doesn't work, we mimic the very ideas we sought to regulate in order to recapture lost value. Then an ideological struggle ensues, all the while precipitating the release of more innovations that progressively improve the underlying foundation of our economic conditions and speed up the revolution (how many of us actually care about VHS or Betamax anymore?).
The news media and others on the periphery of the revolution would like us to believe this time is different, but it's not. Even scientists deep in the thick of advancing AI capabilities cannot agree on its future (don't take my word for it; listen to the wide range of opinions curated on the Machine Learning Street Talk podcast or read the current research papers in the field).
Technology evolution is, well, evolutionary. For years, our lives have been shaped by AI and no one really complained -- frankly, we were confronted by AI that was in many ways more threatening than the LLMs that can write essays like this for us (ChatGPT did not write this one - I am to blame). Lane-changing assistance, fraud detection, monitoring systems in hazardous environments -- the list goes on. Why? Because AI is a convenient label for dozens of technologies and capabilities, some of which compete with each other and others that have little to do with the rest. We have been slowly terraforming our economies with these capabilities for decades. Now, it has become a bit more real -- hit closer to home -- and therefore a bigger target for concern.
I have been a part of building foundational companies in every generation of Internet technology since that mid-1990s period. The fear factor reared its ugly head in society every time (or had its head reared by certain factions?), only to be tamped down by the reality of how long a revolution takes to emerge, evolve, and embed itself in society. We have been hacking away at the AI challenge for 80 years now, and we are still early in its evolutionary trajectory. Will that progress speed up? Damn, I hope so. Will we figure out how to navigate the evolution of AI as a society? Damn, I hope so.
I believe that fear of this change will do nothing to improve our place in the evolution of AI. The best way we can ensure the survival of the human race (okay, too extreme) and the economic value creation that produces higher-quality, higher-income jobs for the population is to spread the development of AI out and disperse an evidence-based understanding of it widely in our communities, so that its future state emerges from the collective action of many players within society, not from a heavily resourced few. Social regulation of adoption has always been more powerful than government in ultimately setting a course for innovation. But that takes patience, awareness, thoughtfulness, and courage.
The only true determinants of the future of AI are decisions made in the current moment. Even then, the future is a complex system, which none of us can control. We must not fear it, however. Borrowing from whomever this idea has been attributed to (Bill Gates or otherwise), we overestimate how much impact AI will have in a year (or five) and underestimate its massive impact on society in ten years (or, more likely, twenty or twenty-five). As leaders, we need to take a collective breath and become innovators within our organizations, so that they can benefit from AI and help shape its future direction.
We have time.