Pausing artificial intelligence development is a big mistake.
An “open letter” from the Future of Life Institute is calling for a six-month pause on developing anything more powerful than GPT-4.
The letter has been signed by tech luminaries such as Elon Musk, Steve Wozniak, Yoshua Bengio, and Yuval Noah Harari, along with over 1,200 others.
Their macro perspective is that AI systems with human-competitive intelligence pose profound risks to society and humanity.
Their concerns, which should not be minimized and are not unfounded, range from the need for better planning and worries about job automation to non-human minds outsmarting humans, loss of control over the technology, and the need for oversight and regulation.
What do they, ultimately, want?
Their goal is to enjoy a flourishing future with AI by responsibly reaping its benefits and allowing society to adapt.
But… and there’s always a “but”…
- Global competition. A six-month pause on AI development might not be followed uniformly across the world. Some countries or organizations might continue their research, potentially gaining a competitive advantage, which could lead to an uneven distribution of AI technology and knowledge. For reference, read the work of Kai-Fu Lee. Do not assume that a country like China won’t become the world’s AI superpower (and use this moment to race ahead).
- It’s a silly timeframe. Asking for six months reminds me of “two weeks to flatten the curve.” What could possibly be accomplished in six months? Just look at the current debates over regulation and legislation for social media. This will take years (decades), not months… and we don’t even know what we’re solving for yet.
- Let’s not underestimate humans. My biggest issue with this moratorium is that we always underestimate the ability of humans to adapt to the opportunities (and challenges) brought on by any technology, including AI. Throughout history, humans have adapted to technological advancements (the wheel, fire, the printing press, computers, etc.), and society will find ways to navigate the challenges posed by AI without halting the development of the technology.
A better way forward.
Instead, let’s put the media hype aside. Let’s also acknowledge that many of the people who signed this letter are in direct competition with OpenAI (and may not be enjoying the same level of success or commercial acceptance), have divested from AI (and may have some regrets), or have invested in AI (but in companies that compete directly with the developers of GPT-4). Rather than pausing AI development (for some of the reasons above, among many others), it would be more effective to promote (and push for) collaboration among researchers, policymakers, and other stakeholders to develop robust safety measures, ethical guidelines, and responsible AI practices. This approach allows for the continued advancement of AI technology while simultaneously addressing concerns and risks that are very real.
I’m not sure what our future with AI holds, but it is our future.
So, we can stick our collective heads in the sand, or face the inevitable.
Do you think that we should pause all AI development?
This is what Elias Makos and I discussed on CJAD 800 AM this week. Listen in right here.
Before you go… ThinkersOne is a new way for organizations to buy bite-sized and personalized thought leadership video content (live and recorded) from the best Thinkers in the world. If you’re looking to add excitement and big smarts to your meetings, corporate events, company off-sites, “lunch & learns” and beyond, check it out.