Do We Need An AI Break?

Governance and regulation are falling behind current development. Is a pause in AI development the answer?

An open letter penned by a number of AI researchers employed by large universities was published on March 22nd, calling for a pause in AI development to allow for the development of safety protocols. The letter calls for a pause of “at least six months” and says it should include all AI labs worldwide. It has garnered the signatures of people like Steve Wozniak, Elon Musk, and Stability AI’s CEO Emad Mostaque. And me, even though I don’t entirely agree with the letter. But more on that in a bit.

The letter specifically calls for a halt to the training of any and all AI models more powerful than GPT-4. The authors are not trying to roll back current development, but see the recent development of LLMs from OpenAI, Google, and Meta as a sign of things to come, and cite the Asilomar AI Principles (which I am a whole-hearted signatory of) as the basis for why this is necessary. The letter does not call directly for public or governmental intervention, but rather asks the AI companies themselves to halt work and to collaborate on a set of industry standards for AI ethics and safety. Only if this fails do they urge governments to step in.

This idea is not unheard of. When restriction enzymes were first discovered, there was considerable concern that this could lead to “designer babies” and other unethical uses. But the industry agreed on a series of protocols that curbed the worst-case scenarios, while allowing researchers to continue working on useful applications of the technology.

But this isn’t quite the same situation. First of all, when restriction enzymes became a thing, they were still a long way away from being viable commercial products. ChatGPT went from public beta to business model in less than two months. Second, the medical industry is highly regulated and used to working with standards and protocols. The tech industry less so. Third, the medical industry’s credo is “first, do no harm”. The tech industry’s is “move fast and break things”.

Part of the reason for the letter, I suspect, is also what Stanford University concluded in its recent “State of AI 2023” report (formally “The AI Index Report”): while private investment in AI has long been greater than academic investment, the key breakthroughs in AI and machine learning are no longer being released by academia, but rather by private companies, not least in 2022 and the first months of 2023. Some of the concern may stem from simply being outmaneuvered, combined with a legitimate concern that private companies may value profit over protection and protocols.

Honestly, I don’t believe in the notion of an AI break. There’s too much going on, too much momentum, and an industry that is too unregulated for it to work. Also, the concerns about AI are largely a western thing. As Stanford reported recently, both the Chinese government and the Chinese public are much more comfortable with the prospect of more AI than we in the west are. So it is not guaranteed that all AI labs, or all governments, feel the need for this pause. While, in an ideal world, it would be great to take a collective step back, sit down, and agree on the playbook before we continue with the game, I just don’t think the situation allows for that.

We’ll need to write the rules as we build the tools, but we do need to write the rules. Urgently so. In spite of the impressive capabilities of GPT-3 and 4, they are still only representatives of AI in its relative infancy, and yet we’re already seeing examples of uses that are not conducive to the sort of world most of us would want to live in (also documented by Stanford in The AI Index Report).

And it’s not even guaranteed that a full stop for six months would be the fix some imagine it to be. While writing the rules as we play may seem scary, it does carry with it the benefit of allowing us to be more nimble. We can’t possibly imagine all the pitfalls that advanced AI could pose; some of them we’ll only discover as we build the tools that utilize it. A modern-day Council of Nicaea trying to pre-empt any and all problems would most likely fail. And it definitely won’t get there in six months.

As an example of why that approach doesn’t work, look to the EU’s much-touted legislative framework for artificial intelligence. It has been under way for two years, and just as they had finally delivered their considerations in late 2022, ChatGPT came along and upended much of what was in the document. Any working group, whether private or governmental, that is large enough to have a global impact will work too slowly to keep up with development. The old adage of building the road while driving on it is very apt here. How can we expect to write the rules of the road once and for all?

So why have I signed the letter? Because, as I said earlier in this post, we do need the playbook, urgently. Governments, organizations, companies, and researchers all need to be part of this. And if this letter can be one step in creating that awareness, it is definitely worth signing. I agree with Bill Gates when he says that AI has the potential to address some of the world’s most pressing problems, so the fact is, we need AI. But we also need to constantly discuss how we use it, what we should and should not build, and how we handle the challenges that AI poses. This letter is a call for action, a first step in a series of steps.
