The US should use its leadership in semiconductors as a “chokepoint” to enforce minimum global standards for the use of artificial intelligence, according to the head of one of the country’s most ambitious AI start-ups.
Mustafa Suleyman, chief executive of Inflection and a co-founder of DeepMind, told the Financial Times in an interview that Washington should restrict sales of the Nvidia chips that play a dominant role in training advanced AI systems to buyers who agree to safe and ethical uses of the technology.
At a minimum, he added, that should mean agreeing to abide by the same undertakings that some of the leading US AI companies made to the White House in July, such as allowing external tests before releasing a new AI system.
“The US should mandate that any consumer of Nvidia chips signs up to at least the voluntary commitments — and more likely, more than that,” Suleyman said. “That would be an incredibly practical chokepoint that would allow the US to impose itself on all other actors [in AI].”
The call for greater US control of AI comes as the rapid development of the technology threatens to outrun regulators' efforts to contain it. The "exponential trajectory" of AI meant that two years from now, the large language models at the centre of current AI development would be 100 times more powerful than OpenAI's GPT-4, Suleyman said. "That justifies real action" such as the limit on chip sales, he added.
Most concerns about AI have been split between the immediate risks posed by today’s AI-powered chatbots and the long-term risk that AI systems will escape human control once they exceed the understanding of their makers, something known as superintelligence. Instead, tech executives such as Suleyman point to an intermediate period that is fast approaching, when the large language models that stand behind today’s chatbots are used in much more significant applications.
“Too much of the conversation is fixated on superintelligence, which is a huge distraction,” he said. “We should be focused on the practical near-term capabilities which are going to arise in the next 10 years [and] which I believe are reasonably predictable.”
Inflection, one of the best-funded AI start-ups, has raised $1.5bn since it was set up early last year. It has released a chatbot called Pi and is planning to build a personal assistant that can play an active role in managing its users’ lives, making it one of a number of companies racing to push AI beyond the present generation of chatbots such as ChatGPT and Google’s Bard.
In the next two or three years, adding more memory to today's AI systems would enable them to store "two or three ideas and then reason over them", Suleyman said. Along with greater planning abilities, this could make them much better at solving real-world problems.
The extra capabilities would bring “a huge step forward that will unlock a whole suite of new applications”, he said. This included “a very coherent expert role that can co-ordinate and decide and plan and reason and use its judgment”.
Inflection is not the only company racing to extend the technology behind today’s chatbots. Google chief executive Sundar Pichai said this year that his company’s next large language model, called Gemini, would also have expanded memory and planning capabilities, giving it the wherewithal to grapple with a greater range of real-world problems.
Google this week followed OpenAI in announcing that it would link its AI systems to other software applications, enabling them to initiate actions directly. The search company highlighted relatively simple actions for the AI, such as acting on behalf of a worker by automatically booking time off in a company’s HR system, but its move lays the foundation for more complex actions.
"In future, AI is going to be participating in the economy in a material way, unlike the way that Excel participates in the economy. It's going to be orchestrating actions using APIs," Suleyman said, referring to the interfaces that enable software programs to communicate with each other. "It's going to be booking and buying and planning and organising."
The prospect of powerful AI agents operating beyond direct human control is set to raise the stakes in the current debate over AI regulation. In a book to be published next week called The Coming Wave, Suleyman calls for a more concerted effort to anticipate and control the next generation of AI systems, starting with the tech industry taking a more active role.
“The burden of proof is increasingly going to rest on the developers of the technology,” he said. “We’ll want to evaluate what the potential consequences are rather than doing it after the fact” — unlike the “last wave of social media”, whose potential impact was not studied in advance.
He defended the commitments that seven US AI companies — including Inflection — made recently to the White House, despite criticism of their vagueness and the fact that they are only voluntary.
“Practically speaking, the odds of passing primary legislation through the US political process are very low,” Suleyman said. “It’s not very often you get the seven leading players in a new generation of tech to sign up voluntarily to a set of commitments.”
He also called for a new global institution, modelled on the Intergovernmental Panel on Climate Change, to bring more transparency to the AI systems being developed by private companies. Such an organisation could report on the state of progress in AI and act as an outside auditor of commercial systems, he said — though ultimately this would still need legislation, such as the EU’s proposed AI Act.
Many US tech executives have been wary of Brussels’ intervention in the development of AI, in particular because of a proposal that would make the developers of AI liable for how their technology is used. Suleyman, however, claimed the EU was “heading in the right direction”, adding: “We’re responsible for the quality and performance [of AI] — I think that’s the right regime.”
Pointing to the lack of progress in the US, he said: “Inaction would be the worst of all possible worlds.”