In Pursuing Human-Level Intelligence, The AI Industry Risks Building What It Can’t Control
Instead of asking whether AI can achieve something, perhaps we should ask whether it should.
In front of a packed house at Amsterdam’s World Summit AI on Wednesday, I asked senior researchers from Meta, Google, IBM, and the University of Sussex to raise their hands if they did not want AI to mirror human intelligence. After a few silent moments, no hands went up.
The silence reflected the AI industry’s ambition to build human-level cognition, even at the risk of losing control of what it creates. AI is not sentient now, and won’t be for some time, if ever, but a determined AI industry is already releasing programs that can chat, see, and draw like humans as it works toward that goal. And as it marches on, it risks letting its progress careen into the dangerous unknown.
“I don’t think you can close Pandora’s box,” said Grady Booch, chief scientist at IBM, of eventual human-level AI. “Much like nuclear weapons, the cat is out of the bag.”
Comparing AI’s progress to nuclear weapons is apt but incomplete. AI researchers may share nuclear scientists’ drive to achieve technical progress despite the consequences, even if the dangers differ in scale. Yet more people will access AI…