Hyping the Risk of Out-of-Control AI
Many purveyors of AI snake oil delight in forecasting that Skynet is right around the corner. They suggest that Generative AI is so close to AGI (artificial general intelligence, meaning AI systems that can perform most or all tasks as effectively as any human being) that we should expect a Terminator-style revolt any day now. This is the flip side of the AI-Is-Our-Savior pitch.
Narayanan and Kapoor offer a more realistic view in their new book AI Snake Oil:
We’re not saying that AGI will never be built, or that there is nothing to worry about if it is built. But we think AGI is a long-term prospect, and that society already has the tools to address its risks calmly. We shouldn’t let the bugbear of existential risk distract us from the more immediate harms of AI snake oil.
AGI is definitely a possibility we should take seriously. The questions we should ask are when we might reach it, what it will look like, and what we can do to steer it in a more beneficial direction.
I agree with the authors that we can reap enormous benefits from AI safely if we build in the right safeguards. My only concern is whether our polarized politics will allow us to implement the safeguards the authors recommend.