Today's AI is narrow: each system is good at one task, or a handful of closely related ones. A model that writes essays may struggle with math. One that recognizes faces can't drive a car. AGI, or artificial general intelligence, is the idea of an AI that could do any intellectual task a human can: learn a new language, switch from writing code to diagnosing illness to composing music, and adapt to novel situations without being retrained from scratch.
Think of the difference between a calculator and a person. A calculator is better than any human at arithmetic, but it can't read a novel, plan a trip, or comfort a friend. AGI would be more like the person: not necessarily the best at any single task, but flexible enough to tackle a wide range of them. Researchers disagree on whether AGI is decades away, centuries away, or achievable at all, but it remains a north star for the field.
Why it matters: AGI raises questions about safety, control, and the future of work that narrow AI doesn't. A chatbot that sometimes gets facts wrong is annoying; a system with human-like general intelligence that misbehaves could be far more consequential. Much of AI safety research focuses on how to build systems that remain aligned with human values as they become more capable.
For now, AGI is a concept, not a product. When you hear claims about "AGI" or "human-level AI," treat them with skepticism. The systems we have today are powerful and useful — but they're still narrow tools, not general minds.