In a $30 million mansion perched on a cliff overlooking the Golden Gate Bridge, a group of AI researchers, philosophers, and technologists gathered to discuss the end of humanity.
The Sunday afternoon symposium, called "Worthy Successor," revolved around a provocative idea from entrepreneur Daniel Faggella: The "moral aim" of advanced AI should be to create a form of intelligence so powerful and wise that "you would gladly prefer that it (not humanity) determine the future path of life itself."
Faggella made the theme clear in his invitation. "This event is very much focused on posthuman transition," he wrote to me via X DMs. "Not on AGI that eternally serves as a tool for humanity."
A party filled with futuristic fantasies, where attendees discuss the end of humanity as a logistics problem rather than a metaphorical one, could be described as niche. But if you live in San Francisco and work in AI, this is a typical Sunday.
About 100 guests nursed nonalcoholic cocktails and nibbled on cheese plates near floor-to-ceiling windows facing the Pacific Ocean before gathering to hear three talks on the future of intelligence. One attendee sported a shirt that said "Kurzweil was right," seemingly a reference to Ray Kurzweil, the futurist who predicted machines will surpass human intelligence in the coming years. Another wore a shirt that said "does this help us get to safe AGI?" accompanied by a thinking face emoji.
Faggella told WIRED that he threw this event because "the big labs, the people that know that AGI is likely to end humanity, don't talk about it because the incentives don't permit it" and referenced early comments from tech leaders like Elon Musk, Sam Altman, and Demis Hassabis, who "were all pretty frank about the possibility of AGI killing us all." Now that the incentives are to compete, he says, "they're all racing full bore to build it." (To be fair, Musk still talks about the risks associated with advanced AI, though this hasn't stopped him from racing ahead.)
On LinkedIn, Faggella boasted a star-studded guest list, with AI founders, researchers from all the top Western AI labs, and "most of the important philosophical thinkers on AGI."
The first speaker, Ginevera Davis, a writer based in New York, warned that human values might be impossible to translate to AI. Machines may never understand what it's like to be conscious, she said, and trying to hard-code human preferences into future systems may be shortsighted. Instead, she proposed a lofty-sounding idea called "cosmic alignment": building AI that can seek out deeper, more universal values we haven't yet discovered. Her slides often showed a seemingly AI-generated image of a techno-utopia, with a group of humans gathered on a grassy knoll overlooking a futuristic city in the distance.
Critics of machine consciousness will say that large language models are simply stochastic parrots, a metaphor coined by a group of researchers, some of whom worked at Google, who wrote in a famous paper that LLMs do not actually understand language and are only probabilistic machines. But that debate wasn't part of the symposium, where speakers took as a given the idea that superintelligence is coming, and fast.