Artificial intelligence boosters predict a brave new world of flying cars and cancer cures. Detractors worry about a future where humans are enslaved to an evil race of robot overlords. Veteran AI scientist Eric Horvitz and Doomsday Clock guru Lawrence Krauss, seeking a middle ground, gathered a group of experts in the Arizona desert to discuss the worst that could possibly happen -- and how to stop it.
Their workshop took place last weekend at Arizona State University with funding from Tesla Inc. co-founder Elon Musk and Skype co-founder Jaan Tallinn. Officially dubbed "Envisioning and Addressing Adverse AI Outcomes," it was a kind of AI doomsday games that organized some 40 scientists, cyber-security experts and policy wonks into groups of attackers -- the red team -- and defenders -- blue team -- playing out AI-gone-very-wrong scenarios, ranging from stock-market manipulation to global warfare.
Horvitz is optimistic -- a good thing because machine intelligence is his life's work -- but some other, more dystopian-minded backers of the project seemed to find his outlook too positive when plans for this event started about two years ago, said Krauss, a theoretical physicist who directs ASU's Origins Project, the program running the workshop. Yet Horvitz said that for these technologies to move forward successfully and to earn broad public confidence, all concerns must be fully aired and addressed.