Should AI creators be more paranoid and aware of cybersecurity?
Artificial intelligence (AI) lets us unlock an iPhone with our face or use speech recognition to check email, but a recent study warns that the tech workers building AI need to be more cognizant of the moral implications of their work.
The 99-page document argues for urgent, active discussion of AI misuse, according to a WIRED report. Nefarious uses could include cleaning robots repurposed to assassinate politicians, or criminals launching personalized phishing campaigns.
Robust discussion of the safety and security of AI technologies, including possible policy implications, is a must, the report recommends. It also suggests adopting a more paranoid mindset about potentially harmful uses of a product or piece of software.
“People in AI have been promising the moon and coming up short repeatedly,” Shahar Avin, a lead author of the report, told WIRED. “This time it’s different, you can no longer close your eyes.”