Should AI creators be more paranoid and aware of cybersecurity?

Artificial intelligence (AI) allows us to unlock an iPhone with our face or use speech recognition to check email, but a recent study warns that tech workers creating AI need to be more cognizant of the moral implications of their work.

The 99-page report argues for urgent and active discussion of AI misuse, according to WIRED. Such nefarious uses include cleaning robots repurposed to assassinate politicians or criminals launching personalized phishing campaigns.

The report recommends robust discussion of the safety and security of AI technologies, including possible policy implications. It also suggests developers adopt a more paranoid mindset, anticipating potentially harmful uses of their products and software.

“People in AI have been promising the moon and coming up short repeatedly,” Shahar Avin, a lead author of the report, told WIRED. “This time it’s different, you can no longer close your eyes.”


Matt joined Chicago’s TriMed team in 2018 covering all areas of health imaging after two years reporting on the hospital field. He holds a bachelor’s in English from UIC, and enjoys a good cup of coffee and an interesting documentary.
