Should AI creators be more paranoid and aware of cybersecurity?

Artificial intelligence (AI) allows us to unlock an iPhone with our face or check email with speech recognition, but a recent study warns that tech workers creating AI need to be more cognizant of the moral implications of their work.

The 99-page document argues for urgent and active discussion of AI misuse, according to a WIRED report. Nefarious uses it cites include cleaning robots repurposed to assassinate politicians and criminals launching personalized phishing campaigns.

Robust discussion of the safety and security of AI technologies, including possible policy implications, is a must, according to the report's recommendations. The authors also suggest developers adopt a more paranoid mindset, anticipating potentially harmful uses of their products or software.

“People in AI have been promising the moon and coming up short repeatedly,” Shahar Avin, a lead author of the report, told WIRED. “This time it’s different, you can no longer close your eyes.”


Matt joined Chicago’s TriMed team in 2018 covering all areas of health imaging after two years reporting on the hospital field. He holds a bachelor’s in English from UIC, and enjoys a good cup of coffee and an interesting documentary.
