Should AI creators be more paranoid and aware of cybersecurity?

Artificial intelligence (AI) allows us to unlock an iPhone with our face or use speech recognition to check email, but a recent study warns that tech workers creating AI need to be more cognizant of the moral implications of their work.

The 99-page document argues for urgent and active discussion of AI misuse, according to a WIRED report. Such nefarious uses include cleaning robots being repurposed to assassinate politicians or criminals launching personalized phishing campaigns.

Robust discussion of the safety and security of AI technologies is a must, including possible policy implications, according to the report's recommendations. It also suggests developers adopt a more paranoid mindset about potentially harmful uses of their products or software.

“People in AI have been promising the moon and coming up short repeatedly,” Shahar Avin, a lead author of the report, told WIRED. “This time it’s different, you can no longer close your eyes.”


