ACR outlines 10 priorities to steer federal oversight of artificial intelligence
The American College of Radiology recently submitted comments to federal officials outlining priorities it would like incorporated into a federal document guiding artificial intelligence oversight.
Writing to the White House Office of Management and Budget last week, the college addressed 10 principles put forth in the draft memo “Guidance for Regulation of Artificial Intelligence (AI) Applications.”
The early blueprint stems from Executive Order 13859, “Maintaining American Leadership in Artificial Intelligence,” which informs regulatory and nonregulatory approaches to technologies and industries powered and enabled by AI.
The ACR agreed with many of the OMB’s priorities and stressed the importance of validation and real-world performance of artificial intelligence. Below are comments from the college in a condensed format.
1) Approaches must ensure that the public can trust artificial intelligence. The ACR suggested the U.S. government work with third parties—such as professional associations—to create validation services, certification measures and real-world performance monitoring agencies.
2) The ACR said it agrees with OMB that the public should be involved in federal processes that ensure the transparency and accountability of regulators.
3) Scientific integrity and information quality should inform rulemaking and guidance efforts, the ACR noted, specifically calling for “transparency articulating the strengths, weaknesses, intended optimizations or outcomes, bias mitigations, and appropriate use of the regulated AI applications,” along with disclosure of risks and risk mitigations.
4) Oversight approaches should be based on a “consistent application of risk assessment and risk management” across multiple agencies and technologies, as stated by the OMB. The agency must keep in mind, however, that certain sectors, such as healthcare, may have gaps in oversight; third-party validation and certification can help in such instances.
5) The benefits and costs of regulating AI should also be weighed when developing specific applications. Collaborating with national associations that represent AI users, such as the ACR Data Science Institute, can ensure resources are funneled toward innovations that will actually be adopted and implemented.
6) Regulatory bodies must have the flexibility to support rapid changes and updates, while also protecting health and patient safety.
7) Artificial intelligence algorithms must be generalizable for multiple populations and care sites. Therefore, platforms must be trained on large datasets, “rigorously validated,” and monitored, as technologies can evolve in “unexpected ways.”
8) Premarket AI review should also be subject to full disclosure and transparency, the ACR said. This includes ensuring a “high level” of data traceability and insight into the training data used in developing new models.
9) Patient and public safety should be the “foremost” consideration for AI used in healthcare, the ACR noted. The college also agrees with the OMB that cybersecurity risks should be top of mind.
10) Public stakeholders, AI developers and users should be aware of coordination between regulatory agencies.