If the Trump administration continues taking a laissez-faire stance toward AI—including AI used in healthcare—why not let the states go it alone on regulating the technology?
Here’s one defensible answer: Because disunited regulatory regimes would complicate operations for the many AI suppliers that serve a national clientele.
The Q&A is suggested in a paper posted by the American Society for AI (ASFAI). The document is written as solicited input for the framers of the AI Action Plan, for which the White House’s Office of Science and Technology Policy opened a request for information in February.
Noting that 40 of 50 states are already moving on AI oversight, ASFAI offers an overview of these state-level initiatives. The group’s stated hope is that federal AI policies will draw from states’ efforts to do the right thing for their residents.
“Understanding these state-level initiatives can provide insights to shape federal policy,” ASFAI writes. More:
‘While a patchwork approach to regulation could be harmful, state governments can also serve as vital laboratories for policy innovation, offering real-world evidence of how different governance approaches succeed or fail in practice.’
The group’s published comment dedicates three of its 14 pages to informing the White House on what it can learn from the states about regulating AI. Here are excerpts from that section, which is broken into implementation models and policies.
STATE-LEVEL IMPLEMENTATION MODELS
1. Dedicated task forces.
Several states, such as Maryland, have established dedicated task forces with specific mandates and sunset provisions, the ASFAI authors point out. “This model typically involves a discrete group of experts working within a defined timeframe to produce specific deliverables, such as policy recommendations or implementation frameworks.” More:
‘This can result in a more focused mission, but the fixed duration can limit long-term oversight capability.’
2. Integrated agency.
Other states have integrated AI governance into existing governmental structures, such as Georgia’s effort spearheaded by the Georgia Technology Authority.
‘This model can be more efficient, but it lacks the focus of a dedicated task force.’
3. Legislative committees.
States such as Colorado have established legislative committees with ongoing AI oversight responsibilities, ASFAI notes before adding that this model “emphasizes continuous legislative engagement and oversight.”
‘However, legislative committees may not have the same expertise available compared to a dedicated task force or a technology-specific government agency.’
4. Hybrid approaches.
Some states have developed hybrid models that combine elements of multiple approaches. For example, ASFAI points out, Oregon has combined a legislative task force with an executive advisory council. The group remarks:
‘Hybrid approaches can balance the advantages of various other approaches.’
STATE-LEVEL AI POLICIES
A. Adoption and preparation.
States have recognized that AI policy should not be solely focused on regulation and limiting risk, ASFAI writes. “Governments can also encourage increased development and adoption of AI.” More:
‘For example, Iowa’s executive task force specifically targets cost reduction and automation opportunities, while Arkansas emphasizes practical applications in unemployment insurance fraud detection and recidivism reduction.’
B. Deepfake/fraud detection.
“California recently passed a law compelling companies to remove deepfakes when identified by users,” ASFAI reminds, “and allowing courts to issue injunctions blocking the distribution of deceptive political content during elections.” Meanwhile,
‘Tennessee passed legislation targeting unauthorized AI-generated replication of people’s voices and likenesses to prevent unwanted AI impersonation.’
C. Consumer protection.
Colorado recently enacted legislation requiring developers of high-risk AI systems to exercise reasonable care to prevent algorithmic discrimination and mandating disclosures to consumers, ASFAI notes. “Similarly, Utah passed a law establishing liability for undisclosed AI use that violates consumer protection laws and requiring disclosures in regulated professions such as healthcare.”
‘Other states have focused on prevention of bias and discrimination. For example, a recent New York law mandates bias audits for automated employment decision tools. Arizona’s judicial steering committee is tasked with addressing bias mitigation in sensitive government functions.’
If its RFI process is proceeding according to plan, the executive branch’s Office of Science and Technology Policy already has ASFAI’s full input in hand. The comment period closes tomorrow night at one minute before midnight.
Health Workforce Well-Being Day 2025: Strategies and Tools for Health System Leaders to Build a Culture of Clinician Well-Being – On March 19, in recognition of #HealthWorkforceWellBeingDay 2025, Nabla and HealthLeaders are bringing together healthcare executives for an online discussion on how to build a sustainable culture of clinician well-being.
Featured speakers:
- Ed Lee, MD, MPH – Chief Medical Officer at Nabla
- John Chuck, MD – Chief Wellness Officer and Professor of Family Medicine at California Northstate University, College of Medicine
- Hugo Gómez Rueda, MD, MSc, PhD – Board-certified Adult Psychiatrist at Harbor Psychiatry and Mental Health
- Matthew Sakumoto – Virtualist and CMIO at Sutter Health
Join us for actionable insights from leaders dedicated to creating a healthier, more resilient healthcare workforce. Register here: https://event.on24.com/wcc/r/4889552/C83213E49794F0BE87DA83BD586AAF6D?partnerref=nablapromos
Buzzworthy developments of the past few days.
- Many AI documentation suppliers bill providers on a per-use basis: use the vendor’s tool a lot, ring up a big vendor bill. At large healthcare organizations, this can be a ballooning problem, as the number of physicians using AI documentation has been rising fast. And few organizations have a way to pass these costs along to payers. So the more a healthcare system integrates ambient AI into its daily workflows, the heavier the cumulative financial burden becomes. And the faster it keeps accruing. Ronald Rodriguez, MD, PhD, of UT-San Antonio reminds the wise about these realities in a Q&A with HIMSS Media. “Unless hospitals and healthcare providers negotiate cost-effective pricing structures, implement usage controls or develop in-house AI systems, they may find themselves in a situation where AI adoption leads to escalating operational costs rather than the anticipated savings,” he says. Rodriguez, who heads the nation’s first MD/MS dual-degree program in AI, also issues wake-up calls for LLM users who are oblivious to the patient-privacy risks inherent in the technology. In fact, he preaches attentiveness to several easily overlooked points of potential ethical, clinical and legal trouble for clinical AI adopters. See here, healthcare AI aficionados.
- Like snacks on a dieter’s shelf, those same tools can be hard for overstretched healthcare providers to resist. Ambient AI “helps us get through massive amounts of unstructured paperwork and data that insurance companies are trying to go through to figure out if this is an appropriate authorization,” Nishit Patel, MD, chief medical informatics officer at Tampa General Hospital, tells a local TV news operation. “Hopefully, if we use these tools in the optimal ethical way to do the right thing for the patient, we all win.”
- It’s still fairly easy to find healthcare AI holdouts. As the Maine Reporter found on a recent assignment, AI resisters tend to cluster within smaller provider organizations. In a survey the newspaper conducted on the use of AI in healthcare, multiple respondents from such organizations said they worried about people using AI to make medical decisions without consulting healthcare providers, healthcare journalist Rose Lundy reports. One mental health counselor from a small practice stated flatly that AI “doesn’t belong in healthcare.” Read the article.
- One hospital executive who’s also a physician and a tech expert has a bold proposal for controlling AI usage costs. Just get AI vendors to commit, under contract, to sharing risk with their healthcare clients. “The problem with a time-and-materials contract is, if I have a great idea, and you agree to build it for me, wouldn’t you love to build it for a long period of time because then my bill keeps going up?” rhetorically asks Zafar Chaudry, MD, MBA, in remarks made to MedCity News. “It’s very hard to cost-control that.” Chaudry notes that, as things stand now, hospital contracts with tech vendors “usually don’t guarantee that the product you’re investing in will deliver the intended result.” Chaudry is senior vice president, chief digital officer and chief AI and information officer at Seattle Children’s.
- Quality-wise, healthcare data is all over the place. Stated less diplomatically: Garbage in, garbage out. The inconsistency is a big problem when it comes to training and tracking any given healthcare AI model. Fortunately, the hurdle is clearable. All that AI-implementing providers have to do is focus on six simple elements—accuracy, validity, data integrity, completeness, consistency and timeliness. OK, they’re not all that simple. But they surely are approachable. Wolters Kluwer solutions engineer Brian Laberge breaks it down in a March 10 blog post. “Through effective data governance and normalization practices,” he assures, “healthcare organizations can maximize AI capabilities and ensure the most accurate outputs for the betterment of patient care.”
- In like manner, computer vision for healthcare can’t be any better than the image data it’s fed for training. Andrew Gostine, MD, MBA, explains the dynamic. “Sight is our most powerful sensory capability, with up to 90% of our brains directly or indirectly participating in the processing of visual information,” he tells HealthTech magazine. “Similarly, computer vision is the most valuable form of AI-enabled perception.” Gostine, a critical care anesthesiologist and an AI entrepreneur, adds that high-bandwidth image processing with computer vision is “the only way to drive healthcare automation at the scale required to fix many of healthcare’s access and efficiency problems.”
- The world’s most populous country is home to an 86-year-old IT entrepreneur who recently beefed up his AI skills in a postgraduate program. Why? So he could do more to tech-enable healthcare services for underprivileged communities. Over the past 15 years, Mahendra Patel has helped more than 4,000 orphaned and handicapped children access education, the Ahmedabad Mirror reports. “His story inspires anyone contemplating new paths and illustrates that, with determination and support, anything is achievable,” the outlet adds. For Patel, the challenging AI curriculum he completed “was not only an academic chase. It had become an urge to reinvent his passion for innovation.”
- Recent research in the news:
- Notable FDA approvals:
- M&A activity:
- Funding news of note:
- From AIin.Healthcare’s news partners: