Dustin Salinas is the CEO of Just Going Viral.
Artificial intelligence is disrupting healthcare in ways we never thought possible. Whether helping doctors make diagnoses, streamlining the patient care experience, or managing administrative tasks like billing and paperwork, AI is already making a difference.
But one question comes to mind, especially when I hear people talk about taking AI to the next level: How do we make AI autonomous? And should we? The answer isn't simple, but one thing is clear: we need synthetic data to get started.
What AI is already doing in healthcare
The COVID pandemic made obvious waves in the healthcare industry, but one of the subtler developments has been the widespread use of AI-assisted CT scans. In one study, these AI systems correctly identified 68% of positive results among 297 patients. The catch? Those same 297 patients had been classified as negative by human professionals.
UCLA researchers recently developed an AI model that can read MRIs with a precision rivaling that of the most experienced specialists. These types of visual AI can help surgeons decide whether an operation is even necessary and, if so, what type of procedure will give the patient the best chance of success. This is not a futuristic concept. This is happening today.
But here’s where things get complicated: just because AI can make decisions on its own doesn’t mean it should. Of course, AI is effective. It’s fast, it’s precise.
But in healthcare, speed and accuracy aren't the highest goals. There's a human element where things aren't as simple as numbers on a screen. That's why I firmly believe we need human oversight, no matter how good the AI gets. Healthcare merges data with empathy.
Why we need synthetic data now
The FDA's process for reviewing medical devices was not designed to evaluate the safety of adaptive machine learning or artificial intelligence software. The agency is working on it but has not yet developed a single standard process for approving such technologies, often relying on premarket review to assess the safety of an AI-based product.
Here's the sticking point when it comes to training new AI models: we've used just about all the real health data available, and privacy laws like HIPAA make it difficult to gather more. So where do we go from here?
Synthetic data mimics real medical data but does not use real patient information. This means we can train AI systems to improve without tapping into sensitive personal data. With FDA approval, this process could significantly accelerate the development and approval of new medical AI tools. I have been working with the FDA on this approval to secure the promise of synthetic data for future applications, but it is a slow process.
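To make the idea concrete, here is a minimal sketch of what generating synthetic patient records can look like. The field names, value ranges, and distributions are my own illustrative assumptions, not a description of any specific FDA-reviewed pipeline; real synthetic-data tools model the statistics of actual cohorts far more carefully.

```python
import random

# Illustrative only: fabricate records that mimic the *shape* of real health
# data (fields, plausible ranges) without touching any actual patient record.
def synthetic_patient(rng: random.Random) -> dict:
    age = rng.randint(18, 90)
    return {
        "age": age,
        "sex": rng.choice(["F", "M"]),
        "systolic_bp": round(rng.gauss(120 + 0.3 * age, 15)),  # loosely age-correlated
        "hba1c": round(rng.uniform(4.5, 9.5), 1),
        "smoker": rng.random() < 0.15,
    }

rng = random.Random(42)  # fixed seed so the synthetic dataset is reproducible
dataset = [synthetic_patient(rng) for _ in range(10_000)]
print(dataset[0])
```

Because every record is generated, the dataset can be shared, rebuilt, and scaled freely, which is exactly what privacy law makes hard with real patient data.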
The good, the bad and the risk of AI autonomy
AI is amazing when it works. But what happens when it doesn't? That's the real question. We're talking about healthcare, where mistakes can have life-changing consequences. The AI may be right 99% of the time, but what about the 1% when it's wrong? Across a million scans, a 1% error rate is 10,000 patients.
I don't think AI should ever be left to its own devices, especially in critical healthcare decisions. Yes, AI can help, but we need humans in the loop to catch the edge cases where the technology might miss something.
We are talking about real lives here. People. AI should support doctors and nurses, not replace them. We need to find a balance between what AI does best (managing massive amounts of data) and what humans do best (interpreting nuanced patient responses, showing compassion, and understanding the gray areas between diagnosis and treatment).
How synthetic data helps us get there
Synthetic data is part of the solution to making AI safe for healthcare. It allows us to test AI systems in a controlled, risk-free environment. No real patients are involved, so we can put the AI through all sorts of tests before it interacts with a real case.
I’ve seen this process in action and I can tell you it works. We used synthetic data to run tens of thousands of simulations. If the AI makes a mistake during these tests, we can correct it before it becomes a real problem. By the time any type of AI arrives in a hospital setting, it has been refined and verified. This is the level of safety and reliability we need before we start letting AI take on more responsibilities in healthcare.
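As a rough illustration of that test loop, here is a sketch: run a model over a batch of synthetic cases with known ground truth and log every disagreement for human review before the system goes anywhere near a real patient. The `Case` shape, the toy model, and the threshold are placeholders I've assumed for the example, not any vendor's actual validation suite.

```python
from typing import Callable, Iterable

# Hypothetical shapes for the example: each synthetic case carries its
# features plus the ground-truth label it was generated with.
Case = dict
Model = Callable[[Case], str]

def run_simulations(model: Model, cases: Iterable[Case]) -> list[Case]:
    """Return every synthetic case the model got wrong, for human review."""
    failures = []
    for case in cases:
        predicted = model(case)
        if predicted != case["ground_truth"]:
            failures.append({**case, "predicted": predicted})
    return failures

def toy_model(case: Case) -> str:
    # Toy stand-in model: flags high systolic blood pressure. Real systems
    # are far more complex, but the safety loop around them looks the same.
    return "flag" if case["systolic_bp"] >= 140 else "clear"

cases = [
    {"systolic_bp": 150, "ground_truth": "flag"},
    {"systolic_bp": 118, "ground_truth": "clear"},
    {"systolic_bp": 142, "ground_truth": "clear"},  # deliberate edge case
]
for failure in run_simulations(toy_model, cases):
    print("needs review:", failure)
```

The point isn't the toy model; it's that every failure surfaces in testing, on fabricated cases, instead of in a hospital.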
The future of AI in healthcare: What's next?
So what does the future look like? AI will continue to grow in healthcare. It already handles tasks like appointment scheduling and imaging exams that previously required entire teams of people, and that’s only going to grow from here. But we need to be careful about how far we let AI go. AI may be better than humans in some areas, like diagnosing certain conditions and providing surgical assistance, but that doesn’t mean we should remove humans from the equation.
Synthetic data will play an important role in how we safely advance AI, but it's only part of the solution. We still need strong ethical guidelines and a real commitment to ensuring that AI is a tool that serves people. It's not just about what AI can do. It's about how we want to use it and what role we want humans to continue to play in healthcare.