A Conversation with Sina Bari MD: Real Talk on AI in Hospitals

Sina Bari

Below is the transcript from Sarah Chen’s recent “Technology Frontiers” interview with Sina Bari MD on Public Radio, lightly edited for clarity.


SARAH CHEN: We’re back on Technology Frontiers. I’m your host Sarah Chen, and today we’re diving into something that honestly keeps a lot of healthcare administrators up at night – how to actually govern AI systems in hospitals without creating a bureaucratic nightmare. I’m joined by Sina Bari MD, who brings this really fascinating background as both a plastic surgeon and now a leader in the medical AI space. Thanks for being here, Dr. Bari.

SINA BARI MD: Thanks, Sarah. And please, just call me Sina. I’ve spent enough time in hospitals to know that formality is, uh, usually the first thing that goes out the window when you’re trying to get something actually done [laughs].

SARAH CHEN: I appreciate that, Sina. So let’s jump right in – when hospital leadership hears this buzzword “AI governance,” what are we really talking about?

SINA BARI MD: Yeah, so… it’s funny because I was in a meeting with a hospital CIO last month who said exactly that – “What the heck does AI governance even mean?” Look, at its core, it’s really just about answering some pretty basic questions, right? Like, when this algorithm suggests a treatment or flags a scan or whatever, who’s actually responsible for making sure it’s accurate?

The thing is – and I learned this the hard way when I was still doing surgeries – hospitals are these incredibly complex places with all these competing priorities. And now we’re adding in these AI systems whose legal and ethical repercussions, let’s be honest, most hospital boards don’t fully understand.

SARAH CHEN: That’s a really good point. I imagine there’s a lot of confusion.

SINA BARI MD: Oh my god, yes. So much confusion. I was at this healthcare conference in Chicago last year, and I’m not exaggerating, there were three different panels all talking about AI governance, and they were all describing completely different things!

One hospital was basically just having their IT security team handle it all, another had created this massive committee structure that, honestly, looked like it would take six months to approve anything. And then there was this community hospital that was just letting individual departments buy whatever AI tools they wanted with almost no oversight, which… [sighs] that’s just asking for trouble.

SARAH CHEN: So it sounds like there isn’t really a standard approach yet.

SINA BARI MD: Not even close. And that’s part of the problem, right? Every hospital is trying to figure this out on their own. I mean, in an ideal world, you’d have this clear structure with an oversight committee at the top, then your specialized teams handling different aspects – the clinical people, the tech folks, legal and compliance – and then the frontline users who actually have to make these systems work in practice.

But in reality, it’s usually much messier. You’ve got the people who get blamed when something goes wrong, the doctors and nurses just trying to use the technology without it slowing them down, and the lawyers saying “no” to everything because they’re worried about liability. It’s not exactly a recipe for innovation [laughs].

At the end of the day, however, whether it’s centralized or decentralized, whether it’s owned by a committee or a department, AI systems must be continuously tested against your hospital’s actual patient population and monitored for drift.
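The drift monitoring Sina mentions can be sketched in a few lines. This is a minimal illustration using the Population Stability Index, a common drift statistic – not any hospital’s actual pipeline, and the age data here is synthetic:

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Compare two samples of a model input (or score) distribution.

    Returns the PSI; values above roughly 0.2 are commonly treated
    as meaningful drift worth investigating.
    """
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_counts, _ = np.histogram(baseline, bins=edges)
    c_counts, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions, guarding against empty bins.
    b_pct = np.clip(b_counts / b_counts.sum(), 1e-6, None)
    c_pct = np.clip(c_counts / c_counts.sum(), 1e-6, None)
    return float(np.sum((c_pct - b_pct) * np.log(c_pct / b_pct)))

# Hypothetical example: patient ages the model was validated on
# versus this month's intake, which skews noticeably older.
rng = np.random.default_rng(0)
validated_ages = rng.normal(55, 12, 5000)  # validation population
current_ages = rng.normal(67, 10, 5000)    # shifted current population
psi = population_stability_index(validated_ages, current_ages)
print(f"PSI = {psi:.2f}")  # a large value flags the population shift
```

The point is less the specific statistic than the habit: a model validated on one patient population gives no guarantees once that population changes, so the comparison has to run continuously, not once at deployment.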

SARAH CHEN: How should hospitals approach this monitoring?

SINA BARI MD: Well first, start small.

Pick one AI application that’s pretty low-risk – maybe something that helps with scheduling or, I don’t know, prioritizing which radiology images get looked at first. Don’t start with anything diagnostic! That’s just… that’s just asking for trouble.

And the other thing – and this is something I wish someone had told me when I first got into this space – is that you need to have clinicians involved from day one.

And then create a system of ground truth.

SARAH CHEN: What’s ground truth?

SINA BARI MD: Sorry, that’s industry lingo. When you’re asking AI to give an opinion or do a task, you need a way of checking whether it’s right. That “correct answer” is the ground truth.

Imagine you have a new employee that you’ve just hired. You’re really excited about them working for you because they’re incredibly industrious and somehow never take a break. Sounds great, huh? Well, you’re still going to want to check their work.

Because AI systems can scale so quickly, we need to be thinking in the same systematic way about checking their work, instead of just an occasional look over the shoulder. Our systems at iMerit even include AI systems to do this monitoring.
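The systematic “checking their work” Sina describes amounts to auditing a random sample of predictions against clinician-provided ground truth. A minimal sketch, assuming a hypothetical triage model – the `get_ground_truth` callback stands in for the expensive human review step, and none of this reflects iMerit’s actual tooling:

```python
import random

def spot_check(predictions, get_ground_truth, sample_rate=0.05,
               alert_threshold=0.90, rng=None):
    """Audit a random sample of AI predictions against ground truth.

    `get_ground_truth(case_id)` stands in for the expensive step:
    a clinician reviewing the case and recording the correct answer.
    Returns (accuracy_on_sample, alert_flag).
    """
    rng = rng or random.Random(0)
    sampled = [cid for cid in predictions if rng.random() < sample_rate]
    if not sampled:
        return None, False
    correct = sum(predictions[cid] == get_ground_truth(cid)
                  for cid in sampled)
    accuracy = correct / len(sampled)
    return accuracy, accuracy < alert_threshold

# Toy run: 1,000 triage calls where the model disagrees with the
# clinician's label about 30% of the time.
rng = random.Random(42)
truth = {i: rng.choice(["urgent", "routine"]) for i in range(1000)}
preds = {i: truth[i] if rng.random() < 0.7
         else ("routine" if truth[i] == "urgent" else "urgent")
         for i in range(1000)}
acc, alert = spot_check(preds, truth.get, sample_rate=0.10)
print(acc, alert)  # the alert fires when sampled accuracy dips below 90%
```

This is the “checking the industrious new employee’s work” idea made concrete: you never review everything, but you always review a statistically meaningful slice, and an alert fires when agreement with clinicians slips.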

SARAH CHEN: That’s a little scary – so we need AI monitoring AI?

SINA BARI MD: Yes, because of the sheer volume of data being generated – but humans stay in the loop too. Otherwise, yes, it’d be AI all the way down.

SARAH CHEN: And I imagine dealing with regulatory stuff just adds another layer of complexity?

SINA BARI MD: The FDA is actually doing their best to keep up. They’ve got this whole framework for Software as a Medical Device, which is what most AI systems fall under. But the technology is moving so fast, and the regulatory process is, well, not known for its speed [laughs]. Part of that is because we want them to move slowly, we’ve learned hard lessons about that in medicine in the past.

I was at this roundtable with some FDA folks last year, and they were asking us – the industry people – for suggestions. Our work in radiology, which is the most mature clinical area for AI, has really helped set the tone.

SARAH CHEN: How do doctors typically feel about these AI systems? I imagine there’s some resistance.

SINA BARI MD: Oh yeah, for sure. And honestly, as someone who spent years in the OR, I get it. Doctors are trained to be skeptical, right? That’s part of the job.

I’ll tell you a quick story. When I was still practicing, they brought in this new AI system for… I think it was for predicting surgical complications. And my first thought was, “Great, another thing that’s going to beep at me and be wrong.”

And I wasn’t entirely wrong! The system had all these false positives, and it became this “boy who cried wolf” situation where everyone just started ignoring it. Which is actually worse than not having it at all.

What I’ve learned – and what I try to build into everything we do at iMerit – is that you have to be brutally honest about what these systems can and can’t do. Doctors can smell BS from a mile away. If you oversell what your AI can do, they’ll never trust you again.

SARAH CHEN: Looking ahead, what changes do you see coming in this space?

SINA BARI MD: Hmm, that’s a good question. I think… I think we’re going to see a lot more standardization, for one thing. Right now it’s a bit like the Wild West.

Oh, and this is something I’m pretty excited about – I think we’ll see more federated systems where hospitals can share insights without sharing actual patient data. Privacy is huge, obviously.

But honestly, the biggest change might just be cultural. Getting healthcare organizations to view governance not as this annoying compliance thing but as actually creating value. That’s the hard part. I mean, I gave a talk at this hospital association thing in Boston, and afterward this hospital CEO came up to me and was like, “So this governance stuff – how much is it going to cost me?” And I had to explain that it’s not just a cost center; it’s risk management. It’s quality control.

Sorry, I’m rambling a bit. It’s just something I care about a lot. [laughs]

SARAH CHEN: No, that’s exactly why we wanted to have you on – to get that real perspective from someone who’s been in the trenches. Sina Bari MD, this has been fascinating. Any final thoughts for our listeners?

SINA BARI MD: Just that AI in healthcare is coming whether we’re ready or not. And I’ve seen both the amazing potential and the scary pitfalls. The difference between those outcomes usually comes down to how thoughtfully these systems are governed and integrated.

And maybe one last thing – to the clinicians listening, get involved in this process. Don’t leave it to the administrators and tech people. These systems will shape how you practice medicine, so you should have a say in how they work.

SARAH CHEN: That’s great advice. Thanks so much for joining us today.

SINA BARI MD: Thanks for having me, Sarah. This was fun – even the rant part [laughs].


Sina Bari MD leads Medical AI initiatives at iMerit Technology. With a background spanning plastic surgery and artificial intelligence development, he’s been a vocal advocate for responsible AI implementation in healthcare settings. He occasionally blogs about the intersection of clinical practice and emerging technologies at sinabarimd.wordpress.com.