Navigating AI’s growing influence on health care
Leaders at the Texas A&M Institute for Healthcare Access are examining how artificial intelligence affects patient outcomes and trust as the technology becomes part of everyday medicine.
Artificial intelligence is no longer a future-facing experiment in health care. It’s already embedded in many settings and systems, influencing everything from clinical decision-making to medical documentation.
As AI becomes a permanent fixture in medicine, experts at Texas A&M’s Institute for Healthcare Access say its growing role raises critical questions about whether the legal, ethical and human systems surrounding it are prepared for its influence on patients’ prognoses and broader community health outcomes.
The issue sits at the center of the institute’s mission to bring together professionals in health and law to advance solutions to problems that hinder access to timely, effective health care for individuals and communities. Based in Fort Worth, the multidisciplinary institute was created in 2022 to reinvigorate conversations among policymakers, stakeholder groups and the public about barriers to health care.
AI now touches nearly every corner of medicine, from doctor-patient interactions and clinicians’ documentation burdens to medical imaging and diagnostics. These tools can quickly digest vast amounts of data and learn over time, offering the potential to increase efficiency, reduce administrative workloads and free clinicians to spend more meaningful time with patients.
But they also raise a multitude of questions, said Bryn Esplin, the institute’s education director.
“Whether that’s AI literacy, how communities access technology in general or just appreciating the true nature of what we’re asking from a tool that we’re seeing supplant relationships, the question that aligns with the institute is how can we design systems and safeguards that ensure AI deployment is done ethically and equitably,” Esplin said.
Keegan Warren, the institute’s executive director and a licensed attorney, said much of her research focuses on non-medical drivers of health. Increasingly, AI itself has become one of them.
She points to the growing use of generative AI in administrative and legal processes, where algorithms are now being used to assist with Medicaid coverage determinations, prior authorization and eligibility decisions. AI tools are also being applied in workers' compensation claims and child welfare cases, helping determine whether injuries qualify for benefits or whether children remain with their families.
In these cases, Warren said, algorithmic bias in AI tools could reinforce gaps in access and quality of care for disadvantaged groups.
“If you build new technology on top of an imperfect system, it doesn’t fix the cracks. It widens them,” Warren said.
Ambient listening tools — which record, transcribe and summarize patient visits in real time — are also being increasingly deployed to generate medical records. These documents later carry legal weight in everything from disability claims to consumer fraud cases and family court proceedings. While these technologies promise efficiency, Warren cautions that automated medical records can contain errors, raising questions about their evidentiary status.
And when AI systems fail, as in cases of wrongful coverage denial, the burden almost always falls on the patient, Warren said. Without thoughtful deployment, she emphasizes, automation can worsen health outcomes by turning clinical documentation errors into obstacles to both care and legal redress, leaving patients to navigate the resulting disputes and bear the health consequences of unresolved legal needs.
At the same time, Warren emphasizes that AI is already enabling meaningful advances in patient-centered care, such as radiology tools that can help assess, based on the way a bone breaks, whether an injury may have resulted from interpersonal violence. Esplin adds that AI systems are demonstrating “remarkable predictive accuracy,” including in identifying candidates for deep brain stimulation.
AI may deliver highly accurate health information, but it also complicates how patients understand and consent to care, Esplin said. She said patients may increasingly arrive at major medical decisions after interacting with generative AI systems rather than credentialed clinicians, forming relationships with technology that can influence trust and choice. When consent is shaped by automated tools, especially ones with embedded bias or a tendency to mirror user expectations, she said it becomes harder to ensure patients fully understand the risks and benefits on which their decisions rest.
For Esplin, an ethicist, those challenges make AI an opportunity to examine what she describes as a “moral disruption.”
“It’s an opportunity for the institute to conceive of the whole care team differently, starting with ourselves,” she said. “We, too, are patients. We shape and are shaped by the communities in which we live, not remote spectators without something at stake. I think part of the humanistic lens the institute adds is ensuring that sense of reciprocity in relationships gets carried over into health systems more intentionally. And with AI, we’re not afraid of it: we’re embracing it. When conversations are stalled, sometimes disruption — or interruption from new voices — is necessary to move the conversation forward.”