
When most of us walk into a doctor’s office, we come prepared with questions. What does this symptom mean? Do I really need this test? What are my treatment options? What we may not be aware of is that there’s often a silent, invisible guest answering those questions right alongside our clinicians: Artificial intelligence.
In the past few months alone, the FDA has cleared AI tools, such as Clairity Breast and Tyto Insights, that can predict five-year breast cancer risk from routine mammograms, analyze lung sounds during virtual visits, and map the outlines of organs on MRIs. These, and many others, may already be touching parts of our care without us ever being told they’re there. Increasingly, algorithms are offering risk scores, recommending treatment plans, and flagging which patients need attention first.
But the missing piece is that patients often have no idea when this technology is being used on them, and there’s no clear way to opt out. AI in healthcare brings real potential, but also real risks, especially for Black communities. What does informed consent look like in an era when algorithms are shaping our care?
“Patients don’t see a pop-up message that says, ‘Today’s care was brought to you by this algorithm,’” says Tiffani Bright, PhD, assistant professor at Cedars-Sinai and co-director of its Center for AI Research and Education. “The algorithmic influence is just there in the background.”
That background use can shape far more than we realize. Algorithms help hospitals decide who needs urgent care, which treatments are recommended, and even which appointment slot you get based on your symptoms. When the technology is invisible, Bright explains, patients lose something essential: Agency.
“If you don’t know it’s there, you don’t have an option,” she says. “You can’t say, ‘I don’t know who made this tool. I don’t know what data they used. I don’t know if it even works for patients like me.’” Bright believes that these tools should have to earn the trust of patients, just like doctors do, and that trust must be earned through transparency and equity.
For Black patients who already navigate a healthcare system shaped by discrimination, under-treatment, and misdiagnosis, lack of transparency can be downright harmful. AI systems learn from patterns in existing data, including medical records, imaging, and lab results. But the historical data may already reflect decades of unequal care.
“We have to ask who is represented in the data set, and who isn’t,” Bright says. “Anything that uses historical data can amplify existing disparities.” She is intentional about applying equity lenses in her work at the Center for AI Research and Education. “We test AI for language, gender, insurance, and things like that. We want to make sure that patients, and groups of patients, aren’t underrepresented in our records.”
Black women already face some of the most dangerous gaps in American healthcare, from higher rates of maternal mortality to under-treatment for pain to delayed cancer diagnoses. AI has the potential to help close those gaps, but only if equity is intentionally built into the technology and patients are informed participants in how it affects their care.
Antony Haynes, a privacy law attorney and professor at Albany Law School, notes that AI tools pick up patterns that reflect economic, racial, and cultural differences, even when race isn’t explicitly included. For example, pulse oximeters (the small clips placed on a finger to measure blood oxygen) are less accurate on darker skin, and temperature scanners often used in clinics can under-detect fevers in Black patients.
During COVID, an algorithm used by a major insurer prioritized healthier white patients over sicker Black ones, not because race was an input, but because Black patients historically receive less care, and thus appear “lower cost.” Another tool miscategorized asthma patients, who are disproportionately Black, as “low risk” based solely on hospital stay lengths. On a policy level, these disparities remain largely unaddressed. “Because Black patients are not the priority population for industry or regulators, these issues often aren’t corrected,” Haynes says.
So, what rights do patients actually have? Legally, this space is murky, but Haynes breaks down a few key points to note:
- HIPAA, the main federal health privacy law, does not require doctors to disclose when they use AI.
- Patients generally do not have a federal right to opt out of AI-assisted care.
- Vendors can often use “anonymized” patient data to train AI.
However, some states, like California, give residents the right to opt out of certain automated decisions. The caveat here is that hospitals themselves are usually exempt.
“In informed consent law, consent is typically required when your data is used for research,” Haynes says. “But for routine treatment, there’s no requirement at the federal level to inform you about the use of AI.” He believes this needs to change, and urgently.
“You as a human have the right to a human decision maker,” he says. “You have the right to know if your doctor relies on software. You have the right to request a human override.”
But even without transparency laws in place, patients still have power. Haynes encourages patients to ask their doctors questions like these:
- Are you using AI or software to help diagnose or treat me?
- How exactly is it being used?
- Are you relying on it, or is it just one tool among others?
- If I prefer a human-only decision, can that be done?
“I think you should always ask,” he says. “At the end of the day, you can seek a second opinion or choose a different provider.”
Bright agrees, adding that Black patients must feel empowered to interrogate the tools just as they interrogate the system. “Don’t be afraid of the technology, but do be informed. Do ask questions. Do use your voice. We want our tools used with our patients, not on them,” she says. “That’s the difference between ethical AI and everything else. You have the right to understand, and the right to say no.”
People can also call upon lawmakers to draft federal and state legislation that requires doctors to proactively disclose when AI is being used, requires vendors to disclose exactly how their algorithms were trained, and gives patients access to information explaining what a tool does and doesn’t do. “Patients shouldn’t have to guess,” Haynes says.
Ultimately, no matter how advanced the technology becomes, trust in healthcare still starts with one simple principle: Nothing about us, without us.