AI in Healthcare: Symposium Insights

For years, artificial intelligence (AI) has been growing behind the scenes of our lives. Starting off as modifications of not‑so‑simple algorithms, early large language models could barely string a few words together, much like early vision systems that struggled to distinguish a lamppost from a cat in digital images. More recently, AI has not just grown but proliferated—like Darwin’s finches in the Galapagos—into nearly every niche available in the digital world.

AI has infiltrated daily life, both personal and professional, for many, and while modern healthcare has historically been hesitant to adopt new technologies, Raghav Mani, director of Digital Health at Nvidia, pointed out that healthcare is adopting AI at three times the rate of other industries. Clearly, there is a lot to discuss, which is why The New York Academy of Sciences and the Windreich Department of Artificial Intelligence and Human Health at the Icahn School of Medicine at Mount Sinai co-hosted the 3rd annual “New Wave of AI in Healthcare,” a two-day symposium held May 12 and 13 with the goal of opening discourse among researchers, clinicians, industry leaders, and other interested parties on all topics related to AI and healthcare.

Day one

The first day opened with a lightning round of welcome remarks from organizers reflecting on their personal experience with AI in healthcare research and practice. Some, like Nicholas Dirks, PhD, president and CEO of The New York Academy of Sciences, shared concerns about how to maintain human involvement in AI use; even so, Dirks expressed awe, stating that “The pace of progress is breathtaking.”

Others were more practical in their assessments. Lisa Stump, chief digital information officer at Mount Sinai Health System, asserted, “The future is not something we enter, it’s something we create.” Similarly, Brendan G. Carr, MD, CEO of Mount Sinai Health System, described AI as a “new partner” to aid clinicians in synthesizing the vast and growing body of clinical data. Girish N. Nadkarni, MD, a nephrologist and practicing clinician at the Icahn School of Medicine at Mount Sinai, summarized the whole event before the first talk even began: “The real question is not IF AI will transform healthcare, but HOW.”

The keynote presentation leading day one’s discussions endeavored to answer that very question. In his talk, entitled “Harnessing the Power of Platform Thinking to Transform Healthcare,” John Halamka, MD, president of the Mayo Clinic Platform, spent 30 minutes exploring the power of data and questioning how AI is, and should be, used to analyze the varied data currently available. He cautioned that this is no simple task given the sources of the data and potential restrictions on its use. He spoke about practical applications of AI data analysis that have been and can be pursued, including in drug discovery, and pointed out that AI can fill gaps in the healthcare workforce.

The day continued with four talks exploring different aspects of AI model use in healthcare. Marina Sirota, PhD, professor at the University of California, San Francisco, spoke about how clinical data can be used for predictive medicine. Others, including Mani and Jonathan Carlson, PhD, vice president and managing director of Microsoft Health Futures, discussed how AI agents and models can be used as part of hospital and clinician toolkits at multiple levels—not just as data analysis engines, but also as aids in synthesizing patient data and supporting diagnosis. Rounding out the discussion, Azra Bihorac, MD, senior associate dean for research at the University of Florida, described how AI models need to be validated just like any other tool. She also pointed out that while AI is continuously improving in its ability to assess problems and suggest the next best course of action, human input is vital for collaborative success.

Panel discussion moderated by Robert Freeman, DNP. Panelists from left to right: Pierre Elias, MD, Karen Wong, MD and Alexander Fedotov, PhD

The final talks of day one focused on how AI can be used directly in patient care. Following their individual talks on integrating AI into electronic health records (EHRs), combining models to develop new insights, and reimagining diagnostics to improve diagnostic equity, the final three speakers engaged in a dynamic, and sometimes heated, panel discussion. Karen Wong, MD, a physician at Epic, Alexander Fedotov, PhD, director of AI digital precision health at AstraZeneca, and Pierre Elias, MD, assistant professor at Columbia University Irving Medical Center, each shared their thoughts on how AI will be used in the near future. While all agreed that AI cannot replace clinicians, they also recognized that it will be a disruptive force, and that it is up to clinicians to use the technology where appropriate while still relying on their intuition and judgment as trained professionals. When opining on the future of AI use in healthcare five years from now, Fedotov stated, “I would still want to see humans at the helm of all the decision maker processes.”

Day two

While the first day laid the foundations for AI use in healthcare, spanning bench to bedside, the second day of the symposium brought more discussion and criticism of AI at the logistical level.

Fireside chat between Girish N. Nadkarni, MD and Dave A. Chokshi, MD

The day began with a keynote fireside chat between Nadkarni and Dave A. Chokshi, MD, a physician and professor at the City University of New York and former NYC health commissioner. Chokshi spoke about his leadership experiences, sharing many anecdotes from his time as a public health advocate and communicator during the COVID-19 pandemic. When questioned on the importance of communication given the state of healthcare and the public’s declining trust—especially with the increased use of AI, which has the potential to add layers of abandonment, surveillance, and impersonalization—Chokshi pointed out that “It makes relationships even more important than we know they are.” He stressed that his job, as a clinician, is to build trust with patients and make sure that they return for care. While he envisions AI being transformative to healthcare in the next few years, he cautioned that listening to and integrating feedback from frontline users, clinical staff, and patients will be vital.

The morning continued with talks exploring AI’s use in research and learning in healthcare. Joshua C. Denny, MD, CEO of the NIH All of Us Research Program, delivered a detailed summary of the project’s progress. Despite recent funding concerns and cuts, the project’s scope remains on track: researchers worldwide are utilizing the data derived from the project, and the project leads are working to establish parameters and modules that let researchers more easily implement AI in their data analysis. Andrew Gruen, PhD, standards lead at MLCommons, then spoke animatedly about the importance of establishing standards and benchmarks for AI use in research and healthcare settings. He spoke candidly on the need not just to train AI but to have external evaluation and validation of AI models.

Panel discussion moderated by Girish N. Nadkarni, MD. From left to right: Karandeep Singh, MD, Girish N. Nadkarni, MD, and Vardit Ravitsky, PhD

The symposium concluded with multiple discussions on the interactions between AI and humans—viewing AI not just as a tool, but at a broader scale. Karandeep Singh, MD, executive director for health innovation at the University of California, San Diego, explored the varied opinions of clinicians and patients on the use of AI, while pointing out that the use of AI in healthcare settings should be thoughtfully considered before implementation. Meanwhile, Vardit Ravitsky, PhD, president and CEO of The Hastings Center for Bioethics, discussed the ethics of AI use in direct-to-patient settings, specifically patient-facing chatbots. In a debate following their respective talks, the two delved deeply into the risks associated with AI use, both on the patient side with chatbots and with scribe technologies used by clinicians and patients. They often agreed on the need for transparency in AI usage, but specific AI applications, like robots in the home to combat loneliness among the elderly, prompted disagreement.

The final talk, presented by Tanzeem Choudhury, PhD, chief of health innovation at Cornell Tech, brought many previously discussed topics together. Her research explores how AI can be used in the treatment of mental health conditions, spanning multiple aspects of therapy from recording physiological symptoms with wearables to deploying chatbots for various functions. She cautioned that while these tools may eventually be transformative, the current state of AI use in mental health is still evolving.

The closing remarks by Alexander Charney, MD, PhD, professor at the Icahn School of Medicine at Mount Sinai, summarized the event well. He shared that throughout the symposium he imagined what clinicians and researchers from 100 years ago, and from 100 years in the future, would think about the current state of healthcare and about the challenges of incorporating AI. He said, “We aren’t the first group of human beings to deal with powerful technology and figuring out how we’re going to use it to change society.” He hopes the people from the past would see that we understand and respect the past and learn from it by being rigorous in our research and testing, while the people from the future will look on us with pride at our fearlessness and tenacity in the face of new technology. He hopes that both groups would see that we “tried to do the right thing.” He ended by saying that he does see all of that here, along with passion and a coming together of everyone at the meeting.


Cross-Dataset Evaluation of an Automated Video-Based Model for Detecting Tardive Dyskinesia Using the Clinician’s Tardive Inventory: Validation Study

Background: Tardive dyskinesia (TD) is a common, often underrecognized movement disorder resulting from long-term antipsychotic use, yet its detection in routine mental health care remains inconsistent despite the availability of structured rating scales.

Objective: This study evaluated the performance of an artificial intelligence-powered, video-based model for detecting abnormal movements associated with TD using the Clinician's Tardive Inventory (CTI) dataset. We compared the model's automated assessments of videos from the CTI dataset with previously completed clinician-rated Abnormal Involuntary Movement Scale (AIMS) and CTI scores for the same videos to determine the model's reliability and the accuracy of its conclusions relative to expert raters.

Methods: In total, 69 videos with corresponding AIMS and CTI ratings were analyzed using the previously reported visual transformer model TDtect. The dataset included single-video assessments per participant, with varied instructions and movement types. The relationship between automated predictions and clinician ratings was assessed using Pearson correlation, and predictive accuracy was evaluated using area under the curve (AUC) metrics.

Results: The model showed a strong correlation with AIMS total scores (r=0.717) and high diagnostic accuracy (AUC 0.854), which improved further at an optimized threshold (AUC 0.900). Performance differed across anatomical regions, with the tongue, lips, and jaw displaying the highest predictive reliability. Functional CTI components had weaker correlations (r=0.27-0.63), as expected given the subjective nature of these measures.

Conclusions: These findings provide preliminary evidence that an artificial intelligence-driven TD detection model can generalize across video protocols, suggesting potential for broader clinical applicability, although further validation is needed. Future refinements and fine-tuning are expected to enhance accuracy, particularly in predicting functional impact.
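As background on the reported metrics, Pearson correlation and AUC are standard validation statistics. A minimal Python sketch, using made-up numbers rather than the study's data, might look like this:

```python
# A minimal sketch (not the study's code) of the validation metrics described
# above: Pearson correlation against clinician AIMS totals and AUC for TD
# detection. All numbers below are illustrative placeholders.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical per-video data: an automated severity score, a clinician-rated
# AIMS total, and a binary TD label from expert raters.
model_scores = np.array([0.10, 0.40, 0.35, 0.80, 0.70, 0.20, 0.90, 0.55])
aims_totals  = np.array([1,    5,    4,    12,   9,    2,    14,   6])
td_labels    = np.array([0,    1,    0,    1,    1,    0,    1,    1])

# Correlation between automated predictions and clinician ratings.
r, p = pearsonr(model_scores, aims_totals)
print(f"Pearson r = {r:.3f} (p = {p:.3g})")

# Diagnostic accuracy, plus one common way to pick an "optimized threshold"
# (Youden's J; the abstract does not specify which method was used).
auc = roc_auc_score(td_labels, model_scores)
fpr, tpr, thresholds = roc_curve(td_labels, model_scores)
best = np.argmax(tpr - fpr)
print(f"AUC = {auc:.3f}, Youden-optimal threshold = {thresholds[best]:.2f}")
```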

Adoption of Digital Mental Health Interventions in National Health Service England, Scotland, and Wales: Freedom of Information Questionnaire Study

Background: Digital mental health interventions (DMHIs) have been widely promoted to improve access to mental health care within the UK National Health Service (NHS), particularly following the COVID-19 pandemic. In 2015, a total of 48 technologies were reportedly used in NHS services in England, but over the past decade, substantial changes to regulatory requirements, evidence standards, and procurement processes have reshaped the digital mental health landscape. There is limited clarity regarding which DMHIs are currently being formally procured and funded by NHS mental health services across the United Kingdom.

Objective: This study aimed to identify and describe the DMHIs currently procured, contracted, or paid for by NHS mental health service providers in England, Scotland, and Wales for adult common mental health problems and to compare current procurement practices with findings reported in 2015.

Methods: Freedom of Information requests were submitted to all NHS mental health trusts in England and all health boards in Scotland and Wales. Responses were collated and screened to provide an updated and extended record of which technologies are reportedly procured or paid for by services.

Results: In total, 19 different DMHIs were identified as being procured across mental health service providers for adult common mental health problems at the time of data collection. This demonstrates a substantial reduction in the number of technologies being adopted into practice compared to the 48 reported in England in 2015. The findings reveal several key insights, including that only 2 technologies have remained in use for a decade, and they shed light on the types of technologies being selected and the variations in procurement practices among the 3 national health services.

Conclusions: Despite the expansion of the digital mental health marketplace, the number of DMHIs formally procured by NHS mental health services has markedly decreased over the past decade. This consolidation may reflect increased selectivity and the adoption of higher-quality products, driven by strengthened regulatory oversight, evidence standards, and national guidance. Although these developments may enhance safety and quality assurance, they also raise important questions about innovation, market sustainability, and equitable access to digital mental health care. Ongoing monitoring of procurement practices is needed to inform policy, service design, and the future development of DMHIs.

Wireless Stress Detector Offers Multiple Medical Uses

A next-generation device that detects signs of stress could have wide-ranging applications, from investigating sleep disorders to detecting signs of sepsis.

The polygraph detector, described in Science Advances, is worn on the chest and can even sense when a person is lying.

It allows psychophysiological states to be continuously monitored through a combination of multimodal sensing and wireless data transmission.

The gadget offers an alternative to current approaches such as polygraphy and polysomnography (PSG), which involve cumbersome wired sensors that limit their practicality.

“By uncovering mechanistic links between autonomic imbalance, stress reactivity, and health outcomes, these devices have the potential to transform diagnostic workflows, optimize educational programs, and enable personalized therapeutic monitoring across stress medicine, pediatrics, and behavioral health,” reported Sun Hong Kim, PhD, from the University of Seoul in South Korea, and co-workers.

Subtle physiological variations in cardiac, respiratory, electrodermal, and thermal activity often serve as indicators of compromised health or heightened stress responses.

These can be reflected in many scenarios, from pediatric sleep disorders that disrupt neurodevelopment to the psychological strain experienced in high-stakes clinical settings or during polygraph examinations.

Accurate monitoring of psychophysiological states is therefore essential for understanding how stress and autonomic dysfunction manifest across a wide spectrum of medical conditions.

However, most existing devices monitor only one or two parameters or rely on electrochemical sensors that detect sweat biomarkers, thereby failing to reflect the complex and dynamic interplay between multiple physiological systems.

Wearable polygraph device in the palm of a hand for scale. [John A. Rogers/Northwestern University]

Kim and co-workers therefore designed a single platform to enable comprehensive assessment of autonomic and stress-related physiology in real time.

The device continuously measures changes in heartbeat, skin temperature, and breathing, which are then converted using machine learning into measures of psychological strain.

The device showed high fidelity with gold-standard systems in quantifying the complex psychological stress induced by polygraph interviews and complex cognitive load tasks, as well as the physical stress caused by repeatedly putting a hand in iced water.

During overnight monitoring of children, it reliably identified arousals, hypopnea, and apnea while revealing disease-specific autonomic signatures among infants with Down syndrome.

Real-world deployment during emergency simulation training showed that multimodal stress signatures correlate inversely with performance, reflecting its value for medical education.

Machine learning analyses across all studies confirmed that multimodal features outperformed single-signal approaches in detecting stress and clinical events with high sensitivity and specificity.
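As an illustration of that comparison, a minimal sketch on simulated data (hypothetical signals and labels, not the study's dataset or pipeline) might pit a single-signal classifier against a multimodal one:

```python
# Sketch: compare stress classifiers trained on one signal vs. several.
# Data are simulated; feature choices are assumptions, not the study's.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
stress = rng.integers(0, 2, n)                      # binary stress label
hr   = 70 + 15 * stress + rng.normal(0, 8, n)       # heart rate (bpm)
resp = 14 + 4 * stress + rng.normal(0, 3, n)        # breaths per minute
temp = 33 - 0.5 * stress + rng.normal(0, 0.6, n)    # skin temperature (C)

single     = hr.reshape(-1, 1)                      # one signal only
multimodal = np.column_stack([hr, resp, temp])      # fused features

clf = LogisticRegression(max_iter=1000)
for name, X in [("single-signal (HR)", single), ("multimodal", multimodal)]:
    auc = cross_val_score(clf, X, stress, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC = {auc:.3f}")
```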

“A particularly notable contribution lies in pediatric sleep medicine,” the authors noted.

“Simultaneous comparison with PSG confirms the ability to detect arousals, hypopnea, and apnea while also providing mechanistic insights into autonomic regulation.

“In infants with Down syndrome, multimodal analysis reveals attenuated sympathetic responsiveness and parasympathetic dominance, consistent with known vulnerabilities in airway patency and autonomic control.

“Such disease-specific autonomic signatures may serve as valuable biomarkers for risk stratification, early diagnosis, and targeted intervention in neurodevelopmental disorders.”


Prior Heart Attack Linked to Faster Cognitive Decline Over Time

People who have experienced a heart attack, including those who had a “silent” heart attack that hadn’t been previously diagnosed, showed faster declines in memory and thinking skills over time, according to a study published in the journal Stroke. Researchers found that evidence of a previous myocardial infarction was associated with an accelerated rate of cognitive decline and a higher likelihood of developing cognitive impairment during more than a decade of follow-up, indicating this is a cohort of patients who may need to take more proactive measures to retain cognitive acuity as they age.

“Having had a heart attack in the past may speed up the decline in memory and thinking over time,” said study lead author Mohamed Ridha, MD, an assistant professor of neurology at The Ohio State University. “Given the rising burden of dementia and cognitive decline among Americans, it is important to understand how cardiovascular disease affects their brain health. This knowledge can help heart attack survivors take steps to improve their brain health as they age.”

The research analyzed data from 20,923 adults enrolled in the REGARDS (Reasons for Geographic and Racial Differences in Stroke) study, a national cohort designed to examine racial and geographic disparities in stroke outcomes in the United States. Participants, who were enrolled between 2003 and 2007, had interpretable electrocardiograms and no evidence of cognitive impairment at the start of the study. Their average age was 63 years, with 62% identified as White adults and 38% as Black adults.

The team used a combination of self-reported medical history and electrocardiogram readings to determine whether participants had evidence of a prior heart attack. Patients were categorized into three groups: those with a self-reported heart attack, those with a clinically recognized heart attack confirmed by electrocardiogram, and those with a silent heart attack, defined as electrocardiographic evidence of myocardial infarction without a prior diagnosis.

All participants took part in an annual telephone-based cognitive screening for a median of 10.1 years. The six-question assessment evaluated orientation and memory recall, with lower scores indicating poorer cognitive performance. Investigators adjusted for other factors that are known to be associated with cognitive decline including age, sex, race, education, exercise frequency, diabetes, smoking, blood pressure, depression, kidney function, and cardiovascular events that occurred during follow-up.
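Analyses like this are often framed as covariate-adjusted longitudinal regression, in which a time-by-exposure interaction term captures accelerated decline. The sketch below, on simulated data with assumed variable names (not the REGARDS data or the authors' code), illustrates the idea:

```python
# Sketch of a covariate-adjusted longitudinal model: the years:prior_mi
# interaction is what captures "accelerated" decline. Variable names and
# data are simulated assumptions, not the REGARDS data or authors' code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n, visits = 300, 8
df = pd.DataFrame({
    "pid":      np.repeat(np.arange(n), visits),           # participant ID
    "years":    np.tile(np.arange(visits), n),             # follow-up time
    "prior_mi": np.repeat(rng.integers(0, 2, n), visits),  # prior heart attack
    "age":      np.repeat(rng.normal(63, 8, n), visits),   # baseline age
})
# Simulated six-item screening score: prior MI steepens the decline slope.
df["score"] = (6 - 0.05 * df["years"] - 0.04 * df["years"] * df["prior_mi"]
               - 0.01 * (df["age"] - 63) + rng.normal(0, 0.4, len(df)))

# Random intercept per participant; a negative years:prior_mi coefficient
# indicates faster decline among those with a prior heart attack.
model = smf.mixedlm("score ~ years * prior_mi + age", df, groups=df["pid"])
print(model.fit().summary())
```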

Among the study population, 2,183 participants had evidence of prior myocardial infarction at baseline. Of those cases, 1,098 were self-reported heart attacks, 281 were clinically recognized heart attacks confirmed by electrocardiogram, and 804 were silent heart attacks. Nearly 37% of all heart attacks identified in the study were clinically silent.

Compared with participants without a prior heart attack, heart attack survivors had a 5% higher annual risk of developing cognitive impairment. The accelerated decline was observed across all categories of prior heart attack, including silent myocardial infarction, and was consistent across races and sexes.

The study adds to prior research that has linked cardiovascular disease and dementia risk and noted the importance of identifying such patients. “Previous investigations of incident coronary ischemic events have demonstrated that the impact on cognitive function is not immediate but manifests as a subsequent accelerated rate of long-term cognitive decline,” the researchers wrote. “Vascular contributions to cognitive impairment, including stroke, are prevalent and potentially modifiable factors underlying cognitive decline.”

The findings could help clinicians provide preventative care, since electrocardiograms and patient history are commonly available in routine practice. These tools could help identify patients who may benefit from counseling and monitoring related to cognitive health, and Ridha noted that clinicians caring for heart attack survivors should discuss ways to reduce the risk of cognitive decline and dementia as patients age.

While the biological mechanisms linking heart attack and cognitive decline remain uncertain, the study’s discussion proposed several possible contributors, including microvascular disease, silent cerebral infarcts, systemic inflammation, reduced blood flow to the brain, and impaired amyloid clearance.


Large-scale meta- and cross-trait analyses uncover shared genetic risk factors for IBS and psychiatric disorders

Introduction: Irritable bowel syndrome (IBS) is a common gut-brain axis disorder characterized by abdominal pain and altered bowel habits, and it shows high comorbidity with psychiatric disorders. However, the shared genetic mechanisms underlying these associations remain incompletely understood.

Methods: We performed a large-scale meta-analysis of IBS in individuals of European ancestry by integrating genome-wide association study (GWAS) summary statistics from the UK Biobank, Bellygenes, and the Million Veteran Program (MVP), thereby increasing statistical power to detect novel IBS loci. We further conducted global genetic correlation analyses with psychiatric traits, followed by multi-trait analysis of GWAS (MTAG) and conditional false discovery rate (condFDR) analyses to identify pleiotropic loci. Transcriptomic, methylomic, and expression quantitative trait locus (eQTL) data were integrated to explore potential regulatory mechanisms.

Results: The meta-analysis identified up to ten previously unreported IBS loci, several of which were supported by colonic and brain eQTL effects. Global genetic correlation analyses confirmed substantial genetic overlap between IBS and psychiatric traits, particularly major depressive disorder and neuroticism. MTAG and condFDR analyses uncovered more than 100 pleiotropic loci, including signals at SORCS1, SLC35D1, COA1, and TLE1. Integrative analyses of transcriptome- and methylome-wide data highlighted regulatory mechanisms spanning colonic, immune, and neuronal tissues, supporting neuro-immune crosstalk and mitochondrial involvement.

Discussion: Our findings provide a comprehensive genetic characterization of IBS, refine its heritable basis, reveal pleiotropic links with psychiatric disorders, and implicate molecular pathways across the gut-brain axis. These results advance mechanistic understanding of IBS and may inform future therapeutic development for IBS and its psychiatric comorbidities.
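For background on the core step, combining per-variant GWAS summary statistics across cohorts is conventionally done with fixed-effect inverse-variance weighting. A minimal sketch on made-up numbers (not the study's data):

```python
# Sketch: fixed-effect inverse-variance-weighted (IVW) meta-analysis for a
# single variant across three cohorts. Numbers are illustrative only; the
# study's pipeline (plus MTAG and condFDR) is far more involved.
import numpy as np
from scipy.stats import norm

betas = np.array([0.08, 0.05, 0.07])   # per-cohort effect sizes (illustrative)
ses   = np.array([0.02, 0.03, 0.015])  # per-cohort standard errors

w = 1.0 / ses**2                        # inverse-variance weights
beta_meta = np.sum(w * betas) / np.sum(w)
se_meta = np.sqrt(1.0 / np.sum(w))
z = beta_meta / se_meta
p = 2 * norm.sf(abs(z))                 # two-sided p-value
print(f"meta beta = {beta_meta:.4f}, SE = {se_meta:.4f}, p = {p:.2e}")
```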

Can the treatment effects of human-animal interaction be maintained? A randomized controlled trial including follow-up in people with severe mental illness

Introduction: There are persistent demands for well-designed randomized controlled trials (RCTs), including follow-up measurements, in studies on animal-assisted treatment (AAT). In addition, a possible dose-response relationship is under discussion. The aim of the present study was to investigate the efficacy of a single-session AAT with sheep, including a booster exercise, over a follow-up period of four weeks.

Methods: In an RCT, a single-session AAT with sheep in a group setting, including an imaginative booster exercise conducted in the week following the AAT session, was compared to treatment as usual (TAU). Sixty psychiatric inpatients with severe mental illness were assessed for positive and negative emotions, mindfulness, and self-efficacy expectancy at baseline (PRE), immediately after the intervention (POST), and at one-week and four-week follow-ups.

Results: The results indicate significant differences between the two groups at POST and still at the one-week follow-up (FU1) in three of four outcomes. Within the intervention group, analyses demonstrated significant improvements from PRE to POST and from PRE to FU1 across all outcomes, with large effect sizes. At the four-week follow-up, all significant effects had diminished.

Conclusions: An imaginative booster exercise conducted within one week after an AAT session was effective in maintaining large effect sizes for up to one week. However, the results did not persist at the four-week follow-up. Longer follow-up periods, variations in the number of sessions, and the inclusion of active control groups are therefore necessary for further AAT studies.

Trial registration: https://drks.de/search/de/trial/DRKS00031347, identifier DRKS 00031347
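For context on what "large effect sizes" means here: paired PRE-to-POST changes are typically summarized with a standardized effect size such as Cohen's d for paired samples. A minimal sketch on made-up scores:

```python
# Sketch: Cohen's d for paired samples (d_z), the usual way a PRE-to-POST
# "large effect size" is quantified. Scores below are made up, not trial data.
import numpy as np

pre  = np.array([2.1, 3.0, 2.5, 1.8, 2.9, 2.2])  # hypothetical PRE scores
post = np.array([3.4, 3.2, 3.3, 2.0, 4.0, 2.4])  # hypothetical POST scores

diff = post - pre
d_z = diff.mean() / diff.std(ddof=1)   # mean change / SD of change
print(f"d_z = {d_z:.2f} (0.8 or above is conventionally 'large')")
```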

Getting the timing right. Autistic adolescents reflect on the value of an early diagnosis

Introduction: In Western countries, autism diagnoses are increasingly assigned in the first years of life. But is earlier necessarily better? Despite potential benefits, autistic infants and toddlers cannot participate in these discussions. In the ethical debate on early autism diagnosis, this raises tensions between parental duties and rights, and the child’s developing autonomy.

Methods: To address the lack of autistic voices in this debate, we queried a diverse group of 18 autistic adolescents (aged 16–18). In a set of in-depth interviews, we explored their experiences of their autism diagnosis, and their views on the ideal timing of such a diagnosis, if at all.

Results: Using the QUAGOL data-analysis method, we developed three themes: (1) (Not) feeling different, (2) Drawing up the balance of the label’s value, and (3) Getting the timing right. Adolescents experiencing the most difficulties in navigating the neurotypical world also seemed to value the diagnostic label most, and vice versa. Nevertheless, nearly all adolescents favored a relatively early diagnosis and early disclosure thereof—not necessarily in infancy, but early enough to enable timely support for both themselves and their parents. Crucially, adolescents emphasized that such early support should be personalized, readily available, and neurodiversity-affirmative to make early diagnosis truly worthwhile.

Discussion: Our data did not corroborate any presumed clash of interests between parents and autistic children. Consequently, we suggest moving this ethical debate away from a discourse based on individual rights or interests toward a relational, care ethics approach.

The shock of seeing your body used in deepfake porn 

When Jennifer got a job doing research for a nonprofit in 2023, she ran her new professional headshot through a facial recognition program. She wanted to see if the tech would pull up the porn videos she’d made more than 10 years before, when she was in her early 20s. It did in fact return some of that content, and also something alarming that she’d never seen before: one of her old videos, but with someone else’s face on her body.

“At first, I thought it was just a different person,” says Jennifer, who is being identified by a pseudonym to protect her privacy. 

But then she recognized a distinctly garish background from a video she’d shot around 2013, and she realized: “Somebody used me in a deepfake.”

Eerily, the facial recognition tech had identified her because the image still contained some of Jennifer’s features—her cheekbones, her brow, the shape of her chin. “It’s like I’m wearing somebody else’s face like a mask,” she says. 

“It’s like I’m wearing somebody else’s face like a mask.”

Conversations about sexualized deepfakes—which fall under the umbrella of nonconsensual intimate imagery, or NCII—most often center on the people whose faces are featured doing something they didn’t really do or on bodies that aren’t really theirs. These are often popular celebrities, though over the past few years more people (mostly women and sometimes youths) have been targeted, sparking alarm, fear, and even legislation. But these discussions and societal responses usually are not concerned with the bodies the faces are attached to in these images and videos.

As Jennifer, now 37 and a psychotherapist working in New York City, says: “There’s never any discussion about Whose body is this?” 

For years, the answer has generally been adult content creators. Deepfakes in fact earned their name back in November 2017, when someone with the Reddit username “deepfakes” uploaded videos showing faces of stars like Scarlett Johansson and Gal Gadot pasted onto porn actors’ bodies. The nonconsensual use of their bodies “happens all the time” in deepfakes, says Corey Silverstein, an attorney specializing in the adult industry. 

But more recently, as generative AI has improved, and as “nudify” apps have begun to proliferate, the issue has grown far more complicated—and, arguably, more dangerous for creators’ futures. 

Porn actors’ bodies aren’t necessarily being taken directly from sexual images and videos anymore, or at least not in an identifiable way. Instead, they are inevitably being used as training data to inform how new AI-generated bodies look, move, and perform. This threatens the livelihood and rights of porn actors as their work is used to train AI nudes that in turn could take away their business. And that’s not all: Advancements in AI have also made it possible for people to wholly re-create these performers’ likenesses without their consent, and the AI copycats may do things the performers wouldn’t do in real life. This could mean their digital doubles are participating in certain sex acts that they haven’t agreed to do, or even that they’re perpetrating scams against fans. 

Adult content creators are already marginalized by a society that largely fails to protect their safety and rights, and these developments put them in an even more vulnerable position. After Jennifer found the deepfake featuring her body, she posted on social media about the psychological effects: “I’ve never seen anyone ask whether that might be traumatic for the person whose body was used without consent too. IT IS!” Several other creators I spoke with shared the mental toll that comes with knowing their bodies have been used nonconsensually, as well as the fear that they’ll suffer financially as other people pirate their work. Silverstein says he hears from adult actors every day who “are concerned that their content is being exploited via AI, and they’re trying to figure out how to protect it.” 

One law professor and expert in violence against women calls these creators the “forgotten victims” of NCII deepfakes. And several of the people I spoke with worry that as the US develops a legal framework to combat nonconsensual sexual content online, adult actors are only at risk of further injury; instead of helping them, the crackdown on deepfakes may provide a loophole through which their content and careers could be stripped from the internet altogether.

How deepfakes cause “embodied harms”

During his preteen years in the 1970s, Spike Irons, now a porn actor and president of the adult content platform XChatFans, was “in love” with Farrah Fawcett. Though Fawcett did not pose nude, Irons managed to get his hands on what looked like pictures of her naked. “People were cutting out faces and pasting them on bodies,” Irons says. “Deepfakes, before AI, had been going around for quite a while. They just weren’t as prolific.”

The early public internet was rife with websites capitalizing on the idea that you could use technology to “see” celebrities naked. “People would just use Microsoft Paint,” says Silverstein, the attorney. It was a simple way to mash up celebrities’ faces with porn. 

People later used software like Adobe After Effects or FakeApp, which was designed to swap two individuals’ faces in images or videos. None of these programs required serious expertise to alter content, so there was a low barrier to entry. That, plus the wealth of porn performers’ videos online, helped make face-swap deepfakes that used real bodies prevalent by the 2010s. When, later in the decade, deepfakes of Gal Gadot and Emma Watson caused something of a broader panic, their faces were allegedly swapped onto the bodies of the porn actors Pepper XO and Mary Moody, respectively.

But it wasn’t just high-profile actors like them whose bodies were being used. Jennifer was “a very minor performer,” she says. “If it happened to me, I feel like it could happen to anybody who’s shot porn.” Since he started his practice in 2006, Silverstein says, “numerous clients” have reached out to report “This is my body on so-and-so.” 

Both people whose faces appear in NCII deepfakes and those whose bodies are used this way can feel serious distress. Experts call this type of damage “embodied harms,” says Anne Craanen, who researches gender-based violence at the UK’s Institute for Strategic Dialogue, an organization that analyzes extremist content, disinformation, and online threats. 

The term reflects the fact that even though the content exists in the virtual realm, it can cause physiological effects, including body dysmorphia. The face-swapped entity occupies the uncanny valley, distorting self-perception. After discovering their faces in sexual deepfakes, many people feel silenced, experts told me; they may “self-censor,” as Craanen puts it, and step back from public-facing life. Allison Mahoney, an attorney who works with abuse survivors, says that people whose faces appear in NCII can experience depression, anxiety, and suicidal ideation: “I’ve had multiple clients tell me that they don’t sleep at night, that they’re losing their hair.” 

Independent creators aren’t just “having sex on camera.” For someone to rip off their work “for their own entertainment or financial gain fucking sucks.”

Though the impact on people whose bodies are used hasn’t been discussed or studied as often, Jennifer says that “it’s just a really terrible feeling, knowing that you are part of somebody else’s abuse.” She sees it as akin to “a new form of sexual violence.”

The uncertainty that comes with not being aware of what your body is doing online can be highly unsettling. Like Jennifer, many adult actors don’t really know what’s out there. But some devoted followers know the actors’ bodies well—often recognizing tattoos, scars, or birthmarks—and “very quickly they bring [deepfakes] to the adult performer’s attention,” says Silverstein. Or performers will stumble upon the content by chance; some 20 years ago, for instance, the first such client to tell Silverstein her body was being used in a deepfake happened to be searching Nicole Kidman online when she found that one of the results showed Kidman’s face on her porn. “She was devastated, obviously, because they took her body,” he says, “and they were monetizing it.” 

Otherwise, this imagery may be found by an organization like Takedown Piracy, one of several copyright enforcement companies serving adult content creators. US copyright violations can be challenging to prove if someone’s body lacks distinguishing features, says Reba Rocket, Takedown Piracy’s chief operating and marketing officer. But Rocket says her team has added digital fingerprinting technology to clients’ material to help flag and remove problematic videos, often finding them before clients realize they’re online. 

By capturing “tens of thousands of tiny little visual data points” from videos, digital fingerprinting creates unique corresponding files that can be used to identify them, Rocket says—kind of like an invisible watermark. The prints remain even if pirates alter the videos or replace performers’ faces. Takedown Piracy has digitally fingerprinted more than half a billion videos and the organization has gotten 130 million copyrighted videos taken down from Google alone (though, of those videos, Rocket hasn’t tracked how many of these specifically include someone else’s face on a performer’s body). 
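Takedown Piracy's exact system is proprietary, but the technique Rocket describes resembles perceptual hashing. As a rough sketch under that assumption (synthetic frames, hypothetical helper names), a frame-level average-hash fingerprint with Hamming-distance matching might look like this:

```python
# Sketch of the general idea behind perceptual video fingerprinting (the
# commercial systems are proprietary; this is an assumption-laden toy):
# hash each frame to a tiny bit pattern, concatenate, and match videos by
# Hamming distance. Small edits flip few bits, so altered copies stay close.
import numpy as np

def average_hash(frame, size=8):
    """64-bit perceptual hash: downsample to size x size, threshold at mean."""
    h, w = frame.shape
    blocks = frame[: h - h % size, : w - w % size].reshape(
        size, h // size, size, w // size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()

def fingerprint(frames):
    """Concatenate per-frame hashes into one binary fingerprint."""
    return np.concatenate([average_hash(f) for f in frames])

rng = np.random.default_rng(2)
video     = [rng.random((120, 160)) for _ in range(10)]        # fake frames
edited    = [f + rng.normal(0, 0.02, f.shape) for f in video]  # mild tampering
unrelated = [rng.random((120, 160)) for _ in range(10)]        # different video

fp = fingerprint(video)
for name, v in [("edited copy", edited), ("unrelated video", unrelated)]:
    dist = np.count_nonzero(fp != fingerprint(v))
    print(f"{name}: Hamming distance = {dist}/{fp.size} bits")
```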

Besides copyright, a range of legal tools can be used to try and combat NCII, says Eric Goldman, a law professor at Santa Clara University. For example, victims can claim invasion of privacy. But using these tools isn’t particularly straightforward, and they may not even apply when it comes to someone’s body. If there aren’t, for instance, unique markers indicating that a body in a deepfake belongs to the person who says it does, US law “doesn’t really treat [this content] as invasion of privacy,” Goldman says, “because we don’t know who to attribute it to.”

In a 2018 study that reviewed “judicial resolution” of cases involving NCII, Goldman found that one successful way plaintiffs were able to win cases was to assert “intentional infliction of emotional distress.” But again, that hinges on the ability to clearly identify the person in the content. Relevant statutes, he adds, might also require “intent to harm the individual,” which may be hard to show for people whose bodies alone are featured.

“AI girls will do whatever you want”

In the last few years, Silverstein says, it’s become less and less common to see the bodies of real adult content creators in deepfakes, at least in a way that makes them clearly identifiable. 

Sometimes the bodies have been manipulated using AI or simpler editing tools. This can be as basic as erasing a birthmark or changing the size of a body part—minor edits that make it impossible to identify someone’s image beyond a reasonable doubt, so even porn actors who can tell that an altered image used their body as a base won’t get very far in the legal realm. “A lot of people are like, That looks like my body,” says Silverstein, but when he asks them how, they’ll reply, It just does.

At the same time, other users are now creating NCII with wholly AI-generated bodies. In “nudify” apps, anyone with a minimal grasp of technology can upload a photo of someone’s clothed body and have it replaced with a fake naked one. “So [much] of this content being created is just someone’s face on an AI body,” Silverstein says.

Such apps have drawn a ton of attention recently, from Grok “nudifying” minors to Meta running ads for—and then suing—the nudify app Crushmate. But there’s been relatively little attention paid to the content being used to train them. They almost certainly draw on the more than 10,000 terabytes of online porn, and performers have virtually zero recourse. 

One reason is that creators aren’t able to demonstrate with any certainty that their content is being used to train AI models like those used by nudify apps. “These things are all a black box,” says Hany Farid, a professor at the University of California, Berkeley, who specializes in digital forensics. But “given the ubiquity” of adult content, he adds, it’s a “reasonable assumption” that online porn is being used in AI training. 

“It’s just not at all difficult to come up with pornographic data sets on the internet,” says Stephen Casper, a computer science PhD student at MIT who researches deepfakes. What’s more, he says, plenty of shadowy online communities provide “user guides” on how to use this data to train AI, and in particular programs that generate nudes. 

It’s not certain whether this activity falls within the US legal definition of “fair use”—an issue that’s currently being litigated in several lawsuits from other types of content creators—but Casper argues that even if it does, it’s ethically murky for porn created by consenting adults 10 years ago to wind up in those training data sets. When people “have their stuff used in a way that doesn’t respect or reflect reasonable expectations that they had at that time about what they were creating and how it would be used,” he says, there’s “a legitimate sense in which it’s kind of … nonconsensual.” 

Adult performers who started working years ago couldn’t possibly have consented to AI anything; Jennifer calls AI-related risks “retroactively placed.” Contracts that porn actors signed before AI, adds Silverstein, might provide that “the publisher could do anything with the content using technology that now exists or hereafter will be discovered.” That felt more innocuous when producers were talking about the shift from VHS to DVD, because that didn’t change the content itself, just the way it was conveyed. It’s a far different prospect for someone to use your content to train a program to create new content … content that could replace your work altogether. 

Of course, this all affects creators’ bottom line—not unlike the way Google’s AI overviews affect revenue for online publishers who’ve stopped getting clicks when people are content with just reading AI-generated summaries. Performers’ “concern is … it’s another way to pirate [their] content,” says Rocket. 

After all, independent creators aren’t just “having sex on camera,” as the adult content creator Allie Eve Knox says. They’re paying for filming equipment and location rentals, and then spending hours editing and marketing. For someone to then rip off and distort that content “for their own entertainment or financial gain,” she says, “fucking sucks.” 


Tanya Tate, a longtime adult content creator, tells me about another highly unsettling AI-created situation: She was recently chatting with a fan on Mynx, a sexting app, when he asked her if she knew him. She told him no, and “his eyes just started watering,” Tate says. He was upset because he thought she did know him. Turns out he’d sent $20,000 to a scammer who’d used an AI-generated deepfake of Tate to seduce him. 

Several men, Tate subsequently learned, had been scammed by an AI version of her, and some of them began blaming her for their losses and posting false statements about her online. When she reported one particularly aggressive harasser to the police, they told her he was exercising his “freedom of speech,” she says. Rocket, too, is familiar with situations where AI is used to take advantage of fans. “The actual content creator will get nasty emails from these people who’ve been scammed,” she says.

Other porn actors say they fear that their likenesses have been used without consent to do other things they wouldn’t do. One, Octavia Red, tells me she doesn’t do anal scenes, “but I’m sure there’s tons of deepfake anal videos of me that I didn’t consent to.” That could cost her, she fears, if viewers choose to watch those videos instead of subscribing to her websites. And it could cause fans to develop false expectations about what kind of porn she’ll create.

“I saw one AI creator saying, ‘Well, AI girls will do whatever you want. They don’t say no,’” says Rocket. “That horrifies me … especially if they’re training those AI models on real people. I don’t think they understand the damage to mental health or reputation that that can create. And once it’s on the internet, it’s there forever.” 

Efforts to “scrub adult content from the internet”

As AI technology improves, it’s increasingly difficult for people to discern any type of real video from the best AI-generated ones on their own. In one 2025 study, UC Berkeley’s Farid found that participants correctly identified AI-generated voices about 60% of the time (not much better than random chance), while advances like false heartbeats make AI-generated humans tougher than ever to spot.
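As a side note on that statistic: whether 60% is meaningfully better than the 50% expected from guessing depends on the number of trials. A quick sketch, with a trial count assumed purely for illustration:

```python
# Sketch: is 60% accuracy meaningfully better than 50% guessing? The trial
# count (100) is an assumption for illustration, not the study's sample size.
from scipy.stats import binomtest

result = binomtest(k=60, n=100, p=0.5, alternative="greater")
print(f"p-value vs. chance = {result.pvalue:.3f}")
# ~0.03: statistically above chance at n=100, yet practically close to
# guessing, consistent with "not much better than random chance".
```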

Nevertheless, most lawyers and legal experts I spoke with said copyright laws are still adult performers’ best bet in the US legal system, at least for getting their face-swapped content taken down. For his clients, Silverstein says, he tries to figure out the content’s origins and then issue takedown requests under the Digital Millennium Copyright Act, a 1998 law that adapted copyright law for the internet era. “Even recently, I had a performer who has an insanely well-known tattoo,” he says, and with a DMCA subpoena he managed to identify the poster of the content, who voluntarily removed it. 

But this way of working is becoming increasingly rare.

These days it’s nearly “impossible,” Silverstein says, to determine who produced a deepfake, because many platforms that host pirated content operate facelessly. They’re also often based in places that “don’t really care about US law when it comes to copyrights,” says Rocket—places like Russia, the Seychelles, and the Netherlands. 

While governments in the EU, the UK, and Australia have said they will ban or restrict access to nudify apps, it’s not an easily executed proposition. As Craanen notes, when app stores remove these services, they often simply reappear under different names, providing the same services. And social platforms where people share NCII deepfakes, argues Rocket, are slacking in getting them removed. “It’s endless, and it’s ridiculous, because places like Twitter and Facebook have the same technology we do,” Rocket says. “They can identify something as an infringement instantly, but they choose not to.”

(Apple spokesperson Adam Dema emailed that “’nudification’ apps are against our guidelines” in the app store and that Apple has “proactively rejected many of these apps and removed many others,” flagging a reporting portal for users. A Google spokesperson emailed, “Google Play does not allow apps that contain sexual content,” noting it takes “proactive steps to detect and remove apps with harmful content” and has suspended hundreds of apps for violating its policy. A Meta spokesperson shared a blog post about actions it’s taken against nudify apps, but did not respond to follow-up questions about copyrighted material. X did not respond to a request for comment.)

As porn performers are forced to navigate AI-related threats, the only current federal law to address deepfakes may not help them much—and could even make matters worse. The Take It Down Act, which became US law last year, criminalizes publishing NCII and requires websites to remove it within 48 hours. But, as Farid notes, people could weaponize the measure by reporting porn that was made legally and with consent and claiming that it’s NCII. This could result in the content’s removal, which would hurt the performers who made it. Santa Clara’s Goldman points to Project 2025, the Heritage Foundation’s policy blueprint for the second Trump administration, which aims to wipe porn from the web. The Take It Down Act, he argues, “allows for the coordinated effort to scrub adult content from the internet.” 

US lawmakers have a history of hurting sex workers in their attempts to regulate explicit content online. State-level age verification laws are an example; visitors can pretty easily get around these measures, but they can still result in reduced revenue for adult performers (because of lower traffic to those sites and the high price of age-checking services they have to purchase). 

“They’re always doing something to fuck with the porn industry, but not in a way that actually helps sex workers,” says Jennifer. “If they do something, they’re taking away your income again—as opposed to something like giving you more rights to your image, [which] would be tremendously helpful.” 

But as generative AI plays an increasingly large role in NCII deepfakes, the types of images to which adult performers have rights move deeper into a gray area. Can actors lay claim to AI images likely trained on their bodies? How about AI-generated videos that impersonate them, like the one that tricked Tanya Tate’s fan?

The biggest challenge will be creating “legitimate, effective laws that will absolutely protect content creators from abusing their likeness to train and create AI,” Rocket says. “Absent that, we’re just going to have to keep pulling content down from the internet that’s fake.”

In the meantime, a few porn actors tell me, they’re trying to take advantage of copyright laws that weren’t really made for them; they’ve signed with platforms that host their AI-generated duplicates, with whom fans pay to chat, in part so they’ll have contracts that protect ownership of their AI likenesses. When I spoke with the actor Kiki Daire in September 2025 for a story on adult creators’ “AI twins,” she said she “own[ed] her AI” because she’d signed a contract with Spicey AI, a site that hosted AI duplicates of adult performers. If another company or person created her AI-generated likeness, she added, “I have a leg to stand on, as far as being able to shut that down.”  

Even this, though, is not a sure thing; Spicey AI, for instance, shut down several months after I spoke with Daire, so it’s unlikely that her contract would hold. And when I spoke in October with Rachael Cavalli, another adult actor who had signed with an AI duplicate site in hopes it’d help protect her AI image, she admitted, “I don’t have time to sit around and look for companies that have used my image or turned something into a video that I didn’t actually do … it’s a lot of work.” In other words, having rights to your AI image on paper doesn’t make it easier to track down all the potentially infinite breaches of those rights online.

If she’d known what she knows about technology today, Jennifer says she doesn’t think she would have done porn. The risks have increased too much, and too unpredictably. She now does in-person sex work; it’s “not necessarily safer,” she says, “but it’s a different risk profile that I feel more equipped to manage.” 

Plus, she figures AI is unlikely to replace in-person sex workers the way it could porn actors: “I don’t think there’s going to be stripper robots.” 

Jessica Klein is a Philadelphia-based freelance journalist covering intimate partner violence, cryptocurrency, and other topics.