Reducing Intrusive Trauma Memories Using a Brief Mental Imagery Competing Task Intervention: Case Series of Trauma-Exposed Women in Iceland

Background: There is a need for scalable and simple interventions for trauma-exposed people. In this case series, we built on our previous case study and case series findings and further explored the use and potential effectiveness of a brief novel intervention to reduce the number of past intrusive memories of trauma. The imagery competing task intervention consists of a memory reminder and the visuospatial task Tetris played with mental rotation, targeting 1 intrusive memory at a time. Here, we test remote delivery of the intervention, including guidance from researchers without specialist mental health training, in a sample of women in Iceland with current intrusive memories from trauma. Objective: In a case series of trauma-exposed women, we aimed to explore whether this brief novel intervention reduces the number of established intrusive memories (primary outcome) and improves general functioning and reduces symptoms of posttraumatic stress, depression, and anxiety (secondary outcomes). We also explored the acceptability of the intervention, along with two adaptations: delivery by psychology students without specialist mental health training and digital delivery. Methods: Participants (N=8) monitored the number of intrusive memories from an index trauma (occurring 3‐16 years previously) in a daily diary at baseline, during the intervention, and postintervention at 1-month and 3-month follow-ups. The intervention was delivered digitally with guidance from clinical psychologists or psychology students. A repeated AB design was used (“A”: preintervention baseline, “B”: intervention phase). Intrusions were targeted one by one, creating repetitions of an AB design (ie, length of baseline “A” and intervention “B” varied for each memory). Results: The number of intrusive memories was reduced for all participants in the intervention phase compared with the baseline phase (6.3%‐93% reduction), although the reduction was minimal for 2 participants. 
The number of intrusive memories continued to reduce for 6 out of 8 participants (58%‐100% reduction at 1-month follow-up; 72%‐100% reduction at 3-month follow-up). Symptoms of posttraumatic stress, depression, and anxiety were reduced for most participants postintervention and continued to decrease during the follow-up periods. Functioning was improved for 7 of the 8 participants from baseline to postintervention and continued to improve at the follow-up assessments for 3 participants. The intervention delivered digitally and partly by students was perceived by all participants to be an acceptable way to reduce the frequency of intrusive memories (mean rating 9.5 out of 10). Conclusions: Data from this case series of traumatized women provide preliminary evidence for the effectiveness of this novel brief intervention in reducing intrusive memories of trauma that occurred several years earlier and in improving functioning and reducing core symptom burden. This study will inform a randomized controlled trial of this novel intervention, which may have considerable implications for large-scale clinical management of traumatized populations. Trial Registration: ClinicalTrials.gov NCT04209283; https://clinicaltrials.gov/study/NCT04209283. International Registered Report Identifier (IRRID): RR2-10.2196/29873

Context-dependent interaction between oxytocin gene polymorphisms and alcohol dependence in modulating negative emotions during acute alcohol withdrawal in adult males

Objective: The importance of multiple gene-environment interactions (G × E) has been highlighted in understanding the etiology of negative emotions. This study examines the impact of oxytocin (OXT) polymorphisms (rs2740210, rs6133010, and rs2740209) in combination with alcohol dependence on anxiety and depression symptoms during acute alcohol withdrawal under different social and environmental contexts. Method: A total of 414 Chinese Han male adults undergoing acute alcohol withdrawal were recruited. Participants provided blood samples for genotyping, self-reported measures of depression and anxiety, assessments of alcohol dependence severity, and demographic information regarding social and environmental contexts. Results: A positive correlation was found between severity of alcohol dependence and symptoms of depression and anxiety, while the OXT polymorphisms did not have a direct effect on depressive and anxiety symptoms. A significant interaction between OXT polymorphisms (rs2740210 and rs2740209) and alcohol dependence in relation to anxiety symptoms was observed solely among adults living with family and/or those who were married. Further analyses indicate that the GG and CC genotypes are risk genotypes, while the T allele (rs2740210) and G allele (rs2740209) are non-risk alleles in the interaction between OXT genotypes (rs2740210, rs2740209) and alcohol dependence on anxiety among the aforementioned participants. Conclusions: These findings provide evidence for distinct G × E interaction effects on anxiety and depression symptoms during acute alcohol withdrawal, supporting the weak diathesis-stress model. Furthermore, the study highlights the importance of considering environmental factors when investigating the role of oxytocin as a biological substrate underlying social bonding and the regulation of negative emotions.

Contact Lenses Show Promise for Depression

Using specialized contact lenses to stimulate the brain could offer a novel route to treating depression, preclinical research suggests.

The research, in mice, demonstrates how wearable neuromodulation devices can provide a versatile platform for mood and other brain disorders.

It brings eye-based neurotherapies a step closer towards clinical reality and reveals the feasibility of using contact lenses as a bioelectronic strategy for the treatment of depression.

The findings appear in the latest issue of Cell Reports Physical Science.

“Our work opens up an entirely new frontier of treating brain disorders through the eye,” said lead author Jang-Ung Park, PhD, from Yonsei University.

“We believe this wearable, drug-free approach holds tremendous promise for transforming how depression and other brain conditions are treated, including anxiety, drug addiction, and cognitive decline.”

Depression is increasingly recognized as a disorder involving structural and functional abnormalities in brain networks.

Conventional treatments—such as pharmacological therapy, electroconvulsive therapy, and deep brain stimulation—target these abnormalities but can be invasive and are often limited in their efficacy or tolerability.

Park and team note that the eye provides a compelling gateway for indirect brain modulation due to its embryological derivation from the brain and extensive connectivity.

Studies also associate visual impairment with a higher prevalence of depression, further underscoring the importance of the eye-brain axis.

To investigate this avenue further, the researchers developed a contact lens that uses transcorneal electrical stimulation (TES) based on temporal interference (TI) to stimulate the brain. The lens delivers two high-frequency electrical signals through the retina; these produce a low-frequency stimulating effect only where they intersect, allowing specific areas of the brain to be targeted.
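The principle behind temporal interference can be illustrated numerically. Two high-frequency carriers individually change too fast for neurons to follow, but their superposition is amplitude-modulated by an envelope beating at the difference frequency, which neurons can respond to where the fields overlap. The sketch below uses illustrative frequencies (2000 Hz and 2010 Hz), not the device's actual stimulation parameters:

```python
import math

f1, f2 = 2000.0, 2010.0                 # illustrative carrier frequencies (Hz)
fs = 100_000                            # sampling rate (Hz)
t = [i / fs for i in range(fs // 10)]   # 100 ms of samples

# Superpose the two high-frequency fields.
total = [math.cos(2 * math.pi * f1 * x) + math.cos(2 * math.pi * f2 * x) for x in t]

# Product-to-sum identity: the superposition equals a (f1 + f2)/2 carrier
# amplitude-modulated by an envelope beating at |f1 - f2| Hz (here 10 Hz).
envelope = [2 * math.cos(math.pi * (f1 - f2) * x) for x in t]
carrier = [math.cos(math.pi * (f1 + f2) * x) for x in t]
recon = [e * c for e, c in zip(envelope, carrier)]

err = max(abs(a - b) for a, b in zip(total, recon))
print(f"max reconstruction error: {err:.2e}")        # numerically ~0
print(f"envelope beat frequency: {abs(f1 - f2):.0f} Hz")
```

Only tissue exposed to both fields sees the full-depth 10 Hz envelope, which is what lets the intersection point, rather than the electrodes themselves, select the stimulation target.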

The platform circumvents the invasiveness and limited tolerability of conventional brain stimulation therapies by using the retina as a precise interface for the eye-brain axis.

Electrodes made from ultrathin layers of gallium oxide and platinum allow the lens to be flexible and transparent, conforming to the cornea and preserving natural vision.

The researchers examined the efficacy of the lenses in a stress-induced mouse model that recapitulated key behavioral and biological features associated with depression.

Depressed mice received no intervention, temporal interference stimulation, or the SSRI fluoxetine, and were compared before and after treatment with control mice that were not depressed. Machine learning was applied for comprehensive efficacy evaluation.

The team reported that the lenses restored behavioral, neural, and biological deficits in depression.

TI-TES enhanced behavioral resilience, restored prefrontal-hippocampal oscillatory synchrony, and normalized depression-related biomarkers.

When machine learning was used to integrate behavior, brain activity, and biomarkers, it consistently grouped the lens-treated mice with the non-depressed control mice rather than with the untreated depressed mice.

The researchers acknowledge their research is in its early stages, and that the current study employed a wired configuration to ensure precise waveform control and stimulation stability during proof-of-concept validation.

“Like any new medical technology, our contact lenses will need to go through rigorous clinical evaluation in patients before reaching the market,” said Park.

“Next, we plan to make the lens fully wireless, test it for long-term safety in larger animals, and personalize the stimulation for each user before advancing into clinical trials in patients.”

The post Contact Lenses Show Promise for Depression appeared first on Inside Precision Medicine.

Establishing AI and data sovereignty in the age of autonomous systems

When generative AI first moved from research labs into real-world business applications, enterprises made a tacit bargain: “Capability now, control later.” Feed your proprietary data into third-party AI models, and you will get powerful results. But your data passes through systems you do not own, under governance you do not set. The protections you rely on are only as durable as the provider’s next policy update.

Now, with generative AI established in everyday business operations and sophisticated new agentic AI systems advancing every day, companies are reevaluating the terms of that deal.

“Data is really a new currency; it’s the IP for many companies,” says Kevin Dallas, CEO of EDB, echoing a recurrent anxiety from customers. “The big concern is, if you’re deploying an AI-infused application with a cloud-based large language model, are you losing your IP? Are you losing your competitive position?”

That question is now fueling a movement toward reclaiming both the data and AI systems that have rapidly become part of core business infrastructure. AI and data sovereignty, meaning breaking dependence on centralized providers and establishing genuine control over models and data estates, is an urgent priority for many companies, says Dallas, citing internal EDB data: “70% of global executives believe they need a sovereign data and AI platform to be successful.”

AI sovereignty is also becoming the subject of a global policy conversation. NVIDIA CEO Jensen Huang recently spoke about the need for such a shift at the World Economic Forum’s annual meeting at Davos in January 2026: “I really believe that every country should get involved to build AI infrastructure, build your own AI, take advantage of your fundamental natural resource—which is your language and culture—develop your AI, continue to refine it, and have your national intelligence be part of your ecosystem.”

This report explores how enterprises are pursuing sovereignty over their models and data estates in an era of rapid AI adoption. Drawing on a survey conducted by EDB of more than 2,050 senior executives and a series of interviews with industry experts, the research confirms that the sovereignty movement on the enterprise level is already well underway.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Recognizing anxiety and depression in cancer patients based on speech and facial expressions

Purpose: To address the anxiety and depression experienced by cancer patients due to the stress of diagnosis and treatment, as well as the limitations of traditional assessment methods characterized by high subjectivity and low efficiency, this study aims to develop a multimodal fusion approach for the simultaneous and precise evaluation of these two psychological states. Patients and methods: A speech-video dataset of clinically diagnosed cancer patients was used. This study proposes a multimodal fusion approach: for depression recognition, we employ the HuBERT pretraining architecture based on Transformers, integrating acoustic features specific to depression with textual content to achieve accurate classification of depression through a voice-text modality. For anxiety recognition, a multi-task convolutional neural network is designed to infer anxiety status from facial expressions. Results: Experiments conducted on a speech-video dataset of clinically diagnosed cancer patients demonstrate that the multimodal fusion model achieves a depression recognition F1 score of 0.85 and an anxiety recognition F1 score of 0.74, significantly outperforming the unimodal models. Conclusion: The results of the two modalities are fused by decision-level weighted averaging to realize the simultaneous assessment of anxiety and depression in cancer patients. The study may provide technical support for rapid, noninvasive screening of psychological status in cancer patients.
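Decision-level weighted averaging, the fusion rule the abstract names, combines each modality's output probability rather than its raw features. The sketch below is a minimal illustration with hypothetical probabilities and weights (the paper does not report its weighting scheme); it simply weights the stronger modality more heavily:

```python
def fuse(scores, weights):
    """Decision-level fusion: weighted average of per-modality probabilities."""
    assert len(scores) == len(weights) and abs(sum(weights) - 1.0) < 1e-9
    return sum(s * w for s, w in zip(scores, weights))

# Hypothetical outputs from two unimodal models (illustrative values only):
# 0.80 from the voice-text branch, 0.60 from the facial-expression branch.
# Weighting 0.6/0.4 reflects the stronger branch (F1 0.85 vs 0.74).
p_depression = fuse([0.80, 0.60], weights=[0.6, 0.4])
label = "positive" if p_depression >= 0.5 else "negative"
print(p_depression, label)  # prints: 0.72 positive
```

Because fusion happens at the decision level, each branch can be trained and tuned independently, and a failed modality (e.g., an occluded face) can be dropped by renormalizing the remaining weights.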

The shock of seeing your body used in deepfake porn 

When Jennifer got a job doing research for a nonprofit in 2023, she ran her new professional headshot through a facial recognition program. She wanted to see if the tech would pull up the porn videos she’d made more than 10 years before, when she was in her early 20s. It did in fact return some of that content, and also something alarming that she’d never seen before: one of her old videos, but with someone else’s face on her body.

“At first, I thought it was just a different person,” says Jennifer, who is being identified by a pseudonym to protect her privacy. 

But then she recognized a distinctly garish background from a video she’d shot around 2013, and she realized: “Somebody used me in a deepfake.”

Eerily, the facial recognition tech had identified her because the image still contained some of Jennifer’s features—her cheekbones, her brow, the shape of her chin. “It’s like I’m wearing somebody else’s face like a mask,” she says. 

Conversations about sexualized deepfakes—which fall under the umbrella of nonconsensual intimate imagery, or NCII—most often center on the people whose faces are featured doing something they didn’t really do or on bodies that aren’t really theirs. These are often popular celebrities, though over the past few years more people (mostly women and sometimes youths) have been targeted, sparking alarm, fear, and even legislation. But these discussions and societal responses usually are not concerned with the bodies the faces are attached to in these images and videos.

As Jennifer, now 37 and a psychotherapist working in New York City, says: “There’s never any discussion about Whose body is this?” 

For years, the answer has generally been adult content creators. Deepfakes in fact earned their name back in November 2017, when someone with the Reddit username “deepfakes” uploaded videos showing faces of stars like Scarlett Johansson and Gal Gadot pasted onto porn actors’ bodies. The nonconsensual use of their bodies “happens all the time” in deepfakes, says Corey Silverstein, an attorney specializing in the adult industry. 

But more recently, as generative AI has improved, and as “nudify” apps have begun to proliferate, the issue has grown far more complicated—and, arguably, more dangerous for creators’ futures. 

Porn actors’ bodies aren’t necessarily being taken directly from sexual images and videos anymore, or at least not in an identifiable way. Instead, they are almost certainly being used as training data to inform how new AI-generated bodies look, move, and perform. This threatens the livelihood and rights of porn actors as their work is used to train AI nudes that in turn could take away their business. And that’s not all: Advancements in AI have also made it possible for people to wholly re-create these performers’ likenesses without their consent, and the AI copycats may do things the performers wouldn’t do in real life. This could mean their digital doubles are participating in certain sex acts that they haven’t agreed to do, or even that they’re perpetrating scams against fans. 

Adult content creators are already marginalized by a society that largely fails to protect their safety and rights, and these developments put them in an even more vulnerable position. After Jennifer found the deepfake featuring her body, she posted on social media about the psychological effects: “I’ve never seen anyone ask whether that might be traumatic for the person whose body was used without consent too. IT IS!” Several other creators I spoke with shared the mental toll that comes with knowing their bodies have been used nonconsensually, as well as the fear that they’ll suffer financially as other people pirate their work. Silverstein says he hears from adult actors every day who “are concerned that their content is being exploited via AI, and they’re trying to figure out how to protect it.” 

One law professor and expert in violence against women calls these creators the “forgotten victims” of NCII deepfakes. And several of the people I spoke with worry that as the US develops a legal framework to combat nonconsensual sexual content online, adult actors are only at risk of further injury; instead of helping them, the crackdown on deepfakes may provide a loophole through which their content and careers could be stripped from the internet altogether.

How deepfakes cause “embodied harms”

During his preteen years in the 1970s, Spike Irons, now a porn actor and president of the adult content platform XChatFans, was “in love” with Farrah Fawcett. Though Fawcett did not pose nude, Irons managed to get his hands on what looked like pictures of her naked. “People were cutting out faces and pasting them on bodies,” Irons says. “Deepfakes, before AI, had been going around for quite a while. They just weren’t as prolific.”

The early public internet was rife with websites capitalizing on the idea that you could use technology to “see” celebrities naked. “People would just use Microsoft Paint,” says Silverstein, the attorney. It was a simple way to mash up celebrities’ faces with porn. 

People later used software like Adobe After Effects or FakeApp, which was designed to swap two individuals’ faces in images or videos. None of these programs required serious expertise to alter content, so there was a low barrier to entry. That, plus the wealth of porn performers’ videos online, helped make face-swap deepfakes that used real bodies prevalent by the 2010s. When, later in the decade, deepfakes of Gal Gadot and Emma Watson caused something of a broader panic, their faces were allegedly swapped onto the bodies of the porn actors Pepper XO and Mary Moody, respectively.

But it wasn’t just high-profile actors like them whose bodies were being used. Jennifer was “a very minor performer,” she says. “If it happened to me, I feel like it could happen to anybody who’s shot porn.” Since he started his practice in 2006, Silverstein says, “numerous clients” have reached out to report “This is my body on so-and-so.” 

Both people whose faces appear in NCII deepfakes and those whose bodies are used this way can feel serious distress. Experts call this type of damage “embodied harms,” says Anne Craanen, who researches gender-based violence at the UK’s Institute for Strategic Dialogue, an organization that analyzes extremist content, disinformation, and online threats. 

The term reflects the fact that even though the content exists in the virtual realm, it can cause physiological effects, including body dysmorphia. The face-swapped entity occupies the uncanny valley, distorting self-perception. After discovering their faces in sexual deepfakes, many people feel silenced, experts told me; they may “self-censor,” as Craanen puts it, and step back from public-facing life. Allison Mahoney, an attorney who works with abuse survivors, says that people whose faces appear in NCII can experience depression, anxiety, and suicidal ideation: “I’ve had multiple clients tell me that they don’t sleep at night, that they’re losing their hair.” 

Though the impact on people whose bodies are used hasn’t been discussed or studied as often, Jennifer says that “it’s just a really terrible feeling, knowing that you are part of somebody else’s abuse.” She sees it as akin to “a new form of sexual violence.”

The uncertainty that comes with not being aware of what your body is doing online can be highly unsettling. Like Jennifer, many adult actors don’t really know what’s out there. But some devoted followers know the actors’ bodies well—often recognizing tattoos, scars, or birthmarks—and “very quickly they bring [deepfakes] to the adult performer’s attention,” says Silverstein. Or performers will stumble upon the content by chance; some 20 years ago, for instance, the first such client to tell Silverstein her body was being used in a deepfake happened to be searching Nicole Kidman online when she found that one of the results showed Kidman’s face on her porn. “She was devastated, obviously, because they took her body,” he says, “and they were monetizing it.” 

Otherwise, this imagery may be found by an organization like Takedown Piracy, one of several copyright enforcement companies serving adult content creators. US copyright violations can be challenging to prove if someone’s body lacks distinguishing features, says Reba Rocket, Takedown Piracy’s chief operating and marketing officer. But Rocket says her team has added digital fingerprinting technology to clients’ material to help flag and remove problematic videos, often finding them before clients realize they’re online. 

By capturing “tens of thousands of tiny little visual data points” from videos, digital fingerprinting creates unique corresponding files that can be used to identify them, Rocket says—kind of like an invisible watermark. The prints remain even if pirates alter the videos or replace performers’ faces. Takedown Piracy has digitally fingerprinted more than half a billion videos, and the organization has gotten 130 million copyrighted videos taken down from Google alone (though Rocket hasn’t tracked how many of those specifically include someone else’s face on a performer’s body). 
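The idea of a fingerprint that survives edits can be sketched with a generic perceptual hash. The difference hash (dHash) below is a common textbook technique, not Takedown Piracy's proprietary system: it reduces a frame to coarse brightness comparisons, so re-encoding or swapping one region changes only a few bits, and a small Hamming distance flags a likely match:

```python
def dhash(pixels):
    """Difference hash: one bit per pixel pair, comparing each pixel in a
    coarse grayscale grid to its right-hand neighbour. Coarse gradients
    survive re-encoding and small edits, so the hash is stable."""
    bits = 0
    for row in pixels:
        for a, b in zip(row, row[1:]):
            bits = (bits << 1) | (1 if a > b else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

# Toy "frame": a 9x8 grid of brightness values standing in for a downscaled frame.
frame = [[(x * 7 + y * 13) % 256 for x in range(9)] for y in range(8)]
# A lightly altered copy (e.g. a face swap changes only part of the frame).
altered = [row[:] for row in frame]
altered[0][0] = 255

h1, h2 = dhash(frame), dhash(altered)
print(hamming(h1, h2))  # prints: 1 -- small distance, likely the same video
```

Matching then reduces to comparing a candidate video's hashes against the fingerprint database and flagging anything within a small distance threshold, which is why altered or face-swapped copies can still be detected.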

Besides copyright, a range of legal tools can be used to try and combat NCII, says Eric Goldman, a law professor at Santa Clara University. For example, victims can claim invasion of privacy. But using these tools isn’t particularly straightforward, and they may not even apply when it comes to someone’s body. If there aren’t, for instance, unique markers indicating that a body in a deepfake belongs to the person who says it does, US law “doesn’t really treat [this content] as invasion of privacy,” Goldman says, “because we don’t know who to attribute it to.”

In a 2018 study that reviewed “judicial resolution” of cases involving NCII, Goldman found that one successful way plaintiffs were able to win cases was to assert “intentional affliction of emotional distress.” But again, that hinges on the ability to clearly identify the person in the content. Relevant statutes, he adds, might also require “intent to harm the individual,” which may be hard to show for people whose bodies alone are featured.

“AI girls will do whatever you want”

In the last few years, Silverstein says, it’s become less and less common to see the bodies of real adult content creators in deepfakes, at least in a way that makes them clearly identifiable. 

Sometimes the bodies have been manipulated using AI or simpler editing tools. This can be as basic as erasing a birthmark or changing the size of a body part—minor edits that make it impossible to identify someone’s image beyond a reasonable doubt, so even porn actors who can tell that an altered image used their body as a base won’t get very far in the legal realm. “A lot of people are like, That looks like my body,” says Silverstein, but when he asks them how, they’ll reply, It just does.

At the same time, other users are now creating NCII with wholly AI-generated bodies. In “nudify” apps, anyone with a minimal grasp of technology can upload a photo of someone’s clothed body and have it replaced with a fake naked one. “So [much] of this content being created is just someone’s face on an AI body,” Silverstein says.

Such apps have drawn a ton of attention recently, from Grok “nudifying” minors to Meta running ads for—and then suing—the nudify app Crushmate. But there’s been relatively little attention paid to the content being used to train them. They almost certainly draw on the more than 10,000 terabytes of online porn, and performers have virtually zero recourse. 

One reason is that creators aren’t able to demonstrate with any certainty that their content is being used to train AI models like those used by nudify apps. “These things are all a black box,” says Hany Farid, a professor at the University of California, Berkeley, who specializes in digital forensics. But “given the ubiquity” of adult content, he adds, it’s a “reasonable assumption” that online porn is being used in AI training. 

“It’s just not at all difficult to come up with pornographic data sets on the internet,” says Stephen Casper, a computer science PhD student at MIT who researches deepfakes. What’s more, he says, plenty of shadowy online communities provide “user guides” on how to use this data to train AI, and in particular programs that generate nudes. 

It’s not certain whether this activity falls within the US legal definition of “fair use”—an issue that’s currently being litigated in several lawsuits from other types of content creators—but Casper argues that even if it does, it’s ethically murky for porn created by consenting adults 10 years ago to wind up in those training data sets. When people “have their stuff used in a way that doesn’t respect or reflect reasonable expectations that they had at that time about what they were creating and how it would be used,” he says, there’s “a legitimate sense in which it’s kind of … nonconsensual.” 

Adult performers who started working years ago couldn’t possibly have consented to AI anything; Jennifer calls AI-related risks “retroactively placed.” Contracts that porn actors signed before AI, adds Silverstein, might provide that “the publisher could do anything with the content using technology that now exists or here and after will be discovered.” That felt more innocuous when producers were talking about the shift from VHS to DVD, because that didn’t change the content itself, just the way it was conveyed. It’s a far different prospect for someone to use your content to train a program to create new content … content that could replace your work altogether. 

Of course, this all affects creators’ bottom line—not unlike the way Google’s AI overviews affect revenue for online publishers who’ve stopped getting clicks when people are content with just reading AI-generated summaries. Performers’ “concern is … it’s another way to pirate [their] content,” says Rocket. 

After all, independent creators aren’t just “having sex on camera,” as the adult content creator Allie Eve Knox says. They’re paying for filming equipment and location rentals, and then spending hours editing and marketing. For someone to then rip off and distort that content “for their own entertainment or financial gain,” she says, “fucking sucks.” 

Tanya Tate, a longtime adult content creator, tells me about another highly unsettling AI-created situation: She was recently chatting with a fan on Mynx, a sexting app, when he asked her if she knew him. She told him no, and “his eyes just started watering,” Tate says. He was upset because he thought she did know him. Turns out he’d sent $20,000 to a scammer who’d used an AI-generated deepfake of Tate to seduce him. 

Several men, Tate subsequently learned, had been scammed by an AI version of her, and some of them began blaming her for their losses and posting false statements about her online. When she reported one particularly aggressive harasser to the police, they told her he was exercising his “freedom of speech,” she says. Rocket, too, is familiar with situations where AI is used to take advantage of fans. “The actual content creator will get nasty emails from these people who’ve been scammed,” she says.

Other porn actors say they fear that their likenesses have been used without consent to do other things they wouldn’t do. One, Octavia Red, tells me she doesn’t do anal scenes, “but I’m sure there’s tons of deepfake anal videos of me that I didn’t consent to.” That could cost her, she fears, if viewers choose to watch those videos instead of subscribing to her websites. And it could cause fans to develop false expectations about what kind of porn she’ll create.

“I saw one AI creator saying, ‘Well, AI girls will do whatever you want. They don’t say no,’” says Rocket. “That horrifies me … especially if they’re training those AI models on real people. I don’t think they understand the damage to mental health or reputation that that can create. And once it’s on the internet, it’s there forever.” 

Efforts to “scrub adult content from the internet”

As AI technology improves, it’s increasingly difficult for people to discern any type of real video from the best AI-generated ones on their own. In one 2025 study, UC Berkeley’s Farid found that participants correctly identified AI-generated voices about 60% of the time (not much better than random chance), while advances like false heartbeats make AI-generated humans tougher than ever to spot.

Nevertheless, most lawyers and legal experts I spoke with said copyright laws are still adult performers’ best bet in the US legal system, at least for getting their face-swapped content taken down. For his clients, Silverstein says, he tries to figure out the content’s origins and then issue takedown requests under the Digital Millennium Copyright Act, a 1998 law that adapted copyright law for the internet era. “Even recently, I had a performer who has an insanely well-known tattoo,” he says, and with a DMCA subpoena he managed to identify the poster of the content, who voluntarily removed it. 

But this way of working is becoming increasingly rare.

These days it’s nearly “impossible,” Silverstein says, to determine who produced a deepfake, because many platforms that host pirated content operate facelessly. They’re also often based in places that “don’t really care about US law when it comes to copyrights,” says Rocket—places like Russia, the Seychelles, and the Netherlands. 

While governments in the EU, the UK, and Australia have said they will ban or restrict access to nudify apps, it’s not an easily executed proposition. As Craanen notes, when app stores remove these services, they often simply reappear under different names, providing the same services. And social platforms where people share NCII deepfakes, argues Rocket, are slacking in getting them removed. “It’s endless, and it’s ridiculous, because places like Twitter and Facebook have the same technology we do,” Rocket says. “They can identify something as an infringement instantly, but they choose not to.”

(Apple spokesperson Adam Dema emailed that “‘nudification’ apps are against our guidelines” in the App Store, and said Apple has “proactively rejected many of these apps and removed many others,” flagging a reporting portal for users. A Google spokesperson emailed, “Google Play does not allow apps that contain sexual content,” noting it takes “proactive steps to detect and remove apps with harmful content” and has suspended hundreds of apps for violating its policy. A Meta spokesperson shared a blog post about actions the company has taken against nudify apps but did not respond to follow-up questions about copyrighted material. X did not respond to a request for comment.)

As porn performers are forced to navigate AI-related threats, the only current federal law to address deepfakes may not help them much—and could even make matters worse. The Take It Down Act, which became US law last year, criminalizes publishing NCII and requires websites to remove it within 48 hours. But, as Farid notes, people could weaponize the measure by reporting porn that was made legally and with consent and claiming that it’s NCII. This could result in the content’s removal, which would hurt the performers who made it. Santa Clara’s Goldman points to Project 2025, the Heritage Foundation’s policy blueprint for the second Trump administration, which aims to wipe porn from the web. The Take It Down Act, he argues, “allows for the coordinated effort to scrub adult content from the internet.” 

US lawmakers have a history of hurting sex workers in their attempts to regulate explicit content online. State-level age verification laws are an example; visitors can pretty easily get around these measures, but they can still result in reduced revenue for adult performers (because of lower traffic to those sites and the high price of age-checking services they have to purchase). 

“They’re always doing something to fuck with the porn industry, but not in a way that actually helps sex workers,” says Jennifer. “If they do something, they’re taking away your income again—as opposed to something like giving you more rights to your image, [which] would be tremendously helpful.” 

But as generative AI plays an increasingly large role in NCII deepfakes, the types of images to which adult performers have rights move deeper into a gray area. Can actors lay claim to AI images likely trained on their bodies? How about AI-generated videos that impersonate them, like the one that tricked Tanya Tate’s fan?

The biggest challenge will be creating “legitimate, effective laws that will absolutely protect content creators from abusing their likeness to train and create AI,” Rocket says. “Absent that, we’re just going to have to keep pulling content down from the internet that’s fake.”

In the meantime, a few porn actors tell me, they’re trying to take advantage of copyright laws that weren’t really made for them; they’ve signed with platforms that host their AI-generated duplicates, with whom fans pay to chat, in part so they’ll have contracts that protect ownership of their AI likenesses. When I spoke with the actor Kiki Daire in September 2025 for a story on adult creators’ “AI twins,” she said she “own[ed] her AI” because she’d signed a contract with Spicey AI, a site that hosted AI duplicates of adult performers. If another company or person created her AI-generated likeness, she added, “I have a leg to stand on, as far as being able to shut that down.”  

Even this, though, is not a sure thing; Spicey AI, for instance, shut down several months after I spoke with Daire, so it’s unlikely that her contract would hold. And when I spoke in October with Rachael Cavalli, another adult actor who had signed with an AI duplicate site in hopes it’d help protect her AI image, she admitted, “I don’t have time to sit around and look for companies that have used my image or turned something into a video that I didn’t actually do … it’s a lot of work.” In other words, having rights to your AI image on paper doesn’t make it easier to track down all the potentially infinite breaches of those rights online.

If she’d known what she knows about technology today, Jennifer says she doesn’t think she would have done porn. The risks have increased too much, and too unpredictably. She now does in-person sex work; it’s “not necessarily safer,” she says, “but it’s a different risk profile that I feel more equipped to manage.” 

Plus, she figures AI is unlikely to replace in-person sex workers the way it could porn actors: “I don’t think there’s going to be stripper robots.” 

Jessica Klein is a Philadelphia-based freelance journalist covering intimate partner violence, cryptocurrency, and other topics.

Targeted Ultrasound Could Offer Alternative to Chronic Pain Medication

A new study has shown that targeting ultrasound stimulation at brain regions involved in processing pain can induce long-lasting changes in brain activity, significantly reducing pain perception. Published in Nature Communications, these findings point to a novel non-invasive strategy for treating chronic pain. 

“Our study represents an important first step in understanding how this technology can non-invasively stimulate deep brain regions involved in pain processing,” said Sam Hughes, PhD, senior lecturer in pain neuroscience at the University of Exeter. “We found that targeting a specific brain region involved in pain processing can alter how pain is perceived and change how this area communicates with other parts of the brain’s pain network. The next stage of our research will be to test whether this approach can help people living with chronic pain.”

Hughes and colleagues used transcranial ultrasound stimulation (TUS), a low-intensity neuromodulation technique, to target the dorsal anterior cingulate cortex (dACC), a brain region implicated in chronic pain. The study recruited 32 healthy volunteers, who received either TUS or a sham stimulation while immersing their right hand in a cold gel to induce pain through the low temperature. All participants rated the severity of the pain they were feeling and underwent MRI and MRS scans to monitor the physiological changes caused by the treatment. 

Results showed that, while TUS had no immediate effect on pain intensity, participants reported a significant reduction in pain from 28 to 55 minutes after the stimulation, suggesting it can trigger a delayed analgesic effect. At the physiological level, TUS was found to disrupt the relationship between temperature and pain intensity, increasing the connectivity between the dACC and other brain regions involved in pain modulation and changing the concentration of the neurotransmitter GABA within the dACC. 

“The study aimed to characterize how transcranial ultrasound stimulation interacts with—and potentially also alters—the brain’s processing of pain,” said Sophie Clarke, PhD, postdoctoral research fellow at the University of Plymouth and lead author of the study. “Understanding these mechanisms will be very important to support the next steps in understanding whether the stimulation can be effective in helping patients with chronic pain.”

Previous research at the University of Plymouth had shown the potential benefits of TUS for psychiatric conditions including anxiety, depression, and addiction. This study suggests those benefits could extend beyond psychiatric conditions and one day offer a non-invasive treatment option for those experiencing chronic pain due to conditions such as fibromyalgia, back pain, and arthritis, or recovering after cancer treatment.  

“Having shown the use of ultrasound can yield positive results for people with a variety of neurological conditions, we wanted to explore what it could mean for those living with chronic pain,” said Elsa Fouragnan, PhD, director of the University of Plymouth’s Brain Research and Imaging Centre (BRIC) and Centre for Therapeutic Ultrasound (CENTUS). “Most of us know someone experiencing chronic pain, and there are very few treatments that deliver any form of long-term benefit. The findings of this new work are really promising, and we are already building on it to assess whether TUS could be a beneficial and non-invasive therapeutic treatment.”

The post Targeted Ultrasound Could Offer Alternative to Chronic Pain Medication appeared first on Inside Precision Medicine.

I’m scared of everything — what does it mean and how do I get over it?

What you’re describing sounds really overwhelming. I’m glad you reached out. The fears you mention — being scared of doing something against your will, worrying you might not have control, and feeling intensely concerned about being judged — are patterns I often see in people with anxiety and, sometimes, people with obsessive-compulsive disorder (OCD). A hallmark of OCD is a deep doubt about control: the fear that you might act in a way that goes against your values, even though you don’t want to. These kinds of fears are called intrusive thoughts. While intrusive thoughts can feel very real and frightening, they are not things you actually intend to do or predictions of things that you will do — they’re unwanted experiences that don’t define you.

Avoiding sports and other things for fear of being judged is also a symptom of anxiety. I can understand how hard it is to tell your family what you’re going through, especially if you have felt ignored in the past. At the same time, your pain deserves to be heard and taken seriously. I encourage you to try talking to your parents again, but if you truly feel like you can’t, consider telling one safe person — whether that’s another family member, a school counselor, or even a teacher you trust. You can write how you’re feeling in a note if speaking feels too hard.

The physical symptoms you mentioned — neck and shoulder pain, fidgeting — are also common in anxiety because our bodies can hold tension when our brains are on high alert. What this likely means is that your brain is caught in a fear loop, constantly scanning for danger around control and judgment.

The good news is that this is very treatable. A mental health professional may recommend a type of cognitive behavioral therapy called exposure and response prevention (ERP). ERP helps you gradually face the situations or thoughts you fear instead of looking for reassurance from someone else or avoiding those situations or thoughts altogether. Over time, ERP teaches your brain that thoughts are just thoughts, not actions, and that you can tolerate uncertainty without something bad happening.

For now, you might try gently labeling upsetting thoughts as anxiety, not facts, and practicing not accepting them as true when they show up. Taking small steps toward what you’ve been avoiding can help you rebuild your confidence, even if it feels uncomfortable at first.

While you can practice managing anxiety or intrusive thoughts on your own, it’s better to have help. Once you talk to someone you know and trust, have them help you reach out to a mental health professional who can provide a more thorough assessment and the appropriate treatment for you. You don’t have to go through this alone, and with the right support, this can get much better.

The post I’m scared of everything — what does it mean and how do I get over it? appeared first on Child Mind Institute.

Direct modulation of human GABA-A α1β2γ2 receptors by the endocannabinoid 2-arachidonoylglycerol: implications for cannabinoid-related ligands and limitations for anxiolytic drug development

Anxiety disorders are associated with impaired inhibitory neurotransmission mediated by γ-aminobutyric acid type A (GABA-A) receptors. Although benzodiazepines remain effective anxiolytics, their clinical utility is limited by sedation, cognitive impairment, tolerance, and dependence, prompting the search for mechanistically distinct GABAergic modulators. Among cannabinoid-related molecules, the strongest evidence for direct GABA-A receptor modulation concerns the endocannabinoid 2-arachidonoylglycerol (2-AG), which potentiates recombinant human α1β2γ2 receptors through residues located in the M4 helix of the β2 subunit. Here, we review the structural architecture, biophysical properties, and pharmacological profile of the human GABA-A α1β2γ2 isoform as the relevant molecular framework for evaluating this mechanism, while discussing the broader relevance of cannabinoid-related ligands and selected phytocannabinoids without assuming mechanistic equivalence. We further assess the hypothesis that 2-AG reaches the β2-M4 site through a membrane-access route and identify five conceptual barriers that currently limit translation of this mechanism into anxiolytic drug development: supraphysiological effective concentrations, unresolved synaptic-versus-extrasynaptic actions, uncertain subtype selectivity, incomplete validation of lipid-environment effects, and lack of clinical evidence linking this mechanism to anxiolysis in humans. We conclude that direct modulation through β2-M4 defines a mechanistically intriguing allosteric pathway distinct from benzodiazepine action; however, its location on a shared β2 subunit and the micromolar concentrations required for modulation represent substantial obstacles to the rational design of anxioselective agents based on this mechanism.

Internalizing and externalizing pathways to internet gaming disorder: the roles of anger and social anxiety

Background: Internet Gaming Disorder (IGD) represents a significant behavioral health concern, yet the roles of internalizing and externalizing psychological vulnerabilities in its development remain underexplored, particularly in Arabic-speaking populations. Objective: This study examined anger and social anxiety as distinct externalizing and internalizing predictors of IGD severity in a Saudi Arabian community sample. Methods: A cross-sectional survey was administered to 303 participants (60.1% female; estimated mean age = 29.79 years, SD = 8.83) across five regions of Saudi Arabia. Participants completed the Internet Gaming Disorder Scale–Short Form (IGDS9-SF), a three-item Anger Screening Scale, and a two-item Social Anxiety screener. Hierarchical linear regression and structural equation modeling (SEM) were conducted to examine the unique and incremental contributions of anger and social anxiety to IGD symptoms. Results: Anger and social anxiety were strongly intercorrelated (r = .86, p < .001) but demonstrated divergent patterns in multivariate models. Hierarchical regression indicated that both predictors contributed unique variance when entered simultaneously, with anger positively and social anxiety negatively predicting IGD after controlling for shared variance. However, SEM clarified that only social anxiety significantly predicted latent IGD severity (β = .32, p = .027), whereas anger did not (β = .07, p = .68). The final model explained approximately 13% of the variance in IGD symptoms. Conclusions: Social anxiety was associated with IGD severity as a distinct internalizing correlate, consistent with avoidance-based coping and online social preference accounts. These preliminary, cross-sectional findings suggest that social anxiety warrants consideration in future IGD screening and research efforts in Arabic-speaking contexts.