No one’s sure if synthetic mirror life will kill us all
For four days in February 2019, some 30 synthetic biologists and ethicists hunkered down at a conference center in Northern Virginia to brainstorm high-risk, cutting-edge, irresistibly exciting ideas that the National Science Foundation should fund. By the end of the meeting, they’d landed on a compelling contender: making “mirror” bacteria. Should they come to be, the lab-created microbes would be structured and organized like ordinary bacteria, with one important exception: Key biological molecules like proteins, sugars, and lipids would be the mirror images of those found in nature. DNA, RNA, and many other components of living cells are chiral, which means they have a built-in handedness. Their mirror images would twist in the opposite direction.
Researchers thrilled at the prospect. “Everybody—everybody—thought this was cool,” says John Glass, a synthetic biologist at the J. Craig Venter Institute in La Jolla, California, who attended the 2019 workshop and is a pioneer in developing synthetic cells. It was “an incredibly difficult project that would tell us potentially new things about how to design and build cells, or about the origin of life on Earth.” The group saw enormous potential for medicine, too. Mirror microbes might be engineered as biological factories, producing mirror molecules that could form the basis for new kinds of drugs. In theory, such therapeutics could perform the same functions as their natural counterparts, but without triggering unwelcome immune responses.
After the meeting, the biologists recommended NSF funding for a handful of research groups to develop tools and carry out preliminary experiments, the beginnings of a path through the looking glass. The excitement was global. The National Natural Science Foundation of China funded major projects in mirror biology, as did the German Federal Ministry of Research, Technology, and Space.
Five years later, in 2024, many of the researchers involved in that NSF meeting had reversed course. They’d become convinced that in the worst of all possible futures, mirror organisms could trigger a catastrophic event threatening every form of life on Earth; they’d proliferate without predators and evade the immune defenses of people, plants, and animals.
Over the past two years, they’ve been ringing alarm bells. They published an article in Science in December 2024, accompanied by a 299-page technical report addressing feasibility and risks. They’ve written essays and convened panels and cofounded the Mirror Biology Dialogues Fund (MBDF), a broadly funded nonprofit charged with supporting work on understanding and addressing the risk. The issue has received a blaze of media attention and ignited dialogues among not only chemists and synthetic biologists but also bioethicists and policymakers.
What’s received less attention, however, is how we got here and what uncertainties still remain about any potential threat. Creating a mirror-life organism would be tremendously complicated and expensive. And although the scientific community is taking the alarm seriously, some scientists doubt whether it’s even possible to create a mirror organism anytime soon. “The hypothetical creation of mirror-image organisms lies far beyond the reach of present-day science,” says Ting Zhu, a molecular biologist at Westlake University, in China, whose lab focuses on synthesizing mirror-image peptides and other molecules. He and others have urged colleagues not to let speculation and anxiety guide decision-making and argued that it’s premature to call for a broad moratorium on early-stage research, which they say could have medical benefits.
But the researchers who are raising flags describe a pathway, even multiple pathways, to bringing mirror life into existence—and they say we urgently need guardrails to figure out what kinds of mirror-biology research might still be safe. That means they’re facing a question that others have encountered before, multiple times over the last several decades and with mixed results—one that doesn’t have a neat home in the scientific method. What should scientists do when they see the shadow of the end of the world in their own research?
Looking-glass life
The French chemist and microbiologist Louis Pasteur was the first to recognize that biological molecules had built-in handedness. In the late 19th century, he described all living species as “functions of cosmic asymmetry.” What would happen, he mused, if one could replace these chiral components with their mirror opposites?
Scientists now recognize that chirality is central to life itself, though no one knows why. In humans, 19 of the 20 so-called “standard” amino acids that make up proteins are chiral, and all in the same way. (The outlier, glycine, is symmetrical.) The functions of proteins are intricately tied to their shapes, and they mostly interact with other molecules through chiral structures. Almost all receptors on the surface of a cell are chiral. During an infection, the immune system’s sentinels use chirality to detect and bind to antigens—substances that trigger an immune response—and to start the process of building antibodies.
By the late 20th century, researchers had begun to explore the idea of reversing chirality. In 1992, one team reported having synthesized the first mirror-image protein. That, in turn, set off the first clarion call about the risk: In response to the discovery, chemists at Purdue University pointed out, briefly, that mirror-life organisms, if they escaped from a lab, would be immune to any attack by “normal” life. A 2010 story in Wired highlighting early findings in the area noted that if such a microbe developed the ability to photosynthesize, it could obliterate life as we know it.
The synthetic biology community didn’t seriously weigh those threats then, says David Relman, a specialist who bridges infectious disease and microbiology at Stanford University and a trailblazer in studying the gut and oral microbiomes. The idea of a mirror microbe seemed too far beyond the actual progress on proteins. “This was almost a solely theoretical argument 20 years ago,” he says.
Now the research landscape has changed.
Scientists are quickly making progress on mirror images of the machinery cells use to make proteins and to self-replicate. Those components include DNA, which encodes the recipes for proteins; DNA polymerases, which help copy genetic material; and RNA, which carries recipes to ribosomes, the cell’s protein factories. If researchers could make self-replicating mirror ribosomes, then they would have an efficient way to produce mirror proteins. That could be used as a biological manufacturing method for therapeutics. But embedded in a self-replicating, metabolizing synthetic cell, all these pieces could give rise to a mirror microbe.
When synthetic biologists convened in Northern Virginia in 2019, they didn’t recognize how quickly the technology was advancing, and if they saw a threat at all, it may have been obscured by the blinding appeal of pushing the science forward. What’s become apparent now, says Glass, is that scientists in different disciplines, all related to mirror life, were largely unaware of what other scientists had been doing. Chemists didn’t know that synthetic biologists had made so much progress on creating mirror cells with natural chirality from scratch. Biologists didn’t appreciate that chemists were building ever-larger mirror macromolecules. “We tend to be siloed,” Glass says. And nobody, he says, had thought to seriously examine the immune system concerns that had already been raised in response to earlier work. “There was not an immunologist or an infectious disease person in the room,” Glass says, reflecting on the 2019 meeting. “I may have come closest, given that I work with pathogenic bacteria and viruses,” he adds, but his work doesn’t address how they cause infections in their hosts.
These scientists also didn’t know that around the same time as their meeting, another conversation about mirror life was happening—a darker dialogue that was as focused on danger as it was on discovery. Starting around 2016, researchers with a nonprofit called Open Philanthropy had begun compiling research files on catastrophic biological risks. The organization, which rebranded as Coefficient Giving in 2025, funds projects across a range of focus areas; it adheres to a divisive philanthropic philosophy called effective altruism, which advocates giving money to projects with the highest potential benefit to the most people. While that might not sound objectionable, critics point out that the metrics devotees use to gauge “effectiveness” can prioritize long-term solutions while neglecting social injustices or systemic problems.
Someone in Open Philanthropy’s biosecurity group had suggested looking into the risks posed by mirror life. In 2019 the organization began funding research by Kevin Esvelt, who leads the Sculpting Evolution group at the MIT Media Lab, on biosecurity issues, including mirror life. He began reading up to see whether mirror life was something to worry about.
Esvelt made waves in 2013 for pioneering the use of CRISPR to develop a gene drive, a technology that could spread genetic changes introduced into a living organism through a whole population. Researchers are exploring its use, for example, to make mosquitoes hostile to the parasite that causes malaria—and, as a result, lower their chance of spreading it to humans. But almost immediately after he developed the tool, Esvelt argued against using it for profit, at least until proper safeguards could be set and its use in fighting malaria had been established. “Do you really have the right to run an experiment where if you screw up, it affects the whole world?” he asked, in this magazine, in 2016. At the Media Lab, Esvelt leads efforts to safely develop gene drives that can be deployed locally but prevented from spreading globally.
Esvelt says he’s often thinking about the security risks posed by self-sustaining genetically engineered technologies, and research led him to suspect that the threat of mirror organisms hadn’t been seriously interrogated. The more he learned about microbial growth rates, predator-prey and microbe-microbe interactions, and immunology, the more he began to worry that mirror organisms, if impervious to the innate defenses of natural ones, could cause unstoppable infections in the event that they escaped the lab.
Even if the first experimental iteration of such a germ were too fragile to survive in the environment or a human body, Esvelt says, it would be a light lift to genetically engineer new, more resilient versions with existing technology. Even worse, he says, the results could be weaponized. The possible path from 2019 to global annihilation seemed almost too direct, he found.
But he wasn’t an expert in all the scientific fields involved in research on mirror life, so he started making calls. He first described his concerns to Relman one night in February 2022, at a restaurant outside Washington, DC. Esvelt hoped Relman would tell him he was wrong, that he’d missed something over the years of gathering data. Instead, Relman was troubled.
The concern spreads
When Relman returned to California, he read more about the technology, the risks, and the role of chirality in the immune system and the environment. And he consulted experts he knew well—ecologists, other microbiologists, immunologists, all of them leaders in their fields—in an attempt to assuage his concerns. “I was hoping that they’d be able to say, I’ve thought about this, and I see a problem with your logic. I see that it’s really not so bad,” he says. “At every turn, that did not happen. Something about it was new to every person.”
The concern spread. Relman worked with Jack Szostak, a professor of chemistry at the University of Chicago, and a group of researchers to see if it was possible to make an argument that mirror life wasn’t going to wipe out humanity. Included in that group was Kate Adamala, a synthetic biologist at the University of Minnesota. She was a natural choice: Adamala had shared the initial grant from the NSF, in 2019, to explore mirror-life technologies.
She also became convinced the risk was real—and was dumbfounded that she hadn’t seen it earlier. “I wish that one sunny afternoon we were having coffee and we realized the world’s about to end, but that’s not what happened,” she says. “I’m embarrassed to admit that I wasn’t even the one that brought up the risks first.” Through late 2023 and early 2024, the endeavor began to take on the form of a rigorous scientific investigation. Experts were presented with a hypothesis—namely, that if mirror cells were built, they would pose an existential threat—and asked to challenge it. The goal was to falsify the hypothesis. “It would be great if we were wrong,” says Vaughn Cooper, a microbiologist at the University of Pittsburgh and president-elect of the American Society for Microbiology.
Relman says that as the chemists and biologists learned more about one another’s work and began to understand what immunologists know about how living things defend themselves, they started to connect the dots and see an emerging picture of an unstoppable synthetic threat.
Timothy Hand, an immunologist at the University of Pittsburgh who hadn’t participated in the 2019 NSF meeting, wasn’t initially worried when he heard about mirror life, in 2024. “The mammalian immune system has this incredible capability to make antibodies against any shape,” he says. “Who cares if it’s a mirror?” But when he took a closer look at that process, he could see a cascade of potential problems far upstream of antibody production. Start with detection: Macrophages, which are cells the immune system uses to identify and dispatch invaders, use chiral sensing receptors on their surfaces. The proteins they use to grab on to those invaders, too, are chiral. That suggests the possibility that an organism could be infected with a mirror organism but not be able to detect it or defend against it. “The lack of innate immune sensing is an incredibly dangerous circumstance for the host,” Hand says.
By early 2024, Glass had become concerned as well. Relman and James Wagstaff, a structural biologist from Open Philanthropy, visited him at the Venter Institute to talk about the possibility of using synthetic cell technology—Glass’s specialty—to build mirror life. “At first I thought, This can’t be real,” Glass says. They walked through arguments and counterarguments. “The more this went on, the more I started feeling ill,” he says. “It made me realize that work I had been doing for much of the last 20 years could be setting the world up for this incredible catastrophe.”
In the second half of 2024, the growing group of scientists assembled the report and wrote the policy forum for Science. Relman briefed policymakers at the White House, members of the defense community, and the National Security Agency. Researchers met with the National Institutes of Health and the National Science Foundation. “We briefed the United Nations, the UK government, the government of Singapore, scientific funding organizations from Brazil,” says Glass. “We’ve talked to the Chinese government indirectly. We were trying to not blindside anybody.”
A year and a half on, the push has had an impact. UNESCO has recommended a precautionary global moratorium on creating mirror-life cells, and major philanthropic organizations that fund science, including the Alfred P. Sloan Foundation, have announced they will not finance research leading to a mirror microorganism. The Bulletin of the Atomic Scientists highlighted considerations about mirror life in its most recent report on the Doomsday Clock. In March, the United Nations Secretary-General’s Scientific Advisory Board issued a brief highlighting the risks—noting, for example, that recent progress on building mirror molecules could reduce the cost of creating a mirror microbe.
“I think no one really believes at this stage that we should make mirror life, based on the evidence that’s available,” says James Smith, the scientist who leads the MBDF, the nonprofit focused on assessing the risks of mirror life, which is funded by Coefficient Giving, the Sloan Foundation, and other organizations. The challenge now, Smith says, is for scientists to work with policymakers and bioethicists to figure out how much research on mirror life should be permitted—and who will enforce the rules.
Drawing the line
Not everyone is convinced that mirror organisms pose an existential threat. It’s difficult to verify predictions about how mirror microbes would fare in the immune system—or the larger world—without running experiments on them. Some scientists have pushed back against the doomsday scenario, suggesting that the case against mirror life offers an “inflated view of the danger.” Others have noted that carbohydrates called glycans already exist in both left- and right-handed forms—even in pathogens—and the immune system can recognize both of them. Experiments focused on interactions between the immune system and mirror molecules, they say, could help clarify the risks of mirror organisms and reduce uncertainty.
Andy Ellington, a biotechnologist and synthetic biologist at the University of Texas at Austin, doesn’t think mirror organisms will come to fruition anytime soon. Even if they do, he isn’t sure they will pose a threat. “If there is going to be harm done to the human race, this is about position 382 on my list,” he says. But at the same time, he says it’s a complicated issue worth studying more, and he wants to see the conversations continue: “We’re operating in a space where there’s so much unknown that it’s very difficult for us to do risk assessment.”
Even among those convinced that the worst-case scenario is possible, researchers still disagree over where to draw the line. What inquiries should be allowed and what should be prohibited?
Adamala, of the University of Minnesota, and others see a natural line at ribosomes, the cellular factories that assemble amino acids into proteins. These would be a critical ingredient in creating a self-replicating organism, and Adamala says the path to getting there once mirror ribosomes are in place would be pretty straightforward. But Zhu, at Westlake, and others counter that it’s worth developing mirror ribosomes because they could possibly produce medically useful peptides and proteins more efficiently than traditional chemical methods. He sees a clear distinction, and a foundational gap, between that kind of technology and the creation of a living synthetic organism. “It is crucial to distinguish mirror-image molecular biology from mirror-image life,” he says. That said, he points out that many synthetic molecules and organisms containing unnatural components, including but not limited to the mirror-image subset, might pose health risks. Researchers, he says, should focus on developing holistic guidelines to cover such risks—not just those from mirror molecules.
Even if the exact risk remains uncertain, Esvelt remains more convinced than ever that the work should be paused, perhaps indefinitely. No one has taken a meaningful swing at the hypothesis that mirror life could wipe out everything, he says. The primary uncertainties aren’t around whether mirror life is dangerous, he points out; they have more to do with identifying which bacterium—including what genes it encodes, what it eats, how it evades the immune system’s sentinels—could lead to the most serious consequences. “The risk of losing everything, like the entire future of humanity integrated over time, is not worth any small fraction of the economy. You just don’t muck around with existential risk like that,” he says.
In some ways, scientists have been here before, working out rules and limits for research. Two years after the start of the covid-19 pandemic, for example, the World Health Organization published guidelines for managing risks in biological research. But the history is much deeper: Horrific episodes of human experimentation led to the establishment of institutional review boards to provide ethical oversight. In the early 1970s, in response to concerns over lab-acquired infections and biological warfare, the US Centers for Disease Control and Prevention established biosafety levels (BSLs), which govern work with potentially dangerous biological agents.
And in 1975—at the dawn of recombinant DNA research, which allows researchers to put genetic material from one organism into another—geneticists met at the Asilomar conference center in Pacific Grove, California, to hammer out rules governing the work. There were concerns over what would happen if some virus or bacterium, genetically engineered to have traits that would make it particularly dangerous for people, escaped from a lab. Scientists agreed to self-imposed restrictions, like a moratorium on research until new safety guidelines were in place. As a result of the meeting, in June 1976 the NIH issued rules that, among other things, categorized the risks associated with rDNA experiments and aligned them with the newly adopted BSL system.
Asilomar is often hailed as a successful model for scientific self-governance. But that perception reflects a tendency to recall the meeting through a nostalgic haze. “In fact, it was incredibly messy and human,” says Luis Campos, a historian of science at Rice University. Equally brilliant Nobelists argued on either side of the question of whether to rein in rDNA research. Technical discussions dominated; talks about who would be affected by the technology were missing. The meeting didn’t start establishing guidelines, says Campos, until the lawyers mentioned liability and lab leaks.
For now it’s unclear whether these examples of self-governance, which arose from the demonstrated risks of existing technologies, hold useful lessons for the mirror-life community. Three competing images of the future are coming into focus: Mirror life might not be possible, it might be possible but not threatening, or it might be possible and capable of obliterating all life on Earth.
Scientists may be censoring themselves out of fear and speculation. To some, shutting down the work seems necessary and urgent; to others, it is unnecessarily limiting. What’s clear is that the question of what to do about mirror life has been both illuminating and disorienting, pushing scientists to interrogate not only their current research but where it might lead. This is uncharted territory.
Stephen Ornes is a science writer based in Nashville, Tennessee.
User Experience and Early Clinical Outcomes of a Mental Wellness Chatbot for Depression and Anxiety: Pilot Evaluation Mixed Methods Study
Background: Artificial intelligence–powered conversational agents (ie, chatbots) are increasingly popular outlets for users seeking psychological support, yet little is known about how users experience early-stage prototypes or which therapeutic processes contribute to clinical improvement. A transparent evaluation of emerging chatbot prototypes is needed to clarify if, how, and why artificial intelligence companions work and to guide their continued development. Objective: This mixed methods pilot study evaluated user experience, acceptability, and preliminary clinical signals for an early-stage mental wellness chatbot. We also examined whether baseline symptom severity moderated clinical improvement. Methods: Three sequential cohorts (n=125) completed a 2-week, incentivized chatbot exposure (approximately 60 min per week). Participants provided first-impression ratings, qualitative feedback, and pre–post assessments of depressive symptoms (PHQ-8 [Patient Health Questionnaire-8]), anxiety symptoms (GAD-7 [Generalized Anxiety Disorder-7]), psychological distress, well-being, and loneliness. Statistical models estimated symptom change and tested interactions with baseline symptom severity. Mixed methods analysis integrated quantitative outcomes with large language model–assisted qualitative content analysis of open-ended responses. Results: Participants described the chatbot as accessible, easy to use, and emotionally validating, while citing limitations in personalization and conversational depth. Qualitative responses consistently highlighted early therapeutic processes such as emotional validation, goal setting, and perceived attunement. Regression models showed significant pre–post reductions in depressive (Hedges g=–0.32) and anxiety (g=–0.32) symptoms, alongside modest improvements in distress and well-being. Baseline severity moderated improvement, with marginal effects indicating larger predicted reductions at higher PHQ-8 and GAD-7 baseline scores (eg, PHQ-8=15: g=–0.84; GAD-7=15: g=–0.62). Conclusions: This pilot provides a comprehensive view of early chatbot development and suggests promising user experiences and preliminary symptom improvements under structured pilot conditions. By integrating experiential and exploratory clinical data, the study identifies candidate process targets to inform ongoing refinement. Findings support continued development and demonstrate procedural feasibility for progression to larger, longer-term trials evaluating engagement and clinical outcomes under more naturalistic conditions.
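To make the abstract's statistics concrete, here is a minimal, hypothetical Python sketch of the kind of analysis it describes: a pre–post effect size with the Hedges small-sample correction, and a simple regression testing whether baseline severity moderates improvement, with a predicted (marginal) change at a higher baseline score. The simulated data, column names, and model form are illustrative assumptions, not the study's actual code, dataset, or exact model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data (n=125, as in the pilot); a real analysis would use
# participants' actual pre/post PHQ-8 scores.
rng = np.random.default_rng(0)
n = 125
pre = rng.integers(0, 24, size=n).astype(float)       # baseline PHQ-8 (0-24)
post = 0.8 * pre + rng.normal(0.0, 3.0, size=n)       # simulated follow-up scores
df = pd.DataFrame({"pre": pre, "post": post, "change": post - pre})

# Pre-post effect size: standardized mean change with the Hedges
# small-sample correction J (one common way to report "Hedges g").
pooled_sd = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2.0)
d = (post.mean() - pre.mean()) / pooled_sd
J = 1.0 - 3.0 / (4.0 * (2 * n - 2) - 1.0)
print("Hedges g ~=", round(d * J, 2))

# Moderation by baseline severity: regress the change score on baseline.
# A negative slope means more-severe participants show larger reductions.
model = smf.ols("change ~ pre", data=df).fit()
print(model.params)

# Marginal (predicted) change at a higher baseline score, e.g. PHQ-8 = 15.
pred = model.predict(pd.DataFrame({"pre": [15.0]}))
print("Predicted change at PHQ-8 = 15:", round(float(pred.iloc[0]), 2))
```

The study itself may have used covariates, mixed models across the three cohorts, or a different standardization for the effect sizes; the sketch only shows the general shape of the computation.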
STAT+: Access granted: CMS greenlights more than 150 participants for chronic care experiment
More than 150 companies and providers have been provisionally approved to participate in an experimental Medicare program meant to expand access to technology-supported chronic care. They include popular mental health apps, wearable device makers, a life sciences company tied to Google, and startups that help large health systems manage heart failure patients.
Announced late last year by the Center for Medicare and Medicaid Innovation, the ACCESS model will pay participants set rates to treat chronic conditions like diabetes, hypertension, high cholesterol, musculoskeletal pain, anxiety, and depression. The payments are tied to measurable health outcomes; the model is meant as an alternative to paying for individual technology services. The initial deadline to participate in the first ACCESS cohort was April 1, but CMMI announced Monday that it will extend the deadline to allow more participants to join.
CMS officials say the large number of applications to participate in ACCESS exceeded their expectations and that the enthusiasm suggests modest payment rates and restrictions did not discourage digital health companies from applying. According to officials, most of the participants had not previously served Medicare patients.
Want to understand the current state of AI? Check out these charts.
If you’re following AI news, you’re probably getting whiplash. AI is a gold rush. AI is a bubble. AI is taking your job. AI can’t even read a clock. The 2026 AI Index from Stanford University’s Institute for Human-Centered Artificial Intelligence, AI’s annual report card, comes out today and cuts through some of that noise.
Despite predictions that AI development may hit a wall, the report says that the top models just keep getting better. People are adopting AI faster than they picked up the personal computer or the internet. AI companies are generating revenue faster than companies in any previous technology boom, but they’re also spending hundreds of billions of dollars on data centers and chips. The benchmarks designed to measure AI, the policies meant to govern it, and the job market are struggling to keep up. AI is sprinting, and the rest of us are trying to find our shoes.
All that speed comes at a cost. AI data centers around the world can now draw 29.6 gigawatts of power, enough to run the entire state of New York at peak demand. Annual water use from running OpenAI’s GPT-4o alone may exceed the drinking water needs of 12 million people. At the same time, the supply chain for chips is alarmingly fragile. The US hosts most of the world’s AI data centers, and one company in Taiwan, TSMC, fabricates almost every leading AI chip.
The data reveals a technology evolving faster than we can manage. Here’s a look at some of the key points from this year’s report.
The US and China are nearly tied
In a long, heated race with immense geopolitical stakes, the US and China are almost neck and neck on AI model performance, according to Arena, a community-driven ranking platform that allows users to compare the outputs of large language models on identical prompts. In early 2023, OpenAI had a lead with ChatGPT, but that lead narrowed in 2024 as Google and Anthropic released their own models. In February 2025, R1, an AI model built by the Chinese lab DeepSeek, briefly matched the top US model, ChatGPT. As of March 2026, Anthropic leads, trailed closely by xAI, Google, and OpenAI. Chinese models from DeepSeek and Alibaba lag only modestly. With the best AI models separated in the rankings by razor-thin margins, they’re now competing on cost, reliability, and real-world usefulness.

The index notes that the US and China have different AI advantages. While the US has more powerful AI models, more capital, and an estimated 5,427 data centers (more than 10 times as many as any other country), China leads in AI research publications, patents, and robotics.
As competition intensifies, companies like OpenAI, Anthropic, and Google no longer disclose their training code, parameter counts, or data-set sizes. “We don’t know a lot of things about predicting model behaviors,” says Yolanda Gil, a computer scientist at the University of Southern California who coauthored the report. This lack of transparency makes it difficult for independent researchers to study how to make AI models safer, she says.
AI models are advancing super fast
Despite predictions that development will plateau, AI models keep getting better and better. By some measures, they now meet or exceed the performance of human experts on tests that aim to measure PhD-level science, math, and language understanding. SWE-bench Verified, a software engineering benchmark for AI models, saw top scores jump from around 60% in 2024 to almost 100% in 2025. In 2025, an AI system produced a weather forecast on its own.
“I am stunned that this technology continues to improve, and it’s just not plateauing in any way,” says Gil.

However, AI still struggles in plenty of other areas. Because the models learn by processing enormous amounts of text and images rather than by experiencing the physical world, AI exhibits “jagged intelligence.” Robots are still in their early days and succeed in only 12% of household tasks. Self-driving cars are farther along: Waymos are now roaming across five US cities, and Baidu’s Apollo Go vehicles are shuttling riders around in China. AI is also expanding into professional domains like law and finance, but no model dominates the field yet.
But the way we test AI is broken
These reports of progress should be taken with a grain of salt. The benchmarks designed to track AI progress are struggling to keep up as models quickly blow past their ceilings, the Stanford report says. Some are poorly constructed—a popular benchmark that tests a model’s math abilities has a 42% error rate. Others can be gamed: when models are trained on benchmark test data, for example, they can learn to score well without getting smarter.
AI companies are also sharing less about how their models are trained, and independent testing sometimes tells a different story from what they report. “A lot of companies are not releasing how their models do in certain benchmarks, particularly the responsible-AI benchmarks,” says Gil. “The absence of how your model is doing on a benchmark maybe says something.”
AI is starting to affect jobs
Within three years of going mainstream, AI is now used by more than half of people around the world, a rate of adoption faster than the personal computer or the internet. An estimated 88% of organizations now use AI, and four in five university students use it.
It’s early days for deployment, and AI’s impact on jobs is hard to measure. Still, some studies suggest AI is beginning to affect young workers in certain professions. According to a 2025 study by economists at Stanford, employment for software developers aged 22 to 25 has fallen nearly 20% since 2022. The decline might not be pinned on AI alone, as broader macroeconomic conditions could be to blame, but AI appears to be playing a part.

Employers say that hiring may continue to tighten. According to a 2025 survey conducted by McKinsey & Company, a third of organizations expect AI to shrink their workforce in the coming year, particularly in service and supply chain operations and software engineering. AI is boosting productivity by 14% in customer service and 26% in software development, according to research cited by the index, but such gains are not seen in tasks requiring more judgment. Overall, it’s still too early to understand the bigger economic impact of AI.
People have complicated feelings about AI
Around the world, people feel both optimistic and anxious about AI: 59% of people think that it will provide more benefits than drawbacks, while 52% say that it makes them nervous, according to an Ipsos survey cited in the index.
Notably, experts and the public see the future of AI very differently, according to a Pew survey. The biggest gap is around the future of work: While 73% of experts think that AI will have a positive impact on how people do their jobs, only 23% of the American public thinks so. Experts are also more optimistic than the public about AI’s impact on education and medical care, but they agree that AI will hurt elections and personal relationships.

Among all countries surveyed, Americans trust their government least to regulate AI appropriately, according to another Ipsos survey. More Americans worry federal AI regulation won’t go far enough than worry it will go too far.
Governments are struggling to regulate AI
Governments around the world are struggling to regulate AI, but there were some minor successes last year. The EU AI Act’s first prohibitions, which ban the use of AI in predictive policing and emotion recognition, took effect. Japan, South Korea, and Italy also passed national AI laws. Meanwhile, the US federal government moved toward deregulation, with President Trump issuing an executive order seeking to block states from regulating AI.
Despite this federal action, state legislatures in the US passed a record 150 AI-related bills. California enacted landmark legislation, including SB 53, which mandates safety disclosures and whistleblower protections for developers of AI models. New York passed the RAISE Act, requiring AI companies to publish safety protocols and report critical safety incidents.

But for all the legislative activity, Gil says, regulation is running behind the technology because we don’t really understand how it works. “Governments are cautious to regulate AI because … we don’t understand many things very well,” she says. “We don’t have a good handle on those systems.”
Constellations
I.
We had crash-landed on the planet. We were far from home. The spaceship could not be repaired, and the rescue beacon had failed. Besides me, only the astrogator, part of the captain, and the ship’s AI mind were left.
Outside, the atmosphere registered as hostile to most organisms. We huddled in the lifeboat, which was inoperable but still held air. Vast storms buffeted our cockleshell shelter, although we knew from prior readings that other areas remained calm. All that remained to us was to explore, if we wanted to live. The captain gave me the sole weapon. She tasked the astrogator with carrying some tools that would not unduly weigh him down.
Little existed on the planet except deserts of snow. But alien artifacts lay in an area near us. We were an exploration team, so this discovery had oddly comforted us, even though we had been on our way elsewhere. The massive systems failure had no discernible source, and the planet had been our only choice for landfall.
The artifacts took the form of 13 domes, spread out over that hostile terrain. The domes had been linked by cables just below shoulder level, threaded through the tops of metal posts at irregular intervals. Whether intended or not, these cables and rods formed a series of paths between the domes.
Before our instruments failed, the AI had reported that the domes appeared to have a heat signature. The cables pulsed under our grip in a way that teased promised warmth far ahead. It took some time to get used to the feeling.
The shortest path between domes was a thousand miles long. The longest path was ten thousand miles long. Our suit technology was good: A suit could recycle water, generate food, create oxygen. It could push us into various states of near hibernation while motors in the legs drove us forward. For the captain, the suit would compensate for having lost her legs and ease her pain. We estimated we could reach the nearest path and follow it to the nearest dome … and that was it. If the dome had life support capabilities, or even just a way to replenish our suits, we would live. Otherwise, we would probably die.
We revised the estimate of our survival downward when we reached the path and soon encountered the skeletons of dead astronauts littering the way. In all shapes and sizes, cocooned within their suits. Their huddled forms under the snow displayed a serenity at odds with their fate. But when I wiped the frost from face plates, we saw the extremity of their suffering.
It is difficult to explain how we felt walking among so many fatalities. So many dead first contacts.
We no longer had to puzzle over the systems failure. Spaceships came here to crash, and intelligent entities came here to die, for whatever reason. We could not presume our fate would be any different, and adjusted our expectations accordingly. The AI’s platitudes about courage did not raise morale. There were too many lost there in the frozen wastes.
The number of the bodies and their haphazard positioning hampered our ability to make progress to the dome. The AI estimated our chances of survival at below 50% for the first time. We would starve in our suits as the motors propelled us forward. We would become desiccated and exist in an elongation of our thoughts that made us weak and stupid until the light winked out. But still, we had no choice. So even in places where the dead in their suits were piled high, we would simply plunge forward, over and through them, headed for the dome.
What we would find there, as I have said, we did not know. But we were in an area of the galaxy where ancient civilizations had died out millions of years ago. We had been on our way to a major site, an ancient city on a moon with no atmosphere in a wilderness of stars.
Although our emotions fluctuated, a professional awe and curiosity about the dead eventually came over us. This created much debate over the comms. We had made a discovery for the ages, but our satisfaction was bittersweet. Even if we lived longer than expected, we would never return home, never see our friends or family again. The AI might continue on after we were dead, but I doubt it envied being the one to report on our discovery centuries hence. And to whom?
Here were the ghastly emissaries of hundreds of spacefaring species we had never before encountered. Their suits displayed an extraordinary range, although our examination was cursory. Some even appeared to be made out of scales and other biological substances from their home worlds, giving us further clues as to their origins.
The burial of the suits by snow and the lack of access to anything other than a screaming face or faces, often distorted by time and ice, worked against recording much usable data. This issue was compounded in those cases where the suit was part of the organism and they had not needed any “artificial skin,” as the AI put it, to survive harsh conditions. That many had died despite appearing well-prepared for the planet’s environment sobered us up even before our own suits dispensed drugs to help our mental states.
After a time, each face seemed to express some aspect of our own stress and terror at the seriousness of our situation. After a time, the sheer welter of detail defeated us and caused us extreme distress. The captain made the observation that even one instance of alien contact might cause physiological and mental conditions, including anxiety, stress, fatigue. Here, we were constantly encountering the alien dead of what seemed at times an infinite number of civilizations.
We stopped recording. We recommitted ourselves to the slog toward the nearest dome.
The captain’s drugs unit had failed, but the AI found a way to help her by turning off the heating element in select panels of her suit. Some parts of her would soon be lost to the cold, but the system would allow her to live on with some measure of comfort.
I must admit, we were just glad the screaming had stopped and welcomed her counsel.
II.
For a long time, as we labored in our spacesuits on that planet—following the path, beleaguered by snowstorms—we could not understand why we found so many dead astronauts, of so many unknown alien types, and yet no spaceships. During good visibility, our line of sight reached, unbroken, for 500 miles. Where were the crash sites?
But one day we chanced upon an antenna sticking up out of the ground. Clumsy attempts at excavation soon revealed that below this antenna lay a vast dead spaceship of a kind we had never seen before. The gash that had opened it to the elements had laid bare its unique architecture, but also gave the illusion that the snow had spilled out of it to create the world around us rather than having infiltrated and accumulated inside over time.
Aspects of the spaceship’s texture gave the startling suggestion that it had been made of some ultra-hard wood or wood equivalent. Clambering partway up to stare at the inner compartments, we all felt the strangeness of the dimensions and proportions of the living quarters. There was no sign of the occupants. Perhaps, I suggested, they had headed for the domes. Perhaps they had even made it to the domes. I tried and failed to keep hope from my voice.
But the captain had ordered the AI to perform a materials analysis. The “snow” in this region had been contaminated by ash and tiny particles of bone. The AI estimated that more than 70% of the white surrounding us was made of the remains of vertebrate sentient life and the remnants of suits. Of invertebrates there was no telling. A thaw might bring not just the drip, drip of water but a shushing sound indicative of bone particulate in the mixture. I imagined there might even be the clink of small objects not rendered down by whatever intense heat had created the ash.
The astrogator had insisted on digging deeper into the ship, with the idea that some recognizable commonality between technologies might yield a part or parts with which he could fix our ship. The rest of us allowed this delusion for the obvious reasons. But upon his return, he held in his hands ovals of snow not much larger than the space formed by the circle between a thumb and finger. Many of them had soft indentations, as one might find in the afterbirth of reptiles from eggs. A kind of ghostly cilia-like tread appeared along the bottoms of these objects.
The astrogator did not find any technology of use to us. Instead, he discovered that the species piloting the spaceship had been so different from us as to be safely encapsulated in suits the size of eggs. Much of what had spilled into or spilled out of the gash constituted the bodies of the crew, in their hundreds of thousands. Their suits had been inadequate to the conditions. They had died en masse attempting to escape their own ship.
The AI speculated that it had been a generation ship, perhaps fleeing a planet with a dying star. If we wondered how the AI had reached this conclusion, it was because we did not want it to be true.
The captain became silent upon receiving this further news and did not speak to us for more than 100 miles of further progress.
As we left that site, unsure exactly what we stepped upon, we also knew that since the spaceship was entirely covered by snow, it had been falling into the sediment for days or months or years. We knew then that our ship might not be visible against the horizon should we retrace our steps. The already bleak probability of rescue through visual identification of a crash site from above would be lost to us in time, even as the line of cables remained perpetually visible to the horizon. We now thought of the planet as a trap. But of what sort?
III.
We could not be sure, but in the absence of the captain’s voice, it may have been the AI that put forward the idea of the planet’s being “duplicitous.” The phrasing concerned us, for there was a duplicity in using the planet as the subject of the spoken sentence. A sphere rotating around a sun in deep space could not exhibit forethought or premeditation or other qualities of sentience.
The AI meant whoever or whatever had created the conditions on the planet that allowed spacecraft to be trapped and then the occupants placed in a perilous situation with no recourse. But I distinctly recall the AI using the words “the planet.” In addition to being inaccurate, this also let us know that the AI did not have any analysis available that might help us understand the agency and motivations acting upon us.
But in a sense, the AI only voiced something I had felt for several miles: that there existed an overlay to the planet’s surface, an area or space or different landscape unavailable to us. This overlay had also not been available to any of the prior astronauts who had died here. In this area or space or different landscape existed a wealth of the usual hoped-for things: a breathable atmosphere and abundant food and water.
While we struggled with the line through the snow and through the storms that welled up, others could see us but chose to ignore us, for reasons unknown or perhaps just for their own well-being. For hundreds, possibly thousands of years, as explorers had died here in merciless and terrible ways, there raged a sumptuous feast for the senses, as excessive as it was ancient and unending.
I cannot tell you how powerfully the AI’s words struck us, so that our mouths watered at the thought of real food and of clean, unrecycled water, of a freedom unencumbered by suits and breathing apparatus. Even at our intended destination, we would have spent most of our days aboard a small space station. This tedium would have been broken only by the arduous process of reaching the unbreathable surface and its ancient ruins of jagged black stone.
This vision that overtook us functioned not just as tantalizing delusion. It scared us so much that we could not compartmentalize it in our thoughts. It continued to overwhelm us like a wave.
We fought for the first time, with the astrogator expressing the wish to return to the ruined spacecraft and explore nearby areas for parts, while the captain broke silence to order us to continue to make progress toward the nearest dome. The AI, which had brought us to this point, stole the captain’s silence and said no more.
For each of us, those endless white plains with no real elevation, just the metal rope and the metal posts, had become a kind of repetition that hurt the brain, and the mind with it.
As I looked out across the white, I could not help seeing the impression of shapes in the wind, as if invisible entities fled by, carried there by gusts, unable to get purchase, swept up for hundreds and hundreds of miles before being dashed to the ground.
We did not give up, however.
IV.
About halfway to the nearest dome, amid a storm that reduced our progress to increments and our line of sight to nothing, we came upon a peculiar tableau.
Six astronaut suits had fallen across and around the metal rope. With the flurries of snow, it took us, even with our powerful headlamps, some minutes to determine the nature of the obstruction. The six suits had been created for a humanoid species that must have had torsos like nine-foot-long slabs, attached to six limbs, three for walking. Their heads had flared out like thick fans. All the helmets were cracked open, and curled inside were the skeletons of some other intelligent species no larger than 40 or 50 pounds, possibly warm-blooded. With no sign of the original occupants.
After a brief analysis cut short by the conditions, we postulated that the warm-blooded species had worn breathable skin suits that, as they failed, required these intruders to seek shelter. All they could find were these six dead astronauts. Because we could discover no trace of the original occupants, the AI put forward the theory that this smaller species had eaten every scrap of the remains within the suits.
Then they too had perished, and in time, the AI suggested, something smaller would take up residence inside those bodies, then smaller still within those, and smaller still—
At this point, the captain attempted a soft reboot of the AI using a coded question. We could hear the concern in her voice.
Yet the AI continued undeterred, suggesting that we might find this to be a common situation. It might be replicated across the planet, depending on a system’s ability to break down and process meat that had not evolved alongside the devourer for millions of years. In all likelihood, most who attempted to eat in this way died soon after, poisoned by alien flesh.
The astrogator had taken to muttering inside his suit, off comms, as if he no longer thought we functioned as a team. No amount of castigation from the captain served to change his mind.
In the terse harshness of the captain’s reprimand, I recognized that her pain levels had spiked once again.
V.
The AI began to talk to us in strange alien voices at mile 700, as we labored through the snowstorm to hold onto the cables and thus the path. The AI warbled and chirped and howled and hummed and clucked. The AI spoke in voices like fossilized choruses of beasts, vast and harmonious. And in voices like dry grass spun to fire by the sun. And in voices like the dissolution of all things, darkness in the blinding white that scared me.
At first we thought the AI was deranged. Then that the AI channeled voices from the dome 300 miles ahead. But finally, the AI managed to make known to us that these were the voices of the dead astronauts we had come across from time to time. Huddled frozen. The suits in so many shapes and sizes. That the voices of the dead were channeled through the AI, and nothing could stop them.
We chose to believe that the AI had begun to malfunction. We did not waste time with a response. The captain asked the AI to perform self-shutdown and whispered the numbers in the correct sequence. We knew what we lost with this act, and yet we knew if we did not shut down the AI it might become harmful to us beyond the mental distress of what it had just conveyed to us.
Soon after, the AI gave up its own voice, and all that came from it were the sounds of the others.
A little later, the AI no longer spoke at all.
VI.
The snow began to betray us, as the storms created different forms of ice. Often, our arms became weary, our legs cramping, and we had to rest with greater frequency. We came to accept the solid crunch that could support our weight. We came to reject the feather-light freshness that felt effortless underfoot but could give way just as easily as if it were air. In some places, slick purple-hued ice welled up in sluggish layers as if something half-alive. In others, we discovered strange islands of elevation, with brutal curls and curves that suggested two continental shelves had clashed in that space.
As we adapted to these conditions, and as conditions worsened and still we adapted, we came to feel an illusion of competency, one that made even the astrogator temporarily cheerful. The sounds through the comms of our efforts, the deeper breathing, the occasional muffled curse, seduced us in this regard. We felt that we were becoming adroit at handling the snow. We began to believe if we could only make it to the dome, we would be saved.
Yet this uptick in morale ran parallel to, rather than intersected with, the idea of our ultimate survival.
VII.
We lost track of the distance left to us without the AI to tell us. Or the captain, in her pain, no longer thought to issue updates. But across the distance left to us came sights beyond reckoning: three giant astronauts spaced 50 miles apart. Larger than most starships, each body lay sprawled across an area larger than several fields and in very different conditions.
The first had been badly burned and was thus unrecoverable, even in terms of salvage. The astronaut had crawled or pulled itself along for some distance. It had left a long smudge of black and red across that expanse. The alien species was, as ever, unknown to us, but the five arms were sunk in the ground as if in agony. The skull had once held three eyes, and the face plate had been cracked by force so strong it resembled a meteor strike. The body was bloated, the fabric of the suit gray with a shimmer of green that came and went, linked to photosensitive skin cells. The way the flesh took up space, and how it exhibited aspects more plant than animal, made it impossible to study further.
The second was a sprawl of limbs, with the suggestion of a defensive posture. The debris of conflict flared out to the side in an incomprehensible display. The suit had an intactness that surprised us, but a similar crack in the face plate without any trace of body within. The rest of the suit had become inhabited by a wealth of other dead astronauts of varying sizes and shapes, who had sought shelter or sustenance and then become trapped or simply … given up. As the AI had predicted, we had once again encountered bodies providing other bodies with temporary sustenance and shelter.
But this condition was not at first evident to us, becoming apparent only after we had clambered for an hour to reach the cracked face plate, where the entry hole extended like a broken archway before us.
Despite the number of remains within, and the difficulty in moving through them to explore, the captain ordered an exhaustive recon. Her pulse in the readings had a thready quality. Sometimes I felt, and the astrogator too when we took private comms, that the captain had begun to say things similar to the AI’s delusions. Yet we obeyed the order, on the chance that some internal calculation on the captain’s part meant she believed this was the only way we would survive.
What did we expect to find in the dead body of a once-intelligent giant? Food? Oxygen? Some cause of death? To put off the thought of our own death by seeking shelter with a death so large we could not comprehend it?
I felt like a parasite who beheld a god. Or was the scale even more ludicrous? I had trouble envisioning the way the body must have twisted as it pitched forward into that icy ground. I had trouble holding onto my own thoughts.
More and more pressure moved through my skull as I contemplated that scene. We were in the midst of something none of my kind had ever known. We might be the only ones, ever. I better understood the unraveling of the AI and of the captain. My sharpness had dulled, taking my calm with it.
It was impossible to tell how long the astronaut had taken to die. Unless somewhere within that fallen figure some hint of life hid that we would never find.
The storms fell away, rose, then fell away again.
VIII.
The third huge astronaut was full of light and life and shone out across the wasteland of snow like a beacon. For a moment, I thought we had pierced the invisible layer and could see what lay beyond the veil. We would have comforts beyond anything found on our ruined spaceship even when it had been fit to cross galactic space. There would not be recycled urine for our water. There would not be the faint stink of sweat creeping into our suits as the ventilation system began to fail. Our liquid food would not taste stale and moldy.
As we approached, the suit extended almost to the horizon in that foreshortened perspective created by the left foot. We noted through our remaining instrumentation that the suit remained intact. The pressure told us a kind of air circulated within its sealed surfaces.
We climbed with a renewed energy, the promise of sanctuary so close making us giddy. We each exhorted the others on with such exuberance that it made me a little afraid. What lay on the other side of this state of mind but a fall?
When we reached the helmet plate, we could see inside not a face or a skull, but instead such a richness of healthy growth that we fell silent before it. None of us could, I believe, understand exactly what we saw, except that it equaled ecosystem—resplendent with vibrant greens and blues, stippled with other colors. There might be some parallel to a terrarium full of moss and exotic plants. There might be some sense of life moving amongst those plants, as of jewel-like amphibians or even tiny shy sapphire birds. We could not smell or taste or hear what lay behind the face plate. We could not experience it in that way, but somehow we each imagined enough to be calmed and comforted by it.
The astrogator said he might be able to create a hole in the plate or elsewhere on the body to let us in, and then patch the surface such that not too much air or vitality would spill out. This workaround might take an hour or two, due to the delicate nature of what we saw within. But it was possible.
The captain considered the astrogator’s proposal and then agreed. The weather had begun to turn dangerous again. That we should begin immediately did not need to be said. With the proper pressure brought to bear, we would have some measure of sanctuary from which to recover for a final push to the dome. It could be the difference between life and death, the astrogator said. If the atmosphere was breathable, we might even be able to give the captain some better solution to her pain.
I unclipped the astrogator’s equipment from his waist and threw it off the mountain that was the astronaut and watched it sail through the air and into the snow. Then I used my weapon to fry it where it lay. Then I threw my weapon into the snow, too, in a place where the featheriness would cover it and hide it forever.
We were a team and I had helped my team while showing them I posed no threat—although I knew the astrogator and the captain would not see it that way. I stood there on the face plate that we could no longer open with the diminished tools at our disposal as they both yelled at me through the comms. It’s unimportant what they said to me. They were admonishing me for something that had already happened and that they had no power to stop. I did not bother to explain, but began to make the descent to the ground so we could once again take up the metal rope and make for the dome.
Will you follow, I asked them from the ground, when I saw they still stood on the heights. There came no reply, but when they saw me take up the rope, they climbed down to take up the rope too.
I waited then, and let them catch up.
IX.
The captain died not long after. The pain was too great or the wounds she had suffered too damaging. I had known for some time she would never make it to the dome, but there was no point in emphasizing that to her. Nothing she had done until the end had required her to be removed from command. Her last words were the name of our ship and her love for someone who would be dead of old age even if we found a way to escape this place and return home. But the astrogator told her he would carry those words forward.
Then we left her by the marker that meant we had 100 miles left to the dome. We knew the snow would cover her for burial. It had done so faithfully for all the rest.
As the astrogator followed me down the rope line, he cried out for explanation. The captain’s death required it for some reason, in his mind. The captain had not deserved my betrayal. The captain would not rest easy until I told him why.
You must believe in ghosts, I replied.
This reply incensed him and he castigated me in words not used among members of a team that respect each other. Once more, I ignored him, but told him if our oxygen got low, he could have mine if we calculated he could make it to the dome. I meant this, as I knew the odds were low anyway. I had hurt my knee taking the equipment from the astrogator and then making my way so rapidly down from the dead astronaut.
The astrogator did not reply, by which I knew he did not accept my answer.
The reason I took the tools and destroyed them is that the wind had told me something it had not whispered to the captain or the astrogator. The wind had not spoken to me before, so I believed what it told me. That the astronaut within the suit lived on, if unable to move. That what we saw on the outside and registered as ecosystem, as separate “plants” and “animals,” instead formed a composite life-form and that to crack open the suit or cut through the suit at a leg would have been a violation.
That in that frozen hellscape, the persistence of life in that manner, an oasis in the midst of nothing, could be categorized as a miracle.
I would not snuff that out. I could not allow that to be snuffed out. But I remembered too how I felt looking at that vast and alien country behind the face plate. So calm, so comforted, overcome by the depths of an emotion I could not place. Would I replace that feeling with the feeling of seeing all those explorers dead within the other vast suit? Even as I become one of them?
Because the planet had already told us the rules, the consequences, and the ultimate outcome. There are no odds so terrible that they could not be experienced, and in dozens of ways, in this place.
So I trudged on and the astrogator cursed me and cursed me and called out my childhood and how badly I must have been brought up and how I must have cheated to pass the psych exams, and yet I had thought the same of him at various points during our journey.
See how beautiful the snow is, falling now, I said to him over the comms. See how precise and geometric this line we follow across this expanse.
He did not reply, but a little later he told me he no longer believed in the line at all, and by his calculations he would get to the dome faster if he abandoned it and struck out on his own.
I could not stop the astrogator and did not want to, so I watched him become a smaller and smaller figure against the white until the white ate him up and I was alone.
X.
I have been walking a long time, visiting with the dead. Here, against an arch of heaven that appears no different than what I see directly in front of me.
Jeff VanderMeer is the author of the critically acclaimed, bestselling Southern Reach series, translated into 38 languages. His short fiction has appeared in Vulture, Slate, New York Magazine, Black Clock, Interzone, American Fantastic Tales (Library of America), and many others.
In Memoriam: Edna B. Foa, PhD
Dr. Edna Foa served for decades as a professor of clinical psychology in psychiatry at the University of Pennsylvania, where she also directed the Center for the Treatment and Study of Anxiety (CTSA), the internationally renowned program she founded in 1979. Through the CTSA, Edna created not only a hub for groundbreaking research, but also a training ground that would shape the future of evidence-based treatment for anxiety, obsessive compulsive disorder (OCD), and post-traumatic stress disorder (PTSD).
At a time when OCD was poorly understood and often ineffectively treated, Edna helped establish and rigorously validate exposure and response prevention (ERP) as a gold-standard intervention. Building on the early behavioral work of pioneers before her, she brought a level of empirical precision, clinical sophistication, and dissemination that transformed ERP from a promising approach into a cornerstone of modern treatment. In doing so, she fundamentally changed what recovery could look like for millions of people living with OCD.
Her influence extended well beyond OCD. Dr. Foa was also a central figure in the development of cognitive-behavioral models and treatments for PTSD, including prolonged exposure therapy, which has become one of the most widely used and effective interventions for trauma-related disorders. Across both domains, her work exemplified a rare integration of theory, research, and clinical application—always grounded in a singular goal: to reduce suffering and restore lives.
Her connection to the International OCD Foundation (IOCDF) was a natural extension of her commitment to bridging science and real-world impact. Edna was deeply engaged with the IOCDF community over many years, contributing to its mission of improving access to effective treatment and advancing understanding of OCD. The Foundation honored her with the Outstanding Career Achievement Award in 2011. She was a frequent presence at conferences, where she not only shared her research but also helped elevate the standards of clinical care through teaching, mentorship, and collaboration.
The IOCDF’s growth into a global leader in OCD advocacy, education, and training reflects, in many ways, the scientific foundation that Edna helped build. Her work made it possible for organizations like the IOCDF to promote treatments that are not only evidence-based, but truly life-changing. And through her direct involvement, she helped ensure that the connection between research and practice remained strong, dynamic, and accessible.
Edna Foa showed us what it means to dedicate a life to advancing knowledge in the service of humanity. She illuminated a path forward for so many, and her influence will continue to guide the field for generations to come.
Below are several tributes to Dr. Foa from IOCDF community members.
From Jonathan Grayson, PhD
My mentor, Tom Borkovec, used to talk about our psychological lineage: that in 1979, you only had to go back a few generations of your “forefathers” to reach the founders of American psychology. In this respect, Tom is my psychology father – he taught me to discipline my thinking – he encouraged wild flights of speculation, but always to temper them in print with what could be researched and proved. With this in mind, Edna is my psychology mother. As I noted elsewhere, all of us who work with OCD are her children, grandchildren, and so on.
I first met Edna in 1979 at Joseph Wolpe’s Behavior Therapy Unit at Temple University. She hired me as an adjunct research assistant professor. This was in the ancient days at the height of the first wave. There was no cognitive behavioral therapy. ABCT was AABT, the Association for Advancement of Behavior Therapy. The disorder we were studying was OC; the DSM label of obsessive compulsive disorder did not yet exist. Edna was on the first of her landmark OC grants.
She was the flashpoint for all that we do with OCD. Don’t get me wrong, she didn’t invent ERP, but her work was/is the basis of all OCD treatment today. In the same way, cognitive therapy techniques existed before Aaron Beck, but his work was the flashpoint of that second wave; and the techniques of ACT predated Steven Hayes, but his work and thinking were the flashpoint of the third wave. There was no OC Foundation.
I joined Edna and Gail Steketee, and to work with Edna was always a collaboration. So many hours of discussing, designing, and analyzing research. Writing papers together often until midnight and beyond. You may have heard that Edna was demanding. She was, but that had nothing to do with the hours we worked. The same clinical skills she used with patients, she used in choosing those who worked with her. We were all driven. There are those who found her direct delivery difficult, but it wasn’t anger or belittling; it wasn’t intimidating (okay, maybe a little); she was simply direct, without sugarcoating. The truth about Edna was that she was caring and very generous.
As I said, our research was a collaboration and the order of authors on publications reflected our contributions. If you had a research idea that was tangential to her main projects, she would support you. When I told her I thought we should have support groups to help sufferers maintain their gains, I was given a free hand to develop and run GOAL as I saw fit. When my son was nine months old and I told Edna that I was going to change my work hours to one and a half daytime hours, with the rest after 4 pm, she accepted this. She didn’t have to admonish me or warn me to do my job; Edna knew the kind of people she had chosen. She wanted the people who worked with her to grow. When it came time for me to move on, she was like any parent, sorry for me to go but happy for me to pursue my life. She was like that with all of us. So many of those who have shaped the OCD world worked with Edna. While I was there, Michael Kozak joined the team, and later Edna and Michael published their groundbreaking paper on emotional processing. Alec Pollard, Charly Mansueto, and Rich McNally also passed through our center. Marty Franklin and Jon Abramowitz came after me, making up the many generations of her “children.”
For those whom I’ve neglected to mention, forgive me, but the list is too long. My OCD career began in 1979. Her loss is a hole in the fabric of reality, but her legacy and wisdom live on through all of us whose OCD psychological lineage can be traced back to Edna Foa.
From Marty Franklin, PhD
I am writing this tribute while waiting at an airport gate for a flight to a national conference. Over the course of the next few days I will have the opportunity to present applied research data, participate in a clinical roundtable about OCD and its treatment, and engage with colleagues as we toss around ideas for how best to move the field forward. Edna’s profound influence on my career, my life, and even my thinking is most often accessible during relatively quiet moments like this, where opportunities for reflection make their way forward amidst the work I have committed myself to doing. Indeed, I learned of Edna’s passing a few weeks ago while right in the middle of presenting a clinical training about exposure-based treatments for OCD. I paused for a moment to take it all in, but before I could decide how best to proceed under the circumstances, I heard Edna’s voice, in her characteristic and unmistakable Israeli accent, telling me that these clinicians took time out of their busy schedules to receive this training, and therefore I must continue straight through to the end. My feelings? You can process those later. Classic Edna.
My very first day of internship in 1991 at the Medical College of Pennsylvania was spent in Edna’s presence at her Center for the Treatment and Study of Anxiety, the unit she established in 1979 to develop, test, and disseminate cognitive-behavioral interventions for anxiety and related conditions. Edna’s work even by then was highly influential, and her legend was already well in the making. At that initial meeting, Edna slid a formidable stack of old-school medical charts across the table to me and said, “Marty, is it? These are your OCD cases for this rotation.” I thanked her, then asked the first of myriad naïve questions in the legendary Tuesday Meetings: “When will I receive the training to treat these cases?” She pivoted back to look at Michael Kozak, her Clinical Director, as if to wax nostalgic about the process of indoctrinating yet another green intern. Edna then gestured at the pile, and said, “The training is in there.” Edna was a fine clinician too, and thus read well my horrified expression, then offered, “But don’t worry: we’ll help you.” True to her word, she did exactly that.
Edna’s influence on the field broadly speaking – on the development and expansion of cognitive-behavioral theory, on using clinical science to alleviate human suffering, and on pushing the proverbial envelope – has been chronicled elsewhere and cannot ever be overstated. Edna was one of the true pillars of clinical psychology, and the effects of her work will live on in perpetuity, of that I have little doubt. What was less well known, except to those of us fortunate enough to have been mentored by Edna, was the incredible amount of time and emotional investment she made in seeding the field with the next generation of theorists, scholars, and clinicians who would carry that work forward in the years to come. I count myself in that incredibly lucky group, all of whom were blessed by her personal investment in our training and careers. Edna had exacting standards for herself and for us, and fully expected that same level of investment and intensity on our part. Vigorous debate was just part of the process, where occasionally the fur would fly. But Edna also knew us well enough to understand what each of us needed in order to help us make the commitment needed to join her in the vanguard. In one of our many career development conversations back in the mid-1990s, likely in her East Falls office well after 8 pm, I was fretting about the “soft money” environment of academic psychiatry, and openly wondering if it was time to pivot to hard-line academic psychology or even to private practice. Edna stopped my rumination dead in its tracks, looked into the depths of my soul (which she did regularly), and said, “It’s only soft money if you can’t get it…and I know you can get it. Plus, academia is a really fun way to make a living, and a life.” Edna Foa believed in me: it was about damn time to believe in myself as well, and to make the commitment required to honor that belief. And to always keep pushing to get better at the work, which is truly a never-ending process.
Sitting in this airport now, on my way to give another set of talks on topics I have come to know very well and continue to pursue with the passion that comes from also believing that this work is vital, I concur with Edna’s assessment of academia, and am truly grateful that I listened. Thank you, Edna, for illuminating a path forward for me, as I know you did for countless others. You were unforgettable, and your work will continue on in the hands of those you mentored and trained to carry on the legacy.
From Gail Steketee, PhD, MSW
I had the pleasure and helpful educational challenge of training under Dr. Edna Foa beginning in 1976 and continuing for a decade during which I worked closely with her studying OCD and co-authoring manuscripts and federal grant applications. Edna generously provided me with excellent clinical supervision during my training at the Behavior Therapy Unit at Temple University where I learned how to treat phobias, agoraphobia and panic, and especially OCD. Edna’s encouragement and specific feedback guided my understanding of patients and how to provide effective treatment. Her supervision coincided with the end of her important early study of the impact of exposure and response prevention, following in the steps of Victor Meyer, Isaac Marks, and Jack Rachman. I treated the last few patients with OCD in her study and co-authored a case report stemming from that work – my first published paper in the field in 1977.
Edna opened many doors for me to join colleagues around the world who were studying OCD and behavioral treatment methods. Together we wrote and published 26 papers and 14 book chapters. And I mean “together”. We would schedule writing times during which Edna generated ideas and spoke aloud in her heavily accented Israeli English while I contributed my thoughts and sharpened the language as we went along. Grant applications were a special challenge as NIMH became strict about page limits. More than once we stayed up all night writing grants to meet the deadline – we were both younger then – and once we actually drove to Bethesda to deliver a grant application just in time for the deadline. I joined Edna at many conferences in the U.S. (especially AABT [now ABCT] and OCF [now IOCDF]) and in Europe at EABCT and WCBCT (the World Congress of CBT). We met many delightful OCD researchers and clinicians – it was an exhilarating time. I traveled with Edna and friends to her home country of Israel where she treated us to delightful sights and experiences including the Dead Sea.
The 10-year period with Edna was a heady time as my career unfolded. She supported my decision to get a PhD in social work at Bryn Mawr while working full time with her on our research. Eventually, I left Temple to take a full-time faculty position at Boston University, arriving with a strong publication record already in hand thanks to Edna’s masterful training and modeling of how to design and conduct research, how to write papers that accurately reflected the study and its findings, how to write strong grant applications, and how to connect with energizing colleagues around the world. I am grateful for her mentoring, which enabled me to establish my own career and become a mentor to others. She was a brilliant theoretician who spawned impressive thinking and research on OCD, PTSD, behavior therapy, and related topics. Hers was a long and full life. She will be sorely missed.