STAT+: Biotech investors’ plea to Trump, and a busy M&A week

Want to stay on top of the science and politics driving biotech today? Sign up to get our biotech newsletter in your inbox.

The Trump administration is using newly announced 100% tariffs as leverage to push both large and small drugmakers into confidential pricing and manufacturing agreements.

Also, the burgeoning peptide craze is highlighting a trust gap in medicine, in which patients increasingly favor unproven treatments over well-established drugs.

Continue to STAT+ to read the full story…

Opinion: My patient would rather take a peptide than a statin. That reveals an uncomfortable truth in medicine

A patient came to my office recently and told me she had stopped her statin after two years on it. Her coronary artery calcium score was 280, and her LDL was 168, up almost 100 points since stopping. Her father had died from a heart attack at 58.

When I asked about the decision, she crossed her arms and furrowed her brow.

Read the rest…

Orchestrating the Development of a Sustainable Network IT Solution for a Research Network: Qualitative Participatory Multimethod Design

Background: Practice-based research networks (PBRNs) rely on sustainable and interoperable IT infrastructures to support coordination, data management, and long-term collaboration across geographically distributed primary care practices. Large federated initiatives, such as the German DESAM-ForNet (Initiative of German Practice-Based Research Networks) program, face substantial sociotechnical challenges, as diverse user groups, heterogeneous local systems, and multiple governance levels must align around shared digital solutions. Objective: The aim of this study was to design and evaluate a participatory, consensus-driven process for developing a sustainable and interoperable IT solution that supports the coordination of multiple regional PBRNs, and to identify the sociotechnical factors that influence how such a process unfolds. Methods: A qualitative participatory multimethod design combined an iterative consensus-based IT development process in a central working group, interdisciplinary domain-driven design workshops (N=40 stakeholders from 6 PBRNs), and qualitative content analysis of internal documents (2020‐2025). Members of the IT working group were nominated by networks based on IT responsibility and strategic involvement; workshop participants represented general practitioners, study nurses, researchers, and coordinators. Documents (meeting minutes, workshop artifacts, and decision logs) were coded inductively by 2 authors to trace sociotechnical dynamics and decision trajectories. Results: The analysis revealed pronounced differences in IT ambitions, resources (ranging from 2 to 90 person-months), and established practices across the 6 PBRNs, which resulted in divergent expectations and uneven readiness for joint development. This heterogeneity—spanning objectives from simple REDCap (Research Electronic Data Capture; Vanderbilt University) databases to comprehensive digitization strategies—necessitated network-specific bounded contexts within a federated architecture. Through iterative development, stakeholders reached consensus on 6 core use cases (base data management, screening or recruitment processes, study or event participation tracking, management of event participation, accreditation procedures, and standardized communication or data exchange) and 2 national proofs of concept: quarterly key performance indicator reporting and pseudonymized practice queries based on a shared core dataset. This collaborative process culminated in a 3-tier practice relationship management infrastructure that integrates local autonomy with central metadata management and connectors to the Medical Informatics Initiative and REDCap, and was endorsed by the steering committee as a scalable compromise balancing interoperability and data sovereignty. Conclusions: The study shows that developing a national, interoperable IT infrastructure for PBRNs depends as much on social and organizational alignment as it does on technical solutions. Iterative participatory collaboration, transparent governance, and early stakeholder engagement were essential for building shared understanding and trust. Strengthening these relational and organizational elements will be crucial for sustaining future implementation efforts and fully realizing the potential of federated data infrastructures in primary care research.
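To make the pseudonymized-query proof of concept concrete, here is a minimal sketch, assuming a toy core dataset with illustrative field names (none taken from the study's actual schema), of how a quarterly KPI report could be produced without exposing practice identities:

```python
import hashlib

# Hypothetical shared core dataset: one record per practice, per quarter.
# Field names and values are illustrative, not the study's actual schema.
core_dataset = [
    {"practice_id": "PBRN3-017", "quarter": "2024-Q4", "screened": 112, "enrolled": 9},
    {"practice_id": "PBRN5-002", "quarter": "2024-Q4", "screened": 54, "enrolled": 4},
]

SALT = "network-secret-salt"  # held only by the trusted central service

def pseudonymize(practice_id: str) -> str:
    """Replace the real practice ID with a stable, non-reversible token."""
    return hashlib.sha256((SALT + practice_id).encode()).hexdigest()[:12]

def quarterly_kpi_report(records: list[dict], quarter: str) -> list[dict]:
    """Report recruitment KPIs per pseudonymized practice for one quarter."""
    return [
        {
            "practice": pseudonymize(r["practice_id"]),
            "screened": r["screened"],
            "enrolled": r["enrolled"],
        }
        for r in records
        if r["quarter"] == quarter
    ]

print(quarterly_kpi_report(core_dataset, "2024-Q4"))
```

Because the salt stays with the central service, requesters receive stable tokens they can follow across quarters but cannot map back to individual practices, which is one way a federated design can balance interoperability with data sovereignty.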

AI Chatbots for Mental Health Self-Management: Lived Experience–Centered Qualitative Study

Background: Large language models (LLMs) now enable chatbots to engage in sensitive mental health conversations, including depression self-management. Yet their rapid deployment often overlooks how well these tools align with the priorities of people with lived experiences, which can introduce harms such as inaccurate information, lack of empathy, or inadequate crisis support. Objective: This study explores how people with lived experience of depression experience an LLM-based mental health chatbot in self-management contexts, and what perceived benefits, limitations, and concerns inform harm-mitigating design implications. Methods: We developed a technology probe (a GPT-4o–based chatbot named Zenny) designed to simulate depression self-management scenarios grounded in prior research. We conducted interviews with 17 individuals with lived experiences of depression, who interacted with Zenny during the session. We applied qualitative content analysis to interview transcripts, notes, and chat logs using sensitizing concepts related to values and harms. Results: We identified 3 themes shaping participants’ evaluations: (1) informational accuracy and applicability, including concerns about incorrect or misleading information, vagueness, and fit with personal constraints; (2) emotional support vs need for human connection, including validation and a judgment-free space alongside perceived limits of machine empathy; and (3) a personalization-privacy dilemma, where participants wanted more tailored guidance while withholding sensitive information and using privacy-preserving tactics. Conclusions: People with lived experience of depression evaluated LLM-based mental health chatbots through intertwined priorities of actionable information, emotional validation with clear limits, and personalization that does not require unsafe data disclosure. These findings suggest concrete design strategies to mitigate harms and support LLM-based tools as complements to, rather than replacements for, human support and recovery.
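For readers unfamiliar with technology probes, a probe like Zenny can be as simple as a scenario-specific system prompt wrapped around a chat API. The sketch below is an assumption-laden illustration, not the study's actual implementation: the prompt text, safety wording, and conversation handling are all hypothetical.

```python
from openai import OpenAI  # assumes the `openai` package and an API key are configured

client = OpenAI()

# Illustrative system prompt; the study's actual Zenny prompt is not reproduced here.
SYSTEM_PROMPT = (
    "You are Zenny, a supportive assistant for depression self-management scenarios. "
    "Offer evidence-informed self-care suggestions, avoid making diagnoses, and if "
    "the user mentions self-harm, encourage contacting a crisis line or a clinician."
)

def zenny_reply(history: list[dict], user_message: str) -> str:
    """Send the running conversation plus the new user message to the model."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}] + history
    messages.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

history: list[dict] = []
print(zenny_reply(history, "I've been sleeping badly and can't focus at work."))
```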

Commercial or industrial use of mental health data for research: primer and best-practice guidelines from the DATAMIND patient/public Lived Experience Advisory Group

Background: Routinely collected health data, such as that held by United Kingdom (UK) national health services (NHS), has important research uses. However, its use requires public trust and transparency. Access by commercial/industrial organisations is especially sensitive for the public, as is mental health (MH) data. Although existing MH data science guidelines emphasise patient/public involvement (PPI), they do not cover commercial uses specifically. Objectives: To develop patient- and public-led guidelines for the commercial and industrial use of MH data for research. Though UK-focused, their principles may apply internationally. Methods: A PPI Lived Experience Advisory Group (LEAG) was created within DATAMIND, a UK data hub for MH informatics. Initial discussion yielded a requirement for definitions and explanations of concepts relating to MH data research, developed iteratively. Subsequently, the LEAG developed guidelines via a qualitative quasi-Delphi approach. The agreed scope excluded data provided for research with informed consent, data processing arrangements (e.g. companies hosting electronic systems on the instruction of health services), and compliance with legal minimum requirements. The scope included the use of routinely collected MH data for research by commercial/industrial organisations without explicit consent, and aspects of industry-led MH data collection conducted with consent. Results: Alongside the primer in MH data research concepts, the LEAG provide best-practice guidelines relating to commercial/industrial research use of MH data, for organisations controlling MH data (such as NHS bodies) and for commercial applicants seeking access. Core principles include transparency, patient rights, meaningful PPI, stringent governance, and statistical disclosure control. The guidelines recommend a risk–benefit approach to assessing data access applications, within limits that include avoiding the export of unconsented patient-level data outside NHS-controlled secure data environments, and not providing commercial applicants with access to unconsented free-text MH data. Further recommendations for NHS executive and regulatory bodies relate to public choice and transparency, clarity of guidance to research-active NHS organisations, and support for de-identification. Conclusions: MH data research requires patient/public involvement and understanding. These guidelines reflect the views of people with personal or family experience of mental ill health. We hope they are useful to the MH research community and increase public transparency and trust.

Synergies in psychedelic-assisted therapy: a qualitative interview study of psychotherapeutic processes

Research on the therapeutic effects of psychedelics in psychiatry, commonly referred to as Psychedelic-Assisted Therapy (PAT), has expanded substantially in recent years. The context-dependent nature of psychedelics has sparked discussion about the importance of the psychotherapeutic environment in achieving beneficial outcomes. This study explores the contribution of psychotherapeutic factors to PAT in Switzerland, where psychedelic treatments can be implemented within long-term clinical frameworks. Seven semi-structured interviews were conducted with Swiss therapists to explore how they frame psychedelic treatments and the role of the psychotherapeutic setting in facilitating therapeutic outcomes. Particular attention was paid to patients' individual experiences as reported by the therapists. Thematic analysis identified two main themes, each with several sub-themes. The first theme revealed that while psychotherapeutic techniques are adapted to PAT, they retain similarities to non-psychedelic psychotherapy practices, supporting patients in having meaningful therapeutic experiences. The second theme describes a synergistic relationship between psychedelics and psychotherapy, amplifying underlying general psychotherapeutic factors such as trust, a sense of profundity, and the emergence of therapeutic experiences. The interviewed therapists agreed that psychedelics work as nonspecific catalysts for psychotherapeutic processes, while still acknowledging the potential for psychopharmacological effects or the interaction between psychedelics and psychotherapy to create unique psychotherapeutic processes. Findings from our sample suggest that, for specific indications, incorporating psychedelics into long-term psychotherapeutic treatment may strengthen therapeutic processes. Future research could investigate the efficacy of PAT within the framework of specific psychotherapeutic modalities or in different settings, including prospective quantitative assessments of outcomes. Ultimately, clarifying mechanisms of action of PAT may help to enhance its efficacy and potentially to integrate psychedelic treatments into mainstream mental health care.

Asking for help: the development of a simulation-based mental health application to enhance depression literacy, mental health communication, and help-seeking among Black autistic youth

Black autistic youth experience disproportionately high rates of depression and face intersecting barriers such as racial discrimination, stigma, and limited access to care, yet few interventions address their needs. This study introduces Asking for Help (A4H), a culturally responsive, simulation-based intervention designed to improve depression literacy and help-seeking skills through an e-learning module and interactive conversation practice. Guided by mental health literacy theory, the Theory of Help-Seeking Behavior, the Theory of Planned Behavior, and Disability Critical Theory, A4H was developed using community-engaged and user-centered design principles. Usability testing employed a mixed-methods design with 32 participants (12 youth, 10 caregivers, 8 specialists) using the System Usability Scale (SUS), the Patient Health Questionnaire-9 (PHQ-9), and semi-structured interviews. Black autistic youth reported moderate depressive symptoms (mean PHQ-9 = 14.7) and rated usability slightly below the commonly cited benchmark of 68 (mean SUS = 66.2), while caregivers and specialists scored higher (73.5 and 71.0, respectively). Qualitative feedback highlighted cultural relevance and immediate feedback as strengths, with recommendations for simplified language, improved navigation, and multimodal supports; emotional safety and trust were critical for engagement. No short-term symptom change was observed, consistent with the formative design. Findings indicate A4H is feasible and culturally responsive but requires refinements before efficacy testing to assess impacts on literacy, help-seeking intentions, and communication skills.

ARIA funding

We’re proud to share that Relatix Bio has applied for funding from the UK’s Advanced Research and Invention Agency (ARIA) under their Trust Everything, Everywhere programme. This initiative explores how trust can be built across the digital and physical worlds, and we believe this conversation must include those whose minds work differently.

Our proposal focuses on one of the most pressing and least understood challenges of the digital age: how people with neurodevelopmental and neurodiverse conditions — including autism, ADHD, schizophrenia, borderline traits, and psychopathy — experience, interact with, and build trust in AI systems. In a world increasingly mediated by algorithms, the ways these systems interpret, respond to, and store our most personal thoughts and data matter profoundly.

Throughout history, individuals living with stigmatised neurocognitive conditions have been marginalised or misrepresented — by institutions, by society, and now, potentially, by AI. Some may over-trust technology that feels neutral or supportive; others may under-trust it due to past harm or bias. We want to ensure that digital systems meet people where they are — building trust rather than eroding it, protecting privacy, and supporting quality of life, health, and wellbeing.

Through our work, Relatix Bio aims to lead the way in ethical and inclusive neuro-AI design: protecting privacy, removing stigma, and defining standards for responsible data handling in the era of AI. Our goal is to make sure that the next generation of AI-driven tools — from chatbots to diagnostics — truly serve everyone, regardless of how their brain is wired.

We know how often things have gone wrong in the past — from chatbots unintentionally encouraging depressive or paranoid thoughts, to credit and gambling platforms optimising for addiction or impulsive behaviour. These systems were built without safeguards for people with neurodevelopmental conditions, who may react differently to AI-optimised interactions. Many respond by disengaging digitally and may feel that an AI-driven world is a minefield — because it wasn't built for them.

Join us in shaping a radically different future where cognitive diversity and digital trust can coexist, and AI tools are built to truly support and facilitate. To learn more about our mission or to collaborate, contact our team.

Commentary: A case for ethical continuity in the age of medical AI


By Gregory Kiar, PhD
Director, Center for Data Analytics, Innovation, and Rigor (DAIR), Child Mind Institute
&
Michael P. Milham, MD, PhD
Chief Science Officer, Child Mind Institute


Abstract

Medicine has long wrestled with a form of professional hubris, often termed a “God complex”, in which the conviction of noble intent is mistaken for a guarantee of patient safety. History has repeatedly shown the limits of that belief. Each breakthrough, from anesthesia to antibiotics, has carried unforeseen harms that demanded restraint, oversight, and a commitment to safety proportional to clinical risk. Medical artificial intelligence now renews that challenge, this time accelerated by commercial pressures, amplified by scale, and driven largely by forces outside medicine. This commentary calls for ethical continuity: extending into the age of AI the discipline that made medicine trustworthy. We outline a risk stratification framework with four elements: risk–benefit assessment, operationalizing accuracy thresholds, pathways for human care escalation, and continuous post-market accountability. Behavioral health sits at the front line of this transformation, testing whether medicine’s ethical discipline can be incorporated into the digital age.

Introduction

Historically, medical breakthroughs, ranging from anesthesia to antipsychotics, have introduced novel risks alongside clinical benefits. These precedents underscore that the methodology of advancement matters as much as the innovation itself.

Artificial Intelligence (AI) is emerging as a new inflection point, reviving a familiar ethical challenge. Medicine once operated under the belief that noble intent and professional self-regulation were sufficient. Catastrophes like thalidomide proved otherwise; the FDA was medicine’s hard-won regulatory answer to that hubris. This requirement for external oversight is shared across technical disciplines, where ethical codes evolved in response to systemic failures.

Today, the scale and velocity of medical AI deployment necessitate a similar evolution. Here, we argue for ethical continuity: extending to medical AI the rigorous engineering principles, professional codes, and regulatory safeguards that have kept medicine humane for decades.

Balancing unmet need with unchecked innovation

The medical AI marketplace is emerging, though without even the standards of over-the-counter medicine. Although it offers greater scale and accessibility, absent accountability it risks replacing one form of inequity with another. General-purpose AI tools interpret symptoms and guide decisions without professional input or assurances of quality. Specialist tools, such as therapy bots, pose risks when deployed without clinical oversight. Reports of clinical harm, including youth suicide linked to unmoderated AI persona use (e.g., the Character.AI case), reveal the dangers of technological hubris. Commercial pressures and unprecedented scalability further amplify these risks, with momentum driven largely from outside the clinical field.

Yet the opposite risk is equally real: overly restrictive responses carry their own dangers. Medicine’s mandate to “do no harm” is a matter of proportion. “No harm” does not mean “no risk,” as even benign drugs can yield serious side effects. This tradeoff is poignant when considering underserved populations where digital tools may offer the only immediate hope for intervention. AI’s scalability can redefine medical action, extending the duty of care beyond the clinic walls and into the digital lives of patients.

Medicine’s progress has depended on learning safely from failure through structured trials and transparent reporting. Fast failures can be valuable when appropriately monitored and contained within systems of accountability, advancing innovation through evidence rather than exceptions. Clinical research as a care option (CRCO) has emerged within pharmaceutical research as a mechanism for bringing novel innovations to the public with appropriate labeling and monitoring. The artificial intelligence community must follow the same ethical model: innovations must be justified by proportional benefit and bounded by oversight through standards of transparency and accountability.

The litmus test of behavioral health

Behavioral health sits at the most personal and interpretive edge of medicine, where AI most clearly can both reproduce and distort care. AI hallucinations, misread cues, and patient manipulations can cause immediate harm, as can subtler effects like discouraging people from seeking human intervention. Some systems may overstate medical risk, while others may mirror distorted thinking, overpathologize normal emotion, or minimize severe distress as ordinary.

Conversely, behavioral health stands to gain significantly from AI by expanding access where clinicians are scarce, tailoring language for individual contexts, and sustaining support between visits. The challenge is to capture that potential without eroding the clinical judgment and empathy that define therapeutic care. This duality, where the potential for connection meets the risk of distortion, makes behavioral health the definitive test for whether we can build AI to be both intelligent and humane.

A framework for risk stratification

A new system of governance is required to bridge the gap between unmet need and unchecked innovation. We propose a framework for ethical continuity that balances progress with risk, ensuring the safe and equitable deployment of medical AI tools.

Risk–benefit assessment — The first question is whether a tool should be built. This involves articulating the unmet need, the existing alternatives, and the cost of not filling that gap. It also includes an assessment of what harm can be done if the gap is filled poorly, and which populations may be differentially impacted — such as individuals who are non-native English speakers. The decision to proceed must rest on an explicit acknowledgement of these tradeoffs and pass through a review-board merit evaluation.

Operationalizing accuracy thresholds — No tools or medical assays are perfectly accurate or without bias: all have known sensitivities, specificities, and failure modes. Physicians build their understanding of patients through these imperfect assessments while balancing risk to the patient, psychological burden, and resource availability. Medical AI may inherently require similar decision-making, without the luxury of clinician involvement. This makes a tool’s demonstrated accuracy an ethical threshold for acceptable deployment. For this standard to be understood, much less enforced, medical AI tools need to be built and benchmarked on transparent and representative datasets for well-defined purposes.
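As a worked illustration of this point, the sketch below computes sensitivity and specificity on a small labeled benchmark and gates deployment on pre-registered thresholds; the data and threshold values are invented for illustration, not clinical recommendations.

```python
# Minimal sketch: gate deployment on pre-registered accuracy thresholds.
# The labels, predictions, and thresholds below are illustrative assumptions.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]   # 1 = condition present in benchmark
y_pred = [1, 1, 0, 1, 0, 0, 1, 0, 0, 1]   # model output on the same cases

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)

sensitivity = tp / (tp + fn)   # how many true cases the tool catches
specificity = tn / (tn + fp)   # how many non-cases it correctly clears

# Hypothetical pre-registered thresholds for this use case.
MIN_SENSITIVITY, MIN_SPECIFICITY = 0.80, 0.75
deployable = sensitivity >= MIN_SENSITIVITY and specificity >= MIN_SPECIFICITY
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} deployable={deployable}")
```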

Pathways for human care escalation — Gradual escalation and specialization of care have always been core elements of medicine. Medical AI needs to follow a similar model. Escalation can take multiple forms, including moving from general-purpose AI tools to domain-specialist models. However, medical AI must recognize its limitations and provide a clear pathway for human-led care escalation. The inherent scalability of digital tools gives them a tremendous opportunity to serve as a pathway for obtaining care or treatment oversight — though only if escalation for clinical support is a core design feature.
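A minimal sketch of what tiered escalation logic could look like, with entirely hypothetical signal names and cutoffs:

```python
# Hypothetical escalation logic: route a conversation to the least restrictive
# adequate tier. Signal names and cutoffs are assumptions for illustration.
def escalation_tier(crisis_language: bool, symptom_severity: int, model_uncertain: bool) -> str:
    if crisis_language:
        return "human clinician (immediate)"   # hard override, never handled by AI alone
    if symptom_severity >= 7 or model_uncertain:
        return "domain-specialist model + clinician review"
    if symptom_severity >= 4:
        return "domain-specialist model"
    return "general-purpose self-management support"

print(escalation_tier(crisis_language=False, symptom_severity=8, model_uncertain=False))
# -> "domain-specialist model + clinician review"
```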

Continuous post-market accountability — Even with the above, medical AI tools need ongoing oversight and guardrails. Continuous evaluation against representative datasets is necessary, alongside clear guidelines governing intended use and boundaries, and ongoing management of user consent. Strict behavioral guardrails are needed to govern what tools can and cannot do, because the capacity to act carries an obligation of accountability.
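One concrete form this can take is a scheduled re-evaluation against a fixed benchmark that alerts when performance drifts from the release baseline. The sketch below is illustrative; the metric, tolerance, and alert policy are all assumptions.

```python
# Sketch of a post-market drift check: re-run a fixed benchmark on a schedule
# and flag degradation relative to the release baseline. All values illustrative.
RELEASE_SENSITIVITY = 0.82   # measured at approval time
TOLERANCE = 0.05             # maximum acceptable drop before triggering review

def post_market_check(current_sensitivity: float) -> str:
    drop = RELEASE_SENSITIVITY - current_sensitivity
    if drop > TOLERANCE:
        return f"ALERT: sensitivity fell by {drop:.2f}; suspend and trigger human review"
    return f"OK: sensitivity within {TOLERANCE:.2f} of release baseline"

print(post_market_check(0.74))  # -> ALERT: sensitivity fell by 0.08; ...
```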

The necessity of innovation in regulation

Responsible advances in medicine require that innovation be matched by discipline and restraint. The regulatory frameworks that followed past failures were corrective, not bureaucratic. Each emerged from the same recognition: good intent is not a safeguard. Artificial intelligence now brings that lesson to a new frontier, requiring oversight for AI that is proportional, transparent, and scaled to risk.

Oversight should be risk-stratified. A generic resource portal or symptom checker requires less scrutiny than a diagnostic engine or an unconstrained “therapy bot.” Between these poles, mechanisms such as structured audits, standardized safety benchmarks, and domain-specific frameworks can guide oversight. Centralizing these requirements, rather than leaving them solely to tool developers, ensures consistency, fairness, and transparency.
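As a sketch of how such risk stratification might be centralized rather than left to individual developers, a shared registry could map tool categories to required oversight mechanisms; the categories and requirements below are illustrative assumptions, not an existing standard.

```python
# Hypothetical risk-stratified oversight registry. Categories and requirements
# are illustrative assumptions, not an existing regulatory standard.
OVERSIGHT_TIERS = {
    "resource_portal":   {"risk": "low",      "requires": ["content review", "annual audit"]},
    "symptom_checker":   {"risk": "moderate", "requires": ["standardized safety benchmark", "structured audit"]},
    "therapy_bot":       {"risk": "high",     "requires": ["domain-specific framework", "continuous monitoring", "human escalation pathway"]},
    "diagnostic_engine": {"risk": "high",     "requires": ["pre-market evaluation", "post-market surveillance", "clinician-in-the-loop"]},
}

def requirements_for(tool_category: str) -> list[str]:
    """Look up oversight obligations; unknown tools default to the strictest tier."""
    tier = OVERSIGHT_TIERS.get(tool_category, OVERSIGHT_TIERS["diagnostic_engine"])
    return tier["requires"]

print(requirements_for("therapy_bot"))
```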

By bringing regulators and technology builders to the same table, we can innovate in how we regulate, establishing platforms for continuous public auditing, open licensing, and defined escalation pathways that achieve discipline without slowing innovation. This is the necessary evolution of regulators, from gatekeepers to ecosystem builders.

Conclusion

The development of medical AI technologies promises both substantial benefit and significant risk. This is a familiar crossroads for medicine, and we have the advantage of an established ethical foundation to guide our progress. By adopting a risk stratification framework, we can ensure that innovation is both timely and safe. The hard-won lessons of risk–benefit assessment, rigorous accuracy evaluation, human escalation pathways, and clear accountability transform medical AI from an unregulated marketplace into a disciplined clinical structure. The measure of our success will not be the speed at which AI scales, but whether it preserves the humility and caution that have protected patients and advanced medicine.
