STAT+: Capsida says it still doesn’t know what caused gene therapy death 

Capsida Biotherapeutics said Tuesday that it still had no answers in its investigation into the death of a child in a gene therapy trial last September.

Its scientists’ efforts, it said, have been stymied because the hospital where the study was conducted has declined to share tissue samples from an autopsy. 

The therapy, known as CAP-002, was the first of a wave of new gene therapies designed to deliver genes deep into the brain. Scientists around the world engineered viruses that could slide through the blood-brain barrier that walls off our most vital organ from the rest of the body. Companies spun up promising treatments for devastating rare genetic diseases and common conditions like Alzheimer’s and Parkinson’s.


Smart Pediatric Oncology Tracker of Symptoms (SPOTS), a Web-Based Interface for the Pediatric PRO-CTCAE: Development and Usability Study

<strong>Background:</strong> Children undergoing cancer treatment experience a range of treatment-related toxicities that significantly affect quality of life and adherence to therapy. Current methods for symptom reporting rely heavily on clinician interpretation of caregiver or child verbal reports, which can result in incomplete or inaccurate records. The Pediatric Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events (Pediatric PRO-CTCAE; National Cancer Institute) provides a validated mechanism for direct symptom reporting by children and caregivers, yet its traditional administration and preselection of questions limit the breadth of symptom capture. <strong>Objective:</strong> This research aimed to co-design and conduct formative usability testing of the Smart Pediatric Oncology Tracker of Symptoms (SPOTS), a novel, web-based interface for the Pediatric PRO-CTCAE to allow children with cancer and their caregivers to comprehensively report symptoms. <strong>Methods:</strong> The research comprised 2 sequential phases: co-design and usability testing. Guided by child-computer interaction theory and participatory design methods, child-caregiver dyads collaborated with the research team to iteratively design and refine the SPOTS prototype. Nine participant dyads engaged in up to 3 co-design sessions that informed system features, layout, and content. During the usability phase, 12 additional dyads (6 with children aged 7-12 years and 6 with adolescents aged 13-17 years, each with a caregiver) completed structured usability tasks using the SPOTS prototype. Task completion, pathway efficiency, and user feedback were recorded through screen capture, field notes, and think-aloud protocols. Quantitative data were analyzed descriptively, and qualitative feedback was analyzed thematically. 
<strong>Results:</strong> SPOTS was described by users as “very clear” and “easy to navigate.” Participants valued the visual design, the use of a customizable character, and the opportunity for children to report symptoms independently. Key usability challenges included confusing terminology, navigation redundancy, and visual complexity. Quantitative task analyses indicated that while most structured tasks were completed successfully, many required excess steps or assistance. When not directed to use a specific screen, participants’ symptom reporting methods varied, with caregivers and adolescents preferring the Body Parts Screen and younger children favoring the Search Screen. <strong>Conclusions:</strong> The formative development of SPOTS demonstrates the feasibility and value of co-designing pediatric health technologies directly with children and caregivers. SPOTS has the potential to enhance the implementation of the Pediatric PRO-CTCAE by offering an engaging, child-friendly digital format that facilitates more direct symptom reporting. Future work will include a pilot study to further assess real-world usability, the quality of symptom capture (ie, completeness and accuracy), and integration with clinical workflows.

STAT+: In the battle of sepsis algorithms, performance alone doesn’t predict victory

Five years ago, the bottom fell out of sepsis prediction software. Hundreds of hospitals had adopted an algorithm from electronic health record company Epic that promised to alert physicians to predicted cases of sepsis, a life-threatening reaction to infection that kills more than 350,000 people in the United States every year. 

The AI was a technical flop. Despite its results on paper, the technology failed to perform in the real world and sent so many alerts that doctors tuned them out or hospitals turned them off.

Half a decade on, new sepsis models are hitting the scene. Epic released a retooled version of its own algorithm. Startups are testing their models in health systems. A team is using large language models to mine clinical notes for signs of sepsis. And on Tuesday, Bayesian Health announced that its sepsis-flagging device, which has origins at Johns Hopkins, has received clearance from the Food and Drug Administration.
