Harsh discipline mediates the association between parenting stress and internalizing problems in children and adolescents: survey-based and online intervention evidence
A Scalable Transdiagnostic Intervention Targeting Adolescent Agency Supported by Conversational AI (AGENCIA)
Interventions: Behavioral: AGENCIA Digital Self-Guided; Behavioral: AGENCIA In-person With Digital Assistant
Sponsors: Fundación Pública Andaluza para la gestión de la Investigación en Sevilla; Hospital Universitario Virgen del Rocio; Instituto de Salud Carlos III
Not yet recruiting
Many trauma-affected youth face long waits for therapy, worsening stress, and avoidance. CISS, a 90-minute session based on Stanford's Cue-Centered Therapy, offers coping tools and psychoeducation during this gap. This pilot tests CISS's feasibility, acceptability, and impact on 30-40 adolescents.
Interventions: Other: Cue-Centered Therapy (CCT)-Informed Single Session Intervention (CISS)
Sponsors: Stanford University; University of Auckland, New Zealand; Health New Zealand
Not yet recruiting
The Comprehensive Assessment of Social Media Use: Development and Validation Study
Designing and Evaluating Digital Mental Health Interventions: Scoping Review
Background: The ongoing adoption and use of digital interventions offer promising opportunities to meet the growing demand for mental health support. The effectiveness, implementation, and usage of these interventions depend on how well they are designed and evaluated. However, given the emerging nature of design research in this area, there is still no clear consensus on the specific principles and guidelines for developing digital mental health interventions (DMHIs), and best practices for designing and evaluating these tools remain unclear. Objective: We aimed to investigate and report on the design principles and evaluation approaches used in digital interventions specific to mental health care. Additionally, we sought to outline how these principles and approaches are applied in research. Methods: This scoping review was conducted in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines for scoping reviews. The literature search was performed in 2 electronic databases, SCOPUS and Web of Science, across 3 iterations from January 2024 to January 2025. Two independent reviewers screened and selected papers based on predefined inclusion and exclusion criteria, followed by data extraction from the selected studies. The data were then synthesized by categorizing the papers according to the primary research aim of each study. The inclusion criteria covered studies involving populations with mental health challenges or users of DMHIs, any digital tools for mental health care, and principles or strategies related to the design, evaluation, or implementation of DMHIs. Results: Our search identified 401 papers, of which 17 met the inclusion criteria for this review. Among these, 11 focused on evaluation studies, while 6 covered both design and evaluation studies (mixed).
An iterative user-centered development process, expert inclusion, usability testing, specification of design elements, and user tracking and feedback were identified as common design principles used in studies focused on DMHIs. Evaluation approaches were shaped by the evaluation goal, which influenced the chosen methodologies. We also summarize the recommendations for implementation highlighted in some studies. Based on our findings, we propose 8 guidelines emphasizing stakeholder involvement in the development process and the need for clear justifications for design decisions, among other considerations. Conclusions: Design principles used in DMHI development include user-centered development, expert inclusion, and usability testing, while evaluation approaches often rely on randomized controlled trials to assess efficacy. Qualitative and mixed-method approaches are commonly adopted by studies to capture user experience and bridge both process and outcome measures. We recommend that future research explicitly report its design justification and adopt a multiperspective approach in the research and design of DMHIs.
Errors in AI-Transformed Patient-Centered Mental Health Documentation Written by Psychiatrists: Qualitative Pre-Post Study
Background: Patients’ digital access to their personal health data is becoming increasingly common worldwide. However, medical documentation often contains technical language and sensitive information, which can lead to potential misunderstandings and distress among patients. These issues may be particularly impactful in mental health contexts. Large language models (LLMs) offer a promising approach by transforming clinician-generated health notes into language that is more patient-centered, nonmedicalized, and empathetic. However, risks related to accuracy and clinical safety have not been adequately investigated in psychiatry. Objective: This study aimed to qualitatively analyze the errors introduced by LLMs when transforming notes written by psychiatrists into patient-facing formats. It also highlights the implications for clinical communication and patient safety. Methods: Clinical notes (n=63) written by 19 psychiatrists in an outpatient treatment setting were collected, anonymized, and translated from German to English by humans. OpenAI GPT-3.5 Turbo was used to develop a preprompt that transformed these notes into a patient-centered, lay-readable form through an iterative process. Three psychiatrists qualitatively analyzed the LLM-revised documentation using Kuckartz content analysis. They compared the preconversion and postconversion notes to systematically identify and categorize LLM-induced errors. 
Results: Five categories of clinically relevant errors were identified: (1) clinical misinterpretations, particularly in critical assessments such as suicidality, where nuanced terminology was oversimplified or inaccurately represented; (2) attribution errors, where behaviors or roles within family dynamics or interactions were incorrectly attributed to different individuals; (3) content distortion errors, which were characterized by speculative additions, emotional exaggerations, and inappropriate contextual assumptions; (4) abbreviation and terminology errors, which resulted from inaccurate expansions of medical abbreviations and terms; and (5) structural and syntax errors, which resulted in ambiguity, particularly when the original notes were brief or bulleted. These errors occurred despite significant improvements in the readability and overall linguistic fluency of the converted notes. Conclusions: LLMs have the potential to transform psychiatric notes into patient-friendly formats. However, critical errors remain prevalent and can impair clinical judgment, understanding of patient circumstances, clarity of medication regimens, and interpretation of clinical observations. To safely integrate artificial intelligence–generated documentation into psychiatric care, clinician oversight and targeted model refinement are essential. Future research should explore strategies to mitigate these errors, assess their comprehensive clinical impact, and incorporate patient and provider perspectives to ensure robust implementation.
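The Methods describe a fixed "preprompt" that instructs GPT-3.5 Turbo to rewrite each anonymized note in patient-centered, lay-readable language. The general shape of such a setup can be sketched as below; the prompt wording, function names, and safety constraints here are illustrative assumptions, not the study's actual prompt, and the error categories in the Results suggest exactly the kinds of constraints (no speculation, cautious abbreviation expansion) such a prompt would need.

```python
import json
import urllib.request

# Illustrative preprompt (system message) for transforming a clinical note
# into patient-facing language. The constraints mirror the error categories
# reported in the study: no speculative additions, no emotional exaggeration,
# and cautious handling of abbreviations.
PREPROMPT = (
    "Rewrite the following psychiatric clinical note for the patient it "
    "describes. Use plain, non-medicalized, empathetic language. Do not add "
    "information that is not in the note, do not speculate about causes or "
    "feelings, and expand abbreviations only when their meaning is unambiguous."
)

def build_payload(note_text: str) -> dict:
    """Assemble the chat-completion request body for one note."""
    return {
        "model": "gpt-3.5-turbo",   # the model reported in the study
        "temperature": 0,            # deterministic output aids auditing
        "messages": [
            {"role": "system", "content": PREPROMPT},
            {"role": "user", "content": note_text},
        ],
    }

def transform_note(api_key: str, note_text: str) -> str:
    """Send one anonymized note to the OpenAI chat completions endpoint
    and return the lay-readable version."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(build_payload(note_text)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

As the study's conclusions stress, any output from such a pipeline would still require clinician review before reaching patients; the transformation step reduces reading burden but does not guarantee clinical accuracy.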

