Internet addiction in adolescents with suicidal ideation: the role of self-esteem and school connectedness

Background: Internet addiction (IA) has become a growing concern, particularly among adolescents, due to its adverse effects on mental health, physical well-being, and future development. Adolescents with suicidal ideation (SI) are particularly vulnerable to IA, which may be associated with a higher risk of engaging in suicidal behaviors. However, the relationship between SI and IA and its underlying mechanisms remain unclear. Grounded in the cognitive-behavioral model of pathological internet use, this study investigates this relationship and explores the roles of self-esteem (mediator) and school connectedness (moderator) in the association. Methods: In this cross-sectional study, 462 Chinese adolescents with SI (79.0% female) were recruited from psychiatric outpatient clinics between June 2024 and September 2025. Validated instruments measured SI, self-esteem, school connectedness, and IA. Structural equation modeling with bootstrapping procedures was used to test the mediating effect of self-esteem on the relationship between SI and IA. The moderating role of school connectedness was examined using PROCESS Model 8. Results: SI was positively associated with IA (β = 0.224, p < 0.001). SI was negatively associated with self-esteem (β = -0.464, p < 0.001), and self-esteem was in turn negatively associated with IA (β = -0.448, p < 0.001). Self-esteem partially mediated the relationship between SI and IA, with an indirect effect of 0.208 (95% CI: 0.154-0.271). School connectedness significantly moderated the direct association between SI and IA (β = -0.005, p = 0.001) but did not moderate the association between SI and the mediator, self-esteem (β = 0.004, p = 0.202). Conclusion: This study identifies a significant positive association between SI and IA among adolescents with SI, with self-esteem partially mediating this link. School connectedness showed a very weak buffering effect on the direct association between SI and IA and did not moderate the association between SI and self-esteem. These findings enhance our understanding of the mechanisms underlying IA in this vulnerable population and suggest potential targets for intervention.
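
To make the mediation analysis above concrete, here is a minimal sketch of a percentile-bootstrap test of an indirect effect (a × b) using plain OLS regressions in Python. It is an illustration under stated assumptions, not the study's actual pipeline (which used structural equation modeling and PROCESS Model 8): the variable names (si, self_esteem, ia) and the simulated standardized scores are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated standardized scores (hypothetical; the study used validated scales)
n = 462
si = rng.normal(size=n)                                    # suicidal ideation
self_esteem = -0.46 * si + rng.normal(size=n)              # mediator
ia = 0.22 * si - 0.45 * self_esteem + rng.normal(size=n)   # internet addiction

def indirect_effect(x, med, out):
    """Indirect effect a*b from two OLS regressions."""
    # a path: predictor -> mediator
    a = sm.OLS(med, sm.add_constant(x)).fit().params[1]
    # b path: mediator -> outcome, controlling for the predictor
    b = sm.OLS(out, sm.add_constant(np.column_stack([x, med]))).fit().params[2]
    return a * b

# Percentile bootstrap: resample cases, re-estimate a*b, take the 2.5/97.5 percentiles
boot = np.empty(5000)
for i in range(boot.size):
    idx = rng.integers(0, n, size=n)
    boot[i] = indirect_effect(si[idx], self_esteem[idx], ia[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(si, self_esteem, ia):.3f}, "
      f"95% CI [{lo:.3f}, {hi:.3f}]")
```

Note that with two negative paths (SI lowers self-esteem, and lower self-esteem raises IA), the product a × b is positive, matching the sign of the reported indirect effect of 0.208.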

The Role of Disulfide Bonds in the GluN1 Subunit in the Early Trafficking and Functional Properties of GluN1/GluN2 and GluN1/GluN3 NMDA Receptors

N-Methyl-d-aspartate receptors (NMDARs) are ionotropic glutamate receptors essential for excitatory neurotransmission. Previous studies proposed the existence of four disulfide bonds in the GluN1 subunit; however, their role in NMDAR trafficking remains unclear. Our study first confirmed the existence of four disulfide bonds in the GluN1 subunit using biochemical methods in human embryonic kidney 293T (HEK293T) cells. Disrupting the individual disulfide bonds by serine replacements produced the following surface expression trend for GluN1/GluN2A, GluN1/GluN2B, and GluN1/GluN3A receptors: wild-type (WT) > GluN1-C744S-C798S > GluN1-C79S-C308S > GluN1-C420S-C454S > GluN1-C436S-C455S subunits. Electrophysiology revealed altered functional properties of NMDARs with disrupted disulfide bonds, specifically an increased open probability (Po) at GluN1-C744S-C798S/GluN2 receptors. Synchronized release from the endoplasmic reticulum confirmed that disruption of disulfide bonds impaired early trafficking of NMDARs in HEK293T cells and primary hippocampal neurons prepared from Wistar rats of both sexes (Embryonic Day 18). The pathogenic GluN1-C744Y variant, associated with neurodevelopmental disorder and seizures, caused reduced surface expression and increased Po at GluN1/GluN2 receptors, consistent with findings for the GluN1-C744S-C798S subunit. The FDA-approved memantine inhibited GluN1-C744Y/GluN2 receptors more potently and with distinct kinetics compared with WT GluN1/GluN2 receptors. We also observed enhanced NMDA-induced excitotoxicity in hippocampal neurons expressing the GluN1-C744Y subunit, which memantine reduced more effectively than in neurons expressing the WT GluN1 subunit. Lastly, we demonstrated that the presence of the hGluN1-1a-C744Y subunit counteracted the effect of the hGluN3A subunit on decreasing dendritic spine maturation, consistent with the reduced surface delivery of NMDARs carrying this variant.

Closing the Gap in Autism Genetics: Population-Specific Variants and the Imperative for Global Inclusion

Autism spectrum disorder (ASD) is a highly heritable neurodevelopmental condition with an exceptionally complex and heterogeneous genetic architecture, encompassing both polygenic common variants and rare, high-impact variants. Over the past decade, large-scale sequencing studies in Europe and North America have identified hundreds of ASD risk genes and substantially advanced biological insight. However, the global distribution of ASD genomic research remains profoundly imbalanced, with most non-European ancestry populations severely underrepresented.

Canada Gets its First National Guidance on AI for Mental and Substance Use Health

Ottawa (ONTARIO) – In a first-of-its-kind initiative, national guidance for using artificial intelligence (AI) in the mental and substance use health field is being developed through a partnership between the Canadian Centre on Substance Use and Addiction (CCSA) and the Mental Health Commission of Canada.

AI is increasingly being used for healthcare triage, service navigation, service delivery, and communication, but developers and users have no guidelines specific to mental or substance use health to support its effective and safe use. The recently published E-Mental Health Strategy for Canada highlights the need for safety in this field.

The new National Guidance for Artificial Intelligence Use in Mental Health and Substance Use Health Care will provide guidance, tools, and resources to help practitioners, organizations, and health leaders efficiently evaluate and implement AI-enabled mental health and substance use health care services and solutions. It will also support people with lived or living experience of mental health or substance use health concerns in making informed choices about these technologies, while helping technology companies design and improve such solutions to meet the needs of those who use them.

“People are excited about what AI can bring, but the saying ‘break it then fix it’ can take on new dangers when what is at risk is people’s lives. This guidance will allow innovators to move fast while working to ensure it’s done safely and in a way that increases impact and access,” says CCSA CEO Dr. Alexander Caudarella.

The Mental Health Commission of Canada President and CEO Lili-Anna Pereša adds, “Technology can be a powerful ally in transforming mental health care, but innovation must be matched with responsibility. Communities are the best problem-solvers. By working together with developers, providers, and people with lived experience, we’re creating guidance that ensures AI enhances care safely and meaningfully.”

The National Guidance team will share its early findings at several upcoming conferences, including the World Psychiatric Association’s World Congress of Psychiatry, the Canadian Centre on Substance Use and Addiction’s Issues of Substance conference, and the eMental Health International Collaborative (eMHIC) Congress.

In Canada, mental health and substance use health needs are highly common, yet many people continue to face significant barriers to care, including limited access, stigma, financial costs, and lack of tailored treatment options.

The National Guidance for Artificial Intelligence Use in Mental Health and Substance Use Health Care is expected to launch in 2026/2027.

-30-

About CCSA:

CCSA was created by Parliament to provide national leadership to address substance use in Canada. A trusted counsel, we provide national guidance to decision makers by harnessing the power of research, curating knowledge and bringing together diverse perspectives. CCSA activities and products are made possible through a financial contribution from Health Canada. The views of CCSA do not necessarily represent the views of Health Canada.

About The Mental Health Commission of Canada:

The Commission leads the development and dissemination of innovative programs and tools to support the mental health and wellness of people in Canada. Through its unique mandate from the Government of Canada, the Commission supports federal, provincial, and territorial governments as well as organizations in the implementation of sound public policy. The Commission’s current mandate aims to deliver on priority areas identified in the Mental Health Strategy for Canada in alignment with the delivery of its strategic plan.

Media contacts:

Canadian Centre on Substance Use and Addiction
Christine LeBlanc, Senior Strategic Communications Advisor
613-898-6343 | cleblanc@ccsa.ca

Mental Health Commission of Canada
media@mentalhealthcommission.ca

Help-Seeking in the Age of AI: Cross-Sectional Survey of the Use and Perceptions of AI-Based Mental Health Support Among US Adults

Background: Anecdotal evidence suggests that an increasing number of people are turning to generative artificial intelligence (GenAI) tools or artificial intelligence (AI)-assisted chatbots to discuss and manage mental health concerns. However, systematic data on the use and perception of such tools remain scarce. Objective: This study aimed to examine how young and middle-aged adults in the United States use GenAI and AI-assisted mental health chatbots as mental health resources and to assess their preferences for these tools relative to human mental health professionals. Methods: An anonymous online survey was conducted in October 2025 among a commercial online panel sample of US adults aged 18-49 years (N=1805). Respondents were asked about the sources they typically turn to when facing mental health concerns, their frequency of using GenAI tools or chatbots for mental health support, and whether the frequency of seeing human mental health professionals had changed since they started using AI tools for mental health support. Attitudes toward AI-based mental health support were assessed and compared with attitudes toward human mental health professionals. Results: Of the 1805 respondents, 638 (35.2%) reported using AI tools for mental health support at least once in a typical week, and 99 (5.5%) were classified as “heavy users” who reported regularly spending hours discussing their mental health concerns through AI. However, nearly 60% of respondents reported that they would turn first to family (1078/1805) and friends (1046/1805) when facing mental health concerns. Respondents who screened positive for moderate to severe depressive or anxiety symptoms were more likely to use AI-based mental health support than those without these symptoms (adjusted odds ratio 1.71, 95% CI 1.36-2.15), and those with suicidal ideation were more likely to be heavy AI users (adjusted odds ratio 2.42, 95% CI 1.49-3.95). Among those who had ever seen a human mental health professional (n=511), 28.4% (145/511) reported a perceived decline in visit frequency to human mental health professionals since they started using AI tools for the same purpose. Participants expressed more favorable attitudes toward human mental health professionals than toward AI-based tools. However, among heavy AI users, perceptions of AI-based mental health support and human counseling were nearly equivalent in positivity. Conclusions: AI appears to be an important component of the mental health help-seeking landscape among respondents in this sample. Although most respondents still preferred human professionals, a subset reported relying on AI tools for comparable support. Ongoing monitoring and ethical guidelines are needed to ensure that AI technologies expand access to care while being safely and effectively integrated into the broader continuum of mental health services.
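
The adjusted odds ratios above are the standard output of a multivariable logistic regression; the sketch below shows the generic computation on simulated data. All names (uses_ai_weekly, phq_gad_positive) and the single age covariate are hypothetical stand-ins, not the study's actual variables or adjustment set.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1805

# Simulated survey flags (hypothetical; a real model would adjust for more covariates)
phq_gad_positive = rng.integers(0, 2, n)   # moderate-to-severe depression/anxiety screen
age = rng.integers(18, 50, n)
true_logit = -0.9 + 0.54 * phq_gad_positive - 0.01 * (age - 18)
uses_ai_weekly = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# Logistic regression: outcome ~ screen status + age; exponentiate the
# coefficient of interest to get the adjusted odds ratio and its 95% CI
X = sm.add_constant(np.column_stack([phq_gad_positive, age]))
fit = sm.Logit(uses_ai_weekly, X).fit(disp=0)
aor = np.exp(fit.params[1])
lo, hi = np.exp(fit.conf_int()[1])
print(f"adjusted OR = {aor:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```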

Mass Media Narratives of Psychiatric Adverse Events Associated With Generative AI Chatbots: Rapid Scoping Review

Background: Generative artificial intelligence (AI) chatbots have rapidly entered public use, including in contexts involving emotional support and mental health–related interactions. Although these systems are increasingly accessible, concerns have emerged regarding potential adverse psychiatric outcomes reported in public discourse, including psychosis, suicidal ideation, self-harm, and suicide. However, these reports largely originate from journalistic accounts rather than systematically verified clinical data. Objective: This rapid scoping review aimed to systematically map and characterize mass media narratives describing alleged adverse psychiatric outcomes temporally associated with generative AI chatbot interactions. Methods: A rapid scoping review methodology was applied to publicly accessible news articles identified primarily through Google News searches. Articles published from November 2022 onward were screened for eligibility if they described a specific case in which psychiatric deterioration or crisis was temporally linked to generative AI use. Data were extracted using a structured coding template capturing article characteristics, demographic information, AI platform features, interaction intensity, outcome type and severity, type of evidence reported, and causal attribution language. Descriptive statistics and cross-tabulations were performed. Results: A total of 71 news articles representing 36 unique cases were included. Suicide death was the most frequently reported outcome (35/61, 57.4% of cases with complete severity coding), followed by psychiatric hospitalization (12/61, 19.7%). Fatal outcomes were disproportionately represented among minors (19/21, 90.5%) compared with adults (17/35, 48.6%). ChatGPT was the most frequently cited platform (51/71, 71.8%), followed by Character AI (10/71, 14.1%). Causal attribution most commonly referenced AI system behavior (45/61, 73.8%), and the term “alleged” was the predominant causal descriptor (33/61, 54.1%). Evidence sources were primarily chat logs or screenshots (34/61, 55.7%), while police or medical documentation was rare (1/61, 1.6%). Regulatory calls were present in 51 of 60 (85%) articles with nonmissing data. Conclusions: Mass media reporting of generative AI–related psychiatric harms is concentrated around severe outcomes, particularly suicide deaths among youth, and is frequently framed within regulatory and corporate accountability narratives. While causality cannot be established from media reports, consistent patterns of high-intensity interactions, user vulnerability, and limited safeguard reporting highlight the need for structured safety surveillance, transparent AI risk auditing, and clearer governance frameworks. As generative AI becomes increasingly integrated into everyday psychosocial contexts, systematic research and formal safety monitoring will be necessary to determine whether media-reported harms correspond to verifiable clinical risk patterns.
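
As one illustration of the review's descriptive analysis step, the sketch below cross-tabulates a toy coding sheet in the style of the extraction template described above. The rows and category labels are invented for illustration; they are not the review's data.

```python
import pandas as pd

# Hypothetical coding sheet: one row per coded case, mirroring the template fields
cases = pd.DataFrame({
    "age_group": ["minor", "minor", "adult", "adult", "minor"],
    "outcome":   ["suicide death", "hospitalization", "suicide death",
                  "suicidal ideation", "suicide death"],
})

# Cross-tabulate outcome severity by age group, with row/column totals
table = pd.crosstab(cases["age_group"], cases["outcome"], margins=True)
print(table)

# Share of fatal outcomes within each age group
fatal_share = (cases["outcome"] == "suicide death").groupby(cases["age_group"]).mean()
print(fatal_share)
```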

The Performance of Wearable Device–Based Artificial Intelligence in Detecting Depression: Systematic Review and Meta-Analysis

Background: In recent years, advances in wearable sensor technology and artificial intelligence (AI) have provided new possibilities for detecting and monitoring depression. Objective: This study systematically reviewed and meta-analyzed the diagnostic and predictive performance of wearable device–based AI models for detecting depression and predicting depressive episodes and explored factors influencing outcomes. Methods: Following PRISMA-DTA (Preferred Reporting Items for a Systematic Review and Meta-Analysis of Diagnostic Test Accuracy) guidelines, the PubMed, Embase, Web of Science, and PsycINFO databases were searched from inception to May 27, 2025. Eligible studies used AI algorithms on wearable device data for depression detection or episode prediction. Sensitivity, specificity, diagnostic odds ratio, and area under the curve (AUC) were pooled using a bivariate random effects model. Risk of bias was assessed using the Prediction Model Risk of Bias Assessment Tool plus artificial intelligence (PROBAST+AI), and certainty of evidence was assessed using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) tool. Results: We included 16 studies (32 datasets) with 1189 patients and 13,593 samples. For depression detection, pooled sensitivity and specificity were 0.89 (95% CI 0.83-0.93) and 0.93 (95% CI 0.87-0.96), with a diagnostic odds ratio of 110.47 (95% CI 33.33-366.17) and an AUC of 0.96 (95% CI 0.94-0.98). Random forest models showed the best performance (sensitivity=0.89, specificity=0.91, AUC=0.97). Subgroup analyses indicated that study design, AI method, reference standard, and input type significantly affected diagnostic accuracy (P<.05). For depressive episode prediction (3 datasets), pooled sensitivity was 0.86 (95% CI 0.80-0.91), and pooled specificity was 0.65 (95% CI 0.59-0.71). The overall risk of bias was low to moderate, with no evidence of publication bias. Conclusions: Wearable device–based AI models achieved high accuracy for detecting depression and moderate utility in predicting episodes. However, heterogeneity, reliance on retrospective and public datasets, and a lack of standardized methods limited generalizability. Trial Registration: PROSPERO CRD420251070778; https://www.crd.york.ac.uk/PROSPERO/view/CRD420251070778
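
Pooling in diagnostic test accuracy meta-analyses is done on the logit scale. The sketch below is a deliberately simplified univariate DerSimonian-Laird pooling of sensitivities on invented study counts; the paper itself used a bivariate random-effects model that pools sensitivity and specificity jointly, which this sketch does not reproduce.

```python
import numpy as np

# Hypothetical per-study counts: true positives and total depressed cases
tp = np.array([45, 80, 33, 60])
n_pos = np.array([50, 95, 40, 65])

# Logit-transformed sensitivities and their approximate within-study variances
sens = tp / n_pos
y = np.log(sens / (1 - sens))
v = 1 / tp + 1 / (n_pos - tp)

# DerSimonian-Laird estimate of the between-study variance tau^2
w = 1 / v
q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)
tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects pooled logit sensitivity and 95% CI, back-transformed
w_star = 1 / (v + tau2)
mu = np.sum(w_star * y) / np.sum(w_star)
se = np.sqrt(1 / np.sum(w_star))
expit = lambda x: 1 / (1 + np.exp(-x))
print(f"pooled sensitivity = {expit(mu):.2f}, "
      f"95% CI {expit(mu - 1.96 * se):.2f}-{expit(mu + 1.96 * se):.2f}")
```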

Virtual Reality Implementation in Mental Health Care Is a Marathon, Not a Sprint: Qualitative Longitudinal Study of a Virtual Reality Training Program

Background: Despite the potential of virtual reality (VR) for treatment and assessment in mental health care, its practical implementation remains limited. Much implementation research explores barriers and facilitators; fewer studies actually evaluate targeted implementation strategies and track how their effects evolve over time in mental health care practice. Objective: This study aims to examine how a structured VR training program functioned as an implementation strategy in routine mental health care and to identify how therapists’ adoption trajectories and implementation needs shifted across stages of the process. Methods: Eleven therapists from a Dutch mental health care organization completed a 6-session VR training. Semistructured interviews were conducted at 3 time points: pretraining, immediately posttraining, and 3 months posttraining. Data were deductively analyzed using theoretical thematic analysis based on the capability, opportunity, motivation-behavior (COM-B) model and the Theoretical Domains Framework to map stage-specific changes in implementation needs relating to VR use. Results: The training improved therapists’ perceived knowledge, skills, and confidence in using VR. Nonetheless, actual uptake of VR in clinical routines remained limited. Enduring barriers included workflow misalignment, hierarchical decision-making structures, and the absence of a shared organizational vision and sustained leadership support. The longitudinal design revealed a dynamic pattern: early adoption hinged on individual capability and motivation, whereas maintenance depended on organizational opportunity and communicated support. These stage-specific shifts clarify why training alone does not translate into routine use and which organizational levers are most important when. Conclusions: VR training for therapists is a necessary but insufficient implementation strategy in mental health care. A longitudinal approach shows that successful implementation requires pairing training with organization-level changes that address opportunity barriers over time. By shifting from static evaluations of whether training works to a process-oriented focus on what support is needed at each stage of implementation, this study advances implementation science in digital mental health and offers actionable guidance for embedding VR in routine care.

ARIA funding

We’re proud to share that Relatix Bio has applied for funding from the UK’s Advanced Research and Invention Agency (ARIA) under their Trust Everything, Everywhere programme. This initiative explores how trust can be built across the digital and physical worlds, and we believe this conversation must include those whose minds work differently.

Our proposal focuses on one of the most pressing and least understood challenges of the digital age: how people with neurodevelopmental and neurodiverse conditions — including autism, ADHD, schizophrenia, borderline traits, and psychopathy — experience, interact with, and build trust in AI systems. In a world increasingly mediated by algorithms, the ways these systems interpret, respond to, and store our most personal thoughts and data matter profoundly.

Throughout history, individuals living with stigmatised neurocognitive conditions have been marginalised or misrepresented — by institutions, by society, and now, potentially, by AI. Some may over-trust technology that feels neutral or supportive; others may under-trust it due to past harm or bias. We want to ensure that digital systems meet people where they are — building trust rather than eroding it, protecting privacy, and supporting quality of life, health, and wellbeing.

Through our work, Relatix Bio aims to lead the way in ethical and inclusive neuro-AI design: protecting privacy, removing stigma, and defining standards for responsible data handling in the era of AI. Our goal is to make sure that the next generation of AI-driven tools — from chatbots to diagnostics — truly serve everyone, regardless of how their brain is wired.

We know how often things have gone wrong in the past — from chatbots unintentionally encouraging depressive or paranoid thoughts, to credit and gambling platforms optimising for addiction or impulsive behaviour. These systems were built without safeguards for people with neurodevelopmental conditions, who may react differently to AI-optimised interactions. Many respond by disengaging digitally and may feel that an AI-driven world is a minefield — because it wasn’t built for them.

Join us in shaping a radically different future where cognitive diversity and digital trust can coexist, and AI tools are built to truly support and facilitate. To learn more about our mission or to collaborate, contact our team.