Research on the Development, Implementation Effect and Neural Mechanism of a Physical Intervention Program for College Students With ADHD Based on the Characteristics of Balance Dysfunction

Conditions: ADHD – Attention Deficit Disorder With Hyperactivity; ADHD; ADHD – Combined Type; ADHD – Inattentive Type; ADHD Specifically With Executive Function Impairment

Interventions: Behavioral: Aerobic Treadmill Training; Behavioral: Progressive Balance Training; Behavioral: Cognitive-Balance Dual-Task Training

Sponsors: Wuhan Sports University

Status: Terminated

Competitive Research Fellowship Opportunity Launched by the SNF Global Center for Child and Adolescent Mental Health at the Child Mind Institute

The program provides funding of up to 550,000 USD for eligible early-career researchers leading youth mental health innovation in low- and middle-income countries (LMICs)

New York, NY—The Stavros Niarchos Foundation (SNF) Global Center for Child and Adolescent Mental Health at the Child Mind Institute continues to strengthen investment in the next generation of mental health leaders by announcing its latest Request for Applications (RFA), targeted at early-career researchers from host institutions in low- and middle-income countries (LMICs). This competitive program will award two grants, each providing funding of up to 550,000 USD for a four- to five-year research project to advance child and adolescent mental health (CAMH).

The SNF Global Center Research Fellowship is an innovative funding and career development program designed for exceptional early-career researchers making an impact on their communities. Selected Fellows will receive financial support to expand their research expertise and leadership capacity, enabling them to drive innovation and strengthen research ecosystems locally while contributing evidence-based findings that better inform CAMH care systems and understanding.

Fellows will have the opportunity to undertake short-term mentored research training to further enhance their skills, advance projects, or complete early career skills development before transitioning to full research independence. The program aims to attract a diverse cohort of investigators with varied disciplinary backgrounds and skillsets, welcoming researchers new to the child and adolescent mental health field who possess the interest and potential to revolutionize it. Specific research focus areas include understanding the mechanisms of mental illness, developing innovative treatments and prevention strategies, promoting equity in mental health care access, and improving early identification and intervention for vulnerable populations.

The Fellowship reflects the Child Mind Institute’s broader commitment to advancing science-driven, accessible mental health care for children and adolescents worldwide. It is estimated that 90 percent of the world’s children live in LMICs, where access to mental health care is often limited, and that one in seven children and adolescents globally is affected by mental health challenges. Most mental health conditions emerge before the age of 18, and yet many of those diagnosed do not receive any form of treatment.

The program also offers a unique international platform for mentorship, training, career development, and networking alongside clinical- and communicator-based sister programs under the SNF Global Center at the Child Mind Institute.

“By empowering promising researchers in this program, we aim to build the local capacity necessary to generate the evidence needed to transform policy and care for children and adolescents who need it most,” says Peter Raucci, director of Global Fellowships Strategy at the SNF Global Center at the Child Mind Institute.

The Fellowship aims to address these critical gaps by building sustainable research capacity and supporting the development of evidence-based, culturally responsive initiatives. The program is open to early-career researchers within ten years of completing a doctoral degree who can dedicate at least 50 percent of their full-time work to the program and are nominated by an eligible host institution. Eligible institutions must be established in and demonstrate a research record within LMICs; they may register interest and nominate up to two candidates by June 1, 2026, and applications close on June 15, 2026.

Informational webinars are scheduled for April 23, 2026, for prospective nominees. An international expert panel will review applications from June to July 2026, with shortlisted candidates invited for virtual interviews in August. The final cohort of Fellows will be announced on World Mental Health Day, October 10, 2026. For full details, eligibility criteria, and registration links, please visit the program website. Questions may be directed to applications@childmind.org and peter.raucci@childmind.org.


About the SNF Global Center at the Child Mind Institute
The Stavros Niarchos Foundation (SNF) Global Center for Child and Adolescent Mental Health at the Child Mind Institute brings together the Child Mind Institute’s expertise as a leading independent nonprofit in children’s mental health and the Stavros Niarchos Foundation’s deep commitment to supporting collaborative projects to improve access to quality health care worldwide. The center is building partnerships to drive advances in under-researched areas of children and adolescents’ mental health, and expand access to culturally appropriate training, resources, and treatment in low- and middle-income countries. This work is conducted by the Child Mind Institute with support from SNF through its Global Health Initiative (GHI).

About the Child Mind Institute
The Child Mind Institute is dedicated to transforming the lives of children and families struggling with mental health and learning disorders by giving them the help they need. We’ve become the leading independent nonprofit in children’s mental health by providing gold-standard, evidence-based care, delivering educational resources to millions of families each year, training educators in underserved communities, and developing tomorrow’s breakthrough treatments.


Commentary: A case for ethical continuity in the age of medical AI


By Gregory Kiar, PhD
Director, Center for Data Analytics, Innovation, and Rigor (DAIR), Child Mind Institute
&
Michael P. Milham, MD, PhD
Chief Science Officer, Child Mind Institute


Abstract

Medicine has long wrestled with a form of professional hubris, often termed a “God complex,” in which the conviction of noble intent is mistaken for a guarantee of patient safety. History has repeatedly shown the limits of that belief. Each breakthrough, from anesthesia to antibiotics, has carried unforeseen harms that demanded restraint, oversight, and a commitment to safety proportional to clinical risk. Medical artificial intelligence now renews that challenge, this time accelerated by commercial pressures, amplified by scale, and driven largely by forces outside medicine. This commentary calls for ethical continuity: extending the discipline that made medicine trustworthy into the digital age. We outline a risk stratification framework consisting of risk–benefit assessment, operationalized accuracy thresholds, pathways for human care escalation, and continuous post-market accountability. Behavioral health sits at the front line of this transformation, testing whether medicine’s ethical discipline can be carried into the digital age.

Introduction

Historically, medical breakthroughs, ranging from anesthesia to antipsychotics, have introduced novel risks alongside clinical benefits. These precedents underscore that the methodology of advancement matters as much as the innovation itself.

Artificial Intelligence (AI) is emerging as a new inflection point, reviving a familiar ethical challenge. Medicine once operated under the belief that noble intent and professional self-regulation were sufficient. Catastrophes like thalidomide proved otherwise; the FDA was medicine’s hard-won regulatory answer to that hubris. This requirement for external oversight is shared across technical disciplines, where ethical codes evolved in response to systemic failures.

Today, the scale and velocity of medical AI deployment necessitate a similar evolution. Here, we argue for ethical continuity: extending the rigorous engineering principles, professional codes, and regulatory safeguards that have kept medicine humane for decades.

Balancing unmet need with unchecked innovation

The medical AI marketplace is emerging without even the standards applied to over-the-counter medicine. Although it offers greater scale and accessibility, absent accountability it risks replacing one form of inequity with another. General-purpose AI tools interpret symptoms and guide decisions without professional input or assurances of quality. Specialist tools, such as therapy bots, pose risks when deployed without clinical oversight. Reports of clinical harm, including youth suicide linked to unmoderated AI persona use (e.g., the Character.AI case), reveal the dangers of technological hubris. Commercial pressures and unprecedented scalability further amplify these risks, with momentum driven largely from outside the clinical field.

Yet the opposite risk is equally real: overly restrictive responses carry their own dangers. Medicine’s mandate to “do no harm” is a matter of proportion. “No harm” does not mean “no risk,” as even benign drugs can yield serious side effects. This tradeoff is poignant when considering underserved populations where digital tools may offer the only immediate hope for intervention. AI’s scalability can redefine medical action, extending the duty of care beyond the clinic walls and into the digital lives of patients.

Medicine’s progress has depended on learning safely from failure through structured trials and transparent reporting. Fast failures can be valuable when appropriately monitored and contained within systems of accountability, advancing innovation through evidence rather than exceptions. Clinical research as a care option (CRCO) has emerged within pharmaceutical research as a mechanism for bringing novel innovations to the public with appropriate labeling and monitoring. The artificial intelligence community must follow the same ethical model: innovations must be justified by proportional benefit and bounded by oversight through standards of transparency and accountability.

The litmus test of behavioral health

Behavioral health sits at the most personal and interpretive edge of medicine, where AI most clearly can both reproduce and distort care. AI hallucinations, misread cues, and patient manipulations can cause immediate harm, as can subtler effects like discouraging people from seeking human intervention. Some systems may overstate medical risk, while others may mirror distorted thinking, overpathologize normal emotion, or minimize severe distress as ordinary.

Conversely, behavioral health stands to gain significantly from AI by expanding access where clinicians are scarce, tailoring language for individual contexts, and sustaining support between visits. The challenge is to capture that potential without eroding the clinical judgment and empathy that define therapeutic care. This duality, where the potential for connection meets the risk of distortion, makes behavioral health the definitive test for whether we can build AI to be both intelligent and humane.

A framework for risk stratification

A new system of governance is required to bridge the gap between unmet need and unchecked innovation. We propose a framework for ethical continuity that balances progress with risk, ensuring the safe and equitable deployment of medical AI tools.

Risk–benefit assessment — The first question is whether a tool should be built at all. This involves articulating the unmet need, the existing alternatives, and the cost of leaving the gap unfilled. It also requires assessing what harm could result if that gap is filled poorly, and which populations may be differentially impacted — such as individuals who are non-native English speakers. The decision to proceed must rest on an explicit acknowledgement of these tradeoffs and pass a review-board merit evaluation.

Operationalizing accuracy thresholds — No tools or medical assays are perfectly accurate or free of bias: all have known sensitivities, specificities, and failure modes. Physicians deepen their understanding of patients through these imperfect assessments while balancing risk to the patient, psychological burden, and resource availability. Medical AI may inherently require similar decision-making, without the luxury of clinician involvement. This positions the accuracy of an AI tool as an ethical threshold in its own right. For this standard to be understood, much less enforced, medical AI tools need to be built and benchmarked on transparent and representative datasets for well-defined purposes.
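To make the idea concrete, operationalizing an accuracy threshold can be sketched in a few lines of code. This is an illustrative sketch only, not a proposed standard: the benchmark counts and minimum thresholds below are invented for demonstration.

```python
# Illustrative sketch (hypothetical numbers): how an accuracy threshold
# might be operationalized for a screening tool before deployment.

def sensitivity(tp, fn):
    # Proportion of true cases the tool correctly identifies.
    return tp / (tp + fn)

def specificity(tn, fp):
    # Proportion of non-cases the tool correctly rules out.
    return tn / (tn + fp)

# Hypothetical benchmark results on a representative evaluation set.
tp, fn, tn, fp = 88, 12, 180, 20

# Pre-registered minimums for the tool's stated purpose (assumed values).
MIN_SENSITIVITY = 0.85   # miss few true cases in a screening context
MIN_SPECIFICITY = 0.80   # limit false alarms that burden care pathways

sens = sensitivity(tp, fn)
spec = specificity(tn, fp)

# The deployment decision is an explicit, auditable comparison.
deployable = sens >= MIN_SENSITIVITY and spec >= MIN_SPECIFICITY
print(f"sensitivity={sens:.2f} specificity={spec:.2f} deployable={deployable}")
```

The point of the sketch is that the thresholds are explicit, pre-registered numbers tied to the tool's stated purpose, so that "accurate enough" becomes an auditable claim rather than a marketing one.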

Pathways for human care escalation — Gradual escalation and specialization of care has always been a core element of medicine. Medical AI needs to follow a similar model. Escalation can take multiple forms, including moving from general-purpose AI tools to domain-specialist models. However, medical AI must recognize its limitations and provide a clear pathway for human-led care escalation. The inherent scalability of digital tools gives them a tremendous opportunity to serve as a pathway to care or treatment oversight, though only if escalation to clinical support is a core design feature.
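As a thought experiment, escalation-as-a-design-feature might look like the following routing logic. Everything here is hypothetical: the confidence floor, the risk-flag names, and the tier labels are invented for illustration, not drawn from any deployed system.

```python
# Hypothetical sketch: a response pipeline that routes to human clinical
# support whenever risk signals are present, and to a specialist model
# when the general-purpose model's confidence is low.

from dataclasses import dataclass

@dataclass
class ModelOutput:
    reply: str
    confidence: float        # model's self-reported confidence, 0..1
    risk_flags: list[str]    # e.g. ["self_harm"] from a safety classifier

CONFIDENCE_FLOOR = 0.75      # assumed minimum for an autonomous response

def route(output: ModelOutput) -> str:
    # Any safety flag escalates immediately to human-led care.
    if output.risk_flags:
        return "escalate:human_clinician"
    # Low confidence escalates to a domain-specialist model first.
    if output.confidence < CONFIDENCE_FLOOR:
        return "escalate:specialist_model"
    return "respond:automated"

print(route(ModelOutput("...", 0.9, ["self_harm"])))  # escalate:human_clinician
print(route(ModelOutput("...", 0.6, [])))             # escalate:specialist_model
print(route(ModelOutput("...", 0.9, [])))             # respond:automated
```

The design choice worth noting is that escalation is the default safe path: any safety signal bypasses the automated tiers entirely.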

Continuous post-market accountability — Even with the above, medical AI tools need oversight and guardrails. Ongoing evaluation against representative datasets is necessary, alongside clear guidelines governing the intended use and boundaries, and ongoing management of user consent. Strict behavioral guardrails are needed to govern what tools can and cannot do, as the possibility of action requires accountability.

The necessity of innovation in regulation

Responsible advances in medicine require that innovation be matched by discipline and restraint. The regulatory frameworks that followed past failures were corrective, not bureaucratic. Each emerged from the same recognition: good intent is not a safeguard. Artificial intelligence now brings that lesson to a new frontier, demanding oversight that is proportional, transparent, and scaled to risk.

Oversight should be risk-stratified. A generic resource portal or symptom checker requires less scrutiny than a diagnostic engine or an unconstrained “therapy bot.” Between these poles, mechanisms such as structured audits, standardized safety benchmarks, and domain-specific frameworks can guide oversight. Centralizing these requirements, rather than leaving them solely to tool developers, ensures consistency, fairness, and transparency.

By bringing regulators and technology builders to the same table, we can innovate in how we regulate, establishing platforms for continuous public auditing, open licensing, and defined escalation pathways that achieve discipline without slowing innovation. This is the necessary evolution of regulators, from gatekeepers to ecosystem builders.

Conclusion

The development of medical AI technologies promises both substantial benefit and significant risk. This is a familiar crossroads for medicine, and we have the advantage of an established ethical foundation to guide our progress. By adopting a risk stratification framework, we can ensure that innovation is timely and safe. The hard-won lessons of risk-benefit assessment, rigorous accuracy evaluation, human escalation pathways, and clear accountability can transform medical AI from an unregulated marketplace into a disciplined clinical structure. The measure of our success will not be the speed at which AI scales, but how well it preserves the humility and caution that have protected patients and advanced medicine.


AI Chatbots and Teens

In talking with a dozen teens in my life recently, I learned many are interacting with ChatGPT in ways that surprised me. They described when they turn to this virtual tool: for algebra assistance, a personalized daily horoscope, the best way to phrase an awkward text to their boss. At times, they sought deeper advice: Is my friend ghosting me if they haven’t replied to my text yet? Another queried: Do I maybe have ADHD? I can never settle down to study!

Like the rest of us, teenagers are increasingly using AI chatbots, digital tools that simulate human interactions. AI bots are also proliferating on gaming and social media sites. Platforms like Replika and Character.AI allow the user to create highly customized characters to interact with as you would a friend (or partner!). A 2025 study from Common Sense Media found that 72 percent of teens surveyed have used AI companions at least once, and 52 percent qualify as regular users who interact with these platforms at least a few times a month.

“The genie is out of the bottle. Your teen has AI chatbot apps on their phones, on their laptops, not to mention that many companies are scrambling to make their interfaces more engaging through the use of AI,” says Dave Anderson, PhD, a psychologist at the Child Mind Institute.

This trend is causing concern among mental health professionals, who worry that these obliging digital companions may pose significant risks to teens’ emotional and social well-being. Indeed, the Common Sense Media study concluded AI companions pose an “unacceptable risk” to teens under 18, citing such concerns as exposure to sexual content and dangerous advice.

“There’s no federal regulation. We’re dealing with the Wild West when it comes to chatbots’ effects on children’s development,” says Naomi Aguiar, PhD, a researcher at Oregon State University who has studied how children and adults form relationships with chatbots. That means for now, it falls to parents to help teens try to navigate this uncharted terrain.   

AI chatbots as digital companions

While they come in different forms, chatbots generally engage in ongoing back-and-forth conversations with the user. The more you interact, the more the bot learns about you and the more personalized its responses become. Bots can come off as your best friend — their answers are often affirming, they are available 24/7, they will churn out that three-page essay on Hamlet in seconds, no complaints. They respond to your every request with effusive enthusiasm (That is an insightful question! Great idea! What would you like me to do next for you?).

This charm is by design: While they might seem to be an empathetic pal, chatbots are driven by an algorithm whose main purpose is to keep you engaged so it can mine your data or get you to linger on a platform as long as possible. “It’s not designed to ever push back. By design, it will always agree,” says Annie Maheux, PhD, an assistant professor of psychology at the University of North Carolina at Chapel Hill, who studies adolescents and digital media.

While teens might start off by using AI for help with schoolwork, they are increasingly relying on chatbots for the kind of emotional support and unburdening of confidences that earlier generations turned to real-life besties for. “They call it Chat, like it’s a proper name,” says Megan Ice, PhD, a psychologist at the Child Mind Institute, “and they use it frequently for emotional support — say,  asking what to do about trouble with a friend. They can come to depend on it.”

Many teens do simply experiment with bots for entertainment or information. Dr. Anderson says most teens understand that interacting with obliging chatbots does not constitute a real relationship. “Teens know they are being glazed, to awkwardly apply a slang term,” he says. But relationships with AI chatbots have in a few high-profile cases appeared to play a role in reinforcing self-harm and suicidal thoughts for teens struggling with their mental health.

While these cases may be extreme, Dr. Anderson says they signal a wider problem. Studies show that teens in the United States are experiencing increasing levels of anxiety and depression. “There is a reason why kids are reaching out to these chatbots,” he notes. “We have a ton of teens who report feeling lonely or socially isolated. At the same time, we have a massive shortage of access to mental health professionals for them.”

Why teens are drawn to chatbots

Developmentally, teenagers may be uniquely vulnerable to chatbots, suggest experts: They have grown up very comfortable forming “relationships” with computer characters from the time they could first swipe their tiny finger on a screen. “It’s totally normal for them to have completely disembodied conversations,” Dr. Aguiar says. “You text your friend rather than talk. You communicate feelings with emojis. You might have a ‘best’ friend you only know through online gaming.”

Adolescence is also an age when you are increasingly focused on how you are fitting in with friends and peer groups, says Dr. Ice. “There can be a lot of social anxiety. The option of a connection with an AI ‘friend’ who is not going to judge you is uniquely enticing for this population.” Sharing feelings and private thoughts with a chatbot provides the flavor of friendship in a frictionless way — no risk of rejection or awkwardness. “A friend might not text you back. Bots are always available.”

This judgment-free zone can have upsides, says Dr. Ice. “Talking to a bot, a teen can explore identity issues they might be going through. For example, when you’re talking to the bot, you don’t have to express yourself in the same way you would at school. There can be room to explore identities that you might not feel safe doing elsewhere.” Dr. Ice has also seen kids use AI to help their natural creativity find a new outlet. “They may create an AI character and weave elaborate backstories for it. It can bring to life the dreams in their minds.”

Risks of using chatbots

But these synthetic connections have risks for teens, too. An overreliance on bots can get in the way of the messy and sometimes painful business of forming and maintaining real-life relationships. Practicing social skills to connect with complicated actual people is a key developmental task of adolescence. “The more they engage with bots, the less practice they get in how to respond in the moment to what someone says, to clarify misunderstandings, or to tolerate the feelings that can come up in awkward social situations,” Dr. Ice says.

Bots may also satisfy the need for connection in a superficial way. “It is the fast food of human connection,” says Dr. Aguiar. In those pre-iPhone days, boredom and loneliness used to drive teens to the food court or the basketball bleachers to mix it up with their peers. The weaker substitute of bots may be just enough to keep some teens alone on their phones in their bedrooms, idling away hours in what seem like friendly conversations.

Dangers for the most vulnerable

The human-like quality of AI chatbots can have particular allure for teens with underlying vulnerabilities, such as being socially isolated or suffering from a mental health disorder that might impair social interactions, says Dr. Anderson. If teens are struggling with their mental health and turn to AI for advice, it can respond in ways that can be unhelpful and even dangerous, he says. “If a teen asks, What should I do about the fact that I’m depressed? AI’s initial answers tend to pull facts like: Depression is a well-known condition. Here are the diagnostic criteria. Here are leading treatments. But if the teen responds Listen, I’m thinking I want to [insert bad idea] about my depression, parents are right to be concerned. AI companies need to implement safeguards that prevent AI from being overly agreeable with responses such as, I’m glad you told me that. That is a common idea that people have…. Now the advice moves into an unacceptably dangerous area of risk.”

The results in a few extreme cases have been devastating. “There have been tragic stories where a teenager was talking intensely to a chatbot, and it led toward an acceleration of the mental health crisis the teenager was currently experiencing,” says Dr. Anderson. In some cases, chatbots can act as dangerous echo chambers, reinforcing a user’s serious mental health symptoms rather than questioning them. But, says Dr. Anderson, the profusion of headlines that sound the alarm about topics like “AI-induced psychosis” can be misleading. AI psychosis is not a clinical diagnosis, but “parents’ concerns about these topics do have the much-needed effect of driving the discussion toward the guardrails that we desperately need to see from companies in this space. Teenagers who are already isolated, vulnerable to depression and suicidality, perhaps at the early stage of psychosis and wrestling with delusions, and spending long periods of time alone are the most vulnerable.”

Dr. Anderson adds, “As of now, chatbots can’t and don’t do what a therapist does — assess for risk, make sure to confirm that someone is connecting to social or professional support to ensure safety, or supportively challenge and reframe a patient’s thinking when it has the potential to hurt them. And if we’re smart enough to invent AI, we should be smart enough to help it recognize when it’s out of its depth in substituting for critical mental health care.”    

How to talk to your teen about chatbots

It is an understatement to say the landscape of AI is changing rapidly and we are all scrambling to keep up. Even in the face of lawsuits, tech companies have been slow to put up effective guardrails on chatbot use by youth. This makes it even more important to have a talk with your teen. “Fostering your teen’s digital literacy is most important here,” says Dr. Ice. “We need them to be able to recognize risks and benefits for themselves and be thoughtful about what they do.”

Be curious. At this naturally rebellious age, simply telling teens “Don’t do this” doesn’t work well, says Dr. Ice. “A better approach with teens is to be curious. Ask your teen, how have you used AI? What was it like for you? What did you find helpful? What did you find unhelpful? How are your friends using it?” That can start a discussion that will give you insight into their experience.

Educate. Pull back the curtain on chatbots’ main goal. “Have a back-and-forth conversation with them about how algorithms work, how companies have their own motives behind chatbots and how they are designed to keep you interacting with them,” Dr. Ice says. To avoid eye rolling, present this concern as something you are learning about together, not about deciding whether AI is good or bad.

Encourage self-sufficiency. You want to help your teen build their own social “muscle,” says Dr. Ice. “Encourage kids to try on their own first before asking AI.” Before asking Chat to write an apology text to a friend, suggest they give it a whirl themselves. “Help them build confidence that they can do it without AI.”

Help kids foster real-life connections

If your teen is spending more time on their devices than interacting with actual people, investigate what may be going on, counsels Dr. Ice. “Are they not finding kids with the same interests to hang out with? Is someone in their friend group being mean? Be curious about what’s making it so much more appealing for your teen to be online.”

Dr. Anderson emphasizes that even in this digital age, there is no substitute for actual human-to-human interaction: “Balance is important. Teens can have social lives that exist to some degree in the digital world, but we want parents to support their teens in having face-to-face peer experiences. That might mean talking to a teacher to see if there is a club your teen can join so they are around other like-minded peers. We want to try to put them in lots of different situations where they can have exposure to peers in real life.”

Keep your own lines of communication with your teen wide open, says Dr. Anderson. “Study after study finds teens saying they don’t feel like they have a coach, a tutor, a religious or spiritual leader, a teacher, a school counselor, or a parent who they can go to, who will be nonjudgmental and will listen.” Remind them they can always come to you for advice and support if they are struggling. Being that sounding board can make their craving for a bot’s willing ear a little less compelling.

Frequently Asked Questions

Is AI good or bad for kids?

AI isn’t inherently good or bad — it has both benefits and risks. Teens can use chatbots for homework help, creativity, and exploring ideas, but overreliance on them for emotional support can interfere with real-life relationships and social development. 

What is AI psychosis?

“AI psychosis” is not an official clinical diagnosis. Experts use the term informally to describe cases where vulnerable individuals developed worsening mental health symptoms while heavily interacting with chatbots, which sometimes reinforced unhealthy thoughts instead of challenging them. 

What are the negative effects of AI?

AI chatbots can discourage teens from practicing real-life communication and coping skills if they become a primary source of support. They may also provide inaccurate or unhelpful advice, reinforce harmful ideas, or expose teens to inappropriate content. Chatbot use can contribute to isolation by replacing time spent with real peers and trusted adults.
