Establishing AI and data sovereignty in the age of autonomous systems

When generative AI first moved from research labs into real-world business applications, enterprises made a tacit bargain: “Capability now, control later.” Feed your proprietary data into third-party AI models, and you will get powerful results. But your data passes through systems you do not own, under governance you do not set. The protections you rely on are only as durable as the provider’s next policy update.

Now, with generative AI established in everyday business operations and sophisticated new agentic AI systems advancing every day, companies are reevaluating the terms of that deal.

“Data is really a new currency; it’s the IP for many companies,” says Kevin Dallas, CEO of EDB, echoing a recurrent anxiety from customers. “The big concern is, if you’re deploying an AI-infused application with a cloud-based large language model, are you losing your IP? Are you losing your competitive position?”

That question is now fueling a movement toward reclaiming both the data and AI systems that have rapidly become part of core business infrastructure. AI and data sovereignty, which refers to breaking dependence on centralized providers and establishing genuine control over models and data estates, is an urgent priority for many companies, says Dallas, citing internal EDB data: “70% of global executives believe they need a sovereign data and AI platform to be successful.”

The idea of AI sovereignty is becoming a global policy conversation. NVIDIA CEO Jensen Huang recently spoke about the need for such a shift at the World Economic Forum’s annual meeting at Davos in January 2026: “I really believe that every country should get involved to build AI infrastructure, build your own AI, take advantage of your fundamental natural resource—which is your language and culture—develop your AI, continue to refine it, and have your national intelligence be part of your ecosystem.”

This report explores how enterprises are pursuing sovereignty over their models and data estates in an era of rapid AI adoption. Drawing on a survey conducted by EDB of more than 2,050 senior executives and a series of interviews with industry experts, the research confirms that the sovereignty movement on the enterprise level is already well underway.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Data readiness for agentic AI in financial services

Financial services companies have unique needs when it comes to business AI. They operate in one of the most highly regulated sectors while responding to external events that change by the second. As a result, the success of agentic AI in financial services depends less on the sophistication of the system and more on the quality, security, and accessibility of the data it relies on.

“It all starts with the data,” says Steve Mayzak, global managing director of Search AI at Elastic.

Agentic AI—systems that can independently plan and take actions to complete tasks, rather than simply generate responses—holds enormous potential for financial services due to its ability to incorporate real-time data and optimize complex workflows. Gartner has found that more than half of financial services teams have already implemented or plan to implement agentic AI. 

However, introducing autonomous AI into any organization magnifies both the strengths and weaknesses of the underlying data it uses. To deploy agentic AI with speed, confidence, and control, financial services companies must first be able to search, secure, and contextualize their data at scale. “Agentic AI amplifies the weakest link in the chain: data availability and quality,” says Mayzak. “And your systems are only as good as their weakest link.”

Financial services companies, therefore, require a trusted and centralized data store that is easy to access, dependable, and can be managed at scale.

The high stakes of quality information

Regulation in the financial services sector requires a high degree of accountability for all data tools. As Mayzak says, “You can’t just stop at explaining where the data came from and what it was transformed into: ‘Here’s the data that went in, and this is what came out.’ You need an auditable and governable way to explain what information the model found and the logic of why that data was right for the next step.” That is, you need to be able to see, understand, and describe the underlying processes.

At the same time, financial services companies require speed and accuracy in order to meet customer expectations and stay ahead of the competition. Markets are continually shifting, and risks and opportunities move along with them. An AI model that can parse unstructured natural language from complex sources, in addition to the structured data in spreadsheets that is easier to analyze, gives users more relevant information.

In this environment, there is no tolerance for error, including the hallucinations that plagued early AI efforts. Agentic AI systems depend on rapid access to high-quality, well-governed data that is secure and accessible. In financial services, that data spans transactions, customer interactions, risk signals, policies, and historical context. The task of preparing that data for AI should not be underestimated. “Natural language is way more messy than structured data, and that makes the process of organizing and cleaning it up that much more important and also that much harder,” says Mayzak.

The data must be well indexed and consolidated across different locations, not locked in the silos of separate systems across the organization. Otherwise, AI agents lag, provide inconsistent answers, and produce decisions that are harder to trace and explain, undermining confidence among regulators, customers, and internal stakeholders. 

As Mayzak says, “There are many different ways to describe how to execute a trade at a bank. In an agent-powered world, we need those descriptions to be deterministic—to give the same results every time. Yet we’re building on powerful but non-deterministic models. That’s incredibly tricky, but not impossible.”
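The tension Mayzak describes, deterministic answers built on non-deterministic models, can be sketched in a few lines. The example below is purely illustrative: `model_call` is a hypothetical stand-in for an LLM (not any real API), and the wrapper simply samples it under fixed seeds and returns the majority answer so that repeated runs agree.

```python
import random
from collections import Counter

def model_call(prompt: str, seed: int) -> str:
    """Hypothetical stand-in for a non-deterministic LLM call."""
    random.seed(hash((prompt, seed)) % (2**32))
    # Simulate the model occasionally phrasing the same answer differently.
    return random.choice(["SETTLE T+2", "SETTLE T+2", "settle in two days"])

def deterministic_answer(prompt: str, samples: int = 5) -> str:
    """Sample the model several times with fixed seeds and return the
    majority answer, so the same prompt always yields the same result."""
    votes = Counter(model_call(prompt, seed) for seed in range(samples))
    return votes.most_common(1)[0][0]
```

Production systems use far stronger guards (schema validation, canonicalization, retries), but the core idea is the same: determinism is imposed by the surrounding system, not expected from the model itself.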

For a financial services firm, managing this can be very challenging. A Forrester study found that 57% of financial organizations are still developing the internal capabilities needed to fully leverage agentic AI. “The data exists in many different formats, created over the course of a bank’s history,” says Mayzak. “Take any bank that’s been around for 50 years: They might have 60 different types of PDFs for the exact same thing. And at the same time, we want the output of these systems to be 100% accurate. In many cases, there is no ‘good enough’.” That is, companies need to do it right, the first time.

Searching and securing results 

An effective search platform is key to solving the problem of fragmented, poorly indexed, inaccessible data. Financial services companies that can readily sift through both their structured and unstructured data, keep it secure, and apply it in the right context will get the most value from agentic AI. This often requires designing AI systems with data access and utility in mind so they can work faster and yield more accurate results, as well as reduce risk. “Search is the foundational technology that makes AI accurate and grounded in real data,” Mayzak says. “Search platforms have become the authoritative context and memory stores that will power this AI revolution.”
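The idea of search as the grounding layer for AI can be illustrated with a toy retrieval step. The sketch below is an assumption-laden simplification, not Elastic’s API: `score` is a naive term-overlap relevance function, and the document ids and policy texts are invented for illustration.

```python
def score(query: str, doc: str) -> int:
    """Toy relevance score: count query terms appearing in the document."""
    text = doc.lower()
    return sum(term in text for term in query.lower().split())

def retrieve_context(query: str, docs: dict[str, str], k: int = 2) -> list[str]:
    """Return the ids of the k most relevant documents, which a model
    would then receive as grounding context for its answer."""
    ranked = sorted(docs, key=lambda d: score(query, docs[d]), reverse=True)
    return ranked[:k]

# Hypothetical internal policy snippets standing in for an enterprise index.
policies = {
    "kyc-001": "Customer onboarding requires identity verification and risk rating.",
    "trd-014": "Equity trades settle on a T+2 cycle unless flagged for review.",
    "rpt-203": "Quarterly regulatory reports must cite source transaction records.",
}
```

Real search platforms replace the toy scorer with full-text and vector relevance ranking, but the pattern is the same: retrieve authoritative context first, then let the model reason over it.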

Once in place, these AI-enhanced searches and autonomous systems can serve financial services companies for a range of purposes. When monitoring client exposure, agentic AI can continuously scan transactions, market signals, and external data to detect emerging risks; platforms can then automatically flag or escalate issues in real time. In trade monitoring, AI agents can review trade workflows, identify discrepancies across different formats, and resolve exceptions step by step with minimal human intervention. In regulatory reporting, AI can gather data from across systems, generate required reports, and track how each output was produced. These applications of AI save time while supporting audit and compliance needs by being traceable and explainable.
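The traceability these use cases demand can be shown with a minimal monitoring sketch. Everything here is hypothetical (the class, the fields, and the threshold are invented for illustration): the agent flags out-of-limit transactions and records every decision, flagged or not, so each output can be traced for audit.

```python
from dataclasses import dataclass, field

@dataclass
class Flag:
    txn_id: str
    reason: str  # human-readable explanation for auditors

@dataclass
class ExposureMonitor:
    """Toy monitoring agent: scans transactions, flags breaches of a
    limit, and keeps a trace of every decision for audit review."""
    limit: float
    trace: list = field(default_factory=list)

    def scan(self, txns: list[dict]) -> list[Flag]:
        flags = []
        for txn in txns:
            over = txn["amount"] > self.limit
            # Record every decision, not just the breaches.
            self.trace.append((txn["id"], txn["amount"], "flagged" if over else "passed"))
            if over:
                flags.append(Flag(txn["id"], f"amount {txn['amount']} exceeds limit {self.limit}"))
        return flags
```

The full trace, including the transactions that passed, is what makes the system explainable to regulators: the logic behind each flag can be reconstructed after the fact.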

Although such capabilities already exist, they are often manual, fragmented, and difficult to scale. Agentic AI allows financial organizations to move toward more automated, efficient, and scalable processes while maintaining the accuracy and transparency required in their highly regulated environment. As Mayzak says, “It’s not that different from how humans operate today, just done at a much faster pace and at scale.” 

Building an agentic AI ecosystem

Launching agentic AI can be daunting, especially if other AI ventures have stalled internally. Mayzak’s recommendation is to choose a manageable use case and allow it to grow over time. “Success can build on success,” he says. “While companies may aim to automate a 70-step business process, they are discovering that you have to start somewhere. What is working in the market is tackling the problem one step at a time. Once you get the first step working, then you can take the next step, and the next.” 

The financial services organizations that lead among their peers will be those that integrate agentic AI into a broader ecosystem that includes strong security controls, good data governance, and effective management of system performance. As Mayzak says, “Doing this well will create an AI feedback loop, where executives gain new signals from these systems to assess the effectiveness of their investments and generate reliable, actionable insights.” By iterating on pilots and continuously improving, companies will build agentic systems that can be measured, managed, and scaled. This will transform agentic AI into lasting competitive advantage.

Learn more about how Elastic supports financial services.


1H-MRS brain metabolites as biomarkers of high-altitude hypobaric hypoxia following mild traumatic brain injury in mice

Introduction and objective

Populations at high altitude (HA) face a higher incidence and severity of traumatic brain injury (TBI). This pilot study utilized longitudinal 1H-MRS to identify neurochemical biomarkers of HA adaptation and the subsequent metabolic response to mild TBI (mTBI).

Methods

Male C57BL/6J mice were exposed to simulated HA (5,000 m) or sea level (SL) for 12 weeks. Following adaptation, a unilateral mTBI was induced via closed head injury (CHI). Mice were then monitored for an additional 2 weeks at HA (total duration of 14 weeks). In vivo 1H-MRS spectra (7 T) were collected from the frontal cortex, hippocampi, and cerebellum at weeks 0, 4, and 12 to assess HA adaptation. Following the CHI, subsequent measurements were collected at week 12 (post-injury) and week 14 to monitor longitudinal neurochemical responses to the mTBI.

Results

Chronic HA exposure induced significant reductions in myo-inositol (Ins) and total choline (tCho) in the hippocampus, establishing a baseline of metabolic fragility that sensitized the brain to subsequent traumatic insult. Post-mTBI, the HA group exhibited a profound “metabolic crisis,” characterized by significantly lower tCho and failed recovery of total N-acetylaspartate (tNAA) compared to SL controls. Total creatine (tCr) was the most acutely affected metabolite, underscoring a depletion of the bioenergetic reserve.

Conclusion

Chronic hypobaric hypoxia fundamentally alters baseline brain metabolism and impairs the neurochemical recovery from mTBI. These findings suggest that standard recovery protocols may be insufficient for HA-adapted populations and highlight 1H-MRS as a critical tool for detecting “invisible” metabolic vulnerability in extreme environments.

Overcoming the blood–brain barrier in Alzheimer’s disease: translational perspectives on advanced drug delivery platforms

Alzheimer’s disease (AD) is the leading cause of dementia worldwide and represents a growing public health challenge in aging societies. Despite extensive research efforts, currently approved therapies provide only limited symptomatic benefit and do not halt disease progression. A major obstacle to effective treatment is the blood–brain barrier (BBB), which severely restricts the brain delivery of most therapeutic agents. Nanoparticle-based drug delivery systems have emerged as a promising strategy to overcome BBB-related limitations by enabling precise control over physicochemical properties such as size, surface characteristics, and material composition. These properties can improve drug solubility, stability, pharmacokinetics, and targeted brain accumulation while reducing systemic toxicity. However, efficient BBB penetration and clinically feasible translation remain major challenges. This review summarizes key design principles for nanoparticles intended for AD therapy and highlights representative platforms with translational considerations, particularly lipid-based and polymer-based nanoparticles. In addition, alternative delivery strategies—including nose-to-brain nanoparticle systems and nanoparticles exploiting receptor-mediated and adsorptive-mediated transcytosis, as well as synaptic dysfunction targeting—are discussed. Collectively, this review outlines current advances and future directions for nanoparticle-mediated therapeutic delivery in AD.

Disrupted sleep-wake cycles and circadian rhythms in a Drosophila model of C9orf72-FTD

Frontotemporal dementia (FTD) is a neurodegenerative disorder that affects behavior, personality, motor activity, speech, cognition, and sleeping patterns. Previous findings support the idea that sleep and circadian systems are not only affected by this disease but may also actively shape the clinical phenotype of FTD. Thus, understanding how sleep-wake cycles are altered may provide insight into mechanisms that influence both disease progression and quality of life. We studied an established Drosophila model of FTD to investigate changes in the sleep-wake cycle of both young and aging flies. A C9orf72-associated FTD model was chosen, as the most common genetic cause of sporadic and hereditary FTD is a hexanucleotide repeat expansion in intron 1 of the C9orf72 gene. We performed behavioral assays to measure locomotor activity in both a 12 h:12 h light/dark (LD) cycle and complete darkness (free running). From these data, we were able to analyze changes in sleep and activity patterns, as well as circadian rhythms, in flies modeling C9orf72-FTD. Our data suggest that these flies have increased nighttime activity and decreased sleep at night, which becomes more pronounced as they age. Older flies also displayed decreased sleep pressure during both day and night and lost rhythmicity. Of specific interest, young flies modeling C9orf72-FTD demonstrated altered day and night sleep latency, decreased sleep depth at night, and reduced rhythmicity in constant darkness. This suggests that changes in their sleep-wake cycle occur early in disease progression and provide an avenue for potential intervention and early diagnostic markers.

Natural head orientation and spatial hearing with symmetric frontal maskers

The purpose of this study was to evaluate the effect of natural, undirected head orientation on speech perception in the presence of interfering speech maskers that were symmetrically arranged to minimize the better-ear advantage. We also examined the characteristics of natural head motion under these conditions. Fourteen normal-hearing adults participated in continuous number categorization tasks under both head-fixed and head-free conditions. Three parameters were measured across different spatial listening configurations: (1) speech reception threshold (SRT) in both co-located and spatially separated masker conditions (±30° azimuth), (2) accuracy with spatially separated maskers at a fixed target-to-masker ratio (TMR), and (3) functional spatial boundary (FSB) in an adaptive masker-location condition. No significant differences in speech perception performance were observed between head-fixed and head-free conditions across all listening configurations. However, performance changes relative to the head-fixed condition were significantly correlated with head-orientation magnitude in the co-located SRT and FSB conditions. Exploratory analyses further indicated that larger head rotations were sometimes associated with performance improvements, whereas smaller rotations occasionally accompanied performance decrements. These observations may reflect complex interactions between dynamic spatial cues produced by head motion and moment-to-moment variations in task engagement. However, these observations warrant cautious interpretation and may provide a basis for future investigations into the role of natural head movement in spatial listening.

Large-scale meta- and cross-trait analyses uncover shared genetic risk factors for IBS and psychiatric disorders

Introduction

Irritable bowel syndrome (IBS) is a common gut-brain axis disorder characterized by abdominal pain and altered bowel habits, and it shows high comorbidity with psychiatric disorders. However, the shared genetic mechanisms underlying these associations remain incompletely understood.

Methods

We performed a large-scale meta-analysis of IBS in individuals of European ancestry by integrating genome-wide association study (GWAS) summary statistics from the UK Biobank, Bellygenes, and the Million Veteran Program (MVP), thereby increasing statistical power to detect novel IBS loci. We further conducted global genetic correlation analyses with psychiatric traits, followed by multi-trait analysis of GWAS (MTAG) and conditional false discovery rate (condFDR) analyses to identify pleiotropic loci. Transcriptomic, methylomic, and expression quantitative trait locus (eQTL) data were integrated to explore potential regulatory mechanisms.

Results

The meta-analysis identified up to ten previously unreported IBS loci, several of which were supported by colonic and brain eQTL effects. Global genetic correlation analyses confirmed substantial genetic overlap between IBS and psychiatric traits, particularly major depressive disorder and neuroticism. MTAG and condFDR analyses uncovered more than 100 pleiotropic loci, including signals at SORCS1, SLC35D1, COA1, and TLE1. Integrative analyses of transcriptome- and methylome-wide data highlighted regulatory mechanisms spanning colonic, immune, and neuronal tissues, supporting neuro-immune crosstalk and mitochondrial involvement.

Discussion

Our findings provide a comprehensive genetic characterization of IBS, refine its heritable basis, reveal pleiotropic links with psychiatric disorders, and implicate molecular pathways across the gut-brain axis. These results advance mechanistic understanding of IBS and may inform future therapeutic development for IBS and its psychiatric comorbidities.

Can the treatment effects of human-animal interaction be maintained? A randomized controlled trial including follow-up in people with severe mental illness

Introduction

There are persistent demands for well-designed randomized controlled trials (RCTs), including follow-up measurements, in studies on animal-assisted treatment (AAT). In addition, a possible dose-response relationship is under discussion. The aim of the present study was to investigate the efficacy of a single-session AAT with sheep, including a booster exercise, over a follow-up period of four weeks.

Methods

In an RCT, a single-session AAT with sheep in a group setting, including an imaginative booster exercise conducted in the week following the AAT session, was compared to treatment as usual (TAU). Sixty psychiatric inpatients with severe mental illness were assessed for positive and negative emotions, mindfulness, and self-efficacy expectancy at baseline (PRE), immediately after the intervention (POST), and at one-week and four-week follow-ups.

Results

The results indicate significant differences between the two groups at POST and still at the one-week follow-up (FU1) in three of four outcomes. Within the intervention group, within-group analyses demonstrated significant improvements from PRE to POST and from PRE to FU1 across all outcomes, with large effect sizes. At the four-week follow-up, all significant effects had diminished.

Conclusions

An imaginative booster exercise conducted within one week after an AAT session was effective in maintaining large effect sizes for up to one week. However, the results did not persist at the four-week follow-up. Longer follow-up periods, variations in the number of sessions, and the inclusion of active control groups are therefore necessary for further AAT studies.

Trial registration

https://drks.de/search/de/trial/DRKS00031347, identifier DRKS00031347

Recognizing anxiety and depression in cancer patients based on speech and facial expressions

Purpose

To address the anxiety and depression experienced by cancer patients due to the stress of diagnosis and treatment, as well as the limitations of traditional assessment methods characterized by high subjectivity and low efficiency, this study aims to develop a multimodal fusion approach for the simultaneous and precise evaluation of these two psychological states.

Patients and methods

A speech-video dataset of clinically diagnosed cancer patients was used. This study proposes a multimodal fusion approach: for depression recognition, we employ the HuBERT pre-training architecture based on Transformers, integrating specific acoustic features of depression with textual content to achieve accurate classification of depression through a voice-text modality. For anxiety recognition, a multi-task convolutional neural network is designed to infer anxiety status from facial expressions.

Results

Experiments conducted on a speech-video dataset of clinically diagnosed cancer patients demonstrate that the multimodal fusion model achieves a depression recognition F1 value of 0.85 and an anxiety recognition F1 value of 0.74, significantly outperforming the unimodal models.

Conclusion

The results of the two modalities are fused by decision-level weighted averaging to enable the simultaneous assessment of anxiety and depression in cancer patients. The study may provide technical support for rapid, noninvasive screening of psychological status in cancer patients.