Single-Cell Survival Modeling Tool Offers New Precision in Cancer Prognosis

Oregon Health & Science University (OHSU) researchers have developed a first-of-its-kind tool, scSurvival, that directly links information from individual tumor cells to patient survival outcomes, allowing clinicians to understand which specific cells are driving disease progression rather than treating them all the same.

“Traditional survival models in cancer rely on bulk data, which average signals across millions of cells and obscure important heterogeneity,” explained senior author Zheng Xia, PhD, associate professor of biomedical engineering in the OHSU School of Medicine and a member of the OHSU Knight Cancer Institute. “Tumors are highly complex ecosystems where different cell subpopulations can have very different and sometimes opposing effects on patient outcomes.”

He told Inside Precision Medicine that “scSurvival is designed to directly model survival using single-cell data, preserving this heterogeneity. Instead of treating a tumor as a single entity, it treats it as a collection of individual cells and learns which specific subpopulations are most associated with survival outcomes. This enables both more accurate prediction and deeper biological insight.”

The tool was designed using a statistical method known as an attention-based multiple-instance Cox regression framework, which constructs survival prediction models from single-cell cancer cohort data while simultaneously identifying cell subpopulations that are strongly associated with patient risk.

“The attention mechanism preserves cellular heterogeneity within each patient, allowing [cell] subpopulations with higher attention scores to be more closely linked to survival probability,” the researchers explain in Cancer Discovery. “The resulting outputs of scSurvival are the attention-adjusted hazard score for each cell along with patient-level risk scores.”
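The mechanics of attention-based multiple-instance Cox regression can be illustrated in simplified form. The NumPy sketch below is not the authors' scSurvival implementation: the weight vectors `w_attn` and `w_hazard` stand in for learned network parameters, and the pooling is a bare softmax rather than the gated attention a full model would use. It shows only the core idea — softmax attention over a patient's cells produces per-cell weights, the attention-weighted per-cell hazards yield a patient-level risk score, and patient risks enter a Cox partial log-likelihood.

```python
import numpy as np

def attention_pool(cell_embeddings, w_attn, w_hazard):
    """Attention pooling over one patient's cells (illustrative sketch).

    cell_embeddings: (n_cells, d) matrix of per-cell features.
    w_attn, w_hazard: (d,) hypothetical weight vectors standing in for
    learned parameters.
    Returns per-cell attention weights, per-cell hazard scores, and the
    attention-weighted patient-level risk score.
    """
    scores = cell_embeddings @ w_attn          # raw attention logit per cell
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()                         # softmax over this patient's cells
    cell_hazard = cell_embeddings @ w_hazard   # per-cell hazard contribution
    patient_risk = float(attn @ cell_hazard)   # attention-weighted aggregate
    return attn, cell_hazard, patient_risk

def cox_partial_log_likelihood(risk, time, event):
    """Cox partial log-likelihood over patients.

    risk: (n,) patient risk scores; time: (n,) follow-up times;
    event: (n,) 1 if the event was observed, 0 if censored.
    """
    ll = 0.0
    for i in range(len(risk)):
        if event[i]:
            at_risk = time >= time[i]  # risk set: patients still under observation
            ll += risk[i] - np.log(np.exp(risk[at_risk]).sum())
    return ll
```

Training such a model maximizes the partial log-likelihood with respect to the attention and hazard parameters, so cells that receive high attention in high-risk patients end up flagged as the survival-associated subpopulations.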

Xia and his team tested the performance of scSurvival in two cohorts comprising 32 patients with melanoma and 124 patients with liver cancer. Together, the cohorts provided single-cell RNA sequencing data for more than 1.1 million individual cells.

They found that key immune cell types were enriched for higher- or lower-hazard cells. For example, monocytes/macrophages were enriched for high-risk subpopulations in both the melanoma and liver cancer cohorts, whereas B cells were enriched for low-risk subpopulations in the melanoma cohort and high-risk subpopulations in the liver cancer cohort.

In both cohorts, the tool accurately predicted patient outcomes; cells taken from melanoma patients who did not respond to immunotherapy had significantly higher hazard scores than cells taken from responders.

Xia noted that the information scSurvival provides has several translational applications. “Differential gene expression between high- and low-risk cells can be used to develop prognostic biomarkers,” he said. “Pathways enriched in high-risk populations may reveal actionable therapeutic targets, while the abundance of specific cell types can support patient stratification for treatment selection. Importantly, these insights are derived at single-cell resolution, providing greater biological precision than bulk approaches.”

At present, scSurvival is primarily a research tool, but Xia believes that, longer term, it has potential clinical relevance. “For example, signatures derived from survival-associated cell populations could be translated into more practical assays (e.g., bulk RNA or targeted panels) for patient stratification,” he suggested. “However, direct clinical deployment would require further validation, simplification, and standardization.”

According to Xia, one of the biggest challenges to widespread adoption of the tool is the limited availability of large, well-annotated single-cell datasets with matched survival data, as single-cell sequencing is not yet routine in clinical workflows. But as more clinical trials adopt single-cell sequencing, he expects scSurvival to see broader use in resolving disease at cellular resolution.

The investigators now plan to extend the framework to incorporate spatial transcriptomics, which will allow them to account for how cells are organized within the tumor microenvironment. “We also aim to improve the model’s robustness across datasets and sequencing platforms, and to enhance its biological interpretability. Ultimately, we hope to translate the survival-associated signatures identified by scSurvival into clinically practical tests,” Xia said.

The study findings were also presented at the American Association for Cancer Research Annual Meeting 2026, and the open-source scSurvival program and its tutorials are freely available on GitHub, Zenodo, and Code Ocean.

The post Single-Cell Survival Modeling Tool Offers New Precision in Cancer Prognosis appeared first on Inside Precision Medicine.


Sex-specific impact of vitamin D and B9 concentrations on neuroticism: a polygenic score-based study

Introduction
Neuroticism is a personality domain with prognostic value for physical and mental health. To properly inform public health policy, it is crucial to uncover the mechanisms underlying high neuroticism. Many internal and external factors that affect brain development and functioning, and therefore might contribute to the variability of neuroticism, remain understudied. Among them, the impact of vitamin sufficiency is of great interest, as it is a modifiable factor. This study aimed to evaluate the associations of neuroticism with vitamin D (VD) and vitamin B9 (VB9) using polygenic scores (PGS) in a nonclinical cohort.

Methods
We analyzed data from 348 healthy unrelated individuals, including neuroticism scores on the Eysenck Personality Inventory, VD-PGS, VB9-PGS, and PGS for neuroticism-related traits.

Results
The analysis controlling for demographic and genetic confounders revealed a negative association between VB9-PGS and neuroticism scores in women and a positive association between VD-PGS and neuroticism scores in men. The highest values of the VD-PGS were observed in men who scored high on both neuroticism and extraversion. In men, unlike women, neuroticism scores were not correlated with PGS for neuroticism but were associated with PGS for bipolar disorder type 1 and alcohol use disorders.

Conclusion
The results suggest that the effects on neuroticism of genetic propensity for suboptimal vitamin D and B9 concentrations might differ across the two sexes. The findings are consistent with the idea of the importance of vitamin B9 for emotional stability in women and indicate the involvement of genetic factors predisposing to higher vitamin D levels in excitability-related components of neuroticism in men.
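For readers unfamiliar with the statistical setup, a covariate-adjusted association of this kind reduces to regressing the trait on the polygenic score while including confounders in the design matrix. The NumPy sketch below is purely illustrative, with hypothetical variable names; the study's actual models control for specific demographic and genetic confounders and are fit separately by sex.

```python
import numpy as np

def pgs_association(trait, pgs, covariates):
    """Estimate the covariate-adjusted association of a trait with a PGS.

    Ordinary least squares of the trait on [intercept, PGS, covariates];
    returns the coefficient on the polygenic score. A minimal stand-in for
    the study's adjusted analysis, not its actual model.
    """
    n = len(trait)
    X = np.column_stack([np.ones(n), pgs, covariates])  # intercept + PGS + confounders
    beta, *_ = np.linalg.lstsq(X, trait, rcond=None)    # least-squares fit
    return beta[1]  # slope on the polygenic score
```

A positive returned coefficient corresponds to the kind of VD-PGS/neuroticism association reported in men, and a negative one to the VB9-PGS association reported in women.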

Intestinal metaplasia is the only precursor to esophageal adenocarcinoma

Nature Medicine, Published online: 23 April 2026; doi:10.1038/s41591-026-04332-7

We integrated large-scale epidemiological and genomic data from patients with esophageal adenocarcinoma to compare cancers with and without Barrett’s esophagus (BE). We found shared risk factors, molecular features, evolutionary trajectories and BE lineage markers in both cancer phenotypes. Our findings support a single intestinal metaplasia-mediated pathway and have direct implications for early detection and prevention strategies.

Jurgi Camblong: Data-Driven Doctors Without Borders

Jonathan D. Grinstein, PhD, North American Editor of Inside Precision Medicine, hosts a new series called Behind the Breakthroughs that features the people shaping the future of medicine. With each episode, Jonathan gives listeners access to his guests’ motivational tales and visions for this emerging, game-changing field.

Precision medicine is often framed as imminent: gather more data, refine analytics, and individualized care will naturally follow. In reality, progress has been uneven. Genomic, imaging, pathology, and clinical data remain fragmented across systems and poorly integrated into clinical workflows. The core challenge is not data scarcity but the ability to interpret complex, heterogeneous inputs quickly enough to guide real medical decisions. To address this, Jurgi Camblong founded SOPHiA Genetics with a focus on building infrastructure rather than isolated tools—aiming to turn multimodal health data into actionable insights, a goal far more difficult in practice than in theory.

In Behind the Breakthroughs, Camblong highlights persistent structural and technical barriers limiting data-driven healthcare. Genomic standardization, for example, remains inconsistent, with approaches ranging from targeted panels to whole-genome sequencing, each balancing cost, sensitivity, and speed. The field is also shifting from single mutations to complex interactions among variants. Expanding beyond genomics adds further complexity, as transcriptomics, radiology, liquid biopsy, and computational pathology each involve distinct methods and clinical uses. Rather than enforcing uniformity, SOPHiA Genetics works across this diversity to produce consistent, clinically usable outputs despite technological and regulatory variation.

Ultimately, success depends on integrating statistical, machine learning, and deep learning methods while staying grounded in biology. A major limitation is the lack of robust feedback loops: precision medicine requires long-term patient outcomes, which many systems fail to capture. Without this, even advanced models are constrained. The central challenge is execution—translating existing data into meaningful insights that improve individual patient care.

This interview has been edited for length and clarity.

 

IPM: What types of multi-omics datasets are currently workable and applicable in a clinical setting, and how do you see their role evolving in routine patient care?

Camblong: When we started in 2015 and launched the platform into the market, people were just analyzing CFTR for cystic fibrosis and BRCA1 and BRCA2, two genes for hereditary cancer. To be honest, there were some efforts around whole genome analysis, but it was very, very rare. Our intent was always not to be a research tool but a tool that brings real benefit to most patients routinely and safely, and things evolved over time.

Now, probably the mean number of genes analyzed when producing genomic information for a patient is around 100 genes. Then you have some solutions that require analyzing only 30 genes because you want to be extremely precise, cost-effective, and rapid. There are other solutions that require sequencing the whole genome. But getting full information with the same sensitivity you can have with smaller panels is not an easy task, and this is where algorithms are really important.

In our case, the fact that we have grown along this journey with the field gives us an advantage today, enabling people to produce more genomic information with the same sensitivity as smaller panels. Genomics is continuously evolving. In the past, people did not necessarily look at copy number variations. Now we are even talking about partial copy number variations, like in a gene called PTEN, which is a driver gene, and where a partial CNV can be very important.

What I am trying to explain is that it is not yet simple. It is not streamlined. Lab protocols are different; sequencing approaches are different; it is a constant evolution. In our case, being an operating system that supports thousands of hospitals, we are privileged to be exposed to this complexity, which enables us to improve our algorithms more rapidly and deliver them back to users who can benefit from new capabilities.

Transcriptomics is becoming a very interesting data modality. Initially, it was used to detect so-called gene fusions, specific genomic features that are hard to detect from DNA and require RNA. I am quite bullish on transcriptomics. I believe it will enable cancer subtyping at scale, possibly with more efficient methodologies than what is done today on tissue. It may not replace tissue, but it may allow us to go further and, in some cases, provide more objective outcomes than staining protocols.

Along those lines, radiomics is also very important. By radiomics, I mean data produced by radiologists, CT scans, PET scans, and MRI. There is a signal in this data. For example, you can see if cells are necrotic. You get additional information based on tissue composition and imaging. You can automatically measure tumor volume.

In metastatic cases, where tumors are spread, measuring them is not necessarily easy. You can identify where tumors are, and this information, feature extraction from images, is very powerful. It is also the only data modality that is used longitudinally today in cancer to monitor response to treatment.

Another modality that will become important is liquid biopsy testing to follow patients longitudinally, based on molecular profiles and minimal residual disease (MRD). If you think about computational pathology, H&E staining in particular will be important. I am more skeptical about immunohistochemistry at scale, given feedback from pathologists; multiplexing may introduce too much signal and create confusion. Proteomics has potential, but clinically, it is not quite there yet. Even the most advanced actors are not fully at clinical utility.

Over time, we will need to combine these modalities and apply smart algorithms to extract signals and support decision-making. In the end, this is what matters: not computing data unless it brings value to the oncologist, pathologist, biologist, or geneticist.

 

IPM: How is the SOPHiA interface designed for clinicians in practice? What does the user experience look like across different use cases, such as oncology or liquid biopsy workflows?

Camblong: It is a web-based interface you log into. For example, if you are at Moffitt Cancer Center in Florida, using the platform for hematological malignancies, you will see which mutations are detected with high sensitivity and how actionable they are. If you are in a hospital in the U.K. using it for liquid biopsy testing, you will see the mutations identified for those patients.

We also have customers using it from a multimodal perspective, more from an oncologist’s point of view, where they can see how similar patients with similar molecular profiles respond to treatments elsewhere. For us, this includes partnerships with major clinical genomic databases. Through these, we provide access to additional data layers for institutions, even when the patient data originates locally.

The interface is always web-based. In the backend, we use microservices to compute data using AI, deep learning, machine learning, statistical inference, and pattern recognition. The user then leverages this information to make decisions and answer clinical questions.

 

IPM: Given the diversity of data sources and technologies, how do you approach standardization and harmonization across datasets, particularly in a global context?

Camblong: We operate in over 70 countries. We support local data production and management, but within a framework of collective knowledge. It is important to align solutions with regulations. In some countries, we operate in research mode only. In Europe, some applications are IVD, and in the future possibly In Vitro Diagnostic Regulation (IVDR) or companion diagnostic solutions.

The key is to build technology with optionality, documenting how it is built and its intended use. If you want to make clinical claims, you must conduct clinical studies. The foundation is design control, like in aviation, so that you ensure sensitivity, specificity, reproducibility, repeatability, and robustness, regardless of regulatory frameworks.

 

IPM: How does your platform adapt to the wide variety of user systems, including different sequencing instruments, workflows, and laboratory environments?

Camblong: The backend is fully engineered and automated. But workflows differ across hospitals due to global constraints and complexities. Managing this heterogeneity while delivering consistent outputs means adapting to different workflows. This is not easy, but we have demonstrated strong performance. For example, with Memorial Sloan Kettering, we accessed both their data and their applications, MSK-IMPACT and MSK-ACCESS. We industrialized these within SOPHiA without infringing on IP, enabling hospitals to produce data locally and leverage our algorithms. We achieved over 98% concordance across sites, comparable to repeating sequencing within a single workflow.

We also work with multiple sequencing vendors to ensure compatibility across instruments and consumables. Because we process large volumes of data, we can also advise on optimal workflows for specific applications. Since we are paid per use, our incentives are aligned with hospitals; better workflows mean more patient cases and better outcomes.

On AI: it is a toolbox. Different models suit different problems. Large language models are useful for text and sometimes images, but not everything. Understanding biology and data diversity is key to selecting the right mathematical model that scales effectively.

 

IPM: As you expand into adjacent domains like radiology, how do you approach entering new clinical areas while ensuring relevance and usability?

Camblong: Always with partners, healthcare institutions. We are strong in software, AI, and biology, but not medical practice. We co-develop with clinicians to ensure integration into workflows and real clinical benefit. For example, with MD Anderson, we collaborate on translational and routine lab work to move technologies into clinical practice, such as transcriptomics for cancer subtyping and MRD.

In multimodality, we work case by case. For instance, in kidney cancer in France, we partnered with the UroCCR network, analyzing 27,000 patient cases. This allowed us to identify signals and predict responses to immunotherapy. Innovation only matters if it is adopted in practice.

 

IPM: How actionable are your clinical decision-support tools today, and how do you incorporate real-time or longitudinal data?

Camblong: It depends on regulations. In some places, like the U.K., the platform provides information to oncologists, who then interpret it. For multimodality, feedback loops are essential, linking molecular data, treatment, and outcomes.

With UroCCR, we continuously improve algorithms using real-world data. We should be leveraging post-market data more systematically to refine treatment decisions. Real-world complexity can reveal which patients truly benefit from therapies. Longitudinal data is critical, not just for outcomes, but also for avoiding adverse effects. For example, some ovarian cancer patients benefit from PARP inhibitors but may develop leukemia. Understanding these patterns requires real-world data loops.

 

IPM: How do you think about data ownership, access, and control?

Camblong: Ownership does not exist in a strict sense. Individuals are the ultimate controllers. Hospitals and companies are processors. Data is critical for AI, but our model is decentralized: hospitals retain control of their data. Algorithms learn from data, but once trained, they can deliver insights without retaining raw data, enhancing privacy.

Also, oncology data does not age well because treatments and technologies evolve rapidly. What matters is continuous exposure to new data. Collective intelligence through networks and platforms is essential for precision medicine.

 

IPM: How does SOPHiA approach cross-border collaboration and democratization?

Camblong: Democratization means making technology accessible and usable. For example, in India, a hospital previously sent samples to the U.S., with high costs and six-week turnaround times. We enabled local testing within months, reducing turnaround to under two weeks and building internal expertise. This increased testing volumes and improved clinical adoption.

 

IPM: Are there areas less amenable to your approach?

Camblong: About 80% of our work is in cancer, 20% in rare disorders. Rare diseases require even more collaboration due to limited data. We support peer networks where clinicians share insights, for example, variant classifications, helping others make faster decisions. As medicine becomes more precise, collaboration becomes even more critical.


AACR 2026: David Parkinson and the Arc of Modern Cancer Therapy

SAN DIEGO, CA – In 1977, when David R. Parkinson, MD, graduated from medical school at the University of Toronto and moved to McGill University to train in internal medicine and eventually hematology, the idea of medical oncology was in its infancy. In Canada, the profession didn’t exist.

“In Canada, there were no medical oncologists,” Parkinson told Inside Precision Medicine. “Radiation therapists administered what little chemotherapy existed. They resisted the development of medical oncology as a specialty.”

David R. Parkinson, MD, recipient of the 2026 AACR Outstanding Achievement Award for Service to Cancer Science and Medicine [The American Association for Cancer Research (AACR)]

Through the ensuing 49 years, Parkinson didn’t just see the rise of kinase inhibitors, antibodies, and cell therapies in real-time—he helped create the world of modern cancer therapeutics.

In reflecting on his remarkable career, which was recognized with the 2026 AACR Outstanding Achievement Award for Service to Cancer Science and Medicine, Parkinson said, “I’ve essentially grown alongside the field.”

From scarcity to structure: Oncology’s early years

When Parkinson arrived in Montreal, there were only a handful of chemotherapeutics available. “In those days, there were only one or two drugs available for hematologic malignancies across the entire field,” Parkinson said. “The main treatments were cyclophosphamide and nitrosoureas.”

Even supportive care lagged. “Initially, we had no effective way to control chemotherapy-induced nausea,” he noted of the standard of care for testicular cancer. “Some patients stopped treatment because they couldn’t tolerate it.”

Parkinson explained that early cancer drugs worked best on rapidly dividing tumors, like leukemias and testicular cancers, because that’s what the animal models represented. These therapies targeted DNA and cell division broadly, often with severe toxicity, and were far less effective against slower-growing solid tumors.

After his residency at McGill, Parkinson moved to Boston, first to Tufts New England Medical Center on a modest Canadian fellowship that placed him at the edge of a field just beginning to coalesce. “I was on a Canadian fellowship earning $12,000 a year,” he said. “The exchange rate fluctuated significantly, which made things difficult, and I couldn’t work due to my student visa.”

What he found, however, was momentum. Through connections with Dana-Farber, Parkinson entered formal training in medical oncology as the specialty began to take shape. “I connected with Dana-Farber and took their introductory course for fellows—that was my entry into medical oncology.”

At the same time, breakthroughs in specific cancers hinted at what might be possible. “What really shaped my thinking was the emergence of treatments for testicular cancer just as I entered oncology,” he said. “Platinum-based therapies—and later combination regimens—felt like miracles. We had never seen anything like it. These were often young patients, difficult to manage, but suddenly there were real cures.”

Targeted therapy and the Gleevec moment

Parkinson’s career soon intersected with early efforts to harness the immune system against cancer—decades before immunotherapy became a dominant paradigm. “I became deeply involved in immunotherapy, particularly interleukin-2 and early tumor-infiltrating lymphocyte studies,” he said.

Working at the National Cancer Institute (NCI), he collaborated with leaders, including immunotherapy pioneer Steven Rosenberg, MD, PhD, maintaining a hybrid role that combined research with clinical care. “At the same time, I continued clinical work for a couple of months each year, collaborating with Steve Rosenberg in the surgical branch.”

These early approaches were technically challenging and often unpredictable, but they laid the groundwork for later advances. “We started with basic approaches, moved to tumor-infiltrating lymphocytes, and eventually to engineered CAR T cells,” Parkinson said. “Progress has been steady, though often slower than those treating patients would like.”

If immunotherapy represented one trajectory, targeted therapy represented another—one that depended on a deeper understanding of cancer biology.

“When I joined Novartis in the late 1980s, we were among the first developing kinase inhibitors,” Parkinson said. At the time, the idea was controversial. “Early skepticism suggested kinase inhibitors wouldn’t work due to high intracellular ATP levels and structural challenges.”

But advances in molecular biology were beginning to change the landscape. The discovery of the Philadelphia chromosome and its associated oncogene created a clear therapeutic target. “The Philadelphia chromosome had been known since the 1960s, and by the 1980s the responsible gene was identified,” Parkinson explained.

The result was imatinib (Gleevec), a drug that would become a prototype for precision oncology. “Eventually, a small molecule inhibitor was developed that targeted it precisely.”

The clinical results were extraordinary. “By the third cohort in a Phase I trial, patients with chronic myelogenous leukemia showed dramatic responses—some within 24 hours,” Parkinson said. “It’s probably the only Phase I oncology trial where essentially every patient achieved remission.”

For Parkinson, the implications extended far beyond a single drug. “Of course, [Gleevec] was a unique case,” he said. “But it proved an important point: what once seemed impossible can become possible.”

Since then, the field has expanded dramatically. Hundreds of kinase inhibitors have been developed, with thousands more explored, reflecting a broader shift toward therapies grounded in specific molecular mechanisms.

Precision medicine—and its limits

As oncology evolved, so too did its language. “For years, we called it ‘personalized medicine,’” Parkinson said. “I used to joke that medicine has always been personalized—you’re always trying to determine what’s best for a specific patient in a specific context.”

He credits industry with popularizing a more precise term. “Although Pfizer popularized the term ‘precision medicine,’ I think it’s a better term,” he added, with a note of humor: “I have a few good Pfizer jokes—best shared over a drink.”

Yet the reality of precision medicine has proven more complex than its promise. “The evolution of therapeutics mirrored the models and biological understanding available,” Parkinson said. “Targeted therapies only emerged once we understood the biology. Diagnostics, however, lagged by about two decades.”

That lag remains a structural challenge. Parkinson founded a diagnostics company based on single-cell signaling technology developed at Stanford. “Technically, it worked—we solved major challenges in instrumentation, standardization, and analysis,” he said. “But we couldn’t establish a viable business model.”

The core issue was reimbursement. “Without adequate reimbursement from Medicare, even highly sophisticated diagnostics struggle commercially,” said Parkinson. “Better diagnostics can reduce the use of expensive drugs by identifying who won’t benefit—something that doesn’t always align with pharmaceutical business models.”

In recent years, Parkinson has focused increasingly on large-scale data integration, including his involvement with the GENIE consortium. The initiative aggregates genomic and clinical data across institutions, aiming to accelerate discovery and improve clinical decision-making. “GENIE has been a technical success,” he said. “But its long-term sustainability remains uncertain.”

The broader challenge, he argues, is conceptual as much as technical. “Looking forward, the field is evolving toward integrating multiple data types—genomics, transcriptomics, imaging, and more—to better understand tumor biology,” he said. “Sequencing alone isn’t enough. The challenge now is not a lack of data, but making sense of it—something where artificial intelligence will play an increasingly important role.”

Back to basics

Across academia, government, and industry—including roles at the NCI, Novartis, Amgen, and Biogen Idec—Parkinson sees a single throughline. “I remember an interview with a biotech company where an HR representative told me, ‘You seem to have done a lot of different things,’” he said. “I responded that I had really only done one thing: trying to improve cancer treatment, just from many different angles.”

Not every effort succeeded. “In one case, we developed a drug that performed beautifully in mice but failed in human trials,” he said. “That’s common in oncology—most ideas don’t translate. You don’t think of it as failure but as learning. Still, there’s a limit to how many ‘learnings’ one can appreciate.”

Reflecting on decades of progress, Parkinson emphasizes both how far the field has come and how much remains unresolved. “Outcomes have improved dramatically across several cancers, especially hematologic ones,” he said.

Yet he underscores a fundamental principle: that progress in cancer treatment comes down to understanding biology. “The better we understand it, the more effectively we can develop targeted therapies,” said Parkinson. “Without that understanding, we’re essentially guessing.”

At AACR 2026, Parkinson’s recognition underscores not just past achievements but a continuing trajectory—one shaped by the interplay of discovery, failure, and persistence. “Despite all the challenges,” he said, “[precision medicine] is still the most promising path forward.”

 


10x Genomics Unveils Atera Spatial Platform at AACR Meeting

The genomics community’s long wait for 10x Genomics’ highly anticipated news is finally over. On Saturday night, at the Hard Rock Café Hotel in San Diego—across the street from the American Association for Cancer Research (AACR) conference—the company hosted the “Impossible” party to announce its new spatial instrument—the Atera.

Serge Saxonov, PhD, CEO of 10x Genomics, walking onto the stage to thunderous applause, noted that there is “a gap between what we need to see and what we have been able to measure.” The Atera, which enables whole-transcriptome spatial biology at scale, “obliterates the typical tradeoffs” that come with existing spatial tools, he said.

“This is the biggest launch in our history. I am the most excited I’ve ever been about any product, or any product category, across the board,” Saxonov told GEN. “It has been a long time in development, and it is what we have known the world needs for a long time. I think it will fundamentally change how we measure and understand biology, and it really puts research on a new trajectory. It is really exciting to be at a place now where we can deliver it to the world.”

Nuts and bolts

Atera offers more plex, throughput, and sensitivity than 10x Genomics’ Xenium, enabling whole-transcriptome profiling at scale. More specifically, compared with Xenium, Atera has 4× the throughput, 6× higher plex capacity for targeted assays, and 3.6× higher plex and 2–3× higher sensitivity for whole-transcriptome assays.

10x Genomics Atera

The price for Atera is $495,000, and the instrument measures roughly 53″ × 36″ × 64″ (4.42 ft × 3 ft × 5.33 ft). Orders are currently being taken, and the instrument will be available in the second half of this year.

The instrument can run up to 800 1-cm² whole-transcriptome samples (FFPE and fresh frozen) per year, with flexible run configurations and a greater-than-5-cm² imageable area per slide (more than 2,000 mm² of total tissue per run when using all four slides).
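The per-run area figure follows directly from the per-slide spec. A minimal sanity check in Python (the constants come from the numbers quoted above, not from 10x documentation):

```python
# Per-run imageable area implied by the quoted Atera specs.
slide_area_cm2 = 5      # ">5 cm² imageable area per slide"
slides_per_run = 4      # up to four slides per run

total_cm2 = slide_area_cm2 * slides_per_run
total_mm2 = total_cm2 * 100  # 1 cm² = 100 mm²
print(total_mm2)  # 2000, consistent with ">2,000 mm² total tissue per run"
```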

The Atera WTA (whole-transcriptome assay) covers 18,000 genes, with stackable customization via 1,000-gene Atera Select panels available now and optional stacking of up to three 1,000-gene panels coming in the future.

“Spatial genomics with whole-transcriptome profiling capabilities is the ultimate approach to measure single cells in their tissue context,” Holger Heyn, PhD, ICREA professor at the Centro Nacional de Análisis Genómico (CNAG) and member of the Human Cell Atlas, added. “All other lower-plexity approaches have been just a warm-up phase leading to this application.”

Jasmine Plummer, PhD, associate member of the St. Jude Faculty and director of the Center for Spatial OMICs, pointed out that the whole transcriptome, while exciting, can bring a big “sticker shock” for many researchers because it will require many more probes, in contrast to a sequencing-based platform, where a library accesses all of the genes.

The instrument uses standard glass microscopy slides, which is exciting to Plummer. In the past, she said, slides have posed a challenge when coordinating with other researchers, and using regular slides will be more “pathology friendly.”

An end to tradeoffs? 

Existing spatial technologies, which are still relatively nascent in genomics, have been constrained by tradeoffs between plex, resolution, and throughput. Researchers have had to make choices and prioritize.

“In general, with the landscape as it is today, there is a tradeoff,” Nick Banovich, PhD, VP of scientific development at TGen, professor in the bioinnovation and genome sciences division, and director of the Center for Spatial Multi-Omics (COSMO), told GEN. “The closer you walk toward whole transcriptome, the lower the per-gene sensitivity.”

“The most exciting thing [about Atera],” he continued, “is that there is still quite good sensitivity with whole-transcriptome breadth. That’s the huge advantage of this system; there is no tradeoff anymore.”

However, this launch comes just over three years after Xenium’s launch. Purchasing a new instrument so soon may pose a challenge. Plummer notes: “In this economy, with the uncertainty of scientific funding, it is concerning to ask customers—many of whom just landed a machine—to spend another several hundred thousand dollars.”

Why AACR?

Oncology is one of the most exciting, most promising applications of spatial, especially in the near term, noted Saxonov. This is, in large part, because the work exists across the spectrum—from basic discovery to translation to clinical applications. Spatial is unambiguously important, he asserted.

Unveiling the instrument at AACR, Saxonov said, “just made a lot of sense.”

In addition to the party, the company will host a digital launch event on Tuesday, April 21. Within the AACR program, a presentation from the German Cancer Research Center (DKFZ) will include data generated on the platform, highlighting Atera’s ability to uncover cancer biology not accessible with legacy approaches. Researchers distinguished multiple malignant and stem cell states across disease stages within a single colorectal tumor sample and mapped how these populations interact with the surrounding immune microenvironment. The data reveal a more complex immune landscape that could inform future therapeutic strategies and drug development. In addition, two posters (#7116, #6216) will include data from Atera.

The future

10x Genomics said that Atera will play a role in advancing large data studies. For example, the company noted that Atera will enable the goal of the Human Cell Atlas (HCA) as it continues its mission to map every cell type in the human body.

“With the Human Cell Atlas entering its next phase of generating spatially resolved atlases, whole-transcriptome approaches will be the workhorse for data generation,” Heyn told GEN.

“I am excited to see the Atera platform being launched now,” he added. “It is very timely as we ramp up production for the Human Cell Atlas 2.0 phase.”

Atera’s future

The company presented a roadmap with future plans at the AACR event, highlighting spatial proteomics, automation, base-by-base sequencing (a de novo sequencing assay), and software improvements.

Atera, Saxonov told GEN, is a foundational platform that the company will continue to build on. It lays the groundwork for the next decade of research. This point in time, he said, feels similar to the early days of next-generation sequencing (NGS). And although the company will continue to develop its other platforms and product lines, Atera has “massive amounts of headroom to keep building on top of it. It is the convergence of all these different technology stacks and different fields onto one.”

“What the platform can do right out of the gate is exciting. And all the things that it can do in the future will be really, really exciting,” he asserted.

The post 10x Genomics Unveils Atera Spatial Platform at AACR Meeting appeared first on GEN – Genetic Engineering and Biotechnology News.