
As cancer care becomes data-driven, artificial intelligence (AI) will play an increasingly central role across the treatment continuum, from biomarker identification and drug development to clinical trial recruitment and diagnostics. In this corner of healthcare, the ability of AI to interpret and annotate digitized tumor sample slides is taking center stage. While the promise is great, and AI interpretation is already influencing some clinical care, it has not yet reached critical mass.
“There’s something like a billion slides created every year for diagnostic purposes, and today most of those, about 85%, are still read by a pathologist with a microscope on physical glass slides,” said David West, CEO and co-founder of digital pathology company Proscia. In practice, that means pathologists manually examine slides, identify cancer, grade tumors, and dictate reports in a traditional approach to diagnosing cancer that has seen little change in decades.

But that foundation is now shifting. Advances in slide scanning, cloud storage, and AI are turning digital pathology images into data that can be analyzed at scale. At Memorial Sloan Kettering Cancer Center, large archives of digitized slides helped launch Paige AI, one of the earliest companies to train deep learning systems on pathology images linked to clinical and genomic outcomes. This yielded the first U.S. Food and Drug Administration (FDA)-approved diagnostic using AI and digital pathology: Paige Prostate Detect. The company, which was acquired last year by AI-enabled precision medicine company Tempus, now combines Paige’s digital pathology-based AI with Tempus’s broad genomic sequencing data platform.
Researchers in the field say the implications of AI in digital pathology extend beyond image analysis. Mohamed Omar, MD, an associate professor of computational biology at Cedars-Sinai Medical Center, Los Angeles, noted that large language models can help clinicians navigate a research landscape that produces “hundreds of papers every single day” to inform ongoing cancer research. Multimodal AI tools promise to unlock even more insights from digital pathology data by combining it with genomic, radiomic, and clinical data to build powerful new models of both common and rare cancers for diagnosis, drug development, and clinical trial enrollment.

While adoption is in its early stages, the advent of faster and less expensive scanners is bringing digital pathology within reach of both regional and rural hospitals. Razik Yousfi, senior vice president and general manager of AI products at Tempus, and a co-founder of Paige, predicts that within the next 10 years, the majority of pathology workflows will be digital. The ultimate goal of applying AI here is not to replace human pathologists, but to empower them with a capable assistant while spreading adoption beyond major medical centers.
Building the foundations
As the field of applying AI to digital pathology progresses, it needs to build the groundwork for a wider range of potential applications that could address rare cancers and other areas without an abundance of data. One such project is called Atlas, a collaboration between researchers in Korea, Germany, and the United States to build a foundation model trained using 1.2 million histopathology whole-slide images from 490,000 cases sourced from the Mayo Clinic and Charité – Universitätsmedizin Berlin.
Foundation models like Atlas use large-scale pre-training to develop numerical representations, called embeddings, that capture both the structural and contextual features of the slides in the dataset. Atlas incorporates a diversity of diseases, staining types, and scanners, and uses multiple image magnifications during training. This broad approach confers power and utility. It allows the digitized representations of the histology to be adapted, queried, or fine-tuned for very specific downstream tasks using much less data than would be needed to build a one-off model.
As such, a foundation model provides a reusable computational backbone that can be tapped across a wide range of uses, like tumor classification, detection of morphologic structures, biomarker quantification, and outcome prediction. In short, foundation models make the process of querying digital pathology images far more efficient than past approaches.
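To make the reuse idea concrete, here is a purely illustrative sketch of the workflow: a frozen encoder produces an embedding per slide once, and a downstream task is then solved with a lightweight nearest-centroid "probe" fit on only a handful of labels. Every array and number here is invented for the toy example; real encoders are pretrained on millions of whole-slide images.

```python
import numpy as np

# Toy sketch (invented data): embeddings from a frozen foundation model,
# plus a tiny nearest-centroid probe fit on five labeled examples per class.

rng = np.random.default_rng(0)
DIM = 16
signal = np.zeros(DIM)
signal[0] = 5.0  # direction separating the two classes in embedding space

def embed(label, n):
    """Simulated embeddings: noise plus a class-dependent shift.
    A real encoder would compute these from slide pixels."""
    return rng.normal(size=(n, DIM)) + label * signal

# Five labeled slides per class are enough to fit the probe.
centroids = np.stack([embed(0, 5).mean(0), embed(1, 5).mean(0)])

def predict(x):
    """Assign each embedding to its nearest class centroid."""
    dists = ((x[:, None, :] - centroids[None]) ** 2).sum(-1)
    return dists.argmin(1)

test_x = np.vstack([embed(0, 20), embed(1, 20)])
test_y = np.r_[np.zeros(20, int), np.ones(20, int)]
accuracy = (predict(test_x) == test_y).mean()
print(accuracy)
```

The point of the sketch is the division of labor: all the expensive learning lives in the (here, simulated) encoder, while each new task needs only a cheap probe and a small labeled set.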
“In the case of pathology, the successful AI models developed using ‘conventional’ neural network approaches before the advent of FMs (foundation models) typically required huge amounts of training data to achieve high performance and generalizability—the ability to work across datasets distinct from the training data,” said lead Atlas researcher Andrew P. Norgan, MD, PhD, CMO of Mayo Clinic Digital Pathology and assistant professor of laboratory medicine and pathology. “We think of FMs as [an] enabler that allows model development in pathology … to move from artisanal or craft processes to more scalable and reproducible processes that should allow for the rapid development of high-quality models to address problems in pathology.”
At Paige AI, the company’s early work resulted in the first FDA-approved AI diagnostic, Paige Prostate Detect. Its algorithm was built using a technique called multiple instance learning instead of traditional supervised neural network techniques that require detailed human annotation of slides, a time-consuming and expensive method that can also introduce human error. The difference between the two approaches is that traditional supervised training relies on pathologists marking exactly where the cancer is on each slide. In multiple instance learning, the model receives only a slide-level label, cancer present or absent, on unannotated slides and is tasked with finding the cancer itself.
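The intuition behind multiple instance learning can be shown in a few lines. The following is a toy, max-pooling variant, not Paige's actual method: a slide is a "bag" of patch feature vectors, and the slide is scored by its most suspicious patch, so only slide-level labels are ever needed. The weights and feature values are invented for illustration.

```python
import numpy as np

# Toy multiple instance learning (MIL) sketch with invented numbers:
# a slide is a bag of patch features; no patch-level annotations exist.

def patch_scores(patches, w):
    """Score each patch with a simple linear model (sigmoid output)."""
    return 1.0 / (1.0 + np.exp(-(patches @ w)))

def slide_score(patches, w):
    """Max-pooling MIL: the slide is as suspicious as its most
    suspicious patch."""
    return patch_scores(patches, w).max()

rng = np.random.default_rng(0)
w = np.array([2.0, -1.0, 0.5, 0.0])            # invented, untrained weights

benign_slide = rng.normal(-2.0, 0.2, (50, 4))  # all 50 patches look benign
tumor_slide = benign_slide.copy()
tumor_slide[7] = [3.0, -2.0, 1.0, 0.0]         # a single tumor-like patch

print(slide_score(benign_slide, w))  # low: no patch stands out
print(slide_score(tumor_slide, w))   # high: one patch flips the slide
```

During training, gradients flow back through the max (or an attention-weighted pooling in practice), so the model learns which patches drive the slide-level label without anyone circling the tumor.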
Even this approach, however, required a very large dataset. It became apparent to company leaders that the heavy lifting required to get Paige Prostate Detect to work wasn’t scalable.
“We had kind of cracked this recipe,” said Yousfi. “We know how to use a lot of GPU (graphics processing unit) compute, and if we get a ton of data and a lot of compute, we can build anything. But GPU infrastructure is very expensive, and it takes a lot of time to train a very large system.”
Perhaps the most important factor moving Paige away from this model is that it will not work when there is only a small amount of data available. This blocks the ability to train AI to recognize rare cancers for which sample counts are low. The company needed a different approach.
“We had this idea [for] a new system that was basically trained on all of the images we had access to, independent of the organ and indication and tissue and task,” Yousfi said. “Back then, we didn’t know what that thing was called. But ultimately, that became what everyone is calling today a foundation model.”
Originally trained on 200,000 slides, Paige’s new model now includes 3.5 million images and roughly two billion parameters, making it the backbone for other downstream applications the company builds today. This ability to use foundation models as the AI and data encyclopedia for smaller applications will ultimately propel the field of digital pathology forward by widening the playing field.
Going multimodal
To address more complex predictive problems, additional data types can be integrated. Clinical, radiologic, or genomic data can be combined with morphologic embeddings or used during training to help the model learn which tissue features carry a signal of disease or identify a biomarker. These approaches aim to support precision oncology by making morphologic data computable and aligning slide-derived features with other cancer-focused datasets. “These approaches can surface subtle or ‘latent’ patterns in pathology slides and align them with other data sources,” Norgan said. Pathologist and oncology care teams can then evaluate and interpret the features identified by the models within the clinical and biological context.
“In this way, pathologists and oncology teams use these outputs as decision-support tools, while clinical judgment remains central to diagnostic interpretation and therapeutic decision making,” Norgan added.
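One simple way to combine modalities as described above is "early fusion": standardize each data type independently, then concatenate the vectors per patient. The sketch below uses invented toy arrays; real pipelines would draw the morphologic columns from a slide encoder and the others from genomic and clinical records.

```python
import numpy as np

# Toy early-fusion sketch (all arrays invented): slide embeddings are
# standardized and concatenated with genomic and clinical features.

def fuse(*modalities):
    """Z-score each modality independently, then concatenate per patient."""
    z = lambda x: (x - x.mean(0)) / (x.std(0) + 1e-8)
    return np.concatenate([z(m) for m in modalities], axis=1)

morph = np.random.default_rng(0).normal(size=(8, 4))     # slide embeddings
genomic = np.random.default_rng(1).normal(size=(8, 3))   # e.g., expression
clinical = np.random.default_rng(2).normal(size=(8, 2))  # e.g., age, stage

fused = fuse(morph, genomic, clinical)
print(fused.shape)  # one 9-dimensional vector per patient
```

Per-modality standardization matters because otherwise the modality with the largest numeric range would dominate any downstream model fit on the fused vectors.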
Atlas has now been succeeded by Atlas2, a two billion-parameter model trained on 5.5 million pathology images, making it one of the largest pathology foundation models to date. The team has explored distillation methods to create smaller, more efficient, and targeted versions of the model that retain performance, with an eye toward finding a balance between scale and deployability.
Proscia is embarking on a different multimodal approach that combines vision models with language models, with the intent of creating methods to query the morphology of digitized slides. Its vision-language models (VLMs) combine textual data with visual data and allow the model to describe the morphology of a slide, answer questions about what it contains, find images in a database based on a text query, and even follow multimodal instructions such as “circle the tumor area on this image.”
In short, a VLM can be engaged in the same way you can engage a human. “I could go ask a pathologist to point out all the areas of tumor-infiltrating lymphocytes,” West said. “Now, because language-vision models are encoding language and images in the same space, they can do that, too. You can ask the model to describe what is happening in an image, and it will tell you exactly what it sees.”
At Cedars-Sinai, Omar’s work with large language models takes a less direct route of leveraging queries to gather information from research studies or even images. “Basically, you could go to the tool, ask questions, and the tool will provide you with pieces of code,” he explained. “These pieces of code are what you use on the slide to get more information.”
Atlas provides a similar function at the Mayo Clinic, Norgan noted. Because the model-generated embeddings in the digitized slide also encode semantic information, the Atlas team is now building a slide search function, which would allow researchers or clinicians to identify and access slides, or regions of slides, with related features.
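Because similar slides map to nearby embeddings, a slide search function like the one Norgan describes reduces to nearest-neighbor lookup in embedding space. Here is a minimal cosine-similarity sketch; the four-slide "archive" and three-dimensional vectors are invented stand-ins for a real index of model embeddings.

```python
import numpy as np

# Toy embedding-based slide search: rank archived slides by cosine
# similarity to a query embedding. All vectors here are invented.

def cosine_search(query, index, top_k=3):
    """Return indices of the top_k archive embeddings most similar
    to the query, best match first."""
    index_norm = index / np.linalg.norm(index, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = index_norm @ q
    return np.argsort(-sims)[:top_k]

archive = np.array([
    [1.0, 0.0, 0.0],   # slide 0
    [0.9, 0.1, 0.0],   # slide 1: very similar to slide 0
    [0.0, 1.0, 0.0],   # slide 2
    [0.0, 0.0, 1.0],   # slide 3
])
query = np.array([1.0, 0.05, 0.0])
print(cosine_search(query, archive, top_k=2))  # slides 0 and 1 rank first
```

At production scale the brute-force matrix product is replaced by an approximate nearest-neighbor index, but the interface, embedding in and ranked slide IDs out, is the same.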
Democratizing care
Although it will take time to disseminate the tools needed for AI-enabled digitized models of cancer care to smaller health systems, the future is now at Moffitt Cancer Center, where the research hospital is engaged in a top-to-bottom digitization of its system.

According to Marilyn Bui, MD, PhD, senior member of the departments of pathology and machine learning, the comprehensive cancer center plans for full digital adoption across clinical and research labs by 2027. Last August, it entered a multi-year collaboration with integrated AI and digital pathology company PathAI to deploy PathAI’s cloud-based digital pathology image management system for both research and clinical applications.
Within the pathology department, the transition will mean that all glass slides will be scanned and reviewed digitally, providing the basis for applying AI computational tools to assist pathologists. Bui said that the cancer center is accelerating its move toward clinical AI adoption: “Just today I received an email asking which AI algorithms we plan to incorporate for clinical utility—prostate cancer, breast cancer, general tumor detection,” she said. “For us, it’s no longer just research.”
Moffitt is taking a hybrid approach to algorithm development and deployment within the system. Some AI tools will come from commercial vendors and will be validated internally, while others will be developed by investigators through the center’s translational pathology work. Taking this approach will allow it to apply AI to both common cancers and the rare tumor types Moffitt frequently encounters.
While the digital initiative will be transformational, Bui emphasized that the goal is not to replace pathologists but to enhance their capabilities. She prefers to refer to AI as augmented intelligence to reflect this. “Artificial intelligence suggests a robot replacing us,” she said. “But what we mean is augmented intelligence—tools that assist and enhance our ability to make clinical decisions.”
Further, Moffitt intends to integrate digitized slide data with genomic, proteomic, and clinical outcome data to build a multimodal data environment that could advance precision oncology. “Digital pathology and AI will allow us to extract far more information from tissue samples,” Bui said, “making our diagnoses more actionable for the clinical team and ultimately improving patient care.”
The promise of AI in oncology isn’t just better algorithms; it’s broader access. The maturation of computational pathology and its dissemination from large cancer centers like Moffitt to regional and rural health systems has the potential to bring levels of care typically available only at large research hospitals into community settings as well.
“It’s about democratizing access to care,” said Omar. “For a person in Maine or Wisconsin or another place to have access to the same high-quality care that you would get from a larger academic medical center in LA or New York, slides have to be digitized.”
Over the next 10 years, there could be a compelling business case for hospitals to embrace digital pathology. As the cost of scanners comes down and a broad range of diagnostic tools becomes available, digitizing routine H&E slides could become common.
While genetic cancer testing can cost hundreds of dollars, Omar pointed out that pathology slides “cost $5 [and] they are available universally, in all patients with cancer.” As AI models increasingly identify genomic-level insights directly from those inexpensive images, it represents a “huge win for accessibility, making AI work for patients who cannot afford genetic tests,” Omar said. If there is broad adoption of digital pathology, “it is very easy to roll out any kind of AI models and computational tools across the board, across situations and locations that don’t have access to care.”
“At the end of the day, all slides will be digitized,” he concluded. “It’s just a matter of time.”
Chris Anderson, a Maine native, has been a B2B editor for more than 25 years. He was the founding editor of Security Systems News and Drug Discovery News, and led the print launch and expanded coverage as editor in chief of Clinical OMICs, now named Inside Precision Medicine.
The post The Digital Path to AI in Cancer Care appeared first on Inside Precision Medicine.



