Forecasting Protein Aggregation with an Improved Algorithm

A new, improved algorithm for studying protein aggregation could help biologics manufacturers design better-performing products with less experimental effort. The software, developed by scientists in Barcelona, can analyze the aggregation of proteins drawn from the AlphaFold protein structure database and help companies identify more soluble alternatives.

“Protein aggregation is a bottleneck in the production and manufacturing of biologics,” explains Salvador Ventura, PhD, a professor in the department of biochemistry and molecular biology at the Autonomous University of Barcelona (UAB).

The problem, he explains, is that many proteins used as therapies evolved to be soluble at the concentrations found in the human body. But therapeutics, such as antibodies, are produced at as high a concentration as possible.

“We want the product to deliver the maximum dose with the minimum amount of injection,” he says. “But proteins aren’t designed to be soluble at these concentrations, and their aggregation causes different effects.”

These can include the patient’s immune system reacting negatively or the aggregated product ceasing to work.

To overcome this problem, Ventura says, companies and labs try to forecast protein aggregation, usually experimentally with high-throughput combinatorial assays. But these approaches are not convenient for startups or small spinoff companies.

A computational approach, such as his algorithm, now in its fourth generation, can help these companies predict and design around protein aggregation.

It offers the ability to draw protein structures from AlphaFold and analyze likely aggregation using molecular dynamics simulations. Users, he says, can also choose to mutate selected parts of the protein, identify other proteins in the same family, and even look at the possible impact of pH on solubility.

“Our lab is both computational and experimental, so most of the designs we’ve made, we’ve already proved by experiment,” Ventura says.

Limitations include the scarcity of high-quality experimental data available to train the algorithm, he explains.

Going forward, the team intends to model which solution and formulation conditions best maintain the stability of therapeutic proteins in manufacturing and clinical settings. “We’re working on these next steps already,” he says. “Although, as yet, we don’t have an algorithm for this.”

Ventura spoke about the latest version of his algorithm at the Bioprocessing Summit Europe in March.

The post Forecasting Protein Aggregation with an Improved Algorithm appeared first on GEN – Genetic Engineering and Biotechnology News.

Faster Process Development via “Transfer Learning”

An emerging artificial intelligence technique called “transfer learning” could help drug makers use data to speed up the development of biopharmaceutical manufacturing processes, according to a new analysis.

In transfer learning, predictive models that have been trained on historical data are reused to improve performance on a new, related task.

Unlike machine learning (ML)—where the training process begins from scratch—transfer learning applies existing knowledge to new but related problems, reducing the amount of data and time required to build the model.
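The warm-start logic described here can be sketched with a toy linear “process model.” This is an illustrative sketch on synthetic data, not the Karlsruhe team's actual code: a model is pretrained on abundant source-process data, then briefly fine-tuned on a handful of target-process batches, and compared with a model trained from scratch on the same small dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

def gd_fit(X, y, w=None, epochs=200, lr=0.1):
    """Linear model trained by gradient descent; passing `w` warm-starts it."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

# Source process: plentiful historical sensor data (think pH, temperature, gas flow)
w_src_true = np.array([1.0, -2.0, 0.5])
X_src = rng.normal(size=(500, 3))
y_src = X_src @ w_src_true + rng.normal(scale=0.1, size=500)

# Target process: related (slightly shifted weights) but only a few batches of data
w_tgt_true = w_src_true + np.array([0.1, -0.1, 0.05])
X_tgt = rng.normal(size=(6, 3))
y_tgt = X_tgt @ w_tgt_true + rng.normal(scale=0.1, size=6)

w_source = gd_fit(X_src, y_src)                                  # pretrain
w_transfer = gd_fit(X_tgt, y_tgt, w=w_source.copy(), epochs=20)  # brief fine-tune
w_scratch = gd_fit(X_tgt, y_tgt, epochs=20)                      # same budget, no prior

X_test = rng.normal(size=(200, 3))
y_test = X_test @ w_tgt_true
err_transfer = float(np.mean((X_test @ w_transfer - y_test) ** 2))
err_scratch = float(np.mean((X_test @ w_scratch - y_test) ** 2))
print(f"fine-tuned from source model: MSE {err_transfer:.3f}")
print(f"trained from scratch:         MSE {err_scratch:.3f}")
```

With the same small training budget on six target samples, the warm-started model typically lands far closer to the target process than the model trained from scratch, which is the practical payoff of reusing prior knowledge.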

Researchers at the Karlsruhe Institute of Technology in Germany, who examined the approach, identified several potential biopharma applications, according to lead author Daniel Barón Díaz, who cites reactor modeling as an example.

“Transfer learning models can be used to predict critical outcomes like viable cell density (VCD) and product titre from online sensor data—for example, pH, temperature, gas flow—from historical data from a different, but related process.”

The approach can also optimize process monitoring. Díaz tells GEN: “Transfer learning-enhanced soft sensors can be established to monitor protein concentrations in real-time by leveraging existing models from related fermentations.”

Data limitations

When compared with other model-building techniques, transfer learning offers potential cost and time savings, according to Díaz, who cites a reduced experimentation burden as an example.

“Conventional machine learning requires large, structured datasets that are often unavailable in biopharma due to the high cost and labor-intensive nature of experiments. Transfer learning allows companies to leverage historical data and existing models to build reliable predictors for new processes with very limited data.

“By reusing prior knowledge, transfer learning can significantly decrease the number of experiments required—sometimes needing only one to three batches to achieve robust simulations,” he says.

However, the ultimate benefit is that transfer learning speeds up process model development, according to Díaz, who adds, “It can make model adaptation faster than retraining from scratch, facilitating quicker process design and digital twin deployment.”

Challenges

Transfer learning thus has the potential to create predictive models for manufacturing process development. The key caveat, however, is that the processes involved must be sufficiently similar for it to be effective, Díaz says.

“For transfer learning to be effective, the source and target domains must be meaningfully related. If the processes are too different, the assumptions and learned representations may not align, leading to negative transfer, where the transferred knowledge actually degrades the model’s performance.

“Data sets obtained at different scales or under varying conditions are often inconsistent, which can hinder the successful transfer of knowledge. Fine-tuning complex neural network architectures on very small target datasets can lead to overfitting, where the model fails to generalize to new data,” he says.

To address this, manufacturers will need to establish metrics to determine similarity, Díaz explains.

“There are currently no standardized metrics for measuring domain similarity in bioprocessing, nor are there comprehensive benchmark datasets to easily compare different transfer learning techniques.”

Another challenge is the current lack of AI expertise in the industry, Díaz says.

“There is often a disciplinary knowledge gap between process engineers and data scientists, and ML models without a mechanistic backbone may be perceived as opaque black boxes, hindering trust and industrial adoption,” he tells GEN.

The post Faster Process Development via “Transfer Learning” appeared first on GEN – Genetic Engineering and Biotechnology News.

Redefining Bioprocessing Using Reservoirs of Biochemical Diversity

In the global race to improve how medicines are made, scientists are turning to an unlikely source of innovation: the microscopic life thriving in some of the harshest soils on Earth. Beneath wild plants in Saudi Arabia’s arid landscapes, researchers have identified biological tools that could redefine bioprocessing.

A recent study by Saudi Arabia-based Rewaa S. Jalal, PhD, associate professor of biology at the University of Jeddah, and Fatimah M. Alshehrei, PhD, associate professor of microbiology at Umm al-Qura University, focuses on the rhizosphere—the thin layer of soil surrounding plant roots—where dense microbial communities interact with their host plants. These environments, shaped by extreme heat and limited water, are proving to be reservoirs of biochemical diversity with direct relevance to drug manufacturing.

The researchers zeroed in on enzymes known as glycosyltransferases, which play a central role in building complex sugar structures on proteins and other molecules. In pharmaceutical bioprocessing, this step—glycosylation—is crucial. It determines how therapeutic proteins behave, influencing everything from stability to effectiveness and immune compatibility.

What makes these enzymes especially compelling is their environmental pedigree. The microbes that produce them have adapted to survive under intense stress, evolving systems that remain functional in high temperatures and low-moisture conditions. These traits could translate into more robust and flexible bioprocessing workflows, where maintaining strict environmental control is often costly and technically demanding.

The study also reveals that different plant species cultivate distinct microbial communities, each enriched with unique enzyme families. For example, the rhizosphere of Moringa oleifera shows a different enzymatic profile compared to Abutilon fruticosum, highlighting how plant-microbe partnerships shape biochemical potential. For bioprocessing, this diversity could enable the selection of highly specific enzymes tailored to particular drug production needs.

Beyond protein modification, the identified enzymes are linked to the synthesis of key biomolecules such as cellulose, chitin, and β-glucans. These materials are already used in areas like drug delivery, wound care, and tissue engineering. Improving how they are produced through advanced bioprocessing could expand their applications and reduce manufacturing constraints.

Despite the promise, the researchers emphasize that their findings are based on computational analysis of genetic data. The real-world performance of these enzymes in industrial bioprocessing systems remains to be tested.

Still, the implications are significant. As pharmaceutical companies seek more sustainable and efficient ways to produce complex biologics, enzymes shaped by extreme environments might offer a powerful advantage. Instead of engineering solutions from scratch, scientists are increasingly uncovering them in nature—already optimized through evolution.

In this emerging vision of bioprocessing, the future of medicine might be shaped not only by cutting-edge technology but also by the resilient microbial ecosystems hidden beneath desert plants.

The post Redefining Bioprocessing Using Reservoirs of Biochemical Diversity appeared first on GEN – Genetic Engineering and Biotechnology News.

Plant Molecular Farming Comes of Age

Plant molecular farming (PMF) may seem like a bold option for companies accustomed to mammalian or microbial systems, but recent advances have transformed plant-based bioproduction into serious, scalable biomanufacturing platforms able to produce even complex biologics cost-effectively.

“A major advantage is sustainability,” Marco P.C. Marques, PhD, associate professor, University College London (UCL), tells GEN. This comes at a time when “…regulators and global initiatives are putting real pressure on industry to reduce environmental footprint(s). Because plants grow using low energy inputs rather than stainless steel reactors or energy-intensive systems, they can bring down operating costs, reduce carbon emissions, and provide more flexible manufacturing options.”

Additional benefits include PMF systems’ ability to support eukaryotic protein-folding and post-translational modification pathways, their lack of human pathogens, minimal biosafety risks, and compatibility with distributed manufacturing.

PMF reached its current state because sensors, host plant engineering, AI-enabled models, and related technologies have become more mature, reliable, and predictable in the past few years. Consequently, “PMF platforms can deliver consistent, good manufacturing practice (GMP)-compatible performance while needing far less infrastructure, [which] allows much faster setup than conventional approaches,” Marques says.

Robust, economic, responsible

In a recent paper, he and first author Teresa Iucci, PhD, a bioprocessing scientist at Sapienza University of Rome and UCL, cite 13 companies that are using or have used plants to produce a variety of proteins, including antibodies, enzymes, and peptides, for vaccines and other biologics. Many are at clinical or commercial scale.

Those examples show “that controlled cultivation, advanced transient-expression systems, and more refined downstream workflows can overcome many of the technical and regulatory hurdles historically associated with plant-based biomanufacturing.” In particular, they note substantial improvements in host plant engineering. Nicotiana plants, they point out, can now produce mAbs and Fc-fusion proteins that closely match those derived from CHO cells.

However, “Realizing the full value of these biological innovations will depend on aligning PMF with contemporary digital manufacturing principles,” Iucci and Marques stress.

“There is a lot of scope for continued innovation…particularly on the molecular biology side, where further gains in expression, stability, and product quality are very achievable,” Marques elaborates. “Downstream processing could also be better tailored to plant-based hosts,” to lower costs further.

The benefits of PMF are well-recognized, but biomanufacturers also need clear, streamlined regulatory pathways and the internal determination that PMF is worth sustained investment.

For biomanufacturers, “A good starting point is simply to treat PMF as a genuine production platform rather than an interesting alternative,” he says. To be able to compare PMF products with those derived from traditional mammalian or microbial cultures, he calls for the industry to standardize unit operations and generate regulatory-grade datasets, and then to run comparability studies and pilot-scale campaigns.

Running such campaigns is becoming increasingly practical with the combination of sensors and data-driven process control. In vertical farming facilities, for example, every parameter critical for plant growth is tightly monitored and controlled using digital sensors to enable precise, real-time environmental adjustments.

Ultimately, this allows producers to select the optimal timing of such events as infiltration and harvest at levels not possible in conventional greenhouses. “The long-term objective is a semi-continuous, digitally regulated PMF production line that links infiltration, extraction, and purification into a coherent, self-correcting workflow,” Iucci and Marques write.

Transitioning from mammalian or microbial systems to PMF, “is easier said than done…especially when companies already have well-established mammalian or microbial platforms with validated processes and established supply chains,” Marques acknowledges. “In many respects, it would be simpler to design a PMF-based approach from scratch than to retrofit it into an existing operation…but with the right incentives (such as additional revenue streams from side processes), application cases, and evidence, we may well see more companies prepared to make that shift.”

The post Plant Molecular Farming Comes of Age appeared first on GEN – Genetic Engineering and Biotechnology News.

Lung Screening Incidental Findings May Guide Follow-Up for Other Cancers

An analysis of the US National Lung Screening Trial (NLST) has found that the presence of certain types of abnormalities in regions outside of the lungs on low-dose computed tomography (LDCT) images may be associated with a significantly increased risk for extrapulmonary cancer.

The abnormalities, termed significant incidental findings (SIFs), could help clinicians decide when follow-up care is likely to catch extrapulmonary cancer early and when it may not be necessary.

“In this paper, we provide an evidence base for making decisions on abnormalities outside of the lungs that might be seen at lung screening,” said study author Ilana Gareen, PhD, a professor of epidemiology at Brown University School of Public Health. “The goal is to give physicians and patients better data so that they can make more informed choices about those abnormalities that should be considered for follow-up and those that most likely can be ignored.”

Writing in JAMA Network Open, Gareen and co-authors explain that LDCT lung cancer screening frequently detects SIFs unrelated to lung cancer; in the NLST, 34% of the 26,455 patients screened with LDCT had SIFs reported, but the nature of the SIFs varied.

And although there are recommendations for reporting and addressing SIFs, there is limited evidence for an association between SIFs detected at LDCT lung cancer screening and extrapulmonary cancer diagnoses.

To address this, Gareen and team analyzed data from 75,104 LDCT screening rounds performed in 26,445 individuals (mean age, 61 years; 59.0% men) who were randomly assigned to receive LDCT during the NLST. The participants had a history of heavy smoking (≥30 pack-years), meaning they were also at high risk for several extrapulmonary cancers, including pancreatic, bladder, and kidney cancer.

The researchers focused on SIFs that were labeled as potentially indicative of extrapulmonary cancer (cancer SIFs), rather than those that possibly indicated emphysema or cardiovascular disease.

They report that cancer SIFs were recorded in 2,265 (3.0%) screening rounds and in 1,807 (6.8%) participants across the three screening rounds they received.

Participants with cancer SIFs were significantly older than those with no cancer SIF (mean 62.1 vs. 61.4 years) and significantly more likely to have a history of a smoking-related disease (68.6% vs. 65.7%).

Within one year of a screening round, 1,025 participants were diagnosed with an extrapulmonary cancer. Of these, 67 (6.5%) had a cancer SIF on LDCT, corresponding to 3.0% of the screening rounds in which a cancer SIF was recorded.

Overall, the risk for extrapulmonary cancer among the people with a cancer SIF was 29.6 per 1000 screening rounds compared with 13.3 per 1000 screening rounds in those without a cancer SIF. After adjustment for potential confounders, the marginal risk difference between the two groups was 13.9 per 1000 participants, suggesting that for every 1000 people screened, the presence of a cancer SIF is associated with 13.9 additional cases of extrapulmonary cancer.
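As a back-of-envelope check, the crude (unadjusted) per-1000-round risks can be reconstructed from the counts quoted above. This is an illustrative recomputation, not the study's adjusted analysis; the without-SIF figure lands slightly below the reported 13.3, presumably reflecting exclusions in the published cohort, and the adjusted difference of 13.9 is smaller than this crude gap because it accounts for confounders.

```python
# Counts taken from the figures quoted in this article
rounds_total = 75_104
rounds_sif = 2_265        # screening rounds with a cancer SIF recorded
cancers_total = 1_025     # extrapulmonary cancers within one year of a round
cancers_with_sif = 67

rounds_no_sif = rounds_total - rounds_sif
cancers_no_sif = cancers_total - cancers_with_sif

rate_sif = 1000 * cancers_with_sif / rounds_sif
rate_no_sif = 1000 * cancers_no_sif / rounds_no_sif

print(f"with cancer SIF:       {rate_sif:.1f} per 1000 rounds")
print(f"without cancer SIF:    {rate_no_sif:.1f} per 1000 rounds")
print(f"unadjusted difference: {rate_sif - rate_no_sif:.1f} per 1000 rounds")
```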

When the researchers looked at specific cancer types, they found that the marginal risk difference was substantially higher for urinary cancers, at 17.0 per 1000 participants. It was 5.0 for digestive cancer, 12.3 for breast cancer, and 13.8 for other cancers including lymphoma and leukemia.

“In general, if an abnormality is found that might indicate cancer, the patient receives additional imaging to evaluate that abnormality,” Gareen told Inside Precision Medicine. “Our paper provides additional information as to those abnormalities that should be considered to increase the risk of a cancer diagnosis.”

Importantly, mortality from extrapulmonary cancer accounted for 22.3% of the certified deaths in the LDCT arm of the NLST. Therefore “early detection of these cancers may facilitate early treatment and potentially reduce associated morbidity and mortality,” the authors write. “Identification of cancer SIFs associated with extrapulmonary cancers in NLST participants could be used to plan appropriate diagnostic evaluations for patients undergoing lung cancer screening.”

Gareen said the next step will be to determine if the findings are replicated in lung screening in the community, or if the rate in community screening is higher or lower.

In an accompanying comment, Patrick Senior and Andrew Creamer, both from Gloucestershire Hospitals NHS Foundation Trust in Gloucester, United Kingdom, point out that the false positive rate for a cancer SIF was 97% but say “it is hard to imagine a scenario in which an incidental finding with even a possibility of representing cancer would be disregarded.”

However, they note that “when considered in the context of the numbers of people eligible for lung cancer screening programs around the world, acting on such findings poses a considerable additional burden on the health systems that must investigate them.”

Senior and Creamer say that the results “underscore the importance of both a robust health economics analysis of how screening programs manage such incidental findings and patient-centered research to understand the impact that such unexpected results may have on the individual. Further research is needed to ensure that screening programs are confident when faced with information they did not ask for.”

The post Lung Screening Incidental Findings May Guide Follow-Up for Other Cancers appeared first on Inside Precision Medicine.

Base Editing Shows Early Promise for Treating Beta Thalassemia

The Chinese biotech CorrectSequence Therapeutics, also known as Correctseq, reports positive results from a Phase I study of its technology, in which a patient’s own hematopoietic stem cells are base edited to treat beta thalassemia.

The trial, published in Nature, included five patients with transfusion-dependent beta thalassemia who were able to stop red blood cell transfusions, the standard treatment for the condition, after receiving the base-edited treatment CS-101. The participants continued to have good levels of hemoglobin with no serious side effects during follow-up.

Beta thalassemia is a rare inherited condition affecting around one in 100,000 people in the U.S. Mutations in the beta‑globin gene HBB reduce or stop production of the beta chains of hemoglobin, leading to chronic anemia that varies in its severity.

There are already several therapies on the market for beta thalassemia. The most common treatment is still regular blood transfusions to manage the anemia, but the genetic therapies Zynteglo, a lentiviral gene therapy developed by Bluebird Bio, and Casgevy, a CRISPR-edited therapy developed by Vertex Pharmaceuticals and CRISPR Therapeutics, have recently been approved by the FDA.

Casgevy works by boosting fetal hemoglobin levels to treat the anemia seen in thalassemia patients. It uses CRISPR–Cas9 to cut both strands of DNA at the BCL11A enhancer site, which relies on error‑prone repair and can theoretically generate insertions, deletions, and larger rearrangements.

Correctseq is also aiming to raise fetal hemoglobin levels with CS-101, targeting the same site, but is only changing individual bases without making a full cut, which should reduce risks linked to double‑strand breaks, such as large deletions or chromosomal translocations.
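The mechanistic contrast (a double-strand cut repaired with random indels versus a single, predictable base conversion) can be caricatured on a toy DNA string. The sequence below is invented, not the real BCL11A enhancer, and the functions are cartoons of the two editing outcomes, not how editors are modeled in practice.

```python
import random

random.seed(1)
SITE = "ACTGGCCAGAACGCTTAC"  # invented stand-in for the BCL11A enhancer site

def cas9_cut_and_repair(seq, cut):
    """Cas9 cuts both strands; error-prone repair leaves a random small indel."""
    if random.random() < 0.5:
        n = random.randint(1, 3)
        return seq[:cut] + seq[cut + n:]                   # small deletion
    return seq[:cut] + random.choice("ACGT") + seq[cut:]   # small insertion

def base_edit(seq, pos):
    """A base editor chemically converts one base (here A->G) without cutting."""
    assert seq[pos] == "A", "this toy editor targets an adenine"
    return seq[:pos] + "G" + seq[pos + 1:]

edited = base_edit(SITE, 9)            # same length, one deterministic change
broken = cas9_cut_and_repair(SITE, 9)  # length changes, outcome heterogeneous
print(edited)
print(broken)
```

Because the base-edited product is uniform while the cut-and-repair product is a mixture of altered lengths, the former sidesteps the large deletions and rearrangements that double-strand breaks can theoretically generate.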

In this study, CS-101 was given to five patients with beta thalassemia who had previously been treated with blood transfusions. The process involves extracting their stem cells, reactivating fetal hemoglobin production using base editing, giving the patients chemotherapy to clear existing stem cells and make way for the newly edited population, and finally infusing the edited stem cells back into the patients.

All five patients were able to stop red blood cell transfusions and had good hemoglobin levels at three months. These levels held steady through a median follow-up of 23 months. No deaths or cancers attributable to the chemotherapy were reported, and the safety profile so far is acceptable.

Although these results are promising, this trial is just a small initial study, and further work is needed to confirm the safety and efficacy of CS-101. “The planned Phase II/III trial will be crucial for evaluating a larger and more genetically diverse patient population across multiple centers,” write the authors.

“Extended follow-up will be required to enable comprehensive analyses of chimerism and clonality, which will facilitate more definitive assessment of long-term safety, engraftment dynamics and clinical benefit.”

One of Correctseq’s main competitors is U.S.-based Beam Therapeutics, which is developing a similar base-edited treatment. Beam is behind Correctseq in developing its edited therapy for beta thalassemia, but ahead with its therapy for sickle cell disease, which Correctseq is also targeting via a similar pathway.

The Chinese biotech industry is currently on an upward trajectory. Correctseq is one of many Chinese biotech companies currently working to produce competitors for gene therapies like Casgevy and Zynteglo at a more affordable price than those seen in the U.S.

The post Base Editing Shows Early Promise for Treating Beta Thalassemia appeared first on Inside Precision Medicine.

Childhood Dementia Explained by Synaptic Dysfunction, Opens New Therapies

In a new study published in Nature Communications titled, “Modelling synaptic dysfunction in childhood dementia using human iPSC-derived cortical networks,” researchers from Flinders University in Adelaide have uncovered how hyperactive and dysregulated synaptic circuits emerge in the brain tissue of children impacted by Sanfilippo syndrome, a common form of childhood dementia.

In Australia, an estimated 1,400 children currently live with childhood dementia, with hundreds of thousands of cases worldwide. Sanfilippo syndrome is a rare genetic condition that causes fatal brain damage. Children typically reach early developmental milestones before rapidly losing cognitive skills, speech, and mobility. Early symptoms often include hyperactivity and sleep disturbance.

Alterations in synaptic communication play key roles in neurodegenerative disease progression and cognitive decline. Yet few studies have explored how imbalances between synaptic excitation and inhibition contribute to pediatric neurodegenerative disorders.

Cedric Bardy, PhD, professor and head of the Laboratory for Human Neurophysiology and Genetics at the South Australian Health and Medical Research Institute (SAHMRI), describes the study findings as “significant progress.” Chronic overactivity in the brain appears to be a fundamental mechanism contributing to cognitive deterioration in children with Sanfilippo syndrome.

Using human stem cell-derived cortical neurons and electrophysiology, the team demonstrated that excitatory synapses in the neurons of affected children become abnormally active during early brain development. 

While these neurons initially developed and functioned normally, they became increasingly overactive over time. Brain cell networks showed bursts of intense, highly synchronized electrical activity as they matured, mirroring the hyperactivity and neurological symptoms seen in children with the condition. 

“This hyperactivity offers a clear biological explanation for early behavioral changes, and it brings us closer to understanding the complex mechanisms contributing to childhood dementia,” said Bardy.

Results also demonstrated that these neurons are vulnerable to stress. When exposed to mild nutrient deprivation, excitatory synaptic abnormalities increased, suggesting that common illnesses or physiological stressors may accelerate neurological decline. 

“Our research shows that disrupted synaptic communication is not simply a byproduct of degeneration. It is an early driver of the disease,” Bardy says. 

Megan Maack, CEO and founder of the Childhood Dementia Initiative, is a co-author of the study and has been involved in guiding the project since its inception.

“This research is significant not just for Sanfilippo syndrome, but for the field of childhood dementia as a whole,” said Maack. “By identifying the precise cellular mechanisms driving the disease, we are moving towards a personalized medicine approach—the kind of targeted treatment strategy that has transformed outcomes for children with cancer.”

Researchers are now evaluating whether drugs that are already on the market for use in other conditions could be repurposed for childhood dementia. Bardy says the team has already demonstrated that these synaptic imbalances can be corrected with certain medications in the laboratory, indicating that they represent a genuine therapeutic target. 

The post Childhood Dementia Explained by Synaptic Dysfunction, Opens New Therapies appeared first on GEN – Genetic Engineering and Biotechnology News.