Written by Morgan Evans (Chief Scientific Officer), Jack Barton, and Agata Michalak (Director, Immunoassay)
There is a quiet but consequential debate running through the bioanalytical community. It does not make headlines, but it has real implications for how drugs are developed, how clinical data are interpreted, and how efficiently new therapies reach patients. At its core is a deceptively simple question: Does current immunogenicity testing still align with modern drug development and clinical decision-making?
For more than two decades, the industry has operated within a framework born of crisis. In the early 2000s, patients treated with recombinant erythropoietin began developing pure red-cell aplasia, a severe, immune-mediated reaction caused by anti-drug antibodies that neutralised not just the therapeutic but the patients’ own endogenous erythropoietin. Regulators responded quickly and appropriately, establishing a structured, tiered immunogenicity testing paradigm that required screening, confirmatory, and titration assays for all biotherapeutic programmes. It was a sensible response to a high-stakes and poorly understood phenomenon.
The problem is that the field never fully moved beyond it.
Today, the three-tiered immunogenicity testing strategy is applied almost universally. It is used for low-risk monoclonal antibodies, highly humanised molecules, and programmes where decades of clinical experience show that antibody formation rarely leads to meaningful harm. Guidance created for a high-risk scenario has become the default for everything. This has happened through institutional inertia and regulatory caution. Increasingly, leading bioanalytical scientists are questioning whether this approach is still justified. Many now argue that it is neither scientifically necessary nor beneficial for patients.
Why Is the Immunogenicity Testing Paradigm Under Pressure?
The current moment is notable because of the breadth and quality of the challenge. This is not fringe dissent. Cross-industry working groups and major bioanalytical forums have spent years examining the evidence. The conclusions are becoming consistent.
One of the clearest issues lies in how the three-tier immunogenicity testing system functions in practice. The screening and confirmatory assays use similar formats. They are not truly independent. In many cases, the confirmatory step does little more than filter out borderline signals: samples that sit just above the cut-point. If the screening assay is designed with a tighter false-positive rate, the confirmatory tier adds limited value. It increases cost, extends timelines, and consumes sample volume without significantly improving interpretation within a broader bioanalysis strategy for clinical trials.
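As a rough illustration of that argument, the sketch below simulates how the screening false-positive rate determines the pool of borderline samples the confirmatory tier is asked to resolve. The distributions, sample sizes, and normal-theory cut-point construction (mean + z × SD) are illustrative assumptions, not validated assay parameters.

```python
# Minimal sketch (not a validated method): how the screening false-positive
# rate shapes the pool of borderline samples left for the confirmatory tier.
import numpy as np

rng = np.random.default_rng(seed=42)

# Simulated normalized screening signals from drug-naive baseline samples.
naive_signals = rng.normal(loc=1.0, scale=0.1, size=5000)

# Parametric screening cut-points targeting 5% and 1% false-positive rates
# (mean + z * SD, the common normal-theory construction).
cp_5pct = naive_signals.mean() + 1.645 * naive_signals.std(ddof=1)
cp_1pct = naive_signals.mean() + 2.326 * naive_signals.std(ddof=1)

# Truly negative study samples drawn from the same distribution: every
# screen-positive here is a false positive the confirmatory tier must filter.
study = rng.normal(loc=1.0, scale=0.1, size=1000)
print(f"Screen positives at 5% FPR cut-point: {(study > cp_5pct).sum()}")
print(f"Screen positives at 1% FPR cut-point: {(study > cp_1pct).sum()}")
```

With the tighter cut-point, far fewer false positives survive screening in the first place, which is the sense in which the confirmatory tier adds limited value.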
The titration step presents a different problem. It attempts to quantify response magnitude through serial dilution. However, this introduces variability. Low-affinity antibodies can dissociate during dilution. Resolution is poor near the lower assay range. Additional sample handling also introduces noise. For a measurement intended to define immune response magnitude, it is not always reliable.
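A minimal sketch of the dilution arithmetic makes the resolution problem concrete. The cut-point, dilution scheme, and signals below are assumed values for illustration; note that the reported titre can only ever fall on the discrete steps of the dilution series.

```python
# Minimal sketch of titre determination by serial dilution (illustrative
# numbers only). The titre is the dilution factor of the last dilution whose
# signal stays above the assay cut-point, so resolution is inherently
# stepwise: a two-fold scheme can only return titres of 10, 20, 40, 80...
ASSAY_CUT_POINT = 1.2  # assumed normalized cut-point

def titre(dilution_signals):
    """dilution_signals: list of (dilution_factor, signal), most to least concentrated."""
    last_positive = None
    for dilution, signal in dilution_signals:
        if signal > ASSAY_CUT_POINT:
            last_positive = dilution
    return last_positive  # reported titre, or None if negative at all dilutions

# Two-fold series starting at 1:10. Low-affinity antibodies may dissociate as
# dilution proceeds, dropping the signal faster than concentration alone predicts.
sample = [(10, 3.8), (20, 2.4), (40, 1.5), (80, 1.1), (160, 0.9)]
print(titre(sample))  # 40
```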
An alternative is emerging. Signal-to-noise ratio is increasingly used as a measure of antibody magnitude. Across multiple datasets, strong correlations between S/N and titre have been demonstrated. Importantly, conclusions relating to immunogenicity testing and ADA impact remain consistent. This includes relationships with pharmacokinetics and pharmacodynamics. S/N also provides practical advantages. It offers improved precision, continuous data across all samples, and better sensitivity for low-affinity responses. These benefits are particularly powerful when interpreted through strong large molecule bioanalytical expertise.
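A hedged sketch of the comparison, with invented numbers: S/N yields a continuous value for every sample, while titre collapses onto the steps of the dilution series. A rank correlation such as Spearman’s is one way the dataset-level concordance described above can be checked.

```python
# Minimal sketch (illustrative data): compare continuous S/N values with the
# stepwise titres the same samples would report.
import numpy as np
from scipy.stats import spearmanr

negative_control = 1.0  # assumed mean negative-control signal

raw_signals = np.array([1.1, 1.6, 2.9, 5.2, 11.8, 24.0])
s_to_n = raw_signals / negative_control          # continuous magnitude
titres = np.array([0, 10, 20, 40, 160, 320])     # dilution endpoints (0 = screen-negative)

rho, p = spearmanr(s_to_n, titres)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```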
Are Cut-Points Still Relevant in Modern Immunogenicity Testing?
Few elements of immunogenicity testing are as deeply embedded as the cut-point. Few are questioned as often.
Cut-points sit at the centre of assay development and validation. They define what is positive and what is negative. They underpin screening and confirmatory workflows. They shape how immunogenicity testing data are interpreted. Over time, they have become standard practice within bioanalysis.
Yet this raises an important question: What do cut-points actually represent?
At their core, they impose a statistical boundary on biological data. They separate signal from noise based on predefined thresholds. However, biology does not operate in binary terms. A sample just above the cut-point is treated as positive. A sample just below is treated as negative. In reality, these two samples may be biologically indistinguishable.
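The sketch below, using assumed numbers, shows the problem in its simplest form: two samples whose signals differ by less than typical assay variability receive opposite labels.

```python
# Minimal sketch of the binary problem: two samples separated by less than
# assay noise land on opposite sides of the cut-point and get opposite labels.
# The cut-point and variability figures are illustrative assumptions.
CUT_POINT = 1.20   # assumed screening cut-point
ASSAY_CV = 0.05    # assumed ~5% analytical variability

for name, signal in [("Sample A", 1.21), ("Sample B", 1.19)]:
    label = "positive" if signal > CUT_POINT else "negative"
    # Both signals sit well inside one CV of the threshold.
    within_noise = abs(signal - CUT_POINT) < ASSAY_CV * CUT_POINT
    print(f"{name}: {label} (within assay noise of cut-point: {within_noise})")
```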
As assay development improves, this limitation becomes more visible. Modern bioanalytical platforms detect very low levels of antibody response. Many of these responses have no impact on pharmacokinetics, pharmacodynamics, or clinical outcomes. As a result, immunogenicity testing is generating more positives. However, it is not always generating more insight.
This raises a more fundamental issue: Are cut-points still useful for decision-making in the context of modern large molecule bioanalytical expertise?
There is a growing argument that they are not. Signal-based approaches offer an alternative. These methods evaluate signal-to-noise across all samples. They treat immunogenicity as a continuum rather than a category. This allows patterns to emerge over time and across subjects. It also aligns more naturally with pharmacokinetics, pharmacodynamics, and clinical outcomes.
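One hypothetical shape such an analysis can take, with assumed column names and invented values: follow each subject’s S/N relative to their own baseline, so treatment-emergent patterns stand out without a categorical call.

```python
# Minimal sketch (illustrative data, assumed column names): treating S/N as a
# continuum means tracking each subject's trajectory rather than a yes/no flag.
import pandas as pd

data = pd.DataFrame({
    "subject": ["S01"] * 4 + ["S02"] * 4,
    "week":    [0, 4, 8, 12] * 2,
    "s_to_n":  [1.0, 1.1, 1.0, 1.1,    # S01: flat, no emergent response
                1.0, 2.5, 6.0, 4.2],   # S02: clear treatment-emergent rise
})

# Fold-rise over each subject's own baseline reveals the pattern over time.
baseline = data[data["week"] == 0].set_index("subject")["s_to_n"]
data["fold_over_baseline"] = data["s_to_n"] / data["subject"].map(baseline)
print(data)
```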
That said, cut-points may still have a role. In certain regulatory contexts, categorical decisions are required. However, their position as the central organising principle of immunogenicity testing is increasingly difficult to defend.
A more pragmatic view is emerging. Cut-points should support the strategy, not define it.
What Does “Context-of-Use” Mean in Immunogenicity Testing?
The evolving discussion around immunogenicity testing is not about doing less. It is about doing what is appropriate. The concept of context-of-use is central to this shift. Long established in biomarker assay development, it is now being extended to immunogenicity.
This changes how bioanalytical strategy is designed. The starting point is no longer regulatory expectation alone. Instead, it is the clinical question. What risk does the molecule present? What decisions need to be made? What level of sensitivity is meaningful?
In some cases, a simplified approach is justified. For low-risk monoclonal antibodies, immunogenicity may have limited clinical impact. Pharmacokinetics and pharmacodynamics will often reveal any meaningful changes in drug behaviour. In these situations, a full three-tier immunogenicity testing strategy may not be necessary.
In other cases, more extensive characterisation is required. The key is recognising the difference. The mistake has been treating all programmes the same.
Applying this effectively requires not just technical capability, but deep large molecule bioanalytical expertise aligned with clinical trial endpoints and regulatory strategy.
Rethinking Neutralising Antibody (NAb) Assessment
Neutralising antibody assays are another area under review. Long considered essential, their role is now being re-examined.
NAbs are a subset of ADAs. Detecting them does not automatically indicate clinical relevance. This is particularly true as assay sensitivity increases. Many detected responses do not affect pharmacokinetics, pharmacodynamics, or efficacy.
An integrated approach offers a different perspective. By combining ADA data with pharmacokinetics, pharmacodynamics, and clinical outcomes, a clearer picture emerges. This approach focuses on impact rather than detection alone.
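A minimal sketch of what “impact rather than detection” can look like in practice, using invented concentrations and an assumed 50% loss-of-exposure flag: the question shifts from whether ADAs are present to whether drug exposure actually changes.

```python
# Minimal sketch (illustrative values, assumed threshold): an impact-focused
# read-out asks whether ADA-positive subjects actually lose exposure, rather
# than stopping at detection.
import numpy as np

trough_ada_neg = np.array([42.0, 38.5, 45.1, 40.2, 39.8])  # µg/mL, assumed units
reference = np.median(trough_ada_neg)

# ADA-positive subjects: most retain normal exposure; one shows a clear loss.
trough_ada_pos = {"P01": 41.0, "P02": 12.3, "P03": 37.9}
for subject, trough in trough_ada_pos.items():
    impacted = trough < 0.5 * reference  # assumed 50% loss-of-exposure flag
    print(f"{subject}: trough {trough} µg/mL, PK-impacted: {impacted}")
```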
Importantly, an impact-focused approach does not reduce scientific rigour. It requires stronger integrated bioanalysis capabilities and advanced assay development strategies. Data must be interpreted in context, and positive results must be linked to meaningful outcomes.
What This Means for Biotech Companies in Australia
For small to mid-sized biotechs, these changes are highly relevant. This is particularly true for those operating in or entering Australia’s clinical trial environment and bioanalytical services landscape.
There is a natural tendency to adopt conservative immunogenicity testing strategies. The risks of under-testing are well understood. However, the risks of over-testing are often underestimated. These include delays, increased costs, and data that do not support decision-making.
A more effective approach is strategic. Immunogenicity testing should be built from first principles. This includes assessing molecule risk, defining context-of-use, and aligning assays with clinical endpoints. It also requires strong bioanalytical laboratory support in Australia with global regulatory alignment.
Where Is Immunogenicity Testing Heading Next?
The direction of travel is clear. Regulators are engaging with these discussions. There is increasing openness to adaptive immunogenicity testing strategies. However, expectations remain high. Sponsors must demonstrate that their approach is scientifically justified.
Immunogenicity is not simply a regulatory requirement. It is a biomarker. It reflects how the body interacts with a therapeutic. When analysed alongside pharmacokinetics and pharmacodynamics, it provides valuable insight across development.
At Agilex Biolabs, we approach immunogenicity through integrated bioanalysis, advanced assay development, and deep large molecule bioanalytical expertise. We support sponsors across Australia and globally with services designed to deliver clinically meaningful data.
The three-tier paradigm served its purpose. The science has now moved forward.
The question is whether immunogenicity testing will move with it.
Frequently Asked Questions
Q: What is immunogenicity testing in drug development?
A: Immunogenicity testing evaluates whether a therapeutic drug triggers an immune response, including the formation of anti-drug antibodies (ADAs) that may affect safety, efficacy, or pharmacokinetics.
Q: Why is the three-tier immunogenicity testing approach being reconsidered?
A: The traditional three-tier approach (screening, confirmatory, titration) can increase cost and complexity without always improving clinical insight, especially for low-risk biologics.
Q: What is a context-of-use approach in immunogenicity testing?
A: Context-of-use means designing immunogenicity testing based on the molecule’s risk, clinical goals, and decision-making needs rather than applying a one-size-fits-all regulatory framework.
"*" indicates required fields

