Comment Deadline: FDA Use of Standards in Regulatory Oversight of NGS-Based In Vitro Diagnostics Used for Diagnosing Germline Diseases
As part of the White House's Precision Medicine Initiative (PMI), the Food and Drug Administration (FDA) is issuing this draft guidance to set out FDA's proposed approach to the content and possible use of standards in the oversight of targeted and whole exome human DNA sequencing (WES) NGS-based tests intended to aid in the diagnosis of individuals with suspected germline diseases or other conditions. This document provides recommendations for designing, developing, and validating NGS-based tests for germline diseases, and also discusses possible use of FDA-recognized standards for regulatory oversight of these tests. These recommendations are based on FDA's understanding of the tools and processes needed to run an NGS-based test, along with the design and analytical validation considerations appropriate for such tests. This draft guidance is not final, nor is it in effect at this time.
As part of the PMI, FDA is committed to implementing a flexible and adaptive regulatory oversight approach, which fosters innovation and simultaneously assures that patients have access to accurate and meaningful test results.
The FDA invites comments in general, and on the following questions, in particular:
- Does the draft guidance content adequately address the analytical performance of targeted and whole exome human DNA sequencing (WES) NGS-based tests intended to aid in the diagnosis of individuals with suspected germline diseases or other conditions (referred to as “NGS-based tests for germline diseases” or “NGS-based tests” in the guidance)? For example, do the recommendations outlined in the draft guidance adequately address the analytical performance of NGS-based tests used as an aid in diagnosis of patients with signs and symptoms of developmental delay or intellectual disability, undiagnosed diseases, or hereditary cancer syndromes? If not, what additional test design, development, or validation activities are necessary for analytical validation of such tests? Are there specific indications within this broad intended use that require different or additional test design, development, or validation activities from those described in the draft guidance?
- Do the recommendations in the draft guidance adequately address the analytical validation of NGS-based tests that use targeted sequencing panels or WES? Are there differences between the use of targeted panels and WES that were not adequately distinguished in the recommendations described in the draft guidance?
- The recommendations in this document focus on WES and targeted NGS-based tests for germline diseases. Are the recommendations outlined in the guidance sufficient to address analytical validation for whole genome sequencing (WGS) NGS-based tests for germline diseases? If not, what additional test design, development, and validation activities are needed to address the analytical validation of such tests?
- Accuracy is generally described using an agreement, typically positive and negative percent agreement (PPA and NPA), between a new test and an accepted reference method. For NGS-based tests, positive predictive value (PPV) may be a more meaningful metric than NPA when calculating the likelihood that a variant call detected by the test is a true positive. If PPV is calculated using only analytical results, without taking into account prevalence in a population, it is sometimes called "technical" PPV (TPPV) to distinguish it from prevalence-based PPV. What are the benefits and weaknesses of assessing NGS-based test accuracy using TPPV in addition to PPA and NPA, or instead of NPA?
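To illustrate why TPPV can be more informative than NPA for NGS-based tests, the three metrics can be computed from a variant-level comparison against a reference method. The sketch below uses hypothetical counts (not from the guidance): because the number of true-negative positions across an exome is enormous, NPA sits near 1 even when false-positive calls are common, whereas TPPV responds directly to them.

```python
def agreement_metrics(tp, fp, fn, tn):
    """Agreement metrics for a new NGS test vs. an accepted reference method.

    tp: variants called by both the test and the reference (true positives)
    fp: variants called by the test but absent from the reference (false positives)
    fn: reference variants missed by the test (false negatives)
    tn: positions both methods report as reference/wild-type (true negatives)
    """
    ppa = tp / (tp + fn)    # positive percent agreement (sensitivity analogue)
    npa = tn / (tn + fp)    # negative percent agreement (specificity analogue)
    tppv = tp / (tp + fp)   # "technical" PPV: analytical results only, no prevalence
    return ppa, npa, tppv

# Hypothetical counts for a WES comparison: ~3M assessed positions,
# 5,000 true variants, 50 false positives, 50 false negatives.
ppa, npa, tppv = agreement_metrics(tp=4950, fp=50, fn=50, tn=2_995_000)
print(f"PPA={ppa:.4f}  NPA={npa:.6f}  TPPV={tppv:.4f}")
```

With these numbers NPA is 0.99998 despite 50 false-positive calls, while TPPV (0.99) makes the false-positive burden visible, which is the motivation behind the question above.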
- Are the minimum performance thresholds presented in this draft guidance appropriate, or are alternative thresholds more appropriate? Are there “best ways” to determine acceptable thresholds for each metric? Are there performance metrics that do not require minimum thresholds? Are there test scenarios where minimum thresholds are not useful or relevant?
- How can bias and over-fitting be minimized or accounted for if known “reference” samples are used as comparators in accuracy studies?