GUIDANCE DOCUMENT
Establishing the Performance Characteristics of In Vitro Diagnostic Devices for the Detection or Detection and Differentiation of Influenza Viruses - Guidance for Industry and FDA Staff July 2011
- Docket Number: FDA-2008-D-0095
- Issued by: Center for Devices and Radiological Health
Document issued on: July 15, 2011
The draft of this document was issued on February 15, 2008
For questions regarding this document contact Tamara Feldblyum at 301-796-6195 or by email at tamara.feldblyum@fda.hhs.gov.
U.S. Department of Health and Human Services
Preface
Public Comment
You may submit written comments and suggestions at any time for Agency consideration to the Division of Dockets Management, Food and Drug Administration, 5630 Fishers Lane, rm. 1061, (HFA-305), Rockville, MD, 20852. Submit electronic comments to http://www.regulations.gov. Identify all comments with the docket number FDA-2008-D-0095. Comments may not be acted upon by the Agency until the document is next revised or updated.
Additional Copies
Additional copies are available from the Internet. You may also send an e-mail request to dsmica@fda.hhs.gov to receive an electronic copy of the guidance or send a fax request to 301-847-8149 to receive a hard copy. Please use the document number 1638 to identify the guidance you are requesting.
Table of Contents
- Introduction
- Background
- Scope
- Risks to Health
- Device Description
- Intended Use
- Test Methodology
- Instruments - Hardware and Software
- Ancillary Reagents
- Limitations
- Controls
- Interpreting and Reporting Test Results
- Establishing Performance Characteristics
- Analytical Performance
- Analytical Sensitivity
- Analytical Specificity
- Cut-off and Equivocal Zone
- Precision
- Carry-Over and Cross-contamination Studies (for multi-sample assays and devices that require instrumentation)
- Specimen Storage and Shipping Conditions
- Clinical Performance Studies
- Study Protocol
- Study Population
- Specimens
- Fresh vs. Frozen Samples
- Study Sites
- Reference Methods
- Post-market Performance Validation
- CLIA Waiver
- Nucleic Acid-based Influenza Devices
- Nucleic Acid Extraction
- Controls for Nucleic Acid-based Influenza Assays
- Analytical Performance
- References
Guidance for Industry and FDA Staff
Establishing the Performance Characteristics of In Vitro Diagnostic Devices for the Detection or Detection and Differentiation of Influenza Viruses
This guidance represents the Food and Drug Administration's (FDA's) current thinking on this topic. It does not create or confer any rights for or on any person and does not operate to bind FDA or the public. You can use an alternative approach if the approach satisfies the requirements of the applicable statutes and regulations. If you want to discuss an alternative approach, contact the FDA staff responsible for implementing this guidance. If you cannot identify the appropriate FDA staff, call the appropriate number listed on the title page of this guidance.
1. Introduction
FDA is issuing this guidance to provide industry and agency staff with recommendations for studies to establish the analytical and clinical performance of in vitro diagnostic devices (IVDs) intended for the detection, or detection and differentiation, of influenza viruses. These devices are used to aid in the diagnosis of influenza infection. They include devices that detect one specific type or subtype, as well as devices that detect more than one type or subtype of influenza virus and further differentiate among them.1
This guidance provides detailed information on the types of data FDA recommends submitting in support of Class I and Class II premarket submissions for these devices. The guidance includes a list of influenza virus strains recommended for analytical sensitivity studies, a list of microorganisms recommended for analytical specificity studies, and an example of a suggested format for presenting data from cross-reactivity studies.
The scope of this document is limited to types of data intended to establish the performance characteristics of devices that detect either influenza viral antigen(s) or influenza viral gene segment(s). It includes devices detecting influenza virus protein or nucleic acid targets, in either single-unit test formats or multi-test formats. It does not address performance for assays detecting serological response of the host to the viral antigen, nor does it address establishing performance of non-influenza components of multi-analyte or multiplex devices.
FDA’s guidance documents, including this guidance, do not establish legally enforceable responsibilities. Instead, guidance documents describe the Agency’s current thinking on a topic and should be viewed only as recommendations, unless specific regulatory or statutory requirements are cited. The use of the word should in Agency guidance documents means that something is suggested or recommended, but not required.
2. Background
This document recommends studies for establishing the performance characteristics of in vitro diagnostic devices for the detection, or detection and differentiation, of influenza viruses directly from human specimens or from culture isolates. FDA believes that these recommended studies will be relevant for premarket submissions (e.g., 510(k)) that may be required for a particular device.
A manufacturer who intends to market an in vitro diagnostic device for detection, or detection and differentiation, of influenza viruses must conform to the general controls of the Federal Food, Drug, and Cosmetic Act (the Act) and, unless exempt, obtain premarket clearance or approval prior to marketing the device (sections 510(k), 513, 515 of the Act; 21 U.S.C. 360(k), 360c, 360e)).
This document is intended to supplement 21 CFR 807.87 (information required in a premarket notification) and other FDA resources such as “Premarket Notification: 510(k)”, and "Guidance for Industry and FDA Staff: Format for Traditional and Abbreviated 510(k)s."
In addition, this document complements two FDA guidance documents that specifically address influenza IVDs: “In Vitro Diagnostic Devices to Detect Influenza A Viruses: Labeling and Regulatory Path,” and “Class II Special Controls Guidance Document: Reagents for Detection of Specific Novel Influenza A Viruses.”
The guidance document entitled, “In Vitro Diagnostic Devices to Detect Influenza A Viruses: Labeling and Regulatory Path,” addresses recommendations for fulfilling labeling requirements applicable to all in vitro diagnostic devices intended to detect influenza A (or A/B) virus directly from human specimens, with a particular emphasis on ensuring appropriate labeling for legally marketed influenza A (or A/B) test devices whose clearance is not based on data addressing performance with regard to novel influenza A viruses infecting humans (including H5N1). It also discusses the FDA's thinking on premarket pathways for new or modified products intended to detect influenza A viruses, including a novel influenza A virus, or to detect and differentiate a specific influenza A virus.
The guidance document entitled “Class II Special Controls Guidance Document: Reagents for Detection of Specific Novel Influenza A Viruses,” is one of two special controls for reagents for detection of specific novel influenza A viruses, classified into class II under 21 CFR 866.3332. This special control guidance document includes recommendations for establishing device performance, as well as recommendations for labeling and postmarket measures. Devices classified under 21 CFR 866.3332 are subject to an additional special control limiting distribution of these devices to laboratories with experienced personnel having training in standardized molecular testing procedures and expertise in viral diagnosis and appropriate biosafety equipment and containment.
This guidance is intended to complement the two preceding guidance documents by describing the types of studies FDA recommends for establishing the analytical and clinical performance of in vitro diagnostic devices (IVDs) intended for the detection, or detection and differentiation, of influenza viruses. FDA recommends that sponsors of influenza diagnostic devices use this guidance, in combination with the two existing guidances regarding influenza diagnostics, for information on FDA’s current thinking about the regulation of these devices.
3. Scope
As previously described, this document recommends studies for establishing the performance characteristics of in vitro diagnostic devices for the detection or detection and differentiation of influenza viruses, including those for the detection of novel influenza viruses in either human specimens or culture isolates. This document is limited to studies intended to establish the performance characteristics of devices that either detect influenza viral antigens or influenza viral gene segments (protein or nucleic acid). This guidance references serological reagents but does not address detection of serological response from the host to the viral antigen, nor does it address establishing performance of non-influenza components of multi-analyte or multiplex devices.
The scope of this document includes the devices described in existing classifications, as indicated below, and may also be applicable to future influenza diagnostic devices that may not fall within these existing classifications. Those future devices may include devices that will be subject to requests for initial classification under section 513(f)(2) of the act ("de novo classification"), as well as subsequent devices that seek determinations of substantial equivalence to future de novo classified devices.
The following are existing influenza IVD classification regulations:
21 CFR 866.3330 Influenza virus serological reagents:
(a) Identification. Influenza virus serological reagents are devices that consist of antigens and antisera used in serological tests to identify antibodies to influenza in serum. The identification aids in the diagnosis of influenza (flu) and provides epidemiological information on influenza. Influenza is an acute respiratory tract disease, which is often epidemic.
(b) Classification. Class I (general controls). The device is exempt from the premarket notification procedures in subpart E of part 807 of this chapter subject to the limitations in § 866.9.
Although devices within the classification described in 21 CFR 866.3330 are Class I devices, which are generally exempt from premarket notification (21 U.S.C. 360(l)), under FDA regulations a premarket notification may be required for some tests purported to fall within this type of device. Specifically, an IVD for detection of influenza is not exempt from submission of a 510(k) to the extent that it meets the limitations on exemption defined in 21 CFR 866.9:
- Under 21 CFR 866.9(c)(6), an IVD that is intended for use in identifying or inferring the identity of a microorganism directly from clinical material is not exempt from premarket notification requirements. An IVD that is intended to detect an influenza virus directly from a human specimen falls within this provision.
- In addition, an IVD to detect influenza may fall within the limitations in 21 CFR 866.9(a) if the new device is intended for a use different from the intended use of a legally marketed device classified under 21 CFR 866.3330, or within the limitations in 21 CFR 866.9(b) if it operates using a different fundamental scientific technology from existing influenza tests in that classification.
The following are the product codes for devices cleared under 21 CFR 866.3330:
GNS – Antisera, HAI, Influenza virus A, B, C
GNT – Antigens, HA (including HA control), Influenza virus A, B, C
GNX – Antigens, CF, including CF control, Influenza virus A, B, C
GNW – Antisera, CF, Influenza virus A, B, C
NIA – Nucleic acid amplification, Influenza virus
21 CFR 866.3332 Reagents for detection of specific novel influenza A viruses
(a) Identification. Reagents for detection of specific novel influenza A viruses are devices that are intended for use in a nucleic acid amplification test to directly detect specific virus RNA in human respiratory specimens or viral cultures. Detection of specific virus RNA aids in the diagnosis of influenza caused by specific novel influenza A viruses in patients with clinical risk of infection with these viruses, and also aids in the presumptive laboratory identification of specific novel influenza A viruses to provide epidemiological information on influenza. These reagents include primers, probes, and specific influenza A virus controls.
(b) Classification. Class II (special controls). The special controls are:
(1) FDA’s guidance document entitled “Class II Special Controls Guidance Document: Reagents for Detection of Specific Novel Influenza A Viruses.” See § 866.1(e) for information on obtaining this document.
(2) The distribution of these devices is limited to laboratories with experienced personnel who have training in standardized molecular testing procedures and expertise in viral diagnosis, and appropriate biosafety equipment and containment.
The following are the product codes for devices cleared under 21 CFR 866.3332:
NXD – Nucleic Acid Amplification, Novel influenza A virus, A/H5 (Asian lineage) RNA
OMS – Novel influenza A virus, A/H5 NS1 protein
4. Risks to Health
Human influenza is a highly contagious acute respiratory tract disease. There are three genera of human influenza viruses: A, B and C. Infection with influenza A virus is the most severe, with several notable pandemics during the past century. Influenza A viruses are classified into subtypes according to the antigenic composition of their hemagglutinin (HA) and neuraminidase (NA) glycoproteins on the viral envelope.
Illness caused by commonly circulating influenza viruses can cause high morbidity and mortality, particularly in special populations such as the elderly and the very young. The development of acquired immunity to seasonal influenza viruses is limited because influenza viruses mutate in small but important ways from year to year (a process known as antigenic drift). More dramatic changes or major antigenic shifts may result in the emergence of a new subtype of influenza A virus, or novel virus that has never circulated or has not circulated in humans for several decades. Novel influenza viruses have the potential to cause widespread disease and/or disease of unusually high severity because few, if any, people have prior exposure to these viruses. This lack of immunity, as well as additional pathogenic factors that may also increase virulence, results in a greater likelihood of morbidity and mortality among those infected.
In vitro diagnostic devices for the detection, or detection and differentiation, of influenza viruses are important for establishing the diagnosis of influenza, for differentiating seasonal from novel influenza virus strains, and for obtaining epidemiologic information on influenza outbreaks. Public health officials have emphasized the need for reliable influenza diagnostic devices that can differentiate seasonal from emerging viral strains and provide rapid test results.
Failure of devices for detection of influenza viruses to perform as expected, or failure to correctly interpret results, may lead to incorrect patient management decisions and inappropriate public health responses. In the context of individual patient management, a false negative report could lead to delays in providing (or failure to provide) definitive diagnosis and appropriate treatment and infection control and prevention measures. A false positive report could lead to unnecessary or inappropriate treatment or unnecessary control and prevention actions. Therefore, establishing the performance of these devices and understanding the risks that might be associated with the use of these devices is critical to their safe and effective use.
The studies to establish the performance of influenza detection devices as described in this guidance document are recommended to support premarket submissions and FDA’s finding of substantial equivalence for these devices.
5. Device Description
You must identify a legally marketed predicate device in your 510(k). 21 CFR 807.92(a)(3). You should also identify the regulation and the product code for your device. We recommend including a table that outlines the similarities and differences between the predicate and the new device. You should include the following descriptive information to adequately characterize the new device that is intended to detect or detect and differentiate influenza viruses.
5.A Intended Use
The intended use statement should specify the influenza virus types and subtypes the device detects and identifies, the nature of the analyte (e.g., antigen or RNA), test platform, specimen types for which testing will be indicated, the clinical indications for which the test is to be used, and the specific population(s) for which the test is intended. The intended use statement should state whether the test is qualitative, whether analyte detection is presumptive, and any specific conditions of use.
5.B Test Methodology
You should describe in detail the methodology used by your device. For example, the following elements, as applicable to the device, should be included:
- Description of the technology (e.g., immunoassay, RT-PCR, bead array) and whether the device is a manual test or run on an instrument.
- Information and rationale for selection of specific targets and the methods used to design antibodies or primers and probes.
- Specificity of capture and detection reagents for influenza antigens or nucleic acid sequences of interest.
- Specimen types and collection methods (e.g., swabs, aspirates, viral culture media, etc.).
- Assay components provided or recommended for use, and their function within the system (e.g., buffers, enzymes, fluorescent dyes, instrumentation and software).
- Internal controls and a description of their specific function in the system.
- External controls that are recommended or provided to the users.
- Types of output generated by the device and system parameters (e.g., measurement ranges, units, when applicable).
- The computational path from raw data to the reported result (please see Section 5.C below “Instruments - Hardware and Software” for details).
- Illustrations or photographs of non-standard equipment or methods.
Your 510(k) should include performance information supporting the conclusion that design control requirements for your device have been met as described in 21 CFR 820.30.
5.C Instruments - Hardware and Software
For instruments and systems that measure multiple signals, and for other complex laboratory instrumentation that has not been previously cleared, refer to the guidance document "Class II Special Controls Guidance Document: Instrumentation for Clinical Multiplex Test Systems,"[1] for details on the types of instrument-related data you should provide to support clearance.
If your device includes software, you should submit software information in accordance with the level of concern described in the FDA guidance document "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices" [2]. You should determine the level of concern prior to the mitigation of hazards. In vitro diagnostic devices of this type are typically considered a moderate level of concern; software flaws could indirectly affect the patient and potentially result in injury because inaccurate information may be given to the healthcare provider and the patient.
You should clearly describe how raw signals are converted into a result including adjustment to the background signal for normalization, if applicable. We also recommend that you include the following information for software development and implementation in the submission:
- System and Software Requirements
- Hazard Analysis
- Architecture Design Chart
- Software Design Specification
- Software Development Environment Description
- Verification and Validation
- Traceability Analysis
- Unresolved Anomalies
Before beginning clinical studies, the configuration of the hardware and software components should be very similar or identical to the final version of the device. A risk assessment should be performed if any significant changes are made to the hardware or software after the completion of the clinical studies and before the clearance and distribution of the device.
Below are additional references to help you develop and maintain your device under good software life cycle practices consistent with FDA regulations.
- General Principles of Software Validation; Final Guidance for Industry and FDA Staff;
- Guidance for Off-the-Shelf Software Use in Medical Devices; Final;
- 21 CFR 820.30 Subpart C – Design Controls of the Quality System Regulation;
- ISO 14971-1; Medical devices - Risk management - Part 1: Application of Risk Analysis.
- AAMI 62304:2006; Medical device software - Software life cycle processes.
5.D Ancillary Reagents
Ancillary reagents are those reagents that are specified in device labeling as “required but not provided” in order to carry out the assay as indicated in its instructions for use. Ancillary reagents of concern in this context are those that are specified, for example, by manufacturer name and/or product number. For example, if your device labeling specifies the use of Brand X DNA amplification enzyme, and use of any other DNA amplification enzyme may alter the performance characteristics of the device from that reported in the labeling, then Brand X DNA amplification enzyme is an ancillary reagent of concern. In contrast, if your device requires the use of 95% ethanol, and any brand of 95% ethanol will allow the device to achieve the performance characteristics provided in the labeling, then 95% ethanol is not an ancillary reagent of concern.
If the instructions for use of your device specify ancillary reagents of concern, you should address how you will ensure that the results of testing with your device and these ancillary reagents, in accordance with the instructions, will be consistent with the performance established in the premarket submission. Every effort should be made to bring the ancillary reagents under your company’s quality system by recommending use of only those ancillary reagents that you have determined meet quality standards for your test. The plan may include an application of quality systems approaches, product labeling, and other measures. You should include the elements described below in your submission.
- A risk assessment addressing the use of ancillary reagents, including risks associated with the management of reagent quality and variability, risks associated with any inconsistency between instructions for use provided directly with the ancillary reagent and those supplied by the manufacturer with the device, and any other issues that could present a risk of obtaining incorrect results with your device.
- Using your risk assessment as a basis, you should describe how you intend to mitigate risks through implementation of any necessary controls over ancillary reagents. These may include, where applicable:
- Plans for assessing user compliance with labeling instructions regarding ancillary reagents.
- Material specifications for ancillary reagents in the labeling.
- Identification of reagent lots that will allow appropriate performance of your device.
- Stability testing.
- Complaint handling.
- Corrective and preventive actions.
- Plans for alerting users in the event of an issue involving ancillary reagents that would impact the performance of your device.
- Any other issues that must be addressed in order to ensure safe and effective use of your test when used in combination with named ancillary reagents, in accordance with the device’s instructions for use.
In addition, you should provide testing data to establish that the quality controls supplied or recommended are adequate for detecting performance or stability problems with the ancillary reagents.
6. Limitations
You must include a statement of limitations of the procedure in the labeling accompanying the product (21 CFR 809.10(b)(10)). This should include potential issues that may affect the performance of your device and were not addressed in your analytical or clinical studies. You should include any potential risks associated with using the device in the Warnings and Limitations sections of the device labeling. We recommend including statements such as those listed below as they pertain to your device:
- The detection of viral nucleic acid [or viral antigen] is dependent upon proper specimen collection, handling, transportation, storage, and preparation, including extraction. Failure to observe proper procedures in any one of these steps can lead to incorrect results.
- Negative results do not preclude influenza virus infection and should not be used as the sole basis for treatment or other patient management decisions.
- Results from the [device name] should be correlated with the clinical history, epidemiological data and other data available to the clinician evaluating the patient.
- Viral nucleic acids may persist in vivo independent of virus viability. Detection of analyte target(s) does not imply that the corresponding virus(es) are infectious, or are the causative agents of clinical symptoms.
- This device has been evaluated for use with human specimen material only.
- False negative results may occur if the number of organisms in the clinical specimen is below the detection limits of the device.
- If the virus mutates in the target region, influenza virus may not be detected or may be detected less predictably.
- This device is a qualitative test and does not provide information on the viral load present in the specimen.
- The performance of this device has not been evaluated for patients without signs and symptoms of influenza infection.
- The performance of this device has not been evaluated for monitoring treatment of influenza infection.
- The performance of this device has not been evaluated for the screening of blood or blood products for the presence of influenza.
- This test cannot rule out diseases caused by other bacterial or viral pathogens.
- The effect of interfering substances has only been evaluated for those listed in the labeling. Interference by substances other than those described below can lead to erroneous results.
- Cross-reactivity with respiratory tract organisms other than those listed below [in the labeling] may lead to erroneous results.
- The performance of this device has not been evaluated for patients receiving intranasally administered influenza vaccine.
- The performance of this device has not been evaluated for immunocompromised individuals.
- The prevalence of infection will affect the test’s predictive value.
7. Controls
When conducting the performance studies described below, we recommend that you run appropriate external controls every day of testing for the duration of the analytical and clinical studies. Examples of appropriate external controls include vaccine or prototypic vaccine strains, low pathogenic viruses, and inactivated viruses. Specific information about controls for nucleic acid based devices is provided in Section 9.E, “Controls for Nucleic Acid-based Influenza Assays” of this guidance document. You may contact the Division of Microbiology Devices within the Office of In Vitro Diagnostic Device Evaluation and Safety (OIVD) at FDA for further information regarding controls.
8. Interpreting and Reporting Test Results
We recommend that you describe in your submission how positive, negative, equivocal (if applicable), or invalid results are determined and how they should be interpreted. We recommend that you indicate the cut-off values for all outputs of the assay including the cut-off value for defining a negative (or negative/positive) result of the assay. If the assay has an equivocal zone, the cut-off values (limits) for the equivocal zone should also be defined. If your interpretation of the initial equivocal results requires re-testing, you should provide (1) a recommendation whether re-testing should be performed from the same nucleic acid preparation, a new extraction, or a new patient specimen and (2) an algorithm for defining a final result by combining the initial equivocal result and the results after re-testing. Note that this algorithm should be developed before the pivotal clinical study that confirms the significance of the assay cut-offs, and the algorithm should be followed precisely (e.g., perform re-testing, new extraction, or new patient specimen collection) during the clinical study for the performance evaluation of the device.
If an assay result can be read or reported as “invalid”, you should describe how invalid results are defined. If controls are part of the determination of invalid results, you should describe each possible combination of control results for defining the invalid result. You should provide a recommendation on how to follow up any invalid result, i.e., whether the result should be reported as invalid or the specimen should be re-tested. If re-testing is recommended, provide information similar to that described for re-testing of equivocal results (whether re-testing should be performed from the same nucleic acid preparation, a new extraction, or a new patient specimen). You should include in the submission and the labeling the number of initial invalid results and the number of re-tests that were needed to determine a final result during your studies. Additional information on result evaluation and reporting can be found in CLSI document ILA 18-A2 [3].
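For illustration only, the sketch below (Python) shows one way a pre-specified result-interpretation and re-testing algorithm of the kind described above might be documented. The specific rules, result names, and re-test policy are hypothetical assumptions for this example; each sponsor must define and validate its own algorithm before the pivotal clinical study.

```python
# Hypothetical example of a pre-specified result-interpretation algorithm.
# All rules and result names below are illustrative assumptions only.

VALID_RESULTS = {"positive", "negative", "equivocal", "invalid"}

def final_result(initial, retest=None):
    """Combine an initial device call with an optional single re-test call.

    initial, retest: "positive", "negative", "equivocal", or "invalid";
    retest is None if no re-test has been performed yet.
    Returns the final reportable result (or a request for follow-up).
    """
    if initial not in VALID_RESULTS:
        raise ValueError("unknown result: %s" % initial)

    # Unambiguous initial calls are reported as-is.
    if initial in ("positive", "negative"):
        return initial

    # Equivocal or invalid initial calls trigger one re-test from a new
    # extraction of the same specimen (hypothetical policy).
    if retest is None:
        return "re-test required (new extraction)"
    if retest in ("positive", "negative"):
        return retest

    # Still equivocal or invalid after re-testing: request a new specimen
    # (hypothetical policy).
    return "unresolved - collect new patient specimen"

# Example usage with hypothetical calls:
print(final_result("equivocal", "positive"))   # -> positive
print(final_result("invalid"))                 # -> re-test required (new extraction)
```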
9. Establishing Performance Characteristics
9.A Analytical Performance
We recommend that you establish the following analytical performance characteristics for your assay:
9.A.i Analytical Sensitivity
Limit of Detection
The limit of detection (LoD) is defined as the lowest concentration of analyte that can be consistently detected (typically in ≥95% of samples tested under routine clinical laboratory conditions) in a defined type of specimen. You should determine the LoD for each specimen type and each analyte that will be tested with your device, utilizing the entire test system from sample preparation to detection when evaluating assay LoD. This can be accomplished by limiting dilutions of propagated and titered viral stocks. The study should include serial dilutions of at least two strains representative of types or subtypes for each claimed influenza virus (please see Table 1 for suggested viral strains) and 3-5 replicates for each dilution. The reference methods we recommend for LoD determination are the 50% Tissue Culture Infectious Dose (TCID50), 50% Egg Infective Dose (EID50), or plaque assay. Since nucleic acid-based devices detect not only infective viral particles but also the total viral RNA present in the specimen, an additional reference method quantifying nucleic acids (e.g., genome copy equivalents or µg/mL of viral RNA) may also be included. You should report the LoD as the level of virus that gives a 95% detection rate. The LoD should be confirmed by preparing at least 20 additional replicates at the LoD concentration and demonstrating that the virus was detected 95% of the time.
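For illustration only, the following sketch (Python, with entirely hypothetical replicate data) works through the calculation described above: the detection rate is computed at each dilution, the lowest concentration with a detection rate of at least 95% is taken as the candidate LoD, and the 20-replicate confirmation criterion is then checked.

```python
# Illustrative LoD calculation with hypothetical data.
# Each dilution level maps to a list of qualitative replicate results
# (True = detected). Concentrations are in TCID50/mL (assumed units).

dilution_results = {
    1e3: [True] * 5,                       # 5/5 detected
    1e2: [True] * 5,                       # 5/5 detected
    1e1: [True, True, True, True, False],  # 4/5 detected
    1e0: [True, False, False, True, False],
}

def hit_rate(replicates):
    return sum(replicates) / len(replicates)

# Candidate LoD: lowest concentration detected in >= 95% of replicates.
rates = {conc: hit_rate(reps) for conc, reps in dilution_results.items()}
candidates = [conc for conc, rate in rates.items() if rate >= 0.95]
candidate_lod = min(candidates) if candidates else None
print("hit rates by concentration:", rates)
print("candidate LoD:", candidate_lod)

# Confirmation: at least 20 additional replicates at the candidate LoD,
# with a detection rate of at least 95% (e.g., at least 19 of 20 positive).
confirmation_run = [True] * 19 + [False]   # hypothetical confirmation data
print("LoD confirmed:", hit_rate(confirmation_run) >= 0.95)
```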
We recommend that you determine the LoD for each analyte in the most commonly used or most challenging matrix tested by the device. You may refer to Clinical Laboratory Standards Institute (CLSI) document EP17-A [4] for examples of the study design. When selecting an appropriate matrix for your analytical studies you should choose one of the two alternatives outlined below:
- Negative human clinical respiratory specimens can be pooled to create a large volume of uniform sample matrix (e.g., negative nasopharyngeal (NP) pools prepared from leftover NP swab clinical samples). The pooled matrix should be screened prior to spiking.
- Viral transport medium (VTM) or another simulated matrix can be used if you can demonstrate in a study that analytical performance of your assay is equivalent using the proposed simulated matrix and the natural clinical matrix containing viruses. The study can be conducted in-house (i.e. within your own company) and include a limited number of samples (e.g., 60).
Analytical Reactivity (Inclusivity)
We recommend that you demonstrate that the test can detect at least 5 strains for influenza B and 10 strains for each influenza A subtype detected by your device. Influenza A detection should be tested across all subtypes that have infected humans and at viral levels at or near the LoD. Influenza B strains representing both lineages (Victoria and Yamagata) should be included. Influenza strains selected should reflect temporal and geographical diversity, with an emphasis on contemporary strains. For each claimed influenza subtype, an additional selection of strains representing known lineages and clades should be included. For subtypes for which it is difficult to obtain a sufficient number of strains to demonstrate reactivity, we recommend that you contact the Division of Microbiology Devices to discuss your study. All virus identities and titers should be confirmed. Additional information on viral culture and identification procedures is available in CLSI document M41-A [5] and in the WHO manual [6].
Examples of recommended strains for the LoD and the analytical reactivity studies are shown in Table 1. Vaccine strains (wild type) from recent flu seasons can be included. Vaccine strains may vary from one influenza season to another. The information on the current vaccine strains is available from the Centers for Disease Control and Prevention (CDC) at http://www.cdc.gov/flu/professionals [7].
Table 1. Examples of influenza strains for analytical sensitivity (LoD) studies.
Type | Subtype | Influenza Viral Strain |
---|---|---|
A | H1N1 | A/California/7/2009 (H1N1) |
A | H3N2-like | A/Perth/16/2009 (H3N2)* |
B | B-like | B/Brisbane/60/2008 |
A | H1N1 | A/PR/8/34 |
A | H1N1 | A/FM/1/47 |
A | H1N1 | A/NWS/33 |
A | H1N1 | A1/Denver/1/57 |
A | H1N1 | A/New Caledonia/20/1999 |
A | H1N1 | A/New Jersey/8/76 |
A | H1N1 | A/Brisbane/59/2007 |
A | H1N1 | A/Hawaii/15/2001 |
A | H3N2 | A/Port Chalmers/1/73 |
A | H3N2 | A/Hong Kong/8/68 |
A | H3N2 | A2/Aichi/2/68 |
A | H3N2 | A/Victoria/3/75 |
A | H3N2 | A/New York/55/2004 |
A | H3N2 | A/Wisconsin/67/2005 |
B | | B/Malaysia/2506/2004 |
B | | B/Lee/40 |
B | | B/Allen/45 |
B | | B/GL/1739/54 |
B | | B/Taiwan/2/62 |
B | | B/Hong Kong/5/72 |
B | | B/Maryland/1/59 |
A | H5N1 | Human and/or Avian |
A | H5N2 | Avian |
A | H7N2 | Human and/or Avian |
A | H7N7 | Human and/or Avian |
A | Other subtypes | Human and/or animal species |
* A/Wisconsin/15/2009 is an A/Perth/16/2009 (H3N2)-like virus and is a 2010 Southern Hemisphere vaccine virus.
9.A.ii Analytical Specificity
Exclusivity
We recommend that you demonstrate analytical specificity of your assay with influenza types and subtypes not detected by your device. An exclusivity panel could be comprised of well-characterized seasonal or novel influenza strains not detected by your device, as well as non-human influenza viruses that have been shown to infect humans. For nucleic acid-based devices, when testing large panels or influenza strains that are difficult to culture, purified nucleic acids may be quantified (e.g., in genome copies/mL) instead of determining viral titers. Nucleic acids can also be quantified when highly purified influenza viruses (e.g., sucrose gradient-purified viruses) are used in the study.
Cross-reactivity
We recommend that you test for potential cross-reactivity with non-influenza respiratory pathogens and other microorganisms with which the majority of the population may have been infected. We recommend that you confirm the virus and bacteria identities and titers and test the organisms at medically relevant levels (usually 10^6 cfu/ml or higher for bacteria and 10^5 pfu/ml or higher for viruses). The microorganisms recommended for cross-reactivity studies are listed in Table 2. For nucleic acid-based devices, when testing large panels or organisms that are difficult to culture, purified nucleic acids may be quantified (e.g., in genome copies/mL) instead of determining viral or bacterial titers. Nucleic acids can also be quantified when highly purified organisms (e.g., sucrose gradient-purified viruses) are used in the study.
Table 2. Microorganisms recommended for analytical specificity (cross- reactivity) studies.
Organism | Type |
---|---|
Adenovirus | Type 1 |
Adenovirus | Type 7 |
Human coronavirus* | OC43 and 229E strains |
Cytomegalovirus | |
Enterovirus | |
Epstein Barr Virus | |
Human parainfluenza | Type 1 |
Human parainfluenza | Type 2 |
Human parainfluenza | Type 3 |
Measles | |
Human metapneumovirus | |
Mumps virus | |
Respiratory syncytial virus | Type B |
Rhinovirus | Type 1A |
Bordetella pertussis | |
Chlamydia pneumoniae | |
Corynebacterium sp. | |
Escherichia coli | |
Hemophilus influenzae | |
Lactobacillus sp. | |
Legionella spp | |
Moraxella catarrhalis | |
Mycobacterium tuberculosis | avirulent |
Mycoplasma pneumoniae | |
Neisseria meningitidis | |
Neisseria sp. | |
Pseudomonas aeruginosa | |
Staphylococcus aureus | Protein A producer, e.g., Cowan strain |
Staphylococcus epidermidis | |
Streptococcus pneumoniae | |
Streptococcus pyogenes | |
Streptococcus salivarius |
For devices detecting multiple analytes, e.g., Flu A and B, and each of the flu A subtypes, you should establish that there is no cross-reactivity between types and subtypes detected. We encourage sponsors to present cross-reactivity testing data for devices detecting multiple pathogens in the format shown in Table 3.
Table 3. Data presentation example. Results are shown as positive (+) or negative (-) for reactivity with each reference reagent.
Organism | Strain | Adeno | Flu A | Flu B | Para 1 | Para 2 | Para 3 | RSV |
---|---|---|---|---|---|---|---|---|
Adenovirus | Type 1 | + | - | - | - | - | - | - |
Adenovirus | Type 3 | + | - | - | - | - | - | - |
Adenovirus | Type 5 | + | - | - | - | - | - | - |
Adenovirus | Type 6 | + | - | - | - | - | - | - |
Adenovirus | Type 7 | + | - | - | - | - | - | - |
Adenovirus | Type 10 | + | - | - | - | - | - | - |
Adenovirus | Type 13 | + | - | - | - | - | - | - |
Adenovirus | Type 14 | + | - | - | - | - | - | - |
Adenovirus | Type 18 | + | - | - | - | - | - | - |
Adenovirus | Type 31 | + | - | - | - | - | - | - |
Adenovirus | Type 40 | + | - | - | - | - | - | - |
Adenovirus | Type 41 | + | - | - | - | - | - | - |
Influenza A | Aichi (H3N2) | - | + | - | - | - | - | - |
Influenza A | Mal (H1N1) | - | + | - | - | - | - | - |
Influenza A | Hong Kong (H3N2) | - | + | - | - | - | - | - |
Influenza A | Denver (H1N1) | - | + | - | - | - | - | - |
Influenza A | Port Chalmers (H3N2) | - | + | - | - | - | - | - |
Influenza A | Victoria (H3N2) | - | + | - | - | - | - | - |
Influenza A | WS (H1N1) | - | + | - | - | - | - | - |
Influenza A | PR (H1N1) | - | + | - | - | - | - | - |
Influenza B | Hong Kong | - | - | + | - | - | - | - |
Influenza B | Maryland | - | - | + | - | - | - | - |
Influenza B | Mass | - | - | + | - | - | - | - |
Influenza B | Taiwan | - | - | + | - | - | - | - |
Influenza B | GL | - | - | + | - | - | - | - |
Influenza B | Russia | - | - | + | - | - | - | - |
RSV | Long | - | - | - | - | - | - | + |
RSV | Wash | - | - | - | - | - | - | + |
RSV | 9320 | - | - | - | - | - | - | + |
Interference
We recommend that you conduct a comprehensive interference study using medically relevant concentrations of the interferent and at least two strains for each influenza type to assess the potentially inhibitory effects of substances encountered in respiratory specimens.
Potentially interfering substances include, but are not limited to, the following: blood, nasal secretions or mucus, and nasal and throat medications used to relieve congestion, nasal dryness, irritation, or asthma and allergy symptoms. Examples of potentially interfering substances are presented in Table 4, below. We recommend that you test interference at the assay cut-off determined for each influenza virus type/subtype detected by your device and for each of the interfering substances. We also recommend that you evaluate each interfering substance at its potentially highest concentration (“the worst case”). If no significant effect is observed, no further testing is necessary. Please refer to the CLSI document EP7-A2 [8] for additional information.
Table 4. Substances recommended for interference studies
Substance | Active Ingredient |
---|---|
Mucin: bovine submaxillary gland, type I-S | Purified mucin protein |
Blood (human) | |
Nasal sprays or drops | Phenylephrine, Oxymetazoline, Sodium chloride with preservatives |
Nasal corticosteroids | Beclomethasone, Dexamethasone, Flunisolide, Triamcinolone, Budesonide, Mometasone, Fluticasone |
Nasal gel | Luffa operculata, sulfur |
Homeopathic allergy relief medicine | Galphimia glauca, Histaminum hydrochloricum |
FluMist® | Live intranasal influenza virus vaccine |
Throat lozenges, oral anesthetic and analgesic | Benzocaine, Menthol |
Anti-viral drugs | Zanamivir |
Antibiotic, nasal ointment | Mupirocin |
Antibacterial, systemic | Tobramycin |
9.A.iii Cut-off and Equivocal Zone
In your submission, you should explain how the assay cut-off was determined and how the cut-off values were validated (see also Section 8, “Interpreting and Reporting Test Results”). The cut-off should be determined using appropriate statistical methods. For example, provide a result distribution, 95th and 99th percentiles, percentages of non-negative (positive or equivocal) results, and other statistics for the clinical samples without any respiratory viruses (zero analyte concentration) tested in your pilot studies. Selection of the appropriate cut-off can be justified by the relevant levels of sensitivity and specificity based on Receiver Operating Characteristic (ROC) curve analysis of the pilot studies with clinical samples (for details about ROC analysis, see CLSI document GP10-A [9]). If the assay has an equivocal zone, you should explain how you determined the limits of the equivocal zone. The performance of your device using the pre-determined cut-off (and equivocal zone, if applicable) should be validated in an independent population consistent with the defined intended use of your device.
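For illustration only, the sketch below (Python, using the scikit-learn `roc_curve` function and hypothetical pilot-study data) shows one common way a candidate cut-off can be selected from an ROC analysis, here by maximizing Youden's index (sensitivity + specificity - 1). The selection criterion and all values are assumptions for this example; the chosen cut-off must still be validated in an independent population.

```python
# Illustrative ROC-based cut-off selection from pilot-study data.
# y_true: reference-method result (1 = influenza present, 0 = absent)
# y_score: the device's quantitative output (e.g., signal/cut-off ratio);
# all values below are hypothetical.
import numpy as np
from sklearn.metrics import roc_curve, auc

y_true = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.1, 0.3, 0.2, 0.4, 0.6, 1.8, 2.5, 0.9, 3.1, 2.2,
                    1.2, 0.5, 0.8, 0.7, 2.9])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("area under the ROC curve:", auc(fpr, tpr))

# One possible criterion: maximize Youden's J = sensitivity + specificity - 1.
j = tpr - fpr
best = np.argmax(j)
print("candidate cut-off:", thresholds[best])
print("sensitivity at candidate cut-off:", tpr[best])
print("specificity at candidate cut-off:", 1 - fpr[best])
```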
9.A.iv Precision
We recommend that you provide data demonstrating the precision (i.e., repeatability and reproducibility) of your system. The CLSI documents, EP5-A2 [10] and EP12-A2 [11], include guidelines that may be helpful for developing an appropriate statistical experimental design, computations and a format for establishing performance claims. The precision should be established for each influenza virus type and subtype detected by the submitted device. Any variable that may impact the assay precision should be examined.
Site-to-Site Reproducibility
The protocol for the reproducibility study may vary slightly depending on the assay format but it should include an evaluation of all major sources of variability described below:
- Site-to-site and operator-to-operator. You should include three or more sites (for example, two external sites and one in-house site) with multiple operators at each site. Operators should represent end users of the assay in terms of education and experience. You should provide training only to the same extent that you intend to train users after marketing the device. We recommend that, for rapid testing or point-of-care (POC)2 devices, you include a larger number of tests and operators in your evaluation, in order to best represent the settings in which the devices will be used.
- Day-to-day variability. The testing should be conducted on five non-consecutive days, including a minimum of two runs per day (unless the assay design precludes multiple runs per day) and three replicates of each panel member per run to assess between-run and within-run imprecision components. Run variability may be combined with operator variability; for example, each run can be performed by a different operator.
- Extraction-to-extraction variability. For devices that require an extraction step, such as nucleic acid amplification assays, samples used in the reproducibility testing should be processed at the test site, starting from clinical specimens (e.g., nasopharyngeal swabs) using the extraction procedure you recommend in the test labeling. If more than one extraction procedure is recommended, each extraction method should be evaluated at each site separately (see also “Nucleic Acid Extraction” in Section 9.E).
- The test sample panel should represent each type and subtype of influenza virus detected by your device at three levels of viral load including analyte or output concentrations close to the assay cut-off. The reproducibility panel may be prepared by spiking viruses into negative clinical matrix pools prepared from leftover clinical specimens.
- A “high negative” sample (C5 concentration) with an analyte concentration below the clinically established cut-off such that results of repeated tests of this sample are negative approximately 95% of the time and results are positive approximately 5% of the time (e.g., for real-time PCR assays, a sample with an analyte concentration not more than 10-fold below the clinical cut-off of the assay).
- A “low positive” sample (C95 concentration) with a concentration of analyte just above the clinically established cut-off such that results of repeated tests of this sample are positive approximately 95% of the time.
- A “moderate positive” sample should reflect a clinically relevant viral load3. At this concentration, one can anticipate positive results approximately 100% of the time (e.g., approximately two to three times the concentration of the clinically established cut-off).
- For an ultrasensitive test for which the clinical cut-off may not be established based on truly negative samples (zero concentration), it may be impossible to obtain a C5 sample. For such a device, the following two concentration levels may be tested in the precision/reproducibility study in place of the C5 concentration recommended above:
- A “negative” sample: a sample with an analyte concentration below the clinical cut-off such that results of repeated tests of this sample are negative 100% of the time, if less than 10% of the clinical samples positive by the reference method give results in a real-time PCR assay below the threshold cycle (Ct) corresponding to the LoD; OR
- A “near cut-off” sample (C20 to C80): a sample with a concentration of analyte just above or below the assay cut-off such that results of repeated tests of this sample are positive approximately 20% to 80% of the time, if more than 10% of the clinical samples positive by the reference method give results below the Ct corresponding to the LoD.
We recommend that you provide a detailed description of the study design and statistical analyses used. For the factors (e.g., instrument calibration, operators) considered in the precision study, we recommend using a balanced factorial design, that is, a precision study that includes all possible combinations of these factors. For example, if a precision study is performed over five days using two operators/runs and three sites, there are 5 days × 2 operators/runs × 3 sites = 30 total combinations of the different levels of factors evaluated. If each combination is evaluated using 3 replicates, then there are 90 results per panel member from this study. In general, for qualitative tests, variance components should be estimated using appropriate statistical models and methods for each of the factors considered in the precision study. Confounded sources of variability, as well as overall variation, should be described, with all included factors noted. For qualitative tests that have underlying quantitative output, the component of precision is often measured for each source of variation, as well as the total variation, using analysis of variance. For each panel member in the precision study, you should provide both a separate analysis by site and a combined-site data analysis, including the mean value with each variance component estimate (standard deviation and percent CV) as well as total variance. For example, for a combined-site data analysis, if a precision study is performed at three sites over five days using two operators per run and three replicates per run, provide the mean value, standard deviation, and percent CV for total variance and for the variance components site-to-site, day-to-day, operator/run-to-operator/run, and residual (replicate-to-replicate). In addition, for each panel member, you should provide the percentages of values above and below the cut-off and the percentage of invalid results for each site separately and for all sites combined. If applicable, you should also provide percentages of equivocal results for each panel member in the precision study for each site and for all sites combined. The CLSI document EP15-A2 [12] contains additional information on reproducibility study design.
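For illustration only, the sketch below (Python with pandas, hypothetical data layout and values) shows the kind of by-site and combined-site summary described above for one panel member: mean, standard deviation, and percent CV of the quantitative output, plus the percentages of results above and below the cut-off. Column names and the cut-off are assumptions; full variance-component estimation (e.g., by analysis of variance) would normally accompany such a summary.

```python
# Illustrative summary of reproducibility results (hypothetical data layout).
# Each row is one replicate result: site, panel member, and the quantitative
# output (e.g., a signal/cut-off ratio or Ct-derived value).
import pandas as pd

df = pd.DataFrame({
    "site":   ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "panel":  ["low_pos"] * 9,
    "output": [1.05, 1.10, 1.08, 0.98, 1.20, 1.01, 1.02, 1.15, 0.95],
})
cutoff = 1.0  # hypothetical assay cut-off on the output scale

def summarize(group):
    mean = group["output"].mean()
    sd = group["output"].std(ddof=1)
    return pd.Series({
        "n": len(group),
        "mean": mean,
        "sd": sd,
        "pct_cv": 100 * sd / mean,
        "pct_above_cutoff": 100 * (group["output"] >= cutoff).mean(),
        "pct_below_cutoff": 100 * (group["output"] < cutoff).mean(),
    })

by_site = df.groupby(["panel", "site"]).apply(summarize)   # per-site summary
combined = df.groupby("panel").apply(summarize)            # all sites combined
print(by_site)
print(combined)
```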
Within-Laboratory Precision/Repeatability
We recommend that you conduct within-laboratory precision studies for devices that include instruments or automated components. These studies may be performed in-house, i.e., within your own company. We recommend that you test sources of variability (such as operators, days, assay runs, etc.) for a minimum of 12 days (not necessarily consecutive), with two operators each performing two runs per day (for a total of four runs per day), and two replicates of each sample per run, for a total of 96 results for each test panel member. These test days should span at least two calibration cycles if the calibration cycle is shorter than two months and also should represent different time points within a single calibration cycle. We recommend you use the same sample panel as described in the “Site-to-Site Reproducibility” section above. We recommend using a balanced factorial design; that is, the precision study should include all the possible combinations of operator, run, and day. For example, if a precision study is performed over 12 days by two operators with two runs per day, there are 12 days × 2 operators × 2 runs = 48 total combinations of the different levels for each factor evaluated. If each combination is evaluated in two replicates, then there are 96 results per panel member from this study. For the two replicates, all factors that are potential sources of variation should be held constant. We recommend that you report these results in a way similar to that described in the “Site-to-Site Reproducibility” section.
9.A.v Carry-Over and Cross-contamination Studies (for multi-sample assays and devices that require instrumentation)
For multi-sample assays and devices that require instrumentation, we recommend that you demonstrate that carry-over and cross-contamination do not occur with your device. In a carry-over and cross-contamination study, we recommend that high positive samples be used in series alternating with negative samples, in patterns dependent on the operational function of the device. At least 5 runs with alternating high positive (the highest viral load found in clinical specimens, or a minimum of 10^5 pfu/ml) and high negative samples (as defined in the “Site-to-Site Reproducibility” section above) should be performed. We recommend that the high positive samples in the study be high enough to exceed 95% or more of the results obtained from specimens of diseased patients in the intended use population. In the case of an ultrasensitive device, a high negative sample may be replaced with a negative sample (please see the “Site-to-Site Reproducibility” section above for additional explanation). The carry-over and cross-contamination effect can then be estimated by the percent of negative results for the negative sample in the carry-over study [13].
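For illustration only, the sketch below (Python, with a hypothetical run layout and device calls) shows the simple tally behind the estimate described above: the percentage of negative samples in the alternating series that are reported negative.

```python
# Illustrative carry-over estimate (hypothetical run data).
# Each run alternates high-positive (HP) and negative (NEG) samples;
# "results" records the device call for each well in run order.
layout = ["HP", "NEG", "HP", "NEG", "HP", "NEG"]   # sample placed in each well
runs = [
    ["pos", "neg", "pos", "neg", "pos", "neg"],    # run 1: no carry-over
    ["pos", "neg", "pos", "pos", "pos", "neg"],    # run 2: one contaminated NEG well
    ["pos", "neg", "pos", "neg", "pos", "neg"],    # run 3
    ["pos", "neg", "pos", "neg", "pos", "neg"],    # run 4
    ["pos", "neg", "pos", "neg", "pos", "neg"],    # run 5
]

neg_total = neg_reported_negative = 0
for results in runs:
    for sample, call in zip(layout, results):
        if sample == "NEG":
            neg_total += 1
            neg_reported_negative += (call == "neg")

print(f"negative samples reported negative: {neg_reported_negative}/{neg_total} "
      f"({100 * neg_reported_negative / neg_total:.1f}%)")
```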
9.A.vi Specimen Storage and Shipping Conditions
If you recommend specimen storage and shipping conditions, you should demonstrate that your device generates equivalent results for stored specimens at several time points throughout the duration of the recommended storage and transport and at both ends of your recommended temperature range. If you recommend viral transport medium (VTM) for storage or shipping, you should conduct appropriate studies and provide the data showing that the recommended VTM(s) are suitable and that the device will perform as described when the specimen is preserved in the recommended VTM [5]. You should include a statement in the package insert to that effect, also indicating the commercial source or chemical composition of the acceptable VTM(s).
9.B Clinical Performance Studies
We recommend that you conduct prospective clinical studies to determine the performance of your device in comparison to the established reference methods for all influenza types and subtypes and each specimen type you claim in your labeling.6
9.B.i Study Protocol
We recommend that you develop and include in the premarket submission a detailed study protocol that describes:
- Patient inclusion and exclusion criteria.
- Type and number of specimens needed.
- Directions for use.
- A statistical analysis plan that accounts for sources of variability and potential bias.
- Documents supporting compliance with human subject protection regulations.
- Any other relevant protocol information.
Clinical investigations of unapproved and uncleared in vitro diagnostic devices are subject to the investigational device exemption (IDE) provisions of Section 520(g) of the Federal Food, Drug, and Cosmetic Act (21 U.S.C. 360j) and the implementing regulations. You should consider how 21 CFR part 812 (IDE) applies to your particular study and refer to 21 CFR part 50 (Informed Consent), and 21 CFR part 56 (Institutional Review Board) for other applicable requirements. Investigational devices that differentiate influenza A subtypes and detect novel influenza viruses, such as influenza A/H5, are particularly likely to meet the definition of "significant risk device" in 21 CFR 812.3(m). Clinical investigations of significant risk devices require the submission of an IDE application to FDA for review and approval, in accordance with 21 CFR part 812.7
For investigational devices that differentiate influenza A subtypes A/H1, A/H3, and A/2009 H1N1, but not influenza A/H5, we recommend that the clinical study protocol specify procedures for addressing influenza A unsubtypeable results in a timely manner, including the following steps:
- The sample should be tested with the comparator (as per study protocol) in order to confirm the result as unsubtypeable.
- Caution should be used and the laboratory SOP should be followed before propagating the unsubtypeable virus in culture.
- In the event that the sample is confirmed unsubtypeable using the comparator method, the appropriate local, state, and/or federal public health authorities should be immediately notified and their instructions should be followed.
We encourage sponsors to contact the Division of Microbiology Devices to request a review of their proposed studies and selection of specimen types prior to the initiation of the studies. We particularly encourage manufacturers to seek this type of discussion when samples are difficult to obtain.
9.B.ii Study Population
We recommend that you conduct your studies with specimens obtained from individuals presenting with influenza-like illness (e.g., cough, nasal congestion, rhinorrhea, sore throat, fever, headache, and myalgia). Influenza virus concentration in nasal and tracheal secretions remains high for 24-48 hours after the onset of the symptoms and may last longer in children. We recommend that the sample be collected within three days of the onset of influenza-like symptoms in order to obtain optimal assay performance results. If your device is intended for screening individuals for influenza infection, you should include a substantial number of both symptomatic and asymptomatic individuals in your study population, and also conduct a study during the time period when influenza is not prevalent.
We recommend that you include a representative number of positive samples (determined by the reference method) from each age group and present the data stratified by age (e.g., pediatric populations aged birth to 5 years and 6 to 21 years [14], adults aged 22-59 years, and adults greater than 60 years old) in addition to the overall data summary table.
9.B.iii Specimens
You should include in the clinical studies samples consisting of all specimen types and matrices claimed in the intended use (e.g., nasal swabs, nasopharyngeal swabs, nasal aspirates) to demonstrate that correct results can be obtained from each type of clinical material. You should indicate the types of collection devices (e.g., swabs, viral transport medium (VTM)) used for collecting the clinical specimens and for establishing the performance claims stated in the package insert. If the swabs are not provided with the device, the package insert should contain information about the commercial source and swab specifications (e.g., size, shape, fiber, and shaft type). Swabs with wood shafts or other materials known to inhibit growth of influenza viruses should not be used. Analytical studies demonstrating equivalency of VTMs (e.g., LoD) should be included in the submission if the recommended VTM is different from the one used for the specimens in the clinical studies or if more than one VTM is recommended in the labeling.
The total number of samples needed to substantiate a claim for detection of influenza A, influenza B, or H/N subtypes of influenza A will depend on the prevalence of the virus and on the assay performance. We recommend that all influenza detecting devices demonstrate specificity with a lower bound of the 95% (two-sided) confidence interval (CI) exceeding 90%. An illustrative computation of this confidence-bound check appears after the list below.
- For rapid devices detecting influenza A virus antigen, we recommend that you include a sufficient number of prospectively collected samples for each specimen type claimed to generate a sensitivity result with a lower bound of the two-sided 95% CI greater than 60%. Generally, we recommend testing a minimum of 50 samples, determined to be positive using the reference method, for each specimen type.
- For rapid devices detecting influenza B virus antigen, we recommend that you include a sufficient number of samples for each claimed specimen type to generate a sensitivity estimate with a lower bound of the two-sided 95% CI greater than 55%. Generally, we recommend a minimum of 30 positive samples for each specimen type.
- Nucleic acid-based tests should demonstrate at least 90% sensitivity for each analyte and each specimen type with a lower bound of the two-sided 95% CI greater than 80%.
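The confidence-bound criteria above can be checked with any standard method for a binomial proportion. The following Python sketch is a minimal illustration, assuming hypothetical sample counts and using the Wilson score interval as one common choice of two-sided 95% CI; it simply reports whether the lower bound clears a stated threshold.

```python
import math

def wilson_lower_bound(successes: int, n: int, z: float = 1.96) -> float:
    """Lower bound of the two-sided 95% Wilson score interval for a proportion."""
    if n == 0:
        raise ValueError("n must be positive")
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = p_hat + z**2 / (2 * n)
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (center - margin) / denom

# Hypothetical example: 48 of 50 culture-positive specimens detected
# by a rapid influenza A antigen device (sensitivity estimate 96%).
sens_lb = wilson_lower_bound(48, 50)
print(f"Sensitivity lower bound: {sens_lb:.3f}  (criterion: > 0.60)")

# Hypothetical example: 282 of 300 culture-negative specimens negative
# by the same device (specificity estimate 94%).
spec_lb = wilson_lower_bound(282, 300)
print(f"Specificity lower bound: {spec_lb:.3f}  (criterion: > 0.90)")
```

The counts, thresholds, and choice of interval method here are illustrative only; other exact or score-based binomial interval methods may be equally appropriate.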
We recommend that you assess the ability of your device to detect influenza viruses in fresh specimens collected from patients suspected of having an influenza infection who have been sequentially enrolled in the study (all-comers study).
Frozen archived specimens may be useful for analytical performance evaluations, but are not recommended for studies to calculate clinical sensitivity or specificity. Freeze-thawing can change the characteristics of the specimen from those of fresh specimens with which the test is intended to be used, possibly affecting assay performance. However, for devices intended to detect and/or differentiate influenza viruses for which fresh specimens are difficult to obtain, it may be necessary to use, for example, frozen archived clinical specimens:
- For novel influenza viruses you may need to use frozen archived clinical specimens from patients who are case-confirmed, in accordance with World Health Organization (WHO) criteria for laboratory-confirmed cases, to demonstrate the performance of your device [ 6 ].
- During an influenza season when the prevalence of influenza of a particular type or subtype is unusually low, prospectively collected archived specimens8 of known specimen type can be used to supplement the fresh prospectively collected specimens. We also recommend the following:
- If the archived specimens were cultured before freezing and the culture results are available, then the specimens should be tested only with the investigational device and the results should be compared to the original viral culture results.
- If the culture results are not available, then the archived specimens should be thawed and tested with an acceptable NAAT-based comparator (e.g., CDC rRT-PCR flu panel) and the investigational device. For details and updates on acceptable NAAT-based comparator devices, please contact the Division of Microbiology Devices (DMD).
- A fresh-frozen equivalency study should be performed to demonstrate equivalency of the investigational device performance using fresh and frozen specimens.
In general, when the number of specimens available for clinical testing is very low (e.g., newly emerging strains), the available evidence for FDA's premarket review may, of necessity, be obtained from analytical rather than clinical studies. In this circumstance, it is particularly critical to have well designed analytical studies. Animal studies can be used to supplement analytical studies.
If a limited number of fresh influenza specimens are available, we recommend that you contact DMD to discuss alternative proposals.
9.B.iv Evaluation of Fresh vs. Frozen Samples
The performance of your device for detection of influenza viruses may change when testing frozen specimens as compared to fresh specimens. If both fresh and frozen specimens were included in the clinical study, you should assess the effect of repeated freeze/thaw cycles on assay performance and should demonstrate positive agreement of at least 95% with a lower bound of the 95% (two-sided) confidence interval exceeding 90%. Either clinical or contrived specimens may be used for this study. Contrived specimens may be prepared by spiking cultured viruses into an appropriate negative clinical matrix at different viral loads, including titers around the assay cut-off. At a minimum, 60 samples representing swab specimen types and 60 samples representing washes and aspirates should be tested.
Propagation of influenza virus in culture is most reliable when fresh specimens are used. Freezing and thawing influenza virus may reduce its viability and generate a false negative result, especially in specimens with a low viral load. If these culture results are used as the reference method, the results of the investigational device may be positively biased. If frozen specimens are propagated, you should demonstrate that there was no substantial loss of viral titer or change in performance of the investigational device when compared to propagation of fresh specimens. The study comparing viral titer after propagating fresh and frozen specimens should include specimens with a range of viral loads, including titers around the LoD. Results should demonstrate a positive agreement of at least 95% with a lower bound of the 95% (two-sided) confidence interval exceeding 90%.
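As a minimal sketch under the same hypothetical assumptions as the earlier example (Wilson score interval, illustrative sample size of 60 paired specimens), the following Python code finds the smallest number of concordant results out of n that satisfies both the 95% point-estimate condition and the 90% lower-bound condition described above. The Wilson helper is repeated so the sketch is self-contained.

```python
import math

def wilson_lower_bound(successes, n, z=1.96):
    """Lower bound of the two-sided 95% Wilson score interval."""
    p = successes / n
    denom = 1 + z**2 / n
    center = p + z**2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - margin) / denom

def min_concordant(n, point=0.95, lower=0.90):
    """Smallest number of concordant results (out of n paired fresh/frozen
    results) meeting both the point-estimate and lower-bound criteria."""
    for k in range(n + 1):
        if k / n >= point and wilson_lower_bound(k, n) > lower:
            return k
    return None

# Hypothetical: 60 paired swab specimens tested fresh and after freeze/thaw.
# With the Wilson interval, this prints 59 for n = 60.
print(min_concordant(60))
```

This is only a planning aid under the stated assumptions; the acceptance threshold will differ with sample size and with the interval method chosen.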
9.B.v Study Sites
We recommend that you collect specimens and conduct your studies at a minimum of three geographically diverse facilities, one of which may be in-house.
We recommend that the performance evaluation for devices intended for point-of-care (POC) use or rapid testing include, at a minimum, one site at a clinical laboratory as well as sites representative of the non-laboratory settings where the device is intended to be used (e.g., patient’s bedside, emergency department). Testing the device in a clinical laboratory with more experienced and trained personnel, in addition to testing at the intended non-laboratory sites where operators are likely to have less laboratory training, will help to determine whether the training of the person conducting the test is likely to affect the performance of the device.
9.B.vi Reference Methods
We recommend that you compare the results obtained with your device to the results obtained by using one or more of the following established reference methods or comparators:
- Virus culture followed by direct fluorescent antibody (DFA) or other type-specific antigen detection system (e.g., ELISA).
- A direct specimen fluorescence assay (DSFA) that has been cleared by the FDA. Direct specimen testing using immunofluorescent methods provides a specific result that is available faster than culture; however, to ensure optimal accuracy, it is essential that the operator follow the instructions and recommendations in the package insert and follow up all DSFA-negative specimens with viral culture.
We recommend that you verify that the virus culture methods used in your study follow CLSI document M41-A and the WHO Manual on Animal Influenza Diagnosis and Surveillance [5,6]. It is essential that specimens be transported rapidly to the laboratory for optimal virus recovery or detection. Culture should not be performed on frozen specimens if other options are available, as freeze-thawing may result in loss of virus infectivity. If the fluorescent antibody used for virus detection in cultured cells is FDA-cleared, no validation information is needed in the submission, as long as the laboratory performing the test follows the package insert instructions. If the antibody used in the DFA is a pre-Amendments device,9 then you should provide published literature or laboratory data (e.g., LoD and analytical reactivity) in your premarket submission in support of the validation of the antibody used for typing influenza virus isolated from culture.
If your clinical protocol includes the use of frozen specimens, we recommend that you contact the Division of Microbiology Devices at FDA to discuss alternative proposals.
- For devices based on nucleic acid amplification technologies (NAAT), alternative comparator methods may be used, including FDA-cleared NAAT-based assays testing directly from clinical specimens. An acceptable comparator is an assay that (i) demonstrates for each analyte (e.g., type A, B, or subtype H1N1, H3N2) a sensitivity (compared to viral culture) of at least 95% and a specificity of at least 92% with a lower bound of the 95% (two-sided) confidence interval exceeding 90%, and (ii) does not recommend or require culture confirmation for negative results. If the performance of your device is established in comparison to an acceptable comparator assay, then you should calculate results as positive percent agreement and negative percent agreement (rather than sensitivity and specificity); an illustrative calculation appears at the end of this subsection. Performance should meet the following criteria: positive and negative percent agreement of at least 95% with a lower bound of the 95% (two-sided) CI exceeding 90%. Because the influenza virus continues to mutate and undergo antigenic drift, the performance of comparator devices may also change over time. We recommend that you contact DMD for a list of currently acceptable comparator assays.
- When FDA-cleared tests or well-characterized antibodies are not available, we recommend using a validated polymerase chain reaction (PCR) followed by bi-directional sequencing of hemagglutinin (HA) or other subtype-specific target amplicons for influenza A subtype identification. This alternative method is acceptable for subtyping viruses in cultured cells or in specimens determined to be influenza A positive by either culture or an acceptable NAAT comparator. If your test is based on nucleic acid amplification technologies, the primer sequences for the non-FDA-cleared comparator PCR should be different from the primer sequences included in your device, and the comparator assay should be validated.
You should provide published literature or laboratory data in support of the PCR validation for differentiation of the influenza A subtype. Validation should include LoD and analytical reactivity data. The LoD of this PCR should be similar to the analytical sensitivity of the submitted device. If sequencing is used as a component of the comparative method for the differentiation of influenza A viruses, we recommend that you perform the sequencing reaction on both strands of the amplicon (bidirectional sequencing) and that the generated sequence meet all of the following acceptance criteria (a simple programmatic check of these criteria is sketched after this list):
- The sequence should contain a minimum of 100 contiguous bases.
- Bases should have a Quality Value of 20 or higher as measured by PHRED, Applied Biosystems KB Basecaller, or similar software packages (this represents a probability of an error of 1% or lower) [15].
- The sequence should match the reference or consensus sequence with an Expected Value (E-value) of less than 10^-30 for the specific target (for a BLAST search in GenBank, http://www.ncbi.nlm.nih.gov/Genbank/).
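The Python sketch below illustrates one way to apply these criteria to a base-called read. It is an illustration only: the function name and data layout are hypothetical, and it assumes the per-base quality values and the BLAST E-value have already been obtained from the base-calling and alignment software.

```python
def sequence_acceptable(quality_values, e_value,
                        min_length=100, min_qv=20, max_e=1e-30):
    """Check a base-called sequence against the acceptance criteria above:
    - at least `min_length` contiguous bases with quality value >= `min_qv`
    - BLAST E-value against the reference/consensus sequence below `max_e`.
    `quality_values` is the per-base PHRED-style quality list reported by the
    base caller; `e_value` is taken from the BLAST report."""
    # Longest run of contiguous bases meeting the quality threshold.
    longest = run = 0
    for qv in quality_values:
        run = run + 1 if qv >= min_qv else 0
        longest = max(longest, run)
    return longest >= min_length and e_value < max_e

# Hypothetical example: 120 bases, all with QV 30, and a BLAST E-value of 1e-45.
print(sequence_acceptable([30] * 120, 1e-45))  # True under these assumptions
```

In practice the quality values and E-value would be parsed from the base caller and BLAST outputs rather than hard-coded as shown here.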
Additionally, if public health authorities recommend against culturing a specimen from a patient who is suspected to be infected with a novel influenza virus, we recommend that you use an FDA-cleared NAAT device or validated PCR testing followed by sequencing of the amplicons to confirm the identity of the novel virus.
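For the agreement analysis described in the NAAT comparator bullet above, positive and negative percent agreement are simple ratios taken from the two-by-two table of investigational device results versus comparator results. The following Python sketch, with hypothetical counts, shows the calculation; the same lower-bound check illustrated earlier in this document would then be applied to each agreement estimate.

```python
def percent_agreement(a, b, c, d):
    """2x2 table of investigational device vs. NAAT comparator:
         a = both positive            b = device positive, comparator negative
         c = device negative, comparator positive   d = both negative
    Returns (positive percent agreement, negative percent agreement)."""
    ppa = a / (a + c)   # agreement among comparator-positive specimens
    npa = d / (b + d)   # agreement among comparator-negative specimens
    return ppa, npa

# Hypothetical counts for one analyte and one specimen type.
ppa, npa = percent_agreement(a=97, b=4, c=3, d=296)
print(f"PPA = {ppa:.1%}, NPA = {npa:.1%}")
```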
9.C Post-market Performance Validation
We recommend that you obtain and analyze post-market data to ensure the continued reliability of your device, particularly given the propensity for influenza viruses to mutate and the potential for changes in viral strain prevalence over time. As required by 21 CFR 820.100(a)(1), Corrective and Preventive Action, “you must analyze processes… complaints, returned product, and other sources of quality data to identify existing and potential causes of nonconforming product or other quality problems.” As updated influenza viral sequences become available (from WHO, NIH and other public health entities), you should continue to monitor the performance of your assay. Further, these analyses should be evaluated against the device design validation and risk analysis required by 21 CFR 820.30(g), Design Validation, to determine if any design changes may be necessary. The aim of your post-market monitoring should be to ensure that your device maintains its stated level of performance over time in spite of the antigenic drift that is characteristic of influenza viruses.
9.D CLIA Waiver
If you are seeking waiver categorization for your device under the Clinical Laboratory Improvement Amendments of 1988 (CLIA),10 we recommend that you consult with Division of Microbiology Devices staff regarding the design of specific studies to support the CLIA waiver application for your device. The guidance, “Recommendations for Clinical Laboratory Improvement Amendments of 1988 (CLIA) Waiver Applications,” is also available [16].
9.E Nucleic Acid-based Influenza Devices
The information described here is relevant to studies intended to determine the performance of nucleic acid-based influenza assays [17,18]. This section complements the recommendations for performance studies described earlier in this document. Where applicable, you should describe design control specifications that address or mitigate risks associated with primers, probes, and controls used to detect viral RNA segments, such as the following examples:
- Prevention of probe cross-contamination for multiplexed tests in which many of the probes are handled during the manufacturing process.
- Minimization of false positives due to contamination or carryover of sample.
- Use of multiple probes for a single analyte to enable detection of virus variants appearing due to mutations within the target RNA segment(s), or variants within a designated influenza virus strain (or lineage).
- Development or recommendation of validated methods for nucleic acid extraction and purification that yield viral nucleic acid of suitable quality and quantity for use in the test system with your reagents.
9.E.i Nucleic Acid Extraction
Different extraction methods may yield nucleic acids of varying quantity and quality and, therefore, the extraction method can be crucial to a successful result. Purification of viral nucleic acids can be challenging as biological samples may contain low viral titers masked by a background of human genomic DNA, as well as high levels of proteins and other contaminants.
For these reasons, you should evaluate the effect of your chosen extraction method on the performance of the assay. If you include or recommend multiple extraction methods for use with your assay, you should demonstrate the analytical and clinical performance of your assay with each extraction method and each claimed influenza virus type and subtype. Specifically, you should demonstrate the LoD and reproducibility for each recommended extraction procedure. You may be able to combine the extraction method variable with the site variable in the reproducibility study. For example, if you recommend three different extraction methods, you can design a reproducibility study to evaluate one of the three extraction methods at each of three testing sites: test extraction method A at site 1, method B at site 2, and method C at site 3. However, if the studies from the three sites indicate statistically significant differences in assay performance, the reproducibility study should be expanded to include testing of each extraction method at all three study sites (e.g., site 1: extraction methods A, B, and C; site 2: methods A, B, and C; site 3: methods A, B, and C).
In addition to the analytical studies (LoD and Reproducibility at external sites), each extraction method should be utilized in at least one clinical site during the clinical studies to generate clinical performance data. If results from the expanded reproducibility testing indicate a significant difference in efficiency among the extraction methods, the data from the individual clinical testing sites (using different nucleic acid extraction methods) are not considered equivalent and should not be pooled, but rather should be analyzed separately. As a consequence, further testing of prospective clinical samples may be needed in order to support the claimed extraction method.
9.E.ii Controls for Nucleic Acid-based Influenza Assays
We recommend that you use quality control material for verification of assay performance in analytical and clinical studies. If your device is based on nucleic acid technology, we generally recommend that you include the following types of controls:
Negative Controls
Blanks or no template control
The blank, or no-template control, contains buffer or sample transport media and all of the assay components except nucleic acid. This control is used to rule out contamination with target nucleic acid or increased background in the amplification reaction. It may not be needed for assays performed in single-test disposable cartridges or tubes.
Negative sample control
The negative sample control contains non-target nucleic acid or, if used to evaluate extraction procedures, it contains the whole non-target organism. It reveals non-specific priming or detection and indicates that signals are not obtained in the absence of target sequences. Examples of acceptable negative sample control materials include:
- Patient specimen from a non-influenza infected individual.
- Samples containing a non-target organism (e.g., cell line infected with non-influenza virus).
- Surrogate negative control, e.g., alien encapsidated RNA [19].
Positive Controls
Positive control for complete assay
The positive control contains target nucleic acids, and is used to control the entire assay process, including RNA extraction, amplification, and detection. It is designed to mimic a patient specimen and is run as a separate assay, concurrently with patient specimens, at a frequency determined by a laboratory’s Quality System (QS). Examples of acceptable positive assay control materials include:
- Cell lines infected with an inactivated or a non-virulent strain of influenza virus.
- Packaged influenza RNA.
Positive control for amplification/detection
The positive control for amplification/detection contains purified target nucleic acid at or near the limit of detection for a qualitative assay. It controls the integrity of the patient sample and the reaction components when negative results are obtained. It indicates that the target is detected if it is present in the sample.
Internal Control
The internal control is a non-target nucleic acid sequence that is co-extracted and co-amplified with the target nucleic acid. It controls for the integrity of the reagents (polymerase, primers, etc.), equipment function (thermal cycler), and the presence of inhibitors in the samples. This type of control can also assure specimen adequacy or confirm that human cellular material was sampled (host target). Examples of acceptable internal control materials include MS2 bacteriophage, or human nucleic acid co-extracted with the influenza virus and detected with primers amplifying human housekeeping genes (e.g., RNase P, β-actin).
10. References
- Class II Special Controls Guidance Document: Instrumentation for Clinical Multiplex Test Systems.
- Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices.
- Clinical and Laboratory Standards Institute. 2001. Specifications for Immunological Testing for Infectious Diseases; Approved Guideline, Second Edition. I/LA18-A2. Clinical and Laboratory Standards Institute, Wayne PA.
- Clinical and Laboratory Standards Institute. 2004. Protocol for Determination of Limits of Detection and Limits of Quantitation; Approved Guideline. EP17-A. Clinical and Laboratory Standards Institute, Wayne PA.
- Clinical and Laboratory Standards Institute. 2006. Viral Culture; Approved Guideline. M41-A. Clinical and Laboratory Standards Institute, Wayne PA.
- WHO Manual on Animal Influenza Diagnosis and Surveillance. 2002, Geneva, (World Health Organization). (Complete document WHO/CDS/CSR/NCS/2002.5)
- Centers for Disease Control and Prevention. 2006-07 Influenza vaccine composition in, “MMWR Recommendations and Reports: Prevention and Control of Influenza: Recommendations of the Advisory Committee on Immunization Practices (ACIP).” 2006; July 28; 55(RR10):1-42.
- Clinical and Laboratory Standards Institute. 2005. Interference Testing in Clinical Chemistry; Approved Guideline. EP7-A2. Clinical and Laboratory Standards Institute, Wayne PA.
- Clinical and Laboratory Standards Institute. 1995. Assessment of the Clinical Accuracy of Laboratory Tests Using Receiver Operating Characteristics (ROC) Plots; Approved Guideline. GP10-A. Clinical and Laboratory Standards Institute, Wayne PA.
- Clinical and Laboratory Standards Institute. 2004. Evaluation of Precision Performance of Quantitative Measurement Methods; Approved Guideline. EP5-A2. Clinical and Laboratory Standards Institute, Wayne PA.
- Clinical and Laboratory Standards Institute. 2002. User Protocol for Evaluation of Qualitative Test Performance; Approved Guideline. EP12-A2. Clinical and Laboratory Standards Institute, Wayne PA.
- Clinical and Laboratory Standards Institute. 2006. User Verification of Performance for Precision and Trueness; Approved Guideline. EP15-A2. Clinical and Laboratory Standards Institute, Wayne PA.
- Haeckel R. (1991) Proposals for the description and measurement of carry-over effects in clinical chemistry. Pure Appl. Chem. 63:302-306.
- Guidance for Industry and FDA Staff Premarket Assessment of Pediatric Medical Devices. 2004.
- Patton, S.J., Wallace, A.J., Elles, R. (2006) Benchmark for Evaluating the Quality of DNA Sequencing: Proposal from an International External Quality Assessment Scheme. Clinical Chemistry, 52:728-736.
- Guidance for Industry and FDA Staff: Recommendations for Clinical Laboratory Improvement Amendments of 1988 (CLIA) Waiver Applications for Manufacturers of In Vitro Diagnostic Devices. 2008.
- Clinical and Laboratory Standards Institute. 2004. Nucleic Acid Sequencing Methods in Diagnostic Laboratory Medicine; Approved Guideline. MM9-A. Clinical and Laboratory Standards Institute, Wayne PA.
- Clinical and Laboratory Standards Institute. 2006 Molecular Diagnostic Methods for Infectious Disease; Proposed Guideline. MM3-A2. Clinical and Laboratory Standards Institute, Wayne PA.
- Pasloske BL, Walkerpeach CR, Obermoeller RD, Winkler M, and DuBois DB. (1998) Armored RNA Technology for Production of Ribonuclease-Resistant Viral RNA Controls and Standards. J. Clin. Microbiol, 36:3590-3594.
1 There are three types of influenza viruses: A, B, and C. Influenza A viruses are further classified by subtype on the basis of the two main surface glycoproteins, hemagglutinin (HA) and neuraminidase (NA). Influenza A subtypes and B viruses are further classified by strains (http://www.cdc.gov/flu/avian/gen-info/flu-viruses.htm).
2 Tests designed to be used at or near the site where the patient is located, that do not require permanent dedicated space, and that are performed outside the physical facilities of the clinical laboratories (see http://www.cap.org/apps/cap.portal).
3 Sample with a typical virus concentration found in the infected subjects in the intended use population.
4 The limit of blank is defined as the highest expected value in a series of results on a sample that contains no analyte or the lowest observed test result that can reliably be declared greater than zero.
5 Type I error is the probability of having truly negative samples (with zero analyte concentration) generate values that indicate the presence of analyte. Usually, Type I error is set as 5% or less.
6 Comparing performance of a new assay against an established reference method creates a frame of reference for evaluating the device that is useful whether the data is to be considered in an initial classification action or to facilitate comparison with the performance of a predicate device, in the case of a premarket notification and evaluation of substantial equivalence.
7 You may also refer to the “Information Sheet Guidance for IRBs, Clinical Investigators, and Sponsors” and “Guidance on Informed Consent for In Vitro Diagnostic Device Studies Using Leftover Human Specimens that are Not Individually Identifiable.”
8 For purposes of this guidance, prospectively collected archived specimens are those collected sequentially from all patients meeting study inclusion criteria and representing the assay’s intended use population (i.e., not pre-selected specimens with known results) coming in to a clinical testing facility between two pre-determined dates (e.g., from the beginning to the end of one flu season), so there is no bias and the prevalence is preserved. These specimens should be appropriately stored (e.g., frozen at -70 °C).
9 Pre-Amendments devices are those devices that were introduced or delivered for introduction into interstate commerce for commercial distribution prior to May 28, 1976 (the date of enactment of the Medical Device Amendments of 1976).
10 See 42 U.S.C. § 263a(d)(3).