Local Coverage Determination (LCD)

Genetic Testing for Oncology

L39365

Proposed LCD
Proposed LCDs are works in progress that are available on the Medicare Coverage Database site for public review. Proposed LCDs are not necessarily a reflection of the current policies or practices of the contractor.

Document Note

Posted: 7/13/2023
PLEASE NOTE: This LCD will not become effective on 07/17/2023. A new Proposed LCD will be published for comment and presented at an Open Meeting in the near future.

Note History

Contractor Information

LCD Information

Document Information

Source LCD ID
N/A
LCD ID
L39365
Original ICD-9 LCD ID
Not Applicable
LCD Title
Genetic Testing for Oncology
Proposed LCD in Comment Period
N/A
Source Proposed LCD
DL39365
Original Effective Date
For services performed on or after 07/17/2023
Revision Effective Date
N/A
Revision Ending Date
N/A
Retirement Date
N/A
Notice Period Start Date
06/02/2023
Notice Period End Date
07/16/2023

CPT codes, descriptions, and other data only are copyright 2023 American Medical Association. All Rights Reserved. Applicable FARS/HHSARS apply.

Fee schedules, relative value units, conversion factors and/or related components are not assigned by the AMA, are not part of CPT, and the AMA is not recommending their use. The AMA does not directly or indirectly practice medicine or dispense medical services. The AMA assumes no liability for data contained or not contained herein.

Current Dental Terminology © 2023 American Dental Association. All rights reserved.

Copyright © 2024, the American Hospital Association, Chicago, Illinois. Reproduced with permission. No portion of the AHA copyrighted materials contained within this publication may be copied without the express written consent of the AHA. AHA copyrighted materials including the UB‐04 codes and descriptions may not be removed, copied, or utilized within any software, product, service, solution, or derivative work without the written consent of the AHA. If an entity wishes to utilize any AHA materials, please contact the AHA at 312‐893‐6816.

Making copies or utilizing the content of the UB‐04 Manual, including the codes and/or descriptions, for internal purposes, resale and/or to be used in any product or publication; creating any modified or derivative work of the UB‐04 Manual and/or codes and descriptions; and/or making any commercial use of UB‐04 Manual or any portion thereof, including the codes and/or descriptions, is only authorized with an express license from the American Hospital Association. The American Hospital Association (the "AHA") has not reviewed, and is not responsible for, the completeness or accuracy of any information contained in this material, nor was the AHA or any of its affiliates, involved in the preparation of this material, or the analysis of information provided in the material. The views and/or positions presented in the material do not necessarily represent the views of the AHA. CMS and its products and services are not endorsed by the AHA or any of its affiliates.

Issue

Issue Description

Over the past few decades, the accelerating availability and diversity of genetic tests coupled with dramatic advancements in technology have changed the landscape of medicine, especially in the field of oncology. As more tests become available, the potential for misuse and/or misunderstanding of tests also increases. In fact, current data and reports indicate Medicare beneficiaries are exposed to genetic testing that is not medically necessary and may negatively affect and harm beneficiaries.1 Moreover, results from genetic testing can be extremely complicated and require a knowledgeable provider to properly assess and utilize these results. If the ordering provider is not directly involved in management of a patient’s cancer, their ordering of oncologic genetic testing is inappropriate. Tests not ordered by the physician who is treating the beneficiary’s cancer are not reasonable and necessary (see Code of Federal Regulations (CFR), Title 42, Volume 2, Chapter IV, Part 410.32(a)). Furthermore, factors such as patient informed consent and genetic counseling should be considered. Because of this complicated testing landscape, we deemed development of this LCD imperative for mitigating risk to Medicare beneficiaries and ensuring they receive only medically reasonable and necessary testing.

Multiple reconsideration requests have been received regarding a variety of molecular pathology services. This is a rapidly evolving field, and it is important to assess genetic testing in the context of oncology with a rigorous, evidence-based approach. This should not be seen as a barrier, but as a facilitator of appropriate testing for all eligible Medicare beneficiaries. Therefore, this LCD addresses testing of DNA and RNA in the context of oncology through the use of multiple evidence-based third-party databases and MAC review of individual genetic tests.

Issue - Explanation of Change Between Proposed LCD and Final LCD

In response to stakeholder comments that were received during the Open Meeting and subsequent Comment Period, several changes were made to the LCD:

  • Update to the Issue Description.
  • Updates to the Introduction Citations.
  • Minor text changes to the History/Background and/or General Information.
  • Updates to the Definitions including 25 additional terms.
  • Updates to the Covered Indications including the addition of a note regarding cell-free DNA and NCD 90.2, the addition of score tables for Items 2, 3 and 4, and a note regarding reconsideration requests.
  • Added a new section: Additional Requirements for Next Generation Sequencing (NGS) tests.
  • Updates to the Limitations including revised text for Items 1, 2, and 3, and the addition of 13 specific laboratory tests.
  • Updates to the Summary of Evidence including Technology Assessment of knowledge bases, addition of a reference to the National Comprehensive Cancer Network’s (NCCN’s) summary, and literature reviews added for 13 specific tests (Cxbladder, ThyroSeq® CRC, PancraGEN®, DecisionDX-Melanoma, DecisionDX-SCC, UroVysion® FISH, Colvera, PancreaSeq® Genomic Classifier).
  • Updates to the Analysis of Evidence including Technology Assessment of knowledge bases and literature reviews added for 13 specific tests (Cxbladder, ThyroSeq CRC, PancraGEN, DecisionDX-Melanoma, DecisionDX-SCC, UroVysion FISH, Colvera, PancreaSeq Genomic Classifier).
  • Updates to the Bibliography to add numerous citations and corresponding reference numbers throughout the LCD.

CMS National Coverage Policy

This LCD supplements but does not replace, modify or supersede existing Medicare applicable National Coverage Determinations (NCDs) or payment policy rules and regulations for genetic testing for oncology. Federal statute and subsequent Medicare regulations regarding provision and payment for medical services are lengthy. They are not repeated in this LCD. Neither Medicare payment policy rules nor this LCD replace, modify or supersede applicable state statutes regarding medical practice or other health practice professions acts, definitions and/or scopes of practice. All providers who report services for Medicare payment must fully understand and follow all existing laws, regulations and rules for Medicare payment for genetic testing for oncology and must properly submit only valid claims for them. Please review and understand them and apply the medical necessity provisions in the policy within the context of the manual rules. Relevant CMS manual instructions and policies may be found in the following Internet-Only Manuals (IOMs) published on the CMS Web site:

IOM Citations:

  • CMS IOM Publication 100-02, Medicare Benefit Policy Manual,
    • Chapter 15, Section 80.1 Clinical Laboratory Services and Section 280 Preventive and Screening Services
  • CMS IOM Publication 100-03, Medicare National Coverage Determinations (NCD) Manual,
    • Chapter 1, Part 2, Section 90.2 Next-Generation Sequencing for Patients with Advanced Cancer
    • Chapter 1, Part 3, Section 190.3 Cytogenetic Studies
    • Chapter 1, Part 4, Section 210.3 Colorectal Cancer Screening Tests
  • CMS IOM Publication 100-08, Medicare Program Integrity Manual,
    • Chapter 13, Section 13.5.4 Reasonable and Necessary Provisions in an LCD

Medicare National Correct Coding Initiative (NCCI) Policy Manual:

  • Chapter 10, Section A Introduction

Social Security Act (Title XVIII) Standard References:

  • Title XVIII of the Social Security Act, Section 1862(a)(1)(A) states that no Medicare payment may be made for items or services which are not reasonable and necessary for the diagnosis or treatment of illness or injury.

Code of Federal Regulations (CFR) References:

  • CFR, Title 42, Volume 2, Chapter IV, Part 410.32(a), Part 410.32(d), and Part 410.32(e) Diagnostic x-ray tests, diagnostic laboratory tests, and other diagnostic tests: Conditions
  • CFR, Title 42, Volume 2, Chapter IV, Part 411.15(k)(1) Particular services excluded from coverage
  • CFR, Title 42, Volume 2, Chapter IV, Part 493 Laboratory Requirements

Coverage Guidance

Coverage Indications, Limitations, and/or Medical Necessity

Compliance with the provisions in this LCD may be monitored and addressed through post payment data analysis and subsequent medical review audits.

History/Background and/or General Information

With advancement in science and technology comes the ability to incorporate genetic testing for oncology biomarkers into clinical care, with the goal of improved patient outcomes. The scope of this LCD is DNA and RNA genetic testing in the practice of oncology in the Medicare population.

As defined by the Food and Drug Administration (FDA) and National Institutes of Health (NIH) Biomarker Working Group, a biomarker is “A defined characteristic that is measured as an indicator of normal biological processes, pathogenic processes, or biological responses to an exposure or intervention, including therapeutic interventions,” and may include molecular, histologic, radiographic, or physiologic characteristics.2 (p.45)

Cancer is a disease caused by changes or alterations to a person’s genome. Some genetic changes or alterations can be inherited (also known as germline mutations). About 5-10% of all cancer diagnoses result from germline mutations, and over 50 hereditary cancer syndromes are known. Other cancer-causing (oncogenic) genetic changes or alterations result from acquired genetic damage (also known as somatic mutations). Somatic mutations can arise in numerous scenarios, including exposure to chemicals that alter DNA (carcinogens) or ultraviolet (UV) radiation from the sun.3

Biomarker testing is a part of precision medicine (also known as personalized laboratory medicine). Precision medicine is a tailored approach to medical care and treatment. Because each patient has a unique combination of genetic heritage and somatic changes, and therefore a unique pattern of biomarkers, precision medicine for oncology involves the use of biomarker testing to pinpoint the disease management needs of individual patients and avoid the use of treatments which are unlikely to be successful.4 Much of this testing involves direct evaluation of the genetics of the malignancy through various testing methodologies. These methodologies can range from high-level genetic evaluations such as karyotyping (analysis of chromosomes) to more detailed evaluations such as identifying specific pathogenic point variations (analysis of specific nucleotide changes).

Additionally, testing may be used to check for a single biomarker or multiple biomarkers at the same time via a multigene test or panel.3 As a result, the growing compendium of products described as biomarkers requires careful evaluation to determine what testing configurations are medically reasonable and necessary under Medicare.

Biomarkers for oncology can be generally classified into four functional types5:

  1. Diagnostic biomarkers detect or confirm the presence of a disease or condition.
  2. Prognostic biomarkers provide information about the likely course of a disease process and potential patient outcomes if left untreated.
  3. Predictive biomarkers forecast a patient’s response and/or benefit to a specific treatment.
  4. Therapeutic biomarkers identify potential targets for a medical intervention (e.g., targeted drug therapy).

In certain circumstances, genetic testing for oncology biomarkers in patients with the corresponding appropriate medical condition could have the potential to assist patient management in the Medicare population. However, given the complexity and rapidly expanding knowledge in this topic area, we must vigilantly avoid testing that generates confusion and that does not improve patient outcomes. In order for services to be considered medically reasonable and necessary, they must impact the management of the patient and lead to improved patient outcomes. Specialized clinical expertise in oncology in addition to advanced knowledge in both genetic variation and effect on gene function is required to facilitate optimal outcomes for patients.

Definitions

Analytical validation is a process intended to determine if a test, tool, or instrument has acceptable technical performance (sensitivity, specificity, accuracy, precision, etc.). Analytical validation is an assessment of the test’s technical performance (the test measures what it was designed to measure), not its usefulness or clinical significance.2,6 Analytical validity includes the ability of the test to accurately and reliably detect the mutation and/or variant.6

Biomarker A biological molecule found in blood, other body fluids, or tissues that is a sign of a normal or abnormal process, or of a condition or disease. A biomarker may be used to see how well the body responds to a treatment for a disease or condition. Also called molecular marker and signature molecule.7

Cancer Screening An attempt to detect cancer early by routine examination of apparently healthy people.8

Cancer Surveillance is closely watching a patient’s condition but not treating it unless there are changes in test results. Surveillance is also used to find early signs that a disease has come back. During surveillance, certain exams and tests are done on a regular schedule.9

Cell-free DNA (cfDNA) testing is a laboratory method that involves analyzing free (i.e., no longer within the cell) DNA contained within a biological sample, most often to look for genomic variants associated with a hereditary or genetic disorder. For example, prenatal cfDNA testing is a non-invasive method used during pregnancy that examines the fetal DNA that is naturally present in the maternal bloodstream. Cell-free DNA testing is also used for the detection and characterization of some cancers and to monitor cancer therapy.10

Clinical Indication for Germline testing is a sign, symptom, laboratory test result, or medical condition, or a combination of these, that leads to the recommendation of a treatment, laboratory test, or procedure for a hereditary disease or condition.11

Circulating tumor DNA (ctDNA) consists of small pieces of DNA that are released into a person’s blood by tumor cells as they die. A sample of blood can be used to look for and measure the amount of ctDNA and identify specific mutations (changes) in the DNA. Circulating tumor DNA is being used as a biomarker to help diagnose some types of cancer, to help plan treatment, or to find out how well treatment is working or if cancer has come back.12

Clinical validity is defined as the ability of a test to classify a patient’s specific circumstance into a diagnostic, prognostic, or predictive functional category. It should be noted that clinical validity is not a fixed value. Clinical validity includes the ability of the test to accurately and reliably detect the disease of interest in the defined population. 6,13

Clinical utility can be defined as the ability of a test to provide information related to the patient’s care and management, and thus, its ability to inform treatment decisions. CMS is most focused on assessing clinical utility in the context of whether or not a test is used to guide patient management and whether or not use of the test results leads to treatment that improves health outcomes. 6,13

Comprehensive Genomic Profiling (CGP) is, at this time, a term with many potential interpretations depending on which entity uses the term and in what context it is used; therefore, the term Comprehensive Genomic Profiling will not be utilized within this LCD. Knowledge bases such as NCCN sometimes use the term CGP, but its precise definition depends on the unique context in which it is used, so the way CGP is defined in one guideline will not necessarily translate to other guidelines.

FDA-cleared or approved test system means a test system was cleared or approved by the FDA through the premarket notification (510(k)) or premarket approval (PMA) process for in-vitro diagnostic use. (See CFR, Title 42, Volume 2, Chapter IV, Part 493.2 Laboratory Requirements: Definitions)

Genetic Testing, for the purposes of this LCD, describes any and all assays evaluating DNA and/or RNA without regard for a test’s purpose, methodology, or output. Examples of “genetic testing” include DNA sequencing of genes and surrounding non-coding regions, quantification of RNA expression, identification of epigenetic changes to the DNA, and evaluation of overall changes in chromosome structure. Moreover, this term encompasses all testing that includes genetic evaluation without regard to other included test components such as simultaneous protein testing and algorithmic analyses (e.g., multianalyte assays with algorithmic analyses [MAAAs]). Widely accepted terminology used in oncology such as “biomarker testing,” “genetic testing for an inherited mutation,” and “genetic testing for inherited cancer risk” are included under the umbrella term “genetic testing” for the purposes of this LCD, recognizing that molecular oncology is a highly complex and rapidly evolving field and thus, more inclusive terminology is required.

Genomic Testing, for the purposes of this LCD, will not be used as a defining term. Depending on the context, genomic testing can describe testing that includes both genes and non-coding sequence; however, because this term is not always used precisely, we chose to avoid using it to define policy in this LCD.

Genomic Sequencing Procedures (GSPs) are DNA and/or RNA sequence analysis methods that simultaneously assay multiple genes or genetic regions relevant to a clinical situation. They may target specific combinations of genes or genetic material, or assay the exome or genome. 14

Germline The sequence of cells in the line of direct descent from zygote to gametes, as opposed to somatic cells (all other body cells). Mutations in germline cells are transmitted to offspring; mutations in somatic cells are not transmitted to offspring. 15

Kit means all components of a test that are packaged together. (See CFR, Title 42, Volume 2, Chapter IV, Part 493.2 Laboratory Requirements: Definitions)

Laboratory Developed Test (LDT) is defined by the FDA as an in vitro diagnostic test that is manufactured by and used within a single laboratory.16

Liquid biopsy is a test performed on blood to either look for cancer cells circulating in the blood or for DNA from tumor cells that are in the blood.17

Minimal residual disease (MRD) is a term used to describe a very small number of cancer cells that remain in the body during or after treatment. Minimal residual disease can be found only by highly sensitive laboratory methods. Also called measurable residual disease.18

Multianalyte Assays with Algorithmic Analyses (MAAAs) are procedures that utilize multiple results derived from panels of analyses of various types, including molecular pathology assays, fluorescent in situ hybridization assays, and non-nucleic acid based assays. Algorithmic analysis using the results of the assays as well as other patient information is performed and typically reported as a numeric score(s) or as a probability.19

Neoplasm An abnormal mass of tissue that forms when cells grow and divide more than they should or do not die when they should. Neoplasms may be benign (not cancer) or malignant (cancer).20

Next Generation Sequencing (NGS) is a high-throughput method used to sequence a part or the whole of an individual’s genome. This technique utilizes DNA sequencing technologies that are capable of processing multiple DNA sequences in parallel. Also called massively parallel sequencing. Note that NGS is also utilized to analyze RNA; however, the RNA is typically converted to complementary DNA (cDNA) before analysis.21

Reflex testing means confirmatory or additional laboratory testing that is automatically requested by a laboratory under its standard operating procedures for patient specimens when the laboratory's findings indicate test results that are abnormal, are outside a predetermined range, or meet other pre-established criteria for additional testing. (See CFR, Title 42, Volume 2, Chapter IV, Part 493.2 Laboratory Requirements: Definitions)

Risk Factor for Germline testing is a variable associated with an increased risk of a disease, such as age, gender, or family history of disease.11

Somatic, a term synonymous with acquired, refers to genetic code alterations that develop after birth (e.g., occurring in neoplastic cells).22

A Substantiated Suspicion of Cancer requires direct, physical sampling of a lesion, such as a needle aspiration or excision of tissue, followed by microscopic (histologic or cytologic) or flow cytometric evaluation of the sample. For the purposes of this LCD, radiologic suspicion of cancer is not considered “substantiated.” (Except where otherwise specified in the Covered Indications as reasonable and necessary.)

Treating physician is the physician who is treating the beneficiary, that is, the physician who furnishes a consultation or treats a beneficiary for a specific medical problem and who then uses the results in the management of the beneficiary's specific medical problem. Tests not ordered by the physician who is treating the beneficiary are not medically reasonable and necessary. Nonphysician practitioners that are enrolled in the program, who are operating within the scope of their authority under State law and within the scope of their Medicare statutory benefit, are also considered for this purpose, as treating and managing a beneficiary’s specific medical problem. (see CFR, Title 42, Volume 2, Chapter IV, Part 410.32(a))

Tumor mutation burden (TMB) is the total number of mutations (changes) found in the DNA of cancer cells.23

Covered Indications

Three evidence-based databases and/or knowledge bases have been identified as valid and reliable sources. Note that a specific genetic test may be listed in one database or knowledge base, but not others; therefore, providers may choose to utilize guidelines from any of the three databases/knowledge bases. However, for services to be considered medically reasonable and necessary, #1 below is required regardless of which guidelines are utilized. Genetic testing for oncology will be considered medically reasonable and necessary if:

  1. The provider has either established a diagnosis of cancer or found significant evidence to create suspicion for cancer in their patient. (See SSA 1862(a)(1)(A)) Both a clinical evaluation AND abnormal results from histologic, cytologic and/or flow cytometric examination are required to establish a diagnosis of cancer or suspicion of cancer. If, as a next step in the clinical management of the patient, genetic testing would directly impact the management of the patient’s specific condition, the testing would be indicated. (see CFR, Title 42, Volume 2, Chapter IV, Part 410.32(a))

Note: In rare circumstances where patients have significant evidence to create suspicion for cancer AND are not candidates for a tissue biopsy due to high risk for complications AND genetic testing would directly impact the management of the patient’s specific condition, cell-free genetic testing could be indicated.

Note: See below regarding NCD 90.2 Next Generation Sequencing (NGS). The NCD does not allow for suspicion of cancer as a covered indication for DNA-only NGS testing of non-hematologic malignancies.

AND ONE OF THE FOLLOWING (2-4):

  2. The evidence for the gene-disease association is evaluated by the evidence-based, transparent, peer-reviewed process of the National Institutes of Health (NIH) sponsored Clinical Genome Resource (ClinGen)24 and is determined to demonstrate actionability in clinical decision making, meeting the criteria for all 5 categories below. At least one of the items listed under each of the categories (severity, likelihood of disease, nature of the intervention, effectiveness, and validity) must be satisfied:
    • Disease severity equal to:
      • Sudden death (Level 3), or
      • Possible death or major morbidity (Level 2), or
      • Modest morbidity (Level 1)
    • Likelihood of disease equal to:
      • Substantial evidence of a >40% chance (Level 3A), or
      • Moderate evidence of a >40% chance (Level 3B)
    • Nature of the intervention is equal to:
      • Low risk/medically acceptable/low intensity (Level 3), or
      • Moderately acceptable/risk/intensive (Level 2)
    • Effectiveness equal to:
      • Substantial evidence of a highly effective intervention (Level 3A), or
      • Moderate evidence of a highly effective intervention (Level 3B), or
      • Substantial evidence of a moderately effective intervention (Level 2A), or
      • Moderate evidence of a moderately effective intervention (Level 2B)
    • Validity of the gene-disease relationship equal to:
      • Definitive, or
      • Strong, or
      • Moderate

Table 1: NIH ClinGen coverage criteria, scores meeting LCD medical reasonableness and necessity criteria. One from each of all 5 categories must be met.

Disease Severity equal to:
  • Level 1 - Modest morbidity, OR
  • Level 2 - Possible death or major morbidity, OR
  • Level 3 - Sudden death

Likelihood of disease equal to:
  • Level 3A - Substantial evidence of a >40% chance, OR
  • Level 3B - Moderate evidence of a >40% chance

Nature of intervention equal to:
  • Level 2 - Moderately acceptable/risk/intensive, OR
  • Level 3 - Low risk/medically acceptable/low intensity

Effectiveness equal to:
  • Level 2A - Substantial evidence of a moderately effective intervention, OR
  • Level 2B - Moderate evidence of a moderately effective intervention, OR
  • Level 3A - Substantial evidence of a highly effective intervention, OR
  • Level 3B - Moderate evidence of a highly effective intervention

Validity of the gene-disease relationship equal to:
  • Definitive, OR
  • Strong, OR
  • Moderate

OR

  3. The evidence for the intervention is evaluated by the NCCN25 and is determined to demonstrate actionability in clinical decision making, meeting the following metric:
    • Based upon high-level evidence, there is uniform NCCN consensus that the intervention is appropriate (Category 1), or
    • Based upon lower-level evidence, there is uniform NCCN consensus that the intervention is appropriate (Category 2A)

Table 2: NCCN Compendium and/or Guideline coverage criteria, scores meeting LCD medical reasonableness and necessity criteria. Either 1 or 2A must be met.

 

Category:
  • 1 - Based upon high-level evidence, there is uniform NCCN consensus that the intervention is appropriate, OR
  • 2A - Based upon lower-level evidence, there is uniform NCCN consensus that the intervention is appropriate

 

OR

  4. The evidence for the intervention is evaluated by the Memorial Sloan Kettering Cancer Center-sponsored Oncology Knowledge Base (OncoKB)26 and is determined to demonstrate actionability in clinical decision making, meeting one of the following metrics:
    • For therapeutic use cases:
      • The intervention is an FDA-recognized biomarker predictive of response to an FDA-approved drug in this indication (Level 1), or
      • The intervention is a standard care biomarker recommended by the NCCN or other professional guidelines predictive of response to an FDA-approved drug in this indication (Level 2), or
      • The intervention is a standard care biomarker predictive of resistance to an FDA-approved drug in this indication (Level R1)
    • For diagnostic use cases:
      • The intervention is an FDA and/or professional guideline-recognized biomarker required for diagnosis in this indication (Level Dx1), or
      • The intervention is an FDA and/or professional guideline-recognized biomarker that supports diagnosis in this indication (Level Dx2)
    • For prognostic use cases:
      • The intervention is an FDA and/or professional guideline-recognized biomarker prognostic in this indication based on a well-powered study (or studies) (Level Px1), or
      • The intervention is an FDA and/or professional guideline-recognized biomarker prognostic in this indication based on a single study or multiple small studies (Level Px2)

 

Table 3: OncoKB coverage criteria, scores meeting LCD medical reasonableness and necessity criteria. Any one from any of the three categories must be met.

 

For therapeutic use cases:
  • Level 1 - The intervention is an FDA-recognized biomarker predictive of response to an FDA-approved drug in this indication, OR
  • Level 2 - The intervention is a standard care biomarker recommended by the NCCN or other professional guidelines predictive of response to an FDA-approved drug in this indication, OR
  • Level R1 - The intervention is a standard care biomarker predictive of resistance to an FDA-approved drug in this indication

For diagnostic use cases:
  • Level Dx1 - The intervention is an FDA and/or professional guideline-recognized biomarker required for diagnosis in this indication, OR
  • Level Dx2 - The intervention is an FDA and/or professional guideline-recognized biomarker that supports diagnosis in this indication

For prognostic use cases:
  • Level Px1 - The intervention is an FDA and/or professional guideline-recognized biomarker prognostic in this indication based on a well-powered study (or studies), OR
  • Level Px2 - The intervention is an FDA and/or professional guideline-recognized biomarker prognostic in this indication based on a single study or multiple small studies


NOTE:
The Contractor will accept LCD reconsideration requests as to whether a specific genetic test or specific genetic content meets CMS IOM Publication 100-08, Chapter 13, Section 13.5.4 reasonable and necessary criteria.

Additionally, other knowledge bases with well-established evidence-based recommendations may be submitted for consideration for inclusion into this LCD via the reconsideration process. Note that these knowledge bases will be considered under the same criteria utilized for the above three knowledge bases.
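
For illustration only, the decision logic of this section can be summarized as follows: criterion 1 is always required, and at least one of criteria 2-4 (a qualifying ClinGen, NCCN, or OncoKB score, per Tables 1-3) must also be met. The minimal Python sketch below is a hypothetical restatement of that logic; the function and variable names are illustrative assumptions, are not part of the LCD, and do not capture the NCD 90.2 requirements, Limitations, or Provider Qualifications that also apply.

```python
# Illustrative sketch only (not part of the LCD). Names and data structures are
# hypothetical; the qualifying scores are transcribed from Tables 1-3 above.

CLINGEN_QUALIFYING = {
    "disease_severity": {"Level 1", "Level 2", "Level 3"},
    "likelihood_of_disease": {"Level 3A", "Level 3B"},
    "nature_of_intervention": {"Level 2", "Level 3"},
    "effectiveness": {"Level 2A", "Level 2B", "Level 3A", "Level 3B"},
    "validity": {"Definitive", "Strong", "Moderate"},
}
NCCN_QUALIFYING = {"1", "2A"}
ONCOKB_QUALIFYING = {
    "Level 1", "Level 2", "Level R1",   # therapeutic
    "Level Dx1", "Level Dx2",           # diagnostic
    "Level Px1", "Level Px2",           # prognostic
}


def meets_clingen_criteria(scores):
    """At least one qualifying score must be present in every one of the 5 categories."""
    return all(scores.get(category) in qualifying
               for category, qualifying in CLINGEN_QUALIFYING.items())


def genetic_test_reasonable_and_necessary(cancer_dx_or_substantiated_suspicion,
                                          testing_directly_impacts_management,
                                          clingen_scores=None,
                                          nccn_category=None,
                                          oncokb_level=None):
    """Criterion 1 is always required; any one of criteria 2-4 then suffices."""
    criterion_1 = (cancer_dx_or_substantiated_suspicion
                   and testing_directly_impacts_management)
    criterion_2 = clingen_scores is not None and meets_clingen_criteria(clingen_scores)
    criterion_3 = nccn_category in NCCN_QUALIFYING
    criterion_4 = oncokb_level in ONCOKB_QUALIFYING
    return criterion_1 and (criterion_2 or criterion_3 or criterion_4)
```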

Additional Requirements for Next Generation Sequencing (NGS) tests:

Per NCD 90.2, specific coverage indications must be met before coverage for DNA sequencing is permitted under Medicare. First and foremost, per the NCD, the FDA status of an NGS test determines whether the test is evaluated under the NCD or under Medicare Administrative Contractor (MAC) discretion. Secondly, coverage is permitted only when the patient and their cancer meet very specific criteria as detailed in the NCD. These patient criteria are clearly stated for tests under MAC discretion.

Of note, these required indications include limitations on cancer status or stage, on which providers may order testing, and on how test results are used. According to Medicare regulations, NCD requirements take precedence over Local Coverage Determinations. As a result, for NGS testing, all requirements in NCD 90.2 must be met before coverage can be considered through this LCD. For all DNA-only NGS testing of non-hematologic malignancies where MACs determine coverage, both the NCD and LCD criteria must be met for coverage.

In the January 27, 2020 decision memo for NCD 90.2 released by the CMS Coverage Analysis Group (CAG), all coverage determinations for NGS testing of hematologic malignancies were deferred to the MACs.11 The CAG cited difficulties differentiating between somatic and germline origins and difficulties in clearly staging these malignancies. This memo indicates that the requirements listed under the somatic (acquired) and germline (inherited) categories only apply to non-hematologic malignancies.

Finally, NCD 90.2 only addresses NGS tests that evaluate DNA. RNA sequencing, alone or in conjunction with protein analysis, is not considered under the NCD.
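
For illustration only, the interaction between NCD 90.2 and this LCD described above can be summarized in the simplified Python sketch below. The function and parameter names are hypothetical assumptions, the logic is a simplification, and the sketch is not a coverage rule.

```python
# Hypothetical sketch of the NCD 90.2 / LCD interaction for NGS oncology tests,
# as described above. Not part of the NCD or the LCD.

def ngs_requirements(dna_only, hematologic_malignancy,
                     evaluated_under_ncd_by_fda_status,
                     meets_ncd_criteria, meets_lcd_criteria):
    """Returns which coverage requirements apply to an NGS oncology test."""
    if not dna_only:
        # NCD 90.2 addresses DNA-only NGS; RNA sequencing (with or without
        # protein analysis) is assessed under this LCD.
        return "LCD criteria apply"
    if evaluated_under_ncd_by_fda_status:
        # Per the NCD, the FDA status of a test can place it under the NCD
        # itself rather than under MAC discretion.
        return "evaluated under NCD 90.2"
    if hematologic_malignancy:
        # The 2020 CAG decision memo deferred NGS coverage determinations for
        # hematologic malignancies to the MACs.
        return "MAC discretion: LCD criteria apply"
    # DNA-only NGS of a non-hematologic malignancy under MAC discretion:
    # both NCD 90.2 and this LCD's criteria must be met.
    return ("covered" if (meets_ncd_criteria and meets_lcd_criteria)
            else "not covered")
```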

Limitations

The following are considered not medically reasonable and necessary:

    1. A genetic test with unestablished analytical validity, clinical validity, and/or clinical utility.
    2. Interventions with levels of evidence not identified by ClinGen24, NCCN25, or OncoKB26 as demonstrating actionability in clinical decision making OR interventions that are non-covered per MAC review.
    3. Genetic testing in patients who do not have either an established diagnosis of cancer or substantiated suspicion of cancer as determined by a clinical evaluation and abnormal results (cancer or suspicious for cancer) from histologic, cytologic, and/or flow cytometric examination. (Except where otherwise specified in the Covered Indications as reasonable and necessary) (See SSA Section 1862(a)(1)(A))
    4. Genetic testing of asymptomatic patients for the purposes of screening the patient or their relatives. (See SSA Section 1862(a)(1)(A))
    5. Repetitions of the same genetic test on the same genetic material. (see Medicare NCCI Policy Manual, Chapter 10, Section A Introduction)
    6. DecisionDx-Melanoma*
    7. DecisionDx-SCC*
    8. UroVysion fluorescence in situ hybridization (FISH)*
    9. Cxbladder Detect*
    10. Enhanced Cxbladder Detect*
    11. Cxbladder Monitor*
    12. Cxbladder Triage*
    13. Enhanced Cxbladder Triage*
    14. Cxbladder Resolve*
    15. Colvera*
    16. PancreaSeq Genomic Classifier*
    17. PancraGEN*
    18. ThyroSeq CRC*

*Please see below Summary and Analysis of Evidence sections for citations.

Note: Genetic tests for hereditary cancer syndromes, which are considered germline testing, may only be performed once per beneficiary’s lifetime.

Provider Qualifications

The following provider qualification requirements must be met for the service to be considered medically reasonable and necessary. The ordering provider of a genetic test for a patient with an established diagnosis of cancer or substantiated suspicion of cancer must:

  • Be the treating clinician who is responsible for the management of the patient’s cancer; and,
  • Understand how the test result will impact the patient’s condition; and,
  • Have presented this information to the patient, eliciting patient understanding.


(See Code of Federal Regulations, Title 42, Volume 2, Chapter IV, Part 410.32(a)).

Notice: Services performed for any given diagnosis must meet all of the indications and limitations stated in this LCD, the general requirements for medical necessity as stated in CMS payment policy manuals, any and all existing CMS national coverage determinations, and all Medicare payment rules.

Summary of Evidence

Introduction

Please refer to the “History/Background and/or General Information” section for general information on testing of RNA and DNA as it applies to oncology.

This evidence review focuses on genetic testing used to guide oncologic treatment and whether the evidence behind this testing is adequate to draw conclusions about improved health outcomes for the Medicare population. In general, health outcomes of interest include patient mortality, morbidity, quality of life, and function.

For use in the Medicare population, tests themselves must demonstrate analytic validity, clinical validity, and clinical utility. Tests should enhance clinical decision-making, directly informing clinical management and improving patient outcomes.

In the context of oncology, genetic testing endeavors to improve patient outcomes through both prognostic and predictive means. For instance, oncologic genetic testing can optimize treatment choice (predictive), avoiding ineffective treatments and reducing adverse events. Ultimately, patient-centered outcomes must be the underlying justification for oncologic testing.

Internal Technology Assessment for Databases and Knowledge Bases

Google, PubMed and Google Scholar were searched for literature or links that provided information regarding available peer-reviewed, publicly accessible, regularly updated knowledge bases with biological and clinical information about genomic alterations in cancer. Key words used to search in various combinations included: genetics, cancer, database(s), molecular testing, clinical, diagnostic, prognostic, genomics, biomarker(s), genomic alteration(s), peer-reviewed, FDA, genetic test, knowledge base(s), valid evidence, level of evidence, updated, guideline(s), public, oncology, and genetic mutations. From this search, a number of knowledge bases were found. As identified in the Institute of Medicine’s seminal work Clinical Practice Guidelines: Directions for a New Program, there are eight recommended attributes for clinical practice guidelines.27 These include validity, reliability/reproducibility, clinical applicability, clinical flexibility, clarity, development via a multidisciplinary process, scheduled review, and documentation. These attributes were referenced in the selection of databases/knowledge bases for genetic testing.

In order to be included, the database/knowledge base was required to be evidence-based, widely available, and created and/or facilitated by an organization with a focus on either oncology or genetics. Each database/knowledge base was also required to include a scoring metric which could be utilized to determine clinical actionability for specific genetic tests. Additionally, the database/knowledge bases and their scoring metrics were required to demonstrate the attributes listed in the Clinical Practice Guidelines. All countries of origin were included as long as the database/knowledge base met the criteria, with only sources in English considered. Based on the above criteria, three databases/knowledge bases were identified that ideally met the needs of this LCD.

Databases and Knowledge Bases

National Comprehensive Cancer Network (NCCN)

The NCCN is a nonprofit alliance of U.S. National Cancer Institute (NCI)-designated comprehensive cancer centers. NCCN strives to improve the effectiveness and quality of care for patients with cancer and has published clinical practice guidelines applying to more than 97% of cancers affecting individuals in the U.S.28 According to the organization, their guidelines are “intended to assist all individuals who impact decision-making in cancer care including physicians, nurses, pharmacists, payers, patients and their families, and many others.”28 (p.1)

In addition to a Guidelines Steering Committee (which provides oversight and planning), and a Guidelines Panel Chair and Vice Chair (who provide oversight of content development activities), each NCCN guideline has an individual Guidelines Panel that includes multidisciplinary representation from all of the core medical specialties relevant to the guideline, a primary care physician, and a patient advocate. NCCN notes that “any Panel Member with a meaningful conflict of interest is excluded from participating in Panel presentations, reviews, discussions, and voting relevant to the area of the conflict of interest.”28 (p.2)

The development and update of NCCN guidelines® is an ongoing process which includes “critical evaluation of evidence, integrated with the clinical expertise and consensus of a multidisciplinary panel of cancer specialists, clinical experts and researchers in those situations where high-level evidence does not exist.”28 (p.5) Recommendations for treatment are based on the level of clinical evidence available as well as consensus among the Guidelines Panel regarding the efficacy and safety of the intervention. Active NCCN guidelines are reviewed and updated at least annually.

NCCN evidence and consensus categories are as follows: Category 1 (high level of evidence with uniform Panel consensus that the intervention is appropriate); Category 2A (lower level of evidence with uniform Panel consensus that the intervention is appropriate); Category 2B (lower level of evidence with at least 50% [but less than 85%] panel consensus); and Category 3 (any level of evidence, but major Panel disagreement regarding whether the intervention is appropriate).28 As discussed by Birkeland and McClure, the majority of recommendations in the NCCN guidelines fall into Category 2A “because high-level evidence is not available for most decisions across the continuum of care.” 29 (p.608)

Due to rapid development of biomarker and companion diagnostic testing in the field of oncology, the NCCN Biomarkers Compendium was established “to facilitate identification of biomarker tests recommended for use by NCCN guideline panels.”29 (p.609) As discussed by Birkeland and McClure, the Biomarkers Compendium “focuses on the clinical usefulness of biomarker testing rather than specific tests or test kits,” and therefore includes all tests measuring genes or gene products, regardless of their functional category (predictive, prognostic, diagnostic, screening, monitoring, surveillance).29 (p.611) NCCN assigns a category of evidence and consensus to individual alterations (as opposed to the entire gene). Furthermore, NCCN states guideline recommendations (including those relative to the Biomarkers Compendium) are “intended to apply to the vast majority of patients in a particular clinical situation”28 (p.6) and are therefore not exhaustive or expected to apply to all patients or all situations.

NCCN is a widely available resource that is frequently utilized by oncologists and other clinicians. Poonacha and Go discuss that clinical practice guidelines published by NCCN are “the most comprehensive and widely used standard in clinical practice in the world.”30 (p.187) In their study, the authors investigated the level of scientific evidence behind NCCN guidelines for the ten most common types of cancer in the U.S. (breast, prostate, lung [both small-cell and non-small cell subtypes], colorectal, melanoma, non-Hodgkin’s lymphoma, kidney, pancreas, urinary bladder, and uterus). Of the ten clinical practice guidelines reviewed, Poonacha and Go30 identified that on average, guidelines contained over 100 intervention recommendations; the NCCN lung cancer guideline included the most recommendations (238) while the kidney cancer guideline had the fewest (45).

Of the ten guidelines reviewed, most intervention recommendations (83%) were from Category 2A, and only 6% were from Category 1.30 Categories of evidence were found to be highly variable based on diagnosis; the authors identified that the guidelines for kidney and breast cancers included the highest proportion of recommendations with Category 1 evidence (20% and 19%, respectively), eight of the cancer types had between 1% and 6% recommendations with Category 1 evidence, and neither urinary bladder nor uterine cancer had any recommendations with Category 1 evidence in their respective NCCN guidelines. Poonacha and Go30 also noted that of the ten guidelines reviewed, Category 3 evidence (major panel disagreement regarding whether the intervention is appropriate) was rare.

Also of note, in another paper written by Poonacha, Go, and colleagues in 2021, NCCN is called “the most comprehensive standard for clinical care in malignant hematology, and they are widely used by clinicians and payers.”31

National Institutes of Health (NIH)-funded Clinical Genome Resource (ClinGen)

The NIH-funded ClinGen was designed as an open-access resource to support clinical decision making by aggregating, curating, and defining the clinical relevance and actionability of gene-disease relationships.32 As an open-access resource, ClinGen is publicly available to all clinicians and patients. Their database “provides a structure to enable research and clinical communities to make clear, streamlined, and consistent determinations of clinical actionability based on transparent criteria to guide analysis and reporting of genomic variation.”33 ClinGen is also included in the FDA’s Recognition of Public Human Genetic Variant Databases.34

ClinGen’s consortium of experts includes a Steering Committee (responsible for establishing standards and overseeing all ClinGen processes), a Clinical Domain Working Groups Oversight Committee (responsible for overseeing the development and approval of variant curation), and a Sequence Variant Interpretation (SVI) workgroup (comprised of industry experts responsible for providing guidance relevant to variant assessment activities, including education tasks).32 ClinGen’s Variant Curation Expert Panels (VCEPs) are comprised of “individuals with scientific expertise regarding gene function, clinical expertise regarding disease manifestations, and biocurators who are trained in evaluating evidence sources that support a variant assertion.”32 (p.2) Variant Curation Expert Panels follow a standard operating procedure (SOP) during the process of gene curation and assessment; this SOP is publicly available via their website. Among other things, the SOP details the organization’s transparency and public accessibility (all variant assertions and summary evidence are publicly available), as well as conflict of interest disclosures (all conflicts are publicly declared).

ClinGen has taken great measures to ensure staff involved in variant curation and evaluation are adequately trained.32 ClinGen expects their VCEPs to demonstrate the diversity of expertise in the field of genetics (including the major areas of clinical, diagnostic laboratory, and research). While VCEPs include disease/gene experts, they also include biocurators, who are not required to be experts (and are primarily responsible for assembling evidence for expert review). Regardless of their level of expertise, each VCEP member is required to demonstrate competence through completion of extensive training and an evaluation of their proficiency. All individuals are also required to obtain HIPAA and human subjects training (based on their level of access to human subjects’ data). Finally, the SVI workgroup provides organization-wide guidance regarding the evaluation and curation of human variant data.

ClinGen requires that variant curation and preliminary evaluation be conducted by at least two reviewers.32 The requirements for variant evaluation are described in the ClinGen Variant Curation Expert Panel Protocol, publicly available via ClinGen’s website. Part of this process includes evaluating supporting data against rules and criteria developed by the VCEP, and ranking them as either standalone, very strong, strong, moderate, or supporting. These ranks are then used to determine a classification assertion (pathogenic [P], likely pathogenic [LP], benign [B], likely benign [LB], or uncertain significance [VUS]). Final evaluation and decisions about variant assertions are made by consensus of the relevant VCEP. Consensus can be indicated by either unanimous agreement by all members of the VCEP or a majority vote. In order to be published as an approved assertion, variant classifications must have at least a majority vote. If a majority vote cannot be obtained, the variant may be considered an unclassified variant (which is reevaluated every two years to determine if additional evidence has been made available to support a classification) or may be classified as a lower-ranking class (for instance, a variant may be considered VUS if a majority vote cannot be obtained for an LP or LB classification). In order to receive final approval and publication, all variant interpretations are reviewed by the full VCEP membership (which includes non-biocurator, clinical, and disease experts). Furthermore, all evidence curated by the ClinGen team is readily accessible via their website.

The framework established by ClinGen attempts to define and evaluate the clinical validity of gene-disease relationships by evaluating the evidence supporting or contradicting them.35 This standardized framework was developed because there is substantial variability in the level of evidence supporting claims of gene-disease relationships. As noted by Strande et al, “This framework aims to provide a systematic, transparent method to evaluate a gene-disease relationship in an efficient and consistent manner suitable for a diverse set of users.” 35 (p.905)

ClinGen’s database validates gene-disease relationships by evaluating both quantity and quality of evidence.35 Gene-disease relationships are then identified under one of the following levels, with each level building upon the previous: Definitive (requires that the relationship has been repeatedly demonstrated in research and clinical diagnostic settings, as well as upheld over time), Strong (requires that the relationship has been featured in two or more independent studies with multiple unrelated probands with pathogenic variants, as well as several types of supporting experimental data), Moderate (requires that the relationship has been featured in at least one independent study with several unrelated probands with pathogenic variants, as well as having some supporting experimental data), Limited (requires that the relationship has been featured in at least one independent study with more than three unrelated probands with pathogenic variants, or multiple unrelated probands without pathogenicity), and No Known Disease Relationship (where no pathogenic variants have been identified to date, therefore no evidence supports a causal role).

There are additionally two levels of evidence reserved for when conflicting evidence has been reported – Disputed (which suggests that disputing evidence has been discovered but does not necessarily outweigh existing evidence in support of the gene-disease association) and Refuted (which suggests that disputing evidence has been discovered, and significantly outweighs existing evidence in support of the gene-disease association). The refuted status is applied at the discretion of clinical experts, after analysis of all available evidence. Experimental evidence is scored based on a separate framework.

The evidence supporting clinical actionability for genetic disorders varies significantly. Therefore, ClinGen developed and implemented a standardized, evidence-based method to determine actionability of genomic testing. Hunter et al explain that the assessment of clinical actionability is part of the effort to create a central resource of information for the clinical relevance of genomic variation.33 As discussed by Strande et al, the ultimate goal of the ClinGen database is to “enhance the incorporation of genomic information into clinical care.”35 (p.905) To that end, ClinGen has also created a semi-quantitative scoring metric to assess actionability for clinical decision making. As noted by Berg et al, clinical actionability “is a continuum, not a binary state.”36 (p.467-468) The ClinGen semi-quantitative scoring metric is used to score interventions, not genes; ClinGen assigns a level of evidence to individual alterations (rather than the entire gene). The scoring metric assesses four categories: disease severity, likelihood of disease, effectiveness of the intervention, and nature of the intervention. It also assesses the level of available evidence for two categories: likelihood of disease and intervention effectiveness.

Using the ClinGen framework, Strande et al evaluated a number of gene-disease pairs and examined reproducibility of the scoring metric by having two independent clinical domain experts evaluate each gene-disease relationship.35 Clinical domain experts agreed with the preliminary classifications for 87.1% of ClinGen’s gene-disease relationship curations with published evidence. Discrepancies between expert and curator classification were discussed and explained; additionally, it was noted that when the expert and curator classifications differed, they did so by only a single category (moderate versus limited). The authors concluded that ClinGen’s evidence-based method for evaluating gene-disease associations “will provide a strong foundation for genomic medicine.”35 (p.902)

As concluded by Hunter et al, “The ClinGen framework for actionability assessment will assist research and clinical communities in making clear, efficient, and consistent determinations of actionability based on transparent criteria to guide analysis and reporting of findings from clinical genome-scale sequencing.”33 (p.10)

Memorial Sloan Kettering Cancer Center Oncology Knowledge Base (OncoKB)

OncoKB was established as a comprehensive precision oncology tool to deliver evidence-based information about tumor mutations and alterations and distill NCCN guidelines, expert recommendations, and scientific literature, in order to support treatment decisions.26 OncoKB provides a resource which is available to all clinicians and patients. The database is publicly available through their website, organized by gene, alteration, tumor type, and clinical implication, and is searchable by any of the above. OncoKB has received FDA recognition for a portion of the database and is also included in the FDA’s Recognition of Public Human Genetic Variant Databases.34

OncoKB’s staff is made up of highly qualified scientists, physicians, and engineers, each meeting specific qualification criteria including educational background, professional training, and skills.37 Individuals with Lead Scientist, Clinical Genomics Annotation Committee (CGAC), or Scientific Content Management Team (SCMT) roles are required to be physicians or Ph.D.-level scientists who are considered experts in their field and disease specialty. These individuals’ responsibilities include “coordinating and monitoring training and proficiency of curators in procuring the appropriate data, assessing the data in the context of variant interpretation, and entering the data with sufficient detail into the OncoKB curation platform.”37 (p.5) Curators, who are responsible for assessing and curating gene alterations, their biological effects, and associated treatment implications, may be pre-doctoral graduate students, postdoctoral fellows, or clinical fellows. Curators receive extensive in-person training in variant classification, including mapping variants to FDA levels. All OncoKB staff are also evaluated for potential conflicts of interest, with financial conflicts being publicly disclosed on the OncoKB website. Any CGAC member with a conflict of interest relevant to a specific Level of Evidence assignment is not permitted to work on the assignment.

CGAC reviews and approves all OncoKB/FDA level associations prior to internal review.37 (p.3-5) Additionally, data curated by OncoKB staff does not become publicly available until it has undergone an internal, independent review by a different OncoKB staff member. Specific protocols exist to manage conflicting data or conflicting assertions regarding alterations, including an independent review of curated data, as well as evaluation and discussion of decisions until a consensus has been reached.37 (p.20-21) In instances where a majority but not uniform consensus is reached, the alteration is accepted into the knowledge base with a notation to that effect; in instances where consensus cannot be reached, the alteration is not assigned a level of evidence within the knowledge base.

As discussed by Chakravarty et al, OncoKB contains a classification system for clinical utility and potentially actionable alterations. 26 “Potentially actionable alterations in a specific cancer type are assigned to one of four levels that are based on the strength of evidence that the mutation is a predictive biomarker of drug sensitivity to FDA-approved or investigational agents for a specific indication.”26 (p.2) OncoKB delineates separate levels of evidence for therapeutic, diagnostic, and prognostic use cases. OncoKB assigns a level of evidence to individual alterations (as opposed to the entire gene).

The OncoKB therapeutic levels of evidence are as follows: Level 1 gene alterations have been recognized by the FDA as “predictive of response to an FDA-approved drug in a particular disease context.”26 (p.2) Level 2 gene alterations are considered “standard care” predictive biomarkers. They are not FDA-recognized but are recommended by professional guidelines (including NCCN) and predict response to FDA-approved therapy in a particular disease context. Level 3A and 3B are considered investigational; 3A requires compelling clinical evidence to support the biomarker as predictive of response to a drug in a particular disease context and only applies to investigational biomarkers for which there has been clinical activity (such as a clinical or preclinical trial). 3B could be either a standard care or investigational biomarker predictive of response to an FDA-approved or investigational drug in another indication. Level 4 is considered hypothetical and requires compelling biological evidence to support the biomarker as predictive of response to a drug.

Additionally, there are two therapeutic levels of evidence for treatment resistance; Level R1 is for standard care biomarkers predictive of resistance to an FDA-approved drug in a particular disease context, while Level R2 requires compelling clinical evidence to support the biomarker as being predictive of resistance to a drug.

OncoKB also offers scoring of evidence for both diagnostic and prognostic use cases. For diagnostic indications, level Dx1 biomarkers have been recognized by the FDA or professional guidelines as a requirement for diagnosis in a particular disease context. Level Dx2 biomarkers have been recognized by the FDA or professional guidelines as supportive of diagnosis in a particular disease context. Biomarkers in level Dx3 may assist disease diagnosis based upon clinical evidence. Similarly, for prognostic indications, Level Px1 biomarkers have been recognized by the FDA or professional guidelines as prognostic for a particular disease context based on at least one well-powered study. Level Px2 biomarkers have been recognized by the FDA or professional guidelines as prognostic for a particular disease context based on at least one small study. Biomarkers in level Px3 are considered prognostic for a particular disease context based on clinical evidence from well-powered studies.

As a portion of the OncoKB database has been approved by the FDA, the therapeutic levels of evidence indicated above can be mapped to one of three FDA Levels of Evidence within the database.38 FDA Level 1 applies to companion diagnostic (CDx) tests, which “are supported by analytical validity of the test for each specific biomarker and a clinical study establishing either the link between the result of that test and patient outcomes or clinical concordance to a previously approved CDx.”38 Level 1 is the highest level of recognition by the FDA; however, OncoKB does not include any companion diagnostic claims, and therefore no genes or variants are currently considered Level 1. FDA Level 2 is designated for mutations with evidence of clinical significance, which allows providers to utilize information about their patients’ health alongside clinical evidence presented in professional guidelines. “Such claims are supported by a demonstration of analytical validity (either on the mutation itself or via a representative approach, when appropriate) and clinical validity (typically based on publicly available clinical evidence, such as professional guidelines and/or peer-reviewed publications).”38 FDA Level 3 is reserved for mutations with potential clinical significance that have not been assigned to a higher level. “Such claims are supported by analytical validation, principally through a representative approach, when appropriate, and clinical or mechanistic rationale for inclusion in the panel” (to include peer-reviewed publications or in vitro pre-clinical models).38 OncoKB has a validation protocol in place to assess the consistency of variant classification to FDA levels of evidence; the reported consistency of mapping OncoKB levels of evidence to FDA levels ranges from 85.7% to 100%.38 (p.20)
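
For illustration only, the tiered structure described above can be represented as a simple lookup table. The following sketch (in Python) restates the OncoKB therapeutic levels from the preceding paragraphs; the FDA-level entries shown are hypothetical placeholders for structure only, since the authoritative OncoKB-to-FDA assignments are maintained within the OncoKB database itself.

# Illustrative sketch only: a minimal representation of the OncoKB therapeutic
# evidence tiers described above.
THERAPEUTIC_LEVELS = {
    "1": "FDA-recognized biomarker predictive of response to an FDA-approved drug",
    "2": "Standard-care biomarker recommended by professional guidelines",
    "3A": "Investigational biomarker with compelling clinical evidence in this indication",
    "3B": "Standard-care or investigational biomarker for another indication",
    "4": "Hypothetical biomarker with compelling biological evidence",
    "R1": "Standard-care biomarker predictive of resistance to an FDA-approved drug",
    "R2": "Biomarker with compelling clinical evidence of resistance to a drug",
}

# Hypothetical placeholder mapping for structure only; the authoritative
# OncoKB-to-FDA assignments are maintained in the OncoKB database.
FDA_LEVEL_EXAMPLE = {
    "2": "FDA Level 2 (clinical significance)",
    "3A": "FDA Level 3 (potential clinical significance)",
}

def describe(level: str) -> str:
    """Return the narrative description for an OncoKB therapeutic level."""
    return THERAPEUTIC_LEVELS.get(level, "Unknown level")

print(describe("3A"))
print(FDA_LEVEL_EXAMPLE.get("3A", "no example mapping"))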

Summary of Evidence for Specific Lab Tests

Cxbladder (Detect, Triage, Monitor, Resolve)

PubMed and Google Scholar were searched for peer-reviewed, evidence-based literature which provided information regarding analytic and clinical validity and clinical utility for the Cxbladder test. Key words used to search in combination included: Cxbladder, Cxbladder detect, Cxbladder triage, Cxbladder monitor, molecular testing, urine test, bladder cancer, urine biomarker(s), mRNA, uRNA, gene expression profile test, GEP test, 5 gene expression assay, prognostic, and TERT and FGFR3 mutations. Outside of publications from Pacific Edge Diagnostics or studies with funding from that company, only a few peer-reviewed papers have been published addressing the performance of Cxbladder tests.

The Cxbladder line of tests is currently represented by six variations on a five-gene expression assay: Cxbladder Detect, Cxbladder Triage, Cxbladder Monitor, Cxbladder Resolve, enhanced Cxbladder Triage, and enhanced Cxbladder Detect. The sequential development of each test variant may be traced through a series of publications, beginning in 2008 with a paper describing the development of Cxbladder’s precursor, the uRNA test, a four-gene expression profile (GEP) test.39 Each of the Cxbladder tests relies on a different set of statistical parameters to optimize the function of the five-gene expression assay, sometimes synthesizing gene expression data with other input data such as patient demographics, cancer history, other clinical history, and single nucleotide variants associated with the FGFR3 and TERT genes. Cxbladder Detect is optimized for sensitivity and specificity as initially described in the seminal 2012 paper.40 Both Cxbladder Triage and Cxbladder Monitor are optimized for sensitivity, Negative Predictive Value (NPV), and test-negative rate as initially described in 2015 and 2017 papers, respectively.41,42 Cxbladder Resolve, with a published validation in 2021, is optimized for sensitivity and specificity.43 Most recently, in 2022, enhanced versions of Cxbladder Triage and Detect were described; the basic purpose of each parent test is unchanged, but the parameters of each new test are modified by adding data from sequencing six single nucleotide polymorphisms associated with two genes: FGFR3 and TERT.44

In 2008, Holyoake and colleagues described a precursor to the Cxbladder tests, uRNA.39 The four-gene (CDC2, MDK, IGFBP5, and HOXA13) expression profile test was designed to detect and characterize transitional cell carcinoma (TCC) from patients’ urine. Development of the test involved selection of RNA expression markers that best detected and characterized both early- and late-stage TCC tumors. The best candidate markers were identified through comparison of tissue from 58 tumors of different stages (Ta-T4) and normal urothelial tissue. Validation of the test utilized urine samples from a cohort of 142 patients, comprising 75 patients diagnosed with Ta-T4 tumors and 77 “control” patients. The overall specificity of this test was 85%, with a range of sensitivities depending on tumor stage (from 48% for Ta tumors to 100% for tumors with a stage greater than T1).

In 2012, O’Sullivan and colleagues developed and validated the first Cxbladder test (Cxbladder Detect), building on the foundation of the four-gene uRNA profile and adding an additional gene (CXCR2).40 The 2012 publication also compared the new Cxbladder test to its precursor test (uRNA-D), urine cytology, and other urine tests on the market (NMP22 ELISA and BladderChek). The patient cohort comprised 485 patients presenting with gross hematuria. Cxbladder demonstrated an 81.8% sensitivity at a fixed specificity of 85%; all other tests in the comparison fell below a sensitivity of 65%, although the specificities of the other tests were higher than that of Cxbladder, with the highest specificity being 96.4% for the BladderChek test.
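
Because the studies summarized in this section report performance almost entirely in terms of sensitivity, specificity, PPV, and NPV, the following sketch restates those standard definitions; the counts used are arbitrary placeholders and are not drawn from any cited study.

# Standard diagnostic-accuracy definitions used throughout the studies summarized
# in this section. The counts below are arbitrary placeholders, not study data.
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute sensitivity, specificity, PPV, and NPV from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),  # proportion of cancers correctly detected
        "specificity": tn / (tn + fp),  # proportion of non-cancers correctly ruled out
        "ppv": tp / (tp + fp),          # probability of cancer given a positive test
        "npv": tn / (tn + fn),          # probability of no cancer given a negative test
    }

print(diagnostic_metrics(tp=45, fp=60, tn=340, fn=10))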

In 2015, Breen and colleagues further evaluated the Cxbladder Detect test in a comparative study with other tests used to detect urothelial carcinoma in urine.45 The other tests evaluated included cytology, UroVysion FISH, and NMP22. The study utilized five cohorts of patients, only one of which evaluated all four tests for the entire cohort. Data from the five cohorts were evaluated and integrated, with several different imputation analyses utilized to fill in missing test values and create a “new, imputed, comprehensive dataset.” From these data, the authors found that before imputation, Cxbladder Detect had superior sensitivity (79.5%) compared to the other three tests (the second highest sensitivity being 45.5%) but inferior specificity (82.2%, with the second lowest specificity being 87.3%). Several different imputation methodologies yielded similar comparative sensitivities and specificities, leading the authors to conclude that the imputed data sets were valid, with the best-performing imputation methodology being the 3NN model. Finally, with the new imputed data set, the authors re-assessed the comparisons between tests and found that Cxbladder Detect “outperformed” the other tests in this study.
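
The “3NN model” referenced above refers to k-nearest-neighbor imputation with k = 3. The sketch below is a generic illustration of that technique using scikit-learn’s KNNImputer; it is not the authors’ implementation, and the array values are placeholders.

# Generic illustration of 3-nearest-neighbor (3NN) imputation of missing test values.
# This is not the authors' implementation; the values below are placeholders.
import numpy as np
from sklearn.impute import KNNImputer

# Rows = patients; columns = hypothetical results from four urine-based tests.
# np.nan marks tests that a given patient did not receive.
results = np.array([
    [0.80, 0.20, np.nan, 0.50],
    [0.75, np.nan, 0.30, 0.55],
    [np.nan, 0.25, 0.35, 0.60],
    [0.70, 0.30, 0.40, np.nan],
])

imputer = KNNImputer(n_neighbors=3)  # "3NN": average of the 3 most similar patients
completed = imputer.fit_transform(results)
print(completed)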

In 2015, Kavalieris and colleagues developed another version of the Cxbladder test (later to be called Cxbladder Triage), this time evaluating the impact of adding clinical data (age, gender, frequency of macrohematuria, and smoking history) to the testing algorithm.41 Genetic input into the algorithm was termed the G INDEX, while clinical data were termed the P INDEX. The study utilized 517 patients with macrohematuria from the 2012 Cxbladder study population, an additional 178 patients with macrohematuria from two separate cohorts, and 45 patients from a small cohort of patients with microhematuria.38 Combining the G and P indices provided a better bias-corrected area under the receiver operating characteristic curve (AUC) (0.86) than either index alone (0.83 and 0.61, respectively). When set at a test-negative rate of 0.4, the G + P INDEX performed with a sensitivity of 95% and NPV of 98%, improving on the G INDEX sensitivity and NPV of 86% and 96%, respectively. The authors envisioned the G + P INDEX being used to triage outpatients with a low probability of having urothelial carcinoma, reducing the need for diagnostic procedures.
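
As a generic illustration of the approach described (combining a genomic index with clinical covariates and comparing discrimination by AUC), the following sketch fits logistic regression models on simulated data; it is not the proprietary Cxbladder Triage algorithm, and all variables and values are hypothetical.

# Generic illustration only: combining a genomic index ("G") with clinical covariates
# ("P") and comparing discrimination by AUC. Not the proprietary Cxbladder algorithm;
# all data below are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
has_cancer = rng.integers(0, 2, size=n)           # simulated outcome (0/1)
g_index = 1.5 * has_cancer + rng.normal(size=n)   # simulated genomic index
clinical = np.column_stack([                      # simulated clinical inputs
    rng.integers(40, 90, size=n),                 # age
    rng.integers(0, 2, size=n),                   # smoking history (0/1)
])

def auc(features):
    """Fit a logistic model and return its area under the ROC curve."""
    model = LogisticRegression(max_iter=1000).fit(features, has_cancer)
    return roc_auc_score(has_cancer, model.predict_proba(features)[:, 1])

print("G index alone:", round(auc(g_index.reshape(-1, 1)), 2))
print("G + P combined:", round(auc(np.column_stack([g_index.reshape(-1, 1), clinical])), 2))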

In 2017, Kavalieris and colleagues developed another version of the Cxbladder test (Cxbladder Monitor) utilizing a cohort of 763 patients under surveillance for recurrence of bladder urothelial carcinoma.42 In addition to the data from the five gene expression profile, Cxbladder Monitor also used clinical data in its algorithm which included previous tumor status (primary tumor or recurrent tumor) and the number of years elapsed since the previous tumor. The paper analyzed several subgroups including different stages of tumor and patients who had received adjuvant bacillus Calmette-Guerin (BCG) treatment. With a test negativity rate of 0.34, Cxbladder Monitor demonstrated a sensitivity of 93% and NPV of 97%.

Also from 2017, Lotan and colleagues utilized the same patient cohort found in the Kavalieris (2017) study to perform a comparative analysis between Cxbladder Monitor and other noninvasive urine tests that were used to rule out recurrent urothelial carcinoma.42,46 The authors found that Cxbladder “outperformed” all comparative tests (which included cytology, NMP22 ELISA, NMP22 BladderChek, and UroVysion FISH), with higher sensitivity (91% versus sensitivities ranging from 11% to 33%) and higher NPV (96% versus NPVs ranging from 86% to 92%).

In 2017, Darling and colleagues performed a clinical utility study for Cxbladder Triage and Detect.47 The study used previously obtained clinical data and Cxbladder test results to create clinical scenarios for twelve urologists. These scenarios centered on patients presenting with hematuria and evaluated how the urologists would hypothetically work-up these patients in the context of Cxbladder results. The study found that when the urologists had access to Cxbladder results, they would hypothetically change their clinical decisions in caring for these patients, ultimately leading to fewer invasive diagnostic procedures.

In 2018, Lough and colleagues performed a clinical utility study for Cxbladder Monitor.48 The study used previously obtained clinical data and Cxbladder test results to create clinical scenarios for eighteen physicians. These scenarios centered on patients with a history of urothelial carcinoma and evaluated how the physicians would hypothetically manage these patients in the context of Cxbladder results. The study found that when the physicians had access to Cxbladder results, they would hypothetically change their clinical decisions in caring for these patients, leading ultimately to fewer tests and procedures for patients classified as low risk by Cxbladder and an increased number of tests and procedures for patients classified as higher risk.

In 2019, Konety and colleagues performed a retrospective analysis of pooled data (from four patient cohorts) in the context of Cxbladder Triage, Detect, and Monitor.49 A total of 436 samples were evaluated from patients with hematuria and 416 samples from patients with potential recurrence of urothelial carcinoma. These Cxbladder results were then compared with cytology results for the same samples. The authors found that, overall, Cxbladder demonstrated a better NPV than cytology (97.4% versus 92.6%) and missed fewer tumors (false negatives) than cytology (eight missed versus 59 missed).

In 2020, Koya and colleagues performed a retrospective audit of a new surveillance protocol that incorporated Cxbladder Monitor for patients with a history of urothelial carcinoma.50 The patients involved were divided into two cohorts: low risk (n = 161) and high risk (n = 47), noting that these numbers represent only patients who completed the study with both Cxbladder testing and follow-up cystoscopy. A total of 309 patients were initially enrolled in the study, but only 208 completed it. In the low-risk cohort, patients who received a negative Cxbladder result were permitted to wait longer (12 months, as opposed to within two to three months of a positive result) before receiving a follow-up cystoscopy. In the high-risk cohort, patients were managed the same regardless of the Cxbladder result, although the data were used to speculate on potential changes to the protocols for this type of patient. Over the course of the study and in the 35 months of the follow-up period, no cases positive for urothelial carcinoma were missed by the first Cxbladder test (although there was at least one false negative result in a second, follow-up round of Cxbladder testing), and no patients developed newly invasive or metastatic urothelial carcinoma. For the low-risk cohort, confirmed recurrence occurred in three of the patients who initially tested negative with the Cxbladder test; only two of those three patients had a follow-up, second Cxbladder test, with one true positive and one false negative (as demonstrated in the supplementary figures). There was also confirmed recurrence in three low-risk patients who tested positive with Cxbladder. For high-risk patients, four patients demonstrated a recurrence of urothelial carcinoma, and all four of these patients tested positive with Cxbladder.

In 2023, Li and colleagues evaluated Cxbladder Monitor through a prospective study of 92 patients diagnosed with non-muscle invasive bladder cancer (NMIBC) at two different clinical sites who were due for follow-up regarding their diagnosis (primary or recurrent), a previous procedural visit, and/or other therapy (e.g., BCG instillation).51 The study sought to triage the scheduling of these patients for follow-up cystoscopy through use of at-home Cxbladder Monitor testing, delaying the follow-up appointment if patients received a lower risk score (<3.5) on the Monitor test. Patients with either gross hematuria or active UTI were excluded from the study. Moreover, of the 92 patients, a total of 16 were lost to follow-up, although data were still included in summary tables for these 16 patients. Of the 24 patients followed up earlier due to a higher risk score (>3.5), nine were found to have tumors on cystoscopy. Of the 52 patients with delayed cystoscopy due to a lower risk score, none were found to have tumors. Note that of the 66 patients with a lower risk score, 14 were not evaluated via the delayed cystoscopy for the following reasons: did not show up for cystoscopy, chose another round of Cxbladder Monitor testing instead of cystoscopy, stopped surveillance for reasons not given, or died of “unrelated” but undescribed causes. The paper also noted that the patients who opted out of the follow-up cystoscopy in favor of a second Cxbladder Monitor test were found at only one of the two sites. The authors concluded that using at-home Cxbladder Monitor testing to triage patients and allow delayed cystoscopy for patients with a lower risk score was an effective new protocol.
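
The scheduling rule described by Li and colleagues reduces to a single risk-score threshold; a minimal sketch of that decision rule, using the 3.5 cut-off reported above, follows for illustration only.

# Minimal sketch of the surveillance triage rule described by Li and colleagues:
# risk scores below 3.5 allowed delayed cystoscopy, while higher scores prompted
# earlier cystoscopy. Illustration only.
def triage_followup(monitor_score: float, threshold: float = 3.5) -> str:
    """Return the follow-up pathway implied by an at-home Cxbladder Monitor risk score."""
    if monitor_score < threshold:
        return "lower risk: delay follow-up cystoscopy"
    return "higher risk: schedule earlier follow-up cystoscopy"

print(triage_followup(2.1))
print(triage_followup(4.7))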

In 2021, Raman and colleagues developed a fourth version of the Cxbladder test, Cxbladder Resolve, utilizing three different patient cohorts (total of 863 patients) in the internal validation and one separate cohort (548 patients) in the external validation.43 In the external validation, testing was also performed with other versions of the Cxbladder test: Cxbladder Triage and Detect.

Cxbladder Resolve was designed to identify patients with a high probability of high-impact tumors (HIT), namely high grade urothelial carcinomas, by stratifying patients into one of three categories: high priority for HIT evaluation, work-up for HIT based on physician-directed protocol (PDP), or manage by observation. In the internal validation, Cxbladder Resolve was found to have a bootstrap-adjusted estimated sensitivity of 92.4% and specificity of 93.8% for HIT; note that the overall sensitivity and specificity for all tumors during the internal validation was 91.2% and 61.0% respectively. In the external validation, Cxbladder Resolve correctly identified all HIT diagnoses and missed three low grade tumors, with a cumulative sensitivity of 90.0% and specificity of 96.3%. The authors also found that using a reflexive test algorithm with Cxbladder Triage, Detect, and Resolve together would correctly identify 87.6% of patients who did not need further work-up (NPV of 99.4%).
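
The “reflexive test algorithm” referenced above combines Cxbladder Triage, Detect, and Resolve in sequence; because the exact ordering and decision thresholds are not specified here, the following sketch is a hypothetical illustration of a reflex cascade rather than the published algorithm.

# Hypothetical illustration of a reflex-testing cascade combining Triage, Detect,
# and Resolve. The ordering and boolean inputs are assumptions for illustration,
# not the published algorithm.
def reflex_workup(triage_positive: bool, detect_positive: bool, resolve_high_priority: bool) -> str:
    """Return a hypothetical disposition from a sequential (reflex) testing pathway."""
    if not triage_positive:
        return "observe: no further work-up indicated"
    if not detect_positive:
        return "observe or follow physician-directed protocol"
    if resolve_high_priority:
        return "high priority for high-impact tumor (HIT) evaluation"
    return "work-up per physician-directed protocol"

print(reflex_workup(True, True, True))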

In 2022, Lotan and colleagues published a study describing two newer versions of the Cxbladder tests (enhanced Cxbladder Triage [CxbT+] and Detect [CxbD+]).44 The enhancement (digital droplet PCR testing of urine specimens) was used to identify the presence or absence of six single nucleotide polymorphisms (SNPs) associated with the genes FGFR3 and TERT. These SNPs, mostly somatic (acquired) genetic variants, can be found in urothelial carcinomas, as described in the references provided in Lotan and colleagues’ paper. Lotan and colleagues performed an internal validation of these new biomarkers, testing urine from two cohorts: 344 patients from the United States and 460 patients from Singapore. The six SNPs were evaluated both as stand-alone tests and as enhancements to the original Cxbladder Triage and Detect tests. The authors concluded that the addition of the six SNPs to the Cxbladder tests improved test performance, particularly specificity.

Two publications from Davidson and colleagues, in 2019 and 2020, evaluated the performance of the Cxbladder Triage test when integrated into hematuria work-up protocols.52,53 Notably, these studies were not performed or funded by Pacific Edge Diagnostics. The 2019 paper prospectively evaluated the new protocol without enacting it within the clinical setting, while the 2020 paper described the outcome of a fully enacted protocol. In the 2019 study, which included 478 patients with hematuria referred to the urology practice and 73 patients with hematuria who had not been referred, the Cxbladder test correctly triaged 42 of 44 patients with urothelial malignancy; the two false negatives were either confirmed or suspected (no histology obtained) low grade lesions. From their cohort, the authors found that Cxbladder Triage had a sensitivity of 95% and NPV of 98%. The authors concluded “the risk of missing a significant cancer from the adoption of the theoretical pathway appears very low and clinically acceptable,” while later stating that larger studies were still needed to “prove the true clinical value of inclusion of these biomarkers in investigative pathways.” In the 2020 study, Davidson and colleagues retrospectively evaluated the clinical courses of 884 patients with hematuria who were worked up with the new protocol and subsequently followed for a median of 21 months. The protocol identified 46 histologically confirmed urothelial carcinomas. Cxbladder Triage results included five false negatives, four of which were detected upon imaging and one of which was discovered at a three-month follow-up. Cxbladder Triage results also demonstrated low specificity, with 39% of results being false positives. Overall, the authors found that the protocol that included Cxbladder Triage had a sensitivity of 98.1% and NPV of 99.9%. The authors concluded that their findings “add to the increasing evidence that biomarkers have a place in the assessment of hematuria, but that the results of these assays need to be supported by imaging of the bladder.”

In terms of systematic reviews and meta-analyses, two publications in 2015 were identified, both from Chou and colleagues, under contract with the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services.54,55 The publications discussed several urinary biomarker tests, including Cxbladder Detect. However, both publications only discussed a single Cxbladder study, performed by O’Sullivan and colleagues in 2012.40 In the shorter publication by Chou and colleagues, a systematic review and meta-analysis in the Annals of Internal Medicine, the sensitivity and specificity of Cxbladder were given a low grade for strength of evidence (as determined by study quality, precision, consistency, and directness). The process of evidence assessment was covered in greater detail in the 923-page document from Chou and colleagues entitled “Emerging Approaches to Diagnosis and Treatment of Non-Muscle-Invasive Bladder Cancer;” however, the key points concerning Cxbladder were covered in the shorter publication and mostly reiterated in the longer document.

Another systematic review and meta-analysis was published more recently, in 2022, by Laukhtina and colleagues.56 The study reviewed five different urinary biomarker tests (UBT) used to detect recurrent urothelial carcinoma, including Cxbladder Monitor. The authors assessed statistical values associated with each test, such as sensitivity, specificity, positive predictive value (PPV), NPV, and accuracy. Additionally, the authors assessed the risk of bias using the Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS-2), with cystoscopy and histology results as the reference standard. The study also performed a network meta-analysis comparing the tests with cytology. Two Cxbladder Monitor studies were analyzed: Koya (2020) and Lotan (2017).46,50 At the end of the paper, the authors concluded that the performances of the five tests “support[ed] their potential value in preventing unnecessary cystoscopies.” The authors also assessed other diagnostic tests from four of the five test companies, including Cxbladder Triage and Detect (O’Sullivan [2012] and Davidson [2019]), and concluded “there are not enough data to support their use in the initial diagnosis setting.”40,52

ThyroSeq CRC, CBLPath, Inc, University of Pittsburgh Medical Center

PubMed and Google Scholar were searched for peer-reviewed, evidence-based literature which provided information regarding analytic and clinical validity and clinical utility for the ThyroSeq Cancer Risk Classifier test. Key words used to search in combination included: ThyroSeq, ThyroSeq Cancer Risk Classifier, ThyroSeq CRC, molecular testing, thyroid nodule(s), thyroid cancer, Bethesda VI, Bethesda VI nodules, FNA(s), fine needle aspiration biopsy, formalin fixed paraffin-embedded (FFPE), prognostic, and TERT, TP53, AKT1, and PIK3CA mutations.

Only three peer-reviewed publications were identified evaluating the clinical validity of the ThyroSeq CRC test. The first paper, written by Yip and colleagues in 2021, represented the first assessment of the ThyroSeq CRC test and evaluated 287 patients with Bethesda VI (malignant) cytology.57 The second paper, written by Skaugen and colleagues in 2022, assessed ThyroSeq CRC in 100 patients with Bethesda V (suspicious for malignancy) cytology.58 The third paper, from Liu and colleagues, retrospectively assessed the three-tier Molecular Risk Group classification system in 578 patients who had received a thyroidectomy for primary thyroid cancer and who had been subsequently tested using the ThyroSeq v3 molecular test.59 The primary outcome measure was recurrence of the thyroid cancer following completion of the initial oncologic treatment. A recurrence event was defined as either structural recurrence or (if no structural recurrence was detected) biochemical recurrence. No papers significantly assessing clinical utility were identified.

PancraGEN – Interpace Diagnostics

PubMed and Google Scholar were searched for peer-reviewed, evidence-based literature that provided information regarding the analytic and clinical validity and clinical utility for the PancraGEN test. Key words used to search in combination included: PancraGEN, PathfinderTG, molecular testing, topographic genotyping, pancreatic cyst(s), pancreatic cyst fluid, solid pancreatic lesions, and KRAS and/or GNAS mutations.

Thirty-five total publications addressing the analytical validity, clinical validity, or clinical utility of the PancraGEN prognostic test (from Interpace Diagnostics) were identified. The papers identified focused on both individuals with pancreatic cysts and with solid pancreaticobiliary lesions.

Of the 35 publications identified, 25 were excluded for the following reasons. Seventeen of these studies described an earlier version of the test that no longer resembles the currently offered PancraGEN test and were thus excluded.60-76 Two of these 17 studies also did not include a reference standard, such as survival or time to tumor recurrence, further justifying their exclusion.75,76 One study did not pair the PancraGEN test results with corresponding patient data and was thus excluded.77 This study and one additional study did not adequately describe patient selection criteria, and thus both were excluded.77,78 Three papers that did not address patient characteristics were excluded.65,75,78 Six studies that compared the guidelines for molecular testing in pancreatic cysts but made no specific mention of the PancraGEN test were excluded.79-84

The remaining 10 papers were included in this LCD’s analysis of the evidence. Three of these papers were retrospective studies and addressed PancraGEN’s clinical validity and clinical utility for pancreatic cysts.85-87 Additionally, two retrospective studies that only addressed the clinical utility of PancraGEN for pancreatic cysts were identified.88,89 Three studies that addressed PancraGEN’s clinical validity and clinical utility for solid pancreaticobiliary lesions were also found.90-92 Finally, two technical reviews that examined clinical utility and clinical validity of PancraGEN were identified.93,94

DecisionDx – Castle Biosciences

DecisionDx-Melanoma

PubMed and Google Scholar were searched for peer-reviewed, evidence-based literature which provided information regarding analytic and clinical validity and clinical utility for the DecisionDx-Melanoma test. Key words used to search in combination included: DecisionDx, DecisionDx-Melanoma, Castle Biosciences, molecular testing, melanoma, skin cancer, sentinel lymph node biopsy, SLNB, GEP test, gene expression profile, stage I melanoma, stage II melanoma, stage III melanoma, 31-gene profile test, 28-gene profile test, cutaneous melanoma, and formalin-fixed paraffin embedded (FFPE) tissue.

The DecisionDx-Melanoma prognostic test (from Castle Biosciences) is described in numerous peer-reviewed publications. Forty total publications addressing analytic validity, clinical validity, or clinical utility were identified. Five papers were immediately excluded from review due to their publication in non-peer-reviewed journals (not found on PubMed).95-99 Of the remaining 35 papers, three were editorial in nature or in response to editorial comments, and four were evidence review papers without meta-analysis.100-106 Two primarily meta-analysis papers were identified; additionally, one of these papers was frequently used as a source of raw data for other publications.107,108 The remaining 26 papers were identified as either validation papers or cohort studies (either prospective or retrospective, or both). Of the four validation papers, one described the development of the original GEP test, and the other three described clinicopathologic syntheses with the GEP data that resulted in new information beyond the results of the original GEP test.109-112 It should be noted that these latter three papers are relatively new and do not currently have any follow-up publications that further evaluate the clinical validity and/or utility of the combined clinicopathologic and GEP results.110-112 One paper primarily assessed the analytic validity of DecisionDx-Melanoma, while seven papers addressed aspects of clinical utility.113 The remaining 14 papers mostly assessed clinical validity through retrospective and/or prospective cohorts of patients, typically evaluating five-year outcomes in recurrence free survival (RFS), disease free survival (DFS), distant metastasis free survival (DMFS), melanoma specific survival (MSS), and/or overall survival (OS).114-127

DecisionDx-SCC

PubMed and Google Scholar were searched for peer-reviewed, evidence-based literature which provided information regarding analytic and clinical validity and clinical utility for the DecisionDx-SCC test. Key words used to search in combination included: DecisionDx, DecisionDx-SCC, Castle Biosciences, molecular testing, melanoma, skin cancer, prognostic test, SCC, cutaneous squamous cell carcinoma, cSCC, GEP test, gene expression profile, metastasis, 40-gene profile test, and formalin-fixed paraffin embedded (FFPE) tissue.

Twelve total publications addressing the analytical validity, clinical validity, and/or clinical utility of the DecisionDx-SCC prognostic test (from Castle Biosciences) were identified. All 12 identified studies were funded by, or written by employees of, Castle Biosciences. The papers identified included one panel review and three surveys of medical professionals.128-131 Additionally, two papers were evidence reviews without meta-analyses.132,133 The remaining six papers included two that addressed analytical validity, three cohort studies (both prospective and retrospective) that addressed clinical validity, and one case series that addressed clinical utility.134-139


UroVysion fluorescence in situ hybridization (FISH) – Abbott

PubMed and Google Scholar were searched for peer-reviewed, evidence-based literature which provided information regarding analytic and clinical validity and clinical utility for the UroVysion FISH test. Key words used to search in combination included: UroVysion, UroVysion FISH, UroVision, UroVision FISH, UroVysion outcomes, UroVysion utility, and fluorescence in situ hybridization.

The UroVysion FISH test for bladder cancer (Abbott) is described in numerous peer-reviewed publications, and 23 papers addressing analytical validity, clinical validity, or clinical utility were identified. Meta-analyses, systematic reviews, and literature reviews accounted for 10 of these papers, seven of which assessed multiple biomarkers for urothelial cancer, not just the UroVysion FISH test.140-149 The remaining 13 papers were identified as case-control, cross-sectional, or cohort studies (either prospective or retrospective).45,150-161 Only two papers were identified that addressed clinical utility, while the rest addressed analytical and clinical validity – typically evaluating outcomes such as recurrence free survival, disease free survival, and overall survival.

Colvera – Clinical Genomics

PubMed and Google Scholar were searched for peer-reviewed, evidence-based literature which provided information regarding analytic and clinical validity and clinical utility for the COLVERA test. Key words used to search in combination included: COLVERA, Colvera, blood test, molecular testing, colorectal cancer (CRC), CEA test, ctDNA, colorectal adenocarcinoma, real-time PCR test, DNA methylation, CRC surveillance, and BCAT1 and IKZF1.

Ten total publications addressing the analytic validity, clinical validity, or clinical utility of the Colvera test for colorectal cancer (from Clinical Genomics) were identified. All 10 identified studies were funded by, and written by employees of, Clinical Genomics. The papers identified included three validation papers and seven cohort studies (both prospective and retrospective).162-171 One paper addressed analytic and clinical validity, while the rest addressed clinical validity. No papers were found that addressed the clinical utility of the Colvera test.

Note that several papers that are not listed in this LCD were identified addressing testing of BCAT1 and IKZF1 in colorectal cancer patients; however, since these papers did not address the Colvera test itself, they were excluded from this Summary and subsequent Analysis of Evidence for Colvera.

PancreaSeq® Genomic Classifier, Molecular and Genomic Pathology Laboratory, University of Pittsburgh Medical Center

PubMed and Google Scholar were searched for peer-reviewed, evidence-based literature which provided information regarding analytic and clinical validity and clinical utility for the PancreaSeq genetic test. Key words used to search in combination included: PancreaSeq, 22-gene panel, molecular testing, PancreaSeq Genomic Classifier, pancreatic cyst(s), pancreatic neuroendocrine tumors (PanNETs), pancreatic adenocarcinoma, DNA targeted next-generation sequencing, DNA-targeted NGS, and KRAS and/or GNAS mutations. The PancreaSeq DNA- and mRNA-based targeted NGS test (from the University of Pittsburgh Medical Center), as currently offered and described in the methods section of clinical reports, does not currently have any corresponding peer-reviewed literature.

Analysis of Evidence (Rationale for Determination)

Databases and Knowledge Bases

Potential knowledge bases were evaluated against the Good Practice Guidelines found in the Clinical Practice Guidelines: Directions for a New Program written by the Institute of Medicine in 1990. Per the publication, the Institute recommended eight Attributes of Good Practice Guidelines: Validity, Reliability/Reproducibility, Clinical Applicability, Clinical Flexibility, Clarity, Multidisciplinary Process, Scheduled Review, and Documentation. While the definitions and descriptions for each attribute will be only briefly detailed in this LCD, the rationale supporting how a potential knowledge base was considered in the context of these attributes will be detailed below.27

Per the Good Practice Guidelines, the Validity attribute addresses both the guidelines as a whole in addition to the content within the guidelines. For the former, validity is demonstrated when the guidelines lead to improvement in health and cost outcomes, a metric that would best be assessed through an external study of the guidelines in clinical practice. For the latter, validity is demonstrated when the guideline content assesses how a clinical action/recommendation affects health and cost outcomes. Validity of the guideline content is ideally evaluated using 11 elements: Projected health outcomes, Projected costs, Relationship between the evidence and the guidelines, Preference for empirical evidence over expert judgment, Thorough literature review, Methods used to evaluate the scientific literature, Strength of the evidence, Use of expert judgment, Strength of expert consensus, Independent review, and Pretesting.

The NCCN generally meets the definition of validity. In 2022, the NCCN was the subject of a retrospective study performed by CVS Health that was presented at that year’s American Society of Clinical Oncology conference. The study found that adherence to NCCN guidelines lowered the total cost of care in the treatment of colorectal and breast cancers in Medicare patients.172,173 Similarly, adherence to NCCN guidelines has been shown to improve health outcomes in cancer patients.174-177 As for the guideline content, the NCCN thoroughly details its recommendations using language that acknowledges the potential complexities and variations found in patient cases (terminology such as “consider” or “preferred” or “should be guided by”) and supports these recommendations with ample peer reviewed literature. Often the discussion of the evidence highlights strengths and weaknesses in studies and tailors the strength of a recommendation based on such evaluation. Additionally, the guidelines are created, reviewed, and updated by subject matter experts through consensus panels.

Unlike NCCN, OncoKB acts as more of a catalogue of genetic variants with proven clinical validity and utility. OncoKB does not make formal recommendations regarding specific clinical actions, but rather provides expert and evidentiary analysis of known variants and explains how confident one can be in the actionability of a variant. For instance, a genetic variant with a low score on OncoKB may not have enough research and clinical evidence to prove a certain drug is more or less effective when that variant is present. This does not mean that testing for the variant will not (in OncoKB’s view) become useful in the future, but OncoKB suggests, based on expert consensus and evidence, that the variant should not yet be used in clinical decision making. Because of these characteristics, OncoKB does not meet the above definition of the attribute of validity, although it does provide valuable information from expert and evidentiary analysis.178

ClinGen demonstrates features of both NCCN and OncoKB as a knowledge base in the context of the attribute of validity. In terms of similarities to OncoKB, ClinGen assesses the relationship between a gene and disease (including hereditary cancer) and makes determinations as to whether or not there is sufficient evidence supporting this causative relationship. In this instance, again, the knowledge base acts as guidance for whether or not a gene should be evaluated as part of a particular cancer workup. However, more like NCCN, ClinGen also provides very specific recommendations on “Clinical Actionability.” The recommendations provided by this section of ClinGen, however, are relatively simplistic, including clinical actions such as “Surveillance” and preventative surgery. Given that ClinGen is designed to evaluate hereditary diseases, it is unsurprising that most clinical actions evaluated by ClinGen revolve around screening and preventative interventions as opposed to therapeutic actions. Moreover, the prospective nature of the clinical actions evaluated on ClinGen makes it difficult to assess their health and cost outcomes through an external study.

The Reliability/Reproducibility attribute in the Good Practice Guidelines refers to consistency in guideline development and procedures. In practice, by following a knowledge base’s standard operating procedures and criteria in evaluating a topic, different groups of reviewers should come up with the same guideline for that topic. Reliability/reproducibility also refers to consistency in how a guideline is interpreted and utilized by its audience. This means that patients under different providers who use the same guideline should see equivalent care and management of their needs. Providers who use the same guideline should also come to the same conclusions and management decisions when evaluating similar patient complaints.

Evaluating knowledge bases for their reliability and reproducibility is difficult, as Good Practice Guidelines has noted. Guidelines, by definition, are meant to guide (not necessarily mandate) clinical decisions. As described in the Validity attribute, the practice of medicine requires flexibility in clinical management to address different patient presentations. This means that guidelines should not only be clear in their recommendations but also provide alternatives, when available.

As stated earlier, the NCCN guidelines thoroughly detail their recommendations using language that acknowledges the potential complexities and variations found in patient cases (terminology such as “consider” or “preferred” or “should be guided by”) and support these recommendations with ample peer-reviewed literature. Given the complexity and variation inherent to managing even similar-appearing clinical presentations, external comparison of the reliability/reproducibility of NCCN guidelines between these presentations would not be productive. However, one could interpret improvements in health outcomes for those using NCCN guidelines (as opposed to not using them) as an indication that the guidelines reliably improve patient outcomes.174-177

OncoKB provides a very detailed standard operating procedure describing how it reviews, scores, and updates the knowledge base. The standard operating procedure, while providing step-by-step instructions in granular detail, still requires OncoKB’s software and personnel infrastructure to review evidence and score genetic content. In this sense, while consistency within the organization is likely robust, it cannot be translated to external reviewers outside the OncoKB team, preventing evaluation of reproducibility as described in the Good Practice Guidelines. Additionally, several steps rely on the judgment of experienced individuals, as can be seen in the procedure. The thought processes of these individuals are not provided publicly. As for how OncoKB is used by clinicians, no studies were identified addressing this topic. This lack of studies, however, is to be expected given that OncoKB provides scored data without direct clinical management recommendations. Overall, OncoKB meets the Reliability/Reproducibility attribute, albeit only within the context of the OncoKB organization, without the ability to translate to external reviewers.

ClinGen, like OncoKB, supplies clinicians with scored data but additionally includes minimal clinical management recommendations (e.g., when the clinical action of cancer surveillance is given a high score and is thus implicitly recommended for patients with relevant genetic findings). ClinGen, however, is less centralized than OncoKB in its standard operating procedures. Generalizable standard operating procedures applying to all workgroups are provided; however, workgroups sometimes have panel-specific operating procedures as well. Given the broad scope of ClinGen as a whole (pediatric and adult hereditary disorders, both non-cancer and cancer related), having different protocols per workgroup under an overarching, more general policy makes sense, even if it leads to variability in how genetic material is evaluated. The disadvantage of this decentralization of standard operating procedures is that consistency can be lost from workgroup to workgroup, making reproducibility more difficult and less consistent. Given the simplicity of the clinical recommendations, with more focus on providing a user with information to help make their decisions, external studies comparing the reproducibility of ClinGen recommendations in clinical practice would be difficult if not impractical. ClinGen, while having some internal variation in standard operating procedures, still captures the general features of the Reliability/Reproducibility attribute.

To fulfill the attribute of Clinical Applicability, a knowledge base should clearly define in what circumstances and/or patient populations a guideline applies. For all three knowledge bases in this LCD, Clinical Applicability was demonstrated de facto by gene-disease correlations; namely, every guideline relevant to this LCD identified genes in the context of a cancer(s) or risk of cancer. More specifically, NCCN further contextualizes its recommendations based on several factors including strength of evidence, cost of intervention, and special patient population considerations. ClinGen similarly expounds upon its scored recommendations in attached Summary Reports that address details such as relevant ages, gender, and clinical presentations. OncoKB categorizes its scoring based on the clinical indication (e.g., prognostic scores Px1, Px2, Px3 and therapeutic scores 1, 2, 3A, 3B, 4, R1, R2), with ties to the FDA scoring where indicated.

Clinical Flexibility, per the Good Practice Guidelines, describes the practice of providing alternatives and exceptions to primary recommendations in a guideline. At best, a guideline would be systematic, not only considering all major alternatives and exceptions but also describing the rationales behind them and the circumstances that support not utilizing a primary recommendation, including consideration of patient preferences, clinical judgment, and the logistics behind an alternative recommendation.

Again, the NCCN’s guidelines demonstrate thoroughness backed by evidence, as is seen in the details surrounding each recommendation statement. Often, NCCN guidelines will provide parameters to their recommendations, including what elements should be considered before performing a clinical action and what exceptions and alternatives should be considered, when appropriate. The NCCN guidelines also tend to defer clinical actions to clinical judgement, including recommending team-based decision making in collaboration with the patient.

ClinGen acts less as guidelines and more as a knowledge repository in respect to providing recommendations. As stated earlier, the clinical actionability statements in ClinGen generally refer to hereditary cancer syndromes, which focus primarily on surveillance and prophylactic interventions. However, ClinGen is designed to provide scoring and evidence for clinical actions, allowing the clinicians to decide if the scores are high enough and the evidence strong enough to move forward with a clinical action. In this way, ClinGen does not directly provide clinical instructions, but rather indirectly guides clinical behavior through supplying evidence without frank conclusions. Thus, in this sense, the Clinical Flexibility attribute does not readily apply to ClinGen.

Likewise, OncoKB acts as a knowledge base, however, with even less description of clinical actions. OncoKB also supplies clinicians with information and evidence, but without recommending a certain clinical action and providing alternatives (such as selecting one chemotherapy over another). As a result, the Clinical Flexibility attribute is not applicable to OncoKB.

According to the Good Practice Guidelines, a knowledge base should also demonstrate Clarity, namely be straightforward, logical, organized, and well defined. Terminology and clinical descriptions (e.g., anemia versus hemoglobin level) should be as precise as possible.

The attribute of Clarity is exemplified by NCCN guidelines in many respects. The guidelines use several modalities to present their recommendations, including a searchable database and frequent use of flow-chart-style diagrammatic algorithms. Additionally, NCCN provides numerous resources describing its process of developing recommendations and what the scoring means. It should be noted that clarity varies per NCCN guideline, with some guidelines less clear in terminology and in defining recommendations. In fact, embedding recommendations within long paragraphs of discussion can make a recommendation harder to find. Another weakness of the NCCN guidelines is the practice of stating that any unscored recommendation within a guideline should be treated as Level 2A, namely: “Based upon lower-level evidence, there is uniform NCCN consensus that the intervention is appropriate.”179 The great majority of NCCN recommendations are Level 2A; however, given that such recommendations receive “uniform NCCN consensus,” the Level 2A recommendations were considered strong enough to warrant coverage as clinically reasonable and necessary in Medicare. Overall, the NCCN guidelines aim to be transparent and clear with their recommendations and use a variety of modalities to ensure the recommendations are clearly communicated.

ClinGen addresses the Clarity attribute through thorough explanation of its scoring systems, separation of different topics such as gene-disease relationships versus clinical actionability of genetic testing, use of searchable databases, and presentation of scores in consistently structured reports. ClinGen allows its users to determine how to use the presented information, only providing recommendations indirectly through listing potential clinical actions, ranking the strength of evidence, and consensus scoring of the clinical actions. ClinGen, however, appears to present its information for an audience with more experience in genetics. Due to the complexity of the ClinGen website, the technical nature of evidence review, and the various topics assessed per gene (gene-disease relationship, dosage sensitivity, clinical actionability, variant pathogenicity, and pharmacogenetics), the ClinGen knowledge base can be challenging to use depending on the user’s background and foundation of knowledge. ClinGen requires its users to learn how to work with the data before they can properly use the website. Moreover, the absence of stated recommendations requires users to understand the details provided before arriving at a conclusion. In fact, per the website: “The information on this website is not intended for direct diagnostic use or medical decision-making without review by a genetics professional.”24 Overall, these weaknesses reduce the clarity of the knowledge base; however, the website does provide substantial definitional and procedural detail that adds to its transparency.

OncoKB, much like ClinGen, addresses the Clarity attribute through its clearly described scoring system. Additionally, OncoKB does not provide direct recommendations, but rather provides the strength of evidence and expert assessment of genetic content in the context of cancer(s). Also, like ClinGen, the absence of expert recommendations requires users to come to their own decisions based on the provided scoring and data. However, OncoKB’s website is comparatively straightforward and intuitive and has a searchable database.

Noted as “one of the committee’s [Institute of Medicine] strongest recommendations,” good practice guidelines should focus on the attribute of Multidisciplinary Process.27 The committee for Good Practice Guidelines felt that guideline development should include all stakeholders potentially affected by the guideline, suggesting even patients and payors be considered. Part of the process of organizing a guideline committee would be determining the participants’ conflicts of interest, not necessarily to exclude individuals but rather recognize and account for potential biases. It is noteworthy that even among the committee writing the Good Practice Guidelines there was debate on who should be included in guideline development and who should lead the guideline development.

Per the NCCN’s Disclosure Policies: “The NCCN Guidelines are updated at least annually in an evidence-based process integrated with the expert judgment of multidisciplinary panels of experts from NCCN Member Institutions. NCCN depends on the NCCN Guidelines Panel Members to reach decisions objectively, without being influenced or appearing to be influenced by conflicting interests.”179 Panel members are overseen by various levels of staff and leadership, with conflicts of interest updated at least semiannually. It is recognized that panel members will have outside activities with other entities such as industry and patient advocacy groups, and thresholds for untenable conflicts of interest have been set, as described on the NCCN website. It should also be noted that while panels are comprised of subject matter experts, the guidelines also receive input from patient advocates. Overall, the process of selecting and maintaining panel members, as well as other NCCN leadership, participants, and staff, is well-detailed on the NCCN website. As a whole, the NCCN system robustly meets the Multidisciplinary Process attribute.

As described on their website and documented in their standard operating procedures, OncoKB maintains its knowledge base through use of curators, a CGAC, and an External Advisory Board. The CGAC consists of “Core” members with broad skillsets that include clinical management, research, and translational cancer biology expertise. The CGAC also includes “Extended” members, such as service chiefs, physicians, and scientists who represent multidisciplinary clinical leadership within the Memorial Sloan Kettering Cancer Center (MSKCC). Each member must submit their conflicts of interest, and the system for developing and approving genetic assertions (including scoring) takes conflicts of interest into consideration before a member is allowed to approve or deny these assertions. The CGAC is additionally overseen by an External Advisory Board consisting of leaders from the oncology and genomics community who are not employed by MSKCC. One of the inherent weaknesses of the OncoKB knowledge base is the limited representation from experts outside of MSKCC. Moreover, involvement from other stakeholders, such as patient advocacy groups, does not appear to be included in OncoKB development and maintenance. The current CGAC committee appears to consist of doctoral-level members (MDs and PhDs) only. While these weaknesses are noted, the overall system utilized by OncoKB meets the core elements of the Multidisciplinary Process attribute.

ClinGen utilizes Expert Panels to score, review, and report recommendations. Expert Panels are open to public volunteers who first receive training, are required to disclose conflicts of interest, and are then assigned to topic-specific panel(s) by ClinGen leadership. Members of Expert Panels are listed on the ClinGen website, and their conflicts of interest disclosures usually can be found within their workgroup’s webpage. These members represent a variety of professions in research, medicine, academia, and/or industry and diverse educational backgrounds, including both doctoral and non-doctoral degrees. Additionally, membership is not limited to the United States. While ClinGen provides one of the most inclusive memberships of the three knowledge bases discussed, the transparency around these members (including leadership) is lacking. Moreover, as would be expected given the scope of ClinGen’s content, membership leans predominantly towards genetics research as opposed to clinical management. ClinGen does demonstrate weaknesses that would benefit from improvement, but the essence of the knowledge base (inclusivity guided by centralized review and reporting standard operating procedures and multidisciplinary input) meets the core elements of the Multidisciplinary Process attribute.

The Institute of Medicine committee also endorsed Scheduled Reviews for knowledge bases. While the committee does not specify exact timelines for scheduled reviews, the Good Practice Guidelines does state that reviews should be scheduled based on the known or expected frequency of updates in the topic of interest. Additionally, scheduled reviews do not preclude ad hoc reviews as new evidence is identified.

NCCN guidelines are reviewed and updated as necessary on a yearly basis. However, NCCN guidelines are often updated on a more frequent basis as stakeholder inquiries are submitted or new information is learned. A good example of this process can be seen in NCCN’s transparency documents which record guideline specific meetings and the topics discussed at those meetings. NCCN clearly meets the Scheduled Reviews attribute.

OncoKB, of the three discussed knowledge bases, appears to have the most rigorous and frequent scheduled reviews of its content. OncoKB’s team reviews various data sources at frequencies ranging from weekly for databases like cBioPortal and COSMIC to monthly for peer-reviewed literature from a variety of well-established journals. Additionally, OncoKB keeps abreast of new information from major annual conferences (e.g., the ASH Annual Meeting) and evaluates other sources ad hoc, such as user feedback or data from clinical trials. All of these procedures can be found in the Standard Operating Procedure for the website.178 This process strongly meets the Scheduled Review attribute described in the Good Practice Guidelines.

Based on the structure of ClinGen, the Scheduled Review attribute is not well met. In 2018, McGlaughon and colleagues published an analysis asking how frequently a gene curation should be re-assessed and updated.180 The retrospective study recommended different timelines for re-evaluation based on the initial strength of a gene-disease association (limited, moderate, strong), ranging from greater than five years to three years. In 2019, ClinGen did develop a policy along these lines, with the shortest time to recuration being two years for gene-disease associations with a Moderate classification, mandating the policy for “all current and future GCEPs” (Gene Curation Expert Panels).181 However, a similar policy for recuration of Clinical Actionability scores and recommendations was not found. Based on what is available, it appears that ClinGen may re-evaluate Clinical Actionability on an ad hoc basis, such as the recent re-evaluation of hereditary cardiac disease scores in 2021.182 It should also be noted that for all released Clinical Actionability reports, the oldest update was only in April 2020, meaning that all released reports appear to be current up to at least 2020 (with a vast majority of reports showing updates in 2022). While the absence of a clear re-curation policy for Clinical Actionability is problematic, ClinGen’s reports are based on current evaluations and recent updates.

The eighth and final attribute described in Good Practice Guidelines is Documentation. This attribute refers to the written procedures, policies, scoring metrics, etc. describing how a knowledge base makes its determinations and guidelines. Good documentation should also record activities of the knowledge base, including which individuals developed a guideline, the evidence they utilized, their rationales and assumptions, and any analytical methodology they used.

NCCN demonstrates robust standard operating procedures and vetting of its guideline developers. Additionally, NCCN both records submissions from external sources and documents the subsequent panel meetings where submissions are considered. Much of the text in NCCN guidelines outlines both the evidence and rationale for NCCN scoring and recommendations. However, NCCN is less transparent about the dialogue and content of its guideline panel meetings, for instance not providing comprehensive meeting minutes or transcriptions of the meetings' discussions. The votes of the panel on an issue are likewise anonymous even if the members attending a discussion are disclosed. These practices can obscure full understanding of the rationales and assumptions behind a panel's final decisions and recommendations. Overall, NCCN is fairly transparent in both the procedures used to develop guidelines and the people creating and updating them. More transparency would be needed in some areas to fully meet the Documentation attribute, but on the whole, NCCN demonstrates good documentation.

OncoKB provides a very detailed standard operating procedure on its website and lists all team members with their respective conflicts of interest disclosures. However, a major weakness of OncoKB is the lack of transparency as to what data and/or citations support its score for genetic content. For instance, although alterations are listed with their score, disease, and drug associations, only the number of citations (but not the actual references) is listed. Moreover, meeting minutes do not appear to be publicly available, limiting insight into the rationales and assumptions behind scores. Because of these weaknesses, OncoKB does not completely meet the Documentation attribute as described above, although the knowledge base does do a good job explaining its general procedures and documenting who is curating the knowledge base.

ClinGen provides overall scoring criteria and parameters for its knowledge base, but some Expert Panels create their own working group protocols. Given the broad scope of ClinGen as a whole (pediatric and adult hereditary disorders, both non-cancer and cancer related), having different protocols per workgroup under an overarching, more general policy makes sense, even if it leads to variability in how genetic material is evaluated. As noted above, Expert Panel members are listed on the ClinGen website, and their conflicts of interest disclosures usually can be found within their workgroup's webpage. Because of this variability and the limited transparency around its members and leadership, ClinGen does not completely meet the Documentation attribute as described above, although the knowledge base does do a good job explaining its general procedures and documenting who is curating the knowledge base.

Specific Lab Tests

Cxbladder (Detect, Triage, Monitor, Resolve)

The fundamental methodology of Cxbladder tests is founded on a 2008 paper from Holyoake and colleagues describing the creation of an RNA expression assay that could predict the likelihood of urothelial carcinoma from urine.39 Several gene expression profiles were examined between urothelial carcinoma tissue and normal urothelial tissue, the latter of which was collected as non-malignant tissue from patients with renal cell carcinoma who had undergone a radical nephrectomy. The gene expression profiles that demonstrated the most promise in differentiating between cancer and non-cancer were gathered into a four-gene expression panel and then optimized to discriminate between urine from patients with any grade/stage of urothelial cancer and patients without urothelial cancer. Gene expression tests used to predict the presence or absence of cancer, however, must take into consideration many potential complicating and confounding factors. The absence of a rigorous approach to addressing these complicating/confounding factors undermined the clinical validity of Cxbladder tests, as detailed below.

As evidenced by the publications reviewed in the Summary of Evidence, the key weakness of the Cxbladder tests is found within their test design. Cxbladder tests are founded on the concept that differences in gene expression between urothelial cancer and non-urothelial-cancer tissue (including non-neoplastic tissue) can be measured in urine to determine whether urothelial cancer is present. This means that a well-designed test must be able to discriminate not only between cancer and normal tissue, but also between different types of malignancy.

For the precursor test uRNA, Holyoake and colleagues started with a custom-printed array from MWG Biotech that allowed gene expression profiling of 26,600 genes.39 This array was used to analyze normal tissue (18 specimens) and urothelial carcinoma tissue (28 specimens from Ta tumors and 30 specimens from T1-T4 tumors). The preliminary data were then analyzed to select the most promising genes for creation of a GEP test. This subset of promising genes was further pruned by testing urine from patients with transitional cell carcinoma (TCC, i.e., urothelial carcinoma) (n=75), patients with other "urological cancer" (n=33), and patients without cancer, including patients with infection (n=20) and "other benign urinary tract disease" (n=24). Additionally, the paper mentions testing blood to obtain gene expression levels for blood and inflammatory cells, but the results of this subset of tests were not provided in the paper. It must also be noted that the characteristics of the non-TCC cancers were not disclosed in the paper.

After this additional testing, Holyoake and colleagues settled on a four-gene expression test (uRNA-D) that utilized the genes MDK, CDC2 (now officially known as CDK1), IGFBP5, and HOXA13.39 Unfortunately, the false positives and false negatives received little attention in the paper, including false positives in patients with other non-TCC cancers (n=3). The authors concluded that their results "will need to be further validated in a prospective setting to more accurately determine test characteristics, particularly in patients presenting with hematuria and other urological conditions."

Standing alone, the 2008 paper from Holyoake and colleagues lacks the scientific rigor to establish that the uRNA-D test can accurately distinguish between urothelial carcinoma and other cancers or other non-cancer urological conditions.39 One notable gap was the lack of detail or definition for the non-urothelial cancers, many of which can involve or shed cells into the urinary system, including prostate cancers, renal cancers, and metastatic or locally invasive cancers from other organs. A well-designed test would be expected not only to assess the full spectrum of potential cancers, but also to include a much larger number of specimens (beyond the 33 undefined cancers found in this study). In the same sense, the absence of details regarding non-malignant specimens, which included only 20 "urinary tract infections," was a major and notable gap in this test's development. Many other issues were identified with this paper, including a strong population bias towards male patients, but altogether, this validation of uRNA-D was insufficient to establish that the test accurately distinguishes urine from patients with urothelial carcinoma from urine from patients without it.

It is critical to understand the limitations of the 2008 publication from Holyoake and colleagues because the test uRNA-D was used to create the Cxbladder line of tests, with the main difference between uRNA-D and Cxbladder being the addition of a single gene, CXCR2, to the gene expression profile of the Cxbladder assay.39,40 Note that other versions of Cxbladder use non-genetic data in an overarching algorithm to produce results, but the focus of this discussion will be on the gene expression profile technology of Cxbladder tests.

In 2012, the first paper describing a Cxbladder test was published by O'Sullivan and colleagues.40 This paper served as both a test validation and a comparison of the new Cxbladder test with other urine tests on the market. While the statistical results of Cxbladder seem promising, we must return to the foundation of the test, namely its ability to distinguish between urothelial carcinoma and other cancerous or non-cancerous conditions (or patients without disease). In this paper, other malignancies (n=7) were assessed only when they were found in patients with urothelial carcinoma. Moreover, the types of other malignancy were not disclosed. There were also 255 "nonmalignant disease" specimens, which included "benign prostatic hyperplasia/prostatitis," "cystitis/infection or inflammation of urinary tract," calculi, and "hematuria secondary to warfarin," and 164 specimens from patients with "no specific diagnosis." This first paper from 2012 does not sufficiently address Cxbladder's ability to distinguish between urothelial carcinoma and other malignancies, which is of particular relevance when the majority of the patient population was male (78%) with a median age of 64 years and thus at higher risk of prostate carcinoma. The paucity of clinical data also created gaps in data integrity, leaving unanswered questions such as how many of the urothelial carcinoma specimens had coincident inflammation and what other medical conditions (and medications) were present in this patient population. Additionally, the paper does not spend significant time discussing the potential reasons for false positives and false negatives. These issues are compounded by a short follow-up period (only 12 months) with participating patients.

In the most recent paper published for Pacific Edge Diagnostics by Lotan and colleagues in 2022, Cxbladder Triage and Detect were "enhanced" by the addition of a different test methodology, droplet digital PCR, adding a different approach to detecting urothelial carcinoma: identification of genetic variants associated with urothelial carcinoma.44 This approach was based on the premise that the six variants (called single nucleotide polymorphisms or SNPs in the paper) are either acquired as mutations during the carcinogenesis of urothelial carcinoma or already present as inherited variants in the patient's germline DNA, representing a higher risk of urothelial carcinoma. However, it is known that these SNPs can also appear in the context of other malignancies (such as papillary renal cell carcinoma), which is not addressed by Lotan and colleagues.183,184 Moreover, as mentioned in the paper's discussion, the presence of these SNPs in urine may not coincide with clinically detectable (e.g., cystoscopically visible) carcinoma. This could lead to further confusion with false positives, especially because the PPV of Cxbladder tests tends to be very low. If numerous false positive results in Cxbladder are accepted as an inherent trait of the test, providers may not be as vigilant in closely following patients with a positive Cxbladder result after a negative cystoscopy. In addition, providers may not search for other malignancies (e.g., papillary renal cell carcinoma) as a potential cause for the "false positive" Cxbladder result. Another weakness of the 2022 study was seen in the differences between cohorts. Notably, the six SNPs alone were less sensitive for urothelial carcinoma in the Singapore cohort (66%) than in the United States cohort (83%). This could indicate differences in the genetic etiology of urothelial carcinoma in different populations, meaning that the six SNPs may not be as representative in populations not evaluated in this study. Furthermore, while the study claimed to evaluate multiple ethnicities, the paper does not disclose which ethnicities were evaluated or the number of patients from each ethnicity.

Each new Cxbladder test builds on its predecessors, often utilizing the same specimens from prior studies in its test validations and performance characterizations. Moreover, the insufficient assessment of potential confounding factors is perpetuated through these studies. For instance, looking at the assessment of non-urothelial cancer across all major published uRNA and Cxbladder studies, we see the following:

  • Holyoake 2008: 33 undefined cancers were noted39
  • O’Sullivan 2012: Seven other malignancies (undefined) were noted, all in patients with urothelial carcinoma40
  • Kavalieris 2015: Non-urothelial neoplasms were not discussed (study population included 517 patients from the O’Sullivan 2012 study)40,41
  • Breen 2015: Non-urothelial neoplasms were not discussed (study population included patients from the O’Sullivan 2012 study)40,45
  • Kavalieris 2017: Non-urothelial neoplasms were not discussed (same patient population as Lotan 2017)42,46
  • Lotan 2017: Non-urothelial neoplasms were not discussed (same patient population as Kavalieris 2017)42,46
  • Konety 2019: In some subpopulations, patients with history of prostate or renal cell carcinoma were excluded from the study; otherwise, non-urothelial neoplasms were not discussed (study population included patients from O'Sullivan 2012 and Kavalieris/Lotan 2017)40,42,46,49
  • Davidson 2019: Other non-bladder malignancies and neoplasms were identified (but not subclassified) in a study evaluating hematuria; notably, Cxbladder-Triage was positive in most of these other malignancies (seven of nine total) and neoplasms (two of three total)52
  • Koya 2020: Non-urothelial neoplasms were not discussed50
  • Davidson 2020: Other non-bladder malignancies and neoplasms were identified in the study but data was not presented to allow association of these other malignancies and neoplasms with positive or negative results from Cxbladder.53
  • Raman 2021: In some subpopulations, patients with history of prostate or renal cell carcinoma were excluded from the study; otherwise, non-urothelial neoplasms were not discussed (study population included patients from O'Sullivan 2012 and Konety 2019)40,43,49
  • Lotan 2022: In some subpopulations, patients with history of prostate or renal cell carcinoma were excluded from the study; otherwise, non-urothelial neoplasms were not discussed44
  • Li 2023: Some patients (24 of 92 patients) noted to have “other cancers;” except for one mention (a patient with breast cancer who missed their nine month follow-up due to conflict with breast cancer treatment), other types of cancers are not described or significantly discussed51

There are numerous potential malignancies that can contribute to the genetic composition of urine (e.g., renal cell cancer, bladder cancer, prostate cancer). However, by using only 40 unspecified neoplasm specimens, 33 of which were tested only for uRNA (not Cxbladder), the validation from Pacific Edge Diagnostics underrepresents potentially confounding variables. This underrepresentation is further substantiated by data found in studies not performed or funded by Pacific Edge Diagnostics. In an independent study from Davidson and colleagues in 2019 performed on patients with hematuria, seven of nine malignant prostate or kidney lesions were discovered in patients with a positive Cxbladder-Triage result.52 False positive Cxbladder-Triage results were also seen in a majority of patients without bladder cancer who were instead diagnosed with radiation cystitis, vascular prostate, bladder stones, anticoagulation-related bleeding, post-TURP bleeding, or urethral stricture. No cause for hematuria was found in 225 patients, 137 of whom had a positive Cxbladder-Triage result. This 2019 study, as well as others, indicates that Cxbladder is less sensitive for detecting smaller, low-grade malignancies, making it unlikely that the false positives represent urothelial malignancies below the limit of detection of cystoscopy and other conventional evaluations.52

The exclusion criteria further weakened the development and validation of Cxbladder tests. Consistent with the aforementioned confounding variables of other urinary tract cancers and metastases, the Cxbladder studies generally excluded patients with a history of prostate or renal cancer; however, these exclusions were not applied consistently across studies funded and/or published by Pacific Edge Diagnostics. The validation studies for Cxbladder also typically excluded inflammatory disorders such as pyelonephritis and active urinary tract infections, as well as known causes of hematuria like bladder or renal calculi or recent manipulation of the genitourinary tract (e.g., cystoscopy).40-43, 46 In the first published validation study for Cxbladder, the authors stated that the additional fifth RNA marker, CXCR2, "was predicted to reduce the risk of false-positive results in patients with acutely or chronically inflamed urothelium."40 However, in this same study, the authors went on to exclude patients with "documented urinary tract infection." Fortunately, publications from other sources such as Davidson and colleagues in 2019 provide insight into how Cxbladder tests (namely Cxbladder Triage) perform under these benign conditions.52 Davidson and colleagues found that false positives were seen in a majority of patients in whom the underlying etiology of hematuria was radiation cystitis, vascular prostate, bladder stones, anticoagulation-related bleeding, post-TURP bleeding, or urethral stricture. False positives were also seen in 10 of 23 (43%) patients with urinary tract infection and 8 of 10 (80%) patients with "other" inflammatory etiologies. Altogether, in the inflammatory category of the study, over half (59%) of patients with an inflammatory etiology of their hematuria received a false positive result from the Cxbladder Triage (CxbT) test. In a subsequent study by Davidson and colleagues in 2020, "approximately 10% of patients (85 of 884) required a repeat CxbT assay because [of] quality control failures, mainly caused by interference of inflammatory products or a large number of white blood cells."53

The few systematic reviews and meta-analyses that included Cxbladder tests were mixed in their assessment of this line of tests. Chou and colleagues in 2015 reviewed only one of the Cxbladder papers (O'Sullivan 2012) and concluded that the strength of evidence for the study was graded low.40,54,55 In 2022, Laukhtina and colleagues performed a more involved assessment of Cxbladder tests, particularly Cxbladder Monitor, concluding that it had "potential value in preventing unnecessary cystoscopies."56 The authors also determined that there was "not enough data to support" using Cxbladder Triage and Detect in the "initial diagnosis setting." Laukhtina and colleagues did acknowledge that their study had several potential limitations, which included the "absence of data on blinding to pathologist and urologists" and the inability to "perform subgroup analyses for HG [High Grade urothelial carcinoma] recurrence detection only." However, it should be noted that for both the 2015 and 2022 systematic reviews and meta-analyses, the evaluation of the actual clinical features of each Cxbladder study was relatively superficial, focusing more on the statistical values and less on the quality of the studies and the designs underlying those values.54-56

In conclusion, the Cxbladder line of tests all suffer from the foundational problem of insufficient validation in potentially confounding clinical circumstances, including non-urothelial malignancies and inflammatory conditions of the urinary tract. Cxbladder also demonstrates several population biases, including early papers with a strong bias towards male patients of European ancestry. The majority of Cxbladder papers do not disclose the PPV and number of false positives of their tests. Cxbladder tests generally have low PPVs (down to 15-16% as seen in Konety et al. 2019) and high numbers of false positives (also in Konety et al. 2019, there were 464 false positive results compared to 86 true positive results).49 These values are significant in that false test results, particularly false positives, can lead to patient anxiety and distress as well as unnecessary follow-up procedures prompted by an inaccurate result. Most of the primary literature regarding Cxbladder test development and performance is funded by, if not directly written by, the tests' parent company, Pacific Edge Diagnostics. This conflict of interest must be taken into account when reviewing these papers. Finally, and most importantly, due to the insufficient representation of confounding factors in the validation populations, the Cxbladder tests have not been adequately vetted in the context of the Medicare population. Given all of these findings, the Cxbladder line of tests is considered not medically reasonable and necessary for Medicare patients.
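As a simple check on the figures above, the PPV implied by the true positive and false positive counts reported in Konety et al. 2019 can be recomputed directly. The short Python sketch below is illustrative arithmetic using only the two counts already quoted; it adds no data beyond what is stated above.

    # Worked check of the positive predictive value (PPV) implied by the counts
    # quoted above from Konety et al. 2019 (86 true positives, 464 false positives).
    true_positives = 86
    false_positives = 464

    # PPV = TP / (TP + FP): the probability that a positive Cxbladder result
    # reflected urothelial carcinoma in this cohort.
    ppv = true_positives / (true_positives + false_positives)
    print(f"PPV = {ppv:.1%}")  # approximately 15.6%, consistent with the 15-16% range cited above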

ThyroSeq CRC, CBLPath, Inc, University of Pittsburgh Medical Center

ThyroSeq CRC is a prognostic test for malignant cytology that predicts the five-year likelihood of cancer recurrence (low, intermediate, or high risk) based on algorithmic synthesis of raw data from next generation sequencing (NGS) of DNA and RNA from 112 genes. Given that nodules proven to be malignant on fine needle aspiration (FNA) cytology are typically surgically resected, sometimes with coincident lymph node dissections as warranted, and the total features of the cancers are then assessed on permanent pathology of the resection specimen, having a prognostic test predicting risk of cancer recurrence on cytology before assessment of the entire resection seems preemptive. However, ThyroSeq CRC is proposed to direct the extent of surgery for Bethesda VI nodules, increasing the aggressiveness of surgery for more aggressive cancers. Therefore, ThyroSeq CRC must not only supply information that is not obtained through standard clinical and pathologic procedures prior to a resection, but also provide results that are subsequently confirmed on patient follow-up after the resection. Ultimately, a prognostic test should provide information that predicts the course of a patient's disease before therapy is implemented and thus informs future clinical management to preemptively reduce adverse outcomes. For a prognostic test to be clinically useful, it must ultimately improve patient outcomes.

In the first publication describing the evaluation of the ThyroSeq CRC test, a small population of patients (n=287) with differentiated thyroid cancer (DTC) was evaluated with the CRC prognostic algorithm, and each patient's molecular risk group (Low, Intermediate, High) was compared to their outcome in terms of distant metastases (DM) as identified through pathology or whole body scans with iodine-131.57 Patients were divided into two groups: control (n=225) and DM within five years (n=62). In the control group, precise numbers of how many patients fell into each CRC risk category were not supplied by the paper. Instead, the control group was further segregated into a subcategory of propensity-matched patients in which each DM-positive patient was compared with a control patient with similar demographic and pathologic characteristics, although the authors clearly state histologic subtype was not used to perform this propensity match. Using this propensity-matching technique, comparisons were provided between 53 DM-positive patients and 55 control patients. In this subgroup comparison, the DM-positive patients demonstrated more high risk scores (Low=1 patient; Intermediate=17 patients; High=35 patients) than the control patients (Low=28 patients; Intermediate=19 patients; High=8 patients). The authors felt this comparison was adequate to conclude that their "molecular profile can robustly and quite accurately stratify the risk of aggressive DTC defined as DM."

This study had numerous limitations and drew dramatic conclusions from a very small sample size that was poorly presented in the paper.57 The immediate issue with this study was the lack of transparency. Thyroid cancer is a complex category of malignancy that includes many different subtypes of cancer, each with a variety of behaviors depending on numerous demographic, clinical, and pathologic factors. Management of cancer patients is thus a multifactorial and interdisciplinary process that requires careful evaluation. The study from Yip and colleagues not only oversimplifies the descriptions of the patient populations, but the background data for each patient are not provided to allow for objective review by readers. We are not given crucial details such as key findings in pathology reports (mitoses, lymphovascular invasion, capsular invasion, histologic subtype of the cancer) or the number of patients with positive lymph nodes found during resection of the cancer. Instead, the patient demographics and molecular characteristics provided (Table 1) include simplifications such as generalized cancer types without subclassification (Papillary, Follicular, or Oncocytic) and non-specific metastatic locations (Bone, Lung, ">1" and Other). Additionally, the propensity-matched description table (Table 2) lists only mean age at diagnosis, mean tumor size, and gender ratio.

Yip and colleagues also did not provide significant insight into why some controls (n=8) were ranked as high risk while one patient with DM was categorized as low risk.57 The purpose of the intermediate risk category is unclear and concerningly unhelpful when the number of patients in this risk category was essentially the same between propensity-matched DM and control patients (n=17 versus n=19, respectively).

Ultimately, it was unclear how this test would be used in patient care.57 Given that the test is performed on cytology before resection, the authors conjectured their test could be used to guide extent of surgery (lobectomy versus total thyroidectomy) or help direct patients to therapeutic trials. However, these potential clinical utilities were not assessed in this paper.

In the second publication evaluating ThyroSeq CRC, Skaugen and colleagues performed a single-institution retrospective cohort study assessing 128 Bethesda V (suspicious for malignancy) cytology specimens.58 The study assessed both the ThyroSeq v3 diagnostic test and the ThyroSeq CRC test. For the CRC portion of the study, 100 specimens were assessed, with five excluded due to a benign diagnosis upon resection and three excluded due to concurrent metastatic disease discovered at resection. For the remaining 92 specimens, there was a mean follow-up of 51.2 months (about four years). The shortest follow-up time was less than one month, and the longest was 470 months (nearly 40 years). It must immediately be noted that the ThyroSeq CRC test claims to predict a five-year risk of DM, which means over half of the CRC-tested specimens (more than 46 specimens) had potentially inadequate follow-up to assess the core five-year prognostic claim. The importance of these follow-up times becomes even more evident when the authors draw conclusions about the prognostic power of the CRC's three risk categories: High, Intermediate, and Low. Distant metastases were identified in 12 of the 92 specimens: 6 of 11 specimens with a high risk result and 6 of 63 specimens with an intermediate risk result. The authors did not provide a deeper analysis of the five high risk specimens without DM, including no speculation as to why the test potentially misclassified these specimens. Additionally, the authors did not provide significant discussion of the meaning of the intermediate-risk result that was given to 66 of the 100 specimens tested. In the paper's conclusions, ThyroSeq CRC was again proposed as potentially helpful in deciding the extent of surgery required.
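To make the distribution of events above more concrete, the per-category DM rates implied by the quoted counts can be computed directly. The Python sketch below uses only the counts reported in the text (6 of 11 high risk and 6 of 63 intermediate risk specimens with DM); it is illustrative arithmetic, not additional study data.

    # Distant metastasis (DM) rates implied by the counts quoted above from
    # Skaugen and colleagues (illustrative arithmetic only).
    dm_by_risk = {
        "High": (6, 11),          # 6 of 11 high risk specimens developed DM
        "Intermediate": (6, 63),  # 6 of 63 intermediate risk specimens developed DM
    }

    for risk, (events, total) in dm_by_risk.items():
        print(f"{risk}: {events}/{total} = {events / total:.0%}")
    # High: 6/11 = 55%; Intermediate: 6/63 = 10%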

Much like the first paper, the second paper (Skaugen and colleagues) lacked data transparency, making further assessments by readers difficult.58 While Table 3 provided patient characteristics, surgical findings, and pathologic findings, all to a much greater extent than the first paper, readers were still unable to synthesize how data categories corresponded to each other (e.g., of the patients who received lymph node dissection, what subtypes of thyroid cancer were represented).

Ultimately, Skaugen and colleagues lacked sufficient follow-up to draw significant conclusions about the accuracy of the ThyroSeq CRC results.58 The paper, while data rich, was neither transparent nor thorough enough for readers to draw their own conclusions about the validity of the test. Moreover, the authors' conclusions regarding the prognostic test were overly simplified, such as highlighting the presence of DMs in some patients with intermediate and high risk results, and the absence of DMs in patients with low risk results, and considering this correlation to be significant. Finally, the actual use of ThyroSeq CRC in the clinical setting remains unclear based on the paper's discussion.

In the third publication, Liu and colleagues assessed their three-tier classification system (low-risk, intermediate-risk, and high-risk for recurrence) in the context of primary thyroid cancer recurrence after a primary thyroidectomy and subsequent initial oncologic therapy.59 Notably, the test name ThyroSeq CRC was never used in this paper, even though the three-tier system of risk stratification appeared to be the same. This raises a concern that the classification system used in this paper may not use the same methodology as the marketed ThyroSeq CRC. With that caveat, and for the purposes of this Analysis of Evidence, this third paper will be considered contributory to the body of literature evaluating ThyroSeq CRC.

From the methodology section of the publication alone, we can see immediate differences between this paper and the previous two papers described.57-59 Firstly, surgical specimens were permitted in the study, not just cytology specimens. This allowance of a non-cytology specimen type ("final surgical specimens," without specification of post-resection handling of tissue, formalin fixation versus fresh-frozen preservation) in a test presumably designed for cytologic specimens would require a separate validation of the test for the new specimen type. Validation for this change in pre-analytic procedure was not evidenced in this paper or in either of the two prior publications. Secondly, the study was not blinded due to its retrospective nature. Thirdly, in cases where multifocal cancer was identified, only samples from the "most aggressive biology" were selected for molecular testing; however, the paper does not define what constitutes "most aggressive biology." Fourthly, the study included patients with preoperative Bethesda I, II, III, and IV cytology as well as Bethesda V and VI cytology. This starkly contrasts with the inclusion criteria seen in the prior two studies. Overall, these methodologic differences between papers reduce the comparability of results between the three studies.

Data collection in this study from Liu and colleagues also differed from the previous two papers.57-59 For instance, Liu and colleagues recorded several details on the surgical and post-surgical treatments of the patients. These data included types of lymph node dissection (central versus lateral and prophylactic versus therapeutic), postoperative complications (e.g., hematoma, hypercalcemia, surgical site infection), and long-term complications (such as hypocalcemia and recurrent laryngeal nerve paresis). Several of these data categories were similar to those seen in Skaugen and colleagues' study, but differences in Liu's publication included post-operative details, grouping of several types of papillary thyroid cancer (such as tall cell variant) into a more general category ("Papillary, high risk"), evaluating only all-cause mortality (not substratifying into disease-specific mortality), and detailing AJCC prognostic stages. Note that, as mentioned above, there was a paucity of clinical and pathologic data provided for samples in the study from Yip and colleagues, and the data from Liu and colleagues were far more diverse than in that prior study.

The study followed patients for a median of 19 months (interquartile range [IQR] 10-31 months).59 None of the patients were followed for a total of five years, which means the data in this study are insufficient to substantiate the five-year prognostication claims of the ThyroSeq CRC test.

The above analysis captures only some of the issues identified with the study from Liu and colleagues.59 In fact, careful reading of the paper's discussion brings up numerous other "limitations" to the study, not already described above, as identified by the authors. While the authors' discussion remains upbeat, statements such as "how to manage the intermediate group?" draw attention to the novelty of this classification scheme and the uncertainty of how the results can impact patient care and outcome. While the authors suggest numerous ways their classifications can affect patient management, and even suggest that they use this test within their institution to guide decision-making, the lack of evidence demonstrating this prognostic test's clinical utility through carefully designed studies suggests that the test may not currently be adequately studied for use in patient care. Ultimately, despite the extensive data supplied, this paper still failed to adequately evaluate the clinical validity and utility of the prognostic three-tier system.

In summary, the validity of the ThyroSeq CRC test is not sufficiently supported by the three peer-reviewed papers identified. The three papers were exceptionally difficult to compare to each other due to differences in information provided, types of samples tested, and methodologies described. The clinical utility of the test is not significantly evaluated by any of the papers. Due to the inadequate quality of the papers and the insufficiency of data, this test does not have sufficient evidence to prove clinical reasonableness and necessity and will be considered non-covered in Medicare patients.

PancraGEN- Interpace Diagnostics

A 2006 patent described a topographic genotyping molecular analysis test (which would later become PathfinderTG and subsequently be renamed PancraGEN) for risk classification of pancreatic cysts and solid pancreaticobiliary lesions when first-line results are inconclusive.185 PancraGEN integrates the molecular results (loss of heterozygosity markers and oncogene variants) with a pathologist interpretation to provide four categories of risk (benign, statistically indolent, statistically higher-risk, or aggressive).

Topographic genotyping (also called integrated molecular pathology [IMP]) was created to integrate molecular and microscopic analyses when a definitive pathologic diagnosis or prognosis was inconclusive. Typically, investigation of a pancreatic cyst or solid pancreaticobiliary lesion is an interdisciplinary process that involves a battery of clinical evaluations including imaging, cytology, and, when applicable, cyst fluid analysis. Given the complexity of this work-up process, it is surprising that only a small number of PancraGEN studies compared its test results with the histology, cytology, and/or pathology of surgical biopsy specimens. Of the PancraGEN studies addressing pancreatic cysts, all three were retrospective in nature and contained significant limitations. The largest study, by Al-Haddad in 2015, analyzed 492 patients registered with the National Pancreatic Cyst Registry (NPCR).85 The majority of the patients reviewed for inclusion (n=1,732) did not meet the study inclusion criteria due to insufficient or inaccessible documentation, which resulted in many cases not meeting the follow-up threshold of ≥23 months. Researchers evaluated how well PancraGEN (PathfinderTG) and the 2012 Sendai International Consensus Guideline classification categorized patients with pancreatic cysts in terms of their chance of developing cancer. However, the publication from Al-Haddad and colleagues did not adequately address the validity of PancraGEN due to several shortcomings and limitations, including, but not limited to:

  • Data that would be used for the categorization of patients according to Sendai 2012 criteria were not specified for most patients, as the collection of information started prior to publication of the 2012 guidelines.
  • Only a small fraction of all enrolled patients (26%) met inclusion guidelines.
  • The study used only a retrospective design, without randomization or prognostic data.
  • All patients in the study had been scheduled for surgery, while typically not all patients with pancreatic cysts get surgery referrals.
  • The mean follow-up period for benign disease in this study was too short for firm conclusions to be made beyond three years (insufficient follow-up times).

Of note, during the study, the criteria for the test evolved and older cases on the registry had to be recategorized based on new criteria.

Two other retrospective studies, by Malhotra et al (2014) and Winner et al (2015), analyzed data from patients with pancreatic cysts in 2006 and 2012 who had surgical resection and analysis with PancraGEN (PathfinderTG).86,87 The study by Winner and colleagues had an extremely small cohort of 36 patients, 85% of whom were "white" with no other race/ethnicity reported. All patients in the study were scheduled for surgery even though not all patients with pancreatic cysts undergo surgery. Moreover, the authors were unable to include the majority of patients recruited because of the lack of final pathology results. The study by Malhotra and colleagues utilized 26 patients with no demographic characteristics reported and only three months of follow-up. In Malhotra (2014), no clinical validity outcomes such as sensitivity, specificity, or predictive values were calculated or reported. Both Malhotra et al (2014) and Winner et al (2015) were performed at single institutions with no blinding.

Al-Haddad and colleagues assessed clinical utility by describing how PancraGEN might provide incremental improvement over international consensus guidelines (Sendai [2006] and Fukuoka [2012]).85 Of the 289 patients who met the consensus criteria for surgery, 229 had a benign outcome. The PancraGEN test correctly classified 84% as benign and correctly categorized four out of six as high risk. A 2016 study by Loren and colleagues evaluated clinical utility by assessing the association between PancraGEN diagnoses and international consensus guidelines for the classification of intraductal papillary mucinous neoplasms and mucinous cystic pancreatic neoplasms.88 In the study, 491 patients were categorized as (1) "low-risk" or "high-risk" using the PancraGEN diagnostic algorithm; (2) meeting "surveillance" criteria or "surgery" criteria using consensus guidelines; and (3) having "benign" or "malignant" outcomes during clinical follow-up. Additionally, the real-world management decision was categorized as "intervention" if there was a surgical report, surgical pathology, chemotherapy, or positive cytology within 12 months of the index EUS-FNA, and otherwise categorized as "surveillance." A 2016 study by Kowalski and colleagues analyzed false negatives from the NPCR to examine clinical utility.89 The study hypothesized that PancraGEN might appropriately classify some pancreatic cysts that had been misclassified by consensus guidelines, but the number of cases where PancraGEN and consensus guidelines disagreed was small, limiting the value of these results.

The clinical validity of PancraGEN has been addressed in several retrospective studies. Most evaluated performance characteristics of PancraGEN for classifying pancreatic cysts according to the risk of malignancy without comparison to current diagnostic algorithms. The best evidence regarding incremental clinical validity comes from the report from the NPCR, which found that PancraGEN has slightly lower sensitivity (83% vs. 91%), similar NPV (97% vs. 97%), but better specificity (91% vs. 46%) and PPV (58% vs. 21%) than consensus guidelines.85
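For reference, the performance measures compared above are all derived from a standard 2x2 classification of test results against outcomes. The Python sketch below simply restates those definitions; the counts used are hypothetical placeholders for illustration and are not taken from the NPCR report.

    # Standard 2x2 test-performance definitions underlying the comparison above.
    # The example counts are hypothetical placeholders, not NPCR data.
    def test_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
        return {
            "sensitivity": tp / (tp + fn),  # malignant cases correctly called positive
            "specificity": tn / (tn + fp),  # benign cases correctly called negative
            "PPV": tp / (tp + fp),          # chance a positive result is truly malignant
            "NPV": tn / (tn + fn),          # chance a negative result is truly benign
        }

    # Hypothetical example: 20 malignant and 200 benign lesions.
    print(test_metrics(tp=17, fp=18, fn=3, tn=182))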

Throughout their publications, Interpace Diagnostics has indicated that the PancraGEN test is meant to support first-line testing, but no process for combining PancraGEN with consensus guidelines for decision making has been proposed, and the data reporting outcomes in patients where the PancraGEN and consensus guideline diagnoses disagreed were limited. There are no prospective studies with a simultaneous control population that prove PancraGEN can affect patient-relevant outcomes (e.g., survival, reduction in unnecessary surgeries). Moreover, the evidence reviewed does not demonstrate that PancraGEN has incremental clinical value in the prognosis of pancreatic cysts and associated cancer.

The evidence for the clinical validity of using PancraGEN to evaluate solid pancreaticobiliary lesions consists of three retrospective studies by Khosravi and colleagues (2018), Kushnir and colleagues (2018), and Gonda and colleagues (2017).90-92 One study assessed the ability of PancraGEN to classify solid pancreatic lesions, while the other two evaluated the classification of biliary strictures. Biliary strictures can be caused by solid pancreaticobiliary lesions but also by other causes such as pancreatitis or trauma. Additionally, the studies did not specify what percentage of patients with biliary stricture had solid pancreaticobiliary lesions. While the three retrospective studies noted that the use of cytology plus FISH plus PancraGEN increased sensitivity significantly, the incremental value of cytology plus FISH plus PancraGEN over cytology plus FISH is unclear. Interpace Diagnostics has indicated that PancraGEN is meant as an adjunct to first-line testing for pancreatic cysts but has not effectively tested or assessed how PancraGEN performs for solid pancreaticobiliary lesions. Therefore, the evidence reviewed does not demonstrate that PancraGEN has incremental clinical value for the diagnosis of solid pancreaticobiliary lesions.

Notably, there are no studies assessing the analytical validity of the PancraGEN test. Without such data, the technical performance of the test cannot be truly determined. A Technology Assessment and systematic review of PathfinderTG prepared for CMS found no studies that evaluated the analytic validity of LOH analyses in the Pathfinder framework as compared to a reference standard such as pathology reports or radiologic findings.91 The systematic review addressed questions about analytical validity, clinical validity, and clinical utility, but found no studies which "directly measured whether using LOH-based topographic genotyping with PancraGEN/[PathfinderTG] improved patient-relevant clinical outcomes." The review also found that the available studies had small sample sizes and methodological limitations and were all retrospective, with no prospective studies. To date, no study has been performed to identify how PancraGEN impacts patient outcomes such as reducing mortality from pancreatic cancer or improving survival.

In summary, the body of peer-reviewed literature concerning PancraGEN is insufficient to establish the analytic validity, clinical validity, and clinical utility of this test in the Medicare population. There is insufficient literature evidence to demonstrate that the topographic genotyping used in PancraGEN is an effective method to aid in the management of individuals with pancreatic cysts or solid pancreaticobiliary lesions when other testing methods are inconclusive or unsuccessful. There is also a lack of peer-reviewed evidence demonstrating that the use of topographic genotyping in the management of individuals with pancreatic cysts results in improved clinical outcomes. As such, this test is not currently considered medically reasonable and necessary and will not be covered for Medicare patients.

DecisionDx – Castle Biosciences

DecisionDx-Melanoma

In order to systematically evaluate such a large body of publications, we will discuss both test design and study design of DecisionDx-Melanoma in the context of what information is offered to providers and patients. Looking at an example report from the Castle Biosciences website, we can see that their test provides prognostic information based on GEP data alone (class assignment) and GEP data in combination with clinicopathologic data (i31-ROR and i31-SLNB).186 We will spend most of the discussion below focused on the class assignment portion of the results since most of the peer-reviewed literature focuses on this result alone and the i31-ROR and i31-SLNB were developed much more recently.

Fundamentally, DecisionDx-Melanoma is a GEP that analyzes 28 genes of interest (considered by the company to be significantly informative of melanoma prognosis) and anchors these 28 genes to three reference genes. As controls, the three reference genes should provide a consistent baseline across all types of melanoma and non-melanoma tissue. Unfortunately, the literature described in the Summary of Evidence did not provide insight into the consistency of expression of the three reference genes across tissue types and other pre-analytic variables (for instance, fixation time and age of formalin-fixed paraffin-embedded [FFPE] tissue). Of note, one of the reference genes (FXR1) serves as a gene of interest (not a control gene) in Castle Biosciences' DecisionDx-UM.187
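The general principle of anchoring genes of interest to reference (housekeeping) genes can be illustrated with a simplified normalization sketch. The Python snippet below is a generic, hypothetical example of reference-gene normalization in expression profiling; it is not Castle Biosciences' actual algorithm, gene list, or scoring model, and the gene names and Ct values are invented for illustration.

    # Generic illustration of reference-gene (housekeeping) normalization for an
    # expression profile. Hypothetical sketch only; not the DecisionDx-Melanoma
    # algorithm, gene set, or scoring model.
    from statistics import mean

    def normalize_expression(ct_targets: dict, ct_references: dict) -> dict:
        """Return delta-Ct values: each target gene's Ct minus the mean reference-gene Ct.

        A stable reference baseline across specimen types is what makes these
        normalized values comparable from sample to sample; if reference-gene
        expression itself varies with tissue composition or specimen age, the
        normalized values drift with it.
        """
        baseline = mean(ct_references.values())
        return {gene: ct - baseline for gene, ct in ct_targets.items()}

    # Invented Ct values for two hypothetical target genes and three reference genes.
    targets = {"GENE_A": 27.4, "GENE_B": 31.9}
    references = {"REF_1": 24.8, "REF_2": 25.1, "REF_3": 24.9}
    print(normalize_expression(targets, references))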

Gene expression profiles are founded on the principle of differential gene expression in a cell of interest, like a cancer cell, when compared to background cells, namely all other cells in the surrounding tissue. Tissues are composed of many different types of cells. Skin, for instance, contains a variety of cell types including melanocytes, keratinocytes, immune cells such as lymphocytes, structural cells such as fibroblasts and fibrocytes, hair-generating cells, and specialized glandular cells such as apocrine cells. Many different factors influence gene expression in a cell, including changes in the cell's surroundings (as is seen in the response to sun damage) and the cell's stage of development, such as whether a cell is part of a germinal layer (and mitotically active) or terminally differentiated. This means that even among cells of the same type (same lineage) the GEP can differ. Considering all of these complexities, the GEP of cells of interest can be difficult to untangle from the GEP of background cells.

In terms of DecisionDx-Melanoma, the test development publication from Gerami and colleagues did not adequately address these complexities inherent to GEPs.109 Many key questions were left unaddressed, such as: how did the relative quantity of tumor versus non-tumor cells affect test sensitivity, did lighter or darker skin tones affect the test outcome, did the test perform differently between histologically distinct melanomas such as acral lentiginous melanoma versus desmoplastic melanoma, and did the presence of sun damage affect test results? As a more specific example, we would expect the presence of tumor-infiltrating lymphocytes (TILs) to affect the outcome of a GEP test. Tumor-infiltrating lymphocytes change the composition of the background tissue, increasing the density of background cells. Additionally, TILs are thought to interact with tumor cells, which suggests tumor cells would respond with altered gene expression. These factors alone could influence the expression profile of the 28 genes in the DecisionDx-Melanoma test, but until this scenario is tested, its impact is unknown. In summary, understanding the potential pitfalls of GEP testing is critical for understanding the reliability, performance, and accuracy of a GEP test.

During the development of any test, consideration and documentation of pre-analytic variables is critical for establishing test accuracy and precision. For instance, RNA extracted from formalin-fixed, paraffin-embedded tissue is a challenging analyte. Of the major macromolecules (DNA, carbohydrates, lipids, and proteins), RNA is among the most fragile and unstable, meaning that adverse conditions such as delayed tissue fixation can result in its rapid degradation.188 When creating a valid clinical test (such as a GEP that uses RNA extracted from FFPE specimens), the material used to develop the test must be comparable to the material tested in the clinic. One cannot legitimately develop a test for blood using only urine, a test for females using only male specimens, or a test for breast cancer using only prostate cancer specimens. Similarly, a test developed using older, archived specimens cannot be assumed to represent newer, <1-year-old specimens without, at minimum, comparisons demonstrating that the specimens behave similarly in the test. In the case of DecisionDx-Melanoma, test development and validation utilized archival specimens up to 14 years old.109 The effect of aged material on RNA integrity was not thoroughly addressed (only a brief statement about quantity and quality assessed using the NanoDrop 1000 and Agilent Bioanalyzer 2100 was provided), and older material was not differentiated from newer material.

In a later study assessing the analytic validity of DecisionDx-Melanoma, Cook and colleagues presented data supporting the performance and reproducibility of their test.113 Per the paper introduction, Cook and colleagues stated that they performed their study in accordance with published guidelines, specifically the 2010 publication “Quality, Regulation and Clinical Utility of Laboratory-developed Molecular Tests” from the AHRQ and the 2011 publication “NCCN molecular testing white paper: effectiveness, efficiency, and reimbursement.”189,190 In the study, several points of concern were identified, a few of which are described below.

First, the RNA stability studies extracted RNA only once from each specimen, relying on downstream analysis of the same pool of RNA (kept at -80°C) to confirm analytic validity.113 This means that the study did not assess the processes of macrodissection and RNA extraction for reproducibility and reliability. Note that per the AHRQ, "if the assay incorporates an extraction step, reproducibility of the extraction step should be incorporated into the validation studies, and likewise for any other steps of the procedure."189

Second, while Cook and colleagues did try to evaluate the effect of FFPE block age on GEP testing, the study did not compare GEP results from the same tumor specimen at different time points.113 Instead, the study evaluated whether or not a GEP result could be obtained from an older FFPE block. Interestingly, Figure 3 in the paper diagrams test failure rates in yearly increments (for specimens aged up to four years) and then lumps together all data from specimens older than four years. Although 6,772 FFPE specimens were represented in Figure 3, the breakdown of how many specimens fell into each age category was not given. The origin and handling of these 6,772 specimens are also unclear; nowhere else in the paper are such large numbers (thousands versus hundreds) of specimens evaluated despite the apparent availability of 6,772 FFPE specimens. Not only does Figure 3 demonstrate an expected decline in the testability of older specimens, but it also highlights the quandary of using older, less reliable specimens to develop a test intended for clinical specimens that will invariably be under one year of age. Moreover, data regarding the measurement of RNA integrity (as was done in the Gerami publication from 2015) were not provided, even though this would be valuable for comparing specimens of different ages.109 Altogether, the evaluation represented by Figure 3 does not answer the question of whether an older specimen would have the same test result as a younger version of itself.

Cook and colleagues (along with other studies from Castle Biosciences) also failed to sufficiently address many other pre-analytic variables including, but not limited to:113

  • Protocols for central pathologic review of cohort specimens:
    • How many pathologists participated?
    • What specific features were evaluated in each slide?
    • How were discrepancies between the outside report and internal review handled?
  • Protocols for diagnosis of sentinel lymph nodes:
    • How many sections/levels were evaluated per lymph node?
    • Was immunohistochemistry used for every specimen to identify occult or subtle tumor deposits?
  • How much time passed between biopsy or wide excision and placing the specimen in fixative?
  • Was the fixation time (time in fixative before processing to FFPE) consistent for each specimen?
  • Was the same fixative (e.g., formalin) used for each specimen?
  • What was the time between tissue sectioning for slide creation and RNA extraction? Was this time consistent between different specimens?
  • When cDNA was "preamplified" prior to testing, was the process confirmed to amplify all relevant genes to the same, consistent degree, or were some amplifications more efficient than others?

These questions address known pitfalls in both the comparability of specimens and the integrity of extracted RNA. Moreover, even if some of the above questions were addressed during test development, the lack of transparency in the published literature prevents clear assessment of the integrity of that development.

In terms of study design, a prognostic test should ideally be evaluated in the context of the current standard of care. We would anticipate that a prognostic test for malignancy would be compared both against the best available prognostic standards and against real-world outcomes. Once accuracy is sufficiently established, proving clinical utility becomes crucial. One of the key factors in determining clinical utility is a test's impact on patient outcome. A test that does not improve patient outcomes does not have clinical utility for the purposes of Medicare coverage.

The initial assessment of newly diagnosed melanoma is complicated. For the primary melanoma alone, clinical and pathologic evaluations are critical for developing a proper plan of management for the patient. This plan must consider many factors both in the primary melanoma and in the surrounding clinical context, including exposure and family histories. The American Joint Committee on Cancer (AJCC) acts as an authority on the grading and staging of primary melanomas based on many clinical, radiologic, and pathologic factors.191,192 Additionally, the AJCC provides extensive prognostic data tied to the factors used in the grading and staging of melanomas. Many of these factors are assessed during pathologic evaluation and include histologic features such as tumor mitotic rate, surface ulceration, and Breslow depth. At the same time, it must be recognized that AJCC staging is only one consideration among a multitude of data points weighed by the clinical team when developing plans for patient management. For instance, melanoma subtype, which is not explicitly factored into AJCC scoring, can play a significant role in determining patient management. In general, the term "melanoma" represents a category of malignancies that actually comprises a spectrum of subtypes, each with its own etiologies, behaviors, and properties. For instance, acral lentiginous melanoma is known to be more aggressive than other subtypes of melanoma and to have a poorer prognosis.193 Without consideration of this subtype, a patient could be misclassified as having a less dangerous form of melanoma based on AJCC clinical and pathologic staging alone. For this reason, while AJCC staging is invaluable to patient management in melanoma, it does not represent the only clinical consideration in patient care.

The development and assessment of DecisionDx-Melanoma relied heavily on comparisons to AJCC clinical and pathologic staging and the factors used in these AJCC scores. Often, the authors of DecisionDx-Melanoma studies would focus primarily on a single factor, such as sentinel lymph node positivity, and compare the prognostic value of that factor to the prognostic value of DecisionDx-Melanoma. This strategy often set up false dichotomies, since in clinical practice a single prognostic factor such as sentinel lymph node biopsy is not considered in isolation from other clinical data. Even in more complicated, multifactorial comparisons, studies involving DecisionDx-Melanoma failed to account for the whole clinical and pathologic picture, sometimes evaluating only a limited number of the factors used in AJCC scoring when attempting to establish the prognostic validity of the test. This can be seen in the variability of demographic and clinical information provided from study to study. In general, most studies at least provided information on patient age, Breslow thickness, presence/absence of tumor ulceration, and AJCC clinical and pathologic stages. Conversely, most studies did not provide information regarding the subtype of melanoma, location of the primary tumor, presence/absence of a transected tumor base, and presence/absence of lymphovascular invasion. Moreover, none of the studies identified in the Summary of Evidence provided sufficient information to determine the interrelationships between demographic and clinicopathologic data points. For instance, despite knowing the count of patients with a specific subtype of melanoma, one could not further explore other characteristics within that subtype group, such as its average Breslow thickness or the AJCC clinical stages represented within it.116

Of all the clinicopathologic factors used in describing melanoma, Breslow thickness is central, critical to both AJCC clinical and pathologic staging. Measuring Breslow thickness requires histologic identification of both the surface of the melanoma and the deepest point of tumor growth. Obviously, transection of the tumor base during biopsy or wide excision would compromise the accurate measurement of Breslow thickness. Moreover, since AJCC pathologic staging is primarily based on Breslow thickness (with subgrouping currently based on presence or absence of ulceration), undermeasurement of Breslow thickness can dramatically affect both clinical and pathologic stage assignment. For instance, according to AJCC’s 8th edition, the cutoff between pathologic stage T2 and T3 tumors is a Breslow thickness of 2 mm. Looking at the T category alone, without consideration of nodal or metastasis status, a T2 melanoma could be AJCC clinical stage I or II depending on the presence or absence of ulceration.192 However, a T3 melanoma will always be at least a clinical stage II tumor. Undermeasurement of a T3 melanoma without ulceration would drop the melanoma at least one clinical stage, from II to I. While this seems to be a minor technicality, several DecisionDx-Melanoma studies draw conclusions through comparison of clinical stage I and stage II melanomas (such as Zager, 2018).117 Interestingly, most of the DecisionDx-Melanoma studies do not present data on how many specimens were transected at the base of the tumor, although this metric does appear in three more recent studies.110,126,194 In fact, the rate of transection in the more recent studies is striking, seen in 39.5%, 34.9%, and 53.29% of all specimens, respectively.110,126,194 It is further notable that, even when the tumor base was transected, the specimens were still used in the papers’ analyses and conclusions.
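
To illustrate how a small undermeasurement of Breslow thickness can shift a melanoma across this staging boundary, the sketch below encodes only the relationships cited above (the 2 mm cutoff between T2 and T3 in the AJCC 8th edition, and the statement that a node-negative, non-metastatic T2 is clinical stage I or II depending on ulceration while a T3 is at least stage II). It is a simplified illustration with hypothetical values, not a complete or authoritative staging algorithm.

```python
# Simplified sketch: how undermeasuring Breslow thickness across the 2 mm T2/T3
# cutoff can drop a non-ulcerated, node-negative, non-metastatic melanoma from
# clinical stage II to stage I. Thickness values below are hypothetical.

def t_category(breslow_mm: float) -> str:
    """Assign a T category from Breslow thickness alone (AJCC 8th edition cutoffs)."""
    if breslow_mm <= 1.0:
        return "T1"
    if breslow_mm <= 2.0:
        return "T2"
    if breslow_mm <= 4.0:
        return "T3"
    return "T4"

def clinical_stage(t_cat: str, ulcerated: bool) -> str:
    """Minimal stage grouping for node-negative, non-metastatic disease, per the
    relationships described in the text above."""
    if t_cat in ("T3", "T4"):
        return "II"                       # a T3 (or thicker) lesion is at least clinical stage II
    if t_cat == "T2":
        return "II" if ulcerated else "I" # a T2 is stage I or II depending on ulceration
    return "I"

true_thickness_mm = 2.3   # an actual T3 lesion
measured_mm = 1.8         # undermeasured because the tumor base was transected

print(clinical_stage(t_category(true_thickness_mm), ulcerated=False))  # -> II
print(clinical_stage(t_category(measured_mm), ulcerated=False))        # -> I
```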

Limited patient follow-up proved to be another critical weakness in many of the DecisionDx-Melanoma studies. DecisionDx-Melanoma advertises its results as five-year prognosticators for risk of recurrence, metastases, and/or death.186 At a baseline, data supporting this assertion must account for a minimum of five years of patient follow-up, even if the patient experiences a recurrence event. If the patient experiences a local recurrence, they may still develop distant metastases and/or pass away from the melanoma within the five-year time frame, both of which would be relevant to the DecisionDx-Melanoma prognostics. Of all the studies reviewed in the Summary of Evidence, only one study monitored all of its patients for a minimum of five years.115 Even the publication from Gerami and colleagues in 2015 that described the development and validation of DecisionDx-Melanoma reported use of specimens with well under five years of follow-up.109 Their training cohort included patients with as little as 0.06 years of follow-up (with a reported median of 6.8 years for all training specimens), and their validation cohort included patients with as little as 0.5 years of follow-up (with a reported median of 7.3 years for all validation specimens).109 Overall, studies demonstrated median follow-ups of patients without disease recurrence that ranged from 1.5 to 7.5 years.115,117

Publications involving DecisionDx-Melanoma also lacked consistent definitions from study to study. This definitional inconsistency was well captured by Marchetti and colleagues in the Melanoma Prevention Working Group, which convened in 2020 to discuss prognostic GEP tests for melanoma.108 For instance, the definition of “melanoma recurrence,” as used in the outcome metrics of Disease-Free Survival (DFS) or Recurrence-Free Survival (RFS), differed from study to study. In Hsueh (2017), RFS was defined by regional and distant metastases, while in Zager (2018) RFS included local metastases in addition to regional and distant metastases and excluded sentinel lymph node positivity.115,117 Podlipnik (2019) used the term DFS, defining it by “relapse” without further detail, and Keller (2019) used the term RFS without providing a clear definition at all.121,122 A majority of studies indicated that the outcome risk estimates represented the first five years following a primary diagnosis of melanoma, with only a few studies reducing the risk estimate to cover only the first three years. Note again that only the one study from Zager and colleagues in 2018 included patients who were all followed for a minimum of five years.117

As evidenced in the previous paragraphs, there are several weaknesses in both the quality and thoroughness of data collection in DecisionDx-Melanoma studies, as well as methodologic and definitional inconsistencies. In the conclusions and results, we see the potential consequences of these weaknesses. For instance, the PPV and NPV of these studies are particularly striking. Not all papers used these metrics when evaluating their results, but when PPV and NPV were provided, their values changed dramatically from study to study. This finding is particularly relevant when examining the latest version of the DecisionDx-Melanoma report.186 The DecisionDx-Melanoma report supplies a three- to four-tier prognostic classification of melanomas (Classes 1A, 1B/2A, and 2B). In one of the interpretation tables in the DecisionDx-Melanoma report, the classes (1A, 1B/2A, and 2B) are paired with the AJCC clinical stages (I, II, or III) to provide five-year risk estimates for three potential outcomes: Melanoma-Specific Survival (MSS), Distant Metastasis-Free Survival (DMFS), and RFS. According to the report, this interpretation table is referenced to the publication by Greenhaw and colleagues in 2020.107 If we look at Greenhaw’s meta-analysis, we find that PPV and NPV are only provided for RFS (PPV 46%; NPV 92%) and DMFS (PPV 35%; NPV 93%). Remember that PPV and NPV represent the number of true positive or true negative results divided by the number of all positive or all negative results, respectively, both true and false. This means that, for patients with a positive result, 35 of 100 patients will experience a distant metastasis (positive for this event) within five years of their original melanoma diagnosis and 65 will not experience a distant metastasis within five years of their original diagnosis. Negative predictive value provides the opposite reassurance, namely that a negative result means 93 of 100 patients will NOT experience a distant metastasis within five years, while 7 of 100 patients will still experience a distant metastasis within five years. The reason the concept of PPV and NPV is described here in such basic detail is to highlight the risks of relying on the Class designation provided by DecisionDx-Melanoma to prognosticate patient outcome. The concern for test accuracy is further compounded when one considers that the PPV and NPV differ from study to study. The PPVs for DMFS in studies described in the Summary of Evidence ranged from 14.6% to 62%.117,124 For reference, the only study with a minimum of five years of follow-up for all patients recorded a PPV of 40% for DMFS.117
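
For reference, the standard definitions underlying the figures quoted above can be written out explicitly, together with the Greenhaw DMFS values worked per 100 positive or negative test calls:

```latex
% TP/FP/TN/FN = true/false positives and negatives
\mathrm{PPV} = \frac{TP}{TP + FP} \qquad \mathrm{NPV} = \frac{TN}{TN + FN}

% Greenhaw DMFS figures quoted above, per 100 positive or negative calls:
\mathrm{PPV} = 0.35 \;\Rightarrow\; TP = 35,\ FP = 65 \qquad
\mathrm{NPV} = 0.93 \;\Rightarrow\; TN = 93,\ FN = 7
```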

Several studies were published addressing the clinical utility of DecisionDx-Melanoma. All of these studies focused on how DecisionDx-Melanoma would impact patient management, typically by measuring to what degree, and how, the test result changed patient management. Several of the studies utilized hypothetical scenarios and polled providers (ranging from trainee residents to practicing attendings) on how they would respond to these scenarios with and without DecisionDx-Melanoma results.195-198 These studies did not assess real-world interactions of the test with patient management. Two studies prospectively measured changes in physician behavior and patient management when providers were given DecisionDx-Melanoma results for their patients.184,199 However, as defined for the purposes of this LCD, a clinically utile test must positively affect patient outcome. While these six studies altogether demonstrated changes in physician behavior and/or patient management when DecisionDx-Melanoma was used, none of the studies demonstrated how this positively impacted patient outcome, i.e., increased patient survival. A demonstration of clinical utility could be accomplished in a clinical trial in which overall survival is compared between patients managed with the test results and patients managed without them. To date, such a trial has not been performed for DecisionDx-Melanoma.

Finally, as discussed in an editorial review by WH Chan, MS, and H Tsao, MD, PhD, in 2020, the management of cutaneous melanoma has changed dramatically within the past few years.102 Prognostication plays less of a role in patient management when other factors are used to determine predictive (therapy-related) outcomes. For instance, sentinel lymph node biopsy status is used to determine whether a patient should receive adjuvant systemic therapy. Targeted sequence analyses for specific gene mutations (such as BRAF V600E) can now inform clinicians on which targeted therapy would most benefit their patients. This changing landscape appears to be recognized by Castle Biosciences, which most recently added clinicopathologic algorithmic prognostic results to its test.186 Unfortunately, there is currently insufficient peer-reviewed literature to establish the clinical validity and utility of these new features: two papers as of the writing of this LCD, both of which are validation papers, one for i31-SLNB and the other for i31-ROR.110,111 Without more published literature, including clinical trials, the i31-SLNB and i31-ROR cannot be considered clinically valid or utile for Medicare patients.

It is beyond the scope of this LCD to provide a comprehensive analysis of all individual papers reviewed. While the major concerns regarding the peer-reviewed literature for DecisionDx-Melanoma are well characterized above, other concerns remain that are not detailed in this Analysis of Evidence but should still be addressed. Examples include:

  • Inadequate information regarding patients with hereditary melanoma disorders
  • Inadequate study of the effects of therapies on measured outcomes
  • Inadequate information comparing melanomas with different mutational profiles (e.g., tumors with BRAF V600E)

In summary, the body of peer-reviewed literature concerning DecisionDx-Melanoma is insufficient to establish the analytic validity, clinical validity, and clinical utility of this test in the Medicare population. As such, this test does not currently meet medically reasonable and necessary criteria for Medicare patients and will not currently be covered.

DecisionDx-SCC

In 2020, Wysong and colleagues described a 40-gene GEP test (which would later become DecisionDx-SCC) for risk classification in cases of cutaneous squamous cell carcinoma (cSCC).134 Their biomarker study aimed to validate a GEP test that could assess the risk of metastasis in cSCC. Fundamentally, DecisionDx-SCC is a GEP that analyzes 34 genes of interest (considered by Castle Biosciences to be significantly informative of the prognosis of cSCC) and six control genes.

Gene expression profiles are founded on the principle of differential gene expression in a cell of interest, such as a cancer cell, compared to background cells, namely all other cells in the surrounding tissue. Tissues are composed of many different types of cells. Skin, for instance, contains a variety of cell types including melanocytes, keratinocytes, immune cells such as lymphocytes, structural cells such as fibroblasts and fibrocytes, hair-generating cells, and specialized glandular cells such as apocrine cells. Many different factors influence gene expression in a cell, including changes in the cell’s surroundings (as is seen in a response to sun damage) and the cell’s stage of development, such as whether the cell is part of a germinal layer (and mitotically active) or terminally differentiated. This means that even among cells of the same type (same lineage), the gene expression profile can differ. Considering all of these complexities, a GEP from cells of interest can be difficult to untangle from the GEP of background cells.

In terms of DecisionDx-SCC, the test development publication from Wysong and colleagues did not adequately address these complexities inherent to GEPs.134 Many key questions were left unaddressed, such as: how did the relative quantity of tumor versus non-tumor cells affect test sensitivity, did lighter or darker skin tones affect the test outcome, did the test perform differently between histologically distinct cSCCs, and did the presence of sun damage affect test results? As a more specific example, we would expect the presence of tumor-infiltrating lymphocytes (TILs) to affect the outcome of a GEP test. TILs change the composition of the background tissue, increasing the density of background cells. Additionally, TILs are thought to interact with tumor cells, which would suggest that tumor cells respond with differential gene expression. These factors alone could influence the expression profile of the 40 genes in the DecisionDx-SCC test, but until this scenario is tested, its impact is unknown. In summary, understanding the potential pitfalls of GEP testing is critical for understanding the reliability, performance, and accuracy of a GEP test.
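
To make the dilution concern concrete, the toy calculation below (not Castle Biosciences’ algorithm; all numbers are hypothetical) shows how the bulk signal measured from a specimen is a weighted mixture of tumor and background cells, so a falling tumor fraction, for example from a small tumor infiltrated by dense TILs, pulls the measured signal toward the background level.

```python
# Toy mixture model (hypothetical values, not the DecisionDx-SCC algorithm):
# the bulk signal measured for one gene is a weighted average of tumor and
# background expression, weighted by the fraction of tumor cells in the specimen.

def bulk_expression(tumor_expr: float, background_expr: float, tumor_fraction: float) -> float:
    """Measured bulk expression for a gene, given the tumor cell fraction."""
    return tumor_fraction * tumor_expr + (1.0 - tumor_fraction) * background_expr

# Hypothetical gene expressed 5x higher in tumor cells than in background cells.
tumor_expr, background_expr = 5.0, 1.0
for fraction in (0.8, 0.4, 0.1):
    signal = bulk_expression(tumor_expr, background_expr, fraction)
    print(f"tumor fraction {fraction:.0%}: bulk signal {signal:.2f}")
# tumor fraction 80%: bulk signal 4.20
# tumor fraction 40%: bulk signal 2.60
# tumor fraction 10%: bulk signal 1.40
```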

An additional validation study from Borman and colleagues was published in 2022.135 This paper primarily focused on whether or not the DecisionDx-SCC test would provide “actionable class call outcomes.” They did not provide any information regarding patient follow-up or accuracy of the class call outcomes. While they did test for replication and precision, the sample sizes for these assessments were considerably smaller than the overall cohort used in the study. Additionally, they (as seen in other studies from Castle Biosciences) failed to sufficiently address many other pre-analytic variables including, but not limited to:

  • Protocols for central pathologic review of cohort specimens:
    • How many pathologists participated?
    • What specific features were evaluated in each slide?
    • How were discrepancies between outside report and internal review handled?
  • How much time passed between biopsy or wide excision and placing the specimen in fixative?
  • Was the fixation time (time in fixation before processing to FFPE) consistent for each specimen?
  • Was the same fixative (e.g., formalin) used for each specimen?
  • What was the time between tissue sectioning for slide creation and RNA extraction? Was this time consistent between different specimens?
  • When cDNA was “preamplified” prior to testing, was the process confirmed to amplify all relevant genes to the same, consistent degree, or were some amplifications more efficient than others?

These questions address known pitfalls in both the comparability of specimens and the integrity of extracted RNA. Moreover, even if some of the above questions were addressed during test development, the lack of transparency in the published literature prevents clear assessment of the integrity of the test development.

The additional observational studies produced by Castle Biosciences included three cohort studies and a case series, all published between 2020 and 2022.136-138

Farberg and colleagues published a paper, using the same dataset as the validation study, aiming to assess whether the DecisionDx-SCC test could be integrated into the existing NCCN guidelines for the management of patients with cSCC.136 Another paper, from Aaron and colleagues, also used samples from the same dataset as the original validation study.137 This paper assessed whether DecisionDx-SCC could predict recurrence and “provide independent prognostic value to complement current risk assessment methods.” The third cohort study, from Ibrahim and colleagues, attempted to clinically validate the DecisionDx-SCC test.138 In general, these studies had the same issues as those outlined above, and in the two studies that assessed rates of recurrence, patient follow-up data were not reported. The studies stated that “cases had a documented regional or distant metastasis, or documented follow-up of at least three years post-diagnosis of the primary tumor without a metastatic event” but did not give any further information.136-138 DecisionDx-SCC advertises its results as three-year prognosticators for risk of recurrence, metastases, and/or death. At a baseline, data supporting this assertion must account for a minimum of three years of patient follow-up, even if the patient experiences a recurrence event. If the patient experiences a local recurrence, they may still develop distant metastases and/or die from the cSCC within the three-year time frame, both of which would be relevant to the DecisionDx-SCC prognostics.

Au and colleagues described two cases of cSCC, one with fatal recurrence and one without recurrence, and the retrospective results of DecisionDx-SCC testing on tissue samples from each case.139 While the recurrent case was classified as high risk of recurrence and the non-recurrent case was classified as low risk, two cases are insufficient to provide meaningful insight into the generalizability of the test in the broader population. Additionally, there is no evidence that the test results would have changed management decisions for these cases or eventual patient outcomes.

Aside from the paper from Au and colleagues, papers addressing clinical utility included surveys, a panel review, and literature reviews.128-133 These papers had a number of shortcomings and limitations, including, but not limited to:

  • A high likelihood of selection and response bias in the surveys
  • No description of survey participant recruitment methods
  • An expert panel composed of Castle Bioscience employees, consultants, and researchers
  • Reviews that cited the authors’, or their colleagues’, previous work without acknowledgement
  • Lack of methods descriptions or, in review papers, inclusion criteria

Notably, there are no significant studies assessing patient outcomes or clinician treatment decisions in a real-world setting following a DecisionDx-SCC test. Without such data, clinical utility cannot be determined. For example, a demonstration of clinical utility could be accomplished in a clinical trial where patients’ overall survival is compared between patients tested with DecisionDx-SCC and patients managed without this test. To date, such a trial has not been performed for DecisionDx-SCC.

In summary, the body of peer-reviewed literature concerning DecisionDx-SCC is insufficient to establish the analytic validity, clinical validity, and clinical utility of this test in the Medicare population. As such, this test does not currently meet medically reasonable and necessary criteria for Medicare patients and will not currently be covered.

UroVysion fluorescence in situ hybridization (FISH) – Abbott

In order to systematically evaluate the available research surrounding the uFISH test for bladder cancer, it is important to first discuss the test within the context of the information provided to clinicians and patients. The product page on Abbott Molecular’s website states that the uFISH test “is designed to detect aneuploidy for chromosomes 3, 7, 17, and loss of the 9p21 locus via fluorescence in situ hybridization (FISH) in urine specimens from persons with hematuria suspected of having bladder cancer.”200 UroVysion fluorescence in situ hybridization (uFISH) is often compared to urine cytology, and the manufacturer specifically states that the uFISH test has “greater sensitivity in detecting bladder cancer than cytology across all stages and grades.” A positive result is defined by the manufacturer as four or more cells out of 25 showing gains of two or more chromosomes (3, 7, or 17) in the same cell, or 12 or more out of 25 cells having no 9p21 signals detected. However, not all bladder cancers have these alterations, and these chromosomal changes can also occasionally be seen in healthy tissues and other types of cancer, as noted by Bonberg and colleagues (2014) and Ke and colleagues (2022).140,161 Additionally, genomic profiles and chromosomal abnormalities can vary between low-grade and high-grade bladder cancer, which can make the detection of low-grade cancer less likely.
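
As a concrete reading of the manufacturer’s positivity rule quoted above, the sketch below encodes its two criteria (four or more of 25 cells with gains of two or more of chromosomes 3, 7, or 17 in the same cell, or 12 or more of 25 cells with no 9p21 signal). The cell data structure and the interpretation of a “gain” as more than two signals for a chromosome are illustrative assumptions, not Abbott’s scoring software.

```python
# Sketch of the manufacturer's stated positivity rule (illustrative only; the
# cell representation and the reading of "gain" as >2 signals are assumptions).

def ufish_positive(cells: list[dict]) -> bool:
    """cells: 25 enumerated cells, each like
    {"signals": {"chr3": 3, "chr7": 2, "chr17": 4}, "signals_9p21": 1}."""
    assert len(cells) == 25, "the rule is defined over 25 enumerated cells"

    # Cells with gains (more than two signals) of two or more of chromosomes 3, 7, 17.
    polysomy_cells = sum(
        1 for c in cells
        if sum(1 for n in c["signals"].values() if n > 2) >= 2
    )
    # Cells with no 9p21 signals detected (homozygous 9p21 loss).
    loss_9p21_cells = sum(1 for c in cells if c["signals_9p21"] == 0)

    return polysomy_cells >= 4 or loss_9p21_cells >= 12
```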

As noted by Lavery and colleagues in 2017 and Mettman and colleagues in 2021, much of the literature assessing uFISH uses a variety of definitions for positivity.150,157 Lavery aimed to overcome these shortcomings by using a strict definition for a positive uFISH test, combining the manufacturer’s definition with the addition of “tetraploidy in at least 10 morphologically abnormal cells.”157 Tetraploidy can be seen in normal cell division and in other non-cancerous processes, so this addition was made to account for false-positive results from the uFISH test. The blinded study described in the paper found no significant difference between uFISH and urine cytology, with sensitivities of 67% and 69% and specificities of 72% and 76%, respectively. Additionally, the authors found that inclusion of the tetraploidy requirement in their definition effectively reduced false-positive rates, but also determined that some bladder cancer tumors do not have the chromosomal alterations that uFISH assesses (30% of the tumors tested by the authors). Mettman similarly attempted to increase the accuracy of the uFISH test by including tetraploidy in their positivity definition.150 The authors reported considerably different results from the paper by Lavery, with the sensitivity of uFISH ranging from 58-95% depending on the definition used, and a specificity of 99% for each definition. However, this study specifically evaluated the test in patients suspected of having pancreatobiliary stricture malignancies, which could account for the differences seen between the two papers.

Sassa and colleagues (2019) compared the uFISH test to urine cytology in 113 patients prior to nephroureterectomy and 23 volunteers with no history of urothelial carcinoma.153 In cases of high-grade urothelial carcinoma (HGUC), the sensitivity, specificity, PPV, and NPV for detection by urinary cytology were 28.0%, 100.0%, 100.0%, and 31.6%, respectively. For uFISH, these values were 60.0%, 84.0%, 93.8%, and 41.2%, respectively. In cases of low-grade urothelial carcinoma (LGUC), however, the results were significantly worse, with sensitivities for both UroVysion and urine cytology of only 30%.

UroVysion fluorescence in situ hybridization (uFISH) has also been assessed as a prognostic test for the recurrence of bladder cancer and as a means of identifying recurrence in patients sooner. A paper from Guan and colleagues in 2018 evaluated the value of uFISH as a prognostic risk factor for bladder cancer recurrence and survival in patients with upper tract urothelial cancer (UTUC).155 One hundred fifty-nine patients in China received a uFISH test prior to surgery and were then monitored for recurrence. While the authors did indicate that there was a relationship between uFISH results and recurrence, the results were non-significant (p=.07). Liem and colleagues (2017) conducted a prospective cohort study to evaluate whether uFISH can be used to identify recurrence early during treatment with Bacillus Calmette-Guerin (BCG).156 During the study, three bladder washouts at different time points during treatment (t0 = week 0, pre-BCG; t1 = 6 weeks following transurethral resection of bladder tumor [TURBT]; t2 = 3 months following TURBT) were collected for uFISH from patients with bladder cancer treated with BCG. The authors found no significant association between recurrence and a positive uFISH result at t0 or t1 but found that a positive uFISH result at t2 was associated with a higher risk of recurrence. Additionally, in 2020, Ikeda and colleagues published a paper that aimed to evaluate the relationship between uFISH test results following TURBT and subsequent intravesical recurrence.158 They indicated that uFISH test positivity was a prognostic indicator for recurrence following TURBT. However, the recurrence rate in patients with two positive uFISH tests was only 33.3%, and in patients with one positive uFISH test (out of two tests total) the recurrence rate was only 16.5%.

Limited patient follow-up was a repeated weakness in papers evaluating uFISH to detect or predict recurrence. For example, the paper from Guan had a median follow-up of 27 months (range: 3-55 months), the paper from Liem had a median follow-up of 23 months (range: 2-32 months), and the paper from Ikeda had a median follow-up of 27 months (range: 1-36.4 months).155,156,160 These ranges indicate that at least one patient was followed for only one month, and, by definition, half of all patients were followed for no longer than the median follow-up time. This limited follow-up means that cases of recurrence were likely overlooked in the studies. Even in cases where shorter follow-up may have been due to the early detection of recurrence, lack of continued follow-up could result in overlooking a patient with reduced survival following a recurrence; this additional information would be relevant to the uFISH prognostics.

Other observational studies identified included two cohort studies from Nagai and colleagues (2019) and Gomella and colleagues (2017), a case-control study from Freund and colleagues (2018), and a cross-sectional study from Todenhöfer and colleagues (2014).152,154,158,159 Each of these studies reported similar results and limitations to the papers described above. Additionally, Breen and colleagues (2015)45 evaluated uFISH in a comparative study with other tests used to detect urothelial carcinoma in urine. The other tests included Cxbladder Detect, cytology, and NMP22. The study utilized five cohorts of patients, only one of which had all four tests evaluated for the entire cohort. Data from the five cohorts were evaluated and integrated, with several different imputation analyses utilized to fill in missing test values and create a “new, imputed, comprehensive dataset.” The authors report that before imputation uFISH had a sensitivity of 40% (the lowest of the four tests) and a specificity of 87.3% (the second lowest of the four tests). Across several different imputation methodologies, similar comparative sensitivities and specificities were seen, leading to the conclusion that the imputed data sets were valid, with the best-performing imputation methodology being the 3NN model. In this 3NN model, uFISH had considerably lower sensitivity than the other three tests and lower specificity than two of the three tests.
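
For readers unfamiliar with the imputation approach referenced above, the sketch below shows the general class of method (k-nearest-neighbor imputation with k = 3, i.e., “3NN”) using scikit-learn; it is not the authors’ implementation, and the small matrix of test results is entirely hypothetical.

```python
# Illustrative 3-nearest-neighbor imputation of missing test results (hypothetical
# data; not the implementation used by Breen and colleagues).
import numpy as np
from sklearn.impute import KNNImputer

# Rows = patients, columns = four hypothetical test results (1 = positive,
# 0 = negative, np.nan = test not performed for that patient).
results = np.array([
    [1.0, 0.0, np.nan, 1.0],
    [0.0, np.nan, 0.0, 0.0],
    [1.0, 1.0, 1.0, np.nan],
    [0.0, 0.0, 0.0, 0.0],
    [np.nan, 1.0, 1.0, 1.0],
    [1.0, 0.0, 1.0, 1.0],
])

imputer = KNNImputer(n_neighbors=3)         # "3NN": average the 3 most similar patients
completed = imputer.fit_transform(results)  # missing entries filled in
print(completed)
```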

In recent years, other authors have conducted reviews and meta-analyses in order to better address the validity and utility of uFISH, and other urinary biomarkers in general. In 2022, Zheng and colleagues published a meta-analysis and review that assessed the prognostic value of uFISH to detect recurrence in the surveillance of non-muscle invasive bladder cancer (NMIBC).141 They identified 15 studies from 2005-2019 that met their inclusion criteria and in their meta-analysis determined that the pooled sensitivity of uFISH in detecting recurrence was 68% (95% CI:58-76%) and the pooled specificity was 64% (95% CI: 53-74%).

Sciarra and colleagues (2019) conducted a systematic review to evaluate the diagnostic performance of urinary biomarkers for the initial diagnosis of bladder cancer.146 The review identified 12 studies addressing uFISH, with a combined sample size of 5,033 uFISH test results. The mean sensitivity was 64.3% and the median was 64.4%, with a range of 37-100%. Additionally, the mean specificity was 88.4% and the median was 91.3%, with a range of 48-100%.

Another recent paper identified was from Soputro and colleagues (2022), who conducted a literature review and meta-analysis to evaluate the diagnostic performance of urinary biomarkers to detect bladder cancer in primary hematuria.145 The authors identified only two studies assessing uFISH that met their inclusion criteria. The pooled sensitivity and specificity of uFISH in the identified studies were 0.712 and 0.818, respectively. The authors noted that the “current diagnostic abilities of the FDA-approved biomarkers remain insufficient for their general application as a rule out test for bladder cancer diagnosis and as a triage test for cystoscopy in patients with primary hematuria.”145

Sathianathen and colleagues also conducted a literature review and meta-analysis to evaluate the performance of urinary biomarkers in the evaluation of primary hematuria.148 The authors identified only one paper addressing uFISH that met their inclusion criteria; that paper found uFISH comparable to the other biomarker tests evaluated. However, with only a single qualifying paper, the findings regarding uFISH could not be properly assessed.

The most recent meta-analysis identified was written by Papavasiliou and colleagues (2023) who assessed the diagnostic performance of urinary biomarkers potentially suitable for use in primary and community care settings.143 The authors identified 10 studies addressing the diagnostic performance of uFISH between 2000 and 2022. These studies had a wide range of sensitivities (0.38-0.96) but a narrower range of specificities (0.76-0.99).

Three additional literature reviews were identified from Bulai and colleagues (2022), Miyake and colleagues (2018), and Nagai and colleagues (2021). Each of these papers noted significant issues with the literature supporting these biomarkers in general, and uFISH in particular, but the reviews also lacked unambiguous inclusion criteria, search methods, and other information necessary to validate their assessments.142,144,147

Only two identified papers significantly addressed the clinical utility of the uFISH test: Guan (2018) and Meleth (2014).149,155 Guan noted that they did not find any association between a positive uFISH test and survival; however, as noted above, limited follow-up was a significant shortcoming of their study.155 Meleth and colleagues conducted a review of the available literature and were unable to find any papers meeting their inclusion criteria that directly assessed patient survival, physician decision-making, or downstream health outcomes in relation to uFISH test results.149 This lack of information regarding clinical utility is notable, and without studies assessing for improvement in patient outcomes in a real-world setting, the evidence supporting the uFISH test for use in the Medicare population is severely lacking.

It is also important to note that no studies were identified establishing that uFISH can accurately distinguish between urothelial carcinoma and other cancers or non-cancer urological conditions. As noted above, the specific chromosomal changes that uFISH uses to identify urothelial carcinoma have been identified in non-cancerous tissues and other types of carcinomas. This notable gap in the identified research includes a lack of details or definitions for non-urothelial cancers, many of which can involve or shed cells into the urinary system, including prostate cancers, renal cancers, and metastatic or locally invasive cancers from other organs. Given that the chromosomal changes uFISH uses to identify urothelial carcinoma can also be found in other malignancies and non-malignancies, and that their identification in urine may not coincide with clinically detectable (e.g., cystoscopically visible) carcinoma, confusion could arise from false positives, especially since the PPV of uFISH tests tends to be very low. If numerous false-positive uFISH results are accepted as an inherent trait of the test, providers may not be as vigilant in closely following patients with a positive uFISH result after a negative cystoscopy. In addition, providers may not search for other malignancies as a potential cause of the “false positive” uFISH result.

It is beyond the scope of this LCD to provide comprehensive analysis of all individual papers reviewed. However, the major concerns regarding peer-reviewed literature for uFISH are well represented above.

In summary, the body of peer-reviewed literature concerning UroVysion FISH is insufficient to establish the analytic validity, clinical validity, and clinical utility of this test in the Medicare population. As such, this test does not currently meet medically reasonable and necessary criteria for Medicare patients and will not currently be covered.

Colvera – Clinical Genomics

In April 2015, Pedersen and colleagues published a validation paper in which they described a blood test that would later be named Colvera.162 The test was designed to identify two methylated genes, namely branched-chain amino acid transaminase 1 (BCAT1) and ikaros family zinc finger protein 1 (IKZF1). Clinical Genomics had previously identified both genes as being important in screening for colorectal cancer (CRC). Their study used methylation-specific PCR assays to measure the level of methylated BCAT1 and IKZF1 in DNA extracted from plasma obtained from 144 colonoscopy-confirmed healthy controls and 74 CRC cases. The authors found that their test was positive in 77% of cancer cases and 7.6% of controls. This study, however, failed to sufficiently address many pre-analytic variables, such as the protocols for pathologic review and diagnosis.

Later that same year, another validation paper (also led by Pedersen) was published.161 This cohort study included both prospective and retrospective methods to collect plasma samples from 2,105 volunteers and reported a test sensitivity of 66% (95% CI: 57%-74%). For CRC stages I-IV, the respective positivity rates were 38% (95% CI: 21%-58%), 69% (95% CI: 53%-82%), 73% (95% CI: 56%-85%), and 94% (95% CI: 70%-100%). Specificity was 94% (95% CI: 92%-95%) in all 838 non-neoplastic pathology cases and 95% (95% CI: 92%-97%) in those with no colonic pathology detected (n = 450). It is important to note that case diagnosis was performed by one independent physician and that there was no control of colonoscopy or pathology procedures. The authors stated that this was due to their aim “to assess marker performance relative to outcomes determined in usual clinical practice.”

An additional validation paper was published by Murray and colleagues in 2017, which assessed both the analytic and clinical validity of the Colvera blood test.164 The authors reported using archived samples from the previous study from Pedersen and colleagues (n=2,105 samples), but used only a subset of these archived samples (n=222 specimens, 26 with cancer).162,163 The authors did not describe the selection criteria for these samples, namely whether sample selection was randomized or why the majority of the archived specimens were not used. Murray and colleagues found that the Colvera test had good reproducibility and repeatability, with a reported sensitivity of 73.1% and specificity of 89.3%. In addition to the questions regarding sample selection, other questions were left unanswered in the paper, including, but not limited to:

  • Does the precision of the test vary in different stages of cancer?
  • Does treatment (such as chemotherapy/radiation) impact the precision of the test?
  • For apparent false positives, would a longer follow-up reveal them to be true positives?
  • In general, would serial sampling or longitudinal data impact the precision estimates of the test?

An additional paper published in 2018 by Murray and colleagues sought to establish the clinical validity of the Colvera test.170 In the paper, the authors tested patients post-surgery (median of 2.3 months after surgery) and followed them to establish whether or not recurrence was detected. Median follow-up for recurrence was 23.3 months, with an IQR of 14.3-29.5 months. Twenty-three participants were diagnosed with recurrence, but the Colvera test was positive in 28 participants. It should be noted that the cancer treatment varied considerably between cases, even between patients with a positive Colvera test result and those with a negative result. Only 61% of patients with a positive Colvera result completed their initial course of treatment, while 87% of patients with a negative result completed the initial course of treatment. The authors state that this was due “to either patients declining ongoing therapy, or due to comorbidities or complications precluding a full course of treatment.”170 This could have significantly confounded the results, given the higher likelihood of recurrence in patients who did not receive a full course of treatment compared to those who did. Additionally, while the median follow-up was 23.3 months, half of the patients were followed for no longer than the median, and without long-term follow-up, additional cases of recurrence were likely missed.

Five other papers, all cohort studies, were identified that assessed the clinical validity of the Colvera test, in particular, Colvera’s performance compared to carcinoembryonic antigen (CEA) and/or fecal immunochemical tests (FIT).165-167,169,171 These papers from Clinical Genomics found the sensitivity of Colvera to be 62-68% and the specificity to be 87-97.9%, better than the results for both CEA and FIT in the same studies.

Young and colleagues (2016) assessed 122 patients being monitored for recurrent CRC (28 of whom had confirmed recurrence) to determine whether Colvera or CEA was more accurate.165 The study obtained a blood sample only within a window extending from 12 months prior to three months following verification of a patient’s recurrence status. This method of determining test accuracy was problematic, in particular because the follow-up lengths varied considerably between patients. In patients with confirmed recurrence, the median follow-up was 28.3 months, with an IQR of 21.9-41.0 months. In patients without confirmed recurrence, the median follow-up was only 17.3 months, with an IQR of 12.0-29.2 months. This indicates that the majority of “confirmed” cases of no recurrence were followed for less time than the median follow-up in recurrent cases. Without an adequate length of follow-up, it is certainly possible that cases of recurrence were missed. Additionally, while the authors did report some longitudinal data (the concordance of test results in the same patient taken at different times), those data were limited to only 30 cases out of the total 122. Of the cases that did have longitudinal data, multiple cases were reported to have false-positive test results. When combined with the insufficient follow-up already discussed, the likelihood that results labeled false positive were actually true positives increases considerably.

Musher and colleagues (2020) and Symonds and colleagues (2020) also compared Colvera to CEA for detecting recurrent CRC.166,167 Musher, similar to the paper from Young (2016), also had short follow-up periods and insufficient longitudinal data (median follow-up was 15 months, range: 1-60 months).166,167 Symonds (2020), however, did obtain relatively more longitudinal data and longer follow-up periods, and showed that Colvera could return a positive result months prior to imaging confirmation.167 However, without any assessment of the impact of test results on clinical outcomes, the utility of the test cannot be ascertained. Also, in the papers from Young, Musher, and Symonds, CEA sensitivity was considerably lower than normally reported in other literature (32%, 48%, and 32%, respectively).165-167 While not a direct reflection on the validity or utility of Colvera, it is important to note this discrepancy since the authors were comparing Colvera to the CEA test.

The two additional cohort studies evaluating Colvera, Symonds and colleagues (2016) and Symonds and colleagues (2018), had similar findings and shortcomings as the three studies described above, with test sensitivities of 62% in both papers and similar study designs.169,171

Finally, the paper from Cock and colleagues (2019) assessed the performance of both Colvera and FIT testing in the detection of sessile serrated adenomas/polyps (SSPs).168 For this study, the authors used the same samples that were used in Symonds and colleagues (2016).169 While the paper addressed pre-analytic variables and other shortcomings more thoroughly than the previous studies discussed, the results do not support the use of Colvera for the detection of SSPs. Forty-nine SSPs were identified during the colonoscopies of 1,403 participants who were also tested with the Colvera test. In those patients with SSPs, the Colvera test had a sensitivity of only 8.8%, and when combined with FIT, the sensitivity increased only to 26.5%.

Notably, there are no studies assessing patient outcomes or clinician treatment decisions in a real-world setting following a Colvera test. Without such data, clinical utility cannot be determined. One of the key factors in determining clinical utility is a test’s impact on patient outcomes. For example, a demonstration of clinical utility could be accomplished in a clinical trial where patients’ overall survival is compared between patients tested with Colvera and patients managed without this test. To date, such a trial has not been performed for Colvera.

In general, papers assessing the validity of the Colvera test for CRC have a number of shortcomings, including short follow-up times, insufficient longitudinal data, insufficient description of study methodology, and a failure to sufficiently address important pre-analytic variables. Additionally, no paper has been published addressing the clinical utility of Colvera; a test that has not been shown to improve patient outcomes is not clinically useful for the purposes of Medicare coverage.

In summary, the body of peer-reviewed literature concerning Colvera is insufficient to establish the analytic validity, clinical validity, and clinical utility of this test in the Medicare population. As such, this test does not currently meet medically reasonable and necessary criteria for Medicare patients and will not currently be covered.

PancreaSeq® Genomic Classifier, Molecular and Genomic Pathology Laboratory, University of Pittsburgh Medical Center

Due to the lack of peer-reviewed literature identified, and thus a lack of peer-reviewed evidence detailing analytic and clinical validity and clinical utility, this test is currently non-covered for the Medicare population.

Proposed Process Information

Synopsis of Changes
Changes Fields Changed
N/A
Associated Information
Sources of Information
Bibliography
Open Meetings
Meeting Date Meeting States Meeting Information
N/A
Contractor Advisory Committee (CAC) Meetings
Meeting Date Meeting States Meeting Information
N/A
MAC Meeting Information URLs
N/A
Proposed LCD Posting Date
Comment Period Start Date
Comment Period End Date
Reason for Proposed LCD
Requestor Information
This request was MAC initiated.
Requestor Name Requestor Letter
N/A
Contact for Comments on Proposed LCD

Coding Information

Bill Type Codes

Code Description


N/A

Revenue Codes

Code Description


N/A

CPT/HCPCS Codes


N/A

ICD-10-CM Codes that Support Medical Necessity

Group 1

Group 1 Paragraph:

N/A

Group 1 Codes:

N/A

N/A

ICD-10-CM Codes that DO NOT Support Medical Necessity

Group 1

Group 1 Paragraph:

N/A

Group 1 Codes:

N/A

N/A

Additional ICD-10 Information

General Information

Associated Information

Please refer to the related Local Coverage Article: Billing and Coding: Genetic Testing for Oncology (A59125) for documentation requirements, utilization parameters and all coding information as applicable.

Sources of Information

N/A

Bibliography

This bibliography presents those sources that were obtained during the development of this policy. The Contractor is not responsible for the continuing viability of Website addresses listed below.

  1. Healthcare Fraud Prevention Partnership. Genetic Testing: Fraud, Waste, & Abuse. White Paper. July 2020. HFPP Genetic Testing FWA White Paper (cms.gov). Accessed March 15, 2023.
  2. FDA-NIH Biomarker Working Group. BEST (Biomarkers, EndpointS, and other Tools) Resource [Internet]. Silver Spring (MD): Food and Drug Administration (US); 2016-. Co-published by National Institutes of Health (US), Bethesda (MD).
  3. The Genetics of Cancer. National Cancer Institute website. https://www.cancer.gov/about-cancer/causes-prevention/genetics. Published October 12, 2017. Accessed January 5, 2022 to April 7, 2022.
  4. Biomarker Testing for Cancer Treatment. National Cancer Institute website. https://www.cancer.gov/about-cancer/treatment/types/biomarker-testing-cancer-treatment. Published October 5, 2017. Accessed January 5, 2022 to April 7, 2022.
  5. Califf RM. Biomarker definitions and their applications. Exp Biol Med (Maywood). 2018;243(3):213-221. doi:10.1177/1535370217750088.
  6. CMS.gov Centers for Medicare and Medicaid Service National Coverage Determination (NCD): 90.2 Decision Memo. March 2018.
  7. NCI Dictionary of Cancer Terms Definition of Biomarker. National Cancer Institute. https://www.cancer.gov/publications/dictionaries/cancer-terms/def/biomarker. Accessed November 14, 2022.
  8. Youngson RM. Collins Dictionary: Medicine. Glasgow, Scotland: HarperCollins; 2004.
  9. NCI Dictionary of Cancer Terms Definition of Surveillance. National Cancer Institute. https://www.cancer.gov/publications/dictionaries/cancer-terms/def/surveillance. Accessed November 15, 2022.
  10. Talking Glossary of Genomic and Genetic Terms. National Human Genome Research Institute. Talking Glossary of Genetic Terms | NHGRI (genome.gov). Accessed March 16, 2023.
  11. CMS.gov Centers for Medicare and Medicaid Service National Coverage Determination (NCD): 90.2 Decision Memo. January 2020.
  12. NCI Dictionary of Cancer Terms Definition of Circulating Tumor DNA. National Cancer Institute. https://www.cancer.gov/publications/dictionaries/cancer-terms/def/circulating-tumor-dna. Accessed March 16, 2023.
  13. Joseph L, Cankovic M, Caughron S, et al. The Spectrum of Clinical Utilities in Molecular Pathology Testing Procedures for Inherited Conditions and Cancer: A Report of the Association for Molecular Pathology. J Mol Diagn. 2016;18(5):605-619. doi:10.1016/j.jmoldx.2016.05.007.
  14. Genomic Sequencing Procedures. In: CPT 2023 Professionals Edition. Chicago, IL: American Medical Association; 2022:645.
  15. Miller-Keane Encyclopedia and Dictionary of Medicine, Nursing, and Allied Health, Seventh Edition. © 2003 by Saunders.
  16. Laboratory Developed Tests. U.S. Food and Drug Administration. Laboratory Developed Tests | FDA. Accessed March 16, 2023.
  17. NCI Dictionary of Cancer Terms Definition of Liquid Biopsy. National Cancer Institute. https://www.cancer.gov/publications/dictionaries/cancer-terms/def/liquid-biopsy. Accessed November 15, 2022.
  18. NCI Dictionary of Cancer Terms Definition of Minimal Residual Disease. National Cancer Institute. https://www.cancer.gov/publications/dictionaries/cancer-terms/def/minimal-residual-disease. Accessed November 14, 2022.
  19. Multianalyte Assays with Algorithmic Analyses (MAAAs). In: CPT 2023 Professionals Edition. Chicago, IL: American Medical Association; 2022:649.
  20. NCI Dictionary of Cancer Terms Definition of Neoplasm. National Cancer Institute. https://www.cancer.gov/publications/dictionaries/cancer-terms/def/neoplasm. Accessed November 18, 2022.
  21. NCI Dictionary of Cancer Terms Definition of Next-Generation Sequencing. National Cancer Institute. https://www.cancer.gov/publications/dictionaries/cancer-terms/def/next-generation-sequencing. Accessed November 14, 2022.
  22. Somatic In: CPT 2023 Professionals Edition. Chicago, IL: American Medical Association; 2022:616.
  23. NCI Dictionary of Cancer Terms Definition of Tumor Mutational Burden. National Cancer Institute. https://www.cancer.gov/publications/dictionaries/cancer-terms/def/tumor-mutational-burden. Accessed November 15, 2022.
  24. ClinGen. Explore the clinical relevance of genes & variants. https://www.clinicalgenome.org. Published May 2021. Updated January 2, 2022. Accessed January 5, 2022 to February 2023.
  25. National Comprehensive Cancer Network (NCCN). Biomarkers Compendium. https://www.nccn.org/compendia-templates/compendia/biomarkers-compendium. Accessed January 5, 2022 to February 2023.
  26. Chakravarty D, Gao J, Phillips SM, et al. OncoKB: A Precision Oncology Knowledge Base. JCO Precision Oncology. 2017;(1):1-16. doi:10.1200/po.17.00011.
  27. Institute of Medicine (US) Committee to Advise the Public Health Service on Clinical Practice Guidelines, Field MJ, Lohr KN, eds. Clinical Practice Guidelines: Directions for a New Program. Washington (DC): National Academies Press (US); 1990.
  28. Development and Update of Guidelines. National Comprehensive Cancer Network website. https://www.nccn.org/guidelines/guidelines-process/development-and-update-of-guidelines. Accessed January 5, 2022 to April 7, 2022.
  29. Birkeland ML, McClure JS. Optimizing the Clinical Utility of Biomarkers in Oncology: The NCCN Biomarkers Compendium. Arch Pathol Lab Med. 2015;139(5):608-611. doi:10.5858/arpa.2014-0146-RA.
  30. Poonacha TK, Go RS. Level of scientific evidence underlying recommendations arising from the National Comprehensive Cancer Network clinical practice guidelines. J Clin Oncol. 2011;29(2):186–191. doi:10.1200/JCO.2010.31.6414.
  31. Chengappa M, Desai A, Go R, Poonacha T. Level of Scientific Evidence Underlying the National Comprehensive Cancer Network Clinical Practice Guidelines for Hematologic Malignancies: Are We Moving Forward? Oncology (Williston Park). Jul 16, 2021;35(7):390-396. doi:10.46883/ONC.2021.3507.390. PMID:34270186.
  32. Genetic Database Recognition Decision Summary for ClinGen Expert Curated Human Variant Data. US Food & Drug Administration. https://www.fda.gov/media/119313/download. Accessed January 5, 2022 to April 7, 2022.
  33. Hunter JE, Irving SA, Biesecker LG, et al. A standardized, evidence-based protocol to assess clinical actionability of genetic disorders associated with genomic variation. Genet Med. Dec 2016;18(12):1258-1268. doi:10.1038/gim.2016.40.
  34. FDA Recognition of Public Human Genetic Variant Databases. US Food & Drug Administration. https://www.fda.gov/medical-devices/precision-medicine/fda-recognition-public-human-genetic-variant-databases. Published online October 7, 2021. Accessed January 5, 2022 to April 7, 2022.
  35. Strande NT, Riggs ER, Buchanan AH, et al. Evaluating the Clinical Validity of Gene-Disease Associations: An Evidence-Based Framework Developed by the Clinical Genome Resource. The American Journal of Human Genetics. Jun 1, 2017;100(6):895-906. doi:10.1016/j.ajhg.2017.04.015.
  36. Berg JS, Foreman AK, O'Daniel JM, et al. A semiquantitative metric for evaluating clinical actionability of incidental or secondary findings from genome-scale sequencing. Genet Med. 2016;18(5):467-475. doi:10.1038/gim.2015.104.
  37. Genetic Database Recognition Decision Summary for OncoKB. US Food & Drug Administration. https://www.fda.gov/media/152847/download. Accessed January 5, 2022 to April 7, 2022.
  38. FDA Fact Sheet. US Food & Drug Administration. https://www.oncokb.org/levels#version=FDA_NGS. Updated March 29, 2022. Accessed January 5, 2022 to April 7, 2022.
  39. Holyoake A, O'Sullivan P, Pollock R, et al. Development of a multiplex RNA urine test for the detection and stratification of transitional cell carcinoma of the bladder. Clin Cancer Res. Feb 1, 2008;14(3):742-9. doi:10.1158/1078-0432.CCR-07-1672. PMID:18245534.
  40. O'Sullivan P, Sharples K, Dalphin M, et al. A multigene urine test for the detection and stratification of bladder cancer in patients presenting with hematuria. J Urol. Sep 2012;188(3):741-7. doi:10.1016/j.juro.2012.05.003. Epub Jul 19, 2012. PMID:22818138.
  41. Kavalieris L, O'Sullivan PJ, Suttie JM, et al. A segregation index combining phenotypic (clinical characteristics) and genotypic (gene expression) biomarkers from a urine sample to triage out patients presenting with hematuria who have a low probability of urothelial carcinoma. BMC Urol. Mar 27, 2015;15:23. doi:10.1186/s12894-015-0018-5. PMID:25888331; PMCID:PMC4391477.
  42. Kavalieris L, O’Sullivan P, Frampton C, et al. Performance Characteristics of a Multigene Urine Biomarker Test for Monitoring for Recurrent Urothelial Carcinoma in a Multicenter Study. J Urol. Jun 2017;197(6):1419-1426. doi:10.1016/j.juro.2016.12.010. Epub Dec 14, 2016. PMID:27986532.
  43. Raman JD, Kavalieris L, Konety B, et al. The Diagnostic Performance of Cxbladder Resolve, Alone and in Combination with Other Cxbladder Tests, in the Identification and Priority Evaluation of Patients at Risk for Urothelial Carcinoma. J Urol. Dec 2021;206(6):1380-1389. doi:10.1097/JU.0000000000002135. Epub Aug 5, 2021. PMID:34348469; PMCID:PMC8584223.
  44. Lotan Y, Raman JD, Konety B, et al. Urinary Analysis of FGFR3 and TERT Gene Mutations Enhances Performance of Cxbladder Tests and Improves Patient Risk Stratification. J Urol. Dec 30, 2022:101097JU0000000000003126. doi:10.1097/JU.0000000000003126. Epub ahead of print. PMID:36583640.
  45. Breen V, Kasabov N, Kamat AM, et al. A holistic comparative analysis of diagnostic tests for urothelial carcinoma: a study of Cxbladder Detect, UroVysion® FISH, NMP22® and cytology based on imputation of multiple datasets. BMC Med Res Methodol. May 12, 2015;15:45. doi:10.1186/s12874-015-0036-8. PMID:25962444; PMCID:PMC4494166.
  46. Lotan Y, O'Sullivan P, Raman JD, et al. Clinical comparison of noninvasive urine tests for ruling out recurrent urothelial carcinoma. Urol Oncol. Aug 2017;35(8):531.e15-531.e22. doi:10.1016/j.urolonc.2017.03.008. Epub Mar 31, 2017. PMID:28366272.
  47. Darling D, Luxmanan C, O'Sullivan P, Lough T, Suttie J. Clinical Utility of Cxbladder for the Diagnosis of Urothelial Carcinoma. Adv Ther. May 2017;34(5):1087-1096. doi:10.1007/s12325-017-0518-7. Epub Mar 24, 2017. PMID:28341930; PMCID:PMC5427134.
  48. Lough T, Luo Q, O'Sullivan P, et al. Clinical Utility of Cxbladder Monitor for Patients with a History of Urothelial Carcinoma: A Physician-Patient Real-World Clinical Data Analysis. Oncol Ther. Jun 2018;6(1):73-85. doi:10.1007/s40487-018-0059-5. Epub Apr 19, 2018. PMID:32700139; PMCID:PMC7359999.
  49. Konety B, Shore N, Kader AK, et al. Evaluation of Cxbladder and Adjudication of Atypical Cytology and Equivocal Cystoscopy. Eur Urol. Aug 2019;76(2):238-243. doi:10.1016/j.eururo.2019.04.035. Epub May 16, 2019. PMID:31103391.
  50. Koya M, Osborne S, Chemaslé C, Porten S, Schuckman A, Kennedy-Smith A. An evaluation of the real world use and clinical utility of the Cxbladder Monitor assay in the follow-up of patients previously treated for bladder cancer. BMC Urol. Feb 11, 2020;20(1):12. doi:10.1186/s12894-020-0583-0. PMID:32046687; PMCID:PMC7014779.
  51. Li KD, Chu CE, Patel M, Meng MV, Morgan TM, Porten SP. Cxbladder Monitor testing to reduce cystoscopy frequency in patients with bladder cancer [published online ahead of print, Mar 1, 2023]. Urol Oncol. 2023;S1078-1439(23)00009-1. doi:10.1016/j.urolonc.2023.01.009.
  52. Davidson PJ, McGeoch G, Shand B. Inclusion of a molecular marker of bladder cancer in a clinical pathway for investigation of haematuria may reduce the need for cystoscopy. N Z Med J. Jun 21, 2019;132(1497):55-64. PMID:31220066.
  53. Davidson PJ, McGeoch G, Shand B. Assessment of a clinical pathway for investigation of haematuria that reduces the need for cystoscopy. N Z Med J. Dec 18, 2020;133(1527):71-82. PMID:33332329.
  54. Chou R, Gore JL, Buckley D, et al. Urinary Biomarkers for Diagnosis of Bladder Cancer: A Systematic Review and Meta-analysis. Ann Intern Med. Dec 15, 2015;163(12):922-31. doi:10.7326/M15-0997. Epub Dec 15, 2015. PMID:26501851.
  55. Chou R, Buckley D, Fu R, et al. Emerging Approaches to Diagnosis and Treatment of Non–Muscle-Invasive Bladder Cancer [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); Oct 2015. (Comparative Effectiveness Reviews, No. 153.) Available from: https://www.ncbi.nlm.nih.gov/books/NBK330472/.
  56. Laukhtina E, Shim SR, Mori K, et al. European Association of Urology–Young Academic Urologists (EAU-YAU): Urothelial Carcinoma Working Group. Diagnostic Accuracy of Novel Urinary Biomarker Tests in Non-muscle-invasive Bladder Cancer: A Systematic Review and Network Meta-analysis. Eur Urol Oncol. Dec 2021;4(6):927-942. doi:10.1016/j.euo.2021.10.003. Epub Nov 6, 2021. Erratum in: Eur Urol Oncol. Jan 19, 2022. PMID: 34753702.
  57. Yip L, Gooding WE, Nikitski A, et al. Risk assessment for distant metastasis in differentiated thyroid cancer using molecular profiling: A matched case-control study. Cancer. Jun 1, 2021;127(11):1779-1787. doi:10.1002/cncr.33421. Epub 2021 Feb 4. PMID:33539547; PMCID:PMC8113082.
  58. Skaugen JM, Taneja C, Liu JB, et al. Performance of a Multigene Genomic Classifier in Thyroid Nodules with Suspicious for Malignancy Cytology. Thyroid. Dec 2022;32(12):1500-1508. doi:10.1089/thy.2022.0282. Epub Sep 1, 2022. PMID:35864811; PMCID:PMC9807251.
  59. Liu JB, Ramonell KM, Carty SE, et al. Association of comprehensive thyroid cancer molecular profiling with tumor phenotype and cancer-specific outcomes. Surgery. Jan 2023;173(1):252-259. doi:10.1016/j.surg.2022.05.048. Epub Oct 20, 2022. PMID:36272768.
  60. Das A, Brugge W, Mishra G, Smith DM, Sachdev M, Ellsworth E. Managing incidental pancreatic cystic neoplasms with integrated molecular pathology is a cost-effective strategy. Endosc Int Open. 2015;3(5):E479-E486. doi:10.1055/s-0034-1392016.
  61. Gillis A, Cipollone I, Cousins G, Conlon K. Does EUS-FNA molecular analysis carry additional value when compared to cytology in the diagnosis of pancreatic cystic neoplasm? A systematic review. HPB (Oxford). 2015;17(5):377-386. doi:10.1111/hpb.12364.
  62. Khalid A, Brugge W. ACG practice guidelines for the diagnosis and management of neoplastic pancreatic cysts. Am J Gastroenterol. Oct 2007; 102(10): 2339-49. PMID17764489.
  63. Khalid A, Nodit L, Zahid M, et al. Endoscopic ultrasound fine needle aspirate DNA analysis to differentiate malignant and benign pancreatic masses. Am J Gastroenterol. Nov 2006; 101(11): 2493-500. PMID17029619.
  64. Khalid A, Zahid M, Finkelstein SD, et al. Pancreatic cyst fluid DNA analysis in evaluating pancreatic cysts: a report of the PANDA study. Gastrointest Endosc. May 2009; 69(6): 1095-102. PMID19152896.
  65. Khalid A, McGrath KM, Zahid M, et al. The role of pancreatic cyst fluid molecular analysis in predicting cyst pathology. Clin Gastroenterol Hepatol. Oct 2005; 3(10): 967-73. PMID16234041.
  66. Khalid A, Pal R, Sasatomi E, et al. Use of microsatellite marker loss of heterozygosity in accurate diagnosis of pancreaticobiliary malignancy from brush cytology samples. Gut. Dec 2004; 53(12): 1860-5. PMID15542529.
  67. Siddiqui AA, Kowalski TE, Kedika R, et al. EUS-guided pancreatic fluid aspiration for DNA analysis of KRAS and GNAS mutations for the evaluation of pancreatic cystic neoplasia: a pilot study. Gastrointest Endosc. Apr 2013; 77(4): 669-70. PMID23498145.
  68. Schoedel KE, Finkelstein SD, Ohori NP. K-Ras and microsatellite marker analysis of fine-needle aspirates from intraductal papillary mucinous neoplasms of the pancreas. Diagn Cytopathol. Sep 2006; 34(9): 605-8. PMID16900481.
  69. Sawhney MS, Devarajan S, O'Farrel P, et al. Comparison of carcinoembryonic antigen and molecular analysis in pancreatic cyst fluid. Gastrointest Endosc. May 2009; 69(6): 1106-10. PMID19249035.
  70. Sreenarasimhaiah J, Lara LF, Jazrawi SF, et al. A comparative analysis of pancreas cyst fluid CEA and histology with DNA mutational analysis in the detection of mucin producing or malignant cysts. JOP. Mar 09, 2009; 10(2): 163-8. PMID19287110.
  71. Mertz H. K-ras mutations correlate with atypical cytology and elevated CEA levels in pancreatic cystic neoplasms. Dig Dis Sci. Jul 2011; 56(7): 2197-201. PMID21264513.
  72. Talar-Wojnarowska R, Pazurek M, Durko L, et al. A comparative analysis of K-ras mutation and carcinoembryonic antigen in pancreatic cyst fluid. Pancreatology. Sep-Oct 2012; 12(5): 417-20. PMID23127529.
  73. Chai SM, Herba K, Kumarasinghe MP, et al. Optimizing the multimodal approach to pancreatic cyst fluid diagnosis: developing a volume-based triage protocol. Cancer Cytopathol. Feb 2013; 121(2): 86-100. PMID22961878.
  74. Nikiforova MN, Khalid A, Fasanella KE, et al. Integration of KRAS testing in the diagnosis of pancreatic cystic lesions: a clinical experience of 618 pancreatic cysts. Mod Pathol. Nov 2013; 26(11): 1478-87. PMID23743931.
  75. Lapkus O, Gologan O, Liu Y, et al. Determination of sequential mutation accumulation in pancreas and bile duct brushing cytology. Mod Pathol. Jul 2006; 19(7): 907-13. PMID16648872.
  76. Tamura K, Ohtsuka T, Date K, et al. Distinction of Invasive Carcinoma Derived From Intraductal Papillary Mucinous Neoplasms From Concomitant Ductal Adenocarcinoma of the Pancreas Using Molecular Biomarkers. Pancreas. Jul 2016; 45(6): 826-35. PMID26646266.
  77. Panarelli NC, Sela R, Schreiner AM, et al. Commercial molecular panels are of limited utility in the classification of pancreatic cystic lesions. Am J Surg Pathol. Oct 2012; 36(10): 1434-43. PMID22982886.
  78. Deftereos G, Finkelstein SD, Jackson SA, et al. The value of mutational profiling of the cytocentrifugation supernatant fluid from fine-needle aspiration of pancreatic solid mass lesions. Mod Pathol. Apr 2014; 27(4): 594-601. PMID24051700.
  79. Arner DM, Corning BE, et al. Molecular analysis of pancreatic cyst fluid changes clinical management. Endosc Ultrasound. 2018; 7(1):29-33.
  80. Farrell JJ, Al-Haddad MA, et al. Incremental value of DNA analysis in pancreatic cysts stratified by clinical risk factors. Gastrointest Endosc. 2019; 89(4):832-841.
  81. Simpson RE, Cockerill NJ, Yip-Schneider MT, et al. Clinical criteria for integrated molecular pathology in intraductal papillary mucinous neoplasm: less is more. HPB (Oxford). 2019;21(5):574-581. doi:10.1016/j.hpb.2018.09.004.
  82. Vege SS, Ziring B, Jain R, Moayyedi P; Clinical Guidelines Committee; American Gastroenterological Association. American gastroenterological association institute guideline on the diagnosis and management of asymptomatic neoplastic pancreatic cysts. Gastroenterology. 2015;148(4):819-22; quiz e12-13. doi:10.1053/j.gastro.2015.01.015.
  83. Kung JS, Lopez OA, McCoy EE, et al. Fluid genetic analyses predict the biological behavior of pancreatic cysts: three-year experience. JOP. Sep 28, 2014; 15(5): 427-32. PMID25262708.
  84. Shen J, Brugge WR, Dimaio CJ, et al. Molecular analysis of pancreatic cyst fluid: a comparative analysis with current practice of diagnosis. Cancer. Jun 25, 2009; 117(3): 217-27. PMID19415731.
  85. Al-Haddad MA, Kowalski T, Siddiqui A, et al. Integrated molecular pathology accurately determines the malignant potential of pancreatic cysts. Endoscopy. 2015; 47(2):136-142.
  86. Winner M, Sethi A, Poneros JM, et al. The role of molecular analysis in the diagnosis and surveillance of pancreatic cystic neoplasms. JOP. Mar 20, 2015; 16(2): 143-9. PMID25791547.
  87. Malhotra N, Jackson SA, Freed LL, et al. The added value of using mutational profiling in addition to cytology in diagnosing aggressive pancreaticobiliary disease: review of clinical cases at a single center. BMC Gastroenterol. Aug 01 2014; 14: 135. PMID25084836.
  88. Loren D, Kowalski T, Siddiqui A, et al. Influence of integrated molecular pathology test results on real-world management decisions for patients with pancreatic cysts: analysis of data from a national registry cohort. Diagn Pathol. Jan 20, 2016; 11: 5. PMID26790950.
  89. Kowalski T, Siddiqui A, Loren D, et al. Management of Patients With Pancreatic Cysts: Analysis of Possible False-Negative Cases of Malignancy. J Clin Gastroenterol. Sep 2016; 50(8): 649-57. PMID27332745.
  90. Khosravi F, Sachdev M, Alshati A, et al. Mutation Profiling Impacts Clinical Decision Making and Outcomes of Patients with Solid Pancreatic Lesions Indeterminate by Cytology. Journal of the Pancreas. 2018;19(1):1-6.
  91. Kushnir V, Mullady D, Das K, et al. The Diagnostic Yield of Malignancy Comparing Cytology, FISH, and Molecular Analysis of Cell Free Cytology Brush Supernatant in Patients With Biliary Strictures Undergoing Endoscopic Retrograde Cholangiography (ERC): A Prospective Study. Journal of Clinical Gastroenterology. 2018. doi:10.1097/MCG.0000000000001118.
  92. Gonda T, Viterbo D, Gausman V, et al. Mutation Profile and Fluorescence In Situ Hybridization Analyses Increase Detection of Malignancies in Biliary Strictures. Clinical Gastroenterology and Hepatology. 2017;15:913-919.
  93. Trikalinos T, Terasawa T, Raman G, et al. Technology Assessment: A systematic review of loss-of-heterozygosity based topographic genotyping with PathfinderTG. Rockville, MD: Agency for Healthcare Research and Quality; 2010.
  94. Scheiman JM, Hwang JH, Moayyedi P. American gastroenterological association technical review on the diagnosis and management of asymptomatic neoplastic pancreatic cysts. Gastroenterology. Apr 2015; 148(4): 824-48.e22. PMID25805376.
  95. Dillon LD, Gadzia JE, Davidson RS. Prospective, Multicenter Clinical Impact Evaluation of a 31-Gene Expression Profile Test for Management of Melanoma Patients. SKIN The Journal of Cutaneous Medicine, 2018; 2(2): 111–121. https://doi.org/10.25251/skin.2.2.3
  96. Berman B, Ceilley R, Cockerell C. Appropriate Use Criteria for the Integration of Diagnostic and Prognostic Gene Expression Profile Assays into the Management of Cutaneous Malignant Melanoma: An Expert Panel Consensus-Based Modified Delphi Process Assessment. SKIN The Journal of Cutaneous Medicine, 2019; 3(5): 291–306. https://doi.org/10.25251/skin.3.5.1.
  97. Marks E, Caruso HG, Kurley SJ, Ibad S, Plasseraud KM, Monzon FA, Cockerell CJ. Establishing an evidence-based decision point for clinical use of the 31-gene expression profile test in cutaneous melanoma. SKIN The Journal of Cutaneous Medicine. 2019; 3(4): 239–249. https://doi.org/10.25251/skin.3.4.2.
  98. Litchman GH, Prado G, Teplitz RW, Rigel D. A Systematic Review and Meta-Analysis of Gene Expression Profiling for Primary Cutaneous Melanoma Prognosis. SKIN The Journal of Cutaneous Medicine. 2020; 4(3): 221–237. https://doi.org/10.25251/skin.4.3.3.
  99. Zakria D, Brownstone N, Berman B, Ceilley R, Goldenberg G, Lebwohl M, Litchman G, Siegel D. Incorporating Prognostic Gene Expression Profile Assays into the Management of Cutaneous Melanoma: An Expert Consensus Panel Report. SKIN The Journal of Cutaneous Medicine. 2023; 7(1): 556–569. https://doi.org/10.25251/skin.7.1.1.
  100. Greenhaw BN, Covington KR, Kurley SJ, et al. Reply to Problematic methodology in a systematic review and meta-analysis of DecisionDx-Melanoma. J Am Acad Dermatol. Nov 2020;83(5):e359-e360. doi:10.1016/j.jaad.2020.06.009. Epub Jun 8, 2020. PMID:32526325.
  101. Marchetti MA, Coit DG, Dusza SW, et al. Performance of Gene Expression Profile Tests for Prognosis in Patients With Localized Cutaneous Melanoma: A Systematic Review and Meta-analysis. JAMA Dermatol. Sep 1, 2020;156(9):953-962. doi:10.1001/jamadermatol.2020.1731. PMID:32745161; PMCID:PMC7391179.
  102. Chan WH, Tsao H. Consensus, Controversy, and Conversations About Gene Expression Profiling in Melanoma. JAMA Dermatol. 2020;156(9):949–951. doi:10.1001/jamadermatol.2020.1730.
  103. Winkelmann RR, Farberg AS, Glazer AM, et al. Integrating Skin Cancer-Related Technologies into Clinical Practice. Dermatol Clin. Oct 2017;35(4):565-576. doi:10.1016/j.det.2017.06.018. Epub Aug 12, 2017. PMID:28886814.
  104. Dubin DP, Dinehart SM, Farberg AS. Level of Evidence Review for a Gene Expression Profile Test for Cutaneous Melanoma. Am J Clin Dermatol. Dec 2019;20(6):763-770. doi:10.1007/s40257-019-00464-4. PMID:31359351; PMCID:PMC6872504.
  105. Grossman D, Okwundu N, Bartlett EK, et al. Prognostic Gene Expression Profiling in Cutaneous Melanoma: Identifying the Knowledge Gaps and Assessing the Clinical Benefit. JAMA Dermatol. Sep 1, 2020;156(9):1004-1011. doi:10.1001/jamadermatol.2020.1729. PMID:32725204; PMCID:PMC8275355.
  106. Farberg AS, Marson JW, Glazer A, et al. Expert Consensus on the Use of Prognostic Gene Expression Profiling Tests for the Management of Cutaneous Melanoma: Consensus from the Skin Cancer Prevention Working Group. Dermatol Ther (Heidelb). Apr 2022;12(4):807-823. doi:10.1007/s13555-022-00709-x. Epub Mar 30, 2022. PMID:35353350; PMCID:PMC9021351.
  107. Greenhaw BN, Covington KR, Kurley SJ, et al. Molecular risk prediction in cutaneous melanoma: A meta-analysis of the 31-gene expression profile prognostic test in 1,479 patients. J Am Acad Dermatol. Sep 2020;83(3):745-753. doi:10.1016/j.jaad.2020.03.053. Epub Mar 27, 2020. PMID:32229276.
  108. Marchetti MA, Coit DG, Dusza SW, et al. Performance of Gene Expression Profile Tests for Prognosis in Patients With Localized Cutaneous Melanoma: A Systematic Review and Meta-analysis. JAMA Dermatol. Sep 1, 2020;156(9):953-962. doi:10.1001/jamadermatol.2020.1731. PMID:32745161; PMCID:PMC7391179.
  109. Gerami P, Cook RW, Wilkinson J, et al. Development of a prognostic genetic signature to predict the metastatic risk associated with cutaneous melanoma. Clin Cancer Res. 2015 Jan 1;21(1):175-83. doi:10.1158/1078-0432.CCR-13-3316. PMID:25564571.
  110. Whitman ED, Koshenkov VP, Gastman BR, et al. Integrating 31-Gene Expression Profiling With Clinicopathologic Features to Optimize Cutaneous Melanoma Sentinel Lymph Node Metastasis Prediction. JCO Precis Oncol. Sep 13, 2021;5:PO.21.00162. doi:10.1200/PO.21.00162. PMID:34568719; PMCID:PMC8457832.
  111. Jarell A, Gastman BR, Dillon LD, et al. Optimizing treatment approaches for patients with cutaneous melanoma by integrating clinical and pathologic features with the 31-gene expression profile test. J Am Acad Dermatol. Dec 2022;87(6):1312-1320. doi:10.1016/j.jaad.2022.06.1202. Epub Jul 8, 2022. PMID:35810840.
  112. Thorpe RB, Covington KR, Caruso HG, et al. Development and validation of a nomogram incorporating gene expression profiling and clinical factors for accurate prediction of metastasis in patients with cutaneous melanoma following Mohs micrographic surgery. J Am Acad Dermatol. Apr 2022;86(4):846-853. Doi:10.1016/j.jaad.2021.10.062. Epub Nov 20, 2021. PMID:34808324.
  113. Cook RW, Middlebrook B, Wilkinson J, et al. Analytic validity of DecisionDx-Melanoma, a gene expression profile test for determining metastatic risk in melanoma patients. Diagn Pathol. Feb 13, 2018;13(1):13. doi:10.1186/s13000-018-0690-3. PMID:29433548; PMCID:PMC5809902.
  114. Gerami P, Cook RW, Russell MC, et al. Gene expression profiling for molecular staging of cutaneous melanoma in patients undergoing sentinel lymph node biopsy. J Am Acad Dermatol. May 2015;72(5):780-5.e3. doi:10.1016/j.jaad.2015.01.009. Epub Mar 3, 2015. PMID:25748297.
  115. Hsueh EC, DeBloom JR, Lee J, et al. Interim analysis of survival in a prospective, multi-center registry cohort of cutaneous melanoma tested with a prognostic 31-gene expression profile test. J Hematol Oncol. Aug 29, 2017;10(1):152. doi:10.1186/s13045-017-0520-1. Erratum in: J Hematol Oncol. Oct 5, 2017;10 (1):160. PMID:28851416; PMCID:PMC5576286.
  116. Ferris LK, Farberg AS, Middlebrook B, et al. Identification of high-risk cutaneous melanoma tumors is improved when combining the online American Joint Committee on Cancer Individualized Melanoma Patient Outcome Prediction Tool with a 31-gene expression profile-based classification. J Am Acad Dermatol. May 2017;76(5):818-825.e3. doi:10.1016/j.jaad.2016.11.051. Epub Jan 19, 2017. PMID:28110997.
  117. Zager JS, Gastman BR, Leachman S, et al. Performance of a prognostic 31-gene expression profile in an independent cohort of 523 cutaneous melanoma patients. BMC Cancer. Feb 5, 2018;18(1):130. doi:10.1186/s12885-018-4016-3. PMID:29402264; PMCID:PMC5800282.
  118. Greenhaw BN, Zitelli JA, Brodland DG. Estimation of Prognosis in Invasive Cutaneous Melanoma: An Independent Study of the Accuracy of a Gene Expression Profile Test. Dermatol Surg. Dec 2018;44(12):1494-1500. doi:10.1097/DSS.0000000000001588. PMID:29994951.
  119. Gastman BR, Zager JS, Messina JL, et al. Performance of a 31-gene expression profile test in cutaneous melanomas of the head and neck. Head Neck. Apr 2019;41(4):871-879. doi:10.1002/hed.25473. Epub Jan 29, 2019. PMID:30694001; PMCID:PMC6667900.
  120. Gastman BR, Gerami P, Kurley SJ, Cook RW, Leachman S, Vetto JT. Identification of patients at risk of metastasis using a prognostic 31-gene expression profile in subpopulations of melanoma patients with favorable outcomes by standard criteria. J Am Acad Dermatol. Jan 2019;80(1):149-157.e4. doi:10.1016/j.jaad.2018.07.028. Epub Aug 4, 2018. PMID:30081113.
  121. Keller J, Schwartz TL, Lizalek JM, et al. Prospective validation of the prognostic 31-gene expression profiling test in primary cutaneous melanoma. Cancer Med. May 2019;8(5):2205-2212. doi: 10.1002/cam4.2128. Epub Apr 5, 2019. PMID:30950242; PMCID:PMC6536922.
  122. Podlipnik S, Carrera C, Boada A, et al. Early outcome of a 31-gene expression profile test in 86 AJCC stage IB-II melanoma patients. A prospective multicentre cohort study. J Eur Acad Dermatol Venereol. May 2019;33(5):857-862. doi:10.1111/jdv.15454. Epub 2019 Feb 28. PMID:30702163; PMCID:PMC6483866.
  123. Vetto JT, Hsueh EC, Gastman BR, et al. Guidance of sentinel lymph node biopsy decisions in patients with T1-T2 melanoma using gene expression profiling. Future Oncol. Apr 2019;15(11):1207-1217. doi:10.2217/fon-2018-0912. Epub Jan 29, 2019. PMID:30691297.
  124. Scott AM, Dale PS, Conforti A, Gibbs JN. Integration of a 31-Gene Expression Profile Into Clinical Decision-Making in the Treatment of Cutaneous Melanoma. Am Surg. Nov 2020;86(11):1561-1564. doi:10.1177/0003134820939944. Epub Aug 5, 2020. PMID:32755379.
  125. Hsueh EC, DeBloom JR, Lee JH, et al. Long-Term Outcomes in a Multicenter, Prospective Cohort Evaluating the Prognostic 31-Gene Expression Profile for Cutaneous Melanoma. JCO Precis Oncol. Apr 6, 2021;5:PO.20.00119. doi:10.1200/PO.20.00119. PMID:34036233; PMCID:PMC8140806.
  126. Jarell A, Skenderis B, Dillon LD, et al. The 31-gene expression profile stratifies recurrence and metastasis risk in patients with cutaneous melanoma. Future Oncol. Dec 2021;17(36):5023-5031. doi:10.2217/fon-2021-0996. Epub Sep 30, 2021. PMID:34587770.
  127. Wisco OJ, Marson JW, Litchman GH, et al. Improved cutaneous melanoma survival stratification through integration of 31-gene expression profile testing with the American Joint Committee on Cancer 8th Edition Staging. Melanoma Res. Apr 1, 2022;32(2):98-102. doi:10.1097/CMR.0000000000000804. PMID:35254332; PMCID:PMC8893124.
  128. Arron ST, Blalock TW, Guenther JM, et al. Clinical Considerations for Integrating Gene Expression Profiling into Cutaneous Squamous Cell Carcinoma Management. J Drugs Dermatol. Jun 1, 2021;20(6):5s-s11. doi:10.36849/JDD.2021.6068. PMID:34076385.
  129. Work Group; Invited Reviewers; Kim JYS, Kozlow JH, Mittal B, Moyer J, Olenecki T, Rodgers P. Guidelines of care for the management of cutaneous squamous cell carcinoma. J Am Acad Dermatol. Mar, 2018;78(3):560-578. doi:10.1016/j.jaad.2017.10.007. Epub Jan 1, 2018. PMID:29331386; PMCID:PMC6652228.
  130. Hooper PB, Farberg AS, Fitzgerald AL, et al. Real-World Evidence Shows Clinicians Appropriately Use the Prognostic 40-Gene Expression Profile (40-GEP) Test for High-Risk Cutaneous Squamous Cell Carcinoma (cSCC) Patients. Cancer Invest. Nov 2022;40(10):911-922. doi:10.1080/07357907.2022.2116454. Epub Sep 15, 2022. PMID:36073945.
  131. Litchman GH, Fitzgerald AL, Kurley SJ, Cook RW, Rigel DS. Impact of a prognostic 40-gene expression profiling test on clinical management decisions for high-risk cutaneous squamous cell carcinoma. Curr Med Res Opin. Aug 2020;36(8):1295-1300. doi:10.1080/03007995.2020.1763283. Epub May 18, 2020. PMID:32372702.
  132. Farberg AS, Fitzgerald AL, Ibrahim SF, et al. Current Methods and Caveats to Risk Factor Assessment in Cutaneous Squamous Cell Carcinoma (cSCC): A Narrative Review. Dermatol Ther (Heidelb). Feb 2022;12(2):267-284. doi:10.1007/s13555-021-00673-y. Epub Jan 7, 2022. PMID:34994967; PMCID:PMC8850485.
  133. Newman JG, Hall MA, Kurley SJ, et al. Adjuvant therapy for high-risk cutaneous squamous cell carcinoma: 10-year review. Head Neck. Sep 2021;43(9):2822-2843. doi:10.1002/hed.26767. Epub Jun 7,2021. PMID:34096664; PMCID:PMC8453797.
  134. Wysong A, Newman JG, Covington KR, et al. Validation of a 40-gene expression profile test to predict metastatic risk in localized high-risk cutaneous squamous cell carcinoma. J Am Acad Dermatol. Feb 2021;84(2):361-369. doi:10.1016/j.jaad.2020.04.088. Epub Apr 25, 2020. Erratum in: J Am Acad Dermatol. Jun 2021;84(6):1796. PMID:32344066.
  135. Borman S, Wilkinson J, Meldi-Sholl L, et al. Analytical validity of DecisionDx-SCC, a gene expression profile test to identify risk of metastasis in cutaneous squamous cell carcinoma (SCC) patients. Diagn Pathol. Feb 25, 2022;17(1):32. doi:10.1186/s13000-022-01211-w. PMID:35216597; PMCID:PMC8876832.
  136. Farberg AS, Hall MA, Douglas L, et al. Integrating gene expression profiling into NCCN high-risk cutaneous squamous cell carcinoma management recommendations: impact on patient management. Curr Med Res Opin. Aug 2020;36(8):1301-1307. doi:10.1080/03007995.2020.1763284. Epub May 18, 2020. PMID:32351136.
  137. Arron ST, Wysong A, Hall MA, et al. Gene expression profiling for metastatic risk in head and neck cutaneous squamous cell carcinoma. Laryngoscope Investig Otolaryngol. Jan 6, 2022;7(1):135-144. doi:10.1002/lio2.724. PMID:35155791; PMCID:PMC8823155.
  138. Ibrahim SF, Kasprzak JM, Hall MA, et al. Enhanced metastatic risk assessment in cutaneous squamous cell carcinoma with the 40-gene expression profile test. Future Oncol. Mar 2022;18(7):833-847. doi:10.2217/fon-2021-1277. Epub Nov 25, 2021. PMID:34821148.
  139. Au JH, Hooper PB, Fitzgerald AL, Somani AK. Clinical Utility of the 40-Gene Expression Profile (40-GEP) Test for Improved Patient Management Decisions and Disease-Related Outcomes when Combined with Current Clinicopathological Risk Factors for Cutaneous Squamous Cell Carcinoma (cSCC): Case Series. Dermatol Ther (Heidelb). Feb 2022;12(2):591-597. doi:10.1007/s13555-021-00665-y. Epub Dec 23, 2021. PMID:34951694; PMCID:PMC8850491.
  140. Ke C, Hu Z, Yang C. UroVysion Fluorescence In Situ Hybridization in Urological Cancers: A Narrative Review and Future Perspectives. Cancers (Basel). Nov 3, 2022;14(21):5423. doi:10.3390/cancers14215423. PMID:36358841; PMCID:PMC9657137.
  141. Zheng W, Lin T, Chen Z, et al. The Role of Fluorescence In Situ Hybridization in the Surveillance of Non-Muscle Invasive Bladder Cancer: An Updated Systematic Review and Meta-Analysis. Diagnostics (Basel). Aug 19, 2022;12(8):2005. doi:10.3390/diagnostics12082005. PMID:36010354; PMCID:PMC9407231.
  142. Nagai T, Naiki T, Etani T, et al. UroVysion fluorescence in situ hybridization in urothelial carcinoma: a narrative review and future perspectives. Transl Androl Urol. Apr 2021;10(4):1908-1917. doi:10.21037/tau-20-1207. PMID:33968678; PMCID:PMC8100858.
  143. Papavasiliou E, Sills VA, Calanzani N, et al. Diagnostic Performance of Biomarkers for Bladder Cancer Detection Suitable for Community and Primary Care Settings: A Systematic Review and Meta-Analysis. Cancers (Basel). Jan 24, 2023;15(3):709. doi:10.3390/cancers15030709. PMID:36765672; PMCID:PMC9913596.
  144. Bulai C, Geavlete P, Ene CV, et al. Detection of Urinary Molecular Marker Test in Urothelial Cell Carcinoma: A Review of Methods and Accuracy. Diagnostics (Basel). Nov 4, 2022;12(11):2696. doi:10.3390/diagnostics12112696. PMID:36359539; PMCID:PMC9689047.
  145. Soputro NA, Gracias DN, Dias BH, Nzenza T, O'Connell H, Sethi K. Utility of urinary biomarkers in primary haematuria: Systematic review and meta-analysis. BJUI Compass. Mar 28, 2022;3(5):334-343. doi:10.1002/bco2.147. PMID:35950042; PMCID:PMC9349596.
  146. Sciarra A, Di Lascio G, Del Giudice F, et al. Comparison of the clinical usefulness of different urinary tests for the initial detection of bladder cancer: a systematic review. Curr Urol. Mar 2021;15(1):22-32. Doi:10.1097/CU9.0000000000000012. Epub Mar 29, 2021. PMID:34084118; PMCID:PMC8137038.
  147. Miyake M, Owari T, Hori S, Nakai Y, Fujimoto K. Emerging biomarkers for the diagnosis and monitoring of urothelial carcinoma. Res Rep Urol. Dec 14, 2018;10:251-261. doi:10.2147/RRU.S173027. PMID:30588457; PMCID:PMC6299471.
  148. Sathianathen NJ, Butaney M, Weight CJ, Kumar R, Konety BR. Urinary Biomarkers in the Evaluation of Primary Hematuria: A Systematic Review and Meta-Analysis. Bladder Cancer. Oct 29, 2018;4(4):353-363. doi:10.3233/BLC-180179. PMID:30417046; PMCID:PMC6218111.
  149. Meleth S, Reeder-Hayes K, Ashok M, et al. Technology Assessment of Molecular Pathology Testing for the Estimation of Prognosis for Common Cancers [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); May 29, 2014. PMID:25905152.
  150. Mettman D, Saeed A, Shold J, et al. Refined pancreatobiliary UroVysion criteria and an approach for further optimization. Cancer Med. Sep 2021;10(17):5725-5738. doi:10.1002/cam4.4043. Epub Aug 10, 2021. PMID:34374212; PMCID:PMC8419786.
  151. Montalbo R, Izquierdo L, Ingelmo-Torres M. Urine cytology suspicious for urothelial carcinoma: Prospective follow-up of cases using cytology and urine biomarker-based ancillary techniques. Cancer Cytopathol. Jul 2020;128(7):460-469. doi:10.1002/cncy.22252. Epub Feb 21, 2020. PMID:32083810.
  152. Nagai T, Okamura T, Yanase T, et al. Examination of Diagnostic Accuracy of UroVysion Fluorescence In Situ Hybridization for Bladder Cancer in a Single Community of Japanese Hospital Patients. Asian Pac J Cancer Prev. Apr 29, 2019;20(4):1271-1273. doi:10.31557/APJCP.2019.20.4.1271. PMID:31030505; PMCID:PMC6948889.
  153. Sassa N, Iwata H, Kato M, et al. Diagnostic Utility of UroVysion Combined With Conventional Urinary Cytology for Urothelial Carcinoma of the Upper Urinary Tract. Am J Clin Pathol. Apr 2, 2019;151(5):469-478. Doi:10.1093/ajcp/aqy170. PMID:30668617.
  154. Freund JE, Liem EIML, Savci-Heijink CD, de Reijke TM. Fluorescence in situ hybridization in 1 mL of selective urine for the detection of upper tract urothelial carcinoma: a feasibility study. Med Oncol. Nov 29, 2018;36(1):10. Doi:10.1007/s12032-018-1237-x. PMID:30499061; PMCID:PMC6267383.
  155. Guan B, Du Y, Su X, et al. Positive urinary fluorescence in situ hybridization indicates poor prognosis in patients with upper tract urothelial carcinoma. Oncotarget. Jan 4, 2018;9(18):14652-14660. doi:10.18632/oncotarget.24007. PMID:29581871; PMCID:PMC5865697.
  156. Liem EIML, Baard J, Cauberg ECC, et al. Fluorescence in situ hybridization as prognostic predictor of tumor recurrence during treatment with Bacillus Calmette-Guérin therapy for intermediate- and high-risk non-muscle-invasive bladder cancer. Med Oncol. Sep 2, 2017;34(10):172. doi:10.1007/s12032-017-1033-z. PMID:28866819; PMCID:PMC5581817.
  157. Lavery HJ, Zaharieva B, McFaddin A, Heerema N, Pohar KS. A prospective comparison of UroVysion FISH and urine cytology in bladder cancer detection. BMC Cancer. Apr 7, 2017;17(1):247. doi:10.1186/s12885-017-3227-3. PMID:28388880; PMCID:PMC5383950.
  158. Gomella LG, Mann MJ, Cleary RC, et al. Fluorescence in situ hybridization (FISH) in the diagnosis of bladder and upper tract urothelial carcinoma: the largest single-institution experience to date. Can J Urol. Feb 2017;24(1):8620-8626. PMID:28263126.
  159. Todenhöfer T, Hennenlotter J, Esser M, et al. Stepwise application of urine markers to detect tumor recurrence in patients undergoing surveillance for non-muscle-invasive bladder cancer. Dis Markers. 2014;2014:973406. doi:10.1155/2014/973406. Epub Dec 22, 2014. PMID:25587206; PMCID:PMC4284969.
  160. Ikeda A, Kojima T, Kawai K, et al. Risk for intravesical recurrence of bladder cancer stratified by the results on two consecutive UroVysion fluorescence in situ hybridization tests: a prospective follow-up study in Japan. Int J Clin Oncol. Jun 2020;25(6):1163-1169. doi:10.1007/s10147-020-01634-9. Epub Mar 3, 2020. PMID:32125546; PMCID:PMC7261273.
  161. Bonberg N, Pesch B, Behrens T, et al. Chromosomal alterations in exfoliated urothelial cells from bladder cancer cases and healthy men: a prospective screening study. BMC Cancer. Nov 20, 2014;14:854. doi:10.1186/1471-2407-14-854. PMID:25412927; PMCID:PMC4247705.
  162. Pedersen SK, Baker RT, McEvoy A, Murray DH, Thomas M, Molloy PL, Mitchell S, Lockett T, Young GP, LaPointe LC. A two-gene blood test for methylated DNA sensitive for colorectal cancer. PLoS One. Apr 30, 2015;10(4):e0125041. doi:10.1371/journal.pone.0125041. PMID:25928810; PMCID:PMC4416022.
  163. Pedersen SK, Symonds EL, Baker RT, et al. Evaluation of an assay for methylated BCAT1 and IKZF1 in plasma for detection of colorectal neoplasia. BMC Cancer. Oct 6, 2015;15:654. doi:10.1186/s12885-015-1674-2. PMID:26445409; PMCID:PMC4596413.
  164. Murray DH, Baker RT, Gaur S, Young GP, Pedersen SK. Validation of a Circulating Tumor-Derived DNA Blood Test for Detection of Methylated BCAT1 and IKZF1 DNA. The Journal of Applied Laboratory Medicine. 2017;2(2):165-175. doi:10.1373/jalm.2017.023135.
  165. Young GP, Pedersen SK, Mansfield S, et al. A cross-sectional study comparing a blood test for methylated BCAT 1 and IKZF1 tumor-derived DNA with CEA for detection of recurrent colorectal cancer. Cancer medicine. Oct 2016;5(10):2763-72. doi:10.1002/cam4.868.
  166. Musher BL, Melson JE, Amato G, et al. Evaluation of circulating tumor DNA for methylated BCAT1 and IKZF1 to detect recurrence of stage II/stage III colorectal cancer (CRC). Cancer Epidemiology, Biomarkers & Prevention. Dec 2020;29(12):2702-9. doi:10.1158/1055-9965.EPI-20-0574.
  167. Symonds EL, Pedersen SK, Murray D, et al. Circulating epigenetic biomarkers for detection of recurrent colorectal cancer. Cancer. 2020;126(7):1460-1469. doi:10.1002/cncr.32695.
  168. Cock C, Anwar S, Byrne SE, et al. Low sensitivity of fecal immunochemical tests and blood-based markers of DNA hypermethylation for detection of sessile serrated adenomas/polyps. Digestive Diseases and Sciences. 2019;64(9):2555-2562. doi:10.1007/s10620-019-05569-8.
  169. Symonds EL, Pedersen SK, Baker RT, et al. A Blood Test for Methylated BCAT1 and IKZF1 vs. a Fecal Immunochemical Test for Detection of Colorectal Neoplasia. Clinical and Translational Gastroenterology. 2016;7(1):e137. doi:10.1038/ctg.2015.67.
  170. Murray DH, Symonds EL, Young GP, et al. Relationship between post-surgery detection of methylated circulating tumor DNA with risk of residual disease and recurrence-free survival. Journal of Cancer Research and Clinical Oncology. 2018;144(9):1741-1750. doi:10.1007/s00432-018-2701-x.
  171. Symonds EL, Pedersen SK, Murray DH, et al. Circulating tumour DNA for monitoring colorectal cancer—a prospective cohort study to assess relationship to tissue methylation, cancer characteristics and surgical resection. Clinical epigenetics. Dec 2018;10(1):1-1. doi:10.1186/s13148-018-0500-5.
  172. Sapkota U, Cavers W, Reddy S, Avalos-Reyes E, and Johnson KA. Total cost of care differences in National Comprehensive Cancer Center (NCCN) concordant and non-concordant breast cancer patients. Journal of Clinical Oncology 2022 40:16_suppl, e18833-e18833.
  173. Sapkota U, Cavers W, Reddy S, Avalos-Reyes E, and Johnson KA. Total cost of care differences in National Comprehensive Cancer Center (NCCN) concordant and non-concordant patients with colon cancer. Journal of Clinical Oncology 2022 40:16_suppl, 3624-3624.
  174. Erickson Foster J, Velasco JM, Hieken TJ. Adverse outcomes associated with noncompliance with melanoma treatment guidelines. Ann Surg Oncol. Sep 2008;15(9):2395-402. doi:10.1245/s10434-008-0021-0. Epub Jul 4, 2008. PMID:18600380.
  175. Visser BC, Ma Y, Zak Y, Poultsides GA, Norton JA, Rhoads KF. Failure to comply with NCCN guidelines for the management of pancreatic cancer compromises outcomes. HPB (Oxford). Aug 2012;14(8):539-47. doi:10.1111/j.1477-2574.2012.00496.x. Epub Jun 12, 2012. PMID 22762402; PMCID:PMC3406351.
  176. Mearis M, Shega JW, Knoebel RW. Does adherence to National Comprehensive Cancer Network guidelines improve pain-related outcomes? An evaluation of inpatient cancer pain management at an academic medical center. J Pain Symptom Manage. Sep 2014;48(3):451-8. doi:10.1016/j.jpainsymman.2013.09.016. Epub Jan 16, 2014. PMID:24439844.
  177. Schwam ZG, Sosa JA, Roman S, Judson BL. Receipt of Care Discordant with Practice Guidelines is Associated with Compromised Overall Survival in Nasopharyngeal Carcinoma. Clin Oncol (R Coll Radiol). Jun 2016;28(6):402-9. doi: 10.1016/j.clon.2016.01.010. Epub Feb 8, 2016. PMID:26868285.
  178. OncoKB. https://www.oncokb.org/sop. Accessed January 5, 2022 to April 7, 2022.
  179. National Comprehensive Cancer Network (NCCN). https://www.nccn.org/home. Accessed January 5, 2022 to April 7, 2022.
  180. McGlaughon JL, Goldstein JL, Thaxton C, Hemphill SE, Berg JS. The progression of the ClinGen gene clinical validity classification over time. Hum Mutat. Nov 2018;39(11):1494-1504. doi:10.1002/humu.23604. PMID:30311372; PMCID:PMC6190678.
  181. ClinGen. https://clinicalgenome.org/site/assets/files/2164/clingen_standard_gene-disease_validity_recuration_procedures_v1.pdf. Accessed January 5, 2022 to April 7, 2022.
  182. Arbustini E, Behr ER, Carrier L, et al. Interpretation and actionability of genetic variants in cardiomyopathies: a position statement from the European Society of Cardiology Council on cardiovascular genomics. European Heart Journal. May 2022; 43(20): 1901–1916. https://doi.org/10.1093/eurheartj/ehab895.
  183. cBioPortal for Cancer Genomics. https://www.cbioportal.org/. Accessed March 16, 2023.
  184. Hayashi Y, Fujita K, Netto GJ, Nonomura N. Clinical Application of TERT Promoter Mutations in Urothelial Carcinoma. Front Oncol. Jul 29, 2021;11:705440. doi:10.3389/fonc.2021.705440. PMID:34395278; PMCID:PMC8358429.
  185. Finkelstein S and Swalsky P. Topographic genotyping for determining the diagnosis, malignant potential, and biologic behavior of pancreatic cysts and related conditions. Google Patents. https://patents.google.com/patent/US20060088870A1/en. Accessed March 3, 2023.
  186. Castle Biosciences DecisionDx-Melanoma Final Report Sample. https://castletestinfo.com/wp-content/uploads/2022/12/SAMPLE_CM_Final_Report_unknown_2B.pdf. Accessed March 4, 2023.
  187. Castle Biosciences DecisionDx-UM Final Report Sample. https://castlebiosciences.com/wp-content/uploads/2015/01/DecisionDx-UM-Sample-Report.pdf. Accessed March 4, 2023.
  188. von Ahlfen S, Missel A, Bendrat K, Schlumpberger M. Determinants of RNA quality from FFPE samples. PLoS One. Dec 5, 2007;2(12):e1261. doi:10.1371/journal.pone.0001261. PMID:18060057; PMCID:PMC2092395.
  189. Sun F, Bruening W, Uhl S, Ballard R, Tipton K, Schoelles K. Quality, Regulation and Clinical Utility of Laboratory-developed Molecular Tests. Rockville, MD: Agency for Healthcare Research and Quality (US); 2010.
  190. Engstrom PF, Bloom MG, Demetri GD, et al. NCCN molecular testing white paper: effectiveness, efficiency, and reimbursement. J Natl Compr Cancer Netw. 2011;9(Suppl 6):S1–16.
  191. Balch CM, Gershenwald JE, Soong SJ, et al. Final version of 2009 AJCC melanoma staging and classification. J Clin Oncol. Dec 20, 2009;27(36):6199-206. doi:10.1200/JCO.2009.23.4799. Epub 2009 Nov 16. PMID:19917835; PMCID:PMC2793035.
  192. Gershenwald JE, Scolyer RA, Hess KR, et al; for members of the American Joint Committee on Cancer Melanoma Expert Panel and the International Melanoma Database and Discovery Platform. Melanoma staging: Evidence-based changes in the American Joint Committee on Cancer eighth edition cancer staging manual. CA Cancer J Clin. Nov 2017;67(6):472-492. doi:10.3322/caac.21409. Epub Oct 13, 2017. PMID:29028110; PMCID:PMC5978683.
  193. Hall KH and Rapini RP. Acral Lentiginous Melanoma. StatPearls. July 2022. https://www.ncbi.nlm.nih.gov/books/NBK559113/. Accessed March 4, 2023.
  194. Yamamoto M, Sickle-Santanello B, Beard T, et al. The 31-gene expression profile test informs sentinel lymph node biopsy decisions in patients with cutaneous melanoma: results of a prospective, multicenter study. Curr Med Res Opin. Jan 16, 2023:1-7. doi:10.1080/03007995.2023.2165813. Epub ahead of print. PMID:36617959.
  195. Berger AC, Davidson RS, Poitras JK, et al. Clinical impact of a 31-gene expression profile test for cutaneous melanoma in 156 prospectively and consecutively tested patients. Curr Med Res Opin. Sep 2016;32(9):1599-604. doi:10.1080/03007995.2016.1192997. Epub Jun 3, 2016. PMID:27210115.
  196. Farberg AS, Glazer AM, White R, Rigel DS. Impact of a 31-gene Expression Profiling Test for Cutaneous Melanoma on Dermatologists' Clinical Management Decisions. J Drugs Dermatol. May 1, 2017;16(5):428-431. PMID:28628677.
  197. Svoboda RM, Glazer AM, Farberg AS, Rigel DS. Factors Affecting Dermatologists' Use of a 31-Gene Expression Profiling Test as an Adjunct for Predicting Metastatic Risk in Cutaneous Melanoma. J Drugs Dermatol. May 1, 2018;17(5):544-547. PMID:29742186.
  198. Mirsky R, Prado G, Svoboda R, Glazer A, Rigel D. Management Decisions Made by Physician Assistants and Nurse Practitioners in Cutaneous Malignant Melanoma Patients: Impact of a 31-Gene Expression Profile Test. J Drugs Dermatol. Nov 1, 2018;17(11):1220-1223. PMID:30500144.
  199. Schuitevoerder D, Heath M, Cook RW, et al. Impact of Gene Expression Profiling on Decision-Making in Clinically Node Negative Melanoma Patients after Surgical Staging. J Drugs Dermatol. Feb 1, 2018;17(2):196-199. PMID:29462228.
  200. Urovysion Bladder Cancer Kit. https://www.molecular.abbott/us/en/products/oncology/urovysion-bladder-cancer-kit. Accessed March 4, 2023.
  201. Arber DA, Orazi A, Hasserjian RP, et al. International Consensus Classification of Myeloid Neoplasms and Acute Leukemias: integrating morphologic, clinical, and genomic data. Blood. 2022;140(11):1200-1228. doi:10.1182/blood.2022015850.
  202. Acanda De La Rocha AM, Fader M, Coats ER, et al. Clinical Utility of Functional Precision Medicine in the Management of Recurrent/Relapsed Childhood Rhabdomyosarcoma. JCO Precis Oncol. 2021;5:PO.20.00438. Published Oct 27, 2021. doi:10.1200/PO.20.00438.
  203. Albitar M, Zhang H, Goy A, et al. Determining clinical course of diffuse large B-cell lymphoma using targeted transcriptome and machine learning algorithms. Blood Cancer J. 2022;12(25). https://doi.org/10.1038/s41408-022-00617-5.
  204. Azzam D, Volmar CH, Hassan AA, Perez A, et al. A Patient-Specific Ex Vivo Screening Platform for Personalized Acute Myeloid Leukemia (AML) Therapy. Blood 2015; 126 (23): 1352. https://doi.org/10.1182/blood.V126.23.1352.1352.
  205. Daver N, Venugopal S, Ravandi F. FLT3 mutated acute myeloid leukemia: 2021 treatment algorithm. Blood Cancer J. 2021;11(5):104. Published May 27, 2021. doi:10.1038/s41408-021-00495-3.
  206. Finkelstein SD, Sistrunk JW, Malchoff C, et al. A Retrospective Evaluation of the Diagnostic Performance of an Interdependent Pairwise MicroRNA Expression Analysis with a Mutation Panel in Indeterminate Thyroid Nodules. Thyroid. 2022;32(11):1362-1371. doi:10.1089/thy.2022.0124.
  207. Greenhaw BN, Covington KR, Kurley SJ et al. Molecular Risk Prediction in Cutaneous Melanoma: A Meta-analysis of the 31-gene expression profile prognostic test in 1,479 patients. Journal of the American Academy of Dermatology. 2020; 83(3): 745-753. https://doi.org/10.1016/j.jaad.2020.03.053.
  208. Harris NL, Jaffe ES, Diebold J, et al. The World Health Organization classification of hematological malignancies report of the Clinical Advisory Committee Meeting, Airlie House, Virginia, November 1997. Mod Pathol. 2000;13(2):193-207. doi:10.1038/modpathol.3880035.
  209. He J, Abdel-Wahab O, Nahas MK, et al. Integrated genomic DNA/RNA profiling of hematologic malignancies in the clinical setting. Blood. 2016;127(24):3004-3014. doi:10.1182/blood-2015-08-664649.
  210. Kiyoi H, Kawashima N, Ishikawa Y. FLT3 mutations in acute myeloid leukemia: Therapeutic paradigm beyond inhibitor development. Cancer Sci. 2020;111(2):312-322. doi:10.1111/cas.14274.
  211. Kwatra SG, Hines H, Semenov YR, Trotter SC, Holland E, Leachman S. A Dermatologist's Guide to Implementation of Gene Expression Profiling in the Management of Melanoma. J Clin Aesthet Dermatol. 2020;13(11 Suppl 1):s3-s14.
  212. Lohse I, Azzam DJ, Al-Ali H, et al. Ovarian Cancer Treatment Stratification Using Ex Vivo Drug Sensitivity Testing. Anticancer Res. 2019;39(8):4023-4030. doi:10.21873/anticanres.13558.
  213. Lynch TJ, Bell DW, Sordella R, et al. Activating mutations in the epidermal growth factor receptor underlying responsiveness of non-small-cell lung cancer to gefitinib. N Engl J Med. 2004;350(21):2129-2139. doi:10.1056/NEJMoa040938.
  214. Malani D, Kumar A, Brück O, et al. Implementing a Functional Precision Medicine Tumor Board for Acute Myeloid Leukemia. Cancer Discov. 2022;12(2):388-401. doi:10.1158/2159-8290.CD-21-0410.
  215. Malla M, Loree JM, Kasi PM, Parikh AR. Using Circulating Tumor DNA in Colorectal Cancer: Current and Evolving Practices. J Clin Oncol. 2022;40(24):2846-2857. doi:10.1200/JCO.21.02615.
  216. Martin NA, Tepper JE, Giri VN, et al. Adopting Consensus Terms for Testing in Precision Medicine. JCO Precis Oncol. 2021;5:PO.21.00027. Published Oct 6, 2021. doi:10.1200/PO.21.00027.
  217. Morin RD, Arthur SE, Hodson DJ. Molecular profiling in diffuse large B-cell lymphoma: why so many types of subtypes? Br J Haematol. 2022;196(4):814-829. doi:10.1111/bjh.17811.
  218. Morton DL, Thompson JF, Cochran AJ, et al. Final trial report of sentinel-node biopsy versus nodal observation in melanoma. N Engl J Med. 2014;370(7):599-609. doi:10.1056/NEJMoa1310460.
  219. Pakkala S, Ramalingam SS. Personalized therapy for lung cancer: striking a moving target. JCI Insight. 2018;3(15):e120858. Published Aug 9, 2018. doi:10.1172/jci.insight.120858.
  220. Pao W, Miller V, Zakowski M, et al. EGF receptor gene mutations are common in lung cancers from "never smokers" and are associated with sensitivity of tumors to gefitinib and erlotinib. Proc Natl Acad Sci U S A. 2004;101(36):13306-13311. doi:10.1073/pnas.0405220101.
  221. Sobahy TM, Tashkandi G, Bahussain D, Al-Harbi R. Clinically actionable cancer somatic variants (CACSV): a tumor interpreted dataset for analytical workflows. BMC Med Genomics. 2022;15(1):95. Published Apr 25, 2022. doi:10.1186/s12920-022-01235-7.
  222. Summers RJ, Castellino SM, Porter CC, et al. Comprehensive Genomic Profiling of High-Risk Pediatric Cancer Patients Has a Measurable Impact on Clinical Care. JCO Precis Oncol. 2022;6:e2100451. doi:10.1200/PO.21.00451.
  223. Swords RT, Azzam D, Al-Ali H, et al. Ex-vivo sensitivity profiling to guide clinical decision making in acute myeloid leukemia: A pilot study. Leuk Res. 2018;64:34-41. doi:10.1016/j.leukres.2017.11.008.
  224. Taylor J, Xiao W, Abdel-Wahab O. Diagnosis and classification of hematologic malignancies on the basis of genetics. Blood. 2017;130(4):410-423. doi:10.1182/blood-2017-02-734541.
  225. Tie J, Wang Y, Tomasetti C, et al. Circulating tumor DNA analysis detects minimal residual disease and predicts recurrence in patients with stage II colon cancer. Sci Transl Med. 2016;8(346):346ra92. doi:10.1126/scitranslmed.aaf6219.
  226. Whiteman DC, Baade PD, Olsen CM. More people die from thin melanomas (≤1 mm) than from thick melanomas (>4 mm) in Queensland, Australia. J Invest Dermatol. 2015;135(4):1190-1193. doi:10.1038/jid.2014.452.
  227. Yadav S, Couch FJ. Germline Genetic Testing for Breast Cancer Risk: The Past, Present, and Future. American Society of Clinical Oncology educational book. American Society of Clinical Oncology. Annual Meeting. 2019; 39: 61-74. doi:10.1200/EDBK_238987.
  228. Yip L, Gooding WE, Nikitski A, et al. Risk assessment for distant metastasis in differentiated thyroid cancer using molecular profiling: A matched case-control study. Cancer. 2021;127(11):1779-1787. doi:10.1002/cncr.33421.
  229. Landaas EJ, Eckel AM, Wright JL et al. Application of Health Technology Assessment (HTA) to Evaluate New Laboratory Tests in a Health System: A Case Study of Bladder Cancer Testing. Acad Pathol. 2020; 7:2374289520968225.
  230. Chai CA, Yeoh WS, Rajandram R et al. Comparing CxBladder to Urine Cytology as Adjunct to Cystoscopy in Surveillance of Non-muscle Invasive Bladder Cancer-A Pilot Study. Front Surg. 2021; 8:659292.

Revision History Information

Revision History Date | Revision History Number | Revision History Explanation | Reasons for Change
N/A

Associated Documents

Keywords

N/A
