PROPOSED Local Coverage Determination (LCD)

Automated Detection and Quantification of Brain MRIs

DL40224

Proposed LCD
Proposed LCDs are works in progress that are available on the Medicare Coverage Database site for public review. Proposed LCDs are not necessarily a reflection of the current policies or practices of the contractor.

Document Note

Note History

Contractor Information

Proposed LCD Information

Document Information

Source LCD ID
L40224
Proposed LCD ID
DL40224
Original ICD-9 LCD ID
Not Applicable
Proposed LCD Title
Automated Detection and Quantification of Brain MRIs
Proposed LCD in Comment Period
Source Proposed LCD
Original Effective Date
N/A
Revision Effective Date
N/A
Revision Ending Date
N/A
Retirement Date
ANTICIPATED 11/12/2026
Notice Period Start Date
N/A
Notice Period End Date
N/A

CPT codes, descriptions, and other data only are copyright 2025 American Medical Association. All Rights Reserved. Fee schedules, relative value units, conversion factors and/or related components are not assigned by the AMA, are not part of CPT, and the AMA is not recommending their use. The AMA does not directly or indirectly practice medicine or dispense medical services. The AMA assumes no liability for data contained or not contained herein. CPT is a registered trademark of the American Medical Association.

Current Dental Terminology © 2025 American Dental Association. All rights reserved.

Copyright © 2025, the American Hospital Association, Chicago, Illinois. Reproduced with permission. No portion of the AHA copyrighted materials contained within this publication may be copied without the express written consent of the AHA. AHA copyrighted materials including the UB‐04 codes and descriptions may not be removed, copied, or utilized within any software, product, service, solution, or derivative work without the written consent of the AHA. If an entity wishes to utilize any AHA materials, please contact the AHA at ub04@aha.org or 312‐422‐3366.

Making copies or utilizing the content of the UB‐04 Manual, including the codes and/or descriptions, for internal purposes, resale and/or to be used in any product or publication; creating any modified or derivative work of the UB‐04 Manual and/or codes and descriptions; and/or making any commercial use of UB‐04 Manual or any portion thereof, including the codes and/or descriptions, is only authorized with an express license from the American Hospital Association. The American Hospital Association (the "AHA") has not reviewed, and is not responsible for, the completeness or accuracy of any information contained in this material, nor was the AHA or any of its affiliates, involved in the preparation of this material, or the analysis of information provided in the material. The views and/or positions presented in the material do not necessarily represent the views of the AHA. CMS and its products and services are not endorsed by the AHA or any of its affiliates.

Issue

Issue Description

This LCD outlines noncoverage for this service with specific details under Coverage Indications, Limitations and/or Medical Necessity. 

Issue - Explanation of Change Between Proposed LCD and Final LCD

CMS National Coverage Policy

Title XVIII of the Social Security Act, §1862 (a)(1)(A) allows coverage and payment for only those services that are considered to be reasonable and necessary for the diagnosis or treatment of illness or injury or to improve the functioning of a malformed body member.

Title XVIII of the Social Security Act, §1862 (a)(1)(D) addresses items and services related to research and experimentation.

Title XVIII of the Social Security Act, §1862 (a)(7) states Medicare will not cover any services or procedures associated with routine physical checkups.

Title XVIII of the Social Security Act, §1833 (e) prohibits Medicare payment for any claim which lacks the necessary information to process the claim.

42 CFR §410.32 indicates that diagnostic tests may only be ordered by the treating physician (or other treating practitioner acting within the scope of his or her license and Medicare requirements).

CMS Internet-Only Manual, Pub 100-03, Medicare National Coverage Determinations Manual, Chapter 1, Part 4, §220.

The Protecting Access to Medicare Act (PAMA) of 2014, Section 218(b), established a new program to increase the rate of appropriate advanced diagnostic imaging services provided to Medicare beneficiaries.

42 CFR §414.92 codifies the Appropriate Use Criteria Program policies.

NCD 200.3 Monoclonal Antibodies Directed Against Amyloid for the Treatment of Alzheimer's Disease (AD)

Coverage Guidance

Coverage Indications, Limitations, and/or Medical Necessity

This is a non-coverage policy for artificial intelligence (AI) assistive software tools for automated detection and quantification of brain MRI.

 

Summary of Evidence

Background

There is interest in artificial intelligence algorithms (machine learning and deep learning) to automate neuroimaging analysis in an effort to improve accuracy, reduce bias, and aid in clinical decision-making.1

Structural imaging tools have the potential to reduce variance and improve diagnostic and prognostic inferences from MRI scans. However, the tools must be trained and validated in a manner that provides generalizability to broader populations. When a single data set or a small number of individuals is used to train the programs, the results may be overestimated. A systematic review was conducted that describes and compares the available tools, with the aim of assessing their translational potential into real-world clinical settings.2 Of the 8 tools identified, 2 were not approved for medical use and 1 had no associated references. Most of the tools were found to have been validated using a small number of cases and a single data set. The tools were compared based on the number of validation analyses conducted. None of the tools account for interscanner variability resulting from differences in scanners, magnetic field strength, and acquisition parameters, and they therefore lack generalizability. The authors conclude that the majority of available tools make use of multivariate machine learning methods and have the potential to open up new possibilities in personalized medicine. However, they caution that results should be interpreted with vigilance due to the limitations of these studies, especially the small sample sizes and poor methodology. They also caution that results must be interpreted in light of the patient's clinical history and symptomatology.

The American Society of Functional Neuroradiology (ASFNR) and the American Society of Neuroradiology (ASNR) acknowledge the challenges with artificial intelligence in neurology and created an Artificial Intelligence Workshop Technology Workgroup.3 This group published a critical appraisal of Artificial Intelligence (AI)-enabled imaging tools using the levels of evidence system in the American Journal of Neuroradiology. They call for critical appraisal of AI-enabled imaging tools throughout the life cycle from development to implementation using a systematic, standardized, and objective approach that can verify both the technical and clinical efficacy of the tool. A challenge in developing AI models is access to comprehensive and large data sets that can be utilized to train the technology. These data should represent the intended population and provide a diverse group from which the data may be extrapolated. This paper provides a resource for clinicians to aid in critical assessment of AI technologies to ensure safe and effective implementation into healthcare practices.

FDA-cleared devices as of the publication date of this LCD include:

  • NeuroQuant™ Medical Image Processing Software is registered as a Class II device under the FDA 510(k) pathway, intended for “automatic labeling, visualization and volumetric quantification of segmentable brain structures from a set of magnetic resonance images (MRI)”. NeuroQuant 4.0 uses the AI modalities of machine learning and deep learning to aid in identifying complex patterns in imaging data.4
  • Icobrain aria is registered as a Class II device under the FDA 510(k) pathway.5 It is described as a software-only device for assisting radiologists with the detection and quantification of amyloid-related imaging abnormalities (ARIA) on brain MRI scans for patients undergoing amyloid beta-directed antibody therapy. Icobrain aria automatically processes inputs from brain MRI scans from 2 time points and calculates, for ARIA-E (edema/sulcal effusion), the length of the longest axis computed from the segmented ARIA-E abnormalities and the number of brain sites affected by ARIA-E; and, for ARIA-H (hemorrhage/superficial siderosis), the count of stable and new T2*-GRE hypo-intensities indicated as microhemorrhages or superficial siderosis.5 Using these measurements the ARIA radiographic severity is automatically derived based on deep learning technology and reported electronically. The intended use of the device is “as a computer-assisted detection and diagnosis software to be used as a concurrent reading aid to help trained radiologists in the detection, assessment, and characterization of ARIA. The software provides information about the presence, location, size, severity and changes of ARIA-E and ARIA-H. Patient management decisions should not be made solely based on analysis by icobrain aria.” The device is not intended to replace radiologist review of images or clinical judgment and is not intended to be used to segment macrohemorrhages (diameter 10 mm or more).6
  • Icobrain is registered as a Class II device under the FDA 510(k) pathway.7-10 It is intended for “automatic labeling, visualization and volumetric quantification of segmentable brain structures from a set of MRI images”. The predicate device is NeuroQuant.
  • DeepBrain is registered as a Class II device under the FDA 510(k) pathway.11 The device is intended for “automatic labeling, quantification and visualization of segmentable brain structures from MRI images.” It is intended to be used by trained health professionals.
  • Siemens Morphometry Analysis is registered as a Class II device under the FDA 510(k) pathway.12 This product is a syngo-based post-acquisition image processing software for “viewing, manipulating, evaluating and analyzing MRI, MR PET, CT, PET, CT-PET and MR spectra using deep learning algorithms”.

Other FDA-cleared devices may also be available but are not listed, as they were not identified in the literature search.

The literature search for evidence related to quantitative analysis of brain MRI was conducted using PubMed and EBSCO with the search terms: Alzheimer’s, ARIA, imaging, artificial intelligence or AI, automated or software or computation or deep learning, machine learning or artificial neural network, multiple sclerosis, MRI and AI, brain or neurology, AI or MRI. Searches were also conducted for known products commercially available in the United States, including NeuroQuant, NeuroGage, icobrain, icobrain aria, Jung Diagnostic, Quantib, Qure, and volBrain. No randomized controlled trials (RCTs) were identified. Unpublished reports, posters, abstracts, case reports, and small case series were omitted from the review unless there was no other evidence available to consider. Review papers were utilized in the background and summary but not considered for evidence review. No guidelines or recommendations on the use of automated software were identified.

Alzheimer’s Disease

Amyloid beta (Aβ)-directed antibody therapies, such as aducanumab, lecanemab, and donanemab, are approved in the United States (U.S.) for the treatment of patients with mild cognitive impairment or mild dementia due to Alzheimer’s disease. There is evidence of slowed disease progression and improved clinical outcomes in treated patients with mild cognitive impairment due to Alzheimer’s dementia (AD); however, it is known that Aβ-directed antibody therapies increase the risk of ARIA and ARIA-related complications.13 Conditions that may increase the risk of ARIA complications include autoimmune or inflammatory conditions, seizures, or disorders associated with extensive white matter pathology. Symptoms and signs of ARIA can include new focal neurological signs, headache, confusion, altered mental status, dizziness, nausea, vomiting, fatigue, blurred vision or vision disturbances, gait disturbance, or seizure.

As novel therapies for the management of AD emerge, the need for surveillance for complications has developed. Radiologists have had to develop criteria, become familiar with the appearance of amyloid-related imaging abnormalities, and develop appropriate imaging protocols, and clinicians must also determine the optimal pathways for management of ARIA.14 This is an area of ongoing investigation; several studies have contributed to the current knowledge of these challenges, but the literature remains sparse.

Multiple grading schemes to determine the severity of ARIA have been proposed.15-18 The ARIA radiographic severity scale categorizes ARIA as mild, moderate, or severe; this classification system was used in the pivotal clinical trials for anti-amyloid immunotherapies and by the FDA for drug approval. While it is becoming the accepted standard for classification, as of the date of this LCD it has not been published in peer-reviewed literature other than as a research poster.19,20 A comparison of the Barkhof Grand Total Scale (BGTS) and the 3- and 5-point severity scales for ARIA-E demonstrated a high degree of correlation between the scales.17

To monitor for ARIA, the Appropriate Use Recommendations for aducanumab recommend MRI before the 5th, 7th, 9th, and 12th infusions to improve detection.21 The criteria recommend discontinuation of aducanumab for any macrohemorrhage, more than 1 area of superficial siderosis, more than 10 microhemorrhages occurring since the initiation of treatment, more than 2 episodes of ARIA, severe symptoms of ARIA, or development of any medical condition requiring anticoagulation. The protocol allows continuation of aducanumab for mild ARIA-E or ARIA-H with monthly MRI monitoring and discontinuation for worsening symptoms.

The Appropriate Use Recommendations for lecanemab recommend obtaining MRI scans at baseline and prior to the 5th, 7th, 14th, and 26th infusions. They explain that 81% of ARIA-E cases occur early and resolve spontaneously within 4 months of radiographic detection.22 The protocol allows continuation of lecanemab for mild ARIA-E or ARIA-H with monthly MRI monitoring and discontinuation for worsening symptoms. Once the ARIA resolves or stabilizes, monthly imaging can be discontinued. The criteria state that the imaging should be read by knowledgeable MRI readers proficient in the detection and interpretation of ARIA, or by clinicians skilled in the performance of lumbar puncture.

The Appropriate Use Recommendations for donanemab advise MRI prior to the 2nd, 3rd, 4th, and 7th infusions, and prior to the 12th infusion in those at high risk for ARIA. The protocol allows continuation of donanemab for mild ARIA-E or ARIA-H with monthly MRI monitoring and discontinuation for worsening symptoms.6

Since ARIA is a new entity, standards for radiographic interpretation of ARIA are being developed and published. Those interpreting the imaging need education and training to ensure accuracy and consistency in reporting. The American College of Radiology, the Alzheimer’s Association, and the Radiological Society of North America are all offering training and continuing medical education.19

Radiology

A retrospective report on ARIA reviewed the imaging of 262 subjects in Phase 2 studies investigating subjects with mild to moderate AD treated with bapineuzumab, a humanized monoclonal antibody against amyloid β. Two neuroradiologists independently reviewed 2572 MRI scans from the 262 participants. The readers were masked to the patients' treatment arm. Patients were included in the risk analysis (n=210) if they did not have evidence of ARIA-E on their pretreatment MRI, had received bapineuzumab, and had at least one MRI scan after treatment. Thirty-six patients (17%) developed ARIA-E during treatment with bapineuzumab, of whom 28 (78%) were asymptomatic and 8 were symptomatic. Fifteen (42%) of the ARIA-E cases were detected only on re-read of the MRIs and had not been detected previously. All of these patients were asymptomatic and had fewer brain regions involved (mean 1.3, SD=0.5) than patients identified during the clinical studies (mean 2.6, SD=2.4, p=0.0193). Thirteen of the patients whose ARIA-E findings were not detected during the clinical trial continued the bapineuzumab infusions for up to 2 years and remained asymptomatic.18

Using the same study population, investigators sought to describe the imaging characteristics of the ARIA-E and ARIA-H identified.15 The rate of ARIA-H was reported as 12.4% (26/210). They also found that 49% of those with ARIA-E had an associated appearance of ARIA-H. The authors conclude this may suggest a common pathophysiologic mechanism. All scans were reviewed by local MR imaging readers and subsequently independently reviewed by the same 2 neuroradiologists as part of the study protocol. The reported inter-reader kappa value of 0.76 indicated high inter-reader reliability, with 94% agreement between the neuroradiologists regarding the presence or absence of ARIA-E.
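For reference, the kappa statistic corrects raw percentage agreement for the agreement expected by chance; a minimal illustration of the relationship is shown below (the chance-agreement value used is hypothetical and is not reported in the cited study):

    \kappa = \frac{p_o - p_e}{1 - p_e}

where p_o is the observed agreement and p_e is the agreement expected by chance. For example, an observed agreement of p_o = 0.94 with a hypothetical chance agreement of p_e = 0.75 yields \kappa = (0.94 - 0.75)/(1 - 0.75) = 0.76.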

In a retrospective analysis of 242 patients from this same population, ARIA-E was detected more frequently by trained neuroradiologists than by local site radiologists.23 The MRIs were performed in patients with mild to moderate AD in a Phase III trial of bapineuzumab. Seventy-six cases of ARIA-E not detected on the initial read were reported on the final MRI review by the expert radiologists, including 51 cases not identified by central/local readers. These represented low radiologic severity. A final read analysis found that the readers’ ability to detect ARIA-E improved as the study duration increased, with the majority of occurrences of ARIA-E later in the study identified by the local site radiologists, suggesting that the ability to detect ARIA improved with increasing experience. It is unclear whether the outside readers were using the same imaging criteria, what the duration and type of training received was, or whether this finding resulted in any treatment changes between the groups. This supports the need for appropriate training for radiologists reading these studies and continued standardization of findings to ensure consistency between readers. The clinical significance of this finding is not determined, as none of those with mild ARIA-E had symptoms despite continuation of therapy and the sample size was too small to generalize to a larger population.

A volumetric analysis of structural MRI images of the brain was tested and reported to have a high correlation with independent computer-aided manual segmentation for detection of the atrophy changes seen in mild AD.24 Using the Open Access Series of Imaging Studies database, 40 subjects with mild probable AD were compared to healthy controls. These images were processed by the NeuroQuant software package. The investigators reported that the volumetric results obtained by the software showed a high correlation and could be of benefit in the evaluation of brain atrophy. The study is limited by a very small sample size and the lack of comparison to expert readers in the same population, and represents very low-quality evidence.

A study reviewing the MRIs of 122 patients with dementia compared NeuroQuant readings with radiologist readings in an effort to determine whether the automated software could distinguish AD from other types of dementia. The authors concluded that the software could not be used alone, as the changes in brain segments were not specific for AD.25

A prospective study evaluated 40 patients’ brain MRIs on 6 scanners from 5 institutions with both NeuroQuant quantitative analysis and neuroradiologist readings. Image processing was conducted with FAST-DL, a DICOM-based, convolutional neural network-dependent deep learning AI enhancement software product called SubtleMR. Clinical classification performance was compared for standard of care scans, FAST-DL, and NeuroQuant. The authors reported FAST-DL was statistically superior to standard of care in subjective image quality for perceived signal to noise ratio (SNR), sharpness, artifact reduction, anatomic/lesion conspicuity, and image contrast (all P values < 0.008), despite a 60% reduction in sequence scan time. They conclude that deep learning can provide 60% faster image acquisition with statistically comparable perceived image quality and accuracy comparable to standard of care scans.26 This study is limited by small sample size, use of a vendor-based software for comparison, risk of bias, and conflicts of interest among investigators.

Clinical Validity (or Technical efficacy)

Sima et al.27 conducted a diagnostic study to assess the clinical performance of an AI-based software tool for assisting radiological interpretation of brain MRI scans in patients monitored for ARIA. The study enrolled 16 US Board of Radiology-certified radiologists to perform radiological reading with (assisted) and without (unassisted) the software. A total of 199 cases, where each case consisted of a pre-dosing baseline and a post-dosing follow-up MRI of patients from the aducanumab clinical trials PRIME (NCT01677572), EMERGE (NCT02484547), and ENGAGE (NCT02477800), were retrospectively evaluated. End points were the difference in diagnostic accuracy between assisted and unassisted detection of ARIA-E and ARIA-H independently, assessed with the area under the receiver operating characteristic curve (AUC).

Demographics included a mean (SD) age of 70.4 (7.2) years; 105 (52.8%) were female; 23 (11.6%) were Asian, 1 (0.5%) was Black, 157 (78.9%) were White, and 18 (9.0%) were of other or unreported race and ethnicity. Among the 16 radiological readers included, 2 were specialized neuroradiologists (12.5%), 11 were male (68.8%), 7 worked in academic hospitals (43.8%), and they had a mean (SD) of 9.5 (5.1) years of experience. Radiologists assisted by the software were significantly superior in detecting ARIA compared with unassisted radiologists, with a mean assisted AUC of 0.87 (95% CI, 0.84-0.91) for ARIA-E detection (AUC improvement of 0.05 [95% CI, 0.02-0.08]; P = .001) and 0.83 (95% CI, 0.78-0.87) for ARIA-H detection (AUC improvement of 0.04 [95% CI, 0.02-0.07]; P = .001). Sensitivity was higher in assisted reading compared with unassisted reading (87% vs 71% for ARIA-E detection; 79% vs 69% for ARIA-H detection). Specificity remained above 80% for the detection of both ARIA types. The software provided the greatest improvement in the detection of mild cases (70% compared to 47%). The unassisted readers distinguished ARIA grades well but had higher inter-reader agreement with software assistance. Time for reading was similar in both groups. The authors concluded that radiological reading performance for ARIA detection and diagnosis was significantly better when using the AI-based assistive software. The study is limited by concerns for indirectness (small sample size), lack of generalizability (limited representation in the population), and risk of software errors due to computer learning from individual readers, which may have been trained on discrepant data. The software was not trained to detect cerebral hemorrhages larger than 1 cm. The software assists but does not replace the need for qualified radiologists to read the studies. An additional validation study compared icobrain to FreeSurfer with comparable results.28 The lead author is employed by the device maker.
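As an illustration of the end points reported in this study, the sketch below computes an AUC from reader confidence scores and sensitivity/specificity at a single operating point. This is a minimal sketch using hypothetical labels, scores, and threshold, not data from the cited trial.

    # Minimal sketch with hypothetical data: AUC plus sensitivity/specificity
    # at one operating point, using scikit-learn.
    import numpy as np
    from sklearn.metrics import roc_auc_score, confusion_matrix

    # 1 = ARIA present on the reference read, 0 = absent (hypothetical labels)
    y_true = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
    # Hypothetical reader suspicion scores on a 0-100 scale
    scores = np.array([88, 72, 95, 40, 67, 30, 12, 52, 8, 25])

    auc = roc_auc_score(y_true, scores)        # ranking performance over all thresholds
    y_pred = (scores >= 50).astype(int)        # one illustrative operating point
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"AUC={auc:.2f}, sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")

With these hypothetical values the AUC is 0.96 and both sensitivity and specificity are 0.80, illustrating that AUC summarizes performance across all thresholds while sensitivity and specificity describe one chosen operating point.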

A study compared the MRIs from healthy controls (n=90) to those of patients with subjective cognitive decline (n=930), mild cognitive impairment (n=357), and AD (n=820). Icobrain dm results were compared to the FreeSurfer software, and the investigators reported icobrain dm had fewer failures, was faster, and improved clinical accuracy.29 FreeSurfer is not available for clinical use.

An empirical study in South Korea analyzed the MRIs of 98 patients with AD using the VUNO Med-DeepBrain AD (DBAD) deep learning algorithm. The investigators compared the results of the DBAD imaging reads to those of 3 expert readers (ME) and reported comparable accuracy (87.1% for DBAD and 84.3% for ME), sensitivity (93.3% for DBAD and 80.0% for ME), and specificity (85.5% for DBAD and 85.5% for ME).30

Clinical Utility (or Clinical efficacy)

There are no published studies to date on clinical utility. The software was found to be particularly helpful in improving the detection of mild cases of ARIA-E; however, it has not been determined whether this improves patient outcomes.27 The Appropriate Use Recommendations allow continuation of aducanumab for mild ARIA-E or ARIA-H with monthly MRI monitoring and discontinuation for worsening symptoms. There is a lack of evidence as to whether continuation of medication in these cases correlates with improved outcomes or increases the risk of serious adverse events.

Multiple Sclerosis

Multiple sclerosis is an autoimmune demyelinating disease impacting the central nervous system, and the diagnosis is made by MRI findings, laboratory findings, and clinical data. This has led to ongoing investigations into developing machine learning tools to aid in the diagnosis of multiple sclerosis (MS).

Several review papers have identified multiple investigations for AI models and the potential this technology may bring but acknowledge these investigations did not yield a clinically usable model.31-34

A 2018 systematic review included 30 articles, of which 18% utilized artificial neural network methods, and reported overall high sensitivity, specificity, and accuracy of the reasoning methods.35

A 2022 systematic review included 38 studies focusing on deep learning or AI to analyze any imaging modality for the purpose of diagnosing MS. The authors conclude this is a growing field that can result in drastic improvements in the future.36 Another systematic review with meta-analysis from the same group in 2023 included 41 articles (n=5989) and reported high precision in MS diagnosis for AI studies (95% CI: 88%, 97%), suggesting that AI can aid the clinician in accurate diagnosis of MS.37 The meta-analysis is limited by very high heterogeneity, with an overall I²=93%, limiting the validity of these results. The authors conclude that more studies are necessary to create a generalizable algorithm.
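For reference, the I² statistic cited above expresses the proportion of variability across studies that is attributable to heterogeneity rather than chance. It is commonly derived from Cochran's Q and the number of studies k:

    I^2 = \max\left(0, \frac{Q - (k - 1)}{Q}\right) \times 100\%

By convention, values of roughly 75% or higher are described as high heterogeneity, so an I² of 93% indicates that the pooled estimate should be interpreted cautiously.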

Another 2022 systematic review included 66 papers addressing the development of classifiers for MS identification or measurement of its progression. The authors acknowledge the potential benefits of this approach if applied appropriately and provide guidance for further research.1

A retrospective study analyzed the MRIs of patients with MS using the icobrain software platform; of 6826 MRIs, 1207 MRI pairs met inclusion criteria. The investigators reported that icobrain could be utilized for percentage brain volume change determination based on strict selection criteria.38 Another study explored the potential role of icobrain and an MS app to inform treatment changes in a small population.39

Volumetric data for patients with MS were analyzed on same-scanner and different-scanner MRI pairs. Of 6826 MRIs, 85% had appropriate volumetric sequences; 4446 serial MRI pairs were analyzed and 3335 (75%) met inclusion criteria. The percentage brain volume change (PBVC) of the included MRI pairs showed a variance of 0.78% for same-scanner pairs and 0.80% for different-scanner pairs, but further selection of the included MRI pairs with the best variance resulted in 1885 (42%) MRI pairs with a PBVC variance of 0.34%. The authors acknowledge the challenges and limitations of brain volumetry measurements and the need for standardization for adequate performance. The authors conclude icobrain should be utilized for PBVC determination only on selected MRIs with the best alignment similarity and with strict selection criteria for the included MRI pairs to reduce PBVC variability.38
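For context, percentage brain volume change is often expressed relative to the baseline volume; a simple volume-based form is shown below. Registration-based tools such as the one cited may derive PBVC differently (for example, from brain-edge displacement), so this expression is illustrative only:

    \mathrm{PBVC} = \frac{V_{\text{follow-up}} - V_{\text{baseline}}}{V_{\text{baseline}}} \times 100\%

For example, a baseline brain volume of 1500 mL and a follow-up volume of 1494 mL would correspond to a PBVC of -0.4%.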

Brain tumors

Two poster abstracts and several review papers were identified.40-43

A report on a novel AI-driven application to aid in brain tumor detection from MRI images describes the development of EfficientNetB2. The report focuses on the proposed technology and evaluation of its performance in a non-clinical setting.44 EfficientNetB2 is not FDA cleared as a medical device.

Epilepsy

A comparison of MRI images read by neuroradiologists and analyzed with NeuroQuant software was performed in 144 patients with temporal lobe epilepsy (TLE). The investigators found specificity similar to neuroradiologist visual MR imaging analysis (90.4% versus 91.6%; P = .99) but a lower sensitivity (69.0% versus 93.0%, P < .001). The positive predictive value of NeuroQuant analysis was comparable with visual MR imaging analysis (84.0% versus 89.1%), whereas the negative predictive value was not comparable (79.8% versus 95.0%). The authors conclude that the neuroradiologists had a higher sensitivity, likely due to the software’s inability to evaluate changes in hippocampal T2 signal or architecture.45 They conclude the technology may aid in evaluations when a neuroradiologist is not available; however, the product information states it is intended as an adjunct to, not a replacement for, the radiologist.
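To illustrate how the four measures reported above are related, the sketch below computes sensitivity, specificity, PPV, and NPV from a 2x2 confusion matrix. The counts are hypothetical and are not taken from the cited study.

    # Minimal sketch with hypothetical 2x2 counts: sensitivity, specificity, PPV, NPV.
    def diagnostic_metrics(tp, fp, fn, tn):
        sensitivity = tp / (tp + fn)   # positives detected among those with the finding
        specificity = tn / (tn + fp)   # negatives correctly called among those without it
        ppv = tp / (tp + fp)           # probability the finding is present given a positive call
        npv = tn / (tn + fn)           # probability the finding is absent given a negative call
        return sensitivity, specificity, ppv, npv

    # Hypothetical counts (150 studies in total), for illustration only
    sens, spec, ppv, npv = diagnostic_metrics(tp=45, fp=5, fn=15, tn=85)
    print(f"sensitivity={sens:.1%}, specificity={spec:.1%}, PPV={ppv:.1%}, NPV={npv:.1%}")

With these counts the sketch prints sensitivity 75.0%, specificity 94.4%, PPV 90.0%, and NPV 85.0%, showing how the predictive values depend on both the error rates and the proportion of affected cases in the sample.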

A prospective study measured volumetric MRI imaging data for 34 patients with TLE and compared them to 116 control subjects.46 Structural volumes were calculated using automated quantitative MRI imaging analysis software (NeuroQuant). Results of the quantitative MRI imaging were compared with visual detection of atrophy and with histological specimens when available. Quantitative MRI imaging results showed 91-97% concordance with visual inspection of the volumetric MRI imaging studies by two experienced neuroradiologists for hippocampal asymmetry. They reported the software discriminated patients with TLE from control subjects with high sensitivity (86.7%-89.5%) and specificity (92.2%-94.1%). The authors conclude that the software can provide “an expert eye” in centers that lack expertise; however, the FDA indications for this software state it is intended to be an adjunct to the radiologist reader, not a substitute. Limitations of the study include lack of generalizability to non-expert readers, small sample size, and histological confirmation being available for only 12 patients (35%).

A retrospective report included 36 patients with mesial temporal sclerosis (MTS), which is important to detect in temporal lobe epilepsy as it often guides surgical intervention. One of the features of MTS is hippocampal volume loss. Using electronic medical records, researchers identified patients with proven MTS and analyzed the imaging with volumetric assessment software (NeuroQuant). They reported an estimated accuracy for the neuroradiologists of 72.6% with a kappa statistic of 0.512 (95% CI, 0.388–0.787). They conclude that the NeuroQuant software compared favorably with trained neuroradiologists in predicting MTS.47

Other literature identified was limited to review papers, case reports and series and not included in this assessment.

Traumatic Brain Injury

Twenty MRI images from patients with mild to moderate traumatic brain injury (TBI) were analyzed with NeuroQuant automated software and compared to the attending radiologists’ interpretations. The investigators reported that the radiologists’ traditional approach found at least one sign of atrophy in 10.0% of patients, compared to 50.0% of patients with NeuroQuant, and concluded that NeuroQuant had higher sensitivity.48 A subsequent expanded study with 24 subjects found similar results.49 These studies are limited by very small sample size and by lack of knowledge as to whether the atrophy was caused by TBI or by other conditions that can cause similar findings. The authors state “we have never seen an MRI report on a patient that used a qualitative rating scale to assess level of atrophy or ventricular enlargement. With the rapid advances in computer-based technology, instead of focusing on understanding and developing the approach based on qualitative ratings, it may be more advantageous to focus on the computer-automated approaches.” In the absence of qualitative ratings, a comparison between the gold standard visual approach and the quantitative approach is not established, and the claim that the software was more accurate is not validated.

Comparison between technologies

A review paper focused on AI for brain neuroanatomical segmentation in MRI imaging concludes that accuracy is high and performance fast overall. The technology is challenged by limited robustness to anatomical variability and pathology, related to the lack of the large datasets necessary for sufficient training.43 One of the challenges in developing automated volumetry software is the lack of a gold standard for brain measurements to establish whether the software correlates with reality. In current software products, measurements of brain segments are made with different methods and tools and therefore lack standardized measures for comparison. Efforts to understand the performance of different software modalities are ongoing, as consistency between programs is an important component of creating reliable standards that can be applied to clinical practice.

Multiple investigations compared the inter-method reliability between the NeuroQuant and FreeSurfer computer-automated programs for measuring MRI brain volume. These demonstrate high inter-method reliability between the modalities, with 2 of 21 brain regions being less reliable.50,51 Using 56 MRIs from patients with AD or MS, investigators compared results between the NeuroQuant and volBrain automated brain analysis software and found high reliability except in the thalamus and amygdala, where reliability was poor. A study of 115 MRIs from patients with clinically isolated syndrome, measured with both NeuroQuant and FMRIB's Integrated Registration and Segmentation Tool (FIRST), found some variability between the modalities, with larger volumes achieving better agreement.52 Another investigation compared NeuroQuant to DeepBrain and found significant differences in many brain regions.30 A retrospective report compared the images of 87 patients with memory impairment using FreeSurfer, NeuroQuant, and Heuron AD and found significant differences between the programs. The Heuron AD indication is for amyloid PET scans.53
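As an illustration of one way such inter-method agreement can be quantified, the sketch below computes a Pearson correlation and the mean absolute percentage difference between two sets of volume estimates for the same structure in the same subjects. The volumes and the two-program framing are hypothetical, not data from the cited comparisons.

    # Minimal sketch with hypothetical hippocampal volumes (mL) from two automated programs.
    import numpy as np
    from scipy.stats import pearsonr

    vol_method_a = np.array([3.1, 2.8, 3.5, 2.4, 3.0, 2.9, 3.3, 2.6])  # same subjects, program A
    vol_method_b = np.array([3.0, 2.9, 3.6, 2.3, 3.1, 2.8, 3.4, 2.7])  # same subjects, program B

    r, p_value = pearsonr(vol_method_a, vol_method_b)
    mean_abs_pct_diff = np.mean(np.abs(vol_method_a - vol_method_b) / vol_method_a) * 100
    print(f"Pearson r={r:.2f} (p={p_value:.3f}), mean absolute % difference={mean_abs_pct_diff:.1f}%")

A high correlation alone does not establish interchangeability; systematic offsets between programs can persist even when correlation is strong, which is why standardized reference measurements are needed.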

A study compared brain volumetrics in MS using Structural Image Evaluation using Normalisation of Atrophy-Cross-sectional (SIENAX) to NeuroQuant and MSmetrix.54 SIENAX is widely used in cross-sectional MS studies, but its clinical application is limited. The authors compared the performance of NeuroQuant and MSmetrix to SIENAX and concluded the results were comparable.

FreeSurfer, volBrain, and FIRST are not FDA cleared and are used for research purposes.

 

Analysis of Evidence (Rationale for Determination)

Monoclonal antibody therapies directed against amyloid are a novel treatment option for Alzheimer’s disease. As of the publication date of this LCD, this drug class is covered under NCD 200.3 as part of the coverage with evidence development (CED) pathway. To receive coverage, patients must meet the eligibility criteria within the NCD, and the study must be CMS-approved and registered on the ClinicalTrials.gov website. These investigations will aid in the understanding of the complications that result from these treatments and their optimal management. It is expected that those conducting the studies have the expertise required for the study, including appropriately trained radiologists and clinicians to diagnose and manage ARIA and other complications.

Software has been developed to facilitate automated detection and quantification of ARIA. The investigators have conducted a clinical validity study demonstrating that the software can detect ARIA and may have improved diagnostic capabilities as compared to community-based readers, especially in the detection of mild ARIA findings.27 The community-based readers referred to were not in the U.S., and the population studied was part of a drug trial for a drug not approved by the FDA. This population also included both mild and moderate AD, so it lacks generalizability to the U.S. population.15,18,23 There is no clinical utility evidence demonstrating that automated detection and quantification of ARIA improves patient outcomes or contributes to patient management decisions. The use of AI-augmented technology does not replace the need for appropriate training for readers, and the results are not intended to be used without physician readers. CGS Administrators does not consider automated detection and quantification of brain MRI imaging reasonable and necessary.

While investigations have been exploring the potential of automated quantification technology for the evaluation of ARIA, MS, TBI, epilepsy, brain tumors, and other neurological conditions, this work has been challenged by the lack of established standards for measurements and by limited access to the large datasets needed to train the devices. While expert radiologists read the images based on visual patterns, these programs quantify brain volumes. While this is promising, there is a lack of standards to establish what the normal values for brain volumes should be, and each program uses proprietary data, so results are not interchangeable. There is not sufficient diversity within the data sets used to train the models to ensure changes based on age, gender, or ethnicity are accounted for. This is especially pertinent in the Medicare population, as there are changes to brain volume related to age, and with the lack of standardized data it is challenging at this time to ensure subtle changes represent pathology and not variations of normal. At this time there is not sufficient clinical utility or validity data, and use of this technology is considered investigational and not covered. CGS will continue to monitor the progression of research for these devices.

Proposed Process Information

Synopsis of Changes
Changes: Not Applicable
Fields Changed: N/A
Associated Information
N/A
Sources of Information
N/A
Bibliography
  1. Hossain MZ, Daskalaki E, Brüstle A, Desborough J, Lueck CJ, Suominen H. The role of machine learning in developing non-magnetic resonance imaging based biomarkers for multiple sclerosis: a systematic review. BMC Med Inform Decis Mak. 2022;22(1):242.
  2. Scarpazza C, Ha M, Baecker L, et al. Translating research findings into clinical practice: a systematic and critical review of neuroimaging-based clinical tools for brain disorders. Translational Psychiatry. 2020;10(1):107.
  3. Pham N HV, Rauschecker A, Lui Y, Niogi S, Fillipi C.G, Chang P, Zaharchuk G and Wintermark M. Critical appraisal of artificial intelligence–enabled imaging tools using the levels of evidence system. American Journal of Neuroradiology 2023;44(5):E21-E28.
  4. FDA. K170981. 510(k) premarket notification. https://www.accessdata.fda.gov/cdrh_docs/pdf17/K170981.pdf. Published 9/7/2017. Accessed 5/28/25.
  5. FDA. K240712. 510(k) premarket notification. https://www.accessdata.fda.gov/cdrh_docs/pdf17/K170981.pdf. Published 11/7/24. Accessed 5/27/25.
  6. Rabinovici GD, Selkoe DJ, Schindler SE, et al. Donanemab: Appropriate use recommendations. J Prev Alzheimers Dis. 2025;12(5):100150.
  7. FDA. K161148. 510(k) premarket notification. https://www.accessdata.fda.gov/cdrh_docs/pdf16/K161148.pdf. Published 8/9/2016. Accessed 5/30/25.
  8. FDA. K181939. 510(k) premarket notification. https://www.accessdata.fda.gov/cdrh_docs/pdf18/K181939.pdf. Published 11/6/2018. Accessed 5/27/25.
  9. FDA. K192130. 510(k) premarket notification. https://www.accessdata.fda.gov/cdrh_docs/pdf19/K192130.pdf. Published 12/13/2019. Updated 12/13/2019. Accessed 5/27/25.
  10. FDA. K180326. 510(k) premarket notification. https://www.accessdata.fda.gov/cdrh_docs/pdf18/K180326.pdf. Published 3/8/2018. Accessed 5/30/25.
  11. FDA. K231398. 510(k) premarket notification. https://www.accessdata.fda.gov/cdrh_docs/pdf23/K231398.pdf. Published 10/4/2023. Accessed 5/29/25.
  12. FDA. K182904. 510(k) premarket notification. https://www.accessdata.fda.gov/cdrh_docs/pdf18/K182904.pdf. Published 7/5/2019. Accessed 5/29/25.
  13. Wu W, Ji Y, Wang Z, et al. The FDA-approved anti-amyloid-β monoclonal antibodies for the treatment of Alzheimer's disease: a systematic review and meta-analysis of randomized controlled trials. European journal of medical research. 2023;28(1):544.
  14. Cogswell PM, Barakos J, Barkhof F, et al. Amyloid-related imaging abnormalities with emerging Alzheimer disease therapeutics: detection and reporting recommendations for clinical practice. Am J Neuroradiol. 2022;43(9):E19-E35.
  15. Barakos J, Sperling R, Salloway S, et al. MR Imaging Features of Amyloid-Related Imaging Abnormalities. American Journal of Neuroradiology. 2013;34(10):1958-1965.
  16. Barkhof F, Daams M, Scheltens P, et al. An MRI rating scale for amyloid-related imaging abnormalities with edema or effusion. AJNR Am J Neuroradiol. 2013;34(8):1550-1555.
  17. Klein G, Scelsi MA, Barakos J, et al. Comparing ARIA-E severity scales and effects of treatment management thresholds. Alzheimers Dement (Amst). 2022;14(1):e12376.
  18. Sperling R, Salloway S, Brooks DJ, et al. Amyloid-related imaging abnormalities in patients with Alzheimer's disease treated with bapineuzumab: a retrospective analysis. The Lancet Neurology. 2012;11(3):241-249.
  19. Cogswell PM, Andrews TJ, Barakos JA, et al. Alzheimer disease anti-amyloid immunotherapies: imaging recommendations and practice considerations for monitoring of amyloid-related imaging abnormalities. American Journal of Neuroradiology. 2025;46(1):24-32.
  20. Bracoud L, Fiebach JB, Purcell DD, et al. [P1–047]: Validation of a simple severity scale for assessing ARIA-E. Alzheimer's & Dementia. 2017;13(7S_Part_5):P253-P254.
  21. Cummings J, Rabinovici GD, Atri A, et al. Aducanumab: Appropriate Use Recommendations Update. The Journal of Prevention of Alzheimer's Disease. 2022;9(2):221-230.
  22. Cummings J, Apostolova L, Rabinovici GD, et al. Lecanemab: Appropriate Use Recommendations. The Journal of Prevention of Alzheimer's Disease. 2023;10(3):362-377.
  23. Ketter N, Brashear HR, Bogert J, et al. Central review of amyloid-related imaging abnormalities in two Phase III clinical trials of Bapineuzumab in mild-To-moderate Alzheimer’s Disease patients. Journal of Alzheimer’s Disease. 2017;57(2):557-573.
  24. Brewer JB, Magda S, Airriess C, Smith ME. Fully-automated quantification of regional brain volumes for improved detection of focal atrophy in Alzheimer disease. American Journal of Neuroradiology. 2009;30(3):578-580.
  25. Engedal K, Brækhus A, Andreassen OA, Nakstad PH. Diagnosis of dementia—automatic quantification of brain structures. Tidsskr Nor Laegeforen. 2012;132(15):1747-1751.
  26. Bash S, Wang L, Airriess C, et al. Deep learning enables 60% accelerated volumetric brain MRI while preserving quantitative performance: A prospective, multicenter, multireader trial. AJNR Am J Neuroradiol. 2021;42(12):2130-2137.
  27. Sima DM, Vân Phan T, Van Eyndhoven S, et al. Artificial intelligence assistive software tool for automated detection and quantification of amyloid-related imaging abnormalities. JAMA Network Open. 2024;7(2):e2355800-e2355800.
  28. Struyfs H, Sima DM, Wittens M, et al. Automated MRI volumetry as a diagnostic tool for Alzheimer's disease: Validation of icobrain dm. Neuroimage Clin. 2020;26:102243.
  29. Wittens MMJ, Sima DM, Houbrechts R, et al. Diagnostic performance of automated MRI volumetry by icobrain dm for Alzheimer's Disease in a clinical setting: A REMEMBER study. J Alzheimers Dis. 2021;83(2):623-639.
  30. Yang MH, Kim EH, Choi ES, Ko H. Comparison of normative percentiles of brain volume obtained from NeuroQuant(®) vs. DeepBrain(®) in the Korean population: Correlation with cranial shape. J Korean Soc Radiol. 2023;84(5):1080-1090.
  31. Seccia R, Romano S, Salvetti M, Crisanti A, Palagi L, Grassi F. Machine learning use for prognostic purposes in multiple sclerosis. Life (Basel). 2021;11(2).
  32. Shoeibi A, Khodatars M, Jafari M, et al. Applications of deep learning techniques for automated multiple sclerosis detection using magnetic resonance imaging: A review. Computers in Biology and Medicine. 2021;136:104697.
  33. Amin M, Martínez-Heras E, Ontaneda D, Prados Carrasco F. Artificial intelligence and multiple sclerosis. Curr Neurol Neurosci Rep. 2024;24(8):233-243.
  34. Aslam N, Khan IU, Bashamakh A, et al. Multiple sclerosis diagnosis using machine learning and deep learning: challenges and opportunities. Sensors (Basel). 2022;22(20).
  35. Arani LA, Hosseini A, Asadi F, Masoud SA, Nazemi E. Intelligent computer systems for multiple sclerosis diagnosis: a systematic review of reasoning techniques and methods. Acta Informatica Medica. 2018;26(4):258.
  36. Nabizadeh F, Masrouri S, Ramezannezhad E, et al. Artificial intelligence in the diagnosis of multiple sclerosis: A systematic review. Mult Scler Relat Disord. 2022;59:103673.
  37. Nabizadeh F, Ramezannezhad E, Kargar A, Sharafi AM, Ghaderi A. Diagnostic performance of artificial intelligence in multiple sclerosis: a systematic review and meta-analysis. Neurol Sci. 2023;44(2):499-517.
  38. Nguyen AL, Sormani MP, Horakova D, et al. Utility of icobrain for brain volumetry in multiple sclerosis clinical practice. Mult Scler Relat Disord. 2024;92:106148.
  39. Van Hecke W, Costers L, Descamps A, et al. A novel digital care management platform to monitor clinical and subclinical disease activity in multiple sclerosis. Brain Sci. 2021;11(9).
  40. White N, Baig S, Vidic I, et al. Segmentation of pre- and post-treatment glioma tissue types including resection cavities. Paper presented at: Neuro-Oncology. 2022.
  41. Gagnon L, Gupta D, Mastorakos G, et al. NIMG-30. Deep learning segmentation of infiltrative and enhancing cellular tumor in post-treatment glioblastoma patients with multi-shell diffusion MRI. Neuro-Oncology. 2023;25(Supplement_5):v191-v192.
  42. White NS, McDonald CR, Farid N, et al. Diffusion-weighted imaging in cancer: physical foundations and applications of restriction spectrum imaging. Cancer research. 2014;74(17):4638-4652.
  43. Andrews M, Di Ieva A. Artificial intelligence for brain neuroanatomical segmentation in magnetic resonance imaging: A literature review. J Clin Neurosci. 2025;134:111073.
  44. Zubair Rahman AMJ, Gupta M, Aarathi S, et al. Advanced AI-driven approach for enhanced brain tumor detection from MRI images utilizing EfficientNetB2 with equalization and homomorphic filtering. BMC Med Inform Decis Mak. 2024;24(1):113.
  45. Louis S, Morita-Sherman M, Jones S, et al. Hippocampal sclerosis detection with NeuroQuant compared with neuroradiologists. AJNR Am J Neuroradiol. 2020;41(4):591-597.
  46. Farid N, Girard HM, Kemmotsu N, et al. Temporal lobe epilepsy: quantitative MR volumetry in detection of hippocampal atrophy. Radiology. 2012;264(2):542-550.
  47. Azab M, Carone M, Ying S, Yousem D. Mesial temporal sclerosis: accuracy of NeuroQuant versus neuroradiologist. American Journal of Neuroradiology. 2015;36(8):1400-1406.
  48. Ross DE, Ochs AL, Seabaugh JM, Shrader CR. Man versus machine: comparison of radiologists' interpretations and NeuroQuant® volumetric analyses of brain MRIs in patients with traumatic brain injury. J Neuropsychiatry Clin Neurosci. 2013;25(1):32-39.
  49. Ross DE, Ochs AL, DeSmit ME, Seabaugh JM, Havranek MD. Man versus machine Part 2: Comparison of radiologists' interpretations and NeuroQuant measures of brain asymmetry and progressive atrophy in patients with traumatic brain injury. J Neuropsychiatry Clin Neurosci. 2015;27(2):147-152.
  50. Ochs AL, Ross DE, Zannoni MD, Abildskov TJ, Bigler ED. Comparison of automated brain volume measures obtained with NeuroQuant and FreeSurfer. J Neuroimaging. 2015;25(5):721-727.
  51. Ross DE, Ochs AL, Tate DF, et al. High correlations between MRI brain volume measurements based on NeuroQuant(®) and FreeSurfer. Psychiatry Res Neuroimaging. 2018;278:69-76.
  52. Pareto D, Sastre-Garriga J, Alberich M, et al. Brain regional volume estimations with NeuroQuant and FIRST: a study in patients with a clinically isolated syndrome. Neuroradiology. 2019;61(6):667-674.
  53. FDA. K231642. 510(k) premarket notification. https://www.accessdata.fda.gov/cdrh_docs/pdf23/K231642.pdf. Published 10/13/2023. Accessed 5/29/25.
  54. Wang C, Beadnall HN, Hatton SN, et al. Automated brain volumetrics in multiple sclerosis: a step closer to clinical application. Journal of neurology, neurosurgery, and psychiatry. 2016;87(7):754-757.
Open Meetings
Meeting Date: 10/28/2025
Meeting States: Kentucky, Ohio
Meeting Information: Please review the CGS website for more information.
Contractor Advisory Committee (CAC) Meetings
Meeting Date Meeting States Meeting Information
N/A
MAC Meeting Information URLs
N/A
Proposed LCD Posting Date
09/25/2025
Comment Period Start Date
09/25/2025
Comment Period End Date
11/08/2025
Reason for Proposed LCD
  • Provider Education/Guidance
Requestor Information
This request was MAC initiated.
Requestor Name Requestor Letter
N/A
Contact for Comments on Proposed LCD
Meredith Loveless, MD
Attn: Medical Review
26 Century Blvd., Ste ST610
Nashville, TN 37214-3685
cmd.inquiry@cgsadmin.com

Coding Information

Bill Type Codes

Code Description


N/A

Revenue Codes

Code Description


N/A

CPT/HCPCS Codes


N/A

ICD-10-CM Codes that Support Medical Necessity

Group 1

Group 1 Paragraph:

N/A

Group 1 Codes:

N/A

N/A

ICD-10-CM Codes that DO NOT Support Medical Necessity

Group 1

Group 1 Paragraph:

N/A

Group 1 Codes:

N/A

N/A

Additional ICD-10 Information

General Information

Associated Information
N/A
Sources of Information
N/A

Revision History Information

Revision History Date Revision History Number Revision History Explanation Reasons for Change
N/A

Associated Documents

Attachments
N/A
Related National Coverage Documents
NCDs
N/A
Public Versions
Updated On Effective Dates Status
09/18/2025 N/A - N/A Superseded

Keywords

N/A

Read the LCD Disclaimer