From Joanne Lynn, MD, MA, MS, Director, Center for Elder Care and Advanced Illness, Altarum Institute
I appreciate the work that CMS and others have done to develop the processes and legitimacy of providing coverage during evidence development. I recognize that many astute observers have already contributed to this effort, and I will not aim to duplicate or take issue with that record. I am writing mainly to encourage CMS to allow consideration of a wider variety of evidence sources that might make for a workable plan of evidence development for the Medicare population. As it stands now, the guidance addresses only classic, formal trials of a particular intervention. Achieving the rigor to which the Guidance document aspires routinely requires compromises on generalizability and applicability to the full array of Medicare beneficiaries. Ordinarily, the patient has to be mobile enough to get to a research center; capable of consent (and actually consenting); untouched by myriad competing causes of illness, disability, or death; and enrolled at one of a limited number of participating centers. For some issues, this is exactly the right approach.
However, there are also a number of alternative ways of gaining sufficient insight to rule on the merits of an intervention for a particular population. The most obvious is that, especially as we gain dramatically larger datasets with clinical information across time, we could mimic clinical trials very efficiently in observational databases. The Archimedes work is an example, and more examples are appearing regularly. In a database of linked Medicare claims, Part D use, and MDS and OASIS assessments (as in the Chronic Conditions Data Warehouse), estimates of outcome effects could be generated at very low cost during the phase of dissemination in which an intervention is being used at different rates by different providers. The database is large enough to support substantial subset analyses. It may well be that this sort of approach would clarify the merits of coverage for part of the population or for some of the indications for use (both showing merit and justifying coverage, and showing harms or lack of merit and justifying non-coverage). In addition, some issues might remain unclear and require focused investigation, which could be well targeted and efficient, grounded in insight from concurrent and historical data.
This is one example of the research that could be allowed under a broader CED guidance document. I believe that we are only a few years away from being able to give advice to patients and families on the basis of concurrent analysis of large databases (whether from a large cohort, a regional repository, or a particular provider’s experience base). A patient will be able to get answers to questions like, “For the group of patients relevantly similar to me that you treated 1 to 2 years ago, how many have died, how many live in nursing homes or have serious ADL impairments, and how many resumed independent living?” We will have to work on defining “relevantly similar” and on standardizing the documentation of outcomes, but that work is a useful side effect of the standard-setting in meaningful use of electronic records. The point here is that, if patients can ask for that sort of information, surely CMS can also. In parallel, CMS can ask, “For the beneficiaries with relevantly similar situations, how did those who got the proposed intervention do, compared with those who did not?” Obviously, we would need to work on how to stratify or adjust for differences in the cohorts, and we would need to weigh the risks of observational data in making judgments about the insights gained. This is, however, not different from having to weigh, for randomized clinical trials, the impact of all the restrictions on inception cohorts and the differences in practice settings that can support rigorous research.
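To make the kind of comparison described above concrete, here is a minimal sketch of a stratified treated-versus-untreated outcome comparison. Everything in it is fabricated for illustration: the risk strata, treatment rates, survival rates, and the cohort itself are invented, and a real analysis of linked Medicare data would of course require far more careful adjustment than this crude stratification.

```python
import random
from collections import defaultdict

random.seed(0)

# Synthetic beneficiary records: (risk_stratum, got_intervention, survived_2yr).
# All probabilities below are invented for illustration only.
def make_cohort(n=10000):
    records = []
    for _ in range(n):
        stratum = random.choice(["low", "medium", "high"])
        # Sicker patients are (hypothetically) more likely to receive the intervention,
        # which is exactly the confounding that stratification tries to address.
        treated = random.random() < {"low": 0.2, "medium": 0.5, "high": 0.8}[stratum]
        base_survival = {"low": 0.9, "medium": 0.7, "high": 0.4}[stratum]
        survived = random.random() < (base_survival + (0.05 if treated else 0.0))
        records.append((stratum, treated, survived))
    return records

def stratified_outcomes(records):
    """Within each risk stratum, compare 2-year survival: treated vs. untreated."""
    # counts[stratum] = [treated_n, treated_survived, untreated_n, untreated_survived]
    counts = defaultdict(lambda: [0, 0, 0, 0])
    for stratum, treated, survived in records:
        c = counts[stratum]
        if treated:
            c[0] += 1
            c[1] += survived
        else:
            c[2] += 1
            c[3] += survived
    return {s: (c[1] / c[0], c[3] / c[2]) for s, c in counts.items()}

for stratum, (t_rate, u_rate) in sorted(stratified_outcomes(make_cohort()).items()):
    print(f"{stratum:>6}: treated {t_rate:.2f} vs untreated {u_rate:.2f}")
```

Comparing within strata rather than over the whole cohort avoids the misleading pooled comparison in which the intervention looks harmful simply because sicker beneficiaries receive it more often.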
Thus, I would encourage CMS to finalize this guidance but also to open the possibility of building on a “learning health care system” and taking advantage of the opportunities, now and in the near future, to learn quickly and efficiently from naturally occurring variations in the implementation of services. This approach might well apply to many more coverage determinations than the rare use of the methods now endorsed.