Abstract
In the absence of effective pharmacological treatment to halt or reverse the course of Alzheimer’s disease and related dementias (ADRDs), population-level research on the modifiable determinants of dementia risk and outcomes for those living with ADRD is critical. The Harmonized Cognitive Assessment Protocol (HCAP), fielded in 2016 as part of the U.S. Health and Retirement Study (HRS) and multiple international counterparts, has the potential to play an important role in such efforts. The stated goals of the HCAP are to improve our ability to understand the determinants, prevalence, costs, and consequences of cognitive impairment and dementia in the U.S. and to support cross-national comparisons. The first wave of the HCAP demonstrated the feasibility and value of its more detailed cognitive assessments relative to the brief cognitive assessments in the core HRS interviews. To help the HCAP achieve its full potential, we provide eight recommendations for improving its future iterations. Our highest priority recommendation is to increase the representation of historically marginalized racial/ethnic groups disproportionately affected by ADRDs. Additional recommendations relate to the timing of the HCAP assessments; clinical and biomarker validation data, including data to improve cross-national comparisons; dropping lower performing items; enhanced documentation; and the addition of measures related to caregiver impact. We believe that the capacity of the HCAP to achieve its stated goals will be greatly enhanced by considering these changes and additions.
1 Introduction
As of 2017, an estimated 55 million people were living with Alzheimer’s disease and Alzheimer’s disease related dementias (AD/ADRD) worldwide; approximately 10 million individuals are diagnosed with AD/ADRDs each year (World Health Organization 2017). In the absence of effective pharmacological treatment to halt or reverse the course of AD/ADRDs (2020 Alzheimers Disease Facts and Figures 2020; Ackley et al. 2021), population-level research on the modifiable determinants of AD/ADRD risk and outcomes for those living with the disease is critical (Livingston et al. 2020). The U.S. Health and Retirement Study (HRS) as well as its global partner studies have been foundational to the effort to improve population-level knowledge on AD/ADRD prevention and care, with hundreds of manuscripts indexed in PubMed in the past 15 years. As part of the larger HRS, the extensive Harmonized Cognitive Assessment Protocol (HCAP) (Langa et al. 2020) was fielded in 2016 to a subset of respondents 65 years and older, supplementing the brief cognitive assessments historically administered in the core study.
The HCAP has the potential to play a critical role in future dementia research, both in the U.S. and globally. In the U.S., the HCAP promises to enable updated classifications of mild cognitive impairment and dementia that may be translated to the broader HRS sample. Current HRS studies rely on dementia classifications generated from the Aging, Demographics and Memory Study (ADAMS) (Langa et al. 2005), which similarly fielded a comprehensive cognitive assessment to a subset of HRS participants. However, the ADAMS was fielded nearly two decades ago, includes less than a third of the participants who were included in the HCAP study, and – most importantly – has too few respondents from historically marginalized racial/ethnic backgrounds to produce reliable estimates for these groups (Wu et al. 2013). This final point is critical given both the disproportionate burden of AD/ADRD faced by Black, Latinx, and Indigenous individuals in the U.S. (Mayeda et al. 2016) and the growing diversity of the U.S. older adult population (Administration for Community Living 2020).
Globally, the HCAP has been replicated across multiple partner studies – in Mexico (Mejia-Arango et al. 2020), China (Meng et al. 2019), India (Lee et al. 2019), and England (Cadar et al. 2021) – and there are plans for implementation in other settings. Despite tremendous investment in these global aging studies to support correspondence with the HRS, true harmonization of the cognitive measures historically included in the core surveys has been challenging due to the substantial cross-study differences in the cognitive assessment batteries fielded to the core samples. The HCAP is thus a valuable initiative to improve the quality of cognitive assessment, permit some degree of domain specificity, and support better harmonization. Despite this potential, researcher uptake of the HCAP data has initially been slow, with relatively few empirical studies using the HCAP indexed in PubMed as of late 2021.
In light of the urgent need for population-level AD/ADRDs research, we have several recommendations to support the HCAP in achieving its stated goals:
Prioritize the addition of participants from historically marginalized racial/ethnic backgrounds in the HRS HCAP sample, aiming to maximize statistical power to evaluate between-group differences (i.e. precision of estimates of inequity) and within-group predictors (i.e. precision of estimates of determinants of cognitive aging within racial/ethnic groups).
Administer the HCAP as frequently as possible; for small, representative subsamples, incorporate the HCAP into the main HRS questionnaire to enhance longitudinal analyses and support precise characterization of practice and period effects.
Add a clinical dementia assessment for at least a small sample of the HCAP participants to establish sensitivity and specificity of the HCAP predictions.
Phase in a baseline HCAP assessment for HRS participants under age 60.
Drop low-performing items from the HCAP to increase the feasibility of scaling the HCAP sample size and increasing the frequency of its administration.
Enhance documentation, multi-lingual options, and training activities, and support randomized sub-studies of language effects and other tools to promote cross-national comparisons.
Incorporate blood-based AD biomarkers into the HCAP and international partner studies as soon as feasible within the structure of ongoing HRS operations.
Improve the capacity of the HCAP and/or the broader HRS for capturing the consequences of dementia.
The motivation for each of these recommendations is described in more detail below.
2 Prioritize Recruitment of Racial/Ethnic Minorities
Our highest priority recommendation is oriented to improving the utility of the HCAP for understanding determinants of AD/ADRD and AD/ADRD inequities in the U.S. To accomplish this, it is critical that the HCAP increase the representation of Black, Latinx, and other historically marginalized racial/ethnic groups. These groups are disproportionately impacted by AD/ADRD (Mayeda et al. 2016; Power et al. 2021) and inequities in dementia diagnosis and care (Lin et al. 2021; Tsoy et al. 2021). People of color represent a growing share of the U.S. older adult population; by 2040 an expected 34% of older adults will be people of color, up from 19% in 2008 and 23% in 2018 (Administration for Community Living 2020). The combination of elevated incidence and growing representation as a percent of population implies that from a population health perspective, older people of color and their families will comprise an ever larger fraction of people affected by AD/ADRD.
While the HRS has historically oversampled Black and Latinx adults relative to their share of the community-dwelling population 50 years and older, the 2016 HCAP sample remains 71% non-Latinx white. The absolute numbers of Black and Latinx respondents are very small – only 527 Black respondents and 363 Latinx respondents with non-missing cognitive scores are included. While classifications of mild cognitive impairment (MCI) and dementia are not yet available for the 2016 HCAP, we estimated that approximately 27% of those in the HCAP sample would have been classified as having mild cognitive impairment or dementia (approximately 21% with MCI and 6% with dementia), based on our application of a well-established algorithm (Crimmins et al. 2011) to the core HRS cognitive measures. This means that the HCAP sample may only include approximately 142 Black and 98 Latinx respondents with probable MCI or dementia, and 32 Black and 22 Latinx respondents with probable dementia. This compares to 642 non-Latinx white respondents who we estimate would have probable MCI or dementia (143 with probable dementia only). While these numbers are estimates based on the core HRS assessments and may change slightly if using the full HCAP battery, they illustrate that even with oversampling relative to the population, the absolute numbers of Black and Latinx adults with the primary outcome of interest are very small in the current HCAP data.
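As a transparent (if simplified) check on these back-of-the-envelope figures, the short sketch below multiplies the subgroup sample sizes by the estimated rates. It assumes the overall rates (21% MCI, 6% dementia) apply uniformly within each group, whereas the figures reported above were derived by applying the Crimmins et al. (2011) algorithm to individual-level core HRS scores, so the products are approximations.

```python
# Back-of-envelope arithmetic behind the subgroup counts cited above, assuming
# the overall estimated rates apply uniformly within each racial/ethnic group.
sample_sizes = {"Black": 527, "Latinx": 363}
rates = {"MCI": 0.21, "dementia": 0.06}

for group, n in sample_sizes.items():
    n_mci_or_dem = round(n * (rates["MCI"] + rates["dementia"]))  # probable MCI or dementia
    n_dem = round(n * rates["dementia"])                          # probable dementia only
    print(f"{group}: ~{n_mci_or_dem} with probable MCI or dementia, ~{n_dem} with probable dementia")
```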
The very small numbers of Black and Latinx older adults in the analytic sample of the 2016 HCAP have several important consequences. First, many within-HCAP analyses focused on AD/ADRD disparities (i.e. comparing outcomes for Black or Latinx respondents to those for non-Latinx white respondents) and/or drivers of AD/ADRDs among Black or Latinx respondents will be underpowered and imprecise. Second, estimates that rely on a crosswalk between the HRS and the HCAP (i.e. to generate probabilities of MCI/dementia based on shared information across the two studies) could be less reliable for Black and Latinx participants due to the small sample size. The ADAMS similarly carried out detailed neuropsychological exams with a subset of respondents from the core HRS, and authors have used the diagnoses available in ADAMS to develop thresholds or probabilities for MCI and dementia classification in the core HRS sample (Collaborators 2021; Crimmins et al. 2011; Wu et al. 2013). In one such study, Wu et al. (2013) reported that the very small number of Latinx participants in the ADAMS (n = 84) meant that stable estimates could not be derived for this subgroup in the HRS. Third, research on the social and behavioral drivers of AD/ADRD overall is rendered less efficient because the range of social and behavioral profiles is unnecessarily restricted. For example, the distribution of household wealth, neighborhood resources, or childhood socioeconomic conditions is artificially narrowed when considering only non-Latinx white older adults instead of the full diversity of US older adults. This implicitly leads to prioritization of risk factors most relevant to white respondents, at the expense of evaluating risk factors disproportionately pertinent to non-white adults. The standard error of many common statistical estimators, such as linear regression coefficients, is inversely proportional to the standard deviation of the independent variable. Adding a single individual from an underrepresented group will often improve statistical power more than adding a single individual similar to the majority of the sample. Racial/ethnic diversity thus enhances the variability of risk factors that could be evaluated as determinants of AD/ADRD, and will therefore improve our ability to identify modifiable social or behavioral targets for prevention or improving the well-being of people living with AD/ADRD.
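To illustrate the precision argument, the simulation below (a generic sketch, not an analysis of HRS data) shows that the standard error of an ordinary least squares slope shrinks as the spread of the exposure widens, roughly in proportion to 1/(sd(X)·√n).

```python
import numpy as np

rng = np.random.default_rng(0)

def slope_se(x_sd: float, n: int = 500, resid_sd: float = 1.0, reps: int = 2000) -> float:
    """Empirical standard error of the OLS slope for a given spread of the exposure."""
    slopes = []
    for _ in range(reps):
        x = rng.normal(0, x_sd, n)                 # exposure with the specified spread
        y = 0.5 * x + rng.normal(0, resid_sd, n)   # outcome with true slope 0.5
        slopes.append(np.polyfit(x, y, 1)[0])      # fitted slope
    return float(np.std(slopes))

for sd in (0.5, 1.0, 2.0):
    print(f"sd(X) = {sd}: empirical SE of slope ~ {slope_se(sd):.3f}")
# Doubling the spread of the exposure roughly halves the standard error of the
# slope, holding sample size and residual variance fixed.
```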
We recommend that both the proportion and the absolute number of Black and Latinx participants be increased substantially, ideally to account for a full two-thirds of the HCAP sample. This may mean increasing the overall HCAP sample size or reducing the number of non-Latinx white participants, depending on budgetary constraints. Of course, increasing the overall sample size would be far preferable if financially feasible. This shift would mean that the proportion of participants from historically marginalized racial/ethnic groups in the HCAP exceeds their proportion of community-dwelling older adults more broadly. However, nationally representative estimates could still be easily achieved with crosswalks to the core HRS sample and sample weighting.
3 Prioritize Longitudinal Assessments
Administering the HCAP repeatedly and at frequent intervals for the same subset of respondents in the HRS and its international counterparts is essential for assessments and comparisons of the rate of within-person cognitive decline for all respondents. This recommendation is partially driven by the fact that cognitive performance assessments from a single time point – as well as indicators of mild cognitive impairment and dementia based on these single time point assessments – are often strongly influenced by factors such as length and/or quality of education and test-taking abilities that do not necessarily predict the rate of cognitive decline or correspond with AD/ADRD risk (Seblova, Berggren, and Lövdén 2020; Zahodne et al. 2011). In addition, measures of longitudinal cognitive change better predict brain aging than cognitive performance assessments at a single time point (Mungas et al. 2010; Walter et al. 2019). Repeated assessments among the HCAP participants and their global counterparts will therefore allow for better science on the determinants of brain aging.
This recommendation is also made in service of improving the capacity for cross-national harmonization of HCAP-based outcomes. One of the central goals of the HCAP is to facilitate cross-national comparisons across the HRS and international partner studies in the interest of making “comparable classifications to discriminate normal, cognitive impairment, and dementia status” (Langa et al. 2020). This goal will be nearly impossible to achieve without the completion of repeated HCAP assessments within all relevant studies.
Despite significant investments in creating comparable measurement protocols across studies, there remain important cross-study differences that make interpretation of comparisons of single time point assessments ambiguous. These differences partly stem from the fact that translated versions of the HCAP within or across country settings may have yielded more or less difficult tasks. For example, when the words included in the word lists administered as part of verbal memory tasks are translated verbatim across language(s) spoken in international partner countries, the comparability is not maintained. Translated words could be shorter or longer or more or less common across contexts, thereby altering the level of difficulty regardless of a participant’s cognitive status. Variation in the level of difficulty across cognitive tasks based on their translation and their familiarity across study contexts may contribute to substantial differences in baseline scores for reasons that have nothing to do with cognitive aging.
For many components of the HCAP, international partner studies did alter their tasks in their counterpart cognitive batteries in order to fit their specific contexts (Banerjee et al. 2020; Mejia-Arango et al. 2020). For example, instead of the original MMSE administered in the HCAP, studies administered modified, contextually sensitive versions of the MMSE. In the original MMSE, participants are asked to spell “World” backwards. However, the Hindi Mental State Examination (Ganguli et al. 1995), included in the LASI-DAD study, accounts for the lower rates of literacy among the current birth cohorts of Indian older adults by asking participants to instead list the days of the week in reverse order from Sunday to Monday. In another example, as part of the Telephone Interview for Cognitive Status (TICS), respondents in the HRS HCAP were asked to correctly name “cactus” in response to being asked “what is that prickly plant that grows in the desert?”. In the LASI-DAD, respondents were instead asked to correctly name a coconut, given that cactuses are not common in all parts of India (Banerjee et al. 2020). These and many other context-specific modifications are critical. However, with a single wave of data for each study it would be challenging to convincingly attribute mean cross-study differences in scores on these items to true underlying cognitive differences rather than measurement differences.
Challenges to comparing single time-point cognitive performance scores may also result from the fact that current older adult cohorts within and across countries have had dramatically different educational and occupational opportunities, resulting in systematic differences in exposure to test-taking, mental math practice, or other activities that might prepare them for the format of an extensive cognitive assessment like the HCAP. These differences may prepare participants to achieve better scores across and even within settings for reasons that may be largely or entirely driven by earlier-life socio-economic conditions. Once again, it is unclear whether these same factors that influence cognitive test-taking performance will also be influential for cognitive aging. As a result, it may be impossible to tell whether cross-national differences in baseline cognitive scores are driven by true differences in cognitive aging versus differences in protocols across studies and/or cross-country contextual differences (e.g. in average levels of formal education).
Item response theory and other modern psychometric tools to estimate latent variables make the most efficient use of available data and can help solve the problem of meaningfully comparing across versions of a scale (Chan et al. 2015; Kobayashi et al. 2020). These methods do not provide a panacea, however. Psychometric approaches achieve identification either by assuming that at least some items are truly comparable between versions (i.e. anchor items) or by making assumptions about the distribution of the latent variable (e.g. that the distribution is the same across settings). The latter approach, based on distributional assumptions, is circular if the goal is to compare the prevalence of cognitive impairment across settings. The former approach, based on identifying anchor items, is fragile if there are only a small number of anchoring items (fragile in the sense that small violations of the assumptions may have large implications for the overall findings). The assumption that an anchor item is truly equivalent across languages (and cultures) is not easily testable. Within-person change, in contrast, has intrinsic relevance that is more plausibly comparable across settings.
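To make concrete how strongly anchor-based approaches lean on the assumed equivalence of the anchors, the sketch below implements classical mean-sigma linking from anchor item difficulties. The item values are hypothetical, and in practice linking would be embedded in a full IRT calibration; the point is that the entire transformation is determined by a handful of anchor items.

```python
import numpy as np

# Hypothetical item difficulty estimates (in logits) from two separately
# calibrated versions of a battery; only the items treated here as "anchors"
# are assumed to measure identically in both versions.
anchors_ref   = np.array([-1.2, -0.4, 0.3, 1.1])   # reference calibration
anchors_focal = np.array([-0.9, -0.1, 0.7, 1.5])   # focal calibration, same items

# Mean-sigma linking: find A, B such that A * b_focal + B matches the
# reference metric for the anchor items.
A = anchors_ref.std() / anchors_focal.std()
B = anchors_ref.mean() - A * anchors_focal.mean()

# Any focal-version difficulty can now be placed on the reference metric, but
# the transformation rests entirely on the handful of anchors; one
# non-equivalent anchor can shift every linked estimate.
focal_items = np.array([-2.0, 0.0, 2.0])
print("Linking constants A, B:", round(A, 3), round(B, 3))
print("Focal items on reference metric:", np.round(A * focal_items + B, 3))
```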
With cross-sectional data only, the best insights for cross-national comparisons are more likely to arise from contrasting the age-slope (i.e. between-person comparisons of people of different ages) between different settings than from directly comparing cognitive scores. This is similar conceptually to comparing older adults in each setting to younger individuals in the same setting and using the deviation from within-setting healthy norms as the cognitive assessment, instead of the raw cognitive score. For this, we would need to field the HCAP in younger adults who are unlikely to have had substantial age-related cognitive deterioration. A final approach would involve randomly assigning bilingual individuals to take versions of the test offered in different languages. For example, a bilingual English-Hindi speaker would be randomly assigned to either name the days of the week backwards or spell “world” backwards. This would allow us to estimate an average version effect or even a crosswalk between expected scores on each item in different versions of the test. It may enhance feasibility to note that a crosswalk between versions need not be based on HRS study participants: a convenience sample of generally comparable older, bilingual adults could be enrolled into a randomization study to derive a crosswalk.
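A minimal sketch of how such a randomized version sub-study could be analyzed, using simulated data and an assumed version effect of 0.3 points: the average version effect is simply the difference in mean scores between the randomly assigned groups.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated bilingual volunteers randomly assigned to one of two versions of an item.
n = 400
version = rng.integers(0, 2, n)                          # 0 = version A, 1 = version B
ability = rng.normal(0, 1, n)                            # underlying ability, balanced by randomization
score = ability + 0.3 * version + rng.normal(0, 1, n)    # version B assumed 0.3 points easier

# Difference in mean scores, with a large-sample 95% confidence interval.
diff = score[version == 1].mean() - score[version == 0].mean()
se = np.sqrt(score[version == 1].var(ddof=1) / (version == 1).sum()
             + score[version == 0].var(ddof=1) / (version == 0).sum())
print(f"Estimated version effect: {diff:.2f} (95% CI {diff - 1.96*se:.2f}, {diff + 1.96*se:.2f})")
# This estimated offset could serve as a crosswalk between versions of the item.
```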
The above methods are ad hoc and each introduces its own new challenges. For example, within-country age-slopes generated with single time point data may be partially influenced by within-country cohort differences in the quantity or quality of education. Our main hope for cross-national harmonization going forward therefore rests on the ability to compare within-person slopes in cognitive performance scores derived from repeated assessments using the same protocol for the same set of individuals. Longitudinal analyses allow within-person comparisons, which will substantially control for stable national differences that affect test-taking skills but have no effect on AD/ADRD.
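For illustration, here is a minimal sketch of the kind of within-person slope model that repeated HCAP assessments would support, using simulated data and statsmodels; the variable names and two-year wave spacing are illustrative assumptions, not HCAP specifications.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Simulate three waves of a cognitive score for 300 people in two settings.
n, waves = 300, 3
df = pd.DataFrame({
    "id": np.repeat(np.arange(n), waves),
    "years": np.tile(np.arange(waves) * 2.0, n),         # two-year spacing between waves
    "setting": np.repeat(rng.integers(0, 2, n), waves),  # 0/1 indicator for study setting
})
person_intercepts = np.repeat(rng.normal(0, 1, n), waves)  # stable person-level differences
df["score"] = (person_intercepts - 0.10 * df["years"]
               - 0.05 * df["years"] * df["setting"]
               + rng.normal(0, 0.5, n * waves))

# Random-intercept model; the years:setting term contrasts within-person slopes
# across settings, net of stable between-person and between-setting differences.
model = smf.mixedlm("score ~ years * setting", df, groups=df["id"]).fit()
print(model.summary())
```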
With a small number of repeated assessments, the HCAP may also be able to help disentangle “practice” or retest effects from cognitive aging; practice effects are a perennial methodological challenge for longitudinal cognitive aging research (Vivot et al. 2016; Weuve et al. 2015). Such disentangling would be enabled by randomly selecting some individuals for more frequent repeated measures and others for delayed introduction to the HCAP protocol. In particular, we suggest that if the HCAP enrolls a new group of participants at follow-up, average practice effects could be calculated as the difference in scores for respondents who are completing the HCAP for the second time and respondents matched on age and education who are completing the HCAP for the first time. This would be an improvement over the typical method of quantifying the magnitude of practice effects, which entails administering the same cognitive assessment to a group of respondents within a period of time that is so short that changes in performance would most plausibly be due to practice effects (Goldberg et al. 2015). This retesting approach generates practice effect estimates that may be of limited relevance for the typical cognitive aging cohort study, which usually collects repeated assessments two or three years apart.
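The sketch below illustrates this proposed estimator with simulated data: the practice effect is estimated as the gap in scores between repeat takers and first-time takers within coarse age-by-education strata, averaged with stratum-size weights. The 0.25-point retest advantage is an assumption built into the simulation, not an HCAP estimate.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Simulated follow-up wave: some respondents take the HCAP for the second time
# ("repeat takers"), others are newly enrolled and take it for the first time.
n = 2000
df = pd.DataFrame({
    "repeat_taker": rng.integers(0, 2, n),
    "age_group": rng.choice(["65-74", "75-84", "85+"], n),
    "educ_group": rng.choice(["<HS", "HS", "College+"], n),
})
practice_boost = 0.25  # assumed retest advantage built into the simulation
df["score"] = rng.normal(0, 1, n) + practice_boost * df["repeat_taker"]

# Mean score by stratum and first- vs. second-time status; the within-stratum
# gap, weighted by stratum size, gives an overall practice-effect estimate.
means = (df.groupby(["age_group", "educ_group", "repeat_taker"])["score"]
           .mean().unstack("repeat_taker"))
gaps = means[1] - means[0]
weights = df.groupby(["age_group", "educ_group"]).size()
estimate = (gaps * weights).sum() / weights.sum()
print(f"Estimated practice effect: {estimate:.3f}")  # should be close to 0.25
```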
Finally, the COVID-19 pandemic has illustrated the plausibility of major period effects in cognitive assessments. As long-time users of HRS, we suspect there have been substantial period effects in prior waves for one reason or another (e.g. protocol or interview changes or happenstance of timing). Period effects compromise the ability to evaluate cognitive change or trends in the population. If a small subsample of the HCAP participants receives repeated assessments more frequently than the main repeat iteration cycle, it offers important advantages for modeling and accounting for period effects when evaluating rates of age-related change or population trends.
4 Add a Clinical Dementia Assessment
While the HCAP includes a rigorous and extensive set of cognitive performance assessments, one of the challenges to generating dementia classifications and to understanding the utility of any given component of the HCAP battery is the lack of an available gold standard. While clinical dementia assessments – included in the ADAMS study as well as in select international HCAP studies – may have imperfect reliability and validity and are therefore not a gold standard, they are likely the best available standard for benchmarking. Clinical assessments are therefore important for evaluating the sensitivity and specificity of cognitive assessments, including within and across global studies. Clinical dementia assessments could be used alongside other imperfect benchmarks such as age or mortality to evaluate the performance of specific tasks within the HCAP battery as well as the accuracy of dementia classifications. We therefore recommend that the HCAP incorporate a clinical dementia assessment for at least a subset of participants that may serve as a benchmark against which to evaluate cognitive performance-based classifications.
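As a simple illustration of how a clinical benchmark would be used (the arrays below are hypothetical stand-ins for HCAP-based classifications and clinical diagnoses), sensitivity and specificity follow directly from the cross-tabulation:

```python
import numpy as np

# Hypothetical binary classifications: 1 = dementia, 0 = no dementia.
hcap_classification = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])
clinical_diagnosis  = np.array([1, 0, 0, 1, 0, 1, 1, 0, 0, 1])

tp = int(np.sum((hcap_classification == 1) & (clinical_diagnosis == 1)))
fn = int(np.sum((hcap_classification == 0) & (clinical_diagnosis == 1)))
tn = int(np.sum((hcap_classification == 0) & (clinical_diagnosis == 0)))
fp = int(np.sum((hcap_classification == 1) & (clinical_diagnosis == 0)))

sensitivity = tp / (tp + fn)  # share of clinically diagnosed cases the HCAP flags
specificity = tn / (tn + fp)  # share of non-cases the HCAP correctly clears
print(f"Sensitivity: {sensitivity:.2f}, Specificity: {specificity:.2f}")
```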
The HCAP and global partner studies may also consider a unified approach to clinical dementia assessments in order to improve our understanding of cross-country differences in the sensitivity and specificity of the HCAP and its subcomponents. Such an effort could capitalize on the innovation undertaken for clinical assessments in the LASI-DAD study, in which an online platform for consensus-based dementia diagnosis was validated against in-person diagnoses (Lee et al. 2020). This online platform allowed for broad geographic coverage of its dementia assessments rather than restricting LASI-DAD respondents to those living in urban centers and in close proximity to multiple clinical experts required for consensus-based in-person diagnosis. Such a platform could allow for broader geographic coverage within each international HCAP study, and also may have the potential to facilitate more unified, cross-country diagnostic assessments.
5 Consider Phasing-in Baseline HCAP Assessments for People Under Age 65
The HCAP sample includes respondents 65 years and older. The sample age range was intentionally lowered to include slightly younger participants as compared to ADAMS, which included participants age 70 and older. The HCAP based its age criterion on the fact that Medicare begins at age 65; HCAP aims to shed light on the costs of dementia, and many of the costs incurred at the federal level would be driven by Medicare expenditures (Langa et al. 2020). The HCAP authors also rightly indicated that lowering the starting age relative to the ADAMS study will allow for understanding earlier stages of cognitive impairment. We suggest that lowering the starting age even further, potentially to include those 50 years and older, as with the broader HRS, would have many advantages. Select international partner studies already include respondents younger than 65 years of age in their HCAP equivalents.
First, including younger participants could have important benefits for understanding midlife determinants of dementia and cognitive decline, particularly if the same respondents are followed as they age. It is now well understood that the biology of dementia unfolds over multiple decades. For example, by the early 50s, genetic profiles associated with ADRD predict lower body weight (Brenowitz et al. 2021) and worse performance on specific cognitive tests (Zimmerman et al. 2022). Many population-level prevention and intervention efforts should focus on the decades spanning mid-life (Livingston et al. 2020). It is challenging to identify causes of dementia risk that are relevant for mid-life prevention and intervention efforts with an analytic sample that begins at age 65. By this age, more advanced stages of dementia are increasingly common, well after earlier opportunities for prevention have passed.
Second, starting the analytic sample at younger ages may help ameliorate the selection bias present in many cognitive aging studies that begin in late life (Mayeda et al. 2018). Concerns about selection bias may not be as relevant if the HCAP is simply used as a tool to generate dementia classifications that are imposed on the broader HRS analytic sample. However, the empirical studies produced with the HCAP have yet to use the study in this way and have instead used the HCAP sample only to evaluate within-sample associations; without statistical correction, these analyses are likely subject to greater selection bias than parallel analyses using the broader HRS sample because of the HCAP age restriction. Lowering the starting age for HCAP may have the ancillary benefit of including a more diverse group of participants (aligned with our first recommendation) given that younger birth cohorts in the US are more diverse than their older counterparts (Administration for Community Living 2020).
One potential concern is that variability in cognitive performance scores may be limited among those under 65 years of age, thereby reducing the utility of supplementing the HCAP sample with younger participants. These younger participants may not contribute meaningfully to the HCAP goals of providing dementia classifications, given that relatively few participants younger than 65 will be classified as having mild cognitive impairment or dementia. Including these participants may therefore come with added costs with little apparent benefit for the narrow goal of creating dementia classifications. However, there is substantial heterogeneity among people classified as “cognitively normal”, and at least some of that heterogeneity reflects very early changes associated with progressive neurodegenerative disease. Measuring these early changes is important. In the context of repeated pharmacological failures for dementia treatment and a growing emphasis on midlife prevention, today’s investments in population-level research that begins in midlife are critical down payments on future knowledge about dementia prevention.
6 Drop Low-Performing Items from HCAP to Increase Feasibility
While the comprehensive nature of the HCAP is a strength and there have been tremendous investments in mirroring the HCAP measures across global partner studies, there are multiple reasons to consider modifications that might make the HCAP easier to scale. As we have argued, there is particular urgency around including a more diverse group of HRS respondents and administering repeated, more frequent assessments to the same set of respondents in order to be able to compare within-person slopes in cognitive performance cross-nationally. It may be necessary to consider dropping items from the HCAP in order to increase the feasibility of scaling the study in these ways.
The factor structure of the HCAP allows separate estimation of episodic memory, executive functioning, processing speed, language, and visuo-construction domains. Items load strongly on these domains, as expected given the selection process (Table 1 below, reproduced from Zahodne et al. 2020). However, some of these domains include multiple items that may be considered for removal. For example, the reliability and validity of the episodic memory assessment are probably not strongly improved by inclusion of the “Brave man” story, given the eight other measures of episodic memory, all of which have factor loadings above 0.6. Dropping the two lowest performing language items would also likely have limited impact on reliability.
Table 1: Standardized factor loadings from the HCAP measurement model, reproduced from Table 2 in Zahodne et al. (2020).
Cognitive measure | Estimate | Standard error | p |
---|---|---|---|
Episodic memory | |||
Word list immediate | 0.79 | 0.01 | <0.001 |
Logical memory immediate | 0.73 | 0.01 | <0.001 |
Brave man immediate | 0.61 | 0.01 | <0.001 |
Word list delayed | 0.76 | 0.01 | <0.001 |
Logical memory delayed | 0.71 | 0.01 | <0.001 |
Brave man delayed | 0.6 | 0.01 | <0.001 |
MMSE word list delayed | 0.61 | 0.01 | <0.001 |
Word list recognition | 0.63 | 0.01 | <0.001 |
Logical memory recognition | 0.62 | 0.01 | <0.001 |
Constructional praxis-delay | 0.73 | 0.01 | <0.001 |
Executive functioning | |||
Raven’s | 0.86 | 0.01 | <0.001 |
Number series | 0.67 | 0.01 | <0.001 |
Trails B | −0.67 | 0.01 | <0.001 |
Processing speed | |||
Symbol digit modalities test | 0.90 | 0.01 | <0.001 |
Trails A | −0.74 | 0.01 | <0.001 |
Backward counting | 0.69 | 0.01 | <0.001 |
Letter cancellation | 0.70 | 0.01 | <0.001 |
Language | |||
Animal fluency | 0.77 | 0.01 | <0.001 |
TICS naming | 0.75 | 0.01 | <0.001 |
MMSE naming | 0.62 | 0.01 | <0.001 |
MMSE writing | 0.56 | 0.01 | <0.001 |
Visuoconstruction | |||
Constructional praxis-copy | 0.75 | 0.01 | <0.001 |
MMSE polygons | 0.64 | 0.01 | <0.001 |
Note: MMSE, Mini-Mental State Examination; TICS, Telephone Interview for Cognitive Status.
We propose that the HCAP consider a framework for continual evaluation of item performance. We have already acknowledged the lack of a gold standard against which to definitively assess item performance. However, items could be assessed against several metrics that collectively serve as a benchmark against which to evaluate their utility. These metrics may include participant burden, sensitivity to age, contribution to the reliability of a domain measure (including across racial/ethnic groups), and association with subsequent adverse outcomes, such as death. Top among these is the burden of administering the item: items that are easy or brief may be retained even if less reliable. Detailed data on administration time from the 2016 implementation can be used to compare time demand against contribution to measurement reliability. This should be considered within the context of specific subgroups (i.e. if performance of some items varies by subgroup). If an HCAP item does not vary, or varies little, with respondents’ age, this may indicate that the item is instead measuring an aspect of cognitive performance that is established earlier in life and does not decline over time, and is therefore not a useful measure of cognitive aging. Items could also be evaluated against clinical dementia diagnoses, if those are incorporated into the HCAP going forward, and/or in relationship to subsequent mortality, institutionalization, or incidence of ADL limitations. Items that are overly burdensome and/or not associated with age, clinical diagnosis, or patient-centered outcomes could be considered for removal.
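The sketch below illustrates the kind of item-level report such a framework could generate, using simulated data; the item names, administration times, and metrics (mean minutes, item-rest correlation, correlation with age) are illustrative choices rather than HCAP specifications.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)

# Simulated item-level data for a single domain: per-item scores, assumed
# administration times, and respondent age. Item names are illustrative only.
n = 1000
age = rng.uniform(65, 95, n)
scores = pd.DataFrame({
    "word_list_immediate": -0.030 * age + rng.normal(0, 1.0, n),
    "logical_memory_immediate": -0.025 * age + rng.normal(0, 1.0, n),
    "brave_man_immediate": -0.005 * age + rng.normal(0, 1.5, n),  # weaker age signal, noisier
})
admin_minutes = {"word_list_immediate": 3.0, "logical_memory_immediate": 4.0, "brave_man_immediate": 5.0}

report = []
for item in scores.columns:
    rest = scores.drop(columns=item).mean(axis=1)  # mean of the remaining items in the domain
    report.append({
        "item": item,
        "minutes": admin_minutes[item],
        "item_rest_r": round(float(scores[item].corr(rest)), 2),  # contribution to internal consistency
        "age_r": round(float(scores[item].corr(pd.Series(age))), 2),  # sensitivity to age
    })
print(pd.DataFrame(report))
```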
We acknowledge that removing items needs to be weighed against an interest in retaining well-known cognitive assessment batteries, which were carefully considered in designing the HCAP and meant to enable harmonization across a number of studies both in the U.S. and globally. These include the Mini-Mental State Examination (included in numerous studies of cognitive aging worldwide), the Telephone Interview for Cognitive Status (included in the core HRS), and the assessment battery included in the 10/66 global studies. Removing individual items from these batteries could challenge efforts to harmonize the HCAP data with data from other sources and to create necessary crosswalks to the core HRS. These harmonization considerations need to be balanced against the need for an assessment battery that can be more feasibly scaled to a larger, more diverse sample and administered more frequently.
7 Enhance Documentation and Other Resources to Promote Cross-National Comparisons
The NIA has invested substantially in providing outstanding support to data users of the HRS and, to some extent, of its international sister studies. The HCAP and its global counterparts need similar support. Investing in enhanced documentation that presents detailed protocols for each study and highlights cross-study differences, as well as the rationale for any differences, would enhance usability. The detailed item-by-item codebook provided by the LASI-DAD study could serve as a template that could be expanded to compare specific cognitive assessment tasks and the rationale for differences across international partner studies.
Harmonized code in common statistical languages (including in R or other software that is free or low-cost) should similarly be made available in a way that is modeled after the code available for the core studies on the Gateway to Global Aging website. The Mex-Cog study team already provides publicly available code in SAS and STATA that can be used to clean their HCAP-equivalent data. This code both facilitates the use of the Mex-Cog data by reducing the arduous task of cleaning raw data across the many, many variables yielded from the extensive cognitive assessment battery and helps reduce the chance that coding errors made during data cleaning contribute to incorrect and/or inconsistent results. The code provided by the Mex-Cog team could be used as a template for the HCAP and other partner studies.
Training short courses or applied data workshops using the HCAP and international partner study data could also enhance uptake. There is a longstanding and successful tradition of such training opportunities related to HRS. These could be fielded remotely to promote international participation and collaboration.
Beyond improved documentation, small embedded substudies to evaluate language or version effects would be valuable for facilitating cross-country comparisons, if these are feasible. The HRS has a tradition of embedding randomized substudies, for example to evaluate mode effects for phone versus in-person interviews. Likewise, bilingual participants in the HRS or partner studies could be randomly assigned to alternative language versions of the test in order to more directly evaluate cross-country comparability of the HCAP and its global counterparts. As noted above, these randomized studies do not necessarily have to be completed with HRS or HRS sister study participants; age-appropriate volunteers who do not participate in the studies would likely be adequate for the purposes of creating the international cross-walks.
8 Add Blood AD Biomarkers
The scientific landscape of blood-based AD biomarkers is expanding rapidly, with multiple studies showing that blood-based markers can predict future progression to AD (Brickman et al. 2021; Palmqvist et al. 2021; Schindler and Bateman 2021). These developments are exciting, given that AD biomarkers have long been dependent on invasive and/or costly procedures – namely, brain imaging, lumbar punctures, or autopsies. The logistic complications and high costs of these invasive biomarkers have contributed to highly selective samples, with no population-representative research on AD biomarkers available to date. Older adults of color, those with low socio-economic status, and those living far from academic medical centers have historically been excluded from biomarker-based AD/ADRD research (Barnes 2019), despite the fact that these are the very populations that face a disproportionate burden of AD/ADRD. The severe sample selection bias present in studies that include neuroimaging data has been shown to contribute to misleading conclusions (Barnes 2019; LeWinn et al. 2017). Less invasive and less costly means of assessing biological indicators of AD may allow for population-representative research that includes historically excluded groups and reduces selection bias. Studies have already shown these measures to be feasibly collected in multi-ethnic, community-based samples (Brickman et al. 2021).
Nevertheless, studies exploring blood-based AD biomarkers to-date have largely been based on the same highly select set of respondents included in existing AD biomarker studies, potentially replicating the same issues described above. Specifically, they are vulnerable to strong selection biases, largely do not reflect the diversity of the U.S. older population, and would not necessarily give a clear benchmark for performance of the biomarkers in different groups of people, including across racial/ethnic groups.
As a population-based study with oversamples of Black and Latinx community-dwelling older adults, the HCAP has a potentially important role to play in advancing AD/ADRD research via the inclusion of blood-based AD biomarkers. The HCAP team already plans to collect venous blood in its second round of data collection. In addition, numerous international partner studies have demonstrated the feasibility of collecting blood as part of their HCAP equivalent studies or their core studies, suggesting that there may be an opportunity to expand biomarker-based AD research across harmonized, global samples.
The science around blood-based biomarkers for dementia is changing rapidly and the HCAP’s planning must allow for the possibility that the preferred biomarkers based on biological relevance and technical feasibility will evolve. Given the early state of the science, the HCAP has the opportunity to provide rigorous population-based evidence before biases based on highly select samples (e.g. clinical populations, community-based populations near academic medical centers) are baked into foundational knowledge on blood-based AD biomarkers. If the HCAP waits until the science is more fully established, critical opportunities to evaluate these biomarkers within a prospective, population-based cohort study in association with longitudinal AD/ADRDs outcome measures may be missed.
In addition, the potential inclusion of respondents 50 years and older represents a particularly important opportunity for studying how values on blood-based AD biomarkers in mid-life might predict later-life dementia risk – as well as how multi-level environmental and individual-level factors may modify the association between AD biomarker values and future dementia risk. Long-term prospective data that is inclusive of younger age groups may be important for informing prevention and early intervention efforts, including early pharmacological intervention should an effective treatment become available.
9 Improve Ability to Assess Consequences of Dementia, and Specifically Measures of Long-Term Services and Supports Utilization and Caregiver Experiences
As already noted, the HCAP aims to understand the determinants, prevalence, consequences, and costs of dementia in the US. However, the ability of the HCAP to shed light on the consequences of dementia is limited. In particular, the HRS has historically not included sufficient questions in its core surveys on respondents’ utilization of long-term services and supports (LTSS), the specific nature of these LTSS (formal, informal, or both), whether respondents have a need for LTSS that is not being met, or the experiences and wellbeing of respondents’ caregivers. These measures reflect several important consequences of AD/ADRDs, including the need for LTSS that are sufficiently funded by government entities, the consequences of going without sufficient LTSS, and caregiver burden.
Utilization of and unmet needs for home and community-based services (HCBS) are particularly difficult to characterize well based on the items currently assessed in the HRS. HCBS utilization questions were the subject of an experimental module within the 2012 core HRS as well as the 2011 Health Care Mail Survey (Pepin et al. 2017; Robinson, Menne, and Gaeta 2021), although no questions to our knowledge have addressed whether respondents have an unmet need for these services and/or the consequences of going without any or adequate LTSS. While there are also opportunities to understand LTSS utilization patterns via linked Medicaid data, these analyses are limited to Medicaid beneficiaries who gave permission for data linkage. Furthermore, Medicaid claims data only reflects utilization patterns, rather than a comprehensive understanding of respondents’ needs for care and the consequences of not receiving adequate services and supports to meet these needs.
We suggest that future iterations of the HCAP consider enhancing the ability of the HCAP and/or the core HRS to capture specific details about LTSS access and utilization, including both formal and informal home and community-based services. In particular, questions about unmet or inadequately met needs and their consequences are critical for informing policies and programs that seek to address dementia care.
The HCAP represents a particularly compelling opportunity for research on LTSS and caregiver outcomes. First, the HCAP represents an important opportunity to triangulate information on LTSS utilization between respondents and informants, potentially increasing the reliability of respondent or informant-only reports. In the core HRS, interviews are primarily completed with respondents themselves. However, respondent-reported information on need for and utilization of LTSS may be of limited reliability if respondents are already experiencing dementia symptoms. On the other hand, informants may also not be able to provide fully accurate information on respondents’ needs for care and utilization patterns, depending on the nature of their relationship to the respondent. Unlike the core HRS, the HCAP includes respondent-informant dyads for all participants. Asking the same questions about LTSS utilization and unmet needs of both respondents and informants may help ensure that data on this important set of outcomes is accurately captured, particularly for those already experiencing memory impairment.
Second, the inclusion of informants throughout the HCAP presents an opportunity to ask directly about experiences of caregiving for the subset of informants who provide care to HRS respondents, including HRS respondents with dementia. Currently, the HRS relies on respondents (in the case of direct respondent interviews) to report on who, if anyone, provides care for their activities of daily living. However, family informants may have different perspectives on the extent to which they are providing in-home assistance. HCAP offers the opportunity to gauge informants’ own assessments so that respondent and informant-reported information can be compared. In addition, these family members could report on their experiences of providing care, including their own needs and unmet needs (e.g. for mental health care, financial compensation). These questions would be of critical relevance to informing policies around support for family caregivers and the dynamic processes in caregiving relationships. For example, some states made an exception to allow for the family caregivers of Medicaid beneficiaries to be paid during the COVID-19 pandemic. Decision-making about whether this option is maintained or even more widely adopted will require rigorous population-based science.
Before the potential to measure and triangulate information on LTSS can be fully realized, the HCAP should consider additional measures of need for and utilization of LTSS, including HCBS. These additional measures might be patterned after existing studies, such as the National Health and Aging Trends Study (NHATS) and the California Health Interview Survey LTSS follow-along module, so that multiple data sources may be compared and/or pooled in the future. Measures of caregiving experiences and needs may similarly come from the National Study of Caregiving, which is linked to the NHATS. While incorporating questions about LTSS, including formal and informal caregiving, into the HCAP would likely represent a substantial investment, information gained from these questions will contribute to improved population-based science necessary for informing current policy debates about where and how we should invest resources to support older adults, including those living with AD/ADRD, and their family members. In the absence of effective pharmacological interventions, investments in dementia care may be our best chance at improving the quality of life of those living with dementia and their caregivers.
10 Conclusion
The HCAP, along with its international equivalents, represents a critical effort in the landscape of population-based dementia research, both in the U.S. and globally. While the 2016 HCAP represents a tremendous step forward in this overall effort, the core goals of the project have yet to be fully realized and adoption of the study for many of its intended purposes (i.e. dementia classification, cross-national research) could be accelerated. We offer a series of recommendations with the hope of supporting the HCAP in realizing its core goals. Some of these recommendations simply build on the original plans of the HCAP team (e.g. to field longitudinal HCAP assessments, to collect venous blood). We believe that these recommendations may help extend the influence of the HCAP even further by, for example, providing rigorous population-based evidence on blood-based AD biomarkers or on the unmet needs of individuals with dementia and their loved ones. Most critically, prior to making these additional investments, we suggest that the HCAP set a path to ensuring that the analytic sample is more inclusive of historically marginalized racial and ethnic groups that have been both disproportionately burdened by AD/ADRD and excluded from AD/ADRD research.
Acknowledgments
We thank Kaitlin Swinnerton for expert analyses of HRS data as we were preparing this document.
Research Funding: This paper was prepared at the request of the HRS Data Monitoring Committee. JMT and MMG report funding from the National Institutes of Health for the present study. JMT reports funding from the National Institutes of Health outside of the present study. MMG reports funding from the National Institutes of Health and the Robert Wood Johnson Foundation outside of the present study.
Conflicts of interest: None to declare.
References
2020. “2020 Alzheimer’s Disease Facts and Figures.” Alzheimers Dementia. https://doi.org/10.1002/alz.12068.
Ackley, S. F., S. C. Zimmerman, W. D. Brenowitz, E. J. Tchetgen Tchetgen, A. L. Gold, J. M. Manly, E. R. Mayeda, T. J. Filshtein, M. C. Power, F. M. Elahi, A. M. Brickman, and M. M. Glymour. 2021. “Effect of Reductions in Amyloid Levels on Cognitive Change in Randomized Trials: Instrumental Variable Meta-Analysis.” BMJ 372: n156. https://doi.org/10.1136/bmj.n156.
Administration for Community Living. 2020. 2019 Profile of Older Americans. Washington, D.C.: U.S. Department of Health and Human Services. Also available at https://acl.gov/sites/default/files/Aging%20and%20Disability%20in%20America/2019ProfileOlderAmericans508.pdf.
Brenowitz, W. D., S. C. Zimmerman, T. J. Filshtein, K. Yaffe, S. Walter, T. J. Hoffmann, E. Jorgenson, R. A. Whitmer, and M. M. Glymour. 2021. “Extension of Mendelian Randomization to Identify Earliest Manifestations of Alzheimer Disease: Association of Genetic Risk Score for Alzheimer Disease with Lower Body Mass Index by Age 50 Years.” American Journal of Epidemiology 190 (10): 2163–71. https://doi.org/10.1093/aje/kwab103.
Banerjee, J., U. Jain, P. Khobragade, B. Weerman, P. Hu, S. Chien, S. Dey, P. Chaterjee, J. Saxton, B. Keller, E. Crimmins, A. Toga, A. Jain, G. S. Shanthi, R. Kurup, A. Raman, S. Chakrabarti, M. Varghese, J. John, H. Joshi, P. Koul, D. Goswami, A. Talukdar, R. Mohanty, Y. Yadati, M. Padmaja, L. Sankhe, S. Pedgaonkar, P. Arokiasamy, D. Bloom, K. Langa, J. Jovicich, A. Dey, J. Lee, I. Gambhir, and C. Rajguru. 2020. “Methodological Considerations in Designing and Implementing the Harmonized Diagnostic Assessment of Dementia for Longitudinal Aging Study in India (LASI-DAD).” Biodemography and Social Biology 65 (3): 189–213. https://doi.org/10.1080/19485565.2020.1730156.
Barnes, L. L. 2019. “Biomarkers for Alzheimer Dementia in Diverse Racial and Ethnic Minorities-A Public Health Priority.” JAMA Neurology 76 (3): 251–3. https://doi.org/10.1001/jamaneurol.2018.3444.
Brickman, A. M., J. J. Manly, L. S. Honig, D. Sanchez, D. Reyes-Dumeyer, R. A. Lantigua, P. J. Lao, Y. Stern, J. P. Vonsattel, A. F. Teich, D. C. Airey, N. K. Proctor, J. L. Dage, and R. Mayeux. 2021. “Plasma P-Tau181, P-Tau217, and Other Blood-Based Alzheimer’s Disease Biomarkers in a Multi-Ethnic, Community Study.” Alzheimers Dementia 17 (8): 1353–64. https://doi.org/10.1002/alz.12301.
Cadar, D., J. Abell, F. E. Matthews, C. Brayne, G. D. Batty, D. J. Llewellyn, and A. Steptoe. 2021. “Cohort Profile Update: The Harmonised Cognitive Assessment Protocol Sub-study of the English Longitudinal Study of Ageing (ELSA-HCAP).” International Journal of Epidemiology 50 (3): 725–6i. https://doi.org/10.1093/ije/dyaa227.
Chan, K. S., A. L. Gross, L. E. Pezzin, J. Brandt, and J. D. Kasper. 2015. “Harmonizing Measures of Cognitive Performance across International Surveys of Aging Using Item Response Theory.” Journal of Aging and Health 27 (8): 1392–414. https://doi.org/10.1177/0898264315583054.
Collaborators, G. D. 2021. “Use of Multidimensional Item Response Theory Methods for Dementia Prevalence Prediction: An Example Using the Health and Retirement Survey and the Aging, Demographics, and Memory Study.” BMC Medical Informatics and Decision Making 21 (1): 241. https://doi.org/10.1186/s12911-021-01590-y.
Crimmins, E. M., J. K. Kim, K. M. Langa, and D. R. Weir. 2011. “Assessment of Cognition Using Surveys and Neuropsychological Assessment: The Health and Retirement Study and the Aging, Demographics, and Memory Study.” Journals of Gerontology Series B: Psychological Sciences and Social Sciences 66 (Suppl 1): i162–71. https://doi.org/10.1093/geronb/gbr048.
Ganguli, M. R. G., V. Chandra, S. Sharma, J. Gilby, R. Pandav, S. Belle, C. Ryan, C. Baker, E. Seaberg, and S. Dekosky. 1995. “A Hindi Version of the MMSE: The Development of a Cognitive Screening Instrument for a Largely Illiterate Rural Elderly Population in India.” International Journal of Geriatric Psychiatry 10 (5): 367–77. https://doi.org/10.1002/gps.930100505.
Goldberg, T. E., P. D. Harvey, K. A. Wesnes, P. J. Snyder, and L. S. Schneider. 2015. “Practice Effects Due to Serial Cognitive Assessment: Implications for Preclinical Alzheimer’s Disease Randomized Controlled Trials.” Alzheimers Dementia (Amst) 1 (1): 103–11. https://doi.org/10.1016/j.dadm.2014.11.003.
Kobayashi, L. C., A. L. Gross, L. E. Gibbons, D. Tommet, R. E. Sanders, S. E. Choi, S. Mukherjee, M. M. Glymour, J. J. Manly, L. F. Berkman, P. K. Crane, D. M. Mungas, and R. N. Jones. 2020. “You Say Tomato, I Say Radish: Can Brief Cognitive Assessments in the US Health and Retirement Study Be Harmonized with Its International Partner Studies?” Journals of Gerontology Series B: Psychological Sciences and Social Sciences 76 (9): 1767–1776. https://doi.org/10.1093/geronb/gbaa205.
Langa, K. M., B. L. Plassman, R. B. Wallace, A. R. Herzog, S. G. Heeringa, M. B. Ofstedal, J. R. Burke, G. G. Fisher, N. H. Fultz, M. D. Hurd, G. G. Potter, W. L. Rodgers, D. C. Steffens, D. R. Weir, and R. J. Willis. 2005. “The Aging, Demographics, and Memory Study: Study Design and Methods.” Neuroepidemiology 25 (4): 181–91. https://doi.org/10.1159/000087448.
Langa, K. M., L. H. Ryan, R. J. McCammon, R. N. Jones, J. J. Manly, D. A. Levine, A. Sonnega, M. Farron, and D. R. Weir. 2020. “The Health and Retirement Study Harmonized Cognitive Assessment Protocol Project: Study Design and Methods.” Neuroepidemiology 54 (1): 64–74. https://doi.org/10.1159/000503004.
Lee, J., J. Banerjee, P. Y. Khobragade, M. Angrisani, and A. B. Dey. 2019. “LASI-DAD Study: A Protocol for a Prospective Cohort Study of Late-Life Cognition and Dementia in India.” BMJ Open 9 (7): e030300. https://doi.org/10.1136/bmjopen-2019-030300.
Lee, J., M. Ganguli, A. Weerman, S. Chien, D. Y. Lee, M. Varghese, and A. B. Dey. 2020. “Online Clinical Consensus Diagnosis of Dementia: Development and Validation.” Journal of the American Geriatrics Society 68 (Suppl 3): S54–S59. https://doi.org/10.1111/jgs.16736.
LeWinn, K. Z., M. A. Sheridan, K. M. Keyes, A. Hamilton, and K. A. McLaughlin. 2017. “Sample Composition Alters Associations between Age and Brain Structure.” Nature Communications 8 (1): 874. https://doi.org/10.1038/s41467-017-00908-7.
Lin, P. J., A. T. Daly, N. Olchanski, J. T. Cohen, P. J. Neumann, J. D. Faul, H. M. Fillit, and K. M. Freund. 2021. “Dementia Diagnosis Disparities by Race and Ethnicity.” Medical Care 59 (8): 679–86. https://doi.org/10.1097/MLR.0000000000001577.
Livingston, G., J. Huntley, A. Sommerlad, D. Ames, C. Ballard, S. Banerjee, C. Brayne, A. Burns, J. Cohen-Mansfield, C. Cooper, S. G. Costafreda, A. Dias, N. Fox, L. N. Gitlin, R. Howard, H. C. Kales, M. Kivimaki, E. B. Larson, A. Ogunniyi, V. Orgeta, K. Ritchie, K. Rockwood, E. L. Sampson, Q. Samus, L. S. Schneider, G. Selbaek, L. Teri, and N. Mukadam. 2020. “Dementia Prevention, Intervention, and Care: 2020 Report of the Lancet Commission.” Lancet 396 (10248): 413–46. https://doi.org/10.1016/S0140-6736(20)30367-6.
Mayeda, E. R., M. M. Glymour, C. P. Quesenberry, and R. A. Whitmer. 2016. “Inequalities in Dementia Incidence between Six Racial and Ethnic Groups over 14 Years.” Alzheimers Dementia 12 (3): 216–24. https://doi.org/10.1016/j.jalz.2015.12.007.
Mejia-Arango, S., R. Nevarez, A. Michaels-Obregon, B. Trejo-Valdivia, L. R. Mendoza-Alvarado, A. L. Sosa-Ortiz, A. Martinez-Ruiz, and R. Wong. 2020. “The Mexican Cognitive Aging Ancillary Study (Mex-Cog): Study Design and Methods.” Archives of Gerontology and Geriatrics 91: 104210. https://doi.org/10.1016/j.archger.2020.104210.
Meng, Q., H. Wang, J. Strauss, K. M. Langa, X. Chen, M. Wang, Q. Qu, W. Chen, W. Kuang, N. Zhang, T. Li, Y. Wang, and Y. Zhao. 2019. “Validation of Neuropsychological Tests for the China Health and Retirement Longitudinal Study Harmonized Cognitive Assessment Protocol.” International Psychogeriatrics 31 (12): 1709–19. https://doi.org/10.1017/S1041610219000693.
Mayeda, E. R., T. J. Filshtein, Y. Tripodis, M. M. Glymour, and A. L. Gross. 2018. “Does Selective Survival before Study Enrolment Attenuate Estimated Effects of Education on Rate of Cognitive Decline in Older Adults? A Simulation Approach for Quantifying Survival Bias in Life Course Epidemiology.” International Journal of Epidemiology 47 (5): 1507–17. https://doi.org/10.1093/ije/dyy124.
Mungas, D., L. Beckett, D. Harvey, S. T. Farias, B. Reed, O. Carmichael, J. Olichney, J. Miller, and C. DeCarli. 2010. “Heterogeneity of Cognitive Trajectories in Diverse Older Persons.” Psychology and Aging 25 (3): 606–19. https://doi.org/10.1037/a0019502.
Palmqvist, S., P. Tideman, N. Cullen, H. Zetterberg, K. Blennow, J. L. Dage, E. Stomrud, S. Janelidze, N. Mattsson-Carlgren, and O. Hansson, the Alzheimer’s Disease Neuroimaging Initiative. 2021. “Prediction of Future Alzheimer’s Disease Dementia Using Plasma Phospho-Tau Combined with Other Accessible Measures.” Nature Medicine 27 (6): 1034–42. https://doi.org/10.1038/s41591-021-01348-z.
Pepin, R., A. Leggett, A. Sonnega, and S. Assari. 2017. “Depressive Symptoms in Recipients of Home- and Community-Based Services in the United States: Are Older Adults Receiving the Care They Need?” The American Journal of Geriatric Psychiatry 25 (12): 1351–60. https://doi.org/10.1016/j.jagp.2017.05.021.
Power, M. C., E. E. Bennett, R. W. Turner, M. Dowling, A. Ciarleglio, M. M. Glymour, and K. Z. Gianattasio. 2021. “Trends in Relative Incidence and Prevalence of Dementia across Non-Hispanic Black and White Individuals in the United States, 2000-2016.” JAMA Neurology 78 (3): 275–84. https://doi.org/10.1001/jamaneurol.2020.4471.
Robinson, K. N., H. L. Menne, and R. Gaeta. 2021. “Use of Informal Support as a Predictor of Home- and Community-Based Services Utilization.” Journals of Gerontology Series B: Psychological Sciences and Social Sciences 76 (1): 133–40. https://doi.org/10.1093/geronb/gbaa046.
Seblova, D., R. Berggren, and M. Lövdén. 2020. “Education and Age-Related Decline in Cognitive Performance: Systematic Review and Meta-Analysis of Longitudinal Cohort Studies.” Ageing Research Reviews 58: 101005. https://doi.org/10.1016/j.arr.2019.101005.
Schindler, S. E., and R. J. Bateman. 2021. “Combining Blood-Based Biomarkers to Predict Risk for Alzheimer’s Disease Dementia.” Nature Aging 1: 26–8. https://doi.org/10.1038/s43587-020-00008-0.
Tsoy, E., R. E. Kiekhofer, E. L. Guterman, B. L. Tee, C. C. Windon, K. A. Dorsman, S. C. Lanata, G. D. Rabinovici, B. L. Miller, A. J. Kind, and K. L. Possin. 2021. “Assessment of Racial/Ethnic Disparities in Timeliness and Comprehensiveness of Dementia Diagnosis in California.” JAMA Neurology 78 (6): 657–65. https://doi.org/10.1001/jamaneurol.2021.0399.
Vivot, A., M. C. Power, M. M. Glymour, E. R. Mayeda, A. Benitez, A. Spiro, J. J. Manly, C. Proust-Lima, C. Dufouil, and A. L. Gross. 2016. “Jump, Hop, or Skip: Modeling Practice Effects in Studies of Determinants of Cognitive Change in Older Adults.” American Journal of Epidemiology 183 (4): 302–14. https://doi.org/10.1093/aje/kwv212.
Weuve, J., C. Proust-Lima, M. C. Power, A. L. Gross, S. M. Hofer, R. Thiébaut, G. Chêne, M. M. Glymour, and C. Dufouil, the MELODEM Initiative. 2015. “Guidelines for Reporting Methodological Challenges and Evaluating Potential Bias in Dementia Research.” Alzheimers Dementia 11 (9): 1098–109. https://doi.org/10.1016/j.jalz.2015.06.1885.
World Health Organization. 2017. Global Action Plan on the Public Health Response to Dementia, 2017–2025. Geneva: WHO.
Wu, Q., E. J. Tchetgen Tchetgen, T. L. Osypuk, K. White, M. Mujahid, and M. M. Glymour. 2013. “Combining Direct and Proxy Assessments to Reduce Attrition Bias in a Longitudinal Study.” Alzheimer Disease and Associated Disorders 27 (3): 207–12. https://doi.org/10.1097/WAD.0b013e31826cfe90.
Walter, S., C. Dufouil, A. L. Gross, R. N. Jones, D. Mungas, T. J. Filshtein, J. J. Manly, T. E. Arpawong, and M. M. Glymour. 2019. “Neuropsychological Test Performance and MRI Markers of Dementia Risk: Reducing Education Bias.” Alzheimer Disease and Associated Disorders 33 (3): 179–85. https://doi.org/10.1097/WAD.0000000000000321.
Zahodne, L. B., M. M. Glymour, C. Sparks, D. Bontempo, R. A. Dixon, S. W. MacDonald, and J. J. Manly. 2011. “Education Does Not Slow Cognitive Decline with Aging: 12-Year Evidence from the Victoria Longitudinal Study.” Journal of the International Neuropsychological Society 17 (6): 1039–46. https://doi.org/10.1017/S1355617711001044.
Zahodne, L. B., E. P. Morris, N. Sharifian, A. B. Zaheed, A. Z. Kraal, and K. Sol. 2020. “Everyday Discrimination and Subsequent Cognitive Abilities across Five Domains.” Neuropsychology 34 (7): 783–90. https://doi.org/10.1037/neu0000693.
Zimmerman, S. C., W. D. Brenowitz, C. Calmasini, S. F. Ackley, R. E. Graff, S. B. Asiimwe, A. M. Staffaroni, T. J. Hoffmann, and M. M. Glymour. 2022. “Genetic Variants Associated with Late Onset Alzheimer’s Disease Predict Subtle Divergence in Cognitive Tests by Mid-Life.” JAMA Network Open, in press. https://doi.org/10.1001/jamanetworkopen.2022.5491.