Article Open Access

A critical appraisal of the WHO 2024 systematic review of the effects of RF-EMF exposure on tinnitus, migraine/headache, and non-specific symptoms

  • John W. Frank, Ronald L. Melnick and Joel M. Moskowitz
Published/Copyright: July 15, 2024

Abstract

The World Health Organization (WHO) in 2012 initiated an expert consultation about research on the health effects of radio-frequency electromagnetic fields (RF-EMF) for a WHO monograph that was last updated in 1993. The project was abandoned over concerns about the quality of the commissioned review papers. The WHO restarted the project in 2019 by commissioning 10 systematic reviews (SRs) of the research on RF-EMF exposure and adverse biological and health outcomes in laboratory animals, cell cultures, and human populations. The second of these SRs, published in 2024, addresses human observational studies of RF-EMF exposure and non-specific symptoms, including tinnitus, migraine/headache, and sleep disturbance. The present commentary is a critical appraisal of the scientific quality of this SR (SR7) employing criteria developed by the Oxford Centre for Evidence-Based Medicine. Based upon our review, we call for a retraction of SR7 and an impartial investigation by unconflicted experts of the currently available evidence and future research priorities.

Introduction

The World Health Organization (WHO) in 2012 began an international expert consultation about research on the health effects of radio-frequency electromagnetic fields (RF-EMF) for a WHO monograph (last updated in 1993); the project was abandoned due to concerns about the quality of the commissioned review papers [1]. In 2019 the WHO restarted the process by commissioning a series of 10 systematic reviews (SRs) of the scientific evidence on RF-EMF exposure and adverse biological and health outcomes in laboratory animals and in vitro settings, as well as in human populations [2]. The second of these SRs to reach publication in 2024, prepared by Röösli and his colleagues [3], focuses on the human observational evidence of associations between RF-EMF exposure and relatively non-specific symptoms, including tinnitus, migraine/headache, and sleep disturbance.

The authors of this commentary recognize that the assessment of scientific evidence in this field has long been controversial [4], [5], [6], [7], [8], [9], [10], [11]. We therefore seek, in this paper, to critically appraise the scientific quality of the SR by Röösli et al. [3], using standard, globally-used criteria developed by the Oxford Centre for Evidence-Based Medicine (CEBM) [12] (Table 1).

Table 1:

Critical appraisal questions for systematic reviews (Oxford CEBM, 2024 – adapted).

Internal validity

  1. What question (PECO: Population; Exposure; Controls; and Outcomes) did the systematic review address, and was it appropriate and adhered to?

  2. Is it unlikely that important, relevant studies were missed? What are the implications?

  3. Were the criteria used to select articles for inclusion/exclusion appropriate? If not, why not?

  4. Were the included studies sufficiently valid for the type of question asked? If not, why not?

  5. Were the results similar from study to studyᵃ? If not, why not?

  6. What were the results? Are they credible?


External validity


  7. Were the results suitable for generating scientifically robust exposure limits for “real-world” RF-EMFs?

ᵃThis question asks whether heterogeneity of the primary studies was properly analysed to be sure they are suitable for pooling by meta-analysis: i) statistically, in terms of their results? ii) substantively, in terms of design, exposure, outcome measurement, and analysis?

Methods

Before selecting the Oxford CEBM tool [12] for critically appraising SRs, we examined other tools in wide use for this purpose, such as the Navigation Guide [13] and the related OHAT Risk of Bias tool [14], the tool used by Röösli et al. [3, 15], as well as Nakagawa’s more recently published “ten (SR) appraisal questions for biologists” [16]. We found the Navigation Guide and OHAT tools to be onerous to apply and overly complex, with considerable internal duplication, as well as much jumping back and forth between experimental and observational studies; the former are of no relevance to the purely observational epidemiological studies reviewed by Röösli et al. [3] (human epidemiological studies in this field have largely been observational in nature, due to the obvious ethical constraints on experiments randomizing human subjects to potentially hazardous exposures).

In addition, as noted in a recent thorough critique [17], the OHAT risk-of-bias tool does not cover some important aspects of SR methodology of major relevance to observational epidemiological studies – e.g. the need to carefully analyse the designs, analyses and results of the relevant primary studies for substantive heterogeneity, in addition to formal statistical tests for heterogeneity across the results of those studies (cf. Table 1 footnote). Similarly, Nakagawa’s tool [16], not yet widely adopted, focuses on those aspects of the reviewed literature which biologists feel comfortable appraising, omitting consideration of several key epidemiological features of high-quality SRs. The Oxford CEBM tool [12], on the other hand, is largely in non-technical language, short and easy to use, with only six questions about internal validity and one about external validity. It is therefore relatively transparent, especially for use in the profoundly multidisciplinary context of this field.

Our two goals in writing this paper are to examine: a) the robustness of the recently published SR by Röösli et al. [3] and b) the scientific credibility of the relevant primary studies available to that team, when they conducted their final literature search in February 2023. The authors of this critique have therefore sought to answer the research question:

Do a) the SR by Röösli et al. [3] and b) the primary studies (to February 2023) of associations between RF-EMF exposure and five outcomes (tinnitus, headache, migraine, sleep disturbance, and composite symptom scores) present a compelling case that these associations are repeatably and consistently measurable, at a strength compatible with causation [18], and with a clear dose-response relationship where relevant, and are these results unlikely to be biased by significant methodological weaknesses in design and analysis?

Results and discussion

Internal validity

  1. What was the SR’s research question (PECO: Population; Exposure; Controls; Outcomes)? Was it appropriate and adhered to?

Röösli et al. [3] state that the aim of their SR was “to assess the effects of continuous or repeated local and whole human body RF-EMF exposure per unit-increase of one week or longer (E) on the occurrence of tinnitus, migraine and nonspecific symptoms (O), in the general population or workers (P) and to assess whether there is an exposure–response relationship between these outcomes and RF-EMF exposure levels (C).” The research question was therefore clearly posed, in the recommended format for SRs. As for the degree to which the review adhered to this research question, we are not concerned that this SR veered from its stated aims per se. What we focus on in this critique is the methods utilized by Röösli et al. in executing the SR.

  2. Is it unlikely that important, relevant studies were missed? What are the implications?

The literature search protocol is well described and appears to follow the published protocol for this SR [15], providing confidence that the search is reproducible. It is unlikely that any important, relevant studies were missed, as witnessed by the fact that the initial count of papers found was 4,458. However, we are concerned by the omission of both interrupted time-series studies and human experimental (provocation) studies of RF-EMF exposures, as discussed in the next section. We are also concerned that only one study included in the final meta-analysed lists of Röösli et al. [3], Schoeni et al. [19], was focused on children and adolescents. There are published accounts of significant burdens of illness in these age-groups, such as headaches, attributed by study subjects to cellphone use [20, 21].

  3. Were the criteria used to select articles for inclusion/exclusion appropriate? If not, why not?

This is a difficult and contentious question, because the 4,458 citations initially identified in the literature search were reduced by the authors, according to the paper’s flow-chart, to only 13 relevant, unique publications, concerning only eight independent studies of a quality deemed adequate for detailed analysis. While this level of exclusion is not unusual in literatures with many sub-standard primary studies, as is common in environmental epidemiology [22, 23], we would question some of the exclusion criteria as potentially omitting useful evidence for this SR. For example, human experimental studies, typically involving deliberate provocation of such symptoms by laboratory-controlled exposures to RF-EMFs, have been made the subject of a separate SR, still to appear [24], and so are excluded from consideration by Röösli et al. [3]. Although that literature is controversial [25], we would argue that omitting it entirely, a priori, from the SR by Röösli et al. [3] is contrary to best epidemiological practice, in that experimental studies, when of high quality, virtually always contribute stronger evidence of causation than observational studies [18].

Röösli et al. [3, 15] also excluded from detailed consideration any cross-sectional studies, on the grounds that they are not able to demonstrate clear temporality (i.e. that the exposure preceded the outcome in time), a strong Bradford Hill criterion for causation [18]. This meant that the only epidemiological study designs included in their final list of relevant and reasonable-quality studies were traditional cohort or case-control studies. However, we would point out that the symptoms focused on by Röösli et al. [3] (tinnitus, migraine/headache, sleep disturbance, as well as more non-specific complaints, such as fatigue, exhaustion and nervousness) are typically recurrent over long periods of time – often years. These symptoms’ potential causes/precipitants are therefore not ideally studied by the standard cohort or case-control designs used in observational epidemiology, which tend to assume that the outcome is more or less irreversible, as is the case, for example, in completed stroke/myocardial infarction or confirmed cancer. For symptoms that recur over prolonged periods, alternative etiological study designs are often preferable, such as interrupted time-series analyses of cases, assessing the association between changes in exposure levels and the onset or exacerbation of symptoms. In fact, Röösli himself has recently published, as senior author, a case time-series analysis of the association between acute psychiatric symptom exacerbations in admitted patients and peaks of noise from a local military airbase [26].

The omission of any such “non-standard” epidemiological study designs from this SR – which ended up with only 12 cohort publications and one case-control publication, based on only eight separate studies, deemed worthy of detailed analysis – may well have introduced bias. That is because the traditional cohort study design, aiming to illuminate disease causation, requires the exclusion of subjects with a history of the health outcome at baseline. In this case, persons with a history of tinnitus, migraine/headache, etc. in response to RF-EMF exposure (including persons regarding themselves as having Electromagnetic Hypersensitivity Syndrome – EHS, referred to as “IEI-EMF” by Röösli et al. [3]) appear to have been excluded at baseline from the eight separate cohort studies finally determined by Röösli et al. [3] to be of sufficient relevance and quality to be assessed in detail in their SR. This exclusion effectively limits the SR’s results to the causation of new-onset symptoms only – arguably a very small proportion of all relevant cases in the population at a given time, given that these conditions are typically recurrent.[1] Furthermore, as discussed in detail below in relation to the questionable statistical power of almost all the meta-analyses conducted by Röösli et al. [3], the failure to include even a handful of such “non-standard” observational epidemiological studies would have contributed substantially to the very small number of studies finally analysed – which varied from just three (for tinnitus and sleep disturbance) to four (for migraine/headache and composite symptoms), across all the broad categories of RF-EMF exposure examined.

In sum, we submit that Röösli et al. [3] over-excluded potentially valuable studies with non-standard epidemiological designs such as time-series analysis, and also did not adequately justify their a priori exclusion of all human experimental “provocation” studies.

  4. Were the included studies sufficiently valid for the type of question asked? If not, why not?

As described in great detail in the OHAT Risk of Bias tool, the full assessment of the validity of each of the primary studies, both those included and excluded in such an SR, requires intensive analysis of potential biases of several distinct types. Röösli et al. [3] cite eight such “domains” of bias related to: subject selection; attrition (losses to follow-up in cohort studies); information bias (errors) in both exposure (RF-EMF) and outcome measurement; selective reporting; appropriate statistical methods; and reverse causality.

We have examined the Risk of Bias analyses of Röösli et al. [3], for example, as tabulated in their Figure 2. We find their summary assessment to have been, if anything, more lenient than is warranted, in stating “About half of the assessed outcome-exposure-population combinations had low probability for risk of bias.” We reviewed a number of the primary studies included in the final meta-analysis of Röösli et al. [3] and find major methodological weaknesses that would normally exclude them from the final list of core studies for an SR.

An example of a primary study selected for meta-analysis by Röösli et al., despite its major methodological weaknesses, is the COSMOS study of headache and tinnitus as related to cellphone use in Finland and Sweden [27]. The COSMOS study contributes a substantial proportion of the data that go into two of the six meta-analyses of various exposure/outcome combinations depicted in Röösli et al.’s Figures 3–7 [3], so its influence on the pooled estimates of effect in those Forest Plots is substantial. However, the four-year-long COSMOS cohort study suffered from: exclusion at baseline of subjects with a history of tinnitus or weekly headaches (see the subheading above for why this is problematic); over-adjustment for potential confounding by inclusion in the main multivariable analysis of the covariate “daily painkiller use” (which is likely to reduce observed effect-sizes); and the choice of the reference level of cellphone use as “the bottom 50 % of the cellphone use distribution at baseline,” rather than a more extreme percentile cutoff. In addition, there is no real rationale for a four-year study duration for symptoms which, in persons who attribute them to RF-EMF exposure, are usually induced after a much briefer latency (typically hours to days). Ascertaining such outcomes so long after baseline exposures were measured risks increasing exposure misclassification substantially, during an era when both cellphone technology and user habits were changing rapidly. In particular, ongoing major changes in cellphone signal technology throughout the long period of the COSMOS study’s estimation of exposure (including a stratum of subjects with over 15 years of exposure in total) could only increase the inherent exposure misclassification arising from the study’s use of rather crude proxy measures of actual RF-EMF exposure among study subjects (cumulative number and duration of calls), despite the fact that operator information was used to cross-validate self-reported phone use.

Virtually all of the methodological weaknesses listed above would create biases in the direction of reducing the observed strengths of association towards the null (i.e. a Relative Risk of 1). For example, the overall Relative Risks (RRs) reported in the COSMOS study were not statistically significantly different from 1 for either tinnitus or headache, despite the study’s huge statistical power. In our view [28], these COSMOS findings were so likely to have been biased that this study should not have been selected for the final meta-analysis by Röösli et al. [3]. Notably, removing it from the two meta-analyses in which it was included by Röösli et al. [3] would have substantially increased the pooled RR across the remaining two or three primary studies, since the large sample-size of the COSMOS study led to its receiving heavy weighting in the meta-analyses by Röösli et al. [3], accounting for 35–40 % of the overall weight of all studies pooled.
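To make the arithmetic behind this weighting argument concrete, the following minimal sketch (in Python, using entirely hypothetical effect estimates and standard errors rather than the values reported in the SR, and a simple fixed-effect inverse-variance pooling rather than the exact model used by Röösli et al.) illustrates how a single heavily weighted, near-null study can pull a pooled RR toward 1, and how its removal shifts the pooled estimate upward:

```python
import math

# Hypothetical (log RR, standard error) pairs. "large_null_study" mimics a very
# large study with RR = 1.0; the two smaller studies have elevated RRs.
# All numbers are illustrative only, not taken from Röösli et al. or COSMOS.
studies = {
    "large_null_study": (math.log(1.00), 0.13),
    "small_study_A": (math.log(1.57), 0.15),
    "small_study_B": (math.log(1.82), 0.15),
}

def pooled_rr(data):
    """Fixed-effect inverse-variance pooled RR and each study's weight (%)."""
    w = {k: 1 / se ** 2 for k, (_, se) in data.items()}
    total = sum(w.values())
    pooled_log = sum(w[k] * data[k][0] for k in data) / total
    weights_pct = {k: 100 * w[k] / total for k in data}
    return math.exp(pooled_log), weights_pct

rr_all, weights = pooled_rr(studies)
rr_reduced, _ = pooled_rr(
    {k: v for k, v in studies.items() if k != "large_null_study"}
)

print(f"Pooled RR with the large near-null study:    {rr_all:.2f} "
      f"(its weight: {weights['large_null_study']:.0f}%)")
print(f"Pooled RR without the large near-null study: {rr_reduced:.2f}")
```

Under these illustrative numbers the large near-null study carries roughly 40 % of the total weight (comparable to the share described above), and removing it raises the pooled RR appreciably.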

Finally, some of the primary studies selected for meta-analysis clearly had lower than the required statistical power to show a clinically significant association (e.g. an RR of, say, 1.5 or greater) as statistically significant. This is illustrated by the three primary studies analysed for the outcome “tinnitus” – two of which [29, 30] have 95 % confidence intervals on their central estimates of Relative Risk which are very wide. In fact, those intervals of uncertainty span effect-sizes from 1 (the null) to more than 2 (a moderate strength of association, almost always of clinical significance if valid, especially for environmental exposures with high rates of exposure in the general population, as is the case here). There are equally low-powered studies in the other meta-analyses of Röösli et al. [3] – for example, in their Figure 7, three of the four primary studies of non-specific symptoms which were meta-analysed for “modelled exposure,” and two of the four primary studies of those same outcomes meta-analysed for “self-perceived exposure,” have remarkably wide 95 % confidence intervals for their estimates of effect-size, which in all cases span the null hypothesis.

Low power in primary studies is normally not a reason to exclude them from meta-analysis, the main purpose of which is to overcome power problems by pooling results across studies. However, when the total number of primary studies is as small (three to four) as it is in all six Forest Plots depicted in Röösli et al.’s Figures 3–7 [3], the inclusion of so many small studies substantially reduces the power of the meta-analyses. The meta-analyses of Röösli et al. [3] therefore suffer from two separate but synergistic power problems: 1) the very small number of studies included for each exposure/outcome combination examined; and 2) inadequate sample-size for a substantial proportion of those studies.
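As a rough illustration of how these two problems compound, the following Monte Carlo sketch (with hypothetical parameter values, a simple fixed-effect pooling, and a nominal true RR of 1.5; not a reproduction of any analysis in the SR) estimates the approximate power of a meta-analysis as the number of equally sized, heterogeneous primary studies varies:

```python
import math
import random

random.seed(1)  # for reproducibility of this illustration

def meta_power(k, n_sims=20_000, true_log_rr=math.log(1.5), se=0.40, tau=0.30):
    """Approximate power of a fixed-effect meta-analysis of k primary studies,
    each with within-study standard error `se`, when the true effect is RR = 1.5
    and between-study heterogeneity (SD of true log RRs) is `tau`.
    All parameter values are hypothetical and chosen only for illustration."""
    hits = 0
    for _ in range(n_sims):
        # Each study's observed log RR = overall effect + between-study deviation
        # + within-study sampling error.
        ys = [random.gauss(true_log_rr + random.gauss(0, tau), se) for _ in range(k)]
        pooled = sum(ys) / k                  # equal inverse-variance weights
        se_pooled = se / math.sqrt(k)         # fixed-effect pooled standard error
        if abs(pooled / se_pooled) > 1.96:    # two-sided test at alpha = 0.05
            hits += 1
    return hits / n_sims

for k in (3, 4, 10):
    print(f"k = {k:2d} studies -> approximate power {meta_power(k):.2f}")
```

Under these particular illustrative assumptions, a pooled analysis of only three or four such studies detects a true RR of 1.5 only about half the time, well below the conventional 80 % target, whereas around ten comparable studies approach or exceed it.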

  5. Were the results similar from study to study? If not, why not?

The inadequate power of the meta-analyses conducted by Röösli et al. [3] is made even more problematic by the very high levels of across-study heterogeneity [31]. This is explicitly clear from the I-squared values cited below the six Forest Plots (Figures 3–7) in their paper. These I-squared values are above 85 % for all the meta-analyses summarizing more than two primary studies (for two or fewer studies, such statistics are essentially meaningless). Any I-squared value above 75 % is regarded as indicating “substantial heterogeneity” across primary studies’ results [32, 33]. Such heterogeneity across primary studies’ results renders meta-analyses extremely unreliable and risks producing biased pooled results, due to the excessive influence of just one or two larger studies (as exemplified by the COSMOS study’s strong influence on two of the meta-analyses by Röösli et al. [3], discussed above). Widely cited methodological guidance for meta-analyses strongly advises that identifying fewer than ten (and certainly fewer than five) primary studies of a given exposure/outcome combination as both relevant and of adequate quality is a contraindication to performing meta-analysis at all, especially in the presence of substantial heterogeneity [34], [35], [36].
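For readers less familiar with the I-squared statistic, a brief sketch (again with hypothetical log-RRs and standard errors, not the actual study values from the SR) of how it is computed from Cochran's Q under inverse-variance weighting:

```python
import math

# Hypothetical log relative risks and standard errors for four primary studies
# (illustrative values only, chosen to show marked inconsistency).
log_rr = [0.05, 0.60, -0.30, 0.45]
se = [0.08, 0.20, 0.15, 0.25]

# Inverse-variance weights and the fixed-effect pooled estimate.
weights = [1 / s ** 2 for s in se]
pooled = sum(w * y for w, y in zip(weights, log_rr)) / sum(weights)

# Cochran's Q: weighted sum of squared deviations of study estimates
# from the pooled estimate; df = number of studies - 1.
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_rr))
df = len(log_rr) - 1

# I-squared: the percentage of total variation across studies attributable to
# between-study heterogeneity rather than chance (floored at zero).
i_squared = max(0.0, (q - df) / q) * 100

print(f"Pooled RR = {math.exp(pooled):.2f}, Q = {q:.1f} on {df} df, I² = {i_squared:.0f}%")
```

With these illustrative numbers, Q is roughly 15 on 3 degrees of freedom, giving an I-squared of about 80 %, i.e. above the conventional 75 % threshold for substantial heterogeneity; the I-squared values reported by Röösli et al. are higher still.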

It is here that we encounter perhaps our major concern with the approach of Röösli et al. [3], in calculating meta-analytic summaries of the RRs (or standardized mean difference in the case of continuous outcomes, such as symptom scores) and their 95 % confidence intervals for each of five health outcomes, pooled across the finally selected primary studies (which never number more than four for any Forest Plot of a given exposure/outcome combination). Pooling results in this way is not best epidemiological practice, when – as Röösli et al. [3] freely admit – the number of studies available for each meta-analysis is small and the heterogeneity of effect-sizes found across those primary studies is very high. The appropriate scientific conclusion here is that the available and selected studies, for each exposure-outcome-population combination assessed, are so inconsistent in their results that they must not be estimating the same actual strength of association, so that pooling the results is contra-indicated [36].

A particular consequence of pooling studies with heterogeneous results is that it inevitably “dilutes” the influence of studies finding higher effect-sizes by adding in studies with smaller ones, which (as noted above) typically tend to suffer from biases that underestimate association strength, especially in environmental cohort studies, due to widespread misclassification/measurement-error for the exposure of interest. This effect is evident in the meta-analytic results depicted in the Forest Plots for several of the outcomes examined by Röösli et al. [3]. Of even greater concern, three of the six depicted Forest Plots (those for “headaches,” “sleep disturbance” and “non-specific symptoms”) show some primary studies with statistically significant observed associations in the opposite direction to that expected. Such a counterintuitive finding can be confidently attributed to various sorts of study biases, simply because there is no known mechanism by which increased EMF exposure would protect against such symptoms. Rather than pool such conflicting and likely biased results, thereby reducing the overall pooled effect to near or below the null in these very heterogeneous sets of primary studies, it would have been better to treat studies finding such “opposite to expected” directions of association as “inadmissible due to bias,” and exclude them from further consideration. (Typically, when heterogeneity is encountered in a meta-analysis, supplemental analyses are conducted to identify effect modifiers; however, this was not practical for Röösli et al. [3] owing to the small number of studies remaining after the extensive exclusions applied in this SR.)

As a result of these concerns, we are of the view that calculating and depicting – in the six Forest Plots of this SR – pooled estimates, of the strengths of associations across primary studies, borders on pseudo-science. It implies that such pooled estimates are valid, when they are not. In short, they should not have been calculated for such sparse and heterogeneous sets of studies, many with profound methodological weaknesses likely to have led to bias.

  6. What were the results? Are they credible?

We credit Röösli et al. [3] with acknowledging – e.g. in the very last line of their abstract – the “substantial uncertainty” arising from the limitations of the 13 included papers from eight independent studies, spanning several exposure-categories and the four primary outcomes tabulated in their Figure 2. We are concerned, however, that the phrasing used by Röösli et al. [3] to summarise their conclusions does not accurately reflect the implications for meta-analysis of this uncertainty. For example, the main conclusion of their abstract, “There is no indication that RF-EMF below guideline values causes symptoms …” is simply not scientifically accurate; indeed, it is misleading. It should read: “Overall, the quantity and quality of evidence available from the primary studies reviewed is insufficient to draw any valid conclusions about whether or not RF-EMF exposures below guideline values cause the symptoms studied.” This sort of scientific misstatement has long been acknowledged as one of the most frequent errors in meta-analyses, often referred to as “mistaking absence of evidence for evidence of absence (of an effect)” [36, 37].

One further consideration, in assessing the potential of the SR by Röösli et al. [3] for bias, is the affiliations and funding sources of the authors, which are only partly declared in their paper, compared to those documented in publicly available sources. This worrisome under-declaration of potential conflicts of interest is demonstrated by contrasting the statement of such potential conflicts at the end of this SR (first quotation below) with the statement of funding from the institution with which Röösli et al. [3] are associated (second quotation below):

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Martin Röösli’s research is entirely funded by public or not-for-profit foundations. He has served as advisor to a number of national and international public advisory and research steering groups concerning the potential health effects of exposure to nonionizing radiation, including the World Health Organization, the International Agency for Research on Cancer, the International Commission on Non-Ionizing Radiation Protection, the Swiss Government (member of the working group “mobile phone and radiation” and chair of the expert group BERENIS), the German Radiation Protection Commission (member of the committee Non-ionizing Radiation (A6) and member of the working group 5G (A630)) and the Independent Expert Group of the Swedish Radiation Safety Authority.

This statement does not reflect the true extent of the relationships with telecommunications firms of Röösli et al. [3] and their home institution (Swiss Research Foundation for Electricity and Mobile Communication – FSM), which are described in that Foundation’s 2022 Annual Report as follows [38]:

Organization and financing: the Research Foundation is sponsored by the ETH Zurich, and the companies Cellnex, Ericsson, Sunrise, Swisscom, and Swissgrid. Institutionally, the FSM is supported by the Swiss Federal Offices of Public Health (FOPH), Communications (OFCOM), Environment (FOEN), and Energy (SFOE), as well as by the Federal Inspectorate for Heavy Current Installations (ESTI). In addition, the following NGOs support the Foundation: the Swiss Academy of Engineering Sciences (SATW), Swiss Consumer Forum (KF), the Swiss Heritage Society (SHS), the Swiss Cancer League, Ingenieur Hospital Schweiz, the Swiss Electricity Industry Association (VSE), the Swiss Telecommunications Association (ASUT), Suissedigital, Electrosuisse, Swico, the Swiss Conferences of Cantonal Ministers for Construction, Planning and the Environment (BPUK), and for Energy (EnDK).

These potential conflicts of interest are of great concern. There are many aspects of both systematic reviews and meta-analysis which – despite the use of widely recommended tools such as the OHAT Risk of Bias scale and the GRADE scheme for assessing strength of evidence – involve inherently subjective decisions. Such subjectivity inevitably leads to significant variation across such reviews, even when precisely the same primary studies are being assessed [17, 39]. This potential for subjectivity requires clear-cut independence of such reviews’ co-authors from any and all influences which might lead to bias related to conflicts of interest.

External validity

  7. Were the results suitable for generating scientifically robust exposure limits for “real-world” RF-EMFs?

We have a specific concern regarding the extrapolation of this SR’s results to the current challenges of protecting the public from the rapidly evolving RF-EMF exposures now occurring in many countries, such as those related to the rollout of 5G telecommunications technologies. That concern is that none of the specific exposures studied in the literature reviewed by Röösli et al. [3] is representative of the very diverse emerging RF-EMF exposures now affecting human populations (and the environment more generally) in the real world [8, 9]. To be fair, this is a general problem with many health protection topics, not just regulatory policies for RF-EMFs, although it is barely discussed by Röösli et al. [3]. Overcoming this major lag between real-world technological changes and the available research literature on their safety for the public will require a substantial reorganization of, and massively increased funding for, international studies by top, impartial researchers of newer RF-EMFs’ adverse effects, both in humans and in other species [4], [5], [6], [7], [8], [9].

Conclusions

To summarize, the way in which any epidemiologically unsophisticated reader is likely to be misled by this SR is clear. It appears to conclude unequivocally that the body of scientific evidence reviewed supports the safety of current (e.g. ICNIRP-based) population exposure limits for RF-EMF [10]. We reiterate that, on the contrary, this body of evidence is not adequate to either support or refute the safety of current exposure limits – largely due to the very small number and low methodological quality of the relevant primary studies to date, and the fundamental inappropriateness of meta-analysis for the handful of very heterogeneous primary studies identified by Röösli et al. [3] for each of the exposure/outcome combinations analysed.

We therefore call for a retraction of the SR by Röösli et al., and an impartial international investigation, by unconflicted experts, of both the currently available evidence base on these issues and related research priorities for the future. That investigation should particularly address, above and beyond the topic of priority health outcomes to be researched (which was already assessed in the international expert consultation by WHO in 2018) [2], the need for improved methods of accurately measuring RF-EMF exposures, suitable for large human observational studies in the general population – the Achilles heel of the current literature.


Corresponding author: Joel M. Moskowitz, PhD, School of Public Health, University of California, Berkeley, CA 94720, USA, E-mail:
On behalf of the International Commission on the Biological Effects of Electromagnetic Fields (ICBE-EMF).

Acknowledgments

The authors wish to thank their colleagues in the International Commission on the Biological Effects of Electromagnetic Fields (ICBE-EMF) for their support and approval of this paper.

  1. Research ethics: Not applicable.

  2. Informed consent: Not applicable.

  3. Author contributions: The authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  4. Competing interests: The authors state no conflict of interest.

  5. Research funding: The Electromagnetic Safety Alliance provided funding for open access publication.

  6. Data availability: Not applicable.

References

1. Microwave News. Will WHO kick its ICNIRP habit? Non-thermal effects hang in the balance: Repacholi’s legacy of industry cronyism. Updated June 5, 2023. Available from: https://microwavenews.com/news-center/can-who-kick-icnirp-habit [Accessed 30 Apr 2024].

2. Verbeek, J, Oftedal, G, Feychting, M, van Rongen, E, Rosaria Scarfì, M, Mann, S, et al. Prioritizing health outcomes when assessing the effects of exposure to radiofrequency electromagnetic fields: a survey among experts. Environ Int 2021;146:106300. https://doi.org/10.1016/j.envint.2020.106300.

3. Röösli, M, Dongus, S, Jalilian, H, Eyers, J, Esu, E, Oringanje, CM, et al. The effects of radiofrequency electromagnetic fields exposure on tinnitus, migraine and non-specific symptoms in the general and working population: a systematic review and meta-analysis on human observational studies. Environ Int 2024;183:108338. https://doi.org/10.1016/j.envint.2023.108338.

4. Russell, CL. 5G wireless telecommunications expansion: public health and environmental implications. Environ Res 2018;165:484–95. https://doi.org/10.1016/j.envres.2018.01.016.

5. Di Ciaula, A. Towards 5G communication systems: are there health implications? Int J Hyg Environ Health 2018;221:367–75. https://doi.org/10.1016/j.ijheh.2018.01.011.

6. Moskowitz, JM. We have no reason to believe 5G is safe. Scientific American blogs; 2019. Available from: https://blogs.scientificamerican.com/observations/we-have-no-reason-to-believe-5g-is-safe/ [Accessed 26 Feb 2024].

7. Simkó, M, Mattsson, MO. 5G wireless communication and health effects – a pragmatic review based on available studies regarding 6 to 100 GHz. Int J Environ Res Publ Health 2019;16:3406. https://doi.org/10.3390/ijerph16183406.

8. Kostoff, RN, Heroux, P, Aschner, M, Tsatsakis, A. Adverse health effects of 5G mobile networking technology under real-life conditions. Toxicol Lett 2020;323:35–40. https://doi.org/10.1016/j.toxlet.2020.01.020.

9. Frank, JW. Electromagnetic fields, 5G and health: what about the precautionary principle? J Epidemiol Community Health 2021. https://doi.org/10.1136/jech-2019-213595.

10. ICNIRP – International Commission on Non-Ionizing Radiation Protection. Available from: https://www.icnirp.org/ [Accessed 26 Feb 2024].

11. ICBE-EMF – International Commission on Biological Effects of Electromagnetic Fields. Available from: https://icbe-emf.org/resources/ [Accessed 26 Feb 2024].

12. Oxford Centre for Evidence-Based Medicine. Systematic review. Oxford, UK: University of Oxford. Available from: https://www.cebm.net/wp-content/uploads/2019/01/Systematic-Review.pdf [Accessed 21 Feb 2024].

13. Woodruff, TJ, Sutton, P. The navigation guide systematic review methodology: a rigorous and transparent method for translating environmental health science into better health outcomes. Environ Health Perspect 2014;122:1007–14. https://doi.org/10.1289/ehp.1307175.

14. Rooney, AA, Boyles, AL, Wolfe, MS, Bucher, JR, Thayer, KA. Systematic review and evidence integration for literature-based environmental health science assessments. Environ Health Perspect 2014;122:711–8. https://doi.org/10.1289/ehp.1307972.

15. Röösli, M, Dongus, S, Jalilian, H, Feychting, M, Eyers, J, Esu, E, et al. The effects of radiofrequency electromagnetic fields exposure on tinnitus, migraine and non-specific symptoms in the general and working population: a protocol for a systematic review on human observational studies. Environ Int 2021;157:106852. https://doi.org/10.1016/j.envint.2021.106852.

16. Nakagawa, S, Noble, DW, Senior, AM, Lagisz, M. Meta-evaluation of meta-analysis: ten appraisal questions for biologists. BMC Biol 2017;15:18. https://doi.org/10.1186/s12915-017-0357-7.

17. Boogaard, H, Atkinson, RW, Brook, JR, Chang, HH, Hoek, G, Hoffmann, B, et al. Evidence synthesis of observational studies in environmental health: lessons learned from a systematic review on traffic-related air pollution. Environ Health Perspect 2023;131:115002. https://doi.org/10.1289/EHP11532.

18. Hill, AB. The environment and disease: association or causation? 1965. J R Soc Med 2015;108:32–7. https://doi.org/10.1177/0141076814562718.

19. Schoeni, A, Roser, K, Röösli, M. Symptoms and the use of wireless communication devices: a prospective cohort study in Swiss adolescents. Environ Res 2017;154:275–83. https://doi.org/10.1016/j.envres.2017.01.004.

20. Durusoy, R, Hassoy, H, Özkurt, A, Karababa, AO. Mobile phone use, school electromagnetic field levels and related symptoms: a cross-sectional survey among 2150 high school students in Izmir. Environ Health 2017;16:51. https://doi.org/10.1186/s12940-017-0257-x.

21. Chongchitpaisan, W, Wiwatanadate, P, Tanprawate, S, Narkpongphan, A, Siripon, N. Trigger of a migraine headache among Thai adolescents smartphone users: a time series study. Environ Anal Health Toxicol 2021;36:e2021006–0. https://doi.org/10.5620/eaht.2021006.

22. Armstrong, BG. Effect of measurement error on epidemiological studies of environmental and occupational exposures. Occup Environ Med 1998;55:651–6. https://doi.org/10.1136/oem.55.10.651.

23. Arroyave, WD, Mehta, SS, Guha, N, Schwingl, P, Taylor, KW, Glenn, B, et al. Challenges and recommendations on the conduct of systematic reviews of observational epidemiologic studies in environmental and occupational health. J Expo Sci Environ Epidemiol 2021;31:21–30. https://doi.org/10.1038/s41370-020-0228-0.

24. Bosch-Capblanch, X, Esu, E, Dongus, S, Oringanje, CM, Jalilian, H, Eyers, J, et al. The effects of radiofrequency electromagnetic fields exposure on human self-reported symptoms: a protocol for a systematic review of human experimental studies. Environ Int 2022;158:106953. https://doi.org/10.1016/j.envint.2021.106953.

25. Schmiedchen, K, Driessen, S, Oftedal, G. Methodological limitations in experimental studies on symptom development in individuals with idiopathic environmental intolerance attributed to electromagnetic fields (IEI-EMF) – a systematic review. Environ Health 2019;18:88. https://doi.org/10.1186/s12940-019-0519-x.

26. Wicki, B, Vienneau, D, Schäffer, B, Müller, TJ, Raub, U, Widrig, J, et al. Acute effects of military aircraft noise on sedative and analgesic drug administrations in psychiatric patients: a case-time series analysis. Environ Int 2024;185:108501. https://doi.org/10.1016/j.envint.2024.108501.

27. Auvinen, A, Feychting, M, Ahlbom, A, Hillert, L, Elliott, P, Schüz, J, et al. Headache, tinnitus and hearing loss in the International Cohort Study of Mobile Phone Use and Health (COSMOS) in Sweden and Finland. Int J Epidemiol 2019;48:1567–79. https://doi.org/10.1093/ije/dyz127.

28. Moskowitz, JM, Frank, JW, Melnick, RL, Hardell, L, Belyaev, I, Héroux, P, et al. COSMOS: a methodologically-flawed cohort study of the health effects from exposure to radiofrequency radiation from mobile phone use (Letter to the Editor). Environ Int 2024;190:108807. https://doi.org/10.1016/j.envint.2024.108807.

29. Hutter, HP, Moshammer, H, Wallner, P, Cartellieri, M, Denk-Linnert, DM, Katzinger, M, et al. Tinnitus and mobile phone use. Occup Environ Med 2010;67:804–8. https://doi.org/10.1136/oem.2009.048116.

30. Frei, P, Mohler, E, Braun-Fahrländer, C, Fröhlich, J, Neubauer, G, Röösli, M, et al. Cohort study on the effects of everyday life radio frequency electromagnetic field exposure on non-specific symptoms and tinnitus. Environ Int 2012;38:29–36. https://doi.org/10.1016/j.envint.2011.08.002.

31. Dekkers, OM, Vandenbroucke, JP, Cevallos, M, Renehan, AG, Altman, DG, Egger, M. COSMOS-E: guidance on conducting systematic reviews and meta-analyses of observational studies of etiology. PLoS Med 2019;16:e1002742. https://doi.org/10.1371/journal.pmed.1002742.

32. Thompson, SG, Higgins, JP. How should meta-regression analyses be undertaken and interpreted? Stat Med 2002;21:1559–73. https://doi.org/10.1002/sim.1187.

33. Harrer, M, Cuijpers, P, Furukawa, TA, Ebert, DD. Doing meta-analysis with R: a hands-on guide. Boca Raton, FL and London: Chapman & Hall/CRC Press; 2021. https://doi.org/10.1201/9781003107347.

34. Hedges, LV, Pigott, TD. The power of statistical tests in meta-analysis. Psychol Methods 2001;6:203–17. https://doi.org/10.1037/1082-989x.6.3.203.

35. Higgins, JPT, Thomas, J, Chandler, J, Cumpston, M, Li, T, Page, MJ, et al., editors. Cochrane handbook for systematic reviews of interventions version 6.4. Cochrane; 2023 (updated August 2023). Available from: www.training.cochrane.org/handbook.

36. Valentine, JC, Pigott, TD, Rothstein, HR. How many studies do you need? A primer on statistical power for meta-analysis. J Educ Behav Stat 2010;35:215–47. https://doi.org/10.3102/1076998609346961.

37. Rosenthal, R, Rubin, DB. The counternull value of an effect size: a new statistic. Psychol Sci 1994;5:329–34. https://doi.org/10.1111/j.1467-9280.1994.tb00281.x.

38. Swiss Research Foundation for Electricity and Mobile Communication – FSM. Annual report. Zurich: FSM; 2022. Available from: https://www.emf.ethz.ch/fileadmin/redaktion/public/downloads/3_angebot/wissensvermittlung/jahresberichte/2022_Jahresbericht_FSM.pdf [Accessed 1 May 2024].

39. Eick, SM, Goin, DE, Chartres, N, Lam, J, Woodruff, TJ. Assessing risk of bias in human environmental epidemiology studies using three tools: different conclusions from different tools. Syst Rev 2020;9:249. https://doi.org/10.1186/s13643-020-01490-8.

Published Online: 2024-07-15
Published in Print: 2025-06-26

© 2024 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
