Article Open Access

The Now-Defunct ResearchGate Score and the Extant Research Interest Score: A Continued Debate on Metrics of a Highly Popular Academic Social Networking Site

Jaime A. Teixeira da Silva
Published/Copyright: February 14, 2025

Abstract

Academics may employ science social media or academic social networking sites (ASNSs), such as ResearchGate (RG), to showcase and promote their academic work, research, or published papers. In turn, RG provides usage statistics and performance metrics, such as the now-defunct RG Score and the Research Interest Score (RIS), that offer a form of recognition of a researcher’s popularity, or of how their research is being used or appreciated. As part of a broader appreciation of how ASNSs contribute to knowledge sharing, this article reappraises the RG Score, reflecting on why that metric may have been abandoned and considering whether the RIS is any better as an author-based altmetric. As with the RG Score, RG does not transparently indicate the precise equation used to calculate the RIS, nor is any rationale provided for the weighting of its four factors (other reads, full-text reads, recommendations, and citations), which carry relative weightings of 0.05, 0.15, 0.25, and 0.5, respectively. Ultimately, the responsible use of RG’s altmetrics lies in users’ hands, although caution is advised regarding their use to formally characterize or rank academics or research institutes.

1 ResearchGate (RG) Metrics: Now-Defunct RG Score and the Extant Research Interest Score (RIS)

Currently, one of the most popular academic social networking sites (ASNSs) is RG.[1] The popularity of ASNSs stems from academics’ desire to use them to amplify their reputation (Nicholas, Clark, & Herman, 2016) and influence (Desai, Mehta, & Rana, 2024) by making their work – mainly in the form of published research – more visible (D’Alessandro et al., 2020; Francke & Hammarfelt, 2022; Ostermaier-Grabow & Linek, 2019), thus showcasing their productivity (Vinay, Sampath Kumar, & Shiva Kumara, 2020). It also allows researchers to network with industry, although the presence of corporations’ researchers on RG was, until fairly recently, still limited (Yan, Liu, Chen, & Yi, 2020). RG provides a simple yet powerful platform to achieve these objectives, and its metrics serve as altmetrics, or proxies for scientific impact (Hoffmann, Lutz, & Meckel, 2016; Meishar-Tal & Pieterse, 2017), while presence on RG is strongly associated with productivity on Google Scholar (GS) (Sánchez-Teba, Rodríguez-Fernández, & Gaspar-González, 2021). RG is more popular than another ASNS, Academia.edu (Boudry & Durand-Barthez, 2020). RG serves as a useful complement to institutional repositories (Eva & Wiebe, 2019), or can even substitute for them because, unlike some institutional repositories, it provides a publicly visible version of the record (Borrego, 2017). However, the deletion of an RG account, for example, through moderation caused by a violation of RG user regulations, may result in the loss of an entire repository and the deletion of that researcher’s RG profile (Tsigaris & Teixeira da Silva, 2019). Although RG is proprietary, there are currently no costs involved in establishing an RG account. This, together with its relatively simple and integrated user-friendly interface, makes it a popular ASNS (Kim & Oh, 2021; Kim, 2018), so much so that it now (December 20, 2024) houses, according to RG, 25 million scientists and 160 million publication pages.

Despite these benefits and popularity, some researchers, even established and experienced ones, may be skeptical about using ASNSs to promote themselves or their work (Greifeneder et al., 2018). Moreover, popularity can breed risks: a portion of users may abuse or misuse the platform, for example to advertise unscholarly work, papers, or journals (Memon, 2016) or to showcase retracted papers in their unretracted state (Teixeira da Silva & Bornemann-Cimenti, 2017), thereby inviting users to cite retracted work and placing the integrity of their own papers’ content at risk. If voluminous, such cases may bring disrepute to the platform and distrust in the science and scientists that are showcased there. RG regularly adjusts the platform’s dynamics to improve structural features, such as changes in rules, resources, or capabilities, or to respond to users’ and the public’s feedback or needs (Huang, Zha, Yan, & Wang, 2019). Collectively, these aspects have made RG an attractive research subject, with the volume of papers related to RG – as indexed on GS – increasing from between 8 and 19 per year in 2008–2013 to 108 in 2017 (Prieto-Gutiérrez, 2019).

Another attractive feature of RG used to be the RG Score, one of the ten reputational mechanisms that RG used to have (Nicholas et al., 2016). However, this altmetric was phased out in August 2022 (ResearchGate, 2022). The RG Score was determined by four factors: publications, questions, answers, and followers (Figure 1a and b). These, viewed in isolation, did not represent the full complement of scientists’ academic achievements and merits, nor did they capture academic failures such as retractions or suspicious publishing activity such as engagement with predatory journals or publishers (Teixeira da Silva & Yamada, 2023). Consequently, an RG Score could follow only one growth pattern, an increase, and could never be moderated or reduced by negative factors. In April 2022, a notice appeared on the “Score” section of RG profiles, indicating that the RG Score would be phased out in July 2022 (Figure 1b). RG’s description of the RG Score changed between 2017 and 2022 (Figure 2a versus Figure 2b). To serve as a historical scientific record for academic posterity, the full statement is provided in Appendix 1 (Suppl. file), while the full texts of the 2017 and 2022 descriptions are provided in Appendix 2 (Suppl. file). At that time, a query sent to RG regarding how the RG Score was calculated, and why there were some discrepancies and accounts with apparently highly inflated RG Scores, was met with a deflective response (Appendix 3; Suppl. file).

Figure 1
The RG Score was only visible in “private” mode, i.e., when a scientist logged into their account, and was not public. The author’s RG Score is displayed from 2017 (a) and 2022 (b). Sources: (a) https://www.researchgate.net/profile/Jaime_Teixeira_Da_Silva/reputation and (b) https://www.researchgate.net/profile/Jaime-Teixeira-Da-Silva/scores. Dates of screenshots: September 20, 2017 (a) and April 13, 2022 (b). Both URLs used to support these screenshots are now defunct (404 errors) and jump automatically to the author’s RG account top page (October 2023).

Figure 2
Top of the description of the RG Score in 2017 (a), 2022 (b), and 2023 (c). The full textual content can be observed in Appendix 2. Sources: (a) https://www.researchgate.net/publicprofile.RGScoreFAQ.html; (b) https://explore.researchgate.net/display/support/RG+Score; and (c) https://help.researchgate.net/hc/en-us/articles/14293512753425. The URLs for (a) and (b) are now defunct and jump automatically to the URL in (c). Dates of screenshots: October 6, 2017 (a); April 14, 2022 (b); and October 18, 2023 (c). It is unclear whether there were other variations of this text before, in between, or after these dates.

Several criticisms were leveled at the RG Score by academics. First, given its proprietary nature, the equation used to calculate this metric was never openly, transparently, or publicly disclosed, so it could not be replicated because the precise weighting of the four factors was unknown (Kraker & Lex, 2015; Meier & Tunger, 2018; Teixeira da Silva & Yamada, 2023). Another criticism was that the RG Score could be artificially amplified if users asked unrealistic questions or a disproportionate number of them (Cozma & Dimitrova, 2021; Orduna-Malea, Martín-Martín, Thelwall, & Lopez-Cozar, 2017). While questions and answers (Q&As) at RG served to obtain information, recommendations, or feedback (Huang et al., 2019; Jeng, DesAutels, He, & Li, 2017), some were highly subjective (Li, He, & Zhang, 2016) while others were questionable in nature (Figure 3), i.e., redundant questions and/or irrelevant answers (Desai, Mehta, & Rana, 2023), raising concerns about how they could possibly contribute to this supposedly objective altmetric. Another disturbing aspect of RG Q&As is the ease with which they can disappear (see Figure 3c as an example). Since such URLs are typically not captured as mementos at the Internet Archive, such knowledge is irretrievably lost. Conversely, well-structured questions elicited a wealth of informative answers, and such Q&As tended to amplify the RG Score (Li et al., 2023). Thus, pro-active engagement with RG Q&As required an element of positivity or evidence-based negativity, as well as succinct responses (Deng, Tong, Lin, Li, & Liu, 2019). The RG Score could also be abused to artificially amplify institutional rankings (Thelwall & Kousha, 2015; Yu, Wu, Alhalabi, Kao, & Wu, 2016), with institutional participation, and thus researcher visibility, not accurately reflecting publication output (Lepori, Thelwall, & Hoorani, 2018). Four examples of RG Scores that were assigned to institutions are provided in Figure 4.

Figure 3
Questions and answers purportedly used to contribute towards the RG Score. Despite this, joking questions (a), meaningless answers (b), or “cloned” answers (c) might have been perceived as unfair contributions to a scientist’s RG Score. Disclaimer: the objective of displaying these comments is not to discredit specific academics, but merely to exemplify, at random, what is perceived to be redundant and/or problematic questions or answers. Sources: (a) https://www.researchgate.net/post/Did_you_know8; (b) https://www.researchgate.net/post/Interested_in_doing_research_work; and (c) https://www.researchgate.net/search.Search.html?type=question&query=“I discovered a very interesting utopian Russian novel which predicted Putin’s war plan”. Dates of screenshots (a–c): April 18, 2022. The URL used to support the screenshot in (c) is now defunct (404 error) (October 18, 2023).

Figure 4
The RG Score was assigned not only to individual academics but also to research institutes, as a cumulative score of their individual researchers. Examples are provided: Karolinska Institutet, Sweden (a); Harvard University, USA (b); The University of Tokyo, Japan (c); and Curtin University, Australia (d). Sources: https://www.researchgate.net/institution/Karolinska_Institutet (a); https://www.researchgate.net/institution/Harvard_University (b); https://www.researchgate.net/institution/The_University_of_Tokyo (c); and https://www.researchgate.net/institution/Curtin_University2 (d). Date of screenshots: September 27, 2017. The four URLs remained essentially unchanged between 2017 and 2023.

RG thus serves as a way to amplify the academic profile of a scientist via citations (Bornmann, 2016), number of reads (NoRs), and influence, previously assessed by the RG Score and now assessed by the RIS. Some claimed that RG’s citation or literature coverage was superior to that of Elsevier’s Scopus and Clarivate’s Web of Science, but inferior to that of GS (Thelwall & Kousha, 2017a, b). The RIS is not publicly visible[2] on any RG user’s account, and one must first log in to observe one’s own RIS and/or the RIS of other RG users. The “Stats” menu of an RG user’s profile indicates that four factors contribute to the RIS: citations, recommendations, full-text reads, and other reads (Figure 5a). Pressing “View details” leads to a pop-up screen that reveals more details (Figure 5b; Appendix 4; Suppl. file). It is notable that Q&As, which formed an integral part of the RG Score, are not factored into how the RIS is calculated.[3] The RIS has been criticized as merely recycling and collating already existing metrics into a newly branded metric, thus adding no new metric value (Copiello, 2019, 2020; Zhang & Kumaran, 2023). Knudson (2023) exemplified this with a small sample of 60 top-ranked sports biomechanics researchers, ranked according to their GS profiles, noting strong correlations between the RIS and the GS, RG, and Scopus Hirsch (h)-indexes.

Figure 5 
               The RIS is not visible to the public and is only visible to users after logging in, in the “Stats” section of their profiles. Top page (a) and “Stats” page (b). In the RIS menu of the “Stats” page, clicking the “View details” button (red box) (b) provides additional details and an initial explanation of how the RIS is calculated, noting specifically that self-citations are excluded from the equation (c). More details about how the RIS is calculated are provided when pressing the “Learn more about the Research Interest Score” button (turquoise blue box) (c). Sources: https://www.researchgate.net/profile/Jaime-Teixeira-Da-Silva/stats (a and b) and https://help.researchgate.net/hc/en-us/articles/14293473316753 (c). Date of screenshots: October 21, 2023. A more recent (September 8, 2024) examination of the explanation of how the RIS is calculated indicated no changes between 2023 and 2024.

Two additional criticisms of the RIS, not yet raised in the previous literature, are noted in this article: (1) self-citations are automatically discounted as invalid for the total citation count, even if they offer valid support to statements, and (2) the RIS (like the RG Score) can only increase, and there is no mechanism, e.g., a negative weighting,[4] that considers negative social media coverage or negative curricular aspects, such as expressions of concern or retractions. To its credit, RG does explain in more transparent detail (relative to the RG Score previously) how the RIS is calculated, i.e., the way in which the four main factors are weighted (see Appendix 4 text). However, as with the RG Score, the precise equation is not provided, although the RIS was shown to be strongly correlated with GS, Web of Science, and then Scopus citations, in that order, but not with their h-indexes (Memisevic, 2022). Even so, no rational or logical explanation is provided by RG as to why, for example, a citation should carry a weighting more than three-fold that of a full-text read (0.5 versus 0.15, respectively).
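Because RG discloses the relative weightings of the four RIS factors but not the precise equation, the score can, at best, be approximated. The short Python sketch below assumes a simple weighted sum using the relative weights reported by RG (other reads 0.05, full-text reads 0.15, recommendations 0.25, citations 0.5); it is a hypothetical reconstruction for illustration only, not RG’s actual algorithm.

# Illustrative sketch only: RG does not disclose the precise RIS equation.
# A simple weighted sum is assumed here, using the relative weights reported
# by RG (see Appendix 4 text): other reads 0.05, full-text reads 0.15,
# recommendations 0.25, citations 0.5 (self-citations already excluded by RG).
RIS_WEIGHTS = {
    "other_reads": 0.05,
    "full_text_reads": 0.15,
    "recommendations": 0.25,
    "citations": 0.5,
}

def approximate_ris(counts: dict) -> float:
    """Approximate a publication's RIS as a weighted sum of its four factor counts."""
    return sum(weight * counts.get(factor, 0) for factor, weight in RIS_WEIGHTS.items())

# Example: 200 other reads, 50 full-text reads, 4 recommendations, 10 citations.
print(approximate_ris({"other_reads": 200, "full_text_reads": 50,
                       "recommendations": 4, "citations": 10}))  # 23.5

Under this assumed form, an author-level RIS would simply aggregate the per-publication scores, and the metric could only ever increase, since every factor is a non-negative count multiplied by a positive weight.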

Does a shift in focus from scientific influence (metrics only) to social altmetrics (the RG Score and RIS) reflect purely scientific merit (Bornmann & Haunschild, 2017), and is an emphasis on engagement rather than on scientific output (i.e., publications) (Copiello & Bonifaci, 2018) a way to amplify self-branding? For example, the lack of regulation of RG Projects allowed the authors of some projects to engage in potentially commercial promotional activity, thereby gaining additional reads and artificially boosting their RG Scores (Teixeira da Silva, 2017).

Even though the volume of full-text reads can be amplified by the “open” (green or gold open access) nature of published papers (Copiello & Bonifaci, 2019; Sababi et al., 2017), confusion among authors regarding the licensing and/or copyright status of their papers, or of their papers’ versions of record, resulted in the exposure of a body of copyrighted work in an “open” format, in violation of copyright contracts. This led not only to legal threats by select publishers against RG (Kwon, 2022) but also against its users who violated publishers’ copyright policies (Jamali, 2017), in part amplified by RG emails encouraging linking and sharing (Cress, 2021). In response to that episode, some publishers adjusted their collaborative relationship with RG to accommodate researchers’ needs in terms of knowledge exchange, as a way to try to defend their copyright without having to resort to legal action (Manley, 2019).

Finally, even though the RG Score ultimately provided an RG h-index, it was always inferior to the GS h-index due to the number of publications that GS considers for indexing relative to RG (Singh, Srichandan, & Lathabai, 2022). Although the RG Score was recommended by some for amplifying networks or collaborations (Joshi et al., 2019), or for formal academic institutional use (Yan & Zhang, 2018), others discouraged its use due to the complex set of concerns and negative debate related to its validity as a scientific metric (Bangani & Onyancha, 2021; Clavier et al., 2021; Orduna-Malea et al., 2017). Conversely, one study found that researchers at highly ranked universities used RG and other ASNSs more than researchers at lower-ranked universities, independent of discipline (Yan, Zhang, Hu, & Kudva, 2021).
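Since the h-index is computed directly from per-paper citation counts, a brief sketch makes this point concrete: computing the same index over the smaller set of papers and citations indexed by one platform can never yield a higher value than computing it over a superset, which is why an RG h-index tends to trail a GS h-index. The function below is a standard h-index computation with hypothetical citation counts, included only for illustration.

# Standard h-index: the largest h such that at least h papers have >= h citations each.
def h_index(citations):
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

gs_citations = [25, 18, 12, 9, 8, 6, 3, 1]  # hypothetical per-paper counts as seen by GS
rg_citations = [20, 15, 9, 6, 4, 2]         # same author, fewer papers/citations indexed by RG
print(h_index(gs_citations), h_index(rg_citations))  # 6 4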

RG users should also be aware that the metadata of papers automatically transferred to RG by publishers, for example from journals that form part of the Springer Nature–RG collaboration, cannot always be manually updated or edited. In such cases, the initially transferred metadata of the “in press” version cannot be edited by RG users to update the volume, issue and page numbers, or publication date, leaving those papers’ RG entries outdated, incomplete, and inaccurate; only a manual, paper-by-paper request to RG to update the records can be considered (Teixeira da Silva, 2023a).

2 How Are Reads Factored into RG’s RIS?

A core concept at the heart of using ASNSs is to self-promote and thereby preserve one’s legacy and publishing (and research) repertoire (Hailu & Wu, 2021). At what level of citations or altmetrics can a scientist claim to have reached “fame” (Sternberg, 2016)? The altmetric score of a paper can serve as a reward proxy for traditional citation-based recognition, at three levels: research impact (Barnes, 2015), societal impact (Holmberg, Bowman, Bowman, Didegah, & Kortelainen, 2019), and knowledge democratization (Daraio, 2021). For that reason, NoRs are in themselves a distinct metric because, through readership, they can amplify general knowledge of a paper (and thus indirectly of an author and even a journal) and lead to more citations (Wasike, 2021), although increased viewership or readership does not always imply increased citations (Banshal, Singh, & Muhuri, 2021). NoRs, which formed part of the RG Score and still form part of the RIS (Figure 5b and c; Appendix 4; Suppl. file), are a subset of this metric that can be used to evaluate or rate scientists’ performance and output (Panda & Kaur, 2023), so there is interest in understanding cases of abuse that attempt to amplify NoRs and readership (Kirilova & Zoepfl, 2025). It has been argued that preprints (or post-prints) can serve as metrics amplifiers, given that they represent a “duplicated” (but not deemed unethical) copy of a work, and thus an extended readership network, which could be interpreted by some as an unfair metrics advantage (Teixeira da Silva, 2023b).

The Appendix 4 text (Suppl. file) informs us that “self-reads” do not count toward reads, following a logic similar to that for self-citations; in this case, the rule makes sense, since otherwise authors could constantly click on their own papers to artificially boost reads. In addition, reads by the general public are not considered, i.e., one must be logged into an RG account in order for a read to count toward another RG user’s RIS. From that text, a link is provided: “Learn more about how we count reads.”[5] On that page (see Appendix 5 in the Suppl. file for the full text), RG provides an explanation about reads, their importance, how they are calculated, and who the readers are. Although Q&As are themselves not weighted, reads of Q&As are factored into reads (i.e., NoRs), and thus into the RIS.
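To make these counting rules concrete, the brief Python sketch below filters hypothetical read events according to the two conditions described above (the reader must be logged in to RG, and self-reads are excluded); the data structure and names are invented for illustration and do not reflect RG’s actual implementation.

# Hypothetical illustration of the read-counting rules described in Appendix 5:
# only reads by logged-in RG users who are not the author would count toward
# the author's RIS. Field and function names here are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReadEvent:
    reader_id: Optional[str]  # None if the visitor is not logged in to RG
    author_id: str            # owner of the item being read
    item_type: str            # e.g., "full_text", "other", "question_answer"

def countable_reads(events):
    """Count read events that would plausibly contribute to an author's RIS."""
    return sum(
        1 for e in events
        if e.reader_id is not None        # public (not logged-in) reads do not count
        and e.reader_id != e.author_id    # self-reads do not count
    )

events = [
    ReadEvent("user42", "authorA", "full_text"),
    ReadEvent(None, "authorA", "other"),        # anonymous read: ignored
    ReadEvent("authorA", "authorA", "other"),   # self-read: ignored
    ReadEvent("user7", "authorA", "question_answer"),
]
print(countable_reads(events))  # 2

Note that, under these rules, reads of an author’s Q&A contributions are treated like any other read, consistent with the observation above that Q&A activity now enters the RIS only indirectly, through NoRs.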

3 Conclusion, Future Outlook, and Limitations

This article takes a historical look at some aspects of the now-defunct RG Score, and at why RG might have retired that altmetric, drawing on debate and criticism from the published literature. Another RG altmetric, the RIS, continues to function and, as of 2024, is still displayed on all RG users’ accounts. While there are positive aspects and more transparency about the RIS relative to the RG Score, there are still some worrisome aspects, including the risk of abuse, for example, through excessive self-branding, or the unfair (and potentially unethical) inflation of RG’s metrics and altmetrics, including the RIS, through inter-researcher “citation rings” (Kirilova & Zoepfl, 2025). According to RG’s CEO, Ijad Madisch, RG serves to self-brand reputation (Winter, 2015). The use of ASNSs for self-branding and self-promotion (Francke, 2019) requires institutional education regarding how to better employ such altmetrics to promote research units at universities (Wiechetek & Pastuszak, 2022), and regarding their use, including the RIS, to rank or otherwise characterize universities (Haris, Ali, & Vaidya, 2023). Almost a decade ago, in a social experiment, a fake researcher profile was created at RG to observe how it would continue to gather metrics (reads, etc.) despite being dormant (Murray, 2014). That study should have served as a red flag, alerting RG and the academic community that ASNSs like RG could be used to create fake user accounts and thus fake, cheap-fake, or deep-fake “scientists,” “researchers,” or aspects of RG accounts (Teixeira da Silva, 2023c), or to “hijack” the identities of existing researchers who do not have an RG account. Some fake accounts were discovered by Eva and Wiebe (2019). It would thus be important for those with the skills to distinguish valid accounts from fake ones, including cheap-fakes or deep-fakes, to assess the extent of this phenomenon at RG.

  1. Funding information: The author states no funding involved.

  2. Author contribution: Jaime A. Teixeira da Silva: conceptualization, data curation, formal analysis, investigation, methodology, project administration, supervision, validation, visualization, writing – original draft preparation, writing – review and editing.

  3. Conflict of interest: The author states no conflict of interest.

  4. Disclaimer: Any proprietary information that might have been used in figures’ screenshots and in the supplementary file is used for purely academic purposes under a fair-use clause.

  5. Data availability statement: No new data was generated in this paper.

References

Bangani, S., & Onyancha, O. B. (2021). Evaluation of the national research foundation-rated researchers’ output at a South African university. Global Knowledge, Memory and Communication, 70(1/2), 187–202. doi: 10.1108/GKMC-02-2020-0017.

Banshal, S. K., Singh, V. K., & Muhuri, P. K. (2021). Can altmetric mentions predict later citations? A test of validity on data from ResearchGate and three social media platforms. Online Information Review, 45(3), 517–536. doi: 10.1108/OIR-11-2019-0364.

Barnes, C. (2015). The use of Altmetrics as a tool for measuring research impact. Australian Academic & Research Libraries, 46(2), 121–134. doi: 10.1080/00048623.2014.1003174.

Bornmann, L. (2016). Scientific revolution in scientometrics: The broadening of impact from citation to societal. In C. R. Sugimoto (Ed.), Theories of informetrics and scholarly communication (pp. 347–359). Berlin, Boston: De Gruyter. doi: 10.1515/9783110308464-020.

Bornmann, L., & Haunschild, R. (2017). Does evaluative scientometrics lose its main focus on scientific quality by the new orientation towards societal impact? Scientometrics, 110(2), 937–943. doi: 10.1007/s11192-016-2200-2.

Borrego, Á. (2017). Institutional repositories versus ResearchGate: The depositing habits of Spanish researchers. Learned Publishing, 30(3), 185–192. doi: 10.1002/leap.1099.

Boudry, C., & Durand-Barthez, M. (2020). Use of author identifier services (ORCID, ResearcherID) and academic social networks (Academia.edu, ResearchGate) by the researchers of the University of Caen Normandy (France): A case study. PLoS One, 15(9), e0238583. doi: 10.1371/journal.pone.0238583.

Clavier, T., Occhiali, E., Demailly, Z., Compère, V., Veber, B., Selim, J., & Besnier, E. (2021). The association between professional accounts on social networks Twitter and ResearchGate and the number of scientific publications and citations among anesthesia researchers: Observational study. Journal of Medical Internet Research, 23(10), e29809. doi: 10.2196/29809.

Copiello, S. (2019). Research Interest: Another undisclosed (and redundant) algorithm by ResearchGate. Scientometrics, 120(1), 351–360. doi: 10.1007/s11192-019-03124-w.

Copiello, S. (2020). Multi-criteria altmetric scores are likely to be redundant with respect to a subset of the underlying information. Scientometrics, 124(1), 819–824. doi: 10.1007/s11192-020-03491-9.

Copiello, S., & Bonifaci, P. (2018). A few remarks on ResearchGate score and academic reputation. Scientometrics, 114(1), 301–306. doi: 10.1007/s11192-017-2582-9.

Copiello, S., & Bonifaci, P. (2019). ResearchGate Score, full-text research items, and full-text reads: A follow-up study. Scientometrics, 119(2), 1255–1262. doi: 10.1007/s11192-019-03063-6.

Cozma, R., & Dimitrova, D. (2021). Research Gate or revolving door? Uses and gratifications of academic social media among communication scholars. Journalism & Mass Communication Educator, 76(3), 282–296. doi: 10.1177/1077695820965030.

Cress, P. (2021). Clever emails from ResearchGate encourage authors to breach copyright law. Aesthetic Surgery Journal, 41(7), 854–858. doi: 10.1093/asj/sjab205.

Daraio, C. (2021). Altmetrics as an answer to the need for democratization of research and its evaluation. Journal of Altmetrics, 4(1), 5. doi: 10.29024/joa.43.

D’Alessandro, S., Miles, M., Martínez-López, F. J., Anaya-Sánchez, R., Esteban-Millat, I., & Torrez-Meruvia, H. (2020). Promote or perish? A brief note on academic social networking sites and academic reputation. Journal of Marketing Management, 36(5–6), 405–411. doi: 10.1080/0267257X.2019.1697104.

Deng, S. L., Tong, J. J., Lin, Y. Q., Li, H. X., & Liu, Y. (2019). Motivating scholars’ responses in academic social networking sites: An empirical study on ResearchGate Q&A behavior. Information Processing & Management, 56(6), 102082. doi: 10.1016/j.ipm.2019.102082.

Desai, M., Mehta, R. G., & Rana, D. P. (2023). A model to identify redundancy and relevancy in question-answer systems of digital scholarly platforms. Procedia Computer Science, 218, 2383–2391. doi: 10.1016/j.procs.2023.01.213.

Desai, M., Mehta, R. G., & Rana, D. P. (2024). Anatomising the impact of ResearchGate followers and followings on influence identification. Journal of Information Science, 50(3), 607–624. doi: 10.1177/01655515221100716.

Eva, N. C., & Wiebe, T. A. (2019). Whose research is it anyway? Academic social networks versus institutional repositories. Journal of Librarianship and Scholarly Communication, 7(general issue), eP2243. doi: 10.7710/2162-3309.2243.

Francke, H. (2019). The academic web profile as a genre of “self-making”. Online Information Review, 43(5), 760–774. doi: 10.1108/OIR-12-2017-0347.

Francke, H., & Hammarfelt, B. (2022). Competitive exposure and existential recognition: Visibility and legitimacy on academic social networking sites. Research Evaluation, 31(4), 429–437. doi: 10.1093/reseval/rvab043.

Greifeneder, E., Pontis, S., Blandford, A., Attalla, H., Neal, D., & Schlebbe, K. (2018). Researchers’ attitudes towards the use of social networking sites. Journal of Documentation, 74(1), 119–136. doi: 10.1108/JD-04-2017-0051.

Hailu, M., & Wu, J. H. (2021). The use of academic social networking sites in scholarly communication: Scoping review. Data and Information Management, 5(2), 277–298. doi: 10.2478/dim-2020-0050.

Haris, M., Ali, P. M. N., & Vaidya, P. (2023). Assessment of ResearchGate to unfurl the academic pursuits of physics scholars. Journal of Scientometric Research, 12(2), 490–500. doi: 10.5530/jscires.12.2.045.

Hoffmann, C. P., Lutz, C., & Meckel, M. (2016). A relational altmetric? Network centrality on ResearchGate as an indicator of scientific impact. Journal of the Association for Information Science and Technology, 67(4), 765–775. doi: 10.1002/asi.23423.

Holmberg, K., Bowman, S., Bowman, T., Didegah, F., & Kortelainen, T. (2019). What is societal impact and where do Altmetrics fit into the equation? Journal of Altmetrics, 2(1), 6. doi: 10.29024/joa.21.

Huang, C. S., Zha, X. J., Yan, Y. L., & Wang, Y. Z. (2019). Understanding the social structure of academic social networking sites: The case of ResearchGate. Libri, 69(3), 189–199. doi: 10.1515/libri-2019-0011.

Jamali, H. R. (2017). Copyright compliance and infringement in ResearchGate full-text journal articles. Scientometrics, 112(1), 241–254. doi: 10.1007/s11192-017-2291-4.

Jeng, W., DesAutels, S., He, D. Q., & Li, L. (2017). Information exchange on an academic social networking site: A multidiscipline comparison on ResearchGate Q&A. Journal of the Association for Information Science and Technology, 68(3), 638–652. doi: 10.1002/asi.23692.

Joshi, N. D., Lieber, B., Wong, K., Al-Alam, E., Agarwal, N., & Diaz, V. (2019). Social media in neurosurgery: Using ResearchGate. World Neurosurgery, 127, e950–e956. doi: 10.1016/j.wneu.2019.04.007.

Kim, Y. (2018). An empirical study of biological scientists’ article sharing through ResearchGate. Aslib Journal of Information Management, 70(5), 458–480. doi: 10.1108/ajim-05-2018-0126.

Kim, Y., & Oh, J. S. (2021). Researchers’ article sharing through institutional repositories and ResearchGate: A comparison study. Journal of Librarianship and Information Science, 53(3), 475–487. doi: 10.1177/0961000620962840.

Kirilova, S., & Zoepfl, F. (2025). Metrics fraud on ResearchGate. Journal of Informetrics, 19(1), 101604. doi: 10.1016/j.joi.2024.101604.

Knudson, D. (2023). Association of ResearchGate research influence score with other metrics of top cited sports biomechanics scholars. Biomedical Human Kinetics, 15(1), 57–62. doi: 10.2478/bhk-2023-0008.

Kraker, P., & Lex, E. (2015). A critical look at the ResearchGate score as a measure of scientific reputation. In Quantifying and Analysing Scholarly Communication on the Web (ASCW'15) (p. 3). Oxford, UK. doi: 10.5281/zenodo.35401.

Kwon, D. (2022). ResearchGate dealt a blow in copyright lawsuit. Nature, 603, 375–376. doi: 10.1038/d41586-022-00513-9.

Lepori, B., Thelwall, M., & Hoorani, B. H. (2018). Which US and European higher education institutions are visible in ResearchGate and what affects their RG Score? Journal of Informetrics, 12(3), 806–818. doi: 10.1016/j.joi.2018.07.001.

Li, L., He, D., & Zhang, C. (2016). Evaluating academic answer quality: A pilot study on ResearchGate Q&A. In F. H. Nah & C. H. Tan (Eds.), HCI in Business, Government, and Organizations: eCommerce and Innovation. HCIBGO 2016. Lecture Notes in Computer Science (vol. 9751, pp. 61–71). Cham, Switzerland: Springer. doi: 10.1007/978-3-319-39396-4_6.

Li, L., Li, A., Song, X., Li, X., Huang, K., & Ye, E. M. (2023). Characterizing response quantity on academic social Q&A sites: A multidiscipline comparison of linguistic characteristics of questions. Library Hi Tech, 41(3), 921–938. doi: 10.1108/LHT-05-2021-0161.

Manley, S. (2019). On the limitations of recent lawsuits against Sci‐Hub, OMICS, ResearchGate, and Georgia State University. Learned Publishing, 32(4), 375–381. doi: 10.1002/leap.1254.

Meier, A., & Tunger, D. (2018). Investigating the transparency and influenceability of altmetrics using the example of the RG score and the ResearchGate platform. Information Services & Use, 38(1–2), 99–110. doi: 10.3233/ISU-180001.

Meishar-Tal, H., & Pieterse, E. (2017). Why do academics use academic social networking sites? International Review of Research in Open and Distributed Learning, 18(1), 1–22. doi: 10.19173/irrodl.v18i1.2643.

Memisevic, R. (2022). Research Interest Score in ResearchGate: The silver bullet of scientometrics or the emperor’s new clothes? Central Asian Journal of Medical Hypotheses and Ethics, 3(3), 187–191. doi: 10.47316/cajmhe.2022.3.3.05.

Memon, A. R. (2016). ResearchGate is no longer reliable: Leniency towards ghost journals may decrease its impact on the scientific community. The Journal of the Pakistan Medical Association, 66(12), 1643–1647.

Murray, M. (2014). Analysis of a scholarly social networking site: The case of the dormant user. In Proceedings of the Southern Association for Information Systems Conference (pp. 1–7). Macon, GA, USA.

Nicholas, D., Clark, D., & Herman, E. (2016). ResearchGate: Reputation uncovered. Learned Publishing, 29(3), 173–182. doi: 10.1002/leap.1035.

Orduna-Malea, E., Martín-Martín, A., Thelwall, M., & Lopez-Cozar, E. D. (2017). Do ResearchGate Scores create ghost academic reputations? Scientometrics, 112(1), 443–460. doi: 10.1007/s11192-017-2396-9.

Ostermaier-Grabow, A., & Linek, S. B. (2019). Communication and self-presentation behavior on academic social networking sites: An exploratory case study on profiles and discussion threads on ResearchGate. Journal of the Association for Information Science and Technology, 70(10), 1153–1164. doi: 10.1002/asi.24186.

Panda, S., & Kaur, N. (2023). Research performance of top cited Indian researchers on ResearchGate platform: An altmetric analysis. Journal of Information and Knowledge, 60(4), 267–280. doi: 10.17821/srels/2023/v60i4/168065.

Prieto-Gutiérrez, J. (2019). Ten years of research on ResearchGate: A scoping review using Google Scholar (2008–2017). European Science Editing, 45(3), 60–64. doi: 10.20316/ESE.2019.45.18023.

ResearchGate. (2022). Removing the RG Score. https://www.researchgate.net/researchgate-updates/removing-the-rg-score (29 March 2022; last accessed: 1 February 2025).

Sababi, M., Marashi, S. A., Pourmajidian, M., Pourtabatabaei, S. S., Darki, F., Sadrzadeh, M. R., … Nejadi, P. (2017). How accessibility influences citation counts: The case of citations to the full text articles available from ResearchGate. RT, 5(1), 1–12. doi: 10.13130/2282-5398/7997.

Sánchez-Teba, E. M., Rodríguez-Fernández, M., & Gaspar-González, A. I. (2021). Social networks and open innovation: Business academic productivity. Journal of Open Innovation: Technology, Market, and Complexity, 7(2), 158. doi: 10.3390/joitmc7020158.

Singh, V. K., Srichandan, S. S., & Lathabai, H. H. (2022). ResearchGate and Google Scholar: How much do they differ in publications, citations and different metrics and why? Scientometrics, 127(3), 1515–1542. doi: 10.1007/s11192-022-04264-2.

Sternberg, R. J. (2016). “Am I famous yet?” Judging scholarly merit in psychological science: An introduction. Perspectives on Psychological Science, 11(6), 877–881. doi: 10.1177/1745691616661777.

Teixeira da Silva, J. A. (2017). ResearchGate projects: Unregulated academic social media. Social Communication, 1(15), 6–13. doi: 10.1515/sc-2017-0001.

Teixeira da Silva, J. A. (2023a). Letter to the editor regarding: Social media in neurosurgery: Using ResearchGate. World Neurosurgery, 176, 253–255. doi: 10.1016/j.wneu.2023.04.106.

Teixeira da Silva, J. A. (2023b). Do peer-reviewed papers with a preprint version have an unfair metrics advantage? Journal of Food Science, 88(7), 2738–2739. doi: 10.1111/1750-3841.16707.

Teixeira da Silva, J. A. (2023c). AI in the era of fakes and deepfakes: Risk of fabricated photographs and identities in academic publishing. Journal of Information Security and Cybercrimes Research, 6(2), 71–73. doi: 10.26735/KNJA7076.

Teixeira da Silva, J. A., & Bornemann-Cimenti, H. (2017). Why do some retracted papers continue to be cited? Scientometrics, 110(1), 365–370. doi: 10.1007/s11192-016-2178-9.

Teixeira da Silva, J. A., & Yamada, Y. (2023). Reflection on ResearchGate’s terminated ResearchGate Score, and Interest Score, as social media altmetrics and academic evaluation tools. Journal of Scholarly Publishing, 54(2), 239–259. doi: 10.3138/jsp-2022-0043.

Thelwall, M., & Kousha, K. (2015). ResearchGate: Disseminating, communicating, and measuring scholarship? Journal of the Association for Information Science and Technology, 66(5), 876–889. doi: 10.1002/asi.23236.

Thelwall, M., & Kousha, K. (2017a). ResearchGate articles: Age, discipline, audience size, and impact. Journal of the Association for Information Science and Technology, 68(2), 468–479. doi: 10.1002/asi.23675.

Thelwall, M., & Kousha, K. (2017b). ResearchGate versus Google Scholar: Which finds more early citations? Scientometrics, 112(2), 1125–1131. doi: 10.1007/s11192-017-2400-4.

Tsigaris, P., & Teixeira da Silva, J. A. (2019). Moderation by ResearchGate related to comments on “predatory” publishing practices. Social Communication, 5(1), 1–8. doi: 10.2478/sc-2019-0001.

Vinay, R. S., Sampath Kumar, B. T., & Shiva Kumara, S. U. (2020). RG score of science academics: An ideal tool to measure the research productivity. Library Philosophy and Practice, 4796, 1–16.

Wasike, B. (2021). Citations gone #social: Examining the effect of altmetrics on citations and readership in communication research. Social Science Computer Review, 39(3), 416–433. doi: 10.1177/0894439319873563.

Wiechetek, Ł., & Pastuszak, Z. (2022). Academic social networks metrics: An effective indicator for university performance? Scientometrics, 127(3), 1381–1401. doi: 10.1007/s11192-021-04258-6.

Winter, R. (2015). Interview with Ijad Madisch on “The future of publishing and discussing research”. Business and Information Systems Engineering, 57(2), 135–138. doi: 10.1007/s12599-015-0368-2.

Yan, W. W., Liu, Q., Chen, R. Y., & Yi, S. W. (2020). Social networks formed by follower–followee relationships on academic social networking sites: An examination of corporation users. Scientometrics, 124(3), 2083–2101. doi: 10.1007/s11192-020-03553-y.

Yan, W. W., & Zhang, Y. (2018). Research universities on the ResearchGate social networking site: An examination of institutional differences, research activity level, and social networks formed. Journal of Informetrics, 12(1), 385–400. doi: 10.1016/j.joi.2017.08.002.

Yan, W. W., Zhang, Y., Hu, T., & Kudva, S. (2021). How does scholarly use of academic social networking sites differ by academic discipline? A case study using ResearchGate. Information Processing & Management, 58(1), 102430. doi: 10.1016/j.ipm.2020.102430.

Yu, M. C., Wu, Y. C. J., Alhalabi, W., Kao, H. Y., & Wu, W. H. (2016). ResearchGate: An effective altmetric indicator for active researchers? Computers in Human Behavior, 55(B), 1001–1006. doi: 10.1016/j.chb.2015.11.007.

Zhang, L., & Kumaran, M. (2023). STEM librarians’ presence on academic profile websites. Science & Technology Libraries, 42(2), 247–263. doi: 10.1080/0194262X.2022.2049954.

Received: 2024-09-11
Revised: 2024-12-19
Accepted: 2025-01-07
Published Online: 2025-02-14

© 2025 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
