Multi-dimensional research impact assessment through bibliometrics, altmetrics, semantometrics, and webometrics

Authors

DOI:

Keywords

multi-dimensional, bibliometrics, altmetrics, semantometrics, webometrics

Correspondence

Tanvir C Turin 
Email: turin.chowdhury@ucalgary.ca

Publication history

Received: 13 Feb 2025
Accepted: 23 May 2025
Published online: 28 May 2025

Responsible editor

Reviewer

Peer review was not sought because this is an invited article.

Funding

None

Ethical approval

Not applicable 

Trial registration number

Not applicable

Copyright

© The Author(s) 2025; all rights reserved. 
Published by Bangabandhu Sheikh Mujib Medical University (currently, Bangladesh Medical University).
Abstract
Research impact assessment (RIA) has emerged as a critical approach for evaluating the societal, academic, and policy-related influence of scholarly work, particularly within the evolving landscape of Open Science. This paper provides a synthesis of quantitative RIA metrics, which offer standardised, data-driven insights into the reach and significance of research outputs. It outlines four principal methodologies: (i) bibliometrics, which analyse citation patterns through indicators such as citation counts, co-citation, and bibliographic coupling; (ii) altmetrics, which track online engagement and dissemination; (iii) semantometrics, which assess textual contributions using semantic similarity measures; and (iv) webometrics, which evaluate digital presence through web interactions and backlink analysis. While these quantitative approaches are valuable for benchmarking and strategic decision-making, they often fail to capture the nuanced societal and intellectual impacts of research. To address this limitation, the paper advocates for a hybrid assessment model that integrates quantitative metrics with qualitative methods, such as case studies and narrative analyses, to provide both scalability and contextual depth. Ultimately, the work underscores the importance of critically and judiciously interpreting RIA metrics to fully reflect the multifaceted nature of research impact across disciplines and stakeholder domains.
Key messages
Employing diverse metrics, such as bibliometrics, altmetrics, semantometrics, and webometrics, allows for a multifaceted and comprehensive evaluation of research impact. Each of these metrics offers unique insights into different dimensions of scholarly influence: bibliometrics primarily measure academic citation impact, altmetrics capture broader online engagement, semantometrics focus on the semantic and content-based relationships between works, and webometrics assess the online presence and visibility of research outputs.
Research impact assessment
Research impact assessment (RIA) has become an increasingly important topic in academia and scientific communities, further underscored by the rise of Open Science practices that promote greater accessibility and transparency of research outputs [1]. RIA refers to the process of evaluating the influence and effects of research outputs on various actors, including other researchers, policymakers, and society at large [2]. The importance of measuring research impact lies in its ability to demonstrate the value of scientific endeavours, inform funding decisions, and guide future research directions. 
RIA employs two main approaches
a) The qualitative approach involves collecting and analysing rich, narrative-based information to understand how research findings shape behaviours, practices, and policies in nuanced ways. These methods provide in-depth, contextual insights into the broader societal and academic influence of research. Common methods include peer reviews, in-depth case studies, stakeholder interviews, and documentary or policy analysis. They capture context-specific impacts, such as changes in public discourse or organisational culture, that are not reflected in numerical metrics. Although qualitative evaluations can be time-consuming and subjective, they offer deep contextual understanding, revealing the breadth and complexity of research influence in real-world settings. In contrast, quantitative approaches use numerical data and statistical analyses, such as bibliometric indicators, citation counts, and various impact factors, to measure research impact. This methods paper will primarily focus on quantitative RIA metrics, examining their types, applications, strengths, and limitations to provide an overview of data-driven tools for evaluating research influence.
b) Quantitative metrics play an integral role in assessing the reach, influence, and significance of scholarly outputs. Among the various approaches, bibliometrics focus on citation-based academic impact, altmetrics capture broader online engagement, semantometrics explore semantic and content-based relationships, and webometrics/cybermetrics gauge digital visibility and presence (Figure 1). In combination, these metrics offer a multifaceted lens through which to evaluate research influence.
Figure 1 Research impact metrics
Bibliometrics
Bibliometrics is a common and widely used method for assessing research impact. It primarily emphasises citation counts and publication patterns [3]. A highly cited paper in a scientific journal indicates significant influence in its field. Bibliometrics can show how often a paper is referenced by other researchers (i.e., cited), highlighting its importance. Registering in bibliometric tools enhances the visibility and discoverability of a researcher’s work, facilitating broader academic recognition. Maintaining an up-to-date profile supports accurate attribution, fosters networking and collaboration, and is increasingly important for career advancement, funding, and institutional evaluations.
 
Tools/Websites  
Scopus (https://www.scopus.com/): Scopus is one of the largest curated abstract and citation databases, covering a wide range of scientific journals, conference proceedings, and books. It includes over 76 million records, ensuring extensive coverage of global and regional scientific literature. It offers comprehensive citation data and analytics. Scopus data is widely used in research assessments, science policy evaluations, and university rankings. Its reliable and high-quality data supports large-scale analyses and benchmarking studies [4]. 
 
Web of Science (https://www.webofscience.com/): Web of Science includes millions of records from high-quality, peer-reviewed journals, conference proceedings, and books. It covers a wide range of disciplines, including natural sciences, social sciences, arts, and humanities. It provides citation reports and impact factors for journals.
 
Google Scholar (https://scholar.google.com/): Tracks citations and provides metrics such as year-wise citation counts, total citation count, the h-index, the i10-index, article-wise citation counts, and lists of citing papers for each article. The h-index (Hirsch index) was proposed in 2005 by Jorge E. Hirsch, a physicist at UC San Diego, as a metric that measures both the productivity and citation impact of a researcher’s publications [5]. It represents the number of papers (h) that have received at least h citations each [6]. For example, if a researcher has published five papers with citation counts of 250, 170, 120, 15, and 3, their h-index would be 4, as they have four papers each cited at least four times (but not five papers each cited at least five times). Related metrics include the i10-index (number of papers with at least 10 citations) and the i100-index (number of papers with at least 100 citations). In our example, the researcher would have an i10-index of 4 and an i100-index of 3. These indices evaluate research impact more dynamically than simple publication or citation counts alone, as they consider both the productivity and the influence of a researcher’s work. The h-index varies between bibliometric tools because of differences in database coverage, citation-tracking methods, and the accuracy of author disambiguation; variations in indexing criteria and update frequency also contribute to these discrepancies.
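As a concrete illustration, these indices can be computed directly from a list of per-paper citation counts. The sketch below uses a hypothetical researcher's counts and is not any tool's official implementation:

```python
def h_index(citations):
    # h = the largest h such that h papers have at least h citations each
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def i_index(citations, threshold):
    # number of papers cited at least `threshold` times
    return sum(1 for c in citations if c >= threshold)

papers = [250, 170, 120, 15, 3]   # hypothetical citation counts
print(h_index(papers))            # 4
print(i_index(papers, 10))        # i10-index: 4
print(i_index(papers, 100))       # i100-index: 3
```

Sorting in descending order makes the h-index a single pass: the index stops growing at the first rank whose paper has fewer citations than its rank.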
Bibliometric analysis
Bibliometric analysis involves collecting citation data and conducting in-depth analysis of citation counts. Widely used methods include citation analysis [7], co-citation analysis [8], and bibliographic coupling [9]. A citation occurs when one research paper mentions another in its references. Citation analysis [7] looks at how often a paper is cited, which can indicate its influence in a field. For example, if article “A” cites article “B”, article “A” is using information from article “B”; if article “B” is cited by many other papers, it is likely an important study. Co-citation occurs when two papers are cited together by a third paper, which suggests that the two papers are related in content; the more often two papers are cited together, the stronger their connection. Identifying such pairs across research articles is known as co-citation analysis [8]. For example, if article “C” cites both article “A” and article “B”, articles “A” and “B” are co-cited; if many other papers also cite them together, they likely belong to the same research area. Bibliographic coupling [9] occurs when two papers cite the same earlier paper, indicating that they draw on similar background research and are related. For example, if both article “A” and article “B” cite article “C”, then articles “A” and “B” are bibliographically coupled and are likely studying similar topics.
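These two relationships can be sketched over a toy citation graph, where each paper maps to the set of papers it cites (the graph below is fabricated for illustration):

```python
# Toy citation graph: paper -> set of papers it cites.
references = {
    "A": {"C", "D"},
    "B": {"C", "E"},
    "F": {"A", "B"},
    "G": {"A", "B"},
}

def cocitation(p, q, refs):
    # Co-citation strength: number of papers whose reference
    # lists contain both p and q.
    return sum(1 for cited in refs.values() if p in cited and q in cited)

def coupling(p, q, refs):
    # Bibliographic coupling strength: number of references
    # shared by p and q.
    return len(refs.get(p, set()) & refs.get(q, set()))

print(cocitation("A", "B", references))  # 2 (both F and G cite A and B)
print(coupling("A", "B", references))    # 1 (A and B both cite C)
```

The same pair of papers can score differently on the two measures: co-citation depends on later papers citing them together, whereas coupling depends only on their own reference lists.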
 
Altmetrics
Altmetrics, short for “alternative metrics,” measure the impact of scholarly work based on online interactions and mentions [10], including social media shares, blog posts, news articles, and other online platforms. Unlike bibliometrics, which focus on academic citations and publication trends, altmetrics quantify online presence and engagement.
Imagine a research paper on diabetes epidemiology that is shared widely on X (formerly Twitter) and Facebook, discussed in blog posts, and covered in news articles. Altmetrics would track these interactions, showing how the paper is influencing public discourse and reaching a broader audience.
Tools/Websites
Altmetric (https://www.altmetric.com/): Altmetric captures where and how often a research output is mentioned online, across news portals, policy documents, social media platforms (Facebook, X (formerly Twitter), etc.), Wikipedia, and blogs. For example, as of 22 May 2025, a paper titled “Lifetime risk of diabetes among First Nations and non–First Nations people” published in the Canadian Medical Association Journal had an Altmetric Attention Score of 640, placing it in the top 5% of all research outputs scored by Altmetric and in the 99th percentile compared with outputs of the same age (https://cmaj.altmetric.com/details/12089307). The paper was mentioned in 80 news outlets, 1 policy source, 1 Facebook page, and 24 X posts, among other sources. The overall score is a weighted sum of mention counts across these online sources.
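The idea of a weighted sum of mentions can be sketched as follows. The weights below are purely illustrative and do not reproduce Altmetric's proprietary weighting scheme; the mention counts are hypothetical:

```python
# Hypothetical source-type weights; NOT Altmetric's actual weighting.
WEIGHTS = {"news": 8, "policy": 3, "x_post": 1, "facebook": 0.25}

def attention_score(mentions, weights=WEIGHTS):
    # Weighted sum of mention counts; unknown sources contribute 0.
    return sum(weights.get(source, 0) * count
               for source, count in mentions.items())

mentions = {"news": 80, "policy": 1, "x_post": 24, "facebook": 1}
print(attention_score(mentions))  # 667.25
```

Because the weights differ by source type, a single news story moves such a score far more than a single social-media post, which is why heavily news-covered papers dominate attention rankings.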
PlumX (https://www.elsevier.com/insights/metrics/plumx): An Elsevier product that offers insights into how research is being used and discussed across various platforms.
 
Semantometrics
Semantometrics evaluates research by analysing the full text of publications, rather than just counting citations [11]. It measures the semantic similarity and contribution of research papers based on their content.
      By comparing the text of a new research paper with existing literature, semantometrics can assess how much new knowledge the paper contributes to its field. For instance, it can identify novel concepts or methodologies introduced by the paper. Grounded in the idea of semantic similarity, Petr Knoth et al. developed a formula for assessing a publication’s contribution [11]:

contribution(p) = (1 / (|A| × |B|)) × Σ(a∈A) Σ(b∈B) dist(a, b)

which quantifies the semantic distance between the publications cited by p and the publications citing p. In this formulation, A is the set of publications cited by p and B is the set of publications citing p. The double sum totals the distances between all pairs of publications drawn from A and B, and dividing by |A| × |B| gives the average pairwise distance. The distance dist(a, b) is calculated using semantic similarity measures on the full text of the publications, such as cosine similarity on TF-IDF [12] document vectors.
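As a rough illustration of this idea (not the authors' implementation), the sketch below computes a contribution score as the average cosine distance between TF-IDF vectors of the texts cited by, and citing, a paper. The whitespace tokeniser, smoothed IDF, and example snippets are simplifying assumptions:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    # Naive whitespace tokenisation; smoothed IDF so no weight is zero.
    tokenised = [doc.lower().split() for doc in docs]
    n = len(tokenised)
    df = Counter(term for toks in tokenised for term in set(toks))
    vectors = []
    for toks in tokenised:
        tf = Counter(toks)
        vectors.append({t: f * (math.log((1 + n) / (1 + df[t])) + 1)
                        for t, f in tf.items()})
    return vectors

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = (math.sqrt(sum(w * w for w in u.values()))
            * math.sqrt(sum(w * w for w in v.values())))
    return dot / norm if norm else 0.0

def contribution(cited_by_p, citing_p):
    # Average semantic distance over all (cited, citing) text pairs.
    vecs = tfidf_vectors(cited_by_p + citing_p)
    a_vecs, b_vecs = vecs[:len(cited_by_p)], vecs[len(cited_by_p):]
    dists = [1.0 - cosine(a, b) for a in a_vecs for b in b_vecs]
    return sum(dists) / len(dists)

cited = ["citation analysis of scholarly journals",
         "bibliometric indicators of academic impact"]
citing = ["semantic similarity for research evaluation",
          "full text analysis of research contribution"]
print(round(contribution(cited, citing), 4))
```

Intuitively, a paper that bridges two semantically distant bodies of literature (cited texts far from citing texts) receives a higher contribution score than one whose citing papers closely resemble its own sources.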

Tools/Websites
As of now, there is no mainstream, publicly available app or web platform where we can simply upload a publication and receive a semantometric score in the same way that one might check an Altmetric score or bibliometric citation count.
 
Semantometrics-python package: The Digital Humanities Lab of Utrecht University, Netherlands, developed the open-source “semantometrics-python” package [13]. This tool analyses the semantic content of research papers to evaluate their contribution. For example [11], the paper “The Triple Helix of university-industry-government relations” by Loet Leydesdorff (2012) had a contribution score of 0.4026, while “Search engine user behaviour: How can users be guided to quality content?” by Dirk Lewandowski (2008) had a contribution score of 0.5117. Although these two papers had similar citation counts (27 and 30, respectively), their contribution scores differed noticeably. This highlights the value of considering the contribution score in addition to the citation count, as publications with similar citation numbers can vary significantly in their actual contribution.
 
Webometrics/Cybermetrics
Webometrics studies the quantitative aspects of the web, such as the structure and usage of websites [14]. It applies informetric methods to analyse web-based content and interactions. For example, analysing the number of backlinks to a university’s website can indicate its online visibility and influence. Webometrics can also measure the impact of online publications and the reach of digital content.
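A basic webometric indicator of this kind can be sketched as a backlink count per target domain, computed from harvested hyperlinks. The link data below is fabricated for illustration:

```python
from collections import Counter
from urllib.parse import urlparse

# Fabricated (source page, target page) hyperlink pairs.
links = [
    ("https://news.example.org/a", "https://univ.example.edu/research"),
    ("https://blog.example.net/b", "https://univ.example.edu/library"),
    ("https://gov.example.gov/c",  "https://univ.example.edu/research"),
]

def backlinks_per_domain(link_pairs):
    # Count how many harvested links point at each target domain.
    return Counter(urlparse(target).netloc for _, target in link_pairs)

print(backlinks_per_domain(links))
```

Real webometric studies would additionally deduplicate sources and weight links by the authority of the linking site, but the core indicator is this inlink count.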
 
Tools/Websites
Webometric Analyst (http://lexiurl.wlv.ac.uk/): Webometric Analyst is a free Windows-based program for webometric analyses, including link analysis and network diagrams of the links between collections of websites. Users can find where specific digital objects and digital collections are being linked from.
 
Webometrics Ranking of World Universities (https://www.webometrics.info/): Ranks universities based on their web presence, visibility, and open access to knowledge. For example, Harvard University was ranked as the number one university in 2024 based on its strong digital footprint for scholarly activities [15]. This approach to RIA considers factors such as the volume of scholarly content published on the web, the number of external networks linking to the institution’s web domain, and, overall, how accessible and influential a university’s research outputs are on the internet.

 

Figure 2 Tools used for bibliometrics, altmetrics, semantometrics, and webometrics 
The evolving role of university libraries in research impact assessment
As research impact measurement becomes increasingly complex, university libraries are well-positioned to play a more strategic role — but many require additional support, training, and resources to do so effectively. Building institutional capacity within libraries to engage with bibliometrics, altmetrics, webometrics, and scientometrics is essential. This includes investing in tools and infrastructure, developing staff expertise on digitalization and computing, and fostering cross-campus, national, and global collaborations. By equipping librarians with the knowledge and systems needed to support impact analysis, universities can ensure more meaningful, responsible, and inclusive research evaluation that reflects both academic and societal contributions. Strengthening libraries in this way also reinforces their evolving identity as active partners in research development and strategic decision-making. 
 
Conclusions  
Quantitative RIA metrics provide valuable, standardised insights into scholarly influence and reach, offering a bird's-eye view of research performance that enables comparisons across domains, institutions, and time periods. These metrics are essential for academic decision-making and strategic planning. However, while they offer comparability, they may not fully capture the complexity or broader significance of research impact, which often goes beyond numerical measures. Qualitative methods such as study quality assessments, narratives, and case studies complement quantitative metrics by adding depth and context. Therefore, the most effective research assessment combines both approaches, providing a holistic view that better reflects the multifaceted nature of research impact on academia and society. 


Acknowledgements
We take full responsibility for the content of this paper. We acknowledge the use of AI (perplexity.ai) for assistance with English language editing. We used prompts to improve the structure of sentences that we deemed could be further improved. AI was prompted to improve clarity by improving grammar and choice of vocabularies used in the text. All suggestions were critically reviewed and revised to uphold the reliability and precision of the write-up. Additionally, we ensured the integrity of our own expressions with careful consideration.  
Author contributions
Conception and design: TCT and AAM. Acquisition, analysis, and interpretation of data: TCT and AAM. Manuscript drafting and revising it critically: TCT and AAM. Approval of the final version of the manuscript: TCT and AAM. Guarantor of accuracy and integrity of the work: TCT and AAM.
Conflict of interest
We do not have any conflict of interest.
Data availability statement
Not applicable
Supplementary file
None
    References
    1. Turin TC, Raihan MMH, Chowdhury N. Open Science: Knowledge for the people, by the people, and with the people. Bangabandhu Sheikh Mujib Med Univ J. 2025;18(1):e78080. doi: https://doi.org/10.3329/bsmmuj.v18i1.78080
    [Google Scholar]
    2. Penfield T, Baker MJ, Scoble R, Wykes MC. Assessment, evaluations, and definitions of research impact: A review. Research Evaluation. 2014;23(1):21-32.  doi: https://doi.org/10.1093/reseval/rvt021
     [Google Scholar]
    3. Passas I. Bibliometric analysis: the main steps. Encyclopedia. 2024;4(2). doi: https://doi.org/10.3390/encyclopedia4020065
    4. Baas J, Schotten M, Plume A, Côté G, Karimi R. Scopus as a curated, high-quality bibliometric data source for academic research in quantitative science studies. Quant Sci Stud. 2020;1(1):377-386. Available from: https://researchcollaborations.elsevier.com/en/publications/scopus-as-a-curated-high-quality-bibliometric-data-source-for-aca
    [Google Scholar]
    5. Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci U S A. 2005 Nov 15;102(46):16569-16572. doi: https://doi.org/10.1073/pnas.0507655102
    [PubMed]  [Google Scholar]
    6. McDonald K. Physicist proposes new way to rank scientific output. Phys Org. 2005 Nov 8. Available from: https://phys.org/news/2005-11-physicist-scientific-output.pdf. Accessed on 27 May 2025.
    [Google Scholar]
    7. Garfield E. Citation indexes for science: A new dimension in documentation through association of ideas. Science. 1955 Jul 15;122(3159):108-111. doi: https://doi.org/10.1126/science.122.3159.108
    [PubMed]     [Google Scholar]
    8. Small H. Co-citation in the scientific literature: A new measure of the relationship between two documents. Journal of the American Society for Information Science. 1973;24(4):265-269. doi: https://doi.org/10.1002/asi.4630240406 
    [Google Scholar]
    9. Kessler MM. Bibliographic coupling between scientific papers. American Documentation. 1963;14(1):10-25. doi: https://doi.org/10.1002/asi.5090140103
    [Google Scholar]
    10. Priem J, Taraborelli D, Groth P, Neylon C. Altmetrics: A manifesto. 2010. doi: https://doi.org/10.5281/zenodo.12684249
    [Google Scholar]
    11. Knoth P, Herrmannova D. Towards semantometrics: A new semantic similarity based measure for assessing a research publication’s contribution. D-Lib Mag. 2014;20(11):8. Available from: https://www.dlib.org/dlib/november14/knoth/11knoth.html
    [Google Scholar]
     
    12. Ramos J. Using TF-IDF to determine word relevance in document queries. In: Proceedings of the First Instructional Conference on Machine Learning. 2003:29-48. Available from: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=b3bf6373ff41a115197cb5b30e57830c16130c2c. Accessed on 10 February 2025
    [Google Scholar]
     
    13. Semantometrics-python package. Available from: https://github.com/UUDigitalHumanitieslab/semantometrics-python 
    [Google Scholar]
     
    14. Almind TC, Ingwersen P. Informetric analyses on the World Wide Web: Methodological approaches to 'webometrics'. J Doc. 1997;53(4):404-426. doi: https://doi.org/10.1108/EUM0000000007205
    [Google Scholar]
     
    15. Webometrics Ranking Web of Universities [Online]. 2024. Available from: https://www.universityguru.com/rankings-explained/webometrics-ranking-web-of-universities. Accessed on 16 May 2025. 
    [Google Scholar]