techhub.social is one of the many independent Mastodon servers you can use to participate in the fediverse.

#wos


📢 #Scientometric indicators in #research evaluation and research #misconduct: analysis of the Russian #university excellence initiative

👉 "The results showed that #RUEI #universities had a significantly higher number of retracted #publications in #WoS- and #Scopus-indexed #journals, suggesting that pressure to meet quantitative scientometric #indicators may have encouraged unethical research practices and #researchmisconduct."

link.springer.com/article/10.1

SpringerLink: Scientometric indicators in research evaluation and research misconduct: analysis of the Russian university excellence initiative (Scientometrics)

This study aimed to examine the impact of the Russian University Excellence Initiative (RUEI), also known as Project 5–100, on research misconduct in Russian higher education. Launched in 2013, the RUEI incentivized universities to increase the number of publications in internationally indexed journals. The analysis compares the prevalence of retracted publications, used as a proxy for research misconduct, between universities that participated in the RUEI and a control group of universities that did not. A total of 2621 retracted papers affiliated with at least one Russian institution were identified, of which 203 were indexed in Web of Science (WoS) and/or Scopus. The results showed that RUEI universities had a significantly higher number of retracted publications in WoS- and Scopus-indexed journals, suggesting that pressure to meet quantitative scientometric indicators may have encouraged unethical research practices and research misconduct. In addition, different reasons for retraction were found between publications indexed and not indexed in WoS and/or Scopus. These findings suggest that the direct and irresponsible use of scientometric indicators as performance measures may have unintended negative consequences that undermine research integrity.

Update. "#OpenAlex exhibits a far more balanced linguistic coverage than #WoS. However, language metadata is not always accurate, which leads OpenAlex to overestimate the place of English while underestimating that of other languages."
arxiv.org/abs/2409.10633

arXiv.org: Evaluating the Linguistic Coverage of OpenAlex: An Assessment of Metadata Accuracy and Completeness

Clarivate's Web of Science (WoS) and Elsevier's Scopus have for decades been the main sources of bibliometric information. Although highly curated, these closed, proprietary databases are largely biased towards English-language publications, underestimating the use of other languages in research dissemination. Launched in 2022, OpenAlex promised comprehensive, inclusive, and open-source research information. While already in use by scholars and research institutions, the quality of its metadata is currently being assessed. This paper contributes to this literature by assessing the completeness and accuracy of its language metadata, through a comparison with WoS as well as an in-depth manual validation of a sample of 6,836 articles. Results show that OpenAlex exhibits a far more balanced linguistic coverage than WoS. However, language metadata is not always accurate, which leads OpenAlex to overestimate the place of English while underestimating that of other languages. If used critically, OpenAlex can provide comprehensive and representative analyses of the languages used for scholarly publishing. However, more work is needed at the infrastructural level to ensure the quality of language metadata.
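The kind of linguistic-coverage analysis described above works from the `language` field that OpenAlex exposes on each work record (an ISO 639-1 code, which may be missing or wrong). A minimal sketch, assuming records shaped like OpenAlex API work objects, of tallying that field while keeping incomplete metadata visible:

```python
from collections import Counter

def language_distribution(works):
    """Tally the `language` field (ISO 639-1 code) across OpenAlex-style
    work records, counting missing or null metadata as 'unknown' so that
    incompleteness shows up in the distribution rather than vanishing."""
    counts = Counter()
    for work in works:
        counts[work.get("language") or "unknown"] += 1
    return counts

# Illustrative records only; IDs and values are placeholders.
sample = [
    {"id": "W1", "language": "en"},
    {"id": "W2", "language": "fr"},
    {"id": "W3", "language": None},  # incomplete metadata
]

print(language_distribution(sample))
```

Keeping an explicit "unknown" bucket matters for the paper's point: a language share computed only over records that carry a label silently inherits whatever bias the labeling process has.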

New study: "Non-selective databases (#Dimensions, #OpenAlex, #Scilit, and #TheLens) index a greater amount of retracted literature than do databases that rely their indexation on venue selection (#PubMed, #Scopus, and #WoS)…The high coverage of OpenAlex and Scilit could be explained by the inaccurate labeling of retracted documents in #Scopus, Dimensions, and The Lens."
link.springer.com/article/10.1

SpringerLink: The indexation of retracted literature in seven principal scholarly databases: a coverage comparison of Dimensions, OpenAlex, PubMed, Scilit, Scopus, The Lens and Web of Science (Scientometrics)

In this study, the coverage and overlap of retracted publications, retraction notices and withdrawals are compared across seven significant scholarly databases, with the aim of checking for discrepancies, pinpointing their causes, and choosing the product that gives the most accurate picture of retracted literature. The seven databases were searched for all retracted publications, retraction notices and withdrawals from 2000 onward; only web search interfaces were used, except for OpenAlex and Scilit. The findings demonstrate that non-selective databases (Dimensions, OpenAlex, Scilit, and The Lens) index a greater amount of retracted literature than databases that base their indexation on venue selection (PubMed, Scopus, and WoS). The key factors explaining these discrepancies are the indexation of withdrawals and proceedings articles. Additionally, the high coverage of OpenAlex and Scilit could be explained by the inaccurate labeling of retracted documents in Scopus, Dimensions, and The Lens. 99% of the sample is jointly covered by OpenAlex, Scilit and WoS. The study suggests that research on retracted literature requires querying more than one source, and that this literature should be accurately identified and labeled in academic databases.
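The inaccurate-labeling problem the study points to can be surfaced by cross-checking retraction flags between sources. A hypothetical sketch, assuming OpenAlex-style work records with their documented boolean `is_retracted` flag and a second database represented simply as a set of retracted DOIs (both the DOIs and the second source here are placeholders, not real data):

```python
def retraction_disagreements(works, other_retracted_dois):
    """Return DOIs on which an OpenAlex-style `is_retracted` flag and a
    second source's set of retracted DOIs disagree. Works without a DOI
    cannot be matched across sources and are skipped."""
    other = set(other_retracted_dois)
    disagreements = []
    for work in works:
        doi = work.get("doi")
        if doi is None:
            continue
        if bool(work.get("is_retracted")) != (doi in other):
            disagreements.append(doi)
    return disagreements

# Placeholder records: each source labels a different DOI as retracted.
works = [
    {"doi": "10.0000/a", "is_retracted": True},
    {"doi": "10.0000/b", "is_retracted": False},
]
other_db = {"10.0000/b"}

print(retraction_disagreements(works, other_db))  # ['10.0000/a', '10.0000/b']
```

A disagreement list like this is the minimal starting point for the study's recommendation: treat no single database's retraction label as ground truth, and query more than one source.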