
Search Resources

19 Results

Selected filters:
  • research-assessment
Annotated Bibliography of Educational, Scholarly, Professional, and Community Resources on Research Impact & Bibliometrics
Unrestricted Use
CC BY

Created as a supplement for the Impact Measurement collection of the Scholarly Communication Notebook (SCN) to describe some of the core literature in the field as well as resources that cannot be included on the SCN because they are not openly licensed but are free to read. This annotated bibliography is separated into three sections: peer-reviewed scholarly articles; blog posts, initiatives, and guides; and resources for further education and professional development. The first section is intended to help practitioners in the field of research assessment and bibliometrics understand high-level core concepts in the field. The second section offers resources that are more applicable to practice. The final section includes links to blogs, communities, discussion lists, paid and free educational courses, and archived conferences, so that practitioners and professionals can stay abreast of emerging trends, improve their skills, and find community. Most of these resources could not be included on the Scholarly Communication Notebook because they are not openly licensed; however, all resources in this bibliography are freely available to access and read.

Subject:
Education
Higher Education
Information Science
Material Type:
Reading
Author:
Rachel Miles
Date Added:
06/26/2023
Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency
Unrestricted Use
CC BY

Beginning January 2014, Psychological Science gave authors the opportunity to signal open data and materials if they qualified for badges that accompanied published articles. Before badges, less than 3% of Psychological Science articles reported open data. After badges, 23% reported open data, with an accelerating trend; 39% reported open data in the first half of 2015, an increase of more than an order of magnitude from baseline. There was no change over time in the low rates of data sharing among comparison journals. Moreover, reporting openness does not guarantee openness. When badges were earned, reportedly available data were more likely to be actually available, correct, usable, and complete than when badges were not earned. Open materials also increased to a weaker degree, and there was more variability among comparison journals. Badges are simple, effective signals to promote open practices and improve preservation of data and materials by using independent repositories.

Subject:
Biology
Life Science
Psychology
Social Science
Material Type:
Reading
Provider:
PLOS Biology
Author:
Agnieszka Slowik
Brian A. Nosek
Carina Sonnleitner
Chelsey Hess-Holden
Curtis Kennett
Erica Baranski
Lina-Sophia Falkenberg
Ljiljana B. Lazarević
Mallory C. Kidwell
Sarah Piechowski
Susann Fiedler
Timothy M. Errington
Tom E. Hardwicke
Date Added:
08/07/2020
Com millorar l'impacte de la recerca: gestió de la identitat digital
Unrestricted Use
CC BY

Presentation used in training activities for researchers at the UPC. It analyses the importance of managing the researcher's digital identity in order to increase their impact.

Subject:
Higher Education
Information Science
Material Type:
Module
Author:
Miquel Puertas
Date Added:
12/16/2020
Counting what counts in recruitment, promotion and tenure
Conditional Remix & Share Permitted
CC BY-NC-SA

Slides from the Keynote talk given at Virginia Tech Open Access Week on 20 October 2020. See the full presentation recording and panel discussion at https://vtechworks.lib.vt.edu/handle/10919/100682.

Subject:
Education
Higher Education
Material Type:
Lecture Notes
Reading
Author:
Elizabeth Gadd
Date Added:
04/20/2022
Counting what counts in recruitment, promotion and tenure (Open Access Week 2020 Keynote Event)
Unrestricted Use
CC BY

Virginia Tech's Open Access Week 2020 keynote speaker, Elizabeth (Lizzie) Gadd, Research Policy Manager (Publications) at Loughborough University in the UK, gives a talk about how what we reward through recruitment, promotion, and tenure processes is not always what we actually value about research activity. The talk explores how we can pursue value-led evaluations and how we can persuade senior leaders of their benefits.

The keynote talk is followed by a panel discussion with faculty members at Virginia Tech: Thomas Ewing (Associate Dean for Graduate Studies and Research and Professor of History), Carla Finkielstein (Associate Professor of Biological Sciences), Bikrum Gill (Assistant Professor of Political Science), and Sylvester Johnson (Professor and Director of the Center for Humanities). The panel is moderated by Tyler Walters (Dean, University Libraries).

The slides from this presentation are in Loughborough University's repository under a CC BY-NC-SA 4.0 license. https://repository.lboro.ac.uk/articles/presentation/Counting_what_counts_in_recruitment_promotion_and_tenure/13113860

Subject:
Education
Higher Education
Material Type:
Lecture
Provider:
Virginia Tech
Author:
Bikrum Singh Gill
Carla Finkielstein
Elizabeth Gadd
Rachel Miles
Sylvester Johnson
Tom Ewing
Tyler Walters
Date Added:
04/20/2022
Current Incentives for Scientists Lead to Underpowered Studies with Erroneous Conclusions
Unrestricted Use
CC BY

We can regard the wider incentive structures that operate across science, such as the priority given to novel findings, as an ecosystem within which scientists strive to maximise their fitness (i.e., publication record and career success). Here, we develop an optimality model that predicts the most rational research strategy, in terms of the proportion of research effort spent on seeking novel results rather than on confirmatory studies, and the amount of research effort per exploratory study. We show that, for parameter values derived from the scientific literature, researchers acting to maximise their fitness should spend most of their effort seeking novel results and conduct small studies that have only 10%–40% statistical power. As a result, half of the studies they publish will report erroneous conclusions. Current incentive structures are in conflict with maximising the scientific value of research; we suggest ways that the scientific ecosystem could be improved.
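As a rough, hedged illustration of the paper's argument (not the authors' actual model), the short Python sketch below computes the share of statistically significant, publishable results that are false positives for a given statistical power, significance threshold, and assumed prior probability that a tested hypothesis is true. All parameter values here are illustrative assumptions; the paper derives its own values from the scientific literature.

# Illustrative only: fraction of 'significant' results that are false positives.
# power: probability of detecting a real effect; alpha: significance threshold;
# prior_true: assumed share of tested hypotheses that are actually true.
def share_of_false_findings(power, alpha, prior_true):
    true_pos = power * prior_true          # real effects correctly detected
    false_pos = alpha * (1 - prior_true)   # null effects wrongly declared significant
    return false_pos / (true_pos + false_pos)

# Small exploratory studies (10-40% power) testing mostly novel, unlikely hypotheses:
for power in (0.1, 0.2, 0.4):
    print(power, round(share_of_false_findings(power, alpha=0.05, prior_true=0.1), 2))
# Prints roughly 0.82, 0.69, and 0.53: with these assumed inputs, a large share of
# positive findings are erroneous, in line with the paper's qualitative conclusion.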

Subject:
Biology
Life Science
Material Type:
Reading
Provider:
PLOS Biology
Author:
Andrew D. Higginson
Marcus R. Munafò
Date Added:
08/07/2020
Helsinki Initiative on Multilingualism in Scholarly Communication
Unrestricted Use
CC BY

The signatories of the Helsinki Initiative on Multilingualism in Scholarly Communication support recommendations, intended for adoption by policy-makers, leaders, universities, research institutions, research funders, libraries, and researchers, to keep research international and multilingual. The initiative helps to support bibliodiversity, protect locally relevant research, and promote language diversity in research evaluation. Signatories, events, media, and more information can be found at https://www.helsinki-initiative.org/

Subject:
Applied Science
Arts and Humanities
Education
Higher Education
Information Science
World Cultures
Material Type:
Reading
Author:
European Network For Research Evaluation in the Social Sciences and the Humanities
Federation of Finnish Learned Societies
The Committee for Public Information
The Finnish Association for Scholarly Publishing
Universities Norway
Date Added:
02/01/2023
The Hong Kong Principles for assessing researchers: Fostering research integrity
Unrestricted Use
CC BY

For knowledge to benefit research and society, it must be trustworthy. Trustworthy research is robust, rigorous, and transparent at all stages of design, execution, and reporting. Assessment of researchers still rarely includes considerations related to trustworthiness, rigor, and transparency. We have developed the Hong Kong Principles (HKPs) as part of the 6th World Conference on Research Integrity with a specific focus on the need to drive research improvement through ensuring that researchers are explicitly recognized and rewarded for behaviors that strengthen research integrity. We present five principles: responsible research practices; transparent reporting; open science (open research); valuing a diversity of types of research; and recognizing all contributions to research and scholarly activity. For each principle, we provide a rationale for its inclusion and provide examples where these principles are already being adopted.

Subject:
Education
Higher Education
Material Type:
Reading
Author:
Anne-Marie Coriat
David Moher
Lex Bouter
Mai Har Sham
Nicole Foeger
Paul Glasziou
Sabine Kleinert
Ulrich Dirnagl
Virginia Barbour
Date Added:
06/26/2023
IATUL Research Impact Things – A self-paced training program for IATUL libraries
Conditional Remix & Share Permitted
CC BY-SA

The programme aims to equip learners with the skills and knowledge required to use a range of research impact metrics and to understand the research landscape. This is a flexible programme: you can do as much or as little as suits you. While some Things are interlinked, each Thing is designed to be completed separately, in any order and at any level of complexity. Choose your own adventure!

There are three levels for each Thing:

Getting started is for you if you are just beginning to learn about each topic
Learn more is for you if you know a bit but want to know more
Challenge me is often more in-depth or assumes that you are familiar with at least the basics of each topic

Subject:
Education
Higher Education
Material Type:
Lesson
Module
Reading
Author:
IATUL Special Interest Group Metrics and Research Impact (SIG-MaRI)
Date Added:
04/20/2022
The Metric Tide: Review of Metrics in Research Assessment

This UK report presents the findings and recommendations of the Independent Review of the Role of Metrics in Research Assessment and Management. The review was chaired by Professor James Wilsdon, supported by an independent and multidisciplinary group of experts in scientometrics, research funding, research policy, publishing, university management and administration. This review has gone beyond earlier studies to take a deeper look at potential uses and limitations of research metrics and indicators. It has explored the use of metrics across different disciplines, and assessed their potential contribution to the development of research excellence and impact. It has analysed their role in processes of research assessment, including the next cycle of the Research Excellence Framework (REF). It has considered the changing ways in which universities are using quantitative indicators in their management systems, and the growing power of league tables and rankings. And it has considered the negative or unintended effects of metrics on various aspects of research culture.

The report starts by tracing the history of metrics in research management and assessment, in the UK and internationally. It looks at the applicability of metrics within different research cultures, compares the peer review system with metric-based alternatives, and considers what balance might be struck between the two. It charts the development of research management systems within institutions, and examines the effects of the growing use of quantitative indicators on different aspects of research culture, including performance management, equality, diversity, interdisciplinarity, and the ‘gaming’ of assessment systems. The review looks at how different funders are using quantitative indicators, and considers their potential role in research and innovation policy. Finally, it examines the role that metrics played in REF2014, and outlines scenarios for their contribution to future exercises.

Subject:
Education
Higher Education
Material Type:
Reading
Textbook
Author:
Ben Johnson
Eleonora Belfiore
Ian Viney
Jane Tinkler
Jude Hill
Liz Allen
Mike Thelwall
Paul Wouters
Philip Campbell
Richard Jones
Roger Kain
Simon Richard Kerridge
Stephen Curry
Steven Hill
James Wilsdon
Date Added:
04/27/2022
Metrics Toolkit
Unrestricted Use
CC BY

The Metrics Toolkit co-founders and editorial board developed the Metrics Toolkit to help scholars and evaluators understand and use citations, web metrics, and altmetrics responsibly in the evaluation of research.

The Metrics Toolkit provides evidence-based information about research metrics across disciplines, including how each metric is calculated, where you can find it, and how each should (and should not) be applied. You’ll also find examples of how to use metrics in grant applications, CVs, and promotion packages.

Subject:
Education
Higher Education
Social Science
Sociology
Material Type:
Reading
Author:
Heather Coates
Robin Champieux
Stacy Konkiel
Metrics Toolkit Editorial Board
Date Added:
04/27/2022
NIH Bibliometrics Training Series
Unrestricted Use
Public Domain

This resource links to the full course (all 13 weeks of modules) on the Internet Archive. The video lectures for the courses are also available on YouTube at https://www.youtube.com/watch?v=maRP_Wvc4eY&list=PLWYwQdaelu4en5MZ0bbg-rSpcfb64O_rd

This series was designed and taught by Chris Belter, Ya-Ling Lu, and Candace Norton at the NIH Library. It was originally presented in weekly installments to NIH Library staff from January-May 2019 and adapted for web viewing later the same year.

The goal of the series is to provide free, on-demand training on how we do bibliometrics for research evaluation. Although demand for bibliometric indicators and analyses in research evaluation is growing, broadly available and easily accessible training on how to provide those analyses is scarce. We have been providing bibliometric services for years, and we wanted to share our experience with others to facilitate the broader adoption of accurate and responsible bibliometric practice in research assessment. We hope this series acts as a springboard for others to get started with bibliometrics so that they feel more comfortable moving beyond this series on their own.

Navigating the Series
The training series consists of 13 individual courses, organized into 7 thematic areas. Links to each course in the series are provided on the left. Each course includes a training video with audio transcription, supplemental reading to reinforce the concepts introduced in the course, and optional practice exercises.

We recommend that the courses be viewed in the order in which they are listed. The courses are listed in the same order as the analyses that we typically perform to produce one of our standard reports. Many of the courses also build on concepts introduced in previous courses, and may be difficult to understand if viewed out of order. We also recommend that the series be taken over the course of 13 consecutive weeks, viewing one course per week. A lot is covered in these courses, so it is a good idea to take your time with them to make sure you understand each course before moving on to the next. We also recommend you try to complete the practice exercises that accompany many of the courses, because the best way to learn bibliometrics is by doing it.

Subject:
Mathematics
Measurement and Data
Material Type:
Lecture
Module
Reading
Provider:
National Institutes of Health
Author:
Candace Norton
Chris Belter
Ya-Ling Lu
Date Added:
01/31/2023
Narrative CV: resources to help you write one
Unrestricted Use
CC BY

This 25-minute course from the University of Glasgow looks at: the thinking behind the move towards narrative CV and assessment formats; how the research landscape and research assessment practices are evolving, and efforts to develop fairer assessment approaches; advice and tips on what to include in a more narrative format; and examples from real narrative CVs written by early-career researchers. The course is directed at early-career researchers, specifically those making use of the Resume for Researchers format (e.g. via UK Research and Innovation (UKRI), the non-departmental public body of the UK government that directs research and innovation funding). Many funding agencies, universities, and employers in the industry and corporate sector now require a more narrative-style CV that incorporates qualitative aspects into job applications (e.g. describing input to publications and the significance of that input).

The goal of these formats is to help researchers share their varied contributions to research in a consistent way, across a wide range of career paths and personal circumstances, and to move away from reliance on narrowly focused performance indicators that can make it harder to assess, reward, or nurture the full range of contributions a researcher or academic makes to their field or discipline. The course helps researchers structure, write, and craft a narrative CV that highlights their individual academic accomplishments and contributions, with a particular emphasis on how they contributed rather than only what they contributed.

Subject:
Education
Higher Education
Material Type:
Module
Reading
Provider:
University of Glasgow
Author:
Lab for Academic Culture at the University of Glasgow
Date Added:
04/20/2022
Research Evaluation Metrics
Conditional Remix & Share Permitted
CC BY-SA

This module covers a number of methods, both established and new, available for research evaluation. The module comprises the following four units:
Unit 1. Introduction to Research Evaluation Metrics and Related Indicators
Unit 2. Innovations in Measuring Science and Scholarship: Analytical Tools and Indicators in Evaluating Scholarly Communications
Unit 3. Article and Author Level Measurements
Unit 4. Online Citation and Reference Management Tools
Brief overviews of the units are presented below.
Unit 1 discusses citation analysis, the use of citation-based indicators for research evaluation, common bibliometric indicators, classical bibliometric laws, author-level indicators drawn from authors' public profiles, and article-level metrics using altmetric tools. Author-level indicators and article-level metrics are relatively new tools for research evaluation. Author-level indicators include the h-index, citation counts, the i10-index, the g-index, articles with citations, average citations per article, the Eigenfactor score, impact points, and the RG score; a small worked example of some of these indicators follows this description. Article-level metrics, or altmetrics, based on Twitter, Facebook, Mendeley, CiteULike, and Delicious are also discussed. All technical terms used in the unit are defined.
Unit 2 deals with analytical tools and indicators used in evaluating scholarly communications. The tools covered are the Web of Science, Scopus, the Indian Citation Index (ICI), CiteSeerX, Google Scholar, and Google Scholar Citations. All of these tools except the Indian Citation Index are international in scope; ICI is not well known outside India, but it is a powerful tool for Indian scholarly literature, and since Indian journals publish a sizable amount of foreign literature, it can be useful for other countries as well. The analytical product with journal performance metrics, Journal Citation Reports (JCR®), is also described. In the chapter titled New Platforms for Evaluating Scholarly Communications, three websites, SCImago Journal & Country Rank (SJR) [ScimagoJR.com], eigenFACTOR.org, and JournalMetrics.com, and one piece of software, Publish or Perish (POP), are discussed.
Article- and author-level measurements are discussed in Unit 3. Author and researcher identifiers are essential for searching databases on the web, because a name like D Singh can stand for a number of names, such as Dan Singh, Dhan Singh, Dhyan Singh, Darbara Singh, Daulat Singh, Durlabh Singh, and more. ResearcherID.com, launched by Thomson Reuters, is a web-based global registry of authors and researchers that disambiguates each name; the Open Researcher and Contributor ID (ORCID) is also a registry that uniquely identifies an author or researcher. Both are discussed in this unit. Article-level metrics (altmetrics) are also treated here, with a discussion of how altmetrics can be measured with Altmetric.com and ImpactStory.org, and altmetrics for online journals are touched upon. Several academic social networks, including ResearchGate.net, Academia.edu, and GetCited.org, are discussed. Regional journal networks with bibliometric indicators also exist; two networks of this type, SciELO (Scientific Electronic Library Online) and Redalyc, are covered.
The last unit (Unit 4) covers online citation and reference management tools. The tools discussed are Mendeley, CiteULike, Zotero, Google Scholar Library, and EndNote Basic. The features of all of these tools are described with figures, tables, and text boxes.
This is Module Four of UNESCO's Open Access Curriculum for Researchers.
Full-Text is available at http://unesdoc.unesco.org/images/0023/002322/232210E.pdf
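Since Unit 1 is essentially a primer on author-level indicators, here is a minimal Python sketch (not part of the UNESCO module) showing how three of the indicators it names, the h-index, i10-index, and g-index, are computed from a list of per-paper citation counts. The citation counts in the example are hypothetical.

def h_index(citations):
    # Largest h such that the author has h papers with at least h citations each.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(ranked, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    # Number of papers with at least 10 citations (the Google Scholar definition).
    return sum(1 for c in citations if c >= 10)

def g_index(citations):
    # Largest g such that the top g papers together have at least g**2 citations.
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(ranked, start=1):
        total += c
        if total >= rank ** 2:
            g = rank
    return g

example_citations = [45, 30, 22, 15, 12, 9, 7, 4, 2, 0]  # hypothetical record
print(h_index(example_citations))    # 7
print(i10_index(example_citations))  # 5
print(g_index(example_citations))    # 10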

Subject:
Applied Science
Career and Technical Education
Education
Higher Education
Information Science
Material Type:
Full Course
Module
Textbook
Unit of Study
Author:
Anup Kumar Das
Date Added:
09/12/2018
Risk of Bias in Reports of In Vivo Research: A Focus for Improvement
Unrestricted Use
CC BY

The reliability of experimental findings depends on the rigour of experimental design. Here we show limited reporting of measures to reduce the risk of bias in a random sample of life sciences publications, significantly lower reporting of randomisation in work published in journals of high impact, and very limited reporting of measures to reduce the risk of bias in publications from leading United Kingdom institutions. Ascertainment of differences between institutions might serve both as a measure of research quality and as a tool for institutional efforts to improve research quality.

Subject:
Biology
Life Science
Material Type:
Reading
Provider:
PLOS Biology
Author:
Aaron Lawson McLean
Aikaterini Kyriakopoulou
Andrew Thomson
Aparna Potluru
Arno de Wilde
Cristina Nunes-Fonseca
David W. Howells
Emily S. Sena
Gillian L. Currie
Hanna Vesterinen
Julija Baginskitae
Kieren Egan
Leonid Churilov
Malcolm R. Macleod
Nicki Sherratt
Rachel Hemblade
Stylianos Serghiou
Theo Hirst
Zsanett Bahor
Date Added:
08/07/2020
San Francisco Declaration on Research Assessment
Unrestricted Use
CC BY

The Declaration on Research Assessment (DORA) recognizes the need to improve the ways in which the outputs of scholarly research are evaluated. The declaration was developed in 2012 during the Annual Meeting of the American Society for Cell Biology in San Francisco. It has become a worldwide initiative covering all scholarly disciplines and all key stakeholders including funders, publishers, professional societies, institutions, and researchers. The DORA initiative encourages all individuals and organizations who are interested in developing and promoting best practice in the assessment of scholarly research to sign DORA.

Other resources are available on their website, such as case studies of universities and national consortia that demonstrate key elements of institutional change to improve academic career success.

Subject:
Education
Higher Education
Material Type:
Reading
Author:
American Society for Cell Biology
Date Added:
04/20/2022
Using Altmetric Data Responsibly: A Guide to Interpretation and Good Practice
Unrestricted Use
CC BY

This guide focuses specifically on data from the data provider and company Altmetric, though other types of altmetrics are mentioned and occasionally used for comparison, such as the Open Syllabus database for gauging educational engagement with scholarly outputs. The guide opens with an introduction, followed by an overview of Altmetric and the Altmetric Attention Score, Altmetrics and Responsible Research Assessment, Output Types Tracked by Altmetric, and the Altmetric Sources of Attention, which include News and Mainstream Media; Social Media (X (formerly Twitter), Facebook, Reddit, and historical data from Google+, Pinterest, LinkedIn, and Sina Weibo); Patents; Peer Review; Syllabi (historical data only); Multimedia; Public Policy Documents; Wikipedia; Research Highlights; Reference Managers; and Blogs. It closes with a conclusion, a list of related resources and readings, two appendices, and references.

The guide is intended for librarians, practitioners, funders, and other users of Altmetric data, as well as those interested in incorporating altmetrics into their bibliometric practice and/or research analytics. It can also help researchers who are going up for annual evaluations and promotion and tenure reviews, who can use the data in informed and practical applications. Finally, it can be a useful reference for research managers and university administrators who want to understand the broader online engagement with research publications beyond traditional scholarly citations (bibliometrics), but who also want to avoid misusing, misinterpreting, or abusing Altmetric data when making decisions, creating policies, and evaluating faculty members and researchers at their institutions.
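For readers unfamiliar with how a composite attention indicator is assembled, the sketch below shows the general idea of a weighted count of mentions across sources. The weights are purely illustrative assumptions: Altmetric's actual Attention Score uses its own proprietary weights, per-source modifiers, and curation, so this is not a reimplementation of it.

# Illustrative weighted attention count; weights are made-up assumptions,
# NOT Altmetric's actual weights.
ASSUMED_WEIGHTS = {
    "news": 8.0,
    "blogs": 5.0,
    "policy": 3.0,
    "wikipedia": 3.0,
    "twitter": 1.0,
    "facebook": 0.25,
}

def weighted_attention(mentions):
    # mentions: dict mapping source name -> number of mentions of one research output
    return sum(ASSUMED_WEIGHTS.get(source, 0.0) * count
               for source, count in mentions.items())

example = {"news": 2, "blogs": 1, "policy": 1, "twitter": 40}
print(weighted_attention(example))  # 2*8 + 1*5 + 1*3 + 40*1 = 64.0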

Subject:
Applied Science
Education
Higher Education
Information Science
Material Type:
Reading
Provider:
Virginia Tech
Provider Set:
VTech Works
Author:
Rachel Miles
Robyn Price
Date Added:
12/04/2023
Using InCites responsibly: a guide to interpretation and good practice
Unrestricted Use
CC BY

This guide has been created by bibliometric practitioners to support other users of InCites, a research analytics tool from Clarivate Analytics that uses bibliographic data from Web of Science; the guide promotes a community of informed and responsible use of research impact metrics. The recommendations in this document may be most suited to academic-sector users, but the authors hope that other users may also benefit from the suggestions.

The guide aims to provide plain-English definitions, key strengths and weaknesses, and some practical application tips for some of the most commonly used indicators available in InCites. The indicator definitions are followed by explanations of the data that powers InCites, to help users understand where the data comes from and how the choices made in selecting and filtering data will affect the final results. The document also includes a comparative table highlighting differences between indicators in InCites and SciVal, another commonly used bibliometric analytics programme, and instructions on how to run group reports.

All of the advice in this document is underpinned by a belief in the need to use InCites in a way that respects the limitations of indicators as quantitative assessors of research outputs. Both of the authors are members of signatory institutions of DORA, the San Francisco Declaration on Research Assessment. A summary of advice on using indicators and bibliometric data responsibly is available on pages 4-5 and should be referred to throughout. Readers are also recommended to refer to the official InCites Indicators Handbook produced by Clarivate Analytics. The guide was written with complete editorial independence from Clarivate Analytics, the owners of InCites; Clarivate Analytics supported the authors of this document by checking for factual accuracy only.
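As background to the kind of indicator the guide explains, the sketch below shows how a field-normalised citation indicator of the Category Normalized Citation Impact type is typically computed: each paper's citations are divided by the expected (average) citations for papers of the same subject category, publication year, and document type, and the ratios are averaged. The baseline values here are hypothetical placeholders; in InCites the baselines come from Web of Science data.

# Sketch of a field-normalised citation indicator; baselines are hypothetical.
# (category, year, document type) -> average citations for that baseline set
EXPECTED = {
    ("Oncology", 2019, "Article"): 12.4,
    ("Mathematics", 2019, "Article"): 3.1,
}

papers = [
    {"category": "Oncology", "year": 2019, "type": "Article", "citations": 25},
    {"category": "Mathematics", "year": 2019, "type": "Article", "citations": 4},
]

def normalised_impact(papers, expected):
    ratios = []
    for p in papers:
        baseline = expected[(p["category"], p["year"], p["type"])]
        ratios.append(p["citations"] / baseline)  # 1.0 = average for that baseline set
    return sum(ratios) / len(ratios)

print(round(normalised_impact(papers, EXPECTED), 2))  # (25/12.4 + 4/3.1) / 2 = 1.65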

Subject:
Education
Higher Education
Material Type:
Reading
Author:
Gray A
Price R
Date Added:
05/09/2022
Using SciVal responsibly: a guide to interpretation and good practice
Conditional Remix & Share Permitted
CC BY-NC-SA

This guide is designed to help those who use SciVal, a research analytics tool from Elsevier that draws its bibliographic data from Scopus, to source and apply bibliometrics in academic institutions. It was originally devised in February 2018 by Dr. Ian Rowlands of King’s College London as a guide for his university, which makes SciVal widely available to its staff; King’s does this because it believes that bibliometric data are best used in context by specialists in the field. A small group of LIS-Bibliometrics committee members reviewed and revised the King’s guide to make it more applicable to a wider audience. SciVal is a continually updated source, so feedback is always welcome at LISBibliometrics@jiscmail.ac.uk. LIS-Bibliometrics is keen that bibliometric data should be used carefully and responsibly, and this requires an understanding of the strengths and limitations of the indicators that SciVal publishes.

The purpose of this guide is to help researchers and professional services staff make the most meaningful use of SciVal. It includes some important ‘inside track’ insights and practical tips that may not be found elsewhere. The scope and coverage limitations of SciVal are fairly widely understood and serve as a reminder that these metrics are not appropriate in fields where scholarly communication takes place mainly outside the journal and conference literature; this is one of the many judgment calls that need to be made when putting bibliometric data into their proper context. One of the most useful features of SciVal is the ability to drill down in detail using various filters. This allows a user to define a set of publications accurately, but it may also mean generating top-level measures that are based on small samples with considerable variance. Bibliometric distributions are often highly skewed, so even apparently simple concepts like the ‘average’ can be problematic. One objective of this guide is therefore to set out some advice on sample sizes and broad confidence intervals, to avoid over-interpreting the headline data. Bibliometric indicators should always be used in combination, not in isolation, because each can only offer partial insights. They should also be used in a ‘variable geometry’ along with other quantitative and qualitative indicators, including expert judgments and non-publication metrics, such as grants or awards, to flesh out the picture.
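To make the point about small, skewed samples concrete, here is a short sketch (not from the guide) that contrasts the mean and median of a small, highly skewed set of citation counts and bootstraps a confidence interval for the mean; the wide interval is exactly why headline averages from small samples should not be over-interpreted. The citation counts are hypothetical.

import random
from statistics import mean, median

# Hypothetical citation counts for a small publication set: most papers are
# little cited, one is very highly cited (a typically skewed distribution).
citations = [0, 1, 1, 2, 2, 3, 4, 6, 9, 120]

def bootstrap_mean_ci(data, n_resamples=10_000, alpha=0.05, seed=42):
    # Percentile bootstrap confidence interval for the mean.
    rng = random.Random(seed)
    means = sorted(mean(rng.choices(data, k=len(data))) for _ in range(n_resamples))
    return (means[int(alpha / 2 * n_resamples)],
            means[int((1 - alpha / 2) * n_resamples) - 1])

print(mean(citations))               # 14.8 -- pulled up by the single outlier
print(median(citations))             # 2.5  -- a more typical value for this set
print(bootstrap_mean_ci(citations))  # a wide interval, roughly (3, 38)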

Subject:
Education
Higher Education
Material Type:
Reading
Author:
Elizabeth Gadd
Ian Rowlands
LIS-Bibliometrics Committee
Date Added:
05/09/2022