I would like to take a break from being rude about university league tables to talk about a league table that is (too) good for my ego. This one is of publishing academics and ranks me at 32,524 (data here). Being 32,524th may not sound impressive, but the world’s population is over 7 billion – admittedly most of the 7 billion don’t publish academic papers. It may also be the highest rank of anyone in my School at Surrey (but not the highest rank of anyone at Surrey). It is also higher than my sister’s rank*.
Of course this ranking is pretty arbitrary. It is by Ioannidis and coworkers and is based on what they call the C-score. The C-score is well described in a post by Wil van der Aalst – which you should take seriously because he is ranked at the heady heights of 243 (🤣).
The C in C-score is for composite, and it combines (the normalised log of) six different metrics, including total citations, the h-index and a couple of others that attempt to give more credit to those who contributed the most to a paper. These are things like citations to papers where the author is first or last author. The idea is that the first author has often contributed more to the paper than the other authors in the list, and that in many (not all) fields the senior authors who led the work are near the end of the list, with the senior author who was overall lead being last. Here senior author means the academics who guided research mostly done by PhD students and postdocs, who are at the front of the author list.
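To make that a little more concrete, here is a minimal sketch (in Python, with made-up numbers) of how a composite of this sort can be put together: each metric is log-transformed and scaled by the largest log value in the dataset, and the contributions are then summed. The metric names, the toy data and the exact normalisation below are my own assumptions for illustration, not the precise recipe used by Ioannidis and coworkers, which is set out in their papers and data files.

```python
# Illustrative sketch only: a composite score built from several citation
# metrics, each log-transformed and scaled to [0, 1] by the maximum value
# across all authors in the dataset, then summed. The real C-score uses six
# specific indicators and its own normalisation; the metric names and the
# numbers here are invented for illustration.
import math

def composite_score(author_metrics, all_authors_metrics):
    """author_metrics: dict of metric name -> value for one author.
    all_authors_metrics: list of such dicts for the whole dataset."""
    score = 0.0
    for metric, value in author_metrics.items():
        max_value = max(a[metric] for a in all_authors_metrics)
        # log(1 + x) handles zero counts and damps very large values
        score += math.log(1 + value) / math.log(1 + max_value)
    return score

# Toy example with made-up numbers and only three indicators
authors = [
    {"total_citations": 12000, "h_index": 45, "first_last_author_citations": 6000},
    {"total_citations": 800,   "h_index": 15, "first_last_author_citations": 500},
]
print(composite_score(authors[0], authors), composite_score(authors[1], authors))
```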
Like any metric it has a set of biases. Like the widely used h-index (h = the number of papers published by an author that have been cited at least h times) it favours older scientists. The longer your career, the more time you have had to publish papers that have been around long enough to get cited. So you can’t use either the h-index or the C-score to compare older established academics with those early in their career.
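As a concrete illustration of that definition (again in Python, with invented citation counts), the h-index can be computed by sorting an author’s per-paper citation counts in descending order and counting down until a paper has fewer citations than its position in the list:

```python
# Minimal h-index calculation from a list of per-paper citation counts:
# h is the largest number such that the author has h papers with at least
# h citations each. The citation counts below are invented for illustration.
def h_index(citations):
    cited = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cited, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([50, 30, 22, 15, 8, 4, 3, 1, 0]))  # -> 5
```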
What the C-score tries to do is compensate for the fact that metrics like total citations or the h-index give an academic the same credit however much or little they did for a paper. It counts the same whether they are the sole author or made a very big contribution to the paper, or whether they are in the middle of a list of 5,154 authors of a particle physics paper on the Higgs boson, and so may not have done much more than check some code, make some tea, … But working out how much an author contributed just from their position in the author list is tricky, as the variation in the number of authors is huge – from one to 5,154 – and practices of positioning authors in the author list vary from field to field. So there is no unique way of comparing any two academics who have different patterns of collaboration.
But if you are comparing, say, a theoretical physicist who mostly publishes one- or two-author papers with, say, an experimentalist who routinely works with ten or more authors on papers, the C-score may be a bit more sensible than the h-index. And having more than one index is sensible: for example, I rank above a colleague or two in the School by C-score but below them on h-index**, and this provides a useful warning that these rankings are all a bit arbitrary.
* My “career impact” measured by the C-score is higher than that of my sister, but they also do a 2022 single-year score, and there my sister wins, so honours are equal amongst siblings. Or maybe this just means my younger sister is catching up on me …
** I have more single/first-author papers than most physics academics, in part because I am a computational/theoretical physicist. Experimental physics academics very rarely publish single-author papers; they need people in the lab to turn their ideas into data. In practice this should mean that the C-score is systematically more favourable to most theoreticians than the h-index.