Collaborative Science: The Benefits and Challenges of Large-Scale Experiments

  • Thread starter sbrothy
In summary, a large author list does not by itself make a paper hard to read; most listed authors have contributed to the work in some way, and large collaborations review their papers internally before publication.
  • #1
In science today, experiments in particular, a good thing has come of the fact that most experiments capable of moving the goalposts of our accumulated knowledge must now be group efforts. Thus, in an article like this one:

the sheer number of authors carries with it - if not a guarantee, then at least some assurance - that they're not wasting everyone's time and resources. Access to the equipment alone is probably only granted once you've proven that what you're doing is serious (and consistently proven over many years, I figure).

The downside, for uneducated hacks like me, is of course that the material becomes increasingly dense, specialized and/or obfuscated, to the point of ineffable incomprehension. :)

EDIT: Sprinkled some commas in there in a sad attempt at punctuation. :)
  • #2
sbrothy said:
The downside, for uneducated hacks like me, is of course that the material becomes increasingly dense, specialized and/or obfuscated, to the point of ineffable incomprehension. :)

I don't think that's a function of number of authors. It's a function of specialization. I can point you at some few-author papers that are just as hard to read.

Also, there is information in the number of authors that is not captured by "lots". In 2001 a major collaboration published an unexpected (and ultimately unreproduced) result. 460 authors signed that paper, but other papers from around that time had around 490. So roughly 30 people were unconvinced enough to take the drastic step of asking to have their names removed from that paper.
  • #3
The big collaborations all have internal review processes. What is uploaded to arXiv has already gone through more than one round of peer review - not formally from a journal, but from people working on the same experiment. That filters out problems with the analyses, and it also tends to improve the readability of the paper at the same time.

The large number of people in the author list doesn't say much by itself, however. Big collaborations keep a common author list, and by default everyone on that list is credited on every paper. I'm guessing at the numbers - they might have been different for this particular paper - but here is a typical distribution. Of the 500 authors:
* 3-4 have actually written it and did most of the work being reported in this paper
* 3-4 others have contributed notably to the main analysis
* 5-10 have reviewed the process from the first internal draft to the submission on arXiv
* 30-60 have reviewed the paper and sent comments
* Maybe half of the rest have read the abstract; the other half read the title or didn't care about the paper at all. The numbers were perhaps a bit higher here because it's one of the first papers from this collaboration. ATLAS and CMS circulate two new papers every week, and not many people read all of them.

Most of the last group will still have contributed to the publication in one way or another: running one of the subdetectors, doing shifts during data-taking, working on the reconstruction of particles from raw data, or one of the many other tasks needed in the background. Usually the big collaborations require some of that work from new members before they are added to the author list.

