Is the Scientific Paper Still Effective?

  • Thread starter: Greg Bernhardt
  • Tags: Paper, Scientific
SUMMARY

The discussion centers on the effectiveness of the scientific paper in modern research. Participants debate whether early scientific papers were really more accessible and less jargon-heavy, while noting that contemporary papers have become longer and more complex, which has contributed to a replication crisis. The conversation highlights the need for improved communication methods in science, such as interactive notebooks like Mathematica and open-access databases like ModelDB. There is also concern that AI could centralize scientific knowledge, undermining the chaotic yet beneficial nature of current scientific discourse.

PREREQUISITES
  • Understanding of the historical evolution of scientific communication
  • Familiarity with the replication crisis in scientific research
  • Knowledge of interactive computational tools like Mathematica and IPython
  • Awareness of open-access databases such as ModelDB and ZFIN
NEXT STEPS
  • Research the impact of the replication crisis on scientific credibility
  • Explore the functionalities of interactive notebooks like Jupyter and Mathematica
  • Investigate the role of open-access publishing in enhancing scientific communication
  • Examine the implications of AI in scientific research and knowledge management
USEFUL FOR

Researchers, academic writers, data scientists, and anyone involved in scientific communication and publication processes.

I'm going to assume there will be some strong opinions on this piece!

The scientific paper—the actual form of it—was one of the enabling inventions of modernity. Before it was developed in the 1600s, results were communicated privately in letters, ephemerally in lectures, or all at once in books. There was no public forum for incremental advances. By making room for reports of single experiments or minor technical advances, journals made the chaos of science accretive. Scientists from that point forward became like the social insects: They made their progress steadily, as a buzzing mass.

The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little “computation” contributed to the results was done by hand and could be verified in the same way.

The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. These programs tend to be both so sloppily written and so central to the results that it’s contributed to a replication crisis, or put another way, a failure of the paper to perform its most basic task: to report what you’ve actually discovered, clearly enough that someone else can discover it for themselves.

https://www.theatlantic.com/science/archive/2018/04/the-scientific-paper-is-obsolete/556676/
 
I think the objection that they have become far more complicated, and sometimes difficult to replicate because of the complexity of the associated data analysis, is a fair one, but it's quite a stretch to go from there to "obsolete".
 
Sounds like it was written by a cynic.

I have read some early papers, and I did not find them all short, to the point, or free of jargon.
Many of them were pretty chatty rather than to the point.

These programs tend to be both so sloppily written...
Most programs I have used for science have worked fine.

I like the point about papers being more focused on single points and how they accrete to a larger body of knowledge. I think this is important to a more rapid advance of science.
Also, the growth of scientific knowledge has led to there being more knowledge of the world than any one person can hold as an individual. This increase in knowledge has also led to the use of jargon (specialized terms particular to a field) as a communication aid among people within a field.
 
The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little “computation” contributed to the results was done by hand and could be verified in the same way.
The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols.
I don't have much knowledge about this topic, but I'm pretty sure that earlier scientific papers were less accessible than the author implies. Many publications are hidden behind paywalls today, but it's still much easier to get hold of almost any kind of information on the internet; I doubt that any means of distributing papers in the past can rival this kind of accessibility. Even assuming what the author says is true, I also doubt that academics intentionally try to complicate their work. Jargon is necessary to ensure precision when communicating within a field. Scientific communication with the public, on the other hand, is an arduous and equally important task that involves retaining this level of precision while teaching in an engaging and understandable manner. That is an entirely different can of worms I won't get into here.

I'm more interested in his point about interactivity. The interactivity of notebooks in Mathematica is something I wish were more prevalent in standardized document formats, but these kinds of documents are difficult to standardize. Even though I am a fan of programs like Mathematica, their licenses can be hideously expensive. IPython, Sage, and other free software are potential solutions, but they change rapidly and haven't become widespread enough as of now. Academia is fairly traditional and very slow to change; I hope academic papers can come up to speed and accommodate the wonderful capabilities of modern technology.
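To make the interactivity point concrete, here is a minimal sketch of the kind of notebook cell being discussed, assuming a Jupyter/IPython environment with numpy, matplotlib, and ipywidgets installed; the damped-oscillator function is just an invented illustration, not anything from the article.

import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact

def plot_damped_oscillation(frequency=1.0, damping=0.1):
    """Re-plot a damped oscillation whenever the sliders below the cell change."""
    t = np.linspace(0, 10, 500)
    y = np.exp(-damping * t) * np.sin(2 * np.pi * frequency * t)
    plt.figure(figsize=(6, 3))
    plt.plot(t, y)
    plt.xlabel("time (s)")
    plt.ylabel("amplitude")
    plt.title(f"f = {frequency:.2f} Hz, damping = {damping:.2f}")
    plt.show()

# interact() attaches sliders to the function's arguments, so a reader can
# explore the parameter space instead of looking at one static figure.
interact(plot_damped_oscillation, frequency=(0.1, 5.0, 0.1), damping=(0.0, 1.0, 0.05))

Something like this embedded in a paper would let readers probe the result themselves rather than take a handful of static plots on faith.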
 
Papers should be shorter for sure. As the number of papers increases exponentially, journals should encourage short, concise and far less redundant work.
 
In theoretical/computational neuroscience, we have ModelDB:

https://senselab.med.yale.edu/modeldb/

Host your model's scripts and data, reference the link in your paper, and let people run it themselves.
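As a rough illustration of the kind of self-contained script someone might deposit there (a hypothetical example, not taken from any actual ModelDB entry), here is a leaky integrate-and-fire neuron driven by a constant current, runnable with a stock Python install:

def simulate_lif(i_input=1.5e-9, t_max=0.5, dt=1e-4, tau=0.02,
                 r_m=2e7, v_rest=-0.065, v_thresh=-0.050, v_reset=-0.065):
    """Euler-integrate dV/dt = (-(V - V_rest) + R_m * I) / tau and return spike times in seconds."""
    v = v_rest
    spike_times = []
    for step in range(int(t_max / dt)):
        v += (-(v - v_rest) + r_m * i_input) / tau * dt
        if v >= v_thresh:              # threshold crossing: record a spike and reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

if __name__ == "__main__":
    spikes = simulate_lif()
    print(f"{len(spikes)} spikes in 0.5 s of simulated time")

The model itself is beside the point; what matters is that a reader can rerun the exact code behind a figure instead of reconstructing it from the methods section.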
 
I used to have a pet solution to this problem. My AI fantasy used to be to have all mathematical and scientific knowledge stored in a computer which can serve as encyclopedia, proof checker, tutor, experiment proposer, theorem originator, conversationalist, data analyzer, etc. When I say "stored" I mean the knowledge representation would be such that it would be suitable for semantic processing by a machine.

One aspect would be that all scientific papers would be written in such a way that they could be machine translated into a format that could be parsed by the computer. It would then validate the paper according to various criteria and either accept or reject. Papers which are basically duplicates of those already accepted would of course be rejected.
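Just to illustrate what I have in mind by a machine-parseable format (a purely hypothetical sketch, not an existing standard; every field name here is invented), each paper's central claim could be deposited as a structured record alongside the prose:

from dataclasses import dataclass, field

@dataclass
class ClaimRecord:
    claim: str                    # the central result, stated as one sentence
    evidence_type: str            # e.g. "experiment", "simulation", "proof"
    depends_on: list = field(default_factory=list)  # identifiers of prior results relied on
    data_url: str = ""            # where the raw data and analysis scripts live

paper = ClaimRecord(
    claim="Compound X reduces reaction yield by roughly 20% at 300 K",
    evidence_type="experiment",
    depends_on=["doi:10.5555/placeholder.prior.result"],   # placeholder identifier
    data_url="https://example.org/archive/compound-x",     # placeholder URL
)

# A validator could then follow depends_on links, check that the data archive
# exists, and flag near-duplicate claim strings before acceptance.
print(paper)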

If you wanted to review the latest literature on a given topic, you would log into the system and engage in a conversation with the computer using natural language. It would serve as a super-intelligent scientist who would know all existing scientific knowledge and be able to out-think the greatest geniuses in history.

But I have nightmares when I think of that kind of system being controlled by the type of people we have in prominence now in the tech world, or even by benevolent people. I like the way the current system is somewhat redundant and perhaps chaotic and full of mistakes. Some redundancy, error, and even chaos is far better than central control by a small group of humans.
 
Aufbauwerk 2045 said:
could be machine translated into a format that could be parsed by the computer.
I know some people who work at the ZFIN (Zebrafish Information Network) DB.
They have a lot of people who read articles published on zebrafish biology and curate entries into the DB about the findings, methods and techniques used in the articles. You can then do searches on the information.
This is analogous to the machine reading of papers you describe, but probably more open-ended and less restrictive, in that what the papers are about would not have to be defined before the papers are read. A better approach, in my opinion.

Aufbauwerk 2045 said:
Papers which are basically duplicates of those already accepted would of course be rejected.
I think this would be a bad idea. It is good to publish things that either confirm or contradict previous publications; both are useful to the progress of science. There could be many valid reasons, which a program may not understand, to publish something that confirms or contradicts a previous publication.
 
Aufbauwerk 2045 said:
I used to have a pet solution to this problem. My AI fantasy used to be to have all mathematical and scientific knowledge stored in a computer which can serve as encyclopedia, proof checker, tutor, experiment proposer, theorem originator, conversationalist, data analyzer, etc. When I say "stored" I mean the knowledge representation would be such that it would be suitable for semantic processing by a machine.

One aspect would be that all scientific papers would be written in such a way that they could be machine translated into a format that could be parsed by the computer. It would then validate the paper according to various criteria and either accept or reject. Papers which are basically duplicates of those already accepted would of course be rejected.

If you wanted to review the latest literature on a given topic, you would log into the system and engage in a conversation with the computer using natural language. It would serve as a super-intelligent scientist who would know all existing scientific knowledge and be able to out-think the greatest geniuses in history.

But I have nightmares when I think of that kind of system being controlled by the type of people we have in prominence now in the tech world, or even by benevolent people. I like the way the current system is somewhat redundant and perhaps chaotic and full of mistakes. Some redundancy, error, and even chaos is far better than central control by a small group of humans.

I have also thought about the idea that in the future we could have AI super-scientists who could know more than any single individual ever could, given mortality and whatnot. If there were many of them, this future would not have the problem of the AIs having "central control by a small group of humans".
 
gibberingmouther said:
this future would not have the problem of the AIs having "central control by a small group...".
Until the AIs unionized.
 
