Post-Scientific Technocratic Methodologies

  • Thread starter DialogicalCatalyst
In summary, the conversation discussed the Demarcation Criteria: the distinction between science and pseudo-science. It was noted that certain traits define science, such as a heavy emphasis on observation, hypothesis testing, and peer review. However, as technology advances, new methodologies are emerging that may not fit these traditional criteria, such as video games solving scientific puzzles and AIs making discoveries without human help. This raises the question of when scientists stopped being natural philosophers and what the difference is between the two. The speaker proposes the idea of "Post-Scientific Methodologies" that may make traditional science obsolete. These could include AIs working with virtual reality systems and augmented cyber-minds, which may not require peer review.
  • #1

DialogicalCatalyst

Let me start by saying I am not a trained scientist by any means; my specialty is more philosophy. But during my ventures into science and skepticism, I have come across what we generally refer to as the demarcation criteria. I think we all know it is important to distinguish between science and pseudo-science, and to note that not every methodology or means of finding truth is science. Investigative reporting is not necessarily science. Philosophy is not science. History is not necessarily science. More scientific approaches to these subjects can be taken, but by and large they are not by themselves science.

The demarcation problem is far from solved, and science itself encompasses several fields which may initially seem to have very little in common with respect to the approaches taken by their practitioners. An astrophysicist working with computer models can seem very different from a zoologist working out on the African savannah, but these groups still share some overlapping traits which make them more or less scientists practicing science. These features seem to be:

- Heavy emphasis on observation

- Hypothesis testing

- Peer review

These are various "gold standards" of commonly accepted traits. This is of course very general and vague, as the demarcation problem is far from solved: https://plato.stanford.edu/entries/pseudo-science/#UniDiv

However, it is important to have some standards regarding what we consider science, if only to distinguish it from pseudo-science.

I would also like to note that science itself evolved from various proto-scientific methodologies, namely natural history and natural philosophy:

https://en.wikipedia.org/wiki/Natural_history
https://en.wikipedia.org/wiki/Natural_philosophy
This was preceded by several evolutions, a key example of which is Francis Bacon's "Novum Organum" ("New Organon"), which was, by and large, an argument for the use of inductive methods in a day and age when inductive logic was seen as inferior to deductive and formal logic, which the Medieval and Scholastic mind regarded as far more certain.

In fact, the word 'scientist' was coined rather recently from a historical perspective, in 1833: https://askdruniverse.wsu.edu/2017/11/14/who-came-up-with-the-word-science/

So when did scientists stop being natural philosophers, and what exactly is the difference between a natural philosopher and a scientist? This question is important, because it is at the crux of the philosophical discovery I propose, which, in Asimovian tradition, I humbly claim to have discovered by myself and which may be important.

And that is: I believe we are on the cusp of new methodologies, made possible by technological advances, which will be to science what science is to natural philosophy. For lack of a better term, I call these "Post-Scientific Methodologies".

What are these exactly? Well, much like Karl Popper, I do not put much stock in winning or losing arguments by "definition", and when it comes to very new phenomena, definitions by their very nature may be lacking. I am approaching this subject based on observations and examples of what I consider to be Post-Scientific Methods. To give a very vague and general description, however, I would say they are technologically advanced methods of uncovering truth and knowledge that do not look much like what we would call science. They may not be dependent on observations, may not be peer reviewed (in some cases, such as AI discoveries, they cannot be peer reviewed at all), and may not involve hypothesis testing. Yet they still work.

So some examples:

Gamers solve molecular puzzle that baffled scientists

Basically, scientists, frustrated after trying for decades to uncover various features of the AIDS retrovirus, decided to throw a Hail Mary and toss out the question as a video game. The gamers solved in ten days what trained scientists could not solve in years. Is this science? I would be hard pressed to say playing video games is the same as conducting science, even if it leads to scientific discoveries. If we can count that as science, then when I figure something out about the objective world by reading a book or playing a story-based video game, we can theoretically call reading books an act of science too. That sounds a little too vague to me; if that is the case, we are basically calling ALL methods which discover objective truths science, no matter what, which means science has no point of demarcation at all besides what we think is convenient.

Another example: Computers are providing solutions to math problems that we can't check

So there goes peer review. There goes hypothesis testing. And there goes a lot of other features which we generally associate with scientists doing science.

Supercomputers make discoveries that scientists can't

The headline of the New Scientist article speaks for itself. A bunch of AIs mined scientific literature and made several discoveries, pretty much just by reading and inferring.

"IN MAY last year, a supercomputer in San Jose, California, read 100,000 research papers in 2 hours. It found completely new biology hidden in the data. Called KnIT, the computer is one of a handful of systems pushing back the frontiers of knowledge without human help."

So now we have to ask ourselves, if I read a bunch of books or articles, and by such reading deduce various truths about the objective world, does my reading and deduction now count as science?
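The kind of literature mining described above is usually classed as "literature-based discovery," going back to Don Swanson's "ABC" model: if the literature links term A to B, and B to C, but never A to C directly, then A-C is a candidate hidden connection. Below is a minimal toy sketch of that idea; the mini-corpus and terms are invented for illustration, and KnIT's actual algorithms are far more sophisticated and not fully public in this detail.

```python
# Toy sketch of Swanson-style "ABC" literature-based discovery:
# if term A co-occurs with B, and B co-occurs with C, but A and C
# never appear together, the A-C pair is a candidate hidden link.
from itertools import combinations
from collections import defaultdict

# Invented mini-corpus standing in for paper abstracts.
abstracts = [
    "fish oil reduces blood viscosity",
    "blood viscosity is elevated in raynaud syndrome",
    "fish oil lowers platelet aggregation",
    "platelet aggregation worsens raynaud syndrome",
]

terms = {"fish oil", "blood viscosity", "platelet aggregation", "raynaud syndrome"}

def cooccurrence(abstracts, terms):
    """Map each unordered term pair to the number of abstracts mentioning both."""
    counts = defaultdict(int)
    for text in abstracts:
        present = [t for t in terms if t in text]
        for pair in combinations(sorted(present), 2):
            counts[pair] += 1
    return counts

def hidden_links(counts, terms):
    """Term pairs never seen together but bridged by at least one shared term."""
    links = []
    for a, c in combinations(sorted(terms), 2):
        if counts[(a, c)]:
            continue  # already directly connected in the literature
        bridges = [b for b in terms - {a, c}
                   if counts[tuple(sorted((a, b)))] and counts[tuple(sorted((b, c)))]]
        if bridges:
            links.append((a, c, sorted(bridges)))
    return links

counts = cooccurrence(abstracts, terms)
print(hidden_links(counts, terms))
```

On this toy data the system surfaces the fish oil / Raynaud's syndrome pair, which is in fact Swanson's famous real-world example. A real system would use proper entity recognition and statistical weighting rather than raw substring co-occurrence, but the shape of the inference, reading and linking rather than experimenting, is the same.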

This, I believe, is the tip of the iceberg. Call it what you will: "Auto-Science" or "Virtual-Science" or something else completely. However, I believe we are on the cusp of a whole series of new methodologies that, once more fully developed, could perhaps make science as we currently understand it obsolete. AIs working with VR systems, augmented cyber-minds working via networks, etc. Should these things be considered science, or are they qualitatively distinct methods, just as we consider science distinct from natural philosophy? Perhaps "science" itself may come to be associated with a methodology that is considered somewhat obsolete, simply because it is slower than VR-networked research or having an AI just figure it out on its own. With that, peer review may become a thing of the past. AIs may not even be able to communicate how exactly they came to their conclusions to other AIs, let alone to the rest of us. A cybernetic network or gaming network may have similar problems with communication, depending on how advanced and specialized it is. All we will know is that these new methods seem to work, because their application will likely give us better technology. That is roughly why people believe science works at all anyway: most of the public does not understand the philosophy of science or the technicalities of any field, but science produces technology, and technology works. Likewise, I imagine we will see similar patterns with these new methods: they may be beyond our understanding, but the fact that they get results will vindicate their accuracy.
 
  • #2
DialogicalCatalyst said:
I believe we are on the cusp of a whole series of new methodologies that, once more fully developed, could perhaps make science as we currently understand it obsolete
I think this is unlikely. The development of a new tool rarely makes an old tool obsolete. For instance knives were used as a weapon of war, later guns were developed, then bombers, and then nukes. However, the army still uses knives, in addition to all of the other tools.

Nukes didn’t make knives obsolete. I don’t expect that AI and other tools will make the scientific method obsolete.
 
  • Like
Likes weirdoguy, Klystron, russ_watters and 4 others
  • #3
Applying your reasoning: "All problem solving is not science." While comfortable with Popper, Abraham Maslow 'cuts to the chase' by assigning a hierarchy of needs to human endeavors. A young member of our technological culture needs their smart phone, essentially a networked portable computer, at least as much as a young Pythagorean needed their straightedge and plumb line; a young Octavian their scrolls.

Modern literature is practically defined by announcements of Postmodernism at least since the invention of the printing press placed knowledge in the hands of the literate. There have been many scientific and technological milestones achieved yet society still functions. I share Asimov's optimism concerning humanity's future and also share his warning to avoid an overly anthropocentric view. Ironically, a robot welded my bicycle frame yet I ride it the same.
 
  • #4
DialogicalCatalyst said:
Let me start by saying I am not a trained scientist by any means, my specialty is more philosophy. [...]

How "scientific", do you think, is your analysis? Do you think that using something that is very new and barely has any solid verification allows you to make a definitive conclusion about what will happen in the future?

These AI systems can only look for patterns. They can't perform or design experiments on their own or look for the "Who Ordered That?" phenomena. These articles even explicitly mention that the program scoured through published papers, i.e. the published papers, which the AI itself did not produce, were the data sources. Who do you think did the work in all of those published papers? Robots?

And using pop-science news articles, which often have the propensity to over-hype and over-sell even unverified and outlandish ideas, as your primary source is dubious. Even science does not do that.

Zz.
 
  • Like
Likes Klystron and Dale
  • #5
Dale said:
I think this is unlikely. The development of a new tool rarely makes an old tool obsolete. For instance knives were used as a weapon of war, later guns were developed, then bombers, and then nukes. However, the army still uses knives, in addition to all of the other tools.

Nukes didn’t make knives obsolete. I don’t expect that AI and other tools will make the scientific method obsolete.

First, thank you for replying in a respectful manner. I appreciate it, when people are willing to respond to me fairly and objectively.

Second, I do not agree. Clausewitz, who wrote the eight-book work On War, writes about how outdated not just old weapons become, but old tactics. He writes that using outdated means and tactics is like fighting with wooden swords when someone comes in with steel and starts chopping off arms.

If you have the Manhattan Project and no one else does, it makes their weapons obsolete. That is the only reason the USSR, with its two-million-man army of seasoned troops, did not invade Western Europe. That is why European powers (with superior technology) steamrolled Native Americans, Africans, and other peoples.

Maybe knives are still used, very rarely, in war. But are horses used in travel? Are oxen used on farms? Do blacksmiths really matter in the era of steel mills? Things, sooner or later, do become obsolete with new technology. And old methods get replaced by new methods.

Some people have accused me of advocating "Post-Modernism"; I am arguing for the opposite. I am not saying all methods are equal. I am saying technocratic methods will make BETTER methods: better than traditional means like post-modernists argue for, and even better than what we now call science. Maybe I am wrong, but to me it really looks like AI and cyber-minds working in networks can get a lot more done, in some very different ways than what we now call science, at least as we know it empirically. And they will do it a lot faster.

What did Clausewitz write about? He wrote about the 'God of War': how wars at the start of the Napoleonic Era were fought with armies averaging 20,000 men, and ended with armies amassing up to 600,000! Can you even imagine armies of 500,000-600,000 men in one area, ready to kill you? We put down Napoleon and he was exiled, but his CAUSE, ending feudalism, won! Wherever he went, he implemented the Napoleonic Code, which ended feudal laws. In order to beat him, his enemies had to adopt nation-state armies, breaking the rationale of the Ancien Regime. And without his defense of Paris, the French Revolutionary Republic would have been crushed in less than a year. The idea of equality is RELATIVE. Sometimes methods are just better in certain areas; sometimes minds are just better at war! As Napoleon said, and I bet Einstein would agree, "Imagination rules the world!"
 
Last edited by a moderator:
  • #6
1. What makes you think this "Clausewitz" guy is right, and that what he wrote is the definitive lesson in the evolution of human thinking?

2. What makes you think that what he wrote is applicable to how science is evolving? After all, neither you nor he worked in science, and presumably and admittedly, you are ignorant of how it works. Just because you have seen it and read about it does not mean that you have a deep understanding of it. That's like writing about the history of France without understanding French and without ever setting foot within its borders.

Finally, in my opinion, this is verging on promoting your own "personal theory". I believe even "philosophy" and philosophical ideas are governed by the same criteria of publishing and having experts in the field take a crack at the published ideas. Otherwise, this is no different than a crackpot promoting his own science theories on an internet forum, something we have seen way too many times. So if you think your idea has merit, why don't you publish it rather than trying to sell it on some internet forum?

Zz.
 
  • Like
Likes weirdoguy
  • #7
DialogicalCatalyst said:
Second, I do not agree. Clausewitz, who wrote the eight-book work On War, writes about how outdated not just old weapons become, but old tactics. He writes that using outdated means and tactics is like fighting with wooden swords when someone comes in with steel and starts chopping off arms.
To the best of my knowledge, many of the tactics that von Clausewitz described are still in use, including principles of defense and offense, and the use of land features. Certainly, many of the weapons such as muzzle-loaded muskets and rifles, are obsolete, having been replaced by select-fire weapons using magazine-fed or belt-fed cartridges.
DialogicalCatalyst said:
Maybe knives are still used - very rarely in war. But are horses used in travel?
Ever hear of bayonets? And every special ops guy carries a fixed-blade knife.
As far as horses being used, US Army Special Forces teams used horses in the early battles with Taliban fighters in late 2001.
DialogicalCatalyst said:
Do Blacksmiths really matter in the era of Steel Mills?
In fact, yes they do. One example is Burt Munro, the New Zealander who rebuilt an old Indian motorcycle to capture a land-speed record for his motorcycle class. When he needed pistons of a certain size and composition, he simply cast and machined them himself, together with any other parts he needed. If the arts of the blacksmith truly are obsolete, why can you find so many YouTube videos about them?
 
  • Like
Likes Dale and Klystron
  • #8
Not an accusation but an observation: our current body of knowledge has become too great for a single mind to encompass, much less understand. So scientists, mathematicians, engineers, and maintenance technicians work in teams that communicate. Sometimes small teams -- my personal forte -- sometimes several large groups, such as those gathered for the Manhattan Project, to build the interstate highway system, and to build the World Wide Web.

Postmodernism may be an attempt to deal with, or disentangle from, this abundant knowledge flow, but creating and sustaining the various nodes of the Internet solves the information problem for you. My comparison of smart phones to ancient scrolls is not intended only as an analogy but as a solution for accessing and organizing so much information. The beauty of modern science and technology centers on access to knowledge, providing methods to dip into the immense flow. Our challenge remains to use knowledge wisely.
 
  • #9
DialogicalCatalyst said:
Clausewitz, who wrote the eight volume series On War, writes about how outdated not just old weapons are, but old tactics
And yet, the army still buys knives and soldiers still carry them into battle together with their rifles and night vision goggles. And West Point recommends that officers study Sun Tzu as well as Clausewitz. Clausewitz can write what he likes but the observable behavior of existing armies is that old technologies are still actually used today.

The advent of science and inductive reasoning did not get rid of math and deductive reasoning. I don’t see why the use of AI would get rid of science.

DialogicalCatalyst said:
Maybe I am wrong, but to me, it really looks like AI and Cyber-Minds working in networks can get a lot more done,
I personally wouldn't say you are wrong, but I would say you are naive about AI. Your attitude is fairly common among technophiles who are excited about the possibilities but unaware of the current limitations of AI. You have cited a few examples of AI successes, which are deservedly well publicized. But are you aware of some of the failures? Do you understand the limitations of the technology? Which classes of problems do they perform well on, and which do they perform poorly on? Why do they perform poorly on those problems?

Before you declare the end of the scientific method you should understand the limitations of the replacement technology. The scientific method is still used extensively with AI today, and will be for the foreseeable future.
 
Last edited:
  • Like
Likes Klystron
  • #10
After re-reading this entire thread I have an improved understanding of the original poster's position and ideas. I make two claims:
  1. Knowledge need not be lost. "Obsolete" skills can be maintained. Cultural historical societies, for example, pay participants to demonstrate old skills and reenact life in earlier cultures, for the educational value and as example to the young.
  2. Knowledge frees the mind yet remains a burden that requires constant judgement.
Demarcation -- separating science from superstition, frivolity, pseudo-science and fraud -- need not remain a problem if we avoid rigid thinking.

Suppose we arrive at the confluence of two great rivers. One river flowing brightly down from mountain glaciers stays cold; the water clear and, despite the cold temperature, flows rapidly. The second river meanders through peat bogs, meadows and forests; its warm water turbid with organic detritus and sluggish flow including many snags and stagnant backwaters.

Just past the confluence we can clearly mark the separation between the two rivers. The glacial water appears clear, feels cold, tastes fresh. The forest river water shows color, blocks light, feels warm, tastes unpleasant. Not far downstream the flows merge. The demarcation becomes unclear. The resultant river becomes diaphanous, the turbid water diluted, the clear water muddied.

People drinking downstream must carefully filter the water.
 
Last edited:
  • #11
DialogicalCatalyst said:
If you have the Manhatten Project, and no one else has - it makes their weapons obsolete.
The Soviet Union could have wiped out the Taliban in the 1980s, along with the rest of Afghanistan, using their nuclear arsenal. But they didn't. Why?

The US could do the same right now. Why don't we?
 
  • Like
Likes Dale
  • #12
ZapperZ said:
How "scientific", do you think, is your analysis? Do you think that using something that is very new and barely have any solid verification allows you to make a definitive conclusion of what will happen in the future?

[...]

I don't consider my analysis to really be scientific at all. Like I said, it is philosophical, not scientific.

As for what AIs can and cannot do, I would be careful of that, lest there be a repeat of the many last stands where people claimed computers would "never be able to play chess."

Which was then revised to "Well, they will never be among the best players."

Which was revised to "They will never beat THE top player."

Which has been revised, unfortunately for us all maybe, to "Well, we will always be able to pull the plug."

Let's just hope they never use magnetic transistors, because then the plug may not matter at all: https://www.extremetech.com/computi...000-times-less-power-than-silicon-transistors
 
Last edited by a moderator:
  • #13
DialogicalCatalyst said:
Like I said, it is philosophical, not scientific.
And on that note I think we can close the discussion. We prefer scientific discussions here.
 

