The implications of Socratic inquiry: Do 'experts' really know nothing?

  • Thread starter hypnagogue
In summary, the conversation discusses the idea of expertise and whether experts truly have knowledge in their field or if their expertise is simply an intuitive process based on experiential familiarity. The article "From Socrates to Expert Systems" argues that true expertise cannot be explicitly stated or captured by formal rules, and this is evident in the lack of success of expert systems in artificial intelligence research. The thread also brings up the idea of expertise in physical activities versus intellectual ones, and how the complexity of human behavior makes it difficult to fully understand and replicate with AI.
  • #1
hypnagogue
Staff Emeritus
Science Advisor
Gold Member
This post is a response to a minor point raised in the initial post of the thread
Socratic Inquiry in Modern Life. Since the response is tangential to the main question posed in the thread I thought it would be more appropriate to branch off to a new thread. Anyhow:

Originally posted by Another God and Dark Wing
Socrates spent his whole life walking around talking to people, inquiring into their beliefs and challenging them. He would approach people who believed themselves to be an authority on an issue, and ask them to explain to him what they knew. He would approach generals and ask them what bravery was, and what it takes to be brave. He would ask poets what love was, judges what justice was, priests what piety was, etc. In every case he would talk to these people, questioning them so they could show him the true meaning of such terms, thereby imparting such knowledge to him. In every case he found that not one person actually knew what they claimed to know.

I do not question the validity or usefulness of the Socratic method as an invaluable philosophical tool. But can we infer from Socratic inquiry that, because experts cannot state explicit rules for their decision-making processes, they actually know nothing about the field in which they supposedly excel? Or does this simply indicate that their expertise cannot be explicitly stated in words or adequately captured by formal rules?

Hubert Dreyfus contends the latter in his article "From Socrates to Expert Systems: The Limits and Dangers of Calculative Rationality," by way of analogy to modern efforts in the field of artificial intelligence. According to the Socratic viewpoint, if experts truly had knowledge in their field of expertise, they should be able to explicitly state their knowledge and thus the expertise that it grants them. AI researchers seeking to automate intelligent behavior have held the same belief, and thus have interviewed experts to translate their expert knowledge into explicit facts and rules, such that the knowledge can be formally programmed into a computer. Such computers imbued with expert knowledge are called expert systems.

Theoretically speaking, if expertise is really just the logical unfolding of an explicit set of facts and rules, then an expert system should have a tremendous advantage over a human expert-- it should be doing exactly what the human expert is doing, only with a much vaster database, much faster calculations, and less room for error to boot. But in actuality, research on expert systems has floundered, since expert systems do not perform on par with their human counterparts.
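To make the "facts and rules" picture concrete, here is a minimal sketch of the kind of thing a classical expert system does, written in Python purely for illustration. The automotive facts and rules below are invented by me and are not taken from Dreyfus or from any real system; they just show the general shape: explicit facts, explicit if-then rules, and a forward-chaining loop that fires whichever rules apply.

Code (Python):

# A minimal, purely illustrative sketch of the "expert system" idea:
# knowledge captured as explicit facts plus if-then rules, applied by
# forward chaining. The facts and rules here are invented examples.

facts = {"engine_cranks", "no_spark"}

# Each rule: (set of required facts, fact to conclude)
rules = [
    ({"engine_cranks", "no_spark"}, "ignition_fault"),
    ({"ignition_fault", "old_spark_plugs"}, "replace_spark_plugs"),
    ({"engine_cranks", "no_fuel_at_injector"}, "fuel_system_fault"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are satisfied,
    adding its conclusion to the fact base, until nothing changes."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# e.g. {'engine_cranks', 'no_spark', 'ignition_fault'}

The point is that everything such a program can ever conclude is already latent in the rule list; the knowledge engineer's whole job is to extract those rules from the human expert, which is exactly the step Dreyfus says cannot fully succeed.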

Dreyfus concludes that the disappointment of expert systems has stemmed from a fundamental misunderstanding of expertise, arguing that true expertise does not stem from logical operations on a set of facts and rules, but rather is an intuitive, implicit process based on experiential familiarity with a vast array of special cases. Thus, experts can have expertise without knowing how to explicitly state the reasoning behind their knowledge and deductions for the same reasons that a normal, healthy human can be an expert at facial recognition without being able to explicitly state exactly how he discerns one face from the next.

This is just a brief recap of the article, to which a link is posted below. If you find the above interesting or provocative then you should give the entire article a read; it will be well worth your time.

http://ist-socrates.berkeley.edu/~hdreyfus/html/paper_socrates.html
 
  • #2
I do not question the validity or usefulness of the Socratic method as an invaluable philosophical tool. But can we infer from Socratic inquiry that, because experts cannot state explicit rules for their decision-making processes, they actually know nothing about the field in which they supposedly excel? Or does this simply indicate that their expertise cannot be explicitly stated in words or adequately captured by formal rules?

Well, I too would have to agree with the latter. As you may have noticed in the past, I pride myself on my dirtbiking ability. I've been doing it longer than some people my age have been walking properly.

Anyhow (and just for the record, I DO NOT need to prove or justify my riding ability), as much as I've learned about riding a motorcycle, I could consider myself an expert. However, if you, assuming you've no experience on a bike, come to me and ask me how to become an expert motorcycle rider, about all you'll get from me is a blank stare and a few pointers on not busting your ass. I could try to write a book on the subject, but just because I write it and you read it, that doesn't make you an expert.

And see, here is a clue. EXPERT. Very similar to EXPERIENCE. You need the latter to become the former. Sometimes it is difficult and/or impossible to put the experience into words that will properly inform an observer.
 
  • #3
Well, there is a difference between being an expert at a physical activity and being an expert at an intellectual one. With dirtbiking, you train your muscle memory and reflexes. There are also things that you learn that you can recall at any time (the "basic pointers") that a non-expert probably would not know. There is also circumstantial knowledge that is generally only unlocked to your consciousness with the correct cue (stimulus). Being in a certain situation can remind you what to do in that situation, while you might not be able to freely recall that information while you are experiencing something different.

As far as AI goes, that is a clear-cut example that demonstrates how complex animal (especially human) behavior is, and how little we really know about it. There are sets of behavior that we lump into categories (being aggressive, quick to laugh) that have many complex interactions underlying them. Just imagine if you tried to catalogue all the things that you find funny. The list would be unbearably long. No one can quite describe what "funny" is, but you can come up with some general rules. There has to be contrast or conflict with expectation, but a joke generally can't be too controversial. The material must be relatively new to the recipient. The list of criteria could go on and on...and that's just humor. For a person to keep track of all this and program it into a computer for every aspect of intelligence would be an incredible task for a programmer. The brain has billions of neurons at the least. With all those connections, it's no wonder that we haven't been able to reproduce intelligence quite as we observe it in humans. Our limited knowledge of intelligence, and the time, patience, and understanding needed to implement the knowledge that we do have, are, I believe, what are limiting our efforts in AI.

I think that the "lumping together" that I described above is what the "intuitive" part of knowledge is. We don't have the ability or the desire to analyze every little detail. This brings problems when we try to translate this information to a system that only deals with rather irreducible details.
 
  • #4
Originally posted by Dissident Dan
Well, there is a difference between being an expert at a physical activity and being an expert at an intellectual one. With dirtbiking, you train your muscle memory and reflexes. There are also things that you learn that you can recall at any time (the "basic pointers") that a non-expert probably would not know. There is also circumstantial knowledge that is generally only unlocked to your consciousness with the correct cue (stimulus). Being in a certain situation can remind you what to do in that situation, while you might not be able to freely recall that information while you are experiencing something different.

You make a distinction between physical and mental capacities where I think there should be a distinction between declarative and procedural memory. Much of the time, intellectual ability is exclusively associated with the former and physical ability with the latter. But the points where these associations break down are exactly where this thread gets interesting.

First let's drop some Wikipedia knowledge to get on the same page:

Declarative memory is the aspect of memory that stores facts and figures. It applies to standard textbook learning. It is based on pairing the stimulus and the correct response. For example, the question "What is the capital of Sierra Leone?" and the answer "Freetown". The name declarative comes from the fact that we can explicitly "ask" our brain to make a connection between a pair of stimuli. Declarative memory is subject to forgetting and requires repetition to last for years. Declarative memories are best established by using active recall combined with mnemonic techniques and spaced repetition.

Declarative memory can be divided into episodic memory, about things you have personally experienced (e.g. what you had for breakfast), and semantic memory, about general knowledge of the world (e.g. what is the capital of Canada).

Procedural memory is a memory of skills and procedures. As compared with declarative memory, it is governed by different mechanisms and different brain circuits. An example of procedural learning is learning to ride a bike, learning to touch-type, learning to swim, etc. There is no simple stimulus-response pairing. Instead, the brain is trying to figure out the optimum memory pattern by trial and error. Procedural memory can be very durable.

Studies of people with certain brain injuries (such as damage to the hippocampus) suggest that procedural memory and episodic memory use different parts of the brain, and can work independently. For example, some patients are repeatedly trained in a task and remember previous training, but don't improve at the task (functioning declarative memory, damaged procedural memory). Other patients put through the same training can't recall having been through the experiment, but their performance in the task improves over time (functioning procedural memory, damaged declarative memory).

So, much of our intellectual activity is of the declarative type and much of our physical activity is procedural. But given the article by Dreyfus, I think it should be well-established that intellectual expertise is really more akin to procedural memory. In fact, Dreyfus contends that the process of becoming an expert in an intellectual field is precisely the process of shifting our intellectual prowess in this field from declarative memory (explicit rules) to procedural memory (implicit intuition).
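To put the contrast in homespun computational terms (my own toy analogy, not anything from the article or the Wikipedia excerpts above): declarative memory behaves like a lookup you can query explicitly, while procedural memory behaves more like a setting that only becomes accurate through repeated, error-driven practice, with no single fact you could read back out.

Code (Python):

# A toy analogy for the declarative/procedural contrast. The numbers and
# the "skill" below are invented purely for illustration.
import random

# Declarative: an explicit stimulus-response pairing, recalled on demand.
capitals = {"Sierra Leone": "Freetown", "Canada": "Ottawa"}
print(capitals["Sierra Leone"])

# Procedural: repeatedly try an action and nudge it toward what worked.
random.seed(0)
target = 0.7          # the "skill" to be acquired (say, a lean angle or timing)
setting = 0.0
for _ in range(200):  # practice trials
    attempt = setting + random.gauss(0, 0.1)
    error = target - attempt
    setting += 0.1 * error   # adjust by trial and error
print(round(setting, 2))     # ends up close to 0.7, but is never stored as a fact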

Now, we can succinctly restate the initial idea of the thread: Socrates demanded that experts express explicitly and linguistically the underlying principles of their expertise, as if expertise belonged to declarative memory-- but in reality, expertise comes from procedural memory. Similarly, AI expert systems have failed because they have tried to duplicate human procedural mastery with purely 'declarative' methods.
 
  • #5
Which is more essential to an expert: her personal experiences or her ability to relate them to others?
 
  • #6
I don't think one is more important than the other in a really meaningful way. That's kind of like asking "what is more essential to a car, that it has a properly functioning motor or that its tank has gas?" You need both. If you have one and not the other, then you're not going to get anywhere, although a weakness in one can probably be compensated for a bit by a strength in the other. But you really need both areas-- the underlying well of experience and the ability to make intelligent use of those experiences-- to be very strong to be an expert.

edit: although I think it would be safe to say that an expert is differentiated from a non-expert much more by the number of his/her experiences than by his/her ability to relate these experiences to novel situations. The latter is basically a consequence of pattern recognition, which all humans are good at to some extent; but the vast majority of non-experts do not have even a sliver of the actual experience that an expert has in his/her field.
 
Last edited:
  • #7
Originally posted by hypnagogue
So, much of our intellectual activity is of the declarative type and much of our physical activity is procedural. But given the article by Dreyfus, I think it should be well-established that intellectual expertise is really more akin to procedural memory. In fact, Dreyfus contends that the process of becoming an expert in an intellectual field is precisely the process of shifting our intellectual prowess in this field from declarative memory (explicit rules) to procedural memory (implicit intuition).

Well, that doesn't really make any sense, because that makes it more difficult to impart knowledge. That is really only useful for being an expert at performing an activity. If you want to be an expert on cell biology, you better damn sure know some explicit rules.
 
  • #8
Originally posted by Dissident Dan
Well, that doesn't really make any sense, because that makes it more difficult to impart knowledge. That is really only useful for being an expert at performing an activity. If you want to be an expert on cell biology, you better damn sure know some explicit rules.

No one is saying that an expert doesn't know explicit rules. Indeed, learning the explicit rules is the first step on the road to becoming an expert. That said, there is more to the expert's expertise than just those explicit rules. If there were not, I could go out and buy a book with all the explicit rules of cellular biology and would effectively be an expert. Rather, the expert's familiarity with numerous different experiential situations in the field coalesces into a greater and more subtle and nuanced understanding of the field. Although this greater understanding had its root in first learning all the explicit rules, the greater understanding itself is not confined to these explicit rules, nor can it necessarily be explicitly stated in a new/revised set of rules.
 
  • #9
If the expert knows the explicit rules, then his instructions should be able to be converted into computer form, right?
 
  • #10
Well, that's the thing. The expert's decision making is not reducible to rules, in practice at least-- decisions are made on a case-by-case basis, for thousands of permutations of special cases. There may be a decision an expert makes that has no clear precedent but is nonetheless valid. Such a decision could not be anticipated and written down as a rule beforehand, until the special case itself was presented to the expert.

The upshot of all this is just that the expert does not have instructions in his head saying "in case of such and such, do this," which is what you would need to program a classical AI expert system. The 'rules' for special cases do not exist a priori but are dynamically generated on the fly by the expert as a function of his deeper procedural understanding, which itself is a function of the expert's declarative understanding, experience in the field, and intelligence.

edit: In fact, this is precisely what bugged both Socrates and modern AI researchers. When asked to expound on his/her understanding of something beyond the basic declarative information that can be found in a textbook, an expert tends to give examples of decisions made in specific cases but not general rules underlying them-- precisely because there are no general rules underlying them, or rather these 'rules' are implicitly encoded in the neural structure of the expert's brain.

Simulated neural networks could probably do this sort of thing down the road as they get more powerful and sophisticated and as we obtain a greater understanding of how the human brain works, but there are lots of philosophical problems in encoding such decision-making processes in a program based on explicitly stated rules.
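Just to make the contrast concrete, here is a toy sketch (my own illustration, not anything from the article or from real expert-system research) of a tiny simulated neural network. Instead of being handed explicit if-then rules, it adjusts its connection weights from examples until the desired behavior emerges; the resulting "rule" lives implicitly in the weights rather than in any statement you could read off.

Code (Python):

# A toy two-layer network learning XOR from examples rather than rules.
import numpy as np

rng = np.random.default_rng(0)

# XOR: a mapping that cannot be captured by a single linear rule.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(scale=1.0, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=1.0, size=(8, 1))
b2 = np.zeros(1)

lr = 1.0
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (gradient of squared error).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]

Notice that after training there is no line of the program you can point to and say "here is the rule for XOR"; the knowledge is smeared across the weights, which is part of what I mean by the 'opacity' problem.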
 
Last edited:
  • #11
In the situation of an unforeseen special case, it could be that an expert just extrapolates already explicitly known rules, changing them based on the changed parameters, so I do not think that that is a good example for your case.

As far as Socrates goes, I thought that the whole lesson of that tale was about people thinking they know things when they really don't, because they just follow tradition and swallow the crap handed down to them.
 
  • #12
I think it was mostly the fact that no one could describe what it was that motivated them. Generals knew not what bravery was, and judges did not understand justice. Priests, who center their lives, and their followers' lives, around a religion, cannot satisfactorily describe faith. Most of the things that are named are emotions, not skills. Socrates was not going to the bronzesmith to have the man describe how he knew the proper way to shape metal; rather, he would ask the man the meaning of art. He would not ask a warrior how to kill, but would question the meaning of anger, of honor, of patriotism, and of fear. He would not ask a mathematician how to memorize equations, but he would ask the man the meaning of math.
 
  • #13
Originally posted by Dissident Dan
In the situation of an unforeseen special case, it could be that an expert just extrapolates already explicitly known rules, changing them based on the changed parameters, so I do not think that that is a good example for your case.

Of course it is the case that an expert will depend upon his knowledge of rules and previous experiences in order to solve a problem. In this sense, we might say that the expert simply extrapolates the explicit rules for every special case he sees, bending them to the needs of the problem. But once we get sufficiently many of these cases, the decision making process looks less and less like a compact, efficient textbook rule applicable to all problems and more like just a huge set of special cases. Moreover, if there were a mechanistic way to extrapolate the rules as a function of the parameters of special cases, this would simply amount to a sort of meta-rule. (Note that formally defined heuristic rules are included in this category.) But the point is that these extrapolations are made intuitively, not mechanically. We are left with two options: either we can say no such meta-rules exist, or we can say that they are completely opaque to current human understanding. Either way, they clearly are not in the same class as those things that we normally think of as 'rules.'

As far as Socrates goes, I thought that the whole lesson of that tale was about people thinking they know things when they really don't, because they just follow tradition and swallow the crap handed down to them.

Agreed, to some extent. That is certainly part of it-- people not questioning their knowledge. But with procedural knowledge, it doesn't matter how much you question it-- you still can't explain it very well. So it becomes difficult to distinguish those cases where the person in question should be able to explain himself but cannot because he has not adequately examined himself from those cases where he cannot explain himself simply because of the very nature of the thing he is trying to explain.
 
  • #14
Originally posted by Pyrite
I think it was mostly the fact that no one could describe what it was that motivated them. Generals knew not what bravery was, and judges did not understand justice. Priests, who center their lives, and their followers' lives, around a religion, cannot satisfactorily describe faith. Most of the things that are named are emotions, not skills. Socrates was not going to the bronzesmith to have the man describe how he knew the proper way to shape metal; rather, he would ask the man the meaning of art. He would not ask a warrior how to kill, but would question the meaning of anger, of honor, of patriotism, and of fear. He would not ask a mathematician how to memorize equations, but he would ask the man the meaning of math.

Part of describing justice is deciding who is just and who is to be punished, and part of describing faith is deciding who is pious and who is not, and so on. Certainly these more 'emotional' things are not built on foundations as formally rigorous as science, but the basic premise for experts in both is the same: they cannot readily produce rules that can generate their decisions. At best, they can give you specific situational examples illustrating their understanding of the topic, but they can't give you a handful of general rules to link all these disjoint cases together.

Again from the Dreyfus article at http://ist-socrates.berkeley.edu/~hdreyfus/html/paper_socrates.html :

In one of his earliest dialogues, The Euthyphro, Plato tells us of such an encounter between Socrates and Euthyphro, a religious prophet and so an expert on pious behavior. Socrates asks Euthyphro to tell him how to recognize piety: "I want to know what is characteristic of piety ... to use as a standard whereby to judge your actions and those of other men." But instead of revealing his piety-recognizing heuristic, Euthyphro does just what every expert does when cornered by Socrates. He gives him examples from his field of expertise, in this case mythical situations in the past in which men and gods have done things which everyone considers pious. Socrates gets annoyed and demands that Euthyphro, then, tell him his rules for recognizing these cases as examples of piety, but although Euthyphro claims he knows how to tell pious acts from impious ones, he cannot state the rules which generate his judgments.
 
  • #15
The Euthyphro example is not a good one to relate to science. It is an example of one of the emotional "expertises".

------------------------

One thing about trying to reduce knowledge to a discrete set of rules is that there can often be so many cases that you will not know how you would evaluate a certain case until you experience it.

There is also the problem of unpredictability. Physics (other than quantum physics) is a good example where you can easily dictate discrete rules that a computer can use. However, if you look at behavioral biology, things are just too uncertain. Nervous systems are so complex that you cannot predict with 100% accuracy. So what researchers do is look at aggregates and understand generalities.
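To illustrate the contrast being drawn here (with numbers invented purely for illustration, not from any real study): a deterministic physical rule can be handed to a computer and applied exactly, whereas for noisy behavioral measurements about the best you can do is summarize aggregates.

Code (Python):

# Illustrative contrast: a deterministic physical rule a computer can
# apply exactly, versus noisy observations where only aggregate
# statistics are informative. All numbers are made up.
import random

# Deterministic rule: distance fallen after t seconds.
def fall_distance(t, g=9.81):
    return 0.5 * g * t**2

print(fall_distance(2.0))  # always 19.62

# "Behavioral" measurements: individual trials vary unpredictably,
# so we summarize with an aggregate rather than predict each case.
random.seed(1)
reaction_times = [0.25 + random.gauss(0, 0.05) for _ in range(1000)]
mean_rt = sum(reaction_times) / len(reaction_times)
print(round(mean_rt, 3))  # roughly 0.25, but no single trial is certain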
 
  • #16
All of the above seems to me to be a perfect example of why Richard Feynman said that we do not really understand something until we can explain it to our mother. There is a difference between knowing something to the point that one is an expert in that field and understanding it well enough to be able to articulate it simply enough for a "mother" to understand it.

We often have knowledge but do not have it well enough organized in our own minds to be able to impart it understandably to others who are not experts in the same field. This does not mean that we are not experts or are not knowledgeable enough to work successfully in our field. It simply means that we have not verbalized, or cannot verbalize, that which we know well enough for a lay person to really understand it.

As far as AI is concerned, computers have to work with absolute rules and will follow those absolute rules in every case. This amounts to nothing more than repeating, over and over again, that which was already known and programmed into them. Our minds do not work with absolute rules but constantly make new connections and interpretations of the rules. We dismiss that which we think is obviously absurd or useless, but may follow even the most unlikely path if we think that it might lead somewhere. If it does lead somewhere we call it intuitive thinking, because we are not even aware of how our minds made the seemingly unrelated connections. It is sometimes a case of conceptual thinking instead of linear cause-and-effect reasoning. Computers cannot do any of this, and there is as yet no way to design hardware or write programs that can. If hardware could be designed and software written to solve a problem or perform a task, then that problem is already solved, the way to the solution known, or the task already done. Computers do not and cannot create new thinking or knowledge. They are simply very fast and accurate at doing what we already know how to do.
 
  • #17
Royce, could your description of computers following rigid rules be based too closely on present day digital computers? There are such things as stochastic computers.
 
  • #18
It is based solely on today's digital computers, as that is what I am most familiar with. I know nothing about stochastic computers. I have read a little about attempts to design circuits that would mimic neural networks to a small degree (learn), but at that time they were having problems with them. I would appreciate it if you told me (us) a little about them.
 
  • #19
Originally posted by Dissident Dan
The Euthyphro example is not a good one to relate to science. It is an example of one of the emotional "expertises".

As I said... "Certainly these more 'emotional' things are not built on foundations as formally rigorous as science, but the basic premise for experts in both is the same: they cannot readily produce rules that can generate their decisions. At best, they can give you specific situational examples illustrating their understanding of the topic, but they can't give you a handful of general rules to link all these disjoint cases together." I'm sure a present-day Socrates would question professional scientists and come to much the same conclusions as he did for those experts that actually existed in his own time.

One thing about trying to reduce knowledge to a discrete set of rules is that there can often be so many cases that you will not know how you would evaluate a certain case until you experience it.

There is also the problem of unpredictability. Physics (other than quantum physics) is a good example where you can easily dictate discrete rules that a computer can use. However, if you look at behavioral biology, things are just too uncertain. Nervous systems are so complex that you cannot predict with 100% accuracy. So what researchers do is look at aggregates and understand generalities.

But we still have examples of so called expert systems that cannot reproduce the success of their human counterparts, even in scientific/formal endeavours, for all the reasons explained above. Just because the basic tenets of a field can be stated succinctly in formal notation does not mean that advanced problem solving in that field is possible by mechanical manipulation of the basic formal system alone.
 
  • #20
Originally posted by Royce
As far as AI is concerned, computers have to work with absolute rules and will follow those absolute rules in every case. This amounts to nothing more than repeating, over and over again, that which was already known and programmed into them. Our minds do not work with absolute rules but constantly make new connections and interpretations of the rules. We dismiss that which we think is obviously absurd or useless, but may follow even the most unlikely path if we think that it might lead somewhere. If it does lead somewhere we call it intuitive thinking, because we are not even aware of how our minds made the seemingly unrelated connections. It is sometimes a case of conceptual thinking instead of linear cause-and-effect reasoning. Computers cannot do any of this, and there is as yet no way to design hardware or write programs that can. If hardware could be designed and software written to solve a problem or perform a task, then that problem is already solved, the way to the solution known, or the task already done. Computers do not and cannot create new thinking or knowledge. They are simply very fast and accurate at doing what we already know how to do.

Computers aren't necessarily as rigid as you portray them to be. We can program AI software with learning algorithms, whereby the software will rewrite itself to some extent based on input it has received in the past. Of course, the mechanisms of such learning algorithms themselves are ultimately rigid structures, in that a classically programmed computer can only 'learn' what we program it to 'learn' and cannot generate new learning algorithms or heuristics on its own. More concisely put, it cannot come to see what is really salient about a problem beyond the strict formal limitations we bound it with; it's too much syntax, not enough semantics.
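As a purely illustrative sketch of what I mean (a textbook-style perceptron, not any particular real system): the program's behavior changes with the examples it has seen, but the update rule that does the 'learning' is itself fixed in advance by the programmer.

Code (Python):

# A small sketch of a learning algorithm: behavior adapts to data,
# yet the update rule is hand-written and never changes.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) with label in {0, 1}."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = label - pred
            # The 'learning' is just this fixed, pre-programmed rule.
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# Learns the AND function from examples instead of being told the rule.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)
print([(1 if w1 * x1 + w2 * x2 + b > 0 else 0) for (x1, x2), _ in data])
# [0, 0, 0, 1]

The program ends up doing something it was never explicitly told to do, but only within the bounds the programmer fixed beforehand; it cannot invent a new kind of update rule or decide for itself what counts as salient.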

However, I do believe that the central problem is that the classical approach to AI is indeed too rigid, learning algorithms or not. The classical approach involves working with explicitly defined variables, which implicitly but necessarily places absolute limits on what the system can achieve. The solution to this, I believe, lies in using a more pliable structure where variables (or, equivalently, representational structures) are not explicitly defined as such but are allowed to be implicitly generated by the system itself, as a function of given inputs. Such systems are much better equipped to 'find out' what is salient in a given problem of their own accord, and thus seem to achieve a closer approximation to what we think of as semantic understanding of a problem, as opposed to 'blind' syntactic shuffling of pre-defined symbols.

Simulated neural networks provide just such a computational framework. The limitations we have run into so far with neural networks are, I believe, not so much inherent in the systems themselves as they are a product of our difficulty in learning how to create and employ them effectively. This problem will probably be around for a while, given the great complexity and 'opacity' of such systems (i.e., if a neural network puts out a surprising or interesting answer, it is not at all a trivial matter to analyze the system and see just how it produced this answer; plus, since the representational structure of such systems is implicitly encoded in the strengths of connections between nodes, it's not clear what the meaning of any given node or set of nodes is in the context of the given problem).

As far as stochastic computers go, I'm not sure they're relevant to the current discussion. I admit that I have little familiarity with stochastic computers, but unless they can compute functions that are not effectively computable in the sense defined in the Church-Turing thesis, they can't do anything that a normal digital computer can't do as well.
 
  • #21
That is my point exactly. We design the circuits and we attempt to program them. All of the intelligence that is in the system is derived from us. We put it there. There is no such thing to date as AI.
I think that until we learn how our brain actually works, how it comes up with new ideas, relationships, and connections, and how we ourselves formulate algorithms, we will not be able to build real AI machines that are any more intelligent than an ant.
 
  • #22
Originally posted by hypnagogue

But we still have examples of so called expert systems that cannot reproduce the success of their human counterparts, even in scientific/formal endeavours, for all the reasons explained above. Just because the basic tenets of a field can be stated succinctly in formal notation does not mean that advanced problem solving in that field is possible by mechanical manipulation of the basic formal system alone.

If you're talking physics, chemistry, or any branch of mathematics, you had better be able to do it through mechanical manipulation of a basic formal system.

And there are also fields like biology where they can defend statements with "That's what we observed," which is a perfectly valid defense.

In either case, a true expert should be able to explain.
 
  • #23
Originally posted by Dissident Dan
If you're talking physics, chemistry, or any branch of mathematics, you had better be able to do it through mechanical manipulation of a basic formal system.

And there are also fields like biology where they can defend statements with "That's what we observed," which is a perfectly valid defense.

In either case, a true expert should be able to explain.

There's a difference between explaining expertise in general and explaining how specific results are derived. For instance, of course a mathematician must be able to show formally, say, how he proved a theorem; but can the mathematician show formally how he generated the insight into the nature of the problem that he used to get the idea of using this specific method of proof in the first place?

To explain expertise means to explain how one is to become an expert, not to explain specific results of expert analysis after the fact. A chess grandmaster can show you why a certain move he made was a good one, but he can't show you a set of rules by which you too would have been able to see this move and its benefits, and in general how to play chess like a grandmaster.
 
  • #24
One usually has to assume they don't know before they can know more. If one were to assume that all experts are at the very top of all knowledge and nothing more remains, then I'm sure many people have been thinking that same thing for a very long time. On the other hand, I distrust most of the recent knowledge today that comes in the form of a book, because if one really sought truth, one could communicate more efficiently through free worldwide discussion; a book is only a one-sided argument, and usually just an opinion at that. Even if it is a convincing argument, why should I have to pay such high prices to be enlightened? Isn't it more important to get things out to as many arguers as possible than to make a dollar? Most of the older information has too much dust on it to sell, and so we don't see it, but visit an antique book shop and you'll see what I mean.
A grandmaster's ability is mainly long years of practice and method, and can be learned like riding a bike; there's nothing magical about it except that they seem to have grown to love what they do more than anything else.
 
  • #25
I'm sorry Dave. I can't do that.

This is an interesting thread.
I read an account by a knowledge engineer once regarding an attempt to create an expert system in crystal growing that ran into some of the problems cited by Hubert Dreyfus. I've also encountered the problem in my own field of work.
I'm not sure it's fair to AI to ask for perfection. The criticism is often that the program works fine up to a point and then fails miserably. People dwell on the fact that the system can add fine but can't work with tensors. Perhaps the current paradigm in AI necessarily leads to brittle expert systems. People talk about how massive the input to the computer has to be to get a small skill. Von Neumann said that if you can tell me how to do something, I can program it into a computer.
In some respects I think intuition is an evasive word that hides what is really going on in humans. I'm inclined to think intuition is thinking on automatic. People don't really understand what thinking is.
Ask an actor how he acts, and the self-conscious attention to what he is doing and why can really mess up his acting.
Ask a university student to define some small common word like A or THE. Most can't define it, and yet they use such words all the time.
At the same time, I wonder about the ideas regarding the Mechanism of Mind advanced by Dr. Edward de Bono in the late 1960s regarding self-organizing information systems, which are now part of neural network ideas. Dr. de Bono also had a few things to say about Socrates, as did Isaac Asimov. Socrates was a real gadfly.
Dr. Hestenes made some interesting comments regarding chess expertise in one of his essays on Newtonian Games in Physics.
 

1. What is Socratic inquiry?

Socratic inquiry is a method of questioning and critical thinking developed by the philosopher Socrates. It involves asking open-ended questions in order to challenge assumptions, explore complex ideas, and arrive at deeper understanding.

2. How does Socratic inquiry challenge the idea of "experts"?

Socratic inquiry challenges the idea of "experts" by emphasizing the importance of questioning and critical thinking over relying blindly on the authority of so-called experts. Socrates believed that true knowledge comes from questioning and examining one's own beliefs rather than simply accepting the opinions of others.

3. What are the implications of Socratic inquiry for modern society?

The implications of Socratic inquiry for modern society are far-reaching. It encourages people to think for themselves and not blindly accept information from authority figures. It also promotes open-mindedness and critical thinking, which are essential skills for navigating a complex and ever-changing world.

4. Can Socratic inquiry be applied in all fields of study?

Yes, Socratic inquiry can be applied in all fields of study. Any subject, from science to philosophy to art, can benefit from the questioning and critical thinking involved in Socratic inquiry. It is a universal method of learning and understanding.

5. How does Socratic inquiry impact the learning process?

Socratic inquiry can greatly enhance the learning process by encouraging active engagement and critical thinking. It allows individuals to challenge their own assumptions and arrive at a deeper understanding of complex ideas. It also promotes a more collaborative and open-minded approach to learning, as it encourages individuals to listen to and learn from each other's perspectives.
