The implications of Socratic inquiry: Do 'experts' really know nothing?

  1. Sep 22, 2003 #1

    hypnagogue

    Staff Emeritus
    Science Advisor
    Gold Member

    This post is a response to a minor point raised in the initial post of the thread Socratic Inquiry in Modern Life. Since the response is tangential to the main question posed in that thread, I thought it would be more appropriate to branch off into a new thread. Anyhow:

    I do not question the validity or usefulness of the Socratic method as an invaluable philosophical tool. But can we infer from Socratic inquiry that, since experts cannot state explicit rules for their decision-making processes, they actually know nothing about the field in which they supposedly excel? Or does this simply indicate that their expertise cannot be explicitly stated in words or adequately captured by formal rules?

    Hubert Dreyfus contends the latter in his article "From Socrates to Expert Systems: The Limits and Dangers of Calculative Rationality," by way of analogy to modern efforts in the field of artificial intelligence. According to the Socratic viewpoint, if experts truly had knowledge in their field of expertise, they should be able to explicitly state that knowledge and thus the expertise it grants them. AI researchers seeking to automate intelligent behavior have held the same belief, and thus have interviewed experts in order to translate their expert knowledge into explicit facts and rules, so that the knowledge can be formally programmed into a computer. Such programs imbued with expert knowledge are called expert systems.

    Theoretically speaking, if expertise is really just the logical unfolding of an explicit set of facts and rules, then an expert system should have a tremendous advantage over a human expert-- it should be doing exactly what the human expert is doing, only with a much vaster database, much faster calculations, and less room for error to boot. But in actuality, research on expert systems has foundered, since expert systems do not perform on par with their human counterparts.
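    To make the "explicit facts and rules" picture concrete, here is a minimal sketch (in Python) of how a classical expert system works: knowledge encoded as if-then production rules, applied by forward chaining. The facts and rules below are invented for illustration; real systems such as MYCIN encoded hundreds of such rules.

        # A minimal rule-based expert system: explicit facts plus if-then
        # rules, applied by forward chaining. The medical-flavored facts and
        # rules are invented for illustration, not taken from any real system.

        facts = {"fever", "cough"}

        # Each rule: if every condition is a known fact, conclude the consequent.
        rules = [
            ({"fever", "cough"}, "possible_flu"),
            ({"possible_flu", "fatigue"}, "recommend_rest"),
        ]

        def forward_chain(facts, rules):
            """Repeatedly fire any rule whose conditions are all satisfied."""
            derived = set(facts)
            changed = True
            while changed:
                changed = False
                for conditions, consequent in rules:
                    if conditions <= derived and consequent not in derived:
                        derived.add(consequent)
                        changed = True
            return derived

        print(forward_chain(facts, rules))
        # -> {'fever', 'cough', 'possible_flu'}; the second rule never fires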

    Dreyfus concludes that the disappointing performance of expert systems has stemmed from a fundamental misunderstanding of expertise, arguing that true expertise does not stem from logical operations on a set of facts and rules, but rather is an intuitive, implicit process based on experiential familiarity with a vast array of special cases. Thus, experts can have expertise without knowing how to explicitly state the reasoning behind their knowledge and deductions, for the same reason that a normal, healthy human can be an expert at facial recognition without being able to state exactly how he discerns one face from the next.

    This is just a brief recap of the article, to which a link is posted below. If you find the above interesting or provocative then you should give the entire article a read; it will be well worth your time.

    http://ist-socrates.berkeley.edu/~hdreyfus/html/paper_socrates.html
     
  3. Sep 22, 2003 #2

    megashawn

    Science Advisor

    Well, I too would have to agree with the latter. As you may have noticed in the past, I pride myself on my dirtbiking ability. I've been doing it longer than some people my age have been walking properly.

    Anyhow (and just for the record, I DO NOT need to prove or justify my riding ability), as much as I've learned about riding a motorcycle, I could consider myself an expert. However, if you, assuming you've no experience on a bike, come to me and ask how to become an expert motorcycle rider, about all you'll get from me is a blank stare and a few pointers on not busting your ass. I could try to write a book on the subject, but just because I write it and you read it, that doesn't make you an expert.

    And see, here is a clue. EXPERT. Very similar to EXPERIENCE. You need the latter to become the former. Sometimes it is difficult or impossible to put the experience into words that will properly inform an observer.
     
  4. Sep 23, 2003 #3
    Well, there is a difference between being an expert at a physical activity and being an expert at an intellectual one. With dirtbiking, you train your muscle memory and reflexes. There are also things that you learn that you can recall at any time (the "basic pointers") that a non-expert probably would not know. There is also circumstantial knowledge that is generally only unlocked to your consciousness by the correct cue (stimulus). Being in a certain situation can remind you what to do in that situation, while you might not be able to freely recall that information while you are experiencing something different.

    As far as AI goes, that is a clear-cut example that demonstrates how complex animal (especially human) behavior is, and how little we really know about it. There are sets of behavior that we lump into categories (being aggressive, quick to laugh) that have many complex interactions underlying them. Just imagine if you tried to catalogue all the things that you find funny. The list would be unbearably long. No one can quite describe what "funny" is, but you can come up with some general rules. There has to be contrast or conflict with expectation, but a joke generally can't be too controversial. The material must be relatively new to the recipient. The list of criteria could go on and on... and that's just humor. Keeping track of all this and programming it into a computer, for all aspects of intelligence, would be an incredible task. The brain has billions of neurons; with all those connections, it's no wonder that we haven't been able to reproduce intelligence quite as we observe it in humans. Our limited knowledge of intelligence, and the time, patience, and understanding needed to implement the knowledge we do have, are, I believe, what limit our efforts in AI.
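    As a toy illustration of how such a catalogue might start (and why it never ends), here is a sketch in Python; every predicate and threshold below is invented and deliberately crude:

        # Trying to catalogue "funny" as explicit rules. Each test is a crude,
        # invented stand-in for one of the criteria mentioned above.

        def violates_expectation(joke):
            # "there has to be contrast or conflict with expectation"
            return joke.get("has_twist", False)

        def too_controversial(joke):
            return joke.get("controversy", 0) > 7   # arbitrary cutoff

        def already_known(joke, listener):
            # "the material must be relatively new to the recipient"
            return joke["id"] in listener["heard_before"]

        def is_funny(joke, listener):
            return (violates_expectation(joke)
                    and not too_controversial(joke)
                    and not already_known(joke, listener))
            # ...and dozens more criteria that we cannot even articulate

        joke = {"id": 1, "has_twist": True, "controversy": 3}
        listener = {"heard_before": set()}
        print(is_funny(joke, listener))   # True -- but only by these crude rules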

    I think that the "lumping together" I described above is what the "intuitive" part of knowledge is. We don't have the ability or the desire to analyze every little detail. This creates problems when we try to translate this information to a system that only deals with rather irreducible details.
     
  5. Sep 24, 2003 #4

    hypnagogue

    Staff Emeritus
    Science Advisor
    Gold Member

    You make a distinction between physical and mental capacities where I think there should be a distinction between declarative and procedural memory. Much of the time, intellectual ability is exclusively associated with the former and physical ability with the latter. But the points where these associations break down are exactly where this thread gets interesting.

    First, let's drop some definitions to get on the same page: declarative memory is memory for facts and events that can be consciously recalled and verbally stated, while procedural memory is implicit memory for skills and procedures, expressed through performance rather than conscious recollection.

    So, much of our intellectual activity is of the declarative type and much of our physical activity is procedural. But given the article by Dreyfus, I think it should be well-established that intellectual expertise is really more akin to procedural memory. In fact, Dreyfus contends that the process of becoming an expert in an intellectual field is precisely the process of shifting our intellectual prowess in this field from declarative memory (explicit rules) to procedural memory (implicit intuition).

    Now we can succinctly restate the initial idea of the thread: Socrates demanded that experts state explicitly, in language, the underlying principles of their expertise, as if expertise belonged to declarative memory-- but in reality, expertise comes from procedural memory. Similarly, AI expert systems have failed because they have tried to duplicate human procedural mastery with purely 'declarative' methods.
     
  6. Sep 25, 2003 #5
    Which is more essential to an expert: her personal experiences or her ability to relate them to others?
     
  7. Sep 25, 2003 #6

    hypnagogue

    Staff Emeritus
    Science Advisor
    Gold Member

    I don't think one is more important than the other in a really meaningful way. That's kind of like asking "what is more essential to a car, that it has a properly functioning motor or that its tank has gas?" You need both. If you have one and not the other, then you're not going to get anywhere, although a weakness in one can probably be compensated for a bit by a strength in the other. But you really need both areas-- the underlying well of experience and the ability to make intelligent use of those experiences-- to be very strong to be an expert.

    edit: although I think it would be safe to say that an expert is differentiated from a non-expert much more by the number of his/her experiences than by his/her ability to relate these experiences to novel situations. The latter is basically a consequence of pattern recognition, which all humans are good at to some extent; but the vast majority of non-experts do not have even a sliver of the actual experience that an expert has in his/her field.
     
    Last edited: Sep 25, 2003
  8. Sep 26, 2003 #7
     
  9. Sep 26, 2003 #8

    hypnagogue

    Staff Emeritus
    Science Advisor
    Gold Member

    No one is saying that an expert doesn't know explicit rules. Indeed, learning the explicit rules is the first step on the road to becoming an expert. That said, there is more to the expert's expertise than just those explicit rules. If there were not, I could go out and buy a book with all the explicit rules of cellular biology and would effectively be an expert. Rather, the expert's familiarity with numerous different experiential situations in the field coalesces into a greater and more subtle and nuanced understanding of the field. Although this greater understanding had its root in first learning all the explicit rules, the greater understanding itself is not confined to these explicit rules, nor can it necessarily be explicitly stated in a new/revised set of rules.
     
  10. Sep 26, 2003 #9
    If the expert knows the explicit rules, then his instructions should be able to be converted into computer form, right?
     
  11. Sep 26, 2003 #10

    hypnagogue

    Staff Emeritus
    Science Advisor
    Gold Member

    Well, that's the thing. The expert's decision making is not reducible to rules, in practice at least-- decisions are made on a case-by-case basis, for thousands of permutations of special cases. An expert may make a decision that has no clear precedent but is nonetheless valid. Such a decision could not have been anticipated and written down as a rule beforehand; the 'rule' only comes into being once the special case itself is presented to the expert.

    The upshot of all this is just that the expert does not have instructions in his head saying "in case of such and such, do this," which is what you would need to program a classical AI expert system. The 'rules' for special cases do not exist a priori but are dynamically generated on the fly by the expert as a function of his deeper procedural understanding, which itself is a function of the expert's declarative understanding, experience in the field, and intelligence.
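    One crude way to caricature this difference in code: instead of consulting a pre-written rule table, generate the decision on the fly from stored cases-- for example, by finding the most similar remembered case. This nearest-neighbor sketch (in Python, with invented situations and decisions) is only a stand-in for the expert's far richer procedural understanding:

        import math

        # (situation features, decision the expert made in that case)
        remembered_cases = [
            ((1.0, 0.2), "brake"),
            ((0.1, 0.9), "accelerate"),
            ((0.5, 0.5), "hold_steady"),
        ]

        def decide(new_situation):
            """No rule for this case exists a priori: the decision is
            produced on the fly from the most similar remembered case."""
            def dist(case):
                features, _ = case
                return math.dist(features, new_situation)
            _, decision = min(remembered_cases, key=dist)
            return decision

        print(decide((0.9, 0.3)))   # -> 'brake', for a case never seen before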

    edit: In fact, this is precisely what bugged both Socrates and modern AI researchers. When asked to expound on his/her understanding of something beyond the basic declarative information that can be found in a textbook, an expert tends to give examples of decisions made in specific cases but not general rules underlying them-- precisely because there are no general rules underlying them, or rather these 'rules' are implicitly encoded in the neural structure of the expert's brain.

    Simulated neural networks could probably do this sort of thing down the road as they get more powerful and sophisticated and as we obtain a greater understanding of how the human brain works, but there are lots of philosophical problems in encoding such decision-making processes in a program based on explicitly stated rules.
     
    Last edited: Sep 26, 2003
  12. Sep 27, 2003 #11
    In the situation of an unforeseen special case, it could be that an expert just extrapolates from already explicitly known rules, adjusting them based on the changed parameters, so I do not think that that is a good example for your case.

    As far as Socrates goes, I thought that the whole lesson of that tale was about people thinking they know things when they really don't, because they just follow tradition and swallow the crap handed down to them.
     
  13. Sep 28, 2003 #12
    I think it was mostly the fact that no one could describe what it was that motivated them. Generals knew not what bravery was, and judges did not understand justice. Priests, who center their lives, and their followers' lives, around a religion, cannot satisfactorily describe faith. Most of the things that are named are emotions, not skills. Socrates was not going to the bronzesmith to have the man describe how he knew the proper way to shape metal; rather, he would ask the man the meaning of art. He would not ask a warrior how to kill, but would question the meaning of anger, of honor, of patriotism, and of fear. He would not ask a mathematician how to memorize equations, but he would ask the man the meaning of math.
     
  14. Sep 28, 2003 #13

    hypnagogue

    Staff Emeritus
    Science Advisor
    Gold Member

    Of course it is the case that an expert will depend upon his knowledge of rules and previous experiences in order to solve a problem. In this sense, we might say that the expert simply extrapolates the explicit rules for every special case he sees, bending them to the needs of the problem. But once we get sufficiently many of these cases, the decision-making process looks less and less like a compact, efficient textbook rule applicable to all problems and more like just a huge set of special cases. Moreover, if there were a mechanistic way to extrapolate the rules as a function of the parameters of special cases, this would simply amount to a sort of meta-rule. (Note that formally defined heuristic rules are included in this category.) But the point is that these extrapolations are made intuitively, not mechanically. We are left with two options: either we can say no such meta-rules exist, or we can say that they are completely opaque to current human understanding. Either way, they clearly are not in the same class as those things we normally think of as 'rules.'

    Agreed, to some extent. That is certainly part of it-- people not questioning their knowledge. But with procedural knowledge, it doesn't matter how much you question it-- you still can't explain it very well. So it becomes difficult to distinguish between those cases where the person in question should be able to explain himself but cannot because he has not adequately examined himself, and those cases where he cannot explain himself simply because of the very nature of the thing he is trying to explain.
     
  15. Sep 28, 2003 #14

    hypnagogue

    Staff Emeritus
    Science Advisor
    Gold Member

    Part of describing justice is deciding who is just and who is to be punished, and part of describing faith is deciding who is pious and who is not, and so on. Certainly these more 'emotional' things are not built on foundations as formally rigorous as science, but the basic premise for experts in both is the same: they cannot readily produce rules that can generate their decisions. At best, they can give you specific situational examples illustrating their understanding of the topic, but they can't give you a handful of general rules to link all these disjoint cases together.

    Again from the Dreyfus article at http://ist-socrates.berkeley.edu/~hdreyfus/html/paper_socrates.html :

     
  16. Sep 28, 2003 #15
    The Euthyphro example is not a good one to relate to science. It is an example of one of the emotional "expertises".

    ------------------------

    One problem with trying to reduce knowledge to a discrete set of rules is that there can often be so many cases that you will not know how you would evaluate a certain case until you experience it.

    There is also the problem of unpredictability. Physics (other than quantum physics) is a good example of a field where you can easily dictate discrete rules that a computer can use. However, if you look at behavioral biology, things are just too uncertain. Nervous systems are so complex that you cannot predict behavior with 100% accuracy. So what researchers do is look at aggregates and understand generalities.
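    For instance, a one-line kinematics formula is already an explicit, discrete rule a computer can apply exactly (a minimal sketch in Python; flat ground and no air resistance assumed):

        import math

        def projectile_range(speed_m_s, angle_deg, g=9.81):
            # Range of a projectile on flat ground with no air resistance:
            # R = v^2 * sin(2*theta) / g -- an explicit, computable rule.
            theta = math.radians(angle_deg)
            return speed_m_s ** 2 * math.sin(2 * theta) / g

        print(round(projectile_range(20.0, 45.0), 2))   # -> 40.77 (meters)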
     
  17. Sep 29, 2003 #16
    All of the above seems to me to be a perfect example of why Richard Feynman said that we do not really understand something until we can explain it to our mother. There is a difference between knowing something to the point that one is an expert in that field and understanding it well enough to be able to articulate it simply enough for a "mother" to understand it.

    We often have knowledge but do not have it well enough organized in our own minds to be able to impart it understandably to others who are not experts in the same field. This does not mean that we are not experts or are not knowledgeable enough to work successfully in our field. It simply means that we have not, or cannot, verbalize what we know well enough for a lay person to really understand it.

    As far as AI is concerned, computers have to work with absolute rules and will follow those absolute rules in every case. This amounts to nothing more than repeating over and over again that which was already known and programmed into them. Our minds do not work with absolute rules but constantly make new connections and interpretations of the rules. We dismiss that which we think is obviously absurd or useless, but may follow even the most unlikely path if we think that it might lead somewhere. If it does lead somewhere, we call it intuitive thinking, because we are not even aware of how our minds made the seemingly unrelated connections. It is sometimes a case of conceptual thinking instead of linear cause-and-effect reasoning. Computers cannot do any of this, and there is as yet no way to design hardware or write programs that can. If hardware could be designed and software written to solve a problem or perform a task, then that problem is already solved, the way to the solution known, or the task already done. Computers do not and cannot create new thinking or knowledge. They are simply very fast and accurate at doing what we already know how to do.
     
  18. Sep 29, 2003 #17

    selfAdjoint

    Staff Emeritus
    Gold Member
    Dearly Missed

    Royce, could your description of computers following rigid rules be based too closely on present day digital computers? There are such things as stochastic computers.
     
  19. Sep 29, 2003 #18
    It is based solely on today's digital computers, as that is what I am most familiar with. I know nothing about stochastic computers. I have read a little about attempts to design circuits that would mimic neural networks to a small degree (learn), but at that time they were having problems with them. I would appreciate it if you told me (us) a little about them.
     
  20. Sep 30, 2003 #19

    hypnagogue

    Staff Emeritus
    Science Advisor
    Gold Member

    As I said: "Certainly these more 'emotional' things are not built on foundations as formally rigorous as science, but the basic premise for experts in both is the same: they cannot readily produce rules that can generate their decisions. At best, they can give you specific situational examples illustrating their understanding of the topic, but they can't give you a handful of general rules to link all these disjoint cases together." I'm sure a present-day Socrates would question professional scientists and come to much the same conclusions as he did with the experts of his own time.

    But we still have examples of so-called expert systems that cannot reproduce the success of their human counterparts, even in scientific/formal endeavours, for all the reasons explained above. Just because the basic tenets of a field can be stated succinctly in formal notation does not mean that advanced problem solving in that field is possible by mechanical manipulation of the basic formal system alone.
     
  21. Sep 30, 2003 #20

    hypnagogue

    Staff Emeritus
    Science Advisor
    Gold Member

    Computers aren't necessarily as rigid as you portray them to be. We can program AI software with learning algorithms, whereby the software will rewrite itself to some extent based on input it has received in the past. Of course, the mechanisms of such learning algorithms themselves are ultimately rigid structures, in that a classically programmed computer can only 'learn' what we program it to 'learn' and cannot generate new learning algorithms or heuristics on its own. More concisely put, it cannot come to see what is really salient about a problem beyond the strict formal limitations we bound it with; it's too much syntax, not enough semantics.

    However, I do believe that the central problem is that the classical approach to AI is indeed too rigid, learning algorithms or not. The classical approach involves working with explicitly defined variables, which implicitly but necessarily places absolute limits on what the system can achieve. The solution to this, I believe, lies in using a more pliable structure where variables (or, equivalently, representational structures) are not explicitly defined as such but are allowed to be implicitly generated by the system itself, as a function of given inputs. Such systems are much better equipped to 'find out' what is salient in a given problem of their own accord, and thus seem to achieve a closer approximation to what we think of as semantic understanding of a problem, as opposed to 'blind' syntactic shuffling of pre-defined symbols.

    Simulated neural networks provide just such a computational framework. The limitations encountered thus far with neural networks, I believe, are not so much inherent in the systems themselves as they arise from our difficulty in learning how to create and employ them effectively. This problem will probably be around for a while, given the great complexity and 'opacity' of such systems (i.e., if a neural network puts out a surprising or interesting answer, it is not at all a trivial matter to analyze the system and see just how it produced this answer; plus, since the representational structure of such systems is implicitly encoded in the strengths of the connections between nodes, it is not clear what the meaning of any given node or set of nodes is in the context of the given problem).
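    To make the 'opacity' point concrete, here is a minimal simulated neural network: a two-layer net trained on XOR with plain numpy and hand-written backpropagation. It is only a sketch (the architecture, learning rate, and iteration count are arbitrary choices of mine), but it shows the essential feature: after training, the network answers correctly, yet its 'knowledge' is smeared across weight matrices whose individual entries mean nothing by themselves.

        import numpy as np

        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

        W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))   # input -> hidden
        W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))   # hidden -> output

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        for _ in range(20000):
            # forward pass
            h = sigmoid(X @ W1 + b1)
            out = sigmoid(h @ W2 + b2)
            # backward pass: gradients of squared error, computed by hand
            d_out = (out - y) * out * (1 - out)
            d_h = (d_out @ W2.T) * h * (1 - h)
            W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
            W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

        print(out.round(2).ravel())   # typically approaches [0, 1, 1, 0]
        print(W1.round(2))            # the learned 'rules': just opaque numbers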

    As far as stochastic computers go, I'm not sure they're relevant to the current discussion. I admit that I have little familiarity with stochastic computers, but unless they can compute functions that are not effectively computable in the sense defined in the Church-Turing thesis, they can't do anything that a normal digital computer can't do as well.
     