chrisalviola said:
I know it's not impossible, since we humans are capable of self-healing from diseases and viruses, so why not operating systems? If there were a good AI program embedded in Windows, maybe it could be used to search for abnormalities in its everyday operation, and when one is found it could create something that regulates that activity.
There are some things that OS developers can do to help prevent certain things that viruses take advantage of: one is to limit which parts of the address space are available for execution, since a lot of viruses "inject" code into writable memory and then try to execute it there.
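To make that concrete, here's a minimal sketch in C (assuming a POSIX system with mmap/mprotect; the exact mechanism varies by OS) of the idea behind non-executable data pages: the page is mapped readable and writable but not executable, so bytes "injected" into it cannot be run as code.

```c
/* W^X sketch: data pages are writable but NOT executable, so injected
 * bytes in this region cannot be run as code. POSIX assumed. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* Map one page read/write only -- note the absence of PROT_EXEC. */
    unsigned char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* An attacker who smuggles machine code into this buffer... */
    memcpy(buf, "\xc3", 1);              /* x86 'ret' instruction */

    /* ...cannot execute it: jumping here would fault, because the page
     * lacks execute permission. A legitimate JIT must opt in explicitly,
     * e.g.: mprotect(buf, 4096, PROT_READ | PROT_EXEC); */

    printf("page mapped at %p without PROT_EXEC\n", (void *)buf);
    munmap(buf, 4096);
    return 0;
}
```

This is essentially what features like DEP/NX do system-wide: execution from pages that were never marked executable simply traps.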
I would say the best approach is probably what you find nowadays: a combination of a rule-based framework with some "smart" capabilities. The truth is that applications turn millions of lines of code into workable products that perform a variety of complex tasks. Firewalls and security software that place applications in "trusted" or "banned" categories require the user to know what application is running. If you attempt to run something, your security product can track its network activity and possibly its process activity (perhaps changes to the registry) to see if it's doing something "unexpected". In most circumstances, though, if you run something that you, the user, actually want to run, the risk should be acceptable; if it's something you have no idea about, then you probably should be worried.
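As a toy illustration of the rule-based side, here's a sketch of how a "trusted"/"banned" table might fall through to behavioural monitoring for unknown programs. All application names and categories are invented for the example; real products use signed hashes and much richer rule sets.

```c
/* Rule table plus fall-through to monitoring -- a schematic sketch only. */
#include <stdio.h>
#include <string.h>

typedef enum { TRUSTED, BANNED, UNKNOWN } verdict_t;

struct rule { const char *app; verdict_t verdict; };

static const struct rule rules[] = {
    { "firefox.exe", TRUSTED },    /* hypothetical entries */
    { "badware.exe", BANNED  },
};

static verdict_t classify(const char *app)
{
    for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
        if (strcmp(rules[i].app, app) == 0)
            return rules[i].verdict;
    return UNKNOWN;   /* no rule: hand off to behavioural monitoring */
}

int main(void)
{
    const char *apps[] = { "firefox.exe", "badware.exe", "mystery.exe" };
    for (int i = 0; i < 3; i++) {
        switch (classify(apps[i])) {
        case TRUSTED: printf("%s: allow\n", apps[i]); break;
        case BANNED:  printf("%s: block\n", apps[i]); break;
        case UNKNOWN: printf("%s: watch network/registry activity\n",
                             apps[i]); break;
        }
    }
    return 0;
}
```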
Essentially, the model of security products is to find the places where vulnerabilities are most likely to occur. A firewall, for example, finds the critical points in the code path where packets are handed to the operating system and redirects control there to the security program; this requires knowledge of the underlying architecture of the operating system. As humans we observe, find patterns, and encapsulate problems in very little time compared to what a "computer" currently manages. So I think that perhaps in the future this could be automated, but for the time being, human methods that are simple, minimally intrusive, and, more importantly, do their job will be valued over other methods. There will be innovation, no doubt, but it's usually introduced incrementally, as slow changes.
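The redirection idea can be sketched schematically: suppose, hypothetically, the packet-delivery path goes through a function pointer; the security program swaps in its own filter, which inspects the packet and then forwards it to the original handler. Real products hook platform-specific driver interfaces, but the shape is the same.

```c
/* Interception sketch: the firewall replaces the handler on the
 * packet-delivery path with a filter that inspects, then forwards.
 * All names here are hypothetical. */
#include <stdio.h>
#include <string.h>

struct packet { const char *payload; };

static void deliver_packet(struct packet *p)          /* the "OS" path */
{
    printf("delivered: %s\n", p->payload);
}

static void (*next_handler)(struct packet *) = deliver_packet;

static void firewall_filter(struct packet *p)         /* inserted hook */
{
    if (strstr(p->payload, "exploit")) {              /* toy rule */
        printf("dropped suspicious packet\n");
        return;
    }
    next_handler(p);                                  /* pass it along */
}

int main(void)
{
    void (*entry)(struct packet *) = firewall_filter; /* hook installed */
    struct packet ok  = { "hello"   };
    struct packet bad = { "exploit" };
    entry(&ok);
    entry(&bad);
    return 0;
}
```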
Personally, as far as what can be done with code: pretty much anything that involves decision-making can be done in code, since that is what a computer was designed for, to make decisions with data. A computer that can't make decisions is useless in my view.
Given that "intelligence" is all about making "good decisions", I guess that within the context of computing, AI could be applied in any area, because the inventors of the computer built in this form of dynamic "decision-making" capability. However, I believe our models of intelligence, cognitive thinking, and the other sciences of decision making are "primitive". They are advanced in the context of what human beings have discovered so far, but they are by no means at a level where there is some sort of "unified" view of intelligence that transcends and encompasses all areas of intelligent activity.
I was reading an article in some journal a short while ago about how the structure of neurons exhibited Markov chain "mechanics" in the brain. Markov chains were only brought into mathematics about a hundred years ago. Who knows what's going to happen in math and technology, and in the synthesis of those two fields, over the next one to two hundred years. Hopefully I'll be reincarnated to enjoy what lies ahead ;)
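For anyone unfamiliar with the term, a Markov chain just means the next state depends only on the current state, through a fixed table of transition probabilities. Here's a tiny made-up simulation (the two "neuron" states and the probabilities are purely illustrative, not taken from the article):

```c
/* Markov chain toy: next state depends only on the current state,
 * via a fixed transition matrix. Probabilities are invented. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

enum { QUIET = 0, FIRING = 1 };

/* P[s][s'] = probability of moving from state s to state s' */
static const double P[2][2] = {
    { 0.8, 0.2 },   /* from QUIET  */
    { 0.4, 0.6 },   /* from FIRING */
};

static int step(int s)
{
    double r = (double)rand() / RAND_MAX;
    return (r < P[s][0]) ? 0 : 1;     /* valid because each row sums to 1 */
}

int main(void)
{
    srand((unsigned)time(NULL));
    int s = QUIET, firing = 0, steps = 100000;

    for (int i = 0; i < steps; i++) {
        s = step(s);
        firing += (s == FIRING);
    }
    /* The long-run fraction approaches the chain's stationary
     * distribution, which works out to 1/3 for this matrix. */
    printf("fraction of time firing: %.3f\n", (double)firing / steps);
    return 0;
}
```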
I got a little sidetracked, but I guess the point I was trying to make is that a) the security products that are available as preventative measures are, in my opinion, quite good; b) intelligence is not really defined in a way that is unified and transcends all the structural and behavioural aspects of intelligence; and c) the future will hold many exciting possibilities, and I do believe what you describe will be possible.