Isn't "Artificial Intelligence" an oxymoron? I find it interesting that it is so hard to express how we know what we know. For example, how hard is it to make a robot that gets on and rides a bicycle? Pretty hard, I'll venture... try Googling it.
Anyway, these are some source notes on ideas I'm developing into a project of sorts. Mostly things to cogitate upon; you can come up with your own conclusions.
"It seems that given the artificial intelligence worker's conception of reason as calculation on facts, and his admission that which facts are relevant and significant is not just given but context determined, his attempt to produce intelligent behavior leads to an antinomy. On the one hand, we have the thesis: there must always be a broader context; otherwise we have no way to distinguish relevant from irrelevant facts. On the other hand, we have the antithesis: there must be an ultimate context, which requires no interpretation; otherwise there will be an infinite regress of contexts, and we can never begin our formalization.
Human beings seem to embody a third possibility which would offer a way out of this dilemma. Instead of a hierarchy of contexts, the present situation is recognized as a continuation or modification of the previous one." but, "how can we originally select from the infinity of facts those relevant to the human form of life so as to determine a context we can sequentially update?...there must be another alternative, however, since language is used and understood... the only alternative way of denying the separation of fact and situation is to give up the independence of the facts and understand them as a product of the situation." (Hubert Dreyfus, "What Computers Still Can't Do", 1993, p. 222.)
"One can find out which features of the current state of affairs are relevant only by determining what sort of situation this state of affairs is. But that requires retrieving relevant past situations. This problem might be called the circularity of relevance....how does the brain do it?...it appears that experience statistically determines individual neural synaptic connections, so that the brain, with its hundreds of thousands of billions of adjustable synapses, can indeed accumulate statistical information on a scale far beyond current or foreseeable computers...the brain clearly has internal states that we experience as moods, anticipations, and familiarities that are correlated with the current activity of its hidden neurons when the input arrives. These are determined by its recent inputs as well as by the synaptic connection strengths developed on the basis of long-past experiences, and these as well as the input determine the output...no one knows how to incorporate internal states appropriately..."
(Dreyfus, 1993, pp. xliii-xliv)
"On the surface, neural networks seemed to be a great fit with my own interests. But I quickly became disillusioned with the field...I had formed an opinion that three things were essential to understanding the brain. My first criterion was the inclusion of time in brain function. Real brains process rapidly changing streams of information. There is nothing static about the flow of information into and out of the brain.
The second criterion was the importance of feedback...for every fiber feeding information forward into the neocortex, there are ten fibers feeding information back toward the senses...
The third criterion was that any theory or model of the brain should account for the physical architecture of the brain. The neocortex...is organized as a repeating hierarchy.
But as the neural network phenomenon exploded on the scene, it mostly settled on a class of ultrasimple models that didn't meet any of these criteria. Most neural networks consisted of a small number of neurons connected in three rows. A pattern (the input) is presented to the first row. These input neurons are connected to the next row of neurons, the so-called hidden units. The hidden units then connect to the final row of neurons, the output units. The connections between neurons have variable strengths, meaning the activity in one neuron might increase the activity in another and decrease the activity in a third neuron depending on the connection strengths. By changing these strengths, the network learns to map input patterns to output patterns.
These simple neural networks only processed static patterns, did not use feedback, and didn't look anything like brains. The most common type of neural network, called a "back propagation" network, learned by broadcasting an error from the output units back toward the input units....when the neural network was working normally, after being trained, the information flowed only one way...and the models had no sense of time. A static input pattern got converted into a static output pattern. There was no history or record in the network of what happened even a short time earlier. And finally, the architecture of these neural networks was trivial compared to the complicated and hierarchical structure of the brain."
(Jeff Hawkins, "On Intelligence", 2004, pp. 25-26)
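For concreteness, the kind of three-row "back propagation" network Hawkins is criticizing can be sketched in a few lines. This is my own minimal illustration, not anything from the book; the layer sizes, learning rate, iteration count, and the XOR task are arbitrary choices I made for the example. Note how it exhibits exactly the limitations he lists: static input and output patterns, one-way information flow once trained, no sense of time.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Static input patterns and targets (XOR). No time, no history:
# each row is an independent snapshot.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# Variable connection strengths between the three rows of units.
W1 = rng.normal(size=(2, 4))   # input row  -> hidden row
W2 = rng.normal(size=(4, 1))   # hidden row -> output row

# Error before training, for comparison.
loss0 = np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2)

lr = 1.0
for _ in range(5000):
    # Forward pass: information flows only one way,
    # input -> hidden -> output.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Backward pass: broadcast the error from the output units back
    # toward the input units, nudging each connection strength.
    d_out = (output - y) * output * (1.0 - output)
    d_hid = (d_out @ W2.T) * hidden * (1.0 - hidden)
    W2 -= lr * hidden.T @ d_out
    W1 -= lr * X.T @ d_hid

loss = np.mean((output - y) ** 2)
print(f"error before training: {loss0:.3f}  after: {loss:.3f}")
```

With enough iterations this typically learns to approximate XOR, yet there is no feedback during normal operation and no record of earlier inputs, which is precisely the point of the quote.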
Tuesday, October 13, 2009
AI: Unreal Intelligence?
Posted by tomawesome at 6:53 PM