Wednesday, November 04, 2009

More On Intelligence...


Continuing the topic of my last post, here are some interesting ideas Barry Kumnick posted in 2008, reprinted here:

We cannot use symbolic information or symbolic computation as the basis for developing sentient systems, due to multiple deep problems in the fundamental representation of information.

I consider these deep problems because all logic, mathematics, natural and artificial languages, communication, computation, and most of science are based on the concept of "information". In essence, all of humankind's written records are based on information of one kind or another. If we must replace the use of information as the substrate for the development of sentient computation, we need to dig deep indeed. Information is so ingrained in our education and communication that it is difficult for most people to even conceive of any alternative representation.

The concept of information suffers from the following deep problems as it relates to its use as the basis for sentient computation:

1) We think from the first person direct perspective. We represent information from the third person indirect perspective. Cogito ergo sum: I think, therefore I am. This is fundamental. It is impossible for an indirect representation to represent anything directly. Information represents everything using reference semantics. Information is always a label or reference that stands for something else. Information can never represent anything directly, in the sense of first person direct representation. Even if we write "I thought that", the "I" is really a proxy, substitute, or representation of the writer. The "I" is not the writer himself or herself. "I thought that" is a third person indirect representation of a first person direct statement. I don't think there is any way around this. This limitation is fundamental to the concept and formal definition of information. It is built into the foundation of sentential logic, set theory, mathematics, language, and all symbolic computation. This fundamental limitation prevents a computer from thinking for itself, from the first person direct perspective. There is no way an information processing system can have a true sense of self if all its computation is based on indirect representation.
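(A concrete illustration of the reference-semantics point, mine rather than Kumnick's; the table and names below are invented for the sketch. Every symbol a program handles is a label resolved through some external binding, and the resolution only ever yields another representation, never the referent itself.)

```python
# Editorial sketch: symbols carry only reference semantics.
# Each token is a label that must be resolved against an external
# binding; resolution yields yet another representation, never the
# referent itself.
symbol_bindings = {
    "I": "writer-id-42",   # a proxy token for the writer, not the writer
    "5": 5,                # a numeral naming a number, not the quantity itself
}

def resolve(token: str):
    # Resolution is just another lookup over symbols; the system
    # never escapes the web of indirect references.
    return symbol_bindings[token]

print(resolve("I"))  # -> 'writer-id-42': still a representation, not the person
```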

2) When we think, our mind allows us to remember and understand the semantics or meaning of information or knowledge. We inherently understand the meaning of our own knowledge, and we can interpret and understand the meaning of information and convert it into knowledge for subsequent storage and recall. Somehow, our brain must use an internal knowledge representation that can encode, represent, store, and recall semantic meaning, not just the syntax of information. In contrast, information only encodes and represents syntax. Information is just a sequence of symbols with syntax, but no meaning outside the mind of an intelligent observer. The meaning of information is not encoded or represented by the information itself. A book cannot understand the meaning of the writing contained within it. A computer cannot understand the meaning of the symbol sequences it manipulates. It can recognize symbols and symbol sequences and manipulate them based on preexisting instructions, but there is more to meaning than symbol recognition and symbolic manipulation. A symbolic information processing system cannot represent, process, store, or recall that which information does not even encode. I don't think there is any way an information-based computational system can work around this fundamental limitation.
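(To make "manipulation without meaning" concrete, here is a small sketch of my own; the rule table is invented. The program derives "Q" from "P" and "P -> Q" by pure string matching, and nothing in it encodes or accesses what any token means.)

```python
# Editorial sketch: purely syntactic inference. Modus ponens is
# implemented as string pattern matching over tokens; the program has
# no representation of the meaning of 'P' or 'Q'.
RULES = {("P", "P -> Q"): "Q"}

def derive(facts: set) -> set:
    derived = set(facts)
    for (premise, implication), conclusion in RULES.items():
        if premise in derived and implication in derived:
            derived.add(conclusion)  # token shuffling, not understanding
    return derived

print(derive({"P", "P -> Q"}))  # the result set now also contains 'Q'
```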

3) We think in context. It is reasonable to assume we utilize a context-sensitive encoding, and/or a context-sensitive representation of thought and knowledge. Doing so would be much less complex, and much more efficient, than using a context-free encoding and representation and then being forced to "simulate" context using "higher-order" representational structures. Why separately represent, store, and process context dependencies if they can be built into the underlying knowledge representation or encoding? Information, on the other hand, is encoded and represented in context-free form. For example, we always use the same symbol to represent the letter "e" in the Latin alphabet. We always spell the same word the same way. We always use the same binary encoding to represent the same number: the number 5 is always the binary sequence 101. Information requires context-free encoding and representation to support efficient and effective communication between individuals. However, thought is private. Why should the same requirements apply to the representation of thought? The different parts of the brain have no need to send each other information, decode it, and interpret it. Why can't each brain use a unique private encoding specifically optimized for maximal compression of the unique knowledge it stores and processes? Why can't the encoding encode semantic meaning along with syntax? While it may be possible to use a context-free representation to represent context-dependent thought, it would certainly be far more complex and far less efficient than moving the context dependencies into the encoding or knowledge representation.
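(A short sketch of the contrast, again mine; the context-dependent scheme is a toy invented purely for illustration. A context-free encoding maps the same value to the same symbols everywhere, while a context-dependent one lets the code vary with its surroundings.)

```python
# Context-free encoding: the number 5 is '101' in every context.
assert format(5, "b") == "101"

# Toy context-dependent encoding (invented for illustration): the code
# emitted for a value depends on prior context, so the same value can
# be written differently in different places.
def encode_in_context(value: int, prior_context: int) -> str:
    return format(value ^ prior_context, "b")

print(encode_in_context(5, 0))  # '101'
print(encode_in_context(5, 3))  # '110' -- same 5, different symbol sequence
```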

4) Gödel's Incompleteness Theorems. All fixed formal symbolic systems above a certain minimal complexity (that of Peano arithmetic) are either incomplete or inconsistent. Yet thought appears to be both complete and consistent. (I am using the terms "complete" and "consistent" in their formal mathematical sense.) Thought outruns logic. We can think about things that we can't represent using logic or any logic-based fixed formal system. We can use a multitude of different fixed formal systems to work around this problem, but if we do, then to represent the entire universe of knowledge we must find a way to translate between each representation and ensure the mutual consistency of all the interdependent representations. This is all very complex, cumbersome, error prone, and inefficient. I can't see the brain using multiple representations to get around this problem if a single representation can avoid the problem altogether. It would be too much of a kluge, too complex, too slow, and too inefficient.
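(For reference, the standard statement of the first incompleteness theorem the argument leans on; my addition, in the usual notation.)

```latex
% First Incompleteness Theorem (standard form, added for reference):
% if T is a consistent, effectively axiomatized theory that interprets
% Peano arithmetic (PA), then some sentence G_T is neither provable
% nor refutable in T.
T \supseteq \mathsf{PA},\ T \text{ consistent and effectively axiomatized}
  \;\Longrightarrow\;
  \exists\, G_T \,\bigl( T \nvdash G_T \;\wedge\; T \nvdash \neg G_T \bigr)
```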

I have invented a new knowledge representation that avoids the consistency and completeness limitations of Gödel's Incompleteness Theorems. The key to overcoming these limitations is to create a representation based on direct representation instead of indirect representation. Using a direct representation, one can create a fixed formal system of minimal complexity that is both consistent and complete. This can be done by creating a first person direct representation of abstraction that represents a non-extensible upper ontology. The upper ontology is based on the representation of abstraction; it is the first-order abstraction of abstraction itself. Since anything can be represented as an abstraction, it is then possible to represent everything else in the universe of thought indirectly, in terms of an abstraction. Simultaneously, in a single representation, this allows us to think both directly, from the first person direct perspective in context, using the representation of abstraction, and indirectly, by using abstraction to form an abstract representation of anything we can think about. In one shot, this solves problems 1, 2, 3, and 4 above. It will allow us to create sentient systems that can think and understand the meaning of knowledge from the first person direct perspective, in context. Cogito ergo sum in a sentient machine.

BTW: biological neurons are direct physical implementations of the upper ontology of abstraction.

Best regards,

Barry Kumnick