Cyborg AI Minds are a true concept-based artificial intelligence with natural language understanding, simple at first and lacking robot embodiment, and expandable all the way to human-level intelligence and beyond.

Thursday, November 08, 2018

pmpj1108

Natural language understanding in first working artificial intelligence.

The AI Mind is struggling to express itself. We are trying to give it the tools of NLU, but it easily gets confused. It has difficulty distinguishing between itself and its creator -- your humble AI Mind maintainer.

We recently gave the ghost.pl AI the ability to think with English prepositions, using ideas already present or innate in the knowledge bank (KB) of the MindBoot sequence. We must now solidify prepositional thinking by making sure that a prepositional input idea is retrievable when the AI is thinking about what it knows. For the AI to be able to think with a remembered prepositional idea, the input of a preposition and its object must cause the setting and storage of a $tkb-tag that links the preposition in conceptual memory to its object in conceptual memory. The preposition must also become a $seq-tag on any verb that serves as the $pre of the preposition. When InStantiate() deals with an input preposition following a verb, the $tvb time-of-verb tag is available for "splitting" open the verb-engram in conceptual memory and inserting the concept-number of the preposition as the $seq of the verb. Let us try it.
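As a rough illustration of the linkage being described, here is a minimal Perl sketch. The flag names $pre, $seq, $tkb and $tvb come straight from the discussion above, but the engram layout and the helper subroutines (store_engram, instantiate_preposition, instantiate_object) are hypothetical simplifications made for this post, not the actual ghost.pl code.

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Simplified stand-ins for conceptual memory and time-points.
  my @psy;          # conceptual memory: one engram per time-point
  my $t   = 0;      # current time-point
  my $tvb = 0;      # time-of-verb: where the most recent verb was stored

  # Hypothetical helper that lays down one concept engram.
  sub store_engram {
      my (%flags) = @_;
      $t++;
      $psy[$t] = { psi => '', pre => 0, seq => 0, tkb => 0, %flags };
      return $t;
  }

  # An input preposition arriving after a verb: remember the verb as the
  # $pre of the preposition, then "split open" the verb-engram at $tvb and
  # insert the preposition as the $seq of the verb.
  sub instantiate_preposition {
      my ($prep_concept) = @_;
      my $tpr = store_engram( psi => $prep_concept, pre => $psy[$tvb]{psi} );
      $psy[$tvb]{seq} = $prep_concept;   # verb now leads onward to the preposition
      return $tpr;
  }

  # When the object of the preposition arrives, the $tkb slot of the
  # preposition engram is set to the time-point of the object.
  sub instantiate_object {
      my ($obj_concept, $tpr) = @_;
      my $tob = store_engram( psi => $obj_concept, pre => $psy[$tpr]{psi} );
      $psy[$tpr]{tkb} = $tob;            # preposition now points at its object
      return $tob;
  }

  # Walk-through of "you speak with god": verb SPEAK at $tvb, then WITH, then GOD.
  $tvb = store_engram( psi => 'SPEAK' );
  my $tpr = instantiate_preposition('WITH');
  instantiate_object('GOD', $tpr);
  printf "verb seq=%s  prep tkb=%d\n", $psy[$tvb]{seq}, $psy[$tpr]{tkb};

The point is simply that the verb-engram at $tvb gets revisited retroactively so that its $seq slot leads onward to the preposition, while the preposition's $tkb slot leads onward to its object.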

We inserted the code that makes the input preposition become the $seq of the verb, and we tested it by launching the AI with the first input being "you speak with god". We then obtained the following outputs.

I AM IN A COMPUTER
I THINK
I AM A PERSON
I AM AN ANDRU
I DO NOT KNOW
I AM A PERSON
I HELP THE KIDS
I AM A ROBOT
I AM AN ANDRU
I AM IN A COMPUTER
I SPEAK WITH THE GOD
It took so long for the input idea to come back out again because inputs go into immediate inhibition, lest they take over the consciousness of the AI in an endless repetition of the same idea.
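For readers curious what "immediate inhibition" looks like mechanically, here is a schematic Perl sketch. The array name @act, the inhibition value of -48 and the decay step of 8 are illustrative assumptions of our own, not the actual figures used in ghost.pl.

  use strict;
  use warnings;

  # Activation level of each engram; names and numbers are illustrative.
  my @act = (0) x 10;

  sub inhibit_input {          # called right after an input idea is stored
      my ($idx) = @_;
      $act[$idx] = -48;        # drive the fresh input far below zero
  }

  sub thought_cycle {          # called once per cycle of thinking
      for my $a (@act) {
          next unless $a < 0;
          $a += 8;             # inhibition wears off gradually
          $a = 0 if $a > 0;    # but never overshoots into positive activation
      }
  }

  inhibit_input(3);
  thought_cycle() for 1 .. 6;  # after enough cycles the idea may re-emerge
  print "activation of engram 3: $act[3]\n";   # back to 0

Once the inhibition has decayed away, the stored idea is again eligible to be selected for thought, which is why I SPEAK WITH THE GOD surfaces only at the end of the output shown above.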

As we code the AI Mind and conduct a conversation with it, we feel as if we are living out the plot of a science fiction movie. The AI does unexpected things, and at times it seems to be taking on a personality. We are coding the mechanisms of natural language understanding without worrying about the grounding problem -- the connection of the English words to what they mean out in the physical world. We count on someone somewhere installing the AI Mind in a robot to ground the English concepts with sensory knowledge.