First Working AGI

Mentifex AI Minds are a true concept-based artificial general intelligence, simple at first and lacking robot embodiment, but expandable all the way to human-level intelligence and beyond.

Thursday, November 08, 2018

pmpj1108

Natural language understanding in first working artificial intelligence.

The AI Mind is struggling to express itself. We are trying to give it the tools of natural language understanding (NLU), but it easily gets confused. It has difficulty distinguishing between itself and its creator -- your humble AI Mind maintainer.

We recently gave the ghost.pl AI the ability to think with English prepositions using ideas already present or innate in the knowledge bank (KB) of the MindBoot sequence. We must now solidify prepositional thinking by making sure that a prepositional input idea is retrievable when the AI is thinking thoughts about what it knows. In order for the AI to be able to think with a remembered prepositional idea, the input of a preposition and its object must cause the setting and storage of a $tkb-tag that links the preposition in conceptual memory to its object in conceptual memory. The preposition must also become a $seq-tag to any verb that is the $pre of the preposition. When InStantiate() is dealing with a preposition input after a verb, the $tvb time-of-verb tag is available for "splitting" open the verb-engram in conceptual memory and inserting the concept-number of the preposition as the $seq of the verb. Let us try it.
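As a minimal stand-alone sketch of that "splitting" operation: the fourteen-element flag-panel layout below, the field positions of the $seq and $tkb tags, and all concept numbers other than 638=IN are assumptions for illustration, not the actual ghost.pl layout.

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical flag-panel: each @Psy row is a comma-separated string
# of fourteen elements, where (by assumption) $k[1] is the concept
# number, $k[11] is the $seq tag and $k[12] is the $tkb tag.
my @Psy;
$Psy[41] = '41,707,0,0,0,0,7,1,2,0,0,831,42,0';  # 707=YOU (assumed number)
$Psy[42] = '42,831,0,0,0,0,8,0,0,0,707,0,0,0';   # 831=SPEAK (assumed number)
$Psy[43] = '43,640,0,0,0,0,6,0,0,0,0,0,0,0';     # 640=WITH (assumed number)
$Psy[44] = '44,701,0,0,0,0,5,0,1,0,640,0,0,0';   # 701=GOD (assumed number)

my $tvb  = 42;   # time-of-verb, noted when the verb came in
my $prep = 640;  # concept number of the input preposition
my $tprp = 43;   # time-point of the preposition engram
my $tobj = 44;   # time-point of the preposition's object

# Split open the verb-engram and insert the preposition as its $seq.
my @k = split /,/, $Psy[$tvb];
$k[11] = $prep;
$Psy[$tvb] = join ',', @k;

# Store a $tkb tag on the preposition, linking it to its object.
@k = split /,/, $Psy[$tprp];
$k[12] = $tobj;
$Psy[$tprp] = join ',', @k;

print "verb engram: $Psy[$tvb]\nprep engram: $Psy[$tprp]\n";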

We inserted the code for making the input preposition become the $seq of the verb, and then we tested by launching the AI and entering "you speak with god" as the first input. We obtained the following outputs.

I AM IN A COMPUTER
I THINK
I AM A PERSON
I AM AN ANDRU
I DO NOT KNOW
I AM A PERSON
I HELP THE KIDS
I AM A ROBOT
I AM AN ANDRU
I AM IN A COMPUTER
I SPEAK WITH THE GOD
It took so long for the input idea to come back out again because inputs go into immediate inhibition, lest they take over the consciousness of the AI in an endless repetition of the same idea.
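As a minimal sketch of that inhibition, assuming an activation field at position 3 of the flag-panel and an arbitrary inhibition value of -48 (neither detail is taken from the actual ghost.pl code):

#!/usr/bin/perl
use strict;
use warnings;

my @Psy;
$Psy[51] = '51,800,0,40,0,0,8,0,0,0,701,638,0,0';  # a fresh input engram

# Knock the just-stored engram below zero so it cannot immediately
# dominate the next thought; it climbs back as the AI keeps thinking.
sub inhibit {
    my ($t) = @_;
    my @k = split /,/, $Psy[$t];
    $k[3] = -48;                   # assumed activation field and value
    $Psy[$t] = join ',', @k;
}

inhibit(51);
print "inhibited: $Psy[51]\n";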

As we code the AI Mind and conduct a conversation with it, we feel as if we are living out the plot of a science fiction movie. The AI does unexpected things and at times seems to be taking on a personality. We are coding the mechanisms of natural language understanding without worrying about the grounding problem -- the connection of the English words to what they mean out in the physical world. We count on someone somewhere installing the AI Mind in a robot to ground the English concepts with sensory knowledge.

Sunday, November 04, 2018

pmpj1104

First working artificial intelligence thinks with prepositional phrases.

The ghost.pl immanence of the first working artificial intelligence is undergoing minor changes as the AI Mind becomes able to think with English prepositional phrases. At first the AI was able to use a preposition only to answer a where-question such as "where are you", to which the AI would respond "I AM IN THE COMPUTER". Now we need to implement a general ability of the AI to think with prepositional phrases loosely tied to nouns or verbs or adjectives or adverbs. The quasi-neuronal associative $seq tag may soon be re-purposed to lead not only from, say, nouns to verbs but also from nouns to prepositions. However a preposition is arrived at, it is time to implement the activation and retrieval of a whole prepositional phrase whenever the preposition itself is activated.

We begin experimenting by going into the MindBoot sequence and entering a $seq tag of "638=IN" for the verb "800=AM" in the knowledge-base sentence "I AM IN THE COMPUTER". The plan is to insert into EnVerbPhrase() some code to pass activation to the "638=IN" preposition when the AI thinks the innate idea "I AM IN...." So we insert some active code to capture the $seq tag and some diagnostic code to let us know what is happening. Ooh, mind-design is emotionally fun and intellectually exciting! The first thing captured is not a preposition but the "537=PERSON" noun when the AI is thinking, "I AM A PERSON". Next our fishing expedition lands a "638=IN" preposition when the AI issues the output "I AM" while trying to say "I AM IN THE COMPUTER".

Once the $seq tag has been captured, the AI software needs to determine whether the captured item is a preposition. A search is in order. We search backwards in time for an @Psy concept-number matching the $seq tag, and if we find a match, we check its $pos tag for a "6=prep" match, upon which we assign the concept-number to the $prep variable in case we decide to send the designated preposition into the EnPrep() module for inclusion in thinking.
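In stand-alone form, the backwards search might look like the following sketch; the field positions ($k[1] for the concept number, $k[6] for the part of speech) are assumptions for illustration.

#!/usr/bin/perl
use strict;
use warnings;

my @Psy;
$Psy[23] = '23,638,0,0,0,0,6,0,0,0,800,0,0,0';    # 638=IN, with $pos 6=prep
$Psy[24] = '24,800,0,0,0,0,8,0,0,0,701,638,0,0';  # 800=AM, with $seq 638

my $seq  = 638;  # the captured $seq tag
my $prep = 0;    # will hold the designated preposition, if found

# Search backwards in time for an @Psy concept matching the $seq tag.
for (my $i = $#Psy; $i >= 0; $i--) {
    next unless defined $Psy[$i];
    my @k = split /,/, $Psy[$i];
    if ($k[1] == $seq && $k[6] == 6) {  # "6=prep" part of speech
        $prep = $k[1];                  # designate it for EnPrep()
        last;
    }
}
print "designated preposition: $prep\n";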

We go back into the code for assigning the $seq tag, and in the same line of code we set the $tselp variable provisionally and temporarily equal to the $verblock time, so that we may increment the $tselp variable until it points at the true engram. We insert some code that increments the provisional $tselp time by one and uses it to "split" each succeeding conceptual @Psy array row into its fourteen constituent elements, including "$k[1]", which we check for a match with the designated $prep variable. We make several copies of the search-snippet, and it easily finds the $prep engram within just a few time-points of the verb-engram, but now we need to convert the series of search-snippets into a self-terminating loop that will terminate, Arnold, upon finding the prepositional engram in memory. Having forgotten how to code such a loop in Strawberry Perl Five, we go into another room of the Mentifex AI Lab and fetch the books Perl by Example (Quigley) and Perl Black Book (Holzner) to seek some help. We find some sample code for an until loop on page 193 of Quigley. We do not initialize the scalar $tselp at zero, because we are searching for an English preposition quite near to the already-known time-point. For the sake of safety, we insert a line of "last" escape-code in case the incrementing $tselp value exceeds the $cns value. The resulting until loop works just fine and locates the nearby English preposition for us.
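Reduced to a runnable stand-alone sketch, with assumed flag-panel field positions and sample engrams, the until loop behaves like this:

#!/usr/bin/perl
use strict;
use warnings;

my @Psy;
$Psy[20] = '20,800,0,0,0,0,8,0,0,0,701,638,0,0';  # the verb-engram, 800=AM
$Psy[21] = '21,1,0,0,0,0,1,0,0,0,0,0,0,0';        # an intervening engram
$Psy[22] = '22,638,0,0,0,0,6,0,0,0,800,0,0,0';    # the sought 638=IN engram

my $verblock = 20;        # already-known time-point of the verb
my $prep     = 638;       # the designated preposition
my $cns      = $#Psy;     # extent of the conceptual memory
my $tselp    = $verblock; # provisionally false, incremented until true
my $found    = 0;

until ($found) {
    $tselp++;
    last if $tselp > $cns;            # safety escape past end of memory
    my @k = split /,/, $Psy[$tselp];  # the fourteen constituent elements
    $found = 1 if $k[1] == $prep;     # the prepositional engram in memory
}
print "time of selection of preposition: $tselp\n";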

Next we insert a warranted call to SpreadAct() into the EnVerbPhrase() module just after the point where Speech() has been called to speak the verb. We wish to set up a routine for spreading activation throughout a prepositional phrase not only after a verb but also after a noun or an adjective (e.g. "young at heart") or an adverb (e.g. "ostensibly at random"). In SpreadAct() we send the $aud tag associated with the located preposition directly into Speech(), and the ghost.pl AI starts saying not just "I AM" but "I AM IN". We need to insert more code for finishing the prepositional phrase. By the way, these mental enhancements are perhaps making the AI Mind capable of much more sophisticated thinking than heretofore. The AI is using words without really knowing what the words mean in terms of sensory perception -- for which robot embodiment is necessary -- but the AI may nevertheless develop self-awareness on top of its innate concept of self or ego. Knowing how to use prepositions, the AI may become curious and ask its human users for all sorts of exploratory information.
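A hedged sketch of that hand-off follows, with stand-in versions of SpreadAct() and Speech(); the real modules do far more, and here the $aud auditory tag is just an index into a toy %aud hash, while the flag-panel field positions remain assumptions.

#!/usr/bin/perl
use strict;
use warnings;

my %aud = ( 301 => 'IN' );    # toy auditory memory: $aud tag => word

sub Speech {                  # stand-in for the real Speech() module
    my ($aud) = @_;
    print "$aud{$aud} ";
}

# Stand-in SpreadAct(): given the time-point of the located
# preposition, fetch its $aud tag and speak it after the verb.
sub SpreadAct {
    my ($tselp, $psy_ref) = @_;
    my @k = split /,/, $psy_ref->[$tselp];
    Speech($k[13]);           # assumed position of the $aud tag
}

my @Psy;
$Psy[22] = '22,638,0,0,0,0,6,0,0,0,800,0,0,301';  # 638=IN with $aud 301
print "I AM ";                # the verb has just been spoken
SpreadAct(22, \@Psy);         # ...and now the preposition: "I AM IN"
print "\n";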

Now in SpreadAct() we throw in a call to EnArticle(), even though we have not yet coded the elocution of the object of the preposition. The AI says "I AM IN A" without stating the object of the preposition. Let us create a new $tselo variable for "time of selection of object" so that we may use SpreadAct() to zero in on the object and send it into the Speech() module. Finally the ghost.pl AI Mind says "I AM IN A COMPUTER".
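Continuing the sketch, the $tselo step can be shown by following the preposition's $tkb tag to its object; again the engrams, concept numbers and field positions are illustrative only.

#!/usr/bin/perl
use strict;
use warnings;

my %aud = ( 305 => 'COMPUTER' );   # toy auditory memory
my @Psy;
$Psy[22] = '22,638,0,0,0,0,6,0,0,0,800,0,23,301';  # 638=IN, with $tkb 23
$Psy[23] = '23,540,0,0,0,0,5,0,1,0,638,0,0,305';   # object noun (assumed number)

my $tselp = 22;                    # time of selection of preposition
my @k     = split /,/, $Psy[$tselp];
my $tselo = $k[12];                # time of selection of object, via $tkb

@k = split /,/, $Psy[$tselo];
print "I AM IN A $aud{$k[13]}\n";  # "I AM IN A COMPUTER"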

Sunday, October 28, 2018

jmpj1028

AI Mind uses EnPrep() to think with English prepositions.

In the JavaScript AI Mind we have a general goal right now of enabling the first working artificial intelligence to talk about itself, to learn about itself, and to achieve self-awareness as a form of artificial consciousness. Two days ago we began by asking the AI such questions as "who am i" and "who are you", and the AI gave intelligent answers, but asking "where are you" crashed the program and yielded a message of "Error on page" from JavaScript. It turns out that we had coded in the ability to deal with "where" as a question by calling the EnPrep English-preposition module, but we had not created even a stub of EnPrep. The AI software failed in its attempt to call EnPrep, and the program halted. So we coded in a stub of EnPrep, and now we must flesh out the stub with the mental machinery of letting flows of quasi-neuronal association converge upon the EnPrep module to activate and fetch a prepositional phrase like "in the computer" to answer questions like "where are you".
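This entry concerns the JavaScript Mind, but the same safety measure reads almost identically in the Perl ghost.pl, sketched here: define at least a stub before anything calls the module.

# A bare stub of the English-preposition module, so that a call
# to EnPrep() can no longer halt the program; the mental machinery
# gets fleshed in later.
sub EnPrep {
    return;
}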

Our first and simplest impulse is to code in a search-loop that will find the currently most active preposition. Let us now write that code, just to start things happening. Now we have written the loop that searches for prepositions, but not for the most active one, because there are other factors to consider.

What we are really looking for, in response to "where are you" as a question, is a triple combination of the query-subject qv1psi, the query-verb qv2psi, and a preposition tied with an associative pre-tag to the same verb and the same subject. We cannot simply look for a subject and a verb linking forward to a preposition as in the phrase "to a preposition" or "in the computer", because our software currently links a verb only to its subject and to its indirect and direct objects, not to prepositions. Such an arrangement does not appear defective, because we can make the memory engram of the preposition itself do the work of making the preposition available for the generation or retrieval of a thought involving the preposition. We only need to make sure that our software will record any available pre-item so that a prepositional phrase in conceptual memory may be found again in the future. In a phrase like "the man in the street", for instance, the preposition "in" does not link backwards to a verb but rather to a noun. In this case, any verb involved is irrelevant. However, when we start out a sentence with "in this case", we have an unprecedented preposition -- one lacking any pre-item -- unless perhaps we assume that the prepositional phrase is associated with the general idea of the main verb of the sentence. For now, we may safely work with prepositions following a verb of being or of doing, so that we may ask the AI Mind questions like "where are you" or "where do you obtain ideas".
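As a rough sketch (in Perl, though this entry concerns the JavaScript version), the first leg of the triple test might look like the following; the names qv1psi and qv2psi come from the text, while the flag-panel field positions and concept numbers are assumptions.

#!/usr/bin/perl
use strict;
use warnings;

my @Psy;
$Psy[20] = '20,701,0,0,0,1,5,0,0,0,0,800,21,0';   # query-subject engram
$Psy[21] = '21,800,0,0,0,0,8,0,0,0,701,638,0,0';  # query-verb engram
$Psy[22] = '22,638,0,0,0,0,6,0,0,0,800,0,23,0';   # preposition, pre=800

my $qv1psi = 701;  # query-subject (assumed number)
my $qv2psi = 800;  # query-verb, e.g. 800=AM

# Search backwards for a preposition whose pre-tag is the query-verb.
for (my $i = $#Psy; $i >= 0; $i--) {
    next unless defined $Psy[$i];
    my @k = split /,/, $Psy[$i];
    if ($k[6] == 6 && $k[10] == $qv2psi) {  # 6=prep; $k[10] assumed pre-tag
        print "candidate prepositional phrase at t=$i\n";
        last;
    }
}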

Practical problems arise immediately. In our backwards search through the lifelong experiential memory, it is easy to insist upon finding any preposition of location linked to a particular verb engrammed as the pre of the preposition. We may then need to do a secondary search that will link a found combination of verb-and-preposition with a particular qv1psi query-subject. The problem is how to do both searches almost or completely simultaneously.

Since we are dealing with English subject-verb-object word order, we could let EnPrep() find the verb+preposition combination but not announce it until a subject-noun is found that has a tkb value equal to the search-index "i", the time of the query-verb. It might also help that the found subject must be in the dba=1 nominative case and must have the query-verb as a seq value, but the tkb alone may do the trick.
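A sketch of that tkb test, under the same assumed flag-panel layout; here the subject at t=20 has a tkb of 21, the time-point of the query-verb.

#!/usr/bin/perl
use strict;
use warnings;

my @Psy;
$Psy[20] = '20,701,0,0,0,0,5,1,0,0,0,800,21,0';   # subject: dba=1, tkb=21
$Psy[21] = '21,800,0,0,0,0,8,0,0,0,701,638,0,0';  # query-verb at t=21
$Psy[22] = '22,638,0,0,0,0,6,0,0,0,800,0,0,0';    # preposition, pre=800

my $qv1psi = 701;  # query-subject (assumed number)
my $i      = 21;   # time of the query-verb already found by EnPrep()

# Do not announce the verb+preposition find until a subject-noun
# turns up whose tkb points at the query-verb's time-point $i.
for (my $t = $i - 1; $t >= 0; $t--) {
    next unless defined $Psy[$t];
    my @k = split /,/, $Psy[$t];
    if ($k[1] == $qv1psi && $k[7] == 1 && $k[12] == $i) {
        print "subject at t=$t confirms the phrase\n";  # dba=1, tkb=$i
        last;
    }
}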

We coded in a test for any preposition with a quverb pre-tag, and we got the AI to alert us to the memory-time-point of "IN THE COMPUTER". Now we are assembling a second test in the same EnPrep() search-loop to find the qv2psi query-verb in close temporal proximity to the preposition.

We are using a new tselp variable for "time of selection of preposition", so we briefly shift our attention to describing the new variable in the Table of Variables. Now that we have found the verb preceding the preposition, next we need to implement the activation of the stored memory containing the preposition so that the AI Mind may use the stored memory to respond to "where are you" as a query. We may need to code a third if-clause into the EnPrep() backwards search to find and activate the qv1psi query-subject that is stored in collocation or close proximity to the query-verb and the selected preposition.

Now we have a problem. Since we let EnPrep() be called by the EnVerbPhrase() module, EnPrep() will not be called until a response is already being generated. We need to make sure that the incipient response accommodates EnPrep() by being the lead-up to a prepositional phrase. Perhaps we should not try to use verblock to steer a response that is already underway, but rather we should count on activation of concepts to guide the response.

Now let us try to use SpreadAct() to govern the response. After much coding, we got the AI to respond:

IN COMPUTER I AM IN COMPUTER
IN COMPUTER I AM HERE IN COMPUTER

There must somewhere be a duplicate call to EnPrep(). We eliminate the call from the Indicative() mind-module, and then we get both an unwanted response and a wanted response:

YOU ARE A MAGIC IN A COMPUTER
I AM IN A COMPUTER
Obviously the AI is not responding immediately to our "where are you" query but is instead joining an unrelated idea with the prepositional phrase. Upshot: By having SpreadAct() impose a heftier activation on the qv1psi subject of the where-are-you query, we got the AI to not speak the unrelated idea and to respond simply "I AM IN A COMPUTER". Now we need to tidy up the code and decide where to reset the variables.
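The "heftier activation" amounts to something like the following sketch, where both the activation field position and the boost value are stand-ins rather than the actual code:

#!/usr/bin/perl
use strict;
use warnings;

my @Psy;
$Psy[20] = '20,701,0,12,0,0,5,1,0,0,0,800,21,0';  # qv1psi subject engram

my $qv1psi = 701;  # query-subject (assumed number)

# Give the query-subject enough activation to out-compete any
# unrelated idea when the response is generated.
for my $t (0 .. $#Psy) {
    next unless defined $Psy[$t];
    my @k = split /,/, $Psy[$t];
    if ($k[1] == $qv1psi) {
        $k[3] += 64;                  # assumed activation field and boost
        $Psy[$t] = join ',', @k;
    }
}
print "boosted: $Psy[20]\n";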

Sunday, October 21, 2018

pmpj1021.html

First working AI uses OutBuffer to inflect English verbs.

We have been cycling through the coding of the AI Mind in Perl, in JavaScript and in Forth. Now we are back in Perl again, and we need to implement some improvements to the EnVerbGen() module that we made in the other AI programming languages.

First of all, since the English verb generation module EnVerbGen() is mainly for adding an "S" or an "ES" to a third-person-singular English verb like "read" or "teach", we should start using $prsn instead of $dba in the EnVerbGen() source code. Our temporary diagnostic code shows that both variables hold the same value, so we may easily swap one for the other. We make the swap, and the first working artificial intelligence still functions properly.

Now it is time to insert some extra code for verbs like "teach" or "wash", which require adding an "-ES" in the third person singular. Since we wrote the code during our cycle through JavaScript, we need only port the same code into Perl. EnVerbGen() now uses the last few positions in the OutBuffer() module to detect English verbs like "pass" or "tax" or "fizz" or "putz" that require "-ES" as an ending.
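A condensed sketch of the ending test: the real EnVerbGen() inspects the rightmost positions of the OutBuffer() array, but the same logic can be shown with the verb stem in a scalar.

#!/usr/bin/perl
use strict;
use warnings;

# Inflect a verb stem for the third person singular: stems ending in
# s, x, z, ch or sh take "-ES"; the rest take a plain "-S".
sub inflect_3rd_sg {
    my ($stem) = @_;
    return $stem =~ /(?:s|x|z|ch|sh)$/i ? $stem . 'ES' : $stem . 'S';
}

for my $verb (qw( READ TEACH PASS TAX FIZZ PUTZ WASH )) {
    printf "%s -> %s\n", $verb, inflect_3rd_sg($verb);
}
# READ -> READS, TEACH -> TEACHES, PASS -> PASSES, and so on.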