Cyborg AI Minds are a true concept-based artificial intelligence with natural language understanding, simple at first and lacking robot embodiment, and expandable all the way to human-level intelligence and beyond.

Sunday, October 28, 2018

jmpj1028

AI Mind uses EnPrep() to think with English prepositions.

In the JavaScript AI Mind we have a general goal right now of enabling the first working artificial intelligence to talk about itself, to learn about itself, and to achieve self-awareness as a form of artificial consciousness. Two days ago we began by asking the AI such questions as "who am i" and "who are you", and the AI gave intelligent answers, but asking "where are you" crashed the program and yielded an "Error on page" message from JavaScript. It turns out that we had coded in the ability to deal with "where" as a question by calling the EnPrep() English-preposition module, but we had not created even a stub of EnPrep(). The software failed in its attempt to call EnPrep() and the program halted. So we coded in a stub of EnPrep(), and now we must flesh out that stub with the mental machinery of letting flows of quasi-neuronal association converge upon the EnPrep() module to activate and fetch a prepositional phrase like "in the computer" in answer to questions like "where are you".
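
Even an empty stub is enough to stop the crash, since the call to EnPrep() then succeeds. A minimal sketch:

function EnPrep() {  // English-preposition module, stub only
  // Flows of quasi-neuronal association will eventually converge
  // here to activate and fetch a phrase like "in the computer".
}  // end of EnPrep() stub; "where are you" no longer yields "Error on page"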

Our first and simplest impulse is to code in a search-loop that will find the currently most active preposition. Let us now write that code, just to start things happening. Having written the loop, we find that it searches for prepositions but not yet for the most active one, because there are other factors to consider.
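
In sketch form, the first-pass loop might look as follows; the psy[] flag-panel array, the midway search-limit and the time t follow the conventions used elsewhere in the AI Mind, while the part-of-speech code of 6 for prepositions and the variable tprep are assumptions made here for illustration:

var tprep = 0;                      // time-point of a found preposition
for (var i = t; i > midway; i--) {  // search backwards through memory
  if (psy[i].pos == 6) {            // is the engram a preposition?
    tprep = i;                      // remember where it was found
    break;                          // most recent hit is enough for now
  }
}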

What we are really looking for, in response to "where are you" as a question, is a triple combination: the query-subject qv1psi, the query-verb qv2psi, and a preposition tied by an associative pre-tag to the same verb and the same subject. We cannot simply look for a subject and a verb linking forward to a preposition, as in the phrase "to a preposition" or "in the computer", because our software currently links a verb only to its subject and to its indirect and direct objects, not to prepositions. Such an arrangement does not appear defective, because we can make the memory engram of the preposition itself do the work of making the preposition available for the generation or retrieval of a thought involving that preposition. We only need to make sure that our software will record any available pre-item so that a prepositional phrase in conceptual memory may be found again in the future. In a phrase like "the man in the street", for instance, the preposition "in" links backwards not to a verb but to a noun, and any verb involved is irrelevant. When we start a sentence with "in this case", however, the preposition has no pre-item at all, unless perhaps we assume that the prepositional phrase is associated with the general idea of the main verb of the sentence. For now, we may safely work with prepositions following a verb of being or of doing, so that we may ask the AI Mind questions like "where are you" or "where do you obtain ideas".
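
To make the linkage concrete, here is a hypothetical snapshot of conceptual memory for "I AM IN THE COMPUTER", where each engram carries a pre-tag pointing backwards to its antecedent; the concept numbers and part-of-speech codes are purely illustrative:

time   word      psi  pos  pre
t-3    I         501  7    0    (pronoun, subject; no pre-item)
t-2    AM        502  8    501  (verb; pre-tag to the subject "I")
t-1    IN        503  6    502  (preposition; pre-tag to the verb "AM")
t      COMPUTER  504  5    503  (noun; object of the preposition)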

Practical problems arise immediately. In our backwards search through the lifelong experiential memory, it is easy to insist upon finding any preposition of location linked to a particular verb engrammed as the pre of the preposition. We may then need to do a secondary search that will link a found combination of verb-and-preposition with a particular qv1psi query-subject. The problem is how to do both searches almost or completely simultaneously.

Since we are dealing with English subject-verb-object word order, we could let EnPrep() find the verb+preposition combination but not announce it until a subject-noun is found that has a tkb value the same as the search-index "i" that is the time of the query-verb. It might also help that the found subject must be in the dba=1 nominative case and must have the query-verb as a seq value, but the tkb alone may do the trick.

We coded in a test for any preposition with a quverb (query-verb) pre-tag, and we got the AI to alert us to the memory-time-point of "IN THE COMPUTER". Now we are assembling a second test in the same EnPrep() search-loop to find the qv2psi query-verb in close temporal proximity to the preposition.
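
In sketch form, the two tests inside the backwards search might look as follows, with qv2psi as the query-verb concept; the proximity window of four time-points is an assumed value:

var tselp = 0;  // time of selection of preposition
var tvb = 0;    // time-point of the found query-verb
for (var i = t; i > midway; i--) {
  // first test: a preposition whose pre-tag names the query-verb
  if (psy[i].pos == 6 && psy[i].pre == qv2psi) {
    tselp = i;
    alert("EnPrep: preposition found at t=" + tselp);  // diagnostic
  }
  // second test: the query-verb engram in close temporal proximity
  if (psy[i].psi == qv2psi && tselp > 0 && (tselp - i) < 4) {
    tvb = i;  // the verb stored just before the preposition
  }
}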

We are using a new tselp variable for "time of selection of preposition", so we briefly shift our attention to describing the new variable in the Table of Variables. Now that we have found the verb preceding the preposition, next we need to implement the activation of the stored memory containing the preposition so that the AI Mind may use the stored memory to respond to "where are you" as a query. We may need to code a third if-clause into the EnPrep() backwards search to find and activate the qv1psi query-subject that is stored in collocation or close proximity to the query-verb and the selected preposition.
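
A hypothetical third if-clause, continuing the same search-loop, might activate the stored memory once the query-subject is also found; the activation figure is an assumed value:

// third test: the qv1psi query-subject in the nominative (dba=1)
// whose tkb points at the time-point of the found query-verb
if (psy[i].psi == qv1psi && psy[i].dba == 1 && psy[i].tkb == tvb) {
  psy[tselp].act += 48;  // activate the selected preposition engram
}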

Now we have a problem. Since we let EnPrep() be called by the EnVerbPhrase() module, EnPrep() will not be called until a response is already being generated. We need to make sure that the incipient response accommodates EnPrep() by being the lead-up to a prepositional phrase. Perhaps we should not try to use verblock to steer a response that is already underway, but rather we should count on activation of concepts to guide the response.

Now let us try to use SpreadAct() to govern the response. After much coding, we got the AI to respond:

IN COMPUTER I AM IN COMPUTER
IN COMPUTER I AM HERE IN COMPUTER
but there must be a duplicate call to EnPrep() somewhere. We eliminate the call from the Indicative() mind-module, and then we get both an unwanted response and a wanted response:
YOU ARE A MAGIC IN A COMPUTER
I AM IN A COMPUTER
Obviously the AI is not responding immediately to our "where are you" query but is instead joining an unrelated idea with the prepositional phrase. Upshot: by having SpreadAct() impose a heftier activation on the qv1psi subject of the where-are-you query, we got the AI to suppress the unrelated idea and to respond simply "I AM IN A COMPUTER". Now we need to tidy up the code and decide where to reset the variables.
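
In sketch form, the heftier activation inside SpreadAct() might amount to the following, with the boost of 96 as an assumed figure:

// boost the most recent nominative engram of the query-subject
for (var i = t; i > midway; i--) {
  if (psy[i].psi == qv1psi && psy[i].dba == 1) {
    psy[i].act += 96;  // heftier than the ordinary spread of activation
    break;             // the most recent engram is enough
  }
}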

Sunday, October 21, 2018

pmpj1021.html

First working AI uses OutBuffer to inflect English verbs.

We have been cycling through the coding of the AI Mind in Perl, in JavaScript and in Forth. Now we are back in Perl again, and we need to implement some improvements to the EnVerbGen() module that we made in the other AI programming languages.

First of all, since the English verb-generation module EnVerbGen() exists mainly to add an "S" or an "ES" to a third-person-singular English verb like "read" or "teach", we should start using $prsn instead of $dba in the EnVerbGen() source code. Our temporary diagnostic code shows that both variables hold the same value, so we may easily swap one for the other. We make the swap, and the first working artificial intelligence still functions properly.

Now it is time to insert some extra code for verbs like "teach" or "wash", which require adding "-ES" in the third person singular. Since we wrote the code during our cycle through JavaScript, we need only port the same code into Perl. EnVerbGen() now uses the last few positions in the OutBuffer() module to detect English verbs like "pass" or "tax" or "fizz" or "putz" that require "-ES" as an ending.
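
In sketch form, the test as written during the JavaScript cycle, assuming the OutBuffer convention that b16 holds the last character of the verb and b15 the next-to-last (in Perl the variables carry sigils, as in $b16):

var ending = "S";                  // default: READ -> READS
if (b16 == "S" || b16 == "X" || b16 == "Z"
 || (b15 == "C" && b16 == "H")     // TEACH -> TEACHES
 || (b15 == "S" && b16 == "H")) {  // WASH -> WASHES
  ending = "ES";                   // sibilant stems take -ES
}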

Thursday, October 11, 2018

jmpj1011

JavaScript AI Mind uses EnVerbGen() for English verb-form inflections.

The JavaScript tutorial version of the first working artificial intelligence is becoming more sophisticated than ever. With roughly fifty mind-modules, the Strong AI advances the State of the Art first in one area, and then serendipitously in another. For instance, the ability of the AI Mind to engage in automated reasoning with logical inference leads to a question-and-answer session between human minds and their incipient overlords, i.e., the current archetypes of the future Artificial Super-Intelligence (ASI). When the human user has confirmed or negated an inferred conclusion from the InFerence() module, the AI assigns a heightened truth-value to the positive or negative knowledge remaining in the AI memory. Then the AI states the new knowledge in its positive or negative formulation. A negated inference comes out something like "GOD DOES NOT PLAY DICE". A validated inference becomes a simple declarative sentence like "JOHNNY READS BOOKS", which requires the AI Mind to choose the correct form of the verb "read".

Because we code the first working artificial intelligence not only in English but also in Russian, we found it necessary several years ago to create the RuVerbGen() module for Russian verb-generation. When the ghost.pl AI cannot find a needed Russian verb-form, it simply cobbles one together from the stem of the Russian verb and the inflectional endings which complete a Russian verb. We avoided this problem in English for the last six years by simply ignoring it, but now the AI Mind needs to imitate the RuVerbGen() module with the EnVerbGen() module for English verb-generation. Just to change "God does not play dice" to "God plays dice" requires attaching an inflectional "S" to the stem or the infinitive form of the verb "play". As we code the EnVerbGen() module based on grammatical parameters, we encounter problems because the software needs to know the grammatical person and the grammatical number of the subject of an inferred idea in order to think a thought like "God plays dice" or "Johnny reads books".

Because the InFerence() module has not been storing the grammatical number of the English noun serving as the subject of a silent inference, our brand-new EnVerbGen() module has not been able to generate the third-person singular verb-form necessary for stating a validated inference like "Johnny reads books" or "Fortune favors fools" -- which was originally "Fortuna favet fatuis" in Latin. The artificial general intelligence (AGI) has become so sophisticated in its resemblance to human thinking that we need to change the InFerence() module to accommodate the requirements of the EnVerbGen() module.
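
Reduced to a sketch, the parameter-driven choice inside EnVerbGen() comes down to the following, with prsn and num as the person and number parameters (regular verbs only; the -ES handling is described in the October 21 entry above):

function inflectVerb(stem, prsn, num) {
  if (prsn == 3 && num == 1) {  // third person singular
    return stem + "S";          // PLAY -> PLAYS; READ -> READS
  }
  return stem;                  // other persons and numbers use the stem
}
// inflectVerb("PLAY", 3, 1) yields "PLAYS" for "GOD PLAYS DICE"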

We make the necessary changes and we code EnVerbGen() to deal not with Russian but with English verbs. Below is a sample dialog between the AI and the human user:

Human: andru is professor
Robot: DOES ANDRU TEACH STUDENTS
Human: yes
Robot: THE ANDRU TEACHES THE STUDENTS
Human:
Robot: STUDENTS READ BOOKS

Tuesday, October 09, 2018

pmpj1009

Perl Ghost AI uses EnVerbGen() for English verb-form inflections.

In the middle of coding the ghost278.pl AI we had to go and stand in front of the television and watch Leopold Stokowski in 1969 conducting the finale of Beethoven's Symphony No. 5 -- the one they sent into outer space as a message from Earth. Now back at the computer, for the first time we are trying to implement the EnVerbGen() module for English verb generation. We have gotten the InFerence() module to generate an inference when we type in "anna is a student", because the AI Mind knows that students read books. The AskUser() module seeks to verify or validate the inference by asking us, "DOES ANNA READ THE BOOKS". When we answer "no", the AI says, "THE ANNA DOES NOT READ THE BOOKS". When we answer "yes", the ghost in the machine issues the faulty output of "THE ANNA READ THE BOOKS", which sounds more like an exhortation than a statement of confirmed fact with a high truth-value. We need a way to get the AI to use the third-person-singular form "READS" with the singular subject. To do so, before Leopold and Ludwig interrupted us, we were embedding diagnostic messages in the EnVerbPhrase() module, trying to determine how the ghost AI was able to say "READ" as if it were the proper verb-form. The whole idea of EnVerbGen() in English or of RuVerbGen() in Russian is for the verb-phrase module to seek a particular verb-form based on parameters of person and number, and to call EnVerbGen() if the desired verb-form is not already available in auditory memory. Somehow the existing Perlmind is finding the verb "read" but not the correct form of the verb.

We discover that we can get the EnVerbPhrase() module to call EnVerbGen() when we tighten up the search-by-parameter for the correct verb form. Since EnVerbGen() is not coded yet, we get an output of "THE ANNA ERROR THE BOOKS", with "ERROR" filling in for the lacking "READS" form.

Then we need an $audbase value that we can send into EnVerbGen() as the start of the verb that needs an inflectional ending. We use a trick in the EnVerbPhrase() module to get either a second-class or a first-class (infinitive) $audbase. We test first for any form at all of the verb that has an auditory engram that can serve as a second-class $audbase, because the verb-form may be defective in some way. In the very next line of code, we test for an infinitive form of the verb having an auditory engram as a first-class $audbase, because an infinitive is easier to manipulate than some defective form of the verb.
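
A JavaScript-flavored sketch of the two tests (the Perl variables carry sigils, as in $audbase), with verbpsi as the sought verb concept, aud as the recall-vector into auditory memory, and infin as a hypothetical flag marking an infinitive engram:

var audbase = 0;   // start of the verb in auditory memory
var audinfin = 0;  // first-class candidate: an infinitive form
for (var i = t; i > midway; i--) {
  if (psy[i].psi == verbpsi && psy[i].aud > 0) {
    audbase = psy[i].aud;   // second-class: any form of the verb at all
  }
  if (psy[i].psi == verbpsi && psy[i].infin == 1 && psy[i].aud > 0) {
    audinfin = psy[i].aud;  // first-class: an infinitive form
  }
}
if (audinfin > 0) audbase = audinfin;  // prefer the infinitive when found

Because the infinitive candidate is kept separately and preferred at the end, a defective second-class form serves only as a fallback.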

We copied the bulk of the Russian RuVerbGen() into the English EnVerbGen() and then went through the mutatis-mutandis process of making the necessary changes. At first we got "REAS" instead of "READS", because the ported code was substituting the inflectional ending in place of the verb's final character, as Russian inflection requires, instead of simply adding it. By removing the substitution-code, we obtained the full verb "READS". At a later time we must code in the handling of verbs like "teach" or "push" which require an "-ES" ending.
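
The difference between substituting and adding, in sketch form:

var stem = "READ";
var right = stem + "S";               // adding: READS
var wrong = stem.slice(0, -1) + "S";  // substituting the final character, as the ported Russian code did: REAS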