Cyborg AI Minds are true concept-based artificial intelligences with natural language understanding, simple at first and lacking robot embodiment, yet expandable all the way to human-level intelligence and beyond.

Sunday, September 30, 2018

pmpj0930

Ghost AI says when it does not know the answer to a query.

When the ghost.pl AI considers a what-query such as "what do kids make", some mind-module must call the SpreadAct() module to handle the what-query, but which module? We could say that the Indicative() module should make the call to SpreadAct() just before making a response in the indicative mood, but perhaps a response may need to be uttered in a mood other than indicative. The AI Mind might wish to answer the query with an imperative command like "DO NOT BOTHER ME". Or the AI might not understand the what-query and might want to ask a question about it. So perhaps we should have the Sensorium() module call SpreadAct() to respond to a what-query.

We have now introduced a new technique for answering "I DO NOT KNOW" in response to a what-query for which the AI Mind does not find an answer. The AI briefly elevates the $tru truth-value and the activation-level of the idea "I DO NOT KNOW" as stored in the MindBoot() knowledge base (KB), so that the Indicative() module expresses the momentarily true idea. Immediately afterward, the AI returns the $tru truth-value to zero.
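The truth-value technique above can be sketched in a few lines of tutorial JavaScript. This is a minimal illustration with hypothetical simplified structures: the real ghost.pl keeps its ideas in the @Psy conceptual array with $tru and activation tags, whereas here a small array of idea records stands in for the knowledge base.

```javascript
// Sketch of the "I DO NOT KNOW" technique: the default answer sits in the
// KB at zero truth-value until a what-query finds no answer; then it is
// briefly elevated and spoken, and its truth-value is reset to zero.

const kb = [
  { idea: "I AM A PERSON", tru: 40, act: 0 },
  { idea: "I DO NOT KNOW", tru: 0,  act: 0 },  // dormant default answer
];

function speakMostTrueActiveIdea() {
  // stand-in for the Indicative() module: pick the most true-and-active idea
  let best = kb[0];
  for (const r of kb) if (r.tru + r.act > best.tru + best.act) best = r;
  return best.idea;
}

function answerWhatQuery(found) {
  if (!found) {
    const dunno = kb.find(r => r.idea === "I DO NOT KNOW");
    dunno.tru = 64;   // momentarily true
    dunno.act = 48;   // highly activated so Indicative() selects it
    const reply = speakMostTrueActiveIdea();
    dunno.tru = 0;    // immediately return the truth-value to zero
    dunno.act = 0;
    return reply;
  }
  return speakMostTrueActiveIdea();
}
```

Because the elevation is undone right after the utterance, the "I DO NOT KNOW" idea cannot leak into later, unrelated chains of thought.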

Friday, September 28, 2018

pmpj0928

Perl AI improves Russian MindBoot and introduces RuIndicative module.

In the ghost275.pl AI we are consolidating the Russian-language knowledge base (KB) directly below the English-language KB near the beginning of the MindBoot sequence, so that we may add a new item without complicating a future re-location of the Russian knowledge base.

We should probably stub in the RuIndicative() module, so that it will exist not only in our AI diagrams but also in the software itself.

When we start the AI out thinking in Russian, we have been encountering a bug that shows up with the second sentence of output. Perl complains about the use of "uninitialized value in concatenation or string" in the PsiDecay() module. To troubleshoot, we go through the PsiDecay concatenation of associative tags in the @Psy conceptual array and we replace the various variables one by one with a numeric value, to see if the complaint disappears. The complaint disappears when we replace the $k[2] variable for the $hlc human-language code with a numeric value of one (1) instead of "en" for English or "ru" for Russian.

Perl continues to complain about uninitialized values when we have the Perlmind think in Russian, but not when it thinks in English. Therefore we know that the lurking bug is not in the PsiDecay() module or in the InStantiate() module, even though the bug manifests itself in those modules. We spent hours on each of the past two days searching for an elusive bug which must certainly be hiding in one or more of the Russian-language modules. Therefore it is time to isolate the bug by isolating the Russian-language modules. First let us look at the RuNounPhrase() module. We insert some diagnostic messages and we see that the bug manifests itself when program-flow goes back up to the RuThink() module which calls the PsiDecay() module.

Since we catch sight of the bug when PsiDecay() is called, let us temporarily insert some extra calls to PsiDecay() and see what happens. First we make an extra call to PsiDecay() from the end of RuNounPhrase(). Huh?! Now we get two complaints from Perl about uninitialized values showing up for a program line-number belonging to a concatenation in the PsiDecay() module. Let us also try an extra call to PsiDecay() from the RuVerbPhrase() module. We do so, and now we get three complaints from Perl about uninitialized values. However, the glitch does not seem to be occurring during the first call from RuIndicative() to RuNounPhrase(), but rather during or after the call to RuVerbPhrase(). For extra clarity, let us have the start of RuVerbPhrase() make a call to PsiDecay(). We do so, and there is no concomitant complaint from Perl about uninitialized values. Therefore, the subject-choosing part of RuNounPhrase() must not be the source of the problem, but the direct-object portion of RuNounPhrase() is still under suspicion.

Now we are discovering something strange. Towards the end of RuNounPhrase() there is a concatenation which is supposed to impose inhibition upon a noun selected by the module, as identified by the $tsels variable which pertains to the "time of selection of the subject", and which has been used earlier in RuNounPhrase() to indeed inhibit the selected subject. However, a diagnostic message reveals to us AI Mind maintainers that the $tsels variable has been zeroed out by the end of RuNounPhrase() and that therefore the software is trying to concatenate the associative tags purportedly available at a zero time-point -- where there are no associative tags. Let us see what happens when we comment out the suspicious concatenation code. We do so, and we get no change in the reporting of the bug. Let us see if the earlier inhibition in the RuNounPhrase() module is causing any problems. First off, a diagnostic message shows us that the $tsels variable has been zeroed out, or perhaps never loaded, even at the time of the first inhibition in the RuNounPhrase() module. Let us comment out the concatenation of the first inhibition and see what happens.

By the way, if there are any secret AI labs in Russia or elsewhere working on the further development or evolution of these AI Minds in Perl and in tutorial JavaScript and in Forth for intelligent humanoid robots, this journal entry shows that the AI coding problems are indeed tractable and soluble, given enough persistence and effort.

Now, when we have commented out both the inhibitional concatenations in the RuNounPhrase() module, we still get the same complaints from Perl about uninitialized values, and we notice in the diagnostic display of the memory-array contents that the Russian nouns are still being inhibited -- but where? Oh, the InStantiate() module is imposing a trough of inhibition. Let us do another commenting out and see what happens. Nothing happens, and the inhibition is still occurring.

As we go through RuVerbPhrase() and comment out the various concatenations, the complaint from Perl about uninitialized values suddenly disappears when we comment out the concatenation where Russian verbs are competing to be selected as the most active verb. We also notice that a comma seems to be missing at the end of the first line in the two-line concatenation. When we insert the missing comma and we do not comment out the concatenation, there are no further complaints from Perl about uninitialized values. Of course, we just spent three days wracking our brains, trying to figure out what was wrong, when the problem was one single missing comma. Now it is time to clean up the Perlmind code and upload it to the Web.

Sunday, September 23, 2018

mfpj0923

MindForth AI beeps to request input from any nearby human.

In MindForth we attempt now to update the AudMem and AudRecog mind-modules as we have recently done in the ghost.pl Perl AI and in the tutorial JavaScript AI for Internet Explorer. Each of the three versions of the first working artificial intelligence was having a problem in recognizing both singular and plural English noun-forms after we simplified the Strong AI by using a space stored after each word as an indicator that a word of input or of re-entry had just come to an end.

In AudMem we insert a Forth translation of the Perl code that stores the audpsi concept-number one array-row back before an "S" at the end of a word. MindForth begins to store words like "books" and "students" with a concept-number tagged to both the singular stem and to the plural word. We then clean up the AudRecog code and we fix a problem with nounlock that was interfering with answers to the query of "what do you think".

Next we implement the Imperative module to enable MindForth to sound a beep and to say to any nearby human user: "TEACH ME SOMETHING."

Friday, September 21, 2018

jmpj0921

Improving auditory recognition of singular and plural noun-forms.

The JavaScript Artificial Intelligence (JSAI) is currently able to recognize the word "book" in the singular number but not "books" in the plural number. It is because the AudRecog() mind-module is not recognizing the stem "book" within the inflected word "books". To correct the situation, we must first update the AudMem() module so that it will impose an audpsi tag not only on the final "S" in a plural English noun being stored, but also one space back on the final character or letter of the stem of the noun.
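The stem-tagging update can be sketched as follows. This is a simplified illustration with a hypothetical row layout: the real auditory memory stores one character per time-point with several tags per row, but the essential move is the same -- when a stored word ends in "S", the concept number is written both at the final "S" and one row back, at the last letter of the stem.

```javascript
// Sketch of AudMem stem-tagging: tag the end of the full word with the
// concept number, and, for a possible inflectional "S", also tag the
// last letter of the stem one row back in the auditory array.

function audMemStore(ear, word, audpsi) {
  const start = ear.length;
  for (const ch of word) ear.push({ ch, audpsi: 0 });
  ear.push({ ch: " ", audpsi: 0 });        // space marks the end of the word
  const last = start + word.length - 1;    // index of the final letter
  ear[last].audpsi = audpsi;               // tag the full inflected word
  if (word.endsWith("S") && word.length > 1) {
    ear[last - 1].audpsi = audpsi;         // tag the stem before the "S"
  }
  return ear;
}
```

Storing "BOOKS" as concept 540 in this sketch tags both the "K" of the stem "BOOK" and the final "S", so a later input of either form can activate the same concept.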

We copy the pertinent code from the AudMem() module in the ghost.pl Perl AI and the JavaScript AI begins to store the stem-tag, but only when the inflected word is recognized so that AudRecog() produces an audpsi recognition-tag.

Now we have a problem because the AI is keeping the audpsi of "800" for "IS" and attaching it mistakenly to the next word being stored by the AudMem() module. We fix the problem.

Next we implement the Imperative() module to enable the AI Mind to order any nearby human user: "TEACH ME SOMETHING."

Wednesday, September 19, 2018

pmpj0919

Students may teach the first working artificial intelligence.

In the ghost274.pl version of the Perlmind AI, we are having not Volition() but rather EnThink() call the Imperative() module to blurt out the command "TEACH ME SOMETHING". We are also trying to have Imperative() be the only module that sounds a beep for the human user, for several reasons. Although we were having the AskUser() module sound a beep to alert the user to the asking of a question, a beep can be very annoying. It is better to reserve the beep or "bell" sound for a special situation, namely the time when there has been no human input for an arbitrary period as chosen by the AI Mind maintainer and when we wish to let the Ghost in the Machine call out for some attention.
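The idle-alert idea above can be sketched in a few lines. This is a hypothetical sketch, not the actual module code: the names and the five-minute period are assumptions, since the silence period is an arbitrary choice by the AI Mind maintainer.

```javascript
// Sketch of reserving the beep for the Imperative() module: the AI stays
// quiet while the human is engaged, and only blurts out its command, with
// a bell character, after a maintainer-chosen period of silence.

const IDLE_LIMIT_MS = 5 * 60 * 1000;  // assumed five-minute silence period
let lastInputTime = Date.now();

function onHumanInput() { lastInputTime = Date.now(); }

function imperative(now) {
  if (now - lastInputTime >= IDLE_LIMIT_MS) {
    return "\u0007TEACH ME SOMETHING";  // \u0007 is the ASCII bell (beep)
  }
  return null;  // stay quiet; the human has been heard from recently
}
```

Keeping the beep in one module, behind one timer, is what prevents the annoyance of a bell sounding every time AskUser() happens to pose a question.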

We are also eager for Netizens to set the potentially immortal ghost.pl AI running for long periods of time, both as a background process on a desktop computer or a server, and as part of a competition to see who can have the longest running AI Mind.

If a high school or college computer lab has the ghost.pl AI running on a machine off in the corner while the students are tending to other matters, a sudden beep from the AI Mind may cause students or visitors to step over to the AI and see what it wants. "TEACH ME SOMETHING" is a very neutral command, not at all like, "Shall we play a game? How about GLOBAL THERMONUCLEAR WAR?"

The teacher or professor could let any student respond to the beep by adding to the knowledge base of the AI Mind. Of course, clever students with a knowledge of Perl could put the AI Mind out on the Web for any and all visitors to interact with.

Sunday, September 16, 2018

pmpj0916

Auditory recognition in the first working artificial intelligence

In the AudRecog() module of the ghost.pl free AI software, we need to figure out why a plural English noun like 540=BOOKS is not being stored with an $audpsi tag of 540 both after the stem of "book" and after the end of "books". The stem of any word stored in auditory memory needs an $audpsi tag so that a new input of the word in the future will be recognized and will activate the same underlying concept.

In AudMem() we have added some code that detects a final "S" on a word and stores the $audpsi concept number both at the end of the word in auditory memory, and also one row back in the @ear auditory array, in case the "S" is an inflectional ending. We leave for later the detection of "-ES" as an inflectional ending, as in "TEACHES" or "BEACHES".

In AudRecog() we tweak some code involving the $prc tag for provisional recognition, and the first working artificial intelligence in Perl does a better job at recognizing both singular and plural forms of the same word representing the same concept.

Wednesday, September 12, 2018

pmpj0912

Ask the first working artificial intelligence what it thinks.

Houston, we have a problem. The ghost.pl AI Mind, which runs in the freely available Strawberry Perl 5 -- download it now -- is not properly answering the question of "what do you think". The JSAI (JavaScript Artificial Intelligence) easily answers the same question with "I THINK THAT I HELP KIDS". So what is the Perl AI doing wrong that the JavaScript AI is doing right?

The problem seems to lie in the SpreadAct() module. We notice one potential problem right away. SpreadAct() in Perl is still using "$t-12" as a rough approximation for when to start a backwards search for previous knowledge about the subject-noun $qv1psi of a what-query, whereas the JSAI uses the more exact $tpu for the same search. So let us start using the penultimate time $tpu, which excludes the current-most input, and see if there is any improvement. There is no improvement, so we test further for both $qv1psi and $qv2psi, which are the subject-noun and the associated verb conveyed in the what-query.
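The backward search can be sketched as follows. This is a simplified illustration under assumed structures: the real @Psy rows carry many associative tags, while here each row holds only a subject and a verb concept-number.

```javascript
// Sketch of the SpreadAct() backward search: starting at the penultimate
// time tpu (which excludes the current-most input), scan backwards for a
// stored idea whose subject and verb match the what-query's qv1psi and
// qv2psi concept-numbers.

function spreadActSearch(psy, tpu, qv1psi, qv2psi) {
  for (let t = tpu; t >= 0; t--) {
    const row = psy[t];
    if (row && row.subj === qv1psi && row.verb === qv2psi) {
      return t;  // time-point of prior knowledge to activate
    }
  }
  return -1;     // no answer found in the knowledge base
}
```

Starting at $tpu rather than at the current time matters because otherwise the search would immediately "find" the what-query itself as its own answer.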

SpreadAct() easily responds correctly to what-queries for which there is an answer ready in the knowledge base (KB), such as "I AM A PERSON" in response to "what are you". However, when we ask "what think you" or "what do you think", there is no pre-set answer, and the AI is supposed to generate a response starting with "I THINK" followed by the conjunction "that" and a statement of whatever the AI Mind is currently thinking.

From diagnostic messages we learn that program-flow is not quickly transiting from SpreadAct() to EnThink(). The AI must be searching through the entire MindBoot() sequence and not finding any matches. When the program-flow does indeed pass through EnThink() to Indicative() to EnNounPhrase(), there are no pre-set subjects or verbs, but rather there are concepts highly activated by the SpreadAct() module. So EnNounPhrase() must find the highly activated pronoun 701=I in order to start the sentence "I THINK..." in response to "what do you think".

Now we discover that the JavaScript version of EnNounPhrase() has special code for a topical response to "what-think" queries. In the course of AI evolution, it may be time now to go beyond such a hard-coded response and instead to let the activated "think" concept play its unsteered, unpredetermined role, which will happen not in EnNounPhrase() but in EnVerbPhrase().

It is possible that EnNounPhrase() finds the activated subject 701=I but then unwarrantedly calls the SpreadAct() module. However, it turns out that EnVerbPhrase() is making the unwarranted call to the SpreadAct() module, which we now prevent by letting it proceed only if there is no what-query being processed as evidenced by a $whatcon flag set to zero.

The early part of EnVerbPhrase() in the JavaScript AI has some special code for dealing with "what-think" queries. In the ghost.pl AI, let us try to insert some similar code but without it being geared specifically to the verb "think". We would like to enable responses to any generic verb of having an idea, such as "think" or "know" or "fear" or "imagine" or "suspect" and so forth.

By bringing some code from the JavaScript EnVerbPhrase() into the Perl EnVerbPhrase, but with slight changes in favor of generality, we get the ghost.pl AI to respond "I THINK". Next we need to generate the conjunction "that". But first let us remark that the AI also says "I KNOW" when we ask it "what do you know", so the attempt at generality is paying off. Let us try "what do you suspect". It even says "I SUSPECT". It also works with "what do you fear". We ask it "what do you suppose" and it answers "I SUPPOSE".
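The generality described above can be sketched as a check against a class of verbs rather than against one hard-coded verb. This is an illustrative sketch with assumed names: the real code tests concept-numbers in the conceptual array, not English strings.

```javascript
// Sketch of responding generically to any verb of having an idea:
// instead of hard-coding "think", any member of the class triggers the
// "I <VERB> THAT ..." response frame.

const IDEA_VERBS = new Set([
  "THINK", "KNOW", "FEAR", "IMAGINE", "SUSPECT", "SUPPOSE"
]);

function respondToWhatQuery(verb, currentThought) {
  if (IDEA_VERBS.has(verb)) {
    // the ConJoin() step supplies "THAT" before the current thought
    return "I " + verb + " THAT " + currentThought;
  }
  return null;  // fall back to an ordinary knowledge-base search
}
```

So asking "what do you suspect" yields "I SUSPECT THAT ..." with no verb-specific code, which is exactly why the same change pays off for "know", "fear" and "suppose" at once.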

We still have a problem, Houston, because EnVerbPhrase() is calling EnNounPhrase() for a direct object instead of returning to Indicative() as a prelude to calling the ConJoin() module to say "I THINK THAT...." We set up some conditional testing to end that problem.

Friday, September 07, 2018

mfpj0907

Updating the EnArticle module for inserting English articles.

Today we update the EnArticle module for English articles in the MindForth first working artificial intelligence. We previously did the update somewhat unsatisfactorily in the ghost.pl AI in Perl, and then much more successfully in the tutorial JavaScript AI. We anticipate no problems in the MindForth update. As an initial test, we enter "you have a book" and after much unrelated thinking, the AI outputs "I HAVE BOOK" without inserting an article.

We trigger an inference by entering "anna is woman". In broken English, the AskUser module responds, "DO ANNA HAS THE CHILD", which lets us see that EnArticle has been called. We reply "no". MindForth opines, "ANNA DOES NOT HAVE CHILD".

We discover that the EnNounPhrase module of recent versions has not been calling the EnArticle module, so we correct that situation. We also notice that the input of a noun and its transit through InStantiate do not involve a call to EnArticle, so we insert into InStantiate some code to make EnArticle aware of the noun being encountered.

Thursday, September 06, 2018

jmpj0906

Improving EnArticle mind-module in first working artificial intelligence

The JavaScript artificial intelligence (JSAI) is not reliably transferring the usx values from InStantiate() to the EnArticle() module for inserting an English article into a thought, so we must debug the first working artificial intelligence in troubleshooting mode. We quickly see that the EnNounPhrase() module is calling EnArticle() for direct objects but not for all nouns in general. We also notice that we need to have EnNounPhrase() transfer the usx value.

We encounter and fix some other problems where the nphrnum value is not being set, as required for EnArticle() to decide between using "a" or "the".

Although we get the usx value to be transferred to the us1 value, we need a way to test usx not against a simultaneous value of us1 but rather against a recent value of us1. One solution may be to delay the transfer of usx by first testing in the EnArticle() module for equality between usx and us1 before we actually pass the noun-concept value of usx to us1. In that way, we will be testing usx against the old, i.e. recent, values of the us1-us7 variables and not against the current value, which would automatically be equal. Then, after the test for equality, we pass the actual, current value of usx.

We now have a JavaScript AI that works even better than the ghost.pl AI in Perl -- until we update the Perl AI. The JSAI now uses "a" or "the" rather sensibly, except when it now says "YOU ARE A MAGIC", because the default article for a non-plural noun is the indefinite article "a". Btw (by the way), today at a Starbucks store in Seattle WA USA we bought a green Starbucks gift card that says "You'Re MagicAL", because it reminds us of a similar idea in the AI MindBoot() sequence.

Monday, September 03, 2018

pmpj0903

Upstream variables make EnArticle insert the definite article.

We attempt now to code the proper use of the definite English article "the" in a conversation where one party mentions a noun-item and the AI Mind needs to refer to the item as the one currently under discussion. We need to implement a rotation of the upstream variables from $us1 to $us7, so that any input noun will remain in the AI consciousness as something recently mentioned upstream in the conversation. We had better create a $usx variable for the InStantiate() module to transfer the concept number of an incoming noun to whichever $us1 to $us7 variable is up next in the rotation of up to seven recently mentioned noun-concepts.

We start testing the EnArticle() module to see if it is receiving a $usx transfer-variable from the InStantiate() module. At first it is not getting through. We change the conditions for calling EnArticle and a line of diagnostic code shows that $usx is getting through. Then we set up a conditional test for EnArticle() to say the word "the" if an incoming $usx matches the $us1 variable. We start seeing "THE" along with "A" in the output of our ghost.pl AI Mind, but there is not yet a rotation of the us1-us7 variables or a forgetting of any no-longer-recent concept.

We should create a rotating $usn number-variable to be a cyclic counter from one to seven and back to one again so that the upstream us1-us7 variables may rotate through their duty-function. We let $usn increment up to a value of seven over and over again. We pair up the $usn and us1-us7 variables to transfer the $usx value on a rotating basis. At first a problem arises when the ghost.pl AI says both "A" and "THE", but we insert a "return" statement so that EnArticle() will say only "A" and then skip the saying of "THE". We enter "you have a book" and a while later the AI outputs, "I HAVE THE BOOK."
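The rotation and the delayed transfer can be sketched together. This is an illustrative sketch under assumed structures: a seven-element array stands in for the $us1-$us7 scalars, and a zero-based index stands in for the $usn cyclic counter.

```javascript
// Sketch of the upstream variables: InStantiate() hands each incoming
// noun-concept over as usx; EnArticle() tests usx against the previously
// stored concepts BEFORE storing it (so usx is never compared against
// itself), then transfers it into the next slot in the rotation.

const us = new Array(7).fill(0);   // seven upstream noun-concept slots
let usn = 0;                       // cyclic counter over the slots

function enArticle(usx) {
  const known = us.includes(usx);  // was this concept mentioned upstream?
  us[usn] = usx;                   // now transfer the current value
  usn = (usn + 1) % 7;             // cycle through the seven slots
  return known ? "THE" : "A";      // say exactly one article, then return
}
```

The first mention of a noun-concept draws the indefinite "A"; a repeat mention within the last seven nouns draws the definite "THE", and older concepts are forgotten as their slots are overwritten.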

Sunday, September 02, 2018

pmpj0902

Using $t++ for discrete English and Russian MindBoot vocabulary.

In the ghost269.pl Perlmind it is time to switch portions of the MindBoot() sequence from hard-coded time-points to t-incremental time-points, as we have done already in the JavaScript AI and in the MindForth AI. We have saved the Perl AI for last because there are both English and Russian portions of the knowledge base (KB). We will put the hardcoded English KB first to be like the other AI Minds. Then we will put the hardcoded Russian KB followed by the t-incremental Russian vocabulary words so as to form a contiguous sequence. Finally we will put the t-incremental English vocabulary.

Saturday, September 01, 2018

mfpj0901

Switching MindBoot from all hardcoded to partially t-increment coded.

Today in accordance with AI Go FOOM we need to start switching the MindBoot sequence from using only hardcoded time-points to using a hardcoded knowledge base followed by single-word vocabulary more loosely encoded with t-increment time-points. The non-hardcoded time-points will permit the Spawn module to make a copy of a running MindForth program after adding any recently learned concepts to the MindBoot sequence. It will also be easier for an AI Mind Maintainer to re-arrange the non-hardcoded sequence or to add new words to the sequence.
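The two storage styles can be contrasted in a short sketch. This is an illustrative sketch with assumed names, storing one word per time-point; the real MindBoot stores each word character by character with many tags.

```javascript
// Sketch of the two MindBoot styles: the knowledge base keeps explicit
// hardcoded time-points, while the single-word vocabulary simply advances
// a running time counter, so new words can be appended without
// renumbering anything that follows.

let t = 0;                 // current time-point in the memory array
const psy = [];

function storeAt(time, word) {   // hardcoded style: explicit time-point
  psy[time] = word;
  if (time > t) t = time;
}

function storeNext(word) {       // t-increment style: just advance t
  t = t + 1;
  psy[t] = word;
}

// hardcoded KB first, then loosely encoded vocabulary appended afterward
storeAt(1, "I");
storeAt(2, "AM");
storeAt(3, "PERSON");
storeNext("BOOK");       // lands wherever t has advanced to
storeNext("STUDENT");    // and the next slot after that, no renumbering
```

Because the vocabulary section never names absolute time-points, a Spawn-style copy of the running Mind can splice newly learned words onto the end of the boot sequence without touching the hardcoded knowledge base above it.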