Mentifex AI Minds are a true concept-based artificial general intelligence, simple at first and lacking robot embodiment, but expandable all the way to human-level intelligence and beyond.

Sunday, September 23, 2018


MindForth AI beeps to request input from any nearby human.

In MindForth we now attempt to update the AudMem and AudRecog mind-modules as we have recently done in the Perl AI and in the tutorial JavaScript AI for Internet Explorer. Each of the three versions of the first working artificial intelligence had a problem recognizing both singular and plural English noun-forms after we simplified the Strong AI by using a space, stored after each word, as an indicator that a word of input or of re-entry had just come to an end.

In AudMem we insert a Forth translation of the Perl code that stores the audpsi concept-number one array-row back before an "S" at the end of a word. MindForth begins to store words like "books" and "students" with a concept-number tagged to both the singular stem and to the plural word. We then clean up the AudRecog code and we fix a problem with nounlock that was interfering with answers to the query of "what do you think".

Next we implement the Imperative module to enable MindForth to sound a beep and to say to any nearby human user: "TEACH ME SOMETHING."

Friday, September 21, 2018


Improving auditory recognition of singular and plural noun-forms.

The JavaScript Artificial Intelligence (JSAI) is currently able to recognize the word "book" in the singular number but not "books" in the plural number. This is because the AudRecog() mind-module is not recognizing the stem "book" within the inflected word "books". To correct the situation, we must first update the AudMem() module so that it will impose an audpsi tag not only on the final "S" in a plural English noun being stored, but also one space back, on the final character of the stem of the noun.
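The stem-tagging idea can be sketched in JavaScript. This is a minimal illustration, not the actual AudMem() code: the function name audMem, the two-field row layout, and the uppercase storage are simplifying assumptions, since the real auditory array carries more fields per row.

```javascript
// Hypothetical sketch of AudMem stem-tagging: when a stored word ends
// in "S", the audpsi concept number is written both on the final "S"
// and one row back, so the bare stem can be recognized on its own.
function audMem(word, audpsi) {
  // Each row: [character, audpsi-tag]; the real array holds more fields.
  const ear = [];
  for (const ch of word.toUpperCase()) {
    ear.push([ch, 0]);
  }
  const last = ear.length - 1;
  ear[last][1] = audpsi;            // tag the end of the full word
  if (ear[last][0] === "S" && last > 0) {
    ear[last - 1][1] = audpsi;      // tag the stem one row back
  }
  return ear;
}
```

With this rule, storing "books" under concept 540 tags both the final "S" and the "K" of the stem, so a later input of the singular "book" can still match a tagged row.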

We copy the pertinent code from the AudMem() module in the Perl AI and the JavaScript AI begins to store the stem-tag, but only when the inflected word is recognized so that AudRecog() produces an audpsi recognition-tag.

Now we have a problem because the AI is keeping the audpsi of "800" for "IS" and attaching it mistakenly to the next word being stored by the AudMem() module. We fix the problem.

Next we implement the Imperative() module to enable the AI Mind to order any nearby human user: "TEACH ME SOMETHING."

Wednesday, September 19, 2018


Students may teach the first working artificial intelligence.

In this version of the Perlmind AI, we have not Volition() but rather EnThink() call the Imperative() module to blurt out the command "TEACH ME SOMETHING". We are also trying to make Imperative() the only module that sounds a beep for the human user, for several reasons. Although we were having the AskUser() module sound a beep to alert the user to the asking of a question, a beep can be very annoying. It is better to reserve the beep or "bell" sound for one special situation: when there has been no human input for an arbitrary period chosen by the AI Mind maintainer, and we wish to let the Ghost in the Machine call out for some attention.
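The beep-reservation policy above can be sketched as follows. This is a hypothetical illustration only: the 60-second threshold, the function names, and the use of the ASCII bell character are all assumptions left to the AI Mind maintainer.

```javascript
// Sketch of the policy: only Imperative() beeps, and only after a
// maintainer-chosen period of silence with no human input.
const SILENCE_LIMIT_MS = 60000;   // arbitrary period chosen by the maintainer
let lastInputTime = 0;

function onUserInput(now) {
  lastInputTime = now;            // any input resets the silence clock
}

function imperative(now) {
  if (now - lastInputTime >= SILENCE_LIMIT_MS) {
    // "\u0007" is the ASCII bell; a browser port might play a sound instead.
    return "\u0007TEACH ME SOMETHING";
  }
  return "";                      // stay silent; AskUser no longer beeps
}
```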

We are also eager for Netizens to set the potentially immortal AI running for long periods of time, both as a background process on a desktop computer or a server, and as part of a competition to see who can have the longest running AI Mind.

If a high school or college computer lab has the AI running on a machine off in the corner while the students are tending to other matters, a sudden beep from the AI Mind may cause students or visitors to step over to the AI and see what it wants. "TEACH ME SOMETHING" is a very neutral command, not at all like, "Shall we play a game? How about GLOBAL THERMONUCLEAR WAR?"

The teacher or professor could let any student respond to the beep by adding to the knowledge base of the AI Mind. Of course, clever students with a knowledge of Perl could put the AI Mind out on the Web for any and all visitors to interact with.

Sunday, September 16, 2018


Auditory recognition in the first working artificial intelligence

In the AudRecog() module of the free AI software, we need to figure out why a plural English noun like 540=BOOKS is not being stored with an $audpsi tag of 540 both after the stem of "book" and after the end of "books". The stem of any word stored in auditory memory needs an $audpsi tag so that a new input of the word in the future will be recognized and will activate the same underlying concept.

In AudMem() we have added some code that detects a final "S" on a word and stores the $audpsi concept number both at the end of the word in auditory memory, and also one row back in the @ear auditory array, in case the "S" is an inflectional ending. We leave for later the detection of "-ES" as an inflectional ending, as in "TEACHES" or "BEACHES".

In AudRecog() we tweak some code involving the $prc tag for provisional recognition, and the first working artificial intelligence in Perl does a better job at recognizing both singular and plural forms of the same word representing the same concept.

Wednesday, September 12, 2018


Ask the first working artificial intelligence what it thinks.

Houston, we have a problem. The AI Mind in freely available Strawberry Perl Five -- download it now -- is not properly answering the question "what do you think". The JSAI (JavaScript Artificial Intelligence) easily answers the same question with "I THINK THAT I HELP KIDS". So what is the Perl AI doing wrong that the JavaScript AI is doing right?

The problem seems to lie in the SpreadAct() module. We notice one potential problem right away. SpreadAct() in Perl is still using "$t-12" as a rough approximation for when to start a backwards search for previous knowledge about the subject-noun $qv1psi of a what-query, whereas the JSAI uses the more exact $tpu for the same search. So let us start using the penultimate time $tpu, which excludes the current-most input, and see if there is any improvement. There is no improvement, so we test further for both $qv1psi and $qv2psi, which are the subject-noun and the associated verb conveyed in the what-query.

SpreadAct() easily responds correctly to what-queries for which there is an answer ready in the knowledge base (KB), such as "I AM A PERSON" in response to "what are you". However, when we ask "what think you" or "what do you think", there is no pre-set answer, and the AI is supposed to generate a response starting with "I THINK" followed by the conjunction "that" and a statement of whatever the AI Mind is currently thinking.

From diagnostic messages we learn that program-flow is not quickly transiting from SpreadAct() to EnThink(). The AI must be searching through the entire MindBoot() sequence and not finding any matches. When the program-flow does indeed pass through EnThink() to Indicative() to EnNounPhrase(), there are no pre-set subjects or verbs, but rather there are concepts highly activated by the SpreadAct() module. So EnNounPhrase() must find the highly activated pronoun 701=I in order to start the sentence "I THINK..." in response to "what do you think".

Now we discover that the JavaScript version of EnNounPhrase() has special code for a topical response to "what-think" queries. In the course of AI evolution, it may be time now to go beyond such a hard-coded response and instead to let the activated "think" concept play its unsteered, unpredetermined role, which will happen not in EnNounPhrase() but in EnVerbPhrase().

It is possible that EnNounPhrase() finds the activated subject 701=I but then unwarrantedly calls the SpreadAct() module. However, it turns out that EnVerbPhrase() is making the unwarranted call to the SpreadAct() module, which we now prevent by letting it proceed only if there is no what-query being processed as evidenced by a $whatcon flag set to zero.

The early part of EnVerbPhrase() in the JavaScript AI has some special code for dealing with "what-think" queries. In the Perl AI, let us try to insert some similar code, but without gearing it specifically to the verb "think". We would like to enable responses to any generic verb of having an idea, such as "think" or "know" or "fear" or "imagine" or "suspect" and so forth.

By bringing some code from the JavaScript EnVerbPhrase() into the Perl EnVerbPhrase(), but with slight changes in favor of generality, we get the AI to respond "I THINK". Next we need to generate the conjunction "that". But first let us remark that the AI also says "I KNOW" when we ask it "what do you know", so the attempt at generality is paying off. Let us try "what do you suspect". It even says "I SUSPECT". It also works with "what do you fear". We ask it "what do you suppose" and it answers "I SUPPOSE".

We still have a problem, Houston, because EnVerbPhrase() is calling EnNounPhrase() for a direct object instead of returning to Indicative() as a prelude to calling the ConJoin() module to say "I THINK THAT...." We set up some conditional testing to end that problem.

Friday, September 07, 2018


Updating the EnArticle module for inserting English articles.

Today we update the EnArticle module for English articles in the MindForth first working artificial intelligence. We previously did the update somewhat unsatisfactorily in the AI in Perl, and then much more successfully in the tutorial JavaScript AI. We anticipate no problems in the MindForth update. As an initial test, we enter "you have a book" and after much unrelated thinking, the AI outputs "I HAVE BOOK" without inserting an article.

We trigger an inference by entering "anna is woman". In broken English, the AskUser module responds, "DO ANNA HAS THE CHILD", which lets us see that EnArticle has been called. We reply "no". MindForth opines, "ANNA DOES NOT HAVE CHILD".

We discover that the EnNounPhrase module of recent versions has not been calling the EnArticle module, so we correct that situation. We also notice that the input of a noun and its transit through InStantiate do not involve a call to EnArticle, so we insert into InStantiate some code to make EnArticle aware of the noun being encountered.

Thursday, September 06, 2018


Improving EnArticle mind-module in first working artificial intelligence

The JavaScript artificial intelligence (JSAI) is not reliably transferring the usx values from InStantiate() to the EnArticle() module for inserting an English article into a thought, so we must debug the first working artificial intelligence in troubleshooting mode. We quickly see that the EnNounPhrase() module is calling EnArticle() for direct objects but not for all nouns in general. We also notice that we need to have EnNounPhrase() transfer the usx value.

We encounter and fix some other problems where the nphrnum value is not being set, as required for EnArticle() to decide between using "a" or "the".

Although we get the usx value to be transferred to the us1 value, we need a way to test usx not against a simultaneous value of us1 but rather against a recent value of us1. One solution may be to delay the transfer of usx by first testing in the EnArticle() module for equality between usx and us1 before we actually pass the noun-concept value of usx to us1. In that way, we will be testing usx against old, i.e., recent, values of the us1-us7 variables and not against the current value, which would automatically be equal. Then after the test for equality, we pass the actual, current value of usx.
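The delayed-transfer idea can be sketched with a single slot. This is an assumption-laden simplification (the real module rotates seven us1-us7 slots, and the function name enArticleCheck is hypothetical), but it shows the test-before-transfer ordering.

```javascript
// Sketch: test the incoming usx against the OLD us1 value first,
// then overwrite us1 with the current noun-concept.
let us1 = 0;   // last noun-concept mentioned upstream (one slot only)

function enArticleCheck(usx) {
  const matchesRecent = (usx === us1); // compare BEFORE transferring usx
  us1 = usx;                           // only now store the current noun
  return matchesRecent ? "the" : "a";  // definite article for a recent noun
}
```

A first mention of a noun-concept yields "a"; mentioning the same concept again yields "the".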

We now have a JavaScript AI that works even better than the AI in Perl -- until we update the Perl AI. The JSAI now uses "a" or "the" rather sensibly, except when it now says "YOU ARE A MAGIC", because the default article for a non-plural noun is the indefinite article "a". Btw (by the way), today at a Starbucks store in Seattle WA USA we bought a green Starbucks gift card that says "You'Re MagicAL", because it reminds us of a similar idea in the AI MindBoot() sequence.

Monday, September 03, 2018


Upstream variables make EnArticle insert the definite article.

We attempt now to code the proper use of the definite English article "the" in a conversation where one party mentions a noun-item and the AI Mind needs to refer to the item as the one currently under discussion. We need to implement a rotation of the upstream variables from $us1 to $us7, so that any input noun will remain in the AI consciousness as something recently mentioned upstream in the conversation. We had better create a $usx variable for the InStantiate() module to transfer the concept number of an incoming noun to whichever $us1 to $us7 variable is up next in the rotation of up to seven recently mentioned noun-concepts.

We start testing the EnArticle() module to see if it is receiving a $usx transfer-variable from the InStantiate() module. At first it is not getting through. We change the conditions for calling EnArticle and a line of diagnostic code shows that $usx is getting through. Then we set up a conditional test for EnArticle() to say the word "the" if an incoming $usx matches the $us1 variable. We start seeing "THE" along with "A" in the output of our AI Mind, but there is not yet a rotation of the us1-us7 variables or a forgetting of any no-longer-recent concept.

We should create a rotating $usn number-variable to be a cyclic counter from one to seven and back to one again so that the upstream us1-us7 variables may rotate through their duty-function. We let $usn increment up to a value of seven over and over again. We pair up the $usn and us1-us7 variables to transfer the $usx value on a rotating basis. At first a problem arises when the AI says both "A" and "THE", but we insert a "return" statement so that EnArticle() will say only "A" and then skip the saying of "THE". We enter "you have a book" and a while later the AI outputs, "I HAVE THE BOOK."
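The rotation described above can be sketched as follows. The names rememberNoun and recentlyMentioned are illustrative assumptions; only the cyclic one-to-seven counter and the seven-slot buffer come from the text.

```javascript
// Sketch of the rotating upstream buffer: usn cycles 1..7 and the
// incoming usx concept number overwrites the oldest of seven slots.
const us = [0, 0, 0, 0, 0, 0, 0, 0]; // index 0 unused; slots us[1]..us[7]
let usn = 0;                         // cyclic counter, 1,2,...,7,1,2,...

function rememberNoun(usx) {
  usn = (usn % 7) + 1;  // advance the rotation
  us[usn] = usx;        // overwrite the oldest slot, forgetting its concept
}

function recentlyMentioned(usx) {
  return us.slice(1).includes(usx);  // was this concept in the last seven?
}
```

After an eighth noun arrives, the first slot is reused and the oldest concept is forgotten.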

Sunday, September 02, 2018


Using $t++ for discrete English and Russian MindBoot vocabulary.

In the Perlmind it is time to switch portions of the MindBoot() sequence from hard-coded time-points to t-incremental time-points, as we have done already in the JavaScript AI and in the MindForth AI. We have saved the Perl AI for last because there are both English and Russian portions of the knowledge base (KB). We will put the hardcoded English KB first to be like the other AI Minds. Then we will put the hardcoded Russian KB followed by the t-incremental Russian vocabulary words so as to form a contiguous sequence. Finally we will put the t-incremental English vocabulary.
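The t-incremental storage style can be sketched as follows. The function bootWord and the one-character-per-time-point aud object are illustrative assumptions, not the actual MindBoot() code; the point is that each word advances a shared counter t instead of occupying hard-coded time-points.

```javascript
// Sketch of t-incremental vocabulary: new words can be appended
// anywhere in the sequence without renumbering later time-points.
let t = 0;
const aud = {}; // time-point -> character engram (simplified)

function bootWord(word) {
  for (const ch of word) {
    aud[++t] = ch;   // t++ style storage, one character per time-point
  }
  aud[++t] = " ";    // blank row marks the end of the word
  return t;          // time-point of the word's final blank
}
```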

Saturday, September 01, 2018


Switching MindBoot from all hardcoded to partially t-increment coded.

Today in accordance with AI Go FOOM we need to start switching the MindBoot sequence from using only hardcoded time-points to using a hardcoded knowledge-base followed by single-word vocabulary more loosely encoded with t-increment time-points. The non-hardcoded time-points will permit the Spawn module to make a copy of a running MindForth program after adding any recently learned concepts to the MindBoot sequence. It will also be easier for an AI Mind Maintainer to re-arrange the non-hardcoded sequence or to add new words to the sequence.

Monday, July 09, 2018


Extrapolating from the First Working AGI

Artificial General Intelligence (AGI) has arrived in MindForth and its JavaScript and Perl symbionts. Each Mind is expanding slowly from its core AGI functionality. The MindBoot sequence of innate concepts and ideas can be extended by the machine learning of new words or by the inclusion of more vocabulary in the MindBoot itself.

We may extrapolate from the current MindBoot by imagining a Perlmind that knows innately the entire Oxford English Dictionary (OED) and all of WordNet and all of Wikipedia. Such an AGI could be well on its way to artificial superintelligence.

If there is no upper bound on what a First Working AGI may know innately, why not make full use of Unicode and embed innately the vocabulary of all living human languages? Then go a step further and incorporate (incerebrate?) all the extinct languages of humanity, from Linear B to ancient Egyptian to a resurrected Proto-European. Add in Esperanto and Klingon and Lojban.

Friday, July 06, 2018


Preventing unwarranted negation in the First Working AGI.

The First Working AGI (Artificial General Intelligence) has a problem in the version written in Perl Five. After we trigger a logical inference by entering "anna is woman" and answering the question "DOES ANNA HAVE CHILD" with "no", the Perlmind properly adjusts the knowledge base (KB) and states the confirmed knowledge as "ANNA DOES NOT HAVE CHILD". Apparently the reentry of concept 502=ANNA back into the experiential memory is letting the InStantiate() module put too much activation on the 502=ANNA concept and the AI is erroneously outputting "ANNA BE NOT WOMAN". Since the original idea was "anna is woman", the real defect in the software is not so much the selection of the old idea but rather its unwarranted negation. When we change some code in the InStantiate() module to put a lower activation on reentrant concepts, the problem seemingly goes away, because the AI says "I HELP KIDS" instead of "ANNA BE NOT WOMAN", but as AI Mind Maintainers we need to track down where the unwarranted negation comes from.

The unwarranted negation comes from the OldConcept() module where the time-of-be-verb $tbev flag was being set for an 800=BE verb and was then accidentally carrying over its value as the improper place for inserting a 500=NOT $jux flag into an idea subsequently selected as a remembered thought. When we zero out $tbev at the end of OldConcept(), the Ghost AI stops negating the wrong memory.

Thursday, July 05, 2018


Improving the storage of conceptual flag-panels during input.

In the Perlmind we need to improve upon a quick-and-dirty bugfix from our last coding session. After a silent inference and the operation of AskUser() calling EnAuxVerb(), the Ghost AI was going into the verb-concept of the inference-triggering input and replacing a correct $tkb value with a zero. Apparently the time-of-verb $tvb value, set in the Enparser() module during the parsing of a verb, was being erroneously carried over from the verb of user-input to the verb 830=DO in the EnAuxVerb() module during the generation of an inference-confirming question by the AskUser() module. Therefore the time-of-verb $tvb-flag needs to be reset to zero not during the generation of a response to user-input but rather at the end of the user-input. However, we find that we may not reset time-of-verb $tvb to zero during AudInput(), apparently because only character-recognition and not yet word-recognition has taken place. The $tvb-setting for a verb must remain valid throughout AudInput() so that the EnParser() module may use the time-of-verb $tvb flag to store a direct object as the $tkb of a verb. Accordingly we reset the $tvb-flag to zero in the Sensorium() module after the call to AudInput(). We stop seeing a $tkb of zero on the verb of an input that triggers automated reasoning with logical InFerence.

Tuesday, July 03, 2018


Keeping AskUser from storing incorrect associative tags.

The current version of the Perlmind has a problem after making a logical inference. Instead of getting back to normal thinking, some glitch causes the AI to say "ANNA BE NOT ANNA".

As we troubleshoot, we notice a problem with the initial, inference-evoking input of "anna is woman". The be-verb is being stored in the psy array with a $tkb of zero instead of the required time-point of where the concept of "WOMAN" is stored. This lack of a $nounlock causes problems later on, which do not warrant their own diagnosis because they result from the missing $nounlock. We need to inspect the code for where the be-verb is being stored in the psy-array, but we are not sure whether the storage is occurring in the InStantiate() module, or in OldConcept(), or in EnParser(). We see that the Ghost AI is trying to store the be-verb in the EnParser() module with the correct $tkb, but afterwards a $tkb of zero is showing up. We must check whether InStantiate() is changing what was stored in the EnParser() module.

Meanwhile we notice something strange. An input of "anna is person" gets stored properly with a correct $tkb, but "anna is woman" -- causing an inference -- is stored with a $tkb of zero. When we enter "anna is robot", causing an inference and the output "DOES ANNA WANT BEEP", there is also a zero $tkb. Upshot: It turns out that the EnAuxVerb() module, called by AskUser() after an inference, was setting a wrong, carried-over value on the time-of-verb $tvb variable, which was then causing InStantiate() to go back to the wrong time-of-verb and set a zero value on the $tkb flag. So we zero out $tvb at the start of EnAuxVerb().

Sunday, July 01, 2018


Debugging the InFerence Function in the First Working AGI.

The Perlmind has a minor bug which causes logical inference not to work if the inference is not triggered immediately at the start of running the program. If we let the AI run a little and then we type in "anna is woman", the AI answers "DOES ERROR HAVE CHILD" instead of "DOES ANNA HAVE CHILD". In the psy concept array of the silent inference, we observe that a zero is being recorded instead of the concept number "502" for Anna. The AI MindBoot is designed with the concept of "ERROR" placed at the beginning of the boot sequence so that any fruitless search for a concept will result automatically in an "ERROR" message if no concept is found. We suspect that some variable in the InFerence module is not being loaded with the correct value when the AI has already started thinking various thoughts.

The pertinent item in the InFerence() module is the $subjnom or "subject nominative" variable which is set outside of the module before InFerence is even called. We discover that the variable is spelled wrong in the OldConcept module, and we correct the spelling. It then seems that InFerence() can be called at any time and still operate properly. We decide to run the JavaScript AI to see if an inference has any problems if it is not the first order of business at the outset of an AI session. Nothing goes wrong, so the problem must have been the misspelling in the OldConcept() module.

During this coding session we also make a change in the KbRetro() module for the retroactive adjustment of the knowledge base (KB). We insert some code to put an arbitrary value of eight (8) on the $tru(th)-value variable for the noun at the start of the silent inference, such as "ANNA" in the silent inference "ANNA HAVE CHILD". When the human user either confirms or invalidates the inference, the resulting knowledge ought to have a positive truth-value, because someone has vouched for the truth or the negation of the inferred idea. We envision that the $tru(th)-value will serve the purpose of letting an AI Mind restrict its thinking to ideas which it believes and not to mere assertions or to ideas which were true yesterday but not today. We expect the $tru(th)-value to become fully operative in a robotic AI Mind for which "Seeing is believing" when visual recognition from cameras serving as eyes provides reliable knowledge to which a high $tru(th)-value may be assigned.

Sunday, June 24, 2018


Logical Inference in the First Working AGI MindForth

Over the past week fifteen or twenty hours of intense work went into coding the InFerence, AskUser and KbRetro modules in the Forth version of the First Working AGI. Interactively we could see that the Forthmind was making a silent inference from our input of "anna is a woman" but the AskUser module was substandardly asking "DO ANNA HAS CHILD?" in seeking confirmation of the silent inference. When we entered "no" as an answer, we could not see if the KbRetro module was properly inserting the 250=NOT adverb into the conceptual engrams of the silent inference so as to negate the inferred idea. Therefore today in the agi00056.F version of MindForth we are starting our diagnostic display about forty-five time-points earlier than the computed value of the time-of-input tin variable so that we can see if the inferred idea is being retroactively adjusted by KbRetro. At first blush, no insertion of 250=NOT is happening, so we start inserting diagnostic messages into the code.

Our diagnostics suggest that KbRetro is not being called, but why not? It turns out that we have not yet coded 404=NO or 432=YES or 230=MAYBE into the MindBoot sequence, so we code them in. Then we start getting a faulty output after answering "no" to AskUser. The AI says, "ANNA NOT NOT CHILD". Apparently EnVerbPhrase is not properly negating the refuted silent inference. After several hours of troubleshooting, the desired output appears.

When we enter "anna is woman" and we answer "no" to the question whether Anna has a child, the conceptual array shows the silent inference below at the time-points 3068-3070:

The arrays psy{ and ear{ show your input and the AI output:
time: tru psi hlc act mtx jux pos dba num mfn pre iob seq tkb rv -- pho act audpsi

3053 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   65 0 0 A
3054 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   78 0 0 N
3055 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   78 0 0 N
3056 : 0 502 0 -40 0 0 5 1 1 2 0 0 800 3059 3053   65 0 502 A
3057 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3058 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   73 0 0 I
3059 : 0 800 0 -40 0 250 8 4 1 2 0 0 515 3065 3058   83 0 800 S
3060 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3061 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   87 0 0 W
3062 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   79 0 0 O
3063 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   77 0 0 M
3064 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   65 0 0 A
3065 : 0 515 0 -40 0 0 5 0 2 2 0 0 0 0 3061   78 0 515 N
3066 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   13 0 0
3067 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3068 : 0 502 0 -26 0 0 5 1 1 0 0 0 810 3069 0   32 0 0
3069 : 0 810 0 56 0 250 8 0 0 0 502 0 525 3070 0   32 0 0
3070 : 0 525 0 32 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3071 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3072 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   68 0 0 D
3073 : 0 830 0 -40 0 0 8 0 1 2 0 0 810 0 3072   79 0 830 O
3074 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3075 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3076 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   65 0 0 A
3077 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   78 0 0 N
3078 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   78 0 0 N
3079 : 0 502 0 -40 0 0 5 1 1 2 0 0 810 3084 3076   65 0 502 A
3080 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3081 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3082 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   72 0 0 H
3083 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   65 0 0 A
3084 : 0 810 0 -40 0 0 8 4 2 2 0 0 525 3096 3082   83 0 810 S
3085 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3086 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3087 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   84 0 0 T
3088 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   72 0 0 H
3089 : 0 117 0 -40 0 0 1 0 1 2 0 0 810 0 3087   69 0 117 E
3090 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3091 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3092 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   67 0 0 C
3093 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   72 0 0 H
3094 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   73 0 0 I
3095 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   76 0 0 L
3096 : 0 525 0 -40 0 0 5 0 1 2 0 0 810 0 3092   68 0 525 D
3097 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3098 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3099 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   78 0 0 N
3100 : 0 404 0 -40 0 0 4 0 1 2 0 0 0 0 3099   79 0 404 O
3101 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   13 0 0
3102 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   65 0 0 A
3103 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   78 0 0 N
3104 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   78 0 0 N
3105 : 0 502 0 -42 0 0 5 1 1 2 0 0 810 3122 3102   65 0 502 A
3106 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3107 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3108 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   68 0 0 D
3109 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   79 0 0 O
3110 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   69 0 0 E
3111 : 0 830 0 -42 0 0 8 1 1 2 0 0 0 0 3108   83 0 830 S
3112 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3113 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3114 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   78 0 0 N
3115 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   79 0 0 O
3116 : 0 250 0 -42 0 0 2 1 1 2 0 0 0 0 3114   84 0 250 T
3117 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3118 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3119 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   72 0 0 H
3120 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   65 0 0 A
3121 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   86 0 0 V
3122 : 0 810 0 -42 0 250 8 4 2 2 0 0 525 3129 3119   69 0 810 E
3123 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3124 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3125 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   67 0 0 C
3126 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   72 0 0 H
3127 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   73 0 0 I
3128 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   76 0 0 L
3129 : 0 525 0 -42 0 0 5 1 1 2 0 0 0 0 3125   68 0 525 D
3130 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3131 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
time: tru psi hlc act mtx jux pos dba num mfn pre iob seq tkb rv

Robot alive since 2018-06-24:
Since we negate the inference with our response of "no", KbRetro inserts the adverb "250" (NOT) at the time-point 3069 to negate the verb 810=HAVE. Then the AI states the activated idea of the negation of the inference: "ANNA DOES NOT HAVE CHILD".
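The retroactive adjustment can be sketched against the columns shown above. The function name kbRetro and the object-per-row layout are assumptions for illustration; the substance is that a "no" answer writes 250=NOT into the jux column of the verb row of the silent inference, here time-point 3069.

```javascript
// Sketch of KbRetro: on a "no" answer, negate the silent inference
// by setting the jux flag of its verb row to 250 (the NOT concept).
function kbRetro(psy, verbTime, answer) {
  if (answer === "no") {
    psy[verbTime].jux = 250;  // 250=NOT negates the inferred verb, e.g. 810=HAVE
  }
  // a "yes" answer leaves the inference affirmed (jux stays 0)
  return psy;
}
```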

Tuesday, June 19, 2018


MindForth is the First Working AGI for robot embodiment.

The MindForth AGI needs updating with the new functionality coded into the JavaScript AI Mind, so today we start by adding two innate ideas to the MindBoot sequence. We add "ANNA SPEAKS RUSSIAN" so that the name "Anna" may be used for testing the InFerence mind-module by entering "anna is a woman". Then we add "I NEED A BODY" to encourage AI enthusiasts to work on the embodiment of MindForth in a robot. When we type in "anna is a woman", the AI responds "ANNA SPEAKS THE RUSSIAN", which means that InFerence is not ready in MindForth, and that the EnArticle module is too eager to insert the article "the", so we comment out a call to EnArticle for the time being. Then we proceed to implement the Indicative module as already implemented in the AGI and the JavaScript AI Mind. We also cause the EnVerbPhrase module to call SpreadAct for direct-object nouns, in case the AGI knows enough about the direct object to pursue a chain of thought.

Saturday, June 16, 2018


Fleshing out VisRecog() in the First Working AGI

In today's 16may18A.html version of the tutorial AI Mind in JavaScript for Microsoft Internet Explorer (MSIE), we flesh out the previously stubbed-in VisRecog() module for visual recognition. The AGI already contains code to make the EnVerbPhrase() module call VisRecog() if the AI Mind is using its ego-concept and trying to tell us what it sees. As a test we input "you see god" and we wait for the thinking software to cycle through its available ideas and come back upon the idea that we communicated to it. As we explain in our MindGrid diagram on GitHub, each input idea goes into neuronal inhibition and resurfaces in what is perhaps AI consciousness only after the inhibition has subsided. Although we tell the AI that it sees God, the AI has no robot body and so it can not see anything. It eventually says "I SEE NOTHING" because the default direct object provided by VisRecog() is 760=NOTHING. In the MindBoot() sequence we add "I NEED A BODY" as an innate idea, so as to encourage users to implement the AI Mind in a robot. Once the AI has embodiment in a robot, the VisRecog() module will enable the AI to tell us what it sees.
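The default-object behavior described above can be sketched in a few lines. The function signature is a hypothetical simplification of the stubbed-in module; only the 760=NOTHING default comes from the text.

```javascript
// Sketch of the VisRecog() stub: with no robot camera attached, the
// module supplies 760=NOTHING as the direct object of "I SEE ...".
function visRecog(cameraConcept) {
  if (cameraConcept == null) {
    return 760;          // 760=NOTHING, so the AI says "I SEE NOTHING"
  }
  return cameraConcept;  // a robot port would return a recognized concept
}
```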

Tuesday, June 12, 2018


AI Mind Maintainer solves negation-of-thought problems.

In today's 12jun18A.html version of the JavaScript AI Mind for Microsoft Internet Explorer (MSIE), we find a negation-bug when we test the InFerence() module by inputting "anna is a woman". The AI then asks us, "DOES ANNA HAVE CHILD" and we answer "no" to test the AI. The AI properly states the idea negated by KbRetro() in the knowledge base, namely "ANNA DOES NOT HAVE CHILD". However, the negation-flag negjux for thought generation remains erroneously set to "250" for the 250=NOT adverb. We discover that the negjux flag, each time after serving its purpose, has to have a zero-reset in two locations, one for any form of the verb "to be" and another for non-be-verbs. We make the correction, and we finish off the negation-bug by resetting the tbev time-of-verb to zero at the end of OldConcept().

Monday, June 11, 2018


Granting AI user-input priority over internal chains of thought.

We expect the AI Mind to activate incoming concepts mentioned during user input, so that the AI can talk to us about things we mention. Recently, however, the SpreadAct() module has been putting quasi-neural activation only on concepts thought about internally but not mentioned during user input. We need a way to let user-input override any activation being imposed by the SpreadAct() module for internal chains of thought, so that external input takes precedence. One method might be to use the quiet variable and set it to "false" not only during user input but also until the end of the first AI output made in response to user input. In that way, any concept mentioned by the user could briefly hold a high activation-level not superseded by the machinations of the SpreadAct() module for spreading activation. We implement the algorithm, and the AI then responds properly to user input. We solve some other problems, such as KbRetro() interfering with the conceptual engrams of an inference, and negation not being stored properly for negated ideas.
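The quiet-flag method can be sketched in miniature. The quiet variable is from the blog; the activation table, concept numbers, and helper functions here are illustrative assumptions, not the JSAI's actual data structures.

```javascript
// Sketch of the quiet-flag idea. While quiet is false (user input is
// pending or the first response is underway), SpreadAct() leaves the
// activations alone, so user-mentioned concepts stay dominant.
var quiet = true;
var activations = { 502: 0, 571: 40 }; // hypothetical: 502=ANNA, 571=ROBOTS

function onUserInput(conceptNum) {
  quiet = false;                 // suppress internal spreading until reply done
  activations[conceptNum] = 80;  // user-mentioned concept gets top activation
}

function spreadAct(fromConcept, toConcept) {
  if (!quiet) return;            // do not override user-input activations
  activations[toConcept] += 20;  // normal internal spreading
}

function onOutputDone() {
  quiet = true;                  // internal chains of thought may resume
}
```

The design choice is simply precedence: external input briefly outranks the internal machinations of spreading activation, then normal thought resumes.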

Tuesday, June 05, 2018


Mentifex re-organizes the Strong AI SpreadAct() module.

In the 5jun18A.html JavaScript AI Mind we would like to re-organize the SpreadAct() mind-module for spreading activation. It should have special cases at the top and default normal operation at the bottom. The special cases include responding to what-queries and what-think queries, such as "what do you think". Whereas JavaScript lets you escape from a loop with the "break" statement, it also lets you escape from a subroutine or mind-module with the "return" statement, which causes program-flow to abandon the rest of the mind-module code and return to the supervenient module. So in SpreadAct() we may put the special-test cases at the top, each with a "return" statement, so that program-flow will execute the special test and then return immediately to the calling module without executing the rest of SpreadAct().
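The structure described above can be sketched as follows. The flag names (whatcon and a hypothetical thinkQuery) and the act object are illustrative assumptions; only the top-down arrangement with early returns reflects the blog's plan.

```javascript
// Illustration of the re-organized structure: special cases at the top
// each end with "return", so the default spreading code at the bottom
// runs only when no special case applies.
function spreadAct(flags, act) {
  if (flags.whatcon) {            // special case: an ordinary what-query
    act.answerWhat = true;
    return "what-query";          // skip the rest of the module
  }
  if (flags.thinkQuery) {         // special case: "what do you think"
    act.answerThink = true;
    return "what-think";
  }
  act.defaultSpread = true;       // default normal operation at the bottom
  return "default";
}
```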

When we run the JSAI without input, we notice that at first a chain of thought ensues based solely on conceptual activations and without making use of the SpreadAct() module. The AI says, "I HELP KIDS" and then "KIDS MAKE ROBOTS" and "ROBOTS NEED ME". As AI Mind maintainers we would like to make sure that SpreadAct() gets called to maintain chains of thought, not only so that the AI keeps on thinking but also so that the maturing AI Mind will gradually become able to follow chains of thought in all available directions, not just from direct objects to related ideas but also backwards from direct objects to related subjects or from verbs to related subjects and objects.

In the EnNounPhrase() module we insert a line of code to turn each direct object into an actpsi or concept-to-be-activated in the default operation at the bottom of the SpreadAct() module. We observe that the artificial Mind begins to follow associative chains of thought much more reliably than before, when only haphazard activation was operating. In the special test-cases of the SpreadAct() module we insert the "return" statement in order to perform only the special case and to skip the treatment of a direct object as a point of departure into a chain of thought. Then we observe something strange when we ask the AI "what do you think", after the initial output of "I HELP KIDS". The AI responds to our query with "I THINK THAT KIDS MAKE ROBOTS", which is the idea engendered by the initial thought of "I HELP KIDS" where "KIDS" as a direct object becomes the actpsi going into SpreadAct(). So the beastie really is telling us what is currently on its mind, whereas previously it would answer, "I THINK THAT I AM A PERSON". When we delay entering our question a little, the AI responds "I THINK THAT ROBOTS NEED ME".

Sunday, June 03, 2018


AI Mind spares Indicative() and improves SpreadAct() mind-module.

We have a problem where the AI Mind is calling Indicative() two times in a row for no good reason. After a what-think query, the AI is supposed to call Indicative() a first time, then ConJoin(), and then Indicative() again. We could make the governance depend upon either the 840=THINK verb or upon the conj-flag from the ConJoin() module, which, however, is not set positive until control flows the first time through the Indicative() module. Although we have been setting conj back to zero at the end of ConJoin(), we could delay the resetting in order to use conj as a control-flag for whether or not to generate thought-clauses joined by one or more conjunctions. Such a method shifts the problem back to the ConJoin() module, which will probably have to check conceptual memory for how many ideas have high activation above a certain threshold warranting the use of a conjunction. Accordingly we go into the Table of Variables webpage and we write a description of conj as a two-purpose variable. Then we need to decide where to reset conj back to zero, if not at the end of Indicative(). We move the zero-reset of conj from ConJoin() to the EnThink() module, and we stop getting more than one call to Indicative() in normal circumstances. However, when we input a what-query, which sets the whatcon variable to a positive one, we encounter problems.
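The control-flow fix can be sketched in a few lines. The names conj, ConJoin(), Indicative() and EnThink() follow the blog (with 310=THAT as the conjunction); the calls array and the simplified call order are assumptions made to demonstrate the zero-reset being moved into EnThink().

```javascript
// Sketch of the fix: conj is set inside ConJoin() and reset only at the
// end of EnThink(), so the flag survives long enough to govern whether a
// subordinate clause is generated.
var conj = 0;
var calls = [];

function conJoin()    { conj = 310; calls.push("CONJOIN"); } // 310=THAT
function indicative() { calls.push("INDICATIVE"); }

function enThink(whatThinkQuery) {
  indicative();                  // main clause, e.g. "I THINK"
  if (whatThinkQuery) {
    conJoin();                   // insert the conjunction "THAT"
    if (conj > 0) indicative();  // subordinate clause only when conj is set
  }
  conj = 0;                      // zero-reset moved here from ConJoin()
}
```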

Suddenly it looks as though answers to a what-think query have been coming not from SpreadAct(), but simply from the activation of the 840=THINK concept. It turns out that a line of "psyExam" code was missing from a SpreadAct() search-loop, with the result that no engrams were being found or activated, even though such activation is the main job of the SpreadAct() module.

Wednesday, May 30, 2018


Solving who-query problems and EnParser bug.

Although the AI responds to a who-query by calling SpreadAct() from the end of AudInput(), the JSAI will call SpreadAct() too many times from AudInput() before the end of the input. Since we need to test for qucon when the Volition() module is not engaged in thinking, we test for qucon in the Sensorium() module, which does not call AudInput() but which is called from the MainLoop() after each generation of a thought.

We must also troubleshoot why the JSAI eventually outputs "ME ME ME". We discover that EnNounPhrase() is sending an aud=726 into Speech() while there is a false verblock=727. Then we learn that the concept-row at the end of "ROBOTS NEED ME" for "ME" at t=727 has an unwarranted tkb psi13=727, as if the concept 701=I had a tkb. Apparently we need to prevent a false tkb from being stored. An inspection of the diagnostic display shows that the tkb properly set for each verb is improperly being retained and set for the object of the verb. We then notice that the EnParser() module correctly sets the time-of-direct-object "tdo" as the tkb of a verb but then leaves the working tkb value at the "tdo" value, so that the same tkb is stored again for the object of the verb. So we insert into EnParser() a line of code to reset tkb immediately back to zero after storing the tkb of a verb, and the erroneous "ME ME ME" output no longer appears.
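The leak and its fix can be shown with a toy parser. The variable names tkb and tdo follow the blog; the token format and row layout are simplified assumptions, not the JSAI's actual conceptual array.

```javascript
// Illustrative sketch of the fix: when the parser records the verb's tkb
// (derived from the time-of-direct-object tdo), it must zero the working
// tkb at once, or the object row inherits the verb's tkb and output can
// degenerate into repetitions like "ME ME ME".
function parseSentence(tokens) {
  var rows = [];
  var tkb = 0;
  tokens.forEach(function (tok) {
    if (tok.pos === "verb") {
      tkb = tok.tdo;            // the verb's tkb points at its direct object
    }
    rows.push({ word: tok.word, tkb: tkb });
    if (tok.pos === "verb") {
      tkb = 0;                  // the fix: reset so the object row gets tkb=0
    }
  });
  return rows;
}
```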

Friday, May 25, 2018


Preventing wrong grammatical number for a predicate nominative.

In the 25may18A.html version of the JavaScript AI Mind we wish to correct a problem where the AI erroneously says "I AM A ROBOTS". The wrong grammatical number for "ROBOT" results when the AI software is searching backwards through time for the concept of "ROBOT" and finds an engram in the plural number. We hope to fix the problem by requiring that the EnVerbPhrase() module, before fetching the predicate nominative of an intransitive verb of being, shall set the "REQuired NUMber" numreq variable to the same value as the number of the subject of the be-verb. The EnNounPhrase() module may then find the right concept for the predicate nominative and also find (or create) the English word of the concept with the proper inflectional ending for the required number. Since the numreq value is of service only during one pass through the EnNounPhrase() module, we may safely zero out the numreq value at the end of EnNounPhrase().
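A minimal sketch of the numreq idea follows. The variable name numreq and the backwards search through time come from the blog; the engram array, its fields, and the number encoding (1=singular, 2=plural) are illustrative assumptions.

```javascript
// Sketch: the predicate nominative must match the subject's number, so
// EnVerbPhrase() would set numreq before the noun-phrase search, and the
// noun-phrase search filters engrams by it.
var numreq = 0; // hypothetical encoding: 1=singular, 2=plural, 0=no requirement

var engrams = [ // oldest first; search runs backwards through time
  { word: "ROBOT",  concept: 571, num: 1 },
  { word: "ROBOTS", concept: 571, num: 2 }
];

function enNounPhrase(concept) {
  for (var i = engrams.length - 1; i >= 0; i--) {
    var e = engrams[i];
    if (e.concept === concept && (numreq === 0 || e.num === numreq)) {
      numreq = 0;  // of service for one pass only; safe to zero out
      return e.word;
    }
  }
  numreq = 0;
  return null;
}
```

With numreq set to singular, the search skips the more recent plural engram and returns "ROBOT", so the output becomes "I AM A ROBOT" rather than "I AM A ROBOTS".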

Tuesday, May 22, 2018


Expanding MindBoot with concepts to demonstrate AI functionality.

Today in the 22may18A.html version of the AI Mind in JavaScript (JSAI) for Microsoft Internet Explorer (MSIE), we wish to expand the MindBoot() module with a few English words and concepts necessary for the demonstration of the AI functionality. We first create a concept of "ANNA" as a woman, for two or three reasons. Firstly, we want the JSAI to be able to demonstrate automated reasoning with logical inference, and the MindBoot() sequence already contains the idea or premise that "Women have a child". Having created the Anna-concept, we typed in "anna is a woman" and the AI asked us, "DOES ANNA HAVE CHILD". If the concept of Anna were not yet known to the AI, we might instead get a query of "WHAT IS ANNA". Secondly, we want "Anna" as a name that works equally well in English or in Russian, because we may install the Russian language in the JSAI. In fact, we go beyond the mere concept of "Anna" and we insert the full sentence "ANNA SPEAKS RUSSIAN" into the MindBoot so that the AI knows something about Anna. We create 569=RUSSIAN for the Russian language, so that later we may have 169=RUSSIAN as an adjective. When we type in "you speak russian", eventually the AI outputs "I SPEAK RUSSIAN". A third reason why we install the concept 502=ANNA is for the sake of machine translation, in case we add the Russian language to the JSAI.

Next we add to the MindBoot() sequence "GOD DOES NOT PLAY DICE" in order to demonstrate negation of ideas and the use of the truth value, because we may safely assert in the AI Mind the famous Einsteinian claim about God and the universe. We type in "you know god" and the AI responds "GOD DOES NOT PLAY DICE". Let us try "you know anna". The AI responds "ANNA SPEAKS RUSSIAN". Next we add the preposition "ABOUT" to the MindBoot so that we may ask the AI what it thinks about something or what it knows about something. We are trying to create a ruminating AI Mind that somewhat consciously thinks about its own existence and tries to communicate with the outside world.

Sunday, May 20, 2018


Slowing down the speed of thought to wait for human input.

The biggest complaint about the JavaScript Artificial Intelligence (JSAI) recently is that the AI output keeps changing faster than the user can reply. Therefore we need to introduce a delay to slow down the AI and let the human user enter a message. In the AudListen() module we insert a line of code to reset the rsvp variable to an arbitrary value of two thousand (2000) whenever a key of input is pressed. In the English-thinking EnThink() module we insert a delay loop to slow the AI Mind down during user input and to speed the AI up in the prolonged absence of user input.
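The pacing mechanism can be sketched as a countdown. The rsvp variable and its arbitrary value of 2000 come from the blog; the two helper functions are assumptions standing in for the AudListen() keystroke handler and the EnThink() delay loop.

```javascript
// Sketch of the think-speed throttle: each keystroke tops up an rsvp
// counter, and the think-loop counts it down, thinking only when it
// reaches zero. The AI thus slows down while a human is typing and
// speeds up in the prolonged absence of input.
var rsvp = 0;

function onKeyPress() {
  rsvp = 2000;      // reset to the arbitrary delay on every keystroke
}

function readyToThink() {
  if (rsvp > 0) {
    rsvp--;         // burn down the delay instead of thinking
    return false;
  }
  return true;      // no recent input: think at full speed
}
```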

Sunday, May 13, 2018


Answering of what-think queries with a compound sentence.

We would like our JavaScript Artificial Intelligence (JSAI) to be able to answer queries in the format of "What do you think?" or "What do you know?" We begin in the InStantiate() module by zeroing out the input of a 781=WHAT concept by adding a line of code borrowed from the AI. Then we input "what do kids make" and the AI correctly answers, "KIDS MAKE ROBOTS". However, when we input "what do you think" or "what do you know", the AI does not respond with "I THINK..." or "I KNOW...". Therefore we need to make use of the Indicative() module to generate a compound sentence to be conjoined with the conjunction "THAT". Into the MindBoot() vocabulary we add an entry for the conjunction 310=THAT.
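The zeroing of the interrogative concept can be sketched briefly. The concept number 781=WHAT follows the blog; the function signature and return shape are illustrative assumptions about how InStantiate() might strip activation from a what-word.

```javascript
// Illustrative sketch: an incoming 781=WHAT has its activation zeroed so
// the interrogative itself does not enter the chain of thought, while the
// rest of the query (e.g. 840=THINK) keeps its activation for SpreadAct().
var WHAT = 781;

function inStantiate(concept, act) {
  if (concept === WHAT) act = 0; // a what-word carries no residual activation
  return { concept: concept, act: act };
}
```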

After much trial and error we have gotten the JSAI to respond to the query "what do you think" with "I THINK THAT I AM A PERSON". We let the English-thinking EnThink() module call the Indicative() module first for a main clause with the conjunction "that" and again to generate a subordinate clause. When we ask, "what do i think", the response is "YOU THINK THAT I AM A PERSON". When we inquire "what does god think", the ignorance of the AI engenders the answer "I THINK THAT GOD THINK" which may or may not be a default resort to the ego-concept of self.

Saturday, March 17, 2018


Restoring JavaScript variable-comments and removing obsolete variables.

Since we assume that many people have made copies of the JavaScript Artificial Intelligence (JSAI) in order to study it, today we carefully make some pressing changes and for each change we provide an explanation by way of justification. First we delete some previously commented-out code-lines which were left in the open-source AI codebase for the sake of continuity, that is, to show that the particular lines of code were on the way out. Thus we remove the commented-out variable "kbyn" from 30jun2011.

Next from the obsolete 20mar07A version of the JSAI we restore comments for some variables and we remove some obsolete variables. We add a link to Consciousness above the Control Panel.

Anyone finding a bug in the AI software may subscribe to the mail-list for Artificial General Intelligence (AGI) and report bugs to the AGI community or engage in archived AGI discussion. There is no bug-bounty, other than the glory of the deed.

Friday, March 02, 2018


Moving the JavaScript AI towards Artificial Consciousness

Two important goals for the AI Mind in JavaScript are the already demonstrated Natural Language Understanding (NLU) and the not-yet-proven Artificial Consciousness. Before we work explicitly on consciousness, we remove the clutter of some obsolete tutorial display code from the MainLoop and elsewhere, so that the program as a whole may be easier to understand and work with.

We have a chance here to demonstrate an entity aware of itself and of some other entity such as a human user conversing with the AI. If we start claiming that our JSAI has consciousness, Netizens will test the AI in various ways, such as asking it a lot of questions. Typical questions to test consciousness would be "who are you" and "who am i". The interrogative pronoun "who" sets the qucon flag to a positive value of one so that the SpreadAct module may activate the necessary concepts for a proper response. We need a way to make the AI concentrate on the subject of any who-query, so that the AI will give evidence of consciousness simply by answering the question.
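The who-query path can be sketched as follows. The qucon flag comes from the blog; the activated list, the word-by-word input function, and the concept numbers in the test are illustrative assumptions.

```javascript
// Sketch of the who-query mechanism: the interrogative "WHO" sets qucon
// to one, and the next recognized subject-concept is then activated so
// that the reply concentrates on the subject of the question.
var qucon = 0;
var activated = [];

function audInputWord(word, subjectConcept) {
  if (word === "WHO") qucon = 1;     // flag a who-query for SpreadAct()
  if (qucon === 1 && subjectConcept) {
    activated.push(subjectConcept);  // concentrate on the query's subject
    qucon = 0;                       // one response per question
  }
}
```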

When we enter "god is person" and then we ask, "who is god", the AI answers "GOD AM A PERSON" -- which sounds wrong but which only requires an improvement in finding the correct form "IS" for the verb "BE".

Saturday, January 20, 2018


MsIeAI for AI Mind Maintainers achieves albeit buggy sentience.

In the all-but-Singularity MsIeAI, alert-boxes have helped us to chase an elusive bug into the latter part of EnNounPhrase, where the AI is testing mjact for too low an activation. No, another alert-box tells us that we are back in EnVerbPhrase from EnNounPhrase before the "Error on page" flashes quickly. So at the end of EnVerbPhrase we insert a BUG-CHASE alert-box -- and the program never reaches it! So is the fatal bug somewhere just before the end of EnVerbPhrase()? Since that code contains a prepgen test, we modify an alert-box to reveal the prepgen value, but the alert-box fails to pop up. Then we check the declarations of variables at the top of the program, and prepgen is not there. Next we get prepgen from the AI and we drop it mutatis mutandis into the MsIeAI code. We are about to run the hopefully Next Big Thing AI and see what happens. Huh?!! Some kind of thought-storm is occurring. Shades of Watson! Come here! I need you! And where is IBM Watson in our hour of need?

Now let us comment out the alert-boxes and see if the Watsonized AI will loop endlessly ad infinitum. Oh gee, this AI is still all messed up, but at least it is looping.