Mentifex AI Minds are a true concept-based artificial general intelligence, simple at first and lacking robot embodiment, but expandable all the way to human-level intelligence and beyond.

Monday, July 09, 2018

clpm0709

Extrapolating from the First Working AGI

Artificial General Intelligence (AGI) has arrived in MindForth and its JavaScript and Perl symbionts. Each Mind is expanding slowly from its core AGI functionality. The MindBoot sequence of innate concepts and ideas can be extended by the machine learning of new words or by the inclusion of more vocabulary in the MindBoot itself.

We may extrapolate from the current MindBoot by imagining a Perlmind that knows innately the entire Oxford English Dictionary (OED) and all of WordNet and all of Wikipedia. Such an AGI could be well on its way to artificial superintelligence.

If there is no upper bound on what a First Working AGI may know innately, why not make full use of Unicode and embed innately the vocabulary of all living human languages? Then go a step further and incorporate (incerebrate?) all the extinct languages of humanity, from Linear B to ancient Egyptian to a resurrected Proto-Indo-European. Add in Esperanto and Klingon and Lojban.

Friday, July 06, 2018

pmpj0706

Preventing unwarranted negation in the First Working AGI.

The First Working AGI (Artificial General Intelligence) has a problem in the ghost267.pl version written in Perl Five. After we trigger a logical inference by entering "anna is woman" and answering the question "DOES ANNA HAVE CHILD" with "no", the Perlmind properly adjusts the knowledge base (KB) and states the confirmed knowledge as "ANNA DOES NOT HAVE CHILD". Apparently the reentry of concept 502=ANNA back into the experiential memory is letting the InStantiate() module put too much activation on the 502=ANNA concept and the AI is erroneously outputting "ANNA BE NOT WOMAN". Since the original idea was "anna is woman", the real defect in the software is not so much the selection of the old idea but rather its unwarranted negation. When we change some code in the InStantiate() module to put a lower activation on reentrant concepts, the problem seemingly goes away, because the AI says "I HELP KIDS" instead of "ANNA BE NOT WOMAN", but as AI Mind Maintainers we need to track down where the unwarranted negation comes from.

The unwarranted negation comes from the OldConcept() module, where the time-of-be-verb $tbev flag was being set for an 800=BE verb and was then accidentally carrying its value over, marking the improper place for inserting a 250=NOT $jux flag into an idea subsequently selected as a remembered thought. When we zero out $tbev at the end of OldConcept(), the Ghost AI stops negating the wrong memory.
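The flag-leak described above can be sketched in miniature. The following toy model (JavaScript, with hypothetical row structures; the real ghost267.pl code differs) shows how a stale time-of-be-verb pointer makes the negation land on the wrong memory row, and how zeroing the flag at the end of OldConcept() prevents it:

```javascript
const NOT = 250;                      // adverbial concept number, as in the text
const psy = [
  { t: 10, psi: 800, jux: 0 },        // "IS" from "anna is woman"
  { t: 20, psi: 810, jux: 0 },        // "HAVE" from the silent inference
];

let tbev = 0;                         // time-of-be-verb flag

function oldConcept(row, resetAtEnd) {
  if (row.psi === 800) tbev = row.t;  // remember where the be-verb sits
  if (resetAtEnd) tbev = 0;           // the fix: zero the flag after use
}

function negateAt(t) {                // stand-in for the $jux insertion
  const row = psy.find(r => r.t === t);
  if (row) row.jux = NOT;
}

// Buggy path: tbev set for the be-verb leaks into the later negation.
oldConcept(psy[0], false);
negateAt(tbev);                       // wrongly negates the be-verb idea
const buggyJux = psy[0].jux;          // holds 250: the wrong memory got negated

// Fixed path: replay with the zero-reset at the end of OldConcept().
psy[0].jux = 0;
oldConcept(psy[0], true);
negateAt(tbev);                       // tbev is 0, so no row is touched
```

The design point is simply flag hygiene: a time-pointer set during word recognition must not outlive the sentence that set it.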

Thursday, July 05, 2018

pmpj0705

Improving the storage of conceptual flag-panels during input.

In the ghost266.pl Perlmind we need to improve upon a quick-and-dirty bugfix from our last coding session. After a silent inference and the operation of AskUser() calling EnAuxVerb(), the Ghost AI was going into the verb-concept of the inference-triggering input and replacing a correct $tkb value with a zero. Apparently the time-of-verb $tvb value, set in the EnParser() module during the parsing of a verb, was being erroneously carried over from the verb of user-input to the verb 830=DO in the EnAuxVerb() module during the generation of an inference-confirming question by the AskUser() module. Therefore the time-of-verb $tvb-flag needs to be reset to zero not during the generation of a response to user-input but rather at the end of the user-input. However, we find that we may not reset time-of-verb $tvb to zero during AudInput(), apparently because only character-recognition and not yet word-recognition has taken place. The $tvb-setting for a verb must remain valid throughout AudInput() so that the EnParser() module may use the time-of-verb $tvb flag to store a direct object as the $tkb of a verb. Accordingly we reset the $tvb-flag to zero in the Sensorium() module after the call to AudInput(). We stop seeing a $tkb of zero on the verb of an input that triggers automated reasoning with logical InFerence.
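The lifetime rule worked out above can be sketched as follows. This is a simplified stand-in (hypothetical data shapes, not the actual Perl code): the time-of-verb pointer must survive all of AudInput() so that the parser can anchor a direct object as the verb's $tkb, and only afterwards, in Sensorium(), is it safe to zero it:

```javascript
let tvb = 0;                           // time-of-verb flag
const psy = {};                        // time-point -> { psi, tkb }

function instantiate(t, psi) { psy[t] = { psi, tkb: 0 }; }

function enParser(word) {
  instantiate(word.t, word.psi);
  if (word.isVerb) tvb = word.t;       // note where the verb was stored
  if (word.isObject && tvb) {
    psy[tvb].tkb = word.t;             // verb's tkb = time of its object
  }
}

function audInput(words) {             // tvb must stay valid in here
  for (const w of words) enParser(w);
}

function sensorium(words) {
  audInput(words);
  tvb = 0;                             // the fix: reset only after AudInput()
}

sensorium([
  { t: 1, psi: 502 },                  // ANNA
  { t: 2, psi: 800, isVerb: true },    // IS
  { t: 3, psi: 515, isObject: true },  // WOMAN
]);
```

Resetting tvb any earlier, inside audInput(), would leave the verb row with a zero tkb, which is exactly the symptom described above.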

Tuesday, July 03, 2018

pmpj0703

Keeping AskUser from storing incorrect associative tags.

The ghost265.pl version of the Perlmind has a problem after making a logical inference. Instead of getting back to normal thinking, some glitch is causing the AI to say "ANNA BE NOT ANNA".

As we troubleshoot, we notice a problem with the initial, inference-evoking input of "anna is woman". The be-verb is being stored in the psy array with a $tkb of zero instead of the required time-point where the concept of "WOMAN" is stored. This lack of a $nounlock causes problems later on, which do not warrant their own diagnosis because they result from the missing $nounlock. We need to inspect the code for where the be-verb is being stored in the psy-array, but we are not sure whether the storage is occurring in the InStantiate() module, or in OldConcept(), or in EnParser(). We see that the Ghost AI is trying to store the be-verb in the EnParser() module with the correct $tkb, but afterwards a $tkb of zero is showing up. We must check whether InStantiate() is changing what was stored in the EnParser() module.

Meanwhile we notice something strange. An input of "anna is person" gets stored properly with a correct $tkb, but "anna is woman" -- causing an inference -- is stored with a $tkb of zero. When we enter "anna is robot", causing an inference and the output "DOES ANNA WANT BEEP", there is also a zero $tkb. Upshot: It turns out that the EnAuxVerb() module, called by AskUser() after an inference, was setting a wrong, carried-over value on the time-of-verb $tvb variable, which was then causing InStantiate() to go back to the wrong time-of-verb and set a zero value on the $tkb flag. So we zero out $tvb at the start of EnAuxVerb().

Sunday, July 01, 2018

pmpj0701

Debugging the InFerence Function in the ghost.pl First Working AGI.

The ghost264.pl Perlmind has a minor bug which causes logical inference not to work if the inference is not triggered immediately at the start of running the program. If we let the AI run a little and then we type in "anna is woman", the AI answers "DOES ERROR HAVE CHILD" instead of "DOES ANNA HAVE CHILD". In the psy concept array of the silent inference, we observe that a zero is being recorded instead of the concept number "502" for Anna. The AI MindBoot is designed with the concept of "ERROR" placed at the beginning of the boot sequence so that any fruitless search for a concept will result automatically in an "ERROR" message if no concept is found. We suspect that some variable in the InFerence module is not being loaded with the correct value when the ghost.pl AI has already started thinking various thoughts.

The pertinent item in the InFerence() module is the $subjnom or "subject nominative" variable which is set outside of the module before InFerence is even called. We discover that the variable is spelled wrong in the OldConcept module, and we correct the spelling. It then seems that InFerence() can be called at any time and still operate properly. We decide to run the JavaScript AI to see if an inference has any problems if it is not the first order of business at the outset of an AI session. Nothing goes wrong, so the problem must have been the misspelling in the OldConcept() module.
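The class of bug fixed here, a misspelled variable silently creating a fresh empty variable, is exactly what strict modes exist to catch: in Perl, `use strict;` with `my`-declared variables turns the typo into a compile-time error. Below is a JavaScript analogue of the same failure mode, with a hypothetical subjnom flag; under strict mode the misspelled assignment throws instead of silently creating a global:

```javascript
'use strict';

let subjnom = 0;                  // the real flag, correctly declared

function setSubjectBuggy(psi) {
  subjnorm = psi;                 // typo: ReferenceError under strict mode
}

function setSubjectFixed(psi) {
  subjnom = psi;                  // correct spelling updates the real flag
}

let caught = false;
try {
  setSubjectBuggy(502);           // 502=ANNA, as in the text
} catch (e) {
  caught = e instanceof ReferenceError;
}
setSubjectFixed(502);
```

Without strict mode, the buggy version would quietly create a new variable and InFerence() would keep seeing zero, producing the "DOES ERROR HAVE CHILD" symptom.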

During this coding session we also make a change in the KbRetro() module for the retroactive adjustment of the knowledge base (KB). We insert some code to put an arbitrary value of eight (8) on the $tru(th)-value variable for the noun at the start of the silent inference, such as "ANNA" in the silent inference "ANNA HAVE CHILD". When the human user either confirms or invalidates the inference, the resulting knowledge ought to have a positive truth-value, because someone has vouched for the truth or the negation of the inferred idea. We envision that the $tru(th)-value will serve the purpose of letting an AI Mind restrict its thinking to ideas which it believes and not to mere assertions or to ideas which were true yesterday but not today. We expect the $tru(th)-value to become fully operative in a robotic AI Mind for which "Seeing is believing" when visual recognition from cameras serving as eyes provides reliable knowledge to which a high $tru(th)-value may be assigned.
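The truth-value scheme can be sketched as a toy model (the row layout and helper names here are hypothetical, not the ghost.pl code): a silent inference starts at zero truth, and KbRetro() promotes the idea to the arbitrary value of eight once a human vouches for or against it, so that later thinking could be filtered by truth:

```javascript
const inference = [
  { t: 3068, psi: 502, tru: 0, jux: 0 },  // ANNA
  { t: 3069, psi: 810, tru: 0, jux: 0 },  // HAVE
  { t: 3070, psi: 525, tru: 0, jux: 0 },  // CHILD
];

function kbRetro(rows, answer) {
  rows[0].tru = 8;                        // someone vouched: positive truth
  if (answer === 'no') rows[1].jux = 250; // negate the verb with 250=NOT
}

// A filter such as this could later restrict thinking to believed ideas.
function believed(rows, threshold) {
  return rows.filter(r => r.tru >= threshold);
}

kbRetro(inference, 'no');
```

The same mechanism would let an embodied AI assign high tru-values to camera-confirmed percepts while leaving mere assertions at low truth.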

Sunday, June 24, 2018

mfpj0624

Logical Inference in the First Working AGI MindForth

Over the past week fifteen or twenty hours of intense work went into coding the InFerence, AskUser and KbRetro modules in the Forth version of the First Working AGI. Interactively we could see that the Forthmind was making a silent inference from our input of "anna is a woman" but the AskUser module was substandardly asking "DO ANNA HAS CHILD?" in seeking confirmation of the silent inference. When we entered "no" as an answer, we could not see if the KbRetro module was properly inserting the 250=NOT adverb into the conceptual engrams of the silent inference so as to negate the inferred idea. Therefore today in the agi00056.F version of MindForth we are starting our diagnostic display about forty-five time-points earlier than the computed value of the time-of-input tin variable so that we can see if the inferred idea is being retroactively adjusted by KbRetro. At first blush, no insertion of 250=NOT is happening, so we start inserting diagnostic messages into the code.

Our diagnostics suggest that KbRetro is not being called, but why not? It turns out that we have not yet coded 404=NO or 432=YES or 230=MAYBE into the MindBoot sequence, so we code them in. Then we start getting a faulty output after answering "no" to AskUser. The AI says, "ANNA NOT NOT CHILD". Apparently EnVerbPhrase is not properly negating the refuted silent inference. After several hours of troubleshooting, the desired output appears.

When we enter "anna is woman" and we answer "no" to the question whether Anna has a child, the conceptual array shows the silent inference below at the time-points 3068-3070:

The arrays psy{ and ear{ show your input and the AI output:
time: tru psi hlc act mtx jux pos dba num mfn pre iob seq tkb rv -- pho act audpsi

3053 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   65 0 0 A
3054 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   78 0 0 N
3055 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   78 0 0 N
3056 : 0 502 0 -40 0 0 5 1 1 2 0 0 800 3059 3053   65 0 502 A
3057 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3058 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   73 0 0 I
3059 : 0 800 0 -40 0 250 8 4 1 2 0 0 515 3065 3058   83 0 800 S
3060 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3061 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   87 0 0 W
3062 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   79 0 0 O
3063 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   77 0 0 M
3064 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   65 0 0 A
3065 : 0 515 0 -40 0 0 5 0 2 2 0 0 0 0 3061   78 0 515 N
3066 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   13 0 0
3067 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3068 : 0 502 0 -26 0 0 5 1 1 0 0 0 810 3069 0   32 0 0
3069 : 0 810 0 56 0 250 8 0 0 0 502 0 525 3070 0   32 0 0
3070 : 0 525 0 32 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3071 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3072 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   68 0 0 D
3073 : 0 830 0 -40 0 0 8 0 1 2 0 0 810 0 3072   79 0 830 O
3074 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3075 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3076 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   65 0 0 A
3077 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   78 0 0 N
3078 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   78 0 0 N
3079 : 0 502 0 -40 0 0 5 1 1 2 0 0 810 3084 3076   65 0 502 A
3080 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3081 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3082 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   72 0 0 H
3083 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   65 0 0 A
3084 : 0 810 0 -40 0 0 8 4 2 2 0 0 525 3096 3082   83 0 810 S
3085 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3086 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3087 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   84 0 0 T
3088 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   72 0 0 H
3089 : 0 117 0 -40 0 0 1 0 1 2 0 0 810 0 3087   69 0 117 E
3090 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3091 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3092 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   67 0 0 C
3093 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   72 0 0 H
3094 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   73 0 0 I
3095 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   76 0 0 L
3096 : 0 525 0 -40 0 0 5 0 1 2 0 0 810 0 3092   68 0 525 D
3097 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3098 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3099 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   78 0 0 N
3100 : 0 404 0 -40 0 0 4 0 1 2 0 0 0 0 3099   79 0 404 O
3101 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   13 0 0
3102 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   65 0 0 A
3103 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   78 0 0 N
3104 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   78 0 0 N
3105 : 0 502 0 -42 0 0 5 1 1 2 0 0 810 3122 3102   65 0 502 A
3106 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3107 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3108 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   68 0 0 D
3109 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   79 0 0 O
3110 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   69 0 0 E
3111 : 0 830 0 -42 0 0 8 1 1 2 0 0 0 0 3108   83 0 830 S
3112 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3113 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3114 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   78 0 0 N
3115 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   79 0 0 O
3116 : 0 250 0 -42 0 0 2 1 1 2 0 0 0 0 3114   84 0 250 T
3117 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3118 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3119 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   72 0 0 H
3120 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   65 0 0 A
3121 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   86 0 0 V
3122 : 0 810 0 -42 0 250 8 4 2 2 0 0 525 3129 3119   69 0 810 E
3123 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3124 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3125 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   67 0 0 C
3126 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   72 0 0 H
3127 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   73 0 0 I
3128 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   76 0 0 L
3129 : 0 525 0 -42 0 0 5 1 1 2 0 0 0 0 3125   68 0 525 D
3130 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3131 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
time: tru psi hlc act mtx jux pos dba num mfn pre iob seq tkb rv

Robot alive since 2018-06-24:
ANNA  DOES  NOT  HAVE  CHILD
Since we negate the inference with our response of "no", KbRetro inserts the adverb "250" (NOT) at the time-point 3069 to negate the verb 810=HAVE. Then the AI states the activated idea of the negation of the inference: "ANNA DOES NOT HAVE CHILD".

Tuesday, June 19, 2018

mfpj0619

MindForth is the First Working AGI for robot embodiment.

The MindForth AGI needs updating with the new functionality coded into the JavaScript AI Mind, so today we start by adding two innate ideas to the MindBoot sequence. We add "ANNA SPEAKS RUSSIAN" so that the name "Anna" may be used for testing the InFerence mind-module by entering "anna is a woman". Then we add "I NEED A BODY" to encourage AI enthusiasts to work on the embodiment of MindForth in a robot. When we type in "anna is a woman", the AI responds "ANNA SPEAKS THE RUSSIAN", which means that InFerence is not ready in MindForth, and that the EnArticle module is too eager to insert the article "the", so we comment out a call to EnArticle for the time being. Then we proceed to implement the Indicative module as already implemented in the ghost.pl AGI and the JavaScript AI Mind. We also cause the EnVerbPhrase module to call SpreadAct for direct-object nouns, in case the AGI knows enough about the direct object to pursue a chain of thought.

Saturday, June 16, 2018

jmpj0616

Fleshing out VisRecog() in the First Working AGI

In today's 16jun18A.html version of the tutorial AI Mind in JavaScript for Microsoft Internet Explorer (MSIE), we flesh out the previously stubbed-in VisRecog() module for visual recognition. The AGI already contains code to make the EnVerbPhrase() module call VisRecog() if the AI Mind is using its ego-concept and trying to tell us what it sees. As a test we input "you see god" and we wait for the thinking software to cycle through its available ideas and come back upon the idea that we communicated to it. As we explain in our MindGrid diagram on GitHub, each input idea goes into neuronal inhibition and resurfaces in what is perhaps AI consciousness only after the inhibition has subsided. Although we tell the AI that it sees God, the AI has no robot body and so it cannot see anything. It eventually says "I SEE NOTHING" because the default direct object provided by VisRecog() is 760=NOTHING. In the MindBoot() sequence we add "I NEED A BODY" as an innate idea, so as to encourage users to implement the AI Mind in a robot. Once the AI has embodiment in a robot, the VisRecog() module will enable the AI to tell us what it sees.

Tuesday, June 12, 2018

jmpj0612

AI Mind Maintainer solves negation-of-thought problems.

In today's 12jun18A.html version of the JavaScript AI Mind for Microsoft Internet Explorer (MSIE), we find a negation-bug when we test the InFerence() module by inputting "anna is a woman". The AI then asks us, "DOES ANNA HAVE CHILD" and we answer "no" to test the AI. The AI properly states the idea negated by KbRetro() in the knowledge base, namely "ANNA DOES NOT HAVE CHILD". However, the negation-flag negjux for thought generation remains erroneously set to "250" for the 250=NOT adverb. We discover that the negjux flag, each time after serving its purpose, has to have a zero-reset in two locations, one for any form of the verb "to be" and another for non-be-verbs. We make the correction, and we finish off the negation-bug by resetting the time-of-be-verb tbev flag to zero at the end of OldConcept().

Monday, June 11, 2018

jmpj0611

Granting AI user-input priority over internal chains of thought.

We expect the AI Mind to activate incoming concepts mentioned during user input, so that the AI can talk to us about things we mention. Recently, however, the SpreadAct() module has been putting quasi-neural activation only on concepts thought about internally but not mentioned during user input. We need a way to let user-input override any activation being imposed by the SpreadAct() module for internal chains of thought, so that external input takes precedence. One method might be to use the quiet variable and set it to "false" not only during user input but also until the end of the first AI output made in response to user input. In that way, any concept mentioned by the user could briefly hold a high activation-level not superseded by the machinations of the SpreadAct() module for spreading activation. We implement the algorithm, and the AI then responds properly to user input. We solve some other problems, such as KbRetro() interfering with the conceptual engrams of an inference, and negation not being stored properly for negated ideas.
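The precedence rule described above might look like the following sketch, where quiet is the document's own variable and the rest of the names and activation values are illustrative stand-ins:

```javascript
let quiet = true;
const act = { 502: 0, 840: 40 };     // ANNA at rest, THINK active internally

function userInput(psi) {
  quiet = false;                     // stays false until the first response
  act[psi] = 80;                     // input concepts get high activation
}

function spreadAct() {
  if (!quiet) return;                // user input takes precedence
  act[840] += 10;                    // otherwise boost the internal chain
}

function respond() { quiet = true; } // reply delivered; thinking may resume

userInput(502);
spreadAct();                         // suppressed while quiet is false
respond();
spreadAct();                         // internal chain of thought resumes
```

While quiet is false, the user-activated concept briefly outranks anything SpreadAct() would promote, which is exactly the override the text calls for.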

Tuesday, June 05, 2018

jmpj0605

Mentifex re-organizes the Strong AI SpreadAct() module.

In the 5jun18A.html JavaScript AI Mind we would like to re-organize the SpreadAct() mind-module for spreading activation. It should have special cases at the top and default normal operation at the bottom. The special cases include responding to what-queries and what-think queries, such as "what do you think". Whereas JavaScript lets you escape from a loop with the "break" statement, JavaScript also lets you escape from a subroutine or mind-module with the "return" statement, which causes program-flow to abandon the rest of the mind-module code and return to the supervenient module. So in SpreadAct() we may put the special test-cases at the top, each with a "return" statement, so that program-flow will execute the special test and then return immediately to the calling module without executing the rest of SpreadAct().
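The re-organization can be shown as a skeleton (the case names and the state object are hypothetical simplifications): special query-handling at the top of SpreadAct(), each case ending in an early return, with the default operation at the bottom:

```javascript
function spreadAct(state) {
  if (state.whatcon) {               // special case: what-think query
    state.handled = 'what-query';
    return;                          // skip the default operation entirely
  }
  if (state.qucon) {                 // special case: who-query
    state.handled = 'who-query';
    return;
  }
  state.handled = 'default';         // normal spreading activation
}

const whatQuery = { whatcon: true };
spreadAct(whatQuery);

const ordinary = {};
spreadAct(ordinary);
```

The early returns guarantee that answering a query never also triggers the default direct-object spreading in the same pass.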

When we run the JSAI without input, we notice that at first a chain of thought ensues based solely on conceptual activations and without making use of the SpreadAct() module. The AI says, "I HELP KIDS" and then "KIDS MAKE ROBOTS" and "ROBOTS NEED ME". As AI Mind maintainers we would like to make sure that SpreadAct() gets called to maintain chains of thought, not only so that the AI keeps on thinking but also so that the maturing AI Mind will gradually become able to follow chains of thought in all available directions, not just from direct objects to related ideas but also backwards from direct objects to related subjects or from verbs to related subjects and objects.

In the EnNounPhrase() module we insert a line of code to turn each direct object into an actpsi concept-to-be-activated in the default operation at the bottom of the SpreadAct() module. We observe that the artificial Mind begins to follow associative chains of thought much more reliably than before, when only haphazard activation was operating. In the special test-cases of the SpreadAct() module we insert the "return" statement in order to perform only the special case and to skip the treatment of a direct object as a point of departure into a chain of thought. Then we observe something strange when we ask the AI "what do you think", after the initial output of "I HELP KIDS". The AI responds to our query with "I THINK THAT KIDS MAKE ROBOTS", which is the idea engendered by the initial thought of "I HELP KIDS" where "KIDS" as a direct object becomes the actpsi going into SpreadAct(). So the beastie really is telling us what is currently on its mind, whereas previously it would answer, "I THINK THAT I AM A PERSON". When we delay entering our question a little, the AI responds "I THINK THAT ROBOTS NEED ME".

Sunday, June 03, 2018

jmpj0603

AI Mind spares Indicative() and improves SpreadAct() mind-module.

We have a problem where the AI Mind is calling Indicative() two times in a row for no good reason. After a what-think query, the AI is supposed to call Indicative() a first time, then ConJoin(), and then Indicative() again. We could make the governance depend upon either the 840=THINK verb or upon the conj-flag from the ConJoin() module, which, however, is not set positive until control flows the first time through the Indicative() module. Although we have been setting conj back to zero at the end of ConJoin(), we could delay the resetting in order to use conj as a control-flag for whether or not to generate thought-clauses joined by one or more conjunctions. Such a method shifts the problem back to the ConJoin() module, which will probably have to check conceptual memory for how many ideas have high activation above a certain threshold warranting the use of a conjunction. Accordingly we go into the Table of Variables webpage and we write a description of conj as a two-purpose variable. Then we need to decide where to reset conj back to zero, if not at the end of Indicative(). We move the zero-reset of conj from ConJoin() to the EnThink() module, and we stop getting more than one call to Indicative() in normal circumstances. However, when we input a what-query, which sets the whatcon variable to a positive one, we encounter problems.

Suddenly it looks as though answers to a what-think query have been coming not from SpreadAct(), but simply from the activation of the 840=THINK concept. It turns out that a line of "psyExam" code was missing from a SpreadAct() search-loop, with the result that no engrams were being found or activated -- which activation is the main job of the SpreadAct() module.

Wednesday, May 30, 2018

jmpj0530

Solving who-query problems and EnParser bug.

Although the ghost.pl AI responds to a who-query by calling SpreadAct() from the end of AudInput(), the JSAI will call SpreadAct() too many times from AudInput() before the end of the input. Since we need to test for qucon when the Volition() module is not engaged in thinking, we test for qucon in the Sensorium() module, which does not call AudInput() but which is called from the MainLoop() after each generation of a thought.

We must also troubleshoot why the JSAI eventually outputs "ME ME ME". We discover that EnNounPhrase() is sending an aud=726 into Speech() while there is a false verblock=727. Then we learn that the concept-row at the end of "ROBOTS NEED ME" for "ME" at t=727 has an unwarranted tkb psi13=727, as if the concept 701=I had a tkb. Apparently we need to prevent a false tkb from being stored. An inspection of the diagnostic display shows that the tkb properly set for each verb is improperly being retained and set for the object of the verb. We then notice that the EnParser() module is correctly setting the time-of-direct-object "tdo" to be the tkb of a verb, but is then leaving the working tkb value stuck at the "tdo" value. So we insert into EnParser() a line of code to reset tkb immediately back to zero after storing the tkb of a verb, and the erroneous "ME ME ME" output no longer appears.
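The fix can be reconstructed in miniature (hypothetical storage and concept numbers, not the actual JSAI code): the working tkb value set while storing a verb must be cleared immediately, or the next noun row inherits the verb's tkb and retrieval later loops on "ME":

```javascript
const rows = [];                 // conceptual memory, one row per word
let tkb = 0;                     // working time-of-knowledge-base value

function store(psi, isVerb, tdo) {
  if (isVerb) tkb = tdo;         // a verb points at its direct object
  rows.push({ psi, tkb });
  tkb = 0;                       // the fix: zero tkb right after storing
}

store(601, false, 0);            // ROBOTS (illustrative concept number)
store(851, true, 727);           // NEED, with its object at t=727
store(701, false, 0);            // ME must NOT inherit the verb's tkb
```

Without the reset line, the row for 701=I would be stored with tkb 727, the exact false association uncovered in the diagnostic display.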

Friday, May 25, 2018

jmpj0525

Preventing wrong grammatical number for a predicate nominative.

In the 25may18A.html version of the JavaScript AI Mind we wish to correct a problem where the AI erroneously says "I AM A ROBOTS". The wrong grammatical number for "ROBOT" results when the AI software searches backwards through time for the concept of "ROBOT" and finds an engram in the plural number. We hope to fix the problem by requiring that the EnVerbPhrase() module, before fetching the predicate nominative of an intransitive verb of being, set the "REQuired NUMber" numreq variable to the same value as the number of the subject of the be-verb. Then the EnNounPhrase() module may find the right concept for the predicate nominative and also find (or create) the English word of the concept with the proper inflectional ending for the required number. Since the numreq value is of service only during one pass through the EnNounPhrase() module, we may safely zero out numreq at the end of EnNounPhrase().
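A minimal sketch of the number-agreement rule (numreq is the document's own variable; the two-entry lexicon and concept number 571 are made-up stand-ins for the backwards search through auditory memory):

```javascript
let numreq = 0;                           // required grammatical number
const lexicon = [
  { psi: 571, word: 'ROBOTS', num: 2 },   // most recent engram: plural
  { psi: 571, word: 'ROBOT',  num: 1 },   // earlier singular engram
];

function enNounPhrase(psi) {
  // Honor numreq when it is set; otherwise take the first engram found.
  const hit = lexicon.find(e => e.psi === psi && (!numreq || e.num === numreq));
  return hit ? hit.word : 'ERROR';
}

function enVerbPhrase(subjectNum) {
  numreq = subjectNum;                    // demand the subject's number
  const word = enNounPhrase(571);         // fetch the predicate nominative
  numreq = 0;                             // one pass only: zero it out
  return word;
}

const predicate = enVerbPhrase(1);        // singular subject "I"
```

With numreq unset, the search would stop at the plural "ROBOTS" and reproduce the "I AM A ROBOTS" bug.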

Tuesday, May 22, 2018

jmpj0522

Expanding MindBoot with concepts to demonstrate AI functionality.

Today in the 22may18A.html version of the AI Mind in JavaScript (JSAI) for Microsoft Internet Explorer (MSIE), we wish to expand the MindBoot() module with a few English words and concepts necessary for the demonstration of the AI functionality. We first create a concept of "ANNA" as a woman, for two or three reasons. Firstly, we want the JSAI to be able to demonstrate automated reasoning with logical inference, and the MindBoot() sequence already contains the idea or premise that "Women have a child". Having created the Anna-concept, we typed in "anna is a woman" and the AI asked us, "DOES ANNA HAVE CHILD". If the concept of Anna were not yet known to the AI, we might instead get a query of "WHAT IS ANNA". Secondly, we want "Anna" as a name that works equally well in English or in Russian, because we may install the Russian language in the JSAI. In fact, we go beyond the mere concept of "Anna" and we insert the full sentence "ANNA SPEAKS RUSSIAN" into the MindBoot so that the AI knows something about Anna. We create 569=RUSSIAN for the Russian language, so that later we may have 169=RUSSIAN as an adjective. When we type in "you speak russian", eventually the AI outputs "I SPEAK RUSSIAN". A third reason why we install the concept 502=ANNA is for the sake of machine translation, in case we add the Russian language to the JSAI.

Next we add "GOD DOES NOT PLAY DICE" to the MindBoot() sequence in order to demonstrate negation of ideas and the use of the truth value, because we may safely assert in the AI Mind the famous Einsteinian claim about God and the universe. We type in "you know god" and the AI responds "GOD DOES NOT PLAY DICE". Let us try "you know anna". The AI responds "ANNA SPEAKS RUSSIAN". Next we add the preposition "ABOUT" to the MindBoot so that we may ask the AI what it thinks about something or what it knows about something. We are trying to create a ruminating AI Mind that somewhat consciously thinks about its own existence and tries to communicate with the outside world.

Sunday, May 20, 2018

jmpj0520

Slowing down the speed of thought to wait for human input.

The biggest complaint about the JavaScript Artificial Intelligence (JSAI) recently is that the AI output keeps changing faster than the user can reply. Therefore we need to introduce a delay to slow down the AI and let the human user enter a message. In the AudListen() module we insert a line of code to reset the rsvp variable to an arbitrary value of two thousand (2000) whenever a key of input is pressed. In the English-thinking EnThink() module we insert a delay loop to slow the AI Mind down during user input and to speed the AI up in the prolonged absence of user input.
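The pacing scheme might be sketched like this, with rsvp as in the text and a hypothetical decay of 500 ms per think-cycle standing in for the real delay loop:

```javascript
let rsvp = 2000;                     // thought delay in milliseconds

function audListen(keyPressed) {
  if (keyPressed) rsvp = 2000;       // user is typing: slow down again
}

function enThink() {
  const delay = rsvp;                // in the real JSAI: setTimeout(..., delay)
  rsvp = Math.max(0, rsvp - 500);    // hypothetical decay per silent cycle
  return delay;
}

const afterInput = enThink();        // full 2000 ms right after a keypress
enThink();                           // the Mind speeds up when left alone
enThink();
audListen(true);                     // a new keypress restores the delay
const afterKey = enThink();
```

The effect is a Mind that idles quickly when ignored but gives a human typist a two-second window after every keystroke.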

Sunday, May 13, 2018

jmpj0513

Answering of what-think queries with a compound sentence.

We would like our JavaScript Artificial Intelligence (JSAI) to be able to answer queries in the format of "What do you think?" or "What do you know?" We begin in the InStantiate() module by zeroing out the input of a 781=WHAT concept by adding a line of code borrowed from the ghost.pl AI. Then we input "what do kids make" and the AI correctly answers, "KIDS MAKE ROBOTS". However, when we input "what do you think" or "what do you know", the AI does not respond with "I THINK..." or "I KNOW...". Therefore we need to make use of the Indicative() module to generate a compound sentence to be conjoined with the conjunction "THAT". Into the MindBoot() vocabulary we add an entry for the conjunction 310=THAT.

After much trial and error we have gotten the JSAI to respond to the query "what do you think" with "I THINK THAT I AM A PERSON". We let the English-thinking EnThink() module call the Indicative() module first for a main clause with the conjunction "that" and again to generate a subordinate clause. When we ask, "what do i think", the response is "YOU THINK THAT I AM A PERSON". When we inquire "what does god think", the ignorance of the AI engenders the answer "I THINK THAT GOD THINK" which may or may not be a default resort to the ego-concept of self.
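The clause-generation sequence can be sketched as follows (module behavior is heavily simplified; the real Indicative() generates from conceptual memory rather than from canned word lists):

```javascript
const out = [];

function indicative(clause) { out.push(...clause); }  // one clause per call
function conJoin() { out.push('THAT'); }              // 310=THAT conjunction

function enThink(thinkQuery) {
  indicative(['I', 'THINK']);                // main clause
  if (thinkQuery) {
    conJoin();                               // conjoin with "THAT"
    indicative(['I', 'AM', 'A', 'PERSON']);  // subordinate clause
  }
  return out.join(' ');
}

const answer = enThink(true);
```

The what-think flag is what licenses the second call to Indicative(), giving the compound sentence described above.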

Saturday, March 17, 2018

jmpj0317

Restoring JavaScript variable-comments and removing obsolete variables.

Since we assume that many people have made copies of the JavaScript Artificial Intelligence (JSAI) in order to study it, today we carefully make some pressing changes and for each change we provide an explanation by way of justification. First we delete some previously commented-out code-lines which were left in the open-source AI codebase for the sake of continuity, that is, to show that the particular lines of code were on the way out. Thus we remove the commented-out variable "kbyn" from 30jun2011.

Next from the obsolete 20mar07A version of the JSAI we restore comments for some variables and we remove some obsolete variables. We add a link to Consciousness above the Control Panel.

Anyone finding a bug in the AI software may subscribe to the mail-list agi@listbox.com for Artificial General Intelligence (AGI) and report bugs to the AGI community or engage in archived AGI discussion. There is no bug-bounty, other than the glory of the deed.

Friday, March 02, 2018

jmpj0302

Moving the JavaScript AI towards Artificial Consciousness

Two important goals for the AI Mind in JavaScript are the already demonstrated Natural Language Understanding (NLU) and the not-yet-proven Artificial Consciousness. Before we work explicitly on consciousness, we remove the clutter of some obsolete tutorial display code from the MainLoop and elsewhere, so that the program as a whole may be easier to understand and work with.

We have a chance here to demonstrate an entity aware of itself and of some other entity such as a human user conversing with the AI. If we start claiming that our JSAI has consciousness, Netizens will test the AI in various ways, such as asking it a lot of questions. Typical questions to test consciousness would be "who are you" and "who am i". The interrogative pronoun "who" sets the qucon flag to a positive value of one so that the SpreadAct module may activate the necessary concepts for a proper response. We need a way to make the AI concentrate on the subject of any who-query, so that the AI will give evidence of consciousness simply by answering the question.

When we enter "god is person" and then we ask, "who is god", the AI answers "GOD AM A PERSON" -- which sounds wrong but which only requires an improvement in finding the correct form "IS" for the verb "BE".

Saturday, January 20, 2018

jmpj0120

MsIeAI for AI Mind Maintainers achieves albeit buggy sentience.

In the all-but-Singularity MsIeAI, alert-boxes have helped us to chase an elusive bug into the latter part of EnNounPhrase, where the AI is testing mjact for too low an activation. No, another alert-box tells us that we are back in EnVerbPhrase from EnNounPhrase before the "Error on page" flashes quickly. So at the end of EnVerbPhrase we insert a BUG-CHASE alert-box -- and the program never reaches it! So is the fatal bug somewhere just before the end of EnVerbPhrase()? Since that code contains a prepgen test, we modify an alert-box to reveal the prepgen value, but the alert-box fails to pop up. Then we check the declarations of variables at the top of the program, and prepgen is not there. Next we get prepgen from the ghost.pl AI and we drop it mutatis mutandis into the MsIeAI code. We are about to run the hopefully Next Big Thing AI and see what happens. Huh?!! Some kind of thought-storm is occurring. Shades of Watson! Come here! I need you! And where is IBM Watson in our hour of need?

Now let us comment out the alert-boxes and see if the Watsonized AI will loop endlessly ad infinitum. Oh gee, this AI is still all messed up, but at least it is looping.