Cyborg AI Minds are a true concept-based artificial intelligence with natural language understanding, simple at first and lacking robot embodiment, and expandable all the way to human-level intelligence and beyond.

Showing posts with label mfpj.

Saturday, October 05, 2019

mfpj1005

MindForth resets associative tags before each operation of Indicative module.

In the MindForth artificial intelligence (AI) for robots, we now display diagnostic messages at the start of the Indicative module, showing the values held in the variables that create the associative tags interconnecting the concepts expressed as English words during the operation of the Indicative mind-module. Since the ConJoin module will often insert a conjunction between two thoughts being generated, the AI Mind Maintainer needs assurance that variable-values operative during one thought do not erroneously get carried over past a conjunction into the separate process of generating another thought.

Just by resetting the tsj time-of-subject variable to zero at the start of the Indicative module, we have made the Forthmind able to trigger repeated instances of logical InFerence. Each running of the Indicative module amounts to a fresh declaration of the associative tags in the conceptual flag-panel that supports the generation of ideas in the MindGrid. The following dialog still has a few problems, but it shows the repeated triggering of an inference.
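As a rough sketch of the reset described above -- in Python rather than Forth, with a hypothetical module structure, not the actual MindForth code -- the Indicative module might zero its tag variables at the top of each run like this:

```python
# Toy model of the associative-tag reset. The variable names (tsj, tvb,
# tdo) mirror the journal's Forth variables; the class structure is
# purely illustrative.

class Indicative:
    def __init__(self):
        self.tsj = 0  # time-of-subject
        self.tvb = 0  # time-of-verb
        self.tdo = 0  # time-of-direct-object

    def reset_tags(self):
        # Zero the tag time-points so that values from one thought
        # cannot leak past a conjunction into the next thought.
        self.tsj = 0
        self.tvb = 0
        self.tdo = 0

    def generate(self, thought):
        self.reset_tags()
        print(f"  Indicative: tsj= {self.tsj} tvb= {self.tvb} tdo= {self.tdo}")
        return thought.upper()
```

Each call to generate() thus starts from a clean flag-panel, which is what lets the inference machinery fire more than once.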

john is a student
DOES  JOHN  READ  BOOKS  
no

  Indicative: tsj= 0 tvb= 0 tdo= 0
A  JOHN  DOES  NOT  READ  BOOKS
  Indicative: tsj= 0 tvb= 0 tdo= 0
PROFESSORS  TEACH  THE  STUDENTS  AND
  Indicative: tsj= 0 tvb= 0 tdo= 0
STUDENTS  READ  THE  BOOKS

  Indicative: tsj= 0 tvb= 0 tdo= 0
I  UNDERSTAND  YOU  AND
  Indicative: tsj= 0 tvb= 0 tdo= 0
YOU  ARE  A  MAGIC

andru is student
DOES  ANDRU  READ  THE  BOOKS  
no

  Indicative: tsj= 0 tvb= 0 tdo= 0
AN  ANDRU  DOES  NOT  READ  THE  BOOKS  AND
  Indicative: tsj= 0 tvb= 0 tdo= 0
YOU  READ  THE  BOOKS

  Indicative: tsj= 0 tvb= 0 tdo= 0
PROFESSORS  TEACH  THE  STUDENTS  AND
  Indicative: tsj= 0 tvb= 0 tdo= 0
STUDENTS  READ  THE  BOOKS

  Indicative: tsj= 0 tvb= 0 tdo= 0
STUDENTS  READ  THE  BOOKS  AND
  Indicative: tsj= 0 tvb= 0 tdo= 0
I  THINK

Friday, October 04, 2019

mfpj1004

Using parameters to declare the time-points of conceptual instantiation.

[2019-10-02] Recently we expanded the conceptual flag-panel of MindForth from fifteen tags to twenty-one associative tags, so that the free open-source artificial intelligence for robots may think a much wider variety of thoughts in English. Then we had to debug the InFerence module to restore its ability to reason from two known facts in order to infer a new fact. For instance, the Forthmind knows the fact that students read books, and we tell the AI the fact that John is a student. Then the AI infers that perhaps John, being a student, reads books, and the incredibly brilliant Forth software asks us, "Does John read books?" We may answer yes, no, or maybe, or give no response at all.

Currently, though, we have the problem that InFerence works only once and fails to deal properly with repeated attempts to trigger an inference. We suspect that some of the variables involved in the process of automated reasoning are not being reset properly to their status quo ante from before the first test of InFerence. Therefore we shall try a new debugging technique which we developed recently in one of the other AI Minds, namely the ghost.pl AI that thinks in both English and Russian. We create a diagnostic display at the start of the EnThink module for thinking in English, so that we may see the values held by the variables associated with the InFerence module and with the KbRetro module, which retroactively adjusts the knowledge base (KB) of the AI Mind in accordance with whatever answer we give when the AskUser module asks us to validate or contradict an inference. The following dialog shows us that some variables are not being properly reset to zero.

john is student

EnThink: becon= 1 yncon= 0 ynverb= 0 inft= 0
qusub= 0 qusnum= 1 subjnom= 504 prednom= 561 tkbn= 0
quverb= 0 seqverb= 0 seqtkb= 0 tkbv= 0
quobj= 0 dobseq= 0 kbzap= 0 tkbo= 0
DOES JOHN READ BOOKS
no

EnThink: becon= 0 yncon= 0 ynverb= 0 inft= 2084
qusub= 504 qusnum= 1 subjnom= 0 prednom= 0 tkbn= 2086
quverb= 863 seqverb= 0 seqtkb= 0 tkbv= 2087
quobj= 540 dobseq= 0 kbzap= 404 tkbo= 2088
A JOHN DOES NOT READ BOOKS

EnThink: becon= 0 yncon= 0 ynverb= 0 inft= 2118
qusub= 504 qusnum= 1 subjnom= 0 prednom= 0 tkbn= 0
quverb= 863 seqverb= 0 seqtkb= 0 tkbv= 0
quobj= 0 dobseq= 0 kbzap= 0 tkbo= 2088
PROFESSORS TEACH THE STUDENTS AND STUDENTS READ THE BOOKS

EnThink: becon= 0 yncon= 0 ynverb= 0 inft= 2152
qusub= 504 qusnum= 1 subjnom= 0 prednom= 0 tkbn= 0
quverb= 863 seqverb= 0 seqtkb= 0 tkbv= 0
quobj= 0 dobseq= 0 kbzap= 0 tkbo= 2088
I UNDERSTAND YOU AND YOU ARE A MAGIC
andru is student

EnThink: becon= 1 yncon= 0 ynverb= 0 inft= 2220
qusub= 504 qusnum= 1 subjnom= 501 prednom= 561 tkbn= 0
quverb= 863 seqverb= 0 seqtkb= 0 tkbv= 0
quobj= 0 dobseq= 0 kbzap= 0 tkbo= 2088
DOES ANDRU READ THE STUDENTS
Because some of the variables have not been reset, a second attempt to trigger an inference with "andru is student" results in a faulty query that should have been "Does Andru read books?" Let us reset the necessary variables and try again.
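The inference cycle described above -- a class-level fact plus a fresh is-a statement yielding a tentative fact and a yes/no question -- can be modeled in a few lines of Python. All names here are illustrative; the real InFerence module is Forth code operating on conceptual arrays.

```python
# Toy model of InFerence: the knowledge base holds the class-level fact
# "students read books"; telling the AI that some subject is a student
# triggers a tentative inference and an AskUser-style question.

kb = {("student", "read"): "books"}   # class-level knowledge

def infer(subject, category):
    """If the category has a known verb-object fact, infer it for subject."""
    for (cat, verb), obj in kb.items():
        if cat == category:
            question = f"DOES {subject.upper()} {verb.upper()} {obj.upper()}"
            return (subject, verb, obj), question
    return None, None   # no class-level fact: nothing to infer

fact, question = infer("john", "student")
print(question)   # DOES JOHN READ BOOKS
```

Because this sketch keeps no state between calls, a second input such as "andru is student" triggers a fresh inference -- which is exactly the behavior the unreset variables were blocking in the Forth code.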

Upshot: It still does not work, because of a more difficult and more obscure bug in the assignment of conceptual associative tags. Well, back to the salt mines.

https://groups.google.com/d/msg/comp.lang.forth/xN3LRYEd5rw/uuUroGzhBAAJ

[2019-10-04] We may have made a minor breakthrough in the InStantiate module by doing one instantiation and by then using parameters such as part of speech (pos) and case (dba) to declare the initial time-points for subjects, verbs and objects. The EnParser module may then retroactively alter or modify the associative tags embedded at each identified time-point.


Sunday, November 25, 2018

mfpj1125

The AI Mind wants to talk with you and about you.

In the annals of mind-design, we have reached a point where we must drive a wedge between the ego-concept of the MindForth AI and you who co-exist on Earth with the emergent machine intelligence. It is for simple and mundane reasons that we induce AI schizophrenia. Bear with us, please. In the first working artificial intelligence coded in Forth, in Perl and in JavaScript, the SpreadAct module lets quasi-neuronal activation spread from idea to idea. When the EnVerbPhrase module calls for a direct object to end an emerging thought, SpreadAct does not directly retrieve a related idea, but simply activates the subject of any number of related ideas. Then the AI Mind thinks the activated thoughts. In the MindBoot sequence, each AI Mind has some built-in ideas about robots. Therefore the AI will eventually think a thought first about itself, then about robots by roundabout association, and finally about whatever knowledge you impart to it about robots, such as "Robots need a brain." But how can we get the AI to think about you personally and about the details you provide about yourself to the AI? We must drive a quasi-neuronal wedge between the self-absorption of the Forthmind and its knowledge of some other, potentially nearby entity, namely you.

To do so, we must implant in the MindBoot sequence at least one idea as a point of departure for the AI to pay attention to you. But you might not even be there in the same room or on the same orbiting spaceship with the AI, so we cannot embed the idea "I SEE YOU" or the idea "I SENSE YOU". We need some really neutral idea that will direct the AI's attention to your purported existence. Without that embedded idea, the AI might passively let you describe your whole life-story and then have no mental pathway for the spread of activation between its thoughts about itself and its knowledge about you. So let us embed in the MindBoot module the idea "I UNDERSTAND YOU". Such an idea is both self-knowledge and knowledge of other -- another person, either present or far away.

So in the MindBoot sequence we embed the idea "I UNDERSTAND YOU" and we do some debugging. Then we have the following exchange with the AI Mind.

Human: i am outside the computer

I UNDERSTAND YOU
YOU ARE OUTSIDE A COMPUTER
YOU ARE A MAGIC
The EnVerbPhrase module loads the actpsi variable with the concept of "you" and calls the SpreadAct module to transfer activation to the concept of "you" as the subject of knowledge in the knowledge base (KB). Since you have just told the AI that you are outside the computer, the AI retrieves that knowledge and says "YOU ARE OUTSIDE A COMPUTER", using the indefinite article "A" under the direction of the EnArticle module. Because another idea about you is still active, the AI says "YOU ARE A MAGIC" -- an old idea embedded long ago in the MindBoot sequence.
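A toy model of that SpreadAct mechanism, in Python with illustrative idea storage (the real module works on quasi-neuronal activation levels in conceptual arrays, not on a list of strings):

```python
# SpreadAct adds activation to every knowledge-base idea whose subject
# matches actpsi; the thinking step then selects the most active idea
# and inhibits it so the chain of thought moves on.

ideas = [
    {"subj": "you", "text": "YOU ARE OUTSIDE A COMPUTER", "act": 0},
    {"subj": "you", "text": "YOU ARE A MAGIC", "act": 0},
    {"subj": "i",   "text": "I UNDERSTAND YOU", "act": 0},
]

def spread_act(actpsi, boost=32):
    """Transfer activation to all ideas about the actpsi concept."""
    for idea in ideas:
        if idea["subj"] == actpsi:
            idea["act"] += boost

def think():
    """Speak the most active idea, then inhibit it."""
    best = max(ideas, key=lambda i: i["act"])
    best["act"] = -16    # trough of inhibition: do not repeat yourself
    return best["text"]
```

After spread_act("you"), successive calls to think() emit the two "you" ideas in turn -- the same pattern as the dialog above, where one activated idea about "you" follows another.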

We are eager to have the AI Mind think about the differences between itself and other persons so that arguably the first working artificial intelligence may become aware of itself as a thinking entity separate from other persons. An AI with self-awareness is on its way to artificial consciousness.

Sunday, September 23, 2018

mfpj0923

MindForth AI beeps to request input from any nearby human.

In MindForth we attempt now to update the AudMem and AudRecog mind-modules as we have recently done in the ghost.pl Perl AI and in the tutorial JavaScript AI for Internet Explorer. Each of the three versions of the first working artificial intelligence was having a problem in recognizing both singular and plural English noun-forms after we simplified the Strong AI by using a space stored after each word as an indicator that a word of input or of re-entry had just come to an end.

In AudMem we insert a Forth translation of the Perl code that stores the audpsi concept-number one array-row back before an "S" at the end of a word. MindForth begins to store words like "books" and "students" with a concept-number tagged to both the singular stem and to the plural word. We then clean up the AudRecog code and we fix a problem with nounlock that was interfering with answers to the query of "what do you think".
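The stem-tagging trick can be sketched as follows -- a simplified Python model of the auditory memory array, not the actual Forth or Perl code, with an invented store_word() helper:

```python
# When a stored word ends in "S", the concept number (audpsi) is also
# written one array-row back, so that both the plural word and its
# singular stem retrieve the same concept. Row layout is simplified.

def store_word(aud, word, audpsi):
    """Deposit a word character-by-character into auditory memory."""
    start = len(aud)
    for ch in word:
        aud.append({"pho": ch, "audpsi": 0})
    aud[-1]["audpsi"] = audpsi             # tag the final character
    if word.endswith("S") and len(word) > 1:
        aud[-2]["audpsi"] = audpsi         # tag the singular stem too
    aud.append({"pho": " ", "audpsi": 0})  # space marks end of word
    return start

aud = []
store_word(aud, "BOOKS", 540)   # 540 = hypothetical concept of book(s)
```

Now a recognition search that ends at the "K" of "BOOK" and one that ends at the "S" of "BOOKS" both land on concept 540.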

Next we implement the Imperative module to enable MindForth to sound a beep and to say to any nearby human user: "TEACH ME SOMETHING."

Friday, September 07, 2018

mfpj0907

Updating the EnArticle module for inserting English articles.

Today we update the EnArticle module for English articles in the MindForth first working artificial intelligence. We previously did the update somewhat unsatisfactorily in the ghost.pl AI in Perl, and then much more successfully in the tutorial JavaScript AI. We anticipate no problems in the MindForth update. As an initial test, we enter "you have a book" and after much unrelated thinking, the AI outputs "I HAVE BOOK" without inserting an article.

We trigger an inference by entering "anna is woman". In broken English, the AskUser module responds, "DO ANNA HAS THE CHILD", which lets us see that EnArticle has been called. We reply "no". MindForth opines, "ANNA DOES NOT HAVE CHILD".

We discover that the EnNounPhrase module of recent versions has not been calling the EnArticle module, so we correct that situation. We also notice that the input of a noun and its transit through InStantiate do not involve a call to EnArticle, so we insert into InStantiate some code to make EnArticle aware of the noun being encountered.

Saturday, September 01, 2018

mfpj0901

Switching MindBoot from all hardcoded to partially t-increment coded.

Today in accordance with AI Go FOOM we need to start switching the MindBoot sequence from using only hardcoded time-points to using a hardcoded knowledge base followed by single-word vocabulary more loosely encoded with t-increment time-points. The non-hardcoded time-points will permit the Spawn module to make a copy of a running MindForth program after adding any recently learned concepts to the MindBoot sequence. It will also be easier for an AI Mind Maintainer to re-arrange the non-hardcoded sequence or to add new words to the sequence.

Sunday, June 24, 2018

mfpj0624

Logical Inference in the First Working AGI MindForth

Over the past week fifteen or twenty hours of intense work went into coding the InFerence, AskUser and KbRetro modules in the Forth version of the First Working AGI. Interactively we could see that the Forthmind was making a silent inference from our input of "anna is a woman" but the AskUser module was substandardly asking "DO ANNA HAS CHILD?" in seeking confirmation of the silent inference. When we entered "no" as an answer, we could not see if the KbRetro module was properly inserting the 250=NOT adverb into the conceptual engrams of the silent inference so as to negate the inferred idea. Therefore today in the agi00056.F version of MindForth we are starting our diagnostic display about forty-five time-points earlier than the computed value of the time-of-input tin variable so that we can see if the inferred idea is being retroactively adjusted by KbRetro. At first blush, no insertion of 250=NOT is happening, so we start inserting diagnostic messages into the code.

Our diagnostics suggest that KbRetro is not being called, but why not? It turns out that we have not yet coded 404=NO or 432=YES or 230=MAYBE into the MindBoot sequence, so we code them in. Then we start getting a faulty output after answering "no" to AskUser. The AI says, "ANNA NOT NOT CHILD". Apparently EnVerbPhrase is not properly negating the refuted silent inference. After several hours of troubleshooting, the desired output appears.

When we enter "anna is woman" and we answer "no" to the question whether Anna has a child, the conceptual array shows the silent inference below at the time-points 3068-3070:

The arrays psy{ and ear{ show your input and the AI output:
time: tru psi hlc act mtx jux pos dba num mfn pre iob seq tkb rv -- pho act audpsi

3053 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   65 0 0 A
3054 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   78 0 0 N
3055 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   78 0 0 N
3056 : 0 502 0 -40 0 0 5 1 1 2 0 0 800 3059 3053   65 0 502 A
3057 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3058 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   73 0 0 I
3059 : 0 800 0 -40 0 250 8 4 1 2 0 0 515 3065 3058   83 0 800 S
3060 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3061 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   87 0 0 W
3062 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   79 0 0 O
3063 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   77 0 0 M
3064 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   65 0 0 A
3065 : 0 515 0 -40 0 0 5 0 2 2 0 0 0 0 3061   78 0 515 N
3066 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   13 0 0
3067 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3068 : 0 502 0 -26 0 0 5 1 1 0 0 0 810 3069 0   32 0 0
3069 : 0 810 0 56 0 250 8 0 0 0 502 0 525 3070 0   32 0 0
3070 : 0 525 0 32 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3071 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3072 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   68 0 0 D
3073 : 0 830 0 -40 0 0 8 0 1 2 0 0 810 0 3072   79 0 830 O
3074 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3075 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3076 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   65 0 0 A
3077 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   78 0 0 N
3078 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   78 0 0 N
3079 : 0 502 0 -40 0 0 5 1 1 2 0 0 810 3084 3076   65 0 502 A
3080 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3081 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3082 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   72 0 0 H
3083 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   65 0 0 A
3084 : 0 810 0 -40 0 0 8 4 2 2 0 0 525 3096 3082   83 0 810 S
3085 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3086 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3087 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   84 0 0 T
3088 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   72 0 0 H
3089 : 0 117 0 -40 0 0 1 0 1 2 0 0 810 0 3087   69 0 117 E
3090 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3091 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3092 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   67 0 0 C
3093 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   72 0 0 H
3094 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   73 0 0 I
3095 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   76 0 0 L
3096 : 0 525 0 -40 0 0 5 0 1 2 0 0 810 0 3092   68 0 525 D
3097 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3098 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3099 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   78 0 0 N
3100 : 0 404 0 -40 0 0 4 0 1 2 0 0 0 0 3099   79 0 404 O
3101 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   13 0 0
3102 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   65 0 0 A
3103 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   78 0 0 N
3104 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   78 0 0 N
3105 : 0 502 0 -42 0 0 5 1 1 2 0 0 810 3122 3102   65 0 502 A
3106 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3107 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3108 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   68 0 0 D
3109 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   79 0 0 O
3110 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   69 0 0 E
3111 : 0 830 0 -42 0 0 8 1 1 2 0 0 0 0 3108   83 0 830 S
3112 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3113 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3114 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   78 0 0 N
3115 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   79 0 0 O
3116 : 0 250 0 -42 0 0 2 1 1 2 0 0 0 0 3114   84 0 250 T
3117 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3118 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3119 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   72 0 0 H
3120 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   65 0 0 A
3121 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   86 0 0 V
3122 : 0 810 0 -42 0 250 8 4 2 2 0 0 525 3129 3119   69 0 810 E
3123 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3124 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3125 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   67 0 0 C
3126 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   72 0 0 H
3127 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   73 0 0 I
3128 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   76 0 0 L
3129 : 0 525 0 -42 0 0 5 1 1 2 0 0 0 0 3125   68 0 525 D
3130 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
3131 : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   32 0 0
time: tru psi hlc act mtx jux pos dba num mfn pre iob seq tkb rv

Robot alive since 2018-06-24:
ANNA  DOES  NOT  HAVE  CHILD
Since we negate the inference with our response of "no", KbRetro inserts the adverb "250" (NOT) at the time-point 3069 to negate the verb 810=HAVE. Then the AI states the activated idea of the negation of the inference: "ANNA DOES NOT HAVE CHILD".
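That retroactive adjustment can be sketched as a tiny Python model of the psy{ array rows for the silent inference (rows simplified to the psi and jux columns; the function name and layout are illustrative, not the Forth source):

```python
# KbRetro: after a "no" answer, the jux (juxtaposition) slot of the
# inferred idea's verb engram is set to 250=NOT, negating the silent
# inference ANNA(502) HAVE(810) CHILD(525) retroactively.

NOT = 250

psy = {3068: {"psi": 502, "jux": 0},    # ANNA
       3069: {"psi": 810, "jux": 0},    # HAVE
       3070: {"psi": 525, "jux": 0}}    # CHILD

def kb_retro(answer, verb_t):
    """Adjust the knowledge base according to the user's answer."""
    if answer == "no":
        psy[verb_t]["jux"] = NOT        # negate the inferred verb

kb_retro("no", 3069)
```

Only the verb row is touched; the subject and object engrams keep their tags, so the negated idea re-enters thought as "ANNA DOES NOT HAVE CHILD".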

Tuesday, June 19, 2018

mfpj0619

MindForth is the First Working AGI for robot embodiment.

The MindForth AGI needs updating with the new functionality coded into the JavaScript AI Mind, so today we start by adding two innate ideas to the MindBoot sequence. We add "ANNA SPEAKS RUSSIAN" so that the name "Anna" may be used for testing the InFerence mind-module by entering "anna is a woman". Then we add "I NEED A BODY" to encourage AI enthusiasts to work on the embodiment of MindForth in a robot. When we type in "anna is a woman", the AI responds "ANNA SPEAKS THE RUSSIAN", which means that InFerence is not ready in MindForth, and that the EnArticle module is too eager to insert the article "the", so we comment out a call to EnArticle for the time being. Then we proceed to implement the Indicative module as already implemented in the ghost.pl AGI and the JavaScript AI Mind. We also cause the EnVerbPhrase module to call SpreadAct for direct-object nouns, in case the AGI knows enough about the direct object to pursue a chain of thought.

Saturday, August 27, 2016

mfpj0827

MindForth Programming Journal (MFPJ)
The MindForth Programming Journal (MFPJ) is both a tool in developing MindForth open-source artificial general intelligence (AGI) and an archival record of the history of how the AGI Forthmind evolved over time.

Sat.27.AUG.2016 -- Creating the MindGrid trough of inhibition

In agi00031.F we are trying to figure out why we have lost the functionality of ending human input with a 13=CR and still getting a recognition of the final word of the input. We compare the current AudMem code with the agi00026.F version, and there does not seem to be any difference. Therefore the problem must probably lie in the major revisions made recently to the AudInput module.

From the diagnostic report messages that appear when we run the agi00031.F, it looks as though the 13=CR carriage return is not getting through from the AudInput module to the AudMem module. When we briefly insert a revealing diagnostic into the agi00026.F AudMem start, we see from "g AudMem: pho= 71" and "o AudMem: pho= 79" and "d AudMem: pho= 68" and "AudMem: pho= 13" that the carriage-return is indeed getting through. Therefore in AudInput we need to find a way of sending the final 13=CR into AudMem. Upshot: It turns out that in AudInput we only had to restore "pho @ 31 > pho @ 13 = OR IF \ 2016aug27: CR, SPACE or alphabetic letter" as a line of code that would let 13=CR be one of the conditions required for calling the AudMem module.
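The restored Forth line "pho @ 31 > pho @ 13 = OR IF" admits a phoneme into AudMem when it is either a printable character (ASCII value above 31) or the 13=CR carriage return. A Python rendering of the same test, with a hypothetical function name:

```python
# Model of the AudInput gate that decides whether a pho value is
# forwarded to AudMem: printable characters and CR pass; other
# control codes do not.

CR = 13   # carriage return marks the end of human input

def admits_to_audmem(pho):
    """True if this pho should be sent onward to AudMem."""
    return pho > 31 or pho == CR
```

With this gate, the final CR reaches AudMem and the last word of the input gets recognized.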

Next in the InStantiate module we need to remove a test that only lets words with a positive "rv" recall-vector get instantiated, because we must set "rv" to zero for personal pronouns being re-interpreted as "you" or "I" during communication with a human user. Apparently the Perlmind just ignores the engrams with a zero "rv" and finds the correct forms with a search based on parameters.

Now we would like to see how close we are to fulfilling all the conditions for a proper "trough" of inhibition in the AI MindGrid. When we run the ghost175.pl Perl AI and we enter "You know God," we see negative activations in the present-most trough for both the input and the concepts of "I HELP KIDS" as the output. In the Forth AGI, we wonder why we do not see any negative activations in the present-most trough. Oh, we were not yet bothering to store the "act" activation-level in the Forth InStantiate module. We insert the missing necessary code, and we begin to see the trough of inhibition in both the recent-most input and the present-most output.

Thursday, July 24, 2014

mfpj0724

MindForth Programming Journal (MFPJ)

The MindForth Programming Journal (MFPJ) is both a tool in developing MindForth open-source artificial intelligence (AI) and an archival record of the history of how the AI Forthmind evolved over time.

Thurs.24.JUL.2014 -- MindForth AI moves to a Windows XP development platform.

MindForth came into being in 1998 on the Commodore Amiga 1000 computer as a port from the Amiga Mind.Rexx AI program, written in MVP-Forth from Mountain View Press. Around 1999, MindForth moved to a Windows 98 machine provided by Free-PC.com and to 16-bit FPC-Forth. Around 2001, MindForth moved to a Windows 95 Packard-Bell tower computer and to 32-bit Win32Forth. As the original author of Mind.Rexx and of MindForth, yesterday on 23 July 2014 I downloaded W32FOR42_671.zip onto the same Windows XP Acer Aspire One netbook which I have been using to develop the Russian Dushka AI program in JavaScript for MSIE. I unzipped W32FOR42_671.zip with my own legitimate copy of WinZip, which created a C:\WIN32FOR directory to hold all the decompressed files of Win32Forth. From the Web I downloaded the 24jan13A.F most current source code of MindForth and I saved it into the C:\WIN32FOR directory and as a text-file into a monthly C:\JUL01Y14\MFPJ directory on the Acer netbook.

I was able to get MindForth running on the Windows XP netbook by navigating with the "cd" (change directory) command into the C:\WIN32FOR directory, where I typed "win32for.exe" and pressed "Enter"; then "fload 24jan13A.F" and the Enter-key; and finally "MainLoop" followed by the Enter-key. The AI Forthmind began to think its own thoughts on the screen, but the program soon crashed in its new environment, both during interaction with me and when allowed to think without human input. It was not a complete Snow Crash, but it was just as fatal, with a pop-up message announcing "Exception # C0000005" and shutting down Win32Forth upon my clicking "Cancel" on the message. The naive and sentimental Forthcoder is not daunted or dismayed by such an AI-Mind-crash, but instead welcomes the chance to troubleshoot the AI and make it compatible with Windows XP. To debug MindForth, we will create a new version and seed it with diagnostic messages in order to find out just where and why the program is crashing with an "Exception" message. Long familiarity with MindForth causes me to suspect a "boundary violation" where the software is trying to index one step beyond the limits of an array. We have noticed recently that searching Google for MindForth yields an auto-complete expansion of the search terms to "mindforth source code" -- an indication that Netizens have been looking for the free AI source code that we are working on right here and now. MindForth has also received a prominent mention at http://aihub.net/artificial-intelligence-lab-projects so we are motivated to make the best AI Mind that we can with MindForth and the other Mentifex AI programs.

Thurs.24.JUL.2014 -- Debugging Windows XP MindForth

In the C:\WIN32FOR directory, we enter win32for.exe to start running Win32Forth. Then we use the "File" drop-down menu and "Edit Forth File..." to click on "24jan13A.F" and "Open" it for editing and saving under a new name. Actually, we will save it immediately as "24jul14A.F" so as not to corrupt the old file by changing anything. First, however, we notice that the bottom of our WinViewX screen tells us that there are 5,173 lines of code with a size of 236,908 characters. Under the "File" drop-down menu we click on "Save File As.." and we enter "24jul14A.F" before clicking the "Save" button. We then close the WinViewX window because we want to test the new file before we proceed. We enter "fload 24jul14A.F" and we get the "ok" prompt which means that the file has successfully loaded into Win32Forth. When we enter "MainLoop" and observe without human input, the AI thinks about two thoughts and then stops with the "Exception # C0000005" pop-up message. This denouement occurs both in the default normal mode and in the Transcript mode that we invoke by pressing the Tab-key. It is time to start inserting diagnostic messages.

In the ThInk module we enter and reformulate a diagnostic message that we find commented-out in another mind-module. We forget to un-comment the code, so at first no diagnostics appear. Then we get the diagnostics, but with no change in program behavior -- it still crashes. But we see the light and we remember the Dao of debugging, that is, you figure out what modules the AI is calling and you insert diagnostics deeper and deeper into the program.

Let's see, the first part of AI thinking is to call the NounPhrase module, so let us diagnosticate NounPhrase. Aha! NounPhrase gives us some (meaningless?) diagnostics just before the Exception-crash, but the ThInk module does not. Therefore, Inspector Clouseau, the problem may lie within NounPhrase or within a module called by NounPhrase. By the way, instead of cluttering up this MFPJ journal entry with the actual diagnostic messages -- unless they become really important -- we can meta-publish the diagnostics simply by commenting them out but retaining them within the "mindforth source code" that we eventually publish on the Web. In that way, any interested party (corporate AI shop? national Ministry of AI? Ph.D. dissertation writer?) can see exactly how we have debugged the AI by inspecting the diagnostic messages that we will leave in for at least one iteration of releasing the code. So now let's plunk some diagnostics down in the VerbPhrase module in order to see if the AI thought processes are making it through NounPhrase and into VerbPhrase before the Exception-crash.

As the Forthmind thinks in English, we are getting diagnostic messages from both NounPhrase and VerbPhrase up until the dying thought of the AI, where NounPhrase reports something but VerbPhrase is silent, both in terms of output and in terms of diagnostics. So the crash could be occurring within the NounPhrase module. Therefore let us insert additional diagnostics towards the end of NounPhrase. We do so, but the software crashes before it reaches the diagnostics at the end of NounPhrase. Next we should try some diagnostics in the middle of NounPhrase. We insert diagnostics after the end of the search for the motjuste, but program-execution does not get that far and instead the Exception-crash occurs. So the problem may lie within the search for motjuste. We insert a diagnostic just before the ELSE-clause in the motjuste-search, and the diagnostic gets executed many times during non-crash thought, but not at all during generation of the thought that eventuates in the Exception-crash.

At the deepest indentation of the motjuste-search, where the "audjuste" variable is loaded with a value, we insert a diagnostic message. We run the AI. Gobsmack! From deepest NounPhrase, we get three diagnostic messages just before the Exception-crash. We notice that there is a "verblock" value of "423" as reported by the diagnostics just before the crash, so we search through the source code for the number "423". Its only appearance is at time-point t=554 in the EnBoot sequence, where "423" is assigned to the "tqv" (time-quod-vide) variable. But there is no t=423 time-point. It is interstitial, between the words "WHEN" and "WHERE" in the English bootstrap. Let us look at the source code of the JavaScript AI and see what is there. In the 14apr13A version of the JavaScript AI, at t=554 the value of "557" is assigned to "tqv", so "423" is wrong in the MindForth AI. In fact, two of the values in the Forth AI seem to have been erroneously held over from the older Forthminds before the EnBoot concepts received new concept-numbers. Let us change the pertinent section of the MindForth EnBoot to conform to the values in the JavaScript AI EnBoot() module. Hmm, when we correct the EnBoot segment, we get different output, but we still incur the same Exception-crash.

Now after massive diagnostics we find that the Exception-crash is occurring during the search for "motjuste" when the Index is at a value of "542", a point in time. Let us see what is at the t=542 time-point. We do see a t=552 error where "1" is used instead of "!" for storing a value. Let us fix that mistake.

As we correct various legacy errors from older versions of MindForth, the Exception-crash finally moves out of the time series of the EnBoot sequence and occurs once at t=615 in the time-span beyond EnBoot. Since our diagnostic message shows that the Index "I" has a value of "615" when the program crashes, MindForth must be traversing a loop at the t=615 time of the crash.

Thurs.24.JUL.2014 -- Solution found for defective search loop

Since our Exception was crashing the AI when NounPhrase was already supposed to have found a noun or a pronoun, we decided to try inserting an "ELSE LEAVE" statement just before the Forthword "THEN" ending the search-loop. It worked. The AI stopped crashing and began to think interminably. However, our Acer netbook seems to run at a high speed, and so we may need to increase some "rsvp" values at places in the program.
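The effect of that "ELSE LEAVE" repair can be sketched in Python (an illustrative model of the motjuste-search, not the Forth loop itself): leaving the loop as soon as the search is decided keeps the index from running past the valid engrams.

```python
# Sketch of the repaired search loop: the iteration is bounded by the
# engram list and returns as soon as the wanted concept is found, the
# equivalent of Forth's LEAVE exiting a DO ... LOOP early.

def find_motjuste(engrams, wanted_psi):
    """Return the first engram row matching wanted_psi, else None."""
    for row in engrams:
        if row["psi"] == wanted_psi:
            return row     # found: leave the loop at once
    return None            # not found: no out-of-range access
```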


Tuesday, July 17, 2012

jul06mfpj

MindForth Programming Journal


1 Fri.6.JUL.2012 -- Debugging after Major Code Revision

In the MindForth artificial intelligence (AI) we are now letting the AI run in tutorial mode without human input in order to troubleshoot any glitches that occur after the major changes of the most recent release. Without human intervention and under the influence of the KbTraversal module, the AI calls various subroutines to prompt a dialog with any nearby human. We observe some glitches that are due perhaps to a lack of proper parameters when a subroutine is called. We intend to debug the calling of the various subroutines so that we may display an AI Mind that thinks rationally not only when left to its own devices but also when the AI must think in response to queries or comments from human users.


2 Sat.7.JUL.2012 -- Solving a Problem with WhatAuxSDo

In the course of letting MindForth run without human input, we noticed that eventually the WhatAuxSDo module was called for the subject of concept #56 "YOU" and the AI erroneously asked "WHAT DO ERROR DO". By inserting a diagnostic message, we learned that WhatAuxSDo was not finding a "subjnum" value for the #56 "YOU" concept and thus could not find the word "YOU" in a search of the English "En" array. We went into the EnBoot sequence and changed the "num" value for "YOU" from zero ("0") to one ("1"). The AI correctly said, "WHAT DO YOU DO". However, we may need to debug even further and find out why the proper value of "num" for "YOU" is not being set during the output.


3 Sun.8.JUL.2012 -- Tightening Code for Searchability

When we search the free AI source code for "2 en{", which should reveal any storing or retrieval of a "num" value, we do not find any code for storing "num" in the English lexical array. Therefore we search for "5 en{" to see where the part-of-speech "pos" is stored. We do so, and still we do not find what we need. Then we try searching for "5  en{" with an extra blank space in the search string, and we discover that a form of "pos" is stored both in EnVocab and in OldConcept. At the same time we see that "num" is also stored in the same two mind-modules. Now we should be able to troubleshoot the problem and find out why the English lexical "num" is not being stored during processes of thought. First, however, we will tighten up the code so that only one space intervenes, for future occasions when we are trying to find instances of array-manipulation code.


4 Wed.11.JUL.2012 -- Num(ber) in the English Lexical Array

We need to discover where elements of the flag-panel are inserted into nodes of the English lexical array, so that the "num(ber)" value may be stored properly as the AI Mind continues to think and to respond to queries from human users.


5 Fri.13.JUL.2012 -- Correcting Fundamental Flaws

Today in the EnBoot English bootstrap module we are making a blanket change by moving the EnVocab calls down to be on the same line of code as the calls to InNativate, so that the "num(ber)" setting will go properly into EnVocab. Our recent troubleshooting has revealed that WhatAuxSDo needs to find a "num" value in the English lexical array in order to function properly.


6 Sat.14.JUL.2012 -- Tracking num(ber) Values

Next we need to zero in on how the AI assigns "num(ber)" tags during the recognition of words. In OldConcept, it may be necessary to store a default, such as "num" or "unk", and then to test for any positive "ocn" that will simply override the default.

Since we rely on OldConcept to store the number tag, we may need to track where the number-value comes from. AudInput has some sophisticated code which tentatively assigns a plural number when the character "S" is encountered as the last letter in a word. In the work of 4nov2011 we started assigning zero as a default number for the sake of the EnArticle module, but we may need to change the AudInput module back to assigning one ("1") as the default number.


7 Mon.16.JUL.2012 -- Avoiding Unwarranted Number Values

If the most recent "num(ber)" of a word like "ROBOTS" is found to be "2" for plural, we do not want the AI to make the false assumption that the "num(ber)" of the "ROBOTS" concept is inherently plural. Yet we want words like "PEOPLE" or "CHILDREN" to be recognized as being plural.


8 Tues.17.JUL.2012 -- Making Sure of Lexical Number

We may need to go into the NounPhrase subject-selection process and capture the num(ber) value of the lexical item being re-activated within the English lexical array.

Monday, July 02, 2012

jun29mfpj

MindForth Programming Journal

1 Fri.29.JUN.2012 -- IdeaPlex: Sum of all Ideas

The sum of all ideas in a mind can be thought of as the
IdeaPlex. These ideas are expressed in human language
and are subject to modification or revision in the course of
sensory engagement with the world at large.

The knowledge base (KB) in an AiMind is a subset of the IdeaPlex.
Whereas the IdeaPlex is the sum totality of all the engrams of
thought stored in the AI, the knowledge base is the distilled
body of knowledge which can be expanded by means of inference
with machine reasoning or extracted as responses to input-queries.

The job of a human programmer working as an AI mind-tender is to
maintain the logical integrity of the machine IdeaPlex and therefore
of the AI knowledge base. If the AI Mind is implanted in a humanoid
robot, or is merely resident on a computer, it is the work of a
roboticist to maintain the pathways of sensory input/output and the
mechanisms of the robot motorium. The roboticist is concerned with
hardware, and the mind-tender is concerned with the software of the
IdeaPlex.

Whether the mind-tender is a software engineer or a hacker hired
off the streets, the tender must monitor the current chain of thought
in the machine intelligence and adjust the mental parameters of the
AI so that all thinking is logical and rational, with no derailments
of ideation into nonsense statements or absurdities of fallacy.

Evolution occurs narrowly and controllably in one artilect installation
as the mind-tenders iron out bugs in the AI software and introduce algorithmic
improvements. AI evolution explodes globally and uncontrollably when
survival of the fittest AI Minds leads to a Technological Singularity.


2 Fri.29.JUN.2012 -- Perfecting the IdeaPlex

We may implement our new idea of faultlessizing the IdeaPlex by
working on the mechanics of responding to an input-query such as
"What do bears eat?" We envision the process as follows. The AI
imparts extra activation to the verb "eat" from the query, perhaps
first in the InStantiate module, but more definitely in the
ReActivate module, which should be calling the SpreadAct module
to send activation backwards to subjects and forwards to objects.
Meanwhile, if not already, the query-input of the noun "bears"
should be re-activating the concept of "bears" with only a normal
activation. Ideas stored with the "triple" of "bears eat (whatever)"
should then be ready for sentence-generation in response to the query.
Neural inhibition should permit the generation of multiple responses,
if they are available in the knowledge base.

During response-generation, we expect the subject-noun to use the
verblock to lock onto its associated verb, which shall then use
nounlock to lock onto the associated object. Thus the sentence is
retrieved intact. (It may be necessary to create more "lock" variables
for various parts of speech.)
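
The lock-based retrieval described above can be sketched in ordinary Python (a conceptual illustration, not the Forth source; the time-points and words are invented):

```python
# Hypothetical sketch of lock-based sentence retrieval: each stored
# node carries a time-index "lock" pointing at the next node of its
# sentence, so retrieval follows pointers instead of chasing
# activation levels.

memory = {
    100: {"word": "KIDS",   "verblock": 105},   # subject locks onto its verb
    105: {"word": "MAKE",   "nounlock": 110},   # verb locks onto its object
    110: {"word": "ROBOTS"},
}

def retrieve_sentence(subject_time):
    """Follow verblock and nounlock tags to rebuild the stored triple."""
    subj = memory[subject_time]
    verb = memory[subj["verblock"]]
    obj = memory[verb["nounlock"]]
    return [subj["word"], verb["word"], obj["word"]]
```

Additional "lock" variables for other parts of speech would simply extend the chain of pointers.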

We should perhaps use an input query of "What do kids make?", because
MindForth already has the idea that "Kids make robots".


3 Sat.30.JUN.2012 -- Improving the SpreadAct Module

In our tentative coding, we need now to insert diagnostic messages
that will announce each step being taken in the receipt and response
to an input-query.

We discover some confusion taking place in the SpreadAct module,
where "pre @ 0 > IF" serves as the test for performing
a transfer of activation backwards to a "pre" concept. However,
the "pre" item was replaced at one time with "prepsi", so apparently
the backwards activation code is not being operated. We may need
to test for a positive "prepsi" instead of a positive "pre".

We go into the local, pre-upload version of the Google Code MindForth
"var" (variable) wiki-page and we add a description for "prepsi",
since we are just now conducting serious business with the variable.
Then in the MindForth SpreadAct module we switch from testing in vain
for a positive "pre" value to testing for a positive "prepsi".
Immediately our diagnostic messages indicate that, during generation
of "KIDS MAKE ROBOTS" as a response, activation is passed backwards
from the verb "MAKE" to the subject-noun "KIDS". However, SpreadAct
does not seem to go into operation until the response is generated.
We may need to have SpreadAct operate during the input of a verb
as part of a query, in a chain where ReActivate calls SpreadAct to
flush out potential subject-nouns by retro-activating them.
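
The backward spikelet can be illustrated with a minimal Python sketch (assumed data shapes, not the actual SpreadAct code; the activation values are invented):

```python
# Conceptual sketch of SpreadAct passing a small backward "spikelet"
# from an activated verb to its prepsi subject-concept. The one-unit
# default mirrors the conservative increment described above.

concepts = {
    "KIDS": {"act": 35, "prepsi": None},
    "MAKE": {"act": 40, "prepsi": "KIDS"},
}

def spread_act_backward(verb, spike=1):
    """If the verb carries a positive prepsi tag, pass a spike back."""
    pre = concepts[verb]["prepsi"]
    if pre:                       # mirrors the test for a positive prepsi
        concepts[pre]["act"] += spike
```

Raising the spike parameter above one would correspond to strengthening the backward activation if the correct subjects keep failing to win selection.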


4 Sat.30.JUN.2012 -- Approaching the "seqneed" Problem

As we search back through versions of MindForth AI, we see that
the 13 October 2010 MFPJ document describes our decision to stop
having ReActivate call SpreadAct. Now we want to reinstate the calls,
because we want to send activation backwards from heavily activated
verbs to their subjects. Apparently the .psi position of the "seqpsi"
has changed from position six to position seven, so we must change the
ReActivate code accordingly. We make the change, and we observe that
the input of "What do kids make?" causes the .psi line at time-point
number 449 to show an increase in activation from 35 to 36 on the
#72 KIDS concept. There is such a small increase from SpreadAct
because SpreadAct conservatively imparts only one unit of activation
backwards to the "prepsi" concept. If we have trouble making the
correct subjects be chosen in response to queries, we could increase
the backwards SpreadAct spikelet from one to a higher value.

Next we have a very tricky situation. When we ask, "What do kids make?",
at first we get the correct answer of "Kids make robots." When we ask
the same question again, we erroneously get, "Kids make kids." It used
to be that such a problem was due to incorrect activation-levels,
with the word "KIDS" being so highly activated that it was chosen
erroneously for both subject and direct object. Nowadays we are
starting with a subject-node and using "verblock" and "nounlock"
to go unerringly from a node to its "seq" concept. However, in this
current case we notice that the original input query of "What do kids make?"
is being stored in the Psi array with an unwarranted seq-value of "72"
for "KIDS" after the #73 "MAKE" verb. Such an erroneous setting seems
to be causing the erroneous secondary output of "Kids make kids."
It could be that the "moot" system is not working properly. The "moot"
flag was supposed to prevent tags from being set during input queries.

In the InStantiate module, the "seqneed" code for verbs is causing
the "MAKE" verb to receive an erroneous "seq" of #72 "KIDS".
We may be able to modify the "seqneed" system to not install
a "seq" at the end of an input.

When we increased the amount of time-points for the "seqneed" system
to look backwards from two to eight, the system stopped assigning
the spurious "seq" to the #73 verb "MAKE" at t=496 and instead
assigned it to the #59 verb "DO" at t=486.


5 Sun.1.JUL.2012 -- Solving the "seqneed" Problem

After our coding session yesterday, we realized that the solution
to the "seqneed" problem may lie in constraining the time period
during which InStantiate searches backwards for a verb needing a
"seq" noun. When we set up the "seqneed" mechanism, we rather
naively ordained that the search should try to go all the way back
to the "vault" value, relying on a "LEAVE" statement to abandon
the loop after finding one verb that could take a "seq".

Now we have used a time-of-seqneed "tsn" variable to limit the
backwards searches in the "seqneed" mechanism of the InStantiate
module, and the MindForth AI seems to be functioning better than ever.
Therefore we shall try to clean up our code by removing diagnostics
and upload the latest MindForth AI to the Web.
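
The tsn constraint can be sketched in Python (hypothetical data shapes, not the Forth source): the backward loop is given a floor so it can no longer wander all the way back to the vault.

```python
# Sketch of limiting the seqneed backward search with a
# time-of-seqneed "tsn" floor. Time-points and words are invented.

psi = {                      # time-point -> node flags
    486: {"pos": "verb", "seq": None, "word": "DO"},
    496: {"pos": "verb", "seq": None, "word": "MAKE"},
}

def seqneed(current_t, noun, tsn):
    """Walk backwards from current_t, but never earlier than tsn."""
    for t in range(current_t, tsn - 1, -1):
        node = psi.get(t)
        if node and node["pos"] == "verb" and node["seq"] is None:
            node["seq"] = noun   # first verb found gets the seq noun
            return t             # LEAVE the loop after one assignment
    return None
```

With a floor of tsn = 490, the search can reach the verb at t=496 but can no longer assign a spurious "seq" to the distant verb at t=486.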

Friday, May 27, 2011

may26mfpj

The MindForth Programming Journal (MFPJ) is both a tool in developing MindForth open-source artificial intelligence (AI) and an archival record of the history of how the AI Forthmind evolved over time.

1 Thurs.26.MAY.2011 -- Conditional Inhibition

In the recent Strong AI diaspora of MindForth and the tutorial AiMind.html program, we have implemented the neural inhibition of concepts immediately after they have been included in a generated thought. Now we would like to make inhibition occur when one or more responses must be made to a query involving nouns or a query involving verbs. The question "What do bears eat?" is a query of the what-do-X-verb variety involving one or more nouns as potentially valid answers as the direct object of the verb. If the noun of each single answer is immediately inhibited, the AI can respond with a different answer to a repeat of the question. Likewise, if we ask the AI, "What do robots do?", the query is of the what-do-X-do variety where potentially multiple verbs may need to be inhibited so as to give one valid answer after another, such as "Robots make tools" and "Robots sweep floors." If we are inhibiting the verbs, we do not want the direct-object nouns to be inhibited. We might need replies with different verbs but the same direct object, such as "Robots make tools" and "Robots use tools."
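
A toy Python sketch of the noun-inhibition idea (the facts and activation values here are invented for illustration; the actual mechanism lives in NounPhrase and the Psi array):

```python
# Sketch of answer-variety through inhibition: after each response,
# the object noun just used is inhibited, so a repeat of the same
# query surfaces a different stored fact.

facts = [("BEARS", "EAT", "FISH"), ("BEARS", "EAT", "HONEY")]
activation = {"FISH": 20, "HONEY": 18}

def answer(query_subj, query_verb):
    """Pick the most active matching object, then inhibit it."""
    candidates = [o for s, v, o in facts
                  if s == query_subj and v == query_verb]
    best = max(candidates, key=lambda o: activation[o])
    activation[best] = -32        # post-utterance inhibition
    return best
```

Asking the same question twice yields first "FISH" and then "HONEY", because the first answer is suppressed; inhibiting verbs instead of nouns works the same way for what-do-X-do queries.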

Inhibition may also play a role in calling the ConJoin module when a query elicits multiple thoughts which are the same sentence except for different nouns or different verbs. The responses, "Bears eat fish" and "Bears eat honey" could become "Bears eat fish and honey" if neural inhibition suppresses the repetition of subject and verb while calling the ConJoin module to insert the conjunction "AND" between the two answer nouns.

2 Thurs.26.MAY.2011 -- Problems With Determining Number

When we try to troubleshoot the Forthmind by entering "bears eat honey", a comedy of errors occurs. The AudRecog module contains a test to detect an "S" at the end of an English word and set the "num(ber)" value to two ("2") for plural. However, that test works only for recognized words, and not for a previously unknown word of new vocabulary. So the word "bears" gets tagged as singular by default, which causes the AI to issue erroneous output with "BEARS EATS HONEY", as if a singular subject is calling for "EATS" as a third person singular verb form.

The process of determining num(ber) ought to be more closely tied with the EnParser module, so that the parsing of a word as a noun should afford the AI a chance to declare plural number if the incoming noun ends with an "S".

Now we have inserted special code into the AudInput module to check for the input of nouns ending in "S", and to set the "num(ber)" variable to a plural value if a terminating "S" is found. For singular nouns like "bus" or "gas" that end in "S", we will have to devise techniques that override the default assumption of "S" meaning plural. We may use the article "A" or the verb "IS" as cues to declare a noun ending in "S" as singular.
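
The heuristic and its override cues can be sketched as follows (a hypothetical Python rendering; the actual AudInput code differs):

```python
# Sketch of the plural-"S" heuristic with the override cues mentioned
# above: a terminal "S" suggests plural (num = 2) unless a preceding
# article "A" or a following "IS" marks the noun as singular (num = 1).

def guess_number(word, prev_word=None, next_word=None):
    """Tentative num(ber) tag: 1 = singular, 2 = plural."""
    if word.endswith("S"):
        if prev_word == "A" or next_word == "IS":
            return 1             # "A BUS", "GAS IS ..." stay singular
        return 2                 # default: terminal S means plural
    return 1
```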

Table of Contents

Monday, May 16, 2011

may16mfpj

Now that we have cracked the hard problem of AI wide open, we wish to share our results with all nations.

1 Mon.16.MAY.2011 -- List of Mentifex AI Accomplishments

We are still working on the MileStone of self-referential thought on our RoadMap to artificial general intelligence (AGI). We look back upon a small list of accomplishments along the way.

  • two-step selection of BeVerbs;

  • AudRecog morpheme recognition;

  • look-ahead A/AN selection;

  • seq-skip method of linking verbs and objects;

  • SpeechAct inflectional endings;

  • neural inhibition for variety in thought;

  • provisional retention of memory tags;

  • differential PsiDecay.

2 Mon.16.MAY.2011 -- Achieving AI Mental Stability

Until we devised an AI algorithm for differential PsiDecay in the JavaScript artificial intelligence (JSAI), stray activations had been ruining the AI thought processes for months and years. We now port the PsiDecay solution from the JSAI into MindForth. Meanwhile, Netizens with Microsoft Internet Explorer (MSIE) may point the browser at the AiMind.html page and observe the major open-source AI advance in action. Enter "who are you" as a question to the AI Mind not just one time but several times in a row. Observe that the JSAI tells you everything it knows about itself, because neural inhibition immediately suppresses each given answer in order to let a variety of other answers rise to the surface of the AI consciousness. Before the mad scientist of Project Mentifex jotted down the eureka brainstorm, "[ ] Fri.13.MAY.2011 Idea: Put gradations into PsiDecay?" and wrote the code the next day, the AI Minds were not reliable for mission-critical applications. Now the AI Forthmind is about to become more mentally stable than its creator. We only need to port some JSAI code to Forth.

Monday, May 09, 2011

may7mfpj

The MindForth Programming Journal (MFPJ) is both a tool in developing MindForth open-source artificial intelligence (AI) and an archival record of the history of how the AI Forthmind evolved over time.

1 Sat.7.MAY.2011 -- Improving Neural Inhibition

Something is preventing neural inhibition from operating immediately when we ask the AI Mind a "who-are-you" question. The inhibition begins to occur only after a pause or delay, and we need to find out why. The problem may be that the "predflag" for predicate nominatives is not being set soon enough. The "predflag" is set towards the end of the BeVerb mind-module, and it governs the inhibiting of nouns as predicate nominatives in the NounPhrase module. We see through troubleshooting that the earlier engram in a pair of selected-noun engrams is being inhibited properly down to minus thirty-two points of conceptual activation, but apparently the present-time engram in the pair is only going down to zero activation. It looks as though calls to PsiClear from the EnCog (English cognition) module were interfering in the pairing of inhibitions shared by the old engram that won selection and the new engram being stored as the record of a generated thought. Then a further problem developed because the AI was not letting go of transitive verbs that served within an output thought. We inserted code to inhibit each transitive verb after thinking, and we began to obtain a variety of outputs from the AI in response to queries.

2 Sun.8.MAY.2011 -- Selecting New Inhibition Variables

Today we are creating two new inhibition variables, "tseln" for "time of selection of noun" in NounPhrase, and "tselv" for "time of selection of verb" in VerbPhrase. We need these variables to keep track of the selection-time of an "inhibend" concept to be inhibited after being thought, so that the AI Mind can avoid repeating the same knowledge-base retrieval over and over again. We stumbled upon neural inhibition for response-variety in our MFPJ work of 5 September 2010. We were so astonished by the implications that we issued a Singularity Alert (q.v.). Now we are ready to install a general mechanism of temporary inhibition throughout the AI MindGrid.

3 Sun.8.MAY.2011 -- Debugging Spurious Inflection

Although MindForth has suddenly become more intelligent than ever, the AI makes the grammatical mistake of saying "I HELPS KIDS". We need to track down why the SpeechAct module is adding an inflectional "S" to the verb "HELP".

The VerbPhrase module governs the sending of an "S" inflection into the SpeechAct module. The pertinent code was not fully checking for a verb in the third person singular, so we added an IF-THEN clause requiring that the prsn variable be set to three for an inflectional "S" to be added to a verb being spoken. The bugfix worked immediately.
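
The bugfix amounts to a guard like the following Python sketch (the additional check on singular number is our assumption for the illustration, not stated in the Forth code):

```python
# Sketch of the corrected inflection rule: append the inflectional
# "S" only when the person variable prsn is 3 (third person) and the
# subject is singular (num = 1), so "I HELP" no longer becomes
# "I HELPS".

def inflect_verb(verb, prsn, num):
    """Return the spoken verb form for the given person and number."""
    if prsn == 3 and num == 1:
        return verb + "S"        # HE/SHE/IT HELPS
    return verb                  # I HELP, THEY HELP
```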

Table of Contents

Wednesday, May 04, 2011

may3mfpj

The MindForth Programming Journal (MFPJ) is both a tool in developing MindForth open-source artificial intelligence (AI) and an archival record of the history of how the AI Forthmind evolved over time.

1 Tues.3.MAY.2011 -- Encountering the WHO Problem

In the most recent release of MindForth artificial intelligence for autonomous robots possessing free will and personhood, our decision to zero out post-ReEntry concepts is only tentative. If the mind-design decision introduces more problems than it solves, then the decision is reversible. It was disconcerting to notice that the newest version of MindForth could no longer answer who-are-you questions properly, and would only utter the single word "WHO" as output in response to the question. We expect the necessary bugfix to be a simple matter of tracking down and eliminating some stray activation on the "WHO" concept-word, but there is a nagging fear that we may have made a wrong decision that worsened MindForth instead of improving it, that delayed the Singularity instead of hastening it, and that argues for an AI working group to be nurturing MindForth instead of a solitary mad scientist.

2 Tues.3.MAY.2011 -- Debugging the WHO Problem

In the InStantiate mind-module, both WHO and WHAT are set to zero activation as recognized input words, under the presumption that such query words work in a mind by a kind of self-effacement that lets the information being sought have a higher activation than the interrogative pronoun being used to request the information. Today at first we could not understand why the setting to zero seemed to be working for WHAT but not for WHO. Eventually we discovered that only WHAT and not WHO was being set to zero in the ReActivate module, with the result that all instances of the recognized WHO concept were being activated at a high level in ReActivate. When we fixed the bug by having both InStantiate and ReActivate set WHO to zero activation, the AI Mind began giving much better answers in response to who-queries. Immediately, however, other issues popped up, such as how to make sure that neural inhibition engenders a whole range of disparate answers if they are available in the knowledge base (KB), and whether we still need special variables like "whoflag" and "whomark". In general, we tolerate special treatment of words like WHO and WHAT with the caveat that we expect to do away with the special treatment when it becomes obvious that we can dispense with it.
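
A minimal sketch of the shared self-effacement rule (a Python illustration, not the Forth source):

```python
# Sketch of zeroing interrogative pronouns on recognition: both entry
# paths (InStantiate and ReActivate in MindForth) must apply the same
# rule, so the sought information outshines the question word itself.
# The bug described above was zeroing WHAT but not WHO in one path.

INTERROGATIVES = {"WHO", "WHAT"}

def store_activation(word, act):
    """Return the activation to store for a recognized input word."""
    if word in INTERROGATIVES:
        return 0                 # self-effacement of the query word
    return act
```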


Table of Contents

Friday, September 10, 2010

sep09mfpj

The MindForth Programming Journal (MFPJ) is both a tool in developing MindForth open-source artificial intelligence (AI) and an archival record of the history of how the AI Forthmind evolved over time.

1 Thurs.9.SEP.2010 -- Zeroing in on Inhibition

Let's get a few things straight about how the VerbPhrase "twin" (time of winning verb-selection) variable works. On 7sep2010, the variable was introduced into the 5sep10A.F MindForth in the following stretch of VerbPhrase code


I    1 en{ @  act @ > IF  ( if en1 is higher )
I twin ! \ retain time of motjuste; 7sep2010
I 0 en{ @ motjuste ! ( store psi-tag of verb )

which keeps looking for a verb with a higher activation, until a winner is selected. The "twin" win-time has perhaps changed while various verb-nodes were competing, but the final post-search-loop value of "twin" must necessarily be the time "t" of the winning verb-node, not only in the En(glish) array, but (importantly) also in the Psi concept array, where we postulate that thinking occurs.

Further down in the VerbPhrase module, just before the "main call from VerbPhrase to SpeechAct", "twin" is used as the indexing time to put a minus-fifteen inhibition on the verb-node that has just won selection into a sentence of thought. The inhibition prevents the utterance from being repeated again immediately.
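
A Python rendering of the twin mechanism (a conceptual sketch, not the Forth source; the node data are invented):

```python
# Sketch of "twin": the search loop keeps the time-point of the
# currently best verb-node, and after selection the winning node is
# inhibited to -15 so the same utterance is not repeated immediately.

verb_nodes = {390: ("MAKE", 8), 431: ("MAKE", 21), 493: ("MAKE", 13)}

def select_and_inhibit(nodes):
    """Return (twin, motjuste) and inhibit the winning node."""
    twin, best_act, motjuste = 0, -999, None
    for t, (word, act) in nodes.items():
        if act > best_act:       # keep looking for a higher activation
            twin, best_act, motjuste = t, act, word
    word, _ = nodes[twin]
    nodes[twin] = (word, -15)    # post-selection inhibition at time twin
    return twin, motjuste
```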


We notice that the -15 inhibition does not persist long in our current 9sep10A.F code, basically unchanged from 5sep10A.F. We tried to enter three sentences to see what would happen.

Human: boys make cars
Robot: CARS WHAT ARE CARS

Human: boys make guns
Robot: GUNS WHAT ARE GUNS

Human: boys make tools
Robot: TOOLS WHAT ARE TOOLS

Human: boys
Robot: THE BOYS MAKE THE CARS

Human: boys
Robot: THE BOYS MAKE THE TOOLS

Human: boys
Robot: THE GUNS MAKE THE CARS

Not only does the inhibition not (yet) persist, but we can see from the last line of output above that the residual activations are out of whack. We inspect the code and we see that after the first two query-inputs of the word "boys", "GUNS" and "CARS" are both left with an activation of 58, so they prevent the input-word "boys" from being the subject of thought. We do notice some persistence of inhibition, though, because one node on the verb "MAKE" is at -4 activation. So maybe the problem is that there is too much residual activation on "GUNS" and "CARS", which both have "58" while the freshly entered "boys" has an activation of only 52.


In SpreadAct there is some conditional code that limits an activation to a high value of 63. Let's see if we can try a lower limit in SpreadAct and see if it helps. When we lower the SpreadAct "seq" limit from 63 to 48, we no longer get a nonsense line as our final output. Instead, we get the problem of repetition as seen below.



Human: boys
Robot: THE BOYS MAKE THE CARS

Human: boys
Robot: THE BOYS MAKE THE TOOLS

Human: boys
Robot: THE BOYS MAKE THE TOOLS

Aha, the most recent "BOYS MAKE TOOLS" is inhibited, but an older "BOYS MAKE TOOLS" has gone from -15 inhibition up to a more normal activation of 13 (or higher, since we can not see what the node's winning activation level was). Just as a test, let us try setting inhibition not at -15 but rather at -32.



It did not work. The most recent "MAKE" node was inhibited down to -32, but somehow the older "MAKE" nodes were all at an activation level of 13. Something is overriding the inhibitions, and it ain't alcohol.

Maybe it is the VerbAct module, putting such a uniform activation on all nodes of a candidate verb. Upshot: Into VerbAct we put some code to skip inhibited nodes, but it did not solve the problem. Apparently, something is getting to the older verb-nodes before the VerbAct module operates on them. It could be PsiDamp.

Hey! Maybe the problem is in the SpreadAct module. From the noun to the verb, SpreadAct could be sending a "spike" of uniform activation of 13 points. We changed some code in the SpreadAct module, and things did work better.

Maybe, when the AI generates a sentence and inhibits the verb-node from which the knowledge for the sentence is retrieved, the new sentence itself should have its verb-node inhibited, so that the idea itself will tend towards inhibition for a short time.

Now we have a very interesting situation. If the inhibition does not fade quickly enough, then a valid idea will fail to get mentioned. The following report indicates such a situation.


390 : 96 13 2 0 0 5 73 96 to BOYS
395 : 73 -11 0 96 96 8 109 73 to MAKE
400 : 109 41 0 73 96 5 0 109 to CARS
405 : 109 41 2 109 0 5 54 109 to CARS
410 : 54 0 0 109 109 7 67 54 to WHAT
415 : 67 0 0 54 54 8 109 67 to ARE
421 : 109 41 2 67 54 5 0 109 to CARS
426 : 96 13 2 109 0 5 73 96 to BOYS
431 : 73 -4 0 96 96 8 110 73 to MAKE
436 : 110 42 0 73 96 5 0 110 to GUNS
441 : 110 42 2 110 0 5 54 110 to GUNS
446 : 54 0 0 110 110 7 67 54 to WHAT
451 : 67 0 0 54 54 8 110 67 to ARE
457 : 110 2 2 67 54 5 0 110 to GUNS
462 : 96 13 2 110 0 5 0 96 to BOYS
467 : 96 13 2 96 0 5 73 96 to BOYS
472 : 73 -6 0 96 96 8 109 73 to MAKE
478 : 109 41 2 73 96 5 0 109 to CARS
483 : 96 13 2 109 0 5 0 96 to BOYS
488 : 96 13 2 96 0 5 73 96 to BOYS
493 : 73 -13 0 96 96 8 109 73 to MAKE
499 : 109 36 2 73 96 5 0 109 to CARS
time: psi act num jux pre pos seq enx




2 Fri.10.SEP.2010 -- Positive Results

We finally obtained some positive results with our implementation of neural inhibition when we removed from the functional heart of VerbAct a line of code that we had once used only as a test. The code snippet below shows our practice of commenting out the offending line twice: once to disable the line of code, and a second time to record the date of the change, for later clean-up once at least one archival record of the action has been preserved.

I 1 psi{ @ psi1 !
\ 8 verbval +! \ add to verbval; test; 25aug2010
\ 8 verbval +! \ Commenting out; 10sep2010
CR ." VrbAct: t & verbval = " I . verbval @ . \ test;9sep2010

I 1 psi{ @ -1 > IF \ avoid inhibited nodes; 9sep2010
\ psi1 @ I 1 psi{ !
verbval @ I 1 psi{ ! \ test; 25aug2010
THEN \ end of test to skip inhibited nodes; 9sep2010

We may upload the 9sep10A.F MindForth to the Web now that we have a stable version in which inhibition actually enables the AI Mind to retrieve a series of facts from the knowledge base.


Table of Contents (TOC)