Purpose: Discussion of Strong AI Minds thinking in English, German or Russian.

Friday, September 10, 2010


The MindForth Programming Journal (MFPJ) is both a tool in developing MindForth open-source artificial intelligence (AI) and an archival record of the history of how the AI Forthmind evolved over time.

1 Thurs.9.SEP.2010 -- Zeroing in on Inhibition
Let's get a few things straight about how the VerbPhrase "twin" (time of winning verb-selection) variable works. On 7sep2010, the variable was introduced into the 5sep10A.F MindForth in the following stretch of VerbPhrase code

I    1 en{ @  act @ > IF  ( if en1 is higher )
I twin ! \ retain time of motjuste; 7sep2010
I 0 en{ @ motjuste ! ( store psi-tag of verb )

which keeps looking for a verb with a higher activation, until a winner is selected.
The "twin" win-time has perhaps changed while various verb-nodes were competing, but the final post-search-loop value of "twin" must necessarily be the time "t" of the winning verb-node, not only in the En(glish) array, but (importantly) also in the Psi concept array, where we postulate that thinking occurs.

Further down in the VerbPhrase module, just before the "main call from VerbPhrase to SpeechAct", "twin" is used as the indexing time to put a minus-fifteen inhibition on the verb-node that has just won selection into a sentence of thought. The inhibition prevents the utterance from being repeated again immediately.
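The selection-and-inhibition scheme just described can be sketched as follows. This is a hypothetical Python illustration, not the Forth code: the list-of-tuples layout and the function name are assumptions made for the example, while the -15 value and the roles of "twin" and motjuste come from the journal text.

```python
INHIBITION = -15  # activation imposed on the winning verb-node

def select_and_inhibit_verb(verb_nodes):
    """verb_nodes: list of (time, activation, concept) tuples."""
    motjuste = 0   # concept number of the winning verb
    twin = 0       # time of winning verb-selection
    best_act = 0
    for t, act, concept in verb_nodes:
        if act > best_act:          # keep looking for a higher activation
            best_act = act
            twin = t                # retain time of motjuste
            motjuste = concept
    # After the search loop, "twin" holds the time of the winning node,
    # which is now inhibited so the idea is not repeated immediately.
    inhibited = [(t, INHIBITION if t == twin else act, c)
                 for t, act, c in verb_nodes]
    return motjuste, twin, inhibited
```

Note that the inhibition is applied only after the search loop ends, so "twin" necessarily indexes the final winner and not some earlier contender.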

We notice that the -15 inhibition does not persist long in our current 9sep10A.F code, which is basically unchanged from 5sep10A.F. We tried entering three sentences, followed by three one-word queries, to see what would happen.

Human: boys make cars

Human: boys make guns

Human: boys make tools

Human: boys

Human: boys

Human: boys

Not only does the inhibition not (yet) persist, but we can see
from the last line of output above that the residual activations are out of whack. We inspect the code and we see that after the first two query-inputs of the word "boys", "GUNS" and "CARS" are both left with an activation of 58, so they prevent the input-word "boys" from being the subject of thought. We do notice some persistence of inhibition, though, because one node on the verb "MAKE" is at -4 activation. So maybe the problem is that there is too much residual activation on "GUNS" and "CARS", which both have "58" while freshly entered "boys" has activation of only 52.

In SpreadAct there is some conditional code that limits an activation to a high value of 63. Let's see if we can try a lower limit in SpreadAct and see if it helps. When we lower the SpreadAct "seq" limit from 63 to 48, we no longer get a nonsense line as our final output. Instead, we get the problem of repetition as seen below.
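The effect of the SpreadAct ceiling can be shown with a small sketch. The limits 63 and 48, and the activations 58 and 52 mentioned above, come from the journal entry; the function itself is only an illustrative assumption about how the clipping behaves, not the actual Forth SpreadAct code, and the prior activation of 45 is a hypothetical figure.

```python
def spread_to_seq(current_act, spike, limit):
    """Add an incoming activation spike, clipped at the seq limit."""
    return min(current_act + spike, limit)

# With the old ceiling of 63, an old direct object such as "CARS" can
# end up at 58 and outweigh a freshly entered "boys" at 52; with the
# ceiling lowered to 48, it no longer can.
old_cap = spread_to_seq(45, 13, 63)   # reaches 58, above fresh input
new_cap = spread_to_seq(45, 13, 48)   # clipped to 48, below fresh input
```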

Human: boys

Human: boys

Human: boys

Aha, the most recent "BOYS MAKE TOOLS" is inhibited, but an older "BOYS MAKE TOOLS" has gone from -15 inhibition up to a more normal activation of 13 (or higher, since we cannot see what the node's winning activation level was). Just as a test, let us try setting inhibition not at -15 but rather at -32.

It did not work. The most recent "MAKE" node was inhibited down to -32, but somehow the older "MAKE" nodes were all at an activation level of 13. Something is overriding the inhibitions, and it ain't alcohol.

Maybe it is the VerbAct module, putting such a uniform activation on all nodes of a candidate verb. Upshot: Into VerbAct we put some code to skip inhibited nodes, but it did not solve the problem. Apparently, something is getting to the older verb-nodes before the VerbAct module operates on them. It could be PsiDamp.

Hey! Maybe the problem is in the SpreadAct module. From the noun to the verb, SpreadAct could be sending a "spike" of uniform activation of 13 points. We changed some code in the SpreadAct module, and things did work better.

Maybe, when the AI generates a sentence and inhibits the verb-node from which the knowledge for the sentence is retrieved, the new sentence itself should have its verb-node inhibited, so that the idea itself will tend towards inhibition for a short time.

Now we have a very interesting situation. If the inhibition does not fade quickly enough, then a valid idea will fail to get mentioned. The following report indicates such a situation.

time: psi act num jux pre pos seq enx
390 : 96 13 2 0 0 5 73 96 to BOYS
395 : 73 -11 0 96 96 8 109 73 to MAKE
400 : 109 41 0 73 96 5 0 109 to CARS
405 : 109 41 2 109 0 5 54 109 to CARS
410 : 54 0 0 109 109 7 67 54 to WHAT
415 : 67 0 0 54 54 8 109 67 to ARE
421 : 109 41 2 67 54 5 0 109 to CARS
426 : 96 13 2 109 0 5 73 96 to BOYS
431 : 73 -4 0 96 96 8 110 73 to MAKE
436 : 110 42 0 73 96 5 0 110 to GUNS
441 : 110 42 2 110 0 5 54 110 to GUNS
446 : 54 0 0 110 110 7 67 54 to WHAT
451 : 67 0 0 54 54 8 110 67 to ARE
457 : 110 2 2 67 54 5 0 110 to GUNS
462 : 96 13 2 110 0 5 0 96 to BOYS
467 : 96 13 2 96 0 5 73 96 to BOYS
472 : 73 -6 0 96 96 8 109 73 to MAKE
478 : 109 41 2 73 96 5 0 109 to CARS
483 : 96 13 2 109 0 5 0 96 to BOYS
488 : 96 13 2 96 0 5 73 96 to BOYS
493 : 73 -13 0 96 96 8 109 73 to MAKE
499 : 109 36 2 73 96 5 0 109 to CARS

2 Fri.10.SEP.2010 -- Positive Results

We finally obtained some positive results with our implementation of neural inhibition when we removed from the functional heart of VerbAct a line of code that we had once used only as a test. The code snippet below shows our practice of commenting out the offending line twice: once to disable the line of code, and once again to record the event of our commenting it out now, for later clean-up once at least one archival record of the action has been made.

I 1 psi{ @ psi1 !
\ 8 verbval +! \ add to verbval; test; 25aug2010
\ 8 verbval +! \ Commenting out; 10sep2010
CR ." VrbAct: t & verbval = " I . verbval @ . \ test;9sep2010

I 1 psi{ @ -1 > IF \ avoid inhibited nodes; 9sep2010
\ psi1 @ I 1 psi{ !
verbval @ I 1 psi{ ! \ test; 25aug2010
THEN \ end of test to skip inhibited nodes; 9sep2010

We may upload the 9sep10A.F MindForth to the Web now that we have
a stable version in which inhibition actually enables the AI Mind to retrieve a series of facts from the knowledge base.

Table of Contents (TOC)

Friday, May 14, 2010


The AudRecog mind-module for auditory recognition in artificial intelligence (AI) tests user input one character or phoneme at a time to recognize words and morphemes that will activate a concept in the AI Mind or extract meaning from an idea.

1. Diagram of AudRecog

[The ASCII diagram of AudRecog has lost its alignment in this copy. It depicts the input "CATS" entering the auditory channel of semantic memory one character at a time, alongside stored auditory engrams such as "cats" and "fish". Candidate engrams are annotated character by character: one candidate matches on "C" and "A" but stops on "R" and is dropped; the "cats" engram matches on "C", "A" and "T" and achieves recognition on the final "S"; a longer candidate is still busy at "S" and drops its remaining "U" and "P". The recognized word activates its Old-Concept node, which connects to EnParser and InStantiate in the Psi concept array, while a parallel visual memory channel handles old-image fetch and store.]

2. Algorithm of AudRecog

AudRecog works by comparing each word of input against words stored in the auditory memory channel of the AI Mind. If a matching word is found in memory, the OldConcept module is called to reactivate the concept behind the known word. If no matching word is found in memory, the NewConcept module is called to treat the incoming word as a new concept to be learned by the AI. Note that even a misspelled word will briefly be treated as a new concept, which quickly falls into desuetude if the proper spelling is used during subsequent inputs. Note also that users (companions) of the AI are not permitted to backspace during input to correct a mistake, because AudRecog is processing input dynamically and does not wait for a buffer to be filled with input to be submitted.

When AudRecog is trying to recognize a word like "CATS" as depicted above, all words starting with "C" are activated on both the initial "C" and on the next character stored after "C". Then one by one the input characters are tested for a continuing match-up between memory and input. If the chain of matching characters is broken, a candidate recall word is dropped from consideration. A remembered word that matches input in both length and content activates the deep Psi concept associated with the recognized word, and the AI Mind prepares to think in reaction to the input being recognized.
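The matching strategy just described can be sketched in Python. This is an illustration of the idea, not a port of the Forth AudRecog module: candidate word-engrams stay alive only while each successive input character matches, and a candidate matching in both length and content yields its concept tag. The dictionary layout and concept numbers are assumptions for the example.

```python
def recognize(word, memory):
    """memory: dict mapping stored words to psi concept numbers."""
    candidates = set(memory)            # all engrams start as candidates
    for i, ch in enumerate(word):       # test input one character at a time
        candidates = {w for w in candidates
                      if i < len(w) and w[i] == ch}  # broken chain: drop
        if not candidates:
            return 0                    # no engram survives; unknown word
    for w in candidates:
        if len(w) == len(word):         # match in both length and content
            return memory[w]            # activate the deep psi concept
    return 0                            # only partial matches remain
```

With a memory of {"CATS": 537, "CARS": 572}, the input "CATS" drops "CARS" at the third character and recognizes concept 537.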

During the sequencing of the human genome, a technique remarkably akin to the AudRecog algorithm was used to recognize patterns among short strings of human DNA.

3. Complexity in AudRecog

In some ways AudRecog is the most complex and intricate of the forty-odd MindForth mind-modules. Other modules engage in thinking, but they do so by the rather simple process of spreading activation from concept to concept under the supervision of a linguistic superstructure. A barely functional VisRecog module would be vastly more sophisticated and complex than AudRecog, but AI devotees will delay implementing vision in MindForth until the proof-of-concept AI proves itself sufficiently to warrant implantation in physical robots and outfitting with physical vision.

What makes AudRecog so complex is the need to recognize not just complete words but also morphemes as parts of words. In September of 2008, AudRecog made perhaps not a saltational leap but a major step forward by incorporating an improved algorithm of using differential activation to recognize subwords or parts of words within a complete word.
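Subword recognition can be roughly approximated as follows. This hedged Python sketch abstracts away the differential-activation arithmetic and borrows only the "stemgap" idea visible in the source code of section 4; the function name, the dictionary layout, and the one-character gap allowance (as for a plural "S") are illustrative assumptions.

```python
def recognize_with_stem(word, memory):
    """Recognize a stored word-stem inside a longer input word."""
    best, sublen = 0, 0
    for w, psi in memory.items():
        if word.startswith(w):          # the whole stored word is a prefix
            if len(w) > sublen:         # prefer the longest matching stem
                best, sublen = psi, len(w)
    stemgap = len(word) - sublen        # unmatched tail after the stem
    return best if 0 < sublen and stemgap <= 1 else 0
```

Thus a known "CAT" lets the AI understand the input "CATS", while "CATFISH" leaves too large a stemgap and is not forced onto the "CAT" concept.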

4. Source code of AudRecog from 10 May 2010

: AudRecog ( auditory recognition )
0 audrec !
0 psi !
8 act !
0 actbase !
midway @ spt @ DO
I 0 aud{ @ pho @ = IF \ If incoming pho matches stored aud0;
I 1 aud{ @ 0 = IF \ if matching engram has no activation;
I 3 aud{ @ 1 = IF \ if beg=1 on matching no-act aud engram;
\ audrun @ 1 = IF \ if comparing start of a word; 8may2010
audrun @ 2 < IF \ if comparing start of a word; 8may2010
I 4 aud{ @ 1 = IF \ If beg-aud has ctu=1 continuing,
8 I 1+ 1 aud{ ! \ activate the N-I-L character,
0 audrec !
len @ 1 = IF
I 5 aud{ @ monopsi !
THEN \ End of test for one char length.
THEN \ end of test for continuation of beg-aud
THEN \ end of test for audrun=1 start of word.
THEN \ end of test for a beg(inning) non-active aud0
THEN \ end of test for matching aud0 with no activation
I 1 aud{ @ 0 > IF \ If matching aud0 has activation,
0 audrec ! \ Zero out any previous audrec.
I 4 aud{ @ 1 = IF \ If act-match aud0 has ctu=1 continuing,
2 act +! \ Increment act for discrimination.
0 audrec ! \ because match-up is not complete.
act @ I 1+ 1 aud{ ! \ Increment for discrimination.
THEN \ end of test for active-match aud0 continuation
I 4 aud{ @ 0 = IF \ If ctu=0 indicates end of word
len @ 2 = IF \ If len(gth) is only two characters.
\ I 1 aud{ @ 0 > IF \ Or test for eight (8).
I 1 aud{ @ 7 > IF \ testing for eight (8).
I 5 aud{ @ psibase ! \ Assume a match.
THEN \ End of test for act=8 or positive.
THEN \ End of test for two-letter words.
THEN \ End of test for end of word.
I 1 aud{ @ 8 > IF \ If activation higher than initial
8 actbase ! \ Since act is > 8 anyway; 8may2010
I 4 aud{ @ 0 = IF \ If matching word-engram now ends,
I 1 aud{ @ actbase @ > IF \ Testing for high act.
I 5 aud{ @ audrec ! \ Fetch the potential tag
I 5 aud{ @ subpsi ! \ Seize a potential stem.
len @ sublen ! \ Hold length of word-stem.
I 5 aud{ @ psibase ! \ Hold onto winner.
I 1 aud{ @ actbase ! \ Winner is new actbase.
THEN \ End of test for act higher than actbase.
0 audrec !
monopsi @ 0 > IF
monopsi @ audrec !
0 monopsi !
THEN \ End of inner test.
THEN \ End of test for final char that has a psi-tag.
THEN \ End of test for engram-activation above eight.
THEN \ End of test for matching aud0 with activation.
THEN \ End of test for a character matching "pho".
I midway @ = IF \ If a loop reaches midway; 8may2010
1 audrun +! \ Increment audrun beyond unity; 8may2010
THEN \ End of test for loop reaching midway; 8may2010
-1 +LOOP
0 act !
0 actbase !
psibase @ 0 > IF \ If a winning word was found,
psibase @ audrec ! \ accept the winner as recognized.
THEN
audrec @ 0 = IF \ If no whole word has been recognized,
monopsi @ 0 > IF \ but a single-character word matched,
len @ 2 < IF \ and input was only one character long,
monopsi @ audrec ! \ accept the single-character word.
0 monopsi !
THEN
THEN
THEN
audrec @ 0 = IF \ If recognition is still empty,
psibase @ 0 > IF
psibase @ audrec ! \ fall back upon the psibase winner.
THEN
THEN
audrec @ 0 = IF \ If still no recognition,
morphpsi @ audrec ! \ fall back upon a known morpheme.
THEN
sublen @ 0 > IF \ If a word-stem was held,
len @ sublen @ - stemgap ! \ gap between word and stem.
THEN
stemgap @ 0 < IF 0 stemgap ! THEN
stemgap @ 1 > IF 0 subpsi ! THEN
stemgap @ 1 > IF 0 morphpsi ! THEN
stemgap @ 1 > IF 0 audrec ! THEN
subpsi @ morphpsi !
0 psibase !
0 subpsi !
audrec @ 0 > IF
stemgap @ 2 > IF \ If the gap is too large,
0 audrec ! \ abandon the recognition.
THEN
THEN
pho @ 83 = IF \ If the final phoneme is "S" (ASCII 83),
2 num ! \ set the num(ber) flag to plural.
THEN
audrec @ audpsi ! \ Hand the result back to AudMem.
; ( End of AudRecog; return to AudMem auditory memory )

5. Troubleshooting AudRecog

Temporary diagnostic messages may be inserted into the source code to display exactly what AudRecog is doing as it processes input. Typically such messages will identify important variables and immediately state their values. Remember to remove such diagnostic messages after debugging any mind-module.

It is also helpful to stop the AI by pressing the Escape key after entering some test input and then to run the ".psi" or ".aud" array reports to see what values have been recorded during the operation of AudRecog. If a word is recognized properly, it will have the proper Psi concept number in both the auditory memory array and the Psi concept array.

If you as a programmer try to use simple string-matching to recognize words, your module becomes incapable of the more subtle operations afforded it when you use not only chains of activation to recognize a series of sounds, but also differential activation to recognize subsets (morphemes) within a series of sounds. Think like a neuroscientist, not like a common, garden-variety hacker hobbled by the groupthink of string-recognition.

6. Teamwork for AudRecog

Imagine that you are a made member of an elite Super-AI maintenance team charged and entrusted with the awesome responsibility of keeping a mission-critical AI Mind up and running, while safeguarding humanity against the dangers inherent in nurturing a higher form of intelligence capable at any time of breaking loose from human control and turning (or turing) against its human origins.

If it is your job to focus exclusively on the AudRecog module, your professional standards require you to grok all ideas immanent in this current document and in whatever AudRecog literature you can glean from an exhaustive search of all pre-Cyborg, that is, human knowledge. Therefore this document was prepared with you in mind, mindkeeper or mind-maintainer or whatever your job description calls you. Be aware, be very aware, that other AI shops and other AI enterprises are most likely duplicating your every thought and your every action in the accelerating race to the Technological Singularity.

7. History of AudRecog

The MindForth AudRecog module was adapted from the Amiga MindRexx "Comparator" and "String_effect" modules of 1994, which jointly served to compare incoming phonemes against auditory engrams strung together into the memory of a word. In the archival 28may1998 MindForth as described in the ACM SIGPLAN Notices, Screen #28 is the String-Effect and Screen #49 is the Comparator precursor to AudRecog.

The 11feb02A.f MindForth subsumes String-Effect into COMPARATOR, and the 4mar02A.f version of MindForth renames COMPARATOR as the AudRecog module. Although the word Comparator made sense for a module comparing input against memory, the overly broad term Comparator had to give way to the compound name AudRecog that would focus on the specific sense of audition and on the function of recognition, so that other sensory comparators could eventually be named with such appropriate terms as GusRecog, OlfRecog, TacRecog and VisRecog. Such precision in the naming of mind-modules frees up avenues of future AI development, because the names are already stubbed in for enterprising individuals to write the code.

8. MindForth Programming Journal (MFPJ)

Some but not all of the recent MFPJ entries dealing with AudRecog are available on-line among the following locations.

Sat.16.AUG.2008 - Tweaking the audRecog Module

Mon.18.AUG.2008 - audRecog Word-Stem Recognition

Tues.30.SEP.2008 -- audRecog Word-Stem Recognition

Wed.12.MAY.2010 -- Solution and Bugfix of AudRecog

9. Future of AudRecog

Just as MindForth is a precursor of next-generation AI Minds, likewise the AudRecog mind-module is a primitive implementation of AI technology that must mutate and evolve into a more advanced state of the art. Chief among the impending changes will be a switch-over from keyboard ASCII input to speech recognition of phonemic input. The SpeechAct module and the AudRecog module must both evolve in tandem so that the AI Mind may issue speech output and comprehend spoken input.


Friday, April 30, 2010


The English pronoun (EnPronoun) module of the MindForth artificial intelligence substitutes a personal pronoun in place of a noun under discussion, so that thinking or conversation may flow more smoothly.

1. History

The EnPronoun (English pronoun) module is so new that the MindForth project has no legacy webpages describing it. It became necessary to create the EnPronoun module during the development of MindForth code for the answering of user input queries in the what-do-X-do format. If the user asks, "What do robots do?", it is only natural to use the English pronoun "they" in response, rather than repeating the noun "robots" in the answer. It was also easy for an AI coder to replace plural nouns with "they" and not have to worry about the agreement in gender between a singular pronoun and its antecedent. However, once the EnPronoun module existed, it was easy to take the next step of adding an mfn gender flag to the AI MindGrid and to code agreement between noun and pronoun with respect to gender.
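The substitution behavior described here can be sketched briefly. The lookup table and function below are assumptions made for illustration, not MindForth code; only the what-do-X-do motivation, the use of "they" for plurals, and the mfn gender flag come from the text.

```python
# Hypothetical pronoun table keyed on number and the mfn gender flag.
PRONOUNS = {
    ("plural", None): "they",
    ("singular", "m"): "he",
    ("singular", "f"): "she",
    ("singular", "n"): "it",
}

def en_pronoun(noun, number, mfn=None):
    """Return the pronoun replacing the noun, or the noun itself."""
    # Plural nouns need no gender agreement, hence the None key.
    key = (number, None if number == "plural" else mfn)
    return PRONOUNS.get(key, noun)
```

So an answer to "What do robots do?" can say "they" instead of repeating "robots", and a singular antecedent is matched in gender via the mfn flag.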

2. Implications

As AI Minds evolve, the emergence of each new feature in mental functionality has implications for further development and for the approach of a Technological Singularity. In the case of the EnPronoun module, the implications are rather broad and sweeping. Before there was an EnPronoun module in MindForth, the AI Mind could at first think only about plural nouns, and then more recently about a singular noun when the AI became able to detect a singular stem within the input of a plural form. For instance, if the AI knew the word "books", it was able to understand that singular "book" and plural "books" were the same concept. This ability was not innate; it had to be coded into the AI Mind.

As we augment the MindGrid with a lexical flag to keep track of gender, and we encode the handling of gender in the generation and comprehension of sentences of thought, MindForth becomes a better candidate AI for "porting" or translation into software that will handle gender-intensive human languages such as German, Russian, Spanish, French and Italian. When our use of pronouns causes the AI to develop a facility in handling gender, MindForth draws considerably closer to becoming a bilingual AI that speaks and thinks in both English and German. It could have become a bilingual AI in English and Latin, but we have not yet developed the time-travel feature that will teleport the AI back into ancient Roman times when Latin was the lingua franca of the civilized world. Instead, we must make do with the language of Beethoven and Nietzsche and Heinrich Heine, not Vergil.
