Purpose: Discussion of Strong AI Minds thinking in English, German or Russian.

Tuesday, December 12, 2017


Inserting "AN" as AN English article before a vowel.

In ghost249.pl we first declare $us1 as the first of up to seven "upstream" variables meant to keep track of recently mentioned nouns, for which the EnArticle() module may slip in the definite article "THE" in reference to whatever subject is currently under discussion. A human user might say, "I know a tinker, a tailor, a soldier and a spy". We want our ghost.pl honourable schoolboy to be able to say, "Tell me about the tinker but not about the spy". The software will briefly have filled $us1, $us2, $us3 and $us4 with the concept numbers of all four mentioned items, so as to be able to refer to each one of them as "THE" subject of discussion. We wish to declare these variables today but not use them yet, because motos praestat componere fluctus, as Vergil used to say, or "it is better to calm the troubled waves" of hidden bugs which imperil the smooth functioning of our AI Minds. So let us run ghost249.pl and ask it, "Who are you?"
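The intended mechanism can be sketched in a language-neutral way. Here is a minimal Python sketch, with a hypothetical RecentSubjects class standing in for the $us1..$us7 upstream variables; this illustrates the principle only and is not the ghost.pl code:

```python
from collections import deque

class RecentSubjects:
    """Track up to seven recently mentioned noun-concepts, after the
    fashion of the $us1..$us7 "upstream" variables (a sketch, not the
    actual ghost.pl implementation)."""
    def __init__(self, size=7):
        self.slots = deque(maxlen=size)  # oldest entries fall off

    def mention(self, concept):
        if concept not in self.slots:
            self.slots.append(concept)

    def article_for(self, concept):
        # A noun already under discussion takes the definite article.
        return "THE" if concept in self.slots else "A"

us = RecentSubjects()
for noun in ("tinker", "tailor", "soldier", "spy"):
    us.mention(noun)
print(us.article_for("tinker"))  # THE -- already mentioned
print(us.article_for("robot"))   # A -- new to the conversation
```

With all four nouns of the example sentence held in the slots, each one can later be referred to with "THE", exactly as in "Tell me about the tinker but not about the spy".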

It is irksome to watch the ghost in the machine output the erroneous sentence "I AM A ANDRU AND I HELP KIDS" without using "AN" as the indefinite article, so we will stop and re-implement the solution which we used in our previous AI Minds. First we do something clever with the word "AN" in the $vault of the MindBoot. We restore "101" as the concept number of both "A" and "AN", but for "AN" we remove the erroneous panel entry "$psi=102" (it should have been "$psi=101"), so that the input of "AN" will still be recognized as a form of the article "101=A", but the thinking ghost will not be able to find "AN" as an acceptable word for output. Instead, the AI will have to assemble the word "AN" from the legacy code which we are now about to re-implement.

Oops! We cannot find the legacy AN-substitution code in the agi00051.F 2017-09-17 version of MindForth, so let us look back even further, to the 24jul14A.F MindForth from 2014-07-24. There we find the $anset variable, which we must now declare as a flag in the Ghost AI so that EnArticle() may use "AN" instead of "A" before a noun beginning with a vowel. When we try to use the $anset flag, we get an output of "I AM A NANDRU". We may have to put the $anset code in the Speech module, where "A" and "N" may be joined tightly together. When we do so, BINGO, we get an output of "I AM A ROBOTS AND I AM AN ANDRU", which at least achieves the insertion of "AN" before a vowel. It is still irksome that the Ghost webserver AI uses the plural "ROBOTS", but as AI Mind Maintainers we know from experience that the backwards search for the concept of "571=ROBOT" simply finds a plural example when a singular(ity:-) is needed. It should not be too hard to skip over a plural engram when circumstances require a singular noun.
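The AN-substitution itself reduces to a simple test on the first letter of the following noun. Here is a Python sketch of that logic with a hypothetical en_article helper; the real ghost.pl code performs this inside Speech(), character by character:

```python
def en_article(noun, definite=False):
    """Pick an English article for a noun. 'AN' before a vowel sound is
    approximated by checking the first letter -- a sketch of the $anset
    idea, not the actual ghost.pl Speech() code."""
    if definite:
        return "THE"
    return "AN" if noun[:1].upper() in "AEIOU" else "A"

print(en_article("ANDRU"))                 # AN
print(en_article("ROBOT"))                 # A
print(en_article("ANDRU", definite=True))  # THE
```

Checking only the first letter is an approximation; English actually keys on the vowel sound ("an hour", "a unicorn"), but for the MindBoot vocabulary the letter test suffices.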

Since we have solved the problem of how to insert the indefinite article "AN" before a noun starting with a vowel, it is time to upload our Perlmind code and the Table of AI Variables for Mind Maintainers. We also take this opportunity to editorialize about the overall trend of the Ghost AGI project. We sincerely believe that we have solved the AI-hard problem of Natural Language Understanding, not in its totality but with sufficient functionality to show the AI community that element after element of the NLU problem may be solved and incorporated into the open-source AI codebase. We do not yet know to what extent, if any, the Perlmind codebase is being downloaded and tested and tweaked. An outfit like IBM or Microsoft or Joe's Bar and Grill could assign a Manhattan-project-full of programmers to advance the Perl AI code far beyond our own meager efforts. Of course, to mention this possibility is hopefully to scare all Fortune 500 companies into at least having a new hire or an idle-hands old hire take a look at the Perl AI and report on its potential merits. Some Harvard drop-out might say "Nobody needs more than 640K of memory", some Ken Olsen might say, "Who would ever want to have a home computer?", and likewise people might say of Mentifex, "He is a known trafficker in radical ideas." So take 'em or leave 'em.

Monday, December 11, 2017


How a Mind Maintainer improves a major mind-module.

For anyone interested in a high-tech career as an AI Mind Maintainer, here is a typical debugging session. We run the ghost248.pl AI initially without human input, to see if any mind-bugs manifest themselves. We know from recent coding sessions that there is a bug lurking in the retrieval of the ego-pronoun "701=I" under certain circumstances, but nothing goes especially wrong when we let ghost248.pl engage in meandering thoughts. A few irksome things happen, such as a tendency for the AI to repeat itself in a conjoined thought like "I AM PERSON AND I AM PERSON". We suspect that we must make searches for active ideas start from $krt (knowledge representation time before thinking) instead of current time $t so that the AI does not think a thought and then repeat the very same thought lodged in recent memory. But it is much more irksome that the ghost in the machine is not saying "I AM A PERSON", so we decide now to attempt some refinements in the function of the EnArticle() mind-module for inserting an English article ("a" or "the") before the output of a noun.
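The $krt idea deserves a sketch. Any search for active ideas that starts at current time $t will immediately re-find the engrams of the thought just laid down; starting the backward search at $krt instead skips them. A Python sketch with a hypothetical memory list (illustrative only, not the ghost.pl search code):

```python
def find_active_idea(memory, t, krt):
    """Search backwards for the most recent stored idea, starting at
    krt (knowledge-representation time before thinking) rather than t,
    so that engrams laid down between krt and t -- the thought just
    generated -- are skipped. A sketch of the principle only."""
    for i in range(krt, 0, -1):
        if memory[i] is not None:
            return i, memory[i]
    return None

memory = [None] * 12
memory[5] = "I AM A PERSON"    # an older idea in the knowledge base
memory[10] = "I AM A PERSON"   # the just-thought copy, stored after krt
t, krt = 11, 9
print(find_active_idea(memory, t, krt))  # (5, 'I AM A PERSON')
```

Because the search begins at index 9, the fresh copy at index 10 is never seen, and the AI does not parrot the thought it has only just finished thinking.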

We start experimenting with EnArticle() by inserting into it a diagnostic message to tell us whether the module is being called, and what are the values of $subjnum (subject number) and $qv2psi -- the second item, or verb, in a sentence of thought being generated as a combination potentially consisting of subject (qv1), verb (qv2), indirect object (qv3), and direct object (qv4). We run the AI and we inspect for any diagnostic message just before the output of "I AM PERSON", but there is no message. Then we remember that we turned off calls to EnArticle() because it was saying "THE" too frequently. In order to turn it back on, we search the code for "EnArticle", find it, and reinstate the call from EnNounPhrase(). The program goes back to saying the word "THE" too many times, but we also see the values of the variables we are interested in. So we shall try to insert some sensible code.

In the mind-module for English articles, we copy the loop that finds the article "THE" in memory and then, mutatis mutandis, we get the code to find the article "A". We run the fledgling AI and eventually it says, "I HELP A THE KIDS AND KIDS MAKE A THE ROBOTS". The AI is inserting the indefinite article "A" and the definite article "THE". Before we get called on the carpet before the corporate board for not being a good Mind Maintainer, we hasten to pledge that we have been brainstorming some ideas to vastly improve the function of the article-inserting mind-module. We go back into the module and we cook up some code to insert the singular article "A" when the $subjnum is a unitary one ("1") and the $qv2psi verb is "800=BE", in order to cover cases of "I AM...." At the same time we comment out the block of code that inserts "THE" until we are ready to refine the code in a later mind-maintaining session. We run the AI without input and soon it says "I AM A PERSON", so maybe we get to keep our job as an AI Mind Maintainer. Wouldn't you like to have some business cards made up that say you are an "AI Mind Maintainer"? Oh no! The urge has struck. Are we going to be at a 2018 New Year's Eve party and have people say to us, "What on earth is an AI Mind Maintainer?"
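The test we cooked up is narrow by design. A Python sketch of the condition, using the document's concept number 800 for the verb "BE" (a hypothetical helper, not the EnArticle() module itself):

```python
BE = 800  # concept number of the verb "BE" in the MindBoot sequence

def maybe_indefinite_article(subjnum, qv2psi):
    """Decide whether to slip in the indefinite article 'A': only for a
    unitary subject with a be-verb, covering cases like 'I AM A ...'.
    A sketch of the EnArticle() test, not the actual mind-module."""
    return "A" if subjnum == 1 and qv2psi == BE else ""

print(maybe_indefinite_article(1, 800))  # A
print(maybe_indefinite_article(2, 800))  # (empty -- plural subject)
```

Restricting the insertion to singular subjects with "800=BE" is what prevents the earlier over-generation of "A THE KIDS" while still yielding "I AM A PERSON".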

We had better put this code and this journal entry up on the Web lickety-split, but first let us mention our plans for how the article-module shall decide algorithmically to insert the definite article "THE". We will create a cluster of variables that briefly hold onto any noun being mentioned in a train of thought or a conversation with a human user. When a particular noun is first mentioned, its identifier goes into the brief-tenure variable. If the same noun gets mentioned again quite soon, in one of the next few sentences, the software will insert "THE" before the new mention of the noun. A human user might say, "I have a book." Then the AI can say something like, "What is the book?", because the place-holder variable prompts the saying of "THE".

Over the past weekend we uploaded a new webpage named MindBoot Module Documentation for Strong AI Mind Maintainers as one of a dozen new uploads with a similar nomenclature emphasizing the career pathway for mind-maintainers. If your corporate information technology (IT) department does not have at least one Mind Maintainer on staff, then (imagine some dire prediction here :-).

Friday, December 08, 2017


Cleaning up the interface between the thinking AI and the human user.

In ghost247.pl we are trying to clean up the display of the thinking of the AI as output in the man-machine dialog. Code in the module EnThink() which has been displaying intermediate output may now be commented out.

At the end of Sensorium() we change the point-of-view $pov flag back to one ("1") and we stop getting a duplicate line-display during the AudInput() module, where $pov is tested for a value of two ("2") during external input and not during the re-entry of thought.

All these minor adjustments to the AGI codebase are due to a healthy respect for the principle of "survival of the fittest" in these early stages of AI evolution. We are eager for so-called "early adopters" to install the Perlmind AI on numerous computers, where the surviving codebase may mutate and evolve along multiple codelines rather than bloodlines. We encourage the emerging cohort of AI Mind Maintainers to post news and ideas in diverse forums devoted to Perl, to AI, to robotics, to neuroscience and so forth. Anyone who finds a bug in the ghost.pl AI should please report it in the Perl subReddit or in comp.lang.perl.misc on Usenet.

Thursday, December 07, 2017


Fixing word-recognition bug and omission of first conjoined idea.

When we run ghost246.pl initially without input, no subject-noun is activated, so the EnNounPhrase() module by default activates the "701=I" ego-concept. However, the AudInput module fails to recognize "I" as the first input-word and mistakenly sends "I" into the NewConcept() module. We correct this problem by inserting into the Speech() module a line of code

$pho = " "; AudInput(); # 2017-12-07: prime AudInput with a 32=SPACE
which makes AudInput and OldConcept able to recognize the "I" pronoun.

Then we had a problem where the Perlmind AI was inserting the conjunction "AND" into its thinking but only showing the second idea, the conjoined idea. We bug-fixed by no longer resetting $output to zero in the Sensorium module.

Sunday, December 03, 2017


Calling out to human users after a period of no input to the AI.

In the ghost244.pl AI we begin implementing a tendency for the Perl AI to initiate conversation with the external world by sounding a beep and by demanding "TEACH ME SOMETHING" when there has been a no-input period of arbitrary duration, as specified by the AI Mind Maintainer. In recent weeks we have enabled the Perlmind to answer who-queries and what-queries, and to inquire about the meaning of new words being introduced by human users to the AI. We invite human users to demonstrate the concept-based artificial Mind to AI enthusiasts and to discuss whether the AI truly seems to be thinking, or is conscious, or is exercising free will.

Wednesday, November 29, 2017


Implementing the Indicative Mind-Module for Generation of Thought.

The free-of-charge, open-source, concept-based artificial intelligence ghost.pl in Strawberry Perl Five has become a primitive Mind-in-a-Box which AI enthusiasts and Perl programmers may download and experiment with. Before this current last week of November 2017, the Perlmind AI was a proof-of-concept AI with a river of diagnostic messages flooding the human-computer-interface (HCI). Now each new release of the free AI source code has a bare-bones interface where the human user may query the lurking AI Mind with questions of "who..." or "what..." and users may respond in English or Russian to the on-screen output of the AI.

In order to let users see the genuine thinking of an artificial Mind, we are implementing the ConJoin() module in Perl, so that the AI may output two or three active ideas "conjoined" by a conjunction, such as "I know THAT you are a robot AND I think THAT you need an AI Mind." The expanded functionality of thought requires some major changes in the cognitive architecture of the AI software. Whereas previously the EnThink() mind-module called the linguistic generation modules directly, now the use of conjunctions will require a module of thinking to be called two or more times as two or three ideas are joined together by the operation of the ConJoin() module.

Therefore we must now insert a sub-module between the EnThink() module and the English generation modules. We introduce the Indicative() module for the generation of thoughts in the grammatically indicative mood, and we comment out the inclusion of future mind-modules such as Subjunctive(), Imperative(), Interrogative() and perhaps even Optative() -- if we want to cater to speakers of ancient Greek. We will introduce a $mood variable to trigger selectively the calling by EnThink() of its immediate sub-modules.

Saturday, November 25, 2017


Enabling the Ghost AI to respond to queries with "I do not know"

In ghost239.pl we would like to endow the artificial Perlmind with the ability to answer "I do not know" when a human user enters a who-query for which the knowledge base (KB) has not a clue for an answer. It is rather jarring to see some total non-sequitur in response to a query like "Who plays dice with the universe?" We sense that a simple ELSE-clause in the SpreadAct module may be enough to divert the program-flow from not finding an answer and giving up, to not finding an answer and stating "I do not know". We may need to involve the free-will Volition module in deciding to admit ignorance, but first it seems reasonable to write the underlying code.

When we enter "Who makes robots" we get the response, "KIDS MAKE THE ROBOTS", but when we ask, "Who knows God" we get "I HELP THE KIDS" as an early element in the self-knowledge of the AI. So in the section of the SpreadAct module that deals with a "who+verb+dir.object query" and in the snippet of code for outputting a response through Speech(), we introduce a few lines of ELSE-clause:

      } else {  # 2017-11-25: if no correct answer is found...
        print "I DO NOT KNOW \n"; # 2017-11-25
      } # 2017-11-25: End of else-clause
We then obtain a series of "I DO NOT KNOW" statements while the AI is trying in vain to supply an informed answer, but we still get the non-sequitur in the displayed output.

Suddenly we realize that we were appending the ELSE-clause at the end of an obsolete code-sequence for speaking the subject-noun of the query-response. Now instead we will try putting the ELSE-clause at the end of the code that finds the necessary direct-object for the query-answer. No, that code is not always invoked as an else-test, if the verb itself is not even found. Therefore we may need to try consolidating the outer and inner test into a single test, so that there will be a basis for an else-clause.

Next we try introducing a $dunnocon flag that we set to a positive one ("1") at the start of the segment in SpreadAct() that deals with English who+verb+object queries, so that finding at least one answer may knock $dunnocon back down to a value of zero. If the value does not revert to zero, we have a "dunno" situation which we may use to persuade the AI to state "I DO NOT KNOW" as a response. Quite soon the code works to at least detect the positive dunno-flag.
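The dunno-flag logic can be sketched as follows, with a toy knowledge base of subject-verb-object tuples standing in for the @psy memory array (illustrative only, not the SpreadAct() code):

```python
def answer_who_query(kb, verb, dobj):
    """Respond to a who+verb+object query, falling back to
    'I DO NOT KNOW' via a dunno-flag when no stored fact matches.
    kb is a hypothetical list of (subject, verb, object) facts."""
    dunnocon = 1                       # assume ignorance at the start
    for subj, v, o in reversed(kb):    # backwards search through memory
        if v == verb and o == dobj:
            dunnocon = 0               # finding an answer knocks the flag down
            return f"{subj} {v} {o}"
    if dunnocon == 1:                  # flag never reverted: a "dunno" situation
        return "I DO NOT KNOW"

kb = [("KIDS", "MAKE", "ROBOTS")]
print(answer_who_query(kb, "MAKE", "ROBOTS"))  # KIDS MAKE ROBOTS
print(answer_who_query(kb, "KNOWS", "GOD"))    # I DO NOT KNOW
```

Setting the flag positive before the search and letting a successful hit zero it out is exactly the shape of the $dunnocon mechanism described above.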

In a quick-and-dirty way we get the Perlmind to respond to who+verb+object queries with either a correct answer or an admission of "I DO NOT KNOW." It was necessary to change the starting time of a backwards search in SpreadAct() by about ten time points in order to skip over the conceptual elements of the query itself.

Friday, November 24, 2017


Restoring the ability of ghost.pl AI to answer who-queries.

The ghost238.pl AI is an accumulation of various problems. For instance, it has lost the ability properly to answer "Who are you" as an input-query, although it does properly answer "Who am I" with "YOU ARE THE MAGIC". Let us see which most recent version could still answer "Who are you". The last version answering properly was ghost234.pl with "I AM THE PERSON".

We notice that the $qv1psi variable was being assigned the proper value of "701" for "I" at the end of the "Who are you" input in ghost234.pl, but an improper value of zero ("0") in the ghost238.pl AI. Since the SpreadAct module does not have the correct concept ("701=I") to search for, apparently the correct response is not being found. It also seems that $qv1psi does not matter for the "Who am I" query because of the paucity of knowledge about "707=YOU" in the innate knowledge base (KB). In other words, the ghost238.pl AI correctly responds "YOU ARE THE MAGIC" only because it knows nothing else about "707=YOU".

Towards the end of InStantiate, when we arbitrarily fill the $qv1psi variable with the $psi value, the problem with "Who are you" queries seems to be solved, rather inexplicably.

Tuesday, October 24, 2017


Tweaking EnVerbPhrase() for correct responses to who-queries.

We have been adding new material to the SpreadAct() documentation page so as to describe what happens during the processing of an input query in the format of WHO+Verb+Direct-Object. As we try to ensure that the AI generates a grammatically correct response, we change a line of code in the EnVerbPhrase() module to

if ($k[1]==$verbpsi && $i==$verblock) { # 2017-10-24: zero in!
in order to ensure that we obtain the knowledge-base memory that correctly and grammatically answers the input-query. Asking "Who has a child" we get "WOMEN HAVE THE CHILD". Asking "Who makes robots" we get "KIDS MAKE THE ROBOTS". The KB-engram is already grammatical; we merely retrieve the pre-existing, correct grammar rather than generating it.

Friday, October 20, 2017


Troubleshooting problems in responses to who-queries.

In the ghost236.pl Perl AI we need to troubleshoot why verbs used in response to who-queries are not using the correct grammatical number in agreement with the subject-noun in the response. First we need to find out whether the num(ber) from a remembered noun is kept track of so that any new engram of the same noun will have the same num(ber). We observe more than one variable used in previous AI Minds to keep track of the num(ber) dealt with in the OldConcept() mind-module, and we try now to standardize on $recnum or "recognized number". The JavaScript AI with version number "27jun15A" has a test in the InStantiate() module to replace $num with $recnum if $recnum is greater than zero, so we might use the same test in the ghost.pl AI. As we debug at length, we notice that the JavaScript AI easily distinguishes the number difference between "MAN" and "MEN" and between "WOMAN" and "WOMEN". We fear that the problem may lie in the AudRecog() module, which is always difficult to debug.
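The $recnum test borrowed from the 27jun15A JavaScript AI is tiny but decisive. A Python sketch of it (a hypothetical helper, not the InStantiate() module itself):

```python
def instantiate_num(num, recnum):
    """If OldConcept() has recognized a stored num(ber) for a noun,
    let the recognized number override the default -- a sketch of the
    $recnum test ported from the 27jun15A JavaScript AI."""
    return recnum if recnum > 0 else num

print(instantiate_num(1, 2))  # 2 -- the remembered plural wins
print(instantiate_num(1, 0))  # 1 -- no recognition; keep the default
```

With this override in place, a new engram of "MEN" inherits the plural number of the remembered "MEN" rather than defaulting to singular, so verb agreement in responses has a chance of being correct.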

Making AudRecog() wait for end of word to declare recognition.

In AudRecog(), first we need a loop that activates each matching character in a sequence. We want AudRecog() not to declare an $audrec until a following 32=SPACE has come in. Upshot: At first we try replacing the entire AudRecog() code, until we obtain such positive results that we restore AudRecog() and insert only the code-tweaks which yielded the positive results. We also tweak AudMem() slightly for operation co-ordinated with AudRecog().
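The wait-for-space principle can be sketched apart from the character-activation machinery. In this Python sketch a hypothetical lexicon maps whole words to concept numbers, and recognition is declared only when the terminating 32=SPACE arrives, so that "MAN" is not prematurely recognized inside "MANY" (an illustration of the idea, not the ghost.pl AudRecog() code):

```python
def aud_recog(memory_words, stream):
    """Accumulate input characters and declare a recognition ($audrec)
    only at the terminating SPACE -- a sketch of the wait-for-end-of-word
    principle, not the actual character-activation code."""
    word = ""
    recognitions = []
    for ch in stream:
        if ch == " ":                        # end of word: only now decide
            recognitions.append(memory_words.get(word, 0))
            word = ""
        else:
            word += ch                       # keep matching characters
    return recognitions

lexicon = {"MAN": 531, "MANY": 532}          # hypothetical concept numbers
print(aud_recog(lexicon, "MANY MAN "))       # [532, 531]
```

A premature declaration after three characters of "MANY" would yield the concept for "MAN"; deferring the decision until the space distinguishes the two words, and likewise "WOMAN" from "WOMEN".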

Tuesday, September 26, 2017


Using Natural Language Understanding (NLU) to answer questions.

Although in the proof-of-concept ghost235.pl searchable AI Mind we are dealing with an initially small knowledge base (KB), our coding of the ability to search for knowledge would apply equally well to an entire datacenter full of information. Today we are using the spreading-activation SpreadAct() mind-module to activate the conceptual elements of knowledge which will supply answers based upon input queries in the format of "Who + verb + noun", as in "Who makes robots?" The query-word "who" is subject to de-activation upon input, while the verb-concept and the noun-concept in the query are passed through SpreadAct() not as random parameters for an associative search, but in their specific roles as Main Verb and as Direct Object of the verb. Thus the AI Mind should respond with answers tailored to the structure of the query, in such a way as truly to demonstrate Natural Language Understanding (NLU).

We start by declaring the new flag-variable of "query-condition for who+verb+direct-object" $qvdocon to segregate the pertinent code in SpreadAct(), and also the "query-condition for who+verb+indirect-object" $qviocon to hold in reserve for when we code the AI response to input queries in the format of "To whom does God give help?" The creation of the one flag suggests the creation of the similar flag, so we declare both of them.

In the InStantiate() module we insert code to detect a who-query with a verb other than "be", and we set $qv2psi with the concept number of the verb. We set $qv4psi with the concept number of any input noun assumed to be the direct object of the incoming verb. Then in the pertinent area of SpreadAct() we need to start searching backwards through memory for instances of the verb in the who-query.

Eventually we obtain a rough but correct response to our queries of "Who does such-and-such?" but we need to debug and fine-tune the parameters. We ask, "Who makes robots" and we get "KIDS MAKES THE ROBOTS." We ask, "Who has a child" and we get "WOMEN HAS THE CHILD". We need to upload and release the ghost235.pl code which achieves the objective, albeit primitively, and we must not code the same version further lest we wreck or corrupt the new functionality of answering who-queries in the form of "who" plus verb plus direct object. As we debug future releases of our code, the ghost235.pl version remains safe and intact.
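The front end of the process, extracting $qv2psi and $qv4psi from the incoming who-query, can be sketched like this (a hypothetical word-to-concept lexicon and helper function; the real work happens in InStantiate() and SpreadAct()):

```python
def parse_who_query(tokens, lexicon):
    """Extract the verb (qv2psi) and direct-object (qv4psi) concept
    numbers from a who+verb+object query such as 'WHO MAKES ROBOTS'.
    A sketch only; lexicon is a hypothetical word-to-concept map."""
    qv2psi = qv4psi = 0
    if tokens and tokens[0] == "WHO":
        if len(tokens) > 1:
            qv2psi = lexicon.get(tokens[1], 0)   # the query verb
        if len(tokens) > 2:
            qv4psi = lexicon.get(tokens[2], 0)   # the direct object
    return qv2psi, qv4psi

lexicon = {"MAKES": 73, "ROBOTS": 571}           # hypothetical numbers
print(parse_who_query("WHO MAKES ROBOTS".split(), lexicon))  # (73, 571)
```

With the verb and object concepts captured in their specific roles, the backwards search through memory can then look for a stored idea whose verb and direct object match, rather than performing a merely associative search.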

Monday, September 18, 2017


SpreadAct() finds subject and verb to respond to a who-query.

As we try to improve upon who-queries with the ghost229.pl AI, we realize that the input of "Who are you" as a query needs to activate instances of the 701=I concept with 800=BE as both a $seq and a $tkb $verblock. It is not enough to insist upon a positive $tkb verblock, because that value is only a time and not the identifier of a concept. The $seq value actually identifies the verb as a particular concept which the SpreadAct() module is trying to find.

It is not even necessary for SpreadAct() to impart activation to the conceptual node of the 800=BE $seq verb, because only the subject of the stored idea needs to have activation high enough to be selected as a response to an incoming query. We may therefore go into SpreadAct() and in the search code for $qv1psi as the subject of the query we only need to verify the existence of the 800=BE $seq verb, not activate it.

In SpreadAct() we make the necessary changes in the code searching for $qv1psi and $qv2psi. We ask "Who are you" and the ghost.pl AI properly answers "I AM THE PERSON." However, as the AI continues thinking, it makes some wrong associations. Suddenly we realize that we forgot to use the $moot flag to prevent the input who-query from leaving associative tags.

Sunday, September 17, 2017


Enabling conversation based upon input of query-words

In our eagerness to present a "Mind-in-a-Box", we had better restore to the Ghost AI some code from older AI Minds which enabled the machine intelligence to carry on a conversation with a human being. We would like the Ghost to be able to ask questions like "Who are you?" or "What do you think?" In older programs we used InStantiate() to depress the activation on question-words, so that information would flood in to fill the activational vacuum.

We just asked the AI "Who are you" and it answered, "I AM THE PERSON". But the program is not yet a stable version of the ghost.pl AI. In SpreadAct() we used $qv1psi to latch onto an activand subject, and in the same loop we used $qv2psi to super-activate the activand verb. But we need to let the AI go back to normal associations and not persist in answering the initial question.

  if ($qv1psi > 0) {  # 2017-09-17: if there is an activand subject...
    for (my $i=$t; $i>$midway; $i--) {  # 2017-09-17: search backwards in time.
      my @k=split(',',$psy[$i]);  # 2017-09-17: inspect @psy flag-panel
      if ($k[1] == $qv1psi && $k[12] > 0) { $seqpsi = $k[12] } # 2017-09-17: if seq, seqpsi
      if ($k[1] == $qv1psi && $k[13] > 0) {  # 2017-09-17: require verblock.
        print "  i= $i qv1psi= $qv1psi seqpsi= $seqpsi \n";  # 2017-09-17: diagnostic
        $k[3] = ($k[3] + 32);  # 2017-09-17: impose less than half of subj-inhibition.
        if ($k[12] == $qv2psi) { $k[3] = ($k[3] + 128) }  # 2017-09-17: hyper-activate
        print "   SprAct-mid: for $k[1] setting $k[3] activation \n"; # 2017-09-17: diagnostic
        $psy[$i]="$k[0],$k[1],$k[2],$k[3],$k[4],$k[5],$k[6]," # 2017-09-17
        . "$k[7],$k[8],$k[9],$k[10],$k[11],$k[12],$k[13],$k[14]"; # 2017-09-17
      }  # 2017-09-17: end of verblock test
    }  # 2017-09-17: end of (for loop) searching for $qv1psi concept.
  }  # 2017-09-17: end of test for a positive $qv1psi.

Wednesday, September 13, 2017


Dealing with problems in Russian be-verbs

As we start constructing the InFerence mind-module in Strawberry Perl 5, we enter the Russian statement "МАРК СТУДЕНТ" for "Mark is a student", but the ghost.pl AI does not create a be-verb after the subject. When we type in "ОН СТУДЕНТ" for "He is a student", we do indeed get the provisional be-verb. When we type in "РОБОТ СТУДЕНТ" for "The robot is a student," we do indeed get the instantiated be-verb, so perhaps the problem involves the use of a new concept instead of old, known concepts. (Now we are spreading "liquid paper" on the individual keys of the keyboard, because we need to write the Russian Cyrillic letters on each key.) When we first introduce the name with "He is Mark" and then "Mark is a student" in Russian, we do get the imputed be-verb. It turns out that we need to initialize the $seqneed variable with a value of "8" for expecting a verb, because the basic Parser module has not yet been called to set the value.

Proposing to consolidate the parsing functionality

We may be able to eliminate the original Parser() module by transferring its functionality to EnParser() for English and RuParser() for Russian.

Consolidating parser functionality into EnParser() and RuParser().

The original Parser() module starts with a $bias of "5" to expect a 5=noun. Then Parser() switches to a $bias of "8" to expect an 8=verb, after which the $bias switches back to "5" again, although an incoming noun could be an indirect object or a direct object. It may be possible to move the preposition-handling code and the object-handling code up into the Parser() module renamed as the EnParser() for English and the RuParser() for Russian.
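The bias sequence described above amounts to a small state machine over part-of-speech codes. A Python sketch with the document's codes 5=noun and 8=verb (illustrative only, not the Parser()/EnParser() code, and ignoring the indirect-versus-direct object distinction):

```python
def parse_with_bias(pos_tags):
    """Walk through part-of-speech inputs with the Parser() bias:
    expect a 5=noun, then an 8=verb, then nouns again (which may be
    indirect or direct objects). A sketch of the bias sequence only."""
    NOUN, VERB = 5, 8
    bias, roles = NOUN, []
    for pos in pos_tags:
        if bias == NOUN and pos == NOUN:
            roles.append("subject" if not roles else "object")
            if len(roles) == 1:
                bias = VERB            # after the subject, expect a verb
        elif bias == VERB and pos == VERB:
            roles.append("verb")
            bias = NOUN                # after the verb, expect objects
    return roles

print(parse_with_bias([5, 8, 5, 5]))
# ['subject', 'verb', 'object', 'object']
```

A sentence like "I make the boy a robot" delivers the tag sequence noun-verb-noun-noun, and the sketch shows why the post-verb bias of "5" alone cannot tell the indirect object "boy" from the direct object "robot"; that is the extra discrimination the consolidated EnParser() must supply.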

We start a few versions back by renaming ghost221.pl as ghost225.pl so that we may skip some unstable intervening code. Into the Parser() module we drop the EnParser() code dealing with English prepositions and with indirect and direct objects. Then in the InStantiate() module we comment out the now obsolete call to EnParser. The new composite code of ghost225.pl does not properly register the indirect object of "BOY" in "I make the boy a robot."

Although in ghost218.pl we switched names between Parser() and EnParser(), now we will reverse the switch because we no longer want there to be simply a Parser() module, but instead for there to be both EnParser() for English and RuParser() for Russian. We need the separate modules for English and for Russian, because, for instance, English has to deal with "DO" as an auxiliary verb, but Russian does not deal with an auxiliary verb "DO". First from ghost217.pl we pick up the old RuParser() module and drop it into the ghost225.pl AI. In OldConcept() and in NewConcept() we make the necessary changes for calling EnParser() and the still simple RuParser().

We should now upload the ghost225.pl code to the Web for several reasons, before we debug the problem of failure to register an indirect object. Firstly, much code has been renamed and commented out. When we resume coding, we may clean up the new code by removing the old detritus. Secondly, it is vitally important to present the Ghost Perl AI as having the straightforward separation of EnParser() and InStantiate(). The consolidated parser functionality, which comprehends prepositions and both indirect and direct objects, holds the key to the Mentifex claim that "AI has been solved", inasmuch as the enhanced parser enables each AI Mind to demonstrate major progress against the problem of natural language understanding (NLU), which various published articles on the Web describe as an intractable problem and as a last main obstacle to True AI.

Sunday, September 10, 2017


Instantiating Imaginary Russian Be-Verbs in Perl

We are eager to implement the InFerence module in Russian, but first we must code the Russian way of leaving out verbs of being in making an Is-A statement, such as, "The brother is a student." We must examine the Dushka Russian AI from 2012-10-22 to see how it was done in JavaScript. Without the special Is-A code, right now we type in "Я студент" to say "I am a student," but the ghost.pl does not assign any associative tags between the subject "I" and the predicate nominative "student".

According to our Dushka coding journal of 2012-02-11 or 11.FEB.2012, Dushka uses the detection of a 32=SPACE character to impute provisionally the input of a be-verb. The Dushka InStantiate module checks for a SPACE when a verb is expected, and provisionally declares 800=BE as the verb. If a different verb does come in, apparently the AI leaves the spurious 800=BE engram in place but ignores it with respect to associative tagging. Into the ghost.pl we port in code from Dushka that cancels out the imputed be-verb.
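The imputation of a provisional be-verb can be sketched apart from the Dushka internals. In this Python sketch a hypothetical is_noun test stands in for the part-of-speech machinery, and "БЫТЬ" (the infinitive "to be") stands in for the 800=BE engram (an illustration of the idea, not the InStantiate() code):

```python
BE_VERB = "БЫТЬ"   # placeholder for the imputed, unspoken 800=BE engram

def impute_be_verb(words, is_noun):
    """Insert a provisional be-verb between subject and predicate
    nominative when no verb appears, as Russian omits 'is' in
    'Я СТУДЕНТ' ('I [am a] student'). is_noun is a hypothetical
    part-of-speech test; a sketch, not the ghost.pl InStantiate()."""
    out = [words[0]]
    expecting_verb = True              # after the subject, a verb is due
    for w in words[1:]:
        if expecting_verb:
            if is_noun(w):             # a noun arrived where a verb was due:
                out.append(BE_VERB)    # impute the unspoken be-verb
            expecting_verb = False
        out.append(w)
    return out

nouns = {"Я", "СТУДЕНТ", "РОБОТ"}
print(impute_be_verb(["Я", "СТУДЕНТ"], nouns.__contains__))
# ['Я', 'БЫТЬ', 'СТУДЕНТ']
```

The imputed be-verb gives the subject and the predicate nominative a verb to hang their associative tags on, while the Speech module must still suppress any actual output of the be-verb in Russian.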

Now when we type in the Russian for "I am a student," eventually the AI outputs erroneously "ТЫ БЫТЬ" which at least conveys the idea of "You to be...", but we want no actual be-verb to be expressed in Russian.

Finally in AudInput() we discover a line of code

if ($len == 0) { $rv = $t } # 2016feb29: set recall-vector.
which was causing the conceptual "$rv" for Russian words to be set too early by one time-point. Therefore Russian words after input could not be recalled properly. We corrected the line of code. Then when we entered "ТЫ СТУДЕНТ" the AI eventually made a clumsy output of "Я МЕНЯ БЫТБ СТУДЕНТ". Such output is actually encouraging, because we only need to make the AI find the correct form of the personal pronoun and not speak any form at all of the be-verb.

Thursday, August 31, 2017


Moving towards a Mind-in-a-Box

Recap: Since 2017-06-19 and the ghost209.pl Perlmind version, the ghost.pl artificial intelligence (AI) begins thinking initially in Russian and then in English after any non-Cyrillic input, so as to demonstrate to all users that the Russian mindset is built into the AI. Once a user with no Cyrillic keyboard causes the AI to switch its thinking away from Russian and into English, it is difficult to see any thinking in Russian again. Russian remains important for us in our development of Artificial General Intelligence (AGI) because we need both English and Russian to demonstrate our work on Machine Translation by Artificial Intelligence. We encourage Russian-language AI enthusiasts to experiment with the Russian-thinking ghost.pl and to develop the Perl AI codebase further along multiple branches of AI evolution. We have some evidence, from a recent perusal of the "Stats" (statistics) here on the Cyborg weblog, that the Russian-language AI community is waking up to the emergence of ghost.pl Strong AI. We see that about one third of our hundreds of weekly visitors are coming from Russia without a referring website. We venture to assume that news of an AI that thinks in Russian "out of the box" may have gone viral in Russian-speaking countries over the past ten weeks and may have caused the recent uptick in visits from Russia with love for Russian AI. Now we wish to improve the ghost.pl AI under a new rubric -- Mind-in-a-Box. Since we do not (yet :-) have a robot for enlarging the AI Mind into a sensorimotor being, our ghost.pl AI remains trapped inside a server or a host computer as a Mind trapped in a box, able to communicate with other minds and perhaps able to flit across the Web in metempsychosis but not yet able to go forth and multiply across the Earth as robotic beings.
Still, the idea of an AI Mind-in-a-Box, which we broached in the Neuroscience SubReddit on 2017-07-28, may appeal not only to Russian AI enthusiasts but also and with Pavlovian salivation to AI tinkerers in general. Let the Meme go viral that Mentifex invites any Perl shop to install an immortal, proto-conscious, polyglot AI within the motherboard confines of the humblest DOS-machine or the grandest supercomputer.

As we release the Mind-in-a-Box ghost.pl code, let us stub in the EnArticle() module for the English articles "a" and "the". After we enter one or more Roman characters to switch the AI from Russian thought to English thought, it is unsettling to see the Ghost AI assert, "I AM PERSON". The EnNounPhrase() module needs to call EnArticle() so that the boxmind may say whether it is "a person" or "the person". We place the module for English articles after the Speech() module, in accordance with the governing layout of the MindForth AI, because the location of a subroutine matters in Forth but not in Perl. We use the $unk variable to preserve the value of $aud during any call from EnNounPhrase() to EnArticle().

Friday, July 07, 2017


Cross-fertilizing ideas with another major AI project.

Yesterday we were able to do some cross-fertilization of ideas in the HTM Forum of Numenta, about which we wrote our only Slashdot story in 2005. Numenta is where serious AI enthusiasts are taking the laborious approach of reverse-engineering the neocortex of the human brain. Then Mentifex here swoops in and claims to have solved AI with a totally top-down approach to how the mind works. The Mentifex AI Minds are based on theoretical ideas about the macro properties of neurons, such as extending spatially and temporally over a putative MindGrid and having as many as ten thousand synapses with other neurons. The Mentifex Minds use neural inhibition briefly to dislodge topmost ideas in favor of other, ascendant ideas. Since Mentifex AI is concerned mainly with neuron-based concepts playing a role in thinking, we reverse-engineer neurons only enough to create AI software that can demonstrably think and reason in English, German and Russian. We hope to poach some great minds who think alike from the Numenta project. It could take a thousand years to reverse-engineer the neocortex, and Netizens who get tired of waiting for such a bottom-up approach are welcome to try out the ghost.pl top-down AI that runs in Strawberry Perl 5. Our goal is to release basic AI software with sufficient intellectual functionality that individuals and teams, even if working in secret, will latch on to our existing codebase, reverse-engineer it, and create from it even better AI Minds than we tenues grandia are capable of.

Monday, June 19, 2017


Ghost.pl is a Russian Strong AI that can also think in English.

Today in ghost209.pl we would like to rewrite the AudInput() module, but first we want to find out what causes the AI to switch from thinking in Russian to thinking in English. To our surprise, we find out that apparently during English thought, a Russian memory may become activated enough to rise up and switch the thinking from English into Russian.

It looks as though using "split" to break apart a conceptual engram into associative tags, including $hlc, is enough to change the human language code from Russian to English, or vice versa. Apparently the entry of an English word is not yet changing the $hlc to English, because in AudInput() there is a test for Russian Cyrillic characters but not for English characters. So in AudInput() we devise the following test for non-Russian, English characters:

if ($pho =~ /[a-z]/ || $pho =~ /[A-Z]/) { $hlc="en" }
It works! The code above means that if the incoming phoneme is either a lowercase or an uppercase letter of the English alphabet, then we set the human-language-code $hlc to "en" for English. And it works immediately. In the immortal words of the Watergate figure John Dean, who forty years later is back in the news, "What an exciting prospect!" Back then Mr. Dean was excited at the prospect of using the Internal Revenue Service (IRS) to go after the enemies of Richard Nixon. Now maybe he will get excited at what we can do with the ghost.pl Russian AI.

Remember, you read it first here on the Cyborg weblog. We have a chance now to do the following. What? The following of Deep Throat and other shady characters? No; the following of Cyrillic characters and Roman characters. Here is our plan, hatched in utmost glee and Russian (or is it French?) savoir faire. Since most American users of the ghost.pl artificial intelligence do not speak Russian and do not have their computer keyboard set up to type Russian letters into the AI Mind, they would not normally see the ability of the polyglot AI to think in Russian. Like they say on the Internet, "Pix or it did not happen." Well, our plan is to show everybody that the Perlmind can think in that exotic language of poets and world-class novelists: Russian. We will initially set the $hlc to Russian on every release or on alternating releases, so that users start out first seeing the Strong AI Mind thinking on and on in Russian, until somebody enters just one character of English. Most users will then not be able to bring the Russian thinking back, unless they press the ESCape-key to literally "kill" the Perl program and restart it with the Russian language showing. But by restarting the immortal AI Perlmind, said (sad) users lose their bragging rights to having one of the oldest living AI Minds.

The Ghost Perlmind may gradually become known as a Russian AI that just happens to think also in English, if you force it to switch to English by typing in English words instead of Russian. That's fine. It opens up the enormous community of skilled Russian programmers to work on open-source AI. When we were posting today in the Russian subReddit, we gave ourselves Искусственный Интеллектник as our "flair" meaning "AInik" in the tradition of "beatnik" or "refusenik".

Thursday, June 08, 2017


Adding $tru and $mtx to expanded and re-arranged @psy flag-panel.

Starting with ghost203.pl we want to implement our first new AI theory work in all the years we have been programming the AI based on the doubly original Theory of Mind -- original (meaning old) within our AI project, and original (meaning novel) outside of our AI project. We introduce a new $tru variable to hold dynamically the truth value of an idea as perceived by the conscious AI Mind. By default, ideas will tend to have a low or zero $tru value so that new code implementing the new theory may sparingly lend credence to ideas important only in the here and now, as the AI is forced to make decisions based on what it currently believes to be true. At the same time, we introduce a machine-translation $mtx transfer variable to let concepts being thought about in one language, such as English, cause the parallel activation of a similar concept in another natural language, such as German or Russian. With these new changes we are trying to create a ghost.pl software in Perl that SysAdmins and other persons may pass around from person to person, from computer to computer, and from website to website.

It would have been easy to simply add the new associative tags at the end of the pre-existing flag-panel for each concept in the Perl @psy array, but we seize the opportunity here not only to add two new elements to each row of the array, but also to re-arrange the order of the associative tags in the conceptual flag-panel so that the tutorial presentation makes more sense and is more easily readable as the following sequence of variables:

"$tru,$psi,$hlc,$act,$mtx, $jux,$pos,$dba,$num,$mfn, $pre,$iob,$seq,$tkb,$rv";
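The storage and retrieval of such a flag-panel row can be sketched as follows. This is an illustrative example only, not the actual ghost.pl code; the tag values shown (concept 701 with part-of-speech 7, etc.) are assumptions made up for the demonstration.

```perl
use strict;
use warnings;

# Hypothetical sketch of storing one row of the @psy conceptual array in the
# re-arranged flag-panel order; the values are illustrative, not taken from
# the actual MindBoot() sequence.
my @psy;
my $t = 2427;    # illustrative time-point
my ($tru,$psi,$hlc,$act,$mtx) = (0, 701, 'en', 48, 0);
my ($jux,$pos,$dba,$num,$mfn) = (0, 7, 1, 1, 0);
my ($pre,$iob,$seq,$tkb,$rv)  = (0, 0, 0, 0, $t);
$psy[$t] = "$tru,$psi,$hlc,$act,$mtx,$jux,$pos,$dba,$num,$mfn,$pre,$iob,$seq,$tkb,$rv";

# Splitting the engram recovers the associative tags in the same order:
my @flags = split /,/, $psy[$t];
print "pos=$flags[6] num=$flags[8]\n";   # part-of-speech and number tags
```

Keeping the whole panel as one comma-joined string means a single split suffices to recover all fifteen associative tags in the tutorial order.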

Wednesday, June 07, 2017


Ghost.pl uses parameters to think with the correct verb-form.

Today in ghost202.pl we work on selecting the correct form of be-verb for a personal pronoun such as "I" for the concept of self or ego. First, in the EnVerbPhrase() module, we need to determine which parameters are available from the chosen subject to help us select the correct verb-form. We already have $subjpsi available, but its 701=I value is not showing up as the $svo1 value. The $subjnum variable is not being set with the grammatical number of the subject, but we should be able to determine that singular number retroactively if the $subjpsi is 701=I. In the EnNounPhrase() module we insert code so that the selected concept becomes the value filling the $svo1 variable. Towards the end of EnThink(), we zero out the $svo1 to $svo4 values so that they will have been available during the calling of various modules of thought, but will be blank or empty when a new thought begins. Next we need to use the available parameters to steer the EnVerbPhrase() module into selecting the correct be-verb. We have success with "I AM NOT BOY" when we insert code into EnVerbPhrase() to trap for a 701=I $svo1 subject and set the $subjnum value to one. Then the parameters of verb, number and person select "AM" as the correct form of the verb "BE".
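The parameter-driven selection can be sketched roughly as below. Concept number 701=I comes from the journal entry above; the concept number 707 for "YOU" and the be_form() subroutine itself are assumptions made for illustration, not the actual EnVerbPhrase() logic.

```perl
use strict;
use warnings;

# Hedged sketch of be-verb selection from subject parameters. 701=I is from
# the blog; 707=YOU is a hypothetical concept number used only here.
sub be_form {
    my ($subjpsi, $subjnum) = @_;
    $subjnum = 1 if $subjpsi == 701;      # trap: 701=I implies singular number
    return 'AM'  if $subjpsi == 701;      # first person singular
    return 'ARE' if $subjpsi == 707;      # second person (hypothetical 707=YOU)
    return ($subjnum && $subjnum > 1) ? 'ARE' : 'IS';
}

print be_form(701, 0), "\n";   # the ego-concept as subject selects "AM"
```

The point of the sketch is that person and number, once retroactively determined from $subjpsi, are enough to pick the correct form of "BE".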

Monday, May 29, 2017


Implementing the negation of be-verbs without auxiliary verbs.

In the milestone ghost200.pl version we are trying to implement the negation of verbs of being, as found in the otherwise obsolete 24JUL14A.F version of the MindForth AI. Negation of be-verbs does not require an auxiliary form of the verb "DO", but does require different word order than for ordinary verbs. In Perl we declare the variable $tbev from MindForth. We keep testing the AI by typing in, "you are not a boy," and it eventually says, "I ARE BOY", because the negation of be-verbs is not yet working. Halfway there, we get the Ghost AI to output "I DO NOT ARE NOT BOY". Apparently we need to suppress the negation for normal verbs when we negate a verb of being. We do so, and the AI outputs "I ARE NOT BOY." The selection of number and person needs more work.

Soon we will upload the ghost200.pl AI as the commented version perlmind.txt and as simply ghost.txt with the comments stripped out. Although an AI Mind Maintainer will have access to the fully commented version, we may expect end users typically to host the uncommented "ghost.pl" on their machines.

Sunday, April 30, 2017


Improving the storage of the number-flag for nouns.

Today in ghost199.pl we will try to make the AI error-free even before we go back to adding in the functionality already present in some of our obsolete AI Minds. For instance, we have not yet coded the negation of verbs into our Perlmind source code. Consequently, if you tell the AI something like "You are not a boy", it fails to attach a negative juxtaposition $jux flag to the verb during comprehension of the input sentence. A few cycles of thought later, the AI may then assert "I AM A BOY" because it has been informed of the negated proposition without the ability to process the negation.

We debug the AI by letting ghost199.pl think on its own without human input. Eventually the Perlmind erroneously says "I AM ROBOTS", which is grammatically incorrect because of the plural noun. We intuit immediately that the AI is retrieving the most recent engram of the concept #571 "ROBOT" without insisting on a singular number. We inspect other recent thoughts of the AI and we see that it thinks "KIDS MAKE ROBOTS" but it stores the word "ROBOTS" as a singular noun. We must look and see if the InStantiate() mind-module has a proper $num flag for storing "ROBOTS" correctly as a plural noun. We see that the OldConcept() module looks up the stored num(ber) of a found engram and tentatively assigns the same value to the $num flag, but there really needs to be an override if a different value is needed.

In the otherwise obsolete but still rather advanced 24jul14A.F version of MindForth, some AudInput code checks for an "S" at the end of an input noun as a reason to assign plural number to the noun. Let us try to implement the same test in the Perl AI. First we test for the presence of an 83=S, but we must also make sure that the "S" is the final character of a noun. First in OldConcept() we comment out the line of code that was transferring the found num(ber) of a noun to be the same number for a new instance of the noun, regardless of the presence or absence of a terminating "S". Then we notice that "ROBOTS" stops being stored as singular, and becomes plural. We create a variable $endpho to hold onto each previous character in AudInput() to test if a word ends in 83=S. Thus we are able to store a plural number if a noun ends in "S".
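The terminal-"S" test just described can be sketched like this. The loop and variable usage are illustrative only, not the verbatim AudInput() code; $pho is the current character and $endpho holds the previous one.

```perl
use strict;
use warnings;

# Sketch of the terminal-"S" plurality test: $endpho remembers the previous
# character so that, when a word-ending SPACE-32 or CR-13 arrives, we can
# ask whether the word just ended in 83=S.
my $endpho = 0;
my $num    = 1;                            # default to singular number
for my $pho (map { ord } split //, "ROBOTS ") {
    if ($pho == 32 || $pho == 13) {        # SPACE or carriage-return ends word
        $num = 2 if $endpho == 83;         # previous character was 83=S: plural
    }
    $endpho = $pho;
}
print "num=$num\n";
```

Holding the previous character in $endpho is what lets the test fire only at the true end of the word, rather than on any "S" inside the word.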

Sunday, April 23, 2017


Stubbing in MindMeld() and stopping derailment of thought.

We function now as an AI Mind Maintainer debugging the Perlmind free AI source code. In the ghost198.pl AI we first stub in the audacious MindMeld() module to nudge AI practitioners into devising a way for two AI Minds to share their dreams. Then we deal with some problems pointed out on Usenet by persons who have downloaded the Perlmind and evaluated its functionality.

We run ghost198.pl with "dogs are mammals" as input and we press the Escape-key to halt the AI after its first response, "I HELP KIDS". We notice immediately three problems with how the word "DOGS" is stored in the @psy and @ear memory arrays. For some reason, "DOGS" is being assigned new-concept #3002, even though the Tutorial display of diagnostic messages indicates that the AI is preparing to assign new-concept #3001 to the first new concept. We check the MindBoot() sequence to make sure that "DOG" is not already a known concept in the AI; it is not. Now let us inspect the source code to see where the new-concept number $nxt is incremented from 3001 to 3002. We see that the end of MindBoot() clearly assigns the number 3001 as the value of the $nxt variable. Now let us search for the $nxt++ increment. It is happening towards the end of the NewConcept() module. We immediately wonder if $nxt is being incremented before AudMem() stores the concept-number. We insert into AudMem() a diagnostic message to let us know the $nxt value before storage. The first diagnostic message does not tell us enough, so we insert a second diagnostic into the AudMem() module. It also does not help us.

In the AudInput() module we use some diagnostic messages to learn that the "S" in "DOGS" is first being stored with the correct $nxt value of "3001" and then a second time with the incorrect value of "3002". Perhaps we should increment $nxt not in NewConcept() but in AudInput(). We move the $nxt++ increment from NewConcept() into AudInput(), and we stop getting the wrong values of the $nxt variable.

A second problem is that the concept of "DOGS" is being stored with a zero instead of "2" for "plural" in the $num slot of the @psy conceptual flag-panel. The most recent incarnation of the InStantiate() module does not seem to address the $num value sufficiently, so let us inspect recent or older MindForth code. We discover that the obsolete 24jul14A.F version of MindForth uses some complex tricks to assign the num(ber) of a concept being stored, so we will put aside this problem to deal with more serious issues.

The third and presumably more serious problem is that the input word "DOGS" is being stored with the $nxt concept number "3001" only on the "S" phoneme and not on the "G" at the end of the word-stem "DOG". Let us leave that problem also aside for a while, because entering "dogs are mammals" repeatedly is running into more serious problems. For instance, all three words of the input are being stored erroneously with the same $rv recall-vector, which can cause the wrong auditory memories to be retrieved. Let us see if the previous ghost197.pl exhibits the same error. Yes, and so does the ghost196.pl AI. However, we should not find it difficult to correct the $rv problem. We fix the problem by resetting $rv to zero at the end of the InStantiate() module. Now the Perlmind no longer goes off the rails of thought, and so we upload it to the Web.

Wednesday, April 12, 2017


Ghost Perl Strong AI cycles through Normal; Transcript; Tutorial; Diagnostic Mode

It is time now in ghost197.pl to show a clean human-computer interface (HCI) and to stop displaying masses of diagnostic messages. Accordingly in the AudInput module we change the user-prompt to say "Tab cycles mode; Esc(ape) quits AI born [date-of-birth]". We insert if-clauses to declare which user input mode is in effect: Normal; Transcript; Tutorial; or Diagnostic. Near the start of ghost197.pl we set the $fyi to a default starting value of unity ("1") so that the human user or Mind-maintainer may press the Tab-key to cycle among user input modes. In AudInput() we insert code to increment $fyi by one point with each press of the Tab-key and to cycle back to unity ("1") for Normal Mode if the user in Diagnostic Mode presses Tab again.
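The Tab-key cycling of the $fyi mode can be sketched as follows. The key-handling subroutine is illustrative, not the verbatim AudInput() code; only the mode numbering (1=Normal, 2=Transcript, 3=Tutorial, 4=Diagnostic) comes from the entry above.

```perl
use strict;
use warnings;

# Minimal sketch of cycling the $fyi user-interface mode with the Tab-key
# (ASCII 9): 1=Normal, 2=Transcript, 3=Tutorial, 4=Diagnostic.
my $fyi = 1;                     # default starting mode is Normal
sub cycle_mode {
    my ($key) = @_;
    if ($key == 9) {             # Tab-key increments the mode...
        $fyi++;
        $fyi = 1 if $fyi > 4;    # ...and Diagnostic wraps back to Normal
    }
    return $fyi;
}
cycle_mode(9) for 1 .. 4;        # four presses of Tab
print "fyi=$fyi\n";              # cycled all the way around to Normal
```

Any key other than Tab leaves the mode untouched, so the user can cycle at leisure without disturbing normal input.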

In the MainLoop module we change a line of code to test for $fyi being above a value of two ("2") and, if so, to display the contents of the @psy conceptual array and of the @ear auditory memory array. Thus the user in #3 Tutorial Mode or in #4 Diagnostic Mode will see the storage of current input and current output in the memory arrays. We consider the display of conceptual memory data in Tutorial Mode to be an extremely powerful tool for teaching how the artificial general intelligence (AGI) works. After any input, the user may see immediately how the input goes into memory and how the values in the flag-panel of each row of the @psy array represent the associative tags from concept to concept and from engram to engram.

Next we start commenting out or deleting the display of various diagnostic messages. Over time and over multiple releases of the Ghost AI source code, any AI coder may decide which messages to display in both Tutorial and Diagnostic Modes, or in only one of them. Although we comment out a message involving Russian input, we do not delete the diagnostic message because we may need it when we turn back on Russian as an input language. Russian has become much more important in our Ghost Perl AI because we need Russian or German to demonstrate Machine Translation by Artificial Intelligence. When we have commented out most of the diagnostic messages, we need to put back in some code to show what the user is entering.

Tuesday, April 11, 2017


Stubbing in the MetEmPsychosis module.

[2017-04-10] Today in ghost195.pl we stub in MetEmPsychosis() as an area for Perl code that will enable an AI Perlmind to either move itself across the Web or replicate itself across the Web. We foresee the advent of a kind of "AiBnb" or community of Web domains that invite and encourage AI Minds to take up temporary or long-term residence, with local embodiment in a robot and with opportunities for local employment as a specialized AI speaking the local language and familiar with the local history and customs.

[2017-04-10] In the AudInput() module today we insert the Cyrillic characters of the Russian alphabet for each line of code that converts lower case to upper case and sets the $hlc variable to "ru" as the human-language-code for Russian. We have not yet turned the Russian language back on again, but we will need it to test out our ideas for Machine Translation by Artificial Intelligence.

Coding VisRecog to say by default: I SEE NOTHING.

[2017-04-11] Today in ghost196.pl we would like to port in from MindForth the code that causes any statement of what the AI is seeing to default to the direct object "NOTHING," so that Perl coders and roboticists may work on integrating computer vision with the AI Mind. We make it clear that the visual recognition (VisRecog) system needs only to supply the English or Russian name of what it is seeing, and the AI will fill the slot for direct objects while generating a sentence about what the AI sees. The VisRecog mechanism does not need to be coded in Perl or in Forth. It only needs to communicate to the Perlmind a noun that names what the AI is seeing. When the generated statement passes through reentry back into the Mind, even a new noun will be assigned a concept-number and will enter into the knowledge-base (KB) of the AI.

First we declare the subject-verb-object variables $svo1, $svo2, $svo3, and $svo4 to hold a value that identifies a concept playing the role of subject, or verb, or indirect object, or direct object in a typical sentence being generated by the AI. If there is no direct object filling the slot for the object of the verb "SEE", then the VisRecog() module must try to fill the empty slot. Until a Perl expert fleshes out the VisRecog() code, the word "NOTHING" must remain the default object of the verb "SEE" when the ego-concept of "I" is the subject of the verb. We ran the AI and we typed in "you see kids." After a spate of outputs, the AI said, "I SEE KIDS," but we would really prefer for the AI to say, "I SEE NOTHING" as a default.

After coding a primitive VisRecog() module, next we go into the part of the EnVerbPhrase() module where it is looking for a direct object. We set conditions so that if the subject is "I" and the verb is "SEE", VisRecog() is called to say "NOTHING" as a direct object, and EnVerbPhrase() stops short of saying any other direct object by doing a "return" to the calling module. We now have a Perlmind that invites the integration of a seeing camera with the AI software.
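The default behavior described above can be sketched in a few lines. The $seenoun variable is an assumption introduced for illustration; this is not the actual ghost196.pl VisRecog() code.

```perl
use strict;
use warnings;

# Hedged sketch of the VisRecog() default: until a camera and vision
# software supply a noun, the direct object of "SEE" stays "NOTHING".
my $seenoun = "";                # a robotic vision system would fill this slot
sub VisRecog {
    return $seenoun ne "" ? $seenoun : "NOTHING";
}
print "I SEE ", VisRecog(), "\n";
```

A roboticist integrating computer vision need only assign a recognized noun to the slot; the sentence-generation machinery does the rest.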

Saturday, April 08, 2017


Retroactively setting associative $seq tags for direct objects of verbs.

In the ghost194.pl AI, we have a problem where the direct-object $seq of a verb is indeed properly assigned for human user input, but not for reentrant ideas being summoned from experiential memory. Because the $seq is not yet known when a verb comes in, the $seq value must be assigned retroactively when the direct object of the verb comes in. The fact that the process works for human input but not for a reentrant idea suggests that the cause of the problem could simply be that the value of some pertinent variable is not being reset as needed.

This problem of the retroactive assignment of the associative $seq tag for a verb is difficult to debug. It may involve making the reentry routine equal to the human-input routine, or it may involve porting into Perl some special code from the 24jul14A.F version of MindForth. We have meanwhile been offering in the computer-science compsci subReddit a suggestion that students in need of an undergraduate research project might look into the Ghost AI software coded in Strawberry Perl 5 as an opportunity to select a mind-module to work on. We feel some urgency to debug our code and get it working as well as possible when we are inviting undergraduate students and graduate students and professors to take over and maintain their own branch of the AI Mind development. There is a steep learning curve to be surmounted before participants in such an artificial general intelligence (AGI) project may move forward in AI evolution. So now we go back to the problem of debugging the retroactive assignment of $seq subsequent-concept tags.

We search our ghost194.pl source code for "$psy[" as any instance where a $seq is being inserted either currently or retroactively into a flag-panel row of the @psy conceptual array. We discover that a $verbcon flag for seeking direct or indirect objects is governing the storage of the $seq tag in the Parser() module. Immediately we suspect that the $verbcon flag is perhaps being set during actual human user input but not during the reentry of an idea retrieved from memory. We check and we see that $verbcon is set to unity ("1") in the Parser() module when the part-of-speech $pos variable is at a value of "8" for a verb. The $pos value is set in the OldConcept() module when a known verb is recognized.

We insert a diagnostic message about the direct object in the Parser() module, and the message shows up during human user input, but not during reentry. Apparently the Parser() module is not even being called during reentry. No, it is being called, but the $verbcon flag is not being set properly during reentry. When we comment out the reset of $verbcon at the end of the AudInput() module and we move the reset to the Sensorium() module, we start seeing the assignment of direct-object $seq tags during the reentry of ideas recalled from memory. However, in a later session we must deal with the new problem that improper direct-object $seq flags are being set for personal pronouns during human user input. No, we debug the problem now, simply by resetting time-of-verb $tvb at the start of the EnThink() module, to prevent an output-sentence from adjusting associative tags for a previous sentence with a previous time-of-verb. The AI becomes able to receive "i know you" as input and then somewhat later say "YOU KNOW ME."

Friday, April 07, 2017


Wrong solution to a bug briefly ruins word-recognition.

[2017-04-06] Let us run the ghost192.pl AI without input and try to fix the first thing that goes wrong with it. After a series of sensible outputs, at t=2562 the AI suddenly says "HELP I" without a subject for the verb. As we investigate, we see that EnNounPhrase is trying to activate a subject at t=2427, but the pronoun "I" is stored at t=2426 with an erroneous recall-vector "rv" of t=2427. The error in auditory storage causes the AI at a later moment not to find the auditory engram.

[2017-04-06] We notice that MindForth sets tult in the AudInput module, while the Perlmind is setting $tult in both the InStantiate module and the AudInput module. However, it does not seem to matter where $tult is set. We eventually notice that some MindForth code ported into AudInput() was letting the $rv recall-vector be set erroneously not only for an alphabetic character, but also for a CR-13 carriage-return or a SPACE-32. When we restricted the $rv setting to alphabetic characters, our current bug was fixed, and the AI no longer said "HELP I".

Letting $rv be set only once per word correctly solves a bug.

[2017-04-07] Yesterday in ghost192.pl our attempt at solving a recall-vector $rv bug made the AI unable to recognize reentrant words. Now in ghost193.pl we would like to isolate $rv so that its value can be set only once in each cycle of recognizing a word. When we do so, we obtain the proper $rv value for the first word stored by the AI, but it remains the same value for all subsequent words being stored. We must determine where to reset $rv to zero. We try resetting $rv to zero at the start of the Speech() module, as MindForth does. Immediately we see fresh values of $rv being stored for each reentrant word. We let the AI run on at length, and it no longer says "HELP I" without a subject for the verb. Then we start the AI with an input of "you know me" and somewhat later the AI remembers the self-referential knowledge and it outputs, "I KNOW YOU". Thus we have made a major improvement to the AI functionality by fixing the $rv bug. There remain grammatical issues, probably based on software bugs.
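The recall-vector discipline arrived at above can be sketched as follows. The loop, the @rv_log array, and the time values are all illustrative assumptions, not the verbatim ghost193.pl code.

```perl
use strict;
use warnings;

# Sketch of setting $rv only once per word: it is captured at the first
# alphabetic character of a word and reset to zero before the next word,
# so each word gets a fresh, correct recall-vector.
my $rv = 0;
my @rv_log;                                    # illustrative record of $rv per word
my $t = 100;                                   # illustrative time counter
for my $ch (split //, "YOU KNOW ME ") {
    if ($ch =~ /[A-Za-z]/) {
        $rv = $t if $rv == 0;                  # set only once per word
    } elsif ($ch eq ' ') {
        push @rv_log, $rv;                     # the word is stored with its $rv
        $rv = 0;                               # reset for the next word
    }
    $t++;
}
print "@rv_log\n";
```

Each word thus points back to the time-point of its own first character, which is exactly what auditory recall needs.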

Wednesday, April 05, 2017


Perl Strong AI pauses briefly for human input.

[2017-04-02] Today in the ghost190.pl Perl AI we want to solve the problem of getting the AI to pause reliably and wait for human user input. The code for a pause-loop is already in the free AI source code, but the program keeps slipping out of the receptive point-of-view ("POV") status. Some diagnostic messages confirm our sneaky suspicion that maybe program-flow leaves the main AudInput loop without setting the loop-counter back to zero.

Preventing AudInput from causing unwarranted conceptual storage.

[2017-04-05] Coding the ghost191.pl AI today, we need to differentiate among Normal; Transcript; Tutorial; and Diagnostic modes for the human-computer interaction (HCI). In the AudRecog module, we insert a test for the $fyi variable to hold a value of, say, "4" to indicate Diagnostic Mode and to display the most informative diagnostic message during the AudRecog operation. Then the AI coder or mind-tender may either be satisfied with the deeply informative message or may insert additional diagnostic messages in pursuit of bugs.

[2017-04-05] In the ghost191.pl code we are tracking down a bug which causes the unwarranted storage of a redundant row of a conceptual flag-panel in the @psy conceptual array. Apparently, after the storage of the last word in an output, InStantiate() is being called one final, extra time. We remove the bug by inserting into the AudInput module a line of code which zeroes out the $audrec value for any word of zero length just before AudInput calls AudMem. In that way, a final CR-13 carriage-return may transit from the Speech module through the AudInput module without causing the storage of an unwarranted extra row in the @psy conceptual array.

Saturday, April 01, 2017


Encouraging AI immortality by reminding users how long AI has been alive.

[2017-03-30] As we code the Perlmind running in Strawberry Perl 5, today we insert code to have the AI announce when it was born, so as to encourage AI enthusiasts to see how long they can keep the Ghost Perl AI alive and running.

[2017-03-30] Now we are trying to clean up the ghost187.pl code. In MindForth, the AudInput module handles both normal input from human users and the reentry of output from the speech module. During human input, MindForth AudInput calls the AudListen module. Otherwise, AudInput handles internal reentry.

Improving the storage of words in @ear auditory memory.

[2017-03-31] In ghost188.pl we are trying to fix a problem where the display of the AudInput pause-counter is not showing up when the AI Mind is thinking on its own. First, though, we analyze everything that is happening in the AudMem() module. In one instance, after the AI recalls the idea "You are magic", AudMem at first stores the "Y" in "you" and then writes over it with the storage of a blank character. In fact, AudMem is failing to store the first character in each word of an output idea. When we remove from AudInput() an obsolete duplicate call to AudMem(), the ghost188.pl AI starts storing the complete word of each remembered idea, but the proper $audpsi tags are not being assigned in the @aud auditory memory array.

Restoring the ability of Ghost Perl AI to recognize words.

[2017-04-01] In ghost189.pl we need to ferret out deeply hidden problems, so we have uncommented several diagnostic messages in the AudRecog module. We first learn that the first character of a reentrant word is falsely being declared to have a zero $len for word-length. At the same time, an ASCII CR-13 is being declared inside each AudInput loop.

[2017-04-01] Now we learn that $len is somehow being doubly incremented. We need to find $len++ somewhere and comment it out. We did so in the lower area of AudInput() and then the diagnostic messages no longer showed double lengthening, but still the reentrant words are not being recognized. Apparently AudMem() is not sending a blank space into AudRecog() to announce the end of a word. Apparently it is not the job of AudMem() to generate the blank space, but merely to pass it along into AudRecog(). Perusal of the agi00037.F MindForth code reveals to us that it is the job of the Speech() module to send one last space into AudInput. The generation modules do not attach a SPACE-32 to a word, but rather each word in the @ear auditory memory is followed by a SPACE-32 in storage. The Speech module finds the space character after each word and sends it along into the AudInput module. Somewhere we need to increment $len by one when the post-word SPACE-32 goes from AudMem() into AudRecog().
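The word-boundary convention described above can be sketched as follows. The arrays and variables here are illustrative only, not the actual AudMem() or AudRecog() code.

```perl
use strict;
use warnings;

# Sketch of the convention that each word in auditory memory is followed
# by a SPACE-32 in storage, and that the trailing space must be passed
# onward so the recognizer can detect the end of the word.
my @ear = map { ord } split //, "CAT ";   # engram plus its stored SPACE-32
my $len      = 0;
my $word_end = 0;
for my $pho (@ear) {
    if ($pho == 32) { $word_end = 1 }     # the post-word space announces the end
    else            { $len++ }            # $len counts only the letters
}
print "len=$len end=$word_end\n";
```

If the trailing space were swallowed instead of passed along, the recognizer would never see the signal that a complete word has arrived, which is precisely the failure mode being debugged here.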

[2017-04-01] The ghost189.pl AI suddenly started recognizing words when we commented out several unwarranted calls to the AudDamp() module, which must have been interfering in auditory recognitions.

Wednesday, March 29, 2017


Improving the Ghost Perl AI Human-Computer Interface

The Ghost Perl AI has recently become able to pause its thinking long enough to accept keyboard input from a human user, and to stop waiting when the user presses the Enter-key, when there is no input at all, or when the user fails to enter the carriage-return. Now we need to discontinue the pause more quickly when there is no activity from the keyboard, so we will try creating a $gapcon variable to be incremented with each loop that expects but does not receive an input character, and to be reset to zero when there is indeed an input character. It works.
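A minimal sketch of the $gapcon idea, with a simulated keyboard and an invented $gaplimit threshold (the real ghost186.pl values may differ):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: count consecutive empty polls of the keyboard and abandon
# the pause once the gap grows too long; any real character resets it.
my $gapcon   = 0;
my $gaplimit = 3;                            # illustrative threshold
my @polls    = ('', '', 'A', '', '', '', 'B');  # simulated keyboard polls
my @accepted;
for my $key (@polls) {
    if ($key eq '') {
        $gapcon++;
        last if $gapcon > $gaplimit;         # no activity: end the pause
    } else {
        $gapcon = 0;                         # activity resets the gap counter
        push @accepted, $key;
    }
}
print "accepted: @accepted\n";
```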

A minor problem in ghost186.pl is that the MainLoop is displaying the contents of the @psy and @aud memory arrays without a gap-line between input and output. The problem seems to lie with the setting of the time $t value. No, the solution was to set the $krt value in the Sensorium() module after the call to AudInput(), so that the MainLoop can separately display memory data before input and then after input, separated by a blank line.

Next in ghost186.pl we tackle the problem where the input diagnostic display was no longer showing each input character prominently left-justified down the edge of the MS-DOS window. We simply moved some old code from an obsolete area up into the currently operative AudInput code. Doing so not only gave us the left-justified display, but we also saw immediately that the $len value is not being reset to zero after each word of input, which prevents words beyond the very first word from being recognized. We track down and fix the $len problem.
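The $len reset fix can be illustrated in a few lines. This is a sketch of the principle, not the ghost186.pl code itself:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: the running word-length must return to zero at each word
# boundary, or every later word inherits a bogus length and fails
# to be recognized.
my @stream = split //, "YOU SEE ";
my $len = 0;
my @lengths;
for my $pho (@stream) {
    if ($pho eq ' ') {
        push @lengths, $len;
        $len = 0;            # the fix: reset $len after each word
    } else {
        $len++;
    }
}
print "word lengths: @lengths\n";
```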

Monday, March 27, 2017


Perl Mind Programming Journal

[2017-03-26] The main problem with the ghost184.pl Perlmind is that it does not supply an automatic carriage-return CR-13 if the human user neglects to press the Enter-key. This defect prevents the AI from going back to its own chains of thought when a human user has begun but not completed a message of input.

[2017-03-26] We may be able to fix the problem by supplying a CR-13 carriage-return when the input loop of the AudInput module is making its last loop. We try it, and it seems to work, but we find that ReEntry() does not work after an incomplete human input is supplied with a CR-13 in the AudInput module. We deploy a diagnostic message or two and we learn that the $len variable is not at zero when ReEntry() is called, thus perhaps interfering with the proper function of AudInput. Let us try setting the $len variable to zero at the start of the ReEntry module.
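The default carriage-return idea can be sketched as follows. The subroutine name here is invented for illustration; the actual ghost184.pl logic lives inside the AudInput loop:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: if the human user types a word but never presses Enter,
# append a CR-13 so the input is treated as complete and the AI
# can return to its own chains of thought.
sub finish_input {                 # hypothetical helper name
    my (@chars) = @_;
    push @chars, chr(13) unless @chars && ord($chars[-1]) == 13;
    return @chars;
}
my @buf = finish_input(split //, "HELLO");
print "last char code: ", ord($buf[-1]), "\n";
```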

Ghost 185.pl Strong AI enters each input character into memory.

[2017-03-27] In the Strong AI ghost185.pl it is time to change from the re-entry of complete sentences back into the Mind to a more immediate re-entry of each phoneme (character) back into the Mind. In AudInput(), when we start sending each input character directly into AudMem(), nothing gets recognized, perhaps because we need to change the characters to uppercase. We also start incrementing time "t" before AudInput() calls AudMem(), and we start getting auditory recognition of an input word. When we preserve the inner loop of AudInput but we comment out the outer loop for whole words, we start getting a display of the storage of input in memory.
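The uppercase normalization mentioned above is a one-line idea in Perl. A minimal sketch, assuming (as the MindBoot sequence suggests) that engrams are stored in uppercase:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: normalize each input character to uppercase before it is
# passed into AudMem(), so that lowercase human input can match the
# uppercase auditory engrams.
my @input  = split //, "you see dogs";
my @stored = map { uc } @input;      # uc() leaves spaces untouched
my $word   = join '', @stored;
print "$word\n";
```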

Next we need to implement the switch-over from storing input to generating output. It appears that we are not getting memory-storage of output because time "t" is not being incremented. We also need to turn a nested AudInput() loop into just a one-time sequence. Then in Speech() we set $pho before we call AudInput() for the reentry of Speech() output. Suddenly we begin seeing both the input and the output as stored in conceptual and auditory memory. We tweak ghost185.pl a little and we upload it.

Saturday, March 25, 2017


Ghost183.pl pauses for human input and then continues thinking.

There is a chance that we will attain the Technological Singularity today in our coding of the Ghost Perl Webserver Strong AI, but then we will have to figure out how to blame somebody else for it. Meanwhile we start with a mundane problem. Persons who download Forth and run MindForth see a quivering prompt that invites input from the human user. More importantly, the dynamic, quivering prompt conveys the sense that something is alive and sentient in the AI Forthmind. We need the same user experience in the Ghost Perl AI.

The jittery prompt is achieved in the MindForth AudListen module by having it issue an ASCII SPACE-32 and BACK-SPACE-8 over and over again. When we try to achieve a similar human-computer-interface (HCI) in the Perlmind, it is confusing at first because we seem to be altering multiple lines of the screen simultaneously. Actually we are seeing the AudListen loop re-drawing the screen instantaneously.
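The SPACE-32/BACKSPACE-8 trick translates directly into Perl, provided STDOUT is unbuffered. A sketch (the real AudListen loop runs while polling the keyboard, not for a fixed count):

```perl
#!/usr/bin/perl
use strict;
use warnings;
$| = 1;   # unbuffer STDOUT so each character appears immediately

# Sketch: emit SPACE-32 then BACKSPACE-8 over and over so the cursor
# appears to quiver in place, conveying that the AI is alive and listening.
my $cycles = 0;
for (1 .. 5) {                    # a few cycles for illustration
    print chr(32), chr(8);
    $cycles++;
    # select(undef, undef, undef, 0.1);  # optional brief delay per cycle
}
print "\n";
```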

Let us shift our attention to the problem of how to insert a default CR carriage-return if there is no human input from AudListen.

From page 354 of the Perl Black Book we have just learned how to use "ord" to deal with ASCII values as we are so accustomed to doing in Forth.

Since we are finding it difficult to detect a "CR" carriage-return in AudListen with ReadKey, we may just let both the AudInput loop and the AudListen loop run their course, with a really long AudInput loop as a way of presenting the human user with an apparent pause in the thinking of the AI.
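The ord() test for a carriage return looks like this in Perl. Here the key is simulated rather than read with Term::ReadKey, so the sketch stays self-contained:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: test a character against an ASCII code with ord(), as one
# would test a key obtained via Term::ReadKey; the key is simulated here.
my $key   = "\r";                       # pretend ReadKey returned a CR
my $is_cr = (ord($key) == 13) ? 1 : 0;  # CR-13 detection
print "carriage return? $is_cr\n";
```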

Friday, March 24, 2017


Ghost Perl AI uses the AudListen() mind-module to detect keyboard input.

Yesterday we may have finally learned how to let the Ghost Perl AI think indefinitely without stopping to wait for a human user to press "Enter" after typing a message to the AI Mind. We want the Perlmind only to pause periodically in case the human attendant wishes to communicate with the AI. Even if a human types a message and fails to press the Enter-key, we want the Perl AI to register a CR (carriage-return) by default and to follow chains of thought internally, with or without outside influence from a human user.

Accordingly today we create the AudListen() module in between the auditory memory modules and the AudInput() module. We move the new input code from AudInput() into AudListen(), but the code does not accept any input, so we remove the current code and store it in an archival test-file. Then we insert some obsolete but working code into AudListen(). We start getting primitive input like we did yesterday in the ghost181.pl program. Then we start moving in required functionality from the MindForth AI, such as the ability to press the "Escape" key to stop the program.

Eventually we obtain the proper recognition and storage of input words in auditory memory, but the ghost182.pl AI is not switching over to thinking. Instead, it is trying to process more input. Probably no escape is being made from the AudInput() loop that calls the AudListen() module. We implement an escape from the AudInput() module.
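The escape from the input loop can be sketched with a simulated key stream. The keys and codes here are illustrative; ghost182.pl also honors the "Escape" key (ASCII 27) to stop the program entirely:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: break out of the input loop when a CR-13 (or ESC-27) arrives,
# so that control returns from listening to thinking instead of
# waiting forever for more input.
my @keys = ('H', 'I', chr(13), 'X');   # 'X' should never be consumed
my @taken;
for my $key (@keys) {
    last if ord($key) == 13 || ord($key) == 27;
    push @taken, $key;
}
print scalar(@taken), " keys taken before escape\n";
```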

The ghost182.pl program is now able to take in a sentence of input and generate a sentence of output, so we will upload it to the Web. We still need to port from MindForth the code that only pauses to accept human input and then goes back to the thinking of the AI.

Tuesday, March 21, 2017


Machine Translation by Artificial Intelligence

As an independent scholar in polyglot artificial intelligence, I have just today on March 21, 2017, stumbled upon a possible algorithm for implementing machine translation (MT) in my bilingual Perlmind and MindForth programs. My Ghost Perl AI thinks heretofore in either English or Russian, but not in both languages interchangeably. Likewise my Forth AI MindForth thinks in English, while its Teutonic version Wotan thinks in German.

Today, like Archimedes crying "Eureka" in the bathtub (though I was showering and displacing no bath-water), I realized that I could add an associative tag mtx to the flag-panel of each conceptual memory engram, linking any concept in one language to its counterpart concept in another language. The mtx variable stands for "machine-translation xfer (transfer)". The AI software will use the spreading-activation SpreadAct module to transfer activation from a concept in English to the same concept in Russian or German.

Assuming that an AI Mind can think fluently in two languages, with a large vocabulary in both languages, the nub of machine translation will be the simultaneous activation of semantically the same set of concepts in both languages. Thus the consideration of an idea expressed in English will transfer the conceptual activation to a target language such as Russian. The generation modules will then generate a translation of the English idea into a Russian idea.

Inflectional endings will not pass from the source language directly to the target language, because the mtx tag identifies only the basic psi concept in both languages. The generation modules of the target language will assign the proper inflections as required by the linguistic parameters governing each sentence being translated.
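The proposed mtx transfer can be sketched in data-structure form. The concept numbers, hash layout, and subroutine name below are all invented for illustration; only the idea of an mtx tag carrying the counterpart concept number comes from the entry above:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch of the mtx (machine-translation transfer) tag:
# each concept row carries the concept number of its counterpart in the
# other language, so spreading activation can cross the language boundary.
my %en = ( 820  => { word => 'SEE',   mtx => 1820, act => 0 } );
my %ru = ( 1820 => { word => 'VIDET', mtx => 820,  act => 0 } );

sub spread_act_mtx {                  # invented name for the transfer step
    my ($concept, $target) = @_;
    my $xfer = $concept->{mtx};       # concept number in the other language
    $target->{$xfer}{act} += 10 if exists $target->{$xfer};
}

$en{820}{act} = 10;                   # the English concept is active in thought
spread_act_mtx($en{820}, \%ru);       # activation crosses to the Russian concept
print "Russian concept activation: $ru{1820}{act}\n";
```

The generation modules of the target language would then find the freshly activated Russian concept and express it with proper Russian inflections, as the entry above describes.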

Thursday, March 16, 2017


2017-03-15: Porting AudRecog and AudMem from Forth into Perl

We start today by taking the 336,435 bytes of ghost176.pl from 2017-03-14 and renaming it as ghost177.pl in a text editor. Then in the Windows XP MS-DOS prompt we run the agi00045.F MindForth program of 166,584 bytes from 2016-09-18 in order to see a Win32Forth window with diagnostic messages and a display of "you see dogs" as input and "I SEE NOTHING" as a default output. From a NeoCities upload directory we put the agi00045.F source code up on the screen in a text editor so that we may use the Forth code to guide us in debugging the Perl Strong AI code.

Although in our previous PMPJ entry from yesterday we recorded our steps in trying to get the Perl AudRecog mind-module to work as flawlessly as the Forth AudRecog, today we will abandon the old Perl AudRecog by changing its name and we will create a new Perl AudRecog from scratch just as we did with the Forth AudRecog in 2016 when we were unable to tweak the old Forth AudRecog into a properly working version. So we stub in a new Perl AudRecog() and we comment out the old version by dint of renaming it "OldAudRecog()". Then we run "perl ghost177.pl" and the AI still runs but it treats every word of both input and output as a new concept, because the new AudRecog is not yet recognizing any English words.

Next we start porting the actual Forth AudRecog into Perl, but we must hit three of our Perl reference books to learn how to translate the Forth code testing ASCII values into Perl. We learn about the Perl "chr" function which lets us test input characters as if they were ASCII values such as CR-13 or SPACE-32.
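The chr() technique for translating Forth-style ASCII tests into Perl can be sketched like this:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: compare each incoming character against chr(13) (CR) or
# chr(32) (SPACE), mirroring the numeric ASCII tests used in Forth.
my @stream = ('S', 'E', 'E', chr(32), chr(13));
my ($spaces, $crs) = (0, 0);
for my $pho (@stream) {
    $spaces++ if $pho eq chr(32);   # end-of-word marker
    $crs++    if $pho eq chr(13);   # end-of-input marker
}
print "spaces=$spaces crs=$crs\n";
```

The complementary function ord() goes the other way, turning a character into its ASCII value; either direction serves for these tests.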

Now we have faithfully ported the MindForth AudRecog into Perl, but words longer than one character are not being recognized. Let us comment out AudMem() by naming it OldAudMem() and let us start a new AudMem() from scratch as a port from MindForth.

We port the AudMem code from Forth into Perl, but we may not be getting the storage of SPACE or CR carriage-return.

2017-03-16: Uploading Ghost Perl Webserver Strong AI

Now into our third day in search of stable Perlmind code, we take the 344,365 bytes of ghost177.pl from 2017-03-15 and we save a new file as the ghost178.pl AI. We will try to track the passage of characters from AudInput to AudMem to AudRecog.

Through diagnostic messages in AudRecog, we discovered that a line of code meant to "disallow audrec until last letter of word" was zeroing out $audrec before the transfer from the end of AudRecog to AudMem.

In a departure from MindForth, we are having the Perl AudRecog mind-module fetch only the most recent recognition of a word. In keeping with MindForth, we implement the auditory storing of a $nxt new concept in the AudInput module, where we also increment the value of $nxt instead of in the NewConcept module.

Tuesday, March 14, 2017


PerlMind Programming Journal
Updating the Ghost Perl AI in conformance with MindForth AI.

Today we return to Perl AI coding after updating the MindForth code in July and August of 2016. In Forth we re-organized the calling of the subordinate mind-modules beneath the MainLoop module so as no longer to call the Think module directly, but rather to call the FreeWill module first so that eventually the FreeWill or Volition module will call Emotion and Think and Motorium.

We have discovered, however, that the MindForth code properly handles input which encounters a bug in the Perl code, so we must first debug the Perl code. When we enter, "you see dogs", MindForth properly answers "I SEE NOTHING", which is the default output for anything involving VisRecog since we have no robot camera eye attached to the Mind program. The old Perl Mind, however, incorrectly recognizes the input of "DOGS" as if it were a form of the #830 "DO" verb, and so we must correct the Perl code by making it as good as the Forth code. So we take the 335,790 bytes of ghost175.pl from 2016-08-07 and we rename it as ghost176.pl for fresh coding.

We start debugging the Perl AudRecog module by inserting a diagnostic message to reveal the "$audpsi" value at the end of AudRecog. We learn that "DOGS" is misrecognized as "DO" when the input length reaches two characters. We know that MindForth does not misrecognize "DOGS", so we must determine where the Perl AudRecog algorithm diverges from the Forth algorithm. We are fortunate to be coding the AI in both Forth and Perl, so that in Perl we may implement what already works in Forth.

In Perl we try commenting out some AudRecog code that checks for a $monopsi. The AI still misrecognizes "DOGS" as the verb "DO". Next we try commenting out some Perl code that declares a $psibase when incoming word-length is only two. The AI still misrecognizes. Next we try commenting out a declaration of $subpsi. We still get misrecognition. We try commenting out another $psibase. Still misrecognition. We even try commenting out a major $audrec declaration, and we still get misrecognition. When we try commenting out a $prc declaration, AudRecog stops recognizing the verb "SEE". Then from MindForth we bring in a provisional $audrec, but the verb "SEE" is not being recognized.
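The "DOGS"-as-"DO" bug boils down to a prefix being accepted as a full recognition. A deliberately simplified sketch of the distinction (invented lexicon and concept numbers; the real AudRecog matches character by character against the @aud array rather than against whole strings):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Simplified sketch: "DO" must not be recognized inside "DOGS"; a
# recognition should only fire when the stored word is matched through
# its final character at a word boundary, not merely as a prefix.
my %lexicon = ( DO => 830, DOGS => 540 );   # 540 is an invented number

sub recognize {
    my ($input) = @_;
    my $audpsi = 0;
    for my $word (keys %lexicon) {
        # exact match only -- a prefix like "DO" inside "DOGS" is rejected
        $audpsi = $lexicon{$word} if $word eq $input;
    }
    return $audpsi;
}
print "DOGS -> ", recognize("DOGS"), "\n";
```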

Although in the MS-DOS CLI prompt we can evidently not run MindForth and the Perlmind simultaneously, today we learn that we can run MindForth and leave the Win32Forth window open, then go back to running the Perl AI. Thus we can compare the diagnostic messages in both Forth and Perl so as to further debug the Perl AI. We notice that the Forth AudMem module sends a diagnostic message even for the blank-space ASCII 32 after "SEE", which the Perl AI does not do.