Cyborg AI Minds are a true concept-based artificial intelligence with natural language understanding, simple at first and lacking robot embodiment, and expandable all the way to human-level intelligence and beyond.

Sunday, January 31, 2016

pmpj0131

After many years of development, Perl6 has finally been released around the beginning of this new year 2016. We now position the emerging AI Perlmind as a killer app for the newly released Perl6 programming language. Yesterday we uploaded the Perl6 AI Manual to the Web for use with both P5 AI and P6 AI.

Apparently both Perl5 and Perl6 will have problems in accepting each single keystroke of input from a human user. Therefore we should shift our AI input target away from immediate human keyboard entry and towards the opening and reading of computer files by the AI Mind. Since we envision that a P6AI will sit quietly on a webserver and ingest both local and remote computer files, it makes sense now to channel input into the AI as a file rather than as dynamic keyboard entry.

Today we have created C:\Strawberry\perl_tests\input.txt as a text file containing simply "boys play games john is a boy" as its only content. Then we have copied the code-sequence of AudInput() as FileInput() and we have made the necessary changes to accept input from an input.txt file instead of from the keyboard.
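Purely as an illustration of the file-based approach, a minimal FileInput() sketch might look like the following; the path literal and the print statement standing in for a call to AudMem() are assumptions for the example, not the actual mind-module code.

#!/usr/bin/perl
# Minimal sketch: read input.txt line by line instead of taking keyboard input.
use strict;
use warnings;

sub FileInput {
    my $path = 'C:/Strawberry/perl_tests/input.txt';   # assumed location
    open(my $fh, '<', $path) or return;       # quietly skip if the file is absent
    while (defined(my $line = <$fh>)) {       # read the file one line at a time
        chomp $line;                          # strip the trailing newline
        print "FileInput read: $line\n";      # stand-in for a call to AudMem()
    }
    close $fh;
}

FileInput();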


2016 January 11:

Today we need to figure out how to read in each line of input.txt and how to transfer each English word into quasi-auditory memory.

In the FileInput() subroutine of the mind0029.pl source, it looks as though the WHILE loop for reading a file may be running through completely before any individual line of input is extracted for AI processing. We move the NewConcept() and AudMem() calls into the WHILE loop so that each line of input is processed separately. However, not just each line, but each word within a line, needs to be processed separately.


2016 January 12:

A line of text input needs to be broken up into individual words. First we learn from the PERL Black Book, page 568, that the getc function lets us fetch each single character in a line from our input.txt file. Therefore in the FileInput() module of mind0031.pl we use the "#" symbol to comment out the WHILE-loop that was transferring a whole message "$msg" into AudMem(). Then we use getc in a new WHILE-loop to transfer a series of incoming characters from input.txt into AudMem(), where we comment out the string-reversing and chopping code and we convert a do-loop into a simple series of non-looping instructions, because the looping is being done up in the FileInput() module. We see that the program is now transferring individual input characters into auditory memory. Later we will need to make the transfers stop at the end of each input word, shown by a blank space or punctuation or some other indicator. The new code is messy, but we should upload it to the Web and clean it up when we continue programming.
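For readers following along, here is a rough sketch of the getc approach, with a print standing in for the AudMem() call; it is an illustration under those assumptions, not the mind0031.pl code itself.

# Fetch input.txt one character at a time and hand each character onward.
use strict;
use warnings;

open(my $fh, '<', 'input.txt') or die "no input.txt: $!";
while (defined(my $char = getc($fh))) {   # getc returns undef at end of file
    next if $char eq "\n";                # ignore line breaks for this demo
    print "character for AudMem(): $char\n";
}
close $fh;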


2016 January 13:

In the FileInput() module of the mind0032.pl AI we are inserting the call to NewConcept() so that AudMem() will show an incrementing concept number for each word being stored in auditory memory. Uh-oh, running the AI shows that each stored character is getting its own concept number. Obviously, we will have to call NewConcept() only when an entire new word is being stored, not each individual character.

We were able to test for a blank space (probably not enough) after an input word in FileInput(), then order a "return" out of the WHILE-loop. We had to put "{ return }" in brackets to avoid crashing the program. Now the AI loads a first word "boys" over and over into auditory memory, but we have made progress.


2016 January 14:

Let us see what happens if we run the Perl AI with no input.txt available for the AI to read. We save input.txt elsewhere and then we delete input.txt from the perl_tests directory. We run the mind0033.pl AI program without an input.txt file available, and it goes into an infinite loop. We change the FileInput() code that opens the input.txt file by adding the "or die" function to halt the program and issue an error message. It works and we no longer get an infinite loop. Then we add the input.txt file back into the directory.
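The fix is the standard Perl idiom, along these lines:

# If input.txt cannot be opened, halt with an error message instead of
# looping forever on an undefined filehandle.
open(my $fh, '<', 'input.txt')
    or die "FileInput: cannot open input.txt: $!\n";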

Now we need to work on getting the AI to store the first word of input and to move on to each succeeding word of input.

When we inspect the MindForth code, we see that the AudInput module first calls OldConcept at the end of a word, and only calls NewConcept if the incoming word is not recognized as an old concept. So we should create an OldConcept() module in the Perl AI program.

In the FileInput() module, we might just wait for a blank space-character and use it to initiate the saving of the word and the calling of both OldConcept and NewConcept(). Even if everything pauses to store the word and either recognize it or create a new concept, the reading of the input file should simply resume and there should be no special need to keep track of the position in the input-line.

In accordance with the MindForth code, any non-space character coming in should go into AudMem(). An ASCII-32 space character does not get stored, but rather a storage-space of one time-point gets skipped, because MindForth AudInput increments time "t" for all non-zero characters coming in. In other words, skipping one time-point in auditory memory makes it look as if a space-character were being stored.

It turns out that time "$t" was not yet being incremented in the mind0033.pl AI, so we put an autoincrement into the FileInput() module.
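As a toy illustration of the skipped time-point, with the stored row reduced to just the character and with the variable names assumed for the example:

use strict;
use warnings;

my @aud;       # quasi-auditory memory
my $t = 0;     # time "$t", autoincremented for every incoming character

for my $char (split //, 'BOYS PLAY GAMES') {
    if ($char eq ' ') {
        $t++;                  # a space stores nothing; the time-point is skipped
        next;
    }
    $aud[$t] = $char;          # a non-space character is stored at time $t
    $t++;
}
for my $i (0 .. $#aud) {
    printf "%2d: %s\n", $i, defined $aud[$i] ? $aud[$i] : '(skipped)';
}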


2016 January 15:

It is time in mind0034.pl to create the AudBuffer() module to be called by AudInput() or FileInput() and VerbGen(). The primitive coding may be subject to criticism, since the module treats a series of variables as a storage array, but the code, albeit primitive, not only serves its purpose but is also easily understandable by the AI coder or system maintainer. For now we merely insert a stub of the AudBuffer() module.

After wondering where to place the AudBuffer() module, today we re-arrange all the mind-modules to be in the same sequence as MindForth has them, so that it will be easier in inspecting code to move among the Forth and JavaScript and Perl AI programs. MindForth compels a certain sequence because a module in a Forth program can call only modules higher up in the code.


2016 January 16:

The mind0035.pl program is going to get extremely serious and extremely complicated now, because for the first time in about eighteen years we are going to change the format of the storage of quasi-acoustic engrams in auditory memory. We are going to change the six auditory panel-flags from "pho act pov beg ctu audpsi" down to a group of only three: "pho" for the phoneme or character of auditory input; "act" for the activation-level; and "audpsi" for the concept number in the @psi conceptual memory array.

The point-of-view "pov" variable will no longer be stored in auditory memory, and instead other functions of memory will have to remember, if possible, who generated a sentence or a thought stored in auditory memory. Over the years it has been helpful to inspect the auditory memory array and to see whether a sentence came from the AI itself or from an external source.

The flag-variables "beg" for beginning of a word and "ctu" for continuation of a word served a purpose in the early AI Minds but are now ready for extinction. The Perl language is so powerful that it should simply detect the beginning or ending of a word without relying on superfluous flags stored in the engram itself. Removing obsolete flags makes the code easier to understand and easier to develop further.
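To make the reduced flag-panel concrete, here is an illustrative sketch that stores each @aud row as a comma-joined "pho,act,audpsi" string; the helper routine and the concept number are assumptions for the example only, not a quotation of mind0035.pl.

use strict;
use warnings;

my @aud;
my $t = 0;

sub AudMemRow {                       # hypothetical helper for illustration
    my ($pho, $act, $audpsi) = @_;
    $aud[$t++] = join ',', $pho, $act, $audpsi;
}

AudMemRow('B', 8, 0);                 # mid-word characters carry no concept number
AudMemRow('O', 8, 0);
AudMemRow('Y', 8, 501);               # final character carries the audpsi tag (number is illustrative)
print "$_\n" for @aud;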

We should probably next code the EnVocab() module for storing the fetch-tags of English vocabulary, because the @psi concept array will need to direct pointers into the @en array. In MindForth, EnVocab comes in between InStantiate for "psi" concepts and EnParser for English parts of speech. Oh, we already have a stub of EnVocab(). Then it is time to flesh out the module.

First we create the number-flag $num for grammatical number, which is important for the retrieval of a stored word in English or German or Russian. Then we create the masculine-feminine-neuter flag mfn for tracking the gender of a word in the @en English array.

We may now be able to discontinue the use of the fex flag for "fiber-out" and fin for "fiber-in". These flags were helpful for interpreting pronouns like "I" and "me" as referring to the AI itself or to an external person. The Perlmind should be able to use point-of-view "pov" code to catch pronouns or verb-forms that need routing to the correct concept.

We still need a part-of-speech pos flag to keep track of words in the @en array. We also need the $aud flag as an auditory recall-tag for activating engrams in the @aud array, unless it conflicts with the @aud designation and needs to be replaced with something like $rv for recall-vector.

The $nen flag is already incremented in NewConcept(), and now we begin storing $nen during the operation of EnVocab(). Then we had many problems because in TabulaRasa() we had filled the @en English array with zeroes instead of blank spaces.


2016 January 17:

In the mind0036.pl program we continue working on EnVocab() for English vocabulary being stored in the @en array. Today we create the variable $audbeg for auditory beginning of an auditory word-engram stored in the @aud array. We also create the variable $audnew to hold onto the value of a recall-vector onset-tag for the start of a word in memory while the rest of the word is still coming in. By setting the $audnew flag only if it is at zero, we keep the flag from changing its truly original value until the whole word has been stored and the $audnew value has been reset to zero for the sake of the next word coming in.

Today, while debugging the AI, we kept getting a message something like, "Use of uninitialized value in concatenation (.) or string at mind0036.pl line 295" at a point where we were trying to show the contents of a row in the @en English lexical array. In TabulaRasa() we solved the bug by declaring $en[$trc] = "0,0,0,0,0,0,0"; with seven flags set to zero. Apparently TabulaRasa() initializes all the items in the array.
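A sketch of that TabulaRasa() fix, with the array size assumed for illustration:

use strict;
use warnings;

my @en;
my $cns = 1024;                        # assumed size of the lexical array
for my $trc (0 .. $cns) {              # $trc as the tabula-rasa counter
    $en[$trc] = "0,0,0,0,0,0,0";       # seven flags, all set to zero
}
print "row 42 starts as: $en[42]\n";   # no "uninitialized value" warning now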


2016 January 18:

In the mind0037.pl AI Perlmind, let us see what happens at each stage of reading an input.txt file.

The MainLoop calls sensorium() which in turn calls the FileInput() module. FileInput() goes into a WHILE-loop of reading with getc (get character) for as long as the resulting $char remains defined. As each character comes in, FileInput() calls AudMem() to store the character in auditory memory. Each time that $char becomes an empty non-letter at the end of an input word, FileInput() increments the $onset flag from $audnew and calls NewConcept(), because the AI must learn each new word as a new concept.

NewConcept() increments the number-of-English $nen lexical identifier and calls the English vocabulary EnVocab() module to set up a row of data in the @en array. NewConcept() calls the stub of the English parser EnParser() module. FileInput() calls the stub of the OldConcept() module.

The MainLoop module calls the Think() module which calls Speech() to output a word as if it were a thought, but the AI has not yet quickened and so the AI is not yet truly thinking. At the end of the mind0037.pl program, the MainLoop displays the contents of the experiential memory for the sake of troubleshooting the AI.
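For orientation, the call chain of this entry can be summarized with every mind-module reduced to a print stub; only the calling order follows the description above, and the stub bodies are assumptions.

use strict;
use warnings;

sub EnVocab    { print "EnVocab: set up a row in \@en\n" }
sub EnParser   { print "EnParser: (stub)\n" }
sub NewConcept { print "NewConcept: new concept\n"; EnVocab(); EnParser() }
sub OldConcept { print "OldConcept: (stub)\n" }
sub AudMem     { my ($c) = @_; print "AudMem: store '$c'\n" }
sub FileInput  {
    for my $char (split //, "BOYS ") {        # stand-in for reading input.txt
        $char eq ' ' ? NewConcept() : AudMem($char);
    }
    OldConcept();
}
sub sensorium  { FileInput() }
sub Speech     { print "Speech: output a word\n" }
sub Think      { Speech() }
sub MainLoop   { sensorium(); Think() }

MainLoop();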


2016 January 19:

The mind0038.pl program is ready to instantiate the InStantiate() module for creating concepts in the @psi array of the artificial Mind. Let us change the @psi array into the @psy array so that a $psi variable will not conflict with the name of the conceptual array.


2016 January 20:

With mind0039.pl we may need to remove the activation-flag from the flag-panel of the @en English lexical array. In the previous Forth and JavaScript AI Minds, we had "act" there in case we needed it. Now it seems that in MindForth only the KbSearch module uses "act" in the English array, and the module could probably use @psy for searches instead of the @en lexicon.

There is some question whether part-of-speech $pos should be in the @psy conceptual array or in the @en lexical array. A search for "6 en{" in the MindForth code of 24 July 2014 reveals that no use seems to be made of part-of-speech "pos" in MindForth. Apparently part-of-speech has already been dealt with during the functions that use the Psi array, and therefore the English array does not concern itself with part-of-speech. So part-of-speech could be dropped from the @en English array.

It looks as though part-of-speech has to be assigned in the @psy array before inflections are fetched in a lexical array. If a person says, "I house you in a tent," then a word that is normally a noun becomes a verb, "to house." The software should override any knowledge of "house" as being a noun and store the specific, one-time usage of "house" as a verb. Then the AI robot can respond with "house" as a verb to suit the occasion: "Please house me in a shed." OldConcept() should not automatically insist that a known word always has a particular part-of-speech. In a German AI, VerbGen() should be called to create verb-endings as needed, if not already stored in auditory memory.

In the @psy concept array we should have seven flags: psi, act, pos, jux, pre, tkb, and seq. If we now change the tqv variable from MindForth to $tkb in the Perl AI, it clearly becomes "time-in-knowledge-base" for Perl coders and AI maintainers.

It suddenly dawns on us that we no longer need an enx flag in the @psy array. We may still need the $enx variable for passing a fetch-value, but it looks like the @psy concept number and the @en lexical number will always be the same, since we coded MindForth to find inflections for an unchanging concept number.


2016 January 21:

Now mind0040.pl invites us to make a drastic simplification by merging the @psy array and the @en array, because any distinction between the two arrays has gradually become redundant. The @psy array has psi, act, pos, jux, pre, tkb, seq flags. The @en array has nen, num, mfn, dba, rv flags. We could join them together into one @psy conceptual array with psi, act, pos, jux, pre, tkb, seq, num, mfn, dba, rv flags.

The first thing we do is in TabulaRasa(), where we fill each row of the @psy array with eleven zeroes for the eleven flags. Next we have the InStantiate() module store all eleven flags in the combined flag-panel. We run the Perl AI and it makes no objections. Then we have InStantiate() announce the values of all eleven flags before storing them.
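As an illustrative sketch of the merged flag-panel (the flag values and the InStantiate() body here are assumptions, not mind0040.pl code):

use strict;
use warnings;

my @psy;
$psy[$_] = "0,0,0,0,0,0,0,0,0,0,0" for 0 .. 1024;   # TabulaRasa(): eleven zeroes

my @flagname = qw(psi act pos jux pre tkb seq num mfn dba rv);

sub InStantiate {
    my ($t, %flag) = @_;
    print "InStantiate: $_ = ", ($flag{$_} // 0), "\n" for @flagname;   # announce
    $psy[$t] = join ',', map { $flag{$_} // 0 } @flagname;              # store
}

InStantiate(77, psi => 501, pos => 5, num => 2, rv => 23);
print "row 77: $psy[77]\n";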

In the flag-panel of the @psy array, we should probably add a human-language-code "hlc" so that an AI can detect English or German or Russian and think in the indicated language.


2016 January 22:

In mind0042.pl where we have merged the @en array into the @psy conceptual array, we gradually need to eliminate the $nen variable. However, we need a replacement other than the $psi variable so that the replacement variable can hold steady and wait for each new word being learned in English, German, Russian or whatever human language is involved. Let us try using $nxt as the next-word-to-be-learned.


2016 January 23:

In mind0043.pl we are now trying to code the AudRecog() module taken from MindForth, although the timing may be premature.

As we began coding AudRecog() in the mind0043.pl AI, we discovered that the primitive EnBoot() sequence did not contain enough English words to serve as comparands with a word being processed in the AudRecog() module, so we must suspend the AudRecog() coding and fill up the EnBoot sequence properly before we resume coding AudRecog().

Today we rename the English bootstrap EnBoot() sequence as MindBoot() because the Perl AI with Unicode will not be limited to thinking only in English, but will eventually be able to think also in German and in Russian.


2016 January 24:

In mind0044.pl we are replacing the Think() module with EnThink() for English thinking, and we are declaring DeThink() as a future German thinking module and RuThink() as a future Russian thinking module.

Coding the AudRecog() module in Perl5, we move left-to-right through the nested if-clauses. At the surface we test for a matching $pho. Nested down one layer, we test for zero activation on the matching $pho, because we do not want a match-in-progress. At the second depth of nesting, we test for the onset-character of a word. In the previous AI Minds coded in Forth and JavaScript, we still had the "beg(inning)" flag to fasten upon a beginning character at the start of a comparand word in auditory memory. Now in the Perl killer-app AI we must rely on the $audnew variable, which is set during FileInput() but which we have apparently neglected to reset to zero again. Let us try setting $audnew back to zero just before we close the input.txt file. Oh no, $audnew won't work here, because $audnew applies only to the beginning of an input word, not to the beginning of a word stored in memory. Maybe we can try testing for not only a zero-activation matching $pho but also for an adjacent blank space.

Now, we are going backwards in memory from space-time $spt down to $midway, which is set to zero in the primitive AI. The $i variable is being decremented at each step backwards. We would like to know if going one step further encounters the space before a word. We might have to start searching forwards through memory if we want to trap the occurrence of an initial character in a stored word. If we go forwards through memory, we could have a $penult variable that would always hold the value of each preceding moment in time. For the chain of activations resulting in recognition, it should not matter if the sweep goes backwards or forwards.
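A toy sketch of the forward sweep with a $penult variable, using made-up row contents; an onset character is simply one whose $penult slot is empty.

use strict;
use warnings;

my @aud = ('B', 'O', 'Y', undef, 'G', 'A', 'M', 'E');  # undef = skipped space
my $penult = '';                    # character seen at the preceding time-point

for my $i (0 .. $#aud) {
    my $pho = defined $aud[$i] ? $aud[$i] : '';
    if ($pho ne '' && $penult eq '') {
        print "onset character '$pho' found at t=$i\n";
    }
    $penult = $pho;                 # remember this moment for the next step
}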


2016 January 25:

Now in mind0045.pl we will stop searching backwards in AudRecog() and search forwards so that it will be easier to find the beginning of a comparand word stored in auditory memory.

As we debug the mind0045.pl AI, we notice that the MindBoot sequence is not a subroutine, as EnBoot was in the previous AI Minds. We should call MindBoot() as a one-time subroutine from the MainLoop. We establish TabulaRasa() and MindBoot() as subroutines and we give them a one-time call from the MainLoop.

Throughout many tests we were puzzled because AudRecog() was not recognizing an initial "b" at zero activation preceded by a zero $penult string. Finally it dawned on us that the MindBoot() "BOY" was in uppercase, so for a test we switched to lowercase "boy", and suddenly the proper recognition of the initial character "b" was made. But we will need to make input characters go into uppercase, so that AudRecog() will not have to make distinctions.


2016 January 26:

Moving into mind0046.pl, we need to consult our Perl reference books for how to shift input words into UPPERCASE. The index of Perl by Example has no entry for "uppercase". None also for "lowercase". However, the index of the Perl Black Book says, "Uppercase, 341-342," BINGO! Mr. Steve Holzner explains the "uc" function quite well on page 341. Let us turn the page and see if we need any more info. Gee, page 342 says that you can use "ucfirst" to capitalize only the first character in a string sentence -- one more example of how powerful Perl is. Resident webserver superintelligence, here we come.
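For reference, the two built-in functions behave like this:

my $word = "boys play games";
print uc $word, "\n";        # BOYS PLAY GAMES
print ucfirst $word, "\n";   # Boys play games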

Now let us try to use the "uc" function in the free Perl AI source code of mind0046.pl as we continue. We had better look into the FileInput() module first. Hmm, let us go back to the index of Perl by Example, where in the index we find "uc function, 702." Okay, let us try using "uc $char" at the start of the input WHILE-loop in the FileInput() module. Huh? It did not work. Uh-oh. Houston, we have a problem. Our mission-critical Perl Supermind is stuck in lowercase. Here we have been trying to learn Perl, but we have never coded any Perl program other than artificial intelligence. Even our very first "Hello world" Perl program was a "mind.pl" program and we never did any scratch-pad Perl coding. Meanwhile there are legions of Perl coders waiting for us to finish the port of AI Minds first into Perl5 and then into Perl6. Let us check the Perl Black Book again. Let us try, $char = "uc . $char"; in the FileInput() module. We drag-and-drop the line of code from this journal entry straight into the AI code. Then we issue the MS-DOS command, "perl mind0046.pl" and take a look. Oh no, the "uc" itself is going into memory as if it were the input. Hey, it finally worked when we used $char = uc $char; as the line of code. Now the contents of auditory memory are being displayed in uppercase. We can go back to coding the AudRecog() module.


2016 January 27:

Although we have done away with the ctu-flag of MindForth in the Perl AI, because we want to reduce the number of flags stored in the @aud auditory memory, in AudRecog() we may create a non-engram "ctu" or its equivalent by using the split function to look ahead one array-row and see whether a stored comparand word continues beyond any given character.
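An illustrative way to get that look-ahead with split, assuming comma-joined "pho,act,audpsi" rows in @aud; the variable names are for the example only.

use strict;
use warnings;

my @aud = ("B,0,0", "O,0,0", "Y,0,501", "", "G,0,0");   # toy auditory memory
my $i = 1;                                    # looking at the 'O' of "BOY"
my @nxr = split /,/, ($aud[$i + 1] // '');    # peek one array-row ahead
my $ctu = (@nxr && $nxr[0] =~ /[A-Z]/) ? 1 : 0;   # stand-in for the old ctu flag
print "word continues beyond row $i? ", ($ctu ? "yes" : "no"), "\n";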


2016 January 28:

In mind0050.pl we would like to have the FileInput() module call the human-computer-interaction AudInput() module if the input.txt file is not found. In that way, we can simply remove input.txt to have a coding session of direct human interaction with the AI Perlmind.
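A sketch of the intended fallback, with a bare-bones AudInput() stub assumed for illustration:

use strict;
use warnings;

sub AudInput {
    print "AudInput: type a line for the AI: ";
    my $msg = <STDIN>;                 # ordinary keyboard entry, one line at a time
    print "AudInput received: $msg";
}

sub FileInput {
    if (open(my $fh, '<', 'input.txt')) {
        print "FileInput: reading input.txt\n";
        print while <$fh>;
        close $fh;
    }
    else {
        AudInput();                    # no file found, so talk to the human
    }
}

FileInput();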


2016 January 29:

In mind0052.pl we are continuing to improve AudInput() towards the same functionality that we developed in FileInput().

The line of input goes into a $msg string, which AudInput() needs to process in the same way as FileInput() processes the input.txt file, except that AudInput() only has to deal with one line at a time, which is presumably one sentence or one thought at a time.


2016 January 30:

Today in mind0053.pl we hope to fix a problem that we noticed yesterday after we uploaded mind0052.pl to the Web. We had carefully gone about sending the input $pho (phoneme) into AudMem() and AudRecog(), but no word was being recognized in AudRecog() -- which we had coded for six hours straight three days ago. Then yesterday we saw that we had left a "Temporary test for audrec" in the AudMem() module and that the code was arbitrarily changing the $audpsi from any recognized $audrec concept to the $nxt (next) concept about to be named in the NewConcept() module. Now we will comment out that pesky test code and see if AudRecog() can recognize a word. Hmm, commenting out the code did not seem to work.

We hate to debug the pristine Perl AudRecog() by inserting diagnostic message triggers into it, but we start doing so, and pretty soon we discover that we neglected to begin AudRecog() with the activation-carrier $act set to eight (8), as it is in the predecessor MindForth AI. So let us set $act to eight in the AudRecog() Perl code and see what happens. Uh-oh, it still does not work.

But gradually we got AudRecog() to work. Now in the mind0054.pl AI we are working on the AudMem() module. We want it to store each $pho phoneme in the @ear array as $audpsi if there has been an auditory recognition, and as simply $nxt if only the next word from NewConcept() is being stored.

The $audpsi shall be stored if the next time-point is caught by the ($nxr[0] !~ /[A-Z]/) test as not being a character of the alphabet.
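In sketch form, with toy data and assumed variable names:

use strict;
use warnings;

my @aud = ("B,8,0", "O,8,0", "Y,8,0", "");         # toy auditory memory, "" = blank slot
my ($t, $pho, $act, $audpsi) = (2, 'Y', 8, 501);   # values are illustrative

my @nxr = split /,/, ($aud[$t + 1] // '');   # peek at the next time-point
if (!@nxr || $nxr[0] !~ /[A-Z]/) {           # blank or non-letter: the word has ended
    $aud[$t] = "$pho,$act,$audpsi";          # final character carries the audpsi tag
}
print "row $t is now: $aud[$t]\n";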


2016 January 31:

Yesterday we got the Perl AI to either recognize a known word and store it in the @ear array with the correct $audpsi tag, or instead to store a word as a new concept with the $nxt identifier tag. However, in the @psy conceptual array, the Perlmind is improperly incrementing the $nxt tag because we have not yet figured out how to declare that a character flowing by is the last character in a word. Bulbflash: Maybe we can store the $nxt tag at the end of each @ear row, erasing it when each successive character comes in, so that only the last letter of the word will end up having the $nxt tag.
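A toy demonstration of the bulbflash idea (the variable names, the regular expression, and the concept number are assumptions): write the provisional $nxt tag onto every stored character, then erase it from the previous row when the next character arrives, so that only the final letter of the word keeps the tag.

use strict;
use warnings;

my @ear;
my $nxt = 882;                          # provisional concept number (illustrative)
my $t   = 0;

for my $pho (split //, 'GAMES') {
    $ear[$t - 1] =~ s/,\d+$/,0/ if $t > 0;   # erase the tag on the previous row
    $ear[$t] = "$pho,1,$nxt";                # every new row gets the tag for now
    $t++;
}
print "$_\n" for @ear;                       # only the final "S" row keeps 882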

Monday, June 15, 2015

jmpj0613

JavaScript Mind Programming Journal (JMPJ) -- Saturday, June 13, 2015

These notes record the coding of the English tutorial AiMind.html in JavaScript for Microsoft Internet Explorer (MSIE).

Sat.13.JUN.2015 -- Troubles with InFerence in JavaScript

When we run the JavaScript AiMind.html in English and we try to show a Transcript of automated reasoning with logical InFerence, the Strong AI does indeed make an inference, but the dialog with the AI reveals that the AiMind program is failing to use some correct forms of verbs and personal pronouns. The thinking of the AI is correct and logical, but some mistakes are occurring in the expression of logical thought in proper English.

We suspect that grammatical errors are creeping in because the mind-modules related to inference are composing a sentence of thought outside of the normal routines of strictly grammatical English. We may be able to build up the same formalisms of strict grammaticality inside the inferential routines. For correct verb forms, however, we may need to start using the modules of OutBuffer and VerbGen.

Sat.13.JUN.2015 -- Troubleshooting the InFerence process

We notice that the AskUser module of the 14apr13A JSAI simply looks for the "quverb" query-verb to recall and speak, apparently without forcing the verb into the proper grammatical form, which is typically an infinitive form when a question is being asked with "DO" or "DOES" as an auxiliary verb. We should also check the Forth code and see if AskUser in MindForth has anything more advanced. Oh, the Forth code actually does test for a plural form to be used as if it were an infinitive.

The JSAI AskUser module looks for the "quobj" query-object without bothering to ensure that it will be an accusative form. The MindForth AskUser module also does not bother to check for an accusative case in the "quobj" word, so both the JavaScript AI and the MindForth AI need to be improved. The German Forth AI Wotan also seems to need improvement for grammatical forms in the AskUser module.

Mon.15.JUN.2015 -- Selecting Objects in Accusative

Now we have partially fixed the problem of ungrammatical English by inserting code into the AskUser() module to require the direct object or query-object to be in the accusative case. Instead of asking a question like "Does Mark need I?" the AI now asks, "Does Mark need me?"

However, when we answer "no" to the foregoing question, the AI eventually gets around to saying, "MARK DOES NOT NEEDS ME", because the AskUser() module is not insisting upon finding an infinitive form of the query-verb.


Sunday, April 12, 2015

pmpj0412

Perlmind Programming Journal (PMPJ) -- Sunday, April 12, 2015

The Perlmind Programming Journal (PMPJ) is a record from the very start of how the Mentifex Strong AI Mind project moves beyond REXX and Forth and JavaScript into the Perl programming environment.

Sun.12.APR.2015 -- Mentifex AI moves into Perl.

Since the Mentifex AI Minds are in need of a major algorithmic revision, it makes sense to reconstitute the Mentifex Strong AI in a new programming environment, namely Perl, beyond the original Mentifex encampments first in REXX (1993-1994), then in Forth (1998-present) and finally in JavaScript (2001-present). With Perl, we remain in a scripting language, but a language more modern and more prevalent than Forth. We savor the prospect of ensconcing our Perl mind-modules within the prestigious and Comprehensive Perl Archive Network (CPAN), where we already proposed some AI nomenclature a dozen years ago. With Perl we open up the mind-boggling and Mind-propagating vistas of seeding the noosphere with explosively metastatic and metempsychotic Perl AI that can transfer its files and its autopoiesis instantaneously across and beyond the vastness of the World Wide Web.

Sun.12.APR.2015 -- Downloading the Perl Language

Next we need to do a Google search-and-deploy mission for obtaining a viable version of the Perl language for our Acer Aspire One netbook running Windows XP home edition.

Ooh, sweet! When we search for "download Perl" on Google, we are immediately directed to http://www.perl.org/get.html which presents to us a choice among the Unix/Linux, Mac OS X, and Windows operating systems. Although we wish we were on 64-bit Linux so that we could be listed in a GNU/Linux AI website, we had better choose between ActiveState Perl and Strawberry Perl for our current Windows XP platform. Let's click on the link for Download Strawberry Perl because it is a 100% Open Source Perl for Windows without the need for binary packages. Perl.org recommends that we use the "latest stable version, currently 5.20.2." and Strawberry Perl 5.20.2.1 (32 bit) is offered to us.

When we first click on the download, a Security Warning asks us whether we want to run or save this 68.6MB file. We click to save the file on our Acer Aspire One netbook. Huh? Almost instantaneously, after we see that the target will be our Acer C-drive, we get a pop-up window that says that we have completed a download not of 68.6 megabytes, but that we have downloaded 116KB in one second to C:\strawberry-perl-5.20.2.1-32bit.msi and we may now click on "Run" or "Open Folder" or "Close". Let us click on "Run" to see what happens. Now we get another Security Warning that "The publisher could not be verified. Are you sure you want to run this software?" Its name is "strawberry-perl-5.20.2.1-32bit.msi" and we can click on "Run" or "Don't Run". Let's click on "Run". It starts to show a green download transfer, but suddenly it stops and a "Windows Installer" message says, "This installation package could not be opened. Contact the application vendor to verify that this is a valid Windows Installer package." So we go back to where we had the choice between "Run" and "Save" and this time we click "Run" instead of "Save." In a space of between two and three minutes, the package downloads into a "temporary folder." Then a Security Warning says, "The publisher could not be verified. Are you sure you want to run this software?" Let's click "Run." Now it says "preparing to install" and "wait for the set-up wizard." Finally it says "The Setup Wizard will install Strawberry Perl on your computer. Click Next to continue or Cancel to exit Setup."

Well, I have a complaint. Why did the process not work when I tried to "Save" the download instead of merely "Running" it for what I was afraid would be one single time? Why is the process of installing Perl so obfuscated and so counter-intuitive? Well anyway, let's click on "Next" and get with the program. Next we have to click the checkbox for "I accept the terms in the License Agreement." Now for a Destination Folder the Strawberry Perl Setup says to "Click Next to install to the default folder or click Change to choose another." C:\Strawberry\ is good enough for Mentifex here. Then we "Click Install to begin the installation." Oops. "Error reading from file C:\Documents and Settings\Arthur\Local Settings\Temporary Internet Files\Content.IE5\R6BYZW40\strawberry-perl-5.20.2.1-32bit[1].msi. Verify that the file exists and that you can access it." Now we have ended prematurely because of an error. Then we went back again to the initial download process and we went with "Run" instead of "Save," and wonder of wonders, we were able to download Perl. We will "Click the Finish button to exit the Setup Wizard," and we will read the Release Notes and the README file available from the start menu. Aha! Upon clicking the Windows XP "start" button, we proceed into "All Programs" through "Strawberry Perl" to the Strawberry Perl README in a Notepad file on-screen.

Sun.12.APR.2015 -- Learning to program Perl Strong AI

Now we have to figure out how to run a program in Perl. We go to Learning Perl at http://learn.perl.org.

The page http://learn.perl.org/first_steps says to check that you have Perl installed by entering "perl -v", and so we actually enter "C:\Strawberry\perl -v" and it works! It says "This is perl 5, version 20, subversion 2 (v5.20.2)" etc. Next, with the MS-DOS make-directory "md" command, we enter "md perl_tests" to create a "perl_tests" subdirectory.

Then we open the Notepad text editor and we create a file that we call not hello_world.pl but rather mind0001.pl, because we want to start programming Perl artificial intelligence immediately. We try to run "C:\Strawberry>perl /path/to/perl_tests/mind0001.pl". At first we get "No such file or directory", but when we change directory and enter "C:\Strawberry\perl_tests>perl mind0001.pl" we see "hi mind0001.pl", and so we have run our first Perlmind AI program.

Sun.12.APR.2015 -- Perl Strong AI Resources

http://ai.neocities.org/PMPJ.html

http://mind.sourceforge.net/perl.html

http://www.cpan.org/authors/id/M/ME/MENTIFEX/mind.txt

http://cyborg.blogspot.com/2015/04/pmpj0412.html

http://www.reddit.com/r/perl



Thursday, July 24, 2014

mfpj0724

MindForth Programming Journal (MFPJ)

The MindForth Programming Journal (MFPJ) is both a tool in developing MindForth open-source artificial intelligence (AI) and an archival record of the history of how the AI Forthmind evolved over time.

Thurs.24.JUL.2014 -- MindForth AI moves to a Windows XP development platform.

MindForth came into being in 1998 on the Commodore Amiga 1000 computer as a port from the Amiga Mind.Rexx AI program, written in MVP-Forth from Mountain View Press. Around 1999, MindForth moved to a Windows 98 machine provided by Free-PC.com and to 16-bit FPC-Forth. Around 2001, MindForth moved to a Windows 95 Packard-Bell tower computer and to 32-bit Win32Forth. As the original author of Mind.Rexx and of MindForth, yesterday on 23 July 2014 I downloaded W32FOR42_671.zip onto the same Windows XP Acer Aspire One netbook which I have been using to develop the Russian Dushka AI program in JavaScript for MSIE. I unzipped W32FOR42_671.zip with my own legitimate copy of WinZip, which created a C:\WIN32FOR directory to hold all the decompressed files of Win32Forth. From the Web I downloaded the 24jan13A.F most current source code of MindForth and I saved it into the C:\WIN32FOR directory and as a text-file into a monthly C:\JUL01Y14\MFPJ directory on the Acer netbook.

I was able to get MindForth running on the Windows XP netbook by navigating with the "cd" (change directory) command into the C:\WIN32FOR directory where I typed "win32for.exe" and pressed "Enter"; then "fload 24jan13A.F" and the Enter-key; and finally "MainLoop" followed by the Enter-key. The AI Forthmind began to think its own thoughts on the screen, but the program soon crashed in its new environment, both during interaction with me and when allowed to think without human input. It was not a complete Snow Crash; but just as fatal with a pop-up message announcing "Exception # C0000005" and shutting down Win32Forth upon my clicking "Cancel" on the message. The naive and sentimental Forthcoder is not daunted or dismayed by such an AI-Mind-crash, but welcomes instead the chance to troubleshoot the AI and make it compatible with Windows XP. To debug MindForth, we will create a new version and seed it with diagnostic messages in order to find out just where and why the program is crashing with an "Exception" message. Long familiarity with MindForth causes me to suspect that there is probably a "boundary violation" where the software is trying to index one step beyond the limits of an array. We have noticed recently that searching Google for MindForth yields an auto-complete expansion of the search terms to "mindforth source code" -- an indication that Netizens have been looking for the free AI source code that we are working on right here and now. MindForth has also received a prominent mention at http://aihub.net/artificial-intelligence-lab-projects so we are motivated to make the best AI Mind that we can with MindForth and the other Mentifex AI programs.

Thurs.24.JUL.2014 -- Debugging Windows XP MindForth

In the C:\WIN32FOR directory, we enter win32for.exe to start running Win32Forth. Then we use the "File" drop-down menu and "Edit Forth File..." to click on "24jan13A.F" and "Open" it for editing and saving under a new name. Actually, we will save it immediately as "24jul14A.F" so as not to corrupt the old file by changing anything. First, however, we notice that the bottom of our WinViewX screen tells us that there are 5,173 lines of code with a size of 236,908 characters. Under the "File" drop-down menu we click on "Save File As.." and we enter "24jul14A.F" before clicking the "Save" button. We then close the WinViewX window because we want to test the new file before we proceed. We enter "fload 24jul14A.F" and we get the "ok" prompt which means that the file has successfully loaded into Win32Forth. When we enter "MainLoop" and observe without human input, the AI thinks about two thoughts and then stops with the "Exception # C0000005" pop-up message. This denouement occurs both in the default normal mode and in the Transcript mode that we invoke by pressing the Tab-key. It is time to start inserting diagnostic messages.

In the ThInk module we enter and reformulate a diagnostic message that we find commented-out in another mind-module. We forget to un-comment the code, so at first no diagnostics appear. Then we get the diagnostics, but with no change in program behavior -- it still crashes. But we see the light and we remember the Dao of debugging, that is, you figure out what modules the AI is calling and you insert diagnostics deeper and deeper into the program.

Let's see, the first part of AI thinking is to call the NounPhrase module, so let us diagnosticate NounPhrase. Aha! NounPhrase gives us some (meaningless?) diagnostics just before the Exception-crash, but the ThInk module does not. Therefore, Inspector Clouseau, the problem may lie within NounPhrase or within a module called by NounPhrase. By the way, instead of cluttering up this MFPJ journal entry with the actual diagnostic messages -- unless they become really important -- we can meta-publish the diagnostics simply by commenting them out but retaining them within the "mindforth source code" that we eventually publish on the Web. In that way, any interested party (corporate AI shop? national Ministry of AI? Ph.D. dissertation writer?) can see exactly how we have debugged the AI by inspecting the diagnostic messages that we will leave in for at least one iteration of releasing the code. So now let's plunk some diagnostics down in the VerbPhrase module in order to see if the AI thought processes are making it through NounPhrase and into VerbPhrase before the Exception-crash.

As the Forthmind thinks in English, we are getting diagnostic messages from both NounPhrase and VerbPhrase up until the dying thought of the AI, where NounPhrase reports something but VerbPhrase is silent, both in terms of output and in terms of diagnostics. So the crash could be occurring within the NounPhrase module. Therefore let us insert additional diagnostics towards the end of NounPhrase. We do so, but the software crashes before it reaches the diagnostics at the end of NounPhrase. Next we should try some diagnostics in the middle of NounPhrase. We insert diagnostics after the end of the search for the motjuste, but program-execution does not get that far and instead the Exception-crash occurs. So the problem may lie within the search for motjuste. We insert a diagnostic just before the ELSE-clause in the motjuste-search, and the diagnostic gets executed many times during non-crash thought, but not at all during generation of the thought that eventuates in the Exception-crash.

At the deepest indentation of the motjuste-search, where the "audjuste" variable is loaded with a value, we insert a diagnostic message. We run the AI. Gobsmack! From deepest NounPhrase, we get three diagnostic messages just before the Exception-crash. We notice that there is a "verblock" value of "423" as reported by the diagnostics just before the crash, so we search through the source code for the number "423". Its only appearance is at time-point t=554 in the EnBoot sequence, where "423" is assigned to the "tqv" (time-quod-vide) variable. But there is no t=423 time-point. It is interstitial, between the words "WHEN" and "WHERE" in the English bootstrap. Let us look at the source code of the JavaScript AI and see what is there. In the 14apr13A version of the JavaScript AI, at t=554 the value of "557" is assigned to "tqv", so "423" is wrong in the MindForth AI. In fact, two of the values in the Forth AI seem to have been erroneously held over from the older Forthminds before the EnBoot concepts received new concept-numbers. Let us change the pertinent section of the MindForth EnBoot to conform to the values in the JavaScript AI EnBoot() module. Hmm, when we correct the EnBoot segment, we get different output, but we still incur the same Exception-crash.

Now after massive diagnostics we find that the Exception-crash is occurring during the search for "motjuste" when the Index is at a value of "542", a point in time. Let us see what is at the t=542 time-point. We do see a t=552 error where "1" is used instead of "!" for storing a value. Let us fix that mistake.

As we correct various legacy errors from older versions of MindForth, the Exception-crash finally moves out of the time series of the EnBoot sequence and occurs once at t=615 in the time-span beyond EnBoot. Since our diagnostic message shows that the Index "I" has a value of "615" when the program crashes, MindForth must be traversing a loop at the t=615 time of the crash.

Thurs.24.JUL.2014 -- Solution found for defective search loop

Since our Exception was crashing the AI when NounPhrase was already supposed to have found a noun or a pronoun, we decided to try inserting an "ELSE LEAVE" statement just before the Forthword "THEN" ending the search-loop. It worked. The AI stopped crashing and began to think interminably. However, our Acer netbook seems to run at a high speed, and so we may need to increase some "rsvp" values at places in the program.


Friday, February 14, 2014

TuringTest

Abstract: In the mentifex-class AI Minds, TuringTest is a mind-module serving the purpose of human-computer interaction (HCI).

The TuringTest module serves as a human-computer interface between the AI Mind and one or more human users. Its purpose is to provide avenues of communication between man and machine. In the most primitive AI Minds, the keyboard and the screen of a computer are the main interface. The tactile keyboard serves as a substitute for auditory input, and the monitor screen serves as a substitute for voice output -- unless speech synthesis is channeling output through a loudspeaker or a headphone.

Earlier in AiEvolution, the same mind-module was called HCI for Human-Computer Interaction, before the module names were modified to serve as clickable links on the wiki-pages of the AI documentation. Renaming HCI as TuringTest serves the purpose of making users and coders aware of the well-known test for AI functionality named after the AI pioneer Alan Turing.

The SeCurity module calls the TuringTest module as one of potentially myriad operations affecting AI security. Since the TuringTest operation gives outside agents access into the AI Mind, the AI and the human user are mutually vulnerable to malicious intentions during the operation of the TuringTest. In MindForth and the German Wotan AI, the TuringTest module protects against liability by announcing that there is no warranty for the free AI source code. MindForth and Wotan also state the date and time that the AI Mind came to life, for inclusion during TranScript mode and for the purpose of any contest to see which AI Mind installation is the oldest or has been running the longest. MindForth and Wotan may display instructions for the user on-screen, while the JavaScript AiMind and Dushka programs present checkboxes for the user to click or unclick for a choice of display modes.

Since the JavaScript AI Minds in English and in Russian are flashier and more graphical than the bare-bones robot AI of MindForth and Wotan, there is more leeway for improvisation and razzle-dazzle effects in the JavaScript tutorial programs. Ambitious AI coders in any programming language have the opportunity and the challenge of graphically depicting even the most subtle of mental phenomena occurring in the artificial intelligence, such as the branching filaments of spreading activation and the volatile surfacing of concepts and ideas in the artificial ConSciousness.

The visibly operating TuringTest interface module is somewhat easier to troubleshoot and debug than the more hidden majority of AI mind-modules, because any glitch or software error will tend to show up immediately. Typical problems may involve timing where the rsvp variable is counting down too quickly if the host computer has an extremely fast central processing unit (CPU). The AI coder or installation supervisor may have to adjust the pertinent values.

More subtle problems may arise in connection with the happenstance timing of when a human user begins entering input into the AI Mind or how fast or how slow a user tries to communicate across the keyboard. Once again, the AI coder-in-charge may need to tweak some values not only in the TuringTest module but possibly in other modules involving input and output.

Saturday, April 06, 2013

apr6jsai

The JavaScript artificial intelligence (JSAI) is a clientside AiApp whose natural habitat is a desktop computer, a laptop or a smartphone.

1 Wed.3.APR.2013 -- "nounlock" May Not Need Parameters

In the English JSAI (JavaScript artificial intelligence), the "nounlock" variable holds onto the time-point of the direct object or predicate nominative for a specific verb. Since the auditory engram being fetched is already in the proper case, there may not be any need to specify any parameters during the search.

2 Fri.5.APR.2013 -- Orchestrating Flags in NounPhrase

As we run the English JSAI at length without human input and with the inclusion of diagnostic "alert" messages, we discover that the JSAI is sending a positive "dirobj" flag into NounPhrase without checking first for a positive "predflag".

3 Sat.6.APR.2013 -- Abandoning Obsolete Number Code

Yesterday we commented out NounPhrase code which was supposed to "make sure of agreement; 18may2011" but which was doing more harm than good. The code was causing the AI to send the wrong form of the self-concept "701=I" into the SpeechAct module. Now we can comment out our diagnostic "alert" messages and see if the free AI source code is stable enough for an upload to the Web. Yes, it is.

Sunday, March 17, 2013

mar16dkpj

The DeKi Programming Journal (DKPJ) is both a tool in coding German Wotan open-source artificial intelligence (AI) and an archival record of the history of how the German Supercomputer AI evolved over time.

1 Thurs.14.MAR.2013 -- Seeking Confirmation of Inference

In the German Wotan artificial intelligence with machine reasoning by inference, the AskUser module converts an otherwise silent inference into a yes-or-no question seeking confirmation of the inference with a yes-answer or refutation of the inference with a no-answer. Prior to confirmation or refutation, the conceptual engrams of the question are a mere proposition for consideration by the human user. When the user enters the answer, the KbRetro module must either establish associative tags from subject to verb to direct object in the case of a yes-answer, or disrupt the same tags with the insertion of a negational concept of "NICHT" for the idea known as "NOT" in English.

2 Fri.15.MAR.2013 -- Setting Parameters Properly

Although the AskUser module is asking the proper question, "HAT EVA EIN KIND" in German for "Does Eva have a child?", the concepts of the question are not being stored properly in the Psi conceptual array.

3 Sat.16.MAR.2013 -- Machine Learning by Inference

Now we have coordinated the operation of InFerence, AskUser and KbRetro. When we input, "eva ist eine frau" for "Eva is a woman," the German AI makes a silent inference that Eva may perhaps have a child. AskUser outputs the question, "HAT EVA EIN KIND" for "Does Eva have a child?" When we answer "nein" in German for English "no", the KbRetro module adjusts the knowledge base (KB) retroactively by negating the verb "HAT" and the German AI says, "EVA HAT NICHT EIN KIND", or "Eva does not have a child" in English.

Wednesday, March 13, 2013

mar13dkpj

The DeKi Programming Journal (DKPJ) is both a tool in coding German Wotan open-source artificial intelligence (AI) and an archival record of the history of how the German Supercomputer AI evolved over time.

1 Sat.9.MAR.2013 -- Making Inferences in German

When the German Wotan AI uses the InFerence module to think rationally, the AI Mind creates a silent, conceptual inference and then calls the AskUser module to seek confirmation or refutation of the inference. While generating its output, the AskUser module calls the DeArticle module to insert a definite or indefinite article into the question being asked. The AI has been using the wrong article with "HAT EVA DAS KIND?" when it should be asking, "HAT EVA EIN KIND?" When we tweak the software to switch from the definite article to the indefinite article, the AI gets the gender wrong with "HAT EVA EINE KIND?"

2 Tues.12.MAR.2013 -- A Radical Departure

In the AskUser module, to put a German article before the direct object of the query, we may have to move the DeArticle call into the backwards search for the query-object (quobj), so that the gender of the query-object can be found and sent as a parameter into the DeArticle module.

It may seem like a radical departure to call DeArticle from inside the search-loop for a noun, but only one engram of the German noun will be retrieved, and so there should be no problem with inserting a German article at the same time. The necessary parameters are right there at the time-point from which the noun is being retrieved.

3 Wed.13.MAR.2013 -- Preventing False Parameters

When the OldConcept module recognizes a known German noun, normally the "mfn" gender of that noun is detected and stored once again as a fresh conceptual engram for that noun. However, today we have learned that in OldConcept we must store a zero value for the recognition of forms of "EIN" as the German indefinite article, because the word "EIN" has no intrinsic gender and only acquires the gender of its associated noun. When we insert the corrective code into the OldConcept module, finally we witness the German Wotan AI engaging in rational thought by means of inference when we input "eva ist eine frau", or "Eva is a woman." The German AI makes a silent inference about Eva and calls the AskUser module to ask us users, "HAT EVA EIN KIND", which means in English, "Does Eva have a child?" Next we must work on KbRetro to positively confirm or negatively adjust the knowledge base in accordance with the answer to the question.

Friday, March 08, 2013

mar8dkpj

The DeKi Programming Journal (DKPJ) is both a tool in coding German Wotan open-source artificial intelligence (AI) and an archival record of the history of how the German Supercomputer AI evolved over time.

Wed.6.MAR.2013 -- Problems with the WhatBe Module

As we implement InFerence in the Wotan German Supercomputer AI, the program tends to call the WhatBe module to ask a question about a previously unknown word. When we input to the AI, "eva ist eine frau", first Wotan makes an inference about Eva and asks if Eva has a child. Then the AI mistakenly says, "WAS IRRTUM EVA" when the correct output should be "WAS IST EVA". This problem affords us an opportunity to improve the German performance of the WhatBe module which came into the German AI from the English MindForth AI.

First we need to determine which location in the AI source code is calling the WhatBe mind-module, and so we insert some diagnostics. Knowing where the call comes from lets us work on the proper preparation of parameters from outside WhatBe to be used inside WhatBe.

Thurs.7.MAR.2013 -- Dealing with Number in German

We are learning that we must handle grammatical number much differently in the German AI than in the English AI. English generally uses the ending "-s" to indicate plural number, but in German there is no one such simple clue. In German we have a plethora of clues about number, and we can use the OutBuffer to work with some of them, such as "-heit" indicating singular and "-heiten" indicating plural. In German we can also establish priority among rules, such as letting an "-e" ending in the OutBuffer suggest a plural noun, while letting the discovery of a singular verb overrule the suggestion that a noun is in the plural. The main point here is that in German we must get away from the simplistic English rules about number.

Fri.8.MAR.2013 -- Removing Obsolete Influences

In NewConcept let us try changing the default expectation of number for a new noun from plural to singular. At first we notice no problem with a default singular. Then we notice that the InFerence module is using a default plural ("2") for the subject-noun of the silent inference. We tentatively change the default to singular ("1") until we can devise a more robust determinant of number in InFerence.

We are having a problem with the "ocn" variable for "old concept number". Just as with the obsolete "recnum", there is no reason any more to use the "ocn" variable, so we comment out some code.

Tuesday, March 05, 2013

mar5dkpj

The DeKi Programming Journal (DKPJ) is both a tool in coding German Wotan open-source artificial intelligence (AI) and an archival record of the history of how the German Supercomputer AI evolved over time.

1 Sun.3.MAR.2013 -- Problems with AskUser

In our efforts to implement InFerence in the Wotan German AI, we have gotten the AI to stop asking "HABEN EVA KIND?" but now AskUser is outputting "HAT EVA DIE KIND" as if the German noun "Kind" for "child" were feminine instead of neuter. We should investigate to see if the DeArticle module has a problem.

2 Mon.4.MAR.2013 -- Problems with DeArticle

By the use of a diagnostic message, we have learned that the DeArticle module is finding the accusative plural "DIE" form without regard to what case is required. Now we need to coordinate DeArticle more with the AskUser module, so that when AskUser is seeking a direct object, so will DeArticle. There has already long been a "dirobj" flag, but it is perhaps time to use something more sophisticated, such as "dobcon" or even "acccon" for an accusative "statuscon". After a German preposition like "mit" or "bei" that requires the dative case, we may want to use a flag like "datcon" for a dative "statuscon". So perhaps now we should use "acccon" in preparation for using also "gencon" and "datcon" or maybe even "nomcon" for nominative.

3 Tues.5.MAR.2013 -- Coordinating AskUser and DeArticle

A better "statuscon" for coordinating between AskUser and DeArticle is "dbacon", because it can be used for all four declensional cases in German. When we use "dbacon" and when we make the "LEAVE" statement come immediately after the first instance of selecting an article with the correct "dbacon", we obtain "HAT EVA DAS KIND" as the question from AskUser after the input of "eva ist eine frau". We still need to take gender into account, so we may declare a variable of "mfncon" to coordinate searches for words having the correct gender.

Saturday, March 02, 2013

mar2dkpj

The DeKi Programming Journal (DKPJ) is both a tool in coding German Wotan open-source artificial intelligence (AI) and an archival record of the history of how the German Supercomputer AI evolved over time.

1 Sat.2.FEB.2013 -- Improving the AskUser Module

To begin a yes-or-no question in German, a form of the verb has to be generated either by a parameter-search or by VerbGen. We will first try the parameter-search using dba for person and nphrnum for number.

2 Tues.26.FEB.2013 -- Assigning Number to a New Noun

For learning a new noun in German, we need to use the OutBuffer in the process of assigning grammatical number to any new noun. We can use a previous article to suggest the number of a noun, and we may impose a default number which may be overruled first by indications obtained from OutBuffer-analysis and secondly by the continuation with a verb that reveals the number of its subject.

For OutBuffer-analysis, we may impose various rules, such as that a default presumption of singular number may be overruled by certain word-endings such as "-heiten" or "-ungen" which would rather clearly indicate a plural form. We may not so easily presume that endings in "-en" or "-e" indicate a plural, because a singular noun may have such an ending. An ensuing verb is a much better indicator of the perceived number of a noun than the ending of the noun is.

Although we may be tempted to detect the ensuing singular verb "ist" and use it to retroactively establish a noun-number as being singular, it may be simpler to use the OutBuffer to look for singular verbs that end in "-t", such as "ist" or "geht". Likewise, a verb ending in "-n" could indicate a plural subject. So should the default presumption for a German noun be singular or plural?
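
Whatever default we settle on, the ending-based part of the analysis could be sketched roughly as follows; the function names and regular expressions are illustrative assumptions, not Wotan code, and they deliberately do not treat "-en" or "-e" as plural cues.

  function guessNounNumber(noun, defaultNum) {
    if (/(heiten|ungen)$/i.test(noun)) return 2;  // "-heiten" and "-ungen" rather clearly mark plurals
    return defaultNum;                            // otherwise fall back on the chosen default
  }
  function numberFromVerb(verb) {
    if (/t$/i.test(verb)) return 1;   // "ist", "geht" point to a singular subject
    if (/n$/i.test(verb)) return 2;   // "gehen", "haben" point to a plural subject
    return 0;                         // no evidence either way
  }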

3 Wed.27.FEB.2013 -- Assigning Plural Number by Default

In both German and English, we should probably make the default presumption be plural for new nouns being learned. Then we have a basic situation to be changed retroactively if a singular verb is detected. So let us examine the NewConcept module to see if we can set a plural value of "2" there on the "num" which will be imposed in the InStantiate module.

When we set a num default of "2" for plural in NewConcept and we run the German AI, the value of "2" shows up for a new noun in both the ".psi" report and the ".de" lexical report. Next we need to work on retroactively changing the default value on the basis of detecting a singular verb.

We have tried various ways to detect the "T" at the end of the input of the verb "IST". In the InStantiate module, we were able to test first for a pov of external input and then for the value of the OutBuffer rightmost "b16" value. Thus we were able to detect the ending "T" on the verb. Immediately we face the problem of how retroactively to change the default number of the subject noun from "2" for plural to "1" for singular.

Changing anything retroactively is no small matter in the Wotan German AI, because other words may have intervened between the alterand subject-noun and the determinant verb. We have previously worked on assigning tqv and seq values retroactively from a direct object back to a verb, so we do have some experience here.

4 Thurs.28.FEB.2013 -- Creating the RetroSet Module

Today we will try to create a RetroSet mind-module for retroactively setting parameters like the number of a new subject-noun which has been revealed to be singular in number because it was followed by a singular verb-form, such as "IST" or "HAT" in German. First we must figure out where to place the RetroSet module in the grand scheme of a Forth AI program. Since the "T" at the end of a German verb is discovered in the InStantiate module, we could either call RetroSet from InStantiate, or use a "statuscon" variable to set a flag that will call RetroSet from higher up in the Wotan AI program. Let us create a "numcon" flag that can be set to call RetroSet and then immediately be reset to zero. Since InStantiate is called from the DeParser module, we should perhaps let DeParser call RetroSet.

Now we have stubbed in the RetroSet AI mind-module just before the DeParser mind-module in the Wotan German artificial intelligence. RetroSet diagnostically displays the positive value of the numcon flag and then resets the flag to zero. In future coding, we will use the numcon flag not only to call RetroSet but also to change the default value of "2" for plural to "1" for singular in the case of a new German noun that the Wotan AI is learning for the first time.

5 Fri.1.MAR.2013 -- Implementing RetroSet in the German AI

In the German Wotan potentially superintelligent AI, the AudListen module sets time-of-seqneed ("tsn") as a time-point for searches covering only current input from the keyboard into the AI Mind. In the new RetroSet module, we may use "tsn" as a parameter to restrict a search for a subject-noun to only the most recent input to the AI. However, "tsn" is apparently being reset for each new word of input, so we switch to using time-of-voice ("tov") and we get better results. We input "eva ist eine frau" and RetroSet retroactively changes the default plural on "EVA" from a two to a one for singular. Next we need to troubleshoot why we are not getting a better question from AskUser.
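
In outline, the retroactive correction amounts to something like this JavaScript-flavored sketch (the Wotan module itself is in Forth); the Psi node layout, the use of part-of-speech code 5 for nouns, and the variable names are assumptions made only for illustration.

  var numcon = 0;   // set when a singular verb-ending such as the "T" of "IST" is detected
  function retroSet(Psi, tov) {
    if (numcon > 0) {
      for (var t = tov; t < Psi.length; t++) {   // search only the current input, from time-of-voice onward
        if (Psi[t] && Psi[t].pos === 5 && Psi[t].num === 2) {
          Psi[t].num = 1;                        // change the default plural back to singular
        }
      }
      numcon = 0;                                // reset the flag at once
    }
  }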

Friday, September 28, 2012

sep27ruai

These notes record the coding of the Russian AI Mind Dushka in JavaScript for Microsoft Internet Explorer (MSIE).

1. Thurs.27.SEP.2012 -- Shortening Test-Range for Verb-Recog

The Dushka Russian artificial intelligence (RuAi) is not properly recognizing a second-person singular verb-form in the ruLexicon Russian lexical array. When we type in the Cyrillic of "Ty veedyeesh menya" for "You see me," the Russian verb is being recorded in the ruLexicon with an erroneous value of "1" for first person instead of "2" for second person.

Apparently the AudListen code for discrimination among grammatical persons was written too specifically for verbs like "dyelayesh" in January of 2012. We may be able to relax the strictness of comparisons by not testing for the vowel just before the personal ending.

We went into the AudListen code for recognizing "delayesh" in the second-person singular and we commented out just the test for the vowel. Then we ran Dushka and immediately the RuAi was able to recognize "Ty veedyeesh menya" properly for "You see me" and the AI answered "Ya veezhoo tebya" for "I see you". This instance was one of the easiest bug-fixes of our Russian AI coding experience. Next we may need to comment out the vowel-tests for the other personal forms of a present-tense Russian verb.
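
The relaxed test amounts to looking only for the personal ending itself. A minimal sketch, with an illustrative function name:

  function looksSecondPersonSingular(verb) {
    return /шь$/.test(verb);   // "видишь" and "делаешь" both end in the second-person singular "-шь"
  }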

Immediately we wonder if the whole present-tense paradigm will start working properly for most if not all the Russian verb conjugations when we stop testing for the vowel inside the inflectional ending. It also occurs to us that the RuAi may start learning Russian verb-forms regardless of the numbered conjugations thought up by human scholars of philology over the centuries since Greek and Roman times. If we tweak the recognition-code that we implemented for one conjugation and it starts to work for all the conjugations, then we may have accidentally bypassed the whole issue of worrying about how to deal with different Russian conjugations.

2. Fri.28.SEP.2012 -- Non-Russian Troubleshooting of ru120926

Working today on an old computer where we cannot type in Cyrillic, we may nevertheless use a special ru120926T.html test version of the ru120926.html Russian artificial intelligence (RuAi) to determine why the RuAi suddenly says "OSHEEBKA" ("error") rather early in its operation without human input (and therefore without Cyrillic typing).

The first place to look for the cause of the problem is in the NounPhrase module which erroneously outputs OSHEEBKA instead of a correct direct object.

Well, isn't that situation weird? First we put a diagnostic "alert" message at the start of NounPhrase, and we got nowhere -- nothing of value was revealed. Next we put a diagnostic alert in NounPhrase where there was a chance for "subjectflag" to change from its default value of one ("1") to a zero in the presence of either a direct object or a predicate nominative. Still nothing special was revealed. We finally got results when we inserted a conditional alert message to tell us what "motjuste" had been chosen in the condition of looking for a non-subject. The RuAi told us that it had selected concept number "704" just before erroneously outputting the "OSHEEBKA" error message. We recognized "704" as having to be a personal pronoun, but which one? It used to be the accusative case "MENYA" of the Russian pronoun number 701 "YA" for English "I". We no longer use number "704" as a separate concept, because "701" takes care of all forms of "YA" under the influence of the "dba" parameter for the grammatical case involved. The number "704" only shows up in obsolete code that we need to remove from the Russian AI.

When we comment out some legacy NounPhrase code that was invoking the concept number "704", the RuAi stops saying "OSHEEBKA" and declares that the motjuste is concept number "701" or the Russian word "YA" in the nominative for English "I". This result is not satisfactory. There should perhaps have been a "nounlock" after the verb "PONIMAYU". We may have to get rid of the "audme" variable not only in the Russian AI but also in the Forth and JavaScript English AI Minds, then find a form of "ME" through a search based on parameters.
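
The parameter-based search hinted at here might look like the following sketch; the ruLexicon layout and the function name are hypothetical, but the point is that one concept number (701) plus the "dba" case parameter should retrieve every form of the pronoun.

  var ruLexicon = [
    { psi: 701, dba: 1, word: "Я" },      // nominative "I"
    { psi: 701, dba: 4, word: "МЕНЯ" }    // accusative "me"
  ];
  function findPronounForm(psi, dba) {
    for (var i = 0; i < ruLexicon.length; i++) {
      if (ruLexicon[i].psi === psi && ruLexicon[i].dba === dba) {
        return ruLexicon[i].word;
      }
    }
    return null;
  }

A call such as findPronounForm(701, 4) would then yield "МЕНЯ" without any need for the obsolete concept "704".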

Thursday, August 30, 2012

aug23ruai

These notes record the coding of the Russian AI Mind Dushka in JavaScript for Microsoft Internet Explorer (MSIE).

1. Thurs.23.AUG.2012 -- Diagnosing Selection of Subjects

As we troubleshoot the Dushka Russian AI in JavaScript for Microsoft Internet Explorer (MSIE), probably the first point of departure must be inserting a diagnostic "alert" message to let us know how the NounPhrase module is selecting the subject for a sentence of thought in Russian. No matter how a subject is chosen, we want the verblock mechanism to force the retrieval of a particular verb from the so-called IdeaPlex.

Our first major problem after some human input is that NounPhrase selects as most activated a potential subject of "tebya" with a carried-over, spurious "verblock" that does not even lead to a verb, but rather to "tebya" itself. We have probably solved this problem already in the English JSAI.

By searching downwards for "tqv" (the source of "verblock") simultaneously in the Russian JSAI and the English JSAI, we discover that in the English JSAI on 15aug2012 we inserted into InStantiate a line of code to prevent spurious carry-overs of the "tqv" value when "seq" is at zero. Now we insert the same code into the Russian JSAI. Running the AI, we do not get an improvement. Then at the start of WhoBe we also insert the zeroing out of "tqv" taken from the English JSAI. Still there is no improvement. In the Russian AI, we then introduce into WhatBe the same tqv-zeroing as was done in the English JSAI. Again there is no improvement.
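
The guard taken over from the English JSAI is essentially a one-line rule; a minimal sketch with hypothetical local names:

  function guardTqv(seq, tqv) {
    return seq === 0 ? 0 : tqv;   // a zero "seq" must not hand its "tqv" verblock on to the next word
  }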

2. Fri.24.AUG.2012 -- Affecting Activation of Subjects

It may be necessary to implement code that will switch from an oblique case of an activated concept to its nominative nodes, so that those nodes can serve as the subjects of incipient thoughts.

If a direct object is left activated at the end of a sentence, all the nodes of that concept should receive a blanket activation through OldConcept or NounAct. Then NounPhrase may choose nominative nodes as candidates for the subject of a sentence. (Maybe we should make nominative nodes receive a higher activation.) So the process of having a residually activated concept switch from being a direct object in an old thought to being a subject in a new thought should work by whatever mechanism puts a blanket activation on all the nodes of a concept.
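
A blanket-activation routine along those lines might be sketched as follows; the activation values 40 and 8 are arbitrary placeholders, and the node layout is illustrative rather than the actual Dushka arrays.

  function nounAct(Psi, concept) {
    for (var t = 0; t < Psi.length; t++) {
      if (Psi[t] && Psi[t].psi === concept) {
        Psi[t].act = 40;                          // blanket activation on every node of the concept
        if (Psi[t].dba === 1) Psi[t].act += 8;    // optional bonus for nominative nodes
      }
    }
  }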

3. Tues.28.AUG.2012 -- Finding "verblock" Verb-forms

We need to put in some diagnostic messages and see what residual activation occurs for a direct object.

Today in the VerbPhrase module we are building up some code which, in the presence of a positive verblock, will still go to the "verblock" time-point in the Ru-array but will not automatically accept the verb-form originally deposited there, typically during human input. Instead, the new code conducts a search of the ruLexicon to find a verb-form with the correct number and person. Initially we forgot to search for the concept-number, so we accidentally got the correct ending but on the wrong verb.
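
The search being built up can be sketched as follows; the field names are assumptions, but the essential point is that concept number, person and grammatical number must all match, or else the right ending lands on the wrong verb.

  function findVerbForm(ruLexicon, concept, dba, num) {
    for (var i = ruLexicon.length - 1; i >= 0; i--) {   // most recent engrams first
      var node = ruLexicon[i];
      if (node.psi === concept && node.dba === dba && node.num === num) {
        return node.aud;    // recall-vector pointing into auditory memory
      }
    }
    return 0;               // no suitable form found; VerbGen must generate one
  }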

4. Wed.29.AUG.2012 -- VerbGen Returns Inveniend Verb-stem

Yesterday we made some major progress in getting the RuAi to search for correct Russian verb forms, but the new code was not yet perfect, so today we need to make improvements. However, we should probably save and archive yesterday's version so that we can recover from any unforeseen errors.

Now there is a problem because the new, integrated search-code is finding the correct archival verb-form, if it is available, but the verb is appearing in duplicate. Apparently the rest of the VerbPhrase code is finding a "vphraud" recall-vector all over again. We should be able to thwart that phenomenon.

As we start to prepare some documentation of the AudBuffer, OutBuffer and VerbGen modules, we notice that our Russian AI code needs to make use of pertinent variables such as the "gencon" status flag and the "audbase" recall-vector to identify the verb whose inflectional ending must be changed. As soon as we use "audbase" in our code, the Russian AI stops switching to a different verb and at least outputs the stem of the verb that we are trying to change. Since we have also set the "gencon" flag, VerbPhrase calls VerbGen but does not make its normal main call to SpeechAct, so we do not get an extra verb-form as output.

5. Thurs.30.AUG.2012 -- VerbGen Needs "dba" Parameter

Yesterday VerbGen was returning only the stem of an inveniend verb and not the inflected personal ending. However, delivering the stem was a major improvement in the Russian AI functionality. Today we found that we needed only to set the "dba" parameter properly before calling VerbGen, and the Russian AI was able to provide a correct form of the required verb.

Tuesday, July 17, 2012

jul06mfpj

MindForth Programming Journal


1 Fri.6.JUL.2012 -- Debugging after Major Code Revision

In the MindForth artificial intelligence (AI) we are now letting the AI run in tutorial mode without human input in order to troubleshoot any glitches that occur after the major changes of the most recent release. Without human intervention and under the influence of the KbTraversal module, the AI calls various subroutines to prompt a dialog with any nearby human. We observe some glitches that are due perhaps to a lack of proper parameters when a subroutine is called. We intend to debug the calling of the various subroutines so that we may display an AI Mind that thinks rationally not only when left to its own devices but also when the AI must think in response to queries or comments from human users.


2 Sat.7.JUL.2012 -- Solving a Problem with WhatAuxSDo

In the course of letting MindForth run without human input, we noticed that eventually the WhatAuxSDo module was called for the subject of concept #56 "YOU" and the AI erroneously asked "WHAT DO ERROR DO". By inserting a diagnostic message, we learned that WhatAuxSDo was not finding a "subjnum" value for the #56 "YOU" concept and thus could not find the word "YOU" in a search of the English "En" array. We went into the EnBoot sequence and changed the "num" value for "YOU" from zero ("0") to one ("1"). The AI correctly said, "WHAT DO YOU DO". However, we may need to debug even further and find out why the proper value of "num" for "YOU" is not being set during the output.


3 Sun.8.JUL.2012 -- Tightening Code for Searchability

When we search the free AI source code for "2 en{", which should reveal any storing or retrieval of a "num" value, we do not find any code for storing "num" in the English lexical array. Therefore we search for "5 en{" to see where the part-of-speech "pos" is stored. We do so, and still we do not find what we need. Then we repeat the search with an extra blank space between the "5" and the "en{", and we discover that a form of "pos" is stored both in EnVocab and in OldConcept. At the same time we see that "num" is also stored in the same two mind-modules. Now we should be able to troubleshoot the problem and find out why the English lexical "num" is not being stored during processes of thought. First, however, we will tighten up the code so that only one space intervenes, for future occasions when we are trying to find instances of array-manipulation code.


4 Wed.11.JUL.2012 -- Num(ber) in the English Lexical Array

We need to discover where elements of the flag-panel are inserted into nodes of the English lexical array, so that the "num(ber)" value may be stored properly as the AI Mind continues to think and to respond to queries from human users.


5 Fri.13.JUL.2012 -- Correcting Fundamental Flaws

Today in the EnBoot English bootstrap module we are making a blanket change by moving the EnVocab calls down to be on the same line of code as the calls to InNativate, so that the "num(ber)" setting will go properly into EnVocab. Our recent troubleshooting has revealed that WhatAuxSDo needs to find a "num" value in the English lexical array in order to function properly.


6 Sat.14.JUL.2012 -- Tracking num(ber) Values

Next we need to zero in on how the AI assigns "num(ber)" tags during the recognition of words. In OldConcept, it may be necessary to store a default, such as "num" or "unk" and then to test for any positive "ocn" that will simply override the default.

Since we rely on OldConcept to store the number tag, we may need to track where the number-value comes from. AudInput has some sophisticated code which tentatively assigns a plural number when the character "S" is encountered as the last letter in a word. In the work of 4nov2011 we started assigning zero as a default number for the sake of the EnArticle module, but we may need to change the AudInput module back to assigning one ("1") as the default number.


7 Mon.16.JUL.2012 -- Avoiding Unwarranted Number Values

If the most recent "num(ber)" of a word like "ROBOTS" is found to be "2" for plural, we do not want the AI to make the false assumption that the "num(ber)" of the "ROBOTS" concept is inherently plural. Yet we want words like "PEOPLE" or "CHILDREN" to be recognized as being plural.
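
A rough sketch of that compromise, with a purely illustrative table of known plurals, might run as follows.

  var knownPlurals = { "PEOPLE": 2, "CHILDREN": 2 };    // stored lexical knowledge, illustrative only
  function guessNum(word) {
    if (knownPlurals[word]) return knownPlurals[word];  // genuine plurals carry their number as knowledge
    return /S$/.test(word) ? 2 : 1;                     // a final "S" is only a tentative plural cue
  }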


8 Tues.17.JUL.2012 -- Making Sure of Lexical Number

We may need to go into the NounPhrase subject-selection process and capture the num(ber) value of the lexical item being re-activated within the English lexical array.

Monday, July 02, 2012

jun29mfpj

MindForth Programming Journal

1 Fri.29.JUN.2012 -- IdeaPlex: Sum of all Ideas

The sum of all ideas in a mind can be thought of as the IdeaPlex. These ideas are expressed in human language and are subject to modification or revision in the course of sensory engagement with the world at large.

The knowledge base (KB) in an AiMind is a subset of the IdeaPlex. Whereas the IdeaPlex is the sum totality of all the engrams of thought stored in the AI, the knowledge base is the distilled body of knowledge which can be expanded by means of inference with machine reasoning or extracted as responses to input-queries.

The job of a human programmer working as an AI mind-tender is to maintain the logical integrity of the machine IdeaPlex and therefore of the AI knowledge base. If the AI Mind is implanted in a humanoid robot, or is merely resident on a computer, it is the work of a roboticist to maintain the pathways of sensory input/output and the mechanisms of the robot motorium. The roboticist is concerned with hardware, and the mind-tender is concerned with the software of the IdeaPlex.

Whether the mind-tender is a software engineer or a hacker hired off the streets, the tender must monitor the current chain of thought in the machine intelligence and adjust the mental parameters of the AI so that all thinking is logical and rational, with no derailments of ideation into nonsense statements or absurdities of fallacy.

Evolution occurs narrowly and controllably in one artilect installation as the mind-tenders iron out bugs in the AI software and introduce algorithmic improvements. AI evolution explodes globally and uncontrollably when survival of the fittest AI Minds leads to a Technological Singularity.


2 Fri.29.JUN.2012 -- Perfecting the IdeaPlex

We may implement our new idea of faultlessizing the IdeaPlex by working on the mechanics of responding to an input-query such as "What do bears eat?" We envision the process as follows. The AI imparts extra activation to the verb "eat" from the query, perhaps first in the InStantiate module, but more definitely in the ReActivate module, which should be calling the SpreadAct module to send activation backwards to subjects and forwards to objects. Meanwhile, if not already, the query-input of the noun "bears" should be re-activating the concept of "bears" with only a normal activation. Ideas stored with the "triple" of "bears eat (whatever)" should then be ready for sentence-generation in response to the query. Neural inhibition should permit the generation of multiple responses, if they are available in the knowledge base.

During response-generation, we expect the subject-noun to use the verblock to lock onto its associated verb, which shall then use nounlock to lock onto the associated object. Thus the sentence is retrieved intact. (It may be necessary to create more "lock" variables for various parts of speech.)

We should perhaps use an input query of "What do kids make?", because MindForth already has the idea that "Kids make robots".
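
The lock-chain retrieval described above can be pictured with the following simplified sketch; in MindForth the locks are variables set during array searches, so the per-node fields here are an illustrative shorthand rather than the actual data structure.

  function retrieveIdea(Psi, subjectTime) {
    var subj = Psi[subjectTime];      // e.g. the node for "KIDS"
    var verb = Psi[subj.verblock];    // "verblock" leads to the associated verb, e.g. "MAKE"
    var obj  = Psi[verb.nounlock];    // "nounlock" leads to the associated object, e.g. "ROBOTS"
    return [subj.word, verb.word, obj.word].join(" ");
  }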


3 Sat.30.JUN.2012 -- Improving the SpreadAct Module

In our tentative coding, we need now to insert diagnostic messages that will announce each step being taken in the receipt of and response to an input-query.

We discover some confusion taking place in the SpreadAct module, where "pre @ 0 > IF" serves as the test for performing a transfer of activation backwards to a "pre" concept. However, the "pre" item was replaced at one time with "prepsi", so apparently the backwards-activation code is never being executed. We may need to test for a positive "prepsi" instead of a positive "pre".

We go into the local, pre-upload version of the Google Code MindForth "var" (variable) wiki-page and we add a description for "prepsi", since we are just now conducting serious business with the variable. Then in the MindForth SpreadAct module we switch from testing in vain for a positive "pre" value to testing for a positive "prepsi". Immediately our diagnostic messages indicate that, during generation of "KIDS MAKE ROBOTS" as a response, activation is passed backwards from the verb "MAKE" to the subject-noun "KIDS". However, SpreadAct does not seem to go into operation until the response is generated. We may need to have SpreadAct operate during the input of a verb as part of a query, in a chain where ReActivate calls SpreadAct to flush out potential subject-nouns by retro-activating them.
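
The repaired test amounts to the following JavaScript-flavored sketch (the module itself is in Forth); the node layout is assumed for illustration.

  function spreadActBackwards(Psi, t) {
    var prepsi = Psi[t].prepsi;            // concept tagged as preceding this verb
    if (prepsi > 0) {                      // test the positive "prepsi", not the defunct "pre"
      for (var i = t - 1; i >= 0; i--) {
        if (Psi[i] && Psi[i].psi === prepsi) {
          Psi[i].act += 1;                 // the conservative one-unit backwards spikelet
          break;
        }
      }
    }
  }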


4 Sat.30.JUN.2012 -- Approaching the "seqneed" Problem

As we search back through versions of MindForth AI, we see that the 13 October 2010 MFPJ document describes our decision to stop having ReActivate call SpreadAct. Now we want to reinstate the calls, because we want to send activation backwards from heavily activated verbs to their subjects. Apparently the .psi position of the "seqpsi" has changed from position six to position seven, so we must change the ReActivate code accordingly. We make the change, and we observe that the input of "What do kids make?" causes the .psi line at time-point number 449 to show an increase in activation from 35 to 36 on the #72 KIDS concept. There is such a small increase from SpreadAct because SpreadAct conservatively imparts only one unit of activation backwards to the "prepsi" concept. If we have trouble making the correct subjects be chosen in response to queries, we could increase the backwards SpreadAct spikelet from one to a higher value.

Next we have a very tricky situation. When we ask, "What do kids make?", at first we get the correct answer of "Kids make robots." When we ask the same question again, we erroneously get, "Kids make kids." It used to be that such a problem was due to incorrect activation-levels, with the word "KIDS" being so highly activated that it was chosen erroneously for both subject and direct object. Nowadays we are starting with a subject-node and using "verblock" and "nounlock" to go unerringly from a node to its "seq" concept. However, in this current case we notice that the original input query of "What do kids make?" is being stored in the Psi array with an unwarranted seq-value of "72" for "KIDS" after the #73 "MAKE" verb. Such an erroneous setting seems to be causing the erroneous secondary output of "Kids make kids." It could be that the "moot" system is not working properly. The "moot" flag was supposed to prevent tags from being set during input queries.

In the InStantiate module, the "seqneed" code for verbs is causing the "MAKE" verb to receive an erroneous "seq" of #72 "KIDS". We may be able to modify the "seqneed" system to not install a "seq" at the end of an input.

When we increased the number of time-points for the "seqneed" system to look backwards from two to eight, the system stopped assigning the spurious "seq" to the #73 verb "MAKE" at t=496 and instead assigned it to the #59 verb "DO" at t=486.


5 Sun.1.JUL.2012 -- Solving the "seqneed" Problem

After our coding session yesterday, we realized that the solution to the "seqneed" problem may lie in constraining the time period during which InStantiate searches backwards for a verb needing a "seq" noun. When we set up the "seqneed" mechanism, we rather naively ordained that the search should try to go all the way back to the "vault" value, relying on a "LEAVE" statement to abandon the loop after finding one verb that could take a "seq".

Now we have used a time-of-seqneed "tsn" variable to limit the backwards searches in the "seqneed" mechanism of the InStantiate module, and the MindForth AI seems to be functioning better than ever. Therefore we shall try to clean up our code by removing diagnostics and upload the latest MindForth AI to the Web.
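
In outline, the constrained search amounts to the following JavaScript-flavored sketch (the InStantiate code itself is in Forth); part-of-speech code 8 for verbs and the node layout are assumptions for illustration.

  function findVerbNeedingSeq(Psi, t, tsn) {
    for (var i = t - 1; i >= tsn; i--) {    // never search back past the time-of-seqneed
      if (Psi[i] && Psi[i].pos === 8 && Psi[i].seq === 0) {
        return i;                           // the first verb still lacking a "seq"
      }
    }
    return -1;                              // nothing in the current input needs a "seq"
  }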

Saturday, February 11, 2012

feb11ruai

Artificial Intelligence in Russian

1. Thurs.9.FEB.2012 -- Unspoken Be-Verbs as a Default

The Russian-speaking artificial intelligence Dushka needs a default BeVerb module that will silently assert itself as the automatic carrier of thought until a non-be-verb takes over from the provisional default. In our coding of a Russian mind, we will assume that any noun or pronoun, beginning a thought in the nominative case, is automatically the subject of a putative BeVerb until proven otherwise. In this way, our cognitive software will prepare for a BeVerb and switch automatically when a non-be-verb occurs.

We should work first on the comprehension of putative be-verbs and second on their generation, so that what we learn in comprehending be-verbs may be used in generating thoughts involving a BeVerb. So we type into the AI a Russian sentence to see if the software can understand it.

Human: душка робот

Robot: ДУШКА ЧТО ДУШКА ТАКОЕ

We said "Dushka is a robot" but the AI responded only, "Dushka -- what is Dushka?" We need to implement a default BeVerb in the comprehension of a sentence that lacks a visible BeVerb.

In the InStantiate module, we can trap for the input of a "c==32" space-bar when the "seqneed" is set to "8" for want of an incoming verb. We may then do something outrageous, but normal for Russian. From InStantiate we may provisionally send into AudMem a space-bar character with an "audpsi" of "800" for the verb БЫТЬ ("to be"), so that the AI is ready to record any noun coming in as a predicate nominative in conjunction with the be-verb. Now, if we implement such an outrageous step, it is possible that our AI memory-banks will become replete with quasi-spurious engrams of infinitive be-verbs that typically do not materialize. It could be that the presence of a spurious be-verb engram will not matter, if the cancellation of the default occurs as soon as some actual verb comes in. Then cancelling the spurious default will involve removing or nullifying any associative tags laid down momentarily during the enactment of the default.
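
Condensed into a sketch, the trap might look like this; the callback name audMem is hypothetical, while the numbers follow the journal entry (32 for the space-bar, "seqneed" 8 for an awaited verb, 800 for БЫТЬ).

  function maybeInsertDefaultBeVerb(c, seqneed, audMem) {
    if (c === 32 && seqneed === 8) {   // a space arrives while a verb is still awaited
      audMem(" ", 800);                // provisional auditory engram standing in for the unspoken be-verb
      return true;                     // a real incoming verb must later cancel this default
    }
    return false;
  }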

2. Fri.10.FEB.2012 -- Instantiating Imaginary Be-Verbs

In the InStantiate module we will now experiment with code to create in auditory memory a pseudo-engram of a non-existent be-verb after the perception of a nominative noun or pronoun. Since the Russian-speaking mind waits for a predicate nominative, it needs at least an imaginary be-verb as the holder of associative links between subject and predicate nominative.

Now inside InStantiate we have assembled the code that creates a be-verb pseudo-engram in the three memory arrays for "Psi" concepts, Russian words and auditory engrams. The Psi node is automatically creating a "pre" tag that links the pseudo-verb back to its subject. We need to implement code that will finish the intermediation of the unspoken Russian BeVerb between its subject and the predicate nominative. The code must also cancel or uninstall the imaginary BeVerb if a real verb occurs instead of the provisionally expected BeVerb.
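
An illustrative sketch of instantiating the imaginary be-verb node (not the Dushka source; the field names are assumptions) shows the backward "pre" link that this entry describes.

  function instantiateDefaultBeVerb(Psi, t, subjectConcept) {
    Psi[t] = {
      psi: 800,             // concept number of the unspoken БЫТЬ
      pos: 8,               // marked as a verb
      pre: subjectConcept,  // backward link to the nominative subject
      seq: 0                // will point at the predicate nominative when it arrives
    };
    return Psi[t];
  }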

3. Sat.11.FEB.2012 -- Integration of Default Be-Verbs

We have the AI pretending that a BeVerb comes in after a nominative subject, and now we need to create the "seq" tag from the subject to the default BeVerb. First in the InStantiate module we insert a line of code declaring that the pseudo-be-verb is indeed a verb with respect to its part of speech, so that the following code will try to reach backwards to the subject engram and install a "seq" tag referring to the now not-so-imaginary BeVerb. We run the Dushka AI and we type in, ты робот -- which is Russian for "You are a robot", but without the be-verb. We are puzzled when Dushka answers, Я ЧТО Я ТАКОЕ ("I -- WHAT AM I?") and that's all she wrote. It may indicate that her concept of self has been activated by the input referring to "you", but she does not seem to have understood the input. We check the diagnostic display, and we see that her concept of self now has a "seq" tag referring right back to herself instead of to the default Russian BeVerb. What went wrong? We look at the JavaScript source code again, and we see that it was not enough to set the part-of-speech as a verb. We go ahead and we set the Psi concept-number to be that of the Russian be-verb. Then we run the Russian AI again with the same input and we sit there in shock when the AI announces to us: Я РОБОТ. Dushka has just said to us, "I AM A ROBOT" in Russian. From the diagnostic display we discover that the same changes that made Dushka able to understand the idea, made her able to think the idea.