Strong AI

Purpose: Discussion of Strong AI Minds thinking in English, German or Russian.

Tuesday, March 21, 2017

mtx

Machine Translation by Artificial Intelligence

As an independent scholar in polyglot artificial intelligence, I have just today on March 21, 2017, stumbled upon a possible algorithm for implementing machine translation (MT) in my bilingual Perlmind and MindForth programs. Heretofore my Ghost Perl AI has thought in either English or Russian, but not in both languages interchangeably. Likewise my Forth AI MindForth thinks in English, while its Teutonic version Wotan thinks in German.

Today, like Archimedes crying "Eureka" in the bathtub (though I was showering, not displacing bath-water), I realized that I could add an associative tag mtx to the flag-panel of each conceptual memory engram, linking and cross-identifying any concept in one language with the same concept in another language. The mtx variable stands for "machine-translation xfer (transfer)". The AI software will use the spreading-activation SpreadAct module to transfer activation from a concept in English to the same concept in Russian or German.

Assuming that an AI Mind can think fluently in two languages, with a large vocabulary in both languages, the nub of machine translation will be the simultaneous activation of semantically the same set of concepts in both languages. Thus the consideration of an idea expressed in English will transfer the conceptual activation to a target language such as Russian. The generation modules will then generate a translation of the English idea into a Russian idea.
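
As a thought experiment, a minimal Perl sketch of the idea might look as follows; the %psy hash, the concept numbers and the SpreadAct body here are illustrative assumptions, not the actual Ghost Perl AI code.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical sketch of the proposed mtx tag: each conceptual engram
    # carries a flag-panel, and mtx cross-identifies the same concept in
    # the other language.
    my %psy = (
        64  => { word => 'DOG',    lang => 'en', act => 0, mtx => 564 },
        564 => { word => 'SOBAKA', lang => 'ru', act => 0, mtx => 64  },
    );

    sub SpreadAct {                            # spreading activation
        my ( $psi, $boost ) = @_;
        $psy{$psi}{act} += $boost;             # activate the source concept
        my $xfer = $psy{$psi}{mtx};            # machine-translation transfer tag
        $psy{$xfer}{act} += $boost if $xfer;   # co-activate the counterpart concept
    }

    SpreadAct( 64, 32 );                              # thinking of English "DOG"...
    print "$psy{564}{word} act = $psy{564}{act}\n";   # ...activates Russian SOBAKA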

Inflectional endings will not pass from the source language directly to the target language, because the mtx tag identifies only the basic psi concept in both languages. The generation modules of the target language will assign the proper inflections as required by the linguistic parameters governing each sentence being translated.

Thursday, March 16, 2017

pmpj0316

2017-03-15: Porting AudRecog and AudMem from Forth into Perl

We start today by taking the 336,435 bytes of ghost176.pl from 2017-03-14 and renaming it as ghost177.pl in a text editor. Then in the Windows XP MS-DOS prompt we run the agi00045.F MindForth program of 166,584 bytes from 2016-09-18 in order to see a Win32Forth window with diagnostic messages and a display of "you see dogs" as input and "I SEE NOTHING" as a default output. From a NeoCities upload directory we put the agi00045.F source code up on the screen in a text editor so that we may use the Forth code to guide us in debugging the Perl Strong AI code.

Although in our previous PMPJ entry from yesterday we recorded our steps in trying to get the Perl AudRecog mind-module to work as flawlessly as the Forth AudRecog, today we will abandon the old Perl AudRecog and create a new Perl AudRecog from scratch, just as we did with the Forth AudRecog in 2016 when we were unable to tweak the old Forth AudRecog into a properly working version. So we stub in a new Perl AudRecog() and we comment out the old version by renaming it "OldAudRecog()". Then we run "perl ghost177.pl" and the AI still runs, but it treats every word of both input and output as a new concept, because the new AudRecog is not yet recognizing any English words.

Next we start porting the actual Forth AudRecog into Perl, but we must hit three of our Perl reference books to learn how to translate the Forth code testing ASCII values into Perl. We learn about the Perl "chr" function, which converts an ASCII value such as CR-13 or SPACE-32 into a character, so that we may test input characters against those ASCII values.
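
A minimal sketch of such a test follows; the variable $pho for the current phoneme-character is an assumed name.

    use strict;
    use warnings;

    my $pho = "\r";                                 # pretend input: a carriage return
    print "carriage return\n" if $pho eq chr(13);   # CR    = ASCII 13
    print "space\n"           if $pho eq chr(32);   # SPACE = ASCII 32
    printf "ord() gives back the ASCII value: %d\n", ord($pho);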

Now we have faithfully ported the MindForth AudRecog into Perl, but words longer than one character are not being recognized. Let us comment out AudMem() by naming it OldAudMem() and let us start a new AudMem() from scratch as a port from MindForth.

We port the AudMem code from Forth into Perl, but we may not be getting the storage of SPACE or CR carriage-return.

2017-03-16: Uploading Ghost Perl Webserver Strong AI

Now into our third day in search of stable Perlmind code, we take the 344,365 bytes of ghost177.pl from 2017-03-15 and we save a new file as the ghost178.pl AI. We will try to track the passage of characters from AudInput to AudMem to AudRecog.

Through diagnostic messages in AudRecog, we discovered that a line of code meant to "disallow audrec until last letter of word" was zeroing out $audrec before the transfer from the end of AudRecog to AudMem.
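
A hypothetical reconstruction of the fix, using variable names from the text (the concept number 830 is only an example value): the guard that "disallows audrec until last letter of word" must not fire on the terminating SPACE or CR, or it wipes $audrec before AudMem can store it.

    use strict;
    use warnings;

    my ( $pho, $audrec ) = ( 13, 830 );   # end of word; a recognition is pending
    if ( $pho != 32 && $pho != 13 ) {     # corrected guard: zero only mid-word
        $audrec = 0;                      # disallow audrec until last letter
    }
    print "audrec passed on to AudMem: $audrec\n";   # 830 survives the hand-off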

In a departure from MindForth, we are having the Perl AudRecog mind-module fetch only the most recent recognition of a word. In keeping with MindForth, we implement the auditory storing of a $nxt new concept in the AudInput module, where we also increment the value of $nxt, instead of doing so in the NewConcept module.
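
A sketch of that assignment, assuming $nxt is the counter for the next new concept and $audrec holds the result coming back from AudRecog (the starting value 901 is an arbitrary illustration):

    use strict;
    use warnings;

    my $nxt    = 901;      # next available concept number
    my $audrec = 0;        # AudRecog returned no recognition
    if ( $audrec == 0 ) {  # unrecognized word: make it a new concept
        $audrec = $nxt;    # assign the $nxt concept number
        $nxt++;            # increment here in AudInput, not in NewConcept
    }
    print "new concept #$audrec; next new concept will be #$nxt\n";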

Tuesday, March 14, 2017

pmpj0314

PerlMind Programming Journal
Updating the Ghost Perl AI in conformance with MindForth AI.

Today we return to Perl AI coding after updating the MindForth code in July and August of 2016. In Forth we re-organized the calling of the subordinate mind-modules beneath the MainLoop module so that MainLoop no longer calls the Think module directly but instead calls the FreeWill module first, so that eventually the FreeWill or Volition module will call Emotion and Think and Motorium.
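
A minimal sketch of that re-organized calling order follows; the module bodies are stubs, and a single pass stands in for the actual main loop.

    use strict;
    use warnings;

    sub Emotion  { print "Emotion\n" }    # stub
    sub Think    { print "Think\n" }      # stub
    sub Motorium { print "Motorium\n" }   # stub

    sub FreeWill {     # Volition, now called by MainLoop
        Emotion();
        Think();
        Motorium();
    }

    sub MainLoop {
        FreeWill();    # no longer calls Think() directly
    }

    MainLoop();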

We have discovered, however, that the MindForth code properly handles input that encounters a bug in the Perl code, so we must first debug the Perl code. When we enter "you see dogs", MindForth properly answers "I SEE NOTHING", which is the default output for anything involving VisRecog, since we have no robot camera eye attached to the Mind program. The old Perl Mind, however, incorrectly recognizes the input of "DOGS" as if it were a form of the #830 "DO" verb, and so we must correct the Perl code by making it as good as the Forth code. So we take the 335,790 bytes of ghost175.pl from 2016-08-07 and we rename it as ghost176.pl for fresh coding.

We start debugging the Perl AudRecog module by inserting a diagnostic message to reveal the "$audpsi" value at the end of AudRecog. We learn that "DOGS" is misrecognized as "DO" when the input length reaches two characters. We know that MindForth does not misrecognize "DOGS", so we must determine where the Perl AudRecog algorithm diverges from the Forth algorithm. We are fortunate to be coding the AI in both Forth and Perl, so that in Perl we may implement what already works in Forth.

In Perl we try commenting out some AudRecog code that checks for a $monopsi. The AI still misrecognizes "DOGS" as the verb "DO". Next we try commenting out some Perl code that declares a $psibase when the incoming word-length is only two. The AI still misrecognizes. Next we try commenting out a declaration of $subpsi. We still get misrecognition. We try commenting out another $psibase. Still misrecognition. We even try commenting out a major $audrec declaration, and we still get misrecognition. When we try commenting out a $prc declaration, AudRecog stops recognizing the verb "SEE". Then from MindForth we bring in a provisional $audrec, but the verb "SEE" is not being recognized.
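
For illustration only (this is not the actual AudRecog code), the following sketch shows how a premature match can report "DOGS" as the verb "DO"; concept #830 for "DO" is from the text, while the other numbers are assumed.

    use strict;
    use warnings;

    my %aud = ( DO => 830, SEE => 835, DOGS => 865 );   # word => concept
    my $input = 'DOGS';

    for my $word ( sort keys %aud ) {   # naive matching fires too early
        print "premature match: $word (#$aud{$word})\n"
            if substr( $input, 0, length $word ) eq $word;   # DO and DOGS both fire
    }
    for my $word ( sort keys %aud ) {   # matching only at end of word
        print "end-of-word match: $word (#$aud{$word})\n"
            if $word eq $input;                              # only DOGS fires
    }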

Although at the MS-DOS CLI prompt we evidently cannot run MindForth and the Perlmind simultaneously, today we learn that we can run MindForth, leave the Win32Forth window open, and then go back to running the Perl AI. Thus we can compare the diagnostic messages in both Forth and Perl so as to further debug the Perl AI. We notice that the Forth AudMem module sends a diagnostic message for the blank-space ASCII 32 even after "SEE", which the Perl AI does not do.

Saturday, August 27, 2016

mfpj0827

MindForth Programming Journal (MFPJ)
The MindForth Programming Journal (MFPJ) is both a tool in developing MindForth open-source artificial general intelligence (AGI) and an archival record of the history of how the AGI Forthmind evolved over time.

Sat.27.AUG.2016 -- Creating the MindGrid trough of inhibition

In agi00031.F we are trying to figure out why we have lost the functionality of ending human input with a 13=CR and still getting a recognition of the final word of the input. We compare the current AudMem code with the agi00026.F version, and there does not seem to be any difference. Therefore the problem probably lies in the major revisions made recently to the AudInput module.

From the diagnostic report messages that appear when we run the agi00031.F, it looks as though the 13=CR carriage return is not getting through from the AudInput module to the AudMem module. When we briefly insert a revealing diagnostic into the agi00026.F AudMem start, we see from "g AudMem: pho= 71" and "o AudMem: pho= 79" and "d AudMem: pho= 68" and "AudMem: pho= 13" that the carriage-return is indeed getting through. Therefore in AudInput we need to find a way of sending the final 13=CR into AudMem. Upshot: It turns out that in AudInput we only had to restore "pho @ 31 > pho @ 13 = OR IF \ 2016aug27: CR, SPACE or alphabetic letter" as a line of code that would let 13=CR be one of the conditions required for calling the AudMem module.
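
For comparison with the Perl Mind, that restored Forth guard "pho @ 31 > pho @ 13 = OR IF" amounts to the following test; AudMem here is a stub, and $pho holds the ASCII value of the current phoneme.

    use strict;
    use warnings;

    sub AudMem { my ($pho) = @_; print "AudMem: pho= $pho\n" }   # stub

    my $pho = 13;                      # the carriage return ending the input
    if ( $pho > 31 or $pho == 13 ) {   # CR, SPACE or alphabetic letter
        AudMem($pho);                  # 13=CR now gets through to AudMem
    }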

Next in the InStantiate module we need to remove a test that only lets words with a positive "rv" recall-vector get instantiated, because we must set "rv" to zero for personal pronouns being re-interpreted as "you" or "I" during communication with a human user. Apparently the Perlmind just ignores the engrams with a zero "rv" and finds the correct forms with a search based on parameters.

Now we would like to see how close we are to fulfilling all the conditions for a proper "trough" of inhibition in the AI MindGrid. When we run the ghost175.pl Perl AI and we enter "You know God," we see negative activations in the present-most trough of both the input and the concepts of "I HELP KIDS" as the output. In the Forth AGI, we wonder why we do not see any negative activations in the present-most trough. Oh, we were not yet bothering to store the "act" activation-level in the Forth InStantiate module. We insert the missing code, and we begin to see the trough of inhibition in both the recent-most input and the present-most output.
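
A minimal sketch of storing the "act" level in InStantiate so that fresh engrams form a trough of inhibition; the @psy layout, the concept numbers and the -48 inhibition value are assumptions for illustration.

    use strict;
    use warnings;

    my @psy;      # conceptual memory channel
    my $t = 0;    # time pointer

    sub InStantiate {
        my ($psi) = @_;
        $psy[ $t++ ] = { psi => $psi, act => -48 };   # store negative activation
    }

    InStantiate($_) for ( 56, 57, 58 );   # assumed concept numbers of an input
    printf "t=%d psi=%d act=%d\n", $_, $psy[$_]{psi}, $psy[$_]{act} for 0 .. $t - 1;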