Cyborg AI Minds are a true concept-based artificial intelligence with natural language understanding, simple at first and lacking robot embodiment, and expandable all the way to human-level intelligence and beyond.

Sunday, March 17, 2013

mar16dkpj

The DeKi Programming Journal (DKPJ) is both a tool in coding German Wotan open-source artificial intelligence (AI) and an archival record of the history of how the German Supercomputer AI evolved over time.

1 Thurs.14.MAR.2013 -- Seeking Confirmation of Inference

In the German Wotan artificial intelligence with machine reasoning by inference, the AskUser module converts an otherwise silent inference into a yes-or-no question seeking confirmation of the inference with a yes-answer or refutation of the inference with a no-answer. Prior to confirmation or refutation, the conceptual engrams of the question are a mere proposition for consideration by the human user. When the user enters the answer, the KbRetro module must either establish associative tags from subject to verb to direct object in the case of a yes-answer, or disrupt the same tags with the insertion of a negational concept of "NICHT" for the idea known as "NOT" in English.
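A minimal JavaScript sketch may make the retroactive adjustment concrete. The actual Wotan code is Forth, and the field names here (jux, confirmed) and the concept number for "NICHT" are illustrative assumptions only:

    // Hypothetical sketch of KbRetro: confirm the inferred idea on a
    // yes-answer, or negate its verb on a no-answer.
    var NICHT = 250;                     // assumed concept number for "NICHT"
    function kbRetro(psi, verbTime, answer) {
      var node = psi[verbTime];          // engram of the inferred verb, e.g. "HAT"
      if (answer === "ja") {
        node.confirmed = true;           // leave subject-verb-object tags intact
      } else if (answer === "nein") {
        node.jux = NICHT;                // insert the negational concept
      }
      return psi;
    }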

2 Fri.15.MAR.2013 -- Setting Parameters Properly

Although the AskUser module is asking the proper question, "HAT EVA EIN KIND" in German for "Does Eva have a child?", the concepts of the question are not being stored properly in the Psi conceptual array.

3 Sat.16.MAR.2013 -- Machine Learning by Inference

Now we have coordinated the operation of InFerence, AskUser and KbRetro. When we input, "eva ist eine frau" for "Eva is a woman," the German AI makes a silent inference that Eva may perhaps have a child. AskUser outputs the question, "HAT EVA EIN KIND" for "Does Eva have a child?" When we answer "nein" in German for English "no", the KbRetro module adjusts the knowledge base (KB) retroactively by negating the verb "HAT" and the German AI says, "EVA HAT NICHT EIN KIND", or "Eva does not have a child" in English.

Wednesday, March 13, 2013

mar13dkpj

The DeKi Programming Journal (DKPJ) is both a tool in coding German Wotan open-source artificial intelligence (AI) and an archival record of the history of how the German Supercomputer AI evolved over time.

1 Sat.9.MAR.2013 -- Making Inferences in German

When the German Wotan AI uses the InFerence module to think rationally, the AI Mind creates a silent, conceptual inference and then calls the AskUser module to seek confirmation or refutation of the inference. While generating its output, the AskUser module calls the DeArticle module to insert a definite or indefinite article into the question being asked. The AI has been using the wrong article with "HAT EVA DAS KIND?" when it should be asking, "HAT EVA EIN KIND?" When we tweak the software to switch from the definite article to the indefinite article, the AI gets the gender wrong with "HAT EVA EINE KIND?"

2 Tues.12.MAR.2013 -- A Radical Departure

In the AskUser module, to put a German article before the direct object of the query, we may have to move the DeArticle call into the backwards search for the query-object (quobj), so that the gender of the query-object can be found and sent as a parameter into the DeArticle module.

It may seem like a radical departure to call DeArticle from inside the search-loop for a noun, but only one engram of the German noun will be retrieved, and so there should be no problem with inserting a German article at the same time. The necessary parameters are right there at the time-point from which the noun is being retrieved.
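In JavaScript-like pseudocode (the Wotan program itself is Forth), the departure might look as follows; the field names isQuobj and mfn and the numeric case argument are illustrative assumptions, not the actual code:

    // Sketch: search backwards for the query-object and call DeArticle
    // right where the noun engram is found, while its gender is at hand.
    function findQuobjWithArticle(psi, deArticle) {
      for (var t = psi.length - 1; t >= 0; t--) {
        var node = psi[t];
        if (node.isQuobj) {              // hypothetical marker for the query-object
          deArticle(node.mfn, 4);        // pass gender plus accusative case
          return node;                   // only one engram is retrieved
        }
      }
      return null;
    }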

3 Wed.13.MAR.2013 -- Preventing False Parameters

When the OldConcept module recognizes a known German noun, normally the "mfn" gender of that noun is detected and stored once again as a fresh conceptual engram for that noun. However, today we have learned that in OldConcept we must store a zero value for the recognition of forms of "EIN" as the German indefinite article, because the word "EIN" has no intrinsic gender and only acquires the gender of its associated noun. When we insert the corrective code into the OldConcept module, finally we witness the German Wotan AI engaging in rational thought by means of inference when we input "eva ist eine frau", or "Eva is a woman." The German AI makes a silent inference about Eva and calls the AskUser module to ask us users, "HAT EVA EIN KIND", which means in English, "Does Eva have a child?" Next we must work on KbRetro to positively confirm or negatively adjust the knowledge base in accordance with the answer to the question.
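A tiny sketch of the corrective idea, with an assumed concept number for "EIN" and an assumed 1/2/3 coding for masculine/feminine/neuter:

    var EIN = 112;                           // hypothetical concept number for "EIN"
    function storeMfn(engram) {
      // forms of "EIN" get a zero gender; they only borrow gender from a noun
      engram.mfn = (engram.psi === EIN) ? 0 : engram.knownMfn;
      return engram;
    }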

Friday, March 08, 2013

mar8dkpj

The DeKi Programming Journal (DKPJ) is both a tool in coding German Wotan open-source artificial intelligence (AI) and an archival record of the history of how the German Supercomputer AI evolved over time.

Wed.6.MAR.2013 -- Problems with the WhatBe Module

As we implement InFerence in the Wotan German Supercomputer AI, the program tends to call the WhatBe module to ask a question about a previously unknown word. When we input to the AI, "eva ist eine frau", first Wotan makes an inference about Eva and asks if Eva has a child. Then the AI mistakenly says, "WAS IRRTUM EVA" when the correct output should be "WAS IST EVA". This problem affords us an opportunity to improve the German performance of the WhatBe module which came into the German AI from the English MindForth AI.

First we need to determine which location in the AI source code is calling the WhatBe mind-module, and so we insert some diagnostics. Knowing where the call comes from lets us work on the proper preparation of parameters from outside WhatBe to be used inside WhatBe.

Thurs.7.MAR.2013 -- Dealing with Number in German

We are learning that we must handle grammatical number quite differently in the German AI than in the English AI. English generally uses the ending "-s" to indicate plural number, but German offers no single clue so simple. In German we have a plethora of clues about number, and we can use the OutBuffer to work with some of them, such as "-heit" indicating singular and "-heiten" indicating plural. In German we can also establish priority among rules, such as letting an "-e" ending in the OutBuffer suggest a plural noun, while letting the discovery of a singular verb overrule the suggestion that a noun is in the plural. The main point here is that in German we must get away from the simplistic English rules about number.
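A rough sketch of such a priority of rules, assuming the word is available as an uppercase string (in the real program it sits right-justified in the OutBuffer):

    function guessGermanNumber(word) {
      if (/HEITEN$|UNGEN$/.test(word)) return 2;  // clearly plural endings
      if (/HEIT$|UNG$/.test(word)) return 1;      // clearly singular endings
      if (/E$/.test(word)) return 2;              // "-e" merely suggests a plural
      return 1;                                   // tentative default
    }
    // A singular verb found later (e.g. "ist") may overrule the suggestion.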

Fri.8.MAR.2013 -- Removing Obsolete Influences

In NewConcept let us try changing the default expectation of number for a new noun from plural to singular. At first we notice no problem with a default singular. Then we notice that the InFerence module is using a default plural ("2") for the subject-noun of the silent inference. We tentatively change the default to singular ("1") until we can devise a more robust determinant of number in InFerence.

We are having a problem with the "ocn" variable for "old concept number". Just as with the obsolete "recnum", there is no reason any more to use the "ocn" variable, so we comment out some code.

Tuesday, March 05, 2013

mar5dkpj

The DeKi Programming Journal (DKPJ) is both a tool in coding German Wotan open-source artificial intelligence (AI) and an archival record of the history of how the German Supercomputer AI evolved over time.

1 Sun.3.MAR.2013 -- Problems with AskUser

In our efforts to implement InFerence in the Wotan German AI, we have gotten the AI to stop asking "HABEN EVA KIND?" but now AskUser is outputting "HAT EVA DIE KIND" as if the German noun "Kind" for "child" were feminine instead of neuter. We should investigate to see if the DeArticle module has a problem.

2 Mon.4.MAR.2013 -- Problems with DeArticle

By the use of a diagnostic message, we have learned that the DeArticle module is finding the accusative plural "DIE" form without regard to what case is required. Now we need to coordinate DeArticle more with the AskUser module, so that when AskUser is seeking a direct object, so will DeArticle. There has already long been a "dirobj" flag, but it is perhaps time to use something more sophisticated, such as "dobcon" or even "acccon" for an accusative "statuscon". After a German preposition like "mit" or "bei" that requires the dative case, we may want to use a flag like "datcon" for a dative "statuscon". So perhaps now we should use "acccon" in preparation for using also "gencon" and "datcon" or maybe even "nomcon" for nominative.

3 Tues.5.MAR.2013 -- Coordinating AskUser and DeArticle

A better "statuscon" for coordinating between AskUser and DeArticle is "dbacon", because it can be used for all four declensional cases in German. When we use "dbacon" and when we make the "LEAVE" statement come immediately after the first instance of selecting an article with the correct "dbacon", we obtain "HAT EVA DAS KIND" as the question from AskUser after the input of "eva ist eine frau". We still need to take gender into account, so we may declare a variable of "mfncon" to coordinate searches for words having the correct gender.

Saturday, March 02, 2013

mar2dkpj

The DeKi Programming Journal (DKPJ) is both a tool in coding German Wotan open-source artificial intelligence (AI) and an archival record of the history of how the German Supercomputer AI evolved over time.

1 Sat.2.FEB.2013 -- Improving the AskUser Module

To begin a yes-or-no question in German, a form of the verb has to be generated either by a parameter-search or by VerbGen. We will first try the parameter-search using dba for person and nphrnum for number.

2 Tues.26.FEB.2013 -- Assigning Number to a New Noun

For learning a new noun in German, we need to use the OutBuffer in the process of assigning grammatical number to any new noun. We can use a previous article to suggest the number of a noun, and we may impose a default number which may be overruled first by indications obtained from OutBuffer-analysis and secondly by the continuation with a verb that reveals the number of its subject.

For OutBuffer-analysis, we may impose various rules, such as that a default presumption of singular number may be overruled by certain word-endings such as "-heiten" or "-ungen" which would rather clearly indicate a plural form. We may not so easily presume that endings in "-en" or "-e" indicate a plural, because a singular noun may have such an ending. An ensuing verb is a much better indicator of the perceived number of a noun than the ending of the noun is.

Although we may be tempted to detect the ensuing singular verb "ist" and use it to retroactively establish a noun-number as being singular, it may be simpler to use the OutBuffer to look for singular verbs that end in "-t", such as "ist" or "geht". Likewise, a verb ending in "-n" could indicate a plural subject. So should the default presumption for a German noun be singular or plural?

3 Wed.27.FEB.2013 -- Assigning Plural Number by Default

In both German and English, we should probably make the default presumption be plural for new nouns being learned. Then we have a basic situation to be changed retroactively if a singular verb is detected. So let us examine the NewConcept module to see if we can set a plural value of "2" there on the "num" which will be imposed in the InStantiate module.

When we set a num default of "2" for plural in NewConcept and we run the German AI, the value of "2" shows up for a new noun in both the ".psi" report and the ".de" lexical report. Next we need to work on retroactively changing the default value on the basis of detecting a singular verb.

We have tried various ways to detect the "T" at the end of the input of the verb "IST". In the InStantiate module, we were able to test first for a pov of external input and then for the value of the OutBuffer rightmost "b16" value. Thus we were able to detect the ending "T" on the verb. Immediately we face the problem of how retroactively to change the default number of the subject noun from "2" for plural to "1" for singular.

Changing anything retroactively is no small matter in the Wotan German AI, because other words may have intervened between the alterand subject-noun and the determinant verb. We have previously worked on assigning tqv and seq values retroactively from a direct object back to a verb, so we do have some experience here.

4 Thurs.28.FEB.2013 -- Creating the RetroSet Module

Today we will try to create a RetroSet mind-module for retroactively setting parameters like the number of a new subject-noun which has been revealed to be singular in number because it was followed by a singular verb-form, such as "IST" or "HAT" in German. First we must figure out where to place the RetroSet module in the grand scheme of a Forth AI program. Since the "T" at the end of a German verb is discovered in the InStantiate module, we could either call RetroSet from InStantiate, or use a "statuscon" variable to set a flag that will call RetroSet from higher up in the Wotan AI program. Let us create a "numcon" flag that can be set to call RetroSet and then immediately be reset to zero. Since InStantiate is called from the DeParser module, we should perhaps let DeParser call RetroSet.

Now we have stubbed in the RetroSet AI mind-module just before the DeParser mind-module in the Wotan German artificial intelligence. RetroSet diagnostically displays the positive value of the numcon flag and then resets the flag to zero. In future coding, we will use the numcon flag not only to call RetroSet but also to change the default value of "2" for plural to "1" for singular in the case of a new German noun that the Wotan AI is learning for the first time.
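In outline, the stub behaves like this minimal JavaScript sketch (the real module is Forth):

    var numcon = 0;                      // flag set by InStantiate on a singular verb
    function RetroSet(psi) {
      if (numcon > 0) {
        console.log("RetroSet: numcon = " + numcon);  // diagnostic display
        // future code: change the subject-noun's num from 2 (plural) to 1 (singular)
        numcon = 0;                      // reset the flag immediately
      }
    }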

5 Fri.1.MAR.2013 -- Implementing RetroSet in the German AI

In the German Wotan potentially superintelligent AI, the AudListen module sets time-of-seqneed ("tsn") as a time-point for searches covering only current input from the keyboard into the AI Mind. In the new RetroSet module, we may use "tsn" as a parameter to restrict a search for a subject-noun to only the most recent input to the AI. However, "tsn" is apparently being reset for each new word of input, so we switch to using time-of-voice ("tov") and we get better results. We input "eva ist eine frau" and RetroSet retroactively changes the default plural on "EVA" from a two to a one for singular. Next we need to troubleshoot why we are not getting a better question from AskUser.

Friday, September 28, 2012

sep27ruai

These notes record the coding of the Russian AI Mind Dushka in JavaScript for Microsoft Internet Explorer (MSIE).

1. Thurs.27.SEP.2012 -- Shortening Test-Range for Verb-Recog

The Dushka Russian artificial intelligence (RuAi) is not properly recognizing a second-person singular verb-form in the ruLexicon Russian lexical array. When we type in the Cyrillic of "Ty veedyeesh menya" for "You see me," the Russian verb is being recorded in the ruLexicon with an erroneous value of "1" for first person instead of "2" for second person.

Apparently the AudListen code for discrimination among grammatical persons was written too specifically for verbs like "dyelayesh" in January of 2012. We may be able to relax the strictness of comparisons by not testing for the vowel just before the personal ending.

We went into the AudListen code for recognizing "delayesh" in the second-person singular and we commented out just the test for the vowel. Then we ran Dushka and immediately the RuAi was able to recognize "Ty veedyeesh menya" properly for "You see me" and the AI answered "Ya veezhoo tebya" for "I see you". This instance was one of the easiest bug-fixes of our Russian AI coding experience. Next we may need to comment out the vowel-tests for the other personal forms of a present-tense Russian verb.
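The relaxed comparison amounts to matching only the personal ending itself. A hedged sketch, using the actual Russian endings but invented function and return conventions:

    function personOfPresentVerb(word) {
      if (/ШЬ$/.test(word)) return 2;    // second person singular: -ешь, -ишь, etc.
      if (/Т$/.test(word))  return 3;    // third person: -ет, -ит (plurals also end in -т)
      return 0;                          // no personal ending recognized here
    }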

Immediately we wonder if the whole present-tense paradigm will start working properly for most if not all the Russian verb conjugations when we stop testing for the vowel inside the inflectional ending. It also occurs to us that the RuAi may start learning Russian verb-forms regardless of the numbered conjugations thought up by human scholars of philology over the centuries since Greek and Roman times. If we tweak the recognition-code that we implemented for one conjugation and it starts to work for all the conjugations, then we may have accidentally bypassed the whole issue of worrying about how to deal with different Russian conjugations.

2. Fri.28.SEP.2012 -- Non-Russian Troubleshooting of ru120926

Working today on an old computer on which we cannot type in Cyrillic, we may nevertheless use a special ru120926T.html test version of the ru120926.html Russian artificial intelligence (RuAi) to determine why the RuAi suddenly says "OSHEEBKA" ("error") rather early in its operation without human input (and therefore without Cyrillic typing).

The first place to look for the cause of the problem is in the NounPhrase module which erroneously outputs OSHEEBKA instead of a correct direct object.

Well, isn't that situation weird? First we put a diagnostic "alert" message at the start of NounPhrase, and we got nowhere -- nothing of value was revealed. Next we put a diagnostic alert in NounPhrase where there was a chance for "subjectflag" to change from its default value of one ("1") to a zero in the presence of either a direct object or a predicate nominative. Still nothing special was revealed. We finally got results when we inserted a conditional alert message to tell us what "motjuste" had been chosen in the condition of looking for a non-subject. The RuAi told us that it had selected concept number "704" just before erroneously outputting the "OSHEEBKA" error message. We recognized "704" as having to be a personal pronoun, but which one? It used to be the accusative case "MENYA" of the Russian pronoun number 701 "YA" for English "I". We no longer use number "704" as a separate concept, because "701" takes care of all forms of "YA" under the influence of the "dba" parameter for the grammatical case involved. The number "704" only shows up in obsolete code that we need to remove from the Russian AI.

When we comment out some legacy NounPhrase code that was invoking the concept number "704", the RuAi stops saying "OSHEEBKA" and declares that the motjuste is concept number "701" or the Russian word "YA" in the nominative for English "I". This result is not satisfactory. There should perhaps have been a "nounlock" after the verb "PONIMAYU". We may have to get rid of the "audme" variable not only in the Russian AI but also in the Forth and JavaScript English AI Minds, then find a form of "ME" through a search based on parameters.

Thursday, August 30, 2012

aug23ruai

These notes record the coding of the Russian AI Mind Dushka in JavaScript for Microsoft Internet Explorer (MSIE).

1. Thurs.23.AUG.2012 -- Diagnosing Selection of Subjects

As we troubleshoot the Dushka Russian AI in JavaScript for Microsoft Internet Explorer (MSIE), probably the first point of departure must be inserting a diagnostic "alert" message to let us know how the NounPhrase module is selecting the subject for a sentence of thought in Russian. No matter how a subject is chosen, we want the verblock mechanism to force the retrieval of a particular verb from the so-called IdeaPlex.

Our first major problem after some human input is that NounPhrase selects as most activated a potential subject of "tebya" with a carried-over, spurious "verblock" that does not even lead to a verb, but rather to "tebya" itself. We have probably solved this problem already in the English JSAI.

By searching downwards for "tqv" (the source of "verblock") simultaneously in the Russian JSAI and the English JSAI, we discover that in the English JSAI on 15aug2012 we inserted into InStantiate a line of code to prevent spurious carry-overs of the "tqv" value when "seq" is at zero. Now we insert the same code into the Russian JSAI. Running the AI, we do not get an improvement. Then at the start of WhoBe we also put a zeroing out of "tqv" taken from the English JSAI. Still there is no improvement. In the Russian AI, we then introduce into WhatBe the same tqv-zeroing as was done in the English JSAI. Again there is no improvement.

2. Fri.24.AUG.2012 -- Affecting Activation of Subjects

It may be necessary to implement code that will switch from an oblique case of an activated concept and find nominative nodes to serve as the subjects of incipient thoughts.

If a direct object is left activated at the end of a sentence, all the nodes of that concept should receive a blanket activation through OldConcept or NounAct. Then NounPhrase may choose nominative nodes as candidates for the subject of a sentence. (Maybe we should make nominative nodes receive a higher activation.) So the process of having a residually activated concept switch from being a direct object in an old thought to being a subject in a new thought should work by whatever mechanism puts a blanket activation on all the nodes of a concept.

3. Tues.28.AUG.2012 -- Finding "verblock" Verb-forms

We need to put in some diagnostic messages and see what residual activation occurs for a direct object.

Today in the VerbPhrase module we are building up some code which, in the presence of a positive verblock, will still go to the "verblock" time-point in the Ru-array but will not automatically accept the verb-form originally deposited there, typically during human input. Instead, the new code conducts a search of the ruLexicon to find a verb-form with the correct number and person. Initially we forgot to search for the concept-number, so we accidentally got the correct ending but on the wrong verb.
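A sketch of that search, with assumed field names on the lexical nodes (psi for concept number, dba for person, num for number):

    function findVerbForm(ruLexicon, psiWanted, dbaWanted, numWanted) {
      for (var t = ruLexicon.length - 1; t >= 0; t--) {
        var node = ruLexicon[t];
        if (node && node.psi === psiWanted &&    // same verb concept --
            node.dba === dbaWanted &&            // -- the check we at first forgot
            node.num === numWanted) {
          return t;                              // recall-vector of a correct form
        }
      }
      return -1;                                 // no such form stored yet
    }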

4. Wed.29.AUG.2012 -- VerbGen Returns Inveniend Verb-stem

Yesterday we made some major progress in getting the RuAi to search for correct Russian verb forms, but the new code was not yet perfect, so today we need to make improvements. However, we should probably save and archive yesterday's version so that we can recover from any unforeseen errors.

Now there is a problem because the new, integrated search-code is finding the correct archival verb-form, if it is available, but the verb is appearing in duplicate. Apparently the rest of the VerbPhrase code is finding a "vphraud" recall-vector all over again. We should be able to thwart that phenomenon.

As we start to prepare some documentation of the AudBuffer, OutBuffer and VerbGen modules, we notice that our Russian AI code needs to make use of pertinent variables such as the "gencon" status flag and the "audbase" recall-vector to identify the verb whose inflectional ending must be changed. As soon as we use "audbase" in our code, the Russian AI stops switching to a different verb and at least outputs the stem of the verb that we are trying to change. Since we have also set the "gencon" flag, VerbPhrase calls VerbGen but does not make its normal main call to SpeechAct, so we do not get an extra verb-form as output.

5. Thurs.30.AUG.2012 -- VerbGen Needs "dba" Parameter

Yesterday VerbGen was returning only the stem of an inveniend verb and not the inflected personal ending. However, delivering the stem was a major improvement in the Russian AI functionality. Today we found that we needed only to set the "dba" parameter properly before calling VerbGen, and the Russian AI was able to provide a correct form of the required verb.

Tuesday, July 17, 2012

jul06mfpj

MindForth Programming Journal


1 Fri.6.JUL.2012 -- Debugging after Major Code Revision

In the MindForth artificial intelligence (AI) we are now letting the AI run in tutorial mode without human input in order to troubleshoot any glitches that occur after the major changes of the most recent release. Without human intervention and under the influence of the KbTraversal module, the AI calls various subroutines to prompt a dialog with any nearby human. We observe some glitches that are due perhaps to a lack of proper parameters when a subroutine is called. We intend to debug the calling of the various subroutines so that we may display an AI Mind that thinks rationally not only when left to its own devices but also when the AI must think in response to queries or comments from human users.


2 Sat.7.JUL.2012 -- Solving a Problem with WhatAuxSDo

In the course of letting MindForth run without human input, we noticed that eventually the WhatAuxSDo module was called for the subject of concept #56 "YOU" and the AI erroneously asked "WHAT DO ERROR DO". By inserting a diagnostic message, we learned that WhatAuxSDo was not finding a "subjnum" value for the #56 "YOU" concept and thus could not find the word "YOU" in a search of the English "En" array. We went into the EnBoot sequence and changed the "num" value for "YOU" from zero ("0") to one ("1"). The AI correctly said, "WHAT DO YOU DO". However, we may need to debug even further and find out why the proper value of "num" for "YOU" is not being set during the output.


3 Sun.8.JUL.2012 -- Tightening Code for Searchability

When we search the free AI source code for "2 en{" which should reveal any storing or retrieval of a "num" value, we do not find any code for storing "num" in the English lexical array. Therefore we should search for "5 en{" to see where the part-of-speech "pos" is stored. We do so, and still we do not find what we need. Then we try searching for "5  en{" with an extra blank space in the search, and we discover that a form of "pos" is stored both in EnVocab and in OldConcept. At the same time we see that "num" is also stored in the same two mind-modules. Now we should be able to troubleshoot the problem and find out why English lexical "num" is not being stored during processes of thought. First, however, we will try to tighten up the code so that only one space intervenes for future occasions when we are trying to find instances of array-manipulation code.


4 Wed.11.JUL.2012 -- Num(ber) in the English Lexical Array

We need to discover where elements of the flag-panel are inserted into nodes of the English lexical array, so that the "num(ber)" value may be stored properly as the AI Mind continues to think and to respond to queries from human users.


5 Fri.13.JUL.2012 -- Correcting Fundamental Flaws

Today in the EnBoot English bootstrap module we are making a blanket change by moving the EnVocab calls down to be on the same line of code as the calls to InNativate, so that the "num(ber)" setting will go properly into EnVocab. Our recent troubleshooting has revealed that WhatAuxSDo needs to find a "num" value in the English lexical array in order to function properly.


6 Sat.14.JUL.2012 -- Tracking num(ber) Values

Next we need to zero in on how the AI assigns "num(ber)" tags during the recognition of words. In OldConcept, it may be necessary to store a default, such as "num" or "unk", and then to test for any positive "ocn" that will simply override the default.

Since we rely on OldConcept to store the number tag, we may need to track where the number-value comes from. AudInput has some sophisticated code which tentatively assigns a plural number when the character "S" is encountered as the last letter in a word. In the work of 4nov2011 we started assigning zero as a default number for the sake of the EnArticle module, but we may need to change the AudInput module back to assigning one ("1") as the default number.
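The tentative assignment might be summarized in a sketch like this (a simplification; the real AudInput works phoneme by phoneme):

    function tentativeNum(word) {
      return /S$/.test(word) ? 2 : 1;    // 2 = plural guess on final "S", else singular
    }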


7 Mon.16.JUL.2012 -- Avoiding Unwarranted Number Values

If the most recent "num(ber)" of a word like "ROBOTS" is found to be "2" for plural, we do not want the AI to make the false assumption that the "num(ber)" of the "ROBOTS" concept is inherently plural. Yet we want words like "PEOPLE" or "CHILDREN" to be recognized as being plural.


8 Tues.17.JUL.2012 -- Making Sure of Lexical Number

We may need to go into the NounPhrase subject-selection process and capture the num(ber) value of the lexical item being re-activated within the English lexical array.

Monday, July 02, 2012

jun29mfpj

MindForth Programming Journal

1 Fri.29.JUN.2012 -- IdeaPlex: Sum of all Ideas

The sum of all ideas in a mind can be thought of as the IdeaPlex. These ideas are expressed in human language and are subject to modification or revision in the course of sensory engagement with the world at large.

The knowledge base (KB) in an AiMind is a subset of the IdeaPlex. Whereas the IdeaPlex is the sum totality of all the engrams of thought stored in the AI, the knowledge base is the distilled body of knowledge which can be expanded by means of inference with machine reasoning or extracted as responses to input-queries.

The job of a human programmer working as an AI mind-tender is to maintain the logical integrity of the machine IdeaPlex and therefore of the AI knowledge base. Whether the AI Mind is implanted in a humanoid robot or is merely resident on a computer, it is the work of a roboticist to maintain the pathways of sensory input/output and the mechanisms of the robot motorium. The roboticist is concerned with hardware, and the mind-tender is concerned with the software of the IdeaPlex.

Whether the mind-tender is a software engineer or a hacker hired off the streets, the tender must monitor the current chain of thought in the machine intelligence and adjust the mental parameters of the AI so that all thinking is logical and rational, with no derailments of ideation into nonsense statements or absurdities of fallacy.

Evolution occurs narrowly and controllably in one artilect installation as the mind-tenders iron out bugs in the AI software and introduce algorithmic improvements. AI evolution explodes globally and uncontrollably when survival of the fittest AI Minds leads to a Technological Singularity.


2 Fri.29.JUN.2012 -- Perfecting the IdeaPlex

We may implement our new idea of faultlessizing the IdeaPlex by working on the mechanics of responding to an input-query such as "What do bears eat?" We envision the process as follows. The AI imparts extra activation to the verb "eat" from the query, perhaps first in the InStantiate module, but more definitely in the ReActivate module, which should be calling the SpreadAct module to send activation backwards to subjects and forwards to objects. Meanwhile, if not already, the query-input of the noun "bears" should be re-activating the concept of "bears" with only a normal activation. Ideas stored with the "triple" of "bears eat (whatever)" should then be ready for sentence-generation in response to the query. Neural inhibition should permit the generation of multiple responses, if they are available in the knowledge base.

During response-generation, we expect the subject-noun to use the verblock to lock onto its associated verb, which shall then use nounlock to lock onto the associated object. Thus the sentence is retrieved intact. (It may be necessary to create more "lock" variables for various parts of speech.)
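Assuming the lock variables are stored as time-pointers on the engrams, the retrieval amounts to the following hypothetical sketch:

    function retrieveIdea(psi, subjTime) {
      var subj = psi[subjTime];
      var verb = psi[subj.verblock];     // the subject locks onto its verb...
      var obj  = psi[verb.nounlock];     // ...which locks onto its object
      return [subj.word, verb.word, obj.word].join(" ");  // e.g. "KIDS MAKE ROBOTS"
    }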

We should perhaps use an input query of "What do kids make?", because MindForth already has the idea that "Kids make robots".


3 Sat.30.JUN.2012 -- Improving the SpreadAct Module

In our tentative coding, we now need to insert diagnostic messages that will announce each step being taken in the receipt of and response to an input-query.

We discover some confusion taking place in the SpreadAct module, where "pre @ 0 > IF" serves as the test for performing a transfer of activation backwards to a "pre" concept. However, the "pre" item was replaced at one time with "prepsi", so apparently the backwards activation code is not being operated. We may need to test for a positive "prepsi" instead of a positive "pre".

We go into the local, pre-upload version of the Google Code MindForth "var" (variable) wiki-page and we add a description for "prepsi", since we are just now conducting serious business with the variable. Then in the MindForth SpreadAct module we switch from testing in vain for a positive "pre" value to testing for a positive "prepsi". Immediately our diagnostic messages indicate that, during generation of "KIDS MAKE ROBOTS" as a response, activation is passed backwards from the verb "MAKE" to the subject-noun "KIDS". However, SpreadAct does not seem to go into operation until the response is generated. We may need to have SpreadAct operate during the input of a verb as part of a query, in a chain where ReActivate calls SpreadAct to flush out potential subject-nouns by retro-activating them.


4 Sat.30.JUN.2012 -- Approaching the "seqneed" Problem

As we search back through versions of MindForth AI, we see that the 13 October 2010 MFPJ document describes our decision to stop having ReActivate call SpreadAct. Now we want to reinstate the calls, because we want to send activation backwards from heavily activated verbs to their subjects. Apparently the .psi position of the "seqpsi" has changed from position six to position seven, so we must change the ReActivate code accordingly. We make the change, and we observe that the input of "What do kids make?" causes the .psi line at time-point number 449 to show an increase in activation from 35 to 36 on the #72 KIDS concept. There is such a small increase from SpreadAct because SpreadAct conservatively imparts only one unit of activation backwards to the "prepsi" concept. If we have trouble making the correct subjects be chosen in response to queries, we could increase the backwards SpreadAct spikelet from one to a higher value.

Next we have a very tricky situation. When we ask, "What do kids make?", at first we get the correct answer of "Kids make robots." When we ask the same question again, we erroneously get, "Kids make kids." It used to be that such a problem was due to incorrect activation-levels, with the word "KIDS" being so highly activated that it was chosen erroneously for both subject and direct object. Nowadays we are starting with a subject-node and using "verblock" and "nounlock" to go unerringly from a node to its "seq" concept. However, in this current case we notice that the original input query of "What do kids make?" is being stored in the Psi array with an unwarranted seq-value of "72" for "KIDS" after the #73 "MAKE" verb. Such an erroneous setting seems to be causing the erroneous secondary output of "Kids make kids." It could be that the "moot" system is not working properly. The "moot" flag was supposed to prevent tags from being set during input queries.

In the InStantiate module, the "seqneed" code for verbs is causing the "MAKE" verb to receive an erroneous "seq" of #72 "KIDS". We may be able to modify the "seqneed" system to not install a "seq" at the end of an input.

When we increased the number of time-points for the "seqneed" system to look backwards from two to eight, the system stopped assigning the spurious "seq" to the #73 verb "MAKE" at t=496 and instead assigned it to the #59 verb "DO" at t=486.


5 Sun.1.JUL.2012 -- Solving the "seqneed" Problem

After our coding session yesterday, we realized that the solution to the "seqneed" problem may lie in constraining the time period during which InStantiate searches backwards for a verb needing a "seq" noun. When we set up the "seqneed" mechanism, we rather naively ordained that the search should try to go all the way back to the "vault" value, relying on a "LEAVE" statement to abandon the loop after finding one verb that could take a "seq".

Now we have used a time-of-seqneed "tsn" variable to limit the backwards searches in the "seqneed" mechanism of the InStantiate module, and the MindForth AI seems to be functioning better than ever. Therefore we shall try to clean up our code by removing diagnostics and upload the latest MindForth AI to the Web.
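In outline, the bounded search looks like the following JavaScript paraphrase of the Forth loop, with assumed field names:

    // Search backwards for a verb still lacking a "seq", but go no further
    // back than the time-of-seqneed "tsn" recorded for the current input.
    function findVerbNeedingSeq(psi, t, tsn) {
      for (var i = t; i >= tsn; i--) {
        if (psi[i] && psi[i].pos === 8 && psi[i].seq === 0) {
          return i;                      // 8 = verb; first verb without a seq
        }
      }
      return -1;
    }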

Saturday, February 11, 2012

feb11ruai

Artificial Intelligence in Russian

1. Thurs.9.FEB.2012 -- Unspoken Be-Verbs as a Default

The Russian-speaking artificial intelligence Dushka needs a default BeVerb module that will silently assert itself as the automatic carrier of thought until a non-be-verb takes over from the provisional default. In our coding of a Russian mind, we will assume that any noun or pronoun, beginning a thought in the nominative case, is automatically the subject of a putative BeVerb until proven otherwise. In this way, our cognitive software will prepare for a BeVerb and switch automatically when a non-be-verb occurs.

We should work first on the comprehension of putative be-verbs and second on their generation, so that what we learn in comprehending be-verbs may be used in generating thoughts involving a BeVerb. So we type into the AI a Russian sentence to see if the software can understand it.

Human: душка робот

Robot: ДУШКА ЧТО ДУШКА ТАКОЕ

We said "Dushka is a robot" but the AI responded only, "Dushka -- what is Dushka?" We need to implement a default BeVerb in the comprehension of a sentence that lacks a visible BeVerb.

In the InStantiate module, we can trap for the input of a "c==32" space-bar when the "seqneed" is set to "8" for want of an incoming verb. We may then do something outrageous, but normal for Russian. From InStantiate we may provisionally send into AudMem a space-bar character with an "audpsi" of "800" for the verb БЫТЬ ("to be"), so that the AI is ready to record any noun coming in as a predicate nominative in conjunction with the be-verb. Now, if we implement such an outrageous step, it is possible that our AI memory-banks will become replete with quasi-spurious engrams of infinitive be-verbs that typically do not materialize. It could be that the presence of a spurious be-verb engram will not matter, if the cancellation of the default occurs as soon as some actual verb comes in. Then cancelling the spurious default will involve removing or nullifying any associative tags laid down momentarily during the enactment of the default.
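A condensed sketch of the trap, using the concept number 800 from above; the structure of the auditory node is an assumption:

    var BYT = 800;                       // concept number for the be-verb БЫТЬ
    function maybeDefaultBeVerb(c, seqneed, audMem) {
      if (c === 32 && seqneed === 8) {   // space-bar while a verb is still awaited
        audMem.push({ pho: " ", audpsi: BYT, provisional: true });
      }
    }
    // If a real verb then arrives, the provisional node and any tags laid
    // down for it must be removed or nullified.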

2. Fri.10.FEB.2012 -- Instantiating Imaginary Be-Verbs

In the InStantiate module we will now experiment with code to create in auditory memory a pseudo-engram of a non-existent be-verb after the perception of a nominative noun or pronoun. Since the Russian-speaking mind waits for a predicate nominative, it needs at least an imaginary be-verb as the holder of associative links between subject and predicate nominative.

Now inside InStantiate we have assembled the code that creates a be-verb pseudo-engram in the three memory arrays for "Psi" concepts, Russian words and auditory engrams. The Psi node is automatically creating a "pre" tag that links the pseudo-verb back to its subject. We need to implement code that will finish the intermediation of the unspoken Russian BeVerb between its subject and the predicate nominative. The code must also cancel or uninstall the imaginary BeVerb if a real verb occurs instead of the provisionally expected BeVerb.

3. Sat.11.FEB.2012 -- Integration of Default Be-Verbs

We have the AI pretending that a BeVerb comes in after a nominative subject, and now we need to create the "seq" tag from the subject to the default BeVerb. First in the InStantiate module we insert a line of code declaring that the pseudo-be-verb is indeed a verb with respect to its part of speech, so that the following code will try to reach backwards to the subject engram and install a "seq" tag referring to the now not-so-imaginary BeVerb. We run the Dushka AI and we type in, ты робот -- which is Russian for "You are a robot", but without the be-verb. We are puzzled when Dushka answers, Я ЧТО Я ТАКОЕ ("I -- WHAT AM I?") and that's all she wrote. It may indicate that her concept of self has been activated by the input referring to "you", but she does not seem to have understood the input. We check the diagnostic display, and we see that her concept of self now has a "seq" tag referring right back to herself instead of to the default Russian BeVerb. What went wrong? We look at the JavaScript source code again, and we see that it was not enough to set the part-of-speech as a verb. We go ahead and we set the Psi concept-number to be that of the Russian be-verb. Then we run the Russian AI again with the same input and we sit there in shock when the AI announces to us: Я РОБОТ. Dushka has just said to us, "I AM A ROBOT" in Russian. From the diagnostic display we discover that the same changes that made Dushka able to understand the idea, made her able to think the idea.

Saturday, February 04, 2012

feb4ruai

Artificial Intelligence in Russian

Fri.3.FEB.2012 -- Recognizing Inflections

For the Russian-thinking Dushka AI Mind, we have perhaps stumbled upon a way to avoid the hard-coding of noun paradigms and instead to let the Russian AI learn the inflected endings of Russian nouns from its own experience. For example, right now the Russian artificial intelligence (RuAi) fails to recognize the Psi concept #501 БОГ in the following exchange.

Human: я уважаю бога ("I honor God.")
Robot: ТЫ УВАЖАЕШЬ БОГА ("You honor God.")

Robot: ЧТО БОГА ТАКОЕ ("What is God?")

The diagnostic display reveals that the software has almost recognized the word for God.

559. Б 0 * 1 1 0
560. О 0 * 0 1 0
561. Г 0 * 0 1 501
562. А 0 * 0 0 902

Aha! Suddenly it becomes clear that two things are happening. The Psi concept #501 is indeed being recognized at first, but perhaps the provisional-recognition "prc" variable is not being set, and so AudInput calls NewConcept as if the AI were learning a new word instead of recognizing an old word.

Sat.4.FEB.2012 -- Learning Russian Like a Human Child

Now in a very rough way we have trapped for "zad1" in the AudRecog module so as to recognize a noun (БОГА) with one character of inflection added onto it. Because the noun was indeed recognized, the InStantiate "seqneed" mechanism tagged the noun in the "ruLexicon" with a "dba" of "4" to indicate a direct-object accusative case. In other words, the Russian AI learned a new noun-form as a human child would learn it, that is, from the speech patterns of another speaker of Russian.


Wednesday, February 01, 2012

feb1ruai

Artificial Intelligence in Russian

Tues.31.JAN.2012 -- Generating and Recognizing Verbs

In our Dushka Russian AI we have the problem that new verb-forms generated on the fly by the VerbGen module are not being recognized and tagged with critical parameters as they settle into auditory memory. However, it looks as though a verb does get recognized if the "audpsi" tags for the verb in auditory memory extend far enough back to cover the stem of the verb. Therefore, instead of devising ways to bypass the chain of ReEntry calling AudMem, which in turn calls AudRecog, we should perhaps implement a "backfill" of any verb generated in the VerbGen module, letting the "audpsi" tags extend back to the last "pho(neme)" of the verb-stem. Then the "provisional recall" mechanism in AudRecog ought to recognize the verb-form generated by the VerbGen module.
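The proposed backfill is simple in outline; here is a hypothetical sketch over an auditory array whose nodes carry an audpsi slot:

    function backfillAudpsi(aud, endTime, stemEndTime, psiNum) {
      for (var t = endTime; t >= stemEndTime; t--) {
        aud[t].audpsi = psiNum;          // tag ending and stem-final phonemes alike
      }
    }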

We created a "vip" variable to hold the value of "motjuste" when VerbPhrase calls VerbGen and to transfer the known concept-number of the verb, near the end of the stem in VerbGen, into the provisional "prc" variable for AudRecog. In this way, we got the AI internally to recognize and record verb-forms generated internally by the VerbGen module. However, to get the AI to call the correct verb-forms, we had to modify some recent OldConcept code for deciding what "dba" value to store with a lexical item. Now we have a problem with tagging the "dba" of a simple word like МЕНЯ when it comes in.

We cannot rely on the form of МЕНЯ to tell us its "dba", because it could be genitive or accusative. We need to extract clues from the incoming sentence in order to assign the proper "dba" during the storage of МЕНЯ.

Wed.1.FEB.2012 -- Tagging Engrams with Parameters

We can perhaps rely on the "seqneed" mechanism of InStantiate to provide the "dba" parameter for a noun or pronoun entering the mind as user input. (Perhaps the "seqneed" variable should change to a "seqseek" variable for greater clarity.) We may be able to strengthen the use of "seqneed" by adding a kind of "pass-over" when a preposition is encountered, so that the software continues to look for a direct-object noun when a preposition-plus-noun combination is detected and skipped.

Where the InStantiate module tests for a "seqneed" of "5" and encounters a satisfying noun or pronoun to become a "seq" for the verb, we make the assumption that the time "t" identifies the temporal location of the noun or pronoun in both the Psi array and the "ruLexicon" array. We insert two lines of code to first "examine" the Russian lexical array and then to substitute a numeric "4" for the "ru4" flag of the "dba" value. Since the noun or pronoun is going to be the "seq" of the verb, that same noun or pronoun warrants a "dba" of "4" as a direct object that should be in the accusative case. However, we may need to make other arrangements if the verb is intransitive and the noun must be in the nominative as a predicate nominative.

Monday, January 30, 2012

jan29ruai

Artificial Intelligence in Russian

1. Sun.29.JAN.2012 -- Verbs Without Direct Objects

Today in the Dushka Russian AI we begin to address a problem that occurs also in our English AI Mind. Sometimes a verb does not need an object, but the AI needlessly says "ОШИБКА" for "ERROR" after the verb. We need to make it possible for a verb to be used by itself, without either a direct object or a predicate nominative. One way to achieve this goal might be to use the jux flag in the Psi conceptual array to set a flag indicating that the particular instance of the verb needs no object.

We have previously used the "jux" flag mainly to indicate the negation of a verb. If we also use "jux" with a special number to indicate that no object is required, we may have a problem when we wish to indicate both that a verb is negated and that it does not need an object, as in English if we were to say, "He does not play."

One way to get double duty out of the "jux" flag might be to continue using it for negation by inserting the English or Russian concept-number for "NOT" as the value in the "jux" slot, but to make the same value negative to indicate that the verb shall both be negated and shall lack an object, as in, "He does not resemble...."

During user input, we could have a default "jux" setting of minus-one ("-1") that would almost always get an override as soon as a noun or pronoun comes in to be the direct object or the predicate nominative. If the user enters a sentence like "He swims daily" without a direct object, the "jux" flag would remain at minus-one and the idea would be archived as not needing a direct object.
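In sketch form, with invented function names around the minus-one convention described above:

    function instantiateVerb(psi, t, psiNum) {
      psi[t] = { psi: psiNum, pos: 8, jux: -1 };  // default: no object needed
    }
    function onObjectOrPredNom(psi, verbTime) {
      if (psi[verbTime].jux === -1) psi[verbTime].jux = 0;  // override the default
    }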

2. Sun.29.JAN.2012 -- Using Parameters to Find Objects

While we work further on the problem of verbs without objects, we should implement the use of parameters in object-selection. First we have a problem where the AI assigns activation-levels to a three-word input in ascending order: 23 28 26. These levels cause the problem that the AI turns the direct object into a subject, typically with an erroneous sentence as a result.

In RuParser, let us see what happens when we comment out a line of code that pays attention to the "ordo" word-order variable. Hmm, we get an even more pronounced separation: 20 25 30.

Here we have a sudden idea: We may need to run incoming pronouns through the AudBuffer and the OutBuffer in order unequivocally to assign "dba" tags to them. When we were using separate "audpsi" concept-numbers to recognize different forms of the same pronoun, the software could pinpoint the case of a form. We no longer want different concept-numbers for the same pronoun, because we want parameters like "dba" and "snu" to be able to retrieve correct forms as needed. Using the OutBuffer might give us back the unmistakeable recognition of pronoun forms, but it might also slow down the AI program.

Before we got the idea about using OutBuffer for incoming pronouns, in the OldConcept module we were having some success in testing for "seqneed" and "pos" to set the "dba" at "4=acc" for incoming direct objects. Then we rather riskily tried setting a default "dba" of one for "1=nom" in the same place, so that other tests could change the "dba" as needed. However, we may obtain greater accuracy if we use the OutBuffer.

3. Mon.30.JAN.2012 -- Removing Engram-Gaps From Verbs

Yesterday in the Russian AI we experimented rather drastically with using the "ordo" counter to cause words of input to receive levels of activation on a descending slope, so that the AI would be inclined to generate a sentence of response starting with the same subject as the input. We discovered that the original JavaScript AI in English was not properly keeping track of the "ordo" values, so we made the simple but drastic change of incrementing "ordo" only within OldConcept and NewConcept, since every incoming word must pass through one module or the other.


Today we have sidetracked into correcting a problem in the VerbGen module. After input with a fictitious verb, VerbGen was generating a different form of the made-up verb in response, but calls to ReEntry were inserting blank aud-engrams between the verb-stem and the new inflection in the auditory channel. By using if (pho != "") ReEntry() to conditionalize the call to ReEntry for OutBuffer positions b14, b15 and b16, we made VerbGen stop inserting blank auditory engrams. However, there was still a problem, because the AI was making up a new form of the fictitious verb but not recognizing it or assigning a concept-number to it as part of the ReEntry process.


Thursday, January 26, 2012

jan26ruai

Artificial Intelligence in Russian


Thurs.26.JAN.2012 -- Insufficient Activation of Subjects

The most glaring problem in the Dushka Russian AI right now is that the AI does not fully activate the subject-pronoun when we type in a short sentence of subject, verb and object. Without a proper subject to provide parameters, the AI fails to select or generate a proper Russian verb-form.

When we type in "люди знают нас" ("People know us"), as an answer we get "ВАМ ЗНАЮТ ТЕБЯ" -- a mishmash of "to you" "they know" "you". In general, the AI seems to be taking the final object entered as input and trying to convert it into the subject for a response.

Thurs.26.JAN.2012 -- Using the "seqneed" Variable

The Russian AI is not setting a Psi "seq" flag when we enter a Russian word as the subject of a following verb. When we inspect the recent 10nov11A.F MindForth code for clues, we discover that in October of 2011 we made major improvements to the method of assigning "seq" tags. We began using the "seqneed" variable as a way of holding off on assigning a "seq" until either the desired verb or the desired noun/pronoun made itself available. However, apparently in the English JavaScript AI we wrote the "seqneed" code only for needing nouns and not yet for needing a verb. No, we did write the code, but it involved avoiding the English auxiliary verb "do", so we accidentally removed the verb-seqneed code from the RuAi. Let us put most of the code back in, and see what happens. Upshot: Once we put the code back into InStantiate, subjects of verbs once again began having a "seq" reference to the verb. The AI even skipped an adverb that we then inserted as a test.

Sunday, January 15, 2012

jan13ruai

These notes record the coding of the Russian AI Mind Dushka in JavaScript for Microsoft Internet Explorer (MSIE).

1. Fri.13.JAN.2012 -- Re-thinking Word Recognition

For artificial intelligence in Russian we need to re-think the whole idea of word-recognition as previously implemented in our English AI Minds. In English we did not worry much about word-endings, but in Russian (or German) we need to recognize a verb-form regardless of the number and person in which it is encountered. Since we are using the OutBuffer mechanism to detect and recognize verb-endings, we would like to use the same mechanism to retroactively insert a provisional audpsi identifier on not just the final phoneme of an auditory word-engram but also on the final stem-phoneme and perhaps on each phoneme of the inflected verb-ending. Then we would like to modify the AudRecog module so that it holds onto the provisional audpsi and declares the recognition of a verb in whatever present-tense form it is encountered.

Now we have run the current AI with an Alert box to tell us what is the value of "audpsi" when a second-person singular verb-ending is detected. With the input of "ЗНАЕШЬ" there was no value given for "audpsi", but for "ДЕЛАЕШЬ" a value of "821" was indicated, because the verb-form in its various permutations is provided in the RuBoot sequence.

2. Sat.14.JAN.2012 -- Enhancing Auditory Input

Yesterday in the AudMem module we had difficulty in waiting for the deposition of an audpsi ultimate-tag and in trying retroactively to insert the tag on the penultimate phonemes of the Russian word being recognized. We were obtaining values for audpsi at times when we expected there to not yet be an audpsi.

Although we try zeroing out audpsi at the end of AudMem, it looks as though further use of "audpsi" is required in the AudListen module and in the AudInput module, where finally audpsi is converted to oldpsi for use in the OldConcept module.

It turns out that AudListen calls AudInput when a space-bar is reached during keyboard entry of a word. The AudInput module, without using AudMem, directly stores an audpsi ultimate-tag retroactively by using the "tult" value. Therefore we should be trying to insert additional "audpsi" tags in AudInput and not in AudMem.

3. Sun.15.JAN.2012 -- Auditory Stem-Tagging

We have gradually learned that the AudInput module will not let us readjust values of audpsi on a word from within an if-clause testing for a value of zero on the "aud4" or ctu continuation-flag. Therefore we may need to introduce a secondary if-clause in order to make each phoneme of the word carry the audpsi tag.

We developed a suspicion that something was not letting a positive audpsi be inserted after any phoneme with an "aud4" continuation-flag "ctu" of one. We searched for "aud4 ==" and in audDamp we found the conditional "if (aud4 == 1) aud5 = 0". This obscure line of code made us spend one or two days of work in trying to comprehend why we could not "backfill" the audpsi value onto phonemes prior to the final phoneme of a word.

When we commented out the offending line in audDamp, we began to notice unwarranted carry-overs of an old audpsi onto the first phoneme of the subsequent word. To correct that problem, audpsi will need to be reset to zero in at least one additional location. Actually, we had to reset "morphpsi" to zero at the end of AudRecog to solve the problem.

4. Sun.15.JAN.2012 -- Russian Verb Stem Recognition

Now in AudRecog we need to set up provisional recognition of Russian verb-stems. We create a "provrec" variable for "provisional recognition" and we use it to detect the early presence of "audpsi" tags before the end of a word is reached. Dushka begins to recognize incoming Russian verbs and to generate incorrect but on-target sentences using the recognized verb in the infinitive form. It remains to use the AudBuffer mechanism and the parameters of person and number to generate the output of a Russian verb in the proper grammatical form.


Thursday, January 12, 2012

jan12ruai

These notes record the coding of the Russian AI Mind Dushka in JavaScript for Microsoft Internet Explorer (MSIE).

Thurs.12.JAN.2012 -- Parsing Russian Verb-Endings

In our Russian JavaScript AI code heretofore we have merged the English and Russian AI Minds and we have eliminated or deactivated all the code for thinking in English. For the Dushka AI to think properly in Russian, we need to implement the OutBuffer mechanism for dealing with the inflectional endings of Russian verbs and nouns. Since we are not sure where to begin, we will present ourselves with the problem of dealing with the input of a previously unknown Russian verb.

Pressing Alt-Shift to toggle into Russian input, we ran the AI and we typed in the word "ЗНАТЬ", which the AI properly recognized as bootstrap concept #840. The AI responded with an ungrammatical sentence of "Я ЗНАТЬ МЕНЯ".

Then we typed in the word "ЗНАЮ", which the AI failed to recognize as a form of ЗНАТЬ, assigning instead a concept number of 882, as if the item was a brand new word being learned by the AI. We will try setting the value of "nru" to 900 at the end of RuBoot, so that new concepts will be learned with concept numbers starting at #901. Now we typed in "ЗНАЮ" and it was assigned #901 as a concept number. Next we typed in "ЗНАЕШЬ", and it, too, was assigned #901 as a new concept.

If we want the OutBuffer mechanism to recognize a personal verb form as such, we will need to go back to a version of the Russian AI which was sending input into the buffers. On the Packard-Bell desktop computer in the 25dec11A.F MindForth, we used the "abc" transfer-variable in the AudListen module to capture input keystrokes and move the characters into a buffer. In the JavaScript AI, we will need to use the area of AudListen() where "pho = pho.toUpperCase()" turns each keystroke into an uppercase Cyrillic letter. From there we also call AudBuffer so that the "abc" values are transferred into the buffer.
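
In outline, the capture might look like this minimal sketch, where we assume that pho and abc are globals and that AudBuffer() copies abc into the buffer array; the handler shape is illustrative.

    // Hypothetical sketch: inside AudListen(), uppercase the keystroke
    // and hand it to AudBuffer by way of the "abc" transfer-variable.
    function onKeystroke(keystroke) {
      pho = keystroke.toUpperCase();  // e.g. "з" becomes "З"
      abc = pho;                      // abc carries the character...
      AudBuffer();                    // ...into the buffer for later testing
    }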

We should probably not call OutBuffer from the "CR()" carriage-return module, which deals with the moment after an incoming word has gone into the AudMem() module. Instead we should deal directly in AudListen() with the input of a space-bar or a carriage-return.

We need a suitable location to reset the "phodex" counter back to zero after the end of a word of input. The "CR()" carriage-return module does not seem to effect the change promptly enough. Let us try resetting "phodex" in the AudListen module when a carriage-return or a space-bar is entered. That method seems to work well, and somehow the AudBuffer and the OutBuffer apparently get cleared out.

Now we need to choose where the testing for any particular verb-ending in the OutBuffer will take place. It could perhaps take place in the AudMem module. No, it turns out that it is somehow too late to test for a "b16" ending in AudMem. It works better if we test for "b16" in AudListen, before the character even goes into AudMem. It also turns out that we can use "if (b16 == String.fromCharCode(1070))" as a way to test for an actual Russian character, in this case the letter "Ю".

In AudListen we have now managed to build up code that tests the final three right-justified spaces in the OutBuffer and recognizes a second-person singular Russian verb-ending during keyboard input. Within the same test-code we have set "dba" to 2 for second person, and both the part-of-speech "bias" and "pos" to 8 for a verb. The set values carried over into the memory arrays. Thus we expanded and improved the RuParser function. The same mechanism that recognizes a verb-ending also parses the word as a verb.
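
For instance, the test for the second-person singular ending -ЕШЬ might look like the following sketch, where we assume that b14, b15 and b16 are the last three right-justified OutBuffer slots; the slot names other than b16 are our assumption here.

    // Hypothetical sketch: detect the Russian verb-ending -ЕШЬ in the
    // final three OutBuffer slots and parse the word as a verb.
    if (b14 == String.fromCharCode(1045)      // Е
     && b15 == String.fromCharCode(1064)      // Ш
     && b16 == String.fromCharCode(1068)) {   // Ь
      dba = 2;   // second person
      bias = 8;  // expect a verb
      pos = 8;   // part-of-speech: verb
    }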

Saturday, December 31, 2011

dec30ruai

Russian AI Mind Programming Journal

These notes record the coding of the Russian AI Mind Dushka in JavaScript for Microsoft Internet Explorer (MSIE). The free, open-source Russian AI will grow large enough to demonstrate a proof-of-concept in artificial intelligence, until the intensive computation of thinking and reasoning threatens to slow the MSIE Web browser down to a crawl. To evolve further, the Russian AI Mind must escape to more powerful programming languages on robots or supercomputers.

1 Fri.30.DEC.2011 -- Russian AI Bootstrap Words

In the ru111229.html version of the Dushka Russian AI we coded the AudBuffer to load Russian characters during SpeechAct, and the OutBuffer to hold each Russian word in a right-justified position where inflectional endings can be changed according to grammatical number and case for nouns, and according to number and person for verbs. Next we need to determine which forms of a Russian word are ideal for storage in the RuBoot bootstrap sequence.

It seems clear that for feminine nouns like "ruka" for "hand", storage in the singular nominative should suffice, because other forms may be derived by using the OutBuffer to remove the nominative ending "-a" and to substitute oblique endings of any required length.

For regular Russian verbs in the group containing "dumat'" for "think" and "dyelat'" for "do", it should be enough to store the infinitive form in the RuBoot module, because the OutBuffer can be used to create the various forms of the present tense. If a human user inputs such a verb in a non-infinitive form, such as in "ty cheetayesh" for "you read", the OutBuffer can still manipulate the forms without reference to an infinitive. This new ability is important for the learning of new verbs. Since there is no predicting in which form a user will input a new Russian verb, the OutBuffer technique must serve the purpose of creating the verb-forms and of tagging their engrams with the proper parameters of person and number.
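
The substitution idea can be sketched with ordinary string operations standing in for the actual right-justified buffer slots; the reinflect function here is purely illustrative.

    // Hypothetical sketch: strip a known ending and substitute another,
    // as the OutBuffer technique does with its character slots.
    function reinflect(word, oldEnding, newEnding) {
      var cut = word.length - oldEnding.length;
      if (cut >= 0 && word.substring(cut) == oldEnding) {
        return word.substring(0, cut) + newEnding;
      }
      return word;  // unchanged if the ending does not match
    }

    reinflect("РУКА", "А", "У");       // "hand": nominative to accusative
    reinflect("ДУМАТЬ", "АТЬ", "АЮ");  // "to think" to "I think"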

Since JavaScript is not a main language for artificial intelligence in robots, our Dushka Russian AI serves only as a proof-of-concept for how to construct a robot AI Mind in a more suitable language. We use JavaScript now because it can display the Russian and because a Netizen can call the AI into being simply by using Internet Explorer to click on the link of the Душка AI Mind.

Friday, June 10, 2011

jun10jsai

The JavaScript artificial intelligence (JSAI) is a client-side AiApp whose natural habitat is a desktop computer, a laptop or a smartphone.

1 Fri.10.JUN.2011 -- The AI Mind Needs MSIE.

When we first started coding the JavaScript artificial intelligence (JSAI) back in anno 2000, we tried to make it cross-browser compatible, especially with Netscape Navigator. Unfortunately, as the artificial Mind quickly became extremely complex, we found that we could not maintain compatibility, and that it was too distracting to try. It was hard enough to code the AI in Microsoft Internet Explorer (MSIE), but at least MSIE gave us the functionality that the AI Mind needed.

Meanwhile the AI Mind has evolved in both JavaScript and Win32Forth. Sometimes the JSAI was ahead of the Forth AI, and sometimes vice versa. In our efforts to get mental phenomena to work in either programming language, the code in one language sometimes diverged from the current algorithm in the other. Now we are bringing the AI codebase back into as close a similarity as possible in both MSIE JavaScript and Win32Forth (plus 64-bit iForth). We may not offer cross-browser compatibility, but we are making our free AI source code more understandable by letting Netizens examine each mind-module in either Forth or JavaScript.

2 Fri.10.JUN.2011 -- Solving the AI Identity Crisis

Today we have been running the AI Mind in both JavaScript and Forth so as to troubleshoot the inability of the JSAI to answer the input question "who are you" properly. The JSAI was responding "I HELP KIDS", which is an idea stored in the knowledge base (KB) of the AI as it comes to life in either Forth or JavaScript. The input query is supposed to activate the concept of "BE" sufficiently to override the activation of the verb "HELP" that comes to mind when the Mind tries to say something about itself. We had to adjust slightly lower the values that the JSAI NounAct module uses to create a "spike" of spreading activation, so that the "BE" concept would win out over the "HELP" concept in the generation of a thought. We have thus removed the identity crisis of an AI that could describe itself in terms of doing but not being.

We gradually improve the AI Mind in JavaScript by identifying and combating the most glaring bug or glitch that pops up when we summon the virtual entity into existence. Any Netizen using MSIE may simply click on a link to the AiMind program and watch the primitive creature start thinking and communicating. The AI would need a robot body and sensors to flesh out its concepts with knowledge of the real world, but we may approach the AI with a Kritik der reinen Vernunft -- as a German philosopher once wrote in "The Critique of Pure Reason." We are building a machine intellect of pure, unfleshed-out reason, able to think with you and to discuss its thoughts with you. Our process of eliminating each glitch or bug as we notice it means that the AI Mind has the chance to evolve in two ways. The first AI evolution occurs in these initial offerings of the AI software to our fellow AI enthusiasts. The second AI evolution occurs when the AI propagates to other habitats such as the http://aimind-i.com website. If you are the CEO of a corporate entity, you had better ask around and find out who in your outfit is in charge of keeping up with AI evolution and how many Forthcoders are in your employ.

Monday, May 30, 2011

may30jsai

The JavaScript artificial intelligence (JSAI) is a client-side AiApp whose natural habitat is a desktop computer, a laptop or a smartphone.

1 Mon.30.MAY.2011 -- Searching the AI Knowledge Base.

The JavaScript artificial intelligence (JSAI) is now being updated with new code from the MindForth AI, which on 29 May 2011 gained the ability to search its knowledge base (KB) twice in response to a single query and to provide different but valid answers, by neurally inhibiting the first answer in order to arrive at the second. In other words, the JSAI will be able to discuss a subject exhaustively in terms of what it knows about that subject -- a major step toward the MileStone of self-referential thought on the RoadMap to artificial general intelligence. The AI source code has not yet been fine-tuned, but we hope to achieve in JavaScript the basic functionality that has been created in MindForth.

Upshot: After we transferred, mutatis mutandis, all the pertinent code from MindForth into the AiMind.html program in JavaScript, the JSAI still did not work right. We had to hunt down and fix (by commenting out) some lines of obsolete code in the SpreadAct mind-module, where negative activation values were being reset to zero -- to the detriment of the inhibition-values, which need to PsiDecay slowly upwards towards zero. We then achieved JSAI functionality on a par with MindForth. We entered new knowledge into the knowledge base (KB). We queried the KB twice with the same question, and the AI Mind correctly gave us two different answers, both in complete agreement with the knowledge base.
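
The bug and its fix can be sketched as follows, with psi[t].act standing in for a concept's activation level; the array and field names are our assumption, not the actual JSAI variables.

    // Hypothetical sketch of the obsolete pattern, now commented out:
    // if (psi[t].act < 0) { psi[t].act = 0; }  // wiped out inhibition
    //
    // With the reset gone, PsiDecay can raise inhibition toward zero:
    if (psi[t].act < 0) {
      psi[t].act = psi[t].act + 1;  // inhibition climbs slowly upwards
    }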

Friday, May 27, 2011

may26mfpj

The MindForth Programming Journal (MFPJ) is both a tool in developing MindForth open-source artificial intelligence (AI) and an archival record of the history of how the AI Forthmind evolved over time.

1 Thurs.26.MAY.2011 -- Conditional Inhibition

In the recent Strong AI diaspora of MindForth and the tutorial AiMind.html program, we have implemented the neural inhibition of concepts immediately after they have been included in a generated thought. Now we would like to make inhibition occur when one or more responses must be made to a query involving nouns or verbs. The question "What do bears eat?" is a query of the what-do-X-verb variety, where one or more nouns are potentially valid answers in the role of direct object of the verb. If the noun of each single answer is immediately inhibited, the AI can respond with a different answer to a repeat of the question. Likewise, if we ask the AI, "What do robots do?", the query is of the what-do-X-do variety, where potentially multiple verbs may need to be inhibited so as to give one valid answer after another, such as "Robots make tools" and "Robots sweep floors." If we are inhibiting the verbs, we do not want the direct-object nouns to be inhibited, because we might need replies with different verbs but the same direct object, such as "Robots make tools" and "Robots use tools."
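
A compact sketch of the conditional inhibition might run as follows, where the query-type names and the inhibition value of -32 are illustrative.

    // Hypothetical sketch: inhibit whichever part of speech varies
    // across valid answers, so a repeated query surfaces a new answer.
    function inhibitAnswer(queryType, verb, object) {
      if (queryType == "what-do-X-verb") {
        object.act = -32;  // vary the noun: "fish", then "honey"
      } else if (queryType == "what-do-X-do") {
        verb.act = -32;    // vary the verb: "make", then "sweep"
      }
    }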

Inhibition may also play a role in calling the ConJoin module when a query elicits multiple thoughts which are the same sentence except for different nouns or different verbs. The responses, "Bears eat fish" and "Bears eat honey" could become "Bears eat fish and honey" if neural inhibition suppresses the repetition of subject and verb while calling the ConJoin module to insert the conjunction "AND" between the two answer nouns.

2 Thurs.26.MAY.2011 -- Problems With Determining Number

When we try to troubleshoot the Forthmind by entering "bears eat honey", a comedy of errors occurs. The AudRecog module contains a test to detect an "S" at the end of an English word and to set the "num(ber)" value to two ("2") for plural. However, that test works only for recognized words, and not for a previously unknown word of new vocabulary. So the word "bears" gets tagged as singular by default, which causes the AI to issue the erroneous output "BEARS EATS HONEY", as if a singular subject were calling for "EATS" as a third-person singular verb form.

The process of determining num(ber) ought to be more closely tied with the EnParser module, so that the parsing of a word as a noun should afford the AI a chance to declare plural number if the incoming noun ends with an "S".

Now we have inserted special code into the AudInput module to check for the input of nouns ending in "S", and to set the "num(ber)" variable to a plural value if a terminating "S" is found. For singular nouns like "bus" or "gas" that end in "S", we will have to devise techniques that override the default assumption of "S" meaning plural. We may use the article "A" or the verb "IS" as cues to declare a noun ending in "S" as singular.
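
In sketch form, with the cue-checking simplified to the two cues named above (a minimal illustration, not the actual AudInput code):

    // Hypothetical sketch: default a new noun ending in "S" to plural,
    // unless an article "A" before it or a verb "IS" after it marks
    // the noun as singular, as in "a bus" or "gas is ...".
    function guessNumber(word, prevWord, nextWord) {
      if (word.charAt(word.length - 1) != "S") return 1;  // singular
      if (prevWord == "A" || nextWord == "IS") return 1;  // override cue
      return 2;                                           // plural
    }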

Saturday, May 21, 2011

may20jsai

The JavaScript artificial intelligence (JSAI) is a client-side AiApp whose natural habitat is a desktop computer, a laptop or a smartphone.


1 Fri.20.MAY.2011 -- Fixing KbTraversal

The more we improve the artificial intelligence in JavaScript (JSAI), the easier it becomes to program. Fewer things go wrong, and fewer problems are hidden from view. Right now we would like to improve the performance of the knowledge-base traversal module KbTraversal, which keeps the process of artificial thought going by activating a series of concepts one at a time. We wonder why certain concepts are not being activated, and we would like to see KbTraversal announce the name of the concept being activated.

2 Sat.21.MAY.2011 -- AI Tutorial for Science Museums

Yesterday, in the 20may11A.html JSAI as uploaded to the Web, we saw KbTraversal announcing which concepts it would activate and then trying to think a thought about them, but we may have cut back too severely on calls to the obsolete version of the PsiDecay module, because the JSAI became less able to think smoothly. We should probably restore the psi-decay calls for the time being, so that we may gradually improve an already functional AI.

After we restored the PsiDecay calls, we worked on the erroneous display of articles as a subject or an object in the AI tutorial mode. Because the SpreadAct module invokes the display of each line of association from a subject to a verb or from a verb to an object, an item will fail to be displayed if it is not being treated by SpreadAct. We made the AI Mind display its associative thinking somewhat better.

Teachers and docents who display the AI Mind in a school or science museum are invited to report back on Usenet or their own website about how human beings reacted to the experience of witnessing an alien Mind think and communicate in natural human language. Is the AI really thinking, or is it just a chatbot pretending to think?

Wednesday, May 18, 2011

may18jsai

The JavaScript artificial intelligence (JSAI) is a client-side AiApp whose natural habitat is a desktop computer, a laptop or a smartphone.

1 Wed.18.MAY.2011 -- Houston, We Have a Problem.

When we submit "who are you" as a query to the AI Mind, it searches the knowledge base (KB) and it remembers that it is ANDRU -- a ROBOT and a PERSON (a different answer each time that you pose the same existential question). Unfortunately, the software finds the first instance of each concept stored in recent memory and spits out the phonemic engram from the auditory memory channel without regard to whether the stored word is a singular form or a plural form. How can we get the most advanced open-source AI in these parsecs to stop saying "I AM ROBOTS"? The AI may have to start skipping over plural engrams when searching for a singular noun. Therefore, let us perform a little psychosurgery on the AI Mind software and see if we can zero in on a singular noun-form during self-referential thought.

First we use a few JavaScript "alert" boxes in BeVerb() and in NounPhrase() to see what values are being carried along in the variables that keep track of grammatical number as the AI Mind generates a thought in response to user input. We see that the number of the subject is available in the background, so perhaps we can alter the design of the Mind to insist on speaking a singular noun to go with a singular subject. Even though ROBOT and ROBOTS are the same concept, they are not the same expression of the concept. By the way, this issue is another mindmaker (Mentifex) problem that had to be solved in due course -- that is, well along in the AI development process rather than at the first blush of AI newbie enthusiasm.

Upshot: Gradually in the NounPhrase module we introduced code to skip over the retrieval of any word in auditory memory if its num(ber) did not match the number of the subject of an input query. The AI began to answer "who are you" with "I AM ROBOT". This bugfix makes the AI Mind more complex and therefore subject to potentially latent problems, such as knowing a word only in the plural and not in the singular. However, the same bugfix brings the JSAI closer to machine reasoning and thinking with a syllogism such as, "All men are mortal; Socrates is a man; therefore Socrates is mortal."
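
The skip-logic amounts to a number-agreement filter during retrieval, as in this minimal sketch with assumed engram fields:

    // Hypothetical sketch: prefer the most recent auditory engram
    // whose num(ber) agrees with the subject of the query.
    function selectNounEngram(engrams, subjectNum) {
      for (var i = engrams.length - 1; i >= 0; i--) {
        if (engrams[i].num == subjectNum) {
          return engrams[i];  // "ROBOT", not "ROBOTS", for a singular subject
        }
      }
      return null;  // the word may be known only in the other number
    }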

Monday, May 16, 2011

may16mfpj

Now that we have cracked the hard problem of AI wide open, we wish to share our results with all nations.

1 Mon.16.MAY.2011 -- List of Mentifex AI Accomplishments

We are still working on the MileStone of self-referential thought on our RoadMap to artificial general intelligence (AGI). We look back upon a small list of accomplishments along the way.

  • two-step selection of BeVerbs;

  • AudRecog morpheme recognition;

  • look-ahead A/AN selection;

  • seq-skip method of linking verbs and objects;

  • SpeechAct inflectional endings;

  • neural inhibition for variety in thought;

  • provisional retention of memory tags;

  • differential PsiDecay.

2 Mon.16.MAY.2011 -- Achieving AI Mental Stability

Until we devised an AI algorithm for differential PsiDecay in the JavaScript artificial intelligence (JSAI), stray activations had been ruining the AI thought processes for months and years. We now port the PsiDecay solution from the JSAI into MindForth. Meanwhile, Netizens with Microsoft Internet Explorer (MSIE) may point the browser at the AiMind.html page and observe the major open-source AI advance in action. Enter "who are you" as a question to the AI Mind not just one time but several times in a row. Observe that the JSAI tells you everything it knows about itself, because neural inhibition immediately suppresses each given answer in order to let a variety of other answers rise to the surface of the AI consciousness. Before the mad scientist of Project Mentifex jotted down the eureka brainstorm, "[ ] Fri.13.MAY.2011 Idea: Put gradations into PsiDecay?" and wrote the code the next day, the AI Minds were not reliable for mission-critical applications. Now the AI Forthmind is about to become more mentally stable than its creator. We only need to port some JSAI code to Forth.
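
The gradations can be sketched as decay rates that depend on the size of the activation, where the thresholds and step sizes are purely illustrative.

    // Hypothetical sketch of differential PsiDecay: strong activations
    // decay steeply, weak ones gently, and inhibitions climb to zero.
    function differentialPsiDecay(act) {
      if (act > 40) return act - 4;  // steep decay for high activation
      if (act > 0)  return act - 1;  // gentle decay for low activation
      if (act < 0)  return act + 1;  // inhibition recovers toward zero
      return 0;
    }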