Cyborg AI Minds are true concept-based artificial intelligences with natural language understanding, simple at first and lacking robot embodiment, and expandable all the way to human-level intelligence and beyond.

Friday, September 28, 2012

sep27ruai

These notes record the coding of the Russian AI Mind Dushka in JavaScript for Microsoft Internet Explorer (MSIE).

1. Thurs.27.SEP.2012 -- Shortening Test-Range for Verb-Recog

The Dushka Russian artificial intelligence (RuAi) is not properly recognizing a second-person singular verb-form in the ruLexicon Russian lexical array. When we type in the Cyrillic of "Ty veedyeesh menya" for "You see me," the Russian verb is being recorded in the ruLexicon with an erroneous value of "1" for first person instead of "2" for second person.

Apparently the AudListen code for discrimination among grammatical persons was written too specifically for verbs like "dyelayesh" in January of 2012. We may be able to relax the strictness of comparisons by not testing for the vowel just before the personal ending.

We went into the AudListen code for recognizing "dyelayesh" in the second-person singular and we commented out just the test for the vowel. Then we ran Dushka and immediately the RuAi was able to recognize "Ty veedyeesh menya" properly for "You see me" and the AI answered "Ya veezhoo tebya" for "I see you". This instance was one of the easiest bug-fixes of our Russian AI coding experience. Next we may need to comment out the vowel-tests for the other personal forms of a present-tense Russian verb.
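In sketch form, the relaxed test looks like this (our reconstruction, not the verbatim AudListen code). The strict version also demanded the theme vowel in the b14 slot, which matched "dyelayesh" (ДЕЛАЕШЬ) but not "veedyeesh" (ВИДИШЬ); dropping the vowel-test lets both forms through. The right-justified b14..b16 OutBuffer slots are as described in the jan12ruai entry below.

    // A relaxed ending-test: examine only the Ш and Ь of the
    // second-person singular ending, not the theme vowel before it.
    var b14 = "И", b15 = "Ш", b16 = "Ь";   // e.g. the tail of ВИДИШЬ
    var dba = 0, pos = 0;
    if (b15 == String.fromCharCode(1064)      // Ш
     && b16 == String.fromCharCode(1068)) {   // Ь
      // if (b14 == String.fromCharCode(1045)) // Е -- the old vowel-test, commented out
      dba = 2;   // second person
      pos = 8;   // part-of-speech: verb
    }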

Immediately we wonder if the whole present-tense paradigm will start working properly for most if not all the Russian verb conjugations when we stop testing for the vowel inside the inflectional ending. It also occurs to us that the RuAi may start learning Russian verb-forms regardless of the numbered conjugations thought up by human scholars of philology over the centuries since Greek and Roman times. If we tweak the recognition-code that we implemented for one conjugation and it starts to work for all the conjugations, then we may have accidentally bypassed the whole issue of worrying about how to deal with different Russian conjugations.

2. Fri.28.SEP.2012 -- Non-Russian Troubleshooting of ru120926

Working today on an old computer on which we cannot type in Cyrillic, we may nevertheless use a special ru120926T.html test version of the ru120926.html Russian artificial intelligence (RuAi) to determine why the RuAi suddenly says "OSHEEBKA" ("error") rather early in its operation without human input (and therefore without Cyrillic typing).

The first place to look for the cause of the problem is in the NounPhrase module which erroneously outputs OSHEEBKA instead of a correct direct object.

Well, isn't that situation weird? First we put a diagnostic "alert" message at the start of NounPhrase, and we got nowhere -- nothing of value was revealed. Next we put a diagnostic alert in NounPhrase where there was a chance for "subjectflag" to change from its default value of one ("1") to a zero in the presence of either a direct object or a predicate nominative. Still nothing special was revealed. We finally got results when we inserted a conditional alert message to tell us what "motjuste" had been chosen in the condition of looking for a non-subject. The RuAi told us that it had selected concept number "704" just before erroneously outputting the "OSHEEBKA" error message. We recognized "704" as having to be a personal pronoun, but which one? It used to be the accusative case "MENYA" of the Russian pronoun number 701 "YA" for English "I". We no longer use number "704" as a separate concept, because "701" takes care of all forms of "YA" under the influence of the "dba" parameter for the grammatical case involved. The number "704" only shows up in obsolete code that we need to remove from the Russian AI.
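For the record, the conditional diagnostic was along these lines (a hypothetical reconstruction, not the verbatim code; "subjectflag" and "motjuste" are the variables named above, and the sample values are for illustration only):

    // Hypothetical diagnostic: report which "motjuste" NounPhrase has
    // chosen while it is seeking a non-subject (direct object or
    // predicate nominative).
    var subjectflag = 0;   // 0 = the search is for a non-subject
    var motjuste = 704;    // concept-number chosen (sample value)
    if (subjectflag == 0) {
      alert("NounPhrase non-subject motjuste = " + motjuste);
    }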

When we comment out some legacy NounPhrase code that was invoking the concept number "704", the RuAi stops saying "OSHEEBKA" and declares that the motjuste is concept number "701" or the Russian word "YA" in the nominative for English "I". This result is not satisfactory. There should perhaps have been a "nounlock" after the verb "PONIMAYU". We may have to get rid of the "audme" variable not only in the Russian AI but also in the Forth and JavaScript English AI Minds, then find a form of "ME" through a search based on parameters.

Thursday, August 30, 2012

aug23ruai

These notes record the coding of the Russian AI Mind Dushka in JavaScript for Microsoft Internet Explorer (MSIE).

1. Thurs.23.AUG.2012 -- Diagnosing Selection of Subjects

As we troubleshoot the Dushka Russian AI in JavaScript for Microsoft Internet Explorer (MSIE), probably the first point of departure must be inserting a diagnostic "alert" message to let us know how the NounPhrase module is selecting the subject for a sentence of thought in Russian. No matter how a subject is chosen, we want the verblock mechanism to force the retrieval of a particular verb from the so-called IdeaPlex.

Our first major problem after some human input is that NounPhrase selects as most activated a potential subject of "tebya" with a carried-over, spurious "verblock" that does not even lead to a verb, but rather to "tebya" itself. We have probably solved this problem already in the English JSAI.

By searching downwards for "tqv" (the source of "verblock") simultaneously in the Russian JSAI and the English JSAI, we discover that in the English JSAI on 15aug2012 we inserted into InStantiate a line of code to prevent spurious carry-overs of the "tqv" value when "seq" is at zero. Now we insert the same code into the Russian JSAI. Running the AI, we do not get an improvement. Then at the start of WhoBe we also put a zeroing out of "tqv" taken from the English JSAI. Still there is no improvement. In the Russian AI, we then introduce into WhatBe the same tqv-zeroing as was done in the English JSAI. Again there is no improvement.
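The guard being transplanted is essentially one line (a sketch; the sample value is invented for illustration):

    // Do not let a stale "tqv" verblock-source persist when no
    // "seq" concept was tagged during the instantiation.
    var seq = 0;    // nothing followed to become a "seq"
    var tqv = 537;  // stale verblock time-point (sample value)
    if (seq == 0) tqv = 0;  // zero out the spurious carry-over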

2. Fri.24.AUG.2012 -- Affecting Activation of Subjects

It may be necessary to implement code that will switch from an oblique case of an activated concept and find nominative nodes to serve as the subjects of incipient thoughts.

If a direct object is left activated at the end of a sentence, all the nodes of that concept should receive a blanket activation through OldConcept or NounAct. Then NounPhrase may choose nominative nodes as candidates for the subject of a sentence. (Maybe we should make nominative nodes receive a higher activation.) So the process of having a residually activated concept switch from being a direct object in an old thought to being a subject in a new thought should work by whatever mechanism puts a blanket activation on all the nodes of a concept.
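A sketch of the blanket-activation idea (our own illustration, not existing Dushka code): we assume array rows as objects with "psi", "act" and "dba" fields for readability, although the real arrays are flat, and the nominative bonus is the "higher activation" mooted above.

    // Hypothetical NounAct-style sweep: blanket activation on all
    // nodes of concept "nacpsi", with a bonus for nominative nodes
    // (dba == 1) so NounPhrase favors them as candidate subjects.
    var vault = 0, t = 3;
    var Psi = [
      { psi: 701, act: 0, dba: 1 },   // nominative node of concept 701
      { psi: 701, act: 0, dba: 4 },   // accusative node of same concept
      { psi: 800, act: 0, dba: 0 }
    ];
    function nounAct(nacpsi) {
      for (var i = vault; i < t; i++) {
        if (Psi[i] && Psi[i].psi == nacpsi) {
          Psi[i].act = 40;                       // blanket activation
          if (Psi[i].dba == 1) Psi[i].act += 8;  // nominative bonus (our assumption)
        }
      }
    }
    nounAct(701);  // a residually active direct object becomes subject material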

3. Tues.28.AUG.2012 -- Finding "verblock" Verb-forms

We need to put in some diagnostic messages and see what residual activation occurs for a direct object.

Today in the VerbPhrase module we are building up some code which, in the presence of a positive verblock, will still go to the "verblock" time-point in the Ru-array but will not automatically accept the verb-form originally deposited there, typically during human input. Instead, the new code conducts a search of the ruLexicon to find a verb-form with the correct number and person. Initially we forgot to search for the concept-number, so we accidentally got the correct ending but on the wrong verb.
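In outline, the corrected search applies all three constraints at once; omitting the first test is exactly what gave us the right ending on the wrong verb. This is a sketch with an object-row layout assumed for readability, not the shipping code.

    // Hypothetical ruLexicon search for a stored verb-form matching
    // the verb concept AND the required number and person.
    var vault = 0, t = 2;
    var ruLexicon = [
      { psi: 820, num: 1, dba: 1, aud: 101 },  // 1st-person sg. of verb #820
      { psi: 840, num: 1, dba: 1, aud: 202 }   // 1st-person sg. of verb #840
    ];
    function findVerbForm(verbpsi, neednum, needdba) {
      for (var i = t; i >= vault; i--) {       // search backwards in time
        var lex = ruLexicon[i];
        if (lex && lex.psi == verbpsi          // the verb concept itself
                && lex.num == neednum          // grammatical number
                && lex.dba == needdba) {       // grammatical person
          return lex.aud;                      // recall-vector into auditory memory
        }
      }
      return 0;                                // no suitable archival form
    }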

4. Wed.29.AUG.2012 -- VerbGen Returns Inveniend Verb-stem

Yesterday we made some major progress in getting the RuAi to search for correct Russian verb forms, but the new code was not yet perfect, so today we need to make improvements. However, we should probably save and archive yesterday's version so that we can recover from any unforeseen errors.

Now there is a problem because the new, integrated search-code is finding the correct archival verb-form, if it is available, but the verb is appearing in duplicate. Apparently the rest of the VerbPhrase code is finding a "vphraud" recall-vector all over again. We should be able to thwart that phenomenon.

As we start to prepare some documentation of the AudBuffer, OutBuffer and VerbGen modules, we notice that our Russian AI code needs to make use of pertinent variables such as the "gencon" status flag and the "audbase" recall-vector to identify the verb whose inflectional ending must be changed. As soon as we use "audbase" in our code, the Russian AI stops switching to a different verb and at least outputs the stem of the verb that we are trying to change. Since we have also set the "gencon" flag, VerbPhrase calls VerbGen but does not make its normal main call to SpeechAct, so we do not get an extra verb-form as output.

5. Thurs.30.AUG.2012 -- VerbGen Needs "dba" Parameter

Yesterday VerbGen was returning only the stem of an inveniend verb and not the inflected personal ending. However, delivering the stem was a major improvement in the Russian AI functionality. Today we found that we needed only to set the "dba" parameter properly before calling VerbGen, and the Russian AI was able to provide a correct form of the required verb.
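The fix is thus a small parameter hand-off before the call (a sketch; the person codes follow the "dba" usage for verbs in these notes, and VerbGen is only stubbed):

    // Person (and number) must be loaded before VerbGen runs.
    var dba = 0, num = 0;
    function VerbGen() { /* builds the inflected ending from dba and num */ }
    dba = 1;     // 1 = first person, for a subject such as Я
    num = 1;     // singular
    VerbGen();   // now yields a correctly inflected verb-form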

Tuesday, July 17, 2012

jul06mfpj

MindForth Programming Journal


1 Fri.6.JUL.2012 -- Debugging after Major Code Revision

In the MindForth artificial intelligence (AI) we are now letting the AI run in tutorial mode without human input in order to troubleshoot any glitches that occur after the major changes of the most recent release. Without human intervention and under the influence of the KbTraversal module, the AI calls various subroutines to prompt a dialog with any nearby human. We observe some glitches that are due perhaps to a lack of proper parameters when a subroutine is called. We intend to debug the calling of the various subroutines so that we may display an AI Mind that thinks rationally not only when left to its own devices but also when the AI must think in response to queries or comments from human users.


2 Sat.7.JUL.2012 -- Solving a Problem with WhatAuxSDo

In the course of letting MindForth run without human input, we noticed that eventually the WhatAuxSDo module was called for the subject of concept #56 "YOU" and the AI erroneously asked "WHAT DO ERROR DO". By inserting a diagnostic message, we learned that WhatAuxSDo was not finding a "subjnum" value for the #56 "YOU" concept and thus could not find the word "YOU" in a search of the English "En" array. We went into the EnBoot sequence and changed the "num" value for "YOU" from zero ("0") to one ("1"). The AI correctly said, "WHAT DO YOU DO". However, we may need to debug even further and find out why the proper value of "num" for "YOU" is not being set during the output.


3 Sun.8.JUL.2012 -- Tightening Code for Searchability

When we search the free AI source code for "2 en{", which should reveal any storing or retrieval of a "num" value, we do not find any code for storing "num" in the English lexical array. Therefore we should search for "5 en{" to see where the part-of-speech "pos" is stored. We do so, and still we do not find what we need. Then we try searching for "5  en{" with an extra blank space in the search, and we discover that a form of "pos" is stored both in EnVocab and in OldConcept. At the same time we see that "num" is also stored in the same two mind-modules. Now we should be able to troubleshoot the problem and find out why English lexical "num" is not being stored during processes of thought. First, however, we will try to tighten up the code so that only one space intervenes, for future occasions when we are trying to find instances of array-manipulation code.


4 Wed.11.JUL.2012 -- Num(ber) in the English Lexical Array

We need to discover where elements of the flag-panel are inserted into nodes of the English lexical array, so that the "num(ber)" value may be stored properly as the AI Mind continues to think and to respond to queries from human users.


5 Fri.13.JUL.2012 -- Correcting Fundamental Flaws

Today in the EnBoot English bootstrap module we are making a blanket change by moving the EnVocab calls down to be on the same line of code as the calls to InNativate, so that the "num(ber)" setting will go properly into EnVocab. Our recent troubleshooting has revealed that WhatAuxSDo needs to find a "num" value in the English lexical array in order to function properly.


6 Sat.14.JUL.2012 -- Tracking num(ber) Values

Next we need to zero in on how the AI assigns "num(ber)" tags during the recognition of words. In OldConcept, it may be necessary to store a default, such as "num" or "unk", and then to test for any positive "ocn" that will simply override the default.

Since we rely on OldConcept to store the number tag, we may need to track where the number-value comes from. AudInput has some sophisticated code which tentatively assigns a plural number when the character "S" is encountered as the last letter in a word. In the work of 4nov2011 we started assigning zero as a default number for the sake of the EnArticle module, but we may need to change the AudInput module back to assigning one ("1") as the default number.
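In JavaScript terms (for the parallel JSAI), the tentative number-guessing being described might look like the following sketch; the variable names beyond "num" and "ocn" are our own assumptions.

    // Default to singular, flag a provisional plural on a final "S",
    // and let any positive old-concept number override the guess.
    var word = "ROBOTS";
    var num = 1;                                       // default: singular
    if (word.charAt(word.length - 1) == "S") num = 2;  // tentative plural
    var ocn = 0, oldnum = 0;                           // set by OldConcept when known
    if (ocn > 0 && oldnum > 0) num = oldnum;           // known number wins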


7 Mon.16.JUL.2012 -- Avoiding Unwarranted Number Values

If the most recent "num(ber)" of a word like "ROBOTS" is found to be "2" for plural, we do not want the AI to make the false assumption that the "num(ber)" of the "ROBOTS" concept is inherently plural. Yet we want words like "PEOPLE" or "CHILDREN" to be recognized as being plural.


8 Tues.17.JUL.2012 -- Making Sure of Lexical Number

We may need to go into the NounPhrase subject-selection process and capture the num(ber) value of the lexical item being re-activated within the English lexical array.

Monday, July 02, 2012

jun29mfpj

MindForth Programming Journal

1 Fri.29.JUN.2012 -- IdeaPlex: Sum of all Ideas

The sum of all ideas in a mind can be thought of as the IdeaPlex. These ideas are expressed in human language and are subject to modification or revision in the course of sensory engagement with the world at large.

The knowledge base (KB) in an AiMind is a subset of the IdeaPlex. Whereas the IdeaPlex is the sum totality of all the engrams of thought stored in the AI, the knowledge base is the distilled body of knowledge which can be expanded by means of inference with machine reasoning or extracted as responses to input-queries.

The job of a human programmer working as an AI mind-tender is to maintain the logical integrity of the machine IdeaPlex and therefore of the AI knowledge base. Whether the AI Mind is implanted in a humanoid robot or is merely resident on a computer, it is the work of a roboticist to maintain the pathways of sensory input/output and the mechanisms of the robot motorium. The roboticist is concerned with hardware, and the mind-tender is concerned with the software of the IdeaPlex.

Whether the mind-tender is a software engineer or a hacker hired off the streets, the tender must monitor the current chain of thought in the machine intelligence and adjust the mental parameters of the AI so that all thinking is logical and rational, with no derailments of ideation into nonsense statements or absurdities of fallacy.

Evolution occurs narrowly and controllably in one artilect installation as the mind-tenders iron out bugs in the AI software and introduce algorithmic improvements. AI evolution explodes globally and uncontrollably when survival of the fittest AI Minds leads to a Technological Singularity.


2 Fri.29.JUN.2012 -- Perfecting the IdeaPlex

We may implement our new idea of faultlessizing the IdeaPlex by working on the mechanics of responding to an input-query such as "What do bears eat?" We envision the process as follows. The AI imparts extra activation to the verb "eat" from the query, perhaps first in the InStantiate module, but more definitely in the ReActivate module, which should be calling the SpreadAct module to send activation backwards to subjects and forwards to objects. Meanwhile, if it is not doing so already, the query-input of the noun "bears" should be re-activating the concept of "bears" with only a normal activation. Ideas stored with the "triple" of "bears eat (whatever)" should then be ready for sentence-generation in response to the query. Neural inhibition should permit the generation of multiple responses, if they are available in the knowledge base.

During response-generation, we expect the subject-noun to use the verblock to lock onto its associated verb, which shall then use nounlock to lock onto the associated object. Thus the sentence is retrieved intact. (It may be necessary to create more "lock" variables for various parts of speech.)
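In the parallel JavaScript AI, the retrieval chain can be pictured as two pointer hops (a sketch only; we assume both locks derive from the "tqv" tag on each engram, and the time-points and the ROBOTS concept-number are sample values):

    // subject --verblock--> verb --nounlock--> object
    var Psi = [];
    Psi[10] = { psi: 72, tqv: 11 };   // KIDS, verblock pointing at MAKE
    Psi[11] = { psi: 73, tqv: 12 };   // MAKE, nounlock pointing at ROBOTS
    Psi[12] = { psi: 39, tqv: 0  };   // ROBOTS (sample concept-number)
    var subjNode = Psi[10];
    var verbNode = Psi[subjNode.tqv];  // verblock hop
    var objNode  = Psi[verbNode.tqv];  // nounlock hop
    // Each hop lands on the engram stored with the original idea, so
    // "KIDS MAKE ROBOTS" comes back intact rather than being reassembled.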

We should perhaps use an input query of "What do kids make?", because MindForth already has the idea that "Kids make robots".


3 Sat.30.JUN.2012 -- Improving the SpreadAct Module

In our tentative coding, we now need to insert diagnostic messages that will announce each step being taken in the receipt of, and response to, an input-query.

We discover some confusion taking place in the SpreadAct module, where "pre @ 0 > IF" serves as the test for performing a transfer of activation backwards to a "pre" concept. However, the "pre" item was replaced at one time with "prepsi", so apparently the backwards-activation code is never executed. We may need to test for a positive "prepsi" instead of a positive "pre".

We go into the local, pre-upload version of the Google Code MindForth "var" (variable) wiki-page and we add a description for "prepsi", since we are just now conducting serious business with the variable. Then in the MindForth SpreadAct module we switch from testing in vain for a positive "pre" value to testing for a positive "prepsi". Immediately our diagnostic messages indicate that, during generation of "KIDS MAKE ROBOTS" as a response, activation is passed backwards from the verb "MAKE" to the subject-noun "KIDS". However, SpreadAct does not seem to go into operation until the response is generated. We may need to have SpreadAct operate during the input of a verb as part of a query, in a chain where ReActivate calls SpreadAct to flush out potential subject-nouns by retro-activating them.
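In the parallel JavaScript AI the repaired test would be the equivalent of the following (a sketch; MindForth itself uses the Forth conditional quoted above, with "prepsi" in place of the defunct "pre"):

    // Pass a single unit of activation backwards to the "prepsi"
    // concept, e.g. from MAKE back to KIDS (sample values).
    var vault = 0, t = 2, prepsi = 72;
    var Psi = [ { psi: 72, act: 35 }, { psi: 59, act: 10 } ];
    if (prepsi > 0) {                  // was: testing the defunct "pre"
      for (var i = t; i >= vault; i--) {
        if (Psi[i] && Psi[i].psi == prepsi) {
          Psi[i].act += 1;             // the conservative one-unit spikelet
        }
      }
    }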


4 Sat.30.JUN.2012 -- Approaching the "seqneed" Problem

As we search back through versions of MindForth AI, we see that the 13 October 2010 MFPJ document describes our decision to stop having ReActivate call SpreadAct. Now we want to reinstate the calls, because we want to send activation backwards from heavily activated verbs to their subjects. Apparently the .psi position of the "seqpsi" has changed from position six to position seven, so we must change the ReActivate code accordingly. We make the change, and we observe that the input of "What do kids make?" causes the .psi line at time-point number 449 to show an increase in activation from 35 to 36 on the #72 KIDS concept. The increase is so small because SpreadAct conservatively imparts only one unit of activation backwards to the "prepsi" concept. If we have trouble making the correct subjects be chosen in response to queries, we could increase the backwards SpreadAct spikelet from one to a higher value.

Next we have a very tricky situation. When we ask, "What do kids make?", at first we get the correct answer of "Kids make robots." When we ask the same question again, we erroneously get, "Kids make kids." It used to be that such a problem was due to incorrect activation-levels, with the word "KIDS" being so highly activated that it was chosen erroneously for both subject and direct object. Nowadays we are starting with a subject-node and using "verblock" and "nounlock" to go unerringly from a node to its "seq" concept. However, in this current case we notice that the original input query of "What do kids make?" is being stored in the Psi array with an unwarranted seq-value of "72" for "KIDS" after the #73 "MAKE" verb. Such an erroneous setting seems to be causing the erroneous secondary output of "Kids make kids." It could be that the "moot" system is not working properly. The "moot" flag was supposed to prevent tags from being set during input queries.

In the InStantiate module, the "seqneed" code for verbs is causing the "MAKE" verb to receive an erroneous "seq" of #72 "KIDS". We may be able to modify the "seqneed" system to not install a "seq" at the end of an input.

When we increased the number of time-points for the "seqneed" system to look backwards from two to eight, the system stopped assigning the spurious "seq" to the #73 verb "MAKE" at t=496 and instead assigned it to the #59 verb "DO" at t=486.


5 Sun.1.JUL.2012 -- Solving the "seqneed" Problem

After our coding session yesterday, we realized that the solution to the "seqneed" problem may lie in constraining the time period during which InStantiate searches backwards for a verb needing a "seq" noun. When we set up the "seqneed" mechanism, we rather naively ordained that the search should try to go all the way back to the "vault" value, relying on a "LEAVE" statement to abandon the loop after finding one verb that could take a "seq".

Now we have used a time-of-seqneed "tsn" variable to limit the backwards searches in the "seqneed" mechanism of the InStantiate module, and the MindForth AI seems to be functioning better than ever. Therefore we shall try to clean up our code by removing diagnostics and upload the latest MindForth AI to the Web.
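In the parallel JavaScript AI the bounded search would read roughly as follows (a sketch with sample values; the real MindForth loop is in Forth and ends with LEAVE rather than break):

    // Search backwards for a verb still needing a "seq", but only as
    // far back as the time-of-seqneed "tsn", not the bootstrap "vault".
    var t = 500, tsn = 492, seqpsi = 77;     // sample values
    var Psi = [];
    Psi[496] = { pos: 8, psi: 73, seq: 0 };  // a verb lacking a "seq"
    for (var i = t; i > tsn; i--) {          // formerly: i > vault
      if (Psi[i] && Psi[i].pos == 8 && Psi[i].seq == 0) {
        Psi[i].seq = seqpsi;                 // attach the seq-noun
        break;                               // one verb only (Forth "LEAVE")
      }
    }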

Saturday, February 11, 2012

feb11ruai

Artificial Intelligence in Russian

1. Thurs.9.FEB.2012 -- Unspoken Be-Verbs as a Default

The Russian-speaking artificial intelligence Dushka needs a default BeVerb module that will silently assert itself as the automatic carrier of thought until a non-be-verb takes over from the provisional default. In our coding of a Russian mind, we will assume that any noun or pronoun, beginning a thought in the nominative case, is automatically the subject of a putative BeVerb until proven otherwise. In this way, our cognitive software will prepare for a BeVerb and switch automatically when a non-be-verb occurs.

We should work first on the comprehension of putative be-verbs and second on their generation, so that what we learn in comprehending be-verbs may be used in generating thoughts involving a BeVerb. So we type into the AI a Russian sentence to see if the software can understand it.

Human: душка робот

Robot: ДУШКА ЧТО ДУШКА ТАКОЕ

We said "Dushka is a robot" but the AI responded only, "Dushka -- what is Dushka?" We need to implement a default BeVerb in the comprehension of a sentence that lacks a visible BeVerb.

In the InStantiate module, we can trap for the input of a "c==32" space-bar when the "seqneed" is set to "8" for want of an incoming verb. We may then do something outrageous, but normal for Russian. From InStantiate we may provisionally send into AudMem a space-bar character with an "audpsi" of "800" for the verb БЫТЬ ("to be"), so that the AI is ready to record any noun coming in as a predicate nominative in conjunction with the be-verb. Now, if we implement such an outrageous step, it is possible that our AI memory-banks will become replete with quasi-spurious engrams of infinitive be-verbs that typically do not materialize. It could be that the presence of a spurious be-verb engram will not matter, if the cancellation of the default occurs as soon as some actual verb comes in. Then cancelling the spurious default will involve removing or nullifying any associative tags laid down momentarily during the enactment of the default.
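In rough outline, the trap might look like the following (a sketch, not the verbatim Dushka code; AudMem is stubbed here, and the variable roles are as described above):

    // A space-bar arrives while a verb is still awaited, so a silent
    // BYT' ("to be") pseudo-engram is deposited in auditory memory.
    var c = 32;         // incoming space-bar
    var seqneed = 8;    // InStantiate is still waiting for a verb
    var pho = "", audpsi = 0;
    function AudMem() { /* deposits pho with audpsi into auditory memory */ }
    if (c == 32 && seqneed == 8) {
      pho = " ";        // a blank pseudo-phoneme
      audpsi = 800;     // concept-number of the Russian be-verb БЫТЬ
      AudMem();         // deposit the provisional be-verb engram
    }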

2. Fri.10.FEB.2012 -- Instantiating Imaginary Be-Verbs

In the InStantiate module we will now experiment with code to create in auditory memory a pseudo-engram of a non-existent be-verb after the perception of a nominative noun or pronoun. Since the Russian-speaking mind waits for a predicate nominative, it needs at least an imaginary be-verb as the holder of associative links between subject and predicate nominative.

Now inside InStantiate we have assembled the code that creates a be-verb pseudo-engram in the three memory arrays for "Psi" concepts, Russian words and auditory engrams. The Psi node is automatically creating a "pre" tag that links the pseudo-verb back to its subject. We need to implement code that will finish the intermediation of the unspoken Russian BeVerb between its subject and the predicate nominative. The code must also cancel or uninstall the imaginary BeVerb if a real verb occurs instead of the provisionally expected BeVerb.

3. Sat.11.FEB.2012 -- Integration of Default Be-Verbs

We have the AI pretending that a BeVerb comes in after a nominative subject, and now we need to create the "seq" tag from the subject to the default BeVerb. First in the InStantiate module we insert a line of code declaring that the pseudo-be-verb is indeed a verb with respect to its part of speech, so that the following code will try to reach backwards to the subject engram and install a "seq" tag referring to the now not-so-imaginary BeVerb. We run the Dushka AI and we type in, ты робот -- which is Russian for "You are a robot", but without the be-verb. We are puzzled when Dushka answers, Я ЧТО Я ТАКОЕ ("I -- WHAT AM I?") and that's all she wrote. It may indicate that her concept of self has been activated by the input referring to "you", but she does not seem to have understood the input. We check the diagnostic display, and we see that her concept of self now has a "seq" tag referring right back to herself instead of to the default Russian BeVerb. What went wrong? We look at the JavaScript source code again, and we see that it was not enough to set the part-of-speech as a verb. We go ahead and we set the Psi concept-number to be that of the Russian be-verb. Then we run the Russian AI again with the same input and we sit there in shock when the AI announces to us: Я РОБОТ. Dushka has just said to us, "I AM A ROBOT" in Russian. From the diagnostic display we discover that the same changes that made Dushka able to understand the idea, made her able to think the idea.

Saturday, February 04, 2012

feb4ruai

Artificial Intelligence in Russian

Fri.3.FEB.2012 -- Recognizing Inflections

For the Russian-thinking Dushka AI Mind, we have perhaps stumbled upon a way to avoid the hard-coding of noun paradigms and instead to let the Russian AI learn the inflected endings of Russian nouns from its own experience. For example, right now the Russian artificial intelligence (RuAi) fails to recognize the Psi concept #501 БОГ in the following exchange.

Human: я уважаю бога ("I honor God.")
Robot: ТЫ УВАЖАЕШЬ БОГА ("You honor God.")

Robot: ЧТО БОГА ТАКОЕ ("What is God?")

The diagnostic display reveals that the software has almost recognized the word for God.

559. Б 0 * 1 1 0
560. О 0 * 0 1 0
561. Г 0 * 0 1 501
562. А 0 * 0 0 902

Aha! Suddenly it becomes clear that two things are happening. The Psi concept #501 is indeed being recognized at first, but perhaps the provisional-recognition "prc" variable is not being set, and so AudInput calls NewConcept as if the AI were learning a new word instead of recognizing an old word.

Sat.4.FEB.2012 -- Learning Russian Like a Human Child

Now in a very rough way we have trapped for "zad1" in the AudRecog module so as to recognize a noun (БОГА) with one character of inflection added onto it. Because the noun was indeed recognized, the InStantiate "seqneed" mechanism tagged the noun in the "ruLexicon" with a "dba" of "4" to indicate a direct-object accusative case. In other words, the Russian AI learned a new noun-form as a human child would learn it, that is, from the speech patterns of another speaker of Russian.


Wednesday, February 01, 2012

feb1ruai

Artificial Intelligence in Russian

Tues.31.JAN.2012 -- Generating and Recognizing Verbs

In our Dushka Russian AI we have the problem that new verb-forms generated on the fly by the VerbGen module are not being recognized and tagged with critical parameters as they settle into auditory memory. However, it looks as though a verb does get recognized if the "audpsi" tags for the verb in auditory memory extend far back enough to cover the stem of the verb. Therefore, instead of devising ways to bypass the chain of ReEntry calling AudMem, which in turn calls AudRecog, we should perhaps instead implement a "backfill" of any verb generated in the VerbGen module to let the "audpsi" tags extend back to the last "pho(neme)" of the verb-stem. Then the "provisional recall" mechanism in AudRecog ought to recognize the verb-form generated by the VerbGen module.

We created a "vip" variable to hold the value of "motjuste" when VerbPhrase calls VerbGen and to transfer the known concept-number of the verb, near the end of the stem in VerbGen, into the provisional "prc" variable for AudRecog. In this way, we got the AI internally to recognize and record verb-forms generated internally by the VerbGen module. However, to get the AI to call the correct verb-forms, we had to modify some recent OldConcept code for deciding what "dba" value to store with a lexical item. Now we have a problem with tagging the "dba" of a simple word like МЕНЯ when it comes in.

We cannot rely on the form of МЕНЯ to tell us its "dba", because it could be genitive or accusative. We need to extract clues from the incoming sentence in order to assign the proper "dba" during the storage of МЕНЯ.

Wed.1.FEB.2012 -- Tagging Engrams with Parameters

We can perhaps rely on the "seqneed" mechanism of InStantiate to provide the "dba" parameter for a noun or pronoun entering the mind as user input. (Perhaps the "seqneed" variable should change to a "seqseek" variable for greater clarity.) We may be able to strengthen the use of "seqneed" by adding a kind of "pass-over" when a preposition is encountered, so that the software continues to look for a direct-object noun when a preposition-plus-noun combination is detected and skipped.

Where the InStantiate module tests for a "seqneed" of "5" and encounters a satisfying noun or pronoun to become a "seq" for the verb, we make the assumption that the time "t" identifies the temporal location of the noun or pronoun in both the Psi array and the "ruLexicon" array. We insert two lines of code to first "examine" the Russian lexical array and then to substitute a numeric "4" for the "ru4" flag of the "dba" value. Since the noun or pronoun is going to be the "seq" of the verb, that same noun or pronoun warrants a "dba" of "4" as a direct object that should be in the accusative case. However, we may need to make other arrangements if the verb is intransitive and the noun must be in the nominative as a predicate nominative.
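The two inserted lines amount to roughly this (a sketch; the part-of-speech codes of 5 for a noun and 7 for a pronoun are our assumptions here, and the time-point is a sample value):

    // The noun/pronoun at time "t" becomes the "seq" of the verb,
    // so mark it accusative in the ruLexicon.
    var seqneed = 5, pos = 7, t = 321;
    var ruLexicon = [];
    ruLexicon[t] = { psi: 701, dba: 1 };     // the incoming pronoun at time t
    if (seqneed == 5 && (pos == 5 || pos == 7)) {
      ruLexicon[t].dba = 4;   // 4 = accusative: it will be the verb's "seq"
    }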

Monday, January 30, 2012

jan29ruai

Artificial Intelligence in Russian

1. Sun.29.JAN.2012 -- Verbs Without Direct Objects

Today in the Dushka Russian AI we begin to address a problem that occurs also in our English AI Mind. Sometimes a verb does not need an object, but the AI needlessly says "ОШИБКА" for "ERROR" after the verb. We need to make it possible for a verb to be used by itself, without either a direct object or a predicate nominative. One way to achieve this goal might be to use the jux flag in the Psi conceptual array to set a flag indicating that the particular instance of the verb needs no object.

We have previously used the "jux" flag mainly to indicate the negation of a verb. If we also use "jux" with a special number to indicate that no object is required, we may have a problem when we wish to indicate both that a verb is negated and that it does not need an object, as in English if we were to say, "He does not play."

One way to get double duty out of the "jux" flag might be to continue using it for negation by inserting the English or Russian concept-number for "NOT" as the value in the "jux" slot, but to make the same value negative to indicate that the verb shall both be negated and shall lack an object, as in, "He does not resemble...."

During user input, we could have a default "jux" setting of minus-one ("-1") that would almost always get an override as soon as a noun or pronoun comes in to be the direct object or the predicate nominative. If the user enters a sentence like "He swims daily" without a direct object, the "jux" flag would remain at minus-one and the idea would be archived as not needing a direct object.
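One possible encoding, with "NOT_PSI" standing in as a hypothetical placeholder for whatever concept-number the negation word actually carries (a sketch, not settled code):

    // Double-duty "jux": sign carries objectlessness, magnitude carries negation.
    var NOT_PSI = 250;                 // hypothetical negation concept-number
    var jux = -1;                      // input default: "no object needed"
    var negated = true, noObject = true;
    if (negated) jux = NOT_PSI;        // negated, object still expected
    if (negated && noObject) jux = -NOT_PSI;  // "He does not resemble...."
    // Decoding the flag:
    var isNegated  = (Math.abs(jux) == NOT_PSI);
    var needsNoObj = (jux < 0);        // any negative jux: no object required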

2. Sun.29.JAN.2012 -- Using Parameters to Find Objects

While we work further on the problem of verbs without objects, we should implement the use of parameters in object-selection. First we have a problem where the AI assigns uneven activation-levels to a three-word input: 23 28 26. These levels cause the problem that the AI turns the direct object into a subject, typically with an erroneous sentence as a result.

In RuParser, let us see what happens when we comment out a line of code that pays attention to the "ordo" word-order variable. Hmm, we get an even more pronounced separation: 20 25 30.

Here we have a sudden idea: We may need to run incoming pronouns through the AudBuffer and the OutBuffer in order unequivocally to assign "dba" tags to them. When we were using separate "audpsi" concept-numbers to recognize different forms of the same pronoun, the software could pinpoint the case of a form. We no longer want different concept-numbers for the same pronoun, because we want parameters like "dba" and "snu" to be able to retrieve correct forms as needed. Using the OutBuffer might give us back the unmistakable recognition of pronoun forms, but it might also slow down the AI program.

Before we got the idea about using OutBuffer for incoming pronouns, in the OldConcept module we were having some success in testing for "seqneed" and "pos" to set the "dba" at "4=acc" for incoming direct objects. Then we rather riskily tried setting a default "dba" of one for "1=nom" in the same place, so that other tests could change the "dba" as needed. However, we may obtain greater accuracy if we use the OutBuffer.

3. Mon.30.JAN.2012 -- Removing Engram-Gaps From Verbs

Yesterday in the Russian AI we experimented rather drastically with using the "ordo" counter to cause words of input to receive levels of activation on a descending slope, so that the AI would be inclined to generate a sentence of response starting with the same subject as the input. We discovered that the original JavaScript AI in English was not properly keeping track of the "ordo" values, so we made the simple but drastic change of incrementing "ordo" only within OldConcept and NewConcept, since every incoming word must pass through one or the other of those modules.


Today we have been sidetracked into correcting a problem in the VerbGen module. After input with a fictitious verb, VerbGen was generating a different form of the made-up verb in response, but calls to ReEntry were inserting blank aud-engrams between the verb-stem and the new inflection in the auditory channel. By using if (pho != "") ReEntry() to conditionalize the call to ReEntry for OutBuffer positions b14, b15 and b16, we made VerbGen stop inserting blank auditory engrams. However, there was still a problem, because the AI was making up a new form of the fictitious verb but not recognizing it or assigning a concept-number to it as part of the ReEntry process.


Thursday, January 26, 2012

jan26ruai

Artificial Intelligence in Russian


Thurs.26.JAN.2012 -- Insufficient Activation of Subjects

The most glaring problem in the Dushka Russian AI right now is that the AI does not fully activate the subject-pronoun when we type in a short sentence of subject, verb and object. Without a proper subject to provide parameters, the AI fails to select or generate a proper Russian verb-form.

When we type in "люди знают нас" ("People know us"), as an answer we get "ВАМ ЗНАЮТ ТЕБЯ" -- a mishmash of "to you" "they know" "you". In general, the AI seems to be taking the final object entered as input and trying to convert it into the subject for a response.

Thurs.26.JAN.2012 -- Using the "seqneed" Variable

The Russian AI is not setting a Psi "seq" flag when we enter a Russian word as the subject of a following verb. When we inspect the recent 10nov11A.F MindForth code for clues, we discover that in October of 2011 we made major improvements to the method of assigning "seq" tags. We began using the "seqneed" variable as a way of holding off on assigning a "seq" until either the desired verb or the desired noun/pronoun made itself available. However, apparently in the English JavaScript AI we wrote the "seqneed" code only for needing nouns and not yet for needing a verb. No, we did write the code, but it involved avoiding the English auxiliary verb "do", so we accidentally removed the verb-seqneed code from the RuAi. Let us put most of the code back in, and see what happens. Upshot: Once we put the code back into InStantiate, subjects of verbs once again began having a "seq" reference to the verb. The AI even skipped an adverb that we then inserted as a test.

Sunday, January 15, 2012

jan13ruai

These notes record the coding of the Russian AI Mind Dushka in JavaScript for Microsoft Internet Explorer (MSIE).

1. Fri.13.JAN.2012 -- Re-thinking Word Recognition

For artificial intelligence in Russian we need to re-think the whole idea of word-recognition as previously implemented in our English AI Minds. In English we did not worry much about word-endings, but in Russian (or German) we need to recognize a verb-form regardless of the number and person in which it is encountered. Since we are using the OutBuffer mechanism to detect and recognize verb-endings, we would like to use the same mechanism to retroactively insert a provisional audpsi identifier on not just the final phoneme of an auditory word-engram but also on the final stem-phoneme and perhaps on each phoneme of the inflected verb-ending. Then we would like to modify the AudRecog module so that it holds onto the provisional audpsi and declares the recognition of a verb in whatever present-tense form it is encountered.

Now we have run the current AI with an Alert box to tell us what is the value of "audpsi" when a second-person singular verb-ending is detected. With the input of "ЗНАЕШЬ" there was no value given for "audpsi", but for "ДЕЛАЕШЬ" a value of "821" was indicated, because the verb-form in its various permutations is provided in the RuBoot sequence.

2. Sat.14.JAN.2012 -- Enhancing Auditory Input

Yesterday in the AudMem module we had difficulty in waiting for the deposition of an audpsi ultimate-tag and in trying retroactively to insert the tag on the penultimate phonemes of the Russian word being recognized. We were obtaining values for audpsi at times when we expected there to not yet be an audpsi.

Although we try zeroing out audpsi at the end of AudMem, it looks as though further use of "audpsi" is required in the AudListen module and in the AudInput module, where finally audpsi is converted to oldpsi for use in the OldConcept module.

It turns out that AudListen calls AudInput when a space-bar is reached during keyboard entry of a word. The AudInput module, without using AudMem, directly stores an audpsi ultimate-tag retroactively by using the "tult" value. Therefore we should be trying to insert additional "audpsi" tags in AudInput and not in AudMem.
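The backfill then becomes a small retroactive loop in AudInput (a sketch; "tult" is the document's term for the time-point of the ultimate character, while "stemTime" is our hypothetical name for the last stem-phoneme's time-point, and all values are samples):

    // Stamp "audpsi" backwards from the ultimate phoneme at "tult"
    // down to the stem-final phoneme, so AudRecog can recognize the
    // stem under any inflected ending.
    var audpsi = 840, tult = 562, stemTime = 559;
    var Aud = [];
    for (var j = stemTime; j <= tult; j++) Aud[j] = { pho: "?", audpsi: 0 };
    for (var i = tult; i >= stemTime; i--) {
      Aud[i].audpsi = audpsi;   // stem-final and ending phonemes alike
    }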

3. Sun.15.JAN.2012 -- Auditory Stem-Tagging

We have gradually learned that the AudInput module will not let us readjust values of audpsi on a word from within an if-clause testing for a value of zero on the "aud4" or ctu continuation-flag. Therefore we may need to introduce a secondary if-clause in order to make each phoneme of the word carry the audpsi tag.

We developed a suspicion that something was not letting a positive audpsi be inserted after any phoneme with an "aud4" continuation-flag "ctu" of one. We searched for "aud4 ==" and in audDamp we found the conditional "if (aud4 == 1) aud5 = 0". This obscure line of code made us spend one or two days of work in trying to comprehend why we could not "backfill" the audpsi value onto phonemes prior to the final phoneme of a word.

When we commented out the offending line in audDamp, we began to notice unwarranted carry-overs of an old audpsi onto the first phoneme of the subsequent word. To correct that problem, audpsi will need to be reset to zero in at least one additional location. Actually, we had to reset "morphpsi" to zero at the end of AudRecog to solve the problem.

4. Sun.15.JAN.2012 -- Russian Verb Stem Recognition

Now in AudRecog we need to set up provisional recognition of Russian verb-stems. We create a "provrec" variable for "provisional recognition" and we use it to detect the early presence of "audpsi" tags before the end of a word is reached. Dushka begins to recognize incoming Russian verbs and to generate incorrect but on-target sentences using the recognized verb in the infinitive form. It remains to use the AudBuffer mechanism and the parameters of person and number to generate the output of a Russian verb in the proper grammatical form.
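In sketch form, "provrec" reduces to carrying a provisional match forward (our own illustration, simplified to a single engram):

    // Remember any audpsi seen before the end of the word, then fall
    // back on it if no exact match materializes for the inflected form.
    var provrec = 0, audpsi = 0;
    var engram = { audpsi: 840 };      // stem-final phoneme carries a tag
    if (engram.audpsi > 0) provrec = engram.audpsi;  // provisional recognition
    // ... at end-of-word, when no exact engram matched:
    if (audpsi == 0 && provrec > 0) {
      audpsi = provrec;   // recognize the verb by its stem
      provrec = 0;        // reset for the next word
    }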



Thursday, January 12, 2012

jan12ruai

These notes record the coding of the Russian AI Mind Dushka in JavaScript for Microsoft Internet Explorer (MSIE).

Thurs.12.JAN.2012 -- Parsing Russian Verb-Endings

In our Russian JavaScript AI code heretofore we have merged the English and Russian AI Minds and we have eliminated or deactivated all the code for thinking in English. For the Dushka AI to think properly in Russian, we need to implement the OutBuffer mechanism for dealing with the inflectional endings of Russian verbs and nouns. Since we are not sure where to begin, we will present ourselves with the problem of dealing with the input of a previously unknown Russian verb.

Pressing Alt-Shift to toggle into Russian input, we ran the AI and we typed in the word "ЗНАТЬ", which the AI properly recognized as bootstrap concept #840. The AI responded with an ungrammatical sentence of "Я ЗНАТЬ МЕНЯ".

Then we typed in the word "ЗНАЮ", which the AI failed to recognize as a form of ЗНАТЬ, assigning instead a concept number of 882, as if the item was a brand new word being learned by the AI. We will try setting the value of "nru" to 900 at the end of RuBoot, so that new concepts will be learned with concept numbers starting at #901. Now we typed in "ЗНАЮ" and it was assigned #901 as a concept number. Next we typed in "ЗНАЕШЬ", and it, too, was assigned #901 as a new concept.

If we want the OutBuffer mechanism to recognize a personal verb form as such, we will need to go back to a version of the Russian AI which was sending input into the buffers. On the Packard-Bell desktop computer in the 25dec11A.F MindForth, we used the "abc" transfer-variable in the AudListen module to capture input keystrokes and move the characters into a buffer. In the JavaScript AI, we will need to use the area of AudListen() where "pho = pho.toUpperCase()" turns each keystroke into an uppercase Cyrillic letter. From there we also call AudBuffer so that the "abc" values are transferred into the buffer.

We should probably not call OutBuffer from the "CR()" carriage-return module, which deals with the moment after an incoming word has gone into the AudMem() module. Instead we should probably deal in AudListen() directly with the input of a space-bar or a carriage-return.

We need a suitable location to reset the "phodex" counter back to zero after the end of a word of input. The "CR()" carriage-return module does not seem to effect the change promptly enough. Let us try resetting "phodex" in the AudListen module when a carriage-return or a space-bar is entered. That method seems to work well, and somehow the AudBuffer and the OutBuffer apparently get cleared out.

Now we need to choose where the testing for any particular verb-ending in the OutBuffer will take place. It could maybe take place in the AudMem module. No, it turns out that it is somehow too late to test for a "b16" ending in AudMem. It works better if we test for "b16" in AudListen, before the character even goes into AudMem. It also turns out that we can use "if (b16==String.fromCharCode(1070))" as a way to test for an actual Russian character.

In AudListen we have now managed to build up code that tests the final three right-justified spaces in the OutBuffer and recognizes a second-person singular Russian verb-ending during keyboard input. Within the same test-code we have set the "dba" as "2" for second person and the part-of-speech "bias" and "pos" at "8" for a verb. The set values carried over into the memory arrays. Thus we expanded and improved the RuParser function. The same mechanism that recognizes a verb-ending also parses the word as a verb.
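In sketch form, the ending-test reads as follows (our reconstruction; the Cyrillic code points are real, and the right-justified b14..b16 OutBuffer layout is as described above). Е-Ш-Ь closes a second-person singular verb such as ЗНАЕШЬ or ДЕЛАЕШЬ.

    // Test the last three right-justified characters of the word.
    var b14 = "Е", b15 = "Ш", b16 = "Ь";
    var dba = 0, bias = 0, pos = 0;
    if (b14 == String.fromCharCode(1045)      // Е
     && b15 == String.fromCharCode(1064)      // Ш
     && b16 == String.fromCharCode(1068)) {   // Ь
      dba  = 2;   // second person
      bias = 8;   // expect a verb
      pos  = 8;   // part-of-speech: verb
    }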