1. Sun.29.JAN.2012 -- Verbs Without Direct Objects
Today in the Dushka Russian AI we begin to address a problem that also occurs in our English AI Mind. Sometimes a verb does not need an object, but the AI needlessly says "ОШИБКА" ("ERROR") after the verb. We need to make it possible for a verb to be used by itself, without either a direct object or a predicate nominative. One way to achieve this goal might be to use the "jux" flag in the Psi conceptual array to indicate that the particular instance of the verb needs no object.
We have previously used the "jux" flag mainly to indicate the negation of a verb. If we also use "jux" with a special number to indicate that no object is required, we run into a problem when we wish to indicate both that a verb is negated and that it needs no object, as in the English sentence "He does not play."
One way to get double duty out of the "jux" flag might be to continue using it for negation by inserting the English or Russian concept-number for "NOT" as the value in the "jux" slot, but to make that value negative when the verb is both negated and lacks an object, as in, "He does not resemble...."
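To make the convention concrete, here is a minimal sketch of the proposed double-duty encoding. The concept-number for "NOT" is a placeholder here; the real number comes from the lexicon of whichever AI is running.

    // Hypothetical sketch of the double-duty "jux" convention.
    var NOT = 250;  // placeholder concept-number for "NOT"

    function encodeJux(negated, hasObject) {
      if (negated && hasObject) return NOT;     // "He does not play it."
      if (negated && !hasObject) return -NOT;   // "He does not play."
      if (!negated && !hasObject) return -1;    // "He swims."
      return 0;                                 // ordinary verb with an object
    }

The sign of "jux" would then carry the object-requirement, while its magnitude would still carry the negation concept.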
During user input, we could have a default "jux" setting of minus-one ("-1") that would almost always be overridden as soon as a noun or pronoun comes in to serve as the direct object or the predicate nominative. If the user enters a sentence like "He swims daily", with no direct object, the "jux" flag would remain at minus-one and the idea would be archived as not needing a direct object.
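On the parsing side, the override might look something like the following sketch, assuming the part-of-speech codes 5=noun, 7=pronoun and 8=verb; the function name and flow are illustrative, not the actual Dushka code.

    var jux = -1;        // default: verb presumed to need no object
    var verbSeen = false;

    function onInputWord(pos) {
      if (pos == 8) verbSeen = true;           // a verb has come in
      if (verbSeen && (pos == 5 || pos == 7))  // noun or pronoun after the verb
        jux = 0;                               // override the no-object default
    }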
2. Sun.29.JAN.2012 -- Using Parameters to Find Objects
While we work further on the problem of verbs without objects, we should implement the use of parameters in object-selection. First we have a problem where the AI assigns activation-levels of 23, 28 and 26 to a three-word input, leaving the subject with the lowest activation. These levels cause the AI to turn the direct object into a subject, typically producing an erroneous sentence.
In RuParser, let us see what happens when we comment out a line of code that pays attention to the "ordo" word-order variable. Hmm, we get an even more pronounced separation: 20 25 30.
Here we have a sudden idea: we may need to run incoming pronouns through the AudBuffer and the OutBuffer in order to assign "dba" tags to them unequivocally. When we were using separate "audpsi" concept-numbers to recognize different forms of the same pronoun, the software could pinpoint the case of a form. We no longer want different concept-numbers for the same pronoun, because we want parameters like "dba" and "snu" to be able to retrieve correct forms as needed. Using the OutBuffer might give us back the unmistakable recognition of pronoun forms, but it might also slow down the AI program.
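As a rough sketch of what such buffer-based recognition might yield, consider mapping the exact form of the Russian first-person pronoun onto a "dba" case tag. The Cyrillic forms are real; the function and the case-code for the dative are assumptions for illustration.

    // Illustrative only: assign a "dba" case tag from the exact
    // pronoun form held in the buffer.
    function dbaFromForm(form) {
      if (form == "Я")    return 1;  // 1=nom
      if (form == "МЕНЯ") return 4;  // 4=acc (same form as the genitive)
      if (form == "МНЕ")  return 3;  // 3=dat (assumed code)
      return 1;                      // default to nominative
    }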
Before we got the idea about using the OutBuffer for incoming pronouns, in the OldConcept module we were having some success in testing "seqneed" and "pos" to set the "dba" at "4=acc" for incoming direct objects. Then we rather riskily tried setting a default "dba" of one ("1=nom") in the same place, so that other tests could change the "dba" as needed. However, we may obtain greater accuracy if we use the OutBuffer.
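The OldConcept test described above might be sketched as follows, under the assumption that "seqneed" holds the part-of-speech a verb is waiting for (5=noun) and "pos" is the part-of-speech of the current word.

    var dba = 1;  // risky default: 1=nom, to be overridden as needed
    if (seqneed == 5 && (pos == 5 || pos == 7)) {
      dba = 4;    // a noun or pronoun fills the awaited slot: 4=acc
    }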
3. Mon.30.JAN.2012 -- Removing Engram-Gaps From Verbs
Yesterday in the Russian AI we experimented rather drastically with using the "ordo" counter to give the words of an input descending levels of activation, so that the AI would be inclined to generate a response starting with the same subject as the input. We discovered that the original JavaScript AI in English was not properly keeping track of the "ordo" values, so we made the simple but drastic change of incrementing "ordo" only within OldConcept and NewConcept, since every incoming word must pass through one of those two modules.
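A minimal sketch of the descending slope, with "ordo" incremented once per word inside OldConcept or NewConcept; the base activation and step size here are illustrative, not the tuned values.

    var ordo = 0;  // word-order counter for the current input

    function wordActivation() {
      ordo = ordo + 1;       // one increment per incoming word
      return 32 - 2 * ordo;  // yields 30 28 26 for a three-word input
    }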
Today we have been sidetracked into correcting a problem in the VerbGen module. After input with a fictitious verb, VerbGen was generating a different form of the made-up verb in response, but calls to ReEntry were inserting blank aud-engrams between the verb-stem and the new inflection in the auditory channel. By using if (pho != "") ReEntry() to make the call to ReEntry conditional for OutBuffer positions b14, b15 and b16, we stopped VerbGen from inserting blank auditory engrams. However, a problem remained: the AI was making up a new form of the fictitious verb but not recognizing it or assigning a concept-number to it as part of the ReEntry process.
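The fix, shown here for the three OutBuffer positions named above; the surrounding VerbGen code and the transfer of each buffer position into "pho" are assumed context.

    pho = b14;
    if (pho != "") ReEntry();  // re-enter only non-blank engrams
    pho = b15;
    if (pho != "") ReEntry();
    pho = b16;
    if (pho != "") ReEntry();  // no more blank gaps between stem and ending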