Strong AI

Purpose: Discussion of Strong AI Minds thinking in English, German or Russian.

Tuesday, October 24, 2017

pmpj1024

Tweaking EnVerbPhrase() for correct responses to who-queries.

We have been adding new material to the SpreadAct() documentation page so as to describe what happens during the processing of an input query in the format of WHO+Verb+Direct-Object. As we try to ensure that the AI generates a grammatically correct response, we change a line of code in the EnVerbPhrase() module to

if ($k[1]==$verbpsi && $i==$verblock) { # 2017-10-24: zero in!
in order to ensure that we obtain the knowledge-base memory that correctly and grammatically answers the input query. Asking "Who has a child" we get "WOMEN HAVE THE CHILD". Asking "Who makes robots" we get "KIDS MAKE THE ROBOTS". The KB-engram is already grammatical; we do not generate the response anew, but merely retrieve the pre-existing, correct grammar.
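Below is a minimal sketch of how such a test might sit inside a backward search over conceptual memory, with $verbpsi as the verb sought and $verblock as the stored time-point of the one correct verb engram; the %psy layout and the concept numbers are placeholders for illustration, not the actual ghost236.pl data structures.

use strict;
use warnings;

# Hypothetical conceptual memory: time-point => [ concept-number, auditory-tag ]
my %psy = (
    861 => [ 810, 858 ],    # e.g. verb concept 810 stored at time-point 861
    872 => [ 810, 869 ],    # a later engram of the same verb
);
my $verbpsi  = 810;         # verb concept sought by the who-query
my $verblock = 861;         # time-point of the verb in the one correct stored idea

for my $i ( sort { $b <=> $a } keys %psy ) {   # search backwards in time
    my @k = ( $i, @{ $psy{$i} } );             # unpack the engram into @k
    if ($k[1]==$verbpsi && $i==$verblock) { # 2017-10-24: zero in!
        print "retrieving the verb engram at t=$i\n";
        last;                                  # use only that one memory
    }
}

The point of the double condition is that matching the verb concept alone would accept any engram of the verb, while matching the $verblock time-point as well zeroes in on the grammatical memory that belongs to the chosen answer.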

Friday, October 20, 2017

pmpj1020

Troubleshooting problems in responses to who-queries.

In the ghost236.pl Perl AI we need to troubleshoot why verbs used in response to who-queries are not using the correct grammatical number in agreement with the subject-noun in the response. First we need to find out whether the num(ber) from a remembered noun is kept track of so that any new engram of the same noun will have the same num(ber). We observe more than one variable used in previous AI Minds to keep track of the num(ber) dealt with in the OldConcept() mind-module, and we try now to standardize on $recnum or "recognized number". The JavaScript AI with version number "27jun15A" has a test in the InStantiate() module to replace $num with $recnum if $recnum is greater than zero, so we might use the same test in the ghost.pl AI. As we debug at length, we notice that the JavaScript AI easily distinguishes the number difference between "MAN" and "MEN" and between "WOMAN" and "WOMEN". We fear that the problem may lie in the AudRecog() module, which is always difficult to debug.
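Here is a sketch of how the 27jun15A test might carry over into Perl, assuming that OldConcept() sets $recnum whenever it recognizes the grammatical number of a known noun; the surrounding module code and the reset at the end are assumptions.

use strict;
use warnings;

my $num    = 1;    # default num(ber) about to be stored with a new engram
my $recnum = 2;    # pretend OldConcept() just recognized the plural noun MEN

# Inside InStantiate(): let the recognized number override the default.
if ($recnum > 0) {      # a positive $recnum means a number was recognized
    $num = $recnum;     # the new engram of MEN keeps its plural num(ber)
}
$recnum = 0;            # reset so a stale value cannot carry over (an assumption here)
print "num = $num\n";   # prints 2, i.e. plural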

Making AudRecog() wait for end of word to declare recognition.

In AudRecog(), first we need a loop that activates each matching character in a sequence. We want AudRecog() not to declare an $audrec until a following 32=SPACE has come in. Upshot: at first we try replacing the entire AudRecog() code, until we obtain such positive results that we restore AudRecog() and insert only the code-tweaks which yielded the positive results. We also tweak AudMem() slightly for operation coordinated with AudRecog().
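A bare-bones sketch of the intended behavior follows, with a toy lexicon and invented concept numbers standing in for the real auditory memory; the point is only that $audrec stays at zero until the incoming 32=SPACE marks the end of the word.

use strict;
use warnings;

my %lexicon = ( MAN => 541, MEN => 541, WOMAN => 570, WOMEN => 570 );  # toy word store
my $input   = "MEN ";    # incoming characters, ending in 32=SPACE
my $word    = "";        # matching characters gathered so far
my $audrec  = 0;         # recognized concept number; stays zero until end of word

for my $ch ( split //, $input ) {
    if ( ord($ch) == 32 ) {                  # 32=SPACE: the word is now complete
        $audrec = $lexicon{$word} // 0;      # only now declare the recognition
        last;
    }
    $word .= $ch;                            # keep activating the matching characters
}
print "audrec = $audrec for '$word'\n";

Deferring the declaration in this way keeps a shorter word such as MAN from being prematurely recognized inside a longer or differently numbered word such as MEN or MANY.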

Tuesday, September 26, 2017

pmpj0926

Using Natural Language Understanding (NLU) to answer questions.

Although in the proof-of-concept ghost235.pl searchable AI Mind we are dealing with an initially small knowledge base (KB), our coding of the ability to search for knowledge would apply equally well to an entire datacenter full of information. Today we are using the spreading-activation SpreadAct() mind-module to activate the conceptual elements of knowledge which will supply answers based upon input queries in the format of "Who + verb + noun", as in "Who makes robots?" The query-word "who" is subject to de-activation upon input, while the verb-concept and the noun-concept in the query are passed through SpreadAct() not as random parameters for an associative search, but in their specific roles as Main Verb and as Direct Object of the verb. Thus the AI Mind should respond with answers tailored to the structure of the query, in such a way as truly to demonstrate Natural Language Understanding (NLU).

We start by declaring the new flag-variable $qvdocon, the "query-condition for who+verb+direct-object", to segregate the pertinent code in SpreadAct(), and also $qviocon, the "query-condition for who+verb+indirect-object", to hold in reserve for when we code the AI response to input queries in the format of "To whom does God give help?" The creation of the one flag suggests the creation of the similar flag, so we declare both of them.

In the InStantiate() module we insert code to detect a who-query with a verb other than "be", and we set $qv2psi with the concept number of the verb. We set $qv4psi with the concept number of any input noun assumed to be the direct object of the incoming verb. Then in the pertinent area of SpreadAct() we need to start searching backwards through memory for instances of the verb in the who-query.
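Here is a hedged sketch of that detection logic, with 800=BE taken from the surrounding discussion and the parse structure, parts of speech and other concept numbers invented for illustration; the real ghost235.pl works from its own flag panels rather than from a parsed array like this.

use strict;
use warnings;

my $qvdocon = 0;      # query-condition for who+verb+direct-object
my $qviocon = 0;      # query-condition for who+verb+indirect-object, held in reserve
my ( $qv2psi, $qv4psi ) = ( 0, 0 );

# A pretend parse of "Who makes robots": placeholder parts of speech and concept numbers.
my @sentence = ( [ 'WHO', 'pron', 77 ], [ 'MAKE', 'verb', 820 ], [ 'ROBOT', 'noun', 555 ] );

# Inside InStantiate(): detect a who-query with a verb other than 800=BE.
if ( $sentence[0][0] eq 'WHO' ) {
    for my $w (@sentence) {
        my ( $txt, $pos, $psi ) = @$w;
        if ( $pos eq 'verb' && $psi != 800 ) { $qv2psi = $psi; $qvdocon = 1; }
        if ( $pos eq 'noun' && $qv2psi > 0 ) { $qv4psi = $psi; }  # noun after the verb = direct object
    }
}
print "qv2psi=$qv2psi qv4psi=$qv4psi qvdocon=$qvdocon qviocon=$qviocon\n";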

Eventually we obtain a rough but correct response to our queries of "Who does such-and-such?" but we need to debug and fine-tune the parameters. We ask, "Who makes robots" and we get "KIDS MAKES THE ROBOTS." We ask, "Who has a child" and we get "WOMEN HAS THE CHILD". We need to upload and release the ghost235.pl code which achieves the objective, albeit primitively, and we must not code the same version further lest we wreck or corrupt the new functionality of answering who-queries in the form of "who" plus verb plus direct object. As we debug future releases of our code, the ghost235.pl version remains safe and intact.
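A rough sketch of the backward search that the $qvdocon code performs is shown below, under the simplifying assumption that each remembered idea reduces to a subject, verb and object concept; activation lands on the subject of any stored idea whose verb matches $qv2psi and whose object matches $qv4psi, so that the subject can later be selected as the answer.

use strict;
use warnings;

# Hypothetical knowledge base: time-point => [ subject-psi, verb-psi, object-psi ]
my %kb = (
    310 => [ 540, 820, 555 ],   # e.g. KIDS MAKE ROBOTS
    450 => [ 701, 800, 703 ],   # an unrelated idea using the verb 800=BE
);
my ( $qv2psi, $qv4psi ) = ( 820, 555 );   # verb and direct object from the who-query
my %act;                                  # activation gathered on subject concepts

for my $t ( sort { $b <=> $a } keys %kb ) {         # search backwards through memory
    my ( $subj, $verb, $dobj ) = @{ $kb{$t} };
    if ( $verb == $qv2psi && $dobj == $qv4psi ) {   # this stored idea answers the query
        $act{$subj} = ( $act{$subj} // 0 ) + 32;    # boost the subject for selection
    }
}
print "activated subject concept $_ with act $act{$_}\n" for keys %act;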

Monday, September 18, 2017

pmpj0918

SpreadAct() finds subject and verb to respond to a who-query.

As we try to improve upon who-queries with the ghost229.pl AI, we realize that the input of "Who are you" as a query needs to activate instances of the 701=I concept with 800=BE tagged both as a $seq concept and as a $tkb verblock. It is not enough to insist upon a positive $tkb verblock, because that value is only a time-point and not the identifier of a concept. The $seq value actually identifies the verb as the particular concept which the SpreadAct() module is trying to find.

It is not even necessary for SpreadAct() to impart activation to the conceptual node of the 800=BE $seq verb, because only the subject of the stored idea needs activation high enough to be selected as a response to the incoming query. We may therefore go into SpreadAct(), where the search code for $qv1psi as the subject of the query only needs to verify the existence of the 800=BE $seq verb, not activate it.
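Below is a sketch of that narrower test, assuming each stored engram of the 701=I concept carries a $seq tag naming its verb and a $tkb verblock pointing at the verb's time-point; only the subject receives activation, while the 800=BE verb is merely verified. The data layout is a placeholder, not the real ghost229.pl memory.

use strict;
use warnings;

my $qv1psi = 701;    # subject of the query: 701=I, from "Who are you"
# Hypothetical engrams of concept 701: time-point => { seq => verb-psi, tkb => verblock }
my %engram = (
    127 => { seq => 800, tkb => 129 },   # "I ... AM ..." : the $seq names 800=BE
    212 => { seq => 0,   tkb => 0   },   # an isolated "I" with no verb attached
);
my $act = 0;    # activation to be placed on the subject concept

for my $t ( sort { $b <=> $a } keys %engram ) {
    my $e = $engram{$t};
    # A positive $tkb alone is only a time-point; the $seq tag must name the 800=BE verb.
    if ( $e->{seq} == 800 && $e->{tkb} > 0 ) {
        $act += 32;    # activate the subject; the verb itself gets no activation
    }
}
print "activation placed on concept $qv1psi: $act\n";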

In SpreadAct() we make the necessary changes in the code searching for $qv1psi and $qv2psi. We ask "Who are you" and the ghost.pl AI properly answers "I AM THE PERSON." However, as the AI continues thinking, it makes some wrong associations. Suddenly we realize that we forgot to use the $moot flag to prevent the input who-query from leaving associative tags.
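A minimal sketch of how such a $moot flag might gate the tag-laying follows, with the surrounding InStantiate() code reduced to a stand-in print statement; the flag name comes from the discussion above, but its exact placement and timing are assumptions.

use strict;
use warnings;

my $moot = 1;    # raised while the who-query is being taken in as input

# Inside InStantiate(): skip the laying of associative tags for moot input,
# so that the question itself leaves no $tkb or $seq links in the knowledge base.
unless ($moot) {
    print "associative tags recorded\n";   # stand-in for the real tag-laying code
}
$moot = 0;    # lower the flag once the query has been answered (timing assumed)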