Cyborg AI Minds are a true concept-based artificial intelligence with natural language understanding, simple at first and lacking robot embodiment, and expandable all the way to human-level intelligence and beyond.

Thursday, September 26, 2019

pmpj0926

Ghost.pl AI has unresolved issues in associating from concept to concept.

The ghost.pl AI needs improvement in its ability to demonstrate thinking with a prepositional phrase not just once but repeatedly, so into the EnThink() module we will insert diagnostic code that shows us the values of key variables at the start of each cycle of thought.
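The diagnostic itself need not be anything fancy. A minimal sketch of the sort of line we drop in at the top of the thought-cycle looks like the following, where $whatcon and $tpr are the real ghost.pl variables but the placeholder values, the wording of the message and the surrounding scaffolding are only illustrative, not the shipped code:

#!/usr/bin/perl
use strict;
use warnings;

# Sketch only -- not the actual ghost.pl code.
my $whatcon = 0;   # set when the AI is answering a what-query
my $tpr     = 0;   # time-point of the most recently parsed preposition

print "\a";                                       # audible beep each cycle
print "EnThink: whatcon= $whatcon tpr= $tpr \n";  # show the key variables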

Oh, gee, this coding of the AI Mind is actually fun, especially in Perl, whereas in JavaScript there is often too much time-pressure during the entering of input. We have inserted a line of code which causes an audible beep and reveals to us the status of the $whatcon and $tpr variables just before the AI generates a thought in English -- a language which we must state explicitly, because our ghost.pl AI is just as capable of thinking in Russian. When we at first enter no input, the AI beeps periodically and shows us the values as zero. When we enter "john writes books for money", the AI shows us "whatcon= 0 tpr= 4107" because the concept of the preposition "FOR" has gone into conceptual memory at time-point "t = 4107". The AI responds to the input by outputting "THE STUDENTS READ THE BOOKS", because activation spreads from the concept of "BOOKS" to the innate idea that "STUDENTS READ BOOKS". Then we hear a beep and we see "whatcon= 0 tpr= 0" because the $tpr flag has been reset to zero somewhere in the vast labyrinth of semi-AI-complete code. Now let us enter the same input and follow it up with a query, "what does john write". Then we get "whatcon= 1 tpr= 0" and the output "THE JOHN WRITES THE BOOKS FOR THE MONEY", after which the diagnostic message reverts to "whatcon= 0 tpr= 0" because of resetting to zero.

Now we want to let the AI Mind run for a while until we repeat the query. The AI makes a mistake. We had better not let it be in control of our nuclear arsenal, not if we want to avoid global thermonuclear war, Matthew. The AI-gone-crazy says "THE JOHN WRITES THE BOOKS FOR THE BOOKS AND THE JOHN WRITE". (Oops. We step away for a moment to watch and listen to Helen Donath in 1984 singing the Waltz from "Spitzentuch der Koenigin" with the Vienna Symphony. Then we linger while Zubin Mehta and the Wiener Philharmoniker in 1999 play the "Einzugsmarsch" from "Der Zigeunerbaron". How are we going to code the Singularity if the Cable TV continues to play Strauss waltzes?) The trained eye of the Mind Maintainer immediately recognizes two symptoms of a malfunctioning artificial intelligence. First, a spurious instance of the $tpr flag is causing the AI to output "THE BOOKS FOR THE BOOKS," and second, the $etc variable for detecting more than one active thought must be causing the attempt by the Ghost in the Machine to make two statements joined by the conjunction "AND". We had better expand our diagnostic message to tell us the contents of the $etc variable, as in the sketch after this paragraph. We do so, but we see only a value of zero, because apparently a reset occurs so quickly that no other value persists long enough to be seen in the diagnostic message. Meanwhile the AI is stuck in making statements about John writing.
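Expanding the diagnostic is a one-line affair. Again this is only a sketch; $etc is the real variable, while the placeholder values and the message wording are ours:

my ( $whatcon, $tpr, $etc ) = ( 0, 0, 0 );   # $etc counts additional active thoughts
print "\a";
print "EnThink: whatcon= $whatcon tpr= $tpr etc= $etc \n";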

We address the problem of a spurious $tpr flag by inserting fake $tru values during instantiations in the InStantiate() and EnParser() modules. We use the values 111 to 999 for $tru in the EnParser() module and 101 to 107 in the InStantiate() module, so that the middle zero lets us know when the flag-panel of a concept has been finalized in the InStantiate() module. Immediately the fake truth-value of "606" for the $tru flag of the word "MONEY", which has a spurious value of "4107" in the $tpr slot of the conceptual flag-panel, lets us know that $tpr has not been reset to zero quickly enough to prevent a carried-over and spurious value from being set for the concept of "MONEY". Since the preposition "FOR" is being instantiated at a point in the EnParser() module where a fake truth-value of "888" appears, we can concentrate on that particular snippet of code.
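In sketch form the trick looks something like the following. The two subroutines are mere stand-ins for the real modules, and the %panel hash is a stand-in for one conceptual flag-panel; only the idea of planting distinct fake $tru values is taken from the real code:

#!/usr/bin/perl
use strict;
use warnings;

# Sketch only -- not the actual ghost.pl code.
my %panel = ( tru => 0, tpr => 0 );   # stand-in for one conceptual flag-panel

sub InStantiate {        # stand-in for the real InStantiate() module
    $panel{tru} = 101;   # fake value with a middle zero
}

sub EnParser {           # stand-in for the real EnParser() module
    $panel{tru} = 888;   # fake value without a middle zero
    $panel{tpr} = 4107;  # a stale time-of-preposition carried over
}

EnParser();
print "flag-panel: tru= $panel{tru} tpr= $panel{tpr} \n";
# Seeing which fake truth-value sits next to the stale $tpr tells us which
# module -- and, with distinct fakes per snippet, which snippet -- wrote the
# panel without first resetting $tpr to zero.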


Tuesday, September 24, 2019

pmpj0924

Updating the English Parser documentation page.

Today in the ghost.pl AI we have two objectives. We want to improve upon the new functionality of thinking with English prepositions, and we wish to clean up the code to be displayed in the EnParser documentation page.

When we enter "john writes books for money" and we soon ask the AI "what does john write", we get a reasonably correct answer but we notice some problems with the assignment of associative tags when the answer-statement goes into conceptual memory. As an early step, we zero out the $tpr time-of-preposition tag, after using it as a target time-point, so as to prevent it from being assigned spuriously when other concepts are instantiated. But that step causes other problems, so we undo it. We also notice that old $tpr values are being assigned, when we would rather see up-to-date values, even when both the old value and a new value would be pointing to an instance of the same preposition. As we troubleshoot further, we embed diagnostics to tell us when the $tpr tag is being assigned, and we discover that it is assigned only during user input. When we remove the restriction and let the tag be assigned also during internal thinking, we start seeing the assignment of up-to-date values.


Sunday, September 22, 2019

jmpj0922

JavaScript AgiMind understands and thinks with prepositions.

[2019-09-20] In the JavaScript AgiMind.html we are now trying to reproduce the new AGI functionality that we achieved a month ago in the ghost.pl Perlmind. The Ghost in the Machine became able to understand an input like "John writes books for money" and was able to respond properly to a query like "What does John write?"

When we enter "john writes books for money" and the AgiMind responds "WHAT ARE JOHN", it simply means that we need to add the noun "JOHN" to the innate vocabulary. So from the "perlmind.txt" we transfer "JOHN" as concept #504 into the JavaScript free AI source code, and now the AgiMind responds "STUDENTS READ BOOKS", which indicates that the AgiMind knows who or what John is, and what books are. But we also check the Diagnostic mode to make sure that the conceptual associative tags are being assigned properly. We are not sure, so we enter "what does john write" and we get a long response of nonsense.

[2019-09-21] In our second day, we discover that the ReEntry() module has been causing a reduplication of the output of the AgiMind. For troubleshooting, we temporarily disable the ReEntry module. Then we discover that some wrong associative tags are being assigned during human input. We run the ghost.pl AI to see how the correct associative tags are supposed to be assigned.

We discover that a line of InStantiate() code is assigning a false psi19 tpr value when only a zero value should be assigned. The false value being assigned is actually already there, so some other line of code must be assigning it earlier. But there is no earlier assignment, so the false tpr value is obviously being assigned retroactively -- which is something that any AI mind maintainer must learn to watch out for. Probably the retroactive assignment is happening in the EnParser() module, which does a lot of retroactive assignments because one word of human input may have an effect upon an earlier word of human input. By substituting "777" as a telltale value into the psi19 location of a snippet of assignment code in the EnParser() module, we discover which snippet is making the erroneous, non-777 assignment. Then through further substitution of "444" in the psi19 slot, we discover an earlier snippet of EnParser() code which is assigning a wrong value at the tvb time-of-verb time-point. So there must be an even earlier "tvb" snippet that is creating a spurious psi19 value. We discover that earlier snippet in the InStantiate() module. After much other coding, when we bring in a reset of tult to zero from the ghost.pl AI, we stop getting the spurious psi19 values.

[2019-09-22] In our third day, we run the ghost.pl AI that already works with prepositional phrases, and we discover that yesterday we were trying to fix something that was not even a bug. The AgiMind was properly assigning the tpr tag to link the noun "BOOKS" to the preposition "FOR", and we mistakenly thought that the tag was supposed to be assigned also to the preposition "FOR" itself. No, the preposition "FOR" needs only a tkb tag leading to "MONEY" as its object. Now we have gotten the tkb tag to be assigned properly for remembering the object of a preposition. After extensive debugging, we obtain the following exchange:

AI Mind version 22sep19A on Sun Sep 22 19:55:56 PDT 2019
Robot: I UNDERSTAND YOU
Human: john writes books for money

Robot: STUDENTS READ BOOKS
Human:

Robot:
Human: what does john write

Robot: JOHN WRITES BOOKS FOR MONEY
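Behind that exchange, the noun "BOOKS" carries a tpr tag pointing to the time-point of the preposition "FOR", and "FOR" carries a tkb tag pointing to its object "MONEY". A hypothetical sketch of those linkages in Perl, where the time-points and the %memory hash are made up for illustration and only the tag names tpr and tkb come from the real code:

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch of the associative linkage behind "BOOKS FOR MONEY".
my %memory = (
    101 => { word => 'BOOKS', tpr => 102 },   # noun links to the preposition
    102 => { word => 'FOR',   tkb => 103 },   # preposition links to its object
    103 => { word => 'MONEY' },
);

# Follow the chain from BOOKS through FOR to MONEY.
my $prep = $memory{ $memory{101}{tpr} };
my $obj  = $memory{ $prep->{tkb} };
print "$memory{101}{word} $prep->{word} $obj->{word}\n";   # BOOKS FOR MONEY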