Cyborg AI Minds are true concept-based artificial intelligences with natural language understanding, simple at first and lacking robot embodiment, but expandable all the way to human-level intelligence and beyond.

Monday, November 11, 2019

sota1111

Ghost AI -- State of the Art -- November 2019

A major development in this AI project has occurred in November of 2019 with the first expansion of the TacRecog tactile-recognition module beyond a mere stub. In the quarter century of our AI coding from 1993 to 2018, the only avenue of sensory input to the Ghost in the Machine was the AudRecog auditory-recognition module, which used the computer keyboard to pretend that the input of characters was the auditory recognition of acoustic phonemes. TacRecog still uses the keyboard but does not pretend; it directly senses and feels any 0-9 numeric keystroke. Roboticists will hopefully appreciate that the EnVerbPhrase English verb-phrase module is now ready to talk not only about things seen by a robot but also about things touched by a robot.

The MindBoot sequence has been expanded with the ten concepts and English words expressing the numbers from zero to nine. Pressing a numeric key activates not only the numeric concept but also the ego-concept of "I" and the sensory concept of "feel". In response to a press of the "7" key, a Ghost in the Machine may say "I FEEL THE SEVEN". The user may also ask the AI "what do you feel" and receive a similar response. Hopefully it is now possible to conduct conversational experiments in artificial consciousness with the Ghost in the Machine.
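
To make the mechanism concrete, here is a minimal Perl sketch of the TacRecog idea. The helper subroutine and word-array are hypothetical stand-ins, not the actual ghost.pl code.

  # Minimal TacRecog sketch: a 0-9 keystroke is a direct tactile
  # sensation that activates the ego-concept "I", the verb-concept
  # "FEEL", and the concept of the touched number.
  use strict;
  use warnings;
  my @number_word = qw(ZERO ONE TWO THREE FOUR FIVE SIX SEVEN EIGHT NINE);
  sub activate { my ($concept) = @_; }   # stand-in for conceptual activation
  sub TacRecog {
      my ($key) = @_;
      return unless $key =~ /^[0-9]$/;   # only numeric keys are "felt"
      activate('I');                     # ego-concept
      activate('FEEL');                  # sensory verb-concept
      activate($number_word[$key]);      # the touched number-concept
      print "I FEEL THE $number_word[$key]\n";
  }
  TacRecog('7');   # prints: I FEEL THE SEVEN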

In the prior state of the art, the AI understood each English or Russian word only in terms of other words, with no symbolic grounding. Now suddenly the AI may have direct sensory knowledge of the ten digits which are the Principia of our Mathematica. This innovation makes us wonder whether we can replicate in a machine the same or similar process by which a human child becomes familiar with numbers. We are reaching out to mathematicians on Reddit and on Usenet who may take an interest in the use of artificial intelligence for mathematical reasoning.

We have also recently been dabbling in the theology of artificial intelligence, since our Ghost software has a concept of God and a few innate MindBoot ideas about God, chiefly the famous quote from Albert Einstein that "God does not play dice with the universe." This quote is our prime example of the negation of verbs and a helpful exercise for the EnPrep English preposition module.


Saturday, October 05, 2019

mfpj1005

MindForth resets associative tags before each operation of the Indicative module.

In the MindForth artificial intelligence (AI) for robots, we will now start to display diagnostic messages at the start of the Indicative module, to tell us the values held in the variables which create the associative tags interconnecting the concepts being expressed as English words during the operation of the Indicative mind-module. Since the ConJoin module will often insert a conjunction between two thoughts being generated, the AI Mind Maintainer needs assurance that variable-values operative during one thought do not erroneously get carried over past a conjunction into the separate process of generating another thought.

Just by resetting the tsj time-of-subject variable to zero at the start of the Indicative module, we have made the Forthmind able to trigger repeated instances of logical InFerence. Each run of the Indicative module amounts to a fresh declaration of the associative tags in the conceptual flag-panel that supports the generation of ideas in the MindGrid. The following dialog still has a few problems, but it shows the repeated triggering of an inference.

john is a student
DOES  JOHN  READ  BOOKS  
no

  Indicative: tsj= 0 tvb= 0 tdo= 0
A  JOHN  DOES  NOT  READ  BOOKS
  Indicative: tsj= 0 tvb= 0 tdo= 0
PROFESSORS  TEACH  THE  STUDENTS  AND
  Indicative: tsj= 0 tvb= 0 tdo= 0
STUDENTS  READ  THE  BOOKS

  Indicative: tsj= 0 tvb= 0 tdo= 0
I  UNDERSTAND  YOU  AND
  Indicative: tsj= 0 tvb= 0 tdo= 0
YOU  ARE  A  MAGIC

andru is student
DOES  ANDRU  READ  THE  BOOKS  
no

  Indicative: tsj= 0 tvb= 0 tdo= 0
AN  ANDRU  DOES  NOT  READ  THE  BOOKS  AND
  Indicative: tsj= 0 tvb= 0 tdo= 0
YOU  READ  THE  BOOKS

  Indicative: tsj= 0 tvb= 0 tdo= 0
PROFESSORS  TEACH  THE  STUDENTS  AND
  Indicative: tsj= 0 tvb= 0 tdo= 0
STUDENTS  READ  THE  BOOKS

  Indicative: tsj= 0 tvb= 0 tdo= 0
STUDENTS  READ  THE  BOOKS  AND
  Indicative: tsj= 0 tvb= 0 tdo= 0
I  THINK
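
The reset at the top of the Indicative module can be sketched as follows. MindForth itself is written in Forth; this Perl fragment only illustrates the reset discipline behind the diagnostic lines in the dialog above, and the gloss of tdo as time-of-direct-object is our assumption.

  # Zero the time-point variables before generating each thought, so
  # that values from one thought cannot leak across a ConJoin
  # conjunction into the next thought.
  our ($tsj, $tvb, $tdo) = (0, 0, 0);
  sub Indicative {
      ($tsj, $tvb, $tdo) = (0, 0, 0);
      print "  Indicative: tsj= $tsj tvb= $tvb tdo= $tdo\n";
      # ... choose subject, verb and object, setting the tags anew ...
  }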

Friday, October 04, 2019

mfpj1004

Using parameters to declare the time-points of conceptual instantiation.

[2019-10-02] Recently we have expanded the conceptual flag-panel of MindForth from fifteen tags to twenty-one associative tags, so that the free open-source artificial intelligence for robots may think a much wider variety of thoughts in English. Then we had to debug the InFerence module to restore its ability to reason from two known facts in order to infer a new fact. For instance, the Forthmind knows the fact that students read books, and we tell the AI the fact that John is a student. Then the AI infers that perhaps John, being a student, reads books, and the incredibly brilliant Forth software asks us, "Does John read books?" We may answer yes, no, or maybe, or give no response at all.

Currently, though, we have the problem that InFerence works only once and fails to deal properly with repeated attempts to trigger an inference. We suspect that some of the variables involved in the process of automated reasoning are not being restored to their status quo ante from before the first test of InFerence.

Therefore we shall try a new technique of debugging which we have developed recently in one of the other AI Minds, namely the ghost.pl AI that thinks in both English and Russian. We create a diagnostic display at the start of the EnThink module for thinking in English, so that we may see the values held by the variables associated with the InFerence module and with the KbRetro module, which retroactively adjusts the knowledge base (KB) of the AI Mind in accordance with whatever answer we give when the AskUser module asks us to validate or contradict an inference. The following dialog shows us that some variables are not being properly reset to zero.

john is student

EnThink: becon= 1 yncon= 0 ynverb= 0 inft= 0
qusub= 0 qusnum= 1 subjnom= 504 prednom= 561 tkbn= 0
quverb= 0 seqverb= 0 seqtkb= 0 tkbv= 0
quobj= 0 dobseq= 0 kbzap= 0 tkbo= 0
DOES JOHN READ BOOKS
no

EnThink: becon= 0 yncon= 0 ynverb= 0 inft= 2084
qusub= 504 qusnum= 1 subjnom= 0 prednom= 0 tkbn= 2086
quverb= 863 seqverb= 0 seqtkb= 0 tkbv= 2087
quobj= 540 dobseq= 0 kbzap= 404 tkbo= 2088
A JOHN DOES NOT READ BOOKS

EnThink: becon= 0 yncon= 0 ynverb= 0 inft= 2118
qusub= 504 qusnum= 1 subjnom= 0 prednom= 0 tkbn= 0
quverb= 863 seqverb= 0 seqtkb= 0 tkbv= 0
quobj= 0 dobseq= 0 kbzap= 0 tkbo= 2088
PROFESSORS TEACH THE STUDENTS AND STUDENTS READ THE BOOKS

EnThink: becon= 0 yncon= 0 ynverb= 0 inft= 2152
qusub= 504 qusnum= 1 subjnom= 0 prednom= 0 tkbn= 0
quverb= 863 seqverb= 0 seqtkb= 0 tkbv= 0
quobj= 0 dobseq= 0 kbzap= 0 tkbo= 2088
I UNDERSTAND YOU AND YOU ARE A MAGIC
andru is student

EnThink: becon= 1 yncon= 0 ynverb= 0 inft= 2220
qusub= 504 qusnum= 1 subjnom= 501 prednom= 561 tkbn= 0
quverb= 863 seqverb= 0 seqtkb= 0 tkbv= 0
quobj= 0 dobseq= 0 kbzap= 0 tkbo= 2088
DOES ANDRU READ THE STUDENTS

Because some of the variables have not been reset, a second attempt to trigger an inference with "andru is student" results in a faulty query that should have been "Does Andru read books?" Let us reset the necessary variables and try again.
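
A sketch of the kind of reset we attempt, in Perl for consistency with our ghost.pl work (MindForth itself is in Forth), using the variable names from the diagnostic display; the grouping into one subroutine is hypothetical:

  # Zero out the inference-related variables after each episode of
  # InFerence, AskUser and KbRetro, so that a second inference
  # starts from a clean slate.
  our ($becon, $yncon, $ynverb, $inft);
  our ($qusub, $quverb, $quobj, $subjnom, $prednom);
  our ($seqverb, $seqtkb, $dobseq, $kbzap);
  our ($tkbn, $tkbv, $tkbo);
  sub InferenceReset {
      ($becon, $yncon, $ynverb, $inft)     = (0, 0, 0, 0);
      ($qusub, $quverb, $quobj)            = (0, 0, 0);
      ($subjnom, $prednom)                 = (0, 0);
      ($seqverb, $seqtkb, $dobseq, $kbzap) = (0, 0, 0, 0);
      ($tkbn, $tkbv, $tkbo)                = (0, 0, 0);
  }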

Upshot: It still does not work, because of a more difficult and more obscure bug in the assignment of conceptual associative tags. Well, back to the salt mines.

https://groups.google.com/d/msg/comp.lang.forth/xN3LRYEd5rw/uuUroGzhBAAJ

[2019-10-04] We may have made a minor breakthrough in the InStantiate module by performing a single instantiation and then using parameters such as part-of-speech (pos) and case (dba) to declare the initial time-points for subjects, verbs and objects. The EnParser module may then retroactively alter the associative tags embedded at each identified time-point.
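
A Perl sketch of the idea; the numeric pos and dba values follow our usual project conventions (5 for noun, 8 for verb; dba 1 nominative, 4 accusative), but treat them here as illustrative assumptions rather than the actual Forth code:

  # Declare the initial time-point of a concept according to its
  # part-of-speech and case at the moment of instantiation.
  our ($tsj, $tvb, $tdo);
  sub declare_timepoint {
      my ($pos, $dba, $t) = @_;
      if    ($pos == 5 && $dba == 1) { $tsj = $t }   # nominative noun: subject
      elsif ($pos == 8)              { $tvb = $t }   # verb
      elsif ($pos == 5 && $dba == 4) { $tdo = $t }   # accusative noun: direct object
      # EnParser may later retroactively adjust the tags stored at $t.
  }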


Thursday, September 26, 2019

pmpj0926

Ghost.pl AI has unresolved issues in associating from concept to concept.

The ghost.pl AI needs improvement in its ability to demonstrate thinking with a prepositional phrase not just once but repeatedly, so into the EnThink() module we will insert diagnostic code that shows us the values of key variables at the start of each cycle of thought.

Oh, gee, this coding of the AI Mind is actually fun, especially in Perl, whereas in JavaScript there is often too much time-pressure during the entering of input. We have inserted a line of code which causes an audible beep and reveals to us the status of the $whatcon and $tpr variables just before the AI generates a thought in English -- a language which we must state explicitly, because our ghost.pl AI is just as capable of thinking in Russian.

When we at first enter no input, the AI beeps periodically and shows us the values as zero. When we enter "john writes books for money", the AI shows us "whatcon= 0 tpr= 4107" because the concept of the preposition "FOR" has gone into conceptual memory at time-point "t = 4107". The AI responds to the input by outputting "THE STUDENTS READ THE BOOKS", because activation spreads from the concept of "BOOKS" to the innate idea that "STUDENTS READ BOOKS". Then we hear a beep and we see "whatcon= 0 tpr= 0" because the $tpr flag has been reset to zero somewhere in the vast labyrinth of semi-AI-complete code.

Now let us enter the same input and follow it up with a query, "what does john write". Then we get "whatcon= 1 tpr= 0" and the output "THE JOHN WRITES THE BOOKS FOR THE MONEY", after which the diagnostic message reverts to "whatcon= 0 tpr= 0" because of resetting to zero.
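
The diagnostic line itself amounts to a couple of print statements; $whatcon and $tpr are the actual ghost.pl variable names, but the framing subroutine here is only a sketch:

  # Beep and dump the key variables at the top of each thought-cycle.
  our ($whatcon, $tpr) = (0, 0);
  sub EnThink_diagnostic {
      print "\a";                              # audible beep
      print "whatcon= $whatcon tpr= $tpr \n";  # e.g. "whatcon= 0 tpr= 4107"
  }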

Now we want to let the AI Mind run for a while until we repeat the query. The AI makes a mistake. We had better not let it be in control of our nuclear arsenal, not if we want to avoid global thermonuclear war, Matthew. The AI-gone-crazy says "THE JOHN WRITES THE BOOKS FOR THE BOOKS AND THE JOHN WRITE". (Oops. We step away for a moment to watch and listen to Helen Donath in 1984 singing the Waltz from "Spitzentuch der Koenigin" with the Vienna Symphony. Then we linger while Zubin Mehta and the Wiener Philharmoniker in 1999 play the "Einzugsmarsch" from "Der Zigeunerbaron". How are we going to code the Singularity if the Cable TV continues to play Strauss waltzes?)

The trained eye of the Mind Maintainer immediately recognizes two symptoms of a malfunctioning artificial intelligence. First, a spurious instance of the $tpr flag is causing the AI to output "THE BOOKS FOR THE BOOKS", and secondly, the $etc variable for detecting more than one active thought must be causing the attempt by the Ghost in the Machine to make two statements joined by the conjunction "AND". We had better expand our diagnostic message to tell us the contents of the $etc variable. We do so, but we see only a value of zero, because apparently a reset occurs so quickly that no other value persists long enough to be seen in the diagnostic message. Meanwhile the AI is stuck in making statements about John writing.

We address the problem of a spurious $tpr flag by inserting fake $tru truth-values during instantiations in the InStantiate() and EnParser() modules. We use the values 111 to 999 for $tru in the EnParser() module and 101 to 107 in the InStantiate() module, so that a middle zero lets us know when the flag-panel of a concept has been finalized in the InStantiate() module. Immediately the fake truth-value of "606" for the $tru flag of the word "MONEY", which has a spurious value of "4107" in the $tpr slot of the conceptual flag-panel, lets us know that $tpr has not been reset to zero quickly enough to prevent a carried-over and spurious value from being set for the concept of "MONEY". Since the preposition "FOR" is being instantiated at a point in the EnParser() module where a fake truth-value of "888" appears, we can concentrate on that particular snippet of code.
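
In simplified Perl, the tracer technique looks like this; the flag-panel hash and helper name are hypothetical, and only the idea of stamping a distinctive value at each assignment site comes from the actual debugging session:

  # Each code site that finalizes a concept's flag-panel stamps its
  # own recognizable fake truth-value, so that a later inspection of
  # the panel reveals which snippet last touched the concept.
  our %flag_panel;
  sub stamp_tru {
      my ($concept, $site_value) = @_;
      $flag_panel{$concept}{tru} = $site_value;
  }
  stamp_tru('MONEY', 606);   # middle zero: finalized in InStantiate()
  stamp_tru('FOR',   888);   # no middle zero: written in EnParser()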


Tuesday, September 24, 2019

pmpj0924

Updating the English Parser documentation page.

Today in the ghost.pl AI we have two objectives. We want to improve upon the new functionality of thinking with English prepositions, and we wish to clean up the code to be displayed in the EnParser documentation page.

When we enter "john writes books for money" and we soon ask the AI "what does john write", we get a reasonably correct answer but we notice some problems with the assignment of associative tags when the answer-statement goes into conceptual memory. As an early step, we zero out the $tpr time-of-preposition tag, after using it as a target time-point, so as to prevent it from being assigned spuriously when other concepts are instantiated. But that step causes other problems, so we undo it. We also notice that old $tpr values are being assigned, when we would rather see up-to-date values, even when both the old value and a new value would be pointing to an instance of the same preposition. As we troubleshoot further, we embed diagnostics to tell us when the $tpr tag is being assigned, and we discover that it is assigned only during user input. When we remove the restriction and let the tag be assigned also during internal thinking, we start seeing the assignment of up-to-date values.
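
In sketch form, the restriction we remove looks something like the following; the input-only guard variable is hypothetical, but it conveys the change from assigning $tpr only during user input to assigning it during internal thinking as well (the pos value of 6 for preposition is our assumption):

  # Before: the time-of-preposition tag was set only during external
  # input. After: internal reentrant thought may also set it, so the
  # tag stays up to date.
  our ($tpr, $t, $pos) = (0, 0, 0);
  sub maybe_set_tpr {
      # old: return unless $during_input;   # restriction now removed
      $tpr = $t if $pos == 6;               # a preposition was instantiated
  }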


Sunday, September 22, 2019

jmpj0922

JavaScript AgiMind understands and thinks with prepositions.

[2019-09-20] In the JavaScript AgiMind.html we are now trying to reproduce the new AGI functionality that we achieved a month ago in the ghost.pl Perlmind. The Ghost in the Machine became able to understand an input like "John writes books for money" and was able to respond properly to a query like "What does John write?"

When we enter "john writes books for money" and the AgiMind responds "WHAT ARE JOHN", it simply means that we need to add the noun "JOHN" to the innate vocabulary. So from the "perlmind.txt" we transfer "JOHN" as concept #504 into the JavaScript free AI source code, and now the AgiMind responds "STUDENTS READ BOOKS", which indicates that the AgiMind knows who or what John is, and what books are. But we also check the Diagnostic mode to make sure that the conceptual associative tags are being assigned properly. We are not sure, so we enter "what does john write" and we get a long response of nonsense.
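
As a sketch (in Perl for consistency, though AgiMind is JavaScript), adding an innate vocabulary entry amounts to something like the following; the hash layout is illustrative, not the actual MindBoot format:

  # Concept #504 for JOHN, as transferred from perlmind.txt; the
  # pos value of 5 for noun follows our usual project convention.
  our %EnVocab;
  $EnVocab{504} = { word => 'JOHN', pos => 5 };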

[2019-09-21] In our second day, we discover that the ReEntry() module has been causing a reduplication of the output of the AgiMind. For troubleshooting, we temporarily disable the ReEntry module. Then we discover that some wrong associative tags are being assigned during human input. We run the ghost.pl AI to see how the correct associative tags are supposed to be assigned.

We discover that a line of InStantiate() code is assigning a false psi19 tpr value when only a zero value should be assigned. The false value being assigned is actually already there, so some other line of code must be assigning it earlier. But there is no earlier assignment, so the false tpr value is obviously being assigned retroactively -- which is something that any AI mind maintainer must learn to watch out for. Probably the retroactive assignment is happening in the EnParser() module, which does a lot of retroactive assignments because one word of human input may have an effect upon an earlier word of human input.

Through substitution of "777" as a spurious value in the psi19 location of a snippet of assignment code in the EnParser() module, we discover which snippet is making the erroneous, non-777 assignment. Then through further substitution of "444" in the psi19 slot, we discover an earlier snippet of EnParser() code which is assigning a wrong value at the tvb time-of-verb time-point. So there must be an even earlier "tvb" snippet that is creating a spurious psi19 value. We discover that earlier snippet in the InStantiate() module. After much other coding, when we bring in a reset of tult to zero from the ghost.pl AI, we stop getting the spurious psi19 values.

[2019-09-22] In our third day, we run the ghost.pl AI that already works with prepositional phrases, and we discover that yesterday we were trying to fix something that was not even a bug. The AgiMind was properly assigning the tpr tag to link the noun "BOOKS" to the preposition "FOR", and we mistakenly thought that the tag was supposed to be assigned also with "FOR". No, the preposition "FOR" needs only a tkb tag leading to "MONEY" as its object. Now we have gotten the tkb tag to be assigned properly for remembering the object of a preposition. After extensive debugging, we obtain the following exchange:

AI Mind version 22sep19A on Sun Sep 22 19:55:56 PDT 2019
Robot: I UNDERSTAND YOU
Human: john writes books for money

Robot: STUDENTS READ BOOKS
Human:

Robot:
Human: what does john write

Robot: JOHN WRITES BOOKS FOR MONEY


Sunday, August 11, 2019

pmpj0811

AGI Roadmap: Thinking with Prepositions

In the ghost309.pl AI we have introduced a new group of transfer-variables designated as $px1, $px2 and $px3, so that the EnNounPhrase() module may detect linkage from a candidate-noun to a preposition and immediately inspect the flag-panel of the indicated preposition in order to latch onto $px1 as the conceptual time-point of the object of the preposition. Then in the EnPrep() English-preposition module we plan to use the briefly immutable $px1 time-point value to fetch the object of the preposition from memory and speak it as part of an idea being recalled from memory. We were trying to use other variables for the same purpose, but they were not immutable; they were loaded with transient values during the thought-process of the ghost.pl AGI. So now let us go back into EnPrep() and code the fetching of the object of the preposition. We did so, and it worked the first time. We had the following conversation with the ghost.pl AGI Mind.

Human: john writes books for money
Ghost: THE STUDENTS READ THE BOOKS

Human:
Ghost: I AM AN ANDRU

Human: what does john write
Ghost: THE JOHN WRITES THE BOOKS FOR THE MONEY.

We should explain that the ghost.pl AGI knows only that "students read books", not John's books in particular. Mentioning books to the AGI causes it to recall its knowledge that "students read books". When we query the AGI with the input of "what does john write", the SpreadAct() spreading-activation module inhibits the interrogative pronoun "what" while activating the concepts of "john" and "write". The response embedded in conceptual memory includes the linkage from the concept of "books" to the prepositional phrase "for money". The EnArticle() module for the English articles "a" and "the" inserts articles somewhat haphazardly within the output of the ghost.pl AGI.
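
A minimal Perl sketch of that query-time behavior, with hypothetical activation levels and helper names:

  # On a what-query, suppress the interrogative pronoun and boost the
  # content words, so that the stored idea -- including its linkage
  # to the prepositional phrase "for money" -- wins the competition
  # to be spoken.
  our %activation;
  sub SpreadAct_for_query {
      my (@words) = @_;
      for my $w (@words) {
          if ($w eq 'WHAT') { $activation{$w} = -32 }   # inhibit
          else              { $activation{$w} =  62 }   # activate JOHN, WRITE
      }
  }
  SpreadAct_for_query(qw(WHAT JOHN WRITE));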

The new AI functionality of a machine intelligence thinking and conversing with prepositional phrases became possible when we recently expanded the conceptual flag-panel from fifteen associative tags to twenty-one associative tags, including new flags for the control of noun-declensions in Latin or Russian and for thinking with such parts of speech as adjectives, adverbs, conjunctions and prepositions. As we build up the ability to think with these linguistic components, each mid-AGI Mind becomes capable of more and more complex thought. As we make progress on the AGI RoadMap towards Artificial General Intelligence, we approach a point where Darwinian survival of the fittest comes into play, because among multiple enterprises working on AGI, some will go down the right path and some will enter roads where all hope must be abandoned.


Saturday, May 25, 2019

redux

Converting ancient Latin artificial intelligence into modern Russian AI.

The conversion of a JavaScript English-language AI into a Latin AI began on Thursday 2019-04-18. Inspiration came from "Die Traumdeutung", where Sigmund Freud intones "Flectere si nequeo superos, Acheronta movebo" -- if I cannot bend the gods above, I shall move Acheron. If one cannot bend the netgods of AI, move the mindset of Latin and Greek scholars.

A minor challenge in coding Mens Latina was the lack of an explicitly stated subject for many verbs in Latin, a lack which occurs also in Russian. The solution was to skip three points in time-indexed memory to make room for the creation of a hidden concept to fill in for the unstated but understood subject of a verb.
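
A Perl sketch of the hidden-subject trick (Mens Latina itself is JavaScript), with hypothetical helper names:

  # When a clause supplies a verb but no explicit subject, skip three
  # points in time-indexed memory and instantiate a hidden pronoun
  # concept there to serve as the understood subject.
  our $t = 0;   # current time-point in conceptual memory
  sub provide_hidden_subject {
      my ($person_concept) = @_;   # e.g. "I" inferred from the verb ending
      $t += 3;                     # leave room in time-indexed memory
      print "hidden subject '$person_concept' instantiated at t=$t\n";
  }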

Solving the AI-hard problem of the natural language understanding of a Latin or Russian sentence regardless of its syntactic word-order required waiting for the input of an entire clause before declaring subjects and objects on the basis of inflectional word-endings.
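
In sketch form, with drastically simplified endings (real Latin and Russian declensions need full paradigm tables, not one-regex tests):

  # Buffer the whole clause, then declare grammatical roles from the
  # inflectional endings rather than from word order.
  my @clause = qw(librum scribit Ioannes);   # "John writes a book", scrambled
  for my $w (@clause) {
      if    ($w =~ /us$|es$/) { print "$w: subject (nominative)\n" }
      elsif ($w =~ /um$/)     { print "$w: object (accusative)\n"  }
      else                    { print "$w: verb or other\n"        }
  }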

The conversion of the Latin-language artificial intelligence into a Russian-language artificial intelligence began yesterday, on Friday 2019-05-24.