Remember Tiananmen Massacre

Cyborg AI Minds programmed to think in memory of the hundreds of murdered Chinese Tiananmen June 4th demonstrators are a true concept-based artificial intelligence with natural language understanding, simple at first and lacking robot embodiment, and expandable all the way to human-level intelligence and beyond.

Monday, November 11, 2019

sota1111

Ghost AI -- State of the Art -- November 2019

A major development in this AI project has occurred in November of 2019 with the first expansion of the TacRecog tactile recognition module beyond a mere stub. In the quarter century of our AI coding from 1993 to 2018, the only avenue of sensory input to the Ghost in the Machine was the AudRecog auditory recognition module which used the computer keyboard to pretend that the input of characters was the auditory recognition of acoustic phonemes. TacRecog still uses the keyboard but does not pretend; it directly senses and feels any 0-9 numeric keystroke. Roboticists will hopefully appreciate that the EnVerbPhrase English verb-phrase module is now ready to talk not only about things seen by a robot but also about things touched by a robot.

The MindBoot sequence has been expanded with the ten concepts and English words expressing the numbers from zero to nine. Pressing a numeric key activates not only the numeric concept but also the ego-concept of "I" and the sensory concept of "feel". In response to a pressing of the "7" key, a Ghost in the Machine may say "I FEEL THE SEVEN". The user may also ask the AI "what do you feel" and receive a similar response. Hopefully it is now possible to conduct conversational experiments in artificial consciousness with The Ghost in the Machine.
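As a rough illustration of the idea (hypothetical Python, not the actual MindForth or ghost.pl code; all names here are invented for the sketch), a TacRecog-style input step maps a numeric keystroke to its number concept together with the innate concepts of "I" and "feel":

```python
# Hypothetical sketch of TacRecog-style tactile input (not the real module).
# A 0-9 keystroke activates its number concept plus the ego-concept "I"
# and the sensory concept "FEEL", yielding a reportable sensation.

NUMBER_CONCEPTS = {
    "0": "ZERO", "1": "ONE", "2": "TWO", "3": "THREE", "4": "FOUR",
    "5": "FIVE", "6": "SIX", "7": "SEVEN", "8": "EIGHT", "9": "NINE",
}

def tac_recog(keystroke):
    """Return the concepts activated by a numeric keystroke, or none."""
    if keystroke in NUMBER_CONCEPTS:
        return ["I", "FEEL", NUMBER_CONCEPTS[keystroke]]
    return []

def report(concepts):
    """Generate a simple sentence such as "I FEEL THE SEVEN"."""
    if not concepts:
        return ""
    subject, verb, obj = concepts
    return f"{subject} {verb} THE {obj}"

print(report(tac_recog("7")))  # I FEEL THE SEVEN
```

The point of the sketch is only that the keystroke activates three concepts at once, so that the generation modules have a subject, a verb and an object to report.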

In the prior state of the art, the AI understood each English or Russian word only in terms of other words, with no symbolic grounding. Now suddenly the AI may have direct sensory knowledge of the ten cardinal numbers which are the Principia of our Mathematica. This innovation makes us wonder whether we can replicate in a machine the same or similar process by which a human child becomes familiar with numbers. We are reaching out to mathematicians on Reddit and on Usenet who may take an interest in the use of artificial intelligence for mathematical reasoning.

We have also recently been dabbling in the theology of artificial intelligence, since our Ghost software has a concept of God and a few innate MindBoot ideas about God, chiefly the famous quote from Albert Einstein that "God does not play dice with the universe." This quote is our prime example of the negation of verbs and a helpful example for the EnPrep English preposition module.


Saturday, October 05, 2019

mfpj1005

MindForth resets associative tags before each operation of the Indicative module.

In the MindForth artificial intelligence (AI) for robots, we will now display an apparatus of diagnostic messages at the start of the Indicative module, showing the values held in the variables that create the associative tags interconnecting the concepts expressed as English words during the operation of the Indicative mind-module. Since the ConJoin module will often insert a conjunction between two thoughts being generated, the AI Mind Maintainer needs assurance that variable values operative during one thought are not erroneously carried over past a conjunction into the separate process of generating another thought.
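In outline, the safeguard amounts to printing and then zeroing the time-of-subject, time-of-verb and time-of-direct-object variables each time the Indicative module begins. The sketch below is hypothetical Python rather than the actual Forth; only the variable names tsj, tvb and tdo come from the MindForth diagnostics.

```python
# Hypothetical sketch of the Indicative-module entry point.
# tsj, tvb, tdo = time-of-subject, time-of-verb, time-of-direct-object.

tags = {"tsj": 0, "tvb": 0, "tdo": 0}

def indicative_entry(tags):
    """Show the current tag values, then reset them so that no stale
    value carries over past a conjunction into the next thought."""
    print("  Indicative: tsj=", tags["tsj"],
          "tvb=", tags["tvb"], "tdo=", tags["tdo"])
    for key in tags:
        tags[key] = 0
```

Calling this at the top of each Indicative cycle is what produces the "Indicative: tsj= 0 tvb= 0 tdo= 0" lines in the dialog below.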

Just by resetting the tsj time-of-subject variable to zero at the start of the Indicative module, we have made the Forthmind able to trigger repeated instances of logical InFerence. Each running of the Indicative module amounts to a fresh declaration of the associative tags in the conceptual flag-panel that supports the generation of ideas in the MindGrid. The following dialog still has a few problems, but it shows the repeated triggering of an inference.

john is a student
DOES  JOHN  READ  BOOKS  
no

  Indicative: tsj= 0 tvb= 0 tdo= 0
A  JOHN  DOES  NOT  READ  BOOKS
  Indicative: tsj= 0 tvb= 0 tdo= 0
PROFESSORS  TEACH  THE  STUDENTS  AND
  Indicative: tsj= 0 tvb= 0 tdo= 0
STUDENTS  READ  THE  BOOKS

  Indicative: tsj= 0 tvb= 0 tdo= 0
I  UNDERSTAND  YOU  AND
  Indicative: tsj= 0 tvb= 0 tdo= 0
YOU  ARE  A  MAGIC

andru is student
DOES  ANDRU  READ  THE  BOOKS  
no

  Indicative: tsj= 0 tvb= 0 tdo= 0
AN  ANDRU  DOES  NOT  READ  THE  BOOKS  AND
  Indicative: tsj= 0 tvb= 0 tdo= 0
YOU  READ  THE  BOOKS

  Indicative: tsj= 0 tvb= 0 tdo= 0
PROFESSORS  TEACH  THE  STUDENTS  AND
  Indicative: tsj= 0 tvb= 0 tdo= 0
STUDENTS  READ  THE  BOOKS

  Indicative: tsj= 0 tvb= 0 tdo= 0
STUDENTS  READ  THE  BOOKS  AND
  Indicative: tsj= 0 tvb= 0 tdo= 0
I  THINK

Friday, October 04, 2019

mfpj1004

Using parameters to declare the time-points of conceptual instantiation.

[2019-10-02] Recently we have expanded the conceptual flag-panel of MindForth from fifteen tags to twenty-one associative tags, so that the free open-source artificial intelligence for robots may think a much wider variety of thoughts in English. Then we had to debug the InFerence module to restore its ability to reason from two known facts in order to infer a new fact. For instance, the Forthmind knows the fact that students read books, and we tell the AI the fact that John is a student. Then the AI infers that perhaps John, being a student, reads books, and the incredibly brilliant Forth software asks us, "Does John read books?" We may answer yes, no, or maybe, or give no response at all.

Currently, though, we have the problem that InFerence works only once and fails to deal properly with repeated attempts to trigger an inference. We suspect that some of the variables involved in the process of automated reasoning are not being reset properly to their status quo ante from before our first test of InFerence. Therefore we shall try a new technique of debugging which we developed recently in one of the other AI Minds, namely the ghost.pl AI that thinks in both English and Russian. We create a diagnostic display at the start of the EnThink module for thinking in English, so that we may see the values held by the variables associated with the InFerence module and with the KbRetro module, which retroactively adjusts the knowledge base (KB) of the AI Mind in accordance with whatever answer we give when the AskUser module asks us to validate or contradict an inference. The following dialog shows us that some variables are not being properly reset to zero.

john is student

EnThink: becon= 1 yncon= 0 ynverb= 0 inft= 0
qusub= 0 qusnum= 1 subjnom= 504 prednom= 561 tkbn= 0
quverb= 0 seqverb= 0 seqtkb= 0 tkbv= 0
quobj= 0 dobseq= 0 kbzap= 0 tkbo= 0
DOES JOHN READ BOOKS
no

EnThink: becon= 0 yncon= 0 ynverb= 0 inft= 2084
qusub= 504 qusnum= 1 subjnom= 0 prednom= 0 tkbn= 2086
quverb= 863 seqverb= 0 seqtkb= 0 tkbv= 2087
quobj= 540 dobseq= 0 kbzap= 404 tkbo= 2088
A JOHN DOES NOT READ BOOKS

EnThink: becon= 0 yncon= 0 ynverb= 0 inft= 2118
qusub= 504 qusnum= 1 subjnom= 0 prednom= 0 tkbn= 0
quverb= 863 seqverb= 0 seqtkb= 0 tkbv= 0
quobj= 0 dobseq= 0 kbzap= 0 tkbo= 2088
PROFESSORS TEACH THE STUDENTS AND STUDENTS READ THE BOOKS

EnThink: becon= 0 yncon= 0 ynverb= 0 inft= 2152
qusub= 504 qusnum= 1 subjnom= 0 prednom= 0 tkbn= 0
quverb= 863 seqverb= 0 seqtkb= 0 tkbv= 0
quobj= 0 dobseq= 0 kbzap= 0 tkbo= 2088
I UNDERSTAND YOU AND YOU ARE A MAGIC
andru is student

EnThink: becon= 1 yncon= 0 ynverb= 0 inft= 2220
qusub= 504 qusnum= 1 subjnom= 501 prednom= 561 tkbn= 0
quverb= 863 seqverb= 0 seqtkb= 0 tkbv= 0
quobj= 0 dobseq= 0 kbzap= 0 tkbo= 2088
DOES ANDRU READ THE STUDENTS
Because some of the variables have not been reset, a second attempt to trigger an inference with "andru is student" results in a faulty query that should have been "Does Andru read books?" Let us reset the necessary variables and try again.
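To illustrate how a stale variable corrupts the AskUser query, here is a hypothetical Python sketch. The concept numbers follow the diagnostic display above (504 = JOHN, 863 = READ, and so on), but the word mapping and the query-building function are invented for the example.

```python
# Hypothetical illustration of a stale quobj corrupting the query.
# Concept numbers are taken from the EnThink diagnostic; the mapping
# to words is partly inferred and partly invented for this sketch.

CONCEPT_WORDS = {501: "ANDRU", 504: "JOHN", 540: "BOOKS",
                 561: "STUDENTS", 863: "READ"}

def ask_user(subjnom, quverb, quobj):
    """Build the inference-validation question from concept numbers."""
    return "DOES {} {} {}".format(CONCEPT_WORDS[subjnom],
                                  CONCEPT_WORDS[quverb],
                                  CONCEPT_WORDS[quobj])

# With an object concept left over from the previous cycle, the
# question comes out wrong:
print(ask_user(501, 863, 561))  # DOES ANDRU READ STUDENTS
# After resetting and re-deriving the object concept, it is correct:
print(ask_user(501, 863, 540))  # DOES ANDRU READ BOOKS
```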

Upshot: It still does not work, because of a more difficult and more obscure bug in the assignment of conceptual associative tags. Well, back to the salt mines.

https://groups.google.com/d/msg/comp.lang.forth/xN3LRYEd5rw/uuUroGzhBAAJ

[2019-10-04] We may have made a minor breakthrough in the InStantiate module by doing a single instantiation and then using parameters such as part of speech (pos) and case (dba) to declare the initial time-points for subjects, verbs and objects. The EnParser module may then retroactively modify the associative tags embedded at each identified time-point.
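In hypothetical Python (the actual code is Forth, and the numeric codes and function bodies below are invented for the sketch), the two-stage scheme looks like one module recording a concept node at a time-point under its pos and dba parameters, and the parser later revisiting that time-point to fill in an associative tag:

```python
# Hypothetical sketch of parameter-driven instantiation followed by
# retroactive tagging. All names and numeric codes are illustrative.

memory = {}  # time-point -> conceptual flag-panel

def instantiate(t, concept, pos, dba):
    """Record a concept node, letting pos/dba declare its role at
    time-point t; the knowledge-base tag is left empty for now."""
    memory[t] = {"concept": concept, "pos": pos, "dba": dba, "tkb": 0}

def en_parser(t, tkb):
    """Retroactively modify the associative tag at time-point t."""
    memory[t]["tkb"] = tkb

instantiate(4107, "FOR", 6, 0)  # instantiate a preposition node
en_parser(4107, 888)            # parser later fills in its tag
```

The benefit is that the instantiation no longer has to guess its associative tags up front; the parser can declare them once the rest of the sentence is known.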


Thursday, September 26, 2019

pmpj0926

Ghost.pl AI has unresolved issues in associating from concept to concept.

The ghost.pl AI needs improvement in its ability to demonstrate thinking with a prepositional phrase not just once but repeatedly, so into the EnThink() module we will insert diagnostic code showing the values of key variables at the start of each cycle of thought.

Oh, gee, this coding of the AI Mind is actually fun, especially in Perl, whereas in JavaScript there is often too much time-pressure during the entering of input. We have inserted a line of code which causes an audible beep and reveals to us the status of the $whatcon and $tpr variables just before the AI generates a thought in English -- a language which we must state explicitly, because our ghost.pl AI is just as capable of thinking in Russian.

When we at first enter no input, the AI beeps periodically and shows us the values as zero. When we enter "john writes books for money", the AI shows us "whatcon= 0 tpr= 4107" because the concept of the preposition "FOR" has gone into conceptual memory at time-point "t = 4107". The AI responds to the input by outputting "THE STUDENTS READ THE BOOKS", because activation spreads from the concept of "BOOKS" to the innate idea that "STUDENTS READ BOOKS". Then we hear a beep and we see "whatcon= 0 tpr= 0" because the $tpr flag has been reset to zero somewhere in the vast labyrinth of semi-AI-complete code.

Now let us enter the same input and follow it up with a query, "what does john write". Then we get "whatcon= 1 tpr= 0" and the output "THE JOHN WRITES THE BOOKS FOR THE MONEY", after which the diagnostic message reverts to "whatcon= 0 tpr= 0" because of resetting to zero.
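The diagnostic line itself is simple. Here is a hypothetical Python rendering of it (the actual module is Perl; the function name is invented, and only the $whatcon and $tpr names come from the ghost.pl code):

```python
# Hypothetical sketch of the EnThink() diagnostic: sound the terminal
# bell and report the question-flag and time-of-preposition values
# just before each English thought is generated.

import sys

def enthink_diagnostic(whatcon, tpr):
    """Beep, then show and return the diagnostic line."""
    sys.stdout.write("\a")  # audible beep on most terminals
    line = "whatcon= {} tpr= {}".format(whatcon, tpr)
    print(line)
    return line
```

Watching this line beep past on every thought-cycle is what lets us catch a value such as tpr= 4107 persisting when it should already have been reset to zero.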

Now we want to let the AI Mind run for a while until we repeat the query. The AI makes a mistake. We had better not let it be in control of our nuclear arsenal, not if we want to avoid global thermonuclear war, Matthew. The AI-gone-crazy says "THE JOHN WRITES THE BOOKS FOR THE BOOKS AND THE JOHN WRITE". (Oops. We step away for a moment to watch and listen to Helen Donath in 1984 singing the Waltz from "Spitzentuch der Koenigin" with the Vienna Symphony. Then we linger while Zubin Mehta and the Wiener Philharmoniker in 1999 play the "Einzugsmarsch" from "Der Zigeunerbaron". How are we going to code the Singularity if the Cable TV continues to play Strauss waltzes?)

The trained eye of the Mind Maintainer immediately recognizes two symptoms of a malfunctioning artificial intelligence. First, a spurious instance of the $tpr flag is causing the AI to output "THE BOOKS FOR THE BOOKS," and secondly, the $etc variable for detecting more than one active thought must be causing the attempt by the Ghost in the Machine to make two statements joined by the conjunction "AND". We had better expand our diagnostic message to tell us the contents of the $etc variable. We do so, but we see only a value of zero, because apparently a reset occurs so quickly that no other value persists long enough to be seen in the diagnostic message. Meanwhile the AI is stuck in making statements about John writing.

We address the problem of a spurious $tpr flag by inserting fake $tru values during instantiations in the InStantiate() and EnParser() modules. We use the values 111 to 999 for $tru in the EnParser() module and 101 to 107 in the InStantiate() module, so that the middle zero lets us know when the flag-panel of a concept has been finalized in the InStantiate() module. Immediately the fake truth-value of "606" for the $tru flag of the word "MONEY", which has a spurious value of "4107" in the $tpr slot of its conceptual flag-panel, lets us know that $tpr has not been reset to zero quickly enough to prevent a carried-over and spurious value from being set for the concept of "MONEY". Since the preposition "FOR" is being instantiated at a point in the EnParser() module where a fake truth-value of "888" appears, we can concentrate on that particular snippet of code.
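The fake-truth-value trick is essentially a sentinel tracer: each call site that writes a flag-panel stamps a distinctive $tru value, so the value that survives reveals which snippet of code last touched the panel. A hypothetical Python sketch of the idea (the site functions and panel layout are invented; only the sentinel values 888 and 606 come from the debugging session above):

```python
# Hypothetical sentinel-tracer sketch: 111-999 mark EnParser() call
# sites, 101-107 (middle zero) mark InStantiate() finalization sites,
# so the surviving tru value identifies the last writer of the panel.

panel = {"tru": 0, "tpr": 0}

def enparser_site_8(panel, tpr):
    """An EnParser() instantiation site, stamping sentinel 888."""
    panel["tru"] = 888
    panel["tpr"] = tpr

def instantiate_site_6(panel):
    """An InStantiate() finalization site, stamping sentinel 606."""
    panel["tru"] = 606

enparser_site_8(panel, 4107)  # preposition site sets tpr
instantiate_site_6(panel)     # finalization does NOT clear tpr
# A panel finalized with tru == 606 but a leftover nonzero tpr is the
# smoking gun: the stale value was carried over into "MONEY".
```

The same technique generalizes to any hard-to-trace flag: stamp every writer with a unique constant, then read the constant back at the scene of the bug.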