Cyborg AI Minds are a true concept-based artificial intelligence with natural language understanding, simple at first and lacking robot embodiment, and expandable all the way to human-level intelligence and beyond.

Wednesday, May 30, 2018


Solving who-query problems and EnParser bug.

Although the AI responds to a who-query by calling SpreadAct() from the end of AudInput(), the JSAI calls SpreadAct() too many times from AudInput() before the input is complete. Since we need to test for qucon when the Volition() module is not engaged in thinking, we test for qucon in the Sensorium() module, which does not call AudInput() but which is called from the MainLoop() after each generation of a thought.
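The idea above can be sketched minimally as follows. The flag name qucon comes from the source; the counter, the stub bodies, and the exact reset logic are assumptions for illustration, not the actual JSAI code.

```javascript
// Minimal sketch: test the who-query flag qucon in Sensorium(), not in
// AudInput(), so that SpreadAct() fires only once per thought cycle.
let qucon = 0;          // who-query condition flag, set during input parsing
let spreadActCalls = 0; // illustrative counter, not in the real JSAI

function SpreadAct() {  // stub: spreads activation to answer the query
  spreadActCalls++;
}

function Sensorium() {  // called from MainLoop() after each thought
  if (qucon > 0) {      // a who-query is pending
    SpreadAct();        // answer it exactly once
    qucon = 0;          // reset so later cycles do not repeat the answer
  }
}

// Simulate: input sets qucon once; MainLoop() calls Sensorium() each cycle.
qucon = 1;
Sensorium();
Sensorium();
Sensorium();
```

Because the flag is cleared inside Sensorium(), repeated main-loop cycles cannot re-trigger SpreadAct() for the same query.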

We must also troubleshoot why the JSAI eventually outputs "ME ME ME". We discover that EnNounPhrase() is sending an aud=726 into Speech() while there is a false verblock=727. Then we learn that the concept-row at the end of "ROBOTS NEED ME" for "ME" at t=727 has an unwarranted tkb psi13=727, as if the concept 701=I had a tkb. Apparently we need to prevent a false tkb from being stored. An inspection of the diagnostic display shows that the tkb properly set for each verb is improperly being retained and stored for the object of the verb. We then notice that the EnParser() module properly sets the time-of-direct-object "tdo" to the tkb of a verb but leaves the tkb variable still holding that value. So we insert into EnParser() a line of code to reset tkb to zero immediately after its value has been stored as the "tdo", and the erroneous "ME ME ME" output no longer appears.
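The fix can be illustrated with a minimal sketch. The variable names tkb and tdo come from the source; the parseVerb() and parseObject() helpers and the row object are hypothetical stand-ins for the actual EnParser() code.

```javascript
// Minimal sketch of the EnParser() fix: after the verb's tkb value has been
// stored as the time-of-direct-object tdo, tkb is reset to zero so that it
// is not erroneously retained on the direct object's concept row.
let tkb = 0;  // time of the verb engram in the knowledge base
let tdo = 0;  // time of direct object

function parseVerb(t) {          // hypothetical helper: verb at time t
  tkb = t;                       // verb properly receives a tkb
}

function parseObject(row) {      // hypothetical helper: direct object
  tdo = tkb;                     // verb's tkb becomes the tdo
  tkb = 0;                       // the fix: reset tkb immediately
  row.tkb = tkb;                 // object row now stores zero, not 727
  row.tdo = tdo;
}

const objectRow = {};            // stand-in for the "ME" concept-row at t=727
parseVerb(727);
parseObject(objectRow);
```

Without the reset line, the object row would inherit tkb=727 exactly as in the "ME ME ME" bug.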

Friday, May 25, 2018


Preventing wrong grammatical number for a predicate nominative.

In the 25may18A.html version of the JavaScript AI Mind we wish to correct a problem where the AI erroneously says "I AM A ROBOTS". The wrong grammatical number for "ROBOT" results when the AI software, searching backwards through time for the concept of "ROBOT", finds an engram in the plural number. We hope to fix the problem by requiring that the EnVerbPhrase() module, before fetching the predicate nominative of an intransitive verb of being, shall set the "REQuired NUMber" numreq variable to the same value as the number of the subject of the be-verb. Then the EnNounPhrase() module may find the right concept for the predicate nominative and also find (or create) the English word of the concept with the proper inflectional ending for the required number. Since the numreq value need be of service during only one pass through the EnNounPhrase() module, we may safely zero it out at the end of EnNounPhrase().
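The number-agreement mechanism can be sketched as follows. The variable name numreq and the module names come from the source; the toy engram list, the number constants, and the simplified search are assumptions for illustration.

```javascript
// Minimal sketch: EnVerbPhrase() sets numreq from the subject's number
// before fetching a predicate nominative; EnNounPhrase() honors numreq
// and zeroes it out at the end of its single pass.
const SINGULAR = 1, PLURAL = 2;
let numreq = 0;  // "REQuired NUMber" for the next noun fetch

// Toy lexicon: the plural engram sits later in time, so a backwards
// search encounters it first -- the source of "I AM A ROBOTS".
const engrams = [
  { word: "ROBOTS", num: PLURAL },
  { word: "ROBOT",  num: SINGULAR },
];

function EnNounPhrase() {
  // prefer an engram matching numreq when it is set
  const found = engrams.find(e => numreq === 0 || e.num === numreq);
  numreq = 0;            // safe to zero out after one pass
  return found.word;
}

function EnVerbPhrase(subjectNum) {
  numreq = subjectNum;   // require agreement with the be-verb's subject
  return EnNounPhrase();
}

const buggy = EnNounPhrase();           // numreq unset: search hits "ROBOTS"
const predNom = EnVerbPhrase(SINGULAR); // "I AM A ..." wants the singular
```

With numreq set, the singular engram wins even though the plural one is found first.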

Tuesday, May 22, 2018


Expanding MindBoot with concepts to demonstrate AI functionality.

Today in the 22may18A.html version of the AI Mind in JavaScript (JSAI) for Microsoft Internet Explorer (MSIE), we wish to expand the MindBoot() module with a few English words and concepts necessary for the demonstration of the AI functionality. We first create a concept of "ANNA" as a woman, for three reasons. Firstly, we want the JSAI to be able to demonstrate automated reasoning with logical inference, and the MindBoot() sequence already contains the idea or premise that "Women have a child". Having created the Anna-concept, we typed in "anna is a woman" and the AI asked us, "DOES ANNA HAVE CHILD". If the concept of Anna were not yet known to the AI, we might instead get a query of "WHAT IS ANNA". Secondly, we want "Anna" as a name that works equally well in English or in Russian, because we may install the Russian language in the JSAI. In fact, we go beyond the mere concept of "Anna" and we insert the full sentence "ANNA SPEAKS RUSSIAN" into the MindBoot so that the AI knows something about Anna. We create 569=RUSSIAN for the Russian language, so that later we may have 169=RUSSIAN as an adjective. When we type in "you speak russian", eventually the AI outputs "I SPEAK RUSSIAN". A third reason why we install the concept 502=ANNA is for the sake of machine translation, in case we add the Russian language to the JSAI.
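The inference demonstration above can be reduced to a minimal sketch. The premise and the question wording come from the source; the premise table and the askInfer() helper are hypothetical, not the actual JSAI inference code.

```javascript
// Minimal sketch of the inference step: a stored premise about a class
// ("Women have a child") plus new input ("anna is a woman") yields a
// yes-or-no question about the new member of the class.
const premises = [
  { subj: "WOMAN", verb: "HAVE", obj: "CHILD" },
];

function askInfer(name, className) {    // hypothetical helper
  const p = premises.find(pr => pr.subj === className);
  if (!p) return "WHAT IS " + name;           // unknown concept: ask about it
  return `DOES ${name} ${p.verb} ${p.obj}`;   // tentative inference as a query
}

const query = askInfer("ANNA", "WOMAN");      // after "anna is a woman"
```

The AI does not assert the inference outright; it poses it as a question for the human user to confirm or deny.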

Next we add to the MindBoot() sequence "GOD DOES NOT PLAY DICE" in order to demonstrate negation of ideas and the use of the truth value, because we may safely assert in the AI Mind the famous Einsteinian claim about God and the universe. We type in "you know god" and the AI responds "GOD DOES NOT PLAY DICE". Let us try "you know anna". The AI responds "ANNA SPEAKS RUSSIAN". Next we add the preposition "ABOUT" to the MindBoot so that we may ask the AI what it thinks about something or what it knows about something. We are trying to create a ruminating AI Mind that somewhat consciously thinks about its own existence and tries to communicate with the outside world.
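Negation surviving retrieval can be sketched minimally. The two stored ideas come from the source; the knowledge-base array, the neg flag standing in for the truth value, and the sayAbout() helper are assumptions for illustration.

```javascript
// Minimal sketch: a negation flag on a stored idea lets "GOD DOES NOT
// PLAY DICE" come back negated at retrieval time, while an unnegated
// idea like "ANNA SPEAKS RUSSIAN" comes back as a plain assertion.
const kb = [
  { subj: "GOD",  verb: "PLAY",   obj: "DICE",    neg: true  },
  { subj: "ANNA", verb: "SPEAKS", obj: "RUSSIAN", neg: false },
];

function sayAbout(subject) {              // hypothetical helper
  const idea = kb.find(i => i.subj === subject);
  if (!idea) return "WHAT IS " + subject; // unknown concept triggers a query
  return idea.neg
    ? `${idea.subj} DOES NOT ${idea.verb} ${idea.obj}`
    : `${idea.subj} ${idea.verb} ${idea.obj}`;
}
```

In the real JSAI the negation travels with the idea as a truth-value parameter rather than a boolean, but the retrieval principle is the same.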

Sunday, May 20, 2018


Slowing down the speed of thought to wait for human input.

The biggest recent complaint about the JavaScript Artificial Intelligence (JSAI) is that the AI output keeps changing faster than the user can reply. Therefore we need to introduce a delay to slow down the AI and let the human user enter a message. In the AudListen() module we insert a line of code to reset the rsvp variable to an arbitrary value of two thousand (2000) whenever an input key is pressed. In the English-thinking EnThink() module we insert a delay loop to slow the AI Mind down during user input and to speed the AI up in the prolonged absence of user input.
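The delay mechanism can be sketched as follows. The rsvp variable, the value 2000, and the module names come from the source; the countdown logic and the boolean return are simplifying assumptions, not the actual JSAI code.

```javascript
// Minimal sketch: a keypress resets rsvp to 2000; EnThink() counts the
// value down each cycle, so thought pauses while the user is typing and
// speeds back up in the prolonged absence of input.
let rsvp = 0;  // delay counter awaiting a user reply

function AudListen(keyPressed) {
  if (keyPressed) rsvp = 2000;  // give the user time to finish typing
}

function EnThink() {
  if (rsvp > 0) {
    rsvp--;        // burn one cycle waiting for input
    return false;  // no new thought generated this cycle
  }
  return true;     // free to think at full speed
}

AudListen(true);                      // user starts typing
const pausedDuringInput = EnThink();  // delayed while rsvp counts down
rsvp = 0;                             // input has long since stopped
const thinkingResumed = EnThink();    // full speed again
```

The effect is a crude attention span: each keystroke buys the user roughly two thousand main-loop cycles of quiet.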

Sunday, May 13, 2018


Answering of what-think queries with a compound sentence.

We would like our JavaScript Artificial Intelligence (JSAI) to be able to answer queries in the format of "What do you think?" or "What do you know?" We begin in the InStantiate() module by adding a line of code, borrowed from the AI, to zero out the input of a 781=WHAT concept. Then we input "what do kids make" and the AI correctly answers, "KIDS MAKE ROBOTS". However, when we input "what do you think" or "what do you know", the AI does not respond with "I THINK..." or "I KNOW...". Therefore we need to make use of the Indicative() module to generate a compound sentence conjoined with the conjunction "THAT". Into the MindBoot() vocabulary we add an entry for the conjunction 310=THAT.

After much trial and error we have gotten the JSAI to respond to the query "what do you think" with "I THINK THAT I AM A PERSON". We let the English-thinking EnThink() module call the Indicative() module first for a main clause joined with the conjunction "that", and then a second time to generate a subordinate clause. When we ask, "what do i think", the response is "YOU THINK THAT I AM A PERSON". When we inquire "what does god think", the ignorance of the AI engenders the answer "I THINK THAT GOD THINK", which may or may not be a default resort to the ego-concept of self.
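The double call to Indicative() can be sketched minimally. The module names and the conjunction 310=THAT come from the source; the clause parameter and the stubbed clause bodies are assumptions for illustration.

```javascript
// Minimal sketch: EnThink() calls Indicative() twice and joins the two
// clauses with the conjunction 310=THAT to answer a what-think query.
function Indicative(clause) {             // stub: generates one clause
  return clause === "main" ? "I THINK" : "I AM A PERSON";
}

function EnThink() {
  const main = Indicative("main");        // first call: main clause
  const sub  = Indicative("subordinate"); // second call: subordinate clause
  return `${main} THAT ${sub}`;           // conjoined with 310=THAT
}

const answer = EnThink();
```

The same two-clause scaffold explains the defective "I THINK THAT GOD THINK" output: the subordinate clause is generated even when the AI has no real knowledge to put in it.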