Cyborg AI Minds are a true concept-based artificial intelligence with natural language understanding, simple at first and lacking robot embodiment, and expandable all the way to human-level intelligence and beyond.

Saturday, June 16, 2018

jmpj0616

Fleshing out VisRecog() in the First Working AGI

In today's 16jun18A.html version of the tutorial AI Mind in JavaScript for Microsoft Internet Explorer (MSIE), we flesh out the previously stubbed-in VisRecog() module for visual recognition. The AGI already contains code that makes the EnVerbPhrase() module call VisRecog() when the AI Mind is using its ego-concept and trying to tell us what it sees.

As a test we input "you see god" and wait for the thinking software to cycle through its available ideas and come back to the idea that we communicated to it. As we explain in our MindGrid diagram on GitHub, each input idea goes into neuronal inhibition and resurfaces, in what is perhaps AI consciousness, only after the inhibition has subsided.

Although we tell the AI that it sees God, the AI has no robot body and therefore cannot see anything. It eventually says "I SEE NOTHING" because the default direct object provided by VisRecog() is 760=NOTHING. In the MindBoot() sequence we add "I NEED A BODY" as an innate idea, so as to encourage users to implement the AI Mind in a robot. Once the AI is embodied in a robot, the VisRecog() module will enable it to tell us what it sees, as sketched below.
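For readers who want to experiment, here is a minimal sketch of the behavior described above. It is not the actual AI Mind source code: only the 760=NOTHING default comes from the AGI itself, while the robotCamera hook, the conceptWord lookup, and the string-based EnVerbPhrase() are illustrative stand-ins for the real time-indexed memory arrays.

// A hedged sketch, not the actual AI Mind source. Only the concept
// number 760 = NOTHING is taken from the AGI; everything else here
// (robotCamera, conceptWord, string-based phrases) is illustrative.
var NOTHING = 760;                    // default direct-object concept
var conceptWord = { 760: "NOTHING" }; // concept number -> English word
var robotCamera = null;               // no robot body yet, so no sensor

// VisRecog() supplies the direct-object concept for "I see ...".
// With no camera attached, it falls back to 760 = NOTHING.
function VisRecog() {
  if (robotCamera === null) {
    return NOTHING;                   // disembodied AI sees nothing
  }
  return robotCamera.recognize();     // future embodied recognition
}

// Illustrative call site: when the subject is the ego-concept and the
// verb is "see", EnVerbPhrase() asks VisRecog() for the direct object.
function EnVerbPhrase(subject, verb) {
  if (subject === "I" && verb === "SEE") {
    return "I SEE " + conceptWord[VisRecog()];
  }
  return subject + " " + verb;        // other cases elided
}

console.log(EnVerbPhrase("I", "SEE")); // -> "I SEE NOTHING"

Once a robot embodiment assigns a real sensor object to the robotCamera hook, VisRecog() stops returning the 760=NOTHING default and the same call site lets the AI report what it actually sees.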
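In the same hedged spirit, here is one way the MindBoot() addition mentioned above might look. The innateIdeas array is a hypothetical stand-in; the real MindBoot() sequence stores concepts in time-indexed memory arrays rather than plain strings.

// Another hedged sketch, not the actual MindBoot() source.
// innateIdeas is a stand-in for the real time-indexed memory.
var innateIdeas = [];

function MindBoot() {
  // Seed the innate idea that nudges users toward robot embodiment.
  innateIdeas.push("I NEED A BODY");
  // ... the rest of the innate vocabulary and ideas would load here ...
}

MindBoot();
console.log(innateIdeas[0]); // -> "I NEED A BODY"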