Mentifex AI Minds are a true concept-based artificial general intelligence, simple at first and lacking robot embodiment, but expandable all the way to human-level intelligence and beyond.

Sunday, July 01, 2018


Debugging the InFerence Function in the First Working AGI.

The Perlmind has a minor bug that prevents logical inference from working unless the inference is triggered immediately at the start of a program run. If we let the AI run for a while and then type in "anna is woman", the AI answers "DOES ERROR HAVE CHILD" instead of "DOES ANNA HAVE CHILD". In the psy concept array of the silent inference, we observe that a zero is being recorded instead of the concept number "502" for Anna. The AI MindBoot is designed with the concept of "ERROR" placed at the beginning of the boot sequence, so that any fruitless search for a concept automatically results in an "ERROR" message. We suspect that some variable in the InFerence module is not being loaded with the correct value once the AI has already started thinking various thoughts.

The pertinent item in the InFerence() module is the $subjnom or "subject nominative" variable, which is set outside of the module before InFerence() is even called. We discover that the variable is misspelled in the OldConcept() module, and we correct the spelling. It then seems that InFerence() can be called at any time and still operate properly. We decide to run the JavaScript AI to see whether an inference has any problems if it is not the first order of business at the outset of an AI session. Nothing goes wrong, so the problem must have been the misspelling in the OldConcept() module.

During this coding session we also make a change in the KbRetro() module for the retroactive adjustment of the knowledge base (KB). We insert some code to set the $tru(th)-value variable to an arbitrary eight (8) for the noun at the start of the silent inference, such as "ANNA" in the silent inference "ANNA HAVE CHILD". When the human user either confirms or invalidates the inference, the resulting knowledge ought to have a positive truth-value, because someone has vouched for the truth or the negation of the inferred idea. We envision that the $tru(th)-value will let an AI Mind restrict its thinking to ideas which it believes, as opposed to mere assertions or to ideas which were true yesterday but not today. We expect the $tru(th)-value to become fully operative in a robotic AI Mind for which "Seeing is believing", when visual recognition from cameras serving as eyes provides reliable knowledge to which a high $tru(th)-value may be assigned.