Cyborg AI Minds are a true concept-based artificial intelligence with natural language understanding, simple at first and lacking robot embodiment, and expandable all the way to human-level intelligence and beyond.

Friday, November 30, 2018


At about 1:11 p.m. on 2018-11-30, we got the following idea.

If we want to have logical conditionals in the AI Mind involving the conjunction "IF", we can use the truth-value $tru to distinguish between outcomes. For instance, consider the following.

Computer: If you speak Russian, I need you.
Human: I speak English. I do not speak Russian.
Computer: I do not need you.

In some designated mind-module, we can trap the word "IF" and use it to assign a high $tru value to an expected input.
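The trap can be sketched as follows. This is a hypothetical illustration in Python (the AI Mind itself uses Perl variables such as $tru); the names `trap_if` and `pending` are inventions for this sketch, not actual AI Mind modules.

```python
# Hypothetical sketch; Python stands in for the AI Mind's Perl code,
# and the dict key "tru" stands in for the $tru truth-value variable.

pending = []  # conditionals trapped from input, awaiting a matching idea

def trap_if(sentence):
    """Trap the word "IF" and split the statement into a
    condition clause and a pay-off clause."""
    s = sentence.upper().rstrip(".")
    if s.startswith("IF ") and "," in s:
        condition, payoff = s[3:].split(",", 1)
        pending.append({"condition": condition.strip().split(),
                        "payoff": payoff.strip().split(),
                        "tru": 0})  # $tru stays low until the condition arrives
        return True
    return False
```

For example, trapping "If you speak Russian, I need you." would store the condition words YOU SPEAK RUSSIAN alongside the pay-off I NEED YOU, with a low $tru until the condition is satisfied.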

Just as, several years ago, we arranged to answer questions with "yes" or "no" by testing for an associative chain, we can test for the associative chain specified by "IF" and, instead of outputting "yes" or "no", assign a high $tru value to the pay-off statement following the "IF" clause. It then becomes easy to flush out any statement having a high truth-value, or having the highest truth-value among a cluster or group of competing truth-values.
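A minimal sketch of that selection step, again in Python with invented names: `chain_matches` is a crude stand-in for the associative-chain test, and the high value 64 is an assumption, since the actual $tru scale is not specified here. Pronoun reversal between speakers (you/I) is glossed over.

```python
TRU_HIGH = 64  # assumed high truth-value; the real scale is not given here

def chain_matches(condition, input_words):
    """Crude stand-in for the associative-chain test used for
    yes/no question-answering: the condition matches when every
    one of its words occurs in the input."""
    return all(w in input_words for w in condition)

def select_payoff(conditionals, input_words):
    """Raise $tru on each matched pay-off, then flush out the
    statement with the highest truth-value among the competitors."""
    for c in conditionals:
        if chain_matches(c["condition"], input_words):
            c["tru"] = TRU_HIGH
    best = max(conditionals, key=lambda c: c["tru"])
    return " ".join(best["payoff"]) if best["tru"] > 0 else None
```

With the stored conditional from the Russian example, input containing YOU SPEAK RUSSIAN would flush out "I NEED YOU", while input about English would leave every $tru low and flush out nothing.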

These ideas could even apply to negated ideas, such as, "We need you if you do NOT speak Russian."
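Negation fits the same scheme with one extra flag. In this hypothetical Python sketch, a negated conditional earns the high $tru precisely when the associative chain is NOT found in the input:

```python
TRU_HIGH = 64  # assumed high truth-value, as above

def conditional_tru(condition, negated, input_words):
    """Return the $tru value earned by a pay-off statement.
    A positive conditional fires when its chain is found;
    a negated one fires when its chain is NOT found."""
    found = all(w in input_words for w in condition)
    return TRU_HIGH if found != negated else 0
```

So "We need you if you do NOT speak Russian" would go high on input about speaking English and stay low on input about speaking Russian.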

Now, here is where it gets Singularity-like and ASI-like, as in "Artificial Super Intelligence." Whereas a typical human brain would not be able to handle a whole medley of positive and negative conditionals, an AI Mind using "IF" and $tru could probably handle dozens of conditionals concurrently, either all at once or in a sequence.
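The concurrent case can be sketched by simply looping over the whole medley: a hypothetical Python stand-in that evaluates a list of positive and negated conditionals against one input, all at once, and collects every pay-off whose truth-value went high.

```python
TRU_HIGH = 64  # assumed high truth-value, as above

def fire_conditionals(conditionals, input_words):
    """Evaluate a medley of positive and negated conditionals
    against one input, returning every pay-off whose $tru goes high."""
    fired = []
    for c in conditionals:
        found = all(w in input_words for w in c["condition"])
        if found != c.get("negated", False):
            c["tru"] = TRU_HIGH
            fired.append(" ".join(c["payoff"]))
    return fired
```

Nothing in the loop limits it to two or three entries; dozens of conditionals would be scored in the same single pass.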