AI Mind Maintainer
Sunday, May 20, 2018
Sunday, May 13, 2018
After much trial and error we have gotten the JSAI to respond to the query "what do you think" with "I THINK THAT I AM A PERSON". We let the English-thinking EnThink() module call the Indicative() module first to generate a main clause, insert the conjunction "that", and then call Indicative() again to generate a subordinate clause. When we ask, "what do i think", the response is "YOU THINK THAT I AM A PERSON". When we inquire "what does god think", the ignorance of the AI engenders the answer "I THINK THAT GOD THINK", which may or may not be a default resort to the ego-concept of self.
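The two-call pattern described above can be sketched in JavaScript. The names EnThink() and Indicative() come from the post, but the internals here are simplified assumptions for illustration; the real JSAI modules operate on arrays of activated concepts rather than plain strings.

```javascript
// Hypothetical, simplified Indicative(): builds one clause from a
// subject, verb and (optional) predicate nominative.
function Indicative(subject, verb, predicate) {
  return [subject, verb, predicate].filter(Boolean).join(" ");
}

// EnThink() calls Indicative() for the main clause, inserts the
// conjunction "THAT", and calls Indicative() again for the
// subordinate clause -- the structure described in the post.
function EnThink() {
  const mainClause = Indicative("I", "THINK", "");
  const subordinateClause = Indicative("I", "AM", "A PERSON");
  return mainClause + " THAT " + subordinateClause;
}

console.log(EnThink()); // → "I THINK THAT I AM A PERSON"
```

The same two-call mechanism explains the other responses: swapping the main-clause subject yields "YOU THINK THAT ..." for the query "what do i think".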
Saturday, March 17, 2018
Anyone finding a bug in the AI software may subscribe to the mail-list email@example.com for Artificial General Intelligence (AGI) and report bugs to the AGI community or engage in archived AGI discussion. There is no bug-bounty, other than the glory of the deed.
Friday, March 02, 2018
We have a chance here to demonstrate an entity aware of itself and of some other entity such as a human user conversing with the AI. If we start claiming that our JSAI has consciousness, Netizens will test the AI in various ways, such as asking it a lot of questions. Typical questions to test consciousness would be "who are you" and "who am i". The interrogative pronoun "who" sets the qucon flag to a positive value of one so that the SpreadAct module may activate the necessary concepts for a proper response. We need a way to make the AI concentrate on the subject of any who-query, so that the AI will give evidence of consciousness simply by answering the question.
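The qucon-flag mechanism might be sketched as follows. The names qucon and SpreadAct are taken from the post; the toy knowledge store and the question parsing are assumptions added purely for illustration, not the actual JSAI code.

```javascript
// Flag raised to a positive value when the interrogative pronoun
// "who" is detected in the input, as described in the post.
let qucon = 0;

// Assumed toy concept store: what the AI knows about each subject.
const knowledge = {
  "I": "I AM A PERSON",
  "YOU": "YOU ARE A PERSON"
};

// Hypothetical SpreadAct(): re-activates the concepts needed to
// formulate a proper response about the queried subject.
function SpreadAct(subject) {
  return knowledge[subject] || subject + " IS UNKNOWN";
}

function answerWhoQuery(input) {
  if (input.startsWith("who")) qucon = 1; // "who" sets qucon positive
  if (qucon === 1) {
    // "who am i" asks about the user; "who are you" asks about the AI.
    const subject = input.endsWith("am i") ? "YOU" : "I";
    qucon = 0; // reset the flag after the response
    return SpreadAct(subject);
  }
  return "";
}

console.log(answerWhoQuery("who are you")); // "I AM A PERSON"
console.log(answerWhoQuery("who am i"));    // "YOU ARE A PERSON"
```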
When we enter "god is person" and then ask, "who is god", the AI answers "GOD AM A PERSON" -- which sounds wrong but only requires an improvement in finding the correct form "IS" of the verb "BE".
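The fix suggested above amounts to selecting the form of "BE" from the grammatical person and number of the subject. The lookup table below is an assumption for illustration; the actual JSAI stores such parameters on its concept nodes.

```javascript
// Hypothetical helper: choose the correct present-tense form of
// "BE" for a given subject, defaulting third-person-singular
// subjects such as "GOD" to "IS" rather than "AM".
function formOfBe(subject) {
  const forms = { "I": "AM", "YOU": "ARE", "WE": "ARE", "THEY": "ARE" };
  return forms[subject] || "IS";
}

console.log("GOD " + formOfBe("GOD") + " A PERSON"); // "GOD IS A PERSON"
console.log("I "   + formOfBe("I")   + " A PERSON"); // "I AM A PERSON"
```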