Cyborg AI Minds are a true concept-based artificial intelligence with natural language understanding, simple at first and lacking robot embodiment, and expandable all the way to human-level intelligence and beyond.

Monday, May 30, 2011

may30jsai

The JavaScript artificial intelligence (JSAI) is a client-side AiApp whose natural habitat is a desktop computer, a laptop or a smartphone.

1 Mon.30.MAY.2011 -- Searching the AI Knowledge Base

The JavaScript artificial intelligence (JSAI) is now being updated with new code from the MindForth AI, which on 29 May 2011 gained the ability to search its knowledge base (KB) twice in response to a single query and to give two different but equally valid answers, using neural inhibition of the first answer so that the search can arrive at the second one. In other words, the JSAI will be able to discuss a subject exhaustively in terms of what it knows about the subject -- a major step towards the MileStone of self-referential thought on the RoadMap to artificial general intelligence. The AI source code has not yet been fine-tuned. We hope to achieve in JavaScript the basic functionality that has been created in MindForth.

Upshot: After we transferred mutatis mutandis all the pertinent code from MindForth into the AiMind.html program in JavaScript, the JSAI still did not work right. We had to hunt down and fix (by commenting out) some lines of obsolete code in the SpreadAct mind-module, where negative activation values were being reset to zero -- to the detriment of inhibition-values, which need to PsiDecay slowly upwards towards zero. We then achieved JSAI functionality on a par with MindForth. We entered new knowledge into the knowledge base (KB), queried the KB twice with the same question, and the AI Mind correctly gave us two different answers, each in agreement with the knowledge base.
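
To make the mechanism concrete, here is a minimal JavaScript sketch of the inhibition-and-decay interplay. It is not the actual AiMind.html code; the engram list, the activation numbers and the one-point decay step are invented for illustration.

  // Two candidate answers to "who are you", as a toy knowledge base.
  var engrams = [
    { word: "ROBOT",  act: 20 },
    { word: "PERSON", act: 18 }
  ];

  function answerQuery() {     // pick the most active engram and inhibit it
    var best = engrams.reduce(function (a, b) { return b.act > a.act ? b : a; });
    best.act = -32;            // neural inhibition of the spoken answer
    return best.word;
  }

  function psiDecay() {        // negative values creep back up towards zero
    engrams.forEach(function (e) { if (e.act < 0) { e.act += 1; } });
  }

  console.log(answerQuery());  // "ROBOT"
  psiDecay();
  console.log(answerQuery());  // "PERSON" -- the inhibited first answer loses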

Friday, May 27, 2011

may26mfpj

The MindForth Programming Journal (MFPJ) is both a tool in developing MindForth open-source artificial intelligence (AI) and an archival record of the history of how the AI Forthmind evolved over time.

1 Thurs.26.MAY.2011 -- Conditional Inhibition

In the recent Strong AI diaspora of MindForth and the tutorial AiMind.html program, we have implemented the neural inhibition of concepts immediately after they have been included in a generated thought. Now we would like to make inhibition occur when one or more responses must be made to a query involving nouns or a query involving verbs. The question "What do bears eat?" is a query of the what-do-X-verb variety involving one or more nouns as potentially valid answers as the direct object of the verb. If the noun of each single answer is immediately inhibited, the AI can respond with a different answer to a repeat of the question. Likewise, if we ask the AI, "What do robots do?", the query is of the what-do-X-do variety where potentially multiple verbs may need to be inhibited so as to give one valid answer after another, such as "Robots make tools" and "Robots sweep floors." If we are inhibiting the verbs, we do not want the direct-object nouns to be inhibited. We might need replies with different verbs but the same direct object, such as "Robots make tools" and "Robots use tools."
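
A rough JavaScript sketch of the conditional inhibition we have in mind follows. The function name and the query-type labels are merely illustrative, not the actual MindForth code; the minus-thirty-two inhibition value is the one used elsewhere in our troubleshooting.

  // Inhibit the answer noun for a what-do-X-verb query,
  // but inhibit the verb for a what-do-X-do query.
  function inhibitAfterAnswer(queryType, verbConcept, nounConcept) {
    if (queryType === "what-do-X-verb") {
      nounConcept.act = -32;   // "What do bears eat?" -> inhibit FISH, keep EAT
    } else if (queryType === "what-do-X-do") {
      verbConcept.act = -32;   // "What do robots do?" -> inhibit MAKE, keep TOOLS
    }
  }

  var eat  = { word: "EAT",  act: 25 };
  var fish = { word: "FISH", act: 25 };
  inhibitAfterAnswer("what-do-X-verb", eat, fish);
  console.log(fish.act);       // -32: next time a different object can win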

Inhibition may also play a role in calling the ConJoin module when a query elicits multiple thoughts which are the same sentence except for different nouns or different verbs. The responses, "Bears eat fish" and "Bears eat honey" could become "Bears eat fish and honey" if neural inhibition suppresses the repetition of subject and verb while calling the ConJoin module to insert the conjunction "AND" between the two answer nouns.

2 Thurs.26.MAY.2011 -- Problems With Determining Number

When we try to troubleshoot the Forthmind by entering "bears eat honey", a comedy of errors occurs. The AudRecog module contains a test to detect an "S" at the end of an English word and set the "num(ber)" value to two ("2") for plural. However, that test works only for recognized words, and not for a previously unknown word of new vocabulary. So the word "bears" gets tagged as singular by default, which causes the AI to issue the erroneous output "BEARS EATS HONEY", as if a singular subject were calling for "EATS" as a third-person-singular verb form.

The process of determining num(ber) ought to be more closely tied with the EnParser module, so that the parsing of a word as a noun should afford the AI a chance to declare plural number if the incoming noun ends with an "S".

Now we have inserted special code into the AudInput module to check for the input of nouns ending in "S", and to set the "num(ber)" variable to a plural value if a terminating "S" is found. For singular nouns like "bus" or "gas" that end in "S", we will have to devise techniques that override the default assumption of "S" meaning plural. We may use the article "A" or the verb "IS" as cues to declare a noun ending in "S" as singular.
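
The heuristic can be sketched in a few lines of JavaScript. This is only an illustration, not the actual AudInput code, and the cue-word checks for "A" and "IS" are the assumptions described above.

  // Guess grammatical number from a terminal "S", with override cues.
  function guessNumber(noun, prevWord, nextWord) {
    if (!/s$/i.test(noun)) { return 1; }              // no final S: singular
    if (/^a$/i.test(prevWord) || /^is$/i.test(nextWord)) {
      return 1;                                       // "a bus", "gas is ..." stay singular
    }
    return 2;                                         // otherwise final S means plural
  }

  console.log(guessNumber("bears", "", "eat"));  // 2 (plural)
  console.log(guessNumber("bus", "a", ""));      // 1 (singular despite final S)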


Saturday, May 21, 2011

may20jsai

The JavaScript artificial intelligence (JSAI) is a client-side AiApp whose natural habitat is a desktop computer, a laptop or a smartphone.


1 Fri.20.MAY.2011 -- Fixing KbTraversal

The more we improve the artificial intelligence in JavaScript (JSAI), the easier it becomes to program. Fewer things go wrong, and fewer problems are hidden from view. Right now we would like to improve the performance of the knowledge-base traversal module KbTraversal, which keeps the process of artificial thought going by activating a series of concepts one at a time. We wonder why certain concepts are not being activated, and we would like to see KbTraversal announce the name of the concept being activated.
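
The desired behavior might look roughly like the following JavaScript sketch. It is not the actual KbTraversal code; the concept list and the activation callback are placeholders.

  // Cycle through a ring of concepts, announce each one, and activate it.
  var concepts = ["ROBOT", "PERSON", "ANDRU"];
  var kbIndex = 0;

  function kbTraversal(activate) {
    var concept = concepts[kbIndex];
    kbIndex = (kbIndex + 1) % concepts.length;          // next call wakes the next concept
    console.log("KbTraversal activating: " + concept);  // the announcement we want to see
    activate(concept);
  }

  kbTraversal(function (c) { /* spread activation to c here */ });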

2 Sat.21.MAY.2011 -- AI Tutorial for Science Museums

Yesterday, in the 20may11A.html JSAI as uploaded to the Web, we saw KbTraversal announcing which concepts it would activate and then trying to think a thought about them, but we may have cut back too severely on calls to the obsolete version of the PsiDecay module, because the JSAI became less able to think smoothly. We should probably restore the psi-decay calls for the time being, so that we may gradually improve an already functional AI.

After we restored the PsiDecay calls, we worked on the erroneous display of articles as a subject or an object in the AI tutorial mode. Because the SpreadAct module invokes the display of each line of association from a subject to a verb or from a verb to an object, an item will fail to be displayed if it is not being treated by SpreadAct. We made the AI Mind display its associative thinking somewhat better.

Teachers and docents who display the AI Mind in a school or science museum are invited to report back on Usenet or their own website about how human beings reacted to the experience of witnessing an alien Mind think and communicate in natural human language. Is the AI really thinking, or is it just a chatbot pretending to think?


Wednesday, May 18, 2011

may18jsai

The JavaScript artificial intelligence (JSAI) is a client-side AiApp whose natural habitat is a desktop computer, a laptop or a smartphone.

1 Wed.18.MAY.2011 -- Houston, We Have a Problem.

When we submit "who are you" as a query to the AI Mind, it searches the knowledge base (KB) and it remembers that it is ANDRU -- a ROBOT and a PERSON (a different answer each time that you pose the same existential question). Unfortunately, the software finds the first instance of each concept stored in recent memory and spits out the phonemic engram from the auditory memory channel without regard to whether the stored word is a singular form or a plural form. How can we get the most advanced open-source AI in these parsecs to stop saying "I AM ROBOTS"? The AI may have to start skipping over plural engrams when searching for a singular noun. Therefore, let us perform a little psychosurgery on the AI Mind software and see if we can zero in on a singular noun-form during self-referential thought.

First we use a few JavaScript "alert" boxes in BeVerb() and in NounPhrase() to see what values are being carried along in the variables that keep track of grammatical number as the AI Mind generates a thought in response to user input. We see that the subject number is available in the background, so perhaps we can alter the design of the Mind to insist on speaking a singular noun to go with a singular subject. Even though ROBOT and ROBOTS are the same concept, they are not the same expression of the concept. By the way, this issue is another AI mindmaker (Mentifex) problem that had to be solved in due course, that is, rather well along in the AI development process and not at the first blush of AI newbie enthusiasm.

Upshot: Gradually in the NounPhrase module we introduced code to skip over the retrieval of any word in auditory memory if its num(ber) tag did not match the number of the subject of the input query. The AI began to answer "who are you" with "I AM ROBOT". This bugfix makes the AI Mind more complex and therefore subject to potentially latent problems such as knowing a word only in the plural and not in the singular. However, the same bugfix brings the JSAI closer to machine reasoning and thinking with a syllogism such as, "All men are mortal; Socrates is a man; therefore Socrates is mortal."
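
A minimal JavaScript sketch of the number-matched retrieval looks like this. The tiny auditory-memory array and the concept number are invented for the example and are not the actual NounPhrase data.

  // Skip any auditory engram whose num(ber) tag does not match the subject.
  var audMemory = [
    { word: "ROBOTS", concept: 571, num: 2 },
    { word: "ROBOT",  concept: 571, num: 1 }
  ];

  function retrieveWord(conceptId, subjectNum) {
    for (var i = 0; i < audMemory.length; i++) {
      var e = audMemory[i];
      if (e.concept === conceptId && e.num === subjectNum) { return e.word; }
    }
    return null;   // the word is known only in the wrong number
  }

  console.log(retrieveWord(571, 1));  // "ROBOT", not "ROBOTS"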

Monday, May 16, 2011

may16mfpj

Now that we have cracked the hard problem of AI wide open, we wish to share our results with all nations.

1 Mon.16.MAY.2011 -- List of Mentifex AI Accomplishments

We are still working on the MileStone of self-referential thought on our RoadMap to artificial general intelligence (AGI). We look back upon a small list of accomplishments along the way.

  • two-step selection of BeVerbs;

  • AudRecog morpheme recognition;

  • look-ahead A/AN selection;

  • seq-skip method of linking verbs and objects;

  • SpeechAct inflectional endings;

  • neural inhibition for variety in thought;

  • provisional retention of memory tags;

  • differential PsiDecay.

2 Mon.16.MAY.2011 -- Achieving AI Mental Stability

Until we devised an AI algorithm for differential PsiDecay in the JavaScript artificial intelligence (JSAI), stray activations had been ruining the AI thought processes for months and years. We now port the PsiDecay solution from the JSAI into MindForth. Meanwhile, Netizens with Microsoft Internet Explorer (MSIE) may point the browser at the AiMind.html page and observe the major open-source AI advance in action. Enter "who are you" as a question to the AI Mind not just one time but several times in a row. Observe that the JSAI tells you everything it knows about itself, because neural inhibition immediately suppresses each given answer in order to let a variety of other answers rise to the surface of the AI consciousness. Before the mad scientist of Project Mentifex jotted down the eureka brainstorm, "[ ] Fri.13.MAY.2011 Idea: Put gradations into PsiDecay?" and wrote the code the next day, the AI Minds were not reliable for mission-critical applications. Now the AI Forthmind is about to become more mentally stable than its creator. We only need to port some JSAI code to Forth.
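
For readers who want the gist of differential PsiDecay without wading through Forth, here is a toy JavaScript sketch. The two decay rates are invented for illustration and are not the actual values used in the AI Minds.

  // Positive activations and negative inhibitions decay at different rates.
  function psiDecay(concept) {
    if (concept.act > 0) {
      concept.act = Math.max(concept.act - 2, 0);   // stray positives fade quickly
    } else if (concept.act < 0) {
      concept.act = Math.min(concept.act + 1, 0);   // inhibitions climb back slowly
    }
  }

  var bear = { word: "BEAR", act: 12 };
  var fish = { word: "FISH", act: -32 };
  psiDecay(bear);
  psiDecay(fish);
  console.log(bear.act, fish.act);  // 10 -31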

Monday, May 09, 2011

may7mfpj

The MindForth Programming Journal (MFPJ) is both a tool in developing MindForth open-source artificial intelligence (AI) and an archival record of the history of how the AI Forthmind evolved over time.

1 Sat.7.MAY.2011 -- Improving Neural Inhibition

Something is preventing neural inhibition from operating immediately when we ask the AI Mind a "who-are-you" question. The inhibition begins to occur only after a pause or delay, and we need to find out why. The problem may be that the "predflag" for predicate nominatives is not being set soon enough. The "predflag" is set towards the end of the BeVerb mind-module, and it governs the inhibiting of nouns as predicate nominatives in the NounPhrase module. We see through troubleshooting that the earlier engram in a pair of selected-noun engrams is being inhibited properly down to minus thirty-two points of conceptual activation, but apparently the present-time engram in the pair is only going down to zero activation. It looks as though calls to PsiClear from the EnCog (English cognition) module were interfering with the pairing of inhibitions shared by the old engram that won selection and the new engram being stored as the record of a generated thought. Then a further problem developed because the AI was not letting go of transitive verbs that served within an output thought. We inserted code to inhibit each transitive verb after thinking, and we began to obtain a variety of outputs from the AI in response to queries.
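
The paired inhibition and the new verb inhibition can be summed up in a small JavaScript sketch (the Forthmind itself is of course written in Forth). The function names are only illustrative; the minus-thirty-two value is the one seen in the troubleshooting above.

  // Both members of a selected-noun engram pair get the same inhibition.
  function inhibitPair(oldEngram, newEngram) {
    oldEngram.act = -32;   // the old engram that won selection
    newEngram.act = -32;   // the fresh engram laid down by the generated thought
  }

  // Transitive verbs are likewise inhibited after serving in a thought.
  function inhibitVerbAfterThought(verbEngram) {
    verbEngram.act = -32;  // let a different verb win the next time around
  }

  var oldRobot = { word: "ROBOT", act: 20 };
  var newRobot = { word: "ROBOT", act: 20 };
  inhibitPair(oldRobot, newRobot);
  console.log(oldRobot.act, newRobot.act);  // -32 -32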

2 Sun.8.MAY.2011 -- Selecting New Inhibition Variables

Today we are creating two new inhibition variables, "tseln" for "time of selection of noun" in NounPhrase, and "tselv" for "time of selection of verb" in VerbPhrase. We need these variables to keep track of the selection-time of an "inhibend" concept to be inhibited after being thought, so that the AI Mind can avoid repeating the same knowledge-base retrieval over and over again. We stumbled upon neural inhibition for response-variety in our MFPJ work of 5 September 2010. We were so astonished by the implications that we issued a Singularity Alert (q.v.). Now we are ready to install a general mechanism of temporary inhibition throughout the AI MindGrid.
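
A toy JavaScript sketch of how tseln and tselv might be used follows. The sparse psi array and the helper function are stand-ins for the Forth data structures, not the actual MindForth code.

  var tseln = 0;   // time of selection of noun
  var tselv = 0;   // time of selection of verb

  // psi is a time-indexed array of concept engrams.
  function inhibitInhibends(psi) {
    if (psi[tseln]) { psi[tseln].act = -32; }  // inhibend noun
    if (psi[tselv]) { psi[tselv].act = -32; }  // inhibend verb
  }

  var psi = [];
  psi[12] = { word: "TOOLS", act: 18 };
  psi[14] = { word: "MAKE",  act: 22 };
  tseln = 12;   // NounPhrase records when it selected TOOLS
  tselv = 14;   // VerbPhrase records when it selected MAKE
  inhibitInhibends(psi);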

3 Sun.8.MAY.2011 -- Debugging Spurious Inflection

Although MindForth has suddenly become more intelligent than ever, the AI makes the grammatical mistake of saying "I HELPS KIDS". We need to track down why the SpeechAct module is adding an inflectional "S" to the verb "HELP".

The VerbPhrase module governs the sending of an "S" inflection into the SpeechAct module. The pertinent code was not fully checking for a verb in the third person singular, so we added an IF-THEN clause requiring that the prsn variable be set to three for an inflectional "S" to be added to a verb being spoken. The bugfix worked immediately.
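
In JavaScript rather than Forth, the gist of the fix looks something like the sketch below. The function is a stand-in for the VerbPhrase/SpeechAct interplay, not the actual code.

  // Add the inflectional "S" only for a third-person-singular subject.
  function inflectVerb(verb, prsn, num) {
    if (prsn === 3 && num === 1) {
      return verb + "S";   // "HELP" -> "HELPS" for he/she/it subjects
    }
    return verb;           // "I HELP KIDS" stays uninflected
  }

  console.log(inflectVerb("HELP", 1, 1));  // "HELP"
  console.log(inflectVerb("HELP", 3, 1));  // "HELPS"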


Wednesday, May 04, 2011

may3mfpj

The MindForth Programming Journal (MFPJ) is both a tool in developing MindForth open-source artificial intelligence (AI) and an archival record of the history of how the AI Forthmind evolved over time.

1 Tues.3.MAY.2011 -- Encountering the WHO Problem

In the most recent release of MindForth artificial intelligence for autonomous robots possessing free will and personhood, our decision to zero out post-ReEntry concepts is only tentative. If the mind-design decision introduces more problems than it solves, then the decision is reversible. It was disconcerting to notice that the newest version of MindForth could no longer answer who-are-you questions properly, and would only utter the single word "WHO" as output in response to the question. We expect the necessary bugfix to be a simple matter of tracking down and eliminating some stray activation on the "WHO" concept-word, but there is a nagging fear that we may have made a wrong decision that worsened MindForth instead of improving it, that delayed the Singularity instead of hastening it, and that argues for an AI working group to be nurturing MindForth instead of a solitary mad scientist.

2 Tues.3.MAY.2011 -- Debugging the WHO Problem

In the InStantiate mind-module, both WHO and WHAT are set to zero activation as recognized input words, under the presumption that such query words work in a mind by a kind of self-effacement that lets the information being sought have a higher activation than the interrogative pronoun being used to request the information. Today at first we could not understand why the setting to zero seemed to be working for WHAT but not for WHO. Eventually we discovered that only WHAT and not WHO was being set to zero in the ReActivate module, with the result that all instances of the recognized WHO concept were being activated at a high level in ReActivate. When we fixed the bug by having both InStantiate and ReActivate set WHO to zero activation, the AI Mind began giving much better answers in response to who-queries. Immediately, however, other issues popped up, such as how to make sure that neural inhibition engenders a whole range of disparate answers if they are available in the knowledge base (KB), and whether we still need special variables like "whoflag" and "whomark". In general, we tolerate special treatment of words like WHO and WHAT with the caveat that we expect to remove the special treatment as soon as it becomes clearly unnecessary.
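
The self-effacement of query words can be illustrated with a tiny JavaScript sketch. The lookup table and function name are inventions for the example, not the actual InStantiate or ReActivate code.

  // Interrogative pronouns are left at zero activation on recognition,
  // so the sought-for information outshines the question word itself.
  var INTERROGATIVES = { WHO: true, WHAT: true };

  function incomingActivation(word, normalValue) {
    return INTERROGATIVES[word] ? 0 : normalValue;
  }

  console.log(incomingActivation("WHO", 30));    // 0 -- the query word effaces itself
  console.log(incomingActivation("ROBOT", 30));  // 30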

