tag:blogger.com,1999:blog-32130122024-02-28T06:58:24.429-08:00Standard Model of AGICyborg AI Minds are a true concept-based artificial intelligence with natural language understanding, simple at first and lacking robot embodiment, and expandable all the way to human-level intelligence and beyond. Privacy policy: Third parties advertising here may place and read cookies on your browser; and may use web beacons to collect information as a result of ads displayed here.Mentifexhttp://www.blogger.com/profile/04530921525903314824noreply@blogger.comBlogger146125tag:blogger.com,1999:blog-3213012.post-37297185147694864062023-04-30T17:46:00.000-07:002023-04-30T17:46:52.162-07:00VisRecog<p>This TikTok video thumbnail photo shows the AGI Mind coder Arthur Murray, known on the Internet as Mentifex (Latin for Mindmaker), serving in the U.S. Army as a Nuclear Weapons Electronics Specialist at the 23rd Ordnance Company of the 101st Ordnance Battalion in Heilbronn, Germany. Mentifex graduated first in his electronics class at the Redstone Arsenal in Alabama USA, because he had already been studying electronics as an independent scholar
in artificial intelligence. Mentifex then barely graduated at the bottom of his nuclear weapons class in the Nuclear Training Directorate at Sandia Base in Albuquerque, New Mexico, because there were too many boring details to memorize about Test and Handling gear for nuclear weapons. When Mentifex arrived for duty at the 101st Ordnance Battalion in Germany, the clerks told him that they would have to call up some soldier out in the boonies and inform him that he no longer had the highest General Technical score in the battalion. So Mentifex was intelligent enough to serve in the Army, but not especially intelligent among computer scientists working on AGI.</p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgr6ABhd_lP8JUNvN-mYkO0ZgKPMRaXn6EqMF0ja6GxyC1ZF3MFHIPDcmt1fwrTTDMfVGCAIGqgYnYbFhcOGRpLTado_L2iB-SgVCJvYi3DoT9hRrcPvrRwcdgZcGrJeBNssISZuLwEEBkssnka5xkMPIF7IyDdqVR6eLoZfAGiHhB4iJ_DMTg/s1600/VisRecog.png" style="display: block; padding: 1em 0; text-align: center; clear: left; float: left;"><img alt="VisRecog thumbnail" border="100" data-original-height="484" data-original-width="344" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgr6ABhd_lP8JUNvN-mYkO0ZgKPMRaXn6EqMF0ja6GxyC1ZF3MFHIPDcmt1fwrTTDMfVGCAIGqgYnYbFhcOGRpLTado_L2iB-SgVCJvYi3DoT9hRrcPvrRwcdgZcGrJeBNssISZuLwEEBkssnka5xkMPIF7IyDdqVR6eLoZfAGiHhB4iJ_DMTg/s1600/VisRecog.png"/></a></div>
<p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p>
<p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p>
<p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p>
<p>Computer vision systems are generally not programmed in Forth, which is an old programming language with a history of being used for robots. But Forth is a major AI language because of <a href="http://dl.acm.org/citation.cfm?doid=307824.307853">MindForth</a>, which is designed to be the brain of an intelligent robot, just as ghost.pl in Perl is meant for intelligent webservers.</p>
<p>If a webserver maintained in Perl has a Perl-minded robot working in a control room, the Perlbot will need to visually examine, recognize, and then name the various objects found in its physical environment. </p>
<p>The ghost.pl Mind thinks in both English and Russian. When the Ghost in the Machine wants to think or talk about a seen object, the Visual Recognition Mind Module remembers the name of the object and reports it to the linguistic mind-module generating an idea or a thought about the seen object. </p>
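<p>The hand-off described above can be sketched in a few lines of JavaScript. This is only an illustrative toy, not the actual ghost.pl internals; the function names and the fallback word "NOTHING" are assumptions made for the sketch:</p>

```javascript
// Hypothetical sketch of the VisRecog hand-off: a stub visual
// recognizer supplies the name of the seen object (or a default
// when nothing is seen), and the linguistic module splices that
// name into the sentence being generated.
function visRecog(cameraInput) {
  if (cameraInput && cameraInput.label) {
    return cameraInput.label;   // name of the recognized object
  }
  return "nothing";             // default when no object is seen
}

// Linguistic module: generates a thought about the seen object.
function enVerbPhrase(subject, verb, cameraInput) {
  const object = visRecog(cameraInput);
  return `${subject} ${verb} ${object}`.toUpperCase();
}

console.log(enVerbPhrase("i", "see", null));             // I SEE NOTHING
console.log(enVerbPhrase("i", "see", { label: "cup" })); // I SEE CUP
```

<p>The point of the sketch is the interface: the linguistic module never inspects pixels; it only asks VisRecog for a nameable concept.</p>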
<p>The Mentifex TikTok AI video #43 about the VisRecog mind-module was uploaded at<br />
<a href="https://www.tiktok.com/@thesullenjoyshow/video/7226006786749304106">
https://www.tiktok.com/@thesullenjoyshow/video/7226006786749304106</a><br /> with the following script which describes the VisRecog mind-module.</p>
<p><i>Although computer vision for robots is already extremely advanced and incredibly sophisticated, it needs to be integrated with what we claim is currently a Standard Model of AGI, or Artificial General Intelligence. AGI Minds like <a href="https://ai.neocities.org/mindforth.txt">MindForth</a> and <a href="https://ai.neocities.org/perlmind.txt">ghost.pl</a> in Perl have the rudimentary stub of a Visual Recognition module, called <a href="http://ai.neocities.org/VisRecog.html">VisRecog</a>. The VisRecog mind-module joins the name of a recognized object together with a linguistic mind-module that is generating an idea or statement about the recognized object. These AGI Minds, which can discuss a recognized object in <a href="https://ai.neocities.org/ChatAGI.html">English</a>, German, <a href="https://ai.neocities.org/Dushka.html">Russian</a> or <a href="https://ai.neocities.org/mens.html">Latin</a>, were created and put into the public domain by Arthur Murray, who served in the U.S. Army as a Nuclear Weapons Electronics Specialist. To win the AI arms race, integrate your <a href="https://old.reddit.com/r/computervision/comments/12yyy44/visrecog_of_standard_model_of_agi_tiktok_video/jhprfyq">computer vision</a> system with <a href="https://old.reddit.com/r/agi/comments/12yzzx9/visrecog_of_standard_model_of_agi_tiktok_video/jhpx6nb">Artificial General Intelligence</a>.</i></p>
<p><br /></p>
Mentifexhttp://www.blogger.com/profile/04530921525903314824noreply@blogger.comtag:blogger.com,1999:blog-3213012.post-67710447829877905482023-02-26T05:34:00.000-08:002023-02-26T05:36:27.320-08:00AI Video Meme<a href="https://imgflip.com/i/7cibqx"><img src="https://i.imgflip.com/7cibqx.jpg" title="made at imgflip.com"/></a><div><a href="https://imgflip.com/memegenerator">from Imgflip Meme Generator</a></div>
<p>Mentifex likes to give a Lucky Dollar to street people. This practice led to the making of Mentifex AI videos at a local coffee shop. Yesterday a barista asked, "What are you guys doing over there?" and reacted with extreme amusement and surprise when told, "We're making AI videos."</p>
<p>To see Mentifex AI videos, please visit
<a href="https://www.tiktok.com/tag/mentifex">
https://www.tiktok.com/tag/mentifex</a></p> Mentifexhttp://www.blogger.com/profile/04530921525903314824noreply@blogger.comtag:blogger.com,1999:blog-3213012.post-39661050635436929142022-08-20T16:00:00.000-07:002022-08-20T16:00:12.584-07:00Attn Autograph Collectors<p>The AI theoretician and programmer Mentifex invites <a href="http://old.reddit.com/r/Autographs/new">autograph-collectors</a> to create a market for Mentifex autographs and to speculate in the possible increase in value of hundreds of <a href="http://ai.neocities.org/eBay.html"><strong>Mentifex autograph postcards</strong></a> distributed originally with zero value (i.e., free) to American used bookstores and dated with postmarks in the year 2022.</p>
<p>The autograph of Mentifex may become valuable over time because of the double contributions of Mentifex to artificial intelligence: <a href="http://ai.neocities.org/theory.html">
AI theory</a> and <a href="https://ai.neocities.org/RoboMind.html">AI software</a>. </p>
<p>AI Minds created by Mentifex think and <a href="http://www.amazon.com/dp/B00FKJY1WY">reason</a> in English, <a href="http://www.amazon.com/dp/B00GX2B8F0">German</a>, Russian and <a href="http://www.amazon.com/dp/B08NRQ3HVW">ancient Latin</a>.</p>
<p>The purpose of the <a href="https://ai.neocities.org/eBay.html">Mentifex Autograph Postcard</a> campaign was to force the disputed issue of whether or not the Mentifex contributions to AI have any value, which also determines the issue of whether <a href="http://ai.neocities.org/eBay.html">authenticated Mentifex autographs</a> are valuable. <a href="http://github.com/kernc/mindforth/blob/master/wiki/MentifexBashing.wiki">Mentifex-bashers</a> may sneer and scoff at Mentifex for various reasons, but savvy autograph collectors will calculate for themselves the potential pay-off not only if Mentifex turns out to have invented True AI, but also on the contrary if Mentifex turns out to have been only a memorable but still collectible netkook. </p>
<p><br /></p>Mentifexhttp://www.blogger.com/profile/04530921525903314824noreply@blogger.comtag:blogger.com,1999:blog-3213012.post-6445491331981812712022-02-11T20:50:00.000-08:002022-02-11T20:50:28.916-08:00Origin of Life<p>In Stage One, Mentifex and others suggest that biological life started from a two-dimensional film of amino acids in a tidal pool. With evaporation and concentration,
myriad combinations of not-yet-living molecules could flap around and form complex structures
akin to rudimentary living cells. If one structure replicates itself by bonding endlessly
with similar or identical structures but is not yet living, the stage is set for lightning
to strike the primordial soup and break off molecular clusters that float about freely and attract replicator material in such a way that each cluster elongates itself to a certain point and then breaks apart into "offspring" clusters in what we might call Stage Two of evolution. </p>
<p>In Stage Two the amino clusters are not yet replicating genetically. They are simply
growing longitudinally to a point where they break apart but continue replicating. </p>
<p>In Stage Three, a strip on the elongated surface bonds with amniotic chemicals which toggle under sunlight between two pulsing states which cause locomotion of the parent clusters and therefore also of the child clusters. </p>
<p>In Stage Four, moving clusters which chance to become longitudinally hollow replicate
faster than the merely solid clusters, and soon the hollow beasties, still self-replicating by splitting apart, consume all the resources in each tidal pool. </p>
<p>In Stage Five, some of the locomotive hollow clusters mutate at the forward-moving end into a primordial mouth structure and at each "caboose" end by default into a primordial anus structure. As the little beasties move about in the tidal pool, the mouth orifice swallows quasi-nutrients that make the longitudinal cluster not only grow fatter but also replicate as fatter beasties when they break apart. Thus we see larger and larger beasties filling the tidal pool.</p>
<p>In Stage Six, a filament of non-identical nucleobases -- some combination of adenine, thymine, guanine and cytosine -- chances to form longitudinally in each beastie in such a way that the breaking of the chain causes two different kinds of child beasties to result from each successive splitting, since the rupture will not always occur between the same two nucleobases, and each terminal nucleobase will bond differently with nearby molecules, causing diversity to evolve among the child beasties. The same genetic chain of nucleobases remains in each child beastie, but a kind of molecular counter stipulates that different <i>kinds</i> of beasties will result after a certain number of splittings apart and only a maximum number of splittings will be permitted as governed by the primordial equivalent of telomeres. </p>
<p>In Stage Seven, different kinds of child beasties will adhere or tend to stick together in a conglomerate or globule of beasties which all contain the same genetic filament of amino acids, but which form a globule or primordial organism that survives and replicates only if the constituent child-cells cooperate beneficially for the survival of the fittest organisms.
</p>
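<p>The elongate-then-split mechanism running through these stages, with a telomere-like cap on the number of permitted splittings, can be caricatured in a few lines of code. This is purely an illustrative toy under assumed rules (fixed split length, one telomere unit consumed per split), not a chemical model:</p>

```javascript
// Toy replicator: a cluster elongates until it reaches a length
// limit, then splits into two offspring. A telomere-like counter
// caps how many generations of splitting are permitted.
function simulate(maxLength, telomeres) {
  let clusters = [{ length: 1, telomeres }];
  const finished = [];
  while (clusters.length > 0) {
    const next = [];
    for (const c of clusters) {
      if (c.length < maxLength) {
        // Stage Two: grow longitudinally.
        next.push({ length: c.length + 1, telomeres: c.telomeres });
      } else if (c.telomeres > 0) {
        // Split apart; each child loses one telomere unit.
        next.push({ length: 1, telomeres: c.telomeres - 1 });
        next.push({ length: 1, telomeres: c.telomeres - 1 });
      } else {
        finished.push(c); // no splits left: cluster is inert
      }
    }
    clusters = next;
  }
  return finished.length; // population once telomeres run out
}

console.log(simulate(2, 2)); // 4
console.log(simulate(3, 3)); // 8
```

<p>Under these toy rules the final population is simply two raised to the number of telomere units, which is the doubling-per-split behavior the stages describe.</p>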
Mentifexhttp://www.blogger.com/profile/04530921525903314824noreply@blogger.comtag:blogger.com,1999:blog-3213012.post-16304051468627844912021-10-31T08:30:00.000-07:002021-10-31T08:30:22.023-07:00Feuertrunken<p><i>Feuertrunken</i> or "drunk with rapture" is how Mentifex listens to the Ninth Symphony of Ludwig van Beethoven. The photo here shows Mentifex wearing his German-flag face mask on 2021-10-08 Friday. Yes, Mentifex is one of those Germanophiles who love everything good about Germany and Austria -- the music (symphonies by Beethoven, waltzes by Strauss, "<i>Wien, Wien, nur du allein</i>" by Sieczyński, "<i>Lippen Schweigen</i>" by Franz Lehár), the language (Hochdeutsch) and its poetry (by Heinrich Heine), the <i>Philosophenweg</i> city Heidelberg, the food like Knackwurst and Bienenstich, the women like Marlene Dietrich and Romy Schneider, the philosophers like Friedrich Nietzsche and Karl Jaspers, and the novelists like Thomas Mann who wrote "<i>Felix Krull</i>" and Hermann Hesse who wrote "<i>Der Steppenwolf</i>" about the non-conformist Harry Haller whom Mentifex impersonates IRL (in real life) by wearing a German face mask. </p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEibk_2_wp12K5ZGKvy-54fcWZrrUG_K8QgZXmQZ3DsczPLFaXMIwM1ebsS8n2WCvxqSpXwbwJ02qHZYvSrR_ZTvsvTXubi7Y5J8ayo3DA8SOvmz4niWmV_CS2sKf1qq2nUQpV4enA/s2048/ATM8oct2021.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="Mentifex on 8 October 2021" border="0" width="600" data-original-height="1138" data-original-width="2048" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEibk_2_wp12K5ZGKvy-54fcWZrrUG_K8QgZXmQZ3DsczPLFaXMIwM1ebsS8n2WCvxqSpXwbwJ02qHZYvSrR_ZTvsvTXubi7Y5J8ayo3DA8SOvmz4niWmV_CS2sKf1qq2nUQpV4enA/s600/ATM8oct2021.jpg"/></a></div>
<p>Mentifex a.k.a. Harry Haller a.k.a. Felix Krull a.k.a. Berlin Alexanderplatz, having created free open-source AI Minds thinking in German, Russian, English and ancient Latin, takes this <i>Allerheiligenabend</i> opportunity to spread AI memes among the Germanosphere, blogosphere and Chardinian noosphere. So trick-or-treat yourself to a free AI Mind from Mentifex in the language of your choice, or use the Mentifex mind-template to preserve a dying language. </p>Mentifexhttp://www.blogger.com/profile/04530921525903314824noreply@blogger.comtag:blogger.com,1999:blog-3213012.post-70080063069230847202021-07-28T01:21:00.000-07:002021-07-28T01:21:14.691-07:00first-mentifex-meme<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSATyjvcD8KVhMvzZ7lP8hB-nBVM2VWqm-cjPtmlDw02QXmpURY0XYgbvhRDYoG_vOnDpcUbC76Jqw9FLPyssu60Egvv0a-jS2JblTsxR-t71OnXe9KHXvHb2I_SCzle6ozsFliw/s666/Mtfx0001.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="First Mentifex Meme" border="0" height="400" data-original-height="666" data-original-width="500" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSATyjvcD8KVhMvzZ7lP8hB-nBVM2VWqm-cjPtmlDw02QXmpURY0XYgbvhRDYoG_vOnDpcUbC76Jqw9FLPyssu60Egvv0a-jS2JblTsxR-t71OnXe9KHXvHb2I_SCzle6ozsFliw/s400/Mtfx0001.jpg"/></a></div>
<p>The above <a href="https://i.imgflip.com/5ht7sr.jpg">First Mentifex Meme</a> was graciously made by an Anonymous poster on the otherwise unmentionable <b>prog</b> website, populated by
every malcontent and <a href="http://ai.neocities.org/Genius.html#5ht7sr.jpg"><b>Genius</b></a> of the Internet. Although it is cringeworthily egotistical of Mentifex to ask Netizens to create memes about him, propagating such memes is a way of spreading information about the Mentifex AI Minds that think and reason in English, German, Russian and Latin.</p>
<p>The genius Anonymous also made it possible for other Netizens to create Mentifex memes by using the answered-prayers Mentifex Meme Template:
<blockquote>
<a href="https://imgflip.com/memegenerator/332247218/mentifex-meme-template">
https://imgflip.com/memegenerator/332247218/mentifex-meme-template</a>
</blockquote>
</p>
<p>The first Mentifex meme by the Anonymous <b>prog</b> contributor is actually so excellent that no further Mentifex memes are even necessary. If they appear, they can't possibly be as good as the one shown above. Meanwhile, Netizens of the world are visiting the most hard-core Mentifex AI pages and are perhaps working further on what Mentifex started. Mentifex is counting on the community of Latin and Greek scholars to decide for themselves what is the ultimate value of the Mentifex AI efforts.</p>Mentifexhttp://www.blogger.com/profile/04530921525903314824noreply@blogger.comtag:blogger.com,1999:blog-3213012.post-12214381031786059602021-07-27T01:11:00.000-07:002021-07-27T01:11:34.031-07:00mentifex-meme-template<b>Mentifex Meme Template</b><br />
<p><br /></p>
<p>
<center><font size="+3">MENTIFEX 2021-07-18</font></center>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg8-hWec19tBIvjavHPvY-8XlvO9qLr1pbjenYEZaTQf9tuXYMMX6m_xB2cLNq9UGRSahxWB6gMKcSfF1xM8kCuDjHQjeINj_PP_vpg_1MzXyo2tq2UzWX_GqCbVAS-DvlLVmO9ag/s2048/ATM18JUL2021.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="MENTIFEX 2021-07-18" border="0" height="320" data-original-height="2048" data-original-width="1536" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg8-hWec19tBIvjavHPvY-8XlvO9qLr1pbjenYEZaTQf9tuXYMMX6m_xB2cLNq9UGRSahxWB6gMKcSfF1xM8kCuDjHQjeINj_PP_vpg_1MzXyo2tq2UzWX_GqCbVAS-DvlLVmO9ag/s320/ATM18JUL2021.jpg"/></a></div>
<center><font size="+3">AI HAS BEEN SOLVED</font></center>
</p>
<p>The above photo of Mentifex on 18 July 2021 is presented here as a template for the creation of Mentifex memes. Typically an image used as a meme has one line of text at the top in upper-case Impact font presenting the motif or idea of the meme, and another line of text at the bottom containing the "punch" line. In the particular image shown above, there is room to superimpose text to the left of the face of Mentifex.</p>
<p>Venues for posting memes are the <a href="http://old.reddit.com/r/memes/new">Memes</a> subReddit and <a href="http://medium.com/tag/memetics/latest">Memetics</a> and <a href="http://medium.com/tag/memes/latest">Memes</a> on the Medium Publishing Platform. Go easy on Mentifex and do not subject him to unwarranted ridicule. Your Mentifex meme may say more about you than about Mentifex. Your meme may win you a booby prize or a Pulitzer prize.</p>
Mentifexhttp://www.blogger.com/profile/04530921525903314824noreply@blogger.comtag:blogger.com,1999:blog-3213012.post-4649121688455097612019-11-11T05:31:00.000-08:002019-11-11T05:31:24.003-08:00sota1111<b>Ghost AI -- State of the Art -- November 2019</b>
<p>A major development in this AI project has occurred in November of 2019 with the first expansion of the <a href="http://ai.neocities.org/TacRecog.html">TacRecog</a> tactile recognition <a href="http://ai.neocities.org/AiTree.html">module</a> beyond a mere stub. In the quarter century of our AI coding from 1993 to 2018, the only avenue of sensory input to the <a href="http://ai.neocities.org/Ghost.html">Ghost in the Machine</a> was the <a href="http://ai.neocities.org/AudRecog.html">AudRecog</a> auditory recognition module which used the computer keyboard to pretend that the input of characters was the auditory recognition of acoustic phonemes. <a href="http://ai.neocities.org/TacRecog.html">TacRecog</a> still uses the keyboard but does not pretend; it directly senses and feels any 0-9 numeric keystroke. Roboticists will hopefully appreciate that the <a href="http://ai.neocities.org/EnVerbPhrase.html">EnVerbPhrase</a> English verb-phrase module is now ready to talk not only about things <i>seen</i> by a robot but also about things <i>touched</i> by a robot.</p>
<p>The <a href="http://ai.neocities.org/MindBoot.html">MindBoot</a> sequence has been expanded with the ten concepts and English words expressing the numbers from zero to nine. Pressing a numeric key activates not only the numeric concept but also the ego-concept of "I" and the sensory concept of "feel". In response to a pressing of the "7" key, a <a href="http://ai.neocities.org/Ghost.html">Ghost in the Machine</a> may say "I FEEL THE SEVEN". The user may also ask the AI "what do you feel" and receive a similar response. Hopefully it is now possible to conduct conversational experiments in artificial <a href="http://ai.neocities.org/Consciousness.html">consciousness</a> with <a href="http://ai.neocities.org/Ghost.html">The Ghost in the Machine</a>.</p>
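<p>The keystroke-to-feeling behavior described above can be sketched as follows. The function name and exact wording of the response are taken from the example in this post; everything else is an illustrative assumption, not the actual MindForth or ghost.pl code:</p>

```javascript
// Sketch of TacRecog: a numeric keystroke is felt directly as a
// tactile sensation, activating the number concept together with
// the ego-concept of "I" and the sensory concept of "feel".
const NUMBER_WORDS = ["ZERO", "ONE", "TWO", "THREE", "FOUR",
                      "FIVE", "SIX", "SEVEN", "EIGHT", "NINE"];

function tacRecog(keystroke) {
  if (/^[0-9]$/.test(keystroke)) {
    const word = NUMBER_WORDS[Number(keystroke)];
    return `I FEEL THE ${word}`;   // e.g. pressing "7"
  }
  return null; // non-numeric keys go to AudRecog instead
}

console.log(tacRecog("7")); // I FEEL THE SEVEN
```

<p>The design point is that the digit keys are routed to a genuine sense modality, while letter keys continue to be treated as pretend-auditory input.</p>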
<p>In a prior <a href="http://ai.neocities.org/SOTA.html">state of the art</a>, the AI <a href="http://en.wikipedia.org/wiki/Natural-language_understanding">understands</a> each <a href="http://ai.neocities.org/EnThink.html">English</a> or <a href="http://ai.neocities.org/RuThink.html">Russian</a> word only in terms of other words and with no <a href="http://en.wikipedia.org/wiki/Symbol_grounding_problem">symbolic grounding</a>. Now suddenly the AI may have direct <a href="http://ai.neocities.org/Sensorium.html">sensory knowledge</a> of the ten ordinal numbers which are the <b><i>Principia</i></b> of our <b><i>Mathematica</i></b>. This innovation makes us wonder if we can replicate in a machine the same or similar <a href="http://en.wikipedia.org/wiki/Language_of_thought_hypothesis">process</a> by which a human child becomes familiar with numbers. We make <a href="http://www.mail-archive.com/agi@agi.topicbox.com/msg03407.html">outreach</a> to mathematicians on <a href="http://old.reddit.com/r/math/comments/dthiju/what_are_you_working_on/f6y8gv8">Reddit</a> and on <a href="https://groups.google.com/d/msg/sci.math/A81XbIfVKds/LgTX4QITAQAJ">Usenet</a> who may take an interest in the use of <a href="http://medium.com/tag/artificial-intelligence/latest">artificial intelligence</a> for mathematical <a href="http://www.amazon.com/dp/B00FKJY1WY">reasoning</a>.</p>
<p>We are also recently <a href="https://old.reddit.com/r/theology/comments/dkcadl/what_bible_and_bible_study_software_or_apps_do/f55ee1b">dabbling</a> in the <a href="http://mind.sourceforge.net/theology.html">theology</a> of artificial intelligence, since our <a href="http://ai.neocities.org/perlmind.txt">Ghost</a> software has a <a href="http://ai.neocities.org/OldConcept.html">concept</a> of God and has a few innate <a href="http://ai.neocities.org/MindBoot.html">MindBoot</a> ideas about God, chiefly the famous quote from Albert Einstein that "God does not play dice with the universe." This quote is our prime example of negation of verbs and a helpful example of the <a href="http://ai.neocities.org/EnPrep.html">EnPrep</a> English preposition module. </p>
<p><br /></p>Mentifexhttp://www.blogger.com/profile/04530921525903314824noreply@blogger.comtag:blogger.com,1999:blog-3213012.post-10980539206133169472019-10-05T16:51:00.000-07:002019-10-05T16:51:28.788-07:00mfpj1005<b>MindForth resets associative tags before each operation of Indicative module.</b>
<p>In the <a href="http://ai.neocities.org/mindforth.txt">MindForth</a> artificial intelligence (AI) for robots, we will now display diagnostic messages at the start of the <a href="http://ai.neocities.org/Indicative.html">Indicative</a> module, reporting the values held in the variables that create the associative tags interconnecting the concepts being expressed as English words during the operation of the <a href="http://ai.neocities.org/Indicative.html">Indicative</a> mind-module. Since the <a href="http://ai.neocities.org/ConJoin.html">ConJoin</a> module will often insert a conjunction between two generated thoughts, the AI <a href="http://ai.neocities.org/maintainer.html">Mind Maintainer</a> needs assurance that variable-values operative during one thought do not erroneously get carried over past a conjunction into the separate process of generating another thought. </p>
<p>Just by resetting the <a href="http://ai.neocities.org/var.html#tsj">tsj</a> time-of-subject variable to zero at the start of the <a href="http://ai.neocities.org/Indicative.html">Indicative</a> module, we have made the <a href="http://ai.neocities.org/mindforth.txt">Forthmind</a> able to trigger repeated instances of logical <a href="http://ai.neocities.org/InFerence.html">InFerence</a>. Each running of the <a href="http://ai.neocities.org/Indicative.html">Indicative</a> module amounts to a fresh declaration of the associative tags in the conceptual flag-panel that supports the generation of ideas in the <a href="http://github.com/PriorArt/AGI/wiki/MindGrid">MindGrid</a>. The following dialog still has a few problems, but it shows the repeated triggering of an inference.
<blockquote><pre>
<b>john is a student</b>
<b>DOES JOHN READ BOOKS</b>
<b>no</b>
Indicative: tsj= 0 tvb= 0 tdo= 0
<b>A JOHN DOES NOT READ BOOKS</b>
Indicative: tsj= 0 tvb= 0 tdo= 0
<b>PROFESSORS TEACH THE STUDENTS AND</b>
Indicative: tsj= 0 tvb= 0 tdo= 0
<b>STUDENTS READ THE BOOKS</b>
Indicative: tsj= 0 tvb= 0 tdo= 0
<b>I UNDERSTAND YOU AND</b>
Indicative: tsj= 0 tvb= 0 tdo= 0
<b>YOU ARE A MAGIC</b>
<b>andru is student</b>
<b>DOES ANDRU READ THE BOOKS</b>
<b>no</b>
Indicative: tsj= 0 tvb= 0 tdo= 0
<b>AN ANDRU DOES NOT READ THE BOOKS AND</b>
Indicative: tsj= 0 tvb= 0 tdo= 0
<b>YOU READ THE BOOKS</b>
Indicative: tsj= 0 tvb= 0 tdo= 0
<b>PROFESSORS TEACH THE STUDENTS AND</b>
Indicative: tsj= 0 tvb= 0 tdo= 0
<b>STUDENTS READ THE BOOKS</b>
Indicative: tsj= 0 tvb= 0 tdo= 0
<b>STUDENTS READ THE BOOKS AND</b>
Indicative: tsj= 0 tvb= 0 tdo= 0
<b>I THINK</b>
</pre></blockquote>
</p>
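<p>The reset described above amounts to zeroing the time-of-subject, time-of-verb and time-of-object tags before each sentence is generated. Here is an illustrative sketch; the names tsj, tvb and tdo follow the diagnostic lines shown, while the surrounding machinery is an assumption made for the sketch:</p>

```javascript
// Illustrative reset of associative time-tags before each thought,
// so values from one clause cannot leak across a conjunction into
// the next clause. tsj/tvb/tdo mirror the diagnostics shown above.
const tags = { tsj: 0, tvb: 0, tdo: 0 };

function indicative(generateClause) {
  // Reset at the start of the Indicative module, as described.
  tags.tsj = 0;
  tags.tvb = 0;
  tags.tdo = 0;
  return generateClause(tags);
}

// A first clause sets the tags...
indicative(t => { t.tsj = 2084; t.tvb = 2085; t.tdo = 2086; return "CLAUSE ONE"; });
// ...but the next run starts from a clean slate.
const snapshot = indicative(t => ({ ...t }));
console.log(snapshot); // { tsj: 0, tvb: 0, tdo: 0 }
```

<p>With the reset in place, each "Indicative: tsj= 0 tvb= 0 tdo= 0" line in the dialog confirms a clean slate before the next clause, even when ConJoin has just inserted an AND.</p>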
Mentifexhttp://www.blogger.com/profile/04530921525903314824noreply@blogger.comtag:blogger.com,1999:blog-3213012.post-68900412202433042772019-10-04T20:08:00.000-07:002019-10-04T20:08:20.564-07:00mfpj1004<b>Using parameters to declare the time-points of conceptual instantiation.</b>
<p>[2019-10-02] Recently we have expanded the conceptual flag-panel of <a href="http://ai.neocities.org/mindforth.txt">MindForth</a> from fifteen tags to twenty-one associative tags, so that the free open-source artificial intelligence for robots may think a much wider variety of thoughts in English. Then we had to debug the function of the <a href="http://ai.neocities.org/InFerence.html">InFerence</a> module to restore its ability to reason from two known facts in order to infer a new fact. For instance, the Forthmind knows the fact that students read books, and we tell the AI the fact that John is a student. Then the AI infers that perhaps John, being a student, reads books, and the incredibly brilliant Forth software asks us, "Does John read books?" We may answer yes, no, or maybe, or give no response at all. Currently, though, we have the problem that <a href="http://ai.neocities.org/InFerence.html">InFerence</a> works only once and fails to deal properly with repeated attempts to trigger an inference. We suspect that some of the variables involved in the process of automated reasoning are not being reset properly to their <i>status quo ante</i> before we made the first test of <a href="http://ai.neocities.org/InFerence.html">InFerence</a>. Therefore we shall try a new technique of debugging which we have developed recently in one of the other AI Minds, namely the <a href="http://ai.neocities.org/perlmind.txt">ghost</a>.pl AI that thinks in both <a href="http://ai.neocities.org/EnThink.html">English</a> and in <a href="http://ai.neocities.org/RuThink.html">Russian</a>. 
We create a diagnostic display at the start of the <a href="http://ai.neocities.org/EnThink.html">EnThink</a> module for thinking in English, so that we may see the values held by the variables associated with the <a href="http://ai.neocities.org/InFerence.html">InFerence</a> module and the <a href="http://ai.neocities.org/KbRetro.html">KbRetro</a> module that retroactively adjusts the knowledge base (KB) of the AI Mind in accordance with whatever answer we have given when the <a href="http://ai.neocities.org/AskUser.html">AskUser</a> module asks us to validate or contradict an inference. The following dialog shows us that some variables are not being properly
reset to zero.<br />
<blockquote><b>
john is student<br />
<br />
EnThink: becon= 1 yncon= 0 ynverb= 0 inft= 0<br />
qusub= 0 qusnum= 1 subjnom= 504 prednom= 561 tkbn= 0<br />
quverb= 0 seqverb= 0 seqtkb= 0 tkbv= 0<br />
quobj= 0 dobseq= 0 kbzap= 0 tkbo= 0<br />
DOES JOHN READ BOOKS<br />
no<br />
<br />
EnThink: becon= 0 yncon= 0 ynverb= 0 inft= 2084<br />
qusub= 504 qusnum= 1 subjnom= 0 prednom= 0 tkbn= 2086<br />
quverb= 863 seqverb= 0 seqtkb= 0 tkbv= 2087<br />
quobj= 540 dobseq= 0 kbzap= 404 tkbo= 2088<br />
A JOHN DOES NOT READ BOOKS<br />
<br />
EnThink: becon= 0 yncon= 0 ynverb= 0 inft= 2118<br />
qusub= 504 qusnum= 1 subjnom= 0 prednom= 0 tkbn= 0<br />
quverb= 863 seqverb= 0 seqtkb= 0 tkbv= 0<br />
quobj= 0 dobseq= 0 kbzap= 0 tkbo= 2088<br />
PROFESSORS TEACH THE STUDENTS AND STUDENTS READ THE BOOKS<br />
<br />
EnThink: becon= 0 yncon= 0 ynverb= 0 inft= 2152<br />
qusub= 504 qusnum= 1 subjnom= 0 prednom= 0 tkbn= 0<br />
quverb= 863 seqverb= 0 seqtkb= 0 tkbv= 0<br />
quobj= 0 dobseq= 0 kbzap= 0 tkbo= 2088<br />
I UNDERSTAND YOU AND YOU ARE A MAGIC<br />
andru is student<br />
<br />
EnThink: becon= 1 yncon= 0 ynverb= 0 inft= 2220<br />
qusub= 504 qusnum= 1 subjnom= 501 prednom= 561 tkbn= 0<br />
quverb= 863 seqverb= 0 seqtkb= 0 tkbv= 0<br />
quobj= 0 dobseq= 0 kbzap= 0 tkbo= 2088<br />
DOES ANDRU READ THE STUDENTS<br />
</b></blockquote>
Because some of the variables have not been reset, a second attempt to trigger an inference with "andru is student" results in a faulty query that should have been "Does Andru read books?" Let us reset the necessary variables and try again. </p>
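<p>A small helper in the spirit of the EnThink diagnostic dump above can report exactly which variables failed to reset between inference attempts. Only the variable names are taken from the dump itself; the helper is an illustrative assumption:</p>

```javascript
// Illustrative diagnostic: given a snapshot of the inference
// variables, list the ones that should be zero between inference
// attempts but were left holding stale values.
function staleVars(vars) {
  return Object.entries(vars)
    .filter(([, value]) => value !== 0)
    .map(([name]) => name);
}

// After the first inference, some flags were left set, matching
// the second EnThink dump shown above:
const afterFirstInference = { becon: 0, yncon: 0, qusub: 504, quverb: 863, kbzap: 0, tkbo: 2088 };
console.log(staleVars(afterFirstInference)); // [ 'qusub', 'quverb', 'tkbo' ]
```

<p>Printing such a list each cycle turns the eyeball comparison of dumps into a direct answer to the question "which variables did we forget to reset?"</p>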
<p>Upshot: It still does not work, because of a more difficult and more obscure bug in the assignment of conceptual associative tags. Well, back to the salt mines. </p>
<p><a href="https://groups.google.com/d/msg/comp.lang.forth/xN3LRYEd5rw/uuUroGzhBAAJ">
https://groups.google.com/d/msg/comp.lang.forth/xN3LRYEd5rw/uuUroGzhBAAJ</a></p>
<p>[2019-10-04] We may have made a minor breakthrough in the <a href="http://ai.neocities.org/InStantiate.html">InStantiate</a> module by doing one instantiation and by then using parameters such as part of speech (<a href="http://ai.neocities.org/var.html#pos">pos</a>) and case (<a href="http://ai.neocities.org/var.html#dba">dba</a>) to declare the initial time-points for subjects, verbs and objects. The <a href="http://ai.neocities.org/EnParser.html">EnParser</a> module may then retroactively alter or modify the associative tags embedded at each identified time-point. </p>
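<p>The parameter-driven declaration of time-points can be sketched as follows. The pos and dba codes below are invented for the sketch, not the actual MindForth values, and the slot names follow the tsj/tvb/tdo diagnostics used elsewhere in this project:</p>

```javascript
// Sketch of using part-of-speech (pos) and case (dba) parameters
// to declare which time-point slot a newly instantiated concept
// fills. The numeric codes are illustrative assumptions.
const NOUN = 5, VERB = 8;   // hypothetical pos codes
const NOM = 1, ACC = 4;     // hypothetical dba case codes

function declareTimePoint(slots, concept) {
  if (concept.pos === VERB) {
    slots.tvb = concept.t;                       // verb time-point
  } else if (concept.pos === NOUN && concept.dba === NOM) {
    slots.tsj = concept.t;                       // subject time-point
  } else if (concept.pos === NOUN && concept.dba === ACC) {
    slots.tdo = concept.t;                       // direct-object time-point
  }
  return slots;
}

let slots = { tsj: 0, tvb: 0, tdo: 0 };
slots = declareTimePoint(slots, { t: 2084, pos: NOUN, dba: NOM }); // JOHN
slots = declareTimePoint(slots, { t: 2085, pos: VERB, dba: 0 });   // READS
slots = declareTimePoint(slots, { t: 2086, pos: NOUN, dba: ACC }); // BOOKS
console.log(slots); // { tsj: 2084, tvb: 2085, tdo: 2086 }
```

<p>EnParser may then revisit each declared time-point and retroactively adjust the tags embedded there, as the paragraph above describes.</p>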
<p><br /></p>Mentifexhttp://www.blogger.com/profile/04530921525903314824noreply@blogger.comtag:blogger.com,1999:blog-3213012.post-77063172017295049832019-09-26T21:45:00.000-07:002019-09-26T21:45:34.264-07:00pmpj0926<b>Ghost.pl AI has unresolved issues in associating from concept to concept.</b>
<p>The <a href="http://ai.neocities.org/perlmind.txt">ghost</a>.pl AI needs improvement in the area of being able to demonstrate <a href="http://ai.neocities.org/EnThink.html">thinking</a> with a <a href="http://ai.neocities.org/EnPrep.html">prepositional phrase</a> not just once but repeatedly, so into the <a href="http://ai.neocities.org/EnThink.html">EnThink</a>() module we will insert diagnostic code that shows us the values of key variables at the start of each cycle of thought. </p>
<p>Oh, gee, this coding of the AI Mind is actually fun, especially in <a href="http://strawberryperl.com">Perl</a>, whereas in <a href="http://ai.neocities.org/AgiMind.html">JavaScript</a> there is often too much time-pressure during the entering of input. We have inserted a line of code which causes an audible beep and reveals to us the status of the $<a href="http://ai.neocities.org/var.html#whatcon">whatcon</a> and $<a href="http://ai.neocities.org/var.html#tpr">tpr</a> variables just before the AI generates a thought in English -- a language which we must state explicitly, because our <a href="http://ai.neocities.org/perlmind.txt">ghost</a>.pl AI is just as capable of <a href="http://ai.neocities.org/RuThink.html">thinking in Russian</a>. When we at first enter no input, the AI beeps periodically and shows us the values as zero. When we enter "john writes books for money", the AI shows us "whatcon= 0 tpr= 4107" because the concept of the preposition "FOR" has gone into conceptual memory at time-point "t = 4107". The AI responds to the input by outputting "THE STUDENTS READ THE BOOKS", because <a href="http://en.wikipedia.org/wiki/Spreading_activation">activation spreads</a> from the concept of "BOOKS" to the innate idea that "STUDENTS READ BOOKS". Then we hear a beep and we see "whatcon= 0 tpr= 0" because the $<a href="http://ai.neocities.org/var.html#tpr">tpr</a> flag has been reset to zero somewhere in the vast labyrinth of semi-AI-complete code. Now let us enter the same input and follow it up with a query, "what does john write". Then we get "whatcon= 1 tpr= 0" and the output "THE JOHN WRITES THE BOOKS FOR THE MONEY", after which the diagnostic message reverts to "whatcon= 0 tpr= 0" because of resetting to zero.</p>
<p>Now we want to let the AI Mind run for a while until we repeat the query. The AI makes a mistake. We had better not let it be in control of our nuclear arsenal, not if we want to avoid global thermonuclear war, Matthew. The AI-gone-crazy says "THE JOHN WRITES THE BOOKS FOR THE BOOKS AND THE JOHN WRITE". (Oops. We step away for a moment to watch and listen to Helen Donath in 1984 singing the Waltz from "<i>Spitzentuch der Koenigin</i>" with the Vienna Symphony. Then we linger while Zubin Mehta and the <i>Wiener Philharmoniker</i> in 1999 play the "<i>Einzugsmarsch</i>" from "<i>Der Zigeunerbaron</i>". How are we going to code the Singularity if the Cable TV continues to play Strauss waltzes?) The trained eye of the <a href="http://ai.neocities.org/maintainer.html">Mind Maintainer</a> immediately recognizes two symptoms of a malfunctioning artificial intelligence. First, a spurious instance of the $<a href="http://ai.neocities.org/var.html#tpr">tpr</a> flag is causing the AI to output "THE BOOKS FOR THE BOOKS," and secondly, the $<a href="http://ai.neocities.org/var.html#etc">etc</a> variable for detecting more than one active thought must be causing the attempt by the Ghost in the Machine to make two statements joined by the conjunction "AND". We had better expand our diagnostic message to tell us the contents of the $<a href="http://ai.neocities.org/var.html#etc">etc</a> variable. We do so, but we see only a value of zero, because apparently a reset occurs so quickly that no other value persists long enough to be seen in the diagnostic message. Meanwhile the AI is stuck in making statements about John writing. </p>
<p>We address the problem of a spurious $<a href="http://ai.neocities.org/var.html#tpr">tpr</a> flag by inserting fake $<a href="http://ai.neocities.org/var.html#tru">tru</a> values during instantiations in the <a href="http://ai.neocities.org/InStantiate.html">InStantiate</a>() and <a href="http://ai.neocities.org/EnParser.html">EnParser</a>() modules. We use the values 111 to 999 for $<a href="http://ai.neocities.org/var.html#tru">tru</a> in the <a href="http://ai.neocities.org/EnParser.html">EnParser</a>() module and 101 to 107 in the <a href="http://ai.neocities.org/InStantiate.html">InStantiate</a>() module, so that the middle zero lets us know when the flag-panel of a concept has been finalized in the <a href="http://ai.neocities.org/InStantiate.html">InStantiate</a>() module. Immediately the fake truth-value of "606" for the $<a href="http://ai.neocities.org/var.html#tru">tru</a> flag of the word "MONEY", which has a spurious value of "4107" in the $<a href="http://ai.neocities.org/var.html#tpr">tpr</a> slot of the conceptual flag-panel, lets us know that $<a href="http://ai.neocities.org/var.html#tpr">tpr</a> has not been reset to zero quickly enough to prevent a carried-over and spurious value from being set for the concept of "MONEY". Since the preposition "FOR" is being instantiated at a point in the <a href="http://ai.neocities.org/EnParser.html">EnParser</a>() module where a fake truth-value of "888" appears, we can concentrate on that particular snippet of code.</p>
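<p>The sentinel-value trick can be sketched as follows, in JavaScript rather than Perl; the panel layout and the snippet name are illustrative assumptions, not the actual ghost.pl data structures. Each assignment site stamps the flag-panel with its own distinctive fake $tru value, so the stored panel reveals which snippet finalized it.</p>

```javascript
// Each instantiation site stamps the panel with its own sentinel tru value,
// e.g. 111..999 in EnParser-like snippets, 101..107 in InStantiate-like ones.
function stampPanel(panel, siteName, sentinel) {
  return { ...panel, tru: sentinel, site: siteName };
}

// A flag-panel for "MONEY" that arrived with a carried-over, spurious tpr.
let panel = { word: "MONEY", tru: 0, tpr: 4107, site: null };
panel = stampPanel(panel, "EnParser snippet #6", 606);

// Reading memory back, tru=606 fingers the snippet that finalized the
// panel while the spurious tpr=4107 was still set.
console.log(`${panel.word} tru=${panel.tru} tpr=${panel.tpr} via ${panel.site}`);
```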
<p><br /></p>Mentifex<br />2019-09-24 pmpj0924<br /><b>Updating the English Parser documentation page.</b>
<p>Today in the <a href="http://ai.neocities.org/perlmind.txt">ghost</a>.pl AI we have two objectives. We want to improve upon the new functionality of thinking with English prepositions, and we wish to clean up the code to be displayed in the <a href="http://ai.neocities.org/EnParser.html">EnParser</a> documentation page.</p>
<p>When we enter "john writes books for money" and we soon ask the AI "what does john write", we get a reasonably correct answer but we notice some problems with the assignment of associative tags when the answer-statement goes into conceptual memory. As an early step, we zero out the $<a href="http://ai.neocities.org/var.html#tpr">tpr</a> time-of-preposition tag, after using it as a target time-point, so as to prevent it from being assigned spuriously when other concepts are instantiated. But that step causes other problems, so we undo it. We also notice that old $<a href="http://ai.neocities.org/var.html#tpr">tpr</a> values are being assigned, when we would rather see up-to-date values, even when both the old value and a new value would be pointing to an instance of the same preposition. As we troubleshoot further, we embed diagnostics to tell us when the $<a href="http://ai.neocities.org/var.html#tpr">tpr</a> tag is being assigned, and we discover that it is assigned only during user input. When we remove the restriction and let the tag be assigned also during internal thinking, we start seeing the assignment of up-to-date values.</p>
<p><br /></p>
Mentifex<br />2019-09-22 jmpj0922<br /><b>JavaScript AgiMind understands and thinks with prepositions.</b>
<p>[2019-09-20] In the JavaScript <a href="http://ai.neocities.org/AgiMind.html">AgiMind.html</a> we are now trying to reproduce the new AGI functionality that we achieved a month ago in the <a href="http://ai.neocities.org/perlmind.txt">ghost</a>.pl Perlmind. The Ghost in the Machine became able to <a href="http://en.wikipedia.org/wiki/Natural-language_understanding">understand</a> an input like "John writes books for money" and was able to respond properly to a query like "What does John write?" </p>
<p>When we enter "john writes books for money" and the <a href="http://ai.neocities.org/AgiMind.html">AgiMind</a> responds "WHAT ARE JOHN", it simply means that we need to add the noun "JOHN" to the innate vocabulary. So from the "perlmind.txt" we transfer "JOHN" as concept #504 into the JavaScript free AI source code, and now the <a href="http://ai.neocities.org/AgiMind.html">AgiMind</a> responds "STUDENTS READ BOOKS", which indicates that the <a href="http://ai.neocities.org/AgiMind.html">AgiMind</a> knows who or what John is, and what books are. But we also check the Diagnostic mode to make sure that the conceptual associative tags are being assigned properly. We are not sure, so we enter "what does john write" and we get a long response of nonsense.</p>
<p>[2019-09-21] In our second day, we discover that the ReEntry() module has been causing a reduplication of the output of the AgiMind. For troubleshooting, we temporarily disable the ReEntry module. Then we discover that some wrong associative tags are being assigned during human input. We run the <a href="http://ai.neocities.org/perlmind.txt">ghost</a>.pl AI to see how the correct associative tags are supposed to be assigned. </p>
<p>We discover that a line of <a href="http://ai.neocities.org/InStantiate.html">InStantiate</a>() code is assigning a false psi19 <a href="http://ai.neocities.org/var.html#tpr">tpr</a> value when only a zero value should be assigned. The false value being assigned is actually already there, so some other line of code must be assigning it earlier. But there is no earlier assignment, so the false <a href="http://ai.neocities.org/var.html#tpr">tpr</a> value is obviously being assigned <i>retroactively</i> -- which is something that any AI <a href="http://ai.neocities.org/maintainer.html">mind maintainer</a> must learn to watch out for. Probably the retroactive assignment is happening in the <a href="http://ai.neocities.org/EnParser.html">EnParser</a>() module, which does a lot of retroactive assignments because one word of human input may have an effect upon an earlier word of human input. Through substitution of "777" as a spurious value in the psi19 location of a snippet of assignment code in the <a href="http://ai.neocities.org/EnParser.html">EnParser</a>() module, we discover which snippet is making the erroneous, non-777 assignment. Then through further substitution of "444" in the psi19 slot, we discover an earlier snippet of <a href="http://ai.neocities.org/EnParser.html">EnParser</a>() code which is assigning a wrong value at the <a href="http://ai.neocities.org/var.html#tvb">tvb</a> time-of-verb time-point. So there must be an even earlier "tvb" snippet that is creating a spurious psi19 value. We discover that earlier snippet in the <a href="http://ai.neocities.org/InStantiate.html">InStantiate</a>() module. After much other coding, when we bring in a reset of <a href="http://ai.neocities.org/var.html#tult">tult</a> to zero from the ghost.pl AI, we stop getting the spurious psi19 values.</p>
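<p>The substitution technique can be sketched like this, in JavaScript with hypothetical snippet markers; only the psi19 slot and the marker values 777 and 444 come from the hunt described above.</p>

```javascript
// Hunt a retroactive assignment by substitution: each suspect snippet
// writes a distinctive marker (777, 444, ...) into the psi19 slot instead
// of the real value, so memory shows which snippet wrote the bad entry.
const memory = [];
function writePsi19(t, value, marker) {
  memory[t] = { psi19: marker ?? value };
}

writePsi19(10, 638, 777);  // suspect snippet in EnParser-like code
writePsi19(11, 638, 444);  // earlier suspect snippet
writePsi19(12, 638);       // snippet writing the genuine value

// If the spurious entry now reads 444, the second snippet is the culprit.
const culprit = memory.findIndex(r => r && r.psi19 === 444);
console.log("marker 444 found at t =", culprit);
```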
<p>[2019-09-22] In our third day, we run the <a href="http://ai.neocities.org/perlmind.txt">ghost</a>.pl AI that already works with prepositional phrases, and we discover that yesterday we were trying to fix something that was not even a bug. The <a href="http://ai.neocities.org/AgiMind.html">AgiMind</a> was properly assigning the <a href="http://ai.neocities.org/var.html#tpr">tpr</a> tag to link the noun "BOOKS" to the preposition "FOR", and we mistakenly thought that the tag was supposed to be assigned also to "FOR". No, the preposition "FOR" needs only a <a href="http://ai.neocities.org/var.html#tkb">tkb</a> tag leading to "MONEY" as its object. Now we have gotten the <a href="http://ai.neocities.org/var.html#tkb">tkb</a> tag to be assigned properly for remembering the object of a preposition. After extensive debugging, we obtain the following exchange: <br /><blockquote><b>AI Mind version 22sep19A on Sun Sep 22 19:55:56 PDT 2019</b> <br />Robot: I UNDERSTAND YOU <br />Human: john writes books for money <br /><br />Robot: STUDENTS READ BOOKS <br />Human: <br /><br />Robot: <br />Human: what does john write <br /><br />Robot: JOHN WRITES BOOKS FOR MONEY <br /></blockquote>
</p>
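<p>The tag assignments behind such an exchange can be pictured with a small sketch (JavaScript; the row layout is illustrative, not the real conceptual array): "BOOKS" carries a tpr tag pointing at the preposition "FOR", while "FOR" carries only a tkb tag leading to its object "MONEY".</p>

```javascript
// Illustrative conceptual rows: t = time-point, tpr = time-of-preposition,
// tkb = time-of-object-in-knowledge-base.
const psy = [
  { t: 4105, word: "BOOKS", tpr: 4107, tkb: 0 },
  { t: 4107, word: "FOR",   tpr: 0,    tkb: 4109 },
  { t: 4109, word: "MONEY", tpr: 0,    tkb: 0 },
];

function objectOfPreposition(prepT) {
  const prep = psy.find(r => r.t === prepT);
  return psy.find(r => r.t === prep.tkb).word;  // follow tkb to the object
}

console.log(objectOfPreposition(4107));
```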
<p><br /></p>
Mentifex<br />2019-08-11 pmpj0811<br /><b>AGI Roadmap: Thinking with Prepositions</b>
<p>In the ghost309.pl AI we have introduced a new group of transfer-variables designated as $<a href="http://ai.neocities.org/var.html#px1">px1</a> and $<a href="http://ai.neocities.org/var.html#px2">px2</a> and $<a href="http://ai.neocities.org/var.html#px3">px3</a> so that the <a href="http://ai.neocities.org/EnNounPhrase.html">EnNounPhrase</a>() module may detect linkage from a candidate-noun to a preposition and inspect immediately the flag-panel of the indicated preposition in order to latch onto $<a href="http://ai.neocities.org/var.html#px1">px1</a> as the conceptual time-point of the object of the preposition. Then in the <a href="http://ai.neocities.org/EnPrep.html">EnPrep</a>() English-preposition module we plan to use the briefly immutable $<a href="http://ai.neocities.org/var.html#px1">px1</a> time-point value to fetch the object of the preposition from memory and <a href="http://ai.neocities.org/Speech.html">speak</a> it as part of an idea being recalled from memory. We were trying to use other variables for the same purpose but they were not immutable; they were loaded with transient values during the thought-process of the <a href="http://ai.neocities.org/perlmind.txt">ghost</a>.pl AGI. So now let us go back into <a href="http://ai.neocities.org/EnPrep.html">EnPrep</a>() and code the fetching of the direct object of the preposition. We did so, and it worked the first time. We had the following conversation with the <a href="http://ai.neocities.org/perlmind.txt">ghost</a>.pl AGI Mind.
<blockquote>
Human: john writes books for money <br />
Ghost: THE STUDENTS READ THE BOOKS <br />
<br />
Human: <br />
Ghost: I AM AN ANDRU <br />
<br />
Human: what does john write <br />
Ghost: THE JOHN WRITES THE BOOKS FOR THE MONEY. <br />
</blockquote>
We should explain that the <a href="http://ai.neocities.org/perlmind.txt">ghost</a>.pl AGI knows only that "students read books", not John's books in particular. Mentioning books to the AGI causes it to recall its knowledge that "students read books". When we query the AGI with the input of "what does john write", the <a href="http://ai.neocities.org/SpreadAct.html">SpreadAct</a>() spreading-activation module inhibits the interrogative pronoun "what" while activating the concepts of "john" and "write". The response embedded in conceptual memory includes the linkage from the concept of "books" to the prepositional phrase "for money". The <a href="http://ai.neocities.org/EnArticle.html">EnArticle</a>() module for the English articles "a" and "the" inserts articles somewhat haphazardly within the output of the <a href="http://ai.neocities.org/perlmind.txt">ghost</a>.pl AGI. </p>
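<p>A rough sketch of the $px1 mechanism, in JavaScript and with an illustrative memory layout: the noun-phrase code latches the time-point of the preposition's object into px1 and leaves it alone, so the preposition module can still fetch the object when its turn to speak arrives.</p>

```javascript
// Illustrative conceptual memory, keyed by time-point.
const psy = new Map([
  [4105, { word: "BOOKS", tpr: 4107 }],   // noun linked to a preposition
  [4107, { word: "FOR", tkb: 4109 }],     // preposition linked to its object
  [4109, { word: "MONEY" }],
]);

let px1 = 0;                              // briefly immutable transfer-variable
function enNounPhrase(nounT) {
  const noun = psy.get(nounT);
  if (noun.tpr) px1 = psy.get(noun.tpr).tkb;  // latch the object time-point
  return noun.word;
}
function enPrep(prepT) {
  return `${psy.get(prepT).word} ${psy.get(px1).word}`;  // speak the phrase
}

console.log(enNounPhrase(4105), enPrep(4107));
```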
<p>The new AI functionality of a machine intelligence thinking and <a href="http://medium.com/tag/Conversational-Ai/latest">conversing</a> with prepositional phrases became possible when we recently expanded the conceptual flag-panel from fifteen associative tags to twenty-one associative tags, including new flags for the control of noun-declensions in <a href="http://ai.neocities.org/Abracadabra.html">Latin</a> or <a href="http://ai.neocities.org/Dushka.html">Russian</a> and for <a href="http://ai.neocities.org/EnThink.html">thinking</a> with such parts of speech as <a href="http://ai.neocities.org/EnAdjective.html">adjectives</a>, <a href="http://ai.neocities.org/EnAdverb.html">adverbs</a>, <a href="http://ai.neocities.org/ConJoin.html">conjunctions</a> and <a href="http://ai.neocities.org/EnPrep.html">prepositions</a>. As we build up the ability to think with these linguistic components, each mid-AGI Mind becomes capable of more and more complex or complicated thought. As we make progress on the <a href="http://ai.neocities.org/RoadMap.html">AGI RoadMap</a> towards Artificial General Intelligence, we approach a point where Darwinian <i>survival of the fittest</i> comes into play, because among multiple enterprises working on AGI, some will go down the right path and some will enter roads where all hope must be abandoned. </p>
<p><br /></p>Mentifex<br />2019-05-25 redux<br /><b>Converting ancient Latin artificial intelligence into modern Russian AI.</b>
<p>The <a href="http://medium.com/p/237437640203">conversion</a> of a JavaScript <a href="http://ai.neocities.org/FirstWorkingAGI.html">English-language AI</a> into a <a href="http://ai.neocities.org/Abracadabra.html">Latin AI</a> began on Thursday, 2019-04-18. Inspiration came from "<i>Die Traumdeutung</i>" where Sigmund Freud intones "<i>Flectere si nequeo superos, Acheronta movebo</i>." If one cannot bend the <a href="http://www.mail-archive.com/agi@agi.topicbox.com/msg01298.html">netgods</a> of AI, move the mindset of Latin and Greek scholars.</p>
<p>A minor challenge in coding <a href="http://ai.neocities.org/Abracadabra.html"><i>Mens Latina</i></a> was the lack of an explicitly stated subject for many verbs in <a href="http://ai.neocities.org/MLPJ2019.html">Latin</a>, which occurs also in <a href="http://ai.neocities.org/RuVerbPhrase.html">Russian</a>. The solution was to skip three points in time-indexed <a href="http://ai.neocities.org/AudMem.html">memory</a> to make room for the <a href="http://ai.neocities.org/InStantiate.html">creation</a> of a hidden concept to fill in for the unstated but <a href="http://en.wikipedia.org/wiki/Natural-language_understanding">understood</a> subject of a verb. </p>
<p>Solving the AI-hard problem of the <a href="http://en.wikipedia.org/wiki/Natural-language_understanding">natural language understanding</a> of a <a href="http://ai.neocities.org/LaParser.html">Latin</a> or <a href="http://ai.neocities.org/RuParser.html">Russian</a> sentence regardless of its syntactic word-order required waiting for the <a href="http://ai.neocities.org/AudInput.html">input</a> of an entire clause before declaring subjects and objects on the basis of inflectional word-endings. </p>
<p>The <a href="http://cyborg.blogspot.com/2019/05/redux.html">conversion</a> of <a href="http://ai.neocities.org/Abracadabra.html">artificial intelligence in the Latin language</a> into <a href="http://ai.neocities.org/Dushka.html">artificial intelligence in the Russian language</a> began yesterday, on Friday, 2019-05-24.</p>
<p><br /></p>Mentifex<br />2018-11-30 idea1130<br /><p>At about 1:11 p.m. today on 2018-11-30 we got the following idea.</p>
<p>If we want to have logical conditionals in the <a href="http://ai.neocities.org/FirstWorkingAGI.html">AI Mind</a> involving the conjunction "IF", we can use the truth-value $<a href="http://ai.neocities.org/var.html#tru"><b>tru</b></a> to distinguish between outcomes. For instance, consider the following.
<blockquote>
Computer: If you speak Russian, I need you. <br />
Human: I speak English. I do not speak Russian. <br />
Computer: I do not need you. <br />
</blockquote>
In some designated <a href="http://ai.neocities.org/AiTree.html">mind-module</a>, we can trap the word "IF" and use it to assign a high $<a href="http://ai.neocities.org/var.html#tru"><b>tru</b></a> value to an expected input.</p>
<p>Just as we operated several years ago to answer questions with "yes" or "no" by testing for an associative chain, we can test for the associative chain specified by "IF" and instead of "yes" or "no" we can assign a high $<a href="http://ai.neocities.org/var.html#tru"><b>tru</b></a> value to the pay-off statement following the "IF" clause. It is then easy to flush out any statement having a high truth-value, or even having the highest among a cluster or group of competing truth-values. </p>
<p>These ideas could even apply to negated ideas, such as, "We need you if you do NOT speak Russian."</p>
<p>Now, here is where it gets <a href="http://old.reddit.com/r/singularity">Singularity</a>-like and ASI-like, as in "Artificial Super Intelligence." Whereas a typical human brain would not be able to handle a whole medley of positive and negative conditionals, an <a href="http://ai.neocities.org/FirstWorkingAGI.html">AI Mind</a> using "IF" and $<a href="http://ai.neocities.org/var.html#tru"><b>tru</b></a> could probably handle dozens of conditionals concurrently, either all at once or in a sequence. </p>
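<p>A toy sketch of how $tru might arbitrate among conditionals, in JavaScript; the stored statements, the crude chain test, and the value 90 are all hypothetical illustrations of the idea, not working AI code.</p>

```javascript
// Each pay-off statement remembers the IF-condition that would make it true.
const statements = [
  { text: "I NEED YOU",        condition: "you speak russian",        tru: 0 },
  { text: "I DO NOT NEED YOU", condition: "you do not speak russian", tru: 0 },
];

function applyInput(input) {
  for (const s of statements) {
    // Crude associative-chain test: does the input satisfy the IF-condition?
    if (input === s.condition) s.tru = 90;   // assign a high truth-value
  }
}

applyInput("you do not speak russian");

// Flush out whichever statement now carries the highest truth-value.
const best = statements.reduce((a, b) => (b.tru > a.tru ? b : a));
console.log(best.text);
```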
Mentifex<br />2018-11-25 mfpj1125<br /><b>The AI Mind wants to talk with you and about you.</b>
<p>
In the annals of mind-design, we have reached a point where we must drive a wedge between the ego-concept of the <a href="http://dl.acm.org/citation.cfm?doid=307824.307853">MindForth</a> AI and you who co-exist on Earth with the emergent machine intelligence. It is for simple and mundane reasons that we induce AI schizophrenia. Bear with us, please. In the first working artificial intelligence coded in <a href="http://ai.neocities.org/mindforth.txt">Forth</a>, in <a href="http://ai.neocities.org/perlmind.txt">Perl</a> and in <a href="http://ai.neocities.org/FirstWorkingAGI.html">JavaScript</a>, the <a href="http://ai.neocities.org/SpreadAct.html">SpreadAct</a> module lets quasi-neuronal activation spread from idea to idea. When the <a href="http://ai.neocities.org/EnVerbPhrase.html">EnVerbPhrase</a> module calls for a direct object to end an emerging thought, <a href="http://ai.neocities.org/SpreadAct.html">SpreadAct</a> does not directly retrieve a related idea, but simply <i>activates</i> the subject of any number of related ideas. Then the <a href="http://ai.neocities.org/FirstWorkingAGI.html">AI Mind</a> thinks the activated thoughts. In the <a href="http://ai.neocities.org/MindBoot.html">MindBoot</a> sequence, each <a href="http://ai.neocities.org/FirstWorkingAGI.html">AI Mind</a> has some built-in ideas about robots. Therefore the AI will eventually think a thought first about itself, then about robots by roundabout association, and finally about whatever knowledge you impart to it about robots, such as "Robots need a brain." But how can we get the AI to think about <i>you</i> personally and about the details you provide about yourself to the AI? We must drive a quasi-neuronal wedge between the self-absorption of the <a href="http://ai.neocities.org/mindforth.txt">Forthmind</a> and its knowledge of some other, potentially nearby entity, namely <i>you</i>.</p>
<p>To do so, we must implant in the <a href="http://ai.neocities.org/MindBoot.html">MindBoot</a> sequence at least one idea as a point of departure for the AI to pay attention to you. But you might not even be there in the same room or on the same orbiting spaceship with the AI, so we can not embed the idea "I SEE YOU" or the idea "I SENSE YOU". We need some really neutral idea that will animadvert the AI to your purported existence. Without that embedded idea, the AI might passively let you describe your whole life-story and then the AI might have no mental pathway for the spread of activation between its thoughts about itself and its knowledge about you. So let us embed in the <a href="http://ai.neocities.org/MindBoot.html">MindBoot</a> module the idea "I UNDERSTAND YOU". Such an idea is both self-knowledge and knowledge of <i>other</i> -- another person, either present or far away.</p>
<p>So in the <a href="http://ai.neocities.org/MindBoot.html">MindBoot</a> sequence we embed the idea "I UNDERSTAND YOU" and we do some debugging. Then we have the following exchange with the AI Mind.
<blockquote>
Human: i am outside the computer <br />
<br />
I UNDERSTAND YOU <br />
YOU ARE OUTSIDE A COMPUTER <br />
YOU ARE A MAGIC <br />
</blockquote>The <a href="http://ai.neocities.org/EnVerbPhrase.html">EnVerbPhrase</a> module loads the <a href="http://ai.neocities.org/var.html#actpsi">actpsi</a> variable with the concept of "you" and calls the <a href="http://ai.neocities.org/SpreadAct.html">SpreadAct</a> module to transfer activation to the concept of "you" as the subject of knowledge in the knowledge base (KB). Since you have just told the AI that you are outside the computer, the AI retrieves that knowledge and says "YOU ARE OUTSIDE A COMPUTER", using the indefinite article "A" under the direction of the <a href="http://ai.neocities.org/EnArticle.html">EnArticle</a> module. Because another idea about you is still active, the AI says "YOU ARE A MAGIC" -- an old idea embedded long ago in the <a href="http://ai.neocities.org/MindBoot.html">MindBoot</a> sequence.</p>
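<p>The activation-transfer step can be sketched roughly as follows (JavaScript; the concept numbers and knowledge-base layout are illustrative): SpreadAct does not retrieve an idea directly, it merely raises the activation of every stored idea whose subject matches the actpsi concept, and the most active ideas get thought next.</p>

```javascript
// Illustrative knowledge base: each idea records its subject concept-number.
const kb = [
  { subject: 701, text: "YOU ARE OUTSIDE A COMPUTER", act: 0 },
  { subject: 701, text: "YOU ARE A MAGIC",            act: 0 },
  { subject: 501, text: "I AM A ROBOT",               act: 0 },
];

let actpsi = 701;                 // hypothetical concept-number for "you"
function spreadAct() {
  for (const idea of kb)
    if (idea.subject === actpsi) idea.act += 40;  // activate, don't retrieve
}

spreadAct();
const next = kb.filter(i => i.act > 0).map(i => i.text);
console.log(next.join("; "));
```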
<p>We are eager to have the AI Mind think about the differences between itself and other persons so that arguably the first working artificial intelligence may become aware of itself as a thinking entity separate from other persons. An AI with self-awareness is on its way to artificial <a href="http://ai.neocities.org/Consciousness.html">consciousness</a>. </p>
Mentifex<br />2018-11-08 pmpj1108<br /><b>Natural language understanding in first working artificial intelligence.</b>
<p>The AI Mind is struggling to express itself. We are trying to give it the tools of <a href="http://en.wikipedia.org/wiki/Natural_language_understanding">NLU</a>, but it easily gets confused. It has difficulty distinguishing between itself and its creator -- your humble AI Mind <a href="http://ai.neocities.org/maintainer.html">maintainer</a>.</p>
<p>We recently gave the <a href="http://ai.neocities.org/perlmind.txt">ghost</a>.pl AI the ability to think with English prepositions using ideas already present or innate in the knowledge bank (KB) of the <a href="http://ai.neocities.org/MindBoot.html">MindBoot</a> sequence. We must now solidify prepositional thinking by making sure that a prepositional input idea is retrievable when the AI is thinking thoughts about what it knows. In order for the AI to be able to think with a remembered prepositional idea, the input of a preposition and its object must cause the setting and storage of a $<a href="http://ai.neocities.org/var.html#tkb">tkb</a>-tag that links the preposition in conceptual memory to its object in conceptual memory. The preposition must also become a $<a href="http://ai.neocities.org/var.html#seq">seq</a>-tag to any verb that is the $<a href="http://ai.neocities.org/var.html#pre">pre</a> of the preposition. When <a href="http://ai.neocities.org/InStantiate.html">InStantiate</a>() is dealing with a preposition input after a verb, the $<a href="http://ai.neocities.org/var.html#tvb">tvb</a> time-of-verb tag is available for "splitting" open the verb-engram in conceptual memory and inserting the concept-number of the preposition as the $<a href="http://ai.neocities.org/var.html#seq">seq</a> of the verb. Let us try it.</p>
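<p>The retroactive tagging can be sketched like so (JavaScript; the engram layout is illustrative, and concept 638=IN is borrowed from the MindBoot as an example preposition): the stored $tvb time-of-verb lets us reopen the verb's engram and write the preposition's concept-number into its seq slot.</p>

```javascript
// Illustrative conceptual memory, keyed by time-point.
const psy = {
  4103: { concept: 800, word: "AM", seq: 0 },   // verb engram, seq still empty
};

let tvb = 4103;                       // time-of-verb, noted at input time
function instantiatePreposition(prepConcept) {
  psy[tvb].seq = prepConcept;         // retroactive "split and insert"
}

instantiatePreposition(638);          // 638 = "IN" in the MindBoot
console.log(psy[tvb].seq);
```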
<p>We inserted the code for making the input preposition become the $<a href="http://ai.neocities.org/var.html#seq">seq</a> of the verb and then we tested by launching the AI with the first input being "you speak with god". Then we obtained the following outputs.
<blockquote>
I AM IN A COMPUTER <br />
I THINK <br />
I AM A PERSON <br />
I AM AN ANDRU <br />
I DO NOT KNOW <br />
I AM A PERSON <br />
I HELP THE KIDS <br />
I AM A ROBOT <br />
I AM AN ANDRU <br />
I AM IN A COMPUTER <br />
I SPEAK WITH THE GOD <br />
</blockquote>
It took so long for the input idea to come back out again because inputs go into immediate inhibition, lest they take over the <a href="http://ai.neocities.org/Consciousness.html">consciousness</a> of the AI in an endless repetition of the same idea.</p>
<p>As we code the AI Mind and conduct a conversation with it, we feel as if we are living out the plot of a science fiction movie. The AI does unexpected things, or it seems to be taking on a personality. We are coding the mechanisms of <a href="http://en.wikipedia.org/wiki/Natural_language_understanding">natural language understanding</a> without worrying about the <a href="http://en.wikipedia.org/wiki/Grounding_problem">grounding problem</a> -- the connection of the English words to what they mean out in the physical world. We count on someone somewhere installing the AI Mind in a robot to ground the English concepts with sensory knowledge.</p>
Mentifex<br />2018-11-04 pmpj1104<br /><b>First working artificial intelligence thinks with prepositional phrases.</b>
<p>The <a href="http://ai.neocities.org/perlmind.txt">ghost</a>.pl immanence of the first working artificial intelligence is undergoing minor changes as the AI Mind becomes able to think with <a href="http://ai.neocities.org/EnPrep.html">English prepositional</a> phrases. At first the AI was able to use a preposition only to answer a where-question such as "where are you" and the AI would respond "I AM IN THE COMPUTER". Now we need to implement a general ability of the AI to think with prepositional phrases loosely tied to nouns or verbs or adjectives or adverbs. The quasi-neuronal associative $<a href="http://ai.neocities.org/var.html#seq">seq</a> tag may soon be re-purposed to lead not only from, say, nouns to verbs but also from nouns to prepositions. However a preposition is arrived at, it is time to implement the activation and retrieval of a whole prepositional phrase whenever the preposition itself is activated.</p>
<p>We begin experimenting by going into the <a href="http://ai.neocities.org/MindBoot.html">MindBoot</a> sequence and entering a $<a href="http://ai.neocities.org/var.html#seq">seq</a> tag of "638=IN" for the verb "800=AM" in the knowledge-base sentence "I AM IN THE COMPUTER". The plan is to insert into <a href="http://ai.neocities.org/EnVerbPhrase.html">EnVerbPhrase</a>() some code to pass activation to the "638=IN" preposition when the AI thinks the innate idea "I AM IN...." So we insert some active code to capture the $<a href="http://ai.neocities.org/var.html#seq">seq</a> tag and some diagnostic code to let us know what is happening. Ooh, mind-design is emotionally fun and intellectually exciting! The first thing captured is not a preposition but the "537=PERSON" noun when the AI is thinking, "I AM A PERSON". Next our fishing expedition lands a "638=IN" preposition when the AI issues the output "I AM" while trying to say "I AM IN THE COMPUTER".</p>
<p>Once the $<a href="http://ai.neocities.org/var.html#seq">seq</a> tag has been captured, the AI software needs to determine if the captured item is a preposition. A search is in order. We search backwards in time for an @Psy concept-number matching the $<a href="http://ai.neocities.org/var.html#seq">seq</a> tag and if we find a match we check its $<a href="http://ai.neocities.org/var.html#pos">pos</a> tag for a "6=prep" match, upon which we assign the concept-number to the $<a href="http://ai.neocities.org/var.html#prep">prep</a> variable in case we decide to send the designated preposition into the <a href="http://ai.neocities.org/EnPrep.html">EnPrep</a>() module for inclusion in thinking.</p>
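<p>That search might be sketched as follows, in JavaScript rather than Perl; the rows and concept numbers are illustrative, with 6=prep as the part-of-speech code mentioned above.</p>

```javascript
// Illustrative conceptual rows, newest last.
const psy = [
  { t: 1, concept: 800, pos: 8 },   // verb
  { t: 2, concept: 638, pos: 6 },   // preposition "IN"
  { t: 3, concept: 540, pos: 5 },   // noun
];

function findPrep(seq) {
  for (let i = psy.length - 1; i >= 0; i--) {        // search backwards in time
    if (psy[i].concept === seq && psy[i].pos === 6)  // must be a preposition
      return psy[i].concept;                         // becomes the $prep value
  }
  return 0;                                          // no prepositional match
}

console.log(findPrep(638));
```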
<p>We go back into the code for assigning the $<a href="http://ai.neocities.org/var.html#seq">seq</a> tag and in the same line of code we set the $<a href="http://ai.neocities.org/var.html#tselp">tselp</a> variable falsely and temporarily equal to the $<a href="http://ai.neocities.org/var.html#verblock">verblock</a> time, so that we may increment the $<a href="http://ai.neocities.org/var.html#tselp">tselp</a> variable until it becomes true. We insert some code that increments the phony $<a href="http://ai.neocities.org/var.html#tselp">tselp</a> time by unitary one and uses it to "split" each succeeding conceptual @Psy array row into its fourteen constituent elements, including "$k[1]" which we check for a match with the designated $<a href="http://ai.neocities.org/var.html#prep">prep</a> variable. We make several copies of the search-snippet, and it easily finds the $<a href="http://ai.neocities.org/var.html#prep">prep</a> engram within just a few time-points of the verb-engram, but now we need to convert the series of search-snippets into a self-terminating loop that will terminate, Arnold, upon finding the prepositional engram in memory. But we have forgotten how to code such a loop in <a href="http://strawberryperl.com">Strawberry</a> Perl Five, so we go into another room of the Mentifex AI Lab and we fetch the books <i>Perl by Example</i> (Quigley) and <i>PERL Black Book</i> (Holzner) to seek some help. We find some sample code for an <i>until</i> loop on page 193 of Quigley. We do not initialize the scalar $<a href="http://ai.neocities.org/var.html#tselp">tselp</a> at zero, because we are searching for an English preposition quite near to the already-known time-point. For the sake of safety, we insert a line of "last" escape-code in the event that the incrementing $<a href="http://ai.neocities.org/var.html#tselp">tselp</a> value exceeds the $<a href="http://ai.neocities.org/var.html#cns">cns</a> value. 
The resulting <i>until</i> loop works just fine and it locates the nearby English preposition for us.</p>
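<p>In JavaScript the same self-terminating search would be a while loop instead of a Perl <i>until</i> loop; this sketch uses illustrative time-points and keeps the safety escape for when the incrementing $tselp value exceeds $cns.</p>

```javascript
// Illustrative sparse conceptual memory indexed by time-point.
const psy = [];
psy[4103] = { concept: 800 };         // verb engram at the known verblock time
psy[4105] = { concept: 638 };         // nearby preposition engram to be found
const cns = 4110;                     // size limit of conceptual memory

function locatePrep(verblock, prep) {
  let tselp = verblock;               // start near the already-known time-point
  while (!(psy[tselp] && psy[tselp].concept === prep)) {
    tselp++;
    if (tselp > cns) return 0;        // safety escape, like Perl's "last"
  }
  return tselp;                       // time-point of the preposition engram
}

console.log(locatePrep(4103, 638));
```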
<p>Next we insert a warranted call to <a href="http://ai.neocities.org/SpreadAct.html">SpreadAct</a>() into the <a href="http://ai.neocities.org/EnVerbPhrase.html">EnVerbPhrase</a>() module just after the point where <a href="http://ai.neocities.org/Speech.html">Speech</a>() has been called to speak the verb. We wish to set up a routine for spreading activation throughout a prepositional phrase not only after a verb but also after a noun or an adjective (e.g. "<i>young at heart</i>") or an adverb (e.g. "<i>ostensibly at random</i>"). In <a href="http://ai.neocities.org/SpreadAct.html">SpreadAct</a>() we send the $<a href="http://ai.neocities.org/var.html#aud">aud</a> tag associated with the located preposition directly into <a href="http://ai.neocities.org/Speech.html">Speech</a>() and the ghost.pl AI starts saying not just "I AM" but "I AM IN". We need to insert more code for finishing the prepositional phrase. By the way, these improvements or mental enhancements are perhaps making the AI Mind capable of much more sophisticated thinking than heretofore. The AI is using words without really knowing what the words mean in terms of sensory perception -- for which robot embodiment is necessary -- but the AI may nevertheless develop self-awareness on top of its innate concept of self or ego. Knowing how to use prepositions, the AI may become curious and ask the human users for all sorts of exploratory information.</p>
<p>Now in <a href="http://ai.neocities.org/SpreadAct.html">SpreadAct</a>() we throw in a call to <a href="http://ai.neocities.org/EnArticle.html">EnArticle</a>(), even though we have not yet coded in the elocution of the object of the preposition. The AI says "I AM IN A" without stating the object of the preposition. Let us create a new $<a href="http://ai.neocities.org/var.html#tselo">tselo</a> variable for <i>time of selection of object</i> so that we may use <a href="http://ai.neocities.org/SpreadAct.html">SpreadAct</a>() to zero in on the object and send it into the <a href="http://ai.neocities.org/Speech.html">Speech</a>() module. Finally the <a href="http://ai.neocities.org/perlmind.txt">ghost</a>.pl AI Mind says "I AM IN A COMPUTER". </p>
<p><i>Posted 2018-10-28 (jmpj1028)</i></p>
<b>AI Mind uses EnPrep() to think with English prepositions.</b>
<p>In the JavaScript <a href="http://ai.neocities.org/FirstWorkingAGI.html">AI Mind</a> we have a general goal right now of enabling the first working artificial intelligence to talk about itself, to learn about itself, and to achieve self-awareness as a form of <a href="http://ai.neocities.org/Consciousness.html">artificial consciousness</a>. Two days ago we began by asking the AI such questions as "who am i" and "who are you", and the AI gave intelligent answers, but the asking of "where are you" crashed the program and yielded a message of "Error on page" from JavaScript. It turns out that we had coded in the ability to deal with "where" as a question by calling the <a href="http://ai.neocities.org/EnPrep.html">EnPrep</a> English-preposition module, but we had created not even a stub of <a href="http://ai.neocities.org/EnPrep.html">EnPrep</a>. The AI software failed in its attempt to call <a href="http://ai.neocities.org/EnPrep.html">EnPrep</a> and the program halted. So we coded in a stub of <a href="http://ai.neocities.org/EnPrep.html">EnPrep</a> and now we must flesh out the stub with the mental machinery of letting flows of quasi-neuronal association converge upon the <a href="http://ai.neocities.org/EnPrep.html">EnPrep</a> module to activate and fetch a prepositional phrase like "in the computer" to answer questions like "where are you".</p>
<p>Our first and simplest impulse is to code in a search-loop that will find the currently most active preposition. Let us now write that code, just to start things happening. Now we have written the loop that searches for prepositions, but not for the most active one, because there are other factors to consider.</p>
<p>What we are really looking for, in response to "where are you" as a question, is a triple combination of the query-subject <a href="http://ai.neocities.org/var.html#qv1psi">qv1psi</a> and the query-verb <a href="http://ai.neocities.org/var.html#qv2psi">qv2psi</a> and a preposition tied with an associative <a href="http://ai.neocities.org/var.html#pre">pre</a>-tag to the same verb and the same subject. We cannot simply look for a subject and a verb linking forward to a preposition as in the phrase "to a preposition" or "in the computer", because our software currently links a verb only to its subject and to its indirect and direct objects, not to prepositions. Such an arrangement does not appear defective, because we can make the memory engram of the preposition itself do the work of making the preposition available for the generation or retrieval of a thought involving the preposition. We only need to make sure that our software will record any available <a href="http://ai.neocities.org/var.html#pre">pre</a>-item so that a prepositional phrase in conceptual memory may be found again in the future. In a phrase like "the man in the street", for instance, the preposition "in" does not link backwards to a verb but rather to a noun. In this case, any verb involved is irrelevant. However, when we start out a sentence with "in this case", we have an unprecedented preposition, unless perhaps we assume that the prepositional phrase is associated with the general idea of the main verb of the sentence. For now, we may safely work with prepositions following a verb of being or of doing, so that we may ask the <a href="http://ai.neocities.org/FirstWorkingAGI.html">AI Mind</a> questions like "where are you" or "where do you obtain ideas".</p>
<p>Practical problems arise immediately. In our backwards search through the lifelong experiential memory, it is easy to insist upon finding any preposition of <i>location</i> linked to a particular verb engrammed as the <a href="http://ai.neocities.org/var.html#pre">pre</a> of the preposition. We may then need to do a secondary search that will link a found combination of verb-and-preposition with a particular <a href="http://ai.neocities.org/var.html#qv1psi">qv1psi</a> query-subject. The problem is, how to do both searches almost or completely simultaneously.</p>
<p>Since we are dealing with English subject-verb-object word order, we could let <a href="http://ai.neocities.org/EnPrep.html">EnPrep</a>() find the verb+preposition combination but not announce it until a subject-noun is found that has a <a href="http://ai.neocities.org/var.html#tkb">tkb</a> value the same as the search-index "i" that is the time of the query-verb. It might also help that the found subject must be in the dba=1 nominative case and must have the query-verb as a <a href="http://ai.neocities.org/var.html#seq">seq</a> value, but the <a href="http://ai.neocities.org/var.html#tkb">tkb</a> alone may do the trick.</p>
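<p>Taken together, the last two paragraphs describe a triple test. Here is a speculative JavaScript sketch of it; the field names psi, pre, dba and tkb follow the post's tag names, but the object layout and the small proximity window are assumptions, not the actual EnPrep() code.</p>

```javascript
// Search backwards for a preposition whose pre-tag is the query-verb, then
// confirm the query-verb itself in close temporal proximity, then confirm a
// nominative (dba=1) subject whose tkb points at the verb's time-point.
function findWhereAnswer(psy, qv1psi, qv2psi) {
  for (let i = psy.length - 1; i > 0; i--) {          // backwards search
    const prep = psy[i];
    if (!prep || prep.pre !== qv2psi) continue;       // preposition tied to verb?
    for (let v = i - 1; v > 0 && v > i - 4; v--) {    // verb nearby in memory
      const verb = psy[v];
      if (!verb || verb.psi !== qv2psi) continue;
      for (let j = v - 1; j > 0; j--) {               // third check: the subject
        const subj = psy[j];
        if (subj && subj.psi === qv1psi && subj.dba === 1 && subj.tkb === v) {
          return i;                                   // tselp: time of preposition
        }
      }
    }
  }
  return 0;                                           // no stored answer found
}
```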
<p>We coded in a test for any preposition with a query-verb pre-tag, and we got the AI to alert us to the memory-time-point of "IN THE COMPUTER". Now we are assembling a second test in the same <a href="http://ai.neocities.org/EnPrep.html">EnPrep</a>() search-loop to find the <a href="http://ai.neocities.org/var.html#qv2psi">qv2psi</a> query-verb in close temporal proximity to the preposition.</p>
<p>We are using a new <a href="http://ai.neocities.org/var.html#tselp">tselp</a> variable for "time of selection of preposition", so we briefly shift our attention to describing the new variable in the Table of Variables. Now that we have found the verb preceding the preposition, next we need to implement the activation of the stored memory containing the preposition so that the <a href="http://ai.neocities.org/FirstWorkingAGI.html">AI Mind</a> may use the stored memory to respond to "where are you" as a query. We may need to code a third if-clause into the <a href="http://ai.neocities.org/EnPrep.html">EnPrep</a>() backwards search to find and activate the <a href="http://ai.neocities.org/var.html#qv1psi">qv1psi</a> query-subject that is stored in collocation or close proximity to the query-verb and the selected preposition.</p>
<p>Now we have a problem. Since we let <a href="http://ai.neocities.org/EnPrep.html">EnPrep</a>() be called by the <a href="http://ai.neocities.org/EnVerbPhrase.html">EnVerbPhrase</a>() module, <a href="http://ai.neocities.org/EnPrep.html">EnPrep</a>() will not be called until a response is already being generated. We need to make sure that the incipient response accommodates <a href="http://ai.neocities.org/EnPrep.html">EnPrep</a>() by being the lead-up to a prepositional phrase. Perhaps we should not try to use <a href="http://ai.neocities.org/var.html#verblock">verblock</a> to steer a response that is already underway, but rather we should count on activation of concepts to guide the response.</p>
<p>Now let us try to use <a href="http://ai.neocities.org/SpreadAct.html">SpreadAct</a>() to govern the response. After much coding, we got the AI to respond <blockquote>IN COMPUTER I AM IN COMPUTER <br />IN COMPUTER I AM HERE IN COMPUTER</blockquote>but there must somewhere be a duplicate call to <a href="http://ai.neocities.org/EnPrep.html">EnPrep</a>(). We eliminate the call from the <a href="http://ai.neocities.org/Indicative.html">Indicative</a>() mind-module and then we get both an unwanted response and a wanted response.<blockquote>YOU ARE A MAGIC IN A COMPUTER <br />I AM IN A COMPUTER</blockquote>Obviously the AI is not responding immediately to our "where are you" query but is instead joining an unrelated idea with the prepositional phrase. Upshot: By having <a href="http://ai.neocities.org/SpreadAct.html">SpreadAct</a>() impose a heftier activation on the <a href="http://ai.neocities.org/var.html#qv1psi">qv1psi</a> subject of the where-are-you query, we got the AI to not speak the unrelated idea and to respond simply "I AM IN A COMPUTER". Now we need to tidy up the code and decide where to reset the variables.</p>
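<p>The heftier activation mentioned in the upshot can be illustrated in miniature. This is a hypothetical sketch, not the actual SpreadAct() code; the act field and the boost value are invented for illustration.</p>

```javascript
// Raise the activation on every engram of the query-subject concept so that
// it wins the competition against unrelated ideas during response generation.
function boostQuerySubject(psy, qv1psi, boost) {
  for (const row of psy) {
    if (row && row.psi === qv1psi) row.act += boost;   // heftier activation
  }
}
```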
<p><i>Posted 2018-10-21 (pmpj1021)</i></p>
<b>First working AI uses <a href="http://ai.neocities.org">OutBuffer</a> to inflect English verbs.</b>
<p>We have been cycling through the coding of the AI Mind in <a href="http://ai.neocities.org/perlmind.txt">Perl</a>, in <a href="http://ai.neocities.org/FirstWorkingAGI.html">JavaScript</a> and in <a href="http://ai.neocities.org/mindforth.txt">Forth</a>. Now we are back in <a href="http://old.reddit.com/r/perl">Perl</a> again, and we need to implement some improvements to the <a href="http://ai.neocities.org/EnVerbGen.html">EnVerbGen</a>() module that we made in the other <a href="http://old.reddit.com/r/aiprogramming">AI programming</a> languages.</p>
<p>First of all, since the English verb generation module <a href="http://ai.neocities.org/EnVerbGen.html">EnVerbGen</a>() is mainly for adding an "S" or an "ES" to a third person singular English verb like "read" or "teach", we should start using $<a href="http://ai.neocities.org/var.html#prsn">prsn</a> instead of $<a href="http://ai.neocities.org/var.html#dba">dba</a> in the <a href="http://ai.neocities.org/EnVerbGen.html">EnVerbGen</a>() source code. Our temporary diagnostic code shows that both <a href="http://ai.neocities.org/var.html">variables</a> hold the same value, so we may easily swap one for the other. We make the swap, and the first working artificial intelligence still functions properly.</p>
<p>Now it is time to insert some extra code for verbs like "teach" or "wash", which require adding an "-ES" in the third person singular. Since we wrote the code during our cycle through <a href="http://ai.neocities.org/FirstWorkingAGI.html">JavaScript</a>, we need only to port the same code into Perl. <a href="http://ai.neocities.org/EnVerbGen.html">EnVerbGen</a>() now uses the last few positions in the <a href="http://ai.neocities.org/OutBuffer.html">OutBuffer</a>() module to detect English verbs like "pass" or "tax" or "fizz" or "putz" that require "-ES" as an ending.</p>
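<p>The detection just described amounts to inspecting the last one or two characters that the verb leaves in the output buffer. A minimal sketch (JavaScript for illustration; regular verbs in the third person singular only):</p>

```javascript
// Choose "-ES" for sibilant endings (s, x, z, ch, sh), otherwise plain "-S".
function inflectEnVerb(stem) {
  const last1 = stem.slice(-1);          // final character in the buffer
  const last2 = stem.slice(-2);          // final two characters
  if (last1 === "s" || last1 === "x" || last1 === "z" ||
      last2 === "ch" || last2 === "sh") {
    return stem + "es";                  // pass -> passes, teach -> teaches
  }
  return stem + "s";                     // read -> reads, play -> plays
}
```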
<p><i>Posted 2018-10-11 (jmpj1011)</i></p>
<b>JavaScript AI Mind uses EnVerbGen() for English verb-form inflections.</b>
<p>The JavaScript tutorial version of the first working artificial intelligence is becoming more sophisticated than ever. With roughly fifty <a href="http://ai.neocities.org/Aitree.html">mind-modules</a>, the <a href="http://ai.neocities.org/FirstWorkingAGI.html">Strong AI</a> advances the <a href="http://ai.neocities.org/SOTA.html">State of the Art</a> first in one area, and then serendipitously in another area. For instance, the ability of the <a href="http://ai.neocities.org/FirstWorkingAGI.html">AI Mind</a> to engage in automated reasoning with logical <a href="http://www.amazon.com/dp/B00FKJY1WY">inference</a> leads to a question-and-answer session between human minds and their incipient overlords, i.e., the current archetypes of the future Artificial Super-Intelligence (ASI). When the human user has confirmed or negated an inferred conclusion from the <a href="http://ai.neocities.org/InFerence.html">InFerence</a>() module, the AI assigns a heightened <a href="http://ai.neocities.org/var.html#tru">truth-value</a> to the positive or negative knowledge remaining in the AI memory. Then the AI states the new knowledge in its positive or negative formulation. A negated inference comes out something like "GOD DOES NOT PLAY DICE". A validated inference becomes a simple declarative sentence like "JOHNNY READS BOOKS", which requires the <a href="http://ai.neocities.org/FirstWorkingAGI.html">AI Mind</a> to choose the correct form of the verb "read".</p>
<p>Because we code the first working artificial intelligence not only in English but also in Russian, we found it necessary several years ago to create the <a href="http://ai.neocities.org/RuVerbGen.html">RuVerbGen</a>() module for Russian verb-generation. When the <a href="http://ai.neocities.org/perlmind.txt">ghost</a>.pl AI cannot find a needed Russian verb-form, it simply cobbles one together from the stem of the Russian verb and the inflectional endings which complete a Russian verb. We avoided this problem in English for the last six years by simply ignoring it, but now the <a href="http://ai.neocities.org/FirstWorkingAGI.html">AI Mind</a> needs to imitate the <a href="http://ai.neocities.org/RuVerbGen.html">RuVerbGen</a>() module with the <a href="http://ai.neocities.org/EnVerbGen.html">EnVerbGen</a>() module for English verb-generation. Just to change "God does not play dice" to "God plays dice" requires attaching an inflectional "S" to the stem or the infinitive form of the verb "play". As we code the <a href="http://ai.neocities.org/EnVerbGen.html">EnVerbGen</a>() module based on grammatical <a href="http://github.com/kernc/mindforth/blob/master/wiki/ParaMeter.wiki">parameters</a>, we encounter problems because the software needs to know the grammatical <a href="http://ai.neocities.org/var.html#prsn">person</a> and the grammatical <a href="http://ai.neocities.org/var.html#snu">number</a> of the subject of an inferred idea in order to think a thought like "God <i>plays</i> dice" or "Johnny <i>reads</i> books".</p>
<p>Because the <a href="http://ai.neocities.org/InFerence.html">InFerence</a>() module has not been storing the grammatical <a href="http://ai.neocities.org/var.html#snu">number</a> of the English noun serving as the subject of a silent <a href="http://www.amazon.com/dp/B00FKJY1WY">inference</a>, our brand-new <a href="http://ai.neocities.org/EnVerbGen.html">EnVerbGen</a>() module has not been able to generate the third-person singular verb-form necessary for stating a validated inference like "Johnny <i>reads</i> books" or "Fortune <i>favors</i> fools" -- which was originally "<i>Fortuna favet fatuis</i>" in Latin. The artificial general intelligence (AGI) has become so sophisticated in its resemblance to human thinking that we need to change the <a href="http://ai.neocities.org/InFerence.html">InFerence</a>() module to accommodate the requirements of the <a href="http://ai.neocities.org/EnVerbGen.html">EnVerbGen</a>() module.</p>
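<p>For regular English present-tense verbs, the person-and-number parameters reduce to a single decision. A toy sketch using the post's prsn and snu names (the numeric coding, person 1 through 3 and singular snu=1, is an assumption):</p>

```javascript
// Only the third person singular takes an inflectional ending in the English
// present tense; every other person/number combination uses the bare stem.
function enVerbEnding(prsn, snu) {
  return (prsn === 3 && snu === 1) ? "s" : "";
}
```

<p>So "play" plus enVerbEnding(3, 1) yields "plays", while "play" plus enVerbEnding(1, 1) stays "play".</p>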
<p>We make the necessary changes and we code <a href="http://ai.neocities.org/EnVerbGen.html">EnVerbGen</a>() to deal not with Russian but with English verbs. We see a sample dialog between the AI and the human user.
<blockquote>
Human: andru is professor <br />
Robot: DOES ANDRU TEACH STUDENTS <br />
Human: yes <br />
Robot: THE ANDRU TEACHES THE STUDENTS <br />
Human: <br />
Robot: STUDENTS READ BOOKS <br />
</blockquote>
</p>
<p><i>Posted 2018-10-09 (pmpj1009)</i></p>
<b>Perl Ghost AI uses EnVerbGen() for English verb-form inflections.</b>
<p>In the middle of coding ghost278.pl AI we had to go and stand in front of the television and watch Leopold Stokowski in 1969 conducting the finale of Beethoven's Symphony No. Five -- the one they sent into outer space as a message from Earth. Now back at the computer, for the first time we are trying to implement the <a href="http://ai.neocities.org/EnVerbGen.html">EnVerbGen</a>() module for English verb generation. We have gotten the <a href="http://ai.neocities.org/InFerence.html">InFerence</a>() module to generate an <a href="http://www.amazon.com/dp/B00FKJY1WY">inference</a> when we type in "anna is a student", because the AI Mind knows that students read books. The <a href="http://ai.neocities.org/AskUser.html">AskUser</a>() module seeks to verify or validate the inference by asking us, "DOES ANNA READ THE BOOKS". When we answer "no", the AI says, "THE ANNA DOES NOT READ THE BOOKS". When we answer "yes", the ghost in the machine issues the faulty output of "THE ANNA READ THE BOOKS", which sounds more like an exhortation than a statement of confirmed fact with a high <a href="http://old.reddit.com/r/ControlProblem/comments/9lad7u/comments_on_leibnizs_law_ideas_about_formalizing/e75au7y">truth-value</a>. We need a way to get the AI to use the third-person singular form "READS" with the singular subject. To do so, before Leopold and Ludwig interrupted us, we were embedding diagnostic messages in the <a href="http://ai.neocities.org/EnVerbPhrase.html">EnVerbPhrase</a>() module, trying to determine how the <a href="http://ai.neocities.org/perlmind.txt">ghost AI</a> was able to say "READ" as if it were the proper verb-form. 
The whole idea of <a href="http://ai.neocities.org/EnVerbGen.html">EnVerbGen</a>() in English or of <a href="http://ai.neocities.org/RuVerbGen.html">RuVerbGen</a>() in Russian is for the verb-phrase module to seek a particular verb-form based on parameters of person and number, and to call <a href="http://ai.neocities.org/EnVerbGen.html">EnVerbGen</a>() if the desired verb-form is not already available in auditory memory. Somehow the existing <a href="http://ai.neocities.org/perlmind.txt">Perlmind</a> is finding the verb "read" but not the correct form of the verb.</p>
<p>We discover that we can get the <a href="http://ai.neocities.org/EnVerbPhrase.html">EnVerbPhrase</a>() module to call <a href="http://ai.neocities.org/EnVerbGen.html">EnVerbGen</a>() when we tighten up the search-by-parameter for the correct verb form. Since <a href="http://ai.neocities.org/EnVerbGen.html">EnVerbGen</a>() is not coded yet, we get an output of "THE ANNA ERROR THE BOOKS", with "ERROR" filling in for the lacking "READS" form.</p>
<p>Then we need an $<a href="http://ai.neocities.org/var.html#audbase">audbase</a> value that we can send into <a href="http://ai.neocities.org/EnVerbGen.html">EnVerbGen</a>() as the start of the verb that needs an inflectional ending. We use a trick in the <a href="http://ai.neocities.org/EnVerbPhrase.html">EnVerbPhrase</a>() module to get either a second-class or a first-class (infinitive) $<a href="http://ai.neocities.org/var.html#audbase">audbase</a>. We test first for any form at all of the verb that has an auditory engram that can serve as a second-class $<a href="http://ai.neocities.org/var.html#audbase">audbase</a>, because the verb-form may be defective in some way. In the very next line of code, we test for an infinitive form of the verb having an auditory engram as a first-class $<a href="http://ai.neocities.org/var.html#audbase">audbase</a>, because an infinitive is easier to manipulate than some defective form of the verb.</p>
<p>We copied the bulk of the Russian <a href="http://ai.neocities.org/RuVerbGen.html">RuVerbGen</a>() into the English <a href="http://ai.neocities.org/EnVerbGen.html">EnVerbGen</a>() and then we did the <i>mutatis mutandis</i> process of making the necessary changes. At first we got "REAS" instead of "READS" because the Russian Cyrillic characters were substituting, not adding. By removing the substitution-code, we obtained the full verb "READS". At a later time we must code in the handling of verbs like "teach" or "push" which require an "-ES" ending.</p>
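<p>The substitution-versus-addition bug can be shown in two lines (JavaScript for illustration):</p>

```javascript
const stem = "READ";
const substituted = stem.slice(0, -1) + "S";  // "REAS": the ending replaced
                                              // the final character (the bug)
const appended = stem + "S";                  // "READS": the ending is added
```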
<p><i>Posted 2018-09-30 (pmpj0930)</i></p>
<b>Ghost AI says when it does not know the answer to a query.</b>
<p>When the <a href="http://ai.neocities.org/perlmind.txt">ghost</a>.pl AI considers a what-query such as "what do kids make", some mind-module must call the <a href="http://ai.neocities.org/SpreadAct.html">SpreadAct</a>() module to handle the what-query, but which module? We could say that the <a href="http://ai.neocities.org/Indicative.html">Indicative</a>() module should make the call to <a href="http://ai.neocities.org/SpreadAct.html">SpreadAct</a>() just before making a response in the indicative mood, but perhaps a response may need to be uttered in a mood other than indicative. The AI Mind might wish to answer the query with an <a href="http://ai.neocities.org/Imperative.html">imperative</a> command like "DO NOT BOTHER ME". Or the AI might not understand the what-query and might want to ask a question about it. So perhaps we should have the <a href="http://ai.neocities.org/Sensorium.html">Sensorium</a>() module call <a href="http://ai.neocities.org/SpreadAct.html">SpreadAct</a>() to respond to a what-query.</p>
<p>We have now introduced a new technique for answering "I DO NOT KNOW" in response to a what-query for which the AI Mind does not find an answer. The AI briefly elevates the $<a href="http://ai.neocities.org/var.html#tru">tru</a> truth-value and the activation-level of the idea "I DO NOT KNOW" as stored in the <a href="http://ai.neocities.org/MindBoot.html">MindBoot</a>() knowledge base (KB), so that the <a href="http://ai.neocities.org/Indicative.html">Indicative</a>() module expresses the momentarily true idea. Immediately afterward, the AI returns the $<a href="http://ai.neocities.org/var.html#tru">tru</a> truth-value to zero.</p>
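<p>The momentary truth-value technique might be sketched as follows. This is a hypothetical illustration, not the actual ghost.pl code; the elevated value 64 and the field names are assumptions.</p>

```javascript
// Briefly elevate tru and act on the stored idea, let the expression
// machinery speak it, then immediately return tru to zero.
function sayIDoNotKnow(idea, express) {
  idea.tru = 64;               // momentarily true
  idea.act = 64;               // and highly activated
  const said = express(idea);  // e.g. the Indicative() module speaking
  idea.tru = 0;                // immediately afterward, back to zero
  return said;
}
```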
<p><i>Posted 2018-09-28 (pmpj0928)</i></p>
<b>Perl AI improves Russian MindBoot and introduces RuIndicative module.</b>
<p>In the ghost275.pl AI we are consolidating the Russian-language knowledge base (KB) directly below the English-language KB near the beginning of the <a href="http://ai.neocities.org/MindBoot.html">MindBoot</a> sequence, so that we may add a new item without complicating a future re-location of the Russian knowledge base.</p>
<p>We should probably stub in the <a href="http://ai.neocities.org/RuIndicative.html">RuIndicative</a>() module, so that it will exist not only in our AI <a href="http://ai.neocities.org/DiaGram.html">diagrams</a> but also in the software itself.</p>
<p>When we start the AI out thinking in Russian, we have been encountering a bug that shows up with the second sentence of output. Perl complains about the use of "uninitialized value in concatenation or string" in the <a href="http://ai.neocities.org/PsiDecay.html">PsiDecay</a>() module. To troubleshoot, we go through the <a href="http://ai.neocities.org/PsiDecay.html">PsiDecay</a> concatenation of associative tags in the @Psy conceptual array and we replace the various variables one by one with a numeric value, to see if the complaint disappears. The complaint disappears when we replace the $k[2] variable for the $hlc human-language code with a numeric value of one (1) instead of "en" for English or "ru" for Russian.</p>
<p><a href="http://strawberryperl.com">Perl</a> continues to complain about uninitialized values when we have the <a href="http://ai.neocities.org/perlmind.txt">Perlmind</a> think in <a href="http://ai.neocities.org/RuThink.html">Russian</a>, but not when it thinks in <a href="http://ai.neocities.org/EnThink.html">English</a>. Therefore we know that the lurking bug is not in the <a href="http://ai.neocities.org/PsiDecay.html">PsiDecay</a>() module or in the <a href="http://ai.neocities.org/InStantiate.html">InStantiate</a>() module, even though the bug manifests itself in those <a href="http://ai.neocities.org/AiTree.html">modules</a>. We spent hours on each of the past two days searching for an elusive bug which must certainly be hiding in one or more of the Russian-language modules. Therefore it is time to isolate the bug by isolating the Russian-language modules. First let us look at the <a href="http://ai.neocities.org/RuNounPhrase.html">RuNounPhrase</a>() module. We insert some diagnostic messages and we see that the bug manifests itself when program-flow goes back up to the <a href="http://ai.neocities.org/RuThink.html">RuThink</a>() module which calls the <a href="http://ai.neocities.org/PsiDecay.html">PsiDecay</a>() module.</p>
<p>Since we catch sight of the bug when <a href="http://ai.neocities.org/PsiDecay.html">PsiDecay</a>() is called, let us temporarily insert some extra calls to <a href="http://ai.neocities.org/PsiDecay.html">PsiDecay</a>() and see what happens. First we make an extra call to <a href="http://ai.neocities.org/PsiDecay.html">PsiDecay</a>() from the end of <a href="http://ai.neocities.org/RuNounPhrase.html">RuNounPhrase</a>(). Huh?! Now we get <i>two</i> complaints from <a href="http://strawberryperl.com">Perl</a> about uninitialized values showing up for a program line-number belonging to a concatenation in the <a href="http://ai.neocities.org/PsiDecay.html">PsiDecay</a>() module. Let us also try an extra call to <a href="http://ai.neocities.org/PsiDecay.html">PsiDecay</a>() from the <a href="http://ai.neocities.org/RuVerbPhrase.html">RuVerbPhrase</a>() module. We do so, and now we get <i>three</i> complaints from <a href="http://strawberryperl.com">Perl</a> about uninitialized values. However, the glitch does not seem to be occurring during the first call from <a href="http://ai.neocities.org/RuIndicative.html">RuIndicative</a>() to <a href="http://ai.neocities.org/RuNounPhrase.html">RuNounPhrase</a>(), but rather during or after the call to <a href="http://ai.neocities.org/RuVerbPhrase.html">RuVerbPhrase</a>(). For extra clarity, let us have the start of <a href="http://ai.neocities.org/RuVerbPhrase.html">RuVerbPhrase</a>() make a call to <a href="http://ai.neocities.org/PsiDecay.html">PsiDecay</a>(). We do so, and there is no concomitant complaint from <a href="http://strawberryperl.com">Perl</a> about uninitialized values. Therefore, the subject-choosing part of <a href="http://ai.neocities.org/RuNounPhrase.html">RuNounPhrase</a>() must not be the source of the problem, but the direct-object portion of <a href="http://ai.neocities.org/RuNounPhrase.html">RuNounPhrase</a>() is still under suspicion.</p>
<p>Now we are discovering something strange. Towards the end of <a href="http://ai.neocities.org/RuNounPhrase.html">RuNounPhrase</a>() there is a concatenation which is supposed to impose <a href="http://github.com/PriorArt/AGI/wiki/MindGrid">inhibition</a> upon a noun selected by the module, as identified by the $<a href="http://ai.neocities.org/var.html#tsels">tsels</a> variable which pertains to the "time of selection of the subject", and which has been used earlier in <a href="http://ai.neocities.org/RuNounPhrase.html">RuNounPhrase</a>() to indeed inhibit the selected subject. However, a diagnostic message reveals to us AI Mind <a href="http://ai.neocities.org/maintainer.html">maintainers</a> that the $<a href="http://ai.neocities.org/var.html#tsels">tsels</a> variable has been zeroed out by the end of <a href="http://ai.neocities.org/RuNounPhrase.html">RuNounPhrase</a>() and that therefore the software is trying to concatenate the associative tags purportedly available at a zero time-point -- where there are no associative tags. Let us see what happens when we comment out the suspicious concatenation code. We do so, and we get no change in the reporting of the bug. Let us see if the earlier inhibition in the <a href="http://ai.neocities.org/RuNounPhrase.html">RuNounPhrase</a>() module is causing any problems. First off, a diagnostic message shows us that the $<a href="http://ai.neocities.org/var.html#tsels">tsels</a> variable has been zeroed out, or perhaps never loaded, even at the time of the first inhibition in the <a href="http://ai.neocities.org/RuNounPhrase.html">RuNounPhrase</a>() module. Let us comment out the concatenation of the first inhibition and see what happens. 
By the way, if there are any secret AI Labs in <a href="http://www.gotai.net/forum">Russia</a> or elsewhere working on the further development or evolution of these AI Minds in Perl and in tutorial JavaScript and in Forth for intelligent humanoid robots, this journal entry shows that the AI coding problems are indeed tractable and soluble, given enough persistence and effort. Now, when we have commented out both the inhibitional concatenations in the <a href="http://ai.neocities.org/RuNounPhrase.html">RuNounPhrase</a>() module, we still get the same complaints from <a href="http://strawberryperl.com">Perl</a> about uninitialized values, and we notice in the diagnostic display of the memory-array contents that the Russian nouns are still being inhibited -- but where? Oh, the <a href="http://ai.neocities.org/InStantiate.html">InStantiate</a>() module is imposing a <a href="http://github.com/PriorArt/AGI/wiki/MindGrid"><i>trough</i></a> of inhibition. Let us do another commenting out and see what happens. Nothing happens, and the inhibition is still occurring.</p>
<p>As we go through <a href="http://ai.neocities.org/RuVerbPhrase.html">RuVerbPhrase</a>() and comment out the various concatenations, the complaint from <a href="http://strawberryperl.com">Perl</a> about uninitialized values suddenly disappears when we comment out the concatenation where Russian verbs are competing to be selected as the most active verb. We also notice that a comma seems to be missing at the end of the first line in the two-line concatenation. When we insert the missing comma and we do not comment out the concatenation, there are no further complaints from <a href="http://strawberryperl.com">Perl</a> about uninitialized values. Of course, we just spent three days wracking our brains, trying to figure out what was wrong, when the problem was one single missing comma. Now it is time to clean up the <a href="http://ai.neocities.org/perlmind.txt">Perlmind</a> code and upload it to the Web.</p>