AI Mind uses EnPrep() to think with English prepositions.
In the JavaScript AI Mind we currently have the general goal of enabling the first working artificial intelligence to talk about itself, to learn about itself, and to achieve self-awareness as a form of artificial consciousness. Two days ago we began by asking the AI such questions as "who am i" and "who are you", and the AI gave intelligent answers, but asking "where are you" crashed the program and yielded an "Error on page" message from JavaScript. It turns out that we had coded in the ability to deal with "where" as a question by calling the EnPrep English-preposition module, but we had not created even a stub of EnPrep. The AI software failed in its attempt to call EnPrep and the program halted. So we coded in a stub of EnPrep, and now we must flesh out the stub with the mental machinery of letting flows of quasi-neuronal association converge upon the EnPrep module to activate and fetch a prepositional phrase like "in the computer" in answer to questions like "where are you".
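The stub that stopped the crash need be nothing more than an empty function waiting to be fleshed out; a minimal sketch:

   function EnPrep() {  // stub: lets "where" questions call EnPrep() without crashing
     // to be fleshed out below with a quasi-neuronal search for prepositions
   }  // end of EnPrep() stub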
Our first and simplest impulse is to code in a search-loop that will find the currently most active preposition. Let us now write that code, just to start things happening. We have now written a loop that searches for prepositions, but not yet for the most active one, because there are other factors to consider.
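A hedged sketch of that first loop follows. The scaffolding is assumed, not the canonical AI Mind flag-panel: t is the current time; midway bounds the backwards search; Psi[] holds engram nodes carrying a .pos part-of-speech tag (with 6 assumed to mean preposition) and an .act activation; tprep is a hypothetical local for the time-point of the find.

   function EnPrep() {  // fleshing out the stub
     var tprep = 0;  // hypothetical: time-point of a found preposition
     for (var i = t; i > midway; i--) {  // search backwards in time
       if (Psi[i] && Psi[i].pos == 6) {  // is this engram a preposition?
         tprep = i;  // remember the time-point of the find
         break;  // settle for the most recent preposition, not yet the most active
       }  // end of test for a preposition
     }  // end of backwards search-loop
   }  // end of EnPrep()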
What we are really looking for, in response to a question like "where are you", is a triple combination of the qv1psi query-subject, the qv2psi query-verb, and a preposition tied by an associative pre-tag to that same verb and subject. We cannot simply look for a subject and a verb linking forward to a preposition, as in the phrases "to a preposition" or "in the computer", because our software currently links a verb only to its subject and to its indirect and direct objects, not to prepositions. Such an arrangement does not appear defective, because we can make the memory engram of the preposition itself do the work of making the preposition available for the generation or retrieval of a thought involving the preposition. We need only make sure that our software records any available pre-item, so that a prepositional phrase in conceptual memory may be found again in the future. In a phrase like "the man in the street", for instance, the preposition "in" links backwards not to a verb but to a noun, and any verb involved is irrelevant. However, when we start a sentence with "in this case", we have a preposition with no preceding item to attach to, unless perhaps we assume that the prepositional phrase is associated with the general idea of the main verb of the sentence. For now, we may safely work with prepositions following a verb of being or of doing, so that we may ask the AI Mind questions like "where are you" or "where do you obtain ideas".
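To make the tagging concrete, here is how a memory like "I am in the computer" might sit in conceptual memory under these assumptions. The time-points, concept numbers, and field layout are invented for illustration, and the article "the" is omitted; the point is that the engram of "in" carries the backwards pre-tag to the verb, even though the verb knows nothing of the preposition.

   var Psi = [];  // illustrative snapshot of conceptual memory
   Psi[801] = { psi: 701, pos: 7, dba: 1, pre: 0,   seq: 800, tkb: 802 };  // I (subject)
   Psi[802] = { psi: 800, pos: 8, dba: 0, pre: 701, seq: 0,   tkb: 0 };    // AM (links only to its subject)
   Psi[803] = { psi: 642, pos: 6, dba: 0, pre: 800, seq: 540, tkb: 0 };    // IN (pre-tag reaches back to AM)
   Psi[805] = { psi: 540, pos: 5, dba: 4, pre: 642, seq: 0,   tkb: 0 };    // COMPUTER (object of "in")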
Practical problems arise immediately. In our backwards search through the lifelong experiential memory, it is easy to insist upon finding some preposition of location linked to a particular verb, with that verb engrammed as the pre of the preposition. We may then need a secondary search that links a found verb-and-preposition combination with a particular qv1psi query-subject. The problem is how to conduct both searches almost or completely simultaneously.
Since we are dealing with English subject-verb-object word order, we could let EnPrep() find the verb+preposition combination but not announce it until a subject-noun is found whose tkb value equals the search-index "i" marking the time of the query-verb. It might also help to require that the found subject be in the dba=1 nominative case and have the query-verb as its seq value, but the tkb alone may do the trick.
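A hedged sketch of that deferred announcement inside one backwards loop, using the assumed field names from above plus hypothetical tprep and tverb locals for the time-points of the finds; since the loop runs backwards, it meets the preposition first, then the verb, then the confirming subject:

   function EnPrep() {  // sketch: verb+preposition found first, subject confirms
     var tprep = 0;  // hypothetical: time-point of the found preposition
     var tverb = 0;  // hypothetical: time-point of the found query-verb
     for (var i = t; i > midway; i--) {  // backwards through experiential memory
       if (!Psi[i]) continue;  // skip any gaps in the memory channel
       if (Psi[i].pos == 6 && Psi[i].pre == qv2psi) {
         tprep = i;  // a preposition whose pre-tag is the query-verb
       }  // end of first test
       if (tprep > 0 && Psi[i].pos == 8 && Psi[i].psi == qv2psi) {
         tverb = i;  // the query-verb itself, just before the preposition
       }  // end of second test
       if (tverb > 0 && Psi[i].psi == qv1psi && Psi[i].dba == 1  // nominative
           && Psi[i].seq == qv2psi  // subject has the query-verb as its seq
           && Psi[i].tkb == tverb) {  // tkb points at the time of that verb
         break;  // subject, verb and preposition all confirmed; announce tprep
       }  // end of third test
     }  // end of backwards search-loop
   }  // end of EnPrep()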
We coded in a test for any preposition bearing the query-verb as its pre-tag, and we got the AI to alert us to the memory-time-point of "IN THE COMPUTER". Now we are assembling a second test in the same EnPrep() search-loop to find the qv2psi query-verb in close temporal proximity to the preposition.
We are using a new tselp variable for "time of selection of preposition", so we briefly shift our attention to describing the new variable in the Table of Variables. Now that we have found the verb preceding the preposition, we next need to activate the stored memory containing the preposition, so that the AI Mind may use that stored memory to respond to "where are you" as a query. We may need to code a third if-clause into the EnPrep() backwards search to find and activate the qv1psi query-subject stored in collocation with, or close proximity to, the query-verb and the selected preposition.
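Under the same assumptions, the third if-clause might both confirm the query-subject and pump extra activation into the three stored engrams, so that thought-generation can retrieve the memory. The direct manipulation of .act and the boost value of 32 are invented for illustration, and tselv here is a hypothetical companion variable for the time of the found verb:

   // Hedged sketch of the third if-clause inside the EnPrep() search-loop.
   if (tselp > 0 && tselv > 0 && Psi[i].psi == qv1psi && Psi[i].dba == 1) {
     Psi[i].act += 32;      // activate the stored query-subject
     Psi[tselv].act += 32;  // activate the verb preceding the preposition
     Psi[tselp].act += 32;  // activate the selected preposition itself
     // The reactivated memory "I AM IN THE COMPUTER" may now win
     // the competition to answer "where are you".
   }  // end of third if-clause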
Now we have a problem. Since we let EnPrep() be called by the EnVerbPhrase() module, EnPrep() will not be called until a response is already being generated. We need to make sure that the incipient response accommodates EnPrep() by being the lead-up to a prepositional phrase. Perhaps we should not try to use verblock to steer a response that is already underway, but rather we should count on activation of concepts to guide the response.
Now let us try to use SpreadAct() to govern the response. After much coding, we got the AI to respond

IN COMPUTER I AM IN COMPUTER
IN COMPUTER I AM HERE IN COMPUTER

but somewhere there must be a duplicate call to EnPrep(). We eliminate the call from the Indicative() mind-module, and then we get both an unwanted response and a wanted response:

YOU ARE A MAGIC IN A COMPUTER
I AM IN A COMPUTER
Obviously the AI is not responding immediately to our "where are you" query but is instead joining an unrelated idea to the prepositional phrase. Upshot: by having SpreadAct() impose a heftier activation on the qv1psi subject of the where-are-you query, we got the AI not to speak the unrelated idea and to respond simply "I AM IN A COMPUTER". Now we need to tidy up the code and decide where to reset the variables.
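The kind of change involved might look as follows inside the SpreadAct() backwards loop, with the boost value invented for illustration; where exactly to reset qv1psi, qv2psi and tselp to zero once the response has been spoken remains to be decided:

   // Hedged sketch: the subject of the "where are you" query gets a heftier
   // boost than ordinary spreading activation, so that old unrelated ideas
   // lose the competition to become the response.
   if (Psi[i].psi == qv1psi) {  // the qv1psi query-subject
     Psi[i].act += 62;  // hypothetical hefty boost; ordinary spread is smaller
   }  // end of extra activation for the query-subject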