Sunday, August 15, 2004
On the issues of separation of syntax, semantics, and pragmatics →.
Previously proposed technology evaluation →.
PowerPoint presentation of the Provenance™ pre-poll pollster information system →
PowerPoint posted on Monday, August 30, 2004
Revised extensively: Monday, August 16, 2004 8:22 AM
Architectural discussion III
Like the concept of a Semantic Web, the concept of an Anticipatory Web has two sides. Action-perception cycles bind the two sides together.
The computer is regarded as external to a human mind, and the human mind is engaged in a type of “mutual induction” caused by the computer display priming human cognitive functions.
The nine-step Actionable Intelligence Process Model (AIPM) – Prueitt (2002)
Unlike the concept of a Semantic Web, the concept of an Anticipatory Web does not confuse the role of the computer with the role of the human. The strong dependency on pre-arranged deductive computation, common to artificial intelligence, is replaced with a strong dependency on human induction in real time. Data mining processes, which focus on structural patterns in raw data, assist this inductive process. Reuse of knowledge about the function of patterns is encoded into persistent ontology services using cognitive graphs, topic maps, or the OWL (Web Ontology Language) standard. Once encoded, this representation of knowledge can be propagated electronically; a minimal sketch of such an encoding appears below.
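As a minimal sketch only (the Triple class, the predicate names, and the example events below are hypothetical, introduced for illustration rather than drawn from any of the systems named above), a pattern confirmed by a human might be encoded as subject-predicate-object triples for electronic propagation:

# Hypothetical sketch: encoding a human-confirmed pattern as triples.
# The predicate vocabulary here is an assumption, not a published standard.
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str

def encode_pattern(event_a, event_b, strength):
    """Represent an observed association between two events as triples."""
    return [
        Triple(event_a, "co_occurs_with", event_b),
        Triple(event_a, "association_strength", f"{strength:.2f}"),
    ]

for t in encode_pattern("hunger_reported", "food_provided", 0.87):
    print(f"<{t.subject}> <{t.predicate}> <{t.obj}> .")  # N-Triples-like form

The point of the sketch is only that the encoding step is mechanically simple once the human has done the inductive work; the real systems would use topic map or OWL serializations.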
In the case of text understanding, a structured knowledge of language (for example, derived from Readware’s patent) is used to aggregate data into visual forms. Structure/function information can be derived or used (see Prueitt’s work on quasi-axiomatic theory) to produce support for real-time human induction and synthesis.
The two sides of the Anticipatory Web are:
1) The computational side, which exists in computers, telecommunications, and networks of computers.
2) The human side, which exists in the minds of individual humans, in communities of humans, and in the shared experiences we have with the natural world.
As discussed elsewhere in the bead game, the nine-step AIPM was derived from a seven-step AIPM given to Prueitt while he was Senior Scientist at Object Science Corporation working on technology for Army Intelligence. The seven-step model omits the first two steps of the nine-step model.
If our proposed study [66] is granted, the anticipatory model may be rendered in a particularly simple form with Readware and PriMentia technology [66] [67] [68]. This simple integration, which would take only 90 days, could produce a unique high-speed, scalable, ontology-based search and retrieval engine. Our team is appealing to incumbent government contractors to allow this work to be completed and deployed [66-1].
Instrumentation and measurement of the co-occurrence of words will produce representations of concepts, and these representations are to be encoded into an optimal encoding technology (PriMentia’s Hilbert encoding); a sketch of the co-occurrence measurement appears below. Preliminary work on this has been completed by OntologyStream (see InOrb technology) and follows deployments of a related technology (called the “Contiguous Connection Model”) developed by Applied Technical Systems for use in Army Intelligence and elsewhere (see the Orb Notational Paper).
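As a rough sketch of the measurement step (the sliding-window method, the window size, and the toy sentence are assumptions made for illustration; PriMentia’s proprietary Hilbert encoding is deliberately not reproduced here), word co-occurrence can be counted as follows:

# Sketch: counting word co-occurrence within a sliding window.
# Window size and corpus are illustrative; the Hilbert encoding step
# that would follow in the proposed integration is omitted.
from collections import Counter

def co_occurrence(tokens, window=3):
    """Count unordered word pairs appearing within `window` tokens."""
    counts = Counter()
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + window, len(tokens))):
            pair = tuple(sorted((tokens[i], tokens[j])))
            if pair[0] != pair[1]:
                counts[pair] += 1
    return counts

tokens = "the analyst observed the event and reported the event".split()
for pair, n in co_occurrence(tokens).most_common(3):
    print(pair, n)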
The integration of the PriMentia, InOrb, and Readware software systems is argued to produce results superior to existing deployed systems, at speeds one to two orders of magnitude faster, within a Human-centric Information Production (HIP) use philosophy. A core part of this HIP philosophy is the AIPM. Within the AIPM, humans work with machine data mining processes to detect facts or events and to produce models of how these events are related to each other. As suggested by Sowa and Majumdar, inference is brought closer to humans’ real-time experience, which allows human tacit knowledge to act in the presence of visual structures indicating direct information derived from real-time data mining. See also Lev Goldfarb’s work on inductive informatics.
Demonstrations of the methods discussed exist now and have been made available to various incumbent consulting groups.
The Anticipatory Web (of information residing in the computer) assists humans in answering questions about whether the occurrence of one event anticipates other events. So we have something like: (1) a person is known to be hungry; (2) food is provided to this person. Does this anticipate the event of this person eating the food so as to satisfy the hunger? A toy sketch of this framing appears below. The notion of deduction and formal logics is not always useful in this situation: see Sowa and Majumdar, “Analogical Reasoning,” for more on the limitations of classical logic when applied to problems generic in human truth finding.
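As a toy illustration of this framing (the episode log and the estimator are invented for this sketch), anticipation can be treated as a conditional frequency induced from observed event sequences rather than as a deduction:

# Toy sketch: estimating whether given events anticipate an outcome.
# The episode log is invented; real data would come from data mining.
def anticipation(log, given, outcome):
    """Fraction of episodes containing all `given` events that also
    contain `outcome` -- an inductive estimate, not a deduction."""
    matching = [ep for ep in log if all(g in ep for g in given)]
    return sum(outcome in ep for ep in matching) / len(matching) if matching else 0.0

episodes = [
    ["hungry", "food_provided", "eats"],
    ["hungry", "food_provided", "eats"],
    ["hungry", "food_provided"],   # anticipation can fail
    ["hungry"],
]
print(anticipation(episodes, ("hungry", "food_provided"), "eats"))  # ~0.67

The estimate says nothing about why the third episode failed; supplying that explanation is precisely the human side of the action-perception cycle.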
As suggested by scholarship that Sowa points to, relationships can be coincidental or essential, but it is largely through the relationships that events have with each other that we make sense of the meaning of these events. As mentioned by Sowa, a long tradition argues that a deductive apparatus requires its inductive foundation to be “complete,” and that in many fields of human endeavor this completeness has been problematic.
In developing formal logic, Aristotle took Greek mathematics as his model. Like his predecessors Socrates and Plato, Aristotle was impressed with the rigor and precision of geometrical proofs. His goal was to formalize and generalize those proof procedures and apply them to philosophy, science, and all other branches of knowledge. Yet not all subjects are equally amenable to formalization. Greek mathematics achieved its greatest successes in astronomy, where Ptolemy's calculations remained the standard of precision for centuries. But other subjects, such as medicine and law, depend more on deep experience than on brilliant mathematical calculations. Significantly, two of the most penetrating criticisms of logic were written by the physician Sextus Empiricus in the second century AD and by the legal scholar Ibn Taymiyya in the fourteenth century. (Sowa and Majumdar, “Analogical Reasoning,” second page)
Please read all of the paper by Sowa and Majumdar to obtain an understanding of how Sowa has framed an issue that others have framed in similar ways (see Prueitt).
Using this new technology, once it is fully brought to market, analysts can make sense of the meanings of events detected by data mining. This sense-making activity is a natural process of induction of generalities from particulars, embedded in the experience of the moment; once experienced, it can be partially represented as natural language, a report, or some sort of simple knowledge representation. This information can be placed into a standard representation using OWL (Web Ontology Language), cognitive graphs, or topic maps.
The speed and simplicity of the PriMentia data encoding would, we conjecture, make this different from any current information system.
Humans can then develop inferences about the possible outcomes that might be predicted given the occurrence of events (please review Prueitt’s work on Russian quasi-axiomatic theory). These inferences can be purely deductive and thus must be checked to see whether the deductive mechanisms are valid in the specific case, or in a general case of which some understanding exists. The computer can be programmed to create visual icons regarding patterns of occurrence, suggestive of events and relationships in the real world; a hypothetical sketch of such a display layer follows. As a human perceives these computer-generated images there is often an evocation of mental states. This mechanism is referred to in the cognitive sciences as cognitive priming.
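As a purely hypothetical sketch of such a display layer (the thresholds and icon names are invented here; any deployed system would tune them with the analyst), pattern strengths might be mapped to icons meant to prime, not replace, human judgment:

# Hypothetical sketch: mapping association strength to display icons.
# Thresholds and icon names are invented for illustration.
def icon_for(strength):
    """Choose a display icon for an observed association strength."""
    if strength >= 0.8:
        return "STRONG-LINK"   # draws the analyst's eye first
    if strength >= 0.4:
        return "WEAK-LINK"     # worth a look, not a conclusion
    return "NOISE"             # rendered faintly or suppressed

patterns = {("hungry", "food_provided"): 0.87, ("hungry", "rain"): 0.12}
for (a, b), s in patterns.items():
    print(f"{icon_for(s):>11}: {a} ~ {b} ({s:.2f})")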
The executive function of the human associational cortex and frontal lobes will produce anticipatory reactions to real-time experience (Prueitt; Levine and Prueitt; Pribram). The human will reason about possible outcomes based on both tacit knowledge and the nature of the associations produced as a consequence of cognitive priming by the computer display. The human then produces new information by taking personal responsibility for blending the results of computer data mining with tacit awareness of situations in the real world.
HIP is in this way quite different from the current generation of Information Technology, where disassociated “experts in database design” create rigid structures that inhibit the agility needed to produce new information about novelty in real time.
The experience of knowledge by a human can be represented as a machine ontology, a cognitive graph, or a topic map, and propagated within a community of analysts. The process is framed by cognitive and social science within the HIP use-philosophy. The computer science is made as simple as possible and optimal in terms of the identification of patterns of computer functions (reference: CoreTalk [1]).
In this way the computer becomes less of a rigid constraint that reinforces institutional stovepiping and more of a technology used to prime the cognitive faculties of humans within cultural environments. Of course, if the purpose of funding is to reinforce stovepiping, then one will not find funding for our work. (Personal note [2])
Reports on the activity of human analysis can be generated in a useful form, both as natural language and as polling-type instruments (see the work on knowledge propagation from two companies, Acappella Software Inc. and SchemaLogic Inc.). These reports are expressed as Digital Natural Language (DNL) and are not in the form of a machine ontology such as OWL, but they can be encoded as topic maps and cognitive graphs using text understanding tools such as Applied Technical Systems’ NdCore technology or OntologyStream’s InOrb technology.
Finally, the last step in the nine-step AIPM involves messaging. Alerting functions can be generated autonomously or automatically to feed changes back into step one, the instrumentation and measurement step; a minimal sketch of this feedback follows.
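As a minimal sketch of this feedback (the alert rule and the measurement-adjustment hook are assumptions introduced for illustration, not features of any system named above), step nine might close the loop to step one like this:

# Minimal sketch: the AIPM messaging step feeding back into step one.
# The threshold rule and adjustment hook are illustrative assumptions.
def alert_loop(strengths, threshold, adjust_measurement):
    """Emit alerts for strong patterns and retarget measurement on them."""
    alerts = []
    for pair, s in strengths.items():
        if s >= threshold:
            alerts.append(f"ALERT {pair} strength={s:.2f}")
            adjust_measurement(pair)   # step nine closes the loop to step one
    return alerts

watchlist = []
msgs = alert_loop({("hungry", "food_provided"): 0.87}, 0.8, watchlist.append)
print(msgs, watchlist)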
[1] CoreTalk is represented in a Macromedia presentation available from hyperlinks at: http://www.bcngroup.org/beadgames/safeNet/one.htm
[2] Please forgive me for saying this, but the time is late and the need for small funding is truly urgent.