Friday, November 18, 2005
Center of Excellence Proposal
Patrick communicated a note to the ONTAC-WG General Discussion list, starting with:
1)
Hub? I think that ultimately one COSMO will best serve the purposes of
interoperability, and for me the question is how to get there -- to start with
one upper ontology and elaborate it by mapping to the multiple KCSs used by
ONTACWG members, or to start with several competent ontologies, and to
merge them. My suggestion was to try both approaches simultaneously, i.e.
in one part to investigate the potential for merger by formalizing the UMLS
semantic network using each candidate upper ontology separately, and
to compare the resulting fragments of each upper ontology to
determine how closely those parts are related and just how easy a merger
would be. In a second part, the FEA-RMO and DoD Upper taxonomy could also
be formalized with respect to one of the upper ontologies, chosen by some
criteria such as those suggested by Eric Peterson. That is, we
would not prejudge the issue of whether or not to choose one existing hub
from the start, but to gather more information to determine whether a
merger would be so difficult that choosing one existing ontology is the only
practical course. The question of using an existing ontology as a hub for
formalization includes the question of what criteria would be used to choose
the hub, and any thoughts on that issue would be welcome right now.
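Patrick's first approach, formalizing the same source (the UMLS semantic network) against each candidate upper ontology and then comparing the resulting fragments, could be sketched very roughly as a comparison of the mappings each formalization induces. The sketch below is purely illustrative: the mappings and class names are invented, not actual UMLS, SUMO, or Cyc content, and the agreement measure is just one hypothetical way to score "how closely those parts are related."

```python
# Hypothetical sketch: compare two formalizations of the same source
# vocabulary against different candidate upper ontologies.  Each
# formalization is a mapping from source terms to upper-ontology classes.
# All names below are invented for illustration.

from itertools import combinations

sumo_map = {"Organism": "Agent", "Anatomical Structure": "Object",
            "Substance": "Object", "Event": "Process"}
cyc_map = {"Organism": "Agent", "Anatomical Structure": "PartiallyTangible",
           "Substance": "ChemicalSubstanceType", "Event": "Situation"}

def agreement(map_a, map_b):
    """Fraction of source-term pairs treated the same way by both
    formalizations: either both co-classify the pair under one upper
    class, or both separate it."""
    shared = sorted(set(map_a) & set(map_b))
    pairs = list(combinations(shared, 2))
    same = sum((map_a[x] == map_a[y]) == (map_b[x] == map_b[y])
               for x, y in pairs)
    return same / len(pairs) if pairs else 1.0

print(f"structural agreement: {agreement(sumo_map, cyc_map):.2f}")
```

A high score would suggest the two upper-ontology fragments carve the source the same way and a merger might be tractable; a low score would be evidence for the difficulty Patrick wants to measure.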
Respectfully, he seems stuck on wanting to do something that feels right but against which there are principled arguments. His voice is similar to others in the standards processes, but it fails to appreciate not only other viewpoints but also the social reality expressed in several other statements made today to the ONTAC-WG General Discussion. Several months ago, Patrick and I had phone conversations about this, and I represented some of that conversation at
http://www.ontologystream.com/beads/nationalDebate/188.htm
where I conjecture about the error in wanting an ontology that supports general types of inference of a kind observed only in human reasoning and not demonstrated by AI.
At
http://www.ontologystream.com/beads/nationalDebate/191.htm
I further expressed my view of his position and the error that is made in persisting with certain expectations.
I made the mistake of mentioning where he works (a large government think-tank and consulting entity), and the discussion became about that (my mistake); I have edited this account to remove that mention. I do not think he is fully aware of the grounding that stratified ontology has in the natural sciences; he is focused instead on a specific viewpoint about what makes business and consulting activities work in the current IT consulting world.
He does not seem persuadable on this point. As a result, I feel there cannot be a discussion of alternatives to the notion of a single large general-purpose ontological model.
****
I am struck by the notion that a proposed alignment of two Protege-type ontologies, such as FEA-RMO and, say, an OpenCyc upper ontology (if one is actually available to the public), is a proposition that ignores the reality of interpretation and viewpoint.
Accepting something like the Dublin Core as an available ontology is a different type of path toward interoperability. Dublin Core is accepted by many because it is simple and specifies some really useful information standards, mostly related to the act of publishing information.
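That simplicity is visible in how little machinery a Dublin Core record needs. The element names and namespace below are from the Dublin Core element set; the record contents are drawn from this post for illustration, and the serialization is just a minimal sketch, not a normative encoding.

```python
# Minimal sketch: emitting a Dublin Core record.  Element names and the
# namespace URI are from the Dublin Core element set; the record values
# are illustrative.
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

record = {
    "title": "Center of Excellence Proposal",
    "creator": "Paul Prueitt",
    "date": "2005-11-18",
    "format": "text/html",
    "language": "en",
}

root = ET.Element("metadata")
for name, value in record.items():
    el = ET.SubElement(root, f"{{{DC_NS}}}{name}")
    el.text = value

print(ET.tostring(root, encoding="unicode"))
```

Fifteen flat elements about publication acts: that is the whole commitment, which is why agreement comes easily compared with a large common ontology.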
But the basic concepts to be included in his large common ontology are
related to "the reality" of things like duration and composition (the
new attack was experienced from Monday through Thursday). To discuss the
detailed nature of "this new attack" within a large community would require
something beyond the non-functional properties proposed as a web services
ontology model. So suppose the attack has novel elements, or elements that
are known but that function in an entirely new (and surprising) way.
The OASIS work on the Reference Model for Service Oriented
Architecture,
as discussed briefly at
http://www.ontologystream.com/beads/nationalDebate/201.htm
creates a higher-level abstraction (framework) in which web services can
be made interoperable (either as a hard-wired relationship between interacting
systems or as a just-in-time aggregation activity).
Human communication, at its best, iteratively spreads the knowledge of "a new attack" within a large community... but in the case of 9-11 and the American public, the eventual knowledge came to reflect different individual and community viewpoints.
The alternative proposed by some is a stratified ontology, where some "observed" set of invariances (semantic primes) is commonly used by individuals and other processes (including pattern-recognition algorithms) to express, in real time, emergent ontological models.
As in physical chemistry, the diversity of possible chemical compounds is sufficient to produce the chemistry of a moment (the ontology at that moment), and to accommodate the response degeneracy (Gerald Edelman's term for many-to-many mappings) seen as living systems deal with structure-function choices.
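One way to read the stratified proposal, purely as a hypothetical sketch and not anyone's actual system, is as a two-layer structure: a small fixed substrate of primes, and a runtime layer that composes them into situation-specific models, much as a fixed periodic table underwrites an open-ended space of compounds. The prime inventory below is invented for illustration.

```python
# Hypothetical two-layer (stratified) sketch: a small fixed substrate of
# "primes," and emergent models composed from them at run time.  The
# prime inventory is invented for illustration only.

PRIMES = frozenset({"agent", "act", "object", "before", "after", "cause"})

def compose(*relations):
    """Build an emergent model (a set of relations over primes),
    rejecting any relation that steps outside the fixed substrate."""
    model = set()
    for rel in relations:
        if not set(rel) <= PRIMES:
            raise ValueError(f"not expressible over the substrate: {rel}")
        model.add(rel)
    return frozenset(model)

# An "attack" event modeled at a moment, from substrate primes only.
attack = compose(("agent", "cause", "act"), ("act", "before", "object"))
print(len(attack), "relations over", len(PRIMES), "primes")
```

The point of the sketch is only the constraint: the substrate is small and stable, while the models built over it are open-ended and momentary.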
I am struck that this notion of stratified ontology is never discussed within any of the standards processes. Stratified ontology can be grounded in physical science (chemistry), in human speech production (phonemes), and in language use and generation (Tom Adi's work):
http://www.bcngroup.org/beadgames/generativeMethodology/AdiStructuredOntology-PartI.htm
Adi's work has been around, and in software form, for two decades, but it remains largely outside the Academy and outside the consulting/standards processes.
But one can look at the Zachman Framework and Sowa's semantic primitives to see the stratified approach, even if it is not recognized as such.
This absence of discussion about stratified theory is why I am also bothered by the insistence on a large, common, "fixed" and controlling ontology.
Paul Prueitt
703-981-2676