
 

Monday, November 21, 2005

 

 The BCNGroup Beadgames

National Project →

Challenge Problem →

Center of Excellence Proposal →

 

 

 

 

Discussion at ONTAC forum

ONTAC stands for Ontology and Taxonomy Coordinating Working Group

It is a working group of the

Semantic Interoperability Community of Practice (SICoP)

 

Communication to the working group from Paul Prueitt

 

****

 

 

The paper by Gangemi et al. referenced by Gary Berg-Cross approaches a type of knowledge representation in which conceptual forms are distinct from specific details.  Jeff Long and Dorothy Denning developed a related notion, of which Jeff was enamored, and published it in 1995:

 

Ultra-structure: a design theory for complex systems and processes

 

http://portal.acm.org/citation.cfm?id=204892

 

Jeff and I became friends in 1996 while he was trying to start up a research center at George Washington University.  His work since then developed more as an internal Department of Energy project.

 

From

 

http://bioinfo.med.unc.edu/glabwiki/index.php/CORE576:_Ultra-Structure_for_Biology

 

it looks like a group at the University of North Carolina at Chapel Hill has picked up this work.

 

Ultrastructure is a classification methodology that builds templates that are "deemed" to have some real and true ontological status.  The interaction protocols and design abstractions (see Desai et al., "Interaction Protocols as Design Abstractions for Business Processes", in the Special Issue on Interaction and State-Based Modeling) and many other papers take a very similar approach, but Mr. Long's work perhaps predated most work specific to developing a templating classification methodology.

 

Ultrastructure has been available as a conceptual architecture for over ten years, and has been used in various domains, so that there is some experience with its use.  The concept of ultrastructure, in my opinion, requires a strong separation between the details of some specific reality and the conceptual framework that helps communicate how that reality MIGHT be related to other aspects of reality.
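
To make that separation concrete, here is a minimal sketch of my own (not Long's implementation; the names Ruleform, columns, and rows are hypothetical) of an Ultra-structure style "ruleform": a fixed template that carries the conceptual form, with the details of a specific reality held only as rows.

    # A minimal, hypothetical sketch of an Ultra-structure "ruleform":
    # the template (column definitions) is the stable conceptual form;
    # the rows carry the changing details of a specific reality.
    class Ruleform:
        def __init__(self, name, columns):
            self.name = name          # the conceptual form, e.g. "Shipping"
            self.columns = columns    # fixed template, rarely changes
            self.rows = []            # specific rules/details, change freely

        def add_rule(self, **details):
            if set(details) != set(self.columns):
                raise ValueError("a rule must fill the template exactly")
            self.rows.append(details)

    # The template is stable; only the rows (details) vary by domain.
    shipping = Ruleform("Shipping", ["origin", "destination", "carrier"])
    shipping.add_rule(origin="Raleigh", destination="Durham", carrier="truck")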

 

The objection that I, along with others, make regarding formalization need not apply if the formalization uses a stratification of ontological representation.  Let me make this clear with a quote from Gangemi et al.

 

"A conceptual architecture is required because of the main use of ontologies: making intended meaning available to all (artificial or human) agents that could be involved in a semantic service.  Intended meaning is bound to the context in which expressions of a language are used, such as physical situations, theoretical frameworks, social norms, plans and goals, linguistic practices, etc.  Hence, the representation of intended meaning needs a flexible and rich set of primitives that can be put within a modular architecture, either across ontologies or across elements of an ontology.  The intended result of this approach is the design of ontologies which, whatever the task they are meant to accomplish, are of a high quality, and thus able to avoid the ontology chaos that could arise from an undervaluing of conceptual architecture."

 

In their next paragraph, the authors state what I feel is very important:

 

"Conceptual architectures can be seen as explicit representation of context dependencies.  Such dependencies are not bound to formal languages of theories."

 

John, do you agree with this statement by Gangemi et al.?  If so, then you may also agree with my sense that ontology should be "situational" and produced by some aggregation of a subset of primitives (such as we see in an enumeration of roles and types) into a template or conceptual form.  I proposed in the anticipatory architecture that three levels of ontological organization are relevant to a specific situation (a small sketch follows the list):

 

1)     a set of invariances (roles and types whose ontology is enforced by how things work in the real world),

2)     the "ultrastructure" of forms, and

3)     the instantiated model of reality at a moment. 
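
A minimal sketch of these three levels, under assumed names of my own choosing (INVARIANTS, FORMS, and instantiate are hypothetical illustrations, not part of the anticipatory architecture itself):

    # Level 1: a set of invariances -- roles and types enforced by
    # how things work in the real world.
    INVARIANTS = {"person", "document"}

    # Level 2: the "ultrastructure" of forms -- conceptual templates
    # built over the invariant roles and types.
    FORMS = {
        "authorization": ("person", "document"),
    }

    # Level 3: the instantiated model of reality at a moment --
    # a form filled with the specifics of the current situation.
    def instantiate(form_name, **fillers):
        template = FORMS[form_name]
        assert set(fillers) == set(template), "fill every slot of the form"
        return {slot: fillers[slot] for slot in template}

    snapshot = instantiate("authorization",
                           person="Alice", document="purchase order 7")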

 

The difference between SPAN and SNAP (dynamic versus static) MIGHT be merely a question of scale of observation.  Your note of today to the forum addresses this issue of scale of observation, and the nested nature of real, actual (physical?) ontology:

 

Nicolas and Gary,

 

I very much like the idea of ontology design patterns (ODP), but one point I would emphasize is that the choice of design patterns that may be appropriate for one application may be at the wrong level of granularity or completely inefficient for talking about and reasoning about another level.

 

For example, a macroscopic view of objects and processes has a level of granularity that may obscure rather than clarify the entities and their interactions in computer chip design, microbiology, or atomic physics.

 

Another example is the choice of situation calculus, which is the foundation for PSL (Process Specification Language -- a widely used ontology for representing time and processes), but the BPM (Business Process Modeling) approach is based on the pi-calculus, which is an orthogonal cut at representing the same kinds of entities with a totally different set of axioms.

 

One of my criticisms of any ontology that has a fixed upper level, such as DOLCE and many others (including the one I presented in my KR book), is that there is only *one* upper level.  The DOLCE design patterns have been designed to propagate design decisions made for the DOLCE upper level to every level of the ontology from top to bottom.

 

That is a good idea *if* you want to support a single upper level and to enforce its approach on everything at every other level.  However, that would make it impossible to relate different perspectives with different design patterns for different levels of granularity or different methods of reasoning.

 

For example, computer chip design requires a very different level of granularity than the assembly process of connecting parts inside a computer cabinet.  The microbiology of the processes that take place inside the liver requires a different granularity than the operation of transplanting a liver.  But these different levels are interconnected, and it's necessary to relate them.

 

In summary, I would say that design patterns are good, but the ontologist's toolkit of patterns must have a wide selection of patterns to accommodate different levels of granularity for different applications.

 

Design patterns are highly compatible with modularity, but you can't tie the patterns to a fixed upper level.

 

John Sowa

 

The point that can be made with strong stratification is that within an organizational level (defined empirically and by the community of practice) one has both a substructure (of invariance seen as types and roles) and an ultrastructure (seen as design patterns reflecting real ontological forms – such as those that give interpretable meanings to words communicated between individuals).  → additional point on formalization → [228].

 

I do not intend to suggest that I know very much of anything for sure; I am just trying to bring together an architecture in which conceptual models do not have predicate logics (they are observational (empirical) in nature and have no logic; see [228]).  The conceptual models can “use” primitives.  These primitives are also observed, through a community-of-practice mediation process (as in SchemaLogic’s software system), to have no meaning unless there is a context (much like phonemes in speech).  This absence of meaning in the substructural elements (primes or atoms) is called “double articulation” in linguistics.  See [228].

 

This then opens up the possibility that predicate logic is to be attached in real time, when there is a specific model of reality (a request for a web service) that has been produced from a fixed (but open to modification under special circumstances) set of conceptual structures being "filled" by a subset of a set of types and roles.
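
A small sketch of this late binding, again with hypothetical names of my own (REQUEST_FORM, fill_template, and the is_authorized predicate): the conceptual structure carries no logic of its own, and a predicate is attached only when a concrete request arrives.

    # Hypothetical sketch: predicate logic attached only at request time.
    # The conceptual structure is slots only -- no logic attached.
    REQUEST_FORM = ("requester", "service", "credential")

    def fill_template(template, bindings):
        # Instantiate the conceptual form from observed details.
        return {slot: bindings[slot] for slot in template}

    def is_authorized(model):
        # Predicate logic, attached in real time to the filled model.
        return model["credential"] == "valid"

    request = fill_template(REQUEST_FORM,
                            {"requester": "Alice",
                             "service": "get_weather",
                             "credential": "valid"})
    print(is_authorized(request))   # True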

 

I feel that what I am proposing is not different from much of the spirit of the conversation, with the exception that I want controlled vocabularies to be the heart of interoperability and interaction protocols.  These vocabularies should "evoke" a specific set of interaction protocols without the presence of Aristotle-type logical inference.
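
As one illustration of "evoking" rather than inferring (a sketch with hypothetical names; PROTOCOLS and evoke are my own), a controlled-vocabulary term simply selects its interaction protocols by direct lookup, with no chain of logical inference such as is-a subsumption:

    # Hypothetical sketch: a controlled vocabulary evokes interaction
    # protocols by direct lookup, not by logical (is-a) inference.
    PROTOCOLS = {
        "purchase": ["request-quote", "place-order", "confirm-delivery"],
        "transfer": ["authenticate", "move-record", "acknowledge"],
    }

    def evoke(term):
        # No subsumption reasoning: a term either evokes protocols or not.
        return PROTOCOLS.get(term, [])

    print(evoke("purchase"))  # ['request-quote', 'place-order', 'confirm-delivery']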

 

The reason is that these logical inferences, such as "is-a", appear to fail to capture all of the meaning and intent that it is possible to capture (by a human using his or her introspection).  The logical structure may be overused, and may make the task of interaction and interoperability far more difficult than it should be.