OWL Reasoner Evaluation Workshop (ORE 2012)
  • Date: July 1st, 2012
  • Location: Manchester, UK
  • Website: ORE 2012


The OWL Reasoner Evaluation Workshop (ORE) is a satellite event of the IJCAR 2012 conference and will be held on July 1, 2012 in Manchester, UK.

Objectives

OWL is a logic-based ontology language standard designed to promote interoperability, particularly in the context of the (Semantic) Web. The standard has encouraged the development of numerous OWL reasoning systems, and such systems are already key components of many applications.

The goal of this workshop is to bring together the developers of reasoners for (subsets of) OWL, including systems focusing on both intensional (ontology) and extensional (data) query answering. The workshop will give developers a perfect opportunity to promote their systems.

Call for papers

Submissions are solicited from anyone interested in describing or evaluating OWL reasoning and query answering systems. We invite submission of both SHORT SYSTEM DESCRIPTION papers and LONG SYSTEM DESCRIPTION AND EVALUATION papers. Survey papers describing and comparing multiple systems are also welcome. Papers should include a description of the system(s) in question, including:

  • language subset(s) supported;
  • syntax(es) and interface(s) supported;
  • reasoning algorithm(s) implemented;
  • important optimisation techniques used;
  • particular advantages (or disadvantages);
  • application focus (e.g., large datasets, large ontologies, complex ontologies, etc.);
  • other novel or interesting features.

Full papers should also include an evaluation (see guidelines), preferably using (some of) the datasets provided. Short papers may also include a brief performance analysis.

Submissions

Long papers must be no longer than 12 pages, while short papers must be no longer than 6 pages.

Submissions must be in PDF and should be formatted according to the Springer LNCS guidelines. Submission is electronic through EasyChair.

All submissions will be peer-reviewed by the program committee. Selected papers are to be published as a volume of CEUR workshop proceedings.

Important dates

  • Submission of abstracts: April 16th, 2012
  • Paper submission deadline: April 23rd, 2012
  • (Optional) Submission of systems to the SEALS platform: April 16th, 2012
  • Notification of acceptance: May 7th, 2012
  • Camera-ready papers due: May 25th, 2012
  • Workshop: July 1st, 2012 (half-day)

Evaluation guidelines

If possible, evaluations should use the standard datasets provided and present results for the following reasoning tasks (where relevant for the system being evaluated); a timing sketch for the classification task is given after the list:

  • Classification. The dataset consists of a set of OWL ontologies. The total time taken to load and classify each ontology should be reported. It would also be interesting to report on comparisons of the computed taxonomy with the "reference" taxonomies that are provided with the dataset.
  • Class satisfiability. The dataset consists of a set of OWL ontologies, and for each ontology one or more class URIs. The time taken to perform each test along with the satisfiability result for each class should be reported.
  • Ontology satisfiability. The dataset consists of a set of OWL ontologies. The total time taken to load and test the satisfiability of each ontology should be reported, along with the satisfiability result for each ontology.
  • Logical entailment. The dataset consists of a set of pairs of OWL ontologies. The total time taken to determine whether the first ontology entails the second should be reported, along with the entailment result (true or false).
  • Instance retrieval. The dataset consists of a set of OWL ontologies, and for each ontology one or more class expressions. The total time taken to load each ontology and retrieve the set of instances of each class expression should be reported. It would also be interesting to report on comparisons of the retrieved instances with the "reference" sets that are provided with the dataset.
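
For concreteness, the following is a minimal sketch of how the load-and-classify timing for the classification task might be measured using the OWL API, with HermiT standing in as the reasoner under evaluation. The reasoner choice and the command-line file argument are illustrative assumptions, not workshop requirements; substitute your own system.

    import java.io.File;
    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.OWLOntology;
    import org.semanticweb.owlapi.model.OWLOntologyManager;
    import org.semanticweb.owlapi.reasoner.InferenceType;
    import org.semanticweb.owlapi.reasoner.OWLReasoner;
    import org.semanticweb.owlapi.reasoner.OWLReasonerFactory;
    import org.semanticweb.HermiT.Reasoner;

    public class ClassificationTimer {
        public static void main(String[] args) throws Exception {
            File ontologyFile = new File(args[0]); // one ontology from the dataset
            long start = System.nanoTime();

            // Load the ontology from the local file.
            OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
            OWLOntology ontology = manager.loadOntologyFromOntologyDocument(ontologyFile);

            // Classify: ask the reasoner to precompute the class hierarchy.
            OWLReasonerFactory factory = new Reasoner.ReasonerFactory(); // HermiT, as an example
            OWLReasoner reasoner = factory.createReasoner(ontology);
            reasoner.precomputeInferences(InferenceType.CLASS_HIERARCHY);

            // Report the combined load + classification time for this ontology.
            long elapsedMs = (System.nanoTime() - start) / 1000000;
            System.out.println(ontologyFile.getName() + ": " + elapsedMs + " ms");
            reasoner.dispose();
        }
    }

The same OWLReasoner interface covers most of the other tasks, e.g. isConsistent() for ontology satisfiability, isSatisfiable(...) for class satisfiability, isEntailed(...) for logical entailment, and getInstances(...) for instance retrieval.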

It is suggested that the full results of any evaluations performed be made available via the web, with summaries of the results included in the submitted papers as space permits.

SEALS infrastructure

Optionally, participants may submit their systems using the SEALS infrastructure (by April 16th). Using the SEALS platform allows the organisers to perform standardised evaluations of the submitted systems and to present the results during the workshop.

In order to do this, participants should first join the SEALS Community. Once you have your community login, you will be able to register your tools for evaluation. Please add a remark in the version description to indicate that you want to participate in ORE 2012 with that version.

In order to be evaluated using the SEALS platform, systems should provide interfaces for the implemented reasoning services, following the instructions in SEALS deliverable D11.4. The deliverable describes the methods to be implemented, as well as the concrete input and output data expected for each reasoning service evaluation; a purely illustrative sketch of such a wrapper is given below.
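
Purely as an illustration of the kind of interface involved, a wrapper might expose methods along the following lines. This is a hypothetical sketch: the authoritative method names and input/output formats are those specified in deliverable D11.4, not these.

    // Hypothetical sketch only: the real interface is specified in SEALS
    // deliverable D11.4, whose method names and I/O formats take precedence.
    public interface ReasoningServiceWrapper {
        // Load the ontology to be reasoned over; return true on success.
        boolean loadOntology(java.net.URI ontologyLocation);

        // Classify the loaded ontology and return the inferred class
        // hierarchy, serialized as an OWL document.
        java.io.File classify();

        // Report whether the loaded ontology is satisfiable.
        boolean isSatisfiable();
    }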

Programme

System developers will be given 15-minute presentation slots for their system descriptions and evaluation results. The organisers will also present an overview of the results and chair an open discussion comparing the different systems.

  • 09:00-10:30 Welcome and system presentations
  • 10:30-11:00 Coffee break
  • 11:00-12:30 System presentations, overview of results and open discussion

More Conferences

To see a list of other meetings and conferences:

Future Meetings

Past Meetings
