Latest Publications & Resources
Join us in discovery through our latest work!
SeeQuery: An Automatic Method for Recommending Translations of Ontology Competency Questions into SPARQL-OWL
Ontology authoring is a complex and error-prone process, since the knowledge being modeled is expressed in logic-based formalisms whose logical consequences have to be foreseen.
To make that process easier, competency questions (CQs), i.e., questions expressed in natural language, are often stated to verify both the correctness and the completeness of the ontology at a given time. However, to query the ontology, CQs have to be translated into a formal query language such as SPARQL-OWL.
Since the translation step is time-consuming and requires familiarity with the query language, in this paper we propose an automatic method, SeeQuery, which recommends SPARQL-OWL queries as translations of CQs stated against a given ontology.
It consists of a pipeline of transformations based on template matching and filling, motivated by the largest publicly available CQ-to-SPARQL-OWL datasets to date.
We provide a detailed description of SeeQuery and evaluate it on a separate set of two ontologies and their CQs. To date, it is the only automatic method available for recommending SPARQL-OWL queries from CQs.
The source code of SeeQuery is available at: https://github.com/dwisniewski/SeeQuery.
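To give a feel for what template matching and filling means here, below is a minimal, hypothetical Python sketch: a CQ pattern is matched, a noun phrase is mapped to an ontology class, and a SPARQL-OWL query template is filled. The pattern, template, and vocabulary mapping are illustrative assumptions, not the actual templates or matching logic used by SeeQuery.

```python
import re

# Hypothetical illustration of CQ-to-SPARQL-OWL template matching and filling.
# Pattern, template, and vocabulary are assumptions for illustration only.

CQ_PATTERN = re.compile(r"^What types of (?P<noun>[\w ]+) are there\?$", re.IGNORECASE)

SPARQL_OWL_TEMPLATE = """SELECT DISTINCT ?type WHERE {{
  ?type rdfs:subClassOf <{class_iri}> .
}}"""

# Toy mapping from noun phrases to ontology class IRIs; in practice the CQ
# chunks would be matched against the vocabulary of the given ontology.
VOCABULARY = {
    "pizza": "http://example.org/ontology#Pizza",
}

def recommend_query(cq: str) -> str | None:
    """Return a candidate SPARQL-OWL query for the CQ, or None if no template matches."""
    match = CQ_PATTERN.match(cq.strip())
    if not match:
        return None
    class_iri = VOCABULARY.get(match.group("noun").lower())
    if class_iri is None:
        return None
    return SPARQL_OWL_TEMPLATE.format(class_iri=class_iri)

print(recommend_query("What types of pizza are there?"))
```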
How can we use LLMs to create the missing knowledge for CBR cases?
Our article explores the integration of large language models (LLMs) within the Case-Based Reasoning (CBR) framework, using a cooking recipe generation system as an example. This integration leverages the knowledge stored in the neural language model as well as the recorded solutions in the Case-Base.
Historically, CBR systems used rule-based methods to build answers, which significantly limited both their ability to analyze the knowledge in gathered examples and their generative capabilities.
Nowadays, we have wide access to generative language models based on transformer neural networks. They possess strong linguistic skills and incorporate a wide range of knowledge, which makes them capable of providing high-quality answers to previously unseen queries.
This, in turn, allows them to supplement the Case-Base with newly generated examples that can be reused in future generations.
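As a rough illustration of such an LLM-assisted CBR cycle, here is a minimal Python sketch: similar cases are retrieved, included in the prompt to a language model, and the generated solution is retained in the Case-Base for later reuse. All names (Case, retrieve, call_llm, generate_recipe) and the naive word-overlap retrieval are assumptions for illustration, not the actual system described in the article.

```python
from dataclasses import dataclass

# Sketch of an LLM-assisted CBR cycle for recipe generation (illustrative only).

@dataclass
class Case:
    query: str     # e.g. requested ingredients or dish type
    solution: str  # e.g. the recipe text

case_base: list[Case] = [
    Case("vegetarian pasta with mushrooms",
         "Cook pasta; saute mushrooms with garlic; combine and season."),
]

def retrieve(query: str, k: int = 3) -> list[Case]:
    """Naive retrieval: rank stored cases by word overlap with the query."""
    def overlap(case: Case) -> int:
        return len(set(query.lower().split()) & set(case.query.lower().split()))
    return sorted(case_base, key=overlap, reverse=True)[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a generative language model."""
    raise NotImplementedError("Plug in an actual LLM client here.")

def generate_recipe(query: str) -> str:
    # Reuse: include the most similar recorded cases in the prompt.
    examples = "\n\n".join(
        f"Request: {c.query}\nRecipe: {c.solution}" for c in retrieve(query)
    )
    prompt = f"{examples}\n\nRequest: {query}\nRecipe:"
    solution = call_llm(prompt)
    # Retain: store the newly generated solution so it can be reused in future generations.
    case_base.append(Case(query, solution))
    return solution
```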