Re: [RT] Moving Piggy Bank forward...

From: Jeen Broekstra <>
Date: Wed, 27 Jul 2005 11:43:31 +0200

Stefano Mazzocchi wrote:

>> Unfortunately, there is currently no production-ready custom
>> inferencer for the in-memory store (the available custom
>> inferencer only operates on MySQL databases). There is some raw
>> code on a new custom inferencer that uses SeRQL queries as the
>> rule format, but we are short a number of hands to make that
>> thing work properly. An alternative is perhaps OntoTexts' OWLIM
>> package, which is an adapted in-memory store that can do simple
>> OWL-Lite entailment.
> If you can point us to the code, maybe we can fill up those needed
> hands.
> We *really* need to perform basic OWL-lite (or even tiny?)
> entailment.

Ok, the OWLIM package can be found in the Sesame plugin section:

It is a separate download, but from what I understand installation and
use are dead easy. Have a look; this might already solve your problem.

The SeRQL-based custom inferencer code was originally done by Alan
Cyment (see,
unfortunately he had other commitments which made it impossible for
him to finish the project. It is currently still in CVS, module
'openrdf', branch 'serql_custom_inferencing'. The basic idea is there
but it is far from robust and will need tweaking before it even runs.
But it does parse SeRQL queries and translates them to inference rules
that can be applied.
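To give an idea of the rule format (this is my own illustrative
example, not code from the branch): RDFS subclass transitivity could
be expressed as a SeRQL CONSTRUCT query, whose result statements are
added back to the store, repeating until no new statements are
produced:

```
CONSTRUCT {Sub} rdfs:subClassOf {Super}
FROM {Sub} rdfs:subClassOf {Mid},
     {Mid} rdfs:subClassOf {Super}
USING NAMESPACE
     rdfs = <http://www.w3.org/2000/01/rdf-schema#>
```

The attraction of this approach is that anyone who can write a SeRQL
query can, in principle, define a custom entailment rule.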

The idea behind this inferencer was to have a storage-independent part
and a storage-specific part (so one for memory, one for RDBMS, etc.),
to allow sharing of code while at the same time enabling
storage-specific optimizations to the inferencing algorithm.
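As a purely illustrative sketch of that split (the interface and class
names below are invented for this example; they are not Sesame's
actual API): the shared inferencing loop only talks to a small store
interface, and each backend supplies its own matching and adding, so
an RDBMS implementation could, for instance, push the matching down
into SQL:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class InferencerSketch {
    // Storage-independent representation of a statement.
    record Triple(String s, String p, String o) {}

    // Storage-specific part: each store implements matching and adding
    // in whatever way suits it (indexes, SQL, etc.).
    interface RuleStore {
        List<Triple> match(String predicate);
        boolean add(Triple t); // returns true if the triple was new
    }

    // Simple in-memory implementation of the storage-specific part.
    static class MemoryRuleStore implements RuleStore {
        private final Set<Triple> triples = new LinkedHashSet<>();
        public List<Triple> match(String predicate) {
            List<Triple> out = new ArrayList<>();
            for (Triple t : triples)
                if (t.p().equals(predicate)) out.add(t);
            return out;
        }
        public boolean add(Triple t) { return triples.add(t); }
    }

    // Storage-independent part: apply subClassOf transitivity to a
    // fixpoint, shared by every store implementation.
    static void infer(RuleStore store) {
        boolean changed = true;
        while (changed) {
            changed = false;
            List<Triple> subs = store.match("rdfs:subClassOf");
            for (Triple a : subs)
                for (Triple b : subs)
                    if (a.o().equals(b.s()))
                        changed |= store.add(
                            new Triple(a.s(), "rdfs:subClassOf", b.o()));
        }
    }

    public static void main(String[] args) {
        RuleStore store = new MemoryRuleStore();
        store.add(new Triple("ex:Dog", "rdfs:subClassOf", "ex:Mammal"));
        store.add(new Triple("ex:Mammal", "rdfs:subClassOf", "ex:Animal"));
        infer(store);
        // ex:Dog subClassOf ex:Animal has been inferred, so 3 in total.
        System.out.println(store.match("rdfs:subClassOf").size());
    }
}
```

Only the fixpoint loop is shared; swapping `MemoryRuleStore` for a
database-backed implementation would not touch the inferencing code.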

To make things more complicated: we are currently working hard on
Sesame 2.0, which is a complete revision of Sesame's internal and
external APIs. The main goals of this revision are better transaction
support, context support, and smoother integration into other
projects. Part of this is a rethink of the inferencer design: we are
hoping to decouple the inferencer from the store a bit more, so that
dynamic switching from non-inferencing to inferencing (and vice
versa), or replacing one inferencer with another becomes possible. All
this is currently work in progress: the goal is to have a first beta
release of Sesame 2.0 in early September.

>> As an aside: basic inferencing is still on the ToDo list for the
>> native store. We are also awaiting a number of third party
>> contributions which will hopefully significantly improve native
>> store performance (better indexing). I'll keep you informed on
>> progress if you want.
> Please do, we do want to be more active in helping shape this
> space in the future, but we are unsure of what/how to do it. More
> collaboration would just be great for us, and yes, we are not just
> people who demand a fix; we do it ourselves if we know we are
> not stepping on other people's toes.

Fixes and other contributions are _always_ most welcome :)

The story is currently a bit complex since we are doing all this API
revision stuff. Depending on how urgently you need things you can
either decide to develop against the Sesame 2.0 code base (the APIs
are there but only one store implementation - in-memory), or against
the Sesame 1.x code base (this will probably be quicker to get running
but on the other hand you'll have to do a lot of refactoring work when
the switch to Sesame 2.0 is made). Using the 2.0 APIs would be a big
advantage to us as well by the way, since it will hopefully give us
early feedback on the design from you guys. But it's up to you of course!

> Ah, and I hate web forums with a passion ;-)

*sigh* there's no pleasing some people... Heh, but seriously, I'm fine
with discussing Sesame-related stuff here instead if that is ok with
the rest of the list-inhabitants.


Jeen Broekstra          Aduna BV
Knowledge Engineer      Julianaplein 14b, 3817 CS Amersfoort        The Netherlands
tel. +31 33 46599877

Received on Wed Jul 27 2005 - 09:42:00 EDT

This archive was generated by hypermail 2.3.0 : Thu Aug 09 2012 - 16:39:18 EDT