Re: [RT] rethinking the need for inferencing

From: Arjohn Kampman <>
Date: Tue, 07 Mar 2006 11:57:10 +0100

Hi Stefano, others,

Hope you don't mind me breaking into this discussion. I have the
impression that what you are aiming for is similar to what we intend(ed)
to accomplish with Sesame's stackable Sail objects:

Stefano Mazzocchi wrote:
> All transformation phases require access to the input data stream for
> the transformation to happen, some of these retain state and some do
> not. It is always preferable to have stages that do not retain state (or
> that retain, reliably, a small state in a windowed scenario)

Such a one-way model is often too restrictive. In the context of
Sesame's Sail API, it would only allow forward-chaining inferencers,
and our experience with such inferencers in Sesame is that they do not
scale to large amounts of data.
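To make the scaling concern concrete, here is a minimal sketch (not Sesame code; all names are hypothetical) of what a forward-chaining RDFS inferencer does: it materializes every entailed triple up front by applying the RDFS entailment rules to a fixpoint, so the store grows with the size and depth of the class hierarchy regardless of what is ever queried.

```python
# Illustrative sketch (NOT the Sesame Sail API): forward-chaining RDFS
# reasoning materializes all entailed triples ahead of query time.
# Triples are plain (subject, predicate, object) string tuples.

def forward_chain(triples):
    """Apply RDFS rules rdfs11 (subClassOf transitivity) and rdfs9
    (type propagation along subClassOf) until a fixpoint is reached."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for s, p, o in inferred:
            if p == "rdfs:subClassOf":
                for s2, p2, o2 in inferred:
                    # rdfs11: (a subClassOf b), (b subClassOf c)
                    #         => (a subClassOf c)
                    if p2 == "rdfs:subClassOf" and s2 == o:
                        new.add((s, "rdfs:subClassOf", o2))
                    # rdfs9: (x type a), (a subClassOf b) => (x type b)
                    if p2 == "rdf:type" and o2 == s:
                        new.add((s2, "rdf:type", o))
        if not new <= inferred:
            inferred |= new
            changed = True
    return inferred

explicit = {
    ("ex:Dog", "rdfs:subClassOf", "ex:Mammal"),
    ("ex:Mammal", "rdfs:subClassOf", "ex:Animal"),
    ("ex:fido", "rdf:type", "ex:Dog"),
}
closed = forward_chain(explicit)
# Three explicit triples entail three more, e.g.
# ("ex:fido", "rdf:type", "ex:Animal") -- the closure doubles the store
# here, and deep hierarchies with many instances blow it up much faster.
```

Every inserted instance triggers re-derivation, and deleting a statement means retracting all of its consequences, which is where the scalability problems bite.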

The Sail API in Sesame 2 was revised to also allow query rewriting,
allowing a Sail implementation to use a combination of forward- and
backward-chaining. The paper "Time-Space Trade-offs in Scaling up RDF
Schema Reasoning" might be of interest to you:
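As a rough illustration of the backward-chaining side of that trade-off (again a hypothetical sketch, not the Sesame 2 Sail API), query rewriting stores nothing extra: a query for `rdf:type` is expanded at evaluation time into a union over the subclass hierarchy, trading query-time work for load-time space.

```python
# Illustrative sketch (NOT the Sesame 2 Sail API): backward-chaining by
# query rewriting. No inferred triples are stored; the hierarchy is
# walked on demand when a query arrives.

def subclasses(store, cls):
    """Return cls and all of its (transitive) subclasses,
    computed on demand from the explicit triples only."""
    result, todo = {cls}, [cls]
    while todo:
        current = todo.pop()
        for s, p, o in store:
            if p == "rdfs:subClassOf" and o == current and s not in result:
                result.add(s)
                todo.append(s)
    return result

def query_type(store, cls):
    """Rewrite the pattern "?x rdf:type cls" into a union over the
    subclasses of cls, evaluated against the explicit triples."""
    return {s for s, p, o in store
            if p == "rdf:type" and o in subclasses(store, cls)}

store = {
    ("ex:Dog", "rdfs:subClassOf", "ex:Mammal"),
    ("ex:Mammal", "rdfs:subClassOf", "ex:Animal"),
    ("ex:fido", "rdf:type", "ex:Dog"),
}
# query_type(store, "ex:Animal") finds ex:fido even though the triple
# ("ex:fido", "rdf:type", "ex:Animal") was never materialized; the
# store itself stays at its original three statements.
```

A revised Sail API that lets an implementation intercept and rewrite queries can then mix both strategies: materialize the cheap, frequently-used entailments and rewrite the rest.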

Aduna BV -
Prinses Julianaplein 14-b, 3817 CS Amersfoort, The Netherlands
tel. +31-(0)33-4659987
Received on Tue Mar 07 2006 - 10:56:05 EST
