Hi everybody,
> David Huynh wrote:
>
> > David R. Karger wrote:
> >
> >> I'm not certain the distinction between "view" and "lens" can really
> >> be formalized---obviously, every view is only showing some aspect of
> >> the object. But there do seem to be very different ends to the
> >> spectrum, that perhaps need to be handled differently?
> >>
> >>
> > In practice, when one writes a lens in Haystack, it is often with hopes
> > that the lens can be used for more than one type of information (e.g.,
> > an "address" lens can be used for people and companies). When writing a
> > view, it is often with concession that it can only be used for one type
> > of information, in a limited context, for a particular task. The view is
> > specially laid out and cannot be guaranteed to generalize.
>
> Right on! I like the idea of distinguishing a view from a lens based
> on its degree of generalization.
>
> On the other hand, I can't stop thinking that this "degree of
> generalization" is really a float and not a boolean.
>
I agree that grouping lenses together for special purposes is definitely
useful.
As a first step, I have added a lens:View class to the vocab, which allows
you to link sequences of lenses to specific contexts. See attached files. I
guess defining what exactly a context is and which context properties are
useful should be left to the different applications/browsers.
lens:View rdf:type rdfs:Class .
Usage Example:
:ProjectParticipantsView rdf:type lens:View ;
rdfs:label "Project Participant View" ;
lens:viewDomain foaf:Person ;
lens:viewContext :work ;
lens:showLenses [ rdf:type rdf:Seq ;
rdf:_1 :StandardLens ;
rdf:_2 :ContactLens ;
rdf:_3 :PersonsProjectsLens ] .
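To make the context selection concrete, here is a sketch of a second view
over the same domain (the :meeting context and the :ProjectContactView name
are invented for illustration), so a browser could pick among views by
matching lens:viewContext:

```turtle
# Sketch only: a second view over foaf:Person for a different,
# hypothetical context. A browser could choose between
# :ProjectParticipantsView and this one via lens:viewContext.
:ProjectContactView rdf:type lens:View ;
    rdfs:label "Project Contact View" ;
    lens:viewDomain foaf:Person ;
    lens:viewContext :meeting ;     # :meeting is an invented context
    lens:showLenses [ rdf:type rdf:Seq ;
                      rdf:_1 :ContactLens ] .
```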
An alternative approach would be to make views a special kind of lens:
a view would then be modelled as a lens that has sublenses and an extra
context property. This would avoid the extra class lens:View, which is
nice if we try to be minimalist, but it would be a little bit hacky.
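For comparison, the alternative might look roughly like this (a sketch
only; lens:context and this use of showLenses on a plain lens are
assumptions, not part of the attached vocab):

```turtle
# Sketch of the "view as a special kind of lens" alternative.
# lens:context is an assumed property name; no lens:View class is used.
:ProjectParticipantsView rdf:type lens:Lens ;
    rdfs:label "Project Participant View" ;
    lens:lensDomain foaf:Person ;
    lens:context :work ;                  # extra context property
    lens:showLenses [ rdf:type rdf:Seq ;  # sublenses instead of properties
                      rdf:_1 :StandardLens ;
                      rdf:_2 :ContactLens ;
                      rdf:_3 :PersonsProjectsLens ] .
```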
Any opinions on whether we should include views as an additional concept?
Or on the modelling?
Another question: if we call a group of lenses a view, what do we call the
different "display modes" of a view, e.g. whether the view is displayed
as text or as an IsaViz-like graphic?
We seem to be running out of terms here a bit ;-)
>
> Note that there are three aspects that we still haven't considered:
>
> - adaptation: the act of adapting the content to some given needs
Which needs? Can you please be a little bit more concrete ;-)
> - interaction: the act of capturing user events and translate them
> into further selection or adaptation
This sounds like a job for the browser (e.g. the user selects a view,
collapses a lens, drills down), and we only have to care about it indirectly
when writing the vocabs, e.g. by providing the browser with hints about how
lenses are related (lens:additionalInformationFor or :moreDetailsFor) or
about which view might be appropriate for a specific context.
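Such hints could stay very lightweight, along these lines (a sketch;
:moreDetailsFor is a placeholder name, not yet in the vocab):

```turtle
# Sketch: relating lenses so a browser can offer drill-down.
# :moreDetailsFor is a placeholder property, as mentioned above.
:ContactLens         :moreDetailsFor :StandardLens .
:PersonsProjectsLens :moreDetailsFor :ContactLens .
```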
> - modification: the act of modifying the content in a persistent way
I thought we agreed that modification is out of scope for now and that
modification features will be added in a separate vocab later.
...
> As another thought experiment, note that the current approach only
> lets me select actual (materialized) properties of the object. But
> sometimes, we might want to show a _derived_ property. Eg, suppose I
> am looking at a person and want to show the weather where they live?
I think this is definitely out of scope and a task for an underlying
inference layer.
...
> Separately, I think there is a lot of power to having a "view of
> views" that lets users see and select from the entire pool of
> available lenses of the object.
I fully agree with this.
I also updated the lens vocab with some earlier proposals of the discussion.
+ added :propertySeq and :propertyBag again
+ added :hideProperties for specifying what not to show
+ added :_X for handling rdf:Bag and rdf:Seq
+ added the view vocabulary as described above
The updated lens vocab is attached together with the updated FOAF example.
Chris
Received on Fri Oct 15 2004 - 14:40:35 EDT