Re: Piggy-Bank feedback

From: Stefano Mazzocchi <>
Date: Tue, 01 Feb 2005 12:38:43 -0500

DuCharme, Bob (LNG-CHO) wrote:
> It looks great, and the step-by-step "Have Fun with It" part is very
> useful. It's the kind of thing that most free software is sorely lacking
> as a way to get to step 2 with the software.

Thanks again. David deserves the credit for having prepared such a
detailed collection of screenshots.

> I'm sure the "Developer's Guide" is a big project itself, but just a few
> paragraphs there as well will be really helpful.

Yeah, I know. Don't worry, we are perfectly aware that without a
serious developer's guide we'll be the only ones doing development. We
clearly don't want that, so expect something up soon (as soon as David
has merged his modifications into the Longwell trunk).

> Specifically, while the
> User's Guide refers to "semantic information tidbits ... that Piggy-Bank
> retrieves from links embedded within the web page you're currently
> viewing" and browsing "to a web-page with semantic data," having a
> little more detail about what Piggy-Bank looks for and what it does with
> it will encourage more people to add such metadata.

Very true. Good suggestion.

> (Woo-hoo! An app
> that checks for and does something with RDF metadata accompanying a web
> page!)
> My guess based on a bit of experimenting: Piggy-Bank looks for link
> elements in a web page's head element that have a @type value of
> "application/rss+xml" or "application/rdf+xml" and loads the triples
> that it finds in the resource identified by the @href attribute. Does it
> do anything with any other attributes of the link element, such as @rel
> and @title?

No, not yet. In fact this was my first suggestion for David:

  1) RDF-ize the non-RDF RSS feeds and then collect those (hopefully
this will put an end to the "yes/no RDF" debate for RSS feeds)

  2) RDF-ize the HTML of the page by scraping and interpreting the
<head> section (title, meta tags, embedded RDF such as Creative Commons
licenses)

The head section of HTML is normally not that screwed up in terms of
non-well-formed syntax, so it shouldn't require much parsing effort;
maybe a few regexps would do.
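To make the regexp idea concrete, here is a minimal sketch (not
Piggy-Bank's actual code, and the function names are made up) of
scraping a page's <head> for a title, <meta> name/content pairs, and
RDF/RSS autodiscovery links of the kind described above:

```python
import re

# All patterns are deliberately naive: real-world HTML needs more care,
# but <head> sections are usually tame enough for this to mostly work.
HEAD_RE = re.compile(r"<head.*?>(.*?)</head>", re.I | re.S)
TITLE_RE = re.compile(r"<title.*?>(.*?)</title>", re.I | re.S)
META_RE = re.compile(r'<meta\s+name="([^"]+)"\s+content="([^"]+)"', re.I)
LINK_RE = re.compile(
    r'<link\s+[^>]*type="application/(?:rdf\+xml|rss\+xml)"[^>]*>', re.I)
HREF_RE = re.compile(r'href="([^"]+)"', re.I)

def scrape_head(html):
    """Return (title, metas, rdf_links) scraped from the <head>."""
    m = HEAD_RE.search(html)
    head = m.group(1) if m else ""
    t = TITLE_RE.search(head)
    title = t.group(1).strip() if t else None
    metas = dict(META_RE.findall(head))
    # LINK_RE has no capturing groups, so findall yields whole tags;
    # pull the @href out of each matching autodiscovery link.
    links = [HREF_RE.search(tag).group(1)
             for tag in LINK_RE.findall(head) if HREF_RE.search(tag)]
    return title, metas, links
```

For example, running it over a page whose head contains a title, a
keywords meta tag, and a `<link type="application/rdf+xml"
href="/page.rdf">` yields those three pieces back out.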

The hard part is to map the <meta> tags to URIs... but we can first
collect them as folksonomical tags, assign each a URI scoped to that
very page, and draw OWL equivalences later.
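A hedged sketch of that folksonomy step (the URI scheme and function
names here are illustrative assumptions, not what Piggy-Bank does):
each keyword from a <meta> tag gets a URI minted relative to the page
it came from, so equivalences between tags on different pages can be
asserted later in OWL.

```python
from urllib.parse import quote

def tag_uri(page_url, keyword):
    # Hypothetical scheme: scope the tag URI to the page it came from
    # by appending a fragment; normalize case and percent-encode.
    return "%s#tag:%s" % (page_url, quote(keyword.strip().lower()))

def tags_from_meta(page_url, keywords_content):
    """Split a <meta name="keywords"> value into page-scoped tag URIs."""
    return [tag_uri(page_url, k)
            for k in keywords_content.split(",") if k.strip()]
```

Two pages tagging "RDF" would then mint two distinct URIs, and an
`owl:sameAs` (or weaker) link between them could be added once someone
decides the tags really mean the same thing.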

> You might want to start a Wiki page or something where people can add
> URLs of web pages that have metadata that looks good in Piggy-Bank, and
> then you could add that Wiki to the list of three links on the "This
> webpage contains no information tidbits that can be saved" default sidebar.

The Simile Wiki is already available, but it's write-protected to avoid
spam.

I was planning to make better use of the AC for that, so stay tuned.

Anyway, thanks for your comments.

Stefano Mazzocchi
Research Scientist                 Digital Libraries Research Group
Massachusetts Institute of Technology            location: E25-131C
77 Massachusetts Ave                   telephone: +1 (617) 253-1096
Cambridge, MA  02139-4307              email: stefanom at mit . edu
Received on Tue Feb 01 2005 - 17:38:21 EST
