- What is Linked Data Fragments all about?
- What is a Linked Data Fragment?
- What is a Triple Pattern Fragment?
- What is the purpose of Linked Data Fragments?
- What is a Linked Data Fragments server?
- What is a Triple Pattern Fragments server?
- What is a Triple Pattern Fragments client?
- How do Triple Pattern Fragments differ from SPARQL results?
- How do Triple Pattern Fragments differ from data dumps?
- How do Linked Data Fragments relate to Linked Data Platform?
- Who is working on Linked Data Fragments?
What is Linked Data Fragments all about?
We aim to find new ways of publishing Linked Data, in which the query workload is distributed between clients and servers. Watch this video for a detailed overview of Linked Data Fragments.
What is a Linked Data Fragment?
A Linked Data Fragment (LDF) of a Linked Data dataset is a resource consisting of those triples of this dataset that match a specific selector, together with metadata and hypermedia controls.
Examples of Linked Data Fragments include:
- data dumps (example): their selector is the universal selector, their metadata set includes the file size, and their control set is empty.
- subject pages (example): their selector is a subject URI, their metadata set is often empty, and their control set is given by URIs that can be dereferenced.
- SPARQL results (example): their selector is a CONSTRUCT query, their metadata set is empty, and their control set includes the endpoint URI, which allows retrieving other SPARQL results.
The Linked Data Fragments specification formally captures this concept.
What is a Triple Pattern Fragment?
A Triple Pattern Fragment is a Linked Data Fragment with a triple pattern as selector, count metadata, and controls to retrieve any other Triple Pattern Fragment of the same dataset, in particular other fragments to which the matching elements belong. Fragments are paged so that each page contains only part of the data.
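The three ingredients of a fragment (pattern-matched data, count metadata, and paging) can be illustrated with a minimal in-memory sketch. This is purely illustrative: the function name and dictionary keys below are hypothetical, and a real server follows the Triple Pattern Fragments specification, returning RDF with hypermedia controls rather than a Python dictionary.

```python
# Hypothetical in-memory sketch of Triple Pattern Fragment selection.
# A pattern component of None acts as a variable (matches anything).

def triple_pattern_fragment(triples, subject=None, predicate=None, obj=None,
                            page=1, page_size=2):
    """Return one page of matching triples, plus count metadata."""
    matches = [t for t in triples
               if (subject is None or t[0] == subject)
               and (predicate is None or t[1] == predicate)
               and (obj is None or t[2] == obj)]
    start = (page - 1) * page_size
    return {
        "triples": matches[start:start + page_size],  # the data on this page
        "totalItems": len(matches),                   # count metadata
        "nextPage": page + 1 if start + page_size < len(matches) else None,
    }

dataset = [
    ("ex:picasso",  "rdf:type",   "ex:Artist"),
    ("ex:picasso",  "ex:painted", "ex:guernica"),
    ("ex:dali",     "rdf:type",   "ex:Artist"),
    ("ex:magritte", "rdf:type",   "ex:Artist"),
]

fragment = triple_pattern_fragment(dataset, predicate="rdf:type", obj="ex:Artist")
print(fragment["totalItems"])    # → 3: the count covers the whole fragment,
print(len(fragment["triples"]))  # → 2: even though this page holds only 2
```

Note how the count metadata reports the total number of matches even though each page is small; clients rely on exactly this to plan queries efficiently.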
Triple Pattern Fragments (example) minimize server processing, while enabling efficient querying by clients:
- Data dumps allow full querying on the client side, but all processing happens locally. Therefore, it is not Web querying: the data is likely outdated and only comes from a single source.
- Subject pages also require low server effort, but they do not allow efficient querying of all graph patterns. For instance, finding a list of artists is nearly impossible with regular dereferencing or Linked Data querying.
- Compared to SPARQL results, Triple Pattern Fragments are easier to generate because the server effort is bounded. In contrast, each SPARQL query can demand an unlimited amount of server resources.
The Triple Pattern Fragments specification formally captures this concept.
A Triple Pattern Fragments client answers SPARQL queries using only Triple Pattern Fragments.
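The core idea behind such a client can be sketched as a greedy join: split the query into triple patterns, use the count metadata to start with the most selective pattern, bind its variables, and repeat. The sketch below is a simplified, hypothetical in-memory version, assuming the "server" is just a list of triples; a real client issues HTTP requests for fragments and handles paging.

```python
# Illustrative sketch of how a Triple Pattern Fragments client answers a
# basic graph pattern using only per-pattern lookups and count metadata.
# Terms starting with "?" are variables; the "server" is an in-memory list.

def match(triples, pattern):
    """All triples matching a pattern; '?'-prefixed terms match anything."""
    def ok(term, value):
        return term.startswith("?") or term == value
    return [t for t in triples if all(ok(p, v) for p, v in zip(pattern, t))]

def substitute(pattern, bindings):
    return tuple(bindings.get(term, term) for term in pattern)

def solve(triples, patterns, bindings=None):
    """Greedy join: always expand the pattern with the lowest count first."""
    bindings = bindings or {}
    if not patterns:
        yield bindings
        return
    # Ask the "server" for counts and pick the most selective pattern.
    concrete = [(substitute(p, bindings), p) for p in patterns]
    concrete.sort(key=lambda cp: len(match(triples, cp[0])))
    chosen, original = concrete[0]
    rest = [p for p in patterns if p is not original]
    for triple in match(triples, chosen):
        new = dict(bindings)
        for term, value in zip(chosen, triple):
            if term.startswith("?"):
                new[term] = value
        yield from solve(triples, rest, new)

dataset = [
    ("ex:picasso", "rdf:type",   "ex:Artist"),
    ("ex:picasso", "ex:painted", "ex:guernica"),
    ("ex:dali",    "rdf:type",   "ex:Artist"),
]
query = [("?artist", "rdf:type", "ex:Artist"),
         ("?artist", "ex:painted", "?work")]
print(list(solve(dataset, query)))
# → [{'?artist': 'ex:picasso', '?work': 'ex:guernica'}]
```

Because the second pattern matches fewer triples, the client expands it first and only then fills in the remaining pattern, keeping the number of requests low. Actual clients apply the same selectivity heuristic using the `totalItems` counts of each fragment.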
What is the purpose of Linked Data Fragments?
With Linked Data Fragments, we aim to discuss ways to publish Linked Data in addition to SPARQL endpoints, subject pages, and data dumps. In particular, we want to enable clients to query the Web of Data, which today is unreliable at best because of the low availability of public SPARQL endpoints.
New types of Linked Data Fragments, such as Triple Pattern Fragments, can vastly improve server availability while still enabling client-side querying.
In short, the goal of Linked Data Fragments is to build servers that enable intelligent clients.
What is a Linked Data Fragments server?
A Linked Data Fragments server (LDF server) is an HTTP server that offers Linked Data Fragments covering one or more datasets in at least one triple-based representation.
This means that SPARQL endpoints, Pubby servers, HTTP servers with RDFa, … are all LDF servers.
What is a Triple Pattern Fragments server?
A Triple Pattern Fragments server is a Linked Data Fragments server that offers at least the Triple Pattern Fragments of certain datasets.
An example Triple Pattern Fragments server is available at data.linkeddatafragments.org.
You can set up your own server with the open-source server software.
What is a Triple Pattern Fragments client?
A Linked Data Fragments client (LDF client) consumes Linked Data Fragments in a certain way.
A client such as your Web browser lets you browse Triple Pattern Fragments. More advanced clients, such as Triple Pattern Fragments clients, perform more complicated tasks like answering SPARQL queries.
How do Triple Pattern Fragments differ from SPARQL results?
First of all, results of SPARQL queries are Linked Data Fragments, because they represent a fragment of the underlying dataset. However, they are not Triple Pattern Fragments, which have a single triple pattern as selector and provide count metadata and controls that lead to other fragments.
Compare an example SPARQL fragment to an example Triple Pattern Fragment.
Each individual SPARQL query can take a lot of processing time. In contrast, Triple Pattern Fragments are easy to generate. Furthermore, the number of Triple Pattern Fragments for each dataset is finite, so they can be precomputed and cached efficiently.
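Because each fragment is fully identified by its selector, caching reduces to a simple key lookup; any HTTP cache between client and server benefits the same way. The sketch below is a hypothetical illustration of this property, not part of any real server implementation; names like `fragment_key` are invented here.

```python
# Sketch of why Triple Pattern Fragments cache so well: every fragment is
# identified by its (subject, predicate, object, page) selector, so a plain
# dictionary keyed on the normalized selector is already an effective cache.

cache = {}
calls = []  # records how often the expensive computation actually runs

def fragment_key(subject=None, predicate=None, obj=None, page=1):
    # Normalize unspecified components so equivalent requests share a key.
    return (subject or "?s", predicate or "?p", obj or "?o", page)

def compute_fragment(subject=None, predicate=None, obj=None, page=1):
    calls.append(1)  # stand-in for the actual (bounded) fragment computation
    return f"fragment for {fragment_key(subject, predicate, obj, page)}"

def cached_fragment(**selector):
    key = fragment_key(**selector)
    if key not in cache:
        cache[key] = compute_fragment(**selector)  # computed once per fragment
    return cache[key]

cached_fragment(predicate="rdf:type")
cached_fragment(predicate="rdf:type")  # second request is served from cache
print(len(calls))  # → 1
```

Arbitrary SPARQL queries lack this property: their space of possible requests is unbounded, so identical cache hits are rare.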
With a SPARQL endpoint, different clients expect a single server to answer many different complex questions. With a Triple Pattern Fragments server, different clients only ask for reusable, simple answers and perform query-specific tasks themselves. Moving processing from server to client leads to higher scalability.
Linked Data Fragments let you publish your dataset in a queryable way without having to worry about server stability and availability issues.
How do Triple Pattern Fragments differ from data dumps?
Another way to avoid depending on public SPARQL endpoints is to download a data dump and host a SPARQL server locally. This would give you a strong performance advantage over public endpoints.
However, this is not Web querying. The data is not up to date, and only specific datasets are available.
Triple Pattern Fragments servers aim to bring the availability rates of private endpoints to the public Web by moving intensive processing to the client side.
How do Linked Data Fragments relate to Linked Data Platform?
“Linked Data Platform” is a W3C Working Draft that describes a read-write Linked Data architecture. “Linked Data Fragments” is a generic term for fragments of Linked Data datasets.
More specifically, with Linked Data Fragments, we aim to investigate scalable ways to publish Linked Data that enable clients to efficiently perform complex operations such as querying.
A crucial difference between Linked Data Fragments and the Linked Data Platform is that the latter proposes one specific service, that is, a detailed set of rules that have to be followed.
In contrast, we envision various Linked Data Servers with different APIs to be used in more flexible ways.
Most importantly, Linked Data Fragments and Linked Data Platform are orthogonal; a server can offer Triple Pattern Fragments, while also implementing the Linked Data Platform read-write interface.
Who is working on Linked Data Fragments?
The following people have published work on Linked Data Fragments: Ruben Verborgh, Erik Mannens, Rik Van de Walle, Pieter Colpaert, Miel Vander Sande, Laurens De Vocht, Joachim Van Herwegen, Ruben Taelman, Olaf Hartig, Ben De Meester, Gerald Haesendonck, Richard Cyganiak, Maribel Acosta, Luda Balakireva, Sander Ballieu, Christian Beecks, Wouter Beek, Carlos Buil-Aranda, Sam Coppens, Oscar Corcho, Anastasia Dimou, Pauline Folz, Pieter Heyvaert, Patrick Hochstenbach, Alejandro Llaves, Pascal Molli, Laurens Rietveld, Stefan Schlobach, Thomas Seidl, Harihar Shankar, Hala Skaf-Molli, Herbert Van de Sompel, Maria-Esther Vidal.