
5.

...

[Figure: ODN-architecture-overview.png]

Open Data Node consists of the following modules:

  • ODN/UnifiedViews

  • ODN/Storage

  • ODN/Publication

  • ODN/InternalCatalog

  • ODN/Catalog

  • ODN/Management

The modules listed above are discussed in more detail in the following sections.

SearchPortal, which allows users to search the published data, is not described in this document, as it is a separate application and not part of ODN.

 

5.1. Module ODN/UnifiedViews

Module ODN/UnifiedViews is an ETL & data enrichment tool.

It is responsible for extracting and transforming source data (datasets), so that they can be published as (linked) open data. The result of the transformation is stored in the database managed by ODN/Storage module.

ODN/UnifiedViews module is responsible for:

  1. extracting data provided by data publishers

  2. transforming these data to machine readable data format; such transformation may include enriching the data, cleansing the data, assessing the quality of the data

  3. storing the machine readable data to the database managed by ODN/Storage.
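The three steps above can be sketched as a minimal extract-transform-load flow. This is purely illustrative: the function names and the trivial cleansing rule are invented for this sketch and are not the UnifiedViews API.

```python
# Minimal illustration of the three ETL steps; all names are invented
# for this sketch, not taken from UnifiedViews.
import csv
import io

def extract(raw_csv: str) -> list:
    """Step 1: read source data provided by the data publisher."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list) -> list:
    """Step 2: make the data machine readable; here, a trivial
    cleansing step stripping whitespace and dropping unnamed rows."""
    return [
        {key: value.strip() for key, value in row.items()}
        for row in rows
        if row.get("name", "").strip()
    ]

def load(rows: list, storage: list) -> None:
    """Step 3: hand results over to the storage layer (a stand-in for
    the database managed by ODN/Storage)."""
    storage.extend(rows)

storage = []
raw = "name,population\n Bratislava ,432000\n,0\nKosice,240000\n"
load(transform(extract(raw)), storage)
```

In a real pipeline each step is a DPU, and the cleansing/enrichment logic is far richer; the point is only the extract → transform → store shape of the module.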

Input of the module is the data provided by data publishers. Data is expected to be structured, mostly tabular or linked data (RDF). The module will support basic data formats out of the box; support for more complex data formats is available via plugins.

The module will work with different formats (in files), but data in RDF format is preferred. The RDF format allows the use of advanced data cleansing and enrichment techniques based on linked data even for use cases where the output will not be in RDF (for example, cases where ODN is used to clean CSV files before publishing).

Output of the module is the extracted and transformed machine readable data stored in ODN/Storage. Again, data is expected to be structured, tabular or linked data.

5.1.1. UnifiedViews - state of the art

Module ODN/UnifiedViews will use the tool UnifiedViews (https://github.com/UnifiedViews) as its base. It is an ETL framework with native support for transforming RDF data. UnifiedViews allows users to define, execute, monitor, debug, schedule, and share data transformation tasks.

UnifiedViews was originally developed as a student project at Charles University in Prague and is now maintained by Semantica.cz, Czech Republic, Semantic Web Company, Austria, and EEA, Slovak Republic.

UnifiedViews allows users to define and adjust data processing tasks (pipelines) using a graphical user interface (see Figure below); the core components of every data processing task are data processing units (DPUs). DPUs may be drag&dropped on the canvas where the data processing task is constructed. Data flow between two DPUs is denoted as an edge on the canvas; a label on the edge clarifies which outputs of a DPU are mapped to which inputs of another DPU. UnifiedViews natively supports exchange of RDF data between DPUs; apart from that, files may be exchanged between DPUs.

[Figure: unifiedViews-ui.png]

UnifiedViews takes care of task scheduling. Users can plan executions of data processing tasks (e.g., tasks are executed at a certain time of the day) or they can start data processing tasks manually. UnifiedViews scheduler ensures that DPUs are executed in the proper order, so that all DPUs have proper required inputs when being launched.

A user may configure UnifiedViews to send notifications about errors in task executions; the user may also get daily summaries of the executed tasks.

To simplify the process of defining data processing tasks and to help users analyze errors during task executions, UnifiedViews provides debugging capabilities: users may browse and query (using the SPARQL query language) the RDF inputs to and RDF outputs from any DPU.
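For example, a user debugging a pipeline might run a query like the following against a DPU's RDF output; the query is a generic illustration, not tied to any concrete dataset:

```sparql
# Example debugging query over a DPU's RDF output:
# how many resources of each type did the DPU produce?
SELECT ?type (COUNT(?s) AS ?count)
WHERE { ?s a ?type . }
GROUP BY ?type
ORDER BY DESC(?count)
```

An unexpectedly low count for a type, or a missing type altogether, quickly points to the DPU whose transformation went wrong.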

UnifiedViews framework also allows users to create custom plugins - data processing units (DPUs). Users can also share DPUs with others together with their configurations or use DPUs provided by others.

The technical structure and licensing of UnifiedViews allow DPUs to be licensed not just as open source, but also under a proprietary license. This is a planned feature of the tool, needed by use cases where commercial exploitation is required. ODN will support the same commercial use cases.

5.1.1.1. UnifiedViews components and dependencies

The figure below depicts the current Maven modules in UnifiedViews and their dependencies. Modules in the yellow box are visible to DPU developers. The most important modules are:

  • frontend - Management GUI of UnifiedViews

  • backend - Engine running the data transformation tasks

  • commons-app - DAO & Services module, which is common to frontend and backend modules; it is used to store configuration for pipelines, DPUs, pipeline executions etc.

  • dataunit-rdf, dataunit-file - Modules with interfaces for data units; DPU developers writing new DPUs use these modules to read data from input data units and write data to output data units

[Figure: uv-ComponentModel.png]

 

5.1.2. Structure of the ODN/UnifiedViews and its context

[Figure: odn-uv-structure.png]

 

ODN/UnifiedViews comprises the following important components:

  • DAO & Services - used to access the database where the configuration of ETL tasks and their executions is stored (realized by the commons-app module in Figure XX from section 5.1.1.1)

  • HTTP REST Transformation API - services from the DAO & Services layer exposed as HTTP REST methods. Used by the ODN/Management module (this component is not realized by any module in Figure XX)

  • Data Processing Engine - Robust engine running the manually launched or scheduled transformation tasks - transformations may include data cleansing, linking, integration, quality assessment (realized by “backend” module in Figure XX)

  • Management GUI - GUI used to manage the configuration of pipelines, debugging executions, etc. (realized by “frontend” module in Figure XX)

 

5.1.3. Interaction with other modules

1. ODN/UnifiedViews loads the transformed data to ODN/Storage. Special DPUs - an RDF data mart loader and a Tabular data mart loader - must be provided to load transformed data into the corresponding data store in ODN/Storage. The data must be stored there together with metadata, so that the ODN/Publication module knows which resources (tables, graphs) are associated with which pipeline/dataset.
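The resource-to-dataset/pipeline association described above might be kept as a small metadata record, sketched here in Python; the field names are assumptions, not the actual ODN/Storage schema:

```python
# Sketch of the metadata record associating a stored resource (table
# or RDF graph) with its dataset and pipeline. Field names are assumed.
from dataclasses import dataclass

@dataclass(frozen=True)
class ResourceMetadata:
    resource_id: str   # table name or RDF graph URI
    dataset_id: str    # dataset the resource belongs to
    pipeline_id: int   # transformation pipeline that produced it

registry = {}

def register(resource_id: str, dataset_id: str, pipeline_id: int) -> None:
    # ODN/Publication can later look up which resources belong to a dataset
    registry[resource_id] = ResourceMetadata(resource_id, dataset_id, pipeline_id)

register("http://odn.example.org/graphs/budget-2014/7", "budget-2014", 42)
```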

2. ODN/UnifiedViews will provide a RESTful management API, which will be used by ODN/Management to:

  • create a new data transformation task (pipeline)

  • configure an existing pipeline and get its configuration

  • delete a pipeline

  • execute a pipeline

  • schedule a pipeline

An excerpt of the methods that will be available to ODN/Management in a RESTful format is depicted below:

[Figure: odn-uv-HTTP REST Transformation API.png]
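Since the excerpt itself is an image, the sketch below only illustrates the style of such a RESTful API from a client's perspective; all endpoint paths and payloads are hypothetical and do not reproduce the actual method list.

```python
# Hypothetical client-side view of the management API. Endpoint paths
# and payloads are invented for illustration only.
import json
from urllib import request

BASE = "http://odn.example.org/unifiedviews/api"

def build_request(method: str, path: str, payload=None) -> request.Request:
    req = request.Request(BASE + path, method=method)
    req.add_header("Content-Type", "application/json")
    if payload is not None:
        req.data = json.dumps(payload).encode("utf-8")
    return req  # would be sent with urllib.request.urlopen(req)

create   = build_request("POST",   "/pipelines", {"name": "demo-pipeline"})
execute  = build_request("POST",   "/pipelines/42/executions")
schedule = build_request("POST",   "/pipelines/42/schedules", {"cron": "0 3 * * *"})
delete   = build_request("DELETE", "/pipelines/42")
```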

 

3.

...

  • show the pipeline detail in an expert mode (user may drag&drop DPUs, fine-tune pipeline configuration)

  • show the detailed results of pipeline executions (browse events/logs)

  • debug data being passed between DPUs

  • have an access to advanced scheduling options

...

5.2. Module ODN/Storage

...

The purpose of this module is to store the transformed data produced by ODN/UnifiedViews. ODN/Publication module uses ODN/Storage to get the transformed data, so that it can be published - provided to data consumers.

 

5.2.1. Structure of the ODN/Storage and its context

[Figure: odn-storage-structure.png]

Two important components of ODN/Storage are:

  • RDBMS data mart

  • RDF data mart

5.2.1.1. RDBMS data mart

The RDBMS data mart is a tabular data store where data is stored when the data publisher wants to prepare CSV dumps of the published dataset or provide a REST API for data consumers.

ODN/Storage will use an SQL relational database (such as MySQL, PostgreSQL, etc.) for storing tabular data.

...

ODN/

...

Publication

...

To support the above feature, data being stored in the RDBMS data mart must be associated with metadata holding, for every table, at least:

  • to which dataset the table belongs

  • which transformation pipeline produced the table
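One possible way to keep this association in the RDBMS data mart is a metadata table alongside the data tables; the schema below is purely illustrative, not the actual ODN/Storage schema:

```sql
-- Hypothetical metadata table kept next to the data tables; table
-- and column names are illustrative assumptions.
CREATE TABLE odn_table_metadata (
    table_name  VARCHAR(128) PRIMARY KEY,  -- data table in the mart
    dataset_id  VARCHAR(128) NOT NULL,     -- dataset the table belongs to
    pipeline_id INTEGER      NOT NULL      -- pipeline that produced it
);
```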

...

5.2.1.2. RDF data mart

Data is stored in the RDF data mart when the data publisher wants to prepare RDF dumps of the published dataset for data consumers or provide a SPARQL endpoint on top of the published dataset.

Every transformation pipeline can contain one or more RDF data mart loaders - DPUs which load the data resulting from the transformation pipeline into the RDF data mart. Every RDF data mart loader loads data into a single RDF graph. An RDF graph represents a context for RDF triples; a graph is a collection of RDF triples produced by one RDF data mart loader. The name of the RDF graph is prepared by ODN/UnifiedViews and is based on the dataset ID and the ID of the RDF data mart loader DPU.
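A possible shape of that naming rule, sketched in Python; the URI template is an assumption, and only the dependence on the dataset ID and the loader DPU ID comes from the text above:

```python
# Sketch of the graph-naming rule: the graph URI is derived from the
# dataset ID and the ID of the loader DPU. The URI template is assumed.
def rdf_graph_name(dataset_id: str, loader_dpu_id: int,
                   base: str = "http://odn.example.org/resource/") -> str:
    return f"{base}{dataset_id}/graph/{loader_dpu_id}"

name = rdf_graph_name("budget-2014", 7)
```

Deriving the name deterministically from the two IDs means a re-run of the same loader overwrites its own graph rather than creating a duplicate.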

Since every published dataset may require more than one transformation pipeline, and not all results of every transformation pipeline should be published by the ODN/Publication module, the data publisher may decide which RDF graphs should be published by (1) manually specifying all the graphs to be published, or (2) specifying that the results of a certain transformation pipeline should be published.

To support the above feature, data being stored in the RDF data mart must be associated with metadata holding, for every RDF graph, at least:

  • to which dataset the graph belongs

  • which transformation pipeline produced the graph

Note: Currently, UnifiedViews supports OpenLink Virtuoso (http://virtuoso.openlinksw.com/) and Sesame (http://www.openrdf.org/) as RDF data mart implementations. As part of ODN, we will employ the SAIL API to add support for a wider range of triplestores. Testing and validation will be done based on feedback from users.

5.2.2. Interaction with other modules

1. Every transformation pipeline (ODN/UnifiedViews) can contain one or more RDF/RDBMS data mart loaders - DPUs which load the data resulting from the transformation pipeline into the corresponding data mart (RDF/RDBMS).

2. ODN/Storage notifies ODN/Publication about changes which happened (dataset updates, etc.) so that ODN/Publication can adapt to the changes.

3. ODN/Publication uses the data marts to get the required graphs/tables to be published (exported as RDF/CSV dumps, made available via a REST API/SPARQL endpoint). ODN/Publication selects the relevant graphs/tables based on the data publisher's preferences and the metadata associated with the tables/graphs.

4. ODN/Management may query ODN/Storage to get statistics about the stored data, at least:

  • How many RDF graphs/tables are stored in the RDF/RDBMS data mart in total/for a given dataset ID?

  • How many RDF triples are stored in a given RDF graph in the RDF data mart?

  • How many records are in a given table in the RDBMS data mart?
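These statistics map naturally onto simple counting queries. The helpers below only build illustrative SPARQL/SQL query strings; the metadata predicate is invented, and the actual ODN/Management-ODN/Storage interface is not specified here.

```python
# Illustrative query strings for the statistics listed above; the
# metadata predicate and URI are assumptions, not the ODN schema.
def graph_count_query(dataset_id: str) -> str:
    # SPARQL: number of RDF graphs recorded for one dataset
    return (
        "SELECT (COUNT(DISTINCT ?g) AS ?graphs) WHERE { "
        f'?g <http://odn.example.org/ns/belongsToDataset> "{dataset_id}" }}'
    )

def triple_count_query(graph_uri: str) -> str:
    # SPARQL: number of triples in one named graph
    return (
        f"SELECT (COUNT(*) AS ?triples) "
        f"WHERE {{ GRAPH <{graph_uri}> {{ ?s ?p ?o }} }}"
    )

def record_count_query(table: str) -> str:
    # SQL: number of records in one data mart table
    return f"SELECT COUNT(*) FROM {table}"
```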

 

5.3. Module ODN/Publication

The module responsible for publishing data via REST APIs, a SPARQL endpoint, or as data dumps in RDF or CSV formats. The published data has already been transformed as defined by the data transformation pipelines in ODN/UnifiedViews and stored in ODN/Storage.

...

Note: As part of data publication, some metadata will be published by this module too (for example, "Last Modification Time" will be included in the appropriate HTTP header of the response). But publication of metadata is mainly the responsibility of ODN/Catalog (see section 5.5).

5.3.2. File dumps

...

Finally, a new entry in the Atom feed (http://en.wikipedia.org/wiki/Atom_(standard)) associated with the processed dataset is created; such a feed points data consumers to the file(s) in the git repository where the published data and metadata are. The feed must be reachable from the dataset record in the ODN/Catalog module.
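A minimal Atom entry of the kind described might be built like this; all URLs, IDs, and the exact entry layout are illustrative assumptions:

```python
# Minimal sketch of an Atom entry pointing data consumers at a new dump
# in the git repository. All URLs and IDs are illustrative.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

def dump_entry(dataset_id: str, dump_url: str, updated: str) -> ET.Element:
    entry = ET.Element(f"{{{ATOM}}}entry")
    ET.SubElement(entry, f"{{{ATOM}}}id").text = f"urn:odn:dump:{dataset_id}"
    ET.SubElement(entry, f"{{{ATOM}}}title").text = f"New dump of {dataset_id}"
    ET.SubElement(entry, f"{{{ATOM}}}updated").text = updated
    ET.SubElement(entry, f"{{{ATOM}}}link", href=dump_url, rel="enclosure")
    return entry

entry = dump_entry(
    "budget-2014",
    "http://odn.example.org/dumps/budget-2014.csv",
    "2014-06-01T12:00:00Z",
)
```

A feed reader subscribed to the dataset's feed then learns about each new dump without polling the git repository itself.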

5.3.2.1. RDF dumps

...

4. Data consumers may (1) download CSV/RDF dumps, (2) use SPARQL endpoints, (3) use REST APIs.

5.4. Module ODN/InternalCatalog

Before introducing the ODN/InternalCatalog module, the general concept of a data catalog is introduced.

5.4.1. Data Catalog

A data catalog holds metadata about each published dataset. It allows its users to browse/search the list of datasets and to see the metadata of every published dataset. A screenshot of a sample data catalog provided by data.gov.uk is shown below.

[Figure: odn-catalog.png]

There are already available solutions implementing data catalog functionality, such as CKAN and DKAN.

5.4.1.1. Comparison of CKAN/DKAN

CKAN (http://ckan.org/features/) is a powerful data management system that makes data accessible – by providing tools to streamline publishing, sharing, finding and using data. CKAN is aimed at data publishers (national and regional governments, companies and organizations) wanting to make their data open and available. Note: We may also consider Etalab (https://github.com/etalab), a fork of CKAN.

DKAN (http://nucivic.com/dkan/, https://drupal.org/project/dkan) is an open source data platform with a full suite of cataloging, publishing and visualization features that allows governments, non-profits and universities to easily publish data to the public.

The following table compares CKAN and DKAN:

| Aspects & Features | CKAN | DKAN |
| --- | --- | --- |
| open source & extendable | Yes | Yes |
| primary language | Python | PHP |
| platform | Pylons (Python framework), http://www.pylonsproject.org/ | Drupal, https://drupal.org/ |
| databases supported | PostgreSQL | MySQL, PostgreSQL, SQL Server, or Oracle |
| data import via API | Yes | Yes |
| publish data and metadata | Yes | Yes |
| support for DCAT/DCAT-AP | Not complete | Yes |
| customized metadata fields | Yes | Yes |
| versioning dataset records | Yes | Yes |
| possibility to visualize data | Yes | Yes |
| themable | Yes | Yes |
| statistics and usage metrics for datasets | Yes | Yes |
| extensions | CKAN extensions | Drupal modules |
 

 

When comparing CKAN and DKAN, the main difference is that DKAN is implemented on top of Drupal and CKAN on top of Pylons. Furthermore, DKAN supports a DCAT-compliant format for expressing datasets' metadata. DCAT (http://www.w3.org/TR/vocab-dcat/) is an RDF vocabulary designed to facilitate interoperability between data catalogs published on the Web.
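For illustration, a minimal DCAT description of a single dataset with one CSV distribution could look like this; all identifiers are made up:

```turtle
@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix dct:  <http://purl.org/dc/terms/> .

<http://odn.example.org/dataset/budget-2014>
    a dcat:Dataset ;
    dct:title "City budget 2014" ;
    dcat:distribution [
        a dcat:Distribution ;
        dcat:downloadURL <http://odn.example.org/dumps/budget-2014.csv> ;
        dct:format "text/csv"
    ] .
```

Because the vocabulary is shared, any DCAT-aware catalog can harvest such a description regardless of which catalog software produced it.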

Although there is an extension to CKAN which allows CKAN to expose and consume metadata from other catalogs using documents serialized in the DCAT format (https://github.com/ckan/ckanext-dcat), we decided to use DKAN for the functionality of the dataset catalog.

 

Sources of the comparison:

http://ckan.org/features/

http://nucivic.com/dkan/, https://drupal.org/project/dkan

http://docs.getdkan.com/dkan-documentation/dkan-features/comparing-dkan-and-ckan

http://docs.ckan.org/en/latest/api/index.html#example-importing-datasets-with-the-ckan-api

CKAN: extensions https://github.com/ckan/ckan/wiki/List-of-extensions

 

5.4.2. Data Catalog in ODN/InternalCatalog

Module ODN/InternalCatalog is the first of the two modules which encapsulate the functionality of a data catalog. The data catalog provided by the ODN/InternalCatalog module is used to manage the datasets which should be transformed/published by ODN; it also allows data publishers to see details about the transformation/publishing process. It is an internal catalog; thus, it is not visible to the public, and only the data publisher/data administrator can use it.

The ODN/InternalCatalog module internally uses DKAN. Nevertheless, DKAN must be extended so that it provides more data about the datasets being transformed and published; in particular, ODN/InternalCatalog must be able to:

  • depict the data processing pipeline, which is associated with the transformed & published dataset

  • run data transformation/publishing from the catalog UI

  • provide brief information about the status of the dataset transformation

  • provide a link to the ODN/Publication module's configuration dialog which configures how the dataset in the catalog is published

5.4.3. Interaction with other modules

ODN/InternalCatalog is used by ODN/Management to hold and present metadata about the datasets being transformed/published by the data publisher. On request, ODN/InternalCatalog publishes its records about already published datasets to the ODN/Catalog module.

 

5.5. Module ODN/Catalog

ODN/Catalog is the second module which encapsulates the functionality of a data catalog. ODN/Catalog holds metadata about each dataset published by ODN. This data catalog is publicly visible; its primary users are data consumers, who may browse/search the published datasets' metadata. A data consumer may also get a link to a dataset's dump or API, so that they can consume the data in the dataset.

Every time a dataset is published by the ODN/Publication module, it may also be published to the data catalog (module ODN/Catalog). Data is exported to ODN/Catalog from ODN/InternalCatalog either automatically, as new data is published by the ODN/Publication module, or manually on the request of the data publisher/data administrator. The catalog in the ODN/Catalog module must contain references to the Atom feeds, so that dumps of the datasets and the associated metadata may be downloaded; the catalog also has to provide a link to the REST API and SPARQL endpoint associated with the dataset.

Module ODN/Catalog internally uses the same tool as ODN/InternalCatalog, i.e. DKAN, to provide the core data catalog functionality.

5.5.1. Interaction with other modules

This module is used by ODN/Management to create a new record, or adjust an existing record, in ODN/Catalog when a dataset is transformed by ODN/UnifiedViews and published by the ODN/Publication module. The record in ODN/Catalog is built based on the metadata in ODN/InternalCatalog and on information about the location of REST APIs, Atom feeds referring to data dumps, etc., provided by the ODN/Publication module.

 

5.6. Module ODN/Management

The module responsible for managing the process of dataset transformation and publication. The diagram below shows the interaction of ODN modules when a dataset is published, in the case when publication is launched manually; however, publication may also be scheduled by ODN, so that it runs at certain times (e.g., every month).

 

[Figure: odn-management-publication-seq.png]
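The manual publication flow can also be summarized as the following rough sketch; each step is a stand-in for a module interaction described in this chapter, not actual ODN code:

```python
# Rough sketch of the manual publication flow across ODN modules.
# Every log line stands in for one module interaction.
def publish_dataset(dataset_id: str, log: list) -> None:
    log.append(f"ODN/Management: trigger pipeline execution for {dataset_id}")
    log.append("ODN/UnifiedViews: run transformation, load results to ODN/Storage")
    log.append("ODN/Storage: notify ODN/Publication about the changed data")
    log.append("ODN/Publication: expose dumps / REST API / SPARQL endpoint")
    log.append(f"ODN/Management: create/update ODN/Catalog record for {dataset_id}")

events = []
publish_dataset("budget-2014", events)
```

The scheduled case differs only in the first step: the trigger comes from the scheduler instead of a manual user action.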

5.6.1. Wizard for preparing the transformation task

ODN/UnifiedViews provides a standard dialog for editing a data transformation pipeline. Further, ODN/Management provides a wizard (for inexperienced users) to prepare the transformation task. The wizard should be implemented by ODN/Management, using the ODN/UnifiedViews HTTP REST Transformation API to interact with transformation pipelines.

5.6.2. Structure of ODN/Management and its context

[Figure: odn-management-structure.png]

5.6.3. Interaction with other modules

...
