This documentation provides straightforward guidelines to integrate and access data in an RDF knowledge graph.

  • Integrate any structured data using various solutions.
  • Deploy various interfaces to consume the Knowledge Graph data.
  • Deploy a user-friendly web UI to access the integrated data.

The documented methods allow you to quickly publish structured data complying with the FAIR principles (Findable, Accessible, Interoperable, Reusable).

Create a repository#

The first step is to create a repository for your project on GitHub. This will allow you to keep track of changes to your scripts and mappings using git.

  • Add a file with basic information about your project: description, how to run it, etc.
  • Add a LICENSE file with the license information (e.g. MIT license)
  • Follow a consistent pattern to store your mapping files and scripts.
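The steps above can be sketched with a few shell commands; the project name `my-kg-project` and the dataset folder names are placeholders, not part of the documented method:

```shell
# Create the recommended folder skeleton (names below are illustrative)
mkdir -p my-kg-project/.github/workflows \
         my-kg-project/datasets/dataset1/mapping \
         my-kg-project/datasets/dataset1/scripts \
         my-kg-project/datasets/dataset1/metadata

# Placeholder files for the license and git ignore rules
touch my-kg-project/.gitignore my-kg-project/LICENSE

# Initialize version control so changes to mappings and scripts are tracked
git init my-kg-project
```

You can then add your mapping files and scripts under `datasets/<dataset-name>/` and push the repository to GitHub.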

Recommended project folder structure#

├── .github/workflows
│   ├── process-dataset1.yml
│   └── process-dataset2.yml
├── .gitignore
├── .env                    # Environment variables for docker-compose if needed
├── LICENSE
├── docker-compose.yml      # If needed
├── Dockerfile              # If needed
└── datasets                # Folders of the different datasets mappings
    └── dataset1
        ├──                 # Notes about how and where to run those mappings
        ├── mapping         # Mapping files for the dataset
        │   ├── mappings.yarrr.yml  # YARRRML mappings
        │   └── mappings.rml.ttl    # RML mappings
        ├── scripts         # Scripts to download and preprocess input files
        │   ├──             # Bash script to download data
        │   └──             # Python script for preprocessing
        └── metadata        # HCLS metadata about the dataset
            └── metadata-dataset1.ttl
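As an illustration of what could go in `mappings.yarrr.yml`, here is a minimal YARRRML mapping sketch. The prefixes, the `data.csv` source, and the `Person` mapping are all hypothetical examples, not part of this project:

```yaml
# Minimal YARRRML mapping sketch (illustrative source file and vocabulary)
prefixes:
  ex: "https://example.org/"
  schema: "https://schema.org/"

mappings:
  person:
    sources:
      - ['data.csv~csv']        # Input CSV file with `id` and `name` columns (assumed)
    s: ex:person/$(id)          # Subject IRI built from the `id` column
    po:
      - [a, schema:Person]
      - [schema:name, $(name)]
```

Such a YARRRML file is typically converted to RML (the `mappings.rml.ttl` file in the tree above) with a YARRRML parser before running an RML mapper on it.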
Last updated on by Vincent Emonet