Triply API¶
Each Triply instance has a fully RESTful API. All functionality, from managing the Triply instance to working with your data, is available through the API. This document describes the general setup of the API; contact support@triply.cc for more information.
For a complete, interactive API reference with all endpoints, request/response schemas, and try-it-out functionality, see the TriplyDB OpenAPI documentation. This page provides guides, conceptual explanations, and documentation for features not covered by the OpenAPI specification.
Authentication¶
When a dataset is published publicly, most read operations on that dataset can be performed without authentication.
Write operations and read operations on datasets that are published internally or privately require authentication.
Creating an API token¶
Authentication is implemented through API tokens. Follow the steps in the API Token guide to create a new API token in the TriplyDB UI.
Using the API token¶
API tokens are used by specifying them in an HTTP request header as follows:
Authorization: Bearer TOKEN
In the above, TOKEN should be replaced by your personal API token (a
lengthy sequence of characters). See Creating an API token for
information on how to create an API token.
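As a minimal sketch, the following Python snippet (standard library only) attaches the token to an HTTP request. The dataset URL is the public Pokémon example used later in this document, and the token value is a placeholder:

```python
import os
import urllib.request

# Placeholder values: substitute your own instance URL and API token.
api_url = "https://api.triplydb.com/datasets/academy/pokemon/"
token = os.environ.get("TRIPLYDB_TOKEN", "TOKEN")

# Attach the token in the Authorization header.
request = urllib.request.Request(
    api_url,
    headers={"Authorization": f"Bearer {token}"},
)

# The header is now set on the request object; sending it with
# urllib.request.urlopen(request) would perform the authenticated call.
print(request.get_header("Authorization"))
```

Reading the token from an environment variable (rather than hard-coding it) also helps with the security considerations above.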
Important Security Considerations¶
- **Do Not Commit Your Token to a Git Repository**: Under no circumstances should you commit your TriplyDB token to a Git repository. This practice is not allowed according to our ISO standards.
- **Do Not Share Your Token**: Avoid sharing your TriplyDB token with anyone who should not have access to your TriplyDB resources. Tokens should be treated as sensitive information and shared only with trusted parties.
- **Change Tokens Regularly**: To enhance security, regularly generate a new token to replace the existing one, especially if you suspect any compromise.
Exporting linked data¶
Every TriplyDB API path that returns linked data provides a number of serializations to choose from. We support the following serializations:
| Serialization | Media type | File extension |
|---|---|---|
| TriG | `application/trig` | `.trig` |
| N-Triples | `application/n-triples` | `.nt` |
| N-Quads | `application/n-quads` | `.nq` |
| Turtle | `text/turtle` | `.ttl` |
| JSON-LD | `application/ld+json` | `.jsonld` |
To request a serialization, use one of the following mechanisms:
- Add an `Accept` header to the request. E.g. `Accept: application/n-triples`
- Add the extension to the URL path. E.g. https://api.triplydb.com/datasets/Triply/iris/download.nt
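The two mechanisms can be sketched in Python; the download URL is the iris example from above, and the media types come from the table:

```python
from urllib.request import Request

# Media types and extensions from the serialization table above.
SERIALIZATIONS = {
    "trig": "application/trig",
    "nt": "application/n-triples",
    "nq": "application/n-quads",
    "ttl": "text/turtle",
    "jsonld": "application/ld+json",
}

download_url = "https://api.triplydb.com/datasets/Triply/iris/download"

# Mechanism 1: request N-Triples via the Accept header.
by_header = Request(download_url, headers={"Accept": SERIALIZATIONS["nt"]})

# Mechanism 2: request N-Triples via the file extension.
by_extension = download_url + ".nt"

print(by_header.get_header("Accept"))
print(by_extension)
```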
Datasets¶
Triply API requests are always directed towards a specific URI path. URI paths will often have the following form:
https://api.INSTANCE/datasets/ACCOUNT/DATASET/
Upper-case letter words must be replaced by the following values:
- `INSTANCE`: The host name of the TriplyDB instance that you want to use.
- `ACCOUNT`: The name of a specific user or a specific organization.
- `DATASET`: The name of a specific dataset.
Here is an example of a URI path that points to the Triply API for the Pokémon dataset:
https://api.triplydb.com/datasets/academy/pokemon/
Create a dataset¶
See the OpenAPI documentation for creating and managing datasets via the API.
Upload linked data¶
See the OpenAPI documentation for uploading linked data. Note: the simple upload API route only supports uploads less than 5MB. To upload more data, use:
- TriplyDB-JS: See the `importFrom*` methods under the `Dataset` class.
- TriplyDB Command-line Interface
Upload assets¶
See the OpenAPI documentation for uploading and managing assets. Note: the simple upload API route only supports uploads less than 5MB. To upload more data, use:
- TriplyDB-JS: See the `uploadAsset` methods under the `Dataset` class.
- TriplyDB Command-line Interface
Accounts¶
See the OpenAPI documentation for account management endpoints.
Queries¶
TriplyDB allows users to save SPARQL queries. The metadata for all saved queries can be accessed as follows:
https://api.triplydb.com/queries
By adding an account name (for example: 'Triply'), metadata for all saved queries for that account can be accessed as follows:
https://api.triplydb.com/queries/Triply
By adding an account name and a query name (for example: 'Triply/flower-length'), metadata for one specific saved query can be accessed as follows:
https://api.triplydb.com/queries/Triply/flower-length
Query metadata (GRLC)¶
You can retrieve a text-based version of each query by requesting the text/plain content type:
curl -vL -H 'Accept: text/plain' 'https://api.triplydb.com/queries/JD/pokemonNetwork'
This returns the query string, together with metadata annotations. These metadata annotations use the GRLC format. For example:
#+ description: This query shows a small subgraph from the Pokemon dataset.
#+ endpoint: https://api.triplydb.com/datasets/academy/pokemon/services/pokemon/sparql
#+ endpoint_in_url: false
construct where { ?s ?p ?o. }
limit 100
Notice that the GRLC annotations are encoded in SPARQL comments, i.e. lines that start with the hash character (#). This makes the result immediately usable as a SPARQL query.
The above example includes the following GRLC annotations:
- `description`: Gives a human-readable description of the meaning of the query. This typically includes an explanation of the purpose or goal for which this query is used, the content returned, or the process or task in which this query is used.
- `endpoint`: The URL of the SPARQL endpoint where queries are sent to.
- `endpoint_in_url`: Configures whether the URL of the SPARQL endpoint should be specified through the API. In TriplyDB, this configuration is set to `false` by default. (Users of the RESTful API typically expect domain parameters such as `countryName` or `maximumAge`, but they do not necessarily expect technical parameters like an endpoint URL.)
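Because GRLC annotations are plain SPARQL comments of the form `#+ key: value`, they are easy to parse. A minimal sketch (not an official Triply tool), run against the example query above:

```python
def parse_grlc(text):
    """Split a saved-query text into GRLC annotations and the bare query string.

    GRLC annotations are SPARQL comment lines of the form `#+ key: value`.
    """
    annotations = {}
    query_lines = []
    for line in text.splitlines():
        if line.startswith("#+"):
            # Split on the first colon only, so colons in URLs survive.
            key, _, value = line[2:].partition(":")
            annotations[key.strip()] = value.strip()
        else:
            query_lines.append(line)
    return annotations, "\n".join(query_lines).strip()

example = """\
#+ description: This query shows a small subgraph from the Pokemon dataset.
#+ endpoint: https://api.triplydb.com/datasets/academy/pokemon/services/pokemon/sparql
#+ endpoint_in_url: false
construct where { ?s ?p ?o. }
limit 100"""

meta, query = parse_grlc(example)
print(meta["endpoint_in_url"])  # false
print(query.splitlines()[0])    # construct where { ?s ?p ?o. }
```

Stripping the annotation lines is optional: since they are valid SPARQL comments, the full text can also be sent to an endpoint as-is.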
LD Browser API¶
Triply APIs provide a convenient way to access the data used by LD Browser, which offers a comprehensive overview of a specific IRI. By querying the Triply API for a specific IRI, you can retrieve the associated 'document' in the .nt format that describes the IRI.
To make an API request for a specific instance, you can use the following URI path:
https://api.triplydb.com/datasets/ACCOUNT/DATASET/describe.nt?resource=RESOURCE
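Since RESOURCE is an IRI, it must be URL-encoded when placed in the query string. A sketch using the Pokémon dataset and the Mew IRI from the search example later in this document:

```python
from urllib.parse import urlencode

base = "https://api.triplydb.com/datasets/academy/pokemon/describe.nt"
resource = "https://triply.cc/academy/pokemon/id/pokemon/mew"

# urlencode percent-encodes the IRI so it can safely travel in the query string.
url = f"{base}?{urlencode({'resource': resource})}"
print(url)
```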
Exporting data¶
To export the linked data, use the following path:
https://api.INSTANCE/datasets/ACCOUNT/DATASET/download
Query parameters¶
By default, an export includes all linked data graphs. Use a query parameter to export a specific graph.
| Key | Value | Purpose |
|---|---|---|
| `graph` | A URL-encoded IRI. | Only download the export of the given graph IRI. |
Therefore, to export the linked data of a specific graph, use the following path:
https://api.INSTANCE/datasets/ACCOUNT/DATASET/download?graph=GRAPH
To find out which graphs are available, use the following path:
https://api.INSTANCE/datasets/ACCOUNT/DATASET/graphs
Example requests¶
Export a dataset:
curl 'https://api.triplydb.com/datasets/academy/pokemon/download' \
-H 'Accept: application/trig' > exportDataset.trig.gz
Export a graph:
First, find out which graphs are available:
curl 'https://api.triplydb.com/datasets/academy/pokemon/graphs'
Then, download one of the graphs:
curl 'https://api.triplydb.com/datasets/academy/pokemon/download?graph=https://triplydb.com/academy/pokemon/graphs/data' -H 'Accept: application/trig' > exportGraph.trig.gz
Services¶
Some API requests require the availability of a specific service over the dataset. These requests are directed towards a URI path of the following form:
https://api.INSTANCE/datasets/ACCOUNT/DATASET/services/SERVICE/
Upper-case letter words must be replaced by the following values:

- `SERVICE`: The name of a specific service that has been started for the corresponding dataset.
- See the previous section on Datasets to learn the meaning of `INSTANCE`, `ACCOUNT`, and `DATASET`.
Here is an example of a URI path that points to a SPARQL endpoint over the Pokémon dataset:
https://api.triplydb.com/datasets/academy/pokemon/services/pokemon/
See the following sections for more information on how to query the endpoints provided by services:
See the OpenAPI documentation for creating, synchronizing, and managing services.
SPARQL¶
There are two service types in TriplyDB that expose the SPARQL 1.1 Query Language: "Sparql" and "Jena". The former works well for large quantities of instance data with a relatively small data model; the latter works well for smaller quantities of data with a richer data model.
SPARQL services expose a generic endpoint URI at the following location (where ACCOUNT, DATASET and SERVICE are user-chosen names):
https://api.triplydb.com/datasets/ACCOUNT/DATASET/services/SERVICE/sparql
Everybody who has access to the dataset also has access to its services, including its SPARQL services:
- For Public datasets, everybody on the Internet or Intranet can issue queries.
- For Internal datasets, only users that are logged into the triple store can issue queries.
- For Private datasets, only users that are logged into the triple store and are members of ACCOUNT can issue queries.
Notice that for professional use it is easier and better to use saved queries. Saved queries have persistent URIs, descriptive metadata, versioning, and support for reliable large-scale pagination (see how to use pagination with saved query API). Still, if you do not have a saved query at your disposal and want to perform a custom SPARQL request against an accessible endpoint, you can do so. TriplyDB implements the SPARQL 1.1 Query Protocol standard for this purpose.
Sending a SPARQL Query request¶
According to the SPARQL 1.1 Protocol, queries can be sent in the three different ways displayed in Table 1. For small query strings it is possible to send an HTTP GET request (row 1 in Table 1). A benefit of this approach is that all information is stored in one URI; for public data, copy/pasting this URI into a web browser runs the query. For larger query strings it is required to send an HTTP POST request (rows 2 and 3 in Table 1), because longer query strings result in longer URIs when following the HTTP GET approach. Some applications do not support longer URIs, or even silently truncate them, resulting in an error down the line. The direct POST approach (row 3 in Table 1) is the best of these three variants, since it most clearly communicates that it is sending a SPARQL query request (see the Content-Type column).
| | HTTP Method | Query String Parameters | Request Content-Type | Request Message Body |
|---|---|---|---|---|
| query via GET | GET | `query` (exactly 1), `default-graph-uri` (0 or more), `named-graph-uri` (0 or more) | none | none |
| query via URL-encoded POST | POST | none | `application/x-www-form-urlencoded` | URL-encoded, ampersand-separated query parameters: `query` (exactly 1), `default-graph-uri` (0 or more), `named-graph-uri` (0 or more) |
| query via POST directly | POST | `default-graph-uri` (0 or more), `named-graph-uri` (0 or more) | `application/sparql-query` | Unencoded SPARQL query string |
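The three variants in Table 1 can be sketched as follows. The requests are only constructed, not sent; the endpoint URL is the Pokémon example from above:

```python
from urllib.parse import urlencode
from urllib.request import Request

endpoint = "https://api.triplydb.com/datasets/academy/pokemon/services/pokemon/sparql"
query = "select * where { ?s ?p ?o } limit 1"

# Row 1: query via GET -- the query travels URL-encoded in the query string.
get_request = Request(f"{endpoint}?{urlencode({'query': query})}")

# Row 2: query via URL-encoded POST -- the query travels in a form-encoded body.
form_request = Request(
    endpoint,
    data=urlencode({"query": query}).encode(),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)

# Row 3: query via POST directly -- the unencoded query string is the body.
direct_request = Request(
    endpoint,
    data=query.encode(),
    headers={"Content-Type": "application/sparql-query"},
)

print(get_request.get_method())     # GET
print(direct_request.get_method())  # POST
```

Each request object could then be sent with `urllib.request.urlopen(...)`; any HTTP client supports the same three shapes.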
SPARQL Query result formats¶
SPARQL services are able to return results in different formats. The user can specify the preferred format by specifying the corresponding Media Type in the HTTP Accept header. TriplyDB supports the Media Types in the following table. Notice that the chosen result format must be supported for your query form. Alternatively, it is possible to specify the requested format as a URI path suffix (e.g., /sparql.csv).
| Result format | Media Type | Query forms | Suffix |
|---|---|---|---|
| CSV | `text/csv` | Select | `.csv` |
| JSON | `application/json` | Ask, Select | `.json` |
| JSON-LD | `application/ld+json` | Construct, Describe | `.jsonld` |
| N-Quads | `application/n-quads` | Construct, Describe | `.nq` |
| N-Triples | `application/n-triples` | Construct, Describe | `.nt` |
| SPARQL JSON | `application/sparql-results+json` | Ask, Select | `.srj` |
| SPARQL XML | `application/sparql-results+xml` | Ask, Select | `.srx` |
| TriG | `application/trig` | Construct, Describe | `.trig` |
| TSV | `text/tab-separated-values` | Select | `.tsv` |
| Turtle | `text/turtle` | Construct, Describe | `.ttl` |
For interactive examples of SPARQL requests and responses in all supported formats, see the OpenAPI documentation.
For professional use, consider using saved queries, which support pagination for large result sets.
GraphQL¶
Some TriplyDB instances publish a GraphQL endpoint for every dataset. This endpoint uses information from user-provided SHACL shapes to generate the GraphQL schema.
See the OpenAPI documentation for endpoint reference, and the GraphQL implementation guide for details on schema generation from SHACL shapes, queries, filtering, and pagination.
Elasticsearch¶
The text search API returns a list of linked data entities based on a supplied text string. The text string is matched against the text in literals and IRIs that appear in the linked data description of the returned entities.
The text search API is only available for a dataset after an Elasticsearch service has been created for that dataset.
Two types of searches can be performed: a simple search, and a custom search. Simple searches require one search term for a fuzzy match. Custom searches accept a JSON object conforming to the Elasticsearch query DSL.
URI path¶
Text search requests are sent to the following URI path:
https://api.INSTANCE/datasets/ACCOUNT/DATASET/services/SERVICE/search
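The two search types can be sketched as follows. Note the assumptions: the service name `search` in the URL is a placeholder, and the query-string parameter name `query` for simple searches is an assumption; the custom search body follows the standard Elasticsearch query DSL:

```python
import json
from urllib.parse import urlencode

# Placeholder service URL for the Pokémon dataset (service name assumed).
search_url = "https://api.triplydb.com/datasets/academy/pokemon/services/search/search"

# Simple search: one search term, fuzzy matching
# (the parameter name `query` is an assumption, not from the spec above).
simple_url = f"{search_url}?{urlencode({'query': 'mew'})}"

# Custom search: a JSON object conforming to the Elasticsearch query DSL,
# here a standard `match` query against a property IRI from the example reply.
custom_body = json.dumps({
    "query": {
        "match": {"https://triply.cc/academy/pokemon/def/name": "mew"}
    }
})

print(simple_url)
```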
Reply format¶
The reply format is a JSON object. Search results are returned in the JSON array that is stored under key sequence "hits"/"hits". The order in which search results appear in the array is meaningful: better matches appear earlier.
Every search result is represented by a JSON object. The name of the linked data entity is specified under key sequence "_id". Properties of the linked data entity are stored as IRI keys. The values of these properties appear in a JSON array in order to allow more than one object term per predicate term (as is often the case in linked data).
The following code snippet shows part of the reply for the below example request. The reply includes two results for search string “mew”, returning the Pokémon Mew (higher ranked result) and Mewtwo (lower ranked result).
{
"hits": {
"hits": [
{
"_id": "https://triply.cc/academy/pokemon/id/pokemon/mew",
"http://open.vocab.org/terms/canonicalUri": [ "http://pokedex.dataincubator.org/pokemon/151" ],
"https://triply.cc/academy/pokemon/def/baseAttack": [ 100 ],
"https://triply.cc/academy/pokemon/def/name": [ "MEW", "MEW", "MEW", "MEW", "MEW", "ミュウ" ],
…
},
{
"_id": "https://triply.cc/academy/pokemon/id/pokemon/mewtwo",
"http://open.vocab.org/terms/canonicalUri": [ "http://pokedex.dataincubator.org/pokemon/150" ],
"https://triply.cc/academy/pokemon/def/baseAttack": [ 110 ],
"https://triply.cc/academy/pokemon/def/name": [ "MEWTU", "MEWTWO", "MEWTWO", "MEWTWO", "MEWTWO", "ミュウツー" ],
…
}
]
},
…
}
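Given a reply of this shape, extracting the ranked entity names is straightforward. A sketch using a trimmed-down version of the reply above:

```python
import json

# Trimmed-down version of the example reply above.
reply = json.loads("""
{
  "hits": {
    "hits": [
      {"_id": "https://triply.cc/academy/pokemon/id/pokemon/mew"},
      {"_id": "https://triply.cc/academy/pokemon/id/pokemon/mewtwo"}
    ]
  }
}
""")

# Results under "hits"/"hits" are ordered by relevance: better matches first.
ranked_ids = [hit["_id"] for hit in reply["hits"]["hits"]]
print(ranked_ids[0])  # https://triply.cc/academy/pokemon/id/pokemon/mew
```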
For interactive examples of search, query DSL, and count API requests, see the OpenAPI documentation.
Setting up index templates for ElasticSearch¶
TriplyDB allows you to configure a custom mapping for Elasticsearch services using index templates.
Index templates¶
Index templates make it possible to create indices with user-defined configuration. A template is defined with a name pattern and some configuration. If the name of a new index matches the template's naming pattern, the index is created with the configuration defined in the template. The official Elasticsearch documentation on how to use index templates can be found here.
Index templates on TriplyDB can be configured through either TriplyDB API or TriplyDB-JS.
An index template can be created by making a POST request to the following URL:
https://api.INSTANCE/datasets/ACCOUNT/DATASET/services/
with this body:
{
"type": "elasticSearch",
"name": "SERVICE_NAME",
"config": {
"indexTemplates": [
{
"index_patterns": "index",
"name": "TEMPLATE_NAME",
...
}
]
}
}
`index_patterns` and `name` are obligatory fields in the body of the index template.
It is important that every index template has the field `index_patterns` set to `"index"`!
Below is an example of the POST request:
curl -H "Authorization: Bearer TRIPLYDB_TOKEN" -H "Content-Type: application/json" -d '{"type":"elasticSearch","name":"SERVICE_NAME","config":{"indexTemplates":[{"index_patterns":"index", "name": "TEMPLATE_NAME"}]}}' -X POST "https://api.INSTANCE/datasets/ACCOUNT/DATASET/services/"
Component templates¶
Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases. You can find the official documentation on their use in ElasticSearch here. They can be configured through either TriplyDB API or TriplyDB-JS.
A component template can be created by making a POST request to the following URL:
https://api.INSTANCE/datasets/ACCOUNT/DATASET/services/
with this body:
{
"type": "elasticSearch",
"name": "SERVICE_NAME",
"config": {
"componentTemplates": [
{
"name": "TEMPLATE_NAME",
"template": {
"mappings": {
"properties": {
...
}
}
}
...
}
]
}
}
`name` and `template` are obligatory fields in the body of the component template.
A component template can only be created together with an index template. In this case, the index template needs to contain the field `composed_of` with the name of the component template.
Below is an example of a POST request to create a component template for the property https://schema.org/dateCreated to be of type date.
curl -H "Authorization: Bearer TRIPLYDB_TOKEN" -H "Content-Type: application/json" -d '{"type":"elasticSearch","name":"SERVICE_NAME","config":{"indexTemplates":[{"index_patterns":"index", "name": "INDEX_TEMPLATE_NAME","composed_of":["COMPONENT_TEMPLATE_NAME"]}], "componentTemplates":[{"name":"COMPONENT_TEMPLATE_NAME","template":{"mappings":{"properties":{"https://schema.org/dateCreated":{"type":"date"}}}}}]}}' -X POST "https://api.INSTANCE/datasets/ACCOUNT/DATASET/services/"