Slight's API: Also Your API!

  • Colman Humphrey
  • Thomas de Zeeuw
Published on

Before we dive in, if you're mostly looking for links: A quickstart doc, and the full spec.

As we discuss in what differentiates us, our API is a big focus of ours. Practically every action available on Slight is also available through our API. This means you can create an app, preview a dataset, search all resources, run an app, etc., all from the API.

This means every time you create an app (either on the site or through the API) you also create an API for your app. That's what we mean when we say it's also your API. If you know SQL, with Slight you also know how to generate an API! Your parametrized queries and scripts can be called from anywhere. Importantly, this also means any other authenticated application can easily consume anything created in Slight.

For the rest of this post, we'll make use of our free public version (see our blog post for details) to showcase our API, but of course commercially you'd get your own private API.

Running an App

It's very easy to access the code required to run an app: every app shows the code under its API tab. For this blog post we'll be looking at the health of trees in New York City, which is available at this URL through the API as a JSON object.

We can then call our app from anywhere. Let's assume we want to use R. This code is a little heavy on the boilerplate for now; down the road we'll create SDKs to make this smoother, starting with R and Python.

library(httr2) # Any other HTTP library is fine.

url <- ""
req <- request(url)

# For this example we'll use a Personal Access Token (a.k.a. API token)
# for authentication (see the quickstart for details).
# Note that it's possible to run an app without authentication, but the
# number of results will be limited.
pat <- "$PAT" # Should start with 'slightpat_'.

# The values we pass for the variables.
# Note that if we don't define a value for a variable Slight uses the
# default value (as defined in the app).
variable_values <- list(
    list(
        name = "borough",
        value = "Queens"
    )
)

# If you're not yet on R 4.1, you can replace the pipe (|>) with %>%,
# or just rewrite as function calls.

# Now we run the app.
body <- req |>
    req_headers("Authorization" = paste("Bearer", pat)) |>
    req_body_json(list(variable_values = variable_values)) |>
    req_perform() |>
    resp_body_json(simplifyVector = TRUE)

# Create a dataframe with the right names.
data <-$rows) |>
    setNames(body$columns$name)
To maintain precision, all values are returned as strings from the API, so at this point data is a dataframe of all strings (character columns, in R terms). We'll convert them into the correct types here.
# One option is to loop over the columns, adjusting each.
type_conversion <- function(column_type) {
    switch(column_type,
        "integer" = return(as.integer),
        "real"    = return(as.double),
        "boolean" = return(as.logical),
        "date"    = return(as.Date)
    )
    # datetime and time are up to you, lubridate is good,
    # with e.g. `lubridate::ymd_hms` and `lubridate::hms`

    # leave the rest (text and category) as character/string
    return(identity)
}

for (i in seq_along(data)) {
    col_type <- body$columns$type[i]
    convert_column <- type_conversion(col_type)
    data[[i]] <- convert_column(data[[i]])
}

# We could instead do each column manually after looking at body$columns:
data$total_trees <- as.integer(data$total_trees)
data$healthy_tree_fraction <- as.double(data$healthy_tree_fraction)
# etc.

# or similarly with dplyr
data_correct_types <- data |>
    dplyr::mutate(total_trees = as.integer(total_trees),
                  healthy_tree_fraction = as.double(healthy_tree_fraction))
# etc.

# We'll leave further data processing up to you!
That's it: someone writes the app using SQL (potentially a big messy query); Slight creates an API that can be called right from R (or Python, or anything with an HTTP library!), and then everyone proficient in R can get to work with their data.
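For the Python side of that claim, here's a rough sketch of assembling the same run-an-app call with an HTTP library such as `requests`. The URL and token values are placeholders (the real URL comes from your app's API tab), and the helper function is ours, not part of any Slight SDK:

```python
import json

def build_app_request(url, pat, variable_values):
    """Assemble the pieces of a run-an-app POST request."""
    return {
        "url": url,
        "headers": {
            "Authorization": "Bearer " + pat,
            "Content-Type": "application/json",
        },
        "body": json.dumps({"variable_values": variable_values}),
    }

req = build_app_request(
    url="",              # paste from the app's API tab
    pat="slightpat_...", # your Personal Access Token
    variable_values=[{"name": "borough", "value": "Queens"}],
)

# To actually run the app (requires the `requests` package):
# import requests
# resp ="url"], headers=req["headers"], data=req["body"])
# body = resp.json()  # body["columns"] and body["rows"], as in the R code
```

From there, converting the string values into proper types mirrors the R loop above, e.g. with a dict mapping column types to conversion functions.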

Searching All Resources

At prior jobs, we've often pined for better search functionality for data work. That is, for datasets, and for existing work that uses datasets (queries, scripts, dashboards).

Of course we provide the classic web UI input box for searching. This is useful, but it only works in that one context. It can be nice not to have to leave your Python notebook to search for datasets. Further, if you want to do any complex searches within a search result, on the web UI you're limited to what Chrome's search box can do. You can't easily run a regex on the result set, or stem the words. Or maybe you just want a really quick list of all descriptions in the same place to read as compactly as possible. With our API, you can pull the results into your favourite coding environment for full flexibility and power.

Let's say we're looking for data on tsunamis. We could just search datasets for tsunamis with a request to the dataset search endpoint. This gives three fairly suitable datasets:

$ curl '' | jq # see

# showing a subset of the resulting fields
{
  "datasets": [
    {
      "uuid": "55aee2b0-523f-436c-a7b0-928636e22834",
      "id": "bigquery-public-data/noaa_tsunami-historical_source_event",
      "title": "NOAA Tsunami: Historical Source Events",
      "description": "Information on the source of over 2,400 tsunamis from 200 BC to the present around the globe."
    },
    {
      "uuid": "cae8223c-6c11-4cd2-8ff7-99f2134dc8f7",
      "id": "bigquery-public-data/noaa_tsunami-historical_runups",
      "title": "NOAA Tsunami: Historical Runups",
      "description": "Information on the runups of over 2,400 tsunamis from 200 BC to the present around the globe."
    },
    {
      "uuid": "6d9c6a86-6b01-485a-981e-7bb8492bd607",
      "id": "bigquery-public-data/noaa_significant_earthquakes",
      "title": "NOAA Significant Earthquakes",
      "description": "Global listing of over 5,700 severe earthquakes from 2150 BC to the present."
    }
  ]
}

That's useful, but instead let's try searching all resources. That's as easy as a GET request to the all-resources search endpoint:

$ curl '' | jq # see

# showing a subset of the resulting fields
{
  "datasets": [...], # same as above
  "apps": [
    {
      "uuid": "80553db9-1f3e-4908-b1ab-f582a596d82e",
      "id": "colman/tsunamis_per_year",
      "title": "Tsunamis per Year",
      "description": "Counts tsunamis per year in a given range. Optionally set the tsunami validity (defaults to Definite Tsunamis)"
    }
  ],
  "tags": [...],
  "users": [...]
}

Now we can see that we've not only found a few relevant datasets, but also an app about tsunamis that uses one of them. That app could be exactly what we need, leaving us no work to do; or it can serve as inspiration; or we can start our own app by using it as a template.

Previewing a Dataset

In data work we often need to check if a dataset is suitable to solve our next problem. Or if we already know that it is, we want to quickly play with it to get a sense for how we'll use it. For these cases you can get a preview of a dataset to avoid having to wrestle with the complete dataset.

Similar to running an app, we might want to quickly pull this data preview into R to play with it interactively there.


# This follows a similar structure to the previous example.

url <- ""
req <- request(url)

# In this example we're not running this preview with authentication, but if we
# wanted to, it would follow the same structure as the running example above.
body <- req |>
    req_perform() |>
    resp_body_json(simplifyVector = TRUE)
data <-$rows) |>
    setNames(body$columns$name)

# Again the column information is available in body$columns.
# We'll just change two here:
data$year <- as.integer(data$year)
data$event_validity <- as.integer(data$event_validity)

We can throw together a quick reshape and a plot.


library(ggplot2) # needed for the plot below

# reshape
definite_tsunamis_per_year <- data |>
    dplyr::filter(event_validity == 4, year >= 1940) |>
    dplyr::group_by(year) |>
    dplyr::count(name = "count")

# and plot
definite_tsunamis_per_year |>
    ggplot(aes(x = year, y = count)) +
    geom_col(colour = "black", fill = "hotpink") +
    xlab("Year") + ylab("Definite Tsunami Count")
Which gives:
[Figure: a bar chart of definite tsunami counts per year from 1940 to now, increasing from about 3 per year in the 1940s to about 13 per year in recent years.]
If we're happy with this, we could fetch the whole dataset to generate the full plot, or create an app that does this analysis.

Quickly Spin up a Calculating API

You don't even need data. Maybe you just want to deploy a random number generator accessible over the internet, but you don't care to deal with servers. You can just write SQL right into Slight, adding the parameters as usual, and you get an API that runs that query.

We've created a simple example random number generator that generates a bunch of numbers between a min and a max (defaulting to ten dice rolls). If you want to edit this and try your own version, just hit "Use as Template" on that app's page to get started.
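For a sense of what powers such an app, here's a purely illustrative SQL sketch. The `{{...}}` parameter syntax and the BigQuery-style functions are our assumptions for this post, not Slight's actual syntax; see the example app itself for the real query:

```sql
-- Illustrative only: {{min}}, {{max}} and {{quantity}} stand in for
-- the app's parameters.
SELECT
  CAST(FLOOR({{min}} + RAND() * ({{max}} - {{min}} + 1)) AS INT64) AS roll
FROM
  UNNEST(GENERATE_ARRAY(1, {{quantity}}));
```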

Now we can call this app from anywhere we'd like. For example, say we want five numbers from 1 to 20; with cURL we can run:

curl '' \
  --request POST \
  --header 'Content-Type: application/json' \
  --data '{
    "variable_values": [
      { "name": "min", "value": "1" },
      { "name": "max", "value": "20" },
      { "name": "quantity", "value": "5" }
    ]
  }'
Copy and paste the above into your terminal! If you want more than 500 results, you'll need to add an authorization header: see the relevant section in the quickstart for details.

Our API Is Your API

When using Slight in your company, everything you add to Slight and everything you create in Slight will be accessible to all parts of your organization through the generated API. That's why we say it's not just our API but your API.

If delivering data work through an API is something your team could benefit from, we'd be delighted to hear from you. Email us at to get in touch.
