# Destinations

## Webhook

Webhooks are the most basic [Destination](https://docs.tryterra.co/reference/core-concepts#destinations) to set up, and involve Terra making a POST request to a predefined callback URL you pass in when setting up the Webhook.

They are automated messages sent from apps when something happens.

Terra uses webhooks to notify you whenever new data is made available for any of your users. New data (activity, sleep, etc.) will be normalised and sent to your webhook endpoint URL, where you can process it however you see fit.

After a user authenticates with your service through Terra, you will automatically begin receiving webhook messages containing data from their wearable.

### Security

Exposing a URL on your server can pose a number of security risks, allowing a potential attacker to

* **Launch denial of service (DoS) attacks** to overload your server.
* **Tamper with data** by sending malicious payloads.
* **Replay legitimate requests** to cause duplicated actions.

among other exploits.

To guard against these, Terra offers two separate methods of securing your webhook endpoint:

### Payload signing

Every webhook sent by Terra will include an HMAC-based signature header, `terra-signature`, which takes the form:

```
t=1723808700,v1=a5ee9dba96b4f65aeff6c841aa50121b1f73ec7990d28d53b201523776d4eb00
```

In order to verify the payload, you may use one of Terra's SDKs as follows:

{% hint style="warning" %}
Terra requires the **raw, unaltered** body of the request to perform signature verification. If you’re using a framework, make sure it doesn’t manipulate the raw body (many frameworks do by default).

**Any** manipulation to the raw body of the request causes the verification to fail.
{% endhint %}

{% tabs %}
{% tab title="Python SDK" %}

```python
import os

from flask import Flask, jsonify, request
from terra.utils.verify_signature import verify_terra_webhook_signature

app = Flask(__name__)

signing_secret = os.getenv('SIGNING_SECRET')  # Your signing secret from Terra


@app.route('/webhook', methods=['POST'])
def handle_webhook():
    signature = request.headers.get('terra-signature')
    body = request.get_data(as_text=True)

    if signature is None:
        return jsonify({"error": "terra-signature header missing"}), 400

    try:
        is_valid = verify_terra_webhook_signature(
            payload=body, signature_header=signature, signing_secret=signing_secret
        )
        if not is_valid:
            return jsonify({"error": "Invalid signature"}), 400

        # Process the webhook data here
        webhook_data = request.get_json()
        print(f"Received valid webhook data: {webhook_data}")

        return jsonify({"message": "Webhook received successfully"}), 200

    except Exception as e:
        return jsonify({"error": str(e)}), 500


if __name__ == '__main__':
    app.run(port=8000)
```

{% endtab %}

{% tab title="Javascript SDK" %}

```javascript
const express = require("express");
const { verifyTerraWebhookSignature } = require("terra-api");

const app = express();

// Use the built-in Express middleware to capture the raw request body
// for JSON payloads. This must be registered before the webhook endpoint.
app.use(
    express.raw({
        inflate: true,
        limit: "4000kb", // Set a limit appropriate for your use case
        type: "application/json",
    })
);

// Make the route handler function async
app.post("/consumeTerraWebhook", async (req, res) => {
    try {
        const signature = req.headers["terra-signature"];
        const secret = process.env.TERRA_SECRET;

        // First, verify the signature. The function will throw an error if verification fails.
        // The req.body is a Buffer, which is the expected format.
        await verifyTerraWebhookSignature(req.body, signature, secret);

        // If verification is successful, send a 200 OK response immediately.
        res.sendStatus(200);

        // Process the data *after* responding to prevent timeouts.
        const data = JSON.parse(req.body.toString("utf8"));
        console.log("Received & Verified Terra Webhook Data:");
        console.log(JSON.stringify(data, null, 2));

    } catch (error) {
        // If verifyTerraWebhookSignature throws an error, catch it here.
        console.error("Webhook verification failed:", error.message);
        return res.sendStatus(401); // Respond with Unauthorized
    }
});

const port = process.env.PORT || 3000;
app.listen(port, () => {
    console.log(`Server started on http://localhost:${port}`);
});
```

{% endtab %}

{% tab title="Java SDK" %}

```java
import co.tryterra.terraclient.WebhookHandlerUtility;

import spark.Request;
import spark.Response;

// Using the Spark framework (http://sparkjava.com)
public Object handle(Request request, Response response) {
  String payload = request.body();
  String sigHeader = request.headers("terra-signature");

  // Find your secret on https://dashboard.tryterra.co/dashboard/connections
  WebhookHandlerUtility handlerUtility = new WebhookHandlerUtility("SIGNING_SECRET");

  boolean validSignature = handlerUtility.verifySignature(sigHeader, payload);

  if (!validSignature) {
    // The signature is invalid
    response.status(401);
    return "";
  }

  // Deserialize the object inside the event & handle the event
  // ...
  response.status(200);
  return "";
}
```

{% endtab %}

{% tab title="Manual verification" %}
We recommend that you use our official libraries to verify webhook event signatures. You can, however, build a custom solution by following this section.

The `terra-signature` header included in each signed event contains **a timestamp** and **one or more signatures** that you must verify.

* the timestamp is prefixed by `t=`
* each signature is prefixed by a ***scheme***. Schemes start with `v`, followed by an integer (e.g. `v1`)

```
terra-signature:
t=1492774577,
v1=5257a869e7ecebeda32affa62cdca3fa51cad7e77a0e56ff536d0ce8e108d8bd,
v0=6ffbb59b2300aae63f272406069a9788598b792a944a07aba816edb039989a39
```

Terra generates signatures using a hash-based message authentication code ([HMAC](https://en.wikipedia.org/wiki/Hash-based_message_authentication_code)) with [SHA-256](https://en.wikipedia.org/wiki/SHA-2). To prevent [downgrade attacks](https://en.wikipedia.org/wiki/Downgrade_attack), ignore all schemes that aren’t `v1`.

To create a manual solution for verifying signatures, you must complete the following steps:

**Step 1: Extract the timestamp and signatures from the header**

Split the header using the `,` character as the separator to get a list of elements. Then split each element using the `=` character as the separator to get a prefix and value pair.

The value for the prefix `t` corresponds to the timestamp, and `v1` corresponds to the signature (or signatures). You can discard all other elements.

**Step 2: Prepare the `signed_payload` string**

The `signed_payload` string is created by concatenating:

* The timestamp (as a string)
* The character `.`
* The actual JSON payload (that is, the request body)

**Step 3: Determine the expected signature**

Compute an HMAC with the SHA256 hash function. Use the endpoint’s signing secret as the key, and use the `signed_payload` string as the message.

**Step 4: Compare the signatures**

Compare the signature (or signatures) in the header to the expected signature. If they match, compute the difference between the current timestamp and the received timestamp, and decide whether the difference is within your tolerance; this guards against replay attacks.

To protect against timing attacks, use a constant-time-string comparison to compare the expected signature to each of the received signatures.
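The four steps above can be sketched in plain Python using only the standard library. This is a minimal illustration rather than Terra's official implementation, and the `tolerance` window of 300 seconds is an assumption you should tune to your needs:

```python
import hashlib
import hmac
import time


def verify_terra_signature(payload: str, header: str, secret: str,
                           tolerance: int = 300) -> bool:
    # Step 1: extract the timestamp and all v1 signatures from the header
    timestamp = None
    signatures = []
    for element in header.split(","):
        prefix, _, value = element.strip().partition("=")
        if prefix == "t":
            timestamp = value
        elif prefix == "v1":  # ignore other schemes (downgrade protection)
            signatures.append(value)
    if timestamp is None or not signatures:
        return False

    # Step 2: concatenate the timestamp, ".", and the raw request body
    signed_payload = f"{timestamp}.{payload}"

    # Step 3: HMAC-SHA256 over signed_payload, keyed with the signing secret
    expected = hmac.new(
        secret.encode(), signed_payload.encode(), hashlib.sha256
    ).hexdigest()

    # Step 4: constant-time comparison, then a replay-window check
    if not any(hmac.compare_digest(expected, sig) for sig in signatures):
        return False
    return abs(time.time() - int(timestamp)) <= tolerance
```

Pass the raw request body as `payload`: as noted above, any manipulation of the body will cause verification to fail.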
{% endtab %}
{% endtabs %}

### IP Whitelisting

IP whitelisting restricts incoming requests to a preset list of trusted IPs. An attacker trying to reach your URL from an IP outside this list will have their request rejected.

The IPs from which Terra may send a Webhook are:

* **18.133.218.210**
* **18.169.82.189**
* **18.132.162.19**
* **18.130.218.186**
* **13.43.183.154**
* **3.11.208.36**
* **35.214.201.105**
* **35.214.230.71**
* **35.214.252.53**
* **35.214.229.114**
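If you enforce the whitelist in your application code rather than at the firewall, a simple membership check against this list is enough. A minimal sketch in plain Python (note that behind a reverse proxy or load balancer, the client IP usually arrives in a forwarded-for header rather than the socket address):

```python
# Terra's published webhook source IPs (from the list above)
TERRA_WEBHOOK_IPS = frozenset({
    "18.133.218.210", "18.169.82.189", "18.132.162.19", "18.130.218.186",
    "13.43.183.154", "3.11.208.36", "35.214.201.105", "35.214.230.71",
    "35.214.252.53", "35.214.229.114",
})


def is_terra_ip(remote_addr: str) -> bool:
    """Return True if the request came from one of Terra's published IPs."""
    return remote_addr in TERRA_WEBHOOK_IPS
```

Reject any request for which this check fails (e.g. with a 403) before doing any further processing.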

### **Retries**

If your server fails to respond with a 2XX code (whether due to a timeout, or a 3XX, 4XX or 5XX HTTP response), Terra will retry the request with exponential backoff, around 8 times over the course of just over a day.
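Terra doesn't publish the exact retry schedule, but as an illustration, a doubling backoff starting at around six minutes would spread eight retries over roughly a day (these numbers are assumptions, not Terra's actual timings):

```python
# Illustration only: the real retry schedule is internal to Terra.
# A doubling backoff starting at ~6 minutes gives 8 retries over ~1 day.
delays_minutes = [6 * 2 ** attempt for attempt in range(8)]
# 6, 12, 24, 48, 96, 192, 384, 768 minutes between attempts
total_hours = sum(delays_minutes) / 60  # just over a day in total
```

The practical takeaway is that transient outages on your side are tolerated, but an endpoint that stays down for more than about a day will miss events.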

## SQL Database

SQL databases are easy to set up and often the go-to choice for less abstracted storage solutions. Terra currently supports **Postgres & MySQL**.

### Setup

You will need to ensure your SQL database is **publicly accessible**. As a security measure, you may implement [IP whitelisting](#security-ip-whitelisting) using the list of IPs above.

Next, create a user with enough permissions to create tables and to read & write within those tables. You can execute the script below for your database type:

{% tabs %}
{% tab title="Postgres" %}

```sql
CREATE USER terra_user WITH PASSWORD 'your_password';
GRANT CONNECT ON DATABASE your_database_name TO terra_user;
GRANT USAGE ON SCHEMA public TO terra_user;
GRANT CREATE ON SCHEMA public TO terra_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public 
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO terra_user;
```

{% endtab %}

{% tab title="MySQL" %}

```sql
CREATE USER 'terra_user'@'%' IDENTIFIED BY 'your_password';
GRANT CREATE, SELECT, INSERT, UPDATE, DELETE ON your_database_name.* TO 'terra_user'@'%';
REVOKE ALL PRIVILEGES ON your_database_name.existing_table FROM 'terra_user'@'%';
```

{% endtab %}
{% endtabs %}

### Data Structure

Data will be stored in tables within your SQL database following the structure below:

<div data-full-width="false"><figure><img src="https://content.gitbook.com/content/eJJpVMsUARUJq9lYmL6t/blobs/JpYM3irK3pkVvzxw2jfb/image.png" alt=""><figcaption></figcaption></figure></div>

[Users](https://docs.tryterra.co/reference/core-concepts#user) will be created in the `terra_users` table, [data payloads](https://docs.tryterra.co/reference/event-types#data-events) will be placed in the `terra_data_payloads` table, and all other [payloads](https://docs.tryterra.co/reference/health-and-fitness-api/event-types) sent will be added to the `terra_other_payloads` table.

{% hint style="info" %}
When using [SQL](#sql-database) as a destination, Terra handles data storage for you. All [data payloads](https://docs.tryterra.co/reference/event-types#data-events) will be stored in a Terra-owned S3 bucket, and the download link for each payload can be found under the `payload_url` column.
{% endhint %}

## Supabase

[Supabase](https://supabase.com/) offers the best of both worlds in terms of allowing you to have both [storage **buckets**](https://supabase.com/docs/guides/storage) and [Postgres SQL **tables**](https://supabase.com/docs/guides/database/tables). Both coexist within the same platform, making development a breeze.

### Setup

1. [Create a storage bucket](https://supabase.com/docs/guides/storage/buckets/creating-buckets) with an appropriate name (e.g. **terra-payloads**) in your project
2. [Create a table](https://supabase.com/docs/guides/database/tables#creating-tables) within your project. You do not need to add columns to it; Terra will handle that when connecting to it.
3. [Create an API key](https://supabase.com/docs/guides/api/api-keys#the-servicerole-key) for access to your Supabase project. Terra will need read & write access to the resources created in steps 1 and 2.

You'll then need to enter the above details (host, bucket name, and API key) into your Terra Dashboard when adding the Supabase destination.

### Data Structure

When using Supabase (since you get the best of SQL and S3 buckets :tada:), Terra stores the data in the same structure as for [SQL](#data-structure) and for [S3 Buckets](#data-structure-2). See those sections for more detail on how the data is stored!

## AWS Destinations

All AWS-based destinations follow the same authentication setup.

### Setup

#### IAM User Access Key

The most basic way to allow Terra to write to your AWS resource is to [create an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) with access limited to the resource you're trying to grant Terra access to. [Attach relevant policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_change-permissions.html) for access to the specific resource (write access is generally the only one needed, unless specified otherwise).

#### Role-based access

In order to use role-based access, attach the following policy to your bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::760292141147:role/EC2_Instance_Perms"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
```

## GCP Destinations

You'll need to create a service account and generate credentials for it. See the guide [here](https://developers.google.com/workspace/guides/create-credentials#service-account) for further details. Once you have generated credentials, download the JSON file containing them and upload it in the corresponding section when setting up your GCP-based destination.

## Buckets: AWS S3/GCP GCS

Terra allows you to connect an [S3 Bucket](https://aws.amazon.com/s3/) as a destination, to get all data dumped directly into a bucket of your choice. Follow the [AWS Setup](#setup-2) or GCP Setup sections to set up credentials for it.

### Data Structure

When data is sent to your [**S3 Bucket**](https://aws.amazon.com/s3/) or [**GCS**](https://cloud.google.com/storage), it will be dumped using the following folder structure:

<figure><img src="https://content.gitbook.com/content/eJJpVMsUARUJq9lYmL6t/blobs/5MqKqhPJZy7QYEAyY5JE/image.png" alt=""><figcaption><p>Example for AWS S3</p></figcaption></figure>

[Versioned](https://docs.tryterra.co/reference/core-concepts#api-versions) objects will be placed under the appropriate [API version](https://docs.tryterra.co/reference/core-concepts#api-versions) (in the screenshot above, this corresponds to `2022-03-16`)

Every event will be placed under a parent folder named after the [Event Type](https://docs.tryterra.co/reference/health-and-fitness-api/event-types) it corresponds to, and saved with a unique name identifying it.

As shown above, the name will be one of the following concatenations:

* For [Data Events](https://docs.tryterra.co/reference/event-types#data-events): the user ID & the start time of the period the event refers to
* For all other [Event Types](https://docs.tryterra.co/reference/health-and-fitness-api/event-types): the user ID & timestamp the event was generated at

## AWS SQS (Simple Queue Service)

[AWS SQS](https://aws.amazon.com/sqs/) is a managed queuing system allowing you to get messages delivered straight into your existing queue, minimizing the potential for disruptions when ingesting requests from the Terra API into your server.

### Data Structure

Data sent to SQS will have the type `healthcheck` or `s3_payload`. See [event-types](https://docs.tryterra.co/reference/health-and-fitness-api/event-types "mention") for further details.

<figure><img src="https://content.gitbook.com/content/eJJpVMsUARUJq9lYmL6t/blobs/uJoLlwdnUyuZqakWVLez/image.png" alt=""><figcaption></figcaption></figure>

Each of these payloads will be a simple JSON payload formatted as in the diagram above. The `url` field in the data payload is a download link from which you will need to download the data payload using a `GET` request. This is done to minimize the size of messages in your queue.

The URL will be a [pre-signed link](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-presigned-url.html) to an object stored in one of Terra's S3 buckets.
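As a sketch, a consumer of these messages might dispatch on the `type` field and fetch the pre-signed `url` with the standard library. The field names `type` and `url` come from the description above; everything else here is illustrative:

```python
import json
import urllib.request
from typing import Optional


def handle_sqs_message(body: str) -> Optional[dict]:
    """Dispatch on a Terra SQS message body and download the payload if needed."""
    message = json.loads(body)
    if message.get("type") != "s3_payload":
        # e.g. a healthcheck: nothing to download
        return None
    # `url` is a pre-signed, short-lived download link; fetch it promptly
    with urllib.request.urlopen(message["url"]) as resp:
        return json.load(resp)
```

Since pre-signed links expire, download the payload as soon as the message is consumed rather than storing the URL for later.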

{% hint style="info" %}
**Note:** Terra offers the possibility of using your own S3 bucket in combination with the SQS destination. For setting this up, kindly contact Terra support.
{% endhint %}
