Destinations
Webhooks are the simplest destination to set up: Terra makes a POST request to a predefined callback URL that you provide when configuring the webhook.
Webhooks are automated messages sent from apps when something happens.
Terra uses webhooks to notify you whenever new data is made available for any of your users. New data, such as activity, sleep, etc., will be normalised and sent to your webhook endpoint URL, where you can process it however you see fit.
After a user authenticates with your service through Terra, you will automatically begin receiving webhook messages containing data from their wearable.
Exposing a URL on your server can pose a number of security risks, allowing a potential attacker to:
Launch denial of service (DoS) attacks to overload your server.
Tamper with data by sending malicious payloads.
Replay legitimate requests to cause duplicated actions.
among other exploits.
To mitigate these risks, Terra offers two separate methods of securing your URL endpoint: signature verification and IP whitelisting.
Every webhook sent by Terra will include an HMAC-based signature header, terra-signature, which will take the form:
In order to verify the payload, you may use one of Terra's SDKs as follows:
Terra requires the raw, unaltered body of the request to perform signature verification. If you're using a framework, make sure it doesn't manipulate the raw body (many frameworks do by default).
Any manipulation of the raw body of the request will cause verification to fail.
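To see why the raw body matters, consider what happens when a framework parses and re-serializes JSON: the bytes change even though the data is logically identical, so any HMAC computed over them no longer matches. A minimal sketch (the secret and body here are made up for illustration):

```python
import hashlib
import hmac
import json

SECRET = b"terra-signing-secret"  # hypothetical secret for illustration

def sign(body: bytes) -> str:
    """Compute an HMAC-SHA256 hex digest over the raw request body."""
    return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

# The raw bytes as they arrive on the wire (compact JSON)...
raw_body = b'{"type":"activity","user":{"user_id":"abc"}}'

# ...versus the same JSON after a framework parses and re-serializes it.
reserialized = json.dumps(json.loads(raw_body)).encode()

# The re-serialized bytes differ (whitespace), so the signatures differ.
print(sign(raw_body) == sign(reserialized))  # False
```

This is why you should verify the signature against the exact bytes received, before any parsing step.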
IP whitelisting means accepting requests only from a preset list of allowed IPs. An attacker trying to reach your URL from an IP outside this list will have their request rejected.
The IPs from which Terra may send a Webhook are:
18.133.218.210
18.169.82.189
18.132.162.19
18.130.218.186
13.43.183.154
3.11.208.36
35.214.201.105
35.214.230.71
35.214.252.53
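A minimal sketch of an allowlist check using the IPs above. How you obtain the client IP depends on your framework and proxy setup, which is an assumption left out of this example:

```python
# Terra webhook source IPs, as listed in the documentation above.
TERRA_WEBHOOK_IPS = {
    "18.133.218.210", "18.169.82.189", "18.132.162.19",
    "18.130.218.186", "13.43.183.154", "3.11.208.36",
    "35.214.201.105", "35.214.230.71", "35.214.252.53",
}

def is_allowed(client_ip: str) -> bool:
    """Reject any request whose source IP is not a known Terra IP."""
    return client_ip in TERRA_WEBHOOK_IPS

print(is_allowed("18.133.218.210"))  # True
print(is_allowed("203.0.113.7"))     # False
```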
If your server fails to respond with a 2XX code (whether by timing out or by responding with a 3XX, 4XX, or 5XX HTTP code), the request will be retried with exponential backoff, around 8 times over the course of just over a day.
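Terra does not publish the exact retry timings; the sketch below only illustrates how 8 exponentially spaced retries can span roughly a day, with a base delay chosen for that purpose:

```python
# Illustration only: the base delay is hypothetical, not Terra's actual value.
BASE_DELAY_S = 340  # ~5.7 minutes

delays = [BASE_DELAY_S * 2**attempt for attempt in range(8)]
total_hours = sum(delays) / 3600

print([round(d / 60) for d in delays])  # retry delays in minutes
print(round(total_hours, 1))            # total span: just over a day
```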
SQL databases are easy to set up and often the go-to choice for less abstracted storage solutions. Terra currently supports Postgres & MySQL.
Next, create a user with enough permissions to create tables and with read & write access within those tables. You can execute the scripts below, depending on your database.
Data will be stored in tables within your SQL database following the structure below:
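The actual schema is created by Terra when it connects; the sketch below (using SQLite for portability) only illustrates the three table names this document describes — terra_users, terra_data_payloads, and terra_other_payloads — with hypothetical columns:

```python
import sqlite3

# Hypothetical column sets; Terra defines the real schema on connection.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE terra_users (
    user_id TEXT PRIMARY KEY,
    provider TEXT
);
CREATE TABLE terra_data_payloads (
    id INTEGER PRIMARY KEY,
    user_id TEXT REFERENCES terra_users(user_id),
    data_type TEXT,
    payload TEXT
);
CREATE TABLE terra_other_payloads (
    id INTEGER PRIMARY KEY,
    event_type TEXT,
    payload TEXT
);
""")
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)
```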
You'll then need to enter the above details (host, bucket name, and API key) into your Terra dashboard when adding the Supabase destination.
All AWS-based destinations follow the same authentication setup.
In order to use role-based access, attach the following policy to your bucket:
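The exact policy Terra expects is shown in the dashboard; a minimal bucket policy of this general shape grants the write access described. The bucket name and the principal ARN below are placeholders, not real values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "<terra-role-arn-from-dashboard>" },
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
```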
As shown above, the name will be a concatenation of the elements listed below:
Data sent to SQS can take either the type healthcheck or s3_payload. See Event Types for further details.
Each of these payloads will be a simple JSON payload formatted as in the diagram above. The url field in the data payload will be a download link from which you will need to download the data payload using a GET request. This is done to minimize the size of messages in your queue.
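The flow above can be sketched as follows. The field names ("type", "url") follow this document; everything else, including the healthcheck short-circuit, is an assumption about how you might structure a consumer:

```python
import json
import urllib.request
from typing import Optional

def handle_message(body: str) -> Optional[dict]:
    """Process one SQS message body: healthchecks are skipped,
    s3_payload messages are resolved by downloading the linked payload."""
    message = json.loads(body)
    if message.get("type") != "s3_payload":
        return None  # e.g. a healthcheck; nothing to download
    # The actual data lives behind the download link, fetched with GET.
    with urllib.request.urlopen(message["url"]) as resp:
        return json.load(resp)
```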
Terra generates signatures using a hash-based message authentication code (HMAC) with SHA-256. To prevent downgrade attacks, ignore all schemes that aren't v1.
Step 3: Determine the expected signature
Step 4: Compare the signatures
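Steps 3 and 4 can be sketched as below. The header layout (t=<timestamp>,v1=<hex digest>) and the signed-payload construction (timestamp, a dot, then the raw body) are assumptions modeled on common HMAC signature schemes; consult Terra's SDKs for the authoritative implementation:

```python
import hashlib
import hmac

def verify_signature(header: str, raw_body: bytes, secret: str) -> bool:
    # Parse a header of the assumed form "t=<timestamp>,v1=<hex digest>".
    parts = dict(item.split("=", 1) for item in header.split(","))
    timestamp, received = parts["t"], parts["v1"]

    # Step 3: determine the expected signature over "<timestamp>.<body>".
    signed_payload = timestamp.encode() + b"." + raw_body
    expected = hmac.new(secret.encode(), signed_payload,
                        hashlib.sha256).hexdigest()

    # Step 4: compare signatures in constant time to avoid timing attacks.
    return hmac.compare_digest(expected, received)
```

Using hmac.compare_digest rather than == matters here: a plain string comparison returns early on the first mismatching byte, which can leak information to an attacker through response timing.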
You will need to ensure your SQL database is publicly accessible. As a security measure, you may implement IP whitelisting using the list of IPs above.
Users will be created in the terra_users table, data payloads will be placed in the terra_data_payloads table, and all other payloads sent will be added to the terra_other_payloads table.
With this destination, Terra handles data storage for you. All payloads will be stored in a Terra-owned S3 bucket, and the download link for each payload will be found under the payload_url column.
Supabase offers the best of both worlds, allowing you to have both SQL tables and storage buckets. Both coexist within the same platform, making development a breeze.
Create a storage bucket with an appropriate name (e.g. terra-payloads) in your project.
Create a table within your project. You do not need to add columns to it; Terra will handle that when connecting to it.
Generate an API key for access to your Supabase project. Terra will need read & write access to the previously created resources in steps 1 and 2.
When using Supabase (since you get the best of SQL and S3 buckets), Terra stores the data in the same structure as for SQL and for S3. Follow those sections for more detailed information on how the data is stored!
The most basic way to allow Terra to write to your AWS resource is to create an IAM user with access limited to the resource you're trying to grant Terra access to. Generate an access key pair for access to the specific resource (write access is generally the only permission needed, unless specified otherwise).
You'll need to create a service account and generate credentials for it. See the guide for further details. Once you have generated credentials, download the JSON file containing them, and upload it in the corresponding section when setting up your GCP-based destination.
Terra allows you to connect an S3 or GCS bucket as a destination, to get all data dumped directly into a bucket of your choice. Follow the AWS/GCP Setup section for setting up credentials for it.
When data is sent to your S3 or GCS bucket, it will be dumped using the following folder structure:
Versioned objects will be placed under the appropriate version folder (in the screenshot above, this corresponds to 2022-03-16).
Non-versioned objects (e.g. ) will be placed in their appropriate event type folder, outside of the version folder.
In all, every event will have as a parent folder the event type which it corresponds to, and will be saved with a unique name identifying it:
For data payloads: the user ID & the start time of the period the event refers to.
For all other event types: the user ID & the timestamp the event was generated at.
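The naming scheme above can be sketched as a key-building function. The exact folder layout and file-name format are defined by Terra; everything in this example, including the separators and the .json extension, is hypothetical:

```python
from datetime import datetime, timezone

def object_key(event_type: str, version: str, user_id: str,
               start_time: datetime) -> str:
    """Build a bucket key: event-type parent folder, version folder,
    then a unique name combining user ID and timestamp."""
    stamp = start_time.strftime("%Y-%m-%dT%H:%M:%S")
    return f"{event_type}/{version}/{user_id}_{stamp}.json"

key = object_key("activity", "2022-03-16", "user-123",
                 datetime(2024, 5, 1, tzinfo=timezone.utc))
print(key)  # activity/2022-03-16/user-123_2024-05-01T00:00:00.json
```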
Amazon SQS (Simple Queue Service) is a managed queuing system that delivers messages straight into your existing queue, minimizing the potential disruptions and barriers that may occur when ingesting requests from Terra's API on your own server.
The URL will be a download link to an object stored in one of Terra's S3 buckets.