Setting up data destinations
Overview
Terra’s Health & Fitness API is event-based: it pushes user health payloads to your data destination as events, removing the need to request data from the API.
Therefore, setting up data destinations is a core step in your integration process.
Terra API provides many options for data destinations such as webhooks, SQL databases, Supabase, and buckets.
In this section, you'll learn how to set up your preferred destination in your Terra Dashboard.
Set up Data Destination
Click "Add New" and select from the available data destinations, including Webhook, Database, and Bucket.
Add the necessary details and click on "Apply".
Don't have a data destination yet?
No worries! If you're just starting to build your product and want to test how Terra works, you can use https://webhook.site to set up a temporary webhook destination.
For further information about each destination, please see Destinations in the Reference page.
IP Whitelisting
Terra sends data to your destinations from a fixed set of IP addresses. If your infrastructure uses a firewall or IP access list, add the following IPs to ensure Terra can reach your destination:
18.133.218.210
18.169.82.189
18.132.162.19
18.130.218.186
13.43.183.154
3.11.208.36
35.214.201.105
35.214.230.71
35.214.252.53
These IPs apply to all data destinations — webhooks, databases, queues, and storage buckets. Ensure they are whitelisted wherever you accept connections from Terra.
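If you prefer to enforce the allowlist in application code rather than at the firewall, a minimal sketch using Python's standard `ipaddress` module (the IP set is the list published above; the function name is ours, for illustration):

```python
import ipaddress

# Terra's published source IPs (from the list above).
TERRA_IPS = {
    "18.133.218.210", "18.169.82.189", "18.132.162.19",
    "18.130.218.186", "13.43.183.154", "3.11.208.36",
    "35.214.201.105", "35.214.230.71", "35.214.252.53",
}

# Parse once at startup so malformed entries fail fast.
_ALLOWED = {ipaddress.ip_address(ip) for ip in TERRA_IPS}

def is_terra_ip(remote_addr: str) -> bool:
    """Return True if the request's source IP is on Terra's list."""
    try:
        return ipaddress.ip_address(remote_addr) in _ALLOWED
    except ValueError:
        # Not a valid IP address at all.
        return False
```

Note that if your service sits behind a proxy or load balancer, the remote address you see may be the proxy's, not Terra's; check the appropriate forwarding header in that case.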
Destination-specific steps
Terra API supports various data destinations, some of which require additional steps to set up correctly. Click each one below for detailed setup instructions:
Webhook
The most basic destination. Terra makes a POST request to your specified URL with new data events.
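Webhook requests from Terra carry a signature header that you should verify before trusting a payload. A minimal sketch, assuming a `terra-signature` header in the `t=<timestamp>,v1=<hmac>` format with an HMAC-SHA256 over `"<timestamp>.<body>"` (check Terra's webhook reference for the exact scheme and your signing secret):

```python
import hmac
import hashlib

def verify_terra_signature(header: str, body: bytes, secret: str) -> bool:
    """Check a 'terra-signature' header of the form 't=<ts>,v1=<hmac>'
    against HMAC-SHA256 of '<ts>.<body>' using your signing secret."""
    try:
        parts = dict(p.split("=", 1) for p in header.split(","))
        timestamp, received = parts["t"], parts["v1"]
    except (ValueError, KeyError):
        return False  # malformed header
    expected = hmac.new(
        secret.encode(), f"{timestamp}.".encode() + body, hashlib.sha256
    ).hexdigest()
    # Constant-time comparison to avoid timing attacks.
    return hmac.compare_digest(expected, received)
```

Respond with a 2xx status quickly once the signature checks out; do any heavy processing asynchronously so Terra does not see your endpoint as failing.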
SQL database (Postgres, MySQL)
Store structured data directly into your PostgreSQL or MySQL databases. Terra manages table creation and inserts download links for full payloads.
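Once Terra is writing to your database, you consume events with ordinary SQL. A minimal sketch, using `sqlite3` as a stand-in for your Postgres/MySQL connection; the `terra_events` table and its columns here are hypothetical, for illustration only (Terra creates the real schema for you, so check your database for the actual table and column names):

```python
import json
import sqlite3

# Stand-in for your Postgres/MySQL connection. The table and columns
# below are hypothetical; Terra manages the real schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE terra_events (user_id TEXT, data_type TEXT, payload TEXT)"
)
conn.execute(
    "INSERT INTO terra_events VALUES (?, ?, ?)",
    ("user-123", "activity", json.dumps({"steps": 4200})),
)

# Fetch events for one user and decode each JSON payload.
rows = conn.execute(
    "SELECT data_type, payload FROM terra_events WHERE user_id = ?",
    ("user-123",),
).fetchall()
events = [(dtype, json.loads(payload)) for dtype, payload in rows]
```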
Supabase
Combines Postgres tables and S3-compatible storage in one platform. Terra provisions everything automatically via OAuth.
Cloud Storage (AWS S3, GCS, Azure Blob)
Dump raw data payloads directly into your preferred cloud storage bucket. Suitable for archival, batch processing, or data lake strategies.
Queuing services (AWS SQS, Kafka)
Integrate with managed message queues for resilient, scalable, and asynchronous data ingestion.
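The consumption pattern is the same regardless of broker: Terra publishes each event as a message, and a worker of yours drains the queue independently. A minimal sketch using Python's standard `queue` and `threading` modules as a stand-in for an SQS or Kafka consumer (in production you would use the broker's client library instead):

```python
import json
import queue
import threading

# Stand-in for an SQS/Kafka subscription: Terra pushes messages in,
# and the worker thread processes them asynchronously.
events = queue.Queue()
processed = []

def worker():
    while True:
        msg = events.get()
        if msg is None:          # sentinel used here to stop the worker
            break
        processed.append(json.loads(msg))
        events.task_done()

t = threading.Thread(target=worker)
t.start()

# Terra would publish messages shaped like this to your queue.
events.put(json.dumps({"type": "activity", "user_id": "user-123"}))
events.put(None)
t.join()
```

Because the queue buffers messages, short outages in your consumer do not lose data, which is the main reason to pick this destination over a plain webhook.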
MongoDB
Store full data payloads directly in your MongoDB collections with automatic indexing.
Google Cloud Firestore
Store full data payloads as documents in your Google Cloud Firestore database.