Setting up data destinations

Overview

Terra’s Health & Fitness API is event-based: it pushes all user health payloads as events directly to your data destination, removing the need to poll the API for data.

Therefore, setting up data destinations is a core step in your integration process.

Terra API provides many options for data destinations, such as webhooks, SQL databases, Supabase, and cloud storage buckets.

In this section, you'll learn how to set up your preferred destination in your Terra Dashboard.


Set up Data Destination


Don't have a data destination yet?

No worries! If you are just starting to build your product and want to test how Terra works, you can use https://webhook.site to set up a temporary webhook destination.

For further information about each destination, see the Destinations page in the Reference section.


IP Whitelisting

Terra sends data to your destinations from a fixed set of IP addresses. If your infrastructure uses a firewall or IP access list, add the following IPs to ensure Terra can reach your destination:

  • 18.133.218.210

  • 18.169.82.189

  • 18.132.162.19

  • 18.130.218.186

  • 13.43.183.154

  • 3.11.208.36

  • 35.214.201.105

  • 35.214.230.71

  • 35.214.252.53
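Beyond a network firewall, you can also enforce the allowlist at the application layer by checking the source IP of each incoming request. A minimal sketch in Python (the `is_terra_ip` helper is illustrative, not part of any Terra SDK):

```python
import ipaddress

# Terra's fixed egress IPs, as listed above.
TERRA_IPS = {
    "18.133.218.210",
    "18.169.82.189",
    "18.132.162.19",
    "18.130.218.186",
    "13.43.183.154",
    "3.11.208.36",
    "35.214.201.105",
    "35.214.230.71",
    "35.214.252.53",
}

def is_terra_ip(remote_addr: str) -> bool:
    """Return True if the request's source address is one of Terra's egress IPs."""
    try:
        # Normalise the address to its canonical form before comparing.
        ip = str(ipaddress.ip_address(remote_addr.strip()))
    except ValueError:
        return False  # malformed address
    return ip in TERRA_IPS
```

A request failing this check can be rejected with a 403 before any payload processing. Note that if your service sits behind a proxy or load balancer, the original source IP typically arrives in a forwarding header rather than on the socket itself.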


Destination-specific steps

Terra API supports various data destinations; some require additional steps to set up correctly. Click each for further information and detailed setup instructions:

Webhooks

  • The most basic destination. Terra makes a POST request to your specified URL with new data events.

SQL database (Postgres, MySQL)

  • Store structured data directly into your PostgreSQL or MySQL databases. Terra manages table creation and inserts download links for full payloads.

Supabase

  • Combines Postgres tables and S3-compatible storage in one platform. Terra provisions everything automatically via OAuth.

Cloud Storage (AWS S3, GCS, Azure Blob)

  • Dump raw data payloads directly into your preferred cloud storage bucket. Suitable for archival, batch processing, or data lake strategies.

Queuing services (AWS SQS, Kafka)

  • Integrate with managed message queues for resilient, scalable, and asynchronous data ingestion.

MongoDB

  • Store full data payloads directly in your MongoDB collections with automatic indexing.

Firestore

  • Store full data payloads as documents in your Google Cloud Firestore database.
