Data warehouse connectors move data from your workspace into your warehouse for long-term storage, advanced analytics, and joining with other datasets.
## Supported warehouses
- Snowflake
- Google BigQuery
- Amazon Redshift
- Databricks (Delta Lake)
- Azure Synapse Analytics
## Connection setup

### Create a destination in the warehouse
In your warehouse, create a dedicated database and schema for the platform’s data. Create a service account or user with write access to that schema. Avoid using root or admin credentials.
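For example, in Snowflake this step might look like the following. This is a minimal sketch using the snowflake-connector-python library; the database, schema, role, and user names (`ANALYTICS_SYNC`, `PLATFORM`, `PLATFORM_WRITER`, `PLATFORM_SVC`) are illustrative placeholders, not values the platform requires:

```python
# Minimal sketch: provision a dedicated Snowflake destination.
# All object names below are illustrative placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.us-east-1",  # your Snowflake account identifier
    user="ADMIN_USER",
    password="...",               # read from a secrets manager in practice
)
cur = conn.cursor()

# Dedicated database and schema for the platform's data.
cur.execute("CREATE DATABASE IF NOT EXISTS ANALYTICS_SYNC")
cur.execute("CREATE SCHEMA IF NOT EXISTS ANALYTICS_SYNC.PLATFORM")

# Service user with write access scoped to that schema only,
# so the connector never needs root or admin credentials.
cur.execute("CREATE ROLE IF NOT EXISTS PLATFORM_WRITER")
cur.execute("GRANT USAGE ON DATABASE ANALYTICS_SYNC TO ROLE PLATFORM_WRITER")
cur.execute(
    "GRANT USAGE, CREATE TABLE ON SCHEMA ANALYTICS_SYNC.PLATFORM "
    "TO ROLE PLATFORM_WRITER"
)
cur.execute(
    "GRANT SELECT, INSERT, UPDATE ON FUTURE TABLES "
    "IN SCHEMA ANALYTICS_SYNC.PLATFORM TO ROLE PLATFORM_WRITER"
)
cur.execute(
    "CREATE USER IF NOT EXISTS PLATFORM_SVC "
    "PASSWORD='...' DEFAULT_ROLE=PLATFORM_WRITER"
)
cur.execute("GRANT ROLE PLATFORM_WRITER TO USER PLATFORM_SVC")

conn.close()
```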
### Add the warehouse connection

Go to Integrations → Catalog → Data warehouses, select your warehouse type, and enter the connection details (example values are sketched after this list):
- Host / account identifier (Snowflake uses account identifiers; others use hostnames)
- Database and schema
- Username and password or service account credentials
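As a rough illustration of how these fields differ by warehouse, the values below contrast a Snowflake connection (account identifier) with a hostname-based one such as Redshift. Every identifier, host, and name here is a made-up example:

```python
# Illustrative connection details -- all values are made-up examples.
snowflake_connection = {
    "account": "xy12345.us-east-1",  # account identifier, not a hostname
    "database": "ANALYTICS_SYNC",
    "schema": "PLATFORM",
    "user": "PLATFORM_SVC",
    "password": "...",               # read from a secrets manager
}

redshift_connection = {
    "host": "analytics.abc123xyz.us-east-1.redshift.amazonaws.com",  # hostname
    "database": "analytics_sync",
    "schema": "platform",
    "user": "platform_svc",
    "password": "...",
}
```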
### Configure table settings
Choose which entities to sync to the warehouse (projects, reports, analytics events, audit logs). Each entity maps to one table in the destination schema. Table names are pre-set and cannot be changed.
## Table configuration
Each synced entity creates one table in the destination schema. Tables use an append-only or upsert write strategy depending on the entity type:

| Entity | Write strategy |
|---|---|
| Events and audit logs | Append-only |
| Projects, users, reports | Upsert (update on primary key match) |
Column names use snake_case, and timestamps are stored as UTC ISO 8601 strings.
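The difference between the two write strategies can be sketched in a few lines of Python. This is a toy model, not the connector's actual implementation; the primary key (`id`) and the sample fields are hypothetical:

```python
from datetime import datetime, timezone

def append_only(table: list[dict], row: dict) -> None:
    """Events and audit logs: every sync adds new rows; nothing is rewritten."""
    table.append(row)

def upsert(table: dict[str, dict], row: dict, key: str = "id") -> None:
    """Projects, users, reports: a row replaces any prior row with the same primary key."""
    table[row[key]] = row

# Rows use snake_case keys and UTC ISO 8601 timestamp strings,
# matching the destination format described above.
report = {
    "id": "rpt_42",                                        # hypothetical primary key
    "report_name": "weekly_summary",                       # snake_case column
    "updated_at": datetime.now(timezone.utc).isoformat(),  # UTC ISO 8601 string
}

reports: dict[str, dict] = {}
upsert(reports, report)                       # first sync inserts the row
report = {**report, "report_name": "weekly_summary_v2"}
upsert(reports, report)                       # later sync updates in place
assert len(reports) == 1                      # still one row per primary key
```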
Warehouse syncs run on a schedule, with a minimum interval of one hour. Real-time streaming to warehouses is not currently supported.