Delta Lake is a popular open-source storage layer for data lakes, used for both streaming and batch operations. It lets you store structured, unstructured, and semi-structured data securely and reliably. With features such as support for ACID transactions, scalable metadata management, and schema enforcement, Delta Lake enables you to scale and deliver real-time data insights and analytics directly via your data lake.
RudderStack lets you configure Delta Lake as a destination to which you can send your event data seamlessly.
Configuring Delta Lake destination in RudderStack
To send event data to Delta Lake, you first need to add it as a destination in RudderStack and connect it to your data source. Once the destination is enabled, events will automatically start flowing to Delta Lake via RudderStack.
To configure Delta Lake as a destination in RudderStack, follow these steps:
- In your RudderStack dashboard, set up the data source. Then, select Databricks Delta Lake from the list of destinations.
- Assign a name to your destination and then click on Next.
Connection settings
Enter the following credentials in the Connection Credentials page:
- Host: Enter your server hostname from the Databricks dashboard.
- Port: Enter the port number.
- HTTP Path: Enter the cluster's HTTP path.
- Personal Access Token: Enter your Databricks access token.
- Enable delta tables creation in an external location: Enable this setting to specify an external location where RudderStack should create the delta tables. By default, RudderStack creates the delta tables at the default storage location for non-external Apache Hive tables. When enabled, RudderStack creates the delta tables at the following location:
{ externalLocation }/{schema}/{table}
- Namespace: Enter the name of the schema where RudderStack will create the tables. If you don't specify a namespace in the dashboard settings, RudderStack sets it to the source name by default.
- Sync Frequency: Specify how often RudderStack should sync the data to your Delta Lake instance.
- Sync Starting At: This optional setting lets you specify the particular time of the day (in UTC) when you want RudderStack to sync the data to the Delta Lake instance.
- Exclude Window: This optional setting lets you specify the time window (in UTC) when RudderStack will skip the data sync.
- Object Storage Configuration: RudderStack currently supports the following platforms for storing the staging files:
- Amazon S3
- Google Cloud Storage
- Azure Blob Storage
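If you enable delta tables creation in an external location, RudderStack resolves the table location from the `{externalLocation}/{schema}/{table}` template described above. A minimal Python sketch of that resolution (the bucket path, schema, and table names below are hypothetical, for illustration only):

```python
def delta_table_path(external_location: str, schema: str, table: str) -> str:
    """Resolve the {externalLocation}/{schema}/{table} template."""
    return f"{external_location.rstrip('/')}/{schema}/{table}"

# Hypothetical values for illustration only.
path = delta_table_path("s3://my-datalake/rudderstack", "analytics", "tracks")
print(path)  # s3://my-datalake/rudderstack/analytics/tracks
```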
Granting RudderStack access to your storage bucket
This section contains the steps to edit your bucket policy to grant RudderStack the necessary permissions, depending on your preferred cloud platform.
Amazon S3
Follow these steps to grant RudderStack access to your S3 bucket based on the following two cases:
Case 1: Use STS Token to copy staging files is disabled in the dashboard
For RudderStack Cloud
If you are using RudderStack Cloud, edit your bucket policy using the following JSON:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::422074288268:user/s3-copy"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_BUCKET_NAME/*",
        "arn:aws:s3:::YOUR_BUCKET_NAME"
      ]
    }
  ]
}
```

Replace YOUR_BUCKET_NAME with the name of your S3 bucket.

For self-hosted RudderStack
If you are self-hosting RudderStack, follow these steps:
- Create an IAM policy with the following JSON:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "arn:aws:s3:::*"
    }
  ]
}
```
- Then, create an IAM user with programmatic access and attach the above IAM policy to this user. Copy the user's ARN.
- Next, edit your bucket policy with the following JSON to allow RudderStack to write to your S3 bucket.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT_ID:user/USER_ARN"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_BUCKET_NAME/*",
        "arn:aws:s3:::YOUR_BUCKET_NAME"
      ]
    }
  ]
}
```

Replace USER_ARN with the ARN copied in the previous step. Also, replace ACCOUNT_ID with your AWS account ID and YOUR_BUCKET_NAME with the name of your S3 bucket.

- Finally, add the programmatic access credentials to the `env` file present in your RudderStack installation, as shown:

```
RUDDER_AWS_S3_COPY_USER_ACCESS_KEY_ID=<user_access_key>
RUDDER_AWS_S3_COPY_USER_ACCESS_KEY=<user_access_key_secret>
```
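For self-hosted setups, the bucket policy above can also be rendered programmatically before applying it. A minimal Python sketch that fills in the placeholders (the account ID, user name, and bucket name below are hypothetical, for illustration only):

```python
import json

def render_policy(account_id: str, user_name: str, bucket: str) -> str:
    """Build the RudderStack copy-user bucket policy as a JSON string."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{account_id}:user/{user_name}"},
            "Action": ["s3:GetObject", "s3:PutObject", "s3:PutObjectAcl", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}/*",
                f"arn:aws:s3:::{bucket}",
            ],
        }],
    }
    return json.dumps(policy, indent=2)

# Hypothetical values for illustration only.
print(render_policy("123456789012", "rudderstack-s3-copy", "my-staging-bucket"))
```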
Case 2: Use STS Token to copy staging files is enabled in the dashboard
You can provide the configuration directly while setting up the Delta Lake destination in RudderStack, as shown:
Google Cloud Storage
You can provide the necessary GCS bucket configuration while setting up the Delta Lake destination in RudderStack. For more information, refer to the Google Cloud Storage bucket settings.
Azure Blob Storage
You can provide the necessary Blob Storage container configuration while setting up the Delta Lake destination in RudderStack. For more information, refer to the Azure Blob Storage settings.
Granting Databricks access to your staging bucket
This section contains the steps to grant Databricks the necessary permissions to access your staging bucket, depending on your preferred cloud platform.
Amazon S3
Follow these steps to grant Databricks access to your S3 bucket depending on your case:
Case 1: Use STS Token to copy staging files is disabled in the dashboard
In this case, you need to configure your AWS account to create an instance profile, which is then attached to your Databricks cluster.
Follow these steps in the exact order:
- Create an instance profile to access the S3 bucket.
- Create a bucket policy for the target S3 bucket.
- Note the IAM role used to create the Databricks deployment.
- Add the S3 IAM role to the EC2 policy.
- Add the instance profile to Databricks.
Case 2: Use STS Token to copy staging files is enabled in the dashboard
Add the following Spark configuration to your Databricks cluster:
```
spark.hadoop.fs.s3.impl shaded.databricks.org.apache.hadoop.fs.s3a.S3AFileSystem
spark.hadoop.fs.s3a.impl shaded.databricks.org.apache.hadoop.fs.s3a.S3AFileSystem
spark.hadoop.fs.s3n.impl shaded.databricks.org.apache.hadoop.fs.s3a.S3AFileSystem
spark.hadoop.fs.s3.impl.disable.cache true
spark.hadoop.fs.s3a.impl.disable.cache true
spark.hadoop.fs.s3n.impl.disable.cache true
```
Google Cloud Storage
To grant Databricks access to your GCS bucket, follow these steps:
- Follow the steps listed in this user permissions section to set up the required role and user permissions.
- Then, add the following Spark configuration to your Databricks cluster:
```
spark.hadoop.fs.gs.auth.service.account.email <client_email>
spark.hadoop.fs.gs.project.id <project_id>
spark.hadoop.fs.gs.auth.service.account.private.key <private_key>
spark.hadoop.fs.gs.auth.service.account.private.key.id <private_key_id>
```

- Finally, replace `<project_id>`, `<private_key>`, `<private_key_id>`, and `<client_email>` with the corresponding values obtained from the JSON file downloaded in the previous step.
Azure Blob Storage
To grant Databricks access to your Azure Blob Storage container, follow these steps:
- Add the following Spark configuration to your Databricks cluster.
```
spark.hadoop.fs.azure.account.key.<storage-account-name>.blob.core.windows.net <storage-account-access-key>
```

- Replace `<storage-account-name>` and `<storage-account-access-key>` with the relevant values from your Blob Storage account settings.
Creating a new Databricks cluster
To create a new Databricks cluster, follow these steps:
- Sign into your Databricks account. Then, click on the Compute option on the dashboard, as shown:
- Click on the Create Cluster option.
- Next, enter the cluster details. Fill in the Cluster Name, as shown:
- Select the Cluster Mode depending on your use-case. The following image highlights the three cluster modes:
- Then, select the Databricks Runtime Version as 7.1 or higher, as shown:
- Configure the rest of the settings as per your requirement.
- In the Advanced Options section, configure the Instances field as shown in the following image:
- In the Instance Profile dropdown menu, select the Databricks instance profile that you added to your account in the previous step.
- Finally, click on the Create Cluster button to complete the configuration and create the Databricks cluster.
Obtaining the JDBC/ODBC configuration
Follow these steps to get the JDBC/ODBC configuration:
- In your Databricks dashboard, click on the Compute option, as shown:
- Then, select the cluster you created in the previous section.
- In the Advanced Options section, select the JDBC/ODBC field and copy the Server Hostname, Port, and HTTP Path values, as shown:
Generating the Databricks access token
To generate the Databricks access token, follow these steps:
- In your Databricks dashboard, go to Settings and click on User Settings, as shown:
- Then, go to the Access Tokens section and click on Generate New Token, as shown:
- Enter your comment in the Comment field and click on Generate, as shown:
- Finally, copy the access token as it will be used during the Delta Lake destination setup in RudderStack.
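The server hostname, port, HTTP path, and access token collected above are typically combined into a single JDBC URL when connecting with the Simba Spark driver. A minimal sketch of that assembly; the URL shape should be verified against your JDBC driver's documentation, and all values below are hypothetical:

```python
def databricks_jdbc_url(host: str, port: int, http_path: str, token: str) -> str:
    """Assemble a Databricks JDBC URL (Simba Spark driver shape).

    AuthMech=3 with UID=token tells the driver to authenticate using a
    personal access token as the password.
    """
    return (
        f"jdbc:spark://{host}:{port}/default;"
        f"transportMode=http;ssl=1;httpPath={http_path};"
        f"AuthMech=3;UID=token;PWD={token}"
    )

# Hypothetical values for illustration only.
url = databricks_jdbc_url(
    "dbc-example.cloud.databricks.com", 443,
    "sql/protocolv1/o/0/0000-000000-example0", "dapiXXXXXXXX",
)
print(url)
```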
IPs to be allowlisted
You will need to allowlist the following RudderStack IPs to enable network access:
- 3.216.35.97
- 34.198.90.241
- 54.147.40.62
- 23.20.96.9
- 18.214.35.254
- 35.83.226.133
- 52.41.61.208
- 44.227.140.138
- 54.245.141.180
- 3.66.99.198
- 3.64.201.167
FAQ
What are the reserved keys for Delta Lake?
Refer to this documentation for a complete list of the reserved keywords.
How does RudderStack handle the reserved words in a column, table, or schema?
There are some limitations when using reserved words as a schema, table, or column name. If such words appear in event names, traits, or properties, RudderStack prefixes them with an underscore (`_`) when creating the corresponding tables or columns in your schema.

Also, a schema or table name cannot start with an integer, so RudderStack prefixes such schema, column, or table names with an underscore as well. For example, `25dollarpurchase` is changed to `_25dollarpurchase`.
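The renaming rules above can be sketched as follows; note that the reserved-word list here is a tiny illustrative subset, not Delta Lake's full list:

```python
RESERVED_WORDS = {"select", "from", "where", "table"}  # illustrative subset only

def safe_name(name: str) -> str:
    """Prefix reserved words and names starting with a digit with '_'."""
    if name.lower() in RESERVED_WORDS or name[:1].isdigit():
        return f"_{name}"
    return name

print(safe_name("25dollarpurchase"))  # _25dollarpurchase
print(safe_name("revenue"))           # revenue
```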
How can I modify an existing table to a partitioned table?
To modify an existing table to a partitioned table, follow these steps:
- Set an exclusion window (using the Exclude window connection setting) so that RudderStack does not process any data while performing the below changes.
- Make the required changes in connection settings of the configured Delta Lake destination.
- Run the following queries in the Databricks cluster/SQL endpoints to:
  - Rename the existing table with a `_temp` suffix.
  - Add an `event_date` column to the `_temp` table.
  - Backfill the data into the original table.

```sql
ALTER TABLE x RENAME TO x_temp;
ALTER TABLE x_temp ADD COLUMNS (event_date DATE);
INSERT INTO x SELECT * FROM x_temp;
```
How can I convert an existing managed or unmanaged table at a location to an unmanaged table at a new location?
- Set an exclusion window (using the Exclude window connection setting) so that RudderStack does not process any data while performing the below changes.
- Run the following queries in the Databricks Cluster/SQL endpoints to:
- Create a temporary table using the new location.
- Drop the temporary table.
- Drop the original table.
```sql
-- namespace represents the namespace attached to the destination in RudderStack.
-- x represents the original table created by RudderStack.
CREATE OR REPLACE TABLE namespace.x_temp DEEP CLONE namespace.x LOCATION '/path/to/new/location/namespace/x';
```

```sql
DROP TABLE namespace.x_temp;
DROP TABLE namespace.x;
```
- Enable the Enable delta tables creation in an external location setting in RudderStack dashboard and update the location.
- Remove the exclusion window and make the required changes in connection settings of the configured Delta Lake destination.
RudderStack will create the table again during the subsequent data syncs.
How do I convert an existing unmanaged table at a specific location to a managed table (at default location)?
- Set an exclusion window (using the Exclude window connection setting) so that RudderStack does not process anything while performing the below changes.
- Run the following queries in the Databricks Cluster/SQL Endpoints to:
- Create a temporary table.
- Drop original table.
- Rename temporary table to original table.
```sql
-- namespace represents the namespace attached to the destination in RudderStack.
-- x represents the original table created by RudderStack.
CREATE TABLE IF NOT EXISTS namespace.x_temp DEEP CLONE namespace.x;
```

```sql
DROP TABLE namespace.x;
ALTER TABLE namespace.x_temp RENAME TO namespace.x;
```
- Remove the exclusion window and make sure the Enable delta tables creation in an external location setting is disabled in the RudderStack dashboard.
RudderStack will create the table again during the subsequent data syncs.
Contact us
For queries on any of the sections covered in this guide, you can contact us or start a conversation in our Slack community.