Deploy your first Azure Databricks workspace using the Non-Private Link pattern.

## Prerequisites

- Azure subscription with Contributor + User Access Administrator roles
- Terraform >= 1.5 installed (`terraform version`)
- Azure CLI installed and logged in (`az login`)
- Databricks Account ID from https://accounts.azuredatabricks.net
## Step 1: Set the Databricks Account ID

```bash
# Set Databricks Account ID
export TF_VAR_databricks_account_id="<your-databricks-account-id>"

# Verify
echo $TF_VAR_databricks_account_id
```

## Step 2: Configure the Deployment

```bash
cd /path/to/0-repo/databricks/adb4u/deployments/non-pl

# Copy example configuration
cp terraform.tfvars.example terraform.tfvars

# Edit with your values
vim terraform.tfvars
```

Required values in `terraform.tfvars`:
```hcl
workspace_prefix      = "mydb"               # lowercase, max 12 chars
location              = "eastus2"            # Azure region
resource_group_name   = "rg-databricks-prod"
databricks_account_id = "<from-step-1>"
```

## Step 3: Initialize Terraform

```bash
terraform init
```

Expected output:
```text
Initializing modules...
- networking in ../../modules/networking
- workspace in ../../modules/workspace
- unity_catalog in ../../modules/unity-catalog

Terraform has been successfully initialized!
```

## Step 4: Validate the Configuration

```bash
terraform validate
```

Expected: `Success! The configuration is valid.`
## Step 5: Plan the Deployment

```bash
terraform plan -out=tfplan
```

Review the plan carefully. You should see:
- 1 Resource Group
- 1 VNet + 2 Subnets
- 1 NSG with SCC rules
- 1 NAT Gateway + Public IP
- 1 Databricks Workspace
- 2 Storage Accounts (metastore + external)
- 1 Access Connector
- Unity Catalog resources
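Both Databricks subnets in the plan carry a delegation to `Microsoft.Databricks/workspaces`, which is what allows the workspace to inject compute into your VNet. As a rough sketch of what that delegation looks like in Terraform (the resource names, references, and address space here are illustrative assumptions, not the networking module's actual code):

```hcl
# Illustrative sketch only -- names and CIDR are assumptions
resource "azurerm_subnet" "private" {
  name                 = "snet-databricks-private"
  resource_group_name  = azurerm_resource_group.this.name
  virtual_network_name = azurerm_virtual_network.this.name
  address_prefixes     = ["10.0.2.0/24"]

  # Delegation required for VNet-injected Databricks workspaces
  delegation {
    name = "databricks"
    service_delegation {
      name = "Microsoft.Databricks/workspaces"
      actions = [
        "Microsoft.Network/virtualNetworks/subnets/join/action",
        "Microsoft.Network/virtualNetworks/subnets/prepareNetworkPolicies/action",
        "Microsoft.Network/virtualNetworks/subnets/unprepareNetworkPolicies/action",
      ]
    }
  }
}
```

If the plan does not show a delegation block on both subnets, the workspace creation will fail later, so it is worth checking here.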
## Step 6: Apply

```bash
terraform apply tfplan
```

Duration: ~15-20 minutes.
## Step 7: Review Outputs

```bash
# View workspace URL
terraform output workspace_url

# View all outputs
terraform output

# Save metastore ID for future workspaces
terraform output metastore_id > metastore-id.txt

# Open workspace in browser
open $(terraform output -raw workspace_url)
```

## Step 8: Verify the Deployment

In the Azure Portal, check:
- Resource group exists
- VNet has 2 subnets with Databricks delegation
- NSG attached to both subnets
- NAT Gateway created
In the Databricks workspace:

- Create a cluster
- Verify cluster nodes have no public IPs (confirms NPIP is working)
- Test package installation in a notebook (validates the NAT Gateway):

  ```python
  %pip install pandas
  ```
- Navigate to Data → Unity Catalog
- Verify the metastore is attached
- Create a catalog:

  ```sql
  CREATE CATALOG test_catalog
  ```

- Check that the external location is available
## Troubleshooting

**Invalid Databricks Account ID**
Solution: Verify the format is `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` (no spaces).

**Authorization errors during apply**
Solution: Ensure you have the Contributor + User Access Administrator roles.

**Subnet delegation missing**
Solution: Add the delegation manually:

```bash
az network vnet subnet update \
  --resource-group <rg> \
  --vnet-name <vnet> \
  --name <subnet> \
  --delegations Microsoft.Databricks/workspaces
```

**Clusters cannot reach the internet**
Solution: Verify the NAT Gateway is created and associated with both subnets.
## Resource Summary

| Resource Type | Count | Purpose |
|---|---|---|
| Resource Group | 1 | Container for all resources |
| VNet | 1 | Network isolation |
| Subnets | 2 | Public + Private for Databricks |
| NSG | 1 | Network security rules |
| NAT Gateway | 1 | Stable outbound IP |
| Public IP | 1 | For NAT Gateway |
| Databricks Workspace | 1 | Main workspace |
| Storage Accounts | 2 | Metastore + External location |
| Access Connector | 1 | Unity Catalog managed identity |
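The Access Connector in the table above is the managed identity Unity Catalog uses to reach the storage accounts. A hedged sketch of how such a connector and its storage role assignment are typically wired up (resource names and references here are illustrative assumptions, not the module's actual code):

```hcl
# Illustrative sketch only -- names and references are assumptions
resource "azurerm_databricks_access_connector" "uc" {
  name                = "ac-databricks-uc"
  resource_group_name = "rg-databricks-prod"
  location            = "eastus2"

  # System-assigned managed identity used by Unity Catalog
  identity {
    type = "SystemAssigned"
  }
}

# Grant the connector's identity data access on the metastore storage account
# (assumes an azurerm_storage_account.metastore resource exists elsewhere)
resource "azurerm_role_assignment" "uc_storage" {
  scope                = azurerm_storage_account.metastore.id
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = azurerm_databricks_access_connector.uc.identity[0].principal_id
}
```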
Monthly Cost: ~$58 (infrastructure only, compute is additional)
## Next Steps

1. **Configure Users & Groups**
   - Set up Azure AD SCIM provisioning
   - Assign workspace roles

2. **Create Compute Policies**
   - Define cluster policies
   - Set up cluster pools

3. **Set Up Unity Catalog**
   - Create catalogs and schemas
   - Configure external locations
   - Set up data access permissions

4. **Enable Monitoring**
   - Configure diagnostic logs
   - Set up Azure Monitor integration

5. **Deploy Additional Workspaces**
   - Reuse the metastore for the same region
   - Set `create_metastore = false`
   - Reference `existing_metastore_id`
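When deploying an additional workspace that reuses the existing metastore, the `terraform.tfvars` for the second deployment might look like this (a sketch assuming the same variable names used earlier; the prefix and resource group values are placeholders):

```hcl
# terraform.tfvars for a second workspace in the same region (illustrative)
workspace_prefix      = "mydb2"                # placeholder
location              = "eastus2"              # must match the metastore region
resource_group_name   = "rg-databricks-prod-2" # placeholder
databricks_account_id = "<your-databricks-account-id>"

create_metastore      = false                  # reuse the existing metastore
existing_metastore_id = "<from: terraform output metastore_id>"
```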
## Cleanup

```bash
terraform destroy
```

## Related Documentation

- Authentication Options: see `docs/02-AUTHENTICATION.md`
- Architecture Details: see the Non-PL Pattern documentation
- Troubleshooting: see `docs/04-TROUBLESHOOTING.md`
- Module Details: see the `docs/modules/` folder
Need Help? Check the troubleshooting guide or raise an issue in the repository.