# =============================================================================
# API Keys for VLM backends
# =============================================================================
ANTHROPIC_API_KEY=your-anthropic-api-key-here
OPENAI_API_KEY=your-openai-api-key-here
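#
# Optional sanity check (a sketch, POSIX shell): after copying this file to
# .env and loading it (`set -a; . ./.env; set +a`), warn about keys that are
# unset or still placeholders:

```shell
# Flag any required key that is empty or still has the "your-..." placeholder.
for var in ANTHROPIC_API_KEY OPENAI_API_KEY; do
  eval "val=\${$var}"
  case "$val" in
    ""|your-*) echo "please set $var" ;;
  esac
done
```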
# =============================================================================
# Lambda Labs (Cloud GPU for fast training)
# =============================================================================
# Get your API key at: https://cloud.lambdalabs.com/api-keys
#
# Usage:
# python -m openadapt_ml.cloud.lambda_labs list # See available GPUs
# python -m openadapt_ml.cloud.lambda_labs launch # Launch A100 instance
# python -m openadapt_ml.cloud.lambda_labs terminate <id> # Stop billing!
#
LAMBDA_API_KEY=your-lambda-api-key-here
# =============================================================================
# Vast.ai (Cloud GPU Marketplace — cheapest GPUs)
# =============================================================================
# Get your API key at: https://cloud.vast.ai/api/
#
# Usage:
# python -m openadapt_ml.cloud.vast_ai list # See available GPUs
# python -m openadapt_ml.cloud.vast_ai launch # Launch cheapest A10
# python -m openadapt_ml.cloud.vast_ai terminate <id> # Stop billing!
#
VAST_API_KEY=your-vast-api-key-here
# =============================================================================
# Modal (Serverless Cloud GPU for training)
# =============================================================================
# Modal is a Python-native serverless platform with per-second billing.
# $30/month free credits. No SSH or instances to manage.
#
# Setup:
# pip install modal
# modal token new # Authenticate (opens browser)
#
# Usage:
# python -m openadapt_ml.cloud.modal_cloud train --bundle /path/to/bundle
# python -m openadapt_ml.cloud.modal_cloud status
# python -m openadapt_ml.cloud.modal_cloud download
#
# Auth is usually via `modal token set` CLI. Alternatively, set these env vars:
# MODAL_TOKEN_ID=your-modal-token-id
# MODAL_TOKEN_SECRET=your-modal-token-secret
# =============================================================================
# Google Gemini API (for GeminiGrounder element detection)
# =============================================================================
# Get your key via one of these methods:
#
# Option 1: Google AI Studio (easiest)
# 1. Go to https://aistudio.google.com/apikey
# 2. Click "Create API Key"
# 3. Select or create a Google Cloud project
# 4. Copy the key
#
# Option 2: Google Cloud Console
# 1. Go to https://console.cloud.google.com/apis/credentials
# 2. Select your project
# 3. Click "Create Credentials" → "API Key"
# 4. Enable "Generative Language API" at:
# https://console.cloud.google.com/apis/library/generativelanguage.googleapis.com
#
GOOGLE_API_KEY=your-google-api-key-here
# =============================================================================
# Azure Credentials (for WAA benchmark evaluation on Azure VMs)
# =============================================================================
#
# EASY SETUP: Run the automated setup script:
# python scripts/setup_azure.py
#
# This script will:
# - Check Azure CLI installation
# - Log you in (opens browser)
# - Create service principal, resource group, and ML workspace
# - Write credentials to this file
#
# MANUAL SETUP: Follow the steps below if you prefer manual control.
#
# Azure authentication uses Service Principal credentials. To create them:
#
# Step 1: Install Azure CLI
# macOS: brew install azure-cli
# Windows: winget install Microsoft.AzureCLI
# Linux: curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
#
# Step 2: Login to Azure
# az login
#
# Step 3: Get your subscription ID
# az account show --query id -o tsv
#
# Step 4: Create a Service Principal (replace <subscription-id>)
# az ad sp create-for-rbac \
# --name "openadapt-ml-waa" \
# --role "Contributor" \
# --scopes "/subscriptions/<subscription-id>" \
# --sdk-auth
#
# This outputs JSON containing clientId, clientSecret, and tenantId; copy
# these into the matching AZURE_* fields below.
#
# Step 5: Create Azure ML Workspace (if you don't have one)
# az ml workspace create \
# --name agents_ml \
# --resource-group agents \
# --location eastus
#
# Step 6: Request vCPU quota increase (if needed)
# Go to: https://portal.azure.com/#view/Microsoft_Azure_Capacity/QuotaMenuBlade
# Select your subscription → Compute → Standard Dv3 Family
# Request increase to at least 320 vCPUs (for 40 workers × 8 vCPUs each)
#
# Alternative: Use Azure CLI login (no service principal needed)
# If AZURE_CLIENT_ID is not set, DefaultAzureCredential will try:
# 1. Environment variables (service principal)
# 2. Azure CLI credentials (az login)
# 3. Managed Identity (when running on Azure)
#
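#
# A minimal sketch of that fallback chain (assumes the azure-identity Python
# package; not part of this repo's documented API):

```python
# DefaultAzureCredential tries, in order: service-principal env vars
# (AZURE_CLIENT_ID / AZURE_CLIENT_SECRET / AZURE_TENANT_ID), Azure CLI
# credentials from `az login`, then managed identity on Azure hosts.
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
# Request a token for Azure Resource Manager to confirm auth works.
token = credential.get_token("https://management.azure.com/.default")
```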
# Service Principal credentials (from Step 4 output)
AZURE_CLIENT_ID=your-client-id-here
AZURE_CLIENT_SECRET=your-client-secret-here
AZURE_TENANT_ID=your-tenant-id-here
# Azure ML Workspace config
AZURE_SUBSCRIPTION_ID=your-subscription-id-here
AZURE_ML_RESOURCE_GROUP=agents
AZURE_ML_WORKSPACE_NAME=agents_ml
# Optional: Override default VM settings
# AZURE_VM_SIZE=Standard_D8_v3
# AZURE_DOCKER_IMAGE=ghcr.io/microsoft/windowsagentarena:latest
# =============================================================================
# Azure Storage for Async Inference Queue (Phase 2)
# =============================================================================
#
# These settings are auto-configured by setup_azure.py (steps 11-12).
# They enable live inference during training:
# - Training uploads checkpoints to blob storage
# - Inference worker polls queue for new checkpoints
# - Results uploaded to blob for dashboard to display
#
# Setup: Run 'python scripts/setup_azure.py' to create storage account and queue
#
# Manual setup (advanced):
# 1. Create storage account:
# az storage account create --name openadaptmlstorage \
# --resource-group agents --location eastus --sku Standard_LRS
#
# 2. Get connection string:
# az storage account show-connection-string \
# --name openadaptmlstorage -o tsv
#
# 3. Create queue and containers:
# az storage queue create --name inference-jobs --connection-string <conn-str>
# az storage container create --name checkpoints --connection-string <conn-str>
# az storage container create --name comparisons --connection-string <conn-str>
#
AZURE_STORAGE_CONNECTION_STRING=your-storage-connection-string-here
AZURE_INFERENCE_QUEUE_NAME=inference-jobs
AZURE_CHECKPOINTS_CONTAINER=checkpoints
AZURE_COMPARISONS_CONTAINER=comparisons
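#
# A sketch of the worker side of the queue above (assumes the
# azure-storage-queue Python package; the message handling is hypothetical):

```python
# Poll the inference-jobs queue for new checkpoint messages, using the
# connection string and queue name defined in this file.
import os

from azure.storage.queue import QueueClient

client = QueueClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"],
    os.environ.get("AZURE_INFERENCE_QUEUE_NAME", "inference-jobs"),
)
for msg in client.receive_messages():
    print("got job:", msg.content)  # hypothetical handling
    client.delete_message(msg)      # remove after processing
```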