Vectorizer Sync uses SQLite as its local database to store all application data. The database is located in the user's home directory to ensure data persistence across application updates and to avoid permission issues.
The database file location is platform-specific:
- Windows: C:\Users\<username>\vectorizer-sync\database.db
- macOS: ~/vectorizer-sync/database.db
- Linux: ~/vectorizer-sync/database.db
The database is automatically created on first application launch. The application SHALL:
- Create the database directory if it doesn't exist
- Initialize the database with the schema
- Run any pending migrations
- Verify database integrity
The database schema includes a version number for migration purposes:
CREATE TABLE IF NOT EXISTS schema_version (
version INTEGER PRIMARY KEY,
applied_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
);

Stores project configurations and settings.
CREATE TABLE projects (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
path TEXT NOT NULL UNIQUE,
workspace_type TEXT NOT NULL CHECK(workspace_type IN ('local', 'remote')),
workspace_path TEXT,
cloud_workspace_id TEXT,
sync_enabled INTEGER NOT NULL DEFAULT 0,
last_sync_at DATETIME,
created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX idx_projects_path ON projects(path);
CREATE INDEX idx_projects_sync_enabled ON projects(sync_enabled);
CREATE INDEX idx_projects_workspace_type ON projects(workspace_type);

Fields:
- id: Unique identifier (UUID)
- name: Project name
- path: Absolute path to project directory
- workspace_type: 'local' or 'remote'
- workspace_path: Path to workspace.yml file (for local workspaces)
- cloud_workspace_id: HiveHub Cloud workspace ID (for remote workspaces)
- sync_enabled: Whether cloud sync is enabled (0 or 1)
- last_sync_at: Timestamp of last successful sync
- created_at: Creation timestamp
- updated_at: Last update timestamp
Stores exported workspace.yml configurations.
CREATE TABLE workspace_configs (
id TEXT PRIMARY KEY,
project_id TEXT NOT NULL,
workspace_yml TEXT NOT NULL,
exported_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
version INTEGER NOT NULL DEFAULT 1,
FOREIGN KEY (project_id) REFERENCES projects(id) ON DELETE CASCADE
);
CREATE INDEX idx_workspace_configs_project_id ON workspace_configs(project_id);
CREATE INDEX idx_workspace_configs_exported_at ON workspace_configs(exported_at);

Fields:
- id: Unique identifier (UUID)
- project_id: Reference to projects table
- workspace_yml: YAML content of workspace.yml
- exported_at: When the configuration was exported
- version: Workspace format version
Stores history of synchronization operations.
CREATE TABLE sync_history (
id TEXT PRIMARY KEY,
project_id TEXT NOT NULL,
type TEXT NOT NULL CHECK(type IN ('export', 'upload', 'update', 'delete')),
status TEXT NOT NULL CHECK(status IN ('success', 'failed', 'partial')),
files_count INTEGER NOT NULL DEFAULT 0,
files_processed INTEGER NOT NULL DEFAULT 0,
errors TEXT, -- JSON array of error messages
started_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
completed_at DATETIME,
FOREIGN KEY (project_id) REFERENCES projects(id) ON DELETE CASCADE
);
CREATE INDEX idx_sync_history_project_id ON sync_history(project_id);
CREATE INDEX idx_sync_history_started_at ON sync_history(started_at);
CREATE INDEX idx_sync_history_status ON sync_history(status);
CREATE INDEX idx_sync_history_type ON sync_history(type);

Fields:
- id: Unique identifier (UUID)
- project_id: Reference to projects table
- type: Type of sync operation ('export', 'upload', 'update', 'delete')
- status: Operation status ('success', 'failed', 'partial')
- files_count: Total number of files to process
- files_processed: Number of files successfully processed
- errors: JSON array of error messages (if any)
- started_at: When the sync started
- completed_at: When the sync completed (NULL if still in progress)
Stores metadata for tracked files.
CREATE TABLE file_metadata (
id TEXT PRIMARY KEY,
project_id TEXT NOT NULL,
relative_path TEXT NOT NULL,
absolute_path TEXT NOT NULL,
size INTEGER NOT NULL,
hash TEXT NOT NULL, -- SHA-256 hash
last_modified DATETIME NOT NULL,
synced_at DATETIME,
sync_status TEXT NOT NULL DEFAULT 'pending' CHECK(sync_status IN ('pending', 'synced', 'failed', 'excluded')),
exclusion_reason TEXT,
FOREIGN KEY (project_id) REFERENCES projects(id) ON DELETE CASCADE,
UNIQUE(project_id, relative_path)
);
CREATE INDEX idx_file_metadata_project_id ON file_metadata(project_id);
CREATE INDEX idx_file_metadata_sync_status ON file_metadata(sync_status);
CREATE INDEX idx_file_metadata_hash ON file_metadata(hash);
CREATE INDEX idx_file_metadata_last_modified ON file_metadata(last_modified);

Fields:
- id: Unique identifier (UUID)
- project_id: Reference to projects table
- relative_path: File path relative to project root
- absolute_path: Absolute file path
- size: File size in bytes
- hash: SHA-256 hash of file content (for change detection)
- last_modified: Last modification timestamp
- synced_at: When the file was last synced to the cloud
- sync_status: Current sync status
- exclusion_reason: Reason for exclusion (if excluded)
Stores application notifications.
CREATE TABLE notifications (
id TEXT PRIMARY KEY,
type TEXT NOT NULL CHECK(type IN ('internal', 'hivehub')),
severity TEXT NOT NULL CHECK(severity IN ('info', 'warning', 'error')),
title TEXT NOT NULL,
message TEXT NOT NULL,
read INTEGER NOT NULL DEFAULT 0,
action_url TEXT,
created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX idx_notifications_read ON notifications(read);
CREATE INDEX idx_notifications_created_at ON notifications(created_at);
CREATE INDEX idx_notifications_type ON notifications(type);
CREATE INDEX idx_notifications_severity ON notifications(severity);

Fields:
- id: Unique identifier (UUID)
- type: Notification type ('internal' or 'hivehub')
- severity: Severity level ('info', 'warning', 'error')
- title: Notification title
- message: Notification message
- read: Whether the notification has been read (0 or 1)
- action_url: Optional URL for an action (e.g., an upgrade-plan link)
- created_at: When the notification was created
Stores user preferences and settings.
CREATE TABLE user_settings (
id TEXT PRIMARY KEY DEFAULT 'default',
sync_enabled INTEGER NOT NULL DEFAULT 1,
auto_sync_enabled INTEGER NOT NULL DEFAULT 0,
sync_interval INTEGER, -- in minutes
max_file_size INTEGER NOT NULL DEFAULT 102400, -- 100KB in bytes
excluded_patterns TEXT, -- JSON array
notification_preferences TEXT, -- JSON object
hivehub_account TEXT, -- JSON object
updated_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
);

Fields:
- id: Always 'default' (single-row table)
- sync_enabled: Global sync enabled flag
- auto_sync_enabled: Auto-sync on file changes
- sync_interval: Sync interval in minutes (NULL for manual sync)
- max_file_size: Maximum file size in bytes (default: 100KB)
- excluded_patterns: JSON array of exclusion patterns
- notification_preferences: JSON object with notification settings
- hivehub_account: JSON object with HiveHub account information
- updated_at: Last update timestamp
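Since several columns hold serialized JSON, loading the settings row involves deserializing them. A sketch (the camelCase names and the fallback defaults are illustrative assumptions, not part of the schema):

```javascript
// Deserialize the JSON columns of the single user_settings row.
// Fallback defaults here are assumptions for illustration only.
function loadSettings(row) {
  return {
    syncEnabled: row.sync_enabled === 1,
    autoSyncEnabled: row.auto_sync_enabled === 1,
    syncInterval: row.sync_interval,   // minutes, or null for manual sync
    maxFileSize: row.max_file_size,    // bytes
    excludedPatterns: row.excluded_patterns
      ? JSON.parse(row.excluded_patterns) : [],
    notificationPreferences: row.notification_preferences
      ? JSON.parse(row.notification_preferences) : {},
    hivehubAccount: row.hivehub_account
      ? JSON.parse(row.hivehub_account) : null,
  };
}
```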
Notification Preferences JSON Structure:
{
"syncStatus": true,
"errors": true,
"quotaWarnings": true,
"hivehubAlerts": true
}

HiveHub Account JSON Structure:
{
"email": "user@example.com",
"planType": "free",
"quotaLimit": 1000000000,
"quotaUsed": 500000000
}

The database is configured with the following SQLite pragmas:
PRAGMA journal_mode = WAL; -- Write-Ahead Logging for better concurrency
PRAGMA foreign_keys = ON; -- Enable foreign key constraints
PRAGMA synchronous = NORMAL; -- Balance between safety and performance
PRAGMA cache_size = -10000; -- 10MB cache (a negative value is in KiB; a positive value is a page count)
PRAGMA temp_store = MEMORY; -- Use memory for temporary storage

All write operations SHOULD use transactions:
db.transaction(() => {
// Multiple operations
insertProject(...);
insertFileMetadata(...);
updateSyncHistory(...);
})();

All foreign key relationships are enforced:
- workspace_configs.project_id → projects.id (CASCADE DELETE)
- sync_history.project_id → projects.id (CASCADE DELETE)
- file_metadata.project_id → projects.id (CASCADE DELETE)

Unique constraints on:
- projects.path (one project per path)
- (project_id, relative_path) in file_metadata (one entry per file per project)

Check constraints on:
- projects.workspace_type (must be 'local' or 'remote')
- sync_history.type (must be a valid operation type)
- sync_history.status (must be a valid status)
- file_metadata.sync_status (must be a valid status)
- notifications.type (must be 'internal' or 'hivehub')
- notifications.severity (must be a valid severity level)
Database schema changes are handled through migrations:
- Each migration has a version number
- Migrations are applied in order
- Migration history is stored in the schema_version table
- Failed migrations are rolled back
const migrations = [
{
version: 1,
up: `
CREATE TABLE projects (...);
CREATE TABLE workspace_configs (...);
`,
down: `
DROP TABLE workspace_configs;
DROP TABLE projects;
`
},
{
version: 2,
up: `
ALTER TABLE projects ADD COLUMN description TEXT;
`,
down: `
    -- SQLite supports DROP COLUMN only from 3.35 onward; on older
    -- versions the table must be recreated without the column
`
}
];
Automatic Backups:
- Database is backed up before schema migrations
- Backups are stored in ~/vectorizer-sync/backups/

Manual Backups:
- Users can export the database via the UI
- A backup includes all tables and data

Backup Format:
- SQL dump (.sql)
- Compressed SQLite database (.db.backup)
Automatic Recovery:
- Database integrity checks on startup
- Automatic repair if corruption detected

Manual Recovery:
- Restore from backup file
- Import from SQL dump
All foreign keys and frequently queried columns are indexed:
- Project lookups by path
- File metadata by project and status
- Sync history by project and date
- Notifications by read status and date
Query optimization guidelines:
- Use prepared statements for repeated queries
- Batch operations in transactions
- Limit result sets with appropriate WHERE clauses
- Use appropriate indexes for queries
Maintenance operations:
- VACUUM: Run periodically to reclaim space
- ANALYZE: Update query planner statistics
- REINDEX: Rebuild indexes if needed
Security considerations:
- File Permissions: Database file has restricted permissions (user-only)
- SQL Injection: All queries use parameterized statements
- Data Validation: All data is validated before insertion
- Encryption: Optional database encryption (future feature)
Use in-memory database for tests:
const testDb = new Database(':memory:');
// Initialize schema
// Run tests
testDb.close();

Create test fixtures for:
- Sample projects
- File metadata
- Sync history
- Notifications