# Nostria Testing Guide

Comprehensive guide to testing Nostria, with a focus on AI/LLM-driven automation.

## Table of Contents

- Overview
- Quick Start
- Test Architecture
- Running Tests
- AI/LLM Automation Guide
- Writing Tests
- Page Object Models
- Test Fixtures
- Configuration
- Test Artifacts
- Debugging
- CI/CD Integration
- Best Practices
## Overview

Nostria's testing strategy combines:
- Unit Tests: Karma/Jasmine for component and service testing
- E2E Tests: Playwright for end-to-end user flow testing
- AI-Optimized Automation: Special utilities for LLM-driven test execution
The E2E testing setup is specifically designed to enable AI assistants (like GitHub Copilot) to:
- Execute tests and analyze results
- Capture and interpret screenshots and videos
- Collect console logs for debugging
- Understand page state through structured data
- Iterate on test failures automatically
## Quick Start

```bash
# Ensure dependencies are installed
npm install

# Install Playwright browsers (if not already installed)
npx playwright install chromium

# Run all E2E tests
npm run test:e2e

# Run with visual UI (great for debugging)
npm run test:e2e:ui

# Run in headed mode (see browser)
npm run test:e2e:headed

# Run AI-optimized tests (full artifact collection)
npm run test:e2e:ai

# Open HTML test report
npm run test:e2e:report
```
Results are also written to `test-results/results.json` (JSON format for AI parsing).

## Test Architecture

```
e2e/
├── fixtures.ts               # Extended Playwright fixtures (auth, perf, network, console, memory)
├── global-setup.ts           # Runs before all tests
├── global-teardown.ts        # Runs after all tests
├── fixtures/
│   ├── test-data.ts          # Centralized test constants (profiles, relays, routes, viewports)
│   ├── mock-events.ts        # Nostr event factory functions
│   └── test-isolation.ts     # App state reset/cleanup helpers
├── helpers/
│   ├── auth.ts               # TestAuthHelper — auth injection/cleanup
│   ├── console-analyzer.ts   # ConsoleAnalyzer — log categorization/reporting
│   ├── metrics-collector.ts  # MetricsCollector — performance aggregation
│   ├── websocket-monitor.ts  # WebSocketMonitor — CDP-based WS tracking
│   └── report-generator.ts   # Full report generator (JSON + Markdown)
├── pages/
│   └── index.ts              # Page Object Models
├── screenshots/              # Visual regression baselines
└── tests/
    ├── home.spec.ts          # Home page tests
    ├── navigation.spec.ts    # Navigation tests
    ├── accessibility.spec.ts # A11y tests
    ├── public/               # Unauthenticated test specs
    ├── auth/                 # Authenticated test specs
    ├── performance/          # Performance & metrics specs
    ├── network/              # Network & WebSocket specs
    ├── visual/               # Visual regression specs
    ├── nostr/                # Nostr-specific protocol specs
    ├── resilience/           # Error resilience specs
    └── security/             # Security testing specs
```
Test artifacts are written to `test-results/`:

```
test-results/
├── results.json           # JSON results for AI parsing
├── test-summary.json      # Simplified summary
├── test-run-metadata.json # Test run info
├── html-report/           # HTML report for humans
├── screenshots/           # Named screenshots
├── videos/                # Video recordings
├── traces/                # Playwright traces
├── logs/                  # Console logs (JSON)
├── ai-states/             # Page state snapshots
└── artifacts/             # Other test artifacts
```
## Running Tests

| Command | Description |
|---|---|
| `npm run test:e2e` | Run all E2E tests in headless mode |
| `npm run test:e2e:ui` | Open the Playwright UI for interactive testing |
| `npm run test:e2e:headed` | Run tests with a visible browser |
| `npm run test:e2e:debug` | Debug mode with step-through |
| `npm run test:e2e:ai` | AI-optimized run (full artifacts) |
| `npm run test:e2e:report` | View the HTML test report |
| `npm run test:e2e:codegen` | Record tests via the browser |
```bash
# Run specific test file
npx playwright test e2e/tests/home.spec.ts

# Run specific test by name
npx playwright test -g "should load the home page"

# Run with specific browser
npx playwright test --project=chromium

# Run with multiple workers
npx playwright test --workers=4

# Run in specific browser project
npx playwright test --project=mobile-chrome
```

| Variable | Default | Description |
|---|---|---|
| `BASE_URL` | `http://localhost:4200` | App URL to test |
| `CI` | - | Set in CI environments (affects retries) |
## AI/LLM Automation Guide

The testing setup is optimized for AI-driven automation. Here's how to use it:
```bash
# Run tests and get JSON output
npm run test:e2e

# For maximum debugging information:
npm run test:e2e:ai
```

After tests run, check these files:

```bash
# Quick summary (AI-friendly)
cat test-results/test-summary.json

# Detailed results
cat test-results/results.json

# Console logs from tests
ls test-results/logs/
```

The `test-summary.json` provides:

```json
{
  "endTime": "2024-01-15T10:30:00.000Z",
  "totalTests": 15,
  "passed": 14,
  "failed": 1,
  "skipped": 0,
  "duration": 45000,
  "failedTests": ["should handle empty feed gracefully"]
}
```

Screenshots are saved with descriptive names:
```
test-results/screenshots/
├── home-page-loaded-2024-01-15T10-30-00.png
├── navigation-menu-open-2024-01-15T10-30-05.png
└── feed-loading-state-2024-01-15T10-30-10.png
```
For failed tests:

- Check `test-results/results.json` for error messages
- View screenshots in `test-results/screenshots/`
- Watch video recordings in `test-results/videos/`
- Analyze traces using `npx playwright show-trace test-results/artifacts/<trace-file>`
The `AIPageAnalyzer` class captures structured page state:

```typescript
import { AIPageAnalyzer } from '../helpers/ai-automation';

test('analyze page', async ({ page }) => {
  const analyzer = new AIPageAnalyzer(page);

  // Capture complete page state
  const state = await analyzer.capturePageState();
  console.log(JSON.stringify(state, null, 2));

  // Get action recommendations
  const recommendations = await analyzer.getActionRecommendations();
  console.log('Recommended actions:', recommendations);

  // Save state to file
  await analyzer.saveStateToFile('my-test');
});
```

Use natural-language-like commands:
```typescript
import { SemanticActions } from '../helpers/ai-automation';

test('user flow', async ({ page }) => {
  const actions = new SemanticActions(page);

  await actions.clickButton('Create Note');
  await actions.fillInput('Content', 'Hello, Nostr!');
  await actions.clickButton('Publish');
  await actions.waitForText('Note published');
});
```

For AI-driven iteration:
1. Run tests: `npm run test:e2e:ai`
2. Analyze failures: Read `test-results/test-summary.json`
3. View artifacts: Check screenshots and console logs
4. Make fixes: Modify code based on findings
5. Re-run: Execute tests again
6. Repeat: Until all tests pass
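The loop above can itself be scripted. A minimal sketch that consumes the `test-summary.json` shape shown earlier and decides whether another iteration is needed (`nextAction` is a hypothetical helper, not part of the test suite):

```typescript
// Sketch: decide whether another fix/re-run iteration is needed,
// based on the test-summary.json shape shown above.
interface TestSummary {
  totalTests: number;
  passed: number;
  failed: number;
  skipped: number;
  failedTests: string[];
}

function nextAction(summary: TestSummary): string {
  if (summary.failed === 0) {
    return 'done: all tests pass';
  }
  // Point the AI (or a human) at the artifacts for each failure.
  const targets = summary.failedTests
    .map(name => `- "${name}" (check test-results/screenshots/ and test-results/logs/)`)
    .join('\n');
  return `iterate: ${summary.failed} failure(s)\n${targets}`;
}

// Example using the summary values shown above:
const summary: TestSummary = {
  totalTests: 15, passed: 14, failed: 1, skipped: 0,
  failedTests: ['should handle empty feed gracefully'],
};
console.log(nextAction(summary));
```

In practice the summary would be read from disk with `JSON.parse(fs.readFileSync('test-results/test-summary.json', 'utf8'))`.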
## Writing Tests

```typescript
import { test, expect } from '../fixtures';
import { HomePage } from '../pages';

test.describe('Feature Name', () => {
  test.beforeEach(async ({ page }) => {
    await page.goto('/');
  });

  test('should do something', async ({ page, waitForNostrReady, captureScreenshot }) => {
    // Wait for app to be ready
    await waitForNostrReady();

    // Perform actions
    const homePage = new HomePage(page);
    await homePage.clickCreateNote();

    // Assert
    await expect(page.locator('.note-dialog')).toBeVisible();

    // Capture screenshot for AI analysis
    await captureScreenshot('after-clicking-create');
  });
});
```

```typescript
test('with console logs', async ({
  page,
  waitForNostrReady,
  captureScreenshot,
  saveConsoleLogs,
  getConsoleLogs
}) => {
  await page.goto('/');
  await waitForNostrReady();

  // Get current console logs
  const logs = getConsoleLogs();
  console.log('Console output:', logs);

  // Save logs to file
  await saveConsoleLogs('my-test-name');
});
```

```typescript
import { NostrTestUtils } from '../fixtures';

test('nostr events', async ({ page }) => {
  const nostrUtils = new NostrTestUtils(page);

  // Wait for specific event kind
  await nostrUtils.waitForEventKind(1); // Kind 1 = notes

  // Get visible notes
  const notes = await nostrUtils.getVisibleNotes();
  expect(notes.length).toBeGreaterThan(0);
});
```

## Page Object Models

| Class | Description | Key Methods |
|---|---|---|
| `HomePage` | Main feed/home | `goto()`, `getNoteCount()`, `clickCreateNote()` |
| `ProfilePage` | User profile | `goto(pubkey)`, `getDisplayName()`, `clickFollow()` |
| `MessagesPage` | Direct messages | `goto()`, `selectConversation()`, `sendMessage()` |
| `SettingsPage` | User settings | `goto()`, `toggleTheme()`, `save()` |
| `LoginPage` | Account login | `goto()`, `loginWithNsec()`, `clickExtensionLogin()` |
| `MusicPage` | Music player | `goto()`, `playFirstTrack()`, `isPlaying()` |
| `CommandPalette` | Command palette | `open()`, `search()`, `executeCommand()` |
```typescript
import { HomePage, ProfilePage, CommandPalette } from '../pages';

test('navigate via command palette', async ({ page }) => {
  await page.goto('/');

  const commandPalette = new CommandPalette(page);
  await commandPalette.open();
  await commandPalette.executeCommand('Settings');

  // Now on settings page
  await expect(page).toHaveURL(/settings/);
});
```

To add a new page object, extend `BasePage`:

```typescript
import { Page, Locator } from '@playwright/test';
import { BasePage } from '../fixtures';

export class MyNewPage extends BasePage {
  readonly myElement: Locator;
  readonly anotherElement: Locator;

  constructor(page: Page) {
    super(page);
    this.myElement = page.locator('[data-testid="my-element"]');
    this.anotherElement = page.locator('.another-element');
  }

  async goto(): Promise<void> {
    await this.page.goto('/my-route');
    await this.waitForReady();
  }

  async doSomething(): Promise<void> {
    await this.myElement.click();
  }
}
```

## Test Fixtures

| Fixture | Type | Description |
|---|---|---|
| `page` | `Page` | Standard Playwright page with console logging |
| `consoleLogs` | `ConsoleLogEntry[]` | Collected console logs |
| `captureScreenshot` | `Function` | Save a named screenshot |
| `waitForNostrReady` | `Function` | Wait for the app to initialize |
| `clearConsoleLogs` | `Function` | Clear collected logs |
| `getConsoleLogs` | `Function` | Get current logs |
| `saveConsoleLogs` | `Function` | Save logs to a JSON file |
```typescript
import { test, expect } from '../fixtures';

test('using fixtures', async ({
  page,
  captureScreenshot,
  waitForNostrReady,
  saveConsoleLogs,
}) => {
  await page.goto('/');
  await waitForNostrReady();
  await captureScreenshot('initial-state');

  // ... test actions ...

  await saveConsoleLogs('test-console-output');
});
```

## Configuration

Key configuration options:
```typescript
export default defineConfig({
  testDir: './e2e',
  timeout: 60_000,             // Test timeout
  expect: { timeout: 10_000 }, // Assertion timeout
  use: {
    baseURL: 'http://localhost:4200',
    screenshot: 'on',           // Always capture screenshots
    video: 'retain-on-failure', // Videos only on failure
    trace: 'retain-on-failure', // Traces only on failure
  },
  webServer: {
    command: 'npm run start',   // Auto-start app
    url: 'http://localhost:4200',
    reuseExistingServer: true,  // Reuse if running
  },
});
```

| Project | Use Case |
|---|---|
| `chromium` | Primary desktop testing |
| `firefox` | Firefox browser testing |
| `webkit` | Safari/WebKit testing |
| `mobile-chrome` | Mobile responsive testing |
| `mobile-safari` | iOS responsive testing |
| `ai-debug` | Maximum artifact collection |
```bash
# Desktop Chrome only
npx playwright test --project=chromium

# Mobile testing
npx playwright test --project=mobile-chrome

# AI debugging (full artifacts)
npx playwright test --project=ai-debug
```

## Test Artifacts

```typescript
// Named screenshots are saved to test-results/screenshots/
await captureScreenshot('descriptive-name');

// Or use the page method directly
await page.screenshot({
  path: 'test-results/screenshots/my-screenshot.png',
  fullPage: true
});
```

Videos are recorded automatically based on config:
- `'on'`: Always record
- `'retain-on-failure'`: Keep only for failed tests
- `'off'`: Never record
Traces provide step-by-step debugging:

```bash
# View a trace
npx playwright show-trace test-results/artifacts/trace.zip
```

```typescript
// Logs are automatically collected.
// Save them for analysis:
await saveConsoleLogs('my-test-name');

// Read from file:
// test-results/logs/my-test-name-2024-01-15T10-30-00.json
```

## Debugging

```bash
npm run test:e2e:debug
```

This opens the Playwright Inspector, where you can:
- Step through tests
- View selectors
- Time-travel through test steps
```bash
npm run test:e2e:ui
```

Features:
- Watch mode (re-run on changes)
- Visual test timeline
- DOM snapshot inspection
- Network request viewing
```typescript
test('debug with logs', async ({ page, getConsoleLogs }) => {
  await page.goto('/');

  // Print all console messages
  const logs = getConsoleLogs();
  console.log('Page console output:');
  logs.forEach(log => console.log(`[${log.type}] ${log.text}`));
});
```

Screenshots are automatically captured on test failure. For manual capture:
```typescript
test.afterEach(async ({ page }, testInfo) => {
  if (testInfo.status === 'failed') {
    await page.screenshot({
      path: `test-results/failures/${testInfo.title}.png`
    });
  }
});
```

## CI/CD Integration

```yaml
name: E2E Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Install dependencies
        run: npm ci
      - name: Install Playwright browsers
        run: npx playwright install --with-deps chromium
      - name: Run E2E tests
        run: npm run test:e2e
      - name: Upload test results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: playwright-report
          path: test-results/
```

```bash
# CI environment
CI=true npm run test:e2e

# Custom base URL
BASE_URL=https://staging.nostria.app npm run test:e2e
```

## Best Practices

- Use Page Objects: Encapsulate selectors and actions
- Wait properly: Use `waitForNostrReady()` before assertions
- Capture screenshots: For debugging and AI analysis
- Save console logs: For failed-test investigation
- Use descriptive names: Test names should explain what's being tested
- Test one thing: Each test should verify a single behavior
- Handle async properly: Always await async operations
- Don't use hardcoded waits: Use proper waiting mechanisms
- Don't test implementation: Test behavior, not internals
- Don't share state: Tests should be independent
- Don't ignore flaky tests: Fix them or mark them as known issues
- Don't skip cleanup: Use proper beforeEach/afterEach
Priority order for selectors:

1. `data-testid` attributes (most stable)
2. Semantic roles (`getByRole`)
3. Text content (`getByText`)
4. CSS classes (less stable)
5. XPath (avoid if possible)

```typescript
// Best - uses test ID
page.locator('[data-testid="submit-button"]')

// Good - uses semantic role
page.getByRole('button', { name: 'Submit' })

// Okay - uses visible text
page.getByText('Submit')

// Avoid - fragile
page.locator('.btn-primary.large')
```

When developing new features, add `data-testid` attributes:
```html
<!-- In Angular template -->
<button data-testid="create-note-button" (click)="createNote()">
  Create Note
</button>
```

## Visual Regression Testing

Visual regression tests capture screenshots of key pages and components, then compare them against baseline ("golden") images on subsequent runs. A test fails if the pixel difference exceeds the configured threshold (1%).
Playwright's built-in `toHaveScreenshot()` assertion handles screenshot comparison:

- First run: Baseline screenshots are generated and saved to `e2e/screenshots/`
- Subsequent runs: New screenshots are compared pixel-by-pixel against baselines
- Failures: If the diff exceeds the threshold, the test fails and a diff image is saved

```bash
# Run all visual regression tests
npm run test:e2e:visual

# Update baseline screenshots (after intentional UI changes)
npm run test:e2e:visual:update
```

| Spec | Description |
|---|---|
| `theme-consistency.spec.ts` | 5 pages in light & dark mode, contrast validation |
| `responsive-layout.spec.ts` | 3 pages at mobile/tablet/desktop, layout transitions |
| `component-gallery.spec.ts` | Individual component screenshots (sidenav, cards, buttons, dialogs) |
- Location: `e2e/screenshots/`, committed to the repository
- Updating: Run `npm run test:e2e:visual:update` after intentional UI changes
- Review: Always review updated screenshots before committing
- CI: Baselines must match the CI environment's rendering (use consistent browser versions)
Visual regression thresholds are configured in `playwright.config.ts`:

```typescript
expect: {
  toHaveScreenshot: {
    maxDiffPixelRatio: 0.01, // 1% pixel difference allowed
    threshold: 0.2,          // Per-pixel color threshold
  },
},
snapshotPathTemplate: 'e2e/screenshots/{testFilePath}/{arg}{ext}',
```

- Dynamic content masking: Tests use `mask` to hide timestamps, avatars, and other dynamic elements that change between runs
- Stable rendering: Tests wait for `networkidle` and Angular bootstrap before capturing
- Theme toggle: Dark mode is set via `localStorage.setItem('nostria-theme', 'dark')` before page load
- Component isolation: Component-level screenshots target specific Angular Material selectors (`mat-card`, `mat-sidenav`, etc.)
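The `maxDiffPixelRatio` rule reduces to a simple ratio check. A sketch, for intuition only; Playwright's real comparator (pixelmatch) also applies the per-pixel `threshold` color tolerance before counting a pixel as different:

```typescript
// Sketch: the ratio rule behind maxDiffPixelRatio.
// Playwright's actual comparator first decides per pixel whether it
// differs (using the `threshold` color tolerance), then applies this
// ratio check over the whole image.
function passesDiffRatio(
  diffPixels: number,
  width: number,
  height: number,
  maxDiffPixelRatio = 0.01, // 1%, matching the config above
): boolean {
  return diffPixels / (width * height) <= maxDiffPixelRatio;
}

// For a 1920x1080 screenshot, up to ~20,736 differing pixels are tolerated.
console.log(passesDiffRatio(20_000, 1920, 1080)); // within 1%
console.log(passesDiffRatio(25_000, 1920, 1080)); // exceeds 1%
```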
## Authenticated Testing

The `TEST_NSEC` environment variable provides a Nostr private key (in `nsec1...` format) for authenticated E2E tests. This key is used to inject a logged-in session into the browser, allowing tests to exercise features that require authentication (DMs, note creation, settings, etc.).

- NEVER use a real account's nsec for testing. The test key may be exposed in CI logs, local files, or test artifacts.
- Generate a throwaway key specifically for testing purposes.
- The `.env` file is gitignored and will not be committed to the repository.
- In CI, the key is stored as a GitHub Actions secret (`TEST_NSEC`).
1. Generate a new keypair using any Nostr key generator:

   ```bash
   # Using nostr-tools (the same library used by the test suite)
   node -e "
   const { generateSecretKey, getPublicKey } = require('nostr-tools/pure');
   const { nsecEncode } = require('nostr-tools/nip19');
   const sk = generateSecretKey();
   console.log('nsec:', nsecEncode(sk));
   console.log('pubkey:', getPublicKey(sk));
   "
   ```

2. Add to `.env`:

   ```bash
   echo "TEST_NSEC=nsec1your_generated_key_here" > .env
   ```

3. (Optional) Set up the test profile by logging into Nostria with the test key and setting a display name, avatar, etc. This makes authenticated test assertions more meaningful.
The `TestAuthHelper` class (`e2e/helpers/auth.ts`) handles authentication:

- Key derivation: Takes the nsec, decodes it to a hex private key, and derives the public key
- localStorage injection: Uses `page.addInitScript()` to set `nostria-account` and `nostria-accounts` in localStorage before the app loads
- Bypass encryption: Sets `isEncrypted: false` so the app reads the key directly without requiring PIN entry
- Cleanup: After each test, clears auth keys from localStorage
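The injected payload can be pictured as follows. This is a sketch only: the key names and the `isEncrypted: false` flag come from the description above, but the remaining `NostrUser` fields (and the `privkey` field name) are illustrative guesses, not the app's actual account model:

```typescript
// Sketch: build the localStorage entries TestAuthHelper injects.
// Key names and the isEncrypted flag follow the description above;
// the other NostrUser fields are hypothetical.
interface NostrUserLike {
  pubkey: string;
  privkey: string;       // hex-decoded from the nsec (hypothetical field name)
  isEncrypted: boolean;  // false => app skips the PIN entry flow
}

function buildAuthStorage(user: NostrUserLike): Record<string, string> {
  return {
    'nostria-account': JSON.stringify(user),
    'nostria-accounts': JSON.stringify([user]),
  };
}

const storage = buildAuthStorage({
  pubkey: 'deadbeef'.repeat(8),
  privkey: 'cafebabe'.repeat(8),
  isEncrypted: false,
});
// Inside addInitScript, each entry would be written with
// localStorage.setItem(key, value) before Angular bootstraps.
console.log(Object.keys(storage));
```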
```typescript
import { test, expect } from '../../fixtures';

test('authenticated feature', async ({ authenticatedPage }) => {
  // authenticatedPage is already logged in
  await authenticatedPage.goto('/notifications');
  // ... test authenticated features
});
```

If `TEST_NSEC` is not set, the test suite automatically generates a throwaway keypair for each run. This means:
- The test account has no relay history, no profile, no follows
- Tests that depend on existing data (like "following feed shows content") will see empty states
- This is still useful for testing UI rendering, navigation, and error handling
## Test Data Fixtures

### `e2e/fixtures/test-data.ts`

Contains constants for:
- Well-known profiles: npubs for Jack Dorsey, fiatjaf, hodlbod (read-only profile viewing tests)
- Relay URLs: Primary, secondary, and invalid relay URLs for connection testing
- Sample content: Pre-defined note content for creation tests (short, long, with mentions, XSS payloads, etc.)
- App routes: All public and authenticated routes
- NIP-19 entities: Valid and malformed npub/nprofile/nevent for deep link testing
- Viewport sizes: Standard responsive breakpoints
- Timeouts: Consistent timeout values across tests
- Storage keys: Known localStorage key names
### `e2e/fixtures/mock-events.ts`

Factory functions for creating Nostr events with valid structure:

- `createMockProfileEvent()` — Kind 0 profile metadata
- `createMockNoteEvent()` — Kind 1 text note
- `createMockReplyEvent()` — Kind 1 reply
- `createMockContactListEvent()` — Kind 3 contact list
- `createMockDMEvent()` — Kind 4 encrypted DM
- `createMockReactionEvent()` — Kind 7 reaction
- `createMockRepostEvent()` — Kind 6 repost
- `createMockArticleEvent()` — Kind 30023 long-form article
- `createMockFileMetadataEvent()` — Kind 1063 file metadata
- `createMockLiveStreamEvent()` — Kind 30311 live stream
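A kind-1 factory along these lines illustrates the pattern. The field names follow the standard Nostr event structure; the real `createMockNoteEvent()` signature and its id/sig handling may differ:

```typescript
// Sketch: a kind-1 (text note) event factory.
// Mock events need valid structure, not valid signatures, so id/sig
// are left as placeholders.
interface NostrEvent {
  id: string;
  pubkey: string;
  created_at: number; // Unix seconds
  kind: number;
  tags: string[][];
  content: string;
  sig: string;
}

function createMockNote(content: string, pubkey = '00'.repeat(32)): NostrEvent {
  return {
    id: '00'.repeat(32), // placeholder; a real id is the sha256 of the serialized event
    pubkey,
    created_at: Math.floor(Date.now() / 1000),
    kind: 1,             // kind 1 = text note
    tags: [],
    content,
    sig: '00'.repeat(64), // placeholder signature
  };
}

const note = createMockNote('Hello, Nostr!');
console.log(note.kind, note.content);
```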
### `e2e/fixtures/test-isolation.ts`

Functions to prevent test pollution:

- `resetAppState(page)` — Full reset: localStorage, sessionStorage, IndexedDB, service workers
- `clearNostriaStorage(page)` — Clear only Nostria-specific keys
- `setupCleanEnvironment(page, options)` — Set up clean state with optional theme/storage config
- `verifyCleanState(page)` — Assert no residual auth or data remains
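The Nostria-specific cleanup boils down to filtering storage keys by prefix. A sketch of the selection logic; the `nostria-` prefix is an assumption based on the storage keys mentioned elsewhere in this guide (`nostria-account`, `nostria-accounts`, `nostria-theme`):

```typescript
// Sketch: pick out Nostria-specific localStorage keys for removal.
// Assumes app keys share the "nostria-" prefix (an assumption, not
// a documented contract).
function nostriaKeys(allKeys: string[]): string[] {
  return allKeys.filter(key => key.startsWith('nostria-'));
}

// Inside clearNostriaStorage(page), something like:
//   await page.evaluate(() => {
//     Object.keys(localStorage)
//       .filter(k => k.startsWith('nostria-'))
//       .forEach(k => localStorage.removeItem(k));
//   });
console.log(nostriaKeys(['nostria-account', 'nostria-theme', 'unrelated-key']));
```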
## Troubleshooting

**Tests time out.** Solution: ensure the dev server is running, or increase the timeout:

```typescript
test.setTimeout(120_000); // 2 minutes
```

**Element not found.** Solution:

- Check if the element exists in the DOM
- Verify the selector is correct
- Wait for the element: `await element.waitFor()`

**Tests are flaky.** Solution:

- Add proper waiting
- Use `waitForNostrReady()`
- Consider network conditions
- Add retries in the config

**Screenshots are blank or incomplete.** Solution: ensure the page is fully loaded before capture:

```typescript
await page.waitForLoadState('networkidle');
await captureScreenshot('name');
```

Still stuck?

- Check the Playwright documentation
- Review test artifacts in `test-results/`
- Run in debug mode: `npm run test:e2e:debug`
- Use UI mode for visual debugging: `npm run test:e2e:ui`
## How Authenticated Tests Work

Authenticated tests exercise features that require a logged-in Nostr identity: direct messages, note creation, notifications, relay management, settings, and more. The test infrastructure handles authentication by injecting a pre-built `NostrUser` object into localStorage before the app loads.

- `TestAuthHelper` (`e2e/helpers/auth.ts`) accepts an `nsec1` private key, decodes it to hex, derives the public key, and constructs a `NostrUser` object.
- `injectAuth(page)` calls `page.addInitScript()` to set `nostria-account` and `nostria-accounts` in `localStorage` before Angular bootstraps. The `isEncrypted: false` flag bypasses the PIN entry flow.
- `clearAuth(page)` removes auth keys and reloads the page, restoring the unauthenticated state.
- The `authenticatedPage` fixture (in `e2e/fixtures.ts`) orchestrates the full lifecycle: inject before the test, clear after the test.

Key selection:

- If `TEST_NSEC` is set in `.env`, that identity is used for all authenticated tests.
- If `TEST_NSEC` is not set, a throwaway keypair is generated per run (no relay history, empty profile).
- The `authenticatedPage` fixture is used by tagging tests with `@auth` and requesting the fixture parameter.
```typescript
import { test, expect } from '../../fixtures';

test('should show notifications page @auth', async ({ authenticatedPage: page }) => {
  await page.goto('/notifications');
  // page is already logged in — no need to inject auth manually
  await expect(page.locator('app-root')).toBeVisible();
});
```

1. Generate a test keypair (do NOT use your real account):

   ```bash
   node -e "
   const { generateSecretKey, getPublicKey } = require('nostr-tools/pure');
   const { nip19 } = require('nostr-tools');
   const { bytesToHex } = require('@noble/hashes/utils');
   const sk = generateSecretKey();
   console.log('nsec:', nip19.nsecEncode(sk));
   console.log('pubkey:', getPublicKey(sk));
   "
   ```

2. Create `.env` in the project root:

   ```bash
   TEST_NSEC=nsec1your_generated_key_here
   ```

3. Start the dev server (if not already running):

   ```bash
   npm run start
   ```

4. Run authenticated tests:

   ```bash
   # Run only tests tagged @auth
   npm run test:e2e:auth

   # Or run the full suite (public + auth)
   npm run test:e2e:full
   ```

5. Interpret results:

   - HTML report: `npm run test:e2e:report`
   - JSON summary: `test-results/test-summary.json`
   - Console logs: `test-results/logs/`
   - Screenshots: `test-results/screenshots/`
If you skip the `.env` setup, authenticated tests still run with an auto-generated throwaway identity. You'll see a console warning:

```
⚠ TEST_NSEC not set. Using auto-generated throwaway keypair.
```

Tests that check for profile data, following feeds, or DM history will see empty states, but UI rendering and navigation tests still work.
## Console Log Analysis

Every test automatically captures all browser console output (logs, warnings, errors, page errors, failed requests) via the `page` fixture in `e2e/fixtures.ts`.

```typescript
await saveConsoleLogs('my-test-name');
// Output: test-results/logs/my-test-name-2026-02-12T10-30-00.json
```

The `consoleAnalyzer` fixture categorizes logs into:
| Category | What It Captures |
|---|---|
| `errors` | `console.error`, `pageerror`, unhandled exceptions |
| `warnings` | `console.warn` messages |
| `nostrLogs` | Logs containing Nostr prefixes: `[AccountStateService]`, `[RelayService]`, `[SubscriptionCache]`, etc. |
| `angularLogs` | Angular-specific messages (`NG0`, `ExpressionChanged`) |
| `networkLogs` | Network failures (`net::`, `ERR_`) |
| `debugLogs` | General `console.log`/`console.debug` |
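The categorization can be pictured as pattern matching on each message. A sketch only; the exact rules live in `ConsoleAnalyzer`, and the patterns below are inferred from the table above:

```typescript
// Sketch: categorize a console message, mirroring the table above.
type LogCategory =
  | 'errors' | 'warnings' | 'nostrLogs'
  | 'angularLogs' | 'networkLogs' | 'debugLogs';

function categorize(type: string, text: string): LogCategory {
  if (type === 'error' || type === 'pageerror') return 'errors';
  if (type === 'warning') return 'warnings';
  if (/^\[(AccountStateService|RelayService|SubscriptionCache)\]/.test(text)) return 'nostrLogs';
  if (/NG0\d+|ExpressionChanged/.test(text)) return 'angularLogs';
  if (/net::|ERR_/.test(text)) return 'networkLogs';
  return 'debugLogs';
}

console.log(categorize('log', '[RelayService] connected to wss://relay.damus.io'));
console.log(categorize('error', 'NG0100: ExpressionChangedAfterItHasBeenCheckedError'));
```

Note the ordering: an `error`-typed message is always counted as an error, even when its text would also match the Angular or Nostr patterns.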
For standalone analysis outside fixtures:

```typescript
import { ConsoleAnalyzer } from '../../helpers/console-analyzer';

const analyzer = new ConsoleAnalyzer(collectedLogs);
const report = analyzer.generateReport();
// report.uniqueErrors, report.relayStats, report.topMessages, etc.
```

Assertion helpers:

```typescript
import { ConsoleAnalyzer } from '../../helpers/console-analyzer';

const analyzer = new ConsoleAnalyzer(logs);
analyzer.expectNoUnexpectedErrors(); // Fails on unexpected errors
analyzer.expectNoAngularErrors();    // Fails on Angular errors
analyzer.expectRelayConnections(2);  // Expects at least 2 relay connections
```

Console analysis reports are JSON files in `test-results/reports/`:
```json
{
  "totalLogs": 142,
  "categorySummary": {
    "errors": 2,
    "warnings": 15,
    "nostr": 48,
    "angular": 0,
    "network": 3,
    "debug": 74
  },
  "errors": [ ... ],
  "warnings": [ ... ]
}
```

## Performance Testing

The performance testing suite (`e2e/tests/performance/`) collects these metrics:
| Metric | Source | Good Threshold |
|---|---|---|
| LCP (Largest Contentful Paint) | PerformanceObserver | < 2.5s |
| FID (First Input Delay) | PerformanceObserver | < 100ms |
| CLS (Cumulative Layout Shift) | PerformanceObserver | < 0.1 |
| TTFB (Time to First Byte) | Navigation Timing API | < 800ms |
| FCP (First Contentful Paint) | PerformanceObserver | < 1.8s |
| DOM Content Loaded | Navigation Timing API | — |
| Load Complete | Navigation Timing API | — |
| JS Bundle Size | Resource Timing API | < 500KB per file |
| Total Bundle Size | Resource Timing API | — |
| Memory Usage | `performance.memory` (Chrome) | < 50MB growth |
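The "good" thresholds above follow the standard Web Vitals cut-offs, so a rating helper is straightforward. A sketch (`rate` is a hypothetical helper, not part of `MetricsCollector`):

```typescript
// Sketch: rate a measurement against the "good" thresholds in the table above.
const GOOD_THRESHOLDS: Record<string, number> = {
  lcp: 2500, // ms
  fid: 100,  // ms
  cls: 0.1,  // unitless
  ttfb: 800, // ms
  fcp: 1800, // ms
};

function rate(metric: string, value: number): 'good' | 'needs-improvement' {
  const limit = GOOD_THRESHOLDS[metric];
  if (limit === undefined) throw new Error(`unknown metric: ${metric}`);
  return value < limit ? 'good' : 'needs-improvement';
}

console.log(rate('lcp', 2100)); // under 2.5s
console.log(rate('cls', 0.15)); // over 0.1
```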
```bash
# Run performance/metrics tests only
npm run test:e2e:metrics

# Generate the full report (includes performance data)
npm run test:e2e:report:full
```

| File | Location | Content |
|---|---|---|
| Page load times | `test-results/metrics/page-load-*.json` | Navigation timing per route |
| Web Vitals | `test-results/metrics/web-vitals-*.json` | LCP, FID, CLS, FCP, TTFB |
| Bundle sizes | `test-results/metrics/bundle-size-*.json` | Per-resource sizes, total |
| Memory timeline | `test-results/metrics/memory-*.json` | Heap snapshots over time |
| Relay performance | `test-results/metrics/relay-perf-*.json` | Connection/latency times |
```typescript
test('measure page load @metrics', async ({ page, performanceMetrics }) => {
  await page.goto('/');
  await page.waitForLoadState('networkidle');

  // Save metrics to disk
  await performanceMetrics.save('home-page-load');

  // Access raw data
  console.log('LCP:', performanceMetrics.webVitals.lcp);
  console.log('CLS:', performanceMetrics.webVitals.cls);
});
```

```typescript
test('check for memory leaks @metrics', async ({ page, memoryMonitor }) => {
  await page.goto('/');
  await memoryMonitor.capture(); // Initial snapshot

  // Navigate through pages
  for (const route of routes) {
    await page.goto(route);
    await memoryMonitor.capture();
  }

  const delta = memoryMonitor.getDelta();
  if (delta) {
    expect(delta.potentialLeak).toBeFalsy();
  }
  await memoryMonitor.save('memory-navigation');
});
```

The report generator (`e2e/helpers/report-generator.ts`) compares current results against `full-report-previous.json` (if present) and highlights regressions:
```
Performance Regression Detected:
  Home page load: 2.1s → 3.4s (+62%)
  Bundle size increased: 1.2MB → 1.5MB (+25%)
```
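The comparison reduces to a percent-change computation per metric. A sketch; the real generator's field names and rules may differ, and the 10% cut-off here is an assumed example:

```typescript
// Sketch: flag regressions by percent change against the previous run.
// The 10% cut-off is an assumed example, not the generator's actual rule.
function percentChange(previous: number, current: number): number {
  return Math.round(((current - previous) / previous) * 100);
}

function isRegression(previous: number, current: number, maxIncreasePct = 10): boolean {
  return percentChange(previous, current) > maxIncreasePct;
}

// Matches the example output above: 2.1s -> 3.4s is +62%.
console.log(percentChange(2.1, 3.4), isRegression(2.1, 3.4));
```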
## Network & WebSocket Monitoring

The app connects to Nostr relays via WebSocket. The test infrastructure monitors these connections at two levels.

The `networkMonitor` fixture tracks HTTP requests and WebSocket connections via Playwright's event API:
```typescript
test('monitor network @network', async ({ page, networkMonitor }) => {
  await page.goto('/');
  await page.waitForTimeout(5000);

  console.log('Total requests:', networkMonitor.requests.length);
  console.log('WebSocket connections:', networkMonitor.webSockets.length);
  console.log('Failed requests:', networkMonitor.failedRequests.length);

  await networkMonitor.save('network-home');
});
```

The `WebSocketMonitor` helper uses the Chrome DevTools Protocol (CDP) for deep WebSocket frame inspection:

```typescript
import { WebSocketMonitor } from '../../helpers/websocket-monitor';

const monitor = new WebSocketMonitor(page);
await monitor.start();

// Navigate and wait for relay connections
await page.goto('/');
await page.waitForTimeout(5000);

const summary = monitor.getSummary();
// summary.connections — relay URLs, connection times, status
// summary.subscriptions — REQ/CLOSE pairs, orphaned subscriptions
// summary.messages — total sent/received, by relay
```

The WebSocket monitor categorizes Nostr protocol messages:
| Message | Direction | Description |
|---|---|---|
| `REQ` | Client → Relay | Subscription request with filters |
| `EVENT` | Relay → Client | Event delivery |
| `EOSE` | Relay → Client | End of stored events |
| `NOTICE` | Relay → Client | Relay notice/error |
| `CLOSE` | Client → Relay | Close subscription |
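Raw WebSocket frames in the Nostr protocol are JSON arrays whose first element is the message type, so classification is a small parse-and-dispatch. A sketch (`classifyFrame` is illustrative, not the monitor's actual API):

```typescript
// Sketch: classify a raw Nostr WebSocket frame by its first array
// element, per the protocol message table above.
type NostrMessageType = 'REQ' | 'EVENT' | 'EOSE' | 'NOTICE' | 'CLOSE' | 'UNKNOWN';

function classifyFrame(payload: string): NostrMessageType {
  try {
    const frame = JSON.parse(payload);
    const known: NostrMessageType[] = ['REQ', 'EVENT', 'EOSE', 'NOTICE', 'CLOSE'];
    if (Array.isArray(frame) && known.includes(frame[0])) {
      return frame[0];
    }
  } catch {
    // Non-JSON frames fall through to UNKNOWN.
  }
  return 'UNKNOWN';
}

console.log(classifyFrame('["REQ","sub1",{"kinds":[1],"limit":20}]'));
console.log(classifyFrame('["EOSE","sub1"]'));
```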
Network reports are saved to `test-results/network/`:

```json
{
  "summary": {
    "totalRequests": 45,
    "failedRequests": 2,
    "webSocketConnections": 5
  },
  "requests": [ ... ],
  "failedRequests": [ ... ],
  "webSockets": [
    {
      "url": "wss://relay.damus.io",
      "connectedAt": 1707744000000,
      "messagesSent": 12,
      "messagesReceived": 156
    }
  ]
}
```

## CI Workflows

Two workflows handle E2E testing in CI:
### PR Workflow

- Triggers: Pull requests and pushes to `main`
- Steps: Install Node 20, `npm ci`, install Chromium, start the dev server, run tests
- Caching: `node_modules` and Playwright browsers are cached between runs
- Secrets: `TEST_NSEC` is read from GitHub Actions secrets (optional)
- PR comments: Test results are posted as a comment on the PR
### Nightly Workflow

- Triggers: Nightly cron schedule
- Scope: Full suite including performance metrics, visual regression, and all test tags
- Artifacts: 90-day retention for performance trend data
- Reports: Full Markdown report generated after tests complete
To configure the CI secret:

1. Go to your repository's Settings > Secrets and variables > Actions
2. Add `TEST_NSEC` with a test-only `nsec1` key
3. The workflow reads it via `${{ secrets.TEST_NSEC }}`

If the secret is not configured, tests fall back to auto-generated keypairs.
When a PR triggers the E2E workflow, a comment is posted with:
- Total tests, passed/failed counts
- Link to the full report artifact
- Performance regression warnings (if any)
- List of failed test names
All test results are uploaded as GitHub Actions artifacts:
- `playwright-report` — HTML report
- `test-results` — JSON data, screenshots, console logs, metrics
## Test Tags

Every test should be tagged for filtering:

| Tag | When to Use |
|---|---|
| `@public` | Test doesn't require authentication |
| `@auth` | Test requires a logged-in account (use the `authenticatedPage` fixture) |
| `@smoke` | Critical path — include in fast CI checks |
| `@metrics` | Collects performance/metrics data |
| `@network` | Monitors network/WebSocket behavior |
| `@security` | Security-focused validation |
| `@a11y` | Accessibility checks |
| `@visual` | Visual regression screenshots |

Tags go in the `test.describe()` title:

```typescript
test.describe('My Feature @auth @smoke', () => { ... });
```

| Need | Fixture |
|---|---|
| Logged-in page | `authenticatedPage` |
| Console log capture | `saveConsoleLogs` (auto-available via `page`) |
| Performance data | `performanceMetrics` |
| Network tracking | `networkMonitor` |
| Log analysis | `consoleAnalyzer` |
| Memory monitoring | `memoryMonitor` |
| Screenshots | `captureScreenshot` |
| App ready wait | `waitForNostrReady` |
A template for new specs:

```typescript
import { test, expect } from '../../fixtures';
import { TIMEOUTS } from '../../fixtures/test-data';

async function waitForAppReady(page: import('@playwright/test').Page) {
  await page.waitForFunction(() => {
    const appRoot = document.querySelector('app-root');
    if (!appRoot) return false;
    return !!document.querySelector('mat-sidenav-content, .main-content, main');
  }, { timeout: TIMEOUTS.appReady });
  await page.waitForTimeout(TIMEOUTS.stabilize);
}

test.describe('Feature Name @public', () => {
  test('should do something', async ({ page, saveConsoleLogs }) => {
    await page.goto('/route');
    await waitForAppReady(page);

    // Test logic here

    await saveConsoleLogs('feature-test-name');
  });
});
```

New-test checklist:

- Test has appropriate tags (`@public`, `@auth`, `@metrics`, etc.)
@public,@auth,@metrics, etc.) - Test calls
saveConsoleLogs()at the end for debugging - Test uses
waitForAppReady()orwaitForNostrReady()before assertions - Test uses constants from
e2e/fixtures/test-data.ts(not hardcoded values) - Test is independent — doesn't depend on state from other tests
- Test handles empty/loading states gracefully (uses
.catch(() => false)for optional elements) - Authenticated tests use the
authenticatedPagefixture - Performance tests save metrics via
performanceMetrics.save()ormemoryMonitor.save() - No real nsec keys or sensitive data in test files
The app currently has no `data-testid` attributes. Use these selectors in priority order:

1. Angular Material selectors: `mat-card`, `mat-button`, `mat-sidenav`
2. Angular component selectors: `app-event`, `app-note`
3. CSS classes: `.sidenav`, `.content-textarea`
4. Text content: `page.getByText('Create')`, `page.locator('button:has-text("Login")')`
5. Semantic roles: `page.getByRole('button', { name: 'Submit' })`
## Resilience Testing

Test specs in `e2e/tests/resilience/` verify the app handles adverse conditions:
| Spec | What It Tests |
|---|---|
| `offline.spec.ts` | Network disconnect/reconnect, cached content persistence |
| `slow-network.spec.ts` | Throttled 3G via CDP, loading states, timeout handling |
| `relay-failures.spec.ts` | All relays blocked, graceful degradation, no infinite retries |
| `large-data.spec.ts` | Long text, deep scroll, virtual scroll stress, emoji content |
| `concurrent-tabs.spec.ts` | Multiple tabs, localStorage sync, race conditions |
## Security Testing

Test specs in `e2e/tests/security/` validate security properties:
| Spec | What It Tests |
|---|---|
| `key-exposure.spec.ts` | Private key not in DOM, console, network, URLs, visible text, cookies |
| `xss-vectors.spec.ts` | XSS payloads in inputs, sanitization of rendered content, Angular injection |
| `csp-compliance.spec.ts` | Security headers, CSP violations, inline scripts/handlers, eval usage |
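The key-exposure checks hinge on spotting bech32 `nsec1...` strings in page content. A sketch of the detection rule; the regex approximates the bech32 character set (no `1`, `b`, `i`, `o` in the data part) rather than performing full checksum validation:

```typescript
// Sketch: detect nsec1 private keys in arbitrary text (DOM dumps,
// console logs, network bodies). Approximate, not a full bech32 decoder.
const NSEC_PATTERN = /nsec1[02-9ac-hj-np-z]{50,}/;

function containsPrivateKey(text: string): boolean {
  return NSEC_PATTERN.test(text.toLowerCase());
}

const leaked = 'debug: key=nsec1' + 'q'.repeat(58);
console.log(containsPrivateKey(leaked));                       // a leak
console.log(containsPrivateKey('npub1xyz... profile loaded')); // public key only
```

Public `npub1...` keys intentionally do not match, since exposing them is normal.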
A pre-commit hook script (`scripts/check-nsec.sh`) scans staged files for `nsec1` private keys and blocks commits if found. Install it:

```bash
cp scripts/check-nsec.sh .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit
```

This document should be kept up to date as the testing infrastructure evolves.