This guide explains how to run and understand the comprehensive unit tests for the ODAI API, focusing on the service modules including `auth_service.py` and `chat_service.py`.
```bash
# Install all testing dependencies
pip install -r test_requirements.txt

# Or use the test runner to install dependencies automatically
python run_tests.py --install-deps
```

```bash
# Using the test runner script (recommended)
python run_tests.py

# Or directly with pytest
pytest tests/
```

```bash
# Run auth_service tests only
python run_tests.py --file auth_service
pytest tests/test_auth_service.py

# Run chat_service tests only
python run_tests.py --file chat_service
pytest tests/test_chat_service.py

# Run connection_manager tests only
python run_tests.py --file connection_manager
pytest tests/test_connection_manager.py

# Run websocket_handlers tests only
python run_tests.py --file websocket_handlers
pytest tests/test_websocket_handlers.py

# Run all service tests
pytest tests/test_auth_service.py tests/test_chat_service.py tests/test_connection_manager.py tests/test_websocket_handlers.py

# Run all Firebase model tests
pytest tests/test_firebase_*_model.py

# Run all tests (services + Firebase models)
pytest tests/
```

The test suite includes comprehensive coverage for multiple service modules:
29 test methods covering the `AuthService` class:

- **Classes Tested:**
  - `AuthenticationError` - Custom exception class
  - `AuthService` - Main authentication service class
- **Methods Tested:**
  - `__init__()` - Service initialization
  - `validate_user_token()` - Token validation logic
  - `authenticate_websocket()` - WebSocket authentication
  - `authenticate_http_request()` - HTTP request authentication
  - `get_user_integrations()` - User integration settings
- **Coverage:** 84% code coverage of `auth_service.py`
35 test methods covering the `ChatService` class:

- **Classes Tested:**
  - `ChatService` - Main chat management service class
- **Methods Tested:**
  - `__init__()` - Service initialization
  - `get_or_create_chat()` - Chat creation and retrieval
  - `update_chat_messages()` - Message management
  - `add_chat_responses()` - Response handling
  - `update_chat_token_usage()` - Token usage tracking
  - `record_token_usage()` - User token recording
  - `record_unhandled_request()` - Unhandled request logging
  - `add_email_to_waitlist()` - Waitlist management
  - `track_user_prompt()` - User prompt tracking
  - `track_agent_call()` - Agent call tracking
  - `track_tool_call()` - Tool call tracking
  - `track_user_response()` - Response tracking
- **Coverage:** 97% code coverage of `chat_service.py`
38 test methods covering the `ConnectionManager` class:

- **Classes Tested:**
  - `ConnectionManager` - WebSocket connection management service
- **Methods Tested:**
  - `__init__()` - Service initialization
  - `connect()` - WebSocket connection acceptance
  - `disconnect()` - Connection removal and cleanup
  - `send_personal_message()` - Direct message sending
  - `send_json_message()` - JSON message sending
  - `broadcast()` - Message broadcasting to all connections
  - `broadcast_json()` - JSON broadcasting to all connections
  - `connection_count` property - Active connection counting
- **Coverage:** 100% code coverage of `connection_manager.py`
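A test for `broadcast()` can be sketched with `AsyncMock` websockets. Note this is a hedged illustration: `ConnectionManagerStub` below is a minimal stand-in for the real `ConnectionManager`, not the project's actual implementation.

```python
import asyncio
from unittest.mock import AsyncMock

# Minimal stand-in for the real ConnectionManager, used only to
# illustrate the AsyncMock-based broadcast test pattern.
class ConnectionManagerStub:
    def __init__(self):
        self.active_connections = []

    async def broadcast(self, message: str):
        # Send the message to every active (mocked) websocket
        for ws in self.active_connections:
            await ws.send_text(message)

manager = ConnectionManagerStub()
manager.active_connections = [AsyncMock(), AsyncMock()]
asyncio.run(manager.broadcast("hello"))

# Each mocked websocket received the broadcast exactly once
for ws in manager.active_connections:
    ws.send_text.assert_awaited_once_with("hello")
```

Because WebSocket send/receive methods are coroutines, `AsyncMock` (rather than `MagicMock`) is required so the mocked calls can be awaited.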
35 test methods covering the `WebSocketHandler` class:

- **Classes Tested:**
  - `WebSocketHandler` - Main WebSocket chat interaction handler
- **Methods Tested:**
  - `__init__()` - Handler initialization with dependencies
  - `handle_websocket_connection()` - Complete WebSocket lifecycle management
  - `_handle_chat_loop()` - Message processing loop
  - `_process_chat_message()` - Individual message processing
  - `_handle_streaming_events()` - Stream event processing from agents
  - `_process_stream_event()` - Individual stream event handling
  - `_handle_run_item_event()` - Run item event processing
  - `_finalize_chat_interaction()` - Chat interaction finalization
  - `_extract_previous_suggested_prompts()` - Prompt extraction
  - `_json_serial()` - JSON serialization utility
- **Coverage:** 96% code coverage of `handlers.py`
Comprehensive test coverage for all Firebase models:
- `tests/test_firebase_chat_model.py` - Chat model tests (create_chat, update_messages, etc.)
- `tests/test_firebase_user_model.py` - User model tests (user creation, integrations, etc.)
- `tests/test_firebase_google_token_model.py` - GoogleToken model tests (OAuth flow, encryption, etc.)
- `tests/test_firebase_plaid_token_model.py` - PlaidToken model tests (banking tokens, encryption, etc.)
- `tests/test_firebase_token_usage_model.py` - TokenUsage model tests (usage tracking, cost calculation, etc.)
- `tests/test_firebase_waitlist_model.py` - Waitlist model tests (email addition, timestamp creation, etc.)
- `tests/test_firebase_evernote_token_model.py` - EvernoteToken model tests (Evernote integration, etc.)
- `tests/test_firebase_easypost_tracker_model.py` - EasyPostTracker model tests (shipment tracking, etc.)
- `tests/test_firebase_integration_model.py` - Integration model tests (user integrations, etc.)
- `tests/test_firebase_unhandled_request_model.py` - UnhandledRequest model tests (error logging, etc.)
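The shape of these model tests can be sketched with a mocked Firestore client. The collection and document names below are illustrative assumptions, not the project's actual schema:

```python
from unittest.mock import MagicMock

# Mocked Firestore client; no real Firebase connection is made
db = MagicMock()

# Code under test would perform a write along these lines:
db.collection("chats").document("chat-1").set({"messages": []})

# The test then asserts on the recorded call chain:
db.collection.assert_called_once_with("chats")
db.collection.return_value.document.assert_called_once_with("chat-1")
db.collection.return_value.document.return_value.set.assert_called_once_with(
    {"messages": []}
)
```

`MagicMock` records every attribute access and call, so the full `collection(...).document(...).set(...)` chain can be verified without touching Firestore.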
```bash
# Run all Firebase model tests
python run_tests.py

# Run specific Firebase model tests
python run_tests.py --file firebase_chat_model
python run_tests.py --file firebase_user_model
python run_tests.py --file firebase_google_token_model
python run_tests.py --file firebase_plaid_token_model
python run_tests.py --file firebase_token_usage_model
python run_tests.py --file firebase_waitlist_model
python run_tests.py --file firebase_evernote_token_model
python run_tests.py --file firebase_easypost_tracker_model
python run_tests.py --file firebase_integration_model
python run_tests.py --file firebase_unhandled_request_model

# Run with coverage for Firebase models
python run_tests.py --file firebase_chat_model --coverage --verbose
```

- Comprehensive Mocking: All Firebase dependencies (Firestore, authentication, OpenAI) are mocked
- Edge Case Testing: Tests cover empty data, invalid inputs, and error conditions
- Integration Testing: End-to-end model workflows with mocked Firebase operations
- Error Handling: Firestore exceptions and validation error testing
- Data Validation: Proper data structure and type validation
- Coverage Optimization: Tests designed to achieve high code coverage
Comprehensive test coverage for all utility modules:
- `tests/test_utils_segment.py` - Segment analytics tracking tests (user events, integrations, voice calls)
- `tests/test_utils_google.py` - Google credentials fetching tests (OAuth integration, token handling)
- `tests/test_utils_secrets.py` - Google Secret Manager tests (secret access, version handling, error cases)
- `tests/test_utils_context.py` - ChatContext dataclass tests (context management, integration utilities)
- `tests/test_utils_keys.py` - Google Cloud KMS tests (encryption, decryption, key management, CRC32C)
- `tests/test_utils_display_response.py` - OpenAI response filtering tests (JSON parsing, decision logic)
- `tests/test_utils_responses.py` - Response classes tests (ToolResponse, account responses, serialization)
- `tests/test_utils_cloudflare.py` - Cloudflare API tests (site rendering, markdown conversion, error handling)
```bash
# Run all utils tests
python run_tests.py

# Run specific utils tests
python run_tests.py --file utils_segment
python run_tests.py --file utils_google
python run_tests.py --file utils_secrets
python run_tests.py --file utils_context
python run_tests.py --file utils_keys
python run_tests.py --file utils_display_response
python run_tests.py --file utils_responses
python run_tests.py --file utils_cloudflare

# Run with coverage for utils modules
python run_tests.py --file utils_segment --coverage --verbose
```

- External API Mocking: All external services (OpenAI, Google Cloud, Cloudflare, Segment) are comprehensively mocked
- Authentication Testing: OAuth flows, token management, and credential handling covered
- Cryptographic Operations: KMS encryption/decryption with integrity verification testing
- Data Structure Testing: Response classes, context objects, and serialization thoroughly tested
- Error Handling: Network errors, API failures, authentication issues, and edge cases covered
- Configuration Testing: Settings management and environment-specific behavior verified
- Integration Workflows: End-to-end utility workflows with proper dependency injection
- Total Tests: 700+ tests (29 auth + 35 chat + 38 connection_manager + 35 websocket_handlers + 250+ Firebase models + 300+ utils tests)
- Test Files: 22 comprehensive test files (4 service + 10 Firebase model + 8 utils test files)
- Tested Modules Coverage:
  - `services/auth_service.py`: 84% coverage
  - `services/chat_service.py`: 97% coverage
  - `websocket/connection_manager.py`: 100% coverage
  - `websocket/handlers.py`: 96% coverage
  - `firebase/models/*`: High coverage across all Firebase models
  - `connectors/utils/*`: High coverage across all utility modules
- Overall Coverage: 90%+ across all tested modules
- Test individual methods in isolation
- Mock all external dependencies
- Cover both success and failure scenarios
- Test complete authentication flows
- Verify interaction between methods
- End-to-end authentication scenarios
- Empty/null token handling
- Invalid user scenarios
- Production vs development environment differences
- Partial integration data
- Exception handling
- Custom exception behavior
- HTTP exception responses
- WebSocket connection closure
- Logging verification
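The empty/null-token case above, for instance, is typically written with `pytest.raises`. The `validate_user_token` function below is a simplified stand-in shown only to illustrate the pattern, not the real `AuthService` method:

```python
import pytest

# Hypothetical minimal stand-in for AuthService.validate_user_token
class AuthenticationError(Exception):
    pass

def validate_user_token(token):
    if not token:
        raise AuthenticationError("No token provided")
    return {"uid": "user-123"}

def test_empty_token_rejected():
    # pytest.raises verifies both the exception type and its message
    with pytest.raises(AuthenticationError, match="No token provided"):
        validate_user_token(None)

test_empty_token_rejected()
```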
```bash
# Run all tests with minimal output
pytest tests/

# Run all service tests only
pytest tests/test_auth_service.py tests/test_chat_service.py tests/test_connection_manager.py tests/test_websocket_handlers.py

# Run all Firebase model tests only
pytest tests/test_firebase_*_model.py

# Run with verbose output
pytest tests/ -v

# Run with coverage report for all modules
pytest tests/ --cov=services --cov=websocket --cov=firebase.models --cov-report=html

# Run with coverage report for services only
pytest tests/test_auth_service.py tests/test_chat_service.py tests/test_connection_manager.py tests/test_websocket_handlers.py --cov=services --cov=websocket --cov-report=html

# Run with coverage report for Firebase models only
pytest tests/test_firebase_*_model.py --cov=firebase.models --cov-report=html

# Run specific test class
pytest tests/test_auth_service.py::TestAuthService
pytest tests/test_chat_service.py::TestChatServiceInit
pytest tests/test_connection_manager.py::TestConnectionManagerConnect
pytest tests/test_websocket_handlers.py::TestWebSocketHandlerInit

# Run specific test method
pytest tests/test_auth_service.py::TestValidateUserToken::test_validate_user_token_success
pytest tests/test_chat_service.py::TestGetOrCreateChat::test_create_new_chat
pytest tests/test_connection_manager.py::TestBroadcast::test_broadcast_to_multiple_connections
pytest tests/test_websocket_handlers.py::TestProcessStreamEvent::test_process_text_delta_event
```

The `run_tests.py` script provides convenient options:
```bash
# Install dependencies and run auth_service tests with coverage
python run_tests.py --install-deps --file auth_service --coverage --verbose

# Run all tests quietly
python run_tests.py

# Run with coverage report
python run_tests.py --coverage

# Verbose output
python run_tests.py --verbose
```

| Option | Short | Description |
|---|---|---|
| `--file auth_service` | `-f auth_service` | Run only auth_service tests |
| `--coverage` | `-c` | Generate coverage report |
| `--verbose` | `-v` | Detailed test output |
| `--install-deps` | | Install test dependencies first |
```
tests/
├── __init__.py
└── test_auth_service.py
    ├── TestAuthenticationError      # Custom exception tests
    ├── TestAuthService              # Base fixtures and setup
    ├── TestAuthServiceInit          # Initialization tests
    ├── TestValidateUserToken        # Token validation tests
    ├── TestAuthenticateWebSocket    # WebSocket auth tests
    ├── TestAuthenticateHttpRequest  # HTTP auth tests
    ├── TestGetUserIntegrations      # User integration tests
    ├── TestAuthServiceIntegration   # Integration tests
    └── TestAuthServiceEdgeCases     # Edge cases and logging
```
Key fixtures used across tests:
- `mock_validate_google_token` - Mocks the Google token validation function
- `auth_service_production` - AuthService instance for production environment
- `auth_service_development` - AuthService instance for development environment
- `mock_user` - Mock user object with integrations
- `mock_user_no_integrations` - Mock user without integrations
- `mock_websocket` - Mock WebSocket connection
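A couple of these fixtures might be sketched as follows; the attribute names (`uid`, `integrations`) are illustrative assumptions, not the project's real fixture bodies:

```python
import pytest
from unittest.mock import AsyncMock, MagicMock

def make_mock_user(integrations=None):
    """Helper behind the user fixtures; builds a configurable mock user."""
    user = MagicMock()
    user.uid = "user-123"
    user.integrations = integrations if integrations is not None else {}
    return user

@pytest.fixture
def mock_user():
    # User with an integration enabled
    return make_mock_user({"google": True})

@pytest.fixture
def mock_user_no_integrations():
    return make_mock_user({})

@pytest.fixture
def mock_websocket():
    # Async send/receive methods require AsyncMock, not MagicMock
    return AsyncMock()
```

Centralizing construction in a helper like `make_mock_user` keeps fixture variants (with/without integrations) consistent and cheap to add.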
Tests use comprehensive mocking to isolate the AuthService class:
- External dependencies: Google token validation mocked
- WebSocket connections: Mocked with AsyncMock for async methods
- User objects: Mock objects with configurable attributes
- Logging: Patched to verify log messages
- Import system: Mocked to avoid import dependencies
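The logging-verification technique, for instance, patches the module logger and asserts on the call; the logger name and message text here are hypothetical:

```python
import logging
from unittest.mock import patch

logger = logging.getLogger("auth_service")

def reject_missing_token():
    # Stand-in for code under test that logs a warning on auth failure
    logger.warning("Authentication failed: no token")

with patch.object(logger, "warning") as mock_warning:
    reject_missing_token()
    # The patched method records the exact log call for verification
    mock_warning.assert_called_once_with("Authentication failed: no token")
```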
```
tests/test_auth_service.py::TestAuthServiceInit::test_init_production_mode PASSED
tests/test_auth_service.py::TestValidateUserToken::test_validate_user_token_success PASSED
...
================================== 25 passed in 0.12s ==================================
```
```
tests/test_auth_service.py::TestValidateUserToken::test_validate_user_token_no_token FAILED

================================== FAILURES ===================================
_____________ TestValidateUserToken.test_validate_user_token_no_token _____________

    def test_validate_user_token_no_token(self, auth_service_development):
        """Test validation with no token provided."""
        with pytest.raises(AuthenticationError, match="No token provided"):
>           auth_service_development.validate_user_token(None)
E           AssertionError: AuthenticationError not raised

tests/test_auth_service.py:123: AssertionError
```
When running with `--coverage`, you'll get:

```
Name                       Stmts   Miss  Cover
----------------------------------------------
services/auth_service.py      45      0   100%
----------------------------------------------
TOTAL                         45      0   100%
```

HTML coverage report: `htmlcov/index.html`
The project includes a `pytest.ini` configuration file with:
- Async test support via `pytest-asyncio`
- Verbose output formatting
- Warning filters
- Test discovery patterns
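A `pytest.ini` consistent with those features might look like this; the exact option values are assumptions, not the project's actual file:

```ini
[pytest]
testpaths = tests
python_files = test_*.py
; pytest-asyncio: treat async def tests as asyncio tests automatically
asyncio_mode = auto
addopts = -v
filterwarnings =
    ignore::DeprecationWarning
```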
Core testing dependencies in `test_requirements.txt`:
- `pytest` - Testing framework
- `pytest-asyncio` - Async test support
- `pytest-mock` - Enhanced mocking
- `pytest-cov` - Coverage reporting
- `httpx` - HTTP client for FastAPI testing
- `fastapi[all]` - FastAPI testing utilities
- Use descriptive test names that explain what is being tested
- Follow AAA pattern - Arrange, Act, Assert
- Mock external dependencies to isolate units under test
- Test both success and failure paths
- Use fixtures to reduce code duplication
- Test edge cases and boundary conditions
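An AAA-structured test with a mocked dependency might look like this sketch; `record_token_usage` and its arguments are hypothetical stand-ins, not the real service method:

```python
from unittest.mock import MagicMock

def record_token_usage(store, uid, tokens):
    # Hypothetical stand-in for the unit under test
    store.save(uid, {"tokens": tokens})
    return tokens

def test_record_token_usage_writes_to_store():
    # Arrange: mock the data store the service writes to
    store = MagicMock()
    # Act: call the unit under test
    used = record_token_usage(store, "user-123", 42)
    # Assert: check the return value and the interaction with the mock
    assert used == 42
    store.save.assert_called_once_with("user-123", {"tokens": 42})

test_record_token_usage_writes_to_store()
```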
- Run tests frequently during development
- Use coverage reports to identify untested code
- Run specific tests when working on particular features
- Check both unit and integration tests
- Verify tests pass before committing code
- Use `-v` flag for detailed output
- Use `-s` flag to see print statements
- Run individual tests to isolate issues
- Check mock assertions when tests fail unexpectedly
- Verify fixture setup if initialization fails
To integrate these tests into CI/CD:
```yaml
# Example GitHub Actions workflow
name: Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - run: pip install -r test_requirements.txt
      - run: pytest tests/ --cov --cov-report=xml
      - uses: codecov/codecov-action@v3
```

- Import Errors: Ensure `PYTHONPATH` includes the project root
- Missing Dependencies: Run `pip install -r test_requirements.txt`
- Async Test Failures: Verify `pytest-asyncio` is installed
- Mock Assertion Errors: Check that mocks are configured correctly
- Check test output for specific error messages
- Verify all dependencies are installed
- Ensure you're running tests from the project root directory
- Review the test file for expected behavior
This comprehensive test suite ensures the auth_service.py module is robust, reliable, and maintainable.