
SauceDemo E-commerce Testing Project

QA Manual Testing Project - Comprehensive Test Coverage

Tester: Artur Dmytriyev
Date: January 23, 2026
Application: SauceDemo
Project Type: Functional, Usability, and Exploratory Testing


Project Overview

This project demonstrates end-to-end manual testing of an e-commerce application, covering user authentication, product browsing, shopping cart functionality, and checkout processes. The testing approach combines scripted test cases with exploratory testing to both validate expected behavior and uncover edge cases.

Key Focus:

  • Risk-based test prioritization
  • Multiple test design techniques applied
  • Real bug identification and professional reporting
  • Strategic thinking about what to automate vs. test manually

Application Under Test

SauceDemo is a demo e-commerce platform designed for testing practice. It includes intentional bugs and different user types with varying behaviors, making it ideal for comprehensive QA testing.

Test Environment:

  • URL: https://www.saucedemo.com
  • Browser: Chrome 131
  • OS: Windows 11
  • Test Users: 6 different user types (standard, locked_out, problem, performance_glitch, error, visual)

Test Approach

Test Strategy

Focused on validating the complete user journey from login through purchase completion, with emphasis on:

  1. Critical Path Testing - Ensuring core e-commerce flows work correctly
  2. Risk-Based Prioritization - Testing high-impact areas first (authentication, payment, cart)
  3. User Type Validation - Testing different user behaviors and edge cases
  4. Error Handling - Verifying proper validation and error messaging

Test Design Techniques Applied

  • Equivalence Partitioning (EP) - Grouping similar inputs for efficient coverage
  • Boundary Value Analysis (BVA) - Testing edge cases for numeric and text inputs
  • Decision Tables - Testing multiple condition combinations
  • State Transition Testing - Validating cart persistence and session management
  • Error Guessing - Leveraging experience to find likely failure points
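
To make equivalence partitioning concrete, the sketch below models SauceDemo's login validation as a small pure function and exercises one representative input per partition. This is a hypothetical model written for illustration, not the application's real code, and the error strings are assumptions based on typical SauceDemo behavior.

```typescript
// Hypothetical model of SauceDemo's login validation, used to illustrate
// equivalence partitioning: each partition needs only one representative case.
type LoginResult = { ok: boolean; error?: string };

function checkLogin(username: string, password: string): LoginResult {
  // Partition 1: missing username (covers "", whitespace-only, etc.)
  if (username.trim() === "") return { ok: false, error: "Username is required" };
  // Partition 2: missing password
  if (password.trim() === "") return { ok: false, error: "Password is required" };
  // Partition 3: locked-out account
  if (username === "locked_out_user")
    return { ok: false, error: "Sorry, this user has been locked out." };
  // Partition 4: any other credential pair is treated as valid in this sketch
  return { ok: true };
}

// One representative input per partition is enough for coverage.
const partitions = [
  { user: "", pass: "secret_sauce", expectOk: false },
  { user: "standard_user", pass: "", expectOk: false },
  { user: "locked_out_user", pass: "secret_sauce", expectOk: false },
  { user: "standard_user", pass: "secret_sauce", expectOk: true },
];

for (const p of partitions) {
  console.assert(checkLogin(p.user, p.pass).ok === p.expectOk);
}
```

The payoff of the technique is visible in the table: four rows cover the whole input space, instead of enumerating every possible username/password combination.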

Test Coverage

Total Test Cases: 35 organized across 5 modules

Module Distribution:

  • User Authentication: 6 test cases
  • Product Inventory & Sorting: 8 test cases
  • Shopping Cart: 9 test cases
  • Checkout Process: 9 test cases
  • Navigation & UI: 3 test cases

Execution Results:

  • Passed: 32 test cases (91.4%)
  • Failed: 3 test cases (8.6%)
  • Test Coverage: 85%+ of critical functionality

Key Findings

Bugs Identified

Found and documented 3 real bugs in the application:

BUG-001: Product Sorting Z-A Broken

  • Severity: Medium | Priority: P2
  • Impact: Users cannot sort products in reverse alphabetical order
  • Module: Product Inventory

BUG-002: Product Images Incorrect for Problem User

  • Severity: High | Priority: P1
  • Impact: All products show same wrong image, making visual identification impossible
  • Module: Product Inventory

BUG-003: Tax Calculation Rounding Inconsistency

  • Severity: Medium | Priority: P2
  • Impact: Potential financial calculation precision issues
  • Module: Checkout Process
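
The kind of rounding inconsistency behind BUG-003 is easy to reproduce in code. The sketch below uses an assumed 10% tax rate and invented prices purely for illustration: rounding tax per item and rounding once on the subtotal can disagree by a cent, which is why money math is usually done in integer cents with a single rounding step.

```typescript
// Illustrative only: assumed 10% rate and made-up prices, not SauceDemo's data.
const TAX_RATE = 0.10;
const prices = [2.07, 2.07];

// Approach 1: round tax per item, then sum the rounded values.
const perItemTax = prices
  .map((p) => Math.round(p * TAX_RATE * 100) / 100)
  .reduce((a, b) => a + b, 0);

// Approach 2: sum first, round once on the subtotal.
const subtotal = prices.reduce((a, b) => a + b, 0);
const totalTax = Math.round(subtotal * TAX_RATE * 100) / 100;

console.log(perItemTax); // 0.42
console.log(totalTax);   // 0.41 — off by a cent from the per-item sum

// Safer pattern: keep money in integer cents and round exactly once.
const cents = prices.map((p) => Math.round(p * 100));
const taxCents = Math.round(cents.reduce((a, b) => a + b, 0) * TAX_RATE);
console.log(taxCents / 100); // 0.41
```

A tester who understands this failure mode can design targeted cases (prices whose per-item tax lands near a half-cent) instead of hoping to stumble on the discrepancy.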

Exploratory Testing Insights

Conducted a 45-minute exploratory testing session that revealed:

  • No critical security issues observed during UI testing
  • Good error handling across the application
  • Cart state persistence works correctly
  • Multiple user types demonstrate different UI/performance issues

QA Thinking Process

Risk Assessment

Identified and prioritized testing based on business risks:

  1. Payment/Checkout Failures - Direct revenue impact → Heavy testing focus
  2. Cart Data Loss - User frustration and abandoned purchases → Session persistence testing
  3. Security Vulnerabilities - Data breach risk → Authentication and input validation testing
  4. Search/Sort Accuracy - Poor UX and lost sales → Multiple sorting scenarios

Test Prioritization

Prioritized P1 test cases covering:

  • Login functionality (blocks all other features)
  • Cart operations (core e-commerce function)
  • Checkout process (revenue critical)
  • Product display accuracy (buying decisions depend on this)

Lower priority given to:

  • Navigation features (important but not blocking)
  • UI/visual issues (usability impact but not functional)

Trade-offs and Limitations

What Was Tested:

  • All critical user flows end-to-end
  • Different user type behaviors
  • Error handling and validation
  • State management and persistence

What Was Not Tested:

  • Backend API directly (focused on UI/UX)
  • Performance/load testing (beyond observing performance_glitch_user)
  • Mobile responsive design
  • Cross-browser compatibility (tested Chrome only)
  • Accessibility features (keyboard navigation, screen readers)

Scope Decisions: I focused on high-value testing over exhaustive coverage. In a production environment, I would expand the scope to include cross-browser testing, mobile responsive validation, and accessibility compliance.


QA + Automation Strategy

What I Would Automate

Analysis of automation candidates for this application:

High Priority for Automation:

  1. Login scenarios (TC-AUTH-01 through TC-AUTH-06)

    • Why: Repetitive, stable, blocks all testing
    • ROI: High - run before every test suite
    • Tool: Selenium/Playwright with Page Object Model
  2. Cart operations (TC-CART-01 through TC-CART-04)

    • Why: Core functionality, regression risk
    • ROI: High - frequent changes in e-commerce
    • Tool: Playwright for reliable state handling
  3. Checkout happy path (TC-CHKOUT-02, TC-CHKOUT-08)

    • Why: Critical revenue path, needs constant validation
    • ROI: Very high - catches breaking changes immediately
    • Tool: End-to-end with Cypress or Playwright
  4. Product sorting (TC-INV-02 through TC-INV-05)

    • Why: Simple, deterministic, easy to verify
    • ROI: Medium-high - good regression coverage
    • Tool: Quick UI checks with Selenium
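
The Page Object Model mentioned for the login scenarios can be sketched as follows. To keep the example self-contained and runnable without a browser, `PageLike` is a hand-rolled stand-in for Playwright's `Page` exposing only `fill` and `click`; the `#user-name`/`#password`/`#login-button` selectors are assumptions about SauceDemo's markup.

```typescript
// Minimal stand-in for Playwright's Page so the sketch runs without a browser.
interface PageLike {
  fill(selector: string, value: string): Promise<void>;
  click(selector: string): Promise<void>;
}

// Page Object: encapsulates locators and actions for the login screen,
// so tests read as intent ("log in as X") rather than raw selectors.
class LoginPage {
  constructor(private page: PageLike) {}

  async login(username: string, password: string): Promise<void> {
    await this.page.fill("#user-name", username);
    await this.page.fill("#password", password);
    await this.page.click("#login-button");
  }
}

// A fake page that records interactions, standing in for a real browser.
async function demo(): Promise<string[]> {
  const log: string[] = [];
  const fakePage: PageLike = {
    async fill(sel, val) { log.push(`fill ${sel}=${val}`); },
    async click(sel) { log.push(`click ${sel}`); },
  };
  await new LoginPage(fakePage).login("standard_user", "secret_sauce");
  return log;
}

demo().then((log) => console.log(log.join("\n")));
```

With a real Playwright `Page` injected instead of the fake, the same `LoginPage` class serves every data-driven login test, and a selector change touches one file instead of six test cases.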

What I Would Keep Manual

Manual Testing Candidates:

  1. Exploratory testing sessions

    • Why: Requires human intuition and creativity
    • Value: Finds unexpected issues automation misses
  2. Visual regression on first release

    • Why: Initial visual validation needs human judgment
    • Value: Catches CSS/layout issues automation might miss
    • Note: Could automate after baseline established
  3. Usability evaluation

    • Why: User experience requires human perspective
    • Value: Identifies UX issues beyond functional correctness
  4. New feature initial testing

    • Why: Exploratory approach better for unknown behavior
    • Value: Faster feedback before stabilizing automation

Automation ROI Analysis

Best ROI:

  • Smoke tests (login, add to cart, checkout)
  • Regression suites for stable features
  • Data-driven tests for different user types

Lower ROI:

  • One-time or rarely-changed features
  • Complex visual validations
  • Tests that require frequent maintenance
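
One way to make the ROI comparison concrete is a break-even calculation: automation pays off once the cumulative manual effort saved exceeds the cost of building and maintaining the script. The numbers below are invented for illustration.

```typescript
// Break-even point for automating a test, with invented example numbers.
// Automation wins when: buildCost + runs * maintPerRun < runs * manualPerRun
function breakEvenRuns(
  buildCost: number,    // one-time hours to build the automated test
  maintPerRun: number,  // hours of maintenance amortized per run
  manualPerRun: number  // hours to execute the test manually
): number {
  // Number of runs before automation becomes cheaper than manual execution.
  return Math.ceil(buildCost / (manualPerRun - maintPerRun));
}

// Example: 4h to build, 0.1h maintenance per run, 0.5h to run manually.
console.log(breakEvenRuns(4, 0.1, 0.5)); // 10 runs to break even
```

A smoke test run on every commit clears ten runs in days, while a rarely-exercised one-off may never reach break-even, which is the arithmetic behind the High/Lower ROI split above.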

Framework Recommendation: For this application, I would use Playwright with TypeScript because:

  • Better reliability than Selenium for modern SPAs
  • Built-in waiting and retry logic
  • Excellent debugging tools
  • Strong community support

Project Structure

P1-Ecommerce-Testing/
├── docs/
│   ├── Test-Strategy.md          # Overall testing approach and risks
│   └── Test-Plan.md              # Detailed test planning and timeline
├── test-cases/
│   └── Test-Cases.md             # All 35 test cases with execution results
├── bug-reports/
│   ├── Bug-Reports.md            # 3 bugs found with detailed analysis
│   └── screenshots/              # Bug evidence screenshots
│       ├── BUG-001-sorting-za.png
│       ├── BUG-002-problem-user-images.png
│       └── BUG-003-tax-calculation.png
├── exploratory/
│   └── Exploratory-Session.md    # 45-min exploratory testing findings
└── README.md                     # This file

Deliverables

Test Documentation:

  • Test Strategy Document
  • Test Plan with timeline and responsibilities
  • 35 detailed test cases with execution results
  • 3 professional bug reports with screenshots
  • Exploratory testing session notes

Test Execution:

  • 35 test cases executed
  • 32 passed (91.4%)
  • 3 failed with documented bugs
  • 85%+ test coverage achieved

Skills Demonstrated

Manual Testing:

  • Test strategy and planning
  • Risk-based test prioritization
  • Multiple test design techniques (EP, BVA, Decision Tables, State Transition)
  • Professional bug reporting with clear reproduction steps
  • Exploratory testing with structured approach

QA Engineering Mindset:

  • Strategic thinking about automation vs. manual testing
  • Understanding of ROI in test automation
  • Recognition of trade-offs and limitations
  • Business impact analysis for bugs
  • Clear communication of technical findings

Tools & Technologies:

  • SauceDemo testing platform
  • Chrome DevTools for debugging
  • Markdown for documentation
  • Git for version control

What I Would Do Differently in Production

With More Resources:

  1. Cross-browser testing (Firefox, Safari, Edge)
  2. Mobile responsive testing
  3. Accessibility compliance validation (WCAG)
  4. API testing for backend validation
  5. Performance baseline measurements
  6. Automated regression suite for stable features
  7. Visual regression testing framework
  8. Integration with CI/CD pipeline

With Real Stakeholders:

  1. Requirements clarification sessions
  2. Risk assessment meetings with product team
  3. Bug triage and prioritization discussions
  4. Test coverage review with stakeholders
  5. Release sign-off process

Key Takeaways

  1. Testing is about risk management - Prioritize high-impact areas over exhaustive coverage
  2. Bugs tell a story - The 3 bugs found reveal both functional issues and UX problems
  3. Automation is strategic - Not everything should be automated; choose based on ROI
  4. Documentation matters - Clear bug reports and test cases enable team collaboration
  5. Exploratory testing adds value - Scripted tests alone miss edge cases and usability issues

Author

Artur Dmytriyev
QA Automation & Manual Engineer
Vancouver, BC, Canada

GitHub: github.com/arturdmt-alt
LinkedIn: linkedin.com/in/arturdmytriyev


Note: This project demonstrates manual QA testing skills as part of a comprehensive QA Engineering portfolio.
