
AGENT.md

This file provides guidance to the agent when working with code in this repository.

IMPORTANT: At the end of every change, update ./CLAUDE.md with anything you wished you'd known at the start.

CRITICAL!

  • ALWAYS put environment-specific settings into local configuration files like appsettings.Development.json.
  • CRITICAL: Check code-review-issues.md at the start of the session. Address ALL issues before new implementation.
  • ALWAYS read @learnings.md to avoid repeating past mistakes.
  • ALWAYS build, analyze, and test before committing and reporting success.
  • ALWAYS rebuild the entire solution with dotnet build before running quality checks or tests on the sources.
  • CRITICAL: NO defensive/speculative/overengineered code. Every line of code must be reachable and testable via public API. No code "just in case". If a performance optimization cannot be tested via public API, don't add it (no premature optimization). This includes: unnecessary null checks, defensive try-catch blocks, caching mechanisms without proven need, redundant validation, or any code paths that cannot be triggered through normal usage.
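A minimal sketch of this rule, using a hypothetical FormatLabel helper (not part of this repo):

```csharp
// BAD: speculative guards no caller can trigger through the public API
static string FormatLabelDefensive(string name)
{
    if (name == null) return string.Empty;   // unreachable: nullable reference types forbid null here
    try
    {
        return name.Trim().ToUpperInvariant();
    }
    catch (Exception)                        // Trim/ToUpperInvariant cannot throw in this path
    {
        return string.Empty;
    }
}

// GOOD: every line is reachable and testable via the public API
static string FormatLabel(string name) =>
    name.Trim().ToUpperInvariant();
```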

Common Development Commands

Build

dotnet build                         # Build the solution
dotnet build SharpAssert/            # Build only the main library
dotnet build SharpAssert.Tests/      # Build only the test project

Run Tests

PREFER

dotnet test                                      # Run all tests
dotnet test -v <level>                           # Run all tests with verbosity q[uiet], m[inimal], n[ormal], d[etailed], and diag[nostic]

Clean

dotnet clean         # Clean build artifacts

Project Overview

SharpAssert is a Pytest-style assertion library for .NET.

CRITICAL: NAMING IS EVERYTHING

  1. Variable names should communicate ROLE, not type or implementation.
  2. Short, clear names optimized for readability.
  3. Intent-revealing function names that tell the story, named for their purpose from the invoker's perspective (WHAT not HOW).
  4. Common roles: result (for return values), item or each (in loops), count.
  5. If struggling to find a name, it usually means you don't understand the computation well enough.
  6. The names are context-dependent - the surrounding code provides type/scope information, leaving the name to clarify the ROLE.

Code Principles

  • Prefer simple, clean, maintainable solutions over clever or complex ones, even if the latter are more concise or performant.
  • Readability and maintainability are primary concerns.
  • Self-documenting names (no comments).
  • Small functions/methods.
  • Follow the Single Responsibility Principle in classes and methods.
  • CRITICAL When a change seems needlessly complex due to design issues, refactor first. "Make the change easy, then make the easy change."
  • Try to avoid rewriting; if unsure, ask permission first.
  • ALWAYS put tests in a dedicated test project (e.g., MyProject.Tests) that mirrors the source project's namespace structure.
  • Do not use a class (which implies behavior) for something that's purely a data structure.
    • PREFER record types for immutable data transfer objects (DTOs).
    • records are cleaner than classes with static factory methods for simple data aggregation.
  • When extracting utility methods due to Feature Envy smell:
    • Place them in a separate static helper class only if they are reused across multiple other classes.
    • Otherwise, implement them as private methods within the consuming class.
  • Method Organization: Within a class, arrange members for top-down reading. Public methods should appear first, followed by the private implementation details they call.
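As a hedged illustration of the record-over-class guidance (UserProfile is a hypothetical DTO, not a type in this repo):

```csharp
// Instead of a class with a static factory for plain data...
public class UserProfileClass
{
    public string Name { get; }
    public int Age { get; }
    UserProfileClass(string name, int age) { Name = name; Age = age; }
    public static UserProfileClass Create(string name, int age) => new(name, age);
}

// ...a record states the same intent in one line,
// with value equality and deconstruction for free
public record UserProfile(string Name, int Age);
```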

Test Quality Principles

  • We are intolerant of slow tests and tests with timeouts. If a test hangs, it should be deeply investigated and fixed.
  • NEVER use Arrange/Act/Assert comments in tests. Instead, separate test sections with empty lines.
  • AVOID testing multiple independent behaviors in one test. Split complex tests into focused, single-behavior tests.
  • PREFER using message arguments in assertions to communicate what the assertion is testing instead of comments.
  • CRITICAL: Cross-platform compatibility - Use CultureInfo.InvariantCulture for DateTime formatting in error messages to ensure consistent output across macOS/Linux/Windows (e.g., dt.ToString("M/d/yyyy", CultureInfo.InvariantCulture)).
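A minimal sketch of the invariant-culture rule:

```csharp
using System;
using System.Globalization;

var dt = new DateTime(2024, 3, 5);

// BAD: the '/' in a custom format is replaced by the current culture's
// date separator, so output varies between runners (e.g., "3.5.2024")
var unstable = dt.ToString("M/d/yyyy");

// GOOD: identical output on macOS, Linux, and Windows
var stable = dt.ToString("M/d/yyyy", CultureInfo.InvariantCulture);

Console.WriteLine(stable); // "3/5/2024"
```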

TDD Cycle

We use TDD to write code.

  1. Write a single failing test → run it.
  2. Write the minimal code to pass → run the test.
  3. Refactor → verify tests still pass.
  4. Repeat until all tests pass.
  5. Commit after each green state.
  6. Tests and quality checks (build + analysis) must pass with zero warnings/errors.
  7. When writing a failing test TDD style, the code should be in a compilable state. This means creating the necessary files, classes, and methods the test invokes, but with a placeholder implementation (e.g., return 0;) relevant to the context. The main idea is to "test the test."
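The placeholder step can be sketched as follows, assuming the NUnit + FluentAssertions stack used in this repo (Calculator is hypothetical):

```csharp
// Step 1: the failing test, written first
[Test]
public void Should_sum_two_numbers()
{
    Calculator.Sum(2, 3).Should().Be(5);
}

// Step 2: just enough code to compile; the placeholder makes the test
// fail for the right reason ("test the test") before the real implementation
public static class Calculator
{
    public static int Sum(int a, int b) => 0; // placeholder for the red phase
}
```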

Core Rules

  • No unsolicited docs/READMEs.
  • Ask > assume.
  • Stay on task (log unrelated issues).
  • Avoid large, hard-to-review change sets; explain your intentions.

Refactoring Principles

CORE APPROACH: "Minimal, surgical, trust the existing systems" - This is the fundamental approach for all code changes. Avoid over-engineering, unnecessary abstractions, and complex error handling that masks real issues. Let exceptions bubble up naturally and change only what's broken.

Prefer Meaningful Refactoring:

  • Use destructuring - C# features like tuple deconstruction can improve clarity.
  • Consider data structures - Sometimes the real solution is introducing a proper record or struct rather than more methods.
  • Address root causes - Look for code smells like repeated calls, long parameter lists, or unclear responsibilities.
  • Balance classes and interfaces - Prefer concrete classes for simple data structures or internal logic, but use interfaces for public APIs and services to enable dependency injection and mock testing.

Method Refactoring Philosophy:

  • Explaining methods > explaining variables - Hide irrelevant complexity behind intention-revealing method names rather than cramming logic inline.
  • True single responsibility - Each method should have ONE clear job.
  • Separate "what" from "how" - Coordinate operations in one place; delegate implementation details to focused private helper methods.
  • Optimize for readability first - Write code that reveals intent clearly; optimize for performance later if needed.
  • Example pattern: Instead of var lineCount = fileNode.GetLocation().GetLineSpan().EndLinePosition.Line - fileNode.GetLocation().GetLineSpan().StartLinePosition.Line + 1;, use GetLineCount(fileNode) to hide the complexity.
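The same pattern with minimal stand-in types (these records only mimic the shape of Roslyn's FileLinePositionSpan for illustration; they are not the real API):

```csharp
using System;

// The inline arithmetic from the example above, named for what it computes
static int GetLineCount(LineSpan span) =>
    span.EndLinePosition.Line - span.StartLinePosition.Line + 1;

Console.WriteLine(GetLineCount(new LineSpan(new LinePosition(3), new LinePosition(7))));

// Minimal stand-ins for Roslyn's line-span types, for illustration only
record LinePosition(int Line);
record LineSpan(LinePosition StartLinePosition, LinePosition EndLinePosition);
```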

Testing Guidelines

Test what matters, skip what doesn't

Tests use NUnit 3.14 and should be named *Fixture.cs. All tests are located in a dedicated test project (e.g., SharpAssert.Tests) which mirrors the source project's namespaces.

Keep test methods focused and short. Use these strategies to manage test complexity:

Custom Assertions with FluentAssertions:

// Instead of verbose inline assertions
Assert.NotNull(result);
Assert.Equal("value1", result.Field1);

// Use FluentAssertions for readable, chainable assertions
result.Should().NotBeNull();
result.Field1.Should().Be("value1");

Object Testing with BeEquivalentTo:

// Instead of multiple field assertions
Assert.Equal("value1", result.Field1);
Assert.Equal("value2", result.Field2);

// Use BeEquivalentTo for deep object comparison
var expected = new { Field1 = "value1", Field2 = "value2" };
result.Should().BeEquivalentTo(expected);

External Test Data Files:

  • Commit well-named input files as "Embedded Resources" in the test project.
  • Use meaningful names that describe the test scenario (e.g., ValidUserProfile.json).
  • Load files in a test fixture or setup method when needed across multiple test cases.
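One possible loading helper, sketched under the assumption that resources follow the default {RootNamespace}.{Folder}.{FileName} manifest naming (ReadTestData is hypothetical):

```csharp
using System;
using System.IO;
using System.Reflection;

// Hypothetical helper for a fixture's setup method
static string ReadTestData(string resourceName)
{
    var assembly = Assembly.GetExecutingAssembly();
    using var stream = assembly.GetManifestResourceStream(resourceName)
        ?? throw new InvalidOperationException($"Missing embedded resource: {resourceName}");
    using var reader = new StreamReader(stream);
    return reader.ReadToEnd();
}

// Usage in a setup method:
// var json = ReadTestData("SharpAssert.Tests.TestData.ValidUserProfile.json");
```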

Test Organization:

  • Use test fixtures with [TestFixture] attribute to create/share objects needed in multiple test cases.
  • Use [TestCase] and [TestCaseSource] when multiple test cases have similar structures but different inputs.
  • Split large test files into multiple focused files when refactoring can't reduce complexity.
  • Group tests meaningfully by the functionality being tested, often in a class named after the class under test.
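A hedged sketch combining these attributes (IntegerParsingFixture is a made-up example, not a fixture in this repo):

```csharp
[TestFixture]
public class IntegerParsingFixture
{
    // One parameterized test replaces three near-identical copies
    [TestCase("1", 1)]
    [TestCase("42", 42)]
    [TestCase("-7", -7)]
    public void Should_parse_integers(string input, int expected)
    {
        int.Parse(input).Should().Be(expected);
    }
}
```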

Tests use FluentAssertions alongside NUnit. Fixture files are organized per feature (e.g., BinaryComparisonFixture.cs, LinqOperationsFixture.cs).

Testing Infrastructure

We use a decoupled testing strategy that separates logic verification from formatting verification.

Structure:

[TestFixture]
public class MyFeatureFixture : TestBase
{
    [TestFixture]
    class LogicTests
    {
        // Tests that verify the correctness of the EvaluationResult structure
        [Test]
        public void Should_detect_failure()
        {
            var expected = BinaryComparison(..., Comparison(...));
            AssertFails(() => Assert(x == y), expected);
        }
    }

    [TestFixture]
    class FormattingTests
    {
        // Tests that verify the rendered string output of the result
        [Test]
        public void Should_render_failure()
        {
            var result = BinaryComparison(...);
            AssertRendersExactly(result, "Left: 1", "Right: 2");
        }
    }
    
    // DSL Helpers
    static ComparisonResult Comparison(...) => ...
}

DSL Helpers (TestBase):

  • AssertFails(action, expectedResult): Verifies structural equality of EvaluationResult.
  • AssertPasses(action): Verifies assertion succeeds.
  • AssertRendersExactly(result, lines...): Verifies exact string rendering (Text property).
  • Value(expr, val), BinaryComparison(...), Operand(...): Factories for expected results.

Quality Issue Resolution Strategy

  • CRITICAL: When addressing quality issues:
    • Commit after every successful quality issue elimination before moving to fix the next one.
    • Make sure to run all tests before each commit.
# Language: BoGuidelineSyntax (BGS)
# Keywords:
#   RULE: Declares a guideline.
#   DO: A positive action to take.
#   AVOID: An anti-pattern to prevent.
#   PREFER: A choice between two options. `PREFER A over B`.
#   CONSIDER: A suggestion that is context-dependent.
#   WHEN: A condition for the rule.
#   BECAUSE: The rationale behind the rule.
#   =>: "implies" or "should be implemented as".
#
# Symbols:
#   {...}: A set of related concepts or reasons.
#   @/..: Absolute path import.
#   ../..: Relative path import.
#   () =>: Anonymous function or component.
#---------------------------------------------------------------------

DESIGN

RULE: DO SEGREGATE_METHODS into {orchestrator | implementor}.

  • orchestrator: Calls other methods, contains no logic.
  • implementor: Contains logic, has no sub-orchestration.
  • AVOID: Mixing implementation_logic with high-level_coordination.
  • BECAUSE: {clarity, single_responsibility, testability}.
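A minimal sketch of the split, with hypothetical helpers:

```csharp
using System;
using System.Linq;

// orchestrator: only coordinates, contains no logic of its own
static int AverageWordLength(string text)
{
    var words = SplitWords(text);
    return ComputeAverageLength(words);
}

// implementors: each contains one piece of logic, no sub-orchestration
static string[] SplitWords(string text) =>
    text.Split(' ', StringSplitOptions.RemoveEmptyEntries);

static int ComputeAverageLength(string[] words) =>
    words.Length == 0 ? 0 : (int)words.Average(word => word.Length);

Console.WriteLine(AverageWordLength("ab cd"));
```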

RULE: DO ISOLATE_LOGIC into {pure | stateful}.

  • pure: Predictable, no side-effects (static methods are great for this).
  • stateful: Has side-effects, interacts with external world (e.g., fileService.SaveBaselineAsync(baseline)).
  • BECAUSE: {predictability, easy_testing_of_pure_logic}.

RULE: DO USE the Decorator_Pattern or middleware-style delegates to wrap_methods_with_cross_cutting_concerns.

  • Examples: Logging decorators, retry policies with Polly, ASP.NET Core middleware.
  • BECAUSE: {DRY, separation_of_concerns}.

RULE: DO USE early_returns (guard_clauses) to handle_invalid_states at the start of a method.

  • AVOID: Deeply nested if statements for validation.
  • BECAUSE: {reduces_nesting, improves_readability}.
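A hedged sketch with a hypothetical Order type:

```csharp
using System;
using System.Collections.Generic;

// AVOID: nesting pushes the happy path to the right
static decimal DiscountNested(Order? order)
{
    if (order != null)
    {
        if (order.Items.Count > 0)
            return order.Total * 0.1m;
    }
    return 0m;
}

// DO: guard clauses dispose of invalid states up front
static decimal Discount(Order? order)
{
    if (order is null) return 0m;
    if (order.Items.Count == 0) return 0m;

    return order.Total * 0.1m;
}

Console.WriteLine(Discount(new Order(new[] { "widget" }, 100m)));

record Order(IReadOnlyList<string> Items, decimal Total);   // hypothetical
```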

EXTERNAL INTERACTIONS & STATE

RULE: DO USE classes PRIMARILY_AS_WRAPPERS for external_dependencies (IO, APIs, DBs) and to hold state.

  • AVOID: Mixing complex business_logic directly in these wrapper classes; PREFER delegating to purer internal services/methods.
  • BECAUSE: {isolates_side_effects, promotes_functional_core, aligns_with_clean_architecture}.

RULE: DO USE a dedicated_logger_service (ILogger<T>). AVOID Console.WriteLine for application_logging.

  • BECAUSE: {structured_logs, central_control, destination_flexibility}.

RULE: AVOID over-defensive_programming internally.

  • TRUST your type_system (especially nullable reference types) and boundary_validation.
  • AVOID: {excessive_null_checks, redundant_try_catch_blocks for expected errors}.

CODING STYLE & CONVENTIONS

RULE: CRITICAL AVOID comments; DO WRITE self_documenting_code.

  • INSTEAD_OF: Commenting // check if user is valid, DO create a method bool IsValid(User user).
  • NEVER add comments that simply reiterate what is obvious from the code.
  • BECAUSE: {comments_rot, encourages_clearer_code}.
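A minimal sketch (User and the validity rule are hypothetical):

```csharp
using System;

// INSTEAD_OF:
//   // check if user is valid
//   if (user.Name.Length > 0 && user.Age >= 18) { ... }

// DO: name the condition so the comment becomes unnecessary
static bool IsValid(User user) =>
    user.Name.Length > 0 && user.Age >= 18;

Console.WriteLine(IsValid(new User("Ann", 30)));

record User(string Name, int Age);   // hypothetical
```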

RULE: DO NOT add XML doc comments for internal APIs.

  • ONLY document public APIs that are part of the user-facing surface.
  • NEVER add XML doc comments for test project files/classes as it's not a public API.
  • BECAUSE: {reduces_noise, focuses_documentation_effort_on_user_value}.

RULE: DO NOT add brackets for simple one-line if statements.

// BAD
if (result)
{
    return string.Empty;
}

// GOOD
if (result)
    return string.Empty;
  • BECAUSE: {reduces_visual_noise, improves_readability}.

RULE: DO NOT add default private modifier.

  • The private modifier is implicit for class members.
  • BECAUSE: {reduces_verbosity, cleaner_code}.

RULE: PREFER using var instead of specifying exact type for variable declarations.

// BAD
bool left = false;
bool right = true;
string message = "hello";

// GOOD  
var left = false;
var right = true;
var message = "hello";
  • BECAUSE: {reduces_verbosity, improves_readability, lets_compiler_infer_obvious_types}.

RULE: PREFER strongly-typed enums or the "smart enum" pattern OVER raw strings or integers.

  • BECAUSE: {compile-time_type_safety, improved_readability, IDE_intellisense}.
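A hedged sketch of both options (OrderStatus and ShippingTier are hypothetical):

```csharp
// PREFER: a plain enum over raw strings/ints the compiler cannot check...
public enum OrderStatus { Pending, Shipped, Delivered }

// ...or a "smart enum" when each value carries data or behavior
public sealed class ShippingTier
{
    public static readonly ShippingTier Standard = new("Standard", 0m);
    public static readonly ShippingTier Express  = new("Express", 9.99m);

    public string Name { get; }
    public decimal Surcharge { get; }

    ShippingTier(string name, decimal surcharge) => (Name, Surcharge) = (name, surcharge);
}
```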

RULE: DO put const values in the same file as the consumer, unless it's a shared constant used across the application.

  • WHEN: Constants are shared, create a dedicated public static class Constants.
  • BECAUSE: {locality_of_reference, easier_to_find, reduces_dependency_clutter}.

RULE: DO USE dedicated configuration_objects for method_signatures WHEN params.count >= 3.

  • MyMethod(MyMethodOptions options) OVER MyMethod(string param1, int param2, bool param3).
  • BECAUSE: {readability, extensibility, no_param_order_memorization}.
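A minimal sketch with a hypothetical ExportOptions record:

```csharp
// INSTEAD_OF: Export(string path, int retries, bool verbose)
public record ExportOptions(string Path, int Retries = 3, bool Verbose = false);

public static class Exporter
{
    public static string Export(ExportOptions options) =>
        $"exporting {options.Path} (retries={options.Retries}, verbose={options.Verbose})";
}

// Call sites name every argument, and adding options later never breaks callers:
// Exporter.Export(new ExportOptions("out.json", Verbose: true));
```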

NAMING CONVENTIONS

RULE: ADHERE_TO_CSHARP_NAMING {

  class, record, struct, enum, delegate: PascalCase
  interface: IPascalCase
  method, property, event, public field: PascalCase
  method_parameter, local_variable: camelCase
  private_or_internal_field: camelCase
  constant: PascalCase
  namespace: PascalCase.PascalCase
  project, filename: PascalCase.cs
  boolean_property/variable: is/has/can/should => IsLoading, HasError, CanExecute
  async_method: Suffix with Async => GetDataAsync()
}

Learnings Protocol

CRITICAL: Both the main agent and any subagents (spawned via a Task tool) must maintain the learnings.md knowledge base.

What to Log:

  • Gotchas: Unexpected behaviors, edge cases, tricky bugs (e.g., ConfigureAwait(false) issues).
  • Judgement calls: Architecture decisions, trade-offs made (e.g., choosing NUnit over xUnit, System.Threading.Channels over TPL Dataflow).
  • File discoveries: Important files found, solution/project structure insights.
  • Problems solved: Non-obvious solutions, workarounds (e.g., a clever LINQ query).
  • Plan deviations: Things not anticipated by the original plan.
  • Interesting findings: Performance insights, useful NuGet packages, C# patterns discovered.

Rules:

  • The learnings.md file is organized by topic (e.g., "Project & Solution Structure", "MSBuild Source Rewriter").
  • Add new learnings as bullet points to the most relevant existing topic section.
  • If no relevant topic exists, create a new ## Topic Name section.
  • Keep entries extremely concise - one line per insight.
  • Only log unique, non-obvious information.
  • Focus on future value for similar tasks.
  • Subagents MUST update this file before reporting success.

CRITICAL MEMORIES

  • NEVER use underscores for private/internal members
  • NEVER create coverage reports on top level. Always specify subfolder under ./TestResults dir

BUG HUNTING METHODOLOGY

Systematic Bug Hunt Algorithm:

  1. Fix the bug: Add a failing test first, then fix the bug
  2. Think of other places where a similar bug could occur: Add failing tests and fix them
  3. Check test coverage: Run tests with coverage and verify branch coverage whenever the failing test revealed a gap in the suite

Test Coverage Gaps to Watch:

  • Only testing negative scenarios (failures/errors) without positive scenarios (success), or the other way around
  • Missing tests for the "happy path" where operations should succeed

Documentation Standards

Required Elements for All Public APIs

  • <summary> - Clear, concise description of purpose
  • <param> - For all parameters with validation rules
  • <returns> - For all methods with return values
  • <exception> - For all thrown exceptions
  • <remarks> - Performance, threading, and usage guidance
  • <example> - Practical code examples

Formatting Guidelines

  • Use <see cref=""> for cross-references
  • Use <paramref name=""> for parameter references
  • Use <typeparamref name=""> for generic type references
  • Use <code> blocks for inline code
  • Use <para> for paragraph breaks in remarks
  • Use <list type="bullet|number"> for structured information

Content Guidelines

  • CRITICAL: Focus on practical usage scenarios

  • Include performance implications

  • IMPORTANT: Document threading and concurrency behavior

  • Provide realistic, copy-pasteable examples

  • CRITICAL: Explain the "why" not just the "what"

  • CRITICAL: Always write XML doc comments from the user's perspective, not implementation details

  • Cover common pitfalls and best practices

  • Expression compilation can emit invalid IL on some runtimes; fall back to Compile(preferInterpretation: true) when evaluating expressions to avoid CI-only InvalidProgramException.

  • Byref-like expression trees (e.g., Span<T>, ReadOnlySpan<T>) cannot be boxed; convert them to arrays before evaluation to avoid interpreter type load errors.

  • Reflection helpers should tolerate missing overloads (e.g., MemoryExtensions.ToArray) to prevent type initializer failures on runtimes where a method shape differs.

  • Compilation can fail with TypeLoadException/ArgumentException in addition to InvalidProgramException; catch broadly and fall back to interpreted compilation or a safe null-returning delegate.

  • Prefer Compile(preferInterpretation: true) for expression value extraction to avoid invalid IL, and only drop to null when both interpreted compilation and span normalization fail.

  • If evaluation still fails, return a sentinel (e.g., EvaluationUnavailable) and render <unavailable: reason> so diagnostics stay honest instead of showing nulls or bogus counts.

  • NUnit.Assert.ThrowsAsync<T>() returns the exception instance (not a Task), so async tests should await a real async assertion (e.g., action.Should().ThrowAsync<T>()) to avoid CS1998 warnings.

  • When normalizing byref-like values, only treat Span<T>/ReadOnlySpan<T> as span-convertible; other ref structs should yield EvaluationUnavailable without attempting conversions.

  • Wrap compiled value evaluators so invocation-time exceptions become EvaluationUnavailable instead of breaking failure analysis.

  • Avoid a partial “manual evaluator” interpreter; prefer interpreted expression compilation with a clear EvaluationUnavailable fallback for unsupported nodes/runtimes.

  • Reuse one lambda compilation fallback policy (e.g., for LINQ predicate analysis) so formatters don’t each invent their own exception handling.

  • The rewriter’s semantic model is created from a single syntax tree (current file only), so overloading Sharp.Assert(...) can make Roslyn binding ambiguous and cause asserts on members from other files to stop rewriting; prefer a different method name (e.g., AssertThat) for non-bool assertion entry points.

  • A single Assert(AssertValue ...) entry point avoids overload ambiguity, but C# forbids implicit conversions to/from interfaces (so conversion must be from an Expectation base class) and only allows one user-defined conversion in a chain (so types with implicit operator bool may need a direct conversion to AssertValue unless they inherit Expectation).

  • Prefer Throws<T> results to behave purely as Expectation (no implicit operator bool), otherwise !Throws(...) and &&/|| drift into the bool-analysis path and require special casing.

  • For invocation ExprNode generation, include the receiver (e.g., as ExprNode.Left) in addition to Arguments, so composed expectations like left.And(right) can label operands correctly.

  • Supporting Expectation composition via operators (&/|) requires generating ExprNode.Left/Right for &/| expressions so operand contexts render as left and right (not the whole expression).

  • For async asserts, await inside Assert(...) is supported for awaited booleans and awaited expectations, but it is not rewritten into an expression tree; keep examples single-line if you need stable CallerArgumentExpression text in tests.

  • Assert(await someBoolTask) must still be rewritten to SharpInternal.AssertAsync, but Assert(await ThrowsAsync<T>(...)) should be left unrewritten; discriminating by Task<bool>/ValueTask<bool> avoids mis-rewriting awaited expectations into Task<bool>.

  • For expectation ergonomics, prefer extension-method constructors (4.IsEven()) and suffix custom types with Expectation to keep call sites readable.

  • For custom expectation APIs, support both styles: using static factories for unary expectations (Assert(IsEven(4))) and extension methods for fluent forms (Assert(actual.IsEquivalentTo(expected))).

  • Records cannot inherit from non-record base classes, so expectation-like result types that must inherit Expectation need to be class types (or you must make Expectation a record and convert all derived types to records).
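Several of the compilation-fallback learnings above converge on one policy, which could be sketched roughly as follows (EvaluateSafely is hypothetical; SharpAssert's real implementation and its EvaluationUnavailable sentinel may differ):

```csharp
using System;
using System.Linq.Expressions;

// Hypothetical shared fallback: prefer interpreted compilation to dodge
// invalid-IL issues, and surface any failure as a sentinel instead of throwing.
static object? EvaluateSafely(Expression expr)
{
    try
    {
        var compiled = Expression.Lambda(expr).Compile(preferInterpretation: true);
        return compiled.DynamicInvoke();
    }
    catch (Exception) // InvalidProgramException, TypeLoadException, ArgumentException, ...
    {
        return null;  // stand-in for an EvaluationUnavailable sentinel
    }
}

Console.WriteLine(EvaluateSafely(Expression.Constant(42)));
```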