This file provides guidance to the agent when working with code in this repository.
IMPORTANT: At the end of every change, update ./CLAUDE.md with anything you wished you'd known at the start.
- ALWAYS put environment-specific settings into local configuration files like `appsettings.Development.json`.
- CRITICAL: Check `code-review-issues.md` at the start of the session. Address ALL issues before new implementation.
- ALWAYS read @learnings.md so you do not fall into the same issues again.
- ALWAYS build, analyze, and test before committing and reporting success.
- ALWAYS rebuild the entire solution with `dotnet build` before running quality checks or tests on the sources.
- CRITICAL: NO defensive/speculative/overengineered code. Every line of code must be reachable and testable via the public API. No code "just in case". If a performance optimization cannot be tested via the public API, don't add it (no premature optimization). This includes: unnecessary null checks, defensive try-catch blocks, caching mechanisms without proven need, redundant validation, or any code paths that cannot be triggered through normal usage.
```bash
dotnet build                    # Build the solution
dotnet build SharpAssert/       # Build only the main library
dotnet build SharpAssert.Tests/ # Build only the test project
```
PREFER:
```bash
dotnet test            # Run all tests
dotnet test -v <level> # Run all tests with verbosity q[uiet], m[inimal], n[ormal], d[etailed], diag[nostic]
dotnet clean           # Clean build artifacts
```
SharpAssert is a Pytest-style assertions library for .NET.
- Variable names should communicate ROLE, not type or implementation.
- Short, clear names optimized for readability.
- Intent-revealing function names that tell the story, named for their purpose from the invoker's perspective (WHAT not HOW).
- Common roles: `result` (for return values), `item` or `each` (in loops), `count`.
- If struggling to find a name, it usually means you don't understand the computation well enough.
- Names are context-dependent - the surrounding code provides type/scope information, leaving the name to clarify the ROLE.
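As an illustrative sketch of role-based naming (the method and names below are hypothetical, not taken from the SharpAssert codebase):

```csharp
// Names communicate ROLE, not type or implementation
static int CountFailures(IEnumerable<bool> checks)
{
    var count = 0;               // role: accumulator (not "intCounter")
    foreach (var item in checks) // role: current element (not "boolValue")
        if (!item)
            count++;
    return count;                // role: the value handed back to the caller
}
```

The surrounding context (a counting method over checks) supplies the type information, so each name is free to state only its role.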
- Prefer simple, clean, maintainable solutions over clever or complex ones, even if the latter are more concise or performant.
- Readability and maintainability are primary concerns.
- Self-documenting names (no comments).
- Small functions/methods.
- Follow the Single Responsibility Principle in classes and methods.
- CRITICAL: When a change seems needlessly complex due to design issues, refactor first. "Make the change easy, then make the easy change."
- Try to avoid rewriting; if unsure, ask permission first.
- ALWAYS put tests in a dedicated test project (e.g., `MyProject.Tests`) that mirrors the source project's namespace structure.
- Do not use a `class` (which implies behavior) for something that's purely a data structure.
- PREFER `record` types for immutable data transfer objects (DTOs). Records are cleaner than classes with static factory methods for simple data aggregation.
- When extracting utility methods due to a Feature Envy smell:
  - Place them in a separate `static` helper class only if they are reused across multiple other classes.
  - Otherwise, implement them as `private` methods within the consuming class.
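The record-over-class guidance above can be sketched as follows (the type names are hypothetical):

```csharp
// AVOID: a class pretending to be a data structure
class UserProfileClass
{
    public string Name { get; set; } = "";
    public int Age { get; set; }
}

// PREFER: an immutable record for pure data aggregation
record UserProfile(string Name, int Age);
```

The record gets value equality, a readable `ToString`, and non-destructive mutation via `with` for free, which is why it beats a class with static factory methods for simple data.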
- Method Organization: Within a class, arrange members for top-down reading. Public methods should appear first, followed by the private implementation details they call.
- We are intolerant of slow tests and tests with timeouts. If a test hangs, it should be deeply investigated and fixed.
- NEVER use Arrange/Act/Assert comments in tests. Instead, separate test sections with empty lines.
- AVOID testing multiple independent behaviors in one test. Split complex tests into focused, single-behavior tests.
- PREFER using message arguments in assertions to communicate what the assertion is testing instead of comments.
- CRITICAL: Cross-platform compatibility - Use `CultureInfo.InvariantCulture` for DateTime formatting in error messages to ensure consistent output across macOS/Linux/Windows (e.g., `dt.ToString("M/d/yyyy", CultureInfo.InvariantCulture)`).
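A minimal sketch of the invariant-culture rule (variable names are illustrative):

```csharp
using System;
using System.Globalization;

var dt = new DateTime(2024, 3, 5);

// Culture-sensitive: output varies with the OS/user locale
var unstable = dt.ToString("d");

// Invariant: always "3/5/2024" regardless of platform or locale
var stable = dt.ToString("M/d/yyyy", CultureInfo.InvariantCulture);
```

This is exactly the class of difference that makes error-message tests pass locally and fail on a differently-configured CI runner.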
We use TDD to write code.
- Write a single failing test → run it.
- Write the minimal code to pass → run the test.
- Refactor → verify tests still pass.
- Repeat until all tests pass.
- Commit after each green state.
- Tests and quality checks (build + analysis) must pass with zero warnings/errors.
- When writing a failing test TDD style, the code should be in a compilable state. This means creating the necessary files, classes, and methods the test invokes, but with a placeholder implementation like `return 0;` (adapted to the context). The main idea here is to "test the test."
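A sketch of the "test the test" step (all names here are hypothetical; the assertion style follows the FluentAssertions convention used elsewhere in this file):

```csharp
// 1. The failing test drives the API shape
[Test]
public void Should_sum_line_totals()
{
    var result = InvoiceCalculator.Total(new[] { 1m, 2m });

    result.Should().Be(3m);
}

// 2. A placeholder keeps the solution compilable while the test stays red
public static class InvoiceCalculator
{
    public static decimal Total(decimal[] lines) => 0m; // placeholder: proves the test can fail
}
```

Running the test against the placeholder confirms the test actually detects a wrong answer before any real implementation is written.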
- No unsolicited docs/READMEs.
- Ask > assume.
- Stay on task (log unrelated issues).
- Avoid large, hard-to-review change sets; explain your intentions.
CORE APPROACH: "Minimal, surgical, trust the existing systems" - This is the fundamental approach for all code changes. Avoid over-engineering, unnecessary abstractions, and complex error handling that masks real issues. Let exceptions bubble up naturally and change only what's broken.
Prefer Meaningful Refactoring:
- Use destructuring - C# features like tuple deconstruction can improve clarity.
- Consider data structures - Sometimes the real solution is introducing a proper `record` or `struct` rather than more methods.
- Address root causes - Look for code smells like repeated calls, long parameter lists, or unclear responsibilities.
- Balance classes and interfaces - Prefer concrete classes for simple data structures or internal logic, but use interfaces for public APIs and services to enable dependency injection and mock testing.
Method Refactoring Philosophy:
- Explaining methods > explaining variables - Hide irrelevant complexity behind intention-revealing method names rather than cramming logic inline.
- True single responsibility - Each method should have ONE clear job.
- Separate "what" from "how" - Coordinate operations in one place; delegate implementation details to focused `private` helper methods.
- Optimize for readability first - Write code that reveals intent clearly; optimize for performance later if needed.
- Example pattern: Instead of `var lineCount = fileNode.GetLocation().GetLineSpan().EndLinePosition.Line - fileNode.GetLocation().GetLineSpan().StartLinePosition.Line + 1;`, use `GetLineCount(fileNode)` to hide the complexity.
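The explaining-method extraction from the pattern above might be sketched like this (assuming the Roslyn `SyntaxNode`/`GetLineSpan` API from Microsoft.CodeAnalysis):

```csharp
using Microsoft.CodeAnalysis;

// The intention-revealing name hides the multi-step Roslyn plumbing
static int GetLineCount(SyntaxNode fileNode)
{
    var span = fileNode.GetLocation().GetLineSpan();
    return span.EndLinePosition.Line - span.StartLinePosition.Line + 1;
}

// The call site now reads as a story:
// var lineCount = GetLineCount(fileNode);
```

Note it also removes the duplicated `GetLocation().GetLineSpan()` call from the original inline expression.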
Test what matters, skip what doesn't
Tests use NUnit 3.14 and should be named *Fixture.cs. All tests are located in a dedicated test project (e.g., SharpAssert.Tests) which mirrors the source project's namespaces.
Keep test methods focused and short. Use these strategies to manage test complexity:
Custom Assertions with FluentAssertions:
```csharp
// Instead of verbose inline assertions
Assert.NotNull(result);
Assert.Equal("value1", result.Field1);

// Use FluentAssertions for readable, chainable assertions
result.Should().NotBeNull();
result.Field1.Should().Be("value1");
```
Object Testing with BeEquivalentTo:
```csharp
// Instead of multiple field assertions
Assert.Equal("value1", result.Field1);
Assert.Equal("value2", result.Field2);

// Use BeEquivalentTo for deep object comparison
var expected = new { Field1 = "value1", Field2 = "value2" };
result.Should().BeEquivalentTo(expected);
```
External Test Data Files:
- Commit well-named input files as "Embedded Resources" in the test project.
- Use meaningful names that describe the test scenario (e.g., `ValidUserProfile.json`).
- Load files in a test fixture or setup method when needed across multiple test cases.
Test Organization:
- Use test fixtures with the `[TestFixture]` attribute to create/share objects needed in multiple test cases.
- Use `[TestCase]` and `[TestCaseSource]` when multiple test cases have similar structures but different inputs.
- Split large test files into multiple focused files when refactoring can't reduce complexity.
- Group tests meaningfully by the functionality being tested, often in a class named after the class under test.
Tests use NUnit 3.14 with FluentAssertions for assertions. Test structure:
- Fixtures per feature (e.g., BinaryComparisonFixture.cs, LinqOperationsFixture.cs)
We use a decoupled testing strategy that separates logic verification from formatting verification.
Structure:
```csharp
[TestFixture]
public class MyFeatureFixture : TestBase
{
    [TestFixture]
    class LogicTests
    {
        // Tests that verify the correctness of the EvaluationResult structure
        [Test]
        public void Should_detect_failure()
        {
            var expected = BinaryComparison(..., Comparison(...));
            AssertFails(() => Assert(x == y), expected);
        }
    }

    [TestFixture]
    class FormattingTests
    {
        // Tests that verify the rendered string output of the result
        [Test]
        public void Should_render_failure()
        {
            var result = BinaryComparison(...);
            AssertRendersExactly(result, "Left: 1", "Right: 2");
        }
    }

    // DSL Helpers
    static ComparisonResult Comparison(...) => ...
}
```
DSL Helpers (TestBase):
- `AssertFails(action, expectedResult)`: Verifies structural equality of `EvaluationResult`.
- `AssertPasses(action)`: Verifies the assertion succeeds.
- `AssertRendersExactly(result, lines...)`: Verifies exact string rendering (`Text` property).
- `Value(expr, val)`, `BinaryComparison(...)`, `Operand(...)`: Factories for expected results.
- CRITICAL: When addressing quality issues:
- Commit after every successful quality issue elimination before moving to fix the next one.
- Make sure to run all tests before each commit.
# Language: BoGuidelineSyntax (BGS)
# Keywords:
# RULE: Declares a guideline.
# DO: A positive action to take.
# AVOID: An anti-pattern to prevent.
# PREFER: A choice between two options. `PREFER A over B`.
# CONSIDER: A suggestion that is context-dependent.
# WHEN: A condition for the rule.
# BECAUSE: The rationale behind the rule.
# =>: "implies" or "should be implemented as".
#
# Symbols:
# {...}: A set of related concepts or reasons.
# @/..: Absolute path import.
# ../..: Relative path import.
# () =>: Anonymous function or component.
#---------------------------------------------------------------------
RULE: DO SEGREGATE methods into {orchestrator | implementor}.
- orchestrator: Calls other methods, contains no logic.
- implementor: Contains logic, has no sub-orchestration.
- AVOID: Mixing implementation logic with high-level coordination.
- BECAUSE: {clarity, single_responsibility, testability}.
RULE: DO ISOLATE logic into {pure | stateful}.
- pure: Predictable, no side-effects (`static` methods are great for this).
- stateful: Has side-effects, interacts with the external world (e.g., `fileService.SaveBaselineAsync(baseline)`).
- BECAUSE: {predictability, easy_testing_of_pure_logic}.
RULE: DO USE the Decorator\Pattern or middleware-style delegates to wrap_methods_with_cross_cutting_concerns.
- Examples: Logging decorators, retry policies with Polly, ASP.NET Core middleware.
- BECAUSE: {DRY, separation_of_concerns}.
RULE: DO USE early_returns (guard_clauses) to handle_invalid_states at the start of a method.
- AVOID: Deeply nested `if` statements for validation.
- BECAUSE: {reduces_nesting, improves_readability}.
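A guard-clause sketch (the `Order`/`Ship` names are hypothetical):

```csharp
// AVOID: nested validation pushes the real work two levels deep
void Process(Order? order)
{
    if (order != null)
    {
        if (order.Items.Count > 0)
        {
            Ship(order);
        }
    }
}

// DO: guard clauses handle invalid states up front and keep the happy path flat
void Process(Order? order)
{
    if (order is null) throw new ArgumentNullException(nameof(order));
    if (order.Items.Count == 0) return;

    Ship(order);
}
```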
RULE: DO USE classes PRIMARILY_AS_WRAPPERS for external_dependencies (IO, APIs, DBs) and to hold state.
- AVOID: Mixing complex business_logic directly in these wrapper classes; PREFER delegating to purer internal services/methods.
- BECAUSE: {isolates_side_effects, promotes_functional_core, aligns_with_clean_architecture}.
RULE: DO USE a dedicated_logger_service (ILogger<T>). AVOID Console.WriteLine for application_logging.
- BECAUSE: {structured_logs, central_control, destination_flexibility}.
RULE: AVOID over-defensive_programming internally.
- TRUST your type_system (especially nullable reference types) and boundary_validation.
- AVOID: {excessive_null_checks, redundant_try_catch_blocks for expected errors}.
RULE: CRITICAL AVOID comments; DO WRITE self_documenting_code.
- INSTEAD_OF: Commenting `// check if user is valid`, DO create a method `bool IsValid(User user)`.
- NEVER add comments that simply reiterate what is obvious from the code.
- BECAUSE: {comments_rot, encourages_clearer_code}.
RULE: DO NOT add XML doc comments for internal APIs.
- ONLY document public APIs that are part of the user-facing surface.
- NEVER add XML doc comments for test project files/classes as it's not a public API.
- BECAUSE: {reduces_noise, focuses_documentation_effort_on_user_value}.
RULE: DO NOT add brackets for simple one-line if statements.
```csharp
// BAD
if (result)
{
    return string.Empty;
}

// GOOD
if (result)
    return string.Empty;
```
- BECAUSE: {reduces_visual_noise, improves_readability}.
RULE: DO NOT add default private modifier.
- The `private` modifier is implicit for class members.
- BECAUSE: {reduces_verbosity, cleaner_code}.
RULE: PREFER using var instead of specifying exact type for variable declarations.
```csharp
// BAD
bool left = false;
bool right = true;
string message = "hello";

// GOOD
var left = false;
var right = true;
var message = "hello";
```
- BECAUSE: {reduces_verbosity, improves_readability, lets_compiler_infer_obvious_types}.
RULE: PREFER strongly-typed enums or the "smart enum" pattern OVER raw strings or integers.
- BECAUSE: {compile-time_type_safety, improved_readability, IDE_intellisense}.
RULE: DO put const values in the same file as the consumer, unless it's a shared constant used across the application.
- WHEN: Constants are shared, create a dedicated `public static class Constants`.
- BECAUSE: {locality_of_reference, easier_to_find, reduces_dependency_clutter}.
RULE: DO USE dedicated configuration_objects for method_signatures WHEN params.count >= 3.
- PREFER `MyMethod(MyMethodOptions options)` OVER `MyMethod(string param1, int param2, bool param3)`.
- BECAUSE: {readability, extensibility, no_param_order_memorization}.
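A sketch of the options-object rule (all names below are hypothetical):

```csharp
// AVOID: callers must memorize parameter order and meaning
void Export(string path, int retries, bool overwrite) { }

// PREFER: a dedicated options record names every setting at the call site
record ExportOptions(string Path, int Retries = 3, bool Overwrite = false);

void Export(ExportOptions options) { }

// Usage: named properties make the call self-documenting
// Export(new ExportOptions("out.json", Overwrite: true));
```

A record also makes it cheap to add new options later without breaking every existing call site.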
RULE: ADHERE_TO_CSHARP_NAMING {
class, record, struct, enum, delegate: PascalCase
interface: IPascalCase
method, property, event, public field: PascalCase
method parameter, local variable: camelCase
private or internal field: camelCase
constant: PascalCase
namespace: PascalCase.PascalCase
project, filename: PascalCase.cs
boolean property/variable: is/has/can/should => IsLoading, HasError, CanExecute
async method: Suffix with Async => GetDataAsync()
}
CRITICAL: Both the main agent and any subagents (spawned via a Task tool) must maintain the learnings.md knowledge base.
- Gotchas: Unexpected behaviors, edge cases, tricky bugs (e.g., `ConfigureAwait(false)` issues).
- Judgement calls: Architecture decisions, trade-offs made (e.g., choosing NUnit over xUnit, System.Threading.Channels over TPL Dataflow).
- File discoveries: Important files found, solution/project structure insights.
- Problems solved: Non-obvious solutions, workarounds (e.g., a clever LINQ query).
- Plan deviations: Things not anticipated by the original plan.
- Interesting findings: Performance insights, useful NuGet packages, C# patterns discovered.
- The `learnings.md` file is organized by topic (e.g., "Project & Solution Structure", "MSBuild Source Rewriter").
- Add new learnings as bullet points to the most relevant existing topic section.
- If no relevant topic exists, create a new `## Topic Name` section.
- Keep entries extremely concise - one line per insight.
- Only log unique, non-obvious information.
- Focus on future value for similar tasks.
- Subagents MUST update this file before reporting success.
- NEVER use underscores for private/internal members
- NEVER create coverage reports on top level. Always specify subfolder under ./TestResults dir
- Fix the bug: Add failing test first and then fix the bug
- Think other places where similar bug could happen: Add failing tests and fix the bug
- Check test coverage: Run tests with coverage and verify branch coverage if the failing test was a missing test
- Only testing negative scenarios (failures/errors) without positive scenarios (success) or other way around
- Missing tests for the "happy path" where operations should succeed
- `<summary>` - Clear, concise description of purpose
- `<param>` - For all parameters, with validation rules
- `<returns>` - For all methods with return values
- `<exception>` - For all thrown exceptions
- `<remarks>` - Performance, threading, and usage guidance
- `<example>` - Practical code examples
- Use `<see cref="">` for cross-references
- Use `<paramref name="">` for parameter references
- Use `<typeparamref name="">` for generic type references
- Use `<code>` blocks for inline code
- Use `<para>` for paragraph breaks in remarks
- Use `<list type="bullet|number">` for structured information
- CRITICAL: Focus on practical usage scenarios
- Include performance implications
- IMPORTANT: Document threading and concurrency behavior
- Provide realistic, copy-pasteable examples
- CRITICAL: Explain the "why", not just the "what"
- CRITICAL: Always write XML doc comments from the user's perspective, not implementation details
- Cover common pitfalls and best practices
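A sketch of a public-API doc comment using the tags above (the exception type and method body are hypothetical):

```csharp
/// <summary>
/// Asserts that <paramref name="condition"/> is true, producing a detailed failure message.
/// </summary>
/// <param name="condition">The boolean expression to verify; must not have side effects.</param>
/// <exception cref="InvalidOperationException">Thrown when the condition is false.</exception>
/// <remarks>
/// <para>Thread-safe; the condition is evaluated once on the calling thread.</para>
/// </remarks>
/// <example>
/// <code>Sharp.Assert(items.Count == 3);</code>
/// </example>
public static void Assert(bool condition) { /* ... */ }
```

Note the comment describes what the caller needs (contract, threading, a runnable example), not how the method is implemented.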
- Expression compilation can emit invalid IL on some runtimes; fall back to `Compile(preferInterpretation: true)` when evaluating expressions to avoid CI-only `InvalidProgramException`.
- Byref-like expression trees (e.g., `Span<T>`, `ReadOnlySpan<T>`) cannot be boxed; convert them to arrays before evaluation to avoid interpreter type load errors.
- Reflection helpers should tolerate missing overloads (e.g., `MemoryExtensions.ToArray`) to prevent type initializer failures on runtimes where a method shape differs.
- Compilation can fail with `TypeLoadException`/`ArgumentException` in addition to `InvalidProgramException`; catch broadly and fall back to interpreted compilation or a safe null-returning delegate.
- Prefer `Compile(preferInterpretation: true)` for expression value extraction to avoid invalid IL, and only drop to null when both interpreted compilation and span normalization fail.
- If evaluation still fails, return a sentinel (e.g., `EvaluationUnavailable`) and render `<unavailable: reason>` so diagnostics stay honest instead of showing nulls or bogus counts.
- NUnit `Assert.ThrowsAsync<T>()` returns the exception instance (not a `Task`), so async tests should await a real async assertion (e.g., `action.Should().ThrowAsync<T>()`) to avoid CS1998 warnings.
- When normalizing byref-like values, only treat `Span<T>`/`ReadOnlySpan<T>` as span-convertible; other ref structs should yield `EvaluationUnavailable` without attempting conversions.
- Wrap compiled value evaluators so invocation-time exceptions become `EvaluationUnavailable` instead of breaking failure analysis.
- Avoid a partial "manual evaluator" interpreter; prefer interpreted expression compilation with a clear `EvaluationUnavailable` fallback for unsupported nodes/runtimes.
- Reuse one lambda compilation fallback policy (e.g., for LINQ predicate analysis) so formatters don't each invent their own exception handling.
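The interpreted-compilation fallback described in the notes above might be sketched as follows (the helper name is hypothetical; the exception set and null-returning fallback follow the guidance in this section):

```csharp
using System;
using System.Linq.Expressions;

static Func<object?>? TryCompileEvaluator(Expression<Func<object?>> expr)
{
    try
    {
        // Interpreted compilation avoids emitting invalid IL on some runtimes
        return expr.Compile(preferInterpretation: true);
    }
    catch (Exception e) when (e is InvalidProgramException
                              or TypeLoadException
                              or ArgumentException)
    {
        // Catch broadly; the caller renders <unavailable: reason>
        // instead of showing nulls or bogus values
        return null;
    }
}
```

Centralizing this in one helper is what lets every formatter share a single fallback policy instead of inventing its own exception handling.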
- The rewriter's semantic model is created from a single syntax tree (current file only), so overloading `Sharp.Assert(...)` can make Roslyn binding ambiguous and cause asserts on members from other files to stop rewriting; prefer a different method name (e.g., `AssertThat`) for non-bool assertion entry points.
- A single `Assert(AssertValue ...)` entry point avoids overload ambiguity, but C# forbids implicit conversions to/from interfaces (so the conversion must be from an `Expectation` base class) and only allows one user-defined conversion in a chain (so types with `implicit operator bool` may need a direct conversion to `AssertValue` unless they inherit `Expectation`).
- Prefer `Throws<T>` results to behave purely as `Expectation` (no `implicit operator bool`); otherwise `!Throws(...)` and `&&`/`||` drift into the bool-analysis path and require special casing.
- For invocation `ExprNode` generation, include the receiver (e.g., as `ExprNode.Left`) in addition to `Arguments`, so composed expectations like `left.And(right)` can label operands correctly.
- Supporting `Expectation` composition via operators (`&`/`|`) requires generating `ExprNode.Left`/`Right` for `&`/`|` expressions so operand contexts render as `left` and `right` (not the whole expression).
- For async asserts, `await` inside `Assert(...)` is supported for awaited booleans and awaited expectations, but it is not rewritten into an expression tree; keep examples single-line if you need stable `CallerArgumentExpression` text in tests.
- `Assert(await someBoolTask)` must still be rewritten to `SharpInternal.AssertAsync`, but `Assert(await ThrowsAsync<T>(...))` should be left unrewritten; discriminating by `Task<bool>`/`ValueTask<bool>` avoids mis-rewriting awaited expectations into `Task<bool>`.
- For expectation ergonomics, prefer extension-method constructors (`4.IsEven()`) and suffix custom types with `Expectation` to keep call sites readable.
- For custom expectation APIs, support both styles: `using static` factories for unary expectations (`Assert(IsEven(4))`) and extension methods for fluent forms (`Assert(actual.IsEquivalentTo(expected))`).
- Records cannot inherit from non-record base classes, so expectation-like result types that must inherit `Expectation` need to be `class` types (or you must make `Expectation` a record and convert all derived types to records).