Brian Spann

Clean Architecture in .NET 10: Testing What Matters

Part 7 of 7 — the finale! Start from the beginning if you're new here.


Clean Architecture promises testability. Now let's deliver. We'll write tests that actually catch bugs, skip tests that waste time, and avoid the trap of coverage theater.


What We're Testing

| Layer | What To Test | How |
| --- | --- | --- |
| Domain | Entities, value objects, business rules | Pure unit tests, no mocks |
| Application | Command/query handlers | Unit tests with mocked repos |
| Infrastructure | Repositories, DB config | Integration tests |
| API | Full request/response cycle | Integration tests |

Test Projects Setup

mkdir tests
cd tests

# Unit tests
dotnet new xunit -n PromptVault.UnitTests
dotnet add PromptVault.UnitTests reference ../src/PromptVault.Domain
dotnet add PromptVault.UnitTests reference ../src/PromptVault.Application
dotnet add PromptVault.UnitTests package Moq
dotnet add PromptVault.UnitTests package FluentAssertions

# Integration tests
dotnet new xunit -n PromptVault.IntegrationTests
dotnet add PromptVault.IntegrationTests reference ../src/PromptVault.API
dotnet add PromptVault.IntegrationTests package Microsoft.AspNetCore.Mvc.Testing
dotnet add PromptVault.IntegrationTests package FluentAssertions
# In-memory EF Core provider used by the test factory below
dotnet add PromptVault.IntegrationTests package Microsoft.EntityFrameworkCore.InMemory
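If you keep a solution file at the repository root (this assumes Part 1 left a PromptVault.sln there), add both test projects to it so a bare dotnet test from the root picks them up:

# Run from the repository root
dotnet sln add tests/PromptVault.UnitTests/PromptVault.UnitTests.csproj
dotnet sln add tests/PromptVault.IntegrationTests/PromptVault.IntegrationTests.csproj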

Domain Tests: Pure and Simple

Domain tests are the easiest. No mocks, no setup—just logic.

tests/PromptVault.UnitTests/Domain/PromptTests.cs

using FluentAssertions;
using PromptVault.Domain.Entities;
using PromptVault.Domain.ValueObjects;

namespace PromptVault.UnitTests.Domain;

public class PromptTests
{
    [Fact]
    public void Constructor_WithValidData_CreatesPromptWithInitialVersion()
    {
        var prompt = new Prompt("My Prompt", "Do something", ModelType.Gpt4);

        prompt.Title.Should().Be("My Prompt");
        prompt.Content.Should().Be("Do something");
        prompt.Versions.Should().HaveCount(1);
        prompt.Versions.First().VersionNumber.Should().Be(1);
    }

    [Theory]
    [InlineData("")]
    [InlineData("   ")]
    [InlineData(null)]
    public void Constructor_WithEmptyTitle_Throws(string? title)
    {
        var act = () => new Prompt(title!, "Content", ModelType.Gpt4);

        act.Should().Throw<ArgumentException>()
            .WithMessage("*Title*required*");
    }

    [Fact]
    public void UpdateContent_WithNewContent_CreatesNewVersion()
    {
        var prompt = new Prompt("Test", "Original", ModelType.Gpt4);

        prompt.UpdateContent("Updated", "user@example.com");

        prompt.Content.Should().Be("Updated");
        prompt.Versions.Should().HaveCount(2);
        prompt.Versions.Last().CreatedBy.Should().Be("user@example.com");
    }

    [Fact]
    public void UpdateContent_WithSameContent_DoesNotCreateVersion()
    {
        var prompt = new Prompt("Test", "Same", ModelType.Gpt4);

        prompt.UpdateContent("Same");

        prompt.Versions.Should().HaveCount(1);
    }

    [Fact]
    public void AddTag_NormalizesAndDeduplicates()
    {
        var prompt = new Prompt("Test", "Content", ModelType.Gpt4);

        prompt.AddTag("Machine Learning");
        prompt.AddTag("machine-learning");  // Same slug
        prompt.AddTag("MACHINE LEARNING");  // Same slug

        prompt.Tags.Should().HaveCount(1);
    }
}

tests/PromptVault.UnitTests/Domain/TagTests.cs

using FluentAssertions;
using PromptVault.Domain.ValueObjects;

namespace PromptVault.UnitTests.Domain;

public class TagTests
{
    [Theory]
    [InlineData("Machine Learning", "machine-learning")]
    [InlineData("AI_Tools", "ai-tools")]
    [InlineData("  spaces  ", "spaces")]
    public void Constructor_NormalizesToSlug(string input, string expectedSlug)
    {
        var tag = new Tag(input);
        tag.Slug.Should().Be(expectedSlug);
    }

    [Fact]
    public void Constructor_WithEmptyValue_Throws()
    {
        var act = () => new Tag("");
        act.Should().Throw<ArgumentException>();
    }

    [Fact]
    public void Constructor_WithTooLongValue_Throws()
    {
        var act = () => new Tag(new string('a', 51));
        act.Should().Throw<ArgumentException>().WithMessage("*50 characters*");
    }
}

No database. No HTTP. No mocking. Just logic and assertions.


Handler Tests: Mocked Dependencies

Handlers are tested with mocked repositories:

tests/PromptVault.UnitTests/Application/CreatePromptCommandHandlerTests.cs

using FluentAssertions;
using Moq;
using PromptVault.Application;
using PromptVault.Application.Commands.CreatePrompt;
using PromptVault.Application.Interfaces;
using PromptVault.Domain.Entities;

namespace PromptVault.UnitTests.Application;

public class CreatePromptCommandHandlerTests
{
    private readonly Mock<IPromptRepository> _repoMock;
    private readonly CreatePromptCommandHandler _handler;

    public CreatePromptCommandHandlerTests()
    {
        _repoMock = new Mock<IPromptRepository>();
        _handler = new CreatePromptCommandHandler(_repoMock.Object);
    }

    [Fact]
    public async Task Handle_WithValidCommand_ReturnsSuccessWithId()
    {
        // Arrange
        _repoMock.Setup(r => r.TitleExistsAsync(It.IsAny<string>(), null, default))
            .ReturnsAsync(false);

        var command = new CreatePromptCommand("Test", "Content", "gpt-4", 
            new List<string> { "tag1" });

        // Act
        var result = await _handler.Handle(command, CancellationToken.None);

        // Assert
        result.IsSuccess.Should().BeTrue();
        result.Value.Should().NotBeEmpty();

        _repoMock.Verify(r => r.AddAsync(
            It.Is<Prompt>(p => p.Title == "Test" && p.Tags.Count == 1), 
            default), Times.Once);
    }

    [Fact]
    public async Task Handle_WithDuplicateTitle_ReturnsConflict()
    {
        _repoMock.Setup(r => r.TitleExistsAsync("Existing", null, default))
            .ReturnsAsync(true);

        var command = new CreatePromptCommand("Existing", "Content", "gpt-4");

        var result = await _handler.Handle(command, CancellationToken.None);

        result.IsSuccess.Should().BeFalse();
        result.ErrorType.Should().Be(ErrorType.Conflict);
        _repoMock.Verify(r => r.AddAsync(It.IsAny<Prompt>(), default), Times.Never);
    }
}

Pattern: Arrange → Act → Assert. Mock the repository, call the handler, verify the result.


Integration Tests: Real HTTP

For integration tests, use WebApplicationFactory:

tests/PromptVault.IntegrationTests/CustomWebApplicationFactory.cs

using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc.Testing;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;
using PromptVault.Infrastructure.Persistence;

namespace PromptVault.IntegrationTests;

public class CustomWebApplicationFactory : WebApplicationFactory<Program>
{
    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.ConfigureServices(services =>
        {
            // Remove real DbContext
            var descriptor = services.SingleOrDefault(
                d => d.ServiceType == typeof(DbContextOptions<AppDbContext>));
            if (descriptor != null)
                services.Remove(descriptor);

            // Add in-memory database
            services.AddDbContext<AppDbContext>(options =>
                options.UseInMemoryDatabase("TestDb_" + Guid.NewGuid()));

            // Ensure created
            var sp = services.BuildServiceProvider();
            using var scope = sp.CreateScope();
            var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
            db.Database.EnsureCreated();
        });

        builder.UseEnvironment("Testing");
    }
}
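One wiring detail: if the API uses top-level statements, the compiler-generated Program class is internal, so WebApplicationFactory<Program> won't compile from the test project. A minimal fix, assuming an earlier part hasn't already done this, is a partial class declaration at the bottom of Program.cs:

// src/PromptVault.API/Program.cs (very bottom)
// Exposes the compiler-generated Program class to PromptVault.IntegrationTests.
public partial class Program { }

An InternalsVisibleTo entry in the API project file works too; either way, the factory can then bootstrap the real pipeline.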

tests/PromptVault.IntegrationTests/PromptsControllerTests.cs

using System.Net;
using System.Net.Http.Json;
using FluentAssertions;
using PromptVault.API.Contracts.Requests;
using PromptVault.API.Contracts.Responses;

namespace PromptVault.IntegrationTests;

public class PromptsControllerTests : IClassFixture<CustomWebApplicationFactory>
{
    private readonly HttpClient _client;

    public PromptsControllerTests(CustomWebApplicationFactory factory)
    {
        _client = factory.CreateClient();
    }

    [Fact]
    public async Task CreatePrompt_WithValidData_ReturnsCreated()
    {
        var request = new CreatePromptRequest(
            $"Test {Guid.NewGuid()}", "Content", "gpt-4", 
            new List<string> { "test" });

        var response = await _client.PostAsJsonAsync("/api/prompts", request);

        response.StatusCode.Should().Be(HttpStatusCode.Created);

        var created = await response.Content.ReadFromJsonAsync<CreatePromptResponse>();
        created!.Id.Should().NotBeEmpty();
        response.Headers.Location.Should().NotBeNull();
    }

    [Fact]
    public async Task CreatePrompt_WithDuplicateTitle_ReturnsConflict()
    {
        var title = $"Duplicate {Guid.NewGuid()}";
        var request = new CreatePromptRequest(title, "Content", "gpt-4");

        await _client.PostAsJsonAsync("/api/prompts", request);
        var response = await _client.PostAsJsonAsync("/api/prompts", request);

        response.StatusCode.Should().Be(HttpStatusCode.Conflict);
    }

    [Fact]
    public async Task GetPrompt_WhenExists_ReturnsOk()
    {
        // Create
        var createReq = new CreatePromptRequest($"Get Test {Guid.NewGuid()}", "Content", "gpt-4");
        var createRes = await _client.PostAsJsonAsync("/api/prompts", createReq);
        var created = await createRes.Content.ReadFromJsonAsync<CreatePromptResponse>();

        // Get
        var response = await _client.GetAsync($"/api/prompts/{created!.Id}");

        response.StatusCode.Should().Be(HttpStatusCode.OK);
        var prompt = await response.Content.ReadFromJsonAsync<PromptResponse>();
        prompt!.Title.Should().Be(createReq.Title);
    }

    [Fact]
    public async Task GetPrompt_WhenNotExists_ReturnsNotFound()
    {
        var response = await _client.GetAsync($"/api/prompts/{Guid.NewGuid()}");
        response.StatusCode.Should().Be(HttpStatusCode.NotFound);
    }

    [Fact]
    public async Task UpdatePrompt_CreatesNewVersion()
    {
        // Create
        var createReq = new CreatePromptRequest($"Version Test {Guid.NewGuid()}", "v1", "gpt-4");
        var createRes = await _client.PostAsJsonAsync("/api/prompts", createReq);
        var created = await createRes.Content.ReadFromJsonAsync<CreatePromptResponse>();

        // Update
        var updateReq = new UpdatePromptRequest(Content: "v2");
        await _client.PutAsJsonAsync($"/api/prompts/{created!.Id}", updateReq);

        // Verify
        var getRes = await _client.GetAsync($"/api/prompts/{created.Id}?includeVersions=true");
        var prompt = await getRes.Content.ReadFromJsonAsync<PromptResponse>();

        prompt!.VersionCount.Should().Be(2);
    }

    [Fact]
    public async Task DeletePrompt_RemovesIt()
    {
        // Create
        var createReq = new CreatePromptRequest($"Delete Test {Guid.NewGuid()}", "Content", "gpt-4");
        var createRes = await _client.PostAsJsonAsync("/api/prompts", createReq);
        var created = await createRes.Content.ReadFromJsonAsync<CreatePromptResponse>();

        // Delete
        var deleteRes = await _client.DeleteAsync($"/api/prompts/{created!.Id}");
        deleteRes.StatusCode.Should().Be(HttpStatusCode.NoContent);

        // Verify gone
        var getRes = await _client.GetAsync($"/api/prompts/{created.Id}");
        getRes.StatusCode.Should().Be(HttpStatusCode.NotFound);
    }
}

What NOT To Test

Testing everything isn't the goal. Skip tests that:

1. Just Test the Framework

// ❌ DON'T TEST THIS
[Fact]
public void DbContext_SaveChanges_Persists()
{
    // You're testing EF Core, not your code
}

2. Are Trivial Mappings

// ❌ DON'T TEST THIS
[Fact]
public void PromptDto_FromEntity_MapsTitle()
{
    // Integration tests will catch this if broken
}

3. Require Excessive Mocking

If you need 10 mocks to test one method, the method does too much. Refactor first.
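
What that refactor often looks like, sketched with hypothetical names (none of these types are in PromptVault): collapse a cluster of related collaborators behind one intention-revealing interface so handler tests need a single mock.

// Hypothetical collaborators, declared only so the sketch compiles.
public interface IEmailSender { Task SendAsync(string to, string subject, CancellationToken ct); }
public interface ISlackNotifier { Task PostAsync(string message, CancellationToken ct); }
public interface IAuditLog { Task RecordAsync(string action, CancellationToken ct); }

// Before: three mocks (plus the repository) just to exercise one handler.
public class PublishPromptHandlerBefore(
    IEmailSender email, ISlackNotifier slack, IAuditLog audit)
{
    // ...orchestration that touches all three...
}

// After: one cohesive seam; handler tests mock a single interface,
// and the notifier gets its own focused tests.
public interface IPromptPublishedNotifier
{
    Task NotifyAsync(Guid promptId, CancellationToken ct);
}

public class PublishPromptHandlerAfter(IPromptPublishedNotifier notifier)
{
    // ...same orchestration, one dependency to fake...
}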


The Coverage Trap

"We need 80% code coverage!"

Coverage is a terrible metric for test quality. You can have 100% coverage and catch zero bugs:

// ❌ Useless test — 100% coverage, zero value
[Fact]
public void CreatePrompt_Works()
{
    var handler = new CreatePromptCommandHandler(Mock.Of<IPromptRepository>());
    // No assertions. Coverage goes up. Value = zero.
}
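Here's the same scenario with teeth, sketched against the handler and repository shapes used earlier in this post:

// ✅ Same handler, but this fails if creation silently breaks
[Fact]
public async Task CreatePrompt_PersistsPromptAndReturnsId()
{
    var repo = new Mock<IPromptRepository>();
    repo.Setup(r => r.TitleExistsAsync(It.IsAny<string>(), null, default))
        .ReturnsAsync(false);
    var handler = new CreatePromptCommandHandler(repo.Object);

    var result = await handler.Handle(
        new CreatePromptCommand("Test", "Content", "gpt-4"), CancellationToken.None);

    result.IsSuccess.Should().BeTrue();
    repo.Verify(r => r.AddAsync(It.IsAny<Prompt>(), default), Times.Once);
}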

Better questions:

  • Do tests fail when you break things?
  • Do tests catch real bugs in PRs?
  • Are tests fast enough that people run them?

Running Tests

# All tests
dotnet test

# Unit tests only
dotnet test tests/PromptVault.UnitTests

# With coverage
dotnet test --collect:"XPlat Code Coverage"

# Specific test
dotnet test --filter "FullyQualifiedName~CreatePromptCommandHandlerTests"

Test Organization

tests/
├── PromptVault.UnitTests/
│   ├── Domain/
│   │   ├── PromptTests.cs
│   │   └── TagTests.cs
│   └── Application/
│       └── CreatePromptCommandHandlerTests.cs
│
└── PromptVault.IntegrationTests/
    ├── CustomWebApplicationFactory.cs
    └── PromptsControllerTests.cs

Key Takeaways

  1. Domain tests are pure — No mocks, no database, just logic
  2. Handler tests mock repositories — Verify coordination logic
  3. Integration tests use real HTTP — Catch wiring issues
  4. Skip trivial tests — Focus on behavior, not coverage
  5. Coverage is a vanity metric — Tests that catch bugs matter

Series Complete! 🎉

Over 7 parts, we built:

  1. Part 1: Solution structure, dependency direction
  2. Part 2: Domain entities with behavior
  3. Part 3: CQRS with MediatR
  4. Part 4: EF Core repositories
  5. Part 5: Thin API controllers
  6. Part 6: Pipeline behaviors
  7. Part 7: Testing at every layer

Final Thoughts

Clean Architecture isn't about perfect circles or rigid folders. It's about:

  • Dependencies point inward — Domain knows nothing about databases
  • Testable by design — Each layer tested in isolation
  • Changeable infrastructure — Swap databases without touching business logic

The ceremony has a cost. For small projects, it's overhead. For large projects with long lifespans, it pays dividends.

Build what you need, not what the architecture diagram shows.


Resources


Get the Code

The complete PromptVault application:

PromptVault

A production-ready .NET 10 API for storing, versioning, and organizing AI prompts.

This is the companion repository for the blog series: Clean Architecture in .NET 10: A Practical Guide



What Is This?

PromptVault is a REST API that lets you:

  • 📝 Store prompts with metadata (title, model type, tags)
  • 🔄 Track versions automatically when content changes
  • 📁 Organize into collections (like folders for your prompts)
  • 🔍 Search by content or tags

More importantly, it demonstrates Clean Architecture patterns in a real, runnable application—not just code snippets.


Blog Series

This repo follows along with a 7-part blog series:

| Part | Topic | Branch |
| --- | --- | --- |
| 0 | Introduction: Why Your Code Turns Into Spaghetti | main (this branch) |
| 1 | The Setup | part-1-setup |
| 2 | The Domain Layer | part-2-domain |
| 3 | The Application Layer | part-3-application |
| 4 | The Infrastructure Layer | part-4-infrastructure |
| 5 | The API Layer | part-5-api |
| 6 | Production Polish | part-6-production |
| 7 | Testing | part-7-testing |
Each branch represents the state of the…

Clone it. Run it. Make it yours.


Thanks for following along. Now go build something. 🚀
