Do You Still Need Wolverine When AI Can Write the Code?

by Brad Jolicoeur
04/12/2026

Something happened when I was adding a feature to a Wolverine-based project. I described what I needed to an AI coding agent and it generated something that worked: a BackgroundService, a polling loop, a hand-rolled outbox table. Completely reasonable code.

My first reaction wasn't "this is wrong." My first reaction was: why am I using Wolverine?

The agent generated working async code in seconds. It handles errors. It processes in the background. If someone on my team reviewed that code without knowing we had decided to use Wolverine, they'd probably approve it. So what exactly am I defending? Is Wolverine a meaningful architectural choice, or is it a dependency that AI can route around naturally enough that it becomes optional?

That's the real question in an AI-assisted workflow. AI has genuinely changed the math on some architectural decisions. Tools that once required specialized knowledge to implement correctly are now a prompt away. The question is whether messaging frameworks fall into that category.

Wolverine encodes hard-won distributed systems knowledge into framework primitives: the durable inbox/outbox, partitioned sequential messaging, saga support. AI can generate all of those patterns. The question is whether what AI generates and what Wolverine provides are actually the same thing. They're not. But the gap between them is specific, and you need to understand it to make the right call.

This is the third article in a series on Wolverine. The first covered Wolverine's approach to concurrency and shared resources. The second covered scaling: ListenerCount, parallelism, the durable inbox/outbox at scale, and what I learned coming from Rebus and NServiceBus. This one addresses the question directly: should you still use Wolverine when AI can write the patterns?

What AI Generates Without Guidance

Let's start with a concrete example. You're working on a project that uses Wolverine for all message handling. You ask an AI coding agent: "Add async processing for new orders. When an order is placed, process it in the background and update the order status when complete."

Without specific guidance about your stack, here's the kind of code you'll get:

// Hand-rolled outbox entity
public class PendingOrderMessage
{
    public Guid Id { get; set; } = Guid.NewGuid();
    public string Payload { get; set; } = string.Empty;
    public DateTime CreatedAt { get; set; } = DateTime.UtcNow;
    public DateTime? ProcessedAt { get; set; }
    public int Attempts { get; set; }
    public string? LastError { get; set; }
}

// Background polling service
public class OrderProcessingWorker : BackgroundService
{
    private readonly IServiceScopeFactory _scopeFactory;
    private readonly ILogger<OrderProcessingWorker> _logger;

    public OrderProcessingWorker(
        IServiceScopeFactory scopeFactory,
        ILogger<OrderProcessingWorker> logger)
    {
        _scopeFactory = scopeFactory;
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            await ProcessPendingOrders(stoppingToken);
            await Task.Delay(5000, stoppingToken);
        }
    }

    private async Task ProcessPendingOrders(CancellationToken ct)
    {
        using var scope = _scopeFactory.CreateScope();
        var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();

        var messages = await db.PendingOrderMessages
            .Where(m => m.ProcessedAt == null && m.Attempts < 3)
            .Take(20)
            .ToListAsync(ct);

        foreach (var msg in messages)
        {
            try
            {
                var command = JsonSerializer.Deserialize<ProcessOrderCommand>(msg.Payload);
                // ... process the order
                msg.ProcessedAt = DateTime.UtcNow;
            }
            catch (Exception ex)
            {
                _logger.LogError(ex, "Failed processing message {Id}", msg.Id);
                msg.Attempts++;
                msg.LastError = ex.Message;
            }

            await db.SaveChangesAsync(ct);
        }
    }
}

This is competent code. It reads clearly, handles errors, and retries up to three times. A developer reviewing it in isolation might not see any problems.

The problems are all subtle:

No exactly-once guarantee. If the application crashes after processing the order but before saving ProcessedAt, the message gets processed again on the next poll cycle. If it crashes partway through the save, the business side effects have already happened but the marker was never persisted, so the work runs again anyway. Nothing puts both steps inside one transaction.

No distributed coordination. Run two instances and both will pick up the same messages from the polling query. Without row-level locking (on Postgres, SELECT ... FOR UPDATE SKIP LOCKED), you'll process the same order on two instances simultaneously.

Fixed polling interval. Five seconds of latency on every message regardless of load. Under heavy traffic, messages pile up in the table with no backpressure. Under light traffic, you're burning database connections polling an empty table.

No observability. Nothing here for OpenTelemetry to attach to. No trace context propagation, no span creation, no built-in metrics. When something goes wrong in production, you'll be reading log lines and guessing at timing.
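To make the first two gaps concrete, here is the same loop body annotated with its crash windows. This is a sketch, not a fix: ProcessOrder is a stand-in for the elided "process the order" step above.

```csharp
// Crash windows in the naive polling loop above.
// ProcessOrder stands in for the elided processing step.
foreach (var msg in messages)
{
    var command = JsonSerializer.Deserialize<ProcessOrderCommand>(msg.Payload);

    await ProcessOrder(command!);   // business side effects happen here

    // CRASH WINDOW 1: the order is processed but ProcessedAt is still
    // null, so the next poll cycle selects this row and processes it again.

    msg.ProcessedAt = DateTime.UtcNow;
    await db.SaveChangesAsync(ct);

    // CRASH WINDOW 2: nothing ties the business side effects and this
    // bookkeeping update into a single transaction. A durable outbox
    // closes both windows by committing the state change and the
    // message record atomically.
}
```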

This isn't a failure of the AI. It's doing exactly what it was trained to do: generate a reasonable implementation of the pattern you described. "Process things in the background" has a well-established naive implementation. The agent found it.

The Hard Problems Haven't Changed

AI can generate these patterns. What it generates is the naive version. Wolverine is the battle-tested version. That gap matters in production.

Ask an AI to explain the exactly-once delivery problem and you'll often get a solid explanation. But explaining a problem and having a battle-tested implementation of the solution are different things.

Wolverine's durable inbox/outbox isn't just a correct implementation. It's a maintained, production-tested implementation that handles edge cases you haven't thought of yet. The partitioned sequential messaging from the concurrency article isn't a pattern you could replicate in a sprint. It's infrastructure integrated with Marten transactions, the durability agent, and OpenTelemetry tracing across the entire message lifecycle.

What matters more to me is minimizing what the agent has to create. A Wolverine message handler is 10 to 15 lines. The hand-rolled equivalent above is 70-plus lines, and it's still missing retry backoff, dead-letter routing, and observability. Every line of code the agent doesn't have to write is a line it can't get wrong.

There's also a maintenance argument. The hand-rolled solution is your code. When it breaks in production, you're debugging your custom outbox implementation. Wolverine's outbox is a known quantity with documentation, a GitHub issues history, and a community. When I hit an edge case with it, I can search and usually find that someone else hit it first.

The value of frameworks like Wolverine didn't diminish when AI agents started writing code. If anything, it increased. Frameworks encode patterns that should be treated as stable dependencies, not generated code.

The Training Data Problem

AI models learn from code, documentation, Stack Overflow answers, blog posts, and GitHub issues. NServiceBus has been in production since 2007. MassTransit since 2007. Rebus since 2012. There are years of real-world content for each of them. When an AI agent needs to configure a retry policy in NServiceBus, it's drawing on a massive corpus of production examples.

Wolverine launched in 2022. It's newer, it's less widely documented in the wild, and there's dramatically less training data for it. When an AI agent needs to configure Wolverine without specific guidance, it has a few options: reach for vague recollections of Wolverine's API, apply patterns from MassTransit or NServiceBus that don't map cleanly, or invent something plausible that compiles but doesn't behave correctly.

I've seen agents generate Wolverine configuration using NServiceBus-style EndpointConfiguration blocks that don't exist in the framework. I've seen sagas generated with MassTransit's CorrelatedBy<T> interface rather than Wolverine's Saga base class. Both compiled fine. Neither worked. I've also seen agents reach for something like RecoverabilityPolicy.Exponential() for retry configuration, a pattern that doesn't exist in any .NET messaging framework, but sounds completely plausible if you're pattern-matching from training data rather than working from documentation.

This isn't a knock on the tools. It's a fundamental property of how language models work. Less training data means more uncertainty, which means more creative gap-filling that looks right but isn't. For mature frameworks with years of public documentation, agents get close. For newer frameworks, they're often confidently wrong in ways that are hard to spot if you don't already know the correct pattern.

This is the core challenge with Wolverine specifically. It's good software solving real problems, and most agents don't have enough training data to use it correctly without explicit guidance from you.

Guiding Your AI Agent

The solution isn't to stop using AI agents or to avoid using Wolverine. It's to give agents explicit instructions about your stack.

Most AI coding tools support project-level instructions. GitHub Copilot reads .github/copilot-instructions.md and applies it to every interaction in your repository. Cursor uses .cursor/rules/ files that you can scope to file types or directories. OpenAI's agent framework uses AGENTS.md in the repository root. Claude recognizes CLAUDE.md. The specific file depends on your tooling, but the principle is the same: write down what you want the agent to do and what you want it to avoid.

Here's what a Wolverine-specific instructions section looks like:

## Messaging and Background Processing

This project uses Wolverine for all message handling, async workflows,
and background processing.

### Use these patterns

- Create Wolverine message handlers as static classes with static Handle or
  HandleAsync methods -- handlers are auto-discovered, no registration needed.
  Static handlers are preferred: all dependencies inject as method parameters,
  making them easy to unit test
- Publish commands and events from controllers or services using `IMessageBus`
- Use Wolverine sagas (inherit from `Saga`) for multi-step stateful workflows
- Configure retry policies in `Program.cs` using `opts.OnException<TException>()`
- Wolverine + Marten handles the transactional outbox automatically --
  no explicit outbox management needed

### Do not create these patterns

- No custom outbox entities (`OutboxMessage`, `PendingMessage`, etc.)
- No `BackgroundService` implementations for message processing
- No manual retry loops with `try/catch` and `Task.Delay`
- No `Task.Run` for background work that belongs in a message handler
- Do not apply NServiceBus or MassTransit API patterns --
  Wolverine has its own conventions

### Reference

- Wolverine docs: https://wolverine.netlify.app/
- Marten integration: https://wolverinefx.io/guide/durability/marten/
- See `src/Handlers/` for working examples of handler patterns

The "Do not create these patterns" section is as important as the positive guidance. AI agents respond well to explicit negative constraints. Without them, the agent fills in gaps with the most common pattern it's seen. For async processing, that's usually a polling BackgroundService.

What Good Wolverine Guidance Produces

Here's what the agent generates when it has the instructions above and a reference implementation to look at.

First, a reference handler for the agent to learn from:

// src/Handlers/Orders/ProcessOrderHandler.cs
using Marten;

public record ProcessOrder(Guid OrderId, string CustomerId, decimal Total);

public static class ProcessOrderHandler
{
    // Wolverine discovers handlers by convention.
    // All parameters after the message are dependency-injected.
    public static async Task Handle(
        ProcessOrder command,
        IDocumentSession session,
        ILogger<ProcessOrderHandler> logger)
    {
        var order = await session.LoadAsync<Order>(command.OrderId);

        if (order is null)
        {
            logger.LogWarning("Order {OrderId} not found, skipping", command.OrderId);
            return;
        }

        order.MarkAsProcessing();
        session.Store(order);

        // Wolverine + Marten handle the transactional outbox.
        // session.Store() and any cascaded publishes happen atomically.
    }
}

And the publish side from a controller:

// src/Controllers/OrderController.cs
using Microsoft.AspNetCore.Mvc;
using Wolverine;

[ApiController]
[Route("api/orders")]
public class OrderController : ControllerBase
{
    private readonly IMessageBus _bus;

    public OrderController(IMessageBus bus)
    {
        _bus = bus;
    }

    [HttpPost]
    public async Task<IActionResult> CreateOrder(CreateOrderRequest request)
    {
        var orderId = Guid.NewGuid();
        // Save the initial order record...

        await _bus.PublishAsync(new ProcessOrder(orderId, request.CustomerId, request.Total));

        return Accepted(new { orderId });
    }
}

That's it. No polling worker, no custom outbox entity, no manual retry logic. Wolverine's durability agent handles exactly-once delivery, retry with backoff, and dead-letter routing. The handler is pure business logic.
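Pure business logic in a static method also pays off at test time: there's no host or container to spin up, just a method call. A sketch of a unit test for the reference handler, assuming xUnit and NSubstitute as the test stack (neither is required by Wolverine; a hand-rolled stub of IDocumentSession works too):

```csharp
using Marten;
using Microsoft.Extensions.Logging.Abstractions;
using NSubstitute;
using Xunit;

public class ProcessOrderHandlerTests
{
    [Fact]
    public async Task Skips_when_order_is_missing()
    {
        var session = Substitute.For<IDocumentSession>();
        session.LoadAsync<Order>(Arg.Any<Guid>()).Returns((Order?)null);

        // Static handler: dependencies are just method arguments.
        await ProcessOrderHandler.Handle(
            new ProcessOrder(Guid.NewGuid(), "cust-1", 10m),
            session,
            NullLogger<ProcessOrderHandler>.Instance);

        session.DidNotReceive().Store(Arg.Any<Order>());
    }
}
```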

The configuration that makes this work:

// Program.cs
using Wolverine.RabbitMQ;

builder.Services.AddMarten(opts =>
{
    opts.Connection("host=postgres;database=orders");
})
.IntegrateWithWolverine(); // enables transactional outbox

builder.Services.AddWolverine(opts =>
{
    opts.UseRabbitMq("host=rabbitmq").AutoProvision();
    opts.ListenToRabbitQueue("orders");

    // Wolverine retry policy -- note: this is NOT NServiceBus RecoverabilityPolicy
    opts.OnException<InvalidOperationException>()
        .RetryWithCooldown(
            50.Milliseconds(),
            100.Milliseconds(),
            250.Milliseconds()
        );

    opts.OnException<Exception>()
        .MoveToErrorQueue();
});

The comment on the retry policy matters. Without guidance, an agent familiar with NServiceBus will likely try to use RecoverabilityPolicy.Exponential(). An agent familiar with MassTransit will try UseMessageRetry(). Neither exists in Wolverine. The API is different enough that the agent will get it wrong consistently unless you show it the right idiom.

Including this configuration as a reference in your codebase and linking to it from your instructions file gives the agent a second layer of guidance. Instructions tell it what to do; examples show it how.

Partitioned Messaging Needs Its Own Instructions

If your project uses Wolverine's partitioned sequential messaging (covered in the concurrency article), that pattern is absolutely something to include in your agent instructions.

Partitioning is non-obvious and easy to miss without explicit guidance. An agent that doesn't know about it will generate standard concurrent handlers for inventory or order processing, reintroducing the race conditions that partitioning is designed to eliminate structurally. This is one of the more powerful patterns in Wolverine's toolkit, and it's not something an agent will reach for by default.

A single instruction like "Use partitioned sequential messaging for handlers that operate on shared resources. See the partitioning guide at [URL]" redirects the agent toward the right approach.
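As a sketch, that one-liner might expand into its own section of the instructions file. The URL placeholder and the `src/Handlers/Inventory/` path are assumptions; point them at your own docs and code:

```markdown
### Shared-resource handlers

- Handlers that mutate a shared resource (inventory counts, account
  balances, order state) must use partitioned sequential messaging so
  all messages for one resource are processed in order on one partition.
  See the partitioning guide at [URL]
- Do not add locks, semaphores, or optimistic-retry loops inside handlers
  to protect shared state -- partitioning is the structural fix for
  those races
- See `src/Handlers/Inventory/` for a working partitioned handler
```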

Conclusion

So should you use Wolverine or let AI write the patterns? Use the framework. Not because AI can't generate async processing code (it clearly can). Because what AI generates is the naive version, and the naive version of a distributed messaging pattern is maintenance debt wearing production clothes.

AI agents haven't changed what makes distributed systems hard. Exactly-once delivery is still hard. Race conditions under concurrent load are still subtle. A hand-rolled outbox still needs to handle all the same edge cases that Wolverine already handles. What agents change is the workflow: less writing code from scratch, more guiding code generation toward the right tools.

That guidance doesn't happen by default. You have to provide it explicitly, because the default behavior of an agent without stack context is to reach for whatever pattern has the most training data. For async processing in .NET, that's a polling BackgroundService. For retry logic, it's a try/catch loop. For exactly-once delivery, it's a custom outbox table you now own.

A few paragraphs in a .github/copilot-instructions.md or AGENTS.md file redirects the agent from a 70-line hand-rolled solution to a 15-line handler backed by infrastructure you're already paying to maintain.

Wolverine encodes hard-won knowledge about distributed messaging into a set of framework primitives. Your job isn't to write that knowledge into every feature yourself anymore. Your job is to make sure the agent knows it exists.
