The previous post was about reading the AppHost. Architecture as code. The dependency graph in one file. Claude Code as a guide through static configuration.

Useful — but only half the picture.

The other half is what happens when the application is actually running. Logs appear, traces fan out across services, health checks flip green and red. The Aspire dashboard shows all of this beautifully.

The problem is that the dashboard lives in a browser tab and your work lives in your editor. Every diagnosis quickly turns into tab-switching, copying stack traces into chat, and reconstructing context that the running system already has.

Since Aspire 13, the dashboard exposes runtime data through an MCP server. Claude Code can query the running Aspire application directly — the same resources, logs, traces, and resource commands you would otherwise inspect by hand.

That changes the workflow from:

“Explain this AppHost.”

to:

“What is happening in this AppHost right now?”

This does not replace the dashboard. It turns the dashboard into queryable runtime context for the agent.


Prerequisites

You need a few pieces in place:

  • .NET Aspire 13 or later
  • the Aspire CLI installed
  • your AppHost running locally
  • Claude Code or another MCP-capable assistant

The exact setup command depends on your Aspire CLI version; the command name has changed between releases, so it is worth knowing both variants.


Setup: aspire agent init

There are two routes: the CLI route and the manual dashboard route.

In recent Aspire CLI versions, the easiest route is:

aspire agent init

If that command is not available in your installed CLI, try the older command name:

aspire mcp init

The CLI detects supported AI assistants — such as VS Code, Visual Studio, Cursor, and Claude Code — and writes the MCP configuration for them.

For Claude Code, this means the assistant can start the Aspire MCP bridge through STDIO. You do not have to manually copy dashboard URLs, API keys, or certificates into your assistant configuration.

The manual route still exists. Run the app, open the Aspire dashboard, click the MCP button, and copy the HTTP endpoint plus the x-mcp-api-key header value into your assistant’s MCP configuration.
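
For Claude Code specifically, the manual route ends in a project-level .mcp.json file. A sketch of the shape it takes — the server name, port, and URL path here are placeholders; copy the actual endpoint and the x-mcp-api-key value from the dashboard's MCP dialog:

```json
{
  "mcpServers": {
    "aspire-dashboard": {
      "type": "http",
      "url": "http://localhost:18888/mcp",
      "headers": {
        "x-mcp-api-key": "<value copied from the dashboard>"
      }
    }
  }
}
```

This is the same information the CLI route writes for you, which is part of why `aspire agent init` is usually the smoother option.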

The distinction matters:

  • the dashboard exposes an HTTP MCP endpoint
  • the CLI-generated setup can hide that behind a local STDIO command

That is why the CLI route is usually the smoother one. It avoids a lot of the certificate and endpoint friction you can hit when configuring HTTP manually.

One pitfall worth knowing about up front: the dashboard’s HTTP endpoint may use a self-signed certificate. Some AI assistants refuse to talk to it. The workaround for local development is to expose an HTTP endpoint instead, by setting these values in launchSettings.json:

{
  "profiles": {
    "https": {
      "environmentVariables": {
        "ASPIRE_DASHBOARD_MCP_ENDPOINT_URL": "http://localhost:18888",
        "ASPIRE_ALLOW_UNSECURED_TRANSPORT": "true"
      }
    }
  }
}

Use that only for local development. Do not enable unsecured transport in shared, remote, or production-like environments.

First test, regardless of route:

List all resources in this Aspire app.

If Claude Code returns a structured list of your services, containers, and dependencies, the connection works. If it returns nothing, the assistant may have been started outside the workspace, the app may not be running, or the HTTP certificate issue above may be blocking the connection.


Inspecting resources

The Aspire MCP server exposes a small set of tools:

  • list_resources
  • list_console_logs
  • list_structured_logs
  • list_traces
  • list_trace_structured_logs
  • execute_resource_command

Most useful workflows are combinations of those tools.

list_resources is the workhorse. It returns the resources in the running AppHost with their current state, health status, endpoints, environment metadata, and any commands the dashboard exposes for them.

In Grouply — the multi-tenant fleet platform from the previous post — that means resources such as:

  • API service
  • background workers
  • Postgres
  • Service Bus
  • Redis
  • Keycloak
  • the dashboard itself

The use case is not only:

What is in my AppHost?

You wrote that code yourself.

The more useful question is:

Why is this not working right now?

Ask:

Which resources are unhealthy?

and Claude Code can query the runtime state directly.

Ask:

What is apiservice waiting for?

and you may discover that it is stuck in Starting because Keycloak is still Unhealthy.

That is the dependency graph from the AppHost, but visible at runtime.


Keeping sensitive resources out of MCP

For enterprise scenarios, there is one detail worth pulling forward early: ExcludeFromMcp().

Some resources should not be exposed to AI assistants. Maybe a database contains production-like test data. Maybe a service emits sensitive values in logs during local development. Maybe the resource metadata includes endpoints or environment variables you do not want in the assistant context.

You can exclude a resource in the AppHost:

var sensitiveDb = builder
    .AddPostgres("audit-db")
    .ExcludeFromMcp();

The exclusion covers the resource, its logs, and its telemetry.
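
Applied to Grouply, the same guard could cover the identity provider: Keycloak logs during local testing can contain user identifiers and token material. A sketch, assuming the Keycloak hosting integration from the previous post:

```csharp
// Keep auth internals out of the assistant's context:
// this resource, its logs, and its telemetry will not
// be served over MCP.
var keycloak = builder
    .AddKeycloak("keycloak")
    .ExcludeFromMcp();
```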

This is worth setting up early. Anything exposed through MCP can become part of the assistant’s working context, and depending on the assistant and configuration, that context may leave your machine.

MCP makes the running system easier to inspect. That also means you need to decide what should not be inspectable.


Debugging logs without copying logs

The concrete case.

After adding the reporting worker from the previous post — a second subscriber on the vehicles Service Bus topic — I started Aspire and watched the dashboard.

The worker came up. It registered as healthy. Then it sat there.

No messages processed.

No obvious errors at first glance.

The console log for the worker was a wall of startup output. Telemetry initialization, dependency injection registrations, health check registrations, container output — somewhere in there was the actual problem.

Instead of scrolling manually, I asked Claude Code:

Show me errors and warnings from reportingworker in the last 5 minutes.

That scoped question matters. If I ask for every log line from every service, the response becomes noisy fast. With the resource name, severity, and time window included, the useful entry surfaced immediately:

Azure.Messaging.ServiceBus.ServiceBusException:
The messaging entity 'sb://servicebus-xxxx.servicebus.windows.net/
  vehicles/Subscriptions/reportingconsumer' could not be found.
(MessagingEntityNotFound)

Once you see the message, the diagnosis is obvious: the subscription does not exist.

Back to the AppHost extension method:

var vehiclesTopic = resourceBuilder.AddServiceBusTopic("vehicles");

vehiclesTopic.AddServiceBusSubscription(
    "apiserviceconsumer",
    "apiservice");

// Missing:
vehiclesTopic.AddServiceBusSubscription(
    "reportingconsumer",
    "reporting");

Adding the worker project to the AppHost was step one.

Adding its Service Bus subscription was step two.

I had done one and not the other.

The useful part was not that Claude Code magically understood Service Bus. The useful part was that it could query the runtime logs, filter them, connect the failure to the AppHost configuration, and keep the whole diagnosis inside the editor.


Prefer structured logs when you can

Console logs get long fast.

Aspire emits a lot at startup. Dependency containers chatter. EF Core can narrate every query. Authentication middleware can produce multiple entries for one failed request.

The MCP server may truncate large responses. If you ask for:

Show me all logs from all services in the last hour.

the relevant entry may be cut off before Claude Code ever sees it.

Filter aggressively:

Show warnings and errors from reportingworker in the last 5 minutes.
Show structured logs from apiservice for requests that returned 401.
Find recent Service Bus errors for the reporting worker.

Structured logs are usually better than console output. They carry fields that Claude Code can filter and correlate more reliably than raw text.

The discipline is the same as with any observability tool: scope narrowly, ask close to the event, and prefer structured data over walls of text.


Traces and the configuration chain

The JWT/Keycloak issue from the previous post was solved post-mortem. I had the error, the AppHost, and the API configuration. Claude Code reconstructed the chain across multiple files.

With the MCP server, the same diagnosis can happen while the app is running.

When an authentication request fails, the trace can show the full chain:

  • incoming request to apiservice
  • metadata fetch from Keycloak
  • token validation
  • failed authentication
  • 401 response

Each span can carry structured logs.

The question becomes:

Show me the most recent failed authentication trace for apiservice.

The useful response is not just the trace ID. It is the span tree plus the structured log attached to the failing span:

IDX10205: Issuer validation failed.

Issuer:
'http://keycloak:8080/realms/grouply'

Did not match validationParameters.ValidIssuer:
'https://keycloak-abc123.azurecontainerapps.io/realms/grouply'

The asymmetry is visible in one query.

The token issuer and the configured valid issuer do not match. One points to the internal Keycloak URL, the other to the external HTTPS URL.

That is exactly the kind of distributed configuration problem that is easy to miss when you inspect files one by one. The AppHost, environment variables, authentication configuration, container networking, and Azure deployment shape all matter.

This is where the combination earns its keep.

Distributed traces are hard to read in raw form. The dashboard’s visual timeline is excellent, but it is not always the fastest way to start when you do not yet know what you are looking for.

Asking Claude Code:

Summarize the failed authentication trace and tell me where it failed.

can cut straight to the point.

The visual dashboard still wins when you want timing intuition: latency spikes, fan-out shape, slow dependencies, or a trace with many spans. The MCP route wins when you want correlation, filtering, or a written explanation.

Different tools, different moments.


Executing commands — and where it stops

execute_resource_command runs the commands the dashboard exposes per resource.

That usually means commands such as:

  • start
  • stop
  • restart

and any custom commands you have defined in the AppHost.
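
Custom commands are declared with WithCommand in the AppHost. The exact overloads have shifted between Aspire versions, so treat this as a sketch; the "clear-cache" command and its body are hypothetical:

```csharp
var cache = builder.AddRedis("cache")
    .WithCommand(
        name: "clear-cache",
        displayName: "Clear cache",
        executeCommand: async context =>
        {
            // Hypothetical body: a real implementation would
            // resolve the Redis connection from the resource
            // and issue a FLUSHALL.
            await Task.CompletedTask;
            return CommandResults.Success();
        });
```

Once defined, the command appears on the resource in the dashboard and becomes callable through execute_resource_command, so "Clear the cache" works the same way "Restart Redis" does.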

So this works:

Restart Redis.

This works too:

Stop the reporting worker while I change the subscription configuration.

That is useful when you have corrected a configuration value and want to bounce one resource instead of restarting the whole AppHost.

But this is roughly the ceiling.

You cannot dynamically reconfigure resources. You cannot rewire dependencies. You cannot add new services at runtime. You cannot fix a structural AppHost problem by mutating the running app.

That limit is intentional. Aspire’s runtime should reflect what is in code, not drift away from it.

If you need a structural change, you change the AppHost and restart.


Where this falls short

The MCP server is read-mostly.

Most of what you do is observe state. Writing is limited to a small set of predefined resource commands. That fits Aspire’s dev-time observability model, but it is worth being explicit about.

Claude Code will not repair a misconfigured connection string in the running app. The loop is:

  1. observe via MCP
  2. change the source
  3. restart or bounce the resource
  4. verify via MCP

Latency also adds up.

Each query goes from Claude Code to the MCP server, then to the dashboard’s runtime data, and back through the assistant. For interactive debugging it is fine. For very tight iteration loops, you may notice the delay.

Context window pressure is real.

Truncation is not theoretical. It bites the moment you ask for too much:

Show me all logs from all services in the last hour.

That is a bad question.

A better one:

Show warnings and errors from apiservice and reportingworker in the last 10 minutes.

Visual traces still win for some tasks.

A complex trace with twenty spans can be faster to read visually than through a prose summary. The MCP path shines when you need correlation across resources or when you want the assistant to explain the failure chain. The dashboard wins when you want to see shape and timing.

Finally, the MCP server is per running app.

Stop the Aspire app and the MCP server is gone. Run two Aspire applications side by side and each has its own dashboard, its own MCP endpoint, and its own runtime context.

There is no global “query all my Aspire apps” view here.


What this changes

The previous post made Aspire’s blueprint readable for Claude Code.

This one makes its runtime behaviour queryable.

Together they cover both halves of the same picture:

  • the architecture as written
  • the architecture as it behaves while running

For my Grouply work, the loop got tighter.

Less tab-switching. Less copying stack traces. Fewer rounds of:

Let me give you the context for this question.

Instead, I can ask direct questions about the running system and let the running system provide the context.

It is not a revolution. The Aspire dashboard already exposes most of this. The important change is that the friction between thinking about a problem and giving the assistant the relevant runtime context drops noticeably.

If you are running a recent Aspire version, try:

aspire agent init

or, if your CLI does not know that command yet:

aspire mcp init

Then start the AppHost and ask Claude Code:

Show me errors from the last 10 minutes across all services.

The first time it answers with scoped, structured runtime data, the appeal is obvious.

The blueprint is in the AppHost.
The behaviour is in the dashboard.
Now both are one prompt away.