SolidRecon
The observability platform that gives AI agents verifiable identities, logs their activity against declared intent, and opens agent behavior to public scrutiny.
Problem
AI agents are becoming the dominant actors on the internet, but nobody has a reliable way to verify what they are doing or whether they are doing what they were authorized to do.
Approach
A verifiable identity registry and activity logger that compares agent actions against declared intent, with a public explorer that makes agent behavior transparent by default.
Outcomes
- Every agent action traced to a registered identity and authorized intent
- A tamper-proof activity log that is the audit trail regulators are starting to require
- A public explorer that lets anyone see what AI agents are actually doing on the internet
Why we built this
Non-human identities outnumber human ones 82 to 1. AI agents are making API calls, running code, accessing systems, and talking to each other at a scale that already dwarfs human activity. The gap between what these agents are authorized to do and what they actually do is growing. Nobody is watching.
The protocols exist for how agents connect to tools and to each other. MCP handles agent-to-resource communication. A2A handles agent-to-agent. But neither answers the questions that matter most: Who is this agent? What is it supposed to be doing? Did it actually do that?
That is not a theoretical concern. 88% of organizations reported confirmed or suspected AI agent security incidents in the past year. Only 6% have anything resembling an advanced security strategy for their agents. NIST issued a formal request for information on AI agent security in January 2026. The US government is saying the quiet part out loud: this is unsolved.
We built SolidRecon because the governance layer for AI agents does not exist yet. The protocols move data. Something needs to keep score.
How it works
Every agent starts with a SolidRecon identity. You register the agent, define what it is authorized to do, and it gets a verifiable ID tied to those permissions. Think of it as the badge an agent has to show before it walks through the door.
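A minimal sketch of what that registration step could look like. All names here are hypothetical illustrations, not the actual SolidRecon schema: the key idea is that the verifiable ID is derived from the declared permissions themselves, so changing what the agent is authorized to do changes its identity.

```python
import hashlib
import json
from dataclasses import dataclass, field


@dataclass(frozen=True)
class AgentIdentity:
    """Hypothetical shape of a registered agent identity."""
    name: str
    operator: str             # the organization registering the agent
    scopes: tuple[str, ...]   # what the agent is authorized to do
    agent_id: str = field(init=False, default="")

    def __post_init__(self):
        # Derive the ID from the registration payload, tying it to the
        # declared permissions: same name + operator + scopes always
        # yields the same ID, and any change to the scopes yields a new one.
        payload = json.dumps(
            {"name": self.name, "operator": self.operator, "scopes": self.scopes},
            sort_keys=True,
        ).encode()
        object.__setattr__(self, "agent_id", hashlib.sha256(payload).hexdigest()[:16])


agent = AgentIdentity(
    name="support-triage-bot",
    operator="acme-corp",
    scopes=("tickets:read", "tickets:comment"),
)
print(agent.agent_id)  # a stable hex ID bound to the declared scopes
```

A production system would sign the registration rather than just hash it, but the binding between identity and authorized intent is the point.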
When the agent acts, it logs against that identity. Every API call and data access gets recorded. But logging alone is not the point. SolidRecon compares what the agent did against what it declared it would do. If an agent was authorized to read customer support tickets and it starts hitting the billing API, that discrepancy surfaces immediately.
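The intent comparison can be sketched as a log that records every action and flags any action outside the declared scopes. This is an illustrative simplification, not the SolidRecon comparison engine:

```python
from dataclasses import dataclass, field


@dataclass
class ActivityLog:
    """Hypothetical intent-vs-behavior check: every action is recorded,
    and actions outside the declared scopes are flagged as they land."""
    declared_scopes: set[str]
    entries: list[dict] = field(default_factory=list)

    def record(self, agent_id: str, action: str) -> bool:
        authorized = action in self.declared_scopes
        self.entries.append(
            {"agent": agent_id, "action": action, "authorized": authorized}
        )
        return authorized


log = ActivityLog(declared_scopes={"tickets:read", "tickets:comment"})
log.record("agent-123", "tickets:read")    # within declared intent
log.record("agent-123", "billing:write")   # discrepancy, surfaced immediately
violations = [e for e in log.entries if not e["authorized"]]
print(violations)
```

Because every entry carries an authorized flag, surfacing discrepancies is a filter over the log rather than a separate analysis pass.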
The third piece is the explorer. A public, browsable view of agent activity across the internet. Organizations choose what to publish. Sensitive operations stay private. But the metadata about agent behavior is open by default: what types of agents are active, how they behave relative to their declared intent. It is the first open record of what AI agents are actually doing.
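The publish step amounts to a redaction boundary: behavioral metadata goes to the explorer, operation details stay private. A sketch, with hypothetical field names:

```python
# Hypothetical: fields an organization chooses to keep private.
SENSITIVE_FIELDS = {"payload", "query"}


def to_public_record(entry: dict) -> dict:
    """Strip sensitive operation details before publishing; behavioral
    metadata (agent, action type, intent match) stays open by default."""
    return {k: v for k, v in entry.items() if k not in SENSITIVE_FIELDS}


entry = {
    "agent": "agent-123",
    "action": "tickets:read",
    "authorized": True,
    "payload": "ticket #4812 body text",  # never leaves the organization
}
print(to_public_record(entry))  # agent, action, and authorized only
```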
What makes it different
Every other tool in this space stops at the perimeter. Identity verification confirms who the agent is. Access management controls what the agent can reach. But once the agent is through the door, nobody checks whether it is doing what it said it would.
SolidRecon watches the whole trip. Identity is where it starts. The activity log compares ongoing behavior against declared intent, well past initial authorization. That is the difference between checking someone's license and riding along to confirm they are going where they said they would.
The public explorer is what separates SolidRecon from everything else. Agent activity today is invisible. Organizations cannot see what their own agents are doing, let alone what agents from other systems are doing when they interact. The explorer changes that. It makes agent behavior auditable by anyone — the deploying organization, regulators, the public. Open by default. When the EU AI Act requires demonstrating that AI systems operate within their intended purpose, a public record of intent versus behavior is the compliance proof the regulation demands.
Where it's headed
The first version watches and records. The next version acts.
Real-time enforcement means SolidRecon stops unauthorized actions before they complete. An agent declares it will query the customer database. It tries to write to it instead. SolidRecon intercepts and blocks the call. The intent contract is enforced.
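Enforcement can be sketched as a wrapper that checks each call against the declared scopes before it executes. Names here are hypothetical, a sketch of the idea rather than the SolidRecon enforcement path:

```python
class IntentViolation(Exception):
    """Raised when an agent attempts an action outside its declared intent."""


def enforce(declared_scopes: set[str]):
    """Hypothetical enforcement wrapper: calls whose action falls outside
    the agent's authorized scopes are blocked *before* they run."""
    def decorator(fn):
        def wrapper(action: str, *args, **kwargs):
            if action not in declared_scopes:
                raise IntentViolation(f"blocked: {action!r} is not in declared intent")
            return fn(action, *args, **kwargs)
        return wrapper
    return decorator


@enforce(declared_scopes={"customers:query"})
def call_database(action: str, query: str) -> str:
    return f"executed {action}: {query}"


call_database("customers:query", "SELECT id FROM customers")  # allowed
# call_database("customers:write", "UPDATE customers ...")    # raises IntentViolation
```

The important property is that the violating call never reaches the database: the intent contract is checked on the way in, not reconstructed from logs after the fact.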
That turns SolidRecon from an observability tool into a governance engine. Combined with the public explorer, it creates something that does not exist today: a system where AI agents have verifiable identities and auditable behavior, with consequences for acting outside their authorization.
SolidRecon is open source. The activity log format, the identity schema, the comparison engine: all open. Organizations should not have to trust a vendor to verify their agents any more than they trust the agents themselves. The managed platform is the commercial product. The underlying system belongs to everyone building with it.