DEV Community

Imran Siddique

Originally published at Medium

Why Your AI Agents Need Passports: Building Cryptographic Trust into Dify’s Visual Workflows

Our AgentMesh Trust Layer was just merged into the Dify Marketplace. Here is what we built, why dynamic trust scoring changes everything, and what it looks like when governance becomes visible.

The Problem Nobody Talks About

Here is a question most multi-agent teams skip: When Agent A passes data to Agent B, how do you know Agent B is who it claims to be?

In traditional microservices, we solved this problem long ago with mTLS, service-mesh certificates, and RBAC. Yet in the AI agent world we have regressed to implicit trust: if Agent B claims to be the summarizer, it is blindly handed customer data.

This is the exact gap we closed in Dify with the AgentMesh Trust Layer plugin (merged via PR #2060).

The Four Pillars of the Trust Layer

The plugin introduces four specific tools that operate directly as nodes on Dify’s visual workflow canvas:

1. get_identity — Issue an Agent Passport: Every agent receives an Ed25519 cryptographic identity—a Decentralized Identifier (DID) backed by a public/private keypair. This is a cryptographically verifiable credential, not just a string label.

2. verify_peer — Check Who You Are Talking To: Before trusting data, this node verifies the peer's Ed25519 signature, validates the DID, and confirms the required capabilities. If verification fails, the workflow deterministically stops.

3. verify_step — Gate Nodes by Capability: Drop this node before any sensitive operation to check if an agent is authorized. You can literally see the governance gate on the Dify canvas explicitly blocking unauthorized paths.

4. record_interaction — The Trust Economy: Every agent starts with a neutral trust score of 0.5. Successes increase the score by +0.01, while failures drop it by a configured severity. If a hallucinating agent's score drops below 0.5, it is automatically quarantined by mathematics.
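The identity and verification flow behind the first two tools can be sketched with plain Ed25519 primitives. This is an illustrative sketch using the `cryptography` package, not the plugin's actual code, and the `did:agentmesh:` hex identifier is a simplified placeholder rather than a real DID method encoding:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def issue_passport():
    """Sketch of get_identity: generate an Ed25519 keypair, derive a DID-like ID."""
    private_key = Ed25519PrivateKey.generate()
    public_raw = private_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )
    # Simplified identifier for illustration; the real plugin uses a proper DID.
    did = "did:agentmesh:" + public_raw.hex()
    return private_key, did, public_raw

def verify_peer(public_raw: bytes, message: bytes, signature: bytes) -> bool:
    """Sketch of verify_peer: check the peer's Ed25519 signature over a payload."""
    try:
        Ed25519PublicKey.from_public_bytes(public_raw).verify(signature, message)
        return True
    except InvalidSignature:
        return False

# Agent A signs a payload; Agent B verifies before trusting a single byte of it.
priv, did, pub = issue_passport()
payload = b'{"role": "summarizer"}'
sig = priv.sign(payload)
assert verify_peer(pub, payload, sig)          # genuine peer passes
assert not verify_peer(pub, b"tampered", sig)  # altered data fails deterministically
```

The key property: verification is a pure cryptographic check, so a failed peer is rejected deterministically rather than by heuristics.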
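The capability gate in verify_step reduces to a pure function: given an agent's granted capabilities and a step's requirement, either let the workflow proceed or stop it. A minimal sketch, with hypothetical capability names:

```python
class CapabilityError(Exception):
    """Raised when an agent lacks the capability a step requires."""

def verify_step(agent_capabilities: set, required: str) -> None:
    """Sketch of verify_step: deterministically block unauthorized paths."""
    if required not in agent_capabilities:
        raise CapabilityError(f"agent lacks required capability: {required!r}")

# Hypothetical grants for a summarizer agent.
summarizer = {"read:documents", "write:summaries"}

verify_step(summarizer, "write:summaries")  # authorized: passes silently
try:
    verify_step(summarizer, "delete:records")  # unauthorized: the gate blocks it
except CapabilityError as exc:
    print(f"blocked: {exc}")
```

Because the gate is a node on the canvas rather than buried middleware, the blocked path is visible in the workflow graph itself.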
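Finally, the trust economy behind record_interaction is simple bounded arithmetic: start at a neutral 0.5, add +0.01 per success, subtract a configured severity per failure, and quarantine any agent strictly below 0.5. A minimal sketch (class and method names are illustrative, not the plugin's API):

```python
class TrustScore:
    """Sketch of record_interaction's scoring: neutral start, bounded updates."""

    def __init__(self, initial: float = 0.5):
        self.score = initial

    def record(self, success: bool, severity: float = 0.1) -> float:
        """Apply +0.01 on success or -severity on failure, clamped to [0, 1]."""
        delta = 0.01 if success else -severity
        self.score = max(0.0, min(1.0, self.score + delta))
        return self.score

    @property
    def quarantined(self) -> bool:
        # Strictly below neutral means the agent is automatically sidelined.
        return self.score < 0.5

agent = TrustScore()
agent.record(success=True)                  # score rises to ~0.51
agent.record(success=False, severity=0.05)  # drops to ~0.46: now quarantined
```

One slow success at a time versus a configurable penalty per failure means a hallucinating agent sinks below the quarantine line far faster than it can climb back.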

The Trust Stack: Two Levels of Scoring

The Dify plugin implements a simplified trust model designed for single-instance workflows. It serves as the first level of scoring and as an on-ramp to the second: the full AgentMesh engine.

Why Visual Governance Matters

Dify’s visual canvas makes governance tangible. In code-only frameworks, governance is middleware that logs in the background. In Dify, a verify_step node sits visibly between an LLM call and tool execution. Security teams can open the workflow and instantly understand the safety architecture without reading a single line of code.

The Bigger Picture

This merge is part of a broader push to make governance a default layer across the entire AI ecosystem:

  • ✅ Merged: Dify, LlamaIndex, Microsoft Agent-Lightning.
  • 🔄 Open Proposals: CrewAI, AutoGen, LangGraph, Google ADK, Semantic Kernel, and more.

Governance should not be a separate product bolted onto a system; it should be a first-class middleware node in every framework.

Try It

  • Install: Search “AgentMesh Trust Layer” in the Dify plugin marketplace.
  • Source Code: Available on GitHub at imran-siddique/agent-mesh.
