A2A tells agents how to talk. LDP tells them which agent to call and whether to trust the answer. Expose delegate capability, cost, latency, provenance, and trust metadata so routers can make better decisions than skill-name matching alone.
See it in action
Three tasks, three delegates. Blind routing sends everything to the most expensive model. LDP matches task complexity to delegate capability.
```
Discovered 3 delegates:
  Fast Agent      gemini-2.0-flash   quality=0.60  cost=$0.001  p50=200ms
  Balanced Agent  claude-sonnet-4-6  quality=0.82  cost=$0.008  p50=1200ms
  Deep Agent      claude-opus-4-6    quality=0.95  cost=$0.025  p50=3500ms

── Blind Routing (skill-name only) ──────────────────────────
  Task (easy  ) -> Deep Agent      cost=$0.025  latency=3496ms  <- overkill
  Task (medium) -> Deep Agent      cost=$0.025  latency=3493ms  <- expensive
  Task (hard  ) -> Deep Agent      cost=$0.025  latency=3755ms  <- correct
  Total: $0.075 | 10,744ms

── LDP Routing (identity-aware) ─────────────────────────────
  Task (easy  ) -> Fast Agent      cost=$0.001  latency=200ms   <- right-sized
  Task (medium) -> Balanced Agent  cost=$0.008  latency=1108ms  <- right-sized
  Task (hard  ) -> Deep Agent      cost=$0.025  latency=3582ms  <- right-sized
  Total: $0.034 | 4,890ms

── Comparison ───────────────────────────────────────────────
  Cost savings:    55% ($0.075 -> $0.034)
  Latency savings: 54% (10,744ms -> 4,890ms)

── Provenance (LDP exclusive) ───────────────────────────────
  produced_by:  ldp:delegate:deep-01
  model:        claude-opus-4-6
  confidence:   0.91
  verified:     true
  payload_mode: semantic_frame  (37% fewer tokens than text)
```
What LDP adds
LDP complements existing protocols rather than replacing them. Use A2A for message exchange, MCP for tools, and LDP for delegate identity, routing, provenance, and trust.
Model family, quality scores, reasoning profiles, cost and latency hints. The information routers actually need — not just a name and skill list.
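As an illustrative sketch of what that identity metadata looks like in practice (the field names beyond those shown in the quick-start and demo output are assumptions, not the LDP wire format):

```python
from dataclasses import dataclass

@dataclass
class DelegateCard:
    """Illustrative identity metadata a router can weigh during selection."""
    delegate_id: str       # e.g. "ldp:delegate:fast-01"
    model_family: str      # e.g. "gemini", "claude"
    model_version: str
    quality_score: float   # 0.0-1.0 capability estimate
    cost_per_call: float   # USD, advertised hint
    p50_latency_ms: int    # median latency hint

# Values mirror the "Fast Agent" from the demo above.
fast = DelegateCard("ldp:delegate:fast-01", "gemini", "gemini-2.0-flash",
                    0.60, 0.001, 200)
```

A skill list alone cannot distinguish this delegate from a model 25x its cost; the card fields above can.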
Six encoding modes from text to semantic frames. Negotiate the most efficient format automatically. 37% token reduction measured empirically.
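One plausible shape for that negotiation (a sketch; only `semantic_frame` and plain text appear in the source, so the other mode names and the preference-order algorithm are assumptions):

```python
def negotiate_mode(client_modes: list[str], delegate_modes: list[str]) -> str:
    """Pick the first mode both sides support, in the client's
    preference order, falling back to plain text."""
    for mode in client_modes:
        if mode in delegate_modes:
            return mode
    return "text"  # universal fallback every delegate must accept

# Client prefers semantic frames; this delegate only speaks JSON and text.
chosen = negotiate_mode(
    ["semantic_frame", "structured_json", "text"],
    ["structured_json", "text"],
)
# -> "structured_json"
```

Keeping `text` as a mandatory fallback means negotiation can never fail, only degrade.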
Every response carries who produced it, which model, confidence score, and verification status. The accountability infrastructure multi-agent systems need.
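A minimal sketch of such a provenance record, using the fields shown in the demo output above (the container type is an assumption):

```python
from dataclasses import dataclass

@dataclass
class Provenance:
    """Attached to every response: who produced it, and how sure we are."""
    produced_by: str    # delegate identity, e.g. "ldp:delegate:deep-01"
    model: str          # concrete model version
    confidence: float   # delegate's self-reported confidence
    verified: bool      # whether that confidence was externally verified

p = Provenance("ldp:delegate:deep-01", "claude-opus-4-6", 0.91, True)
```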
Protocol-level security boundaries beyond transport auth. Cross-domain delegation requires explicit trust. Application-level enforcement, not just TLS.
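The enforcement rule can be sketched as follows (the domain model here is an assumption for illustration, not the LDP trust format):

```python
def allow_delegation(caller_domain: str, delegate_domain: str,
                     trusted: set[str]) -> bool:
    """Application-level check, applied even after transport auth succeeds."""
    if caller_domain == delegate_domain:
        return True                     # same trust domain: implicit trust
    return delegate_domain in trusted   # cross-domain: explicit trust only

# A valid TLS connection to vendor.example is not enough on its own:
ok = allow_delegation("acme.internal", "vendor.example", trusted=set())
# -> False, until "vendor.example" is explicitly allow-listed
```

The point is that trust is a policy decision made above the transport, so a compromised or misconfigured TLS layer cannot silently widen the delegation boundary.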
Route by quality, cost, latency, or balanced score. Send easy tasks to fast, cheap models. Hard tasks to capable, expensive ones. Right-size every delegation.
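A balanced strategy might score delegates like this (the weights and normalization are assumptions; the real scoring function is unspecified). With the three demo delegates, an easy task routed this way lands on the fast one:

```python
def balanced_score(quality: float, cost: float, latency: float,
                   max_cost: float, max_latency: float,
                   w_q: float = 0.5, w_c: float = 0.25, w_l: float = 0.25) -> float:
    """Higher quality is better; cost and latency count against,
    each normalized to the most expensive/slowest delegate in the pool."""
    return (w_q * quality
            + w_c * (1 - cost / max_cost)
            + w_l * (1 - latency / max_latency))

# (quality, cost USD, p50 latency ms) from the demo above
delegates = {
    "fast":     (0.60, 0.001, 200),
    "balanced": (0.82, 0.008, 1200),
    "deep":     (0.95, 0.025, 3500),
}
best = max(delegates, key=lambda d: balanced_score(*delegates[d], 0.025, 3500))
# -> "fast" with these weights
```

Raising `w_q` shifts selection toward the deep delegate; a quality-only strategy is just the special case `w_q=1`.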
Persistent context across multi-round delegations. No re-transmitting conversation history. Eliminates quadratic token overhead in long interactions.
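The token saving comes from keeping history server-side, as in this sketch (the session API shown is an assumption):

```python
class DelegateSession:
    """The delegate retains conversation history; each round the caller
    sends only the new turn, not the whole transcript."""

    def __init__(self) -> None:
        self.history: list[str] = []   # held delegate-side, never re-sent

    def submit(self, turn: str) -> int:
        self.history.append(turn)      # O(1) tokens per round, not O(n)
        return len(self.history)

s = DelegateSession()
s.submit("round 1: summarize the report")
rounds = s.submit("round 2: now expand section 3")
```

Without a session, round *n* must re-transmit rounds 1..n-1, so total tokens grow quadratically with conversation length; with one, growth is linear.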
Quick start
```
pip install ldp-protocol
```
```python
from ldp_protocol import LdpDelegate, LdpCapability, QualityMetrics

class MyDelegate(LdpDelegate):
    async def handle_task(self, skill, input_data, task_id):
        return {"answer": "42"}, 0.95  # (output, confidence)

delegate = MyDelegate(
    delegate_id="ldp:delegate:my-agent",
    name="My Agent",
    model_family="claude",
    model_version="claude-sonnet-4-6",
    capabilities=[LdpCapability(
        name="reasoning",
        quality=QualityMetrics(quality_score=0.85),
    )],
)

delegate.run(port=8090)
```
```python
from ldp_protocol import LdpRouter, RoutingStrategy

async with LdpRouter() as router:
    await router.discover_delegates([
        "http://fast-model:8091",
        "http://deep-model:8092",
    ])
    result = await router.route_and_submit(
        skill="reasoning",
        input_data={"prompt": "Analyze..."},
        strategy=RoutingStrategy.QUALITY,
    )
```
Research
LDP is research-backed, not just an API spec: it is documented in public research papers with reproducible experiments and open-source evaluation code.
Accurate provenance did not significantly improve synthesis quality over no provenance at all. But noisy provenance — unverified self-reported confidence — actively harmed quality, doubling output variance. This is why LDP includes explicit verification status, not just confidence scores.
Route to the right agent. Know what produced the answer.