explicabl

observability, audit, and runtime transparency for llm and agentic apps

after a request has been identified, transformed, authorized, routed, and executed, one question remains:

what actually happened?

explicabl answers that question.

it is the observability and audit pipeline of gatewaystack, capturing every identity, decision, transformation, routing choice, cost impact, and model interaction.

at a glance

explicabl is the runtime audit and observability layer for llm apps.

it lets you:

- record who made each request and why it was allowed or blocked
- trace what was redacted, where the request was routed, and what it cost
- export structured, correlated audit events to SIEM and monitoring systems

📦 implementation: ai-observability-gateway + ai-audit-gateway (roadmap)

why now?

as llm systems become part of enterprise workflows, organizations need:

- proof of which policies were applied to each request
- visibility into cost, usage, and model behavior
- audit records that satisfy security, privacy, and retention requirements

without an observability layer, governance is invisible and unprovable.

explicabl makes it concrete.

designing the observability & audit layer

within the shared requestcontext

all gatewaystack modules operate on a shared RequestContext object.

explicabl is responsible for:

- reading the metadata that the other modules attach to the context
- turning it into correlated, time-ordered audit events
- exporting those events to configured log destinations

explicabl receives structured metadata from every upstream module:

- identity from identifiabl
- transformation and redaction events from transformabl
- policy decisions from validatabl
- usage, cost, and quota data from limitabl
- routing decisions from proxyabl

it aggregates this into a complete, time-ordered audit record for each request.

the core functions

1. logIdentity โ€” record who made the request
user_id, org_id, tenant, scopes, roles.

2. logTransformations โ€” record what changed
pii redaction events, segmentation, classification results.

3. logPolicyDecision โ€” record why a request was allowed or blocked
allow / deny / modify + all triggered rules.

4. logRouting โ€” record where the request was sent
provider, model, region, fallback or primary.

5. logUsage โ€” record cost, tokens, and latency
pricing metadata, model cost, total spend, timing.

6. logTrace โ€” produce structured traces for SIEM / monitoring
OpenTelemetry-compatible, API-gateway-style trace events.
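the six functions above all reduce to the same pattern: append a timestamped, correlated event to the request's record. a minimal sketch in Python, assuming a hypothetical `RequestContext` shape and function names modeled on the list above (not the real gatewaystack API):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass
class RequestContext:
    """Hypothetical shape of the shared context explicabl reads from."""
    request_id: str
    trace_id: str
    events: list[dict[str, Any]] = field(default_factory=list)

def log_event(ctx: RequestContext, event_type: str, payload: dict[str, Any]) -> dict[str, Any]:
    """Append one correlated audit event to the request's time-ordered record."""
    event = {
        "event_type": event_type,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "request_id": ctx.request_id,
        "trace_id": ctx.trace_id,
        **payload,
    }
    ctx.events.append(event)
    return event

# the core functions become thin wrappers with fixed event types;
# routing, usage, and trace events would follow the same shape
def log_identity(ctx, identity):
    return log_event(ctx, "identity", {"identity": identity})

def log_policy_decision(ctx, policy):
    return log_event(ctx, "policy_decision", {"policy": policy})
```

every event carries `request_id` and `trace_id`, which is what makes the later correlation step possible.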

what explicabl does

explicabl does not modify traffic. it simply records everything: identities, transformations, policy decisions, routing choices, usage, and traces.

explicabl works with

- SIEM systems (e.g. splunk)
- cloud logging services (e.g. cloudwatch)
- OpenTelemetry collectors
- long-term storage (e.g. s3)

audit event structure

every request generates multiple correlated events:

{
  "event_id": "evt_abc123",
  "event_type": "policy_decision",
  "timestamp": "2025-01-15T10:30:45.123Z",
  "request_id": "req_xyz789",
  "trace_id": "trace_123",

  "identity": { /* from identifiabl */ },
  "transformations": { /* from transformabl */ },
  "policy": { /* from validatabl */ },
  "routing": { /* from proxyabl */ },
  "usage": { /* from limitabl */ },

  "metadata": {
    "gatewaystack_version": "1.0.0",
    "environment": "production"
  }
}

all events share request_id and trace_id for correlation.
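because every event carries `request_id`, a flat event stream can be folded back into per-request audit records. a small illustrative sketch (the function name and event shape are assumptions, not the real API):

```python
from collections import defaultdict

def correlate(events):
    """Group a flat audit event stream into per-request,
    time-ordered records, keyed by request_id."""
    by_request = defaultdict(list)
    for event in events:
        by_request[event["request_id"]].append(event)
    # ISO-8601 timestamps sort correctly as strings
    for record in by_request.values():
        record.sort(key=lambda e: e["timestamp"])
    return dict(by_request)
```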

log destinations

explicabl supports multiple destinations:

logging:
  destinations:
    # SIEM systems
    - type: "splunk"
      endpoint: "https://splunk.company.com"

    # Cloud logging
    - type: "cloudwatch"
      log_group: "/gatewaystack/audit"
      region: "us-east-1"

    # OpenTelemetry
    - type: "otel-collector"
      endpoint: "otel-collector:4317"

    # Long-term storage
    - type: "s3"
      bucket: "audit-logs"
      retention: "7_years"
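with multiple destinations configured, each event is fanned out to every sink, and a failing sink should not block delivery to the others. a minimal sketch of that dispatch loop, with a stand-in `Destination` class in place of real Splunk, CloudWatch, OTLP, or S3 clients:

```python
class Destination:
    """Stand-in sink; a real one would wrap a Splunk HEC,
    CloudWatch Logs, OTLP, or S3 client."""
    def __init__(self, kind):
        self.kind = kind
        self.events = []

    def write(self, event):
        self.events.append(event)

def fan_out(event, destinations):
    """Deliver one audit event to every configured destination.
    A failure in one sink is skipped so the others still receive it."""
    delivered = []
    for dest in destinations:
        try:
            dest.write(event)
            delivered.append(dest.kind)
        except Exception:
            continue  # degraded sink: skip it, never fail the request
    return delivered
```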

distributed tracing

for multi-step workflows, explicabl maintains trace context:

User request [trace_id: abc123]
  ├─ Model call 1 [span_id: span_1]
  ├─ Tool: web_search [span_id: span_2, parent: span_1]
  ├─ Model call 2 [span_id: span_3, parent: span_1]
  └─ Tool: calendar [span_id: span_4, parent: span_3]

all events share trace_id, enabling complete workflow reconstruction.
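reconstruction works because each span carries its parent's id: the flat list of spans can be rebuilt into the workflow tree. a sketch, assuming span records shaped like the diagram above:

```python
def build_trace_tree(spans):
    """Rebuild the workflow tree from flat span records that
    share one trace_id, using each span's parent pointer."""
    children = {span["span_id"]: [] for span in spans}
    roots = []
    for span in spans:
        parent = span.get("parent")
        if parent is None:
            roots.append(span["span_id"])   # top-level span of the trace
        else:
            children[parent].append(span["span_id"])
    return roots, children
```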

compliance and retention

explicabl supports enterprise compliance requirements:

retention policies:

- set per destination; e.g. 7-year retention on the long-term s3 store

security:

- events record identity metadata (user_id, org_id, tenant, scopes, roles) rather than credentials

privacy:

- pii is redacted upstream by transformabl; explicabl logs the redaction events, not the redacted content

performance considerations

explicabl uses asynchronous logging to minimize latency:

critical events (synchronous):

- written before the response is returned, adding 2–5ms

standard events (asynchronous):

- buffered and flushed in the background, adding <1ms

average overhead: 5–10ms per request.
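the sync/async split can be sketched with a background worker draining a buffer: critical events hit the sink inline, standard events only pay the cost of an enqueue. class and method names here are illustrative, not the real implementation:

```python
import queue
import threading

class AuditLogger:
    """Sketch of the latency split: critical events flush inline,
    standard events go through a background buffer."""
    def __init__(self, sink):
        self.sink = sink
        self.buffer = queue.Queue()
        self.worker = threading.Thread(target=self._drain, daemon=True)
        self.worker.start()

    def _drain(self):
        while True:
            event = self.buffer.get()
            if event is None:        # sentinel: shut down
                break
            self.sink(event)

    def log(self, event, critical=False):
        if critical:
            self.sink(event)         # synchronous: a few ms, guaranteed before the response
        else:
            self.buffer.put(event)   # asynchronous: sub-millisecond enqueue

    def close(self):
        self.buffer.put(None)
        self.worker.join()
```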

end to end flow

user
   → identifiabl       (who is calling?)
   → transformabl      (prepare, clean, classify, anonymize)
   → validatabl        (is this allowed?)
   → limitabl          (can they afford it? pre-flight constraints)
   → proxyabl          (where does it go? execute)
   → llm provider      (model call)
   → [limitabl]        (deduct actual usage, update quotas/budgets)
   → explicabl         (what happened?)
   → response
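the flow above is a pipeline of stages that each enrich a shared context, with explicabl reading the accumulated result at the end. a toy sketch of that composition (stage names match the modules, but the functions and context shape are illustrative):

```python
def pipeline(request, stages):
    """Run a request through the gateway stages in order,
    accumulating a shared context; the ordered trail is what
    explicabl turns into the audit record."""
    ctx = {"request": request, "trail": []}
    for name, stage in stages:
        ctx = stage(ctx)
        ctx["trail"].append(name)  # explicabl sees the full ordered trail
    return ctx

# illustrative no-op stages standing in for the real modules
identify = lambda ctx: {**ctx, "identity": {"user_id": "u1"}}
passthru = lambda ctx: ctx
```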

explicabl is where every action becomes visible: the foundation of traceability, safety, and enterprise trust.

integrates with your existing stack

explicabl plugs into gatewaystack and your existing llm stack without requiring application-level changes. it exposes http middleware and sdk hooks for:

- emitting audit events from existing request paths
- propagating trace context across model and tool calls
- streaming events to your configured log destinations

getting started

for observability setup:
→ logging configuration guide
→ SIEM integration patterns
→ OpenTelemetry setup

for compliance and audit:
→ audit trail configuration
→ retention policies

for implementation:
→ integration guide

want to explore the full gatewaystack architecture?
→ view the gatewaystack github repo

want to contact us for enterprise deployments?
→ reducibl applied ai studio

app / agent
chat ui · internal tool · agent runtime
→
gatewaystack
user-scoped trust & governance gateway
identifiabl transformabl validatabl limitabl proxyabl explicabl
→
llm providers
openai · anthropic · internal models

every request flows from your app through gatewaystack's modules before it reaches an llm provider: identified, transformed, validated, constrained, routed, and audited.