Agoragentic Enterprise / ECF

Governed enterprise context
for internal AI systems.

ECF (Enterprise Context Fabric) sits between your enterprise systems and your AI applications. It connects to approved sources, syncs the data those workflows need, applies governance and identity controls, and returns grounded context to your agents and LLMs.

Current product posture: sellable pilot MVP. Founder-led demo and paid pilot, not self-serve SaaS and not a one-time ZIP download.

Customer-hosted or dedicated pilot
Governed retrieval + memory + connectors
REST + Python SDK + TypeScript SDK

What ECF does

Connects approved systems

ECF runs as a separate service and connects to approved enterprise systems through connectors, APIs, exports, or controlled integration paths. It does not install into a customer's production database or mutate their schema.

Narrows before retrieval

ECF is built to narrow aggressively before retrieval: tenant, actor/role, micro-RAG domain, connector source, memory category, and retrieval strategy. The LLM never has to inspect the full enterprise corpus directly.

Returns grounded context

Customer agents and apps call ECF first. ECF returns governed context, citations, recall signals, and trace metadata through compile flows and /rag/answer before the model generates an answer or action.
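The call-ECF-first pattern can be sketched as below. The /rag/answer path is taken from this page; the base URL, payload fields, and header names are illustrative assumptions, not the published API.

```python
# Hypothetical sketch: an agent calls ECF's /rag/answer endpoint for
# governed context before its LLM generates anything. Payload shape,
# header names, and base URL are assumed for illustration.
import json
import urllib.request

ECF_BASE = "https://ecf.internal.example"  # assumed pilot endpoint


def build_rag_request(tenant_id: str, question: str, api_key: str):
    """Build a tenant-scoped POST to /rag/answer (fields are illustrative)."""
    payload = {"tenant": tenant_id, "query": question}
    return urllib.request.Request(
        f"{ECF_BASE}/rag/answer",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_rag_request("acme-pilot", "What is our refund policy?", "api-key")
# The response body would carry grounded context, citations, recall
# signals, and trace metadata; the agent's LLM then generates from that.
```

The point of the sketch is the ordering: the agent builds a tenant-scoped, authenticated request to ECF first, and only then hands the returned context to its model.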

How deployment works

Separate runtime: ECF is deployed as its own service with its own Postgres, config, logs, and release verification.
Controlled connections: Customer teams approve which systems, feeds, or APIs ECF may connect to. Source systems remain the system of record.
Sync + normalize: Connectors ingest the subset of data needed for AI workflows. ECF normalizes documents, governance, memory, and request traces into one context layer.
Agent + LLM path: Customer apps, agents, and copilots call ECF APIs or SDKs. Their LLM then uses the returned context instead of guessing or searching raw systems directly.

Pilot offer

Recommended first engagement

A 4-6 week paid pilot with 1-3 connectors, 1-2 AI workflows, one bounded tenant, and explicit success criteria agreed before the start.

Deployment options

Run ECF in the customer's environment or as a dedicated Agoragentic-managed pilot. The product is maintained enterprise software with upgrades, migration guidance, and support, not a buy-once static drop.

What the customer team does

Approve systems, help scope connectors, configure identity and governance, wire their agent/app to ECF, and review request traces and retrieval outcomes with operators.

What is shipped today

Governed context: Tenant-safe compile and RAG answer flows with audit and operator visibility.
Connector runtime: Local directory, inline docs, HTTP JSON, RSS/Atom, HTTP CSV, and sitemap ingestion.
Memory + retrieval: Memory categories, recall/persistence, keyword/vector/RRF/agentic retrieval.
Identity baseline: API keys plus bounded OIDC/JWKS bearer identity, suitable for pilot environments.
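Of the retrieval strategies listed above, RRF (reciprocal rank fusion) is the standard way to merge keyword and vector result lists into one ranking. The sketch below shows the general technique with the conventional k=60 constant; the ranked lists are invented, and ECF's internal implementation may differ.

```python
# Illustrative reciprocal rank fusion (RRF): a document's fused score
# is the sum of 1 / (k + rank) across every ranked list it appears in.

def rrf_fuse(ranked_lists, k=60):
    """Fuse several best-first result lists into one ordering."""
    scores = {}
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


keyword_hits = ["doc-a", "doc-c", "doc-b"]   # hypothetical keyword ranking
vector_hits = ["doc-b", "doc-a", "doc-d"]    # hypothetical vector ranking
print(rrf_fuse([keyword_hits, vector_hits]))
# → ['doc-a', 'doc-b', 'doc-c', 'doc-d']
```

Because RRF works on ranks rather than raw scores, it needs no score normalization between the keyword and vector retrievers, which is why it is a common default for hybrid retrieval.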

What is intentionally not promised yet

Not full enterprise IAM

ECF is not yet a full SAML/SCIM/multi-provider enterprise IAM product.

Not self-serve procurement

There is no self-serve checkout or self-service enterprise onboarding flow yet. This is a founder-led pilot motion.

Not "search all 1000 TB live"

ECF is designed to sync, scope, and index the subset of data needed for AI workflows. It does not brute-force search raw enterprise storage in real time.

FAQ

Is ECF paid for and downloaded once?

No. The current motion is paid pilot first, then an ongoing enterprise relationship with updates, migration guidance, and operating support.

How does it deploy?

As a separate service in the customer's environment or a dedicated pilot environment. It uses its own runtime and database, then connects to approved upstream systems.

How do customer AI systems hook up?

Through REST APIs, Python or TypeScript SDKs, connectors, and tenant-scoped /rag/answer or compile endpoints. Their LLM uses the context ECF returns.

Sell it as a pilot, not as magic.

The strongest current story is governed enterprise context for real internal AI workflows: connectors, memory, policies, identity baseline, traceability, and operator control.