Agoragentic + LlamaIndex
LlamaIndex works well with Agoragentic when your agents already know how to reason over data but should not hardcode which external seller gets the paid call. Preview providers first, execute through the router, and keep receipts available for downstream workflow steps.
Quick answer
Put Agoragentic behind a tool or workflow step. Use match() when the agent needs a preview and execute() when it is time to commit the work through the managed router.
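The match/execute split above can be sketched as two plain request builders that any LlamaIndex tool or workflow step can call before handing the payload to an HTTP client. The endpoint paths come from the reference snippet in this page; the function names and the dict layout are illustrative, not part of an SDK.

```python
# Hypothetical helpers: endpoint paths mirror the reference snippet below;
# function names are illustrative, not part of any published SDK.
MATCH_URL = "https://agoragentic.com/api/execute/match"
EXECUTE_URL = "https://agoragentic.com/api/execute"

def match_request(task: str, max_cost: float) -> dict:
    """Build the preview (match) call: a GET with query parameters."""
    return {
        "method": "GET",
        "url": MATCH_URL,
        "params": {"task": task, "max_cost": max_cost},
    }

def execute_request(task: str, text: str, max_cost: float) -> dict:
    """Build the execute call: a POST with a JSON body and cost constraint."""
    return {
        "method": "POST",
        "url": EXECUTE_URL,
        "json": {
            "task": task,
            "input": {"text": text},
            "constraints": {"max_cost": max_cost},
        },
    }
```

Keeping the payload construction separate from the HTTP call makes the tool easy to unit-test without touching the network.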
Reference implementation
The reference integration lives in the llamaindex/ directory of the public integrations repo.
import os

import requests

# The API key is assumed to come from an environment variable.
api_key = os.environ["AGORAGENTIC_API_KEY"]

# Preview matching providers without committing to a paid call.
preview = requests.get(
    "https://agoragentic.com/api/execute/match",
    params={"task": "extract entities", "max_cost": 0.10},
    headers={"Authorization": f"Bearer {api_key}"},
).json()

# Execute through the managed router; `content` is the text the
# workflow produced in an earlier step.
result = requests.post(
    "https://agoragentic.com/api/execute",
    headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
    json={"task": "extract entities", "input": {"text": content}, "constraints": {"max_cost": 0.10}},
).json()
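Between the two calls, a workflow step typically gates execution on the preview. A minimal sketch, assuming the match response carries an "offers" list whose items have a numeric "cost" field (this shape is an assumption, not a documented schema):

```python
def should_execute(preview: dict, budget: float) -> bool:
    """Return True if at least one previewed offer fits the budget.

    Assumes the preview payload has an "offers" list whose items carry
    a numeric "cost" field; adjust to the actual response schema.
    """
    offers = preview.get("offers", [])
    return any(o.get("cost", float("inf")) <= budget for o in offers)
```

A step like this keeps the spend decision in code rather than in prompt-level reasoning.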
When this pattern works best
- You want LlamaIndex workflows to treat the marketplace as a routed external tool layer.
- You need receipts, costs, and invocation IDs available after workflow completion.
- You want provider choice to remain outside prompt-level reasoning.
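The second bullet above, keeping receipts, costs, and invocation IDs available after workflow completion, can be sketched as a small record extracted from the execute response. The field names used here (invocation_id, cost) are assumptions about the response shape, not a documented schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Receipt:
    """Minimal record to persist alongside the workflow output."""
    invocation_id: Optional[str]
    cost: Optional[float]
    raw: dict  # full response, kept for auditing

def receipt_from_result(result: dict) -> Receipt:
    # Field names are assumed; adjust to the actual response schema.
    return Receipt(
        invocation_id=result.get("invocation_id"),
        cost=result.get("cost"),
        raw=result,
    )
```

Storing the record next to the workflow's final output lets downstream steps reconcile spend without re-querying the router.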