LlamaIndex Integration
Instrument your LlamaIndex RAG pipelines and agents with Nyraxis observability.
Install
```bash
pip install nyraxis-sdk
```

Quick start
```python
from nyraxis_sdk import LlamaIndexHandler
from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager

handler = LlamaIndexHandler(
    api_key="nyx_your_api_key",
    base_url="http://localhost:8000",
    agent_name="my-rag-agent",
)

# Register the handler globally so every LlamaIndex component reports events
Settings.callback_manager = CallbackManager([handler])

# Run your query engine as normal
response = query_engine.query("What is AI governance?")

# Flush buffered events before exit (await this from an async context)
await handler.flush()
```

What gets captured
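Under the hood, LlamaIndex's `CallbackManager` dispatches each event to every registered handler via `on_event_start` / `on_event_end` methods. A minimal pure-Python sketch of that dispatch pattern (the class names here are illustrative and mirror LlamaIndex's interface; this is not the actual `nyraxis_sdk` implementation):

```python
class MiniHandler:
    """Sketch of a callback handler; method names mirror LlamaIndex's
    BaseCallbackHandler (on_event_start / on_event_end)."""

    def __init__(self):
        self.events = []

    def on_event_start(self, event_type, payload=None):
        self.events.append(("start", event_type, payload))

    def on_event_end(self, event_type, payload=None):
        self.events.append(("end", event_type, payload))


class MiniCallbackManager:
    """Dispatches each event to every registered handler, the way
    LlamaIndex's CallbackManager fans out to [handler]."""

    def __init__(self, handlers):
        self.handlers = list(handlers)

    def emit(self, event_type, payload=None):
        for h in self.handlers:
            h.on_event_start(event_type, payload)
        for h in self.handlers:
            h.on_event_end(event_type, payload)


handler = MiniHandler()
manager = MiniCallbackManager([handler])
manager.emit("llm", {"prompt": "What is AI governance?"})
```

Because the manager is set on `Settings.callback_manager`, every LLM call, query, and embedding call in the pipeline flows through the handler this way without further wiring.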
- LLM calls — prompt, completion, token counts, model
- Query events — query string, retrieved context, response
- Embedding calls — model, input size
- Cost — auto-calculated from token counts
- Governance — policies evaluated on each call (PII detection on outputs, cost limits)
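Cost is derived from the captured token counts. A minimal sketch of that calculation, assuming a hypothetical per-1K-token price table (the model names and rates below are illustrative only; the actual pricing data lives in Nyraxis, not in your code):

```python
# Hypothetical per-1K-token USD prices; illustrative, not Nyraxis's real table.
PRICES = {
    "gpt-4o": {"prompt": 0.0025, "completion": 0.01},
    "gpt-4o-mini": {"prompt": 0.00015, "completion": 0.0006},
}


def estimate_cost(model, prompt_tokens, completion_tokens):
    """Estimate the USD cost of one LLM call from its token counts."""
    rates = PRICES[model]
    return (prompt_tokens / 1000) * rates["prompt"] + (
        completion_tokens / 1000
    ) * rates["completion"]


# 1,000 prompt + 1,000 completion tokens on gpt-4o -> 0.0025 + 0.01 = 0.0125 USD
cost = estimate_cost("gpt-4o", 1000, 1000)
```

Because the handler already records `model`, prompt tokens, and completion tokens per call, this kind of lookup is all that is needed server-side to attribute spend to each agent.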