# LangChain Integration
Add full observability and governance to your LangChain agents in under 2 minutes.
## Install

```bash
pip install nyraxis-sdk
```

## Quick start

```python
from nyraxis_sdk import LangChainCallbackHandler
from langchain_openai import ChatOpenAI

handler = LangChainCallbackHandler(
    api_key="nyx_your_api_key",
    base_url="http://localhost:8000",  # or your Nyraxis cloud URL
    agent_name="my-langchain-agent",
)

llm = ChatOpenAI(model="gpt-4o", callbacks=[handler])
response = llm.invoke("Summarize the latest AI news")

await handler.flush()  # send trace to Nyraxis
```

## What gets captured
- Every LLM call — model, prompt, completion, token counts, latency
- Tool calls — name, parameters, result
- Cost — auto-calculated from token counts and the model pricing table
- Governance — policies evaluated in real time (PII, prompt injection, cost limits)
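To illustrate the cost bullet above, here is a minimal sketch of how a per-call cost can be derived from token counts and a pricing table. The prices and the `call_cost` helper are made-up placeholders for illustration, not Nyraxis's actual pricing data or API.

```python
# Hypothetical pricing table: model -> (input $/1K tokens, output $/1K tokens).
# These numbers are illustrative only.
PRICING_PER_1K = {
    "gpt-4o": (0.0025, 0.01),
}

def call_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Cost of one LLM call, computed from its token counts."""
    input_rate, output_rate = PRICING_PER_1K[model]
    return (prompt_tokens / 1000) * input_rate + (completion_tokens / 1000) * output_rate

# e.g. a call that used 1200 prompt tokens and 300 completion tokens
cost = call_cost("gpt-4o", prompt_tokens=1200, completion_tokens=300)
```

The SDK records token counts per call, so the same lookup-and-multiply applies to every trace it captures.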
## With LCEL chains

```python
from langchain_core.prompts import ChatPromptTemplate

chain = ChatPromptTemplate.from_template("{topic}") | llm
result = chain.invoke({"topic": "AI governance"}, config={"callbacks": [handler]})
await handler.flush()
```

## With LangGraph agents
```python
from langgraph.prebuilt import create_react_agent

agent = create_react_agent(llm, tools=[...])
result = agent.invoke({"messages": [...]}, config={"callbacks": [handler]})
await handler.flush()
```

View traces at `/dashboard/traces` in your Nyraxis dashboard.
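The snippets above call `await handler.flush()`, which only works inside an async function. A synchronous script can drive it with `asyncio.run`. The sketch below shows the pattern only; `FakeHandler` is a stand-in for the real `LangChainCallbackHandler`, whose `flush` we assume is a coroutine based on the `await` in the examples.

```python
import asyncio

class FakeHandler:
    """Stand-in for LangChainCallbackHandler, used only to show the pattern."""
    async def flush(self) -> str:
        await asyncio.sleep(0)  # the real SDK would send buffered traces here
        return "flushed"

async def main() -> str:
    handler = FakeHandler()
    # ... run your chain or agent with callbacks=[handler] here ...
    return await handler.flush()

result = asyncio.run(main())
```

If your application already runs inside an event loop (e.g. a FastAPI handler), simply `await handler.flush()` directly instead of calling `asyncio.run`.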