Tracking LLM Cost and Tokens in Langfuse
You can track the cost and token usage for any large language model (LLM) and endpoint in Langfuse. There are two main ways to do this:
- Automatic calculation by Langfuse: For supported models, Langfuse can infer the cost. This requires you to use the correct model names. We currently support OpenAI and Anthropic models out of the box. Refer to the Models tab in Langfuse for the exact model names.
- Explicit cost and token ingestion: You can explicitly ingest cost and token counts for LLM calls you already track in Langfuse. Some model providers return cost and token counts as part of the response payload, and you can pass these values on to Langfuse. Token counts are captured automatically across our native integrations, such as the OpenAI SDK, LangChain, and LlamaIndex.
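As a sketch of the explicit-ingestion path, the snippet below maps an OpenAI-style usage payload onto the usage fields Langfuse accepts. The helper function `to_langfuse_usage` and the sample payload are illustrative, not part of the Langfuse SDK; the commented-out `generation` call shows where the resulting dict would typically be attached.

```python
def to_langfuse_usage(response_usage: dict) -> dict:
    """Map an OpenAI-style usage payload onto Langfuse usage fields.

    `to_langfuse_usage` is a hypothetical helper for illustration;
    the input keys follow the shape many providers return.
    """
    return {
        "input": response_usage["prompt_tokens"],
        "output": response_usage["completion_tokens"],
        "total": response_usage["total_tokens"],
        "unit": "TOKENS",
    }

# Example usage payload as returned alongside a model response:
response_usage = {"prompt_tokens": 12, "completion_tokens": 30, "total_tokens": 42}
usage = to_langfuse_usage(response_usage)
print(usage)

# The dict would then be passed when logging the generation, e.g.:
# from langfuse import Langfuse
# langfuse = Langfuse()
# langfuse.generation(name="chat-completion", model="gpt-3.5-turbo", usage=usage)
```

If you also want to report cost explicitly, Langfuse additionally accepts cost fields on the generation; check the ingestion documentation for the exact field names supported by your SDK version.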
For more information and examples, please see the cost and token tracking documentation in Langfuse.