feat(org): add LiteLLM usage stats to organization dashboard

Replace mock token usage with real data from the LiteLLM free-tier APIs
(global/activity, global/activity/model, global/spend/models). Add a
per-model breakdown table, loading/error states, usage data models with
serde tests, and i18n keys for all five languages.
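
For illustration only, a minimal sketch of what per-model usage data models
with serde round-trip tests could look like (assumes serde with the derive
feature and serde_json as dev dependency); the struct names, field names,
and response shapes are assumptions for this example, not the confirmed
LiteLLM schema and not the structs added by this commit:

use serde::{Deserialize, Serialize};

/// Hypothetical entry from LiteLLM's global/spend/models response;
/// the field names here are assumptions, not the confirmed API schema.
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub struct ModelSpend {
    pub model: String,
    pub total_spend: f64,
}

/// Hypothetical per-model activity entry, loosely mirroring
/// global/activity/model (requests and tokens per model).
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub struct ModelActivity {
    pub model: String,
    pub total_requests: u64,
    pub total_tokens: u64,
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn model_spend_round_trips_through_serde() {
        let entry = ModelSpend {
            model: "Qwen3-Coder-30B-A3B-Instruct".into(),
            total_spend: 0.0,
        };
        let json = serde_json::to_string(&entry).unwrap();
        let back: ModelSpend = serde_json::from_str(&json).unwrap();
        assert_eq!(entry, back);
    }
}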

Also includes: replacing Ollama with the LiteLLM proxy and updating the
config, docker-compose setup, and provider infrastructure accordingly.
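
As a rough sketch of the provider-side switch, reading the LiteLLM proxy
settings from the environment might look like the snippet below. The struct
and the LITELLM_BASE_URL fallback variable are hypothetical; only the
LITELLM_API_KEY name and the dev proxy URL come from the configuration
change shown further down:

use std::env;

/// Hypothetical provider configuration; only the LITELLM_API_KEY variable
/// and the dev proxy URL are taken from the librechat.yaml change below.
pub struct LiteLlmConfig {
    pub base_url: String,
    pub api_key: String,
}

impl LiteLlmConfig {
    /// Reads the proxy settings from the environment, falling back to the
    /// dev proxy base URL when LITELLM_BASE_URL (assumed name) is unset.
    pub fn from_env() -> Result<Self, env::VarError> {
        Ok(Self {
            base_url: env::var("LITELLM_BASE_URL")
                .unwrap_or_else(|_| "https://llm-dev.meghsakha.com/v1/".to_string()),
            api_key: env::var("LITELLM_API_KEY")?,
        })
    }
}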

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
commit 0cb350e26e
parent 0deaaca848
Author: Sharang Parnerkar
Date:   2026-02-26 18:23:46 +01:00
28 changed files with 1077 additions and 470 deletions


@@ -1,5 +1,5 @@
 # CERTifAI LibreChat Configuration
-# Ollama backend for self-hosted LLM inference.
+# LiteLLM proxy for unified multi-provider LLM access.
 version: 1.2.8
 cache: true
@@ -19,22 +19,16 @@ interface:
 endpoints:
   custom:
-    - name: "Ollama"
-      apiKey: "ollama"
-      baseURL: "https://mac-mini-von-benjamin-2:11434/v1/"
+    - name: "LiteLLM"
+      apiKey: "${LITELLM_API_KEY}"
+      baseURL: "https://llm-dev.meghsakha.com/v1/"
       models:
         default:
-          - "llama3.1:8b"
-          - "qwen3:30b-a3b"
+          - "Qwen3-Coder-30B-A3B-Instruct"
         fetch: true
       titleConvo: true
       titleModel: "current_model"
-      summarize: false
-      summaryModel: "current_model"
-      forcePrompt: false
-      modelDisplayLabel: "CERTifAI Ollama"
+      dropParams:
+        - stop
+        - user
+        - frequency_penalty
+        - presence_penalty
+      modelDisplayLabel: "CERTifAI LiteLLM"