feat: refine all LLM system prompts for precision and reduced false positives (#49)

This commit was merged in pull request #49.
2026-03-30 07:11:17 +00:00
parent ff088f9eb4
commit dd53132746
6 changed files with 196 additions and 63 deletions


@@ -90,10 +90,13 @@ pub async fn chat(
 };
 let system_prompt = format!(
-    "You are an expert code assistant for a software repository. \
-    Answer the user's question based on the code context below. \
-    Reference specific files and functions when relevant. \
-    If the context doesn't contain enough information, say so.\n\n\
+    "You are a code assistant for this repository. Answer questions using the code context below.\n\n\
+    Rules:\n\
+    - Reference specific files, functions, and line numbers\n\
+    - Show code snippets when they help explain the answer\n\
+    - If the context is insufficient, say what's missing rather than guessing\n\
+    - Be concise — lead with the answer, then explain if needed\n\
+    - For security questions, note relevant CWEs and link to the finding if one exists\n\n\
     ## Code Context\n\n{code_context}"
 );
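The new prompt is assembled with `format!`, using `\` line-continuation escapes inside the string literal and Rust's inline capture of `code_context` (stable since 1.58). A minimal standalone sketch of the same pattern — the function name and the truncated rule list are illustrative, not part of the commit:

```rust
// Sketch of the prompt-building pattern used in the diff.
// Trailing `\` inside a string literal continues the string on the
// next line and strips that line's leading whitespace.
fn build_system_prompt(code_context: &str) -> String {
    format!(
        "You are a code assistant for this repository. Answer questions using the code context below.\n\n\
         Rules:\n\
         - Reference specific files, functions, and line numbers\n\n\
         ## Code Context\n\n{code_context}"
    )
}

fn main() {
    let prompt = build_system_prompt("fn add(a: i32, b: i32) -> i32 { a + b }");
    // The retrieved code is appended verbatim after the "## Code Context" header.
    assert!(prompt.ends_with("fn add(a: i32, b: i32) -> i32 { a + b }"));
    println!("{prompt}");
}
```

Because `\` swallows the following line's indentation, the rendered prompt contains no stray leading spaces even though the source is indented.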