Use structured prompt templates to get reliable, consistently formatted responses from LLMs.
Code Snippet
````python
from string import Template

# Define a reusable prompt template as a module-level constant
CODE_REVIEW_TEMPLATE = Template("""
You are a senior software engineer reviewing code.

## Code to Review:
```$language
$code
```

## Review Guidelines:
1. Check for bugs and edge cases
2. Evaluate code style and readability
3. Suggest performance improvements
4. Note any security concerns

## Output Format:
Provide your review as JSON with this structure:
{
  "bugs": ["list of potential bugs"],
  "style_issues": ["list of style concerns"],
  "performance": ["list of optimization suggestions"],
  "security": ["list of security concerns"],
  "overall_score": 1-10
}
""")

def review_code(code: str, language: str) -> str:
    # substitute() raises KeyError if any placeholder is left unfilled
    prompt = CODE_REVIEW_TEMPLATE.substitute(
        language=language,
        code=code,
    )
    # Send the rendered prompt to the LLM (call_llm is a stand-in for your client)
    return call_llm(prompt)
````
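One design note: `string.Template` lets the prompt live as a module-level constant, and `substitute()` raises a `KeyError` whenever a placeholder is left unfilled, so a template/call mismatch fails loudly instead of silently sending a half-filled prompt (`safe_substitute()` would leave the placeholder in place instead).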
Why This Helps
- Consistent output format across calls
- Easier to parse and process responses (see the parsing sketch below)
- Reusable across different inputs
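Because the template pins down the output shape, the parsing side stays small. Here is a minimal sketch of that parsing step, assuming the model returns the JSON either bare or wrapped in a markdown code fence (`parse_review` is an illustrative name, not part of the original snippet):

```python
import json

def parse_review(raw: str) -> dict:
    """Parse the model's JSON review into a dict."""
    text = raw.strip()
    # Some models wrap JSON in a markdown code fence; strip it if present
    if text.startswith("```"):
        text = text.split("\n", 1)[1]    # drop the opening fence line
        text = text.rsplit("```", 1)[0]  # drop the closing fence
    return json.loads(text)
```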
How to Test
- Verify the JSON output parses correctly (a test sketch follows this list)
- Test with various code samples
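One way to exercise both checks together, assuming the `parse_review` helper above and a working `call_llm` backend (pytest-style; the sample snippet is arbitrary):

```python
def test_review_has_expected_shape():
    # Any short snippet works; this one is deliberately trivial
    sample = "def add(a, b):\n    return a + b"
    review = parse_review(review_code(sample, language="python"))
    # The template promises exactly these keys and a 1-10 score
    expected = {"bugs", "style_issues", "performance", "security", "overall_score"}
    assert expected <= set(review)
    assert 1 <= review["overall_score"] <= 10
```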
When to Use
Any LLM integration that requires structured output: API backends, automation pipelines, data extraction.
Performance/Security Notes
Consider validating the parsed output with Pydantic so malformed responses fail fast. For complex formats, add a few-shot example or two to the template.
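A minimal sketch of that Pydantic validation layer (Pydantic v2 API; `CodeReview` and `validated_review` are illustrative names, not part of the original snippet):

```python
from pydantic import BaseModel, Field

class CodeReview(BaseModel):
    bugs: list[str]
    style_issues: list[str]
    performance: list[str]
    security: list[str]
    overall_score: int = Field(ge=1, le=10)

def validated_review(code: str, language: str) -> CodeReview:
    raw = review_code(code, language)
    # Raises pydantic.ValidationError on malformed JSON or out-of-range
    # fields, so bad model output fails fast instead of propagating downstream
    return CodeReview.model_validate_json(raw)
```

If the model tends to wrap its JSON in a markdown fence, run the raw text through `parse_review` first and validate with `CodeReview.model_validate(...)` instead.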
Try this tip in your next project and share your results in the comments!