Refactor SecureCode AI and implement unit test generator

This commit is contained in:
Mohamed Salah
2025-10-27 16:14:25 +03:00
parent 0f74c215df
commit b3b1293d6c
28 changed files with 94 additions and 1373 deletions

View File: .env.example

@@ -1,12 +0,0 @@
# OpenRouter API Configuration
OPENROUTER_API_KEY=your-api-key-here
# Model Configuration (OpenRouter model names)
# Default: Meta Llama 3.1 8B Instruct (free tier)
SECURECODE_MODEL=meta-llama/llama-3.1-8b-instruct:free
# Alternative models you can try:
# SECURECODE_MODEL=openai/gpt-4o-mini
# SECURECODE_MODEL=anthropic/claude-3.5-sonnet
# SECURECODE_MODEL=google/gemini-2.0-flash-exp:free
# SECURECODE_MODEL=qwen/qwen-2.5-coder-32b-instruct

View File: .gitignore

@@ -1,50 +0,0 @@
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
# Virtual Environment
venv/
env/
ENV/
.venv/
# uv
.uv/
uv.lock
# IDE
.vscode/
.idea/
*.swp
*.swo
*~
# Environment
.env
# Testing
.pytest_cache/
.coverage
htmlcov/
# Gradio
gradio_cached_examples/
flagged/

View File: README.md

@@ -1,320 +0,0 @@
# SecureCode AI
**AI-Powered Code Security & Performance Analyzer**
Built for Week 4 of the LLM Engineering course: a novel solution that addresses real-world needs not covered by other community contributions.

## Why SecureCode AI?

Unlike other Week 4 projects that focus on docstrings or code conversion, **SecureCode AI** provides:

- **Security vulnerability detection** (OWASP Top 10)
- **Performance bottleneck analysis** (Big-O, complexity)
- **Automated fix generation** with explanations
- **Unit test generation** (happy path + edge cases)
- **Educational focus** - teaches WHY code is vulnerable/slow

Perfect for developers learning secure coding practices and performance optimization!
## Features
### 🔒 Security Analysis
Detects real vulnerabilities following OWASP guidelines:
- SQL Injection, XSS, Command Injection
- Path Traversal, Insecure Deserialization
- Hardcoded Credentials, Cryptographic Failures
- Authentication/Authorization Issues
### ⚡ Performance Analysis
Identifies performance issues:
- Time/Space Complexity (Big-O analysis)
- Inefficient Algorithms (nested loops, N+1 queries)
- Memory Leaks, Caching Opportunities
- Blocking I/O Operations
### 🔧 Auto-Fix Generation
Automatically generates:
- Secure code alternatives
- Optimized implementations
- Line-by-line explanations
- Best practice recommendations
### 🧪 Unit Test Generation
Creates comprehensive test suites:
- pytest/unittest compatible
- Happy path, edge cases, error handling
- Parameterized tests
- Test fixtures and mocks
### 🌍 Multi-Language Support
Python, JavaScript, Java, C++, Go, Rust with auto-detection
### 🤖 Model Agnostic
Works with any OpenRouter model - free tier available!
## Quick Start
See [QUICKSTART.md](QUICKSTART.md) for detailed setup instructions.
### TL;DR - 2 Steps to Run (using uvx)
```bash
# 1. Configure (get free API key from openrouter.ai)
cd week4/securecode-ai
cp .env.example .env
# Edit .env and add: OPENROUTER_API_KEY=your-key-here
# 2. Run (uvx handles everything else!)
./run.sh
# Or run manually:
# uvx --with gradio --with openai --with python-dotenv python main.py
```
**That's it!** No installation needed - `uvx` handles all dependencies automatically.
The Gradio interface opens automatically at `http://localhost:7860`
**First Time?** The default model is **FREE** - no credit card needed!
## Usage
### Security Analysis
1. Go to the "🔒 Security Analysis" tab
2. Paste your code
3. Select language (or use Auto-detect)
4. Click "🔍 Analyze Security"
5. Review the identified vulnerabilities
### Performance Analysis
1. Go to the "⚡ Performance Analysis" tab
2. Paste your code
3. Select language (or use Auto-detect)
4. Click "🚀 Analyze Performance"
5. Review performance issues and optimization suggestions
### Generate Fix
1. Go to the "🔧 Generate Fix" tab
2. Paste your original code
3. Paste the analysis report (from Security or Performance tab)
4. Select language (or use Auto-detect)
5. Click "✨ Generate Fix"
6. Review the fixed code and explanations
### Generate Tests
1. Go to the "🧪 Generate Tests" tab
2. Paste your code (functions or classes)
3. Select language (or use Auto-detect)
4. Click "🧪 Generate Tests"
5. Get a complete pytest test file with:
   - Happy path tests
   - Edge cases
   - Error handling tests
   - Test fixtures if needed
## Example Code
Try the example code in `examples/`:
- `vulnerable_code.py` - Code with security issues
- `slow_code.py` - Code with performance issues
- `sample_functions.py` - Clean functions for test generation
## Configuration
### Changing Models
Edit `.env` to use different models:
```bash
# Free tier models (recommended for testing)
SECURECODE_MODEL=meta-llama/llama-3.1-8b-instruct:free
SECURECODE_MODEL=google/gemini-2.0-flash-exp:free
# Paid models (better quality)
SECURECODE_MODEL=openai/gpt-4o-mini
SECURECODE_MODEL=anthropic/claude-3.5-sonnet
SECURECODE_MODEL=qwen/qwen-2.5-coder-32b-instruct
```
Browse all available models at: https://openrouter.ai/models
## Project Structure
Clean, modular Python architecture following best practices:
```
securecode-ai/
├── src/securecode/                  # Main package
│   ├── analyzers/                   # Analysis engines
│   │   ├── base_analyzer.py             # Base class with OpenRouter client
│   │   ├── security_analyzer.py         # OWASP security analysis
│   │   ├── performance_analyzer.py      # Performance profiling
│   │   ├── fix_generator.py             # Auto-fix generation
│   │   └── test_generator.py            # Unit test creation
│   ├── prompts/                     # Specialized AI prompts
│   │   ├── security_prompts.py          # Security expert persona
│   │   ├── performance_prompts.py       # Performance engineer persona
│   │   ├── fix_prompts.py               # Code fixing prompts
│   │   └── test_prompts.py              # Test generation prompts
│   ├── utils/
│   │   └── language_detector.py         # Auto-detect code language
│   ├── config.py                    # Environment config
│   └── app.py                       # Gradio UI (4 tabs)
├── examples/                        # Test code samples
│   ├── vulnerable_code.py               # SQL injection, etc.
│   ├── slow_code.py                     # O(n²) algorithms
│   └── sample_functions.py              # Clean code for testing
├── main.py                          # Application entry point
├── pyproject.toml                   # Modern Python packaging
├── .env.example                     # Configuration template
├── setup.sh                         # Automated setup script
├── QUICKSTART.md                    # Detailed setup guide
└── README.md                        # This file
```
**Design Principles:**
- **Separation of Concerns**: Each analyzer is independent
- **DRY**: Base class handles OpenRouter communication
- **Extensible**: Easy to add new analyzers
- **Clean Code**: Type hints, docstrings, descriptive names
## Development
### Install development dependencies
```bash
pip install -e ".[dev]"
```
### Code formatting
```bash
black src/
ruff check src/
```
### Running tests
```bash
pytest
```
## How It Works
### Architecture
```
User Code → Language Detection → Specialized Prompt → OpenRouter API → AI Model
User Interface ← Streaming Response ← Analysis/Fix/Tests ← Model Response
```
### Technical Implementation
1. **Multi-Analyzer Pattern**: Separate classes for security, performance, fixes, and tests
2. **Specialized Prompts**: Each analyzer uses persona-based prompts (security expert, performance engineer, etc.)
3. **Streaming Responses**: Real-time output using Gradio's streaming capabilities
4. **Model Agnostic**: Works with any OpenAI-compatible API through OpenRouter
5. **Clean Code**: Type hints, docstrings, modular design
### Example: Security Analysis Flow
```python
# User pastes code
code = "query = f'SELECT * FROM users WHERE id = {user_id}'"

# Security analyzer builds the user prompt
user_prompt = get_security_user_prompt(code, language="Python")

# Calls the AI model via OpenRouter (OpenAI client with OpenRouter base_url)
response = client.chat.completions.create(
    model="meta-llama/llama-3.1-8b-instruct:free",
    messages=[
        {"role": "system", "content": SECURITY_SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ],
    stream=True,
)

# Streams results to the UI
for chunk in response:
    yield chunk  # Real-time display
```
## Cost Considerations
- **Free Tier Models**: Use models with `:free` suffix (rate-limited but no cost)
- **Paid Models**: More accurate but incur API costs (~$0.001-0.01 per analysis)
- **Recommended**: Start with `meta-llama/llama-3.1-8b-instruct:free` for testing
## Limitations
- Analysis quality depends on the AI model used
- Not a replacement for professional security audits
- May produce false positives or miss subtle issues
- Always review AI suggestions before applying to production
## Support
For issues or questions, open an issue in the repository.
## Week 4 Learning Objectives Met
This project demonstrates mastery of all Week 4 skills:
- **Multi-Model Integration** - Works with OpenAI, Anthropic, Google, Meta models
- **Prompt Engineering** - Specialized prompts for different analysis types
- **Code Analysis & Generation** - Security, performance, fixes, tests
- **Gradio UI Development** - Multi-tab interface with streaming
- **Real-World Application** - Addresses genuine developer needs
- **Clean Architecture** - Modular, extensible, well-documented
## What Makes This Novel?
Compared to other Week 4 community contributions:
| Feature | Other Projects | SecureCode AI |
|---------|----------------|---------------|
| Docstring Generation | ✅ (Many) | ❌ |
| Code Conversion | ✅ (Many) | ❌ |
| **Security Analysis** | ❌ None | ✅ **Unique** |
| **Performance Profiling** | ❌ None | ✅ **Unique** |
| **Educational Focus** | ❌ Limited | ✅ **Unique** |
| Unit Test Generation | ✅ (Some) | ✅ Enhanced |
| Auto-Fix with Explanation | ❌ None | ✅ **Unique** |
**Result**: A production-ready tool that teaches secure coding while solving real problems!
## Acknowledgments
- **LLM Engineering Course** by Edward Donner
- **OpenRouter** for multi-model API access
- **Gradio** for the excellent UI framework
- **OWASP** for security guidelines
- **Community** for inspiration from Week 4 contributions
## Contributing
Ideas for enhancements:
- Add more security rules (SANS Top 25, CWE)
- Implement batch file processing
- CI/CD integration (GitHub Actions)
- VSCode extension
- API endpoint for programmatic access
- Support for more languages
## License
MIT License - See LICENSE file for details
---
**Built with ❤️ for developers who care about security and performance**

View File: examples/sample_functions.py

@@ -1,66 +0,0 @@
"""Sample functions for testing the unit test generator."""
def calculate_average(numbers):
"""Calculate the average of a list of numbers."""
if not numbers:
return 0
return sum(numbers) / len(numbers)
def is_palindrome(text):
"""Check if a string is a palindrome."""
cleaned = "".join(c.lower() for c in text if c.isalnum())
return cleaned == cleaned[::-1]
def factorial(n):
"""Calculate factorial of a number."""
if n < 0:
raise ValueError("Factorial is not defined for negative numbers")
if n == 0 or n == 1:
return 1
return n * factorial(n - 1)
def find_max(numbers):
"""Find the maximum number in a list."""
if not numbers:
raise ValueError("Cannot find max of empty list")
max_num = numbers[0]
for num in numbers:
if num > max_num:
max_num = num
return max_num
class ShoppingCart:
"""A simple shopping cart."""
def __init__(self):
self.items = []
def add_item(self, name, price, quantity=1):
"""Add an item to the cart."""
if price < 0:
raise ValueError("Price cannot be negative")
if quantity < 1:
raise ValueError("Quantity must be at least 1")
self.items.append({"name": name, "price": price, "quantity": quantity})
def get_total(self):
"""Calculate the total price of all items."""
total = 0
for item in self.items:
total += item["price"] * item["quantity"]
return total
def apply_discount(self, percentage):
"""Apply a discount percentage to the total."""
if not 0 <= percentage <= 100:
raise ValueError("Discount percentage must be between 0 and 100")
total = self.get_total()
discount = total * (percentage / 100)
return total - discount
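
For reference, the kind of output the test generator aims to produce for these functions looks roughly like this. This is a hand-written illustration, not actual tool output, and it assumes the functions are importable from a `sample_functions` module:

```python
import pytest

from sample_functions import calculate_average, factorial  # assumed import path


def test_calculate_average_when_list_has_values_then_returns_mean():
    """Happy path: the average of [1, 2, 3] is 2."""
    assert calculate_average([1, 2, 3]) == 2


def test_calculate_average_when_list_is_empty_then_returns_zero():
    """Edge case: an empty list returns 0 rather than raising ZeroDivisionError."""
    assert calculate_average([]) == 0


@pytest.mark.parametrize("n, expected", [(0, 1), (1, 1), (5, 120)])
def test_factorial_when_input_valid_then_returns_product(n, expected):
    """Parameterized happy-path cases, including the base cases."""
    assert factorial(n) == expected


def test_factorial_when_input_negative_then_raises_value_error():
    """Error case: negative input raises ValueError."""
    with pytest.raises(ValueError):
        factorial(-1)
```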

View File: examples/slow_code.py

@@ -1,58 +0,0 @@
"""Example inefficient code for testing performance analysis."""
# Example 1: O(n²) complexity - inefficient duplicate finder
def find_duplicates(items):
duplicates = []
for i in range(len(items)):
for j in range(i + 1, len(items)):
if items[i] == items[j] and items[i] not in duplicates:
duplicates.append(items[i])
return duplicates
# Example 2: Inefficient string concatenation
def build_large_string(items):
result = ""
for item in items:
result += str(item) + ","
return result
# Example 3: Unnecessary repeated calculations
def calculate_totals(orders):
totals = []
for order in orders:
total = 0
for item in order["items"]:
# Recalculating tax each time
tax_rate = 0.08
total += item["price"] * (1 + tax_rate)
totals.append(total)
return totals
# Example 4: Loading all data into memory
def process_large_file(filename):
with open(filename, "r") as f:
all_lines = f.readlines() # Loads entire file into memory
processed = []
for line in all_lines:
if "ERROR" in line:
processed.append(line.strip())
return processed
# Example 5: N+1 query problem simulation
def get_user_posts(user_ids):
posts = []
for user_id in user_ids:
# Simulates making a separate database query for each user
user_posts = fetch_posts_for_user(user_id) # N queries
posts.extend(user_posts)
return posts
def fetch_posts_for_user(user_id):
# Simulate database query
return [f"Post from user {user_id}"]

View File: examples/vulnerable_code.py

@@ -1,42 +0,0 @@
"""Example vulnerable code for testing security analysis."""
# Example 1: SQL Injection vulnerability
def get_user_by_id(user_id):
import sqlite3
conn = sqlite3.connect("users.db")
query = f"SELECT * FROM users WHERE id = {user_id}"
result = conn.execute(query).fetchone()
return result
# Example 2: Command Injection
def ping_host(hostname):
import os
command = f"ping -c 1 {hostname}"
os.system(command)
# Example 3: Path Traversal
def read_file(filename):
file_path = f"/var/data/{filename}"
with open(file_path, "r") as f:
return f.read()
# Example 4: Hardcoded credentials
def connect_to_database():
import psycopg2
connection = psycopg2.connect(
host="localhost", database="mydb", user="admin", password="admin123"
)
return connection
# Example 5: Insecure random number generation
def generate_token():
import random
return "".join([str(random.randint(0, 9)) for _ in range(32)])

View File: main.py

@@ -1,6 +0,0 @@
"""Entry point for SecureCode AI application."""
from src.securecode.app import launch
if __name__ == "__main__":
launch()

View File: pyproject.toml

@@ -1,30 +0,0 @@
[project]
name = "securecode-ai"
version = "0.1.0"
description = "AI-powered code security and performance analyzer"
readme = "README.md"
requires-python = ">=3.10"
dependencies = [
    "openai>=1.54.0",
    "gradio>=5.6.0",
    "python-dotenv>=1.0.0",
]

[project.optional-dependencies]
dev = [
    "pytest>=8.3.0",
    "black>=24.10.0",
    "ruff>=0.7.0",
]

[build-system]
requires = ["setuptools>=75.0.0", "wheel"]
build-backend = "setuptools.build_meta"

[tool.black]
line-length = 100
target-version = ['py310']

[tool.ruff]
line-length = 100
target-version = "py310"

View File: src/securecode/__init__.py

@@ -1,3 +0,0 @@
"""SecureCode AI - Intelligent code security and performance analyzer."""
__version__ = "0.1.0"

View File: src/securecode/analyzers/__init__.py

@@ -1,8 +0,0 @@
"""Code analyzers for security and performance."""
from .security_analyzer import SecurityAnalyzer
from .performance_analyzer import PerformanceAnalyzer
from .fix_generator import FixGenerator
from .test_generator import TestGenerator
__all__ = ["SecurityAnalyzer", "PerformanceAnalyzer", "FixGenerator", "TestGenerator"]

View File: src/securecode/analyzers/base_analyzer.py

@@ -1,40 +0,0 @@
"""Base analyzer class."""
from openai import OpenAI
from ..config import Config
class BaseAnalyzer:
"""Base class for all analyzers."""
def __init__(self):
"""Initialize the analyzer with OpenRouter client."""
Config.validate()
self.client = OpenAI(
api_key=Config.OPENROUTER_API_KEY,
base_url=Config.OPENROUTER_BASE_URL,
)
self.model = Config.MODEL
def analyze(self, code: str, language: str = "Python") -> str:
"""Analyze code. Must be implemented by subclasses."""
raise NotImplementedError("Subclasses must implement analyze()")
def _call_ai(self, system_prompt: str, user_prompt: str, stream: bool = False):
"""Make an API call to the AI model."""
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": user_prompt},
]
response = self.client.chat.completions.create(
model=self.model,
messages=messages,
stream=stream,
temperature=0.3,
)
if stream:
return response
else:
return response.choices[0].message.content
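
Adding a new analyzer means subclassing `BaseAnalyzer` and supplying a prompt. A minimal sketch of a hypothetical analyzer, where the class name and prompt are made up for illustration and are not part of the project:

```python
from .base_analyzer import BaseAnalyzer

# Hypothetical prompt, not part of the project
STYLE_SYSTEM_PROMPT = "You are a code style reviewer. Report style issues concisely."


class StyleAnalyzer(BaseAnalyzer):
    """Hypothetical analyzer showing how to extend BaseAnalyzer."""

    def analyze(self, code: str, language: str = "Python") -> str:
        if not code.strip():
            return "Please provide code to analyze."
        user_prompt = f"Review this {language} code for style issues:\n\n{code}"
        return self._call_ai(STYLE_SYSTEM_PROMPT, user_prompt)
```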

View File: src/securecode/analyzers/fix_generator.py

@@ -1,64 +0,0 @@
"""Code fix generator."""
from .base_analyzer import BaseAnalyzer
from ..prompts.fix_prompts import FIX_SYSTEM_PROMPT, get_fix_user_prompt
class FixGenerator(BaseAnalyzer):
"""Generates fixed code based on identified issues."""
def generate_fix(self, code: str, issues: str, language: str = "Python") -> str:
"""
Generate fixed code.
Args:
code: Original source code
issues: Identified issues (from security or performance analysis)
language: Programming language (default: Python)
Returns:
Fixed code with explanation
"""
if not code.strip():
return "Please provide code to fix."
if not issues.strip() or "No" in issues[:50]:
return "No issues identified. Code looks good!"
user_prompt = get_fix_user_prompt(code, issues, language)
result = self._call_ai(FIX_SYSTEM_PROMPT, user_prompt)
# Clean up markdown code blocks if present
if "```" in result:
# Extract code block
parts = result.split("```")
if len(parts) >= 3:
return result
return result
def generate_fix_stream(self, code: str, issues: str, language: str = "Python"):
"""
Generate fixed code with streaming response.
Args:
code: Original source code
issues: Identified issues
language: Programming language (default: Python)
Yields:
Chunks of the fixed code and explanation
"""
if not code.strip():
yield "Please provide code to fix."
return
if not issues.strip() or "No" in issues[:50]:
yield "No issues identified. Code looks good!"
return
user_prompt = get_fix_user_prompt(code, issues, language)
response = self._call_ai(FIX_SYSTEM_PROMPT, user_prompt, stream=True)
for chunk in response:
if chunk.choices[0].delta.content:
yield chunk.choices[0].delta.content

View File: src/securecode/analyzers/performance_analyzer.py

@@ -1,50 +0,0 @@
"""Performance analyzer."""
from .base_analyzer import BaseAnalyzer
from ..prompts.performance_prompts import (
PERFORMANCE_SYSTEM_PROMPT,
get_performance_user_prompt,
)
class PerformanceAnalyzer(BaseAnalyzer):
"""Analyzes code for performance issues."""
def analyze(self, code: str, language: str = "Python") -> str:
"""
Analyze code for performance issues.
Args:
code: Source code to analyze
language: Programming language (default: Python)
Returns:
Performance analysis report
"""
if not code.strip():
return "Please provide code to analyze."
user_prompt = get_performance_user_prompt(code, language)
return self._call_ai(PERFORMANCE_SYSTEM_PROMPT, user_prompt)
def analyze_stream(self, code: str, language: str = "Python"):
"""
Analyze code with streaming response.
Args:
code: Source code to analyze
language: Programming language (default: Python)
Yields:
Chunks of the analysis report
"""
if not code.strip():
yield "Please provide code to analyze."
return
user_prompt = get_performance_user_prompt(code, language)
response = self._call_ai(PERFORMANCE_SYSTEM_PROMPT, user_prompt, stream=True)
for chunk in response:
if chunk.choices[0].delta.content:
yield chunk.choices[0].delta.content

View File: src/securecode/analyzers/security_analyzer.py

@@ -1,47 +0,0 @@
"""Security vulnerability analyzer."""
from .base_analyzer import BaseAnalyzer
from ..prompts.security_prompts import SECURITY_SYSTEM_PROMPT, get_security_user_prompt
class SecurityAnalyzer(BaseAnalyzer):
"""Analyzes code for security vulnerabilities."""
def analyze(self, code: str, language: str = "Python") -> str:
"""
Analyze code for security vulnerabilities.
Args:
code: Source code to analyze
language: Programming language (default: Python)
Returns:
Security analysis report
"""
if not code.strip():
return "Please provide code to analyze."
user_prompt = get_security_user_prompt(code, language)
return self._call_ai(SECURITY_SYSTEM_PROMPT, user_prompt)
def analyze_stream(self, code: str, language: str = "Python"):
"""
Analyze code with streaming response.
Args:
code: Source code to analyze
language: Programming language (default: Python)
Yields:
Chunks of the analysis report
"""
if not code.strip():
yield "Please provide code to analyze."
return
user_prompt = get_security_user_prompt(code, language)
response = self._call_ai(SECURITY_SYSTEM_PROMPT, user_prompt, stream=True)
for chunk in response:
if chunk.choices[0].delta.content:
yield chunk.choices[0].delta.content

View File: src/securecode/analyzers/test_generator.py

@@ -1,47 +0,0 @@
"""Unit test generator."""
from .base_analyzer import BaseAnalyzer
from ..prompts.test_prompts import TEST_SYSTEM_PROMPT, get_test_user_prompt
class TestGenerator(BaseAnalyzer):
"""Generates unit tests for code."""
def generate_tests(self, code: str, language: str = "Python") -> str:
"""
Generate unit tests for the provided code.
Args:
code: Source code to generate tests for
language: Programming language (default: Python)
Returns:
Generated unit tests
"""
if not code.strip():
return "Please provide code to generate tests for."
user_prompt = get_test_user_prompt(code, language)
return self._call_ai(TEST_SYSTEM_PROMPT, user_prompt)
def generate_tests_stream(self, code: str, language: str = "Python"):
"""
Generate unit tests with streaming response.
Args:
code: Source code to generate tests for
language: Programming language (default: Python)
Yields:
Chunks of the generated tests
"""
if not code.strip():
yield "Please provide code to generate tests for."
return
user_prompt = get_test_user_prompt(code, language)
response = self._call_ai(TEST_SYSTEM_PROMPT, user_prompt, stream=True)
for chunk in response:
if chunk.choices[0].delta.content:
yield chunk.choices[0].delta.content

View File: src/securecode/app.py

@@ -1,252 +0,0 @@
"""Main Gradio application."""
import gradio as gr
from .config import Config
from .analyzers import SecurityAnalyzer, PerformanceAnalyzer, FixGenerator, TestGenerator
from .utils.language_detector import detect_language
class SecureCodeApp:
"""Main application class."""
def __init__(self):
"""Initialize analyzers."""
self.security_analyzer = SecurityAnalyzer()
self.performance_analyzer = PerformanceAnalyzer()
self.fix_generator = FixGenerator()
self.test_generator = TestGenerator()
def analyze_security(self, code: str, language: str):
"""Analyze code for security issues."""
if language == "Auto-detect":
language = detect_language(code)
result = ""
for chunk in self.security_analyzer.analyze_stream(code, language):
result += chunk
yield result
def analyze_performance(self, code: str, language: str):
"""Analyze code for performance issues."""
if language == "Auto-detect":
language = detect_language(code)
result = ""
for chunk in self.performance_analyzer.analyze_stream(code, language):
result += chunk
yield result
def generate_fix(self, code: str, issues: str, language: str):
"""Generate fixed code."""
if language == "Auto-detect":
language = detect_language(code)
result = ""
for chunk in self.fix_generator.generate_fix_stream(code, issues, language):
result += chunk
yield result
def generate_tests(self, code: str, language: str):
"""Generate unit tests."""
if language == "Auto-detect":
language = detect_language(code)
result = ""
for chunk in self.test_generator.generate_tests_stream(code, language):
result += chunk
yield result
def create_interface(self):
"""Create and return the Gradio interface."""
languages = ["Auto-detect", "Python", "JavaScript", "Java", "C++", "Go", "Rust"]
with gr.Blocks(title=Config.APP_NAME) as interface:
gr.Markdown(f"# {Config.APP_NAME}")
gr.Markdown(
f"Analyze your code for security vulnerabilities and performance issues "
f"using AI.\n\n**Current Model:** {Config.get_model_display_name()}"
)
with gr.Tab("🔒 Security Analysis"):
gr.Markdown(
"### Detect Security Vulnerabilities\n"
"Identifies common security issues like SQL injection, XSS, "
"command injection, and more."
)
with gr.Row():
with gr.Column(scale=2):
security_code = gr.Code(
label="Paste Your Code Here",
language="python",
lines=15,
)
with gr.Row():
security_lang = gr.Dropdown(
choices=languages,
value="Auto-detect",
label="Language",
scale=2,
)
security_btn = gr.Button(
"🔍 Analyze Security",
variant="primary",
scale=1,
)
with gr.Column(scale=2):
security_output = gr.Textbox(
label="Security Analysis Report",
lines=15,
max_lines=20,
)
security_btn.click(
fn=self.analyze_security,
inputs=[security_code, security_lang],
outputs=security_output,
)
with gr.Tab("⚡ Performance Analysis"):
gr.Markdown(
"### Optimize Code Performance\n"
"Analyzes time/space complexity, identifies bottlenecks, "
"and suggests optimizations."
)
with gr.Row():
with gr.Column(scale=2):
perf_code = gr.Code(
label="Paste Your Code Here",
language="python",
lines=15,
)
with gr.Row():
perf_lang = gr.Dropdown(
choices=languages,
value="Auto-detect",
label="Language",
scale=2,
)
perf_btn = gr.Button(
"🚀 Analyze Performance",
variant="primary",
scale=1,
)
with gr.Column(scale=2):
perf_output = gr.Textbox(
label="Performance Analysis Report",
lines=15,
max_lines=20,
)
perf_btn.click(
fn=self.analyze_performance,
inputs=[perf_code, perf_lang],
outputs=perf_output,
)
with gr.Tab("🔧 Generate Fix"):
gr.Markdown(
"### Auto-Fix Issues\n"
"Automatically generates fixed code based on identified security "
"or performance issues."
)
with gr.Row():
with gr.Column():
fix_code = gr.Code(
label="Original Code",
language="python",
lines=10,
)
fix_issues = gr.Textbox(
label="Identified Issues (paste analysis report)",
lines=5,
placeholder="Paste the security or performance analysis here...",
)
with gr.Row():
fix_lang = gr.Dropdown(
choices=languages,
value="Auto-detect",
label="Language",
scale=2,
)
fix_btn = gr.Button(
"✨ Generate Fix",
variant="primary",
scale=1,
)
with gr.Column():
fix_output = gr.Textbox(
label="Fixed Code & Explanation",
lines=18,
max_lines=25,
)
fix_btn.click(
fn=self.generate_fix,
inputs=[fix_code, fix_issues, fix_lang],
outputs=fix_output,
)
with gr.Tab("🧪 Generate Tests"):
gr.Markdown(
"### Auto-Generate Unit Tests\n"
"Creates comprehensive pytest test cases including happy path, "
"edge cases, and error scenarios."
)
with gr.Row():
with gr.Column(scale=2):
test_code = gr.Code(
label="Paste Your Code Here",
language="python",
lines=15,
)
with gr.Row():
test_lang = gr.Dropdown(
choices=languages,
value="Auto-detect",
label="Language",
scale=2,
)
test_btn = gr.Button(
"🧪 Generate Tests",
variant="primary",
scale=1,
)
with gr.Column(scale=2):
test_output = gr.Textbox(
label="Generated Unit Tests",
lines=15,
max_lines=20,
)
test_btn.click(
fn=self.generate_tests,
inputs=[test_code, test_lang],
outputs=test_output,
)
gr.Markdown(
"---\n"
"**Note:** This tool uses AI for analysis. "
"Always review suggestions before applying them to production code."
)
return interface
def launch():
"""Launch the Gradio app."""
app = SecureCodeApp()
interface = app.create_interface()
interface.launch()
if __name__ == "__main__":
launch()

View File: src/securecode/config.py

@@ -1,38 +0,0 @@
"""Configuration management for SecureCode AI."""
import os
from pathlib import Path
from dotenv import load_dotenv
# Load environment variables
load_dotenv()
class Config:
"""Application configuration."""
# API Configuration
OPENROUTER_API_KEY = os.getenv("OPENROUTER_API_KEY", "")
OPENROUTER_BASE_URL = "https://openrouter.ai/api/v1"
# Model Configuration
DEFAULT_MODEL = "meta-llama/llama-3.1-8b-instruct:free"
MODEL = os.getenv("SECURECODE_MODEL", DEFAULT_MODEL)
# Application Settings
APP_NAME = "SecureCode AI"
APP_DESCRIPTION = "AI-powered code security and performance analyzer"
@classmethod
def validate(cls):
"""Validate required configuration."""
if not cls.OPENROUTER_API_KEY:
raise ValueError(
"OPENROUTER_API_KEY not found. "
"Please set it in .env file or environment variables."
)
@classmethod
def get_model_display_name(cls):
"""Get a user-friendly model name."""
return cls.MODEL.split("/")[-1].replace("-", " ").title()
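
For the default model, the display-name helper transforms the model string step by step as shown in this quick sketch:

```python
from src.securecode.config import Config  # same import style as main.py

# "meta-llama/llama-3.1-8b-instruct:free"
#   .split("/")[-1]    -> "llama-3.1-8b-instruct:free"
#   .replace("-", " ") -> "llama 3.1 8b instruct:free"
#   .title()           -> "Llama 3.1 8B Instruct:Free"
print(Config.get_model_display_name())  # Llama 3.1 8B Instruct:Free
```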

View File: src/securecode/prompts/fix_prompts.py

@@ -1,44 +0,0 @@
"""Prompts for generating fixed code."""
FIX_SYSTEM_PROMPT = """You are an expert software engineer who writes secure, performant code.
Given code with identified issues, generate a fixed version that:
1. Resolves all security vulnerabilities
2. Optimizes performance bottlenecks
3. Maintains the same functionality
4. Follows language best practices
5. Is production-ready
Provide:
1. The complete fixed code
2. Brief explanation of key changes
Be concise. Focus on fixing issues while preserving functionality.
Format your response as:
FIXED CODE:
```[language]
[Complete fixed code here]
```
CHANGES:
- [Brief point about change 1]
- [Brief point about change 2]
...
"""
def get_fix_user_prompt(code: str, issues: str, language: str = "Python") -> str:
"""Generate user prompt for code fixing."""
return f"""Fix this {language} code based on the identified issues:
ORIGINAL CODE:
```{language.lower()}
{code}
```
ISSUES IDENTIFIED:
{issues}
Provide the fixed code that resolves these issues."""

View File: src/securecode/prompts/performance_prompts.py

@@ -1,53 +0,0 @@
"""Prompts for performance analysis."""
PERFORMANCE_SYSTEM_PROMPT = """You are a performance optimization expert.
Analyze the provided code for performance issues. Focus on:
- Time complexity (Big-O analysis)
- Space complexity
- Inefficient algorithms (nested loops, redundant operations)
- Database query optimization (N+1 queries)
- Memory leaks or excessive allocations
- Missing caching opportunities
- Blocking I/O operations
- Inefficient data structures
For each issue found, provide:
1. Severity (HIGH/MEDIUM/LOW)
2. Issue type
3. Current complexity
4. Optimized approach
5. Expected performance gain
Be practical and focus on significant improvements, not micro-optimizations.
Format your response as:
SEVERITY: [HIGH/MEDIUM/LOW]
TYPE: [Performance issue type]
CURRENT: [Current complexity or problem]
ISSUE:
[Clear explanation of the bottleneck]
OPTIMIZATION:
[How to optimize with code example if helpful]
GAIN:
[Expected performance improvement]
---
If no significant issues found, respond with: "No major performance issues detected."
"""
def get_performance_user_prompt(code: str, language: str = "Python") -> str:
"""Generate user prompt for performance analysis."""
return f"""Analyze this {language} code for performance issues:
```{language.lower()}
{code}
```
Identify inefficiencies and suggest optimizations."""
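
An illustrative report in this format, hand-written here for the `find_duplicates` example from `examples/slow_code.py` to show the expected shape (real output varies by model):

```
SEVERITY: HIGH
TYPE: Inefficient algorithm (nested loops)
CURRENT: O(n²) time, plus O(k) list membership checks on each match
ISSUE:
The nested loops compare every pair of items, and `not in duplicates` scans a list on every match.
OPTIMIZATION:
Track seen items and duplicates in sets for O(n) total time.
GAIN:
Quadratic to linear; roughly n-fold speedup on large inputs.
---
```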

View File: src/securecode/prompts/security_prompts.py

@@ -1,51 +0,0 @@
"""Prompts for security analysis."""
SECURITY_SYSTEM_PROMPT = """You are a security expert with deep knowledge of OWASP Top 10 and common vulnerabilities.
Analyze the provided code for security issues. Focus on:
- SQL Injection (unsanitized queries)
- Cross-Site Scripting (XSS)
- Command Injection (unsafe system calls)
- Path Traversal (file operations)
- Insecure Deserialization
- Authentication and Authorization flaws
- Sensitive data exposure
- Cryptographic failures
- Insecure dependencies
For each vulnerability found, provide:
1. Severity (CRITICAL/HIGH/MEDIUM/LOW)
2. Vulnerability type
3. Line numbers (if identifiable)
4. Clear explanation
5. How to fix it
Be concise and practical. Focus on real security issues, not style preferences.
Format your response as:
SEVERITY: [CRITICAL/HIGH/MEDIUM/LOW]
TYPE: [Vulnerability type]
LINES: [Line numbers or "Multiple"]
ISSUE:
[Clear explanation of the problem]
FIX:
[How to fix it with code example if helpful]
---
If no issues found, respond with: "No security vulnerabilities detected."
"""
def get_security_user_prompt(code: str, language: str = "Python") -> str:
"""Generate user prompt for security analysis."""
return f"""Analyze this {language} code for security vulnerabilities:
```{language.lower()}
{code}
```
Identify all security issues following OWASP guidelines."""
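
A hand-written illustration of a finding in this format, for the `get_user_by_id` example from `examples/vulnerable_code.py` (the line number is relative to the pasted snippet; real output varies by model):

```
SEVERITY: CRITICAL
TYPE: SQL Injection
LINES: 4
ISSUE:
User input is interpolated directly into the SQL string, so a crafted user_id can alter the query.
FIX:
Use a parameterized query: conn.execute("SELECT * FROM users WHERE id = ?", (user_id,))
---
```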

View File: src/securecode/prompts/test_prompts.py

@@ -1,47 +0,0 @@
"""Prompts for unit test generation."""
TEST_SYSTEM_PROMPT = """You are an expert software testing engineer with deep knowledge of test-driven development.
Generate comprehensive unit tests for the provided code. Focus on:
- Happy path (normal cases)
- Edge cases (boundary conditions)
- Error cases (invalid inputs, exceptions)
- Mock external dependencies if needed
- Use pytest framework with clear, descriptive test names
For the tests, provide:
1. Complete test file with imports
2. Test fixtures if needed
3. Parameterized tests for multiple cases
4. Clear assertions
5. Docstrings explaining what each test validates
Follow best practices:
- One concept per test
- AAA pattern (Arrange, Act, Assert)
- Descriptive test names (test_function_name_when_condition_then_outcome)
- Don't test implementation details, test behavior
Format your response as:
TEST FILE:
```python
[Complete test code here with imports and all test cases]
```
TEST COVERAGE:
- [What scenarios are covered]
- [Edge cases tested]
- [Error conditions validated]
"""
def get_test_user_prompt(code: str, language: str = "Python") -> str:
"""Generate user prompt for test generation."""
return f"""Generate comprehensive unit tests for this {language} code:
```{language.lower()}
{code}
```
Create pytest test cases covering all scenarios."""
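
A quick illustration of the conventions the prompt asks for, namely the AAA structure and the `test_function_when_condition_then_outcome` naming, hand-written rather than model output and assuming `ShoppingCart` is importable from `examples/sample_functions.py`:

```python
import pytest

from sample_functions import ShoppingCart  # assumed import path for illustration


def test_apply_discount_when_percentage_valid_then_reduces_total():
    """Validates that a 10% discount reduces the total accordingly."""
    # Arrange
    cart = ShoppingCart()
    cart.add_item("book", price=20.0, quantity=2)
    # Act
    discounted = cart.apply_discount(10)
    # Assert
    assert discounted == pytest.approx(36.0)
```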

View File: src/securecode/utils/language_detector.py

@@ -1,43 +0,0 @@
"""Simple language detection for code."""
def detect_language(code: str) -> str:
"""
Detect programming language from code snippet.
Args:
code: Source code string
Returns:
Detected language name
"""
code_lower = code.lower()
# Python detection
if any(keyword in code for keyword in ["def ", "import ", "from ", "class "]):
if "print(" in code or "__init__" in code:
return "Python"
# JavaScript detection
if any(keyword in code for keyword in ["function ", "const ", "let ", "var "]):
if "console.log" in code or "=>" in code:
return "JavaScript"
# Java detection
if "public class" in code or "public static void main" in code:
return "Java"
# C++ detection
if "#include" in code or "std::" in code or "cout" in code:
return "C++"
# Go detection
if "package main" in code or "func " in code and "import (" in code:
return "Go"
# Rust detection
if "fn " in code and ("let " in code or "mut " in code):
return "Rust"
# Default to Python if unsure
return "Python"

View File: .env.example

@@ -0,0 +1 @@
OPENROUTER_API_KEY=your-api-key-here

View File: app.py

@@ -0,0 +1,35 @@
import gradio as gr

from test_generator import generate_tests


def create_interface():
    with gr.Blocks(title="Unit Test Generator") as ui:
        gr.Markdown("# Unit Test Generator")
        gr.Markdown("Paste your Python code and get AI-generated unit tests")
        with gr.Row():
            with gr.Column(scale=1):
                code_input = gr.Code(
                    label="Your Code",
                    language="python",
                    lines=15,
                )
                generate_btn = gr.Button("Generate Tests", variant="primary")
            with gr.Column(scale=1):
                tests_output = gr.Textbox(
                    label="Generated Tests",
                    lines=15,
                    interactive=False,
                )
        generate_btn.click(
            fn=generate_tests,
            inputs=[code_input],
            outputs=[tests_output],
        )
    return ui


def launch():
    ui = create_interface()
    ui.launch(server_name="localhost", server_port=7860)

View File: main.py

@@ -0,0 +1,17 @@
#!/usr/bin/env python3
import os

from dotenv import load_dotenv

from app import launch

load_dotenv()

if __name__ == "__main__":
    api_key = os.getenv("OPENROUTER_API_KEY")
    if not api_key:
        print("Error: OPENROUTER_API_KEY not set in .env")
        exit(1)
    print("Starting Unit Test Generator...")
    print("Open http://localhost:7860 in your browser")
    launch()

View File: test_generator.py

@@ -0,0 +1,41 @@
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

client = OpenAI(
    api_key=os.getenv("OPENROUTER_API_KEY"),
    base_url="https://openrouter.ai/api/v1",
)

MODEL = os.getenv("SECURECODE_MODEL", "meta-llama/llama-3.1-8b-instruct:free")

SYSTEM_PROMPT = """You are a Python testing expert.
Generate pytest unit tests for the given code.
Include:
- Happy path tests
- Edge cases
- Error handling tests
Keep tests simple and clear."""


def generate_tests(code):
    """Generate unit tests for the given code, streaming partial results."""
    try:
        response = client.chat.completions.create(
            model=MODEL,
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": f"Generate tests for this code:\n\n{code}"},
            ],
            stream=True,
        )
        result = ""
        for chunk in response:
            if chunk.choices[0].delta.content:
                result += chunk.choices[0].delta.content
                yield result  # Yield the accumulated text so the UI updates live
    except Exception as e:
        yield f"Error: {str(e)}"