Merge pull request #846 from r00trose/r00trose-week1
Week 1 Contribution: Code Explainer by r00trose
This commit is contained in:
6  week1/community-contributions/r00trose/.env.example  Normal file
@@ -0,0 +1,6 @@
# Ollama Configuration
OLLAMA_BASE_URL=http://localhost:11434/v1
OLLAMA_MODEL=llama3.2

# OpenAI (if you want to use the OpenAI API instead)
# OPENAI_API_KEY=your_api_key_here
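These variables can be read back in Python as a quick sanity check; a minimal standard-library sketch with the same defaults as `.env.example` (the project itself loads `.env` first via python-dotenv's `load_dotenv()`):

```python
# Minimal sketch: reading the Ollama settings, with the defaults from
# .env.example. Not part of the project; python-dotenv would populate
# os.environ from .env before these lookups.
import os

OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434/v1")
OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "llama3.2")
```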
144  week1/community-contributions/r00trose/README.md  Normal file
@@ -0,0 +1,144 @@
# Code Explainer - LLM Engineering Week 1

An AI-powered code explanation tool using Llama 3.2 via Ollama.

## 🎯 Project Overview

This project demonstrates prompt engineering and local LLM integration by building a code explanation assistant. The tool analyzes code snippets and provides beginner-friendly, line-by-line explanations.

## ✨ Features

- **Local LLM Integration**: Uses Ollama with Llama 3.2 (no API costs!)
- **Two Modes of Operation**:
  - 📓 **Notebook Mode**: Interactive Jupyter notebook with rich Markdown display
  - 💻 **Terminal Mode**: Interactive CLI for continuous code explanation
- **Smart Explanations**:
  - Summarizes overall purpose
  - Line-by-line breakdown
  - Highlights key programming concepts
  - Beginner-friendly language
- **Multiple Examples**: Loops, list comprehensions, OOP, recursion, decorators
- **Streaming Responses**: Real-time output as the model generates

## 🛠️ Technologies Used

- **Ollama**: Local LLM runtime
- **Llama 3.2**: Meta's language model
- **OpenAI Python SDK**: API-compatible interface
- **IPython**: Rich Markdown display
- **python-dotenv**: Environment management

## 📋 Prerequisites

1. **Ollama installed** - Download from [ollama.com](https://ollama.com)
2. **Python 3.8+**
3. **Llama 3.2 model pulled**:
   ```bash
   ollama pull llama3.2
   ```

## 🚀 Setup

### 1. Clone the repository
```bash
git clone <your-repo-url>
cd llm-engineering/week1/community-contributions/r00trose
```

### 2. Install dependencies
```bash
pip install -r requirements.txt
```

### 3. (Optional) Configure environment
```bash
cp .env.example .env
# Edit .env if needed
```

## 💡 Usage

### Notebook Mode

1. Open `day1-solution.ipynb` in Jupyter or VS Code
2. Run the cells sequentially
3. Use the `explain_code_interactive()` function with your own code

```python
explain_code_interactive("""
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n-1) + fibonacci(n-2)
""")
```

### Terminal Mode

```bash
python code_explainer.py
```

Then:
- Paste your code
- Press Enter twice (to leave an empty line)
- Get an explanation!

**Commands:**
- `quit` / `exit` / `q` - Exit
- `clear` - Start fresh
- `examples` - See sample code snippets

## 🎓 Why Ollama?

I chose Ollama over the OpenAI API for this project because:

- ✅ **No API Costs**: Completely free to use
- ✅ **Privacy**: All data stays local
- ✅ **Offline**: Works without internet
- ✅ **Learning**: Hands-on experience with local LLM deployment
- ✅ **Speed**: Fast responses on local hardware

## 📝 Examples Included

1. **Recursion** - Fibonacci sequence
2. **Loops** - Simple iteration
3. **List Comprehensions** - Filtering and mapping
4. **Object-Oriented Programming** - Classes and methods
5. **Decorators** - Advanced Python concepts

## 🔧 Customization

### Change the model
Edit `code_explainer.py`:
```python
explainer = CodeExplainer(model="llama3.2:3b")  # Use a smaller model
```

### Adjust temperature
A lower temperature gives more consistent output; a higher temperature gives more creative output:
```python
temperature=0.3  # Current setting for code explanations
```
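To see why this matters, temperature rescales the model's logits before sampling. This standalone toy sketch (not part of the project) shows the effect on a small distribution:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Lower temperature sharpens the distribution (more consistent picks),
    # higher temperature flattens it (more varied, "creative" picks).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

probs_low = softmax_with_temperature([2.0, 1.0, 0.1], 0.3)
probs_high = softmax_with_temperature([2.0, 1.0, 0.1], 1.5)
# At temperature 0.3 the top token dominates; at 1.5 the mass spreads out.
```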

### Modify system prompt
Customize how the model explains code in the `system_prompt` variable.
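For instance, a custom prompt can be assigned before starting a session. The `CodeExplainer` below is a minimal stand-in so the snippet is self-contained; the attribute name matches `code_explainer.py`, and the replacement prompt text is purely illustrative:

```python
# Sketch: overriding the default system prompt. This CodeExplainer is a
# minimal stand-in for the real class in code_explainer.py.
class CodeExplainer:
    def __init__(self, model="llama3.2"):
        self.model = model
        self.system_prompt = "You are an experienced programming instructor."

explainer = CodeExplainer()
explainer.system_prompt = (
    "You are a terse senior engineer. Explain code in three bullet points, "
    "flagging any bugs or performance issues you notice."
)
```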

## 🤝 Contributing

This is a course assignment, but feel free to fork and improve!

## 📄 License

MIT License - feel free to use for learning purposes.

## 👤 Author

İrem İrem
LLM Engineering Course - Week 1 Assignment

## 🙏 Acknowledgments

- **Ed Donner** - Course instructor for the excellent LLM Engineering course
- **Ollama** - For the local LLM runtime
- **Meta** - For the Llama 3.2 model
192  week1/community-contributions/r00trose/code_explainer.py  Normal file
@@ -0,0 +1,192 @@
#!/usr/bin/env python3
"""
Code Explainer - Interactive Terminal Mode
Day 1 LLM Engineering Assignment
Explains code snippets using Llama 3.2 via Ollama
"""

import os

from dotenv import load_dotenv
from openai import OpenAI


class CodeExplainer:
    """Interactive code explanation assistant using Llama 3.2"""

    def __init__(self, model=None):
        load_dotenv()

        # Honor the OLLAMA_* settings from .env, with the documented defaults
        self.client = OpenAI(
            base_url=os.getenv("OLLAMA_BASE_URL", "http://localhost:11434/v1"),
            api_key="ollama",  # Ollama ignores the key, but the SDK requires one
        )
        self.model = model or os.getenv("OLLAMA_MODEL", "llama3.2")

        print(f"🤖 Using Ollama with model: {self.model}")

        # System prompt - defines how the model should behave
        self.system_prompt = """You are an experienced programming instructor.
You analyze and explain code snippets line by line.

When explaining code:
1. Summarize the overall purpose of the code
2. Explain each important line or block
3. Highlight key programming concepts used
4. Use language that beginners can understand
5. Include practical examples when helpful

Be clear, educational, and encouraging."""

    def explain(self, code, stream=True):
        """Explain the given code snippet"""
        messages = [
            {"role": "system", "content": self.system_prompt},
            {"role": "user", "content": f"Explain the following code:\n\n{code}"},
        ]

        try:
            response = self.client.chat.completions.create(
                model=self.model,
                messages=messages,
                stream=stream,
                temperature=0.3,  # Low temperature for consistent explanations
            )

            if stream:
                print("\n" + "=" * 60)
                print("📝 EXPLANATION:")
                print("=" * 60 + "\n")
                answer = ""
                for chunk in response:
                    if chunk.choices[0].delta.content:
                        text = chunk.choices[0].delta.content
                        print(text, end="", flush=True)
                        answer += text
                print("\n" + "=" * 60 + "\n")
                return answer
            else:
                result = response.choices[0].message.content
                print(f"\n📝 Explanation:\n{result}\n")
                return result

        except Exception as e:
            print(f"❌ Error: {e}")
            return None

    def chat(self):
        """Start interactive chat mode"""
        print("\n" + "=" * 60)
        print("🎓 CODE EXPLAINER - Interactive Terminal Mode")
        print("=" * 60)
        print(f"Model: {self.model}")
        print("Provider: Ollama (Local)")
        print("\n📖 How to use:")
        print("  • Paste your code")
        print("  • Press Enter twice (empty line) when done")
        print("  • Type 'quit', 'exit', or 'q' to stop")
        print("  • Type 'clear' to start fresh")
        print("  • Type 'examples' to see sample code")
        print("=" * 60 + "\n")

        while True:
            try:
                print("📋 Paste your code below (Enter twice when done):")
                print()

                # Multi-line input
                lines = []

                while True:
                    line = input()

                    # Check for exit commands
                    if line.strip().lower() in ["quit", "exit", "q"]:
                        print("\n👋 Thanks for using Code Explainer! Goodbye!")
                        return

                    # Check for clear command
                    if line.strip().lower() == "clear":
                        print("\n🔄 Starting fresh...\n")
                        break

                    # Check for examples command
                    if line.strip().lower() == "examples":
                        self.show_examples()
                        break

                    # An empty line after some input ends the snippet
                    if not line.strip():
                        if lines:
                            break
                    else:
                        lines.append(line)

                # If we have code, explain it
                if lines:
                    code = "\n".join(lines)
                    self.explain(code)

            except KeyboardInterrupt:
                print("\n\n👋 Thanks for using Code Explainer! Goodbye!")
                break
            except Exception as e:
                print(f"❌ Error: {e}\n")

    def show_examples(self):
        """Show example code snippets"""
        print("\n" + "=" * 60)
        print("📚 EXAMPLE CODE SNIPPETS")
        print("=" * 60)

        examples = {
            "1. Simple Loop": """
for i in range(5):
    print(i * 2)
""",
            "2. List Comprehension": """
numbers = [1, 2, 3, 4, 5]
squares = [x**2 for x in numbers if x % 2 == 0]
print(squares)
""",
            "3. Function with Recursion": """
def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n-1)

print(factorial(5))
""",
            "4. Class Definition": """
class Dog:
    def __init__(self, name):
        self.name = name

    def bark(self):
        return f"{self.name} says Woof!"
""",
        }

        for title, code in examples.items():
            print(f"\n{title}:{code}")

        print("=" * 60)
        print("💡 Copy any example above and paste it when prompted!")
        print("=" * 60 + "\n")


def main():
    """Main entry point"""
    load_dotenv()

    print("\n🚀 Starting Code Explainer...")
    explainer = CodeExplainer()

    # Start interactive mode
    explainer.chat()


if __name__ == "__main__":
    main()
76  week1/community-contributions/r00trose/day1-solution.ipynb  Normal file
@@ -0,0 +1,76 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Day 1 Solution - My Implementation\n",
    "\n",
    "This is my solution to the Day 1 assignment. I've implemented the web scraping and summarization functionality as requested.\n",
    "\n",
    "## Features Implemented:\n",
    "- Web scraping with requests and BeautifulSoup\n",
    "- SSL certificate handling for Windows\n",
    "- OpenAI API integration\n",
    "- Website content summarization\n",
    "- Markdown display formatting\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Environment setup complete!\n"
     ]
    }
   ],
   "source": [
    "# My Day 1 Solution - Imports and Setup\n",
    "import os\n",
    "import ssl\n",
    "import requests\n",
    "from bs4 import BeautifulSoup\n",
    "from urllib.parse import urljoin\n",
    "from IPython.display import Markdown, display\n",
    "from openai import OpenAI\n",
    "from dotenv import load_dotenv\n",
    "\n",
    "# Load environment variables\n",
    "load_dotenv(override=True)\n",
    "\n",
    "# SSL fix for Windows\n",
    "ssl._create_default_https_context = ssl._create_unverified_context\n",
    "os.environ['PYTHONHTTPSVERIFY'] = '0'\n",
    "os.environ['CURL_CA_BUNDLE'] = ''\n",
    "\n",
    "print(\"Environment setup complete!\")\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
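The text-extraction step the notebook describes (web scraping with BeautifulSoup before summarization) can be sketched as below. This is an illustrative helper, not code from the notebook; the HTML would come from e.g. `requests.get(url).text`:

```python
# Sketch: reduce a page's HTML to readable text for summarization.
# Hypothetical helper, not part of the notebook itself.
from bs4 import BeautifulSoup

def website_text(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):
        tag.decompose()  # drop non-content elements
    return soup.get_text(separator="\n", strip=True)

sample = "<html><body><script>x()</script><h1>Title</h1><p>Body text.</p></body></html>"
print(website_text(sample))
```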
3  week1/community-contributions/r00trose/requirements.txt  Normal file
@@ -0,0 +1,3 @@
openai>=1.0.0
python-dotenv>=1.0.0
ipython>=8.0.0