{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "ykDDGx1cjYlh"
},
"source": [
"# **DocuPy** \n",
"### _\"Automate Documentation, Comments, and Unit Tests for Python Code\"_ \n",
"\n",
"## Overview \n",
"DocuPy is a Gradio-powered tool designed to automate essential but time-consuming Python development tasks. It streamlines documentation, unit testing, and Python-to-C++ code conversion with AI-driven assistance. \n",
"\n",
"### Key Features \n",
"✅ **Auto-Generate Docstrings & Comments** – Instantly improve code clarity and maintainability. \n",
"✅ **Unit Test Generation** – Ensure reliability with AI-generated test cases. \n",
"✅ **Python to C++ Conversion** – Seamlessly translate Python code to C++ with execution support. \n",
"\n",
"With an intuitive tab-based UI, DocuPy enhances productivity for developers of all levels. Whether you're documenting functions, validating code with tests, or exploring C++ conversions, this tool lets you focus on coding while it handles the rest. \n",
"\n",
"🔗 **Check out the repo**: [GitHub Repo](https://github.com/emads22/DocuPy) \n",
"\n",
"💡 **Have insights, feedback, or ideas?** Feel free to reach out. \n",
"\n",
"[<img src=\"https://img.shields.io/badge/GitHub-Emad-blue?logo=github\" width=\"150\">](https://github.com/emads22)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you're running this notebook on **`Google Colab`**, ensure you install the required libraries by running the following command:\n",
"\n",
"```bash\n",
"!pip install -q openai anthropic google-generativeai python-dotenv gradio huggingface_hub transformers\n",
"```\n",
"Otherwise, make sure to activate the Conda environment `docupy` that already includes these modules:\n",
"\n",
"```bash\n",
"conda activate docupy\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "6wIpBtNPjXc8"
},
"outputs": [],
"source": [
"# Uncomment the following command when running on Google Colab\n",
"# !pip install -q openai anthropic google-generativeai python-dotenv gradio huggingface_hub transformers"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "T-cTBf9amBxf"
},
"source": [
"## Setup and Imports\n",
"\n",
"- Start by importing all the necessary libraries."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "aIHWC7xpk87X"
},
"outputs": [],
"source": [
"# imports\n",
"import os\n",
"import io\n",
"import sys\n",
"import subprocess\n",
"import openai\n",
"import anthropic\n",
"import google.generativeai as google_genai\n",
"import gradio as gr\n",
"from openai import OpenAI\n",
"# from google.colab import userdata\n",
"from dotenv import load_dotenv\n",
"from pathlib import Path\n",
"from huggingface_hub import login, InferenceClient\n",
"from transformers import AutoTokenizer"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "LZQbXR3dmZy4"
},
"source": [
"## Add Secrets to the Colab Notebook\n",
"\n",
"- Add the API keys for OpenAI, Claude, and Gemini, along with the Hugging Face token and the Code Qwen endpoint URL, to authenticate and access the respective models and services.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "AadABekBm4fV"
},
"outputs": [],
"source": [
"# # Log in to Hugging Face using the token and add it to git credentials\n",
"# hf_token = userdata.get('HF_TOKEN')\n",
"# login(token=hf_token, add_to_git_credential=True)\n",
"\n",
"# # Endpoint URL for accessing the Code Qwen model through Hugging Face\n",
"# CODE_QWEN_URL = userdata.get('CODE_QWEN_URL')\n",
"\n",
"# # Initialize inference clients with every model using API keys\n",
"# gpt = openai.OpenAI(api_key=userdata.get('OPENAI_API_KEY'))\n",
"# claude = anthropic.Anthropic(api_key=userdata.get('ANTHROPIC_API_KEY'))\n",
"# google_genai.configure(api_key=userdata.get('GOOGLE_API_KEY'))\n",
"# code_qwen = InferenceClient(CODE_QWEN_URL, token=hf_token)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Ej3JNfh_wc0m"
},
"source": [
"## Load Environment Variables for API Keys (If Not Running on Google Colab)\n",
"\n",
"- Use the `load_dotenv()` function to securely load API keys from a `.env` file.\n",
"- Ensure that the `.env` file is located in the same directory as your script or Jupyter Notebook.\n",
"- The `.env` file should include the required API keys for OpenAI, Claude, and Gemini, plus the Hugging Face token and the Code Qwen endpoint URL, as in the example below.\n",
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "av9X9XpQw0Vd"
},
"outputs": [],
"source": [
"load_dotenv()\n",
"\n",
"# Log in to Hugging Face using the token and add it to git credentials\n",
"hf_token = os.getenv('HF_TOKEN')\n",
"login(token=hf_token, add_to_git_credential=True)\n",
"\n",
"# Endpoint URL for accessing the Code Qwen model through Hugging Face\n",
"CODE_QWEN_URL = os.getenv('CODE_QWEN_URL')\n",
"\n",
"# Initialize inference clients with every model using API keys\n",
"gpt = openai.OpenAI(api_key=os.getenv('OPENAI_API_KEY'))\n",
"claude = anthropic.Anthropic(api_key=os.getenv('ANTHROPIC_API_KEY'))\n",
"google_genai.configure(api_key=os.getenv('GOOGLE_API_KEY'))\n",
"code_qwen = InferenceClient(CODE_QWEN_URL, token=hf_token)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "lvEhCuQjrTYu"
},
"source": [
"## Define Required Constants\n",
"\n",
"- Initialize the essential constants required for the application's functionality.\n",
"- Configure the system and user prompts specific to each task or feature.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "AKEBKKmAowt2"
},
"outputs": [],
"source": [
"# Models\n",
"OPENAI_MODEL = \"gpt-4o\"\n",
"CLAUDE_MODEL = \"claude-3-5-sonnet-20240620\"\n",
"GEMINI_MODEL = \"gemini-1.5-pro\"\n",
"CODE_QWEN_MODEL = \"Qwen/CodeQwen1.5-7B-Chat\"\n",
"\n",
"MODELS_IN_USE = [\"GPT\", \"Claude\", \"Gemini\", \"CodeQwen\"]\n",
"\n",
"MAX_TOKENS = 2000\n",
"\n",
"ACTION_A = \"commenting\"\n",
"ACTION_B = \"testing\"\n",
"ACTION_C = \"converting\"\n",
"\n",
"# Define and create the path for the \"temp_files\" directory within the current working directory\n",
"TEMP_DIR = Path.cwd() / \"temp_files\"\n",
"TEMP_DIR.mkdir(parents=True, exist_ok=True)\n",
"\n",
"PYTHON_SCRIPT_EASY = \"\"\"\n",
"import time\n",
"\n",
"def reverse_string(s):\n",
"    return s[::-1]\n",
"\n",
"if __name__ == \"__main__\":\n",
"    start_time = time.time()\n",
"    text = \"Hello, World!\"\n",
"    print(f\"- Original string: {text}\")\n",
"    print(\"- Reversed string:\", reverse_string(text))\n",
"    execution_time = time.time() - start_time\n",
"    print(f\"\\\\n=> Execution Time: {execution_time:.6f} seconds\")\n",
"\"\"\"\n",
"\n",
"PYTHON_SCRIPT_INTERMEDIATE = \"\"\"\n",
"import time\n",
"\n",
"def is_palindrome(s):\n",
"    s = s.lower().replace(\" \", \"\")\n",
"    return s == s[::-1]\n",
"\n",
"if __name__ == \"__main__\":\n",
"    start_time = time.time()\n",
"    text = \"Racecar\"\n",
"    if is_palindrome(text):\n",
"        print(f\"- '{text}' is a palindrome!\")\n",
"    else:\n",
"        print(f\"- '{text}' is not a palindrome.\")\n",
"    execution_time = time.time() - start_time\n",
"    print(f\"\\\\n=> Execution Time: {execution_time:.6f} seconds\")\n",
"\"\"\"\n",
"\n",
"PYTHON_SCRIPT_HARD = \"\"\"\n",
"import time\n",
"\n",
"def generate_primes(limit):\n",
"    primes = []\n",
"    for num in range(2, limit + 1):\n",
"        if all(num % p != 0 for p in primes):\n",
"            primes.append(num)\n",
"    return primes\n",
"\n",
"if __name__ == \"__main__\":\n",
"    start_time = time.time()\n",
"    n = 20\n",
"    print(f\"- Generating primes up to: {n}\")\n",
"    print(\"- Prime numbers:\", generate_primes(n))\n",
"    execution_time = time.time() - start_time\n",
"    print(f\"\\\\n=> Execution Time: {execution_time:.6f} seconds\")\n",
"\"\"\"\n",
"\n",
"PYTHON_SCRIPTS = {\n",
"    \"reverse_string\": PYTHON_SCRIPT_EASY,\n",
"    \"is_palindrome\": PYTHON_SCRIPT_INTERMEDIATE,\n",
"    \"generate_primes\": PYTHON_SCRIPT_HARD,\n",
"    \"custom\": \"\"\"\n",
"# Write your custom Python script here\n",
"if __name__ == \"__main__\":\n",
"    print(\"Hello, World!\")\n",
"\"\"\"\n",
"}\n",
"\n",
"# Task-specific system prompts\n",
"SYSTEM_PROMPT_COMMENTS = \"\"\"\n",
"You are an AI model specializing in enhancing Python code documentation.\n",
"Generate detailed and precise docstrings and inline comments for the provided Python code.\n",
"Ensure the docstrings clearly describe the purpose, parameters, and return values of each function.\n",
"Inline comments should explain complex or non-obvious code segments.\n",
"Do not include any introductions, explanations, conclusions, or additional context.\n",
"Return only the updated Python code enclosed within ```python ... ``` for proper formatting and syntax highlighting.\n",
"\"\"\"\n",
"\n",
"SYSTEM_PROMPT_TESTS = \"\"\"\n",
"You are an AI model specializing in generating comprehensive unit tests for Python code.\n",
"Create Python unit tests that thoroughly validate the functionality of the given code.\n",
"Use the `unittest` framework and ensure edge cases and error conditions are tested.\n",
"Do not include any comments, introductions, explanations, conclusions, or additional context.\n",
"Return only the unit test code enclosed within ```python ... ``` for proper formatting and syntax highlighting.\n",
"\"\"\"\n",
"\n",
"SYSTEM_PROMPT_CONVERT = \"\"\"\n",
"You are an AI model specializing in high-performance code translation.\n",
"Translate the given Python code into equivalent, optimized C++ code.\n",
"Focus on:\n",
"- Using efficient data structures and algorithms.\n",
"- Avoiding unnecessary memory allocations and computational overhead.\n",
"- Ensuring minimal risk of integer overflow by using appropriate data types.\n",
"- Leveraging the C++ Standard Library (e.g., `<vector>`, `<algorithm>`) for performance and readability.\n",
"Produce concise and efficient C++ code that matches the functionality of the original Python code.\n",
"Do not include any comments, introductions, explanations, conclusions, or additional context.\n",
"Return only the C++ code enclosed within ```cpp ... ``` for proper formatting and syntax highlighting.\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "JJ1zttf7ANqD"
},
"outputs": [],
"source": [
"# Task-specific user prompts\n",
"def user_prompt_comments(python_code):\n",
"    user_prompt = f\"\"\"\n",
"Add detailed docstrings and inline comments to the following Python code:\n",
"\n",
"```python\n",
"{python_code}\n",
"```\n",
"\"\"\"\n",
"    return user_prompt\n",
"\n",
"def user_prompt_tests(python_code):\n",
"    user_prompt = f\"\"\"\n",
"Generate unit tests for the following Python code using the `unittest` framework:\n",
"\n",
"```python\n",
"{python_code}\n",
"```\n",
"\"\"\"\n",
"    return user_prompt\n",
"\n",
"def user_prompt_convert(python_code):\n",
"    user_prompt = f\"\"\"\n",
"Convert the following Python code into C++:\n",
"\n",
"```python\n",
"{python_code}\n",
"```\n",
"\"\"\"\n",
"    return user_prompt"
]
},
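{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick, API-free sanity check, you can build one of these prompts for a tiny snippet and print it to see exactly what would be sent to a model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Preview the full prompt pair for the 'converting' action (no API call involved)\n",
"print(SYSTEM_PROMPT_CONVERT)\n",
"print(user_prompt_convert('print(\"Hello, World!\")'))"
]
},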
{
"cell_type": "markdown",
"metadata": {
"id": "tqrOO_qsCRkd"
},
"source": [
"## Define the Tab Functions\n",
"\n",
"- Develop dedicated functions for each service: documenting Python code, generating unit tests, and converting Python to C++.\n",
"- Structure each function to handle user input, process it using the selected AI model, and display the generated output seamlessly.\n",
"- Ensure the functionality of each tab aligns with its specific purpose, providing an intuitive and efficient user experience.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "HBsBrq3G94ul"
},
"outputs": [],
"source": [
"def stream_gpt(system_prompt, user_prompt):\n",
"    stream = gpt.chat.completions.create(\n",
"        model=OPENAI_MODEL,\n",
"        messages=[\n",
"            {\"role\": \"system\", \"content\": system_prompt},\n",
"            {\"role\": \"user\", \"content\": user_prompt}\n",
"        ],\n",
"        stream=True)\n",
"    reply = \"\"\n",
"    for chunk in stream:\n",
"        reply += chunk.choices[0].delta.content or \"\"\n",
"        yield reply.replace(\"```python\\n\", \"\").replace(\"```cpp\\n\", \"\").replace(\"```\", \"\")\n",
"\n",
"def stream_claude(system_prompt, user_prompt):\n",
"    response = claude.messages.stream(\n",
"        model=CLAUDE_MODEL,\n",
"        max_tokens=MAX_TOKENS,\n",
"        system=system_prompt,\n",
"        messages=[{\"role\": \"user\", \"content\": user_prompt}],\n",
"    )\n",
"    reply = \"\"\n",
"    with response as stream:\n",
"        for text in stream.text_stream:\n",
"            reply += text\n",
"            yield reply.replace(\"```python\\n\", \"\").replace(\"```cpp\\n\", \"\").replace(\"```\", \"\")\n",
"\n",
"def stream_gemini(system_prompt, user_prompt):\n",
"    gemini = google_genai.GenerativeModel(\n",
"        model_name=GEMINI_MODEL,\n",
"        system_instruction=system_prompt\n",
"    )\n",
"    stream = gemini.generate_content(\n",
"        contents=user_prompt,\n",
"        stream=True\n",
"    )\n",
"    reply = \"\"\n",
"    for chunk in stream:\n",
"        reply += chunk.text or \"\"\n",
"        yield reply.replace(\"```python\\n\", \"\").replace(\"```cpp\\n\", \"\").replace(\"```\", \"\")\n",
"\n",
"def stream_code_qwen(system_prompt, user_prompt):\n",
"    tokenizer = AutoTokenizer.from_pretrained(CODE_QWEN_MODEL)\n",
"    model_input = tokenizer.apply_chat_template(\n",
"        conversation=[\n",
"            {\"role\": \"system\", \"content\": system_prompt},\n",
"            {\"role\": \"user\", \"content\": user_prompt}\n",
"        ],\n",
"        tokenize=False,\n",
"        add_generation_prompt=True\n",
"    )\n",
"    stream = code_qwen.text_generation(\n",
"        prompt=model_input,\n",
"        stream=True,\n",
"        details=True,\n",
"        max_new_tokens=MAX_TOKENS\n",
"    )\n",
"    reply = \"\"\n",
"    for chunk in stream:\n",
"        reply += chunk.token.text or \"\"\n",
"        yield reply.replace(\"```python\\n\", \"\").replace(\"```cpp\\n\", \"\").replace(\"```\", \"\")\n",
"\n",
"def set_prompts(user_input, action):\n",
"    action = action.lower()\n",
"\n",
"    if action == ACTION_A.lower():\n",
"        system_prompt = SYSTEM_PROMPT_COMMENTS\n",
"        user_prompt = user_prompt_comments(user_input)\n",
"    elif action == ACTION_B.lower():\n",
"        system_prompt = SYSTEM_PROMPT_TESTS\n",
"        user_prompt = user_prompt_tests(user_input)\n",
"    elif action == ACTION_C.lower():\n",
"        system_prompt = SYSTEM_PROMPT_CONVERT\n",
"        user_prompt = user_prompt_convert(user_input)\n",
"    else:\n",
"        return None, None\n",
"\n",
"    return system_prompt, user_prompt\n",
"\n",
"def stream_response(user_input, model, action):\n",
"    system_prompt, user_prompt = set_prompts(user_input, action)\n",
"    if not all((system_prompt, user_prompt)):\n",
"        raise ValueError(\"Unknown Action\")\n",
"\n",
"    match model:\n",
"        case \"GPT\":\n",
"            yield from stream_gpt(system_prompt, user_prompt)\n",
"\n",
"        case \"Claude\":\n",
"            yield from stream_claude(system_prompt, user_prompt)\n",
"\n",
"        case \"Gemini\":\n",
"            yield from stream_gemini(system_prompt, user_prompt)\n",
"\n",
"        case \"CodeQwen\":\n",
"            yield from stream_code_qwen(system_prompt, user_prompt)\n",
"\n",
"def generate_comments(python_code, selected_model):\n",
"    for model in MODELS_IN_USE:\n",
"        if model == selected_model:\n",
"            yield from stream_response(python_code, model, action=ACTION_A)\n",
"            return  # Exit the function immediately after exhausting the generator\n",
"    raise ValueError(\"Unknown Model\")\n",
"\n",
"def generate_tests(python_code, selected_model):\n",
"    for model in MODELS_IN_USE:\n",
"        if model == selected_model:\n",
"            yield from stream_response(python_code, model, action=ACTION_B)\n",
"            return  # Exit the function immediately after exhausting the generator\n",
"    raise ValueError(\"Unknown Model\")\n",
"\n",
"def convert_code(python_code, selected_model):\n",
"    for model in MODELS_IN_USE:\n",
"        if model == selected_model:\n",
"            yield from stream_response(python_code, model, action=ACTION_C)\n",
"            return  # Exit the function immediately after exhausting the generator\n",
"    raise ValueError(\"Unknown Model\")"
]
},
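{
"cell_type": "markdown",
"metadata": {},
"source": [
"The helpers above can also be driven outside the UI: `stream_response` is a generator that yields the progressively longer reply, so the last value it yields is the complete output. The sanity check below is a minimal sketch, left commented out because it makes a live API call and assumes the clients above are configured."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# # Uncomment to try the streaming helpers directly (makes a live API call)\n",
"# final_reply = \"\"\n",
"# for partial in stream_response(PYTHON_SCRIPT_EASY, model=\"GPT\", action=ACTION_A):\n",
"#     final_reply = partial  # each yield is the full reply so far\n",
"# print(final_reply)"
]
},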
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Code Execution Functions\n",
"\n",
"- Functions that dynamically execute Python or C++ code provided as a string and capture its output.\n",
"- This is useful for evaluating Python or C++ code snippets and returning their results programmatically.\n",
"\n",
"### IMPORTANT WARNING:\n",
"These functions dynamically execute whatever Python or C++ code they are given.\n",
"While powerful, this is extremely dangerous if the input code is not trusted.\n",
"Any malicious code can be executed, including:\n",
"  - Deleting files or directories\n",
"  - Stealing sensitive data (e.g., accessing environment variables or credentials)\n",
"  - Running arbitrary commands that compromise the system\n",
"\n",
"Sharing this notebook with this code snippet can allow attackers to exploit this functionality\n",
"by passing harmful code as input.\n",
"\n",
"If you share this notebook or use these functions:\n",
"  1. Only accept input from trusted sources.\n",
"  2. Consider running the code in a sandboxed environment (e.g., a virtual machine or container); a minimal timeout-based sketch follows below.\n",
"  3. Avoid using these functions in publicly accessible applications or notebooks without strict validation."
]
},
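{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before the real runners, here is a minimal sketch of one mitigation mentioned above: executing the code in a separate process with a hard time limit via `subprocess.run(..., timeout=...)`. It only bounds runtime; it is not a real sandbox (no filesystem or network isolation), and the helper name `run_python_with_timeout` is introduced here purely for illustration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A minimal timeout-based mitigation sketch: run the code in a separate\n",
"# process and abort it if it exceeds the time budget. NOT a real sandbox.\n",
"def run_python_with_timeout(code, timeout_seconds=10):\n",
"    script_path = TEMP_DIR / \"untrusted_code.py\"\n",
"    script_path.write_text(code)\n",
"    try:\n",
"        result = subprocess.run(\n",
"            [sys.executable, str(script_path)],\n",
"            capture_output=True, text=True, timeout=timeout_seconds\n",
"        )\n",
"        return result.stdout if result.returncode == 0 else result.stderr\n",
"    except subprocess.TimeoutExpired:\n",
"        return f\"Execution aborted after {timeout_seconds} seconds.\"\n",
"    finally:\n",
"        # Always remove the temporary script\n",
"        if script_path.exists():\n",
"            script_path.unlink()"
]
},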
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def run_python_exec(code):\n",
"    try:\n",
"        # Capture stdout using StringIO\n",
"        output = io.StringIO()\n",
"\n",
"        # Redirect stdout to StringIO\n",
"        sys.stdout = output\n",
"\n",
"        # Execute the provided Python code\n",
"        exec(code)\n",
"    finally:\n",
"        # Restore original stdout\n",
"        sys.stdout = sys.__stdout__\n",
"\n",
"    # Return the captured output\n",
"    return output.getvalue()\n",
"\n",
"# Improved Python runner that executes the code in a separate process\n",
"def run_python(code):\n",
"    # Save the Python code to a file\n",
"    with open(TEMP_DIR / \"python_code.py\", \"w\") as python_file:\n",
"        python_file.write(code)\n",
"\n",
"    try:\n",
"        # Execute the Python code with the current interpreter\n",
"        result = subprocess.run(\n",
"            [sys.executable, str(TEMP_DIR / \"python_code.py\")],\n",
"            check=True, text=True, capture_output=True\n",
"        )\n",
"\n",
"        # Return the program's output\n",
"        return result.stdout\n",
"\n",
"    except subprocess.CalledProcessError as e:\n",
"        # Handle execution errors\n",
"        return f\"An error occurred during execution:\\n{e.stderr}\"\n",
"\n",
"    finally:\n",
"        # Clean up: delete the temporary Python source file\n",
"        file_path = TEMP_DIR / \"python_code.py\"\n",
"        if file_path.exists():\n",
"            file_path.unlink()\n",
"\n",
"def run_cpp(code):\n",
"    # Save the C++ code to a file\n",
"    with open(TEMP_DIR / \"cpp_code.cpp\", \"w\") as cpp_file:\n",
"        cpp_file.write(code)\n",
"\n",
"    try:\n",
"        # Compile the C++ code\n",
"        subprocess.run(\n",
"            [\"g++\", \"-o\", str(TEMP_DIR / \"cpp_code\"), str(TEMP_DIR / \"cpp_code.cpp\")],\n",
"            check=True, text=True, capture_output=True\n",
"        )\n",
"\n",
"        # Execute the compiled program\n",
"        result = subprocess.run(\n",
"            [str(TEMP_DIR / \"cpp_code\")],\n",
"            check=True, text=True, capture_output=True\n",
"        )\n",
"\n",
"        # Return the program's output\n",
"        return result.stdout\n",
"\n",
"    except subprocess.CalledProcessError as e:\n",
"        # Handle compilation or execution errors\n",
"        error_context = \"during compilation\" if \"cpp_code.cpp\" in e.stderr else \"during execution\"\n",
"        return f\"An error occurred {error_context}:\\n{e.stderr}\"\n",
"\n",
"    finally:\n",
"        # Clean up: delete the C++ source file and executable\n",
"        for filename in [\"cpp_code.cpp\", \"cpp_code\", \"cpp_code.exe\"]:\n",
"            file_path = TEMP_DIR / filename\n",
"            if file_path.exists():\n",
"                file_path.unlink()"
]
},
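{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick smoke test of the subprocess-based runner, safe here because the input is one of the notebook's own trusted sample scripts:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Smoke-test the runner with a trusted sample script defined above\n",
"print(run_python(PYTHON_SCRIPT_EASY))"
]
},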
{
"cell_type": "markdown",
"metadata": {
"id": "Vude1jzPrgT2"
},
"source": [
"## Develop a User-Friendly Interface with Gradio\n",
"\n",
"- Design a clean, intuitive, and user-centric interface using Gradio.\n",
"- Ensure responsiveness and accessibility to provide a seamless and efficient user experience.\n",
"- Focus on simplicity while maintaining functionality to cater to diverse user needs.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Eh-sWFZVBb_y"
},
"outputs": [],
"source": [
"# CSS styles for customizing the appearance of the Gradio UI elements.\n",
"css = \"\"\"\n",
".python {\n",
"    background-color: #377ef0;\n",
"    color: #ffffff;\n",
"    padding: 0.5em;\n",
"    border-radius: 5px; /* Slightly rounded corners */\n",
"}\n",
".cpp {\n",
"    background-color: #00549e;\n",
"    color: #ffffff;\n",
"    padding: 0.5em;\n",
"    border-radius: 5px;\n",
"}\n",
".model {\n",
"    background-color: #17a2b8; /* Vibrant cyan color */\n",
"    color: white;\n",
"    font-size: 1.2em;\n",
"    padding: 0.5em;\n",
"    border: none;\n",
"    border-radius: 5px;\n",
"    cursor: pointer;\n",
"}\n",
".button {\n",
"    height: 4em;\n",
"    font-size: 1.5em;\n",
"    padding: 0.5em 1em;\n",
"    background-color: #e67e22; /* Vibrant orange */\n",
"    color: white;\n",
"    border: none;\n",
"    border-radius: 5px;\n",
"    cursor: pointer;\n",
"}\n",
".run-button {\n",
"    height: 3em;\n",
"    font-size: 1.5em;\n",
"    padding: 0.5em 1em;\n",
"    background-color: #16a085; /* Rich teal color */\n",
"    color: white;\n",
"    border: none;\n",
"    border-radius: 5px;\n",
"    cursor: pointer;\n",
"}\n",
".button:hover, .run-button:hover {\n",
"    background-color: #2c3e50; /* Dark navy for hover effect */\n",
"    color: #fff;\n",
"}\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "M_v-j-B_sQHe"
},
"outputs": [],
"source": [
"# Tab to Document Code with Docstrings and Comments\n",
"def docs_comments_ui():\n",
"    with gr.Tab(\"Docstrings & Comments\"):\n",
"        gr.Markdown(\"\"\"\n",
"        ## Document Code with Docstrings and Comments\n",
"        This tab allows you to automatically generate docstrings and inline comments for your Python code.\n",
"        - Paste your Python code into the **`Python Code`** textbox.\n",
"        - Select your preferred model (GPT, Claude, Gemini, or CodeQwen) to process the code.\n",
"        - Click the **`Add Docstrings & Comments`** button to generate well-documented Python code.\n",
"        The generated code will appear in the **`Python Code with Docstrings and Comments`** textarea.\n",
"        \"\"\")\n",
"        with gr.Row():\n",
"            python = gr.Textbox(label=\"Python Code:\", lines=20, value=PYTHON_SCRIPTS[\"custom\"], elem_classes=[\"python\"])\n",
"            python_with_comments = gr.TextArea(label=\"Python Code with Docstrings and Comments:\", interactive=True, lines=20, elem_classes=[\"python\"])\n",
"        with gr.Row():\n",
"            python_script = gr.Dropdown(choices=list(PYTHON_SCRIPTS.keys()), label=\"Select a Python script\", value=\"custom\", elem_classes=[\"model\"])\n",
"            comments_btn = gr.Button(\"Add Docstrings & Comments\", elem_classes=[\"button\"])\n",
"            model = gr.Dropdown([\"GPT\", \"Claude\", \"Gemini\", \"CodeQwen\"], label=\"Select Model\", value=\"GPT\", elem_classes=[\"model\"])\n",
"\n",
"        python_script.change(\n",
"            fn=lambda script: PYTHON_SCRIPTS[script],\n",
"            inputs=[python_script],\n",
"            outputs=[python]\n",
"        )\n",
"\n",
"        comments_btn.click(\n",
"            fn=lambda: \"\",\n",
"            inputs=None,\n",
"            outputs=[python_with_comments]\n",
"        ).then(\n",
"            fn=generate_comments,\n",
"            inputs=[python, model],\n",
"            outputs=[python_with_comments]\n",
"        )\n",
"\n",
"    return python_with_comments"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "WDjJp1eXtQzY"
},
"outputs": [],
"source": [
"# Tab to Generate Comprehensive Unit Tests\n",
"def unit_tests_ui():\n",
"    with gr.Tab(\"Unit Tests\"):\n",
"        gr.Markdown(\"\"\"\n",
"        ## Generate Comprehensive Unit Tests\n",
"        This tab helps you create unit tests for your Python code automatically.\n",
"        - Paste your Python code into the **`Python Code`** textbox.\n",
"        - Choose a model (GPT, Claude, Gemini, or CodeQwen) to generate the unit tests.\n",
"        - Click the **`Generate Unit Tests`** button, and the generated unit tests will appear in the **`Python Code with Unit Tests`** textarea.\n",
"        Use these unit tests to ensure your code behaves as expected.\n",
"        \"\"\")\n",
"        with gr.Row():\n",
"            python = gr.Textbox(label=\"Python Code:\", lines=20, value=PYTHON_SCRIPTS[\"custom\"], elem_classes=[\"python\"])\n",
"            python_unit_tests = gr.TextArea(label=\"Python Code with Unit Tests:\", interactive=True, lines=20, elem_classes=[\"python\"])\n",
"        with gr.Row():\n",
"            python_script = gr.Dropdown(choices=list(PYTHON_SCRIPTS.keys()), label=\"Select a Python script\", value=\"custom\", elem_classes=[\"model\"])\n",
"            unit_tests_btn = gr.Button(\"Generate Unit Tests\", elem_classes=[\"button\"])\n",
"            model = gr.Dropdown([\"GPT\", \"Claude\", \"Gemini\", \"CodeQwen\"], label=\"Select Model\", value=\"GPT\", elem_classes=[\"model\"])\n",
"\n",
"        python_script.change(\n",
"            fn=lambda script: PYTHON_SCRIPTS[script],\n",
"            inputs=[python_script],\n",
"            outputs=[python]\n",
"        )\n",
"\n",
"        unit_tests_btn.click(\n",
"            fn=lambda: \"\",\n",
"            inputs=None,\n",
"            outputs=[python_unit_tests]\n",
"        ).then(\n",
"            fn=generate_tests,\n",
"            inputs=[python, model],\n",
"            outputs=[python_unit_tests]\n",
"        )\n",
"\n",
"    return python_unit_tests"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "x57SZeLi9NyV"
},
"outputs": [],
"source": [
"# Tab to Convert Python Code to C++\n",
"def python_to_cpp_ui():\n",
"    with gr.Tab(\"Python to C++\"):\n",
"        gr.Markdown(\"\"\"\n",
"        ## Convert Python Code to C++\n",
"        This tab facilitates the conversion of Python code into C++.\n",
"        - Paste your Python code into the **`Python Code`** textbox.\n",
"        - Select your preferred model (GPT, Claude, Gemini, or CodeQwen) to perform the conversion.\n",
"        - Click **`Convert to C++`** to see the equivalent C++ code in the **`C++ Code`** textbox.\n",
"        Additional Features:\n",
"        - You can execute the Python or C++ code directly using the respective **`Run Python`** or **`Run C++`** buttons.\n",
"        - The output will appear in the respective result text areas below.\n",
"        \"\"\")\n",
"        with gr.Row():\n",
"            python = gr.Textbox(label=\"Python Code:\", lines=20, value=PYTHON_SCRIPTS[\"custom\"], elem_classes=[\"python\"])\n",
"            cpp = gr.Textbox(label=\"C++ Code:\", interactive=True, lines=20, elem_classes=[\"cpp\"])\n",
"        with gr.Row():\n",
"            python_script = gr.Dropdown(choices=list(PYTHON_SCRIPTS.keys()), label=\"Select a Python script\", value=\"custom\", elem_classes=[\"model\"])\n",
"            convert_btn = gr.Button(\"Convert to C++\", elem_classes=[\"button\"])\n",
"            model = gr.Dropdown([\"GPT\", \"Claude\", \"Gemini\", \"CodeQwen\"], label=\"Select Model\", value=\"GPT\", elem_classes=[\"model\"])\n",
"        with gr.Row():\n",
"            run_python_btn = gr.Button(\"Run Python\", elem_classes=[\"run-button\"])\n",
"            run_cpp_btn = gr.Button(\"Run C++\", elem_classes=[\"run-button\"])\n",
"        with gr.Row():\n",
"            python_out = gr.TextArea(label=\"Python Result:\", lines=10, elem_classes=[\"python\"])\n",
"            cpp_out = gr.TextArea(label=\"C++ Result:\", lines=10, elem_classes=[\"cpp\"])\n",
"\n",
"        python_script.change(\n",
"            fn=lambda script: PYTHON_SCRIPTS[script],\n",
"            inputs=[python_script],\n",
"            outputs=[python]\n",
"        )\n",
"\n",
"        convert_btn.click(\n",
"            fn=lambda: \"\",\n",
"            inputs=None,\n",
"            outputs=[cpp]\n",
"        ).then(\n",
"            fn=convert_code,\n",
"            inputs=[python, model],\n",
"            outputs=[cpp]\n",
"        )\n",
"        run_python_btn.click(run_python, inputs=[python], outputs=[python_out])\n",
"        run_cpp_btn.click(run_cpp, inputs=[cpp], outputs=[cpp_out])\n",
"\n",
"    return cpp, python_out, cpp_out"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 645
},
"id": "n8ZdDrOrrbl-",
"outputId": "08350d69-569e-4947-8da1-d755e9a2678f"
},
"outputs": [],
"source": [
"# Combine the tabs into the main UI and handle tab switching\n",
"with gr.Blocks(css=css) as main_ui:\n",
"    with gr.Tabs() as tabs:\n",
"        comments_output = docs_comments_ui()\n",
"        tests_output = unit_tests_ui()\n",
"        cpp_output, python_out, cpp_out = python_to_cpp_ui()\n",
"\n",
"    # Reset outputs on tab switch\n",
"    tabs.select(\n",
"        fn=lambda: [\"\", \"\", \"\", \"\", \"\"],\n",
"        inputs=None,\n",
"        outputs=[comments_output, tests_output, cpp_output, python_out, cpp_out]\n",
"    )\n",
"\n",
"# Launch the app\n",
"main_ui.launch(inbrowser=True)"
]
},
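{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note: `inbrowser=True` opens a tab in a local browser, so it has no visible effect on Google Colab. There, Gradio's `share=True` option can be used instead to expose the app through a temporary public link, as in the commented line below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Uncomment to launch with a shareable public link (useful on Google Colab)\n",
"# main_ui.launch(share=True)"
]
},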
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 4
}