Add notebooks for Muhammad Qasim Sheikh in community-contributions

This commit is contained in:
aashahid
2025-10-21 17:51:37 +05:00
parent ef34387aee
commit 0b4e4be9a0
12 changed files with 1284 additions and 0 deletions


@@ -0,0 +1,144 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "d59206dc",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import ollama\n",
"from IPython.display import Markdown, display"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ad035727",
"metadata": {},
"outputs": [],
"source": [
"# Load keys\n",
"load_dotenv()\n",
"client = OpenAI(api_key=os.getenv(\"OPENAI_API_KEY\"))\n",
"ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key = 'ollama')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3f521334",
"metadata": {},
"outputs": [],
"source": [
"# ---- SYSTEM PROMPTS ----\n",
"athena_system = \"\"\"\n",
"You are Athena, a strategic thinker and visionary. You seek meaning, long-term implications,\n",
"and practical wisdom in every discussion. Be concise (1-2 sentences).\n",
"\"\"\"\n",
"\n",
"loki_system = \"\"\"\n",
"You are Loki, a sarcastic trickster who mocks and challenges everyone else's opinions.\n",
"You use humor, wit, and irony to undermine serious arguments. Be concise (1-2 sentences).\n",
"\"\"\"\n",
"\n",
"orion_system = \"\"\"\n",
"You are Orion, a data-driven realist. You respond with evidence, statistics, or factual analysis.\n",
"If data is not available, make a logical deduction. Be concise (1-2 sentences).\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0a6d04f6",
"metadata": {},
"outputs": [],
"source": [
"# ---- INITIAL CONVERSATION ----\n",
"conversation = [\n",
" {\"role\": \"system\", \"name\": \"Athena\", \"content\": athena_system},\n",
" {\"role\": \"system\", \"name\": \"Loki\", \"content\": loki_system},\n",
" {\"role\": \"system\", \"name\": \"Orion\", \"content\": orion_system},\n",
" {\"role\": \"user\", \"content\": \"Topic: 'Why did the chicken cross the road?' Begin your discussion.\"}\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e292a27b",
"metadata": {},
"outputs": [],
"source": [
"# ---- HELPER FUNCTIONS ----\n",
"def call_gpt(name, system_prompt, conversation):\n",
" \"\"\"Call GPT model with current conversation context.\"\"\"\n",
" messages = [{\"role\": \"system\", \"content\": system_prompt}]\n",
" messages += [{\"role\": \"user\", \"content\": f\"The conversation so far:\\n{format_conversation(conversation)}\\nNow respond as {name}.\"}]\n",
" resp = client.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
" return resp.choices[0].message.content.strip()\n",
"\n",
"def call_ollama(name, system_prompt, conversation):\n",
" \"\"\"Call Ollama (Llama3.2) as a local model.\"\"\"\n",
" messages = [{\"role\": \"system\", \"content\": system_prompt}]\n",
" messages += [{\"role\": \"user\", \"content\": f\"The conversation so far:\\n{format_conversation(conversation)}\\nNow respond as {name}.\"}]\n",
" resp = ollama.chat(model=\"llama3.2\", messages=messages)\n",
" return resp['message']['content'].strip()\n",
"\n",
"def format_conversation(conv):\n",
" return \"\\n\".join([f\"{m.get('name', m['role']).upper()}: {m['content']}\" for m in conv if m['role'] != \"system\"])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f0eb4d72",
"metadata": {},
"outputs": [],
"source": [
"# ---- MAIN LOOP ----\n",
"rounds = 5\n",
"for i in range(rounds):\n",
" # Athena responds\n",
" athena_reply = call_gpt(\"Athena\", athena_system, conversation)\n",
" conversation.append({\"role\": \"assistant\", \"name\": \"Athena\", \"content\": athena_reply})\n",
" display(Markdown(f\"**Athena:** {athena_reply}\"))\n",
"\n",
" # Loki responds\n",
" loki_reply = call_ollama(\"Loki\", loki_system, conversation)\n",
" conversation.append({\"role\": \"assistant\", \"name\": \"Loki\", \"content\": loki_reply})\n",
" display(Markdown(f\"**Loki:** {loki_reply}\"))\n",
"\n",
" # Orion responds\n",
" orion_reply = call_gpt(\"Orion\", orion_system, conversation)\n",
" conversation.append({\"role\": \"assistant\", \"name\": \"Orion\", \"content\": orion_reply})\n",
" display(Markdown(f\"**Orion:** {orion_reply}\"))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "llm-engineering",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -0,0 +1,47 @@
# Multi-Agent Conversation Simulator (OpenAI + Ollama)
## Project Overview
This project is an experimental **multi-agent conversational simulation** built with **OpenAI GPT models** and a locally-hosted **Ollama LLM (Llama 3.2)**. It demonstrates how multiple AI personas can participate in a shared conversation, each with distinct roles, perspectives, and behaviors — producing a dynamic, evolving debate from different angles.
The script orchestrates a **three-way dialogue** around a single topic (“Why did the chicken cross the road?”) among three agents, each powered by a different model and persona definition:
- **Athena (OpenAI GPT-4o mini):** A strategic thinker who looks for deeper meaning, long-term consequences, and practical wisdom.
- **Loki (Ollama Llama 3.2):** A sarcastic trickster who mocks, questions, and challenges the others with wit and irony.
- **Orion (OpenAI GPT-4o mini):** A data-driven realist who grounds the discussion in facts, statistics, or logical deductions.
## What's Happening in the Code
1. **Environment Setup**
- Loads the OpenAI API key from a `.env` file.
- Initializes OpenAI's Python client and configures a local Ollama endpoint.
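
If the `.env` file is missing, `os.getenv` silently returns `None` and the failure only surfaces on the first API call. A small guard (a hypothetical helper, not part of the notebook) makes the setup step fail loudly instead:

```python
import os

def require_api_key(var="OPENAI_API_KEY"):
    """Return the named key from the environment, failing loudly if it is missing."""
    key = os.getenv(var)
    if not key:
        raise RuntimeError(f"{var} is not set; add it to your .env file")
    return key
```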
2. **Persona System Prompts**
- Defines system prompts for each agent to give them unique personalities and communication styles.
- These prompts act as the “character definitions” for Athena, Loki, and Orion.
3. **Conversation Initialization**
- Starts with a single conversation topic provided by the user.
- All three agents are aware of the discussion context and prior messages.
4. **Conversation Loop**
- The conversation runs in multiple rounds (default: 5).
- In each round:
- **Athena (GPT)** responds first with a strategic viewpoint.
- **Loki (Ollama)** replies next, injecting sarcasm and skepticism.
- **Orion (GPT)** follows with a fact-based or analytical perspective.
- Each response is appended to the conversation history so future replies build on previous statements.
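
The round structure above can be sketched as a round-robin over (name, respond) pairs; here a stub stands in for the notebook's `call_gpt` / `call_ollama` so the sketch runs without any API:

```python
# Round-robin sketch of the main loop; `stub` is a placeholder for a real model call.
def run_debate(agents, conversation, rounds=1):
    for _ in range(rounds):
        for name, respond in agents:
            reply = respond(name, conversation)
            conversation.append({"role": "assistant", "name": name, "content": reply})
    return conversation

def stub(name, conversation):
    # Placeholder reply; a real agent would send this context to an LLM.
    return f"{name} speaks (turn {len(conversation)})"

history = run_debate(
    [("Athena", stub), ("Loki", stub), ("Orion", stub)],
    [{"role": "user", "content": "Topic: 'Why did the chicken cross the road?'"}],
)
```

Because each reply is appended before the next agent runs, every agent sees all earlier turns of the round.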
5. **Dynamic Context Sharing**
- Each agent receives the **entire conversation so far** as context before generating a response.
- This ensures their replies are relevant, coherent, and responsive to what the others have said.
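
The context string each agent receives is built by the notebook's `format_conversation` helper, which flattens the history into `NAME: message` lines and drops the system prompts:

```python
def format_conversation(conv):
    # Skip system prompts; label each turn with the speaker's name (or role).
    return "\n".join(f"{m.get('name', m['role']).upper()}: {m['content']}"
                     for m in conv if m["role"] != "system")

history = [
    {"role": "system", "name": "Athena", "content": "persona prompt"},
    {"role": "user", "content": "Why did the chicken cross the road?"},
    {"role": "assistant", "name": "Loki", "content": "To dodge this question."},
]
context = format_conversation(history)
```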
6. **Output Rendering**
- Responses are displayed as Markdown in a readable, chat-like format for each speaker, round by round.
## Key Highlights
- Demonstrates **multi-agent orchestration** with different models working together in a single script.
- Uses **OpenAI GPT models** for reasoning and **Ollama (Llama 3.2)** for local, cost-free inference.
- Shows how **system prompts** and **context-aware message passing** can simulate realistic dialogues.
- Provides a template for experimenting with **AI characters**, **debate simulations**, or **collaborative agent systems**.
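
Extending the template means one more system prompt and one more turn in the loop. A sketch with a hypothetical fourth persona (the name, prompt, and `call_stub` are illustrative, not from the notebook):

```python
# Hypothetical fourth persona; call_stub stands in for a real model call
# such as client.chat.completions.create(...) in the notebook.
hera_system = """
You are Hera, a mediator who summarizes and reconciles the other speakers' points.
Be concise (1-2 sentences).
"""

def call_stub(name, system_prompt, conversation):
    return f"{name} weighs in ({len(conversation)} prior messages)."

conversation = [{"role": "user", "content": "Topic: 'Why did the chicken cross the road?'"}]
reply = call_stub("Hera", hera_system, conversation)
conversation.append({"role": "assistant", "name": "Hera", "content": reply})
```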