Merge pull request #886 from Tochi-Nwachukwu/main
[Bootcamp] - Tochi - Week 7 Exercise - Product Price Predictor using Fine-Tuned Gemma-3-4b-it Model
This commit is contained in:
37
week7/community_contributions/tochi/week_7_exercise.ipynb
Normal file
@@ -0,0 +1,37 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "8fa27a71",
   "metadata": {},
   "source": [
"# Week 7 Exercise: Fine-Tuning Gemma 3 Open Source LLM for Price Prediction\n",
|
||||||
|
"\n",
|
||||||
|
"## Project Overview\n",
|
||||||
|
"Fine-tuning an open-source Large Language Model to predict product prices based on textual descriptions.\n",
|
||||||
|
"\n",
|
||||||
|
"## Base Model\n",
|
||||||
|
"unsloth/gemma-3-4b-it\n",
|
||||||
|
"\n",
|
||||||
|
"## Dataset\n",
|
||||||
|
"ed-donner/pricer-data\n",
|
||||||
|
"\n",
|
||||||
|
"## Fine Tuning Details\n",
|
||||||
|
"* Fine-tuning Approach: LoRA (Low-Rank Adaptation)\n",
|
||||||
|
"* Target modules: All attention and MLP layers\n",
|
||||||
|
"* Framework: Unsloth + TRL (Transformer Reinforcement Learning)\n",
|
||||||
|
"* Chat template formatting for consistent input structure\n",
|
||||||
|
"* 4-bit quantization for memory-efficient training\n",
|
||||||
|
"\n",
|
||||||
|
"link to colab notebook : https://colab.research.google.com/drive/1sCQQ_OoR2kd1ASivfbUQx-lIE-wHIT6s?usp=sharing\n"
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
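For readers skimming this commit: the bullet list in the committed file compresses the whole recipe, and the actual code lives only at the Colab link. As a rough illustration of what that Unsloth + TRL setup typically looks like, here is a minimal sketch. Everything in it is an assumption rather than an excerpt from the notebook: the hyperparameters, the prompt wording, and the guess that ed-donner/pricer-data exposes `text` and `price` columns are all illustrative, and exact Unsloth/TRL argument names vary somewhat across library versions.

```python
# Illustrative sketch of the recipe described above -- NOT the committed notebook.
# Assumes a CUDA GPU with unsloth, trl, and datasets installed, and assumes
# ed-donner/pricer-data has "text" (description) and "price" (float) columns.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer, SFTConfig

# Load the base model in 4-bit for memory-efficient training.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-3-4b-it",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters to all attention and MLP projection layers.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    bias="none",
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention
        "gate_proj", "up_proj", "down_proj",     # MLP
    ],
)

dataset = load_dataset("ed-donner/pricer-data", split="train")

def to_chat(example):
    # Wrap each sample in the Gemma chat template for a consistent input
    # structure; the prompt and answer wording here are invented.
    messages = [
        {"role": "user",
         "content": f"How much does this cost?\n\n{example['text']}"},
        {"role": "assistant",
         "content": f"Price is ${example['price']:.2f}"},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = dataset.map(to_chat)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        max_steps=200,          # illustrative; the notebook's value may differ
        output_dir="outputs",
    ),
)
trainer.train()
```

The LoRA-plus-4-bit combination is what makes a 4B-parameter model trainable on a single Colab GPU: the frozen base weights sit in 4-bit precision while only the small adapter matrices receive gradients.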
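And since the PR title calls this a product price predictor, a correspondingly hedged sketch of querying the tuned model, with an invented product description and illustrative generation settings:

```python
# Illustrative inference pass with the LoRA-tuned model (not from the notebook).
from unsloth import FastLanguageModel

FastLanguageModel.for_inference(model)  # switch Unsloth to its fast generation path

prompt = tokenizer.apply_chat_template(
    [{"role": "user",
      "content": "How much does this cost?\n\nCordless drill, 18V, two batteries, hard case"}],
    tokenize=False,
    add_generation_prompt=True,  # append the assistant turn header so the model completes it
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=16)

# Decode only the newly generated tokens, e.g. something like "Price is $89.99".
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```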