asdf98 committed (verified)
Commit b181b88 · Parent(s): 313d1a1

Upload EthicalHacking_LFM2.5_Ultimate_Colab.ipynb
EthicalHacking_LFM2.5_Ultimate_Colab.ipynb CHANGED
@@ -66,7 +66,10 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-  "## 3️⃣ Load LFM2.5-1.2B-Instruct in 4-bit via Unsloth"
+  "## 3️⃣ Load LFM2.5-1.2B-Instruct in 4-bit via Unsloth\n",
+  "\n",
+  "**⚠️ IMPORTANT:** We add `device_map={\"\": torch.cuda.current_device()}` to force the model onto the correct GPU.\n",
+  "Without this, `accelerate` may place the model on CPU and throw a `ValueError` during training on Kaggle/Colab."
   ]
  },
  {
@@ -97,6 +100,7 @@
   " max_seq_length=MAX_SEQ_LENGTH,\n",
   " dtype=None,\n",
   " load_in_4bit=True,\n",
+  " device_map={\"\": torch.cuda.current_device()},  # ← FORCE GPU: fixes Kaggle/Colab device placement bug\n",
   ")\n",
   "\n",
   "model = FastLanguageModel.get_peft_model(\n",
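The `device_map` passed here uses accelerate's convention that the empty-string key refers to the model's root module, so `{"": device}` pins the entire model (every submodule) to one device instead of letting the auto-placement logic park layers on CPU. A minimal sketch of that selection logic, using a hypothetical helper name (`gpu_pinned_device_map` is not from the notebook):

```python
# Hypothetical helper (not part of the notebook): build an accelerate-style
# device_map that pins the whole model to a single device, mirroring the
# fix in the diff above.
def gpu_pinned_device_map(cuda_available: bool, current_device: int = 0) -> dict:
    """The empty-string key refers to the model's root module, so
    {"": dev} places every submodule on that one device."""
    if cuda_available:
        return {"": current_device}  # e.g. torch.cuda.current_device()
    return {"": "cpu"}               # CPU fallback for GPU-less sessions

print(gpu_pinned_device_map(True, 0))   # {'': 0}
print(gpu_pinned_device_map(False))     # {'': 'cpu'}
```

In the notebook itself the dict is built inline with `torch.cuda.current_device()`, which assumes a CUDA runtime is present (true on Kaggle/Colab GPU sessions).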