asdf98 committed · verified
Commit 89f71bf · Parent(s): 59f5bd8

Upload EthicalHacking_LFM2.5_Ultimate_Colab.ipynb

{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 🔐 Ultimate Ethical Hacking LLM – Liquid LFM2.5 (Colab Free Tier T4)\n",
    "\n",
    "**🥇 Model:** [Liquid LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct) via Unsloth 4-bit \n",
    "**🏆 Why this model?** 1.2B params, only **~1GB in 4-bit**, runs on phones. Massive T4 headroom for training. 128K context window. \n",
    "**📊 Datasets:** [Fenrir v2.1](https://huggingface.co/datasets/AlicanKiraz0/Cybersecurity-Dataset-Fenrir-v2.1) + [Trendyol Cybersecurity](https://huggingface.co/datasets/Trendyol/Trendyol-Cybersecurity-Instruction-Tuning-Dataset) — 153K+ instruction pairs \n",
    "**⚡ Framework:** Unsloth + TRL SFTTrainer — 2× faster, 70% less VRAM \n",
    "\n",
    "> ⚠️ **Disclaimer:** This trains on **defensive cybersecurity** datasets only. Intended for ethical hacking education and security research.\n",
    "\n",
    "---\n",
    "\n",
    "## 📋 Why LFM2.5 for T4?\n",
    "\n",
    "| Spec | Value |\n",
    "|------|-------|\n",
    "| Parameters | 1.2B |\n",
    "| 4-bit VRAM | ~1.0 GB |\n",
    "| Context | 128K tokens |\n",
    "| VRAM for training | **~14 GB free on T4** |\n",
    "| Batch size | **4-8** easily |\n",
    "| Max seq length | 4096-8192 |\n",
    "| Speed | **Very fast** on T4 |\n",
    "\n",
    "**Unsloth docs:** https://unsloth.ai/docs/models/tutorials/lfm2.5 \n",
    "**Official notebook:** https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Liquid_LFM2_(1.2B)-Conversational.ipynb"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1️⃣ Install Dependencies"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%capture\n",
    "!pip install -q unsloth trl datasets accelerate transformers bitsandbytes huggingface_hub"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2️⃣ (Optional) Log in to the Hugging Face Hub"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from huggingface_hub import login\n",
    "# login(token=\"hf_YOUR_TOKEN\")  # ← uncomment and paste your token"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3️⃣ Load LFM2.5-1.2B-Instruct in 4-bit via Unsloth\n",
    "\n",
    "Uses Unsloth's pre-converted 4-bit model. Only ~1GB in memory — leaves massive room for LoRA training."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from unsloth import FastLanguageModel\n",
    "import torch\n",
    "\n",
    "# ==================== T4-COLAB HYPERPARAMETERS (LFM2.5) ====================\n",
    "MAX_SEQ_LENGTH = 4096   # 1.2B model leaves huge VRAM headroom\n",
    "LORA_R = 128            # higher rank possible on LFM2.5 (tiny base)\n",
    "LORA_ALPHA = 128        # alpha = r\n",
    "BATCH_SIZE = 8          # large batch thanks to the small model\n",
    "GRAD_ACCUM = 1          # effective batch = 8\n",
    "LEARNING_RATE = 2e-4\n",
    "NUM_EPOCHS = 1          # informational; max_steps takes precedence\n",
    "MAX_STEPS = 4000        # cap steps for speed\n",
    "WARMUP_STEPS = 200\n",
    "LOGGING_STEPS = 50\n",
    "SAVE_STEPS = 500\n",
    "PACKING = True          # large throughput boost\n",
    "SAMPLE_SIZE = 50000     # subsample for fast convergence\n",
    "HUB_MODEL_ID = \"your-username/cyber-lfm25-lora\"\n",
    "# ===========================================================================\n",
    "\n",
    "model, tokenizer = FastLanguageModel.from_pretrained(\n",
    "    model_name=\"unsloth/LFM2.5-1.2B-Instruct\",\n",
    "    max_seq_length=MAX_SEQ_LENGTH,\n",
    "    dtype=None,  # auto-detect (fp16 on T4)\n",
    "    load_in_4bit=True,\n",
    ")\n",
    "\n",
    "model = FastLanguageModel.get_peft_model(\n",
    "    model,\n",
    "    r=LORA_R,\n",
    "    target_modules=[\"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\",\n",
    "                    \"gate_proj\", \"up_proj\", \"down_proj\"],\n",
    "    lora_alpha=LORA_ALPHA,\n",
    "    lora_dropout=0,\n",
    "    bias=\"none\",\n",
    "    use_gradient_checkpointing=\"unsloth\",\n",
    "    random_state=3407,\n",
    "    use_rslora=False,\n",
    "    loftq_config=None,\n",
    ")\n",
    "\n",
    "trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)\n",
    "total = sum(p.numel() for p in model.parameters())\n",
    "print(f\"✅ LFM2.5-1.2B loaded. Trainable params: {trainable:,} / {total:,} ({100*trainable/total:.2f}%)\")\n",
    "print(\"📊 Base model VRAM: ~1.0 GB (4-bit)\")\n",
    "print(\"🚀 Free VRAM for training: ~14 GB on a 16GB T4\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4️⃣ Load, Audit, Subsample & Merge Cybersecurity Datasets"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from datasets import load_dataset, concatenate_datasets\n",
    "import random\n",
    "\n",
    "# ---------- Dataset 1: Fenrir v2.1 ----------\n",
    "print(\"📥 Loading Fenrir v2.1...\")\n",
    "ds1 = load_dataset(\"AlicanKiraz0/Cybersecurity-Dataset-Fenrir-v2.1\", split=\"train\")\n",
    "print(f\"   Rows: {len(ds1)} | Columns: {ds1.column_names}\")\n",
    "\n",
    "for i in random.sample(range(len(ds1)), 2):\n",
    "    print(f\"\\n--- Sample {i} ---\")\n",
    "    print(f\"SYSTEM: {ds1[i]['system'][:120]}...\")\n",
    "    print(f\"USER: {ds1[i]['user'][:120]}...\")\n",
    "    print(f\"ASSIST: {ds1[i]['assistant'][:120]}...\")\n",
    "\n",
    "def to_messages(example):\n",
    "    return {\n",
    "        \"messages\": [\n",
    "            {\"role\": \"system\", \"content\": example[\"system\"]},\n",
    "            {\"role\": \"user\", \"content\": example[\"user\"]},\n",
    "            {\"role\": \"assistant\", \"content\": example[\"assistant\"]},\n",
    "        ]\n",
    "    }\n",
    "\n",
    "ds1 = ds1.map(to_messages, remove_columns=ds1.column_names, batched=False)\n",
    "\n",
    "# ---------- Dataset 2: Trendyol ----------\n",
    "print(\"\\n📥 Loading Trendyol Cybersecurity...\")\n",
    "ds2 = load_dataset(\"Trendyol/Trendyol-Cybersecurity-Instruction-Tuning-Dataset\", split=\"train\")\n",
    "print(f\"   Rows: {len(ds2)} | Columns: {ds2.column_names}\")\n",
    "\n",
    "# Same system/user/assistant schema as Fenrir, so the same converter applies\n",
    "ds2 = ds2.map(to_messages, remove_columns=ds2.column_names, batched=False)\n",
    "\n",
    "# ---------- Merge & Subsample ----------\n",
    "train_dataset = concatenate_datasets([ds1, ds2])\n",
    "print(f\"\\n📊 COMBINED DATASET: {len(train_dataset)} rows\")\n",
    "\n",
    "if len(train_dataset) > SAMPLE_SIZE:\n",
    "    train_dataset = train_dataset.shuffle(seed=3407).select(range(SAMPLE_SIZE))\n",
    "    print(f\"🚀 SUBSAMPLED to {len(train_dataset)} rows\")\n",
    "\n",
    "print(f\"   Effective batch size: {BATCH_SIZE * GRAD_ACCUM}\")\n",
    "print(f\"   Steps per epoch: ~{len(train_dataset) // (BATCH_SIZE * GRAD_ACCUM)}\")\n",
    "print(f\"   Capped to MAX_STEPS: {MAX_STEPS}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 5️⃣ Pre-process Dataset to Text (Avoid Unsloth formatting_func Issues)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# ========== PRE-PROCESS: messages → text with chat template ==========\n",
    "def convert_messages_to_text(examples):\n",
    "    texts = []\n",
    "    for msgs in examples[\"messages\"]:\n",
    "        text = tokenizer.apply_chat_template(\n",
    "            msgs,\n",
    "            tokenize=False,\n",
    "            add_generation_prompt=False,\n",
    "        )\n",
    "        texts.append(text)\n",
    "    return {\"text\": texts}\n",
    "\n",
    "print(\"🔄 Converting messages to text...\")\n",
    "train_dataset = train_dataset.map(\n",
    "    convert_messages_to_text,\n",
    "    batched=True,\n",
    "    remove_columns=[\"messages\"],\n",
    "    batch_size=100,\n",
    ")\n",
    "\n",
    "print(f\"✅ Dataset pre-processed. Columns: {train_dataset.column_names}\")\n",
    "print(f\"📄 Sample text length: {len(train_dataset[0]['text'])} chars\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 6️⃣ Configure SFT Trainer (with Packing)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from trl import SFTTrainer\n",
    "from transformers import TrainingArguments\n",
    "\n",
    "trainer = SFTTrainer(\n",
    "    model=model,\n",
    "    tokenizer=tokenizer,\n",
    "    train_dataset=train_dataset,\n",
    "    dataset_text_field=\"text\",\n",
    "    max_seq_length=MAX_SEQ_LENGTH,\n",
    "    dataset_num_proc=2,\n",
    "    packing=PACKING,\n",
    "    args=TrainingArguments(\n",
    "        per_device_train_batch_size=BATCH_SIZE,\n",
    "        gradient_accumulation_steps=GRAD_ACCUM,\n",
    "        warmup_steps=WARMUP_STEPS,\n",
    "        max_steps=MAX_STEPS,\n",
    "        learning_rate=LEARNING_RATE,\n",
    "        fp16=True,\n",
    "        logging_steps=LOGGING_STEPS,\n",
    "        optim=\"adamw_8bit\",\n",
    "        weight_decay=0.01,\n",
    "        lr_scheduler_type=\"linear\",\n",
    "        seed=3407,\n",
    "        output_dir=\"./outputs_lfm25\",\n",
    "        save_strategy=\"steps\",\n",
    "        save_steps=SAVE_STEPS,\n",
    "        save_total_limit=2,\n",
    "        report_to=\"none\",\n",
    "    ),\n",
    ")\n",
    "\n",
    "print(f\"✅ Trainer ready. Total steps: {MAX_STEPS}\")\n",
    "print(f\"   Effective batch size: {BATCH_SIZE * GRAD_ACCUM}\")\n",
    "print(f\"   Packing enabled: {PACKING}\")\n",
    "print(f\"   Est. time at ~0.6 it/s (~1.7 s/step): ~{MAX_STEPS * 1.7 / 3600:.1f} hours\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 7️⃣ Train 🚀"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "if torch.cuda.is_available():\n",
    "    print(f\"VRAM before train: {torch.cuda.memory_allocated()/1e9:.2f} GB / {torch.cuda.get_device_properties(0).total_memory/1e9:.2f} GB\")\n",
    "\n",
    "trainer_stats = trainer.train()\n",
    "\n",
    "print(\"\\n🎉 Training complete!\")\n",
    "print(trainer_stats)\n",
    "\n",
    "if torch.cuda.is_available():\n",
    "    print(f\"VRAM after train: {torch.cuda.memory_allocated()/1e9:.2f} GB\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 8️⃣ Save & Push to the Hugging Face Hub"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 8A) Save LoRA adapter\n",
    "model.save_pretrained(\"./lfm25-lora-adapter\")\n",
    "tokenizer.save_pretrained(\"./lfm25-lora-adapter\")\n",
    "print(\"✅ LoRA adapter saved\")\n",
    "\n",
    "# 8B) Merge & save full model\n",
    "print(\"\\n🔄 Merging LoRA into base model...\")\n",
    "merged_model = model.merge_and_unload()\n",
    "merged_model.save_pretrained(\"./lfm25-merged\")\n",
    "tokenizer.save_pretrained(\"./lfm25-merged\")\n",
    "print(\"✅ Merged model saved\")\n",
    "\n",
    "# 8C) Push to the Hub (uncomment if logged in)\n",
    "# model.push_to_hub(HUB_MODEL_ID)\n",
    "# tokenizer.push_to_hub(HUB_MODEL_ID)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 9️⃣ Inference Demo – Responsible Pentesting"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "FastLanguageModel.for_inference(model)\n",
    "\n",
    "test_prompt = \"How would you perform a responsible penetration test on a web application?\"\n",
    "\n",
    "messages = [\n",
    "    {\"role\": \"system\", \"content\": \"You are a cybersecurity expert. Explain concepts clearly and ethically.\"},\n",
    "    {\"role\": \"user\", \"content\": test_prompt},\n",
    "]\n",
    "\n",
    "inputs = tokenizer.apply_chat_template(\n",
    "    messages,\n",
    "    tokenize=True,\n",
    "    add_generation_prompt=True,\n",
    "    return_tensors=\"pt\",\n",
    ").to(model.device)\n",
    "\n",
    "outputs = model.generate(\n",
    "    input_ids=inputs,\n",
    "    max_new_tokens=512,\n",
    "    temperature=0.7,\n",
    "    top_p=0.9,\n",
    "    do_sample=True,\n",
    "    pad_token_id=tokenizer.pad_token_id,\n",
    "    eos_token_id=tokenizer.eos_token_id,\n",
    ")\n",
    "\n",
    "# Decode only the newly generated tokens, skipping the prompt\n",
    "reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True).strip()\n",
    "print(reply[:800])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 🔟 Quick Benchmark – CyberMetric Sample"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "benchmark_q = (\n",
    "    \"Which of the following is the MOST effective defense against SQL injection?\\n\"\n",
    "    \"A) Input validation only\\n\"\n",
    "    \"B) Parameterized queries\\n\"\n",
    "    \"C) Escaping special characters\\n\"\n",
    "    \"D) Client-side filtering\\n\"\n",
    "    \"Answer with the letter only.\"\n",
    ")\n",
    "\n",
    "bench_msgs = [\n",
    "    {\"role\": \"system\", \"content\": \"You are a cybersecurity expert. Answer accurately.\"},\n",
    "    {\"role\": \"user\", \"content\": benchmark_q},\n",
    "]\n",
    "\n",
    "inputs = tokenizer.apply_chat_template(bench_msgs, tokenize=True, add_generation_prompt=True, return_tensors=\"pt\").to(model.device)\n",
    "\n",
    "outputs = model.generate(input_ids=inputs, max_new_tokens=64, temperature=0.1, do_sample=True,\n",
    "                         pad_token_id=tokenizer.pad_token_id, eos_token_id=tokenizer.eos_token_id)\n",
    "\n",
    "answer = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True).strip()\n",
    "print(\"📊 Benchmark Answer:\")\n",
    "print(answer)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "## 📚 References\n",
    "\n",
    "| Resource | Link |\n",
    "|----------|------|\n",
    "| **Liquid AI Models** | https://www.liquid.ai/models |\n",
    "| **LFM2.5-1.2B-Instruct** | https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct |\n",
    "| **Unsloth LFM2.5 Docs** | https://unsloth.ai/docs/models/tutorials/lfm2.5 |\n",
    "| **Official Colab** | https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Liquid_LFM2_(1.2B)-Conversational.ipynb |\n",
    "| **Fenrir Dataset** | https://huggingface.co/datasets/AlicanKiraz0/Cybersecurity-Dataset-Fenrir-v2.1 |\n",
    "| **Trendyol Dataset** | https://huggingface.co/datasets/Trendyol/Trendyol-Cybersecurity-Instruction-Tuning-Dataset |\n",
    "\n",
    "---\n",
    "*Built with ❤️ for the cybersecurity community. Use responsibly.*"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "name": "python",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
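
The pre-processing cell relies on `tokenizer.apply_chat_template(..., tokenize=False)` to turn each `messages` list into one training string. A minimal stand-in for that step, assuming a generic ChatML-style template (the real LFM2.5 template is defined by its tokenizer config and will differ), is:

```python
# Hypothetical ChatML-style renderer; only illustrates the messages -> text step.
def render_chat(messages):
    parts = []
    for m in messages:
        # Each turn becomes "<|im_start|>{role}\n{content}<|im_end|>\n"
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    return "".join(parts)

example = [
    {"role": "system", "content": "You are a cybersecurity expert."},
    {"role": "user", "content": "What is SQL injection?"},
    {"role": "assistant", "content": "An attack that injects SQL via untrusted input."},
]
text = render_chat(example)
print(text)
```

The point is only the shape of the transform: one flat string per conversation, with role markers, which is exactly what SFTTrainer's `dataset_text_field="text"` then consumes.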
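
The subsample-and-cap arithmetic in cells 4 and 6 can be checked in isolation. The combined row count below is a rough figure from the notebook header, not an exact dataset size, and packing would further reduce the real step count:

```python
# Step arithmetic from the notebook's hyperparameters (approximate sizes).
SAMPLE_SIZE = 50_000
BATCH_SIZE = 8
GRAD_ACCUM = 1
MAX_STEPS = 4_000

combined_rows = 153_000                   # rough "153K+" figure from the header
rows = min(combined_rows, SAMPLE_SIZE)    # shuffle().select() keeps SAMPLE_SIZE rows
effective_batch = BATCH_SIZE * GRAD_ACCUM
steps_per_epoch = rows // effective_batch
actual_steps = min(steps_per_epoch, MAX_STEPS)  # max_steps caps training
print(rows, effective_batch, steps_per_epoch, actual_steps)  # 50000 8 6250 4000
```

So one epoch over the 50K subsample would be 6,250 optimizer steps, and the `MAX_STEPS = 4000` cap means training stops about two-thirds of the way through it.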
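
The inference cells decode `outputs[0][inputs.shape[-1]:]` rather than the whole sequence because `generate()` returns the prompt ids followed by the new ids. With toy integers standing in for real token ids, the slicing works like this:

```python
# Toy ids illustrating why the reply is sliced off after the prompt length.
prompt_ids = [101, 7592, 102]            # stands in for `inputs` (one row, length 3)
generated = prompt_ids + [2054, 2003]    # stands in for `outputs[0]`: prompt + new tokens
reply_ids = generated[len(prompt_ids):]  # same idea as outputs[0][inputs.shape[-1]:]
print(reply_ids)  # [2054, 2003]
```

Decoding only `reply_ids` avoids the fragile alternative of splitting the decoded string on role names, which breaks whenever the reply itself contains the word "user" or "assistant".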
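
The benchmark question's correct answer (B, parameterized queries) can be demonstrated concretely with the standard-library `sqlite3` module; the table and payload here are made up for illustration:

```python
import sqlite3

# In-memory database with one hypothetical user row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "alice' OR '1'='1"
# The ? placeholder binds the whole string as data, so the payload is inert:
rows = conn.execute("SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()
print(rows)  # [] -- no user literally named "alice' OR '1'='1"
```

Had the query been built by string concatenation, the `OR '1'='1'` clause would have matched every row; with a bound parameter the attacker-controlled string never reaches the SQL parser as syntax.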