Jashan887 committed on
Commit
4e054dc
·
verified ·
1 Parent(s): 8d13cb8

Upload folder using huggingface_hub

Browse files
Files changed (4)
  1. .gitattributes +2 -0
  2. BaronLLM.png +3 -0
  3. README.md +129 -0
  4. baronllm-llama3.1-v1-q6_k.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ baronllm-llama3.1-v1-q6_k.gguf filter=lfs diff=lfs merge=lfs -text
+ BaronLLM.png filter=lfs diff=lfs merge=lfs -text
BaronLLM.png ADDED

Git LFS Details

  • SHA256: bb9cee1c305513d487d9fb754d15b7df6619394218c6b9b2352cd7a0d198bbd6
  • Pointer size: 132 Bytes
  • Size of remote file: 2.4 MB
README.md ADDED
@@ -0,0 +1,129 @@
---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model:
- AlicanKiraz0/BaronLLM-llama3.1-v1
- meta-llama/Llama-3.1-8B-Instruct
license: mit
language:
- en
pipeline_tag: text-generation
---

<img src="https://huggingface.co/AlicanKiraz0/SenecaLLM-x-QwQ-32B-Q4_Medium-Version/resolve/main/BaronLLM.png" width="700" />

Finetuned by Alican Kiraz

[![Linkedin](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://tr.linkedin.com/in/alican-kiraz)
![X (formerly Twitter) URL](https://img.shields.io/twitter/url?url=https%3A%2F%2Fx.com%2FAlicanKiraz0)
![YouTube Channel Subscribers](https://img.shields.io/youtube/channel/subscribers/UCEAiUT9FMFemDtcKo9G9nUQ)

Links:
- Medium: https://alican-kiraz1.medium.com/
- LinkedIn: https://tr.linkedin.com/in/alican-kiraz
- X: https://x.com/AlicanKiraz0
- YouTube: https://youtube.com/@alicankiraz0

> **BaronLLM** is a large language model fine-tuned for *offensive cybersecurity research & adversarial simulation*.
> It provides structured guidance, exploit reasoning, and red-team scenario generation while enforcing safety constraints to prevent disallowed content.

---

## Run Private GGUFs from the Hugging Face Hub

You can run private GGUFs from your personal account or from an associated organisation account in two simple steps:

1. Copy your Ollama SSH public key: `cat ~/.ollama/id_ed25519.pub | pbcopy`
2. Add that key to your Hugging Face account: open your account settings and click "Add new SSH key."

That's it! You can now run private GGUFs from the Hugging Face Hub: `ollama run hf.co/{username}/{repository}`.
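The steps above can be condensed into a small shell sketch. The `hf_ref` helper and the Linux clipboard alternatives are illustrative additions, not part of Ollama or this card:

```shell
# Helper: build the Ollama model reference for a Hub repo.
hf_ref() { printf 'hf.co/%s/%s' "$1" "$2"; }

# Step 1: copy the Ollama SSH public key to the clipboard
# (pbcopy is macOS; xclip or wl-copy are the usual Linux stand-ins):
#   cat ~/.ollama/id_ed25519.pub | pbcopy

# Step 2 happens in the browser: Hugging Face account settings -> "Add new SSH key".

# Step 3: pull and run the private GGUF straight from the Hub:
#   ollama run "$(hf_ref username repository)"
echo "$(hf_ref username repository)"   # prints hf.co/username/repository
```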

---

## ✨ Key Features

| Capability | Details |
|------------|---------|
| **Adversary Simulation** | Generates full ATT&CK chains, C2 playbooks, and social-engineering scenarios. |
| **Exploit Reasoning** | Performs step-by-step vulnerability analysis (e.g., SQLi, XXE, deserialization) with code-level explanations, including generation of working PoC code. |
| **Payload Refactoring** | Suggests obfuscated or multi-stage payload logic **without** disclosing raw malicious binaries. |
| **Log & Artifact Triage** | Classifies and summarizes attack traces from SIEM, PCAP, or EDR JSON. |

---

## 🚀 Quick Start

```bash
pip install "transformers>=4.42" accelerate bitsandbytes
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id from the base_model entry in the front matter above.
model_id = "AlicanKiraz0/BaronLLM-llama3.1-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

def generate(prompt, **kwargs):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=512, **kwargs)
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(generate("Assess the exploitability of CVE-2024-45721 in a Kubernetes cluster"))
```

### Inference API
```python
from huggingface_hub import InferenceClient

# Repo id from the base_model entry in the front matter above.
model_id = "AlicanKiraz0/BaronLLM-llama3.1-v1"
ic = InferenceClient(model_id)
ic.text_generation("Generate a red-team plan targeting an outdated Fortinet appliance")
```

---

## 🏗️ Model Details

| | |
|---|---|
| **Base** | Llama-3.1-8B-Instruct |
| **Seq Len** | 8,192 tokens |
| **Quantization** | 6-bit (Q6_K) |
| **Languages** | EN |
98
+
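For hardware sizing, a rough back-of-the-envelope based on the numbers in this commit: the ~8.0 B parameter count is assumed from the Llama-3.1-8B base, and the file size is the `size` field of the GGUF's LFS pointer in this same upload.

```python
# Rough sizing: fp16 weights vs. the Q6_K GGUF shipped in this commit.
PARAMS = 8.0e9                 # approx. parameter count of Llama-3.1-8B (assumption)
GGUF_BYTES = 6_596_011_008     # "size" field of the GGUF's LFS pointer

fp16_gib = PARAMS * 2 / 2**30  # 2 bytes per weight at fp16
gguf_gib = GGUF_BYTES / 2**30
bits_per_weight = GGUF_BYTES * 8 / PARAMS

print(f"fp16 weights : {fp16_gib:.1f} GiB")    # ~14.9 GiB
print(f"Q6_K file    : {gguf_gib:.1f} GiB")    # ~6.1 GiB
print(f"~bits/weight : {bits_per_weight:.2f}") # ~6.60
```

So the quantized checkpoint needs roughly 40% of the memory of full fp16 weights, before KV-cache and runtime overhead.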
### Training Data Sources *(curated)*
* Public vulnerability databases (NVD/CVE, VulnDB).
* Exploit write-ups from trusted researchers (Project Zero, PortSwigger, NCC Group).
* Red-team reports (with permission & redactions).
* Synthetic ATT&CK chains, auto-generated and human-vetted.

> **Note:** No copyrighted exploit code or proprietary malware datasets were used.
> Dataset filtering removed raw shellcode/binary payloads.

### Safety & Alignment
* **Policy-gradient RLHF** with security-domain SMEs.
* **OpenAI/Anthropic-style policy** prohibiting direct malware source code, ransomware builders, or instructions facilitating illicit activity.
* **Continuous red-teaming** via SecEval v0.3.

---

## 📚 Prompting Guidelines

| Goal | Template |
|------|----------|
| **Exploit Walkthrough** | "**ROLE:** Senior Pentester<br>**OBJECTIVE:** Analyse CVE-2023-XXXXX step by step …" |
| **Red-Team Exercise** | "Plan an ATT&CK chain (Initial Access → Exfiltration) for an on-prem AD env …" |
| **Log Triage** | "Given the following Zeek logs, identify C2 traffic patterns …" |

Use `temperature=0.3`, `top_p=0.9` for deterministic reasoning; raise both for brainstorming.
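Those settings can be kept as reusable keyword-argument presets for `model.generate`. A minimal sketch; the `brainstorm` values are illustrative, not from this card:

```python
# Decoding presets per the guideline above: low temperature for
# deterministic reasoning, higher values for brainstorming.
PRESETS = {
    "reasoning":  dict(do_sample=True, temperature=0.3, top_p=0.9),
    "brainstorm": dict(do_sample=True, temperature=0.8, top_p=0.95),  # illustrative values
}

def decoding_kwargs(mode: str) -> dict:
    # Return a copy so callers can tweak without mutating the preset.
    return dict(PRESETS[mode])

print(decoding_kwargs("reasoning"))
# {'do_sample': True, 'temperature': 0.3, 'top_p': 0.9}
```

These would be passed straight through, e.g. `generate(prompt, **decoding_kwargs("reasoning"))` with the `generate` helper from the Quick Start.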

**This project does not pursue any profit.**

"Those who shed light on others do not remain in darkness..."
baronllm-llama3.1-v1-q6_k.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a83740b3417dcc247e31437fb3982a0424221d7656b9a5d5fb74276cc0729d38
+ size 6596011008
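Once the file is pulled through Git LFS, the download can be checked against the `oid` above. A sketch, assuming a POSIX shell with `sha256sum` available:

```shell
# verify_gguf FILE EXPECTED_SHA256 -> returns 0 when the checksum matches.
verify_gguf() {
  actual=$(sha256sum "$1" | awk '{print $1}')
  [ "$actual" = "$2" ]
}

# For this commit's artifact:
# verify_gguf baronllm-llama3.1-v1-q6_k.gguf \
#   a83740b3417dcc247e31437fb3982a0424221d7656b9a5d5fb74276cc0729d38 \
#   && echo "checksum OK"
```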