robzilla committed on
Commit 3374ef6 · 1 Parent(s): a251f14

Release BibleAI: HF + GGUF + Ollama bundle

Files changed (2)
  1. README.md +27 -0
  2. docs/PUBLISHING.md +4 -0
README.md CHANGED
@@ -7,6 +7,11 @@ tags:
  - gemma
  - gguf
  - ollama
+ - cpt
+ - sft
+ - dpo
+ language:
+ - en
  library_name: transformers
  pipeline_tag: text-generation
  ---
@@ -19,6 +24,17 @@ BibleAI is a Gemma 4 E4B model tuned with a 3-stage pipeline:
  3) DPO (preference optimization),
  for Bible/theology/church-history/faith assistance.

+ ## Canonical Names
+ - Hugging Face repository ID: `rhemabible/BibleAI`
+ - Model display name: `BibleAI`
+ - Ollama model names (recommended):
+   - `bibleaiq8` (Q8 GGUF)
+   - `bibleaibf16` (BF16 GGUF)
+ - Preferred local artifact names:
+   - `model.safetensors`
+   - `gguf/final_merged.Q8_0.gguf`
+   - `gguf/final_merged.BF16.gguf`
+
  ## Training Pipeline (CPT -> SFT -> DPO)

  ### Stage 1: CPT base (input to this release)
@@ -110,11 +126,22 @@ Note: `tokenizer_config.json` files in this release do not embed a `chat_templat
  - Full checksum manifest: `checksums/sha256.txt`

  ## Ollama
+ - Use from repo root after cloning this Hugging Face model.
+ - These Modelfiles use a local absolute GGUF path. If your local path is different, update the `FROM ...gguf...` line first.
  - Q8:
    `ollama create bibleaiq8 -f ollama/Modelfile.q8`
  - BF16:
    `ollama create bibleaibf16 -f ollama/Modelfile.bf16`

+ ## Transformers (HF)
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ repo_id = "rhemabible/BibleAI"
+ tokenizer = AutoTokenizer.from_pretrained(repo_id)
+ model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto", device_map="auto")
+ ```
+
  ## Included Artifacts
  - root model files (`config.json`, `model.safetensors`, `tokenizer.json`, `tokenizer_config.json`)
  - `gguf/` quantized GGUF exports
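The README points consumers at a full checksum manifest (`checksums/sha256.txt`). A short script can verify it locally after download; this is a minimal sketch, assuming the manifest uses the standard `sha256sum` format of one `<hex-digest>  <relative-path>` entry per line (the actual layout of the manifest in this release may differ):

```python
import hashlib
from pathlib import Path

def verify_manifest(manifest_path: str, root: str = ".") -> dict:
    """Check each `<sha256>  <path>` manifest line against the file on disk.

    Returns a mapping of relative path -> bool (True if the digest matches).
    """
    results = {}
    for line in Path(manifest_path).read_text().splitlines():
        line = line.strip()
        if not line:
            continue
        digest, _, rel_path = line.partition("  ")
        rel_path = rel_path.lstrip("*")  # sha256sum may prefix binary-mode paths with '*'
        h = hashlib.sha256()
        with open(Path(root) / rel_path, "rb") as f:
            # Hash in 1 MiB chunks so multi-GB safetensors/GGUF files fit in memory.
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        results[rel_path] = h.hexdigest() == digest
    return results
```

Any `False` entry in the result signals a corrupted or partially downloaded artifact.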
docs/PUBLISHING.md CHANGED
@@ -47,3 +47,7 @@ This pushes:
  - GGUF (`gguf/` BF16 + Q8_0)
  - Ollama Modelfiles (`ollama/`)
  - adapters, checksums, logs, docs
+
+ Canonical names:
+ - HF repo: `rhemabible/BibleAI`
+ - Ollama models: `bibleaiq8` and `bibleaibf16`
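The commit notes that the shipped Modelfiles bake in a local absolute GGUF path that users must edit before running `ollama create`. That edit can be scripted; this is a sketch assuming a plain-text Modelfile whose `FROM` directive names the GGUF file, and `fix_modelfile_from` is an illustrative helper name, not part of this release:

```python
from pathlib import Path

def fix_modelfile_from(modelfile: str, gguf_path: str) -> None:
    """Rewrite the FROM directive in an Ollama Modelfile to point at a local GGUF file."""
    out = []
    for line in Path(modelfile).read_text().splitlines():
        if line.strip().upper().startswith("FROM "):
            # Replace the baked-in absolute path with the caller's local path.
            out.append(f"FROM {gguf_path}")
        else:
            out.append(line)  # keep PARAMETER/TEMPLATE/etc. lines untouched
    Path(modelfile).write_text("\n".join(out) + "\n")
```

After rewriting, `ollama create bibleaiq8 -f ollama/Modelfile.q8` should pick up the corrected path.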