# VEDA-8B-v1-COGNITIVE

Fine-tuned to think. Not just predict.

By Vibhansh Bamnia · 18 · Independent AI Developer · Rajasthan, India
## Why Veda-8B?

Most 8B models pattern-match. Veda-8B reasons.

LoRA fine-tuned on open-source reasoning datasets, built specifically to handle the cases where standard Llama-3-8B fails.
| Capability | Standard Llama-3-8B | Veda-8B-v1 |
|---|---|---|
| Relational Chain Reasoning | ⚠️ Often fails | ✅ Reliable |
| Multi-hop Logic | ⚠️ Inconsistent | ✅ Strong |
| Code Hallucination | ❌ Common | ✅ Reduced |
| Chain-of-Thought | ⚠️ Partial | ✅ Complete |
## See It In Action

### Relational Reasoning

Prompt: A is B's father. C is A's sister. D is C's son. What is D's relation to B?

Veda-8B:
A is B's father → C is B's aunt → D is C's son
∴ D is B's cousin.
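The relational chain can be sanity-checked with a tiny lookup sketch; the relation dicts below are illustrative encodings of the prompt, not anything the model produces:

```python
# Encode the prompt's facts as parent/sibling lookups (names are placeholders).
father = {"B": "A"}     # A is B's father
sibling = {"A": "C"}    # C is A's sister
parent = {"D": "C"}     # D is C's son, so C is D's parent

# B's aunt is the sibling of B's father; a child of that aunt is B's cousin.
aunt_of_B = sibling[father["B"]]
d_is_cousin_of_B = parent["D"] == aunt_of_B
print(d_is_cousin_of_B)  # True
```

Walking the same two hops the model walks (father → sibling → child) lands on the cousin relation.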
### Logic Syllogism

Prompt: All Bloops are Razzies. All Razzies are Lazzies. Are all Bloops Lazzies?

Veda-8B:
Bloops → Razzies → Lazzies
∴ Yes, by transitivity, all Bloops are Lazzies.
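The syllogism reduces to set containment, where transitivity of the subset relation does the work. A minimal sketch, with made-up members standing in for the prompt's categories:

```python
# Model each category as a set; membership is hypothetical example data.
bloops = {"b1", "b2"}
razzies = bloops | {"r1"}    # all Bloops are Razzies
lazzies = razzies | {"l1"}   # all Razzies are Lazzies

# Subset is transitive: Bloops <= Razzies and Razzies <= Lazzies
# together imply Bloops <= Lazzies.
print(bloops <= razzies)  # True
print(razzies <= lazzies) # True
print(bloops <= lazzies)  # True: all Bloops are Lazzies
```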
### Code (No Hallucination)

Prompt: Second largest unique number without sort()

Veda-8B:

```python
def second_largest(nums):
    # Collapse duplicates so repeated values count once.
    unique = set(nums)
    if len(unique) < 2:
        return None  # fewer than two distinct values
    first = second = float('-inf')
    for n in unique:
        if n > first:
            second, first = first, n
        elif n > second:
            second = n
    return second
```
## Quantization

| Version | Size | RAM | Use |
|---|---|---|---|
| Q4_K_M | 4.92 GB | 8 GB | Local dev, edge |
| Q8_0 | 8.54 GB | 16 GB | Production |
| F16 | 16 GB+ | 32 GB | Research |
## Quick Start

```python
import llama_cpp

llm = llama_cpp.Llama(
    model_path="./Veda-8B-v1-Q4_K_M.gguf",
    n_ctx=8192,
    n_threads=4,
)
response = llm(
    "A is taller than B. B is taller than C. Is A taller than C?",
    max_tokens=512,
    temperature=0.1,
)
print(response["choices"][0]["text"])
```
## Limitations

- Inherits the limitations of the Llama-3-8B base model
- Not suited to real-time or factual-lookup queries
- Verify math-heavy outputs before use
- Multilingual performance is untested
No institution. No GPU cluster. No team. Just a focused fine-tune built to make small models reason correctly.

Veda AI Labs · HuggingFace
Model: vibhansh/Veda-8B-v1-Cognitive
Base model: meta-llama/Meta-Llama-3-8B-Instruct