πŸ’  VEDA-8B-v1-COGNITIVE

Fine-tuned to think. Not just predict.

By Vibhansh Bamnia Β· 18 Β· Independent AI Developer Β· Rajasthan, India

πŸš€ Try Demo Β· πŸ“₯ Download Model


⚑ Why Veda-8B?

Most 8B models pattern-match. Veda-8B reasons.

LoRA fine-tuned on open-source reasoning datasets β€” built specifically to handle the reasoning tasks where standard Llama-3-8B fails.

| Capability | Standard Llama-3-8B | Veda-8B-v1 |
| --- | --- | --- |
| Relational chain reasoning | ⚠️ Often fails | βœ… Reliable |
| Multi-hop logic | ⚠️ Inconsistent | βœ… Strong |
| Code hallucination | ❌ Common | βœ… Reduced |
| Chain-of-thought | ⚠️ Partial | βœ… Complete |

🧠 See It In Action

πŸ”— Relational Reasoning

Prompt: A is B's father. C is A's sister. D is C's son. What is D's relation to B?

Veda-8B:

A is B's father β†’ C (A's sister) is B's aunt β†’ D is C's son
∴ D is B's cousin.
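The chain above can be checked mechanically. This is a toy sketch (mine, not part of the model) that composes the prompt's relations with two hand-written kinship rules:

```python
# Toy sketch: compose relations step by step using hand-written rules.
compose = {
    ("sister", "father"): "aunt",   # X's father's sister is X's aunt
    ("son", "aunt"): "cousin",      # X's aunt's son is X's cousin
}

# C is A's sister and A is B's father  =>  C is B's aunt
c_to_b = compose[("sister", "father")]
# D is C's son and C is B's aunt      =>  D is B's cousin
d_to_b = compose[("son", c_to_b)]
print(d_to_b)  # cousin
```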
πŸ”£ Logic Syllogism

Prompt: All Bloops are Razzies. All Razzies are Lazzies. Are all Bloops Lazzies?

Veda-8B:

Bloops βŠ† Razzies βŠ† Lazzies
∴ Yes β€” by transitivity, all Bloops are Lazzies.
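In set terms, the syllogism is just subset transitivity. A minimal Python check over made-up toy sets (my own illustration, not model output):

```python
# If Bloops βŠ† Razzies and Razzies βŠ† Lazzies, then Bloops βŠ† Lazzies.
bloops = {"b1", "b2"}
razzies = bloops | {"r1"}        # every Bloop is a Razzie
lazzies = razzies | {"l1"}       # every Razzie is a Lazzie

print(bloops <= razzies <= lazzies)  # True: the chained subset test
print(bloops <= lazzies)             # True: the conclusion
```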
πŸ’» Code (No Hallucination)

Prompt: Find the second largest unique number in a list without using sort()

Veda-8B:

def second_largest(nums):
    unique = set(nums)
    if len(unique) < 2:
        return None
    first = second = float('-inf')
    for n in unique:
        if n > first:
            second, first = first, n
        elif n > second:
            second = n
    return second

πŸ’Ž Quantization

| Version | Size | RAM | Use case |
| --- | --- | --- | --- |
| πŸ₯‡ Q4_K_M | 4.92 GB | 8 GB | Local dev, edge |
| πŸ₯ˆ Q8_0 | 8.54 GB | 16 GB | Production |
| πŸ”¬ F16 | 16 GB+ | 32 GB | Research |
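As a rough sanity check on these sizes: file size scales with parameter count times effective bits per weight. A back-of-envelope estimate (my own sketch; the bits-per-weight figures are approximations, not official numbers):

```python
def approx_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough file size: params * bits / 8 bytes, in decimal GB."""
    return n_params * bits_per_weight / 8 / 1e9

# 8B parameters at approximate effective bit-widths:
for name, bits in [("Q4_K_M", 4.9), ("Q8_0", 8.5), ("F16", 16.0)]:
    print(f"{name}: ~{approx_size_gb(8e9, bits):.1f} GB")
```

The estimates land close to the table above (about 4.9 GB, 8.5 GB, and 16 GB respectively); the small differences come from metadata and mixed-precision layers.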

πŸ’» Quick Start

import llama_cpp

# Load the 4-bit GGUF build; set n_threads to your CPU core count.
llm = llama_cpp.Llama(model_path="./Veda-8B-v1-Q4_K_M.gguf", n_ctx=8192, n_threads=4)

# Low temperature keeps the reasoning chain deterministic.
response = llm("A is taller than B. B is taller than C. Is A taller than C?",
               max_tokens=512, temperature=0.1)
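llama-cpp-python returns an OpenAI-style completion dict; the snippet below shows how to pull out the generated text. The response contents here are illustrative placeholders, not actual model output:

```python
# Shape mirrors llama-cpp-python's completion dict (values are made up).
response = {
    "choices": [
        {"text": " Yes. A > B and B > C, so A > C by transitivity.",
         "finish_reason": "stop"}
    ]
}

answer = response["choices"][0]["text"].strip()
print(answer)
```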

⚠️ Limitations

  • Inherits Llama-3-8B base limitations
  • Not for real-time / factual queries
  • Verify math-heavy outputs
  • Multilingual not tested

No institution. No GPU cluster. No team. Just a focused fine-tune built to make small models reason correctly.

Veda AI Labs Β· HuggingFace

Downloads last month: 1,413 Β· Format: GGUF Β· Model size: 8B params Β· Architecture: llama