Tags: Text Generation · Transformers · Safetensors · GGUF · English · qwen2 · quantum-ml · hybrid-quantum-classical · quantum-kernel · research · quantum-computing · nisq · qiskit · quantum-circuits · vibe-thinker · physics-inspired-ml · quantum-enhanced · hybrid-ai · 1.5b · small-model · efficient-ai · reasoning · chemistry · physics · text-generation-inference · conversational
Update README.md
README.md CHANGED

````diff
@@ -1,6 +1,6 @@
 # Chronos o1 1.5B - Quantum-Classical model
 
-
+
 
 **A hybrid quantum-classical model combining VibeThinker-1.5B with quantum kernel methods**
 
@@ -75,15 +75,6 @@ Sentiment Output (Positive/Negative/Neutral)
 pip install torch transformers numpy scikit-learn
 ```
 
-### GGUF Models (llama.cpp)
-
-For CPU inference with llama.cpp:
-
-- `chronos-o1-1.5b-f16.gguf` - Full precision (3.0GB)
-- `chronos-o1-1.5b-q8_0.gguf` - 8-bit quantization (1.6GB)
-- `chronos-o1-1.5b-q4_k_m.gguf` - 4-bit quantization (900MB)
-- `chronos-o1-1.5b-q3_k_m.gguf` - 3-bit quantization (700MB)
-
 ## Usage
 
 ### Python Inference
````
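The README's tagline pairs the language model with "quantum kernel methods," and its install line pulls in numpy and scikit-learn. As a hedged illustration of that general technique only (a classical state-vector simulation of a fidelity kernel; the `encode` feature map, `quantum_kernel` helper, and toy data below are all illustrative assumptions, not this repository's actual pipeline):

```python
import numpy as np
from sklearn.svm import SVC

def encode(x):
    # Angle-encode each feature into one qubit: |phi(xi)> = [cos(xi/2), sin(xi/2)].
    # The full state is the tensor (Kronecker) product over all features.
    state = np.array([1.0])
    for xi in x:
        state = np.kron(state, np.array([np.cos(xi / 2.0), np.sin(xi / 2.0)]))
    return state

def quantum_kernel(A, B):
    # Fidelity kernel k(x, x') = |<phi(x)|phi(x')>|^2, simulated classically.
    SA = np.array([encode(a) for a in A])
    SB = np.array([encode(b) for b in B])
    return (SA @ SB.T) ** 2

# Toy binary classification on 2-D points (stand-in for real labeled data).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, np.pi, size=(40, 2))
y = (np.sin(X[:, 0]) * np.sin(X[:, 1]) > 0.5).astype(int)

# SVC accepts a precomputed Gram matrix, so any kernel source plugs in here.
clf = SVC(kernel="precomputed")
clf.fit(quantum_kernel(X, X), y)
preds = clf.predict(quantum_kernel(X, X))
print("training accuracy:", (preds == y).mean())
```

With this single-qubit-per-feature encoding the kernel reduces to a product of cosines of feature differences, so it behaves like a smooth similarity measure; a hardware or Qiskit-backed kernel would replace `quantum_kernel` with circuit fidelity estimates while the SVC side stays unchanged.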