---
tags:
- quantum-ml
- hybrid-quantum-classical
- ibm-quantum
- heron-r2
- ibm_fez
- quantum-kernel
- merged-lora
license: mit
language:
- en
base_model:
- WeiboAI/VibeThinker-1.5B
---
# Chronos 1.5B - Quantum-Classical Model

**A hybrid quantum-classical model combining VibeThinker-1.5B with quantum kernel methods**
[MIT License](https://opensource.org/licenses/MIT)
[Python](https://www.python.org/downloads/)
[Transformers](https://github.com/huggingface/transformers)
## Overview
**Chronos 1.5B** is an experimental quantum-enhanced language model that combines:
- **VibeThinker-1.5B** as the base transformer model for embedding extraction
- **Quantum Kernel Methods** for similarity computation
- **125-qubit quantum circuits** for enhanced feature space representation

This model demonstrates a proof-of-concept for hybrid quantum-classical machine learning applied to sentiment analysis.
## Architecture
```
Input Text
|
v
VibeThinker-1.5B (1536D embeddings)
|
v
L2 Normalization
|
v
Quantum Kernel
|
v
Weighted Classification
|
v
Sentiment Output (Positive/Negative/Neutral)
```
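The classical front half of the pipeline above (mean pooling followed by L2 normalization) can be sketched in a few lines of NumPy. The random array here is a stand-in for VibeThinker's `(seq_len, 1536)` last hidden state, not real model output:

```python
import numpy as np

def pool_and_normalize(hidden_states: np.ndarray) -> np.ndarray:
    """Mean-pool token-level hidden states of shape (seq_len, dim)
    into a single unit-length embedding, as in the diagram above."""
    embedding = hidden_states.mean(axis=0)  # (dim,)
    norm = np.linalg.norm(embedding)
    return embedding / norm if norm > 0 else embedding

# Stand-in for a (seq_len, 1536) hidden-state matrix
hidden = np.random.default_rng(0).normal(size=(12, 1536))
vec = pool_and_normalize(hidden)
print(vec.shape, round(float(np.linalg.norm(vec)), 6))  # (1536,) 1.0
```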
## Model Details
- **Base Model**: [WeiboAI/VibeThinker-1.5B](https://huggingface.co/WeiboAI/VibeThinker-1.5B)
- **Architecture**: Qwen2ForCausalLM
- **Parameters**: ~1.5B
- **Context Length**: 131,072 tokens
- **Embedding Dimension**: 1536
- **Quantum Component**: 125-qubit kernel
- **Training Data**: 8 sentiment examples (demonstration)
## Performance
### Base VibeThinker-1.5B Benchmarks
### Benchmark Results
| Model | Accuracy | Type |
|-------|----------|------|
| Classical (Linear SVM) | 100% | Baseline |
| Quantum Hybrid | 75% | Experimental |
<div align="center">
<img src="chronos_o1_results.png" alt="Chronos o1 Results" style="width: 120%; max-width: none;">
</div>

**Note**: Performance varies with dataset size and quantum simulation parameters. This is a proof-of-concept demonstrating quantum-classical integration.
## Installation
### Requirements
```bash
pip install torch transformers numpy scikit-learn
```
## Usage
### Python Inference
```python
from transformers import AutoModel, AutoTokenizer
import torch
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.metrics.pairwise import cosine_similarity

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained("squ11z1/chronos-1.5B")
model = AutoModel.from_pretrained(
    "squ11z1/chronos-1.5B",
    torch_dtype=torch.float16,
).to(device).eval()

def predict_sentiment(text):
    inputs = tokenizer(
        text,
        return_tensors="pt",
        padding=True,
        truncation=True,
        max_length=128,
    ).to(device)
    with torch.no_grad():
        outputs = model(**inputs)
    # Mean-pool the last hidden states into a single 1536-D embedding
    embedding = outputs.last_hidden_state.mean(dim=1).cpu().numpy()[0]
    embedding = normalize([embedding])[0]
    # Your quantum kernel logic here: compare `embedding` against
    # reference embeddings (e.g. with cosine_similarity) and vote
    sentiment = ...
    return sentiment
```
### Quick Start Script
```bash
python inference.py
```
This will start an interactive session where you can enter text for sentiment analysis.
### Example Output
```
Input text: 'Random text!'
[1/3] VibeThinker embedding: 1536D (normalized)
[2/3] Quantum similarity computed
[3/3] Classification: POSITIVE
Confidence: 87.3%
Positive avg: 0.756, Negative avg: 0.128
Time: 0.42s
```
## Files Included
- `inference.py` - Standalone inference script
- `requirements.txt` - Python dependencies
- `chronos_o1_results.png` - Visualization of model performance
- `README.md` - This file
- GGUFs - Quantized models for llama.cpp
## Quantum Kernel Details
The quantum component uses a simplified kernel approach:
1. Extract 1536D embeddings from VibeThinker
2. Normalize using L2 normalization
3. Compute cosine similarity against training examples
4. Apply quantum-inspired weighted voting
5. Return sentiment with confidence score
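Steps 2-5 above can be sketched as follows. The 16-D reference embeddings and label names are illustrative stand-ins (the real model uses 1536-D VibeThinker embeddings), and the "quantum-inspired" weighting is rendered here as a plain softmax-style weighting over cosine similarities:

```python
import numpy as np

def normalize_rows(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def classify(embedding, examples, labels, temperature=0.1):
    """L2-normalize, compute cosine similarity against reference
    embeddings, apply a weighted vote, return (label, confidence)."""
    embedding = embedding / np.linalg.norm(embedding)    # step 2
    sims = examples @ embedding                          # step 3 (examples are row-normalized)
    weights = np.exp(sims / temperature)                 # step 4: similarity-weighted vote
    scores = {lab: weights[[l == lab for l in labels]].sum() for lab in set(labels)}
    best = max(scores, key=scores.get)
    return best, scores[best] / sum(scores.values())     # step 5

# Illustrative 16-D reference embeddings, 4 positive and 4 negative
rng = np.random.default_rng(1)
examples = normalize_rows(rng.normal(size=(8, 16)))
labels = ["positive"] * 4 + ["negative"] * 4
query = examples[0] + 0.05 * rng.normal(size=16)  # near a positive example
label, confidence = classify(query, examples, labels)
```

The `temperature` parameter is an assumption of this sketch: lower values make the vote concentrate on the closest reference examples.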

**Note**: This implementation uses classical simulation. For true quantum execution, integration with IBM Quantum or similar platforms is required.
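For intuition, the quantity a true quantum kernel evaluates on hardware is the state fidelity |⟨ψ(x)|ψ(y)⟩|² between feature-map states. A one-qubit angle-encoding feature map is small enough to simulate exactly in plain NumPy; this illustrates the concept only and is not the 125-qubit circuit referenced above:

```python
import numpy as np

def feature_state(x: float) -> np.ndarray:
    """|psi(x)> = RY(x)|0> on one qubit: angle-encode a scalar feature."""
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def quantum_kernel(x: float, y: float) -> float:
    """Fidelity kernel k(x, y) = |<psi(x)|psi(y)>|^2 = cos^2((x - y) / 2)."""
    return float(abs(feature_state(x) @ feature_state(y)) ** 2)

print(round(quantum_kernel(0.3, 0.3), 6))    # identical inputs -> 1.0
print(round(quantum_kernel(0.0, np.pi), 6))  # orthogonal states -> 0.0
```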
## Training Data
The model uses 8 hand-crafted examples for demonstration:
- 4 positive sentiment examples
- 4 negative sentiment examples

For production use, retrain with larger datasets.
## Limitations
- Small training set (8 examples)
- Quantum kernel is simulated, not executed on real quantum hardware
- Performance may vary significantly with different inputs
- Designed for English text sentiment analysis only
## Future Improvements
1. Expand training dataset to 100+ examples
2. Implement true quantum kernel execution on IBM Quantum
3. Increase quantum circuit complexity (3-4 qubits)
4. Add error mitigation for quantum noise
5. Support multi-language sentiment analysis
6. Fine-tune on domain-specific sentiment data
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{chronos-1.5b,
title={Chronos 1.5B: Quantum-Enhanced Sentiment Analysis},
author={squ11z1},
year={2025},
publisher={Hugging Face},
howpublished={\url{https://huggingface.co/squ11z1/chronos-1.5b}}
}
```
## Acknowledgments
- Base model: [VibeThinker-1.5B](https://huggingface.co/WeiboAI/VibeThinker-1.5B) by WeiboAI
- Quantum computing framework: Qiskit
- Inspired by quantum machine learning research
## License
MIT License - See LICENSE file for details
---
**Disclaimer**: This is an experimental proof-of-concept model. Performance and accuracy are not guaranteed for production use cases. The quantum component currently does not provide quantum advantage over classical methods.