---
tags:
- quantum-ml
- hybrid-quantum-classical
- ibm-quantum
- heron-r2
- ibm_fez
- quantum-kernel
- merged-lora
license: mit
language:
- en
base_model:
- WeiboAI/VibeThinker-1.5B
---
# Chronos 1.5B - Quantum-Classical Model
![chronos](https://cdn-uploads.huggingface.co/production/uploads/67329d3f69fded92d56ab41a/mkH0yazAc13v0gvi04RIF.jpeg)
**A hybrid quantum-classical model combining VibeThinker-1.5B with quantum kernel methods**
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
[![Transformers](https://img.shields.io/badge/🤗%20Transformers-Compatible-blue)](https://github.com/huggingface/transformers)
## Overview
**Chronos 1.5B** is an experimental quantum-enhanced language model that combines:
- **VibeThinker-1.5B** as the base transformer model for embedding extraction
- **Quantum Kernel Methods** for similarity computation
- **125-qubit quantum circuits** for enhanced feature space representation
This model demonstrates a proof-of-concept for hybrid quantum-classical machine learning applied to sentiment analysis.
## Architecture
```
Input Text
|
v
VibeThinker-1.5B (1536D embeddings)
|
v
L2 Normalization
|
v
Quantum Kernel
|
v
Weighted Classification
|
v
Sentiment Output (Positive/Negative/Neutral)
```
## Model Details
- **Base Model**: [WeiboAI/VibeThinker-1.5B](https://huggingface.co/WeiboAI/VibeThinker-1.5B)
- **Architecture**: Qwen2ForCausalLM
- **Parameters**: ~1.5B
- **Context Length**: 131,072 tokens
- **Embedding Dimension**: 1536
- **Quantum Component**: 125-qubit kernel
- **Training Data**: 8 sentiment examples (demonstration)
## Performance
### Base VibeThinker-1.5B Benchmarks
<div align="center">
![bench](https://cdn-uploads.huggingface.co/production/uploads/67329d3f69fded92d56ab41a/sdjLC2Oa2JXcwJc-qqSx2.png)
</div>
### Benchmark Results
| Model | Accuracy | Type |
|-------|----------|------|
| Classical (Linear SVM) | 100% | Baseline |
| Quantum Hybrid | 75% | Experimental |
<div align="center">
<img src="chronos_o1_results.png" alt="Chronos o1 Results" style="width: 120%; max-width: none;">
</div>
**Note**: Performance varies with dataset size and quantum simulation parameters. This is a proof-of-concept demonstrating quantum-classical integration.
## Installation
### Requirements
```bash
pip install torch transformers numpy scikit-learn
```
## Usage
### Python Inference
```python
from transformers import AutoModel, AutoTokenizer
import torch
from sklearn.preprocessing import normalize

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained("squ11z1/chronos-1.5B")
model = AutoModel.from_pretrained(
    "squ11z1/chronos-1.5B",
    torch_dtype=torch.float16
).to(device).eval()

def predict_sentiment(text):
    inputs = tokenizer(text, return_tensors="pt",
                       padding=True, truncation=True,
                       max_length=128).to(device)
    with torch.no_grad():
        outputs = model(**inputs)
    # Mean-pool the final hidden states into a 1536D vector, then L2-normalize
    embedding = outputs.last_hidden_state.mean(dim=1).cpu().numpy()[0]
    embedding = normalize([embedding])[0]
    # Your quantum kernel logic here: score `embedding` against the labeled
    # reference embeddings and derive `sentiment` from the weighted vote
    sentiment = "NEUTRAL"  # placeholder until the kernel step is wired in
    return sentiment
```
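The placeholder kernel step above can be filled in with a cosine-similarity weighted vote against labeled reference embeddings. The sketch below is illustrative only: the reference embeddings are random stand-ins for the eight training examples, and the `threshold` for the neutral band is a hypothetical parameter, not a value taken from this model.

```python
import numpy as np

# Hypothetical reference set: eight L2-normalized embeddings standing in for
# the labeled training examples (random vectors here, for illustration only)
rng = np.random.default_rng(0)
ref_embeddings = rng.normal(size=(8, 1536))
ref_embeddings /= np.linalg.norm(ref_embeddings, axis=1, keepdims=True)
ref_labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = positive, 0 = negative

def classify(embedding, threshold=0.05):
    """Weighted vote: the class with the higher average similarity wins;
    a small gap between the two averages is treated as neutral."""
    sims = ref_embeddings @ embedding  # dot product = cosine for unit vectors
    pos_avg = sims[ref_labels == 1].mean()
    neg_avg = sims[ref_labels == 0].mean()
    if abs(pos_avg - neg_avg) < threshold:
        return "NEUTRAL", pos_avg, neg_avg
    return ("POSITIVE" if pos_avg > neg_avg else "NEGATIVE"), pos_avg, neg_avg
```

In practice the reference rows would be the normalized embeddings returned by `predict_sentiment`'s pooling step for each training example.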
### Quick Start Script
```bash
python inference.py
```
This will start an interactive session where you can enter text for sentiment analysis.
### Example Output
```
Input text: 'Random text!'
[1/3] VibeThinker embedding: 1536D (normalized)
[2/3] Quantum similarity computed
[3/3] Classification: POSITIVE
Confidence: 87.3%
Positive avg: 0.756, Negative avg: 0.128
Time: 0.42s
```
## Files Included
- `inference.py` - Standalone inference script
- `requirements.txt` - Python dependencies
- `chronos_o1_results.png` - Visualization of model performance
- `README.md` - This file
- GGUFs - Quantized models for llama.cpp
## Quantum Kernel Details
The quantum component uses a simplified kernel approach:
1. Extract 1536D embeddings from VibeThinker
2. Normalize using L2 normalization
3. Compute cosine similarity against training examples
4. Apply quantum-inspired weighted voting
5. Return sentiment with confidence score
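Under classical simulation, steps 2-4 reduce to dot products on unit vectors. The sketch below assumes a fidelity-style similarity, the squared overlap |⟨x|y⟩|² of amplitude-encoded states, as the "quantum-inspired" kernel; this choice is an assumption for illustration, not necessarily the exact kernel this model uses.

```python
import numpy as np

def l2_normalize(v):
    """Step 2: scale a vector to unit length."""
    return v / np.linalg.norm(v)

def fidelity_kernel(x, y):
    """Quantum-inspired similarity: squared overlap |<x|y>|^2 of two unit
    vectors, mirroring the state fidelity of amplitude-encoded states."""
    return float(np.dot(x, y) ** 2)

x = l2_normalize(np.array([1.0, 2.0, 3.0]))
y = l2_normalize(np.array([1.0, 2.0, 2.5]))
print(fidelity_kernel(x, x))  # identical states give fidelity 1
print(fidelity_kernel(x, y))  # nearly parallel states score close to 1
```

Because both inputs are normalized, the kernel value always lies in [0, 1], which makes it directly usable as a vote weight in step 4.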
**Note**: This implementation uses classical simulation. For true quantum execution, integration with IBM Quantum or similar platforms is required.
## Training Data
The model uses 8 hand-crafted examples for demonstration:
- 4 positive sentiment examples
- 4 negative sentiment examples
For production use, retrain with larger datasets.
## Limitations
- Small training set (8 examples)
- Quantum kernel is simulated, not executed on real quantum hardware
- Performance may vary significantly with different inputs
- Designed for English text sentiment analysis only
## Future Improvements
1. Expand training dataset to 100+ examples
2. Implement true quantum kernel execution on IBM Quantum
3. Increase quantum circuit complexity (3-4 qubits)
4. Add error mitigation for quantum noise
5. Support multi-language sentiment analysis
6. Fine-tune on domain-specific sentiment data
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{chronos-1.5b,
title={Chronos 1.5B: Quantum-Enhanced Sentiment Analysis},
author={squ11z1},
year={2025},
publisher={Hugging Face},
howpublished={\url{https://huggingface.co/squ11z1/chronos-1.5b}}
}
```
## Acknowledgments
- Base model: [VibeThinker-1.5B](https://huggingface.co/WeiboAI/VibeThinker-1.5B) by WeiboAI
- Quantum computing framework: Qiskit
- Inspired by quantum machine learning research
## License
MIT License - See LICENSE file for details
---
**Disclaimer**: This is an experimental proof-of-concept model. Performance and accuracy are not guaranteed for production use cases. The quantum component currently does not provide a quantum advantage over classical methods.