# Chronos o1 1.5B - Quantum-Enhanced Sentiment Analysis
<div align="center">
![Chronos o1 Results](chronos_o1_results.png)
**A hybrid quantum-classical model combining VibeThinker-1.5B with quantum kernel methods**
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
[![Transformers](https://img.shields.io/badge/🤗%20Transformers-Compatible-blue)](https://github.com/huggingface/transformers)
</div>
## Overview
**Chronos o1 1.5B** is an experimental quantum-enhanced language model that combines:
- **VibeThinker-1.5B** as the base transformer model for embedding extraction
- **Quantum Kernel Methods** for similarity computation
- **125-qubit quantum circuits** for enhanced feature space representation
This model demonstrates a proof-of-concept for hybrid quantum-classical machine learning applied to sentiment analysis.
## Architecture
```
Input Text
|
v
VibeThinker-1.5B (1536D embeddings)
|
v
L2 Normalization
|
v
Quantum Kernel Similarity (cosine-based)
|
v
Weighted Classification
|
v
Sentiment Output (Positive/Negative/Neutral)
```
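The final two stages can be sketched in a few lines. This is a minimal illustration, not the shipped logic: the tie threshold that yields the Neutral label is an assumption for demonstration, since the training data below contains only positive and negative examples.

```python
import numpy as np

def classify(emb, pos_refs, neg_refs, tie=0.05):
    """Stages 4-5: mean cosine similarity against each class's reference
    embeddings, then a weighted decision. All vectors are assumed
    L2-normalized, so a dot product is cosine similarity.
    The `tie` threshold for the Neutral label is illustrative only."""
    pos = float(np.mean(np.asarray(pos_refs) @ emb))
    neg = float(np.mean(np.asarray(neg_refs) @ emb))
    if abs(pos - neg) < tie:
        return "Neutral"
    return "Positive" if pos > neg else "Negative"
```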
## Model Details
- **Base Model**: [WeiboAI/VibeThinker-1.5B](https://huggingface.co/WeiboAI/VibeThinker-1.5B)
- **Architecture**: Qwen2ForCausalLM
- **Parameters**: ~1.5B
- **Context Length**: 131,072 tokens
- **Embedding Dimension**: 1536
- **Quantum Component**: 125-qubit kernel
- **Training Data**: 8 sentiment examples (demonstration)
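These figures can be checked against the shipped configuration, assuming the repository includes a standard Qwen2 `config.json`:

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("squ11z1/chronos-o1-1.5b")
print(cfg.model_type)               # "qwen2"
print(cfg.hidden_size)              # 1536 (embedding dimension)
print(cfg.max_position_embeddings)  # 131072 (context length)
```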
## Performance
### Benchmark Results
| Model | Accuracy | Type |
|-------|----------|------|
| Classical (Linear SVM) | 100% | Baseline |
| Quantum Hybrid | 75% | Experimental |
**Note**: Performance varies with dataset size and quantum simulation parameters. This is a proof-of-concept demonstrating quantum-classical integration.
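For reference, the classical baseline in the table is a linear SVM fit on the normalized embeddings. A minimal sketch, reusing the `embed` helper defined under Usage below, with stand-in texts in place of the 8 demonstration examples:

```python
import numpy as np
from sklearn.svm import LinearSVC

texts = ["I love this!", "Terrible experience."]  # stand-ins, not the real set
labels = [1, 0]                                   # 1 = positive, 0 = negative

X = np.stack([embed(t) for t in texts])           # `embed` is defined below
clf = LinearSVC().fit(X, labels)
print(clf.predict([embed("Great value for money")]))
```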
## Installation
### Requirements
```bash
pip install torch transformers numpy scikit-learn
```
### GGUF Models (llama.cpp)
For CPU inference with llama.cpp:
- `chronos-o1-1.5b-f16.gguf` - Full precision (3.0GB)
- `chronos-o1-1.5b-q8_0.gguf` - 8-bit quantization (1.6GB)
- `chronos-o1-1.5b-q4_k_m.gguf` - 4-bit quantization (900MB)
- `chronos-o1-1.5b-q3_k_m.gguf` - 3-bit quantization (700MB)
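A typical invocation, assuming a recent llama.cpp build where the CLI binary is named `llama-cli` (older builds call it `main`):

```bash
# Adjust the binary name and model path to your llama.cpp build/layout.
llama-cli -m chronos-o1-1.5b-q4_k_m.gguf \
  -p "Classify the sentiment of: 'I love this product!'" \
  -n 64
```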
## Usage
### Python Inference
```python
from transformers import AutoModel, AutoTokenizer
import torch
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.metrics.pairwise import cosine_similarity
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained("squ11z1/chronos-o1-1.5b")
model = AutoModel.from_pretrained(
    "squ11z1/chronos-o1-1.5b",
    torch_dtype=torch.float16
).to(device).eval()

def embed(text):
    """Return a mean-pooled, L2-normalized 1536D embedding."""
    inputs = tokenizer(text, return_tensors="pt",
                       padding=True, truncation=True,
                       max_length=128).to(device)
    with torch.no_grad():
        outputs = model(**inputs)
    embedding = outputs.last_hidden_state.mean(dim=1).float().cpu().numpy()[0]
    return normalize([embedding])[0]

def predict_sentiment(text, pos_refs, neg_refs):
    """Classify by mean cosine similarity against labeled reference
    embeddings; swap this step for the quantum kernel to get the hybrid."""
    emb = embed(text)
    pos = cosine_similarity([emb], pos_refs).mean()
    neg = cosine_similarity([emb], neg_refs).mean()
    return "POSITIVE" if pos >= neg else "NEGATIVE"
```
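For example, reference embeddings can be built from a handful of labeled texts (stand-ins shown here) and passed to `predict_sentiment`:

```python
pos_refs = [embed(t) for t in ["I love this!", "Fantastic quality."]]
neg_refs = [embed(t) for t in ["I hate this.", "Awful experience."]]

print(predict_sentiment("This exceeded my expectations", pos_refs, neg_refs))
```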
### Quick Start Script
```bash
python inference.py
```
This will start an interactive session where you can enter text for sentiment analysis.
### Example Output
```
Input text: 'Random text!'
[1/3] VibeThinker embedding: 1536D (normalized)
[2/3] Quantum similarity computed
[3/3] Classification: POSITIVE
Confidence: 87.3%
Positive avg: 0.756, Negative avg: 0.128
Time: 0.42s
```
## Files Included
- `inference.py` - Standalone inference script
- `requirements.txt` - Python dependencies
- `chronos_o1_results.png` - Visualization of model performance
- `README.md` - This file
- `*.gguf` - Quantized models for llama.cpp
## Quantum Kernel Details
The quantum component uses a simplified kernel approach:
1. Extract 1536D embeddings from VibeThinker
2. Normalize using L2 normalization
3. Compute cosine similarity against training examples
4. Apply quantum-inspired weighted voting
5. Return sentiment with confidence score
**Note**: This implementation uses classical simulation. For true quantum execution, integration with IBM Quantum or similar platforms is required.
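As a concrete illustration of what a classically simulated kernel computes, a per-feature RY angle encoding on a product state has the closed form `|<psi(x)|psi(y)>|^2 = prod_i cos^2((x_i - y_i) / 2)`, which needs no quantum hardware to evaluate. This encoding is an assumption chosen for illustration; the shipped kernel may differ:

```python
import numpy as np

def product_state_kernel(x, y):
    """Fidelity kernel |<psi(x)|psi(y)>|^2 under a per-feature RY angle
    encoding on a product state: prod_i cos^2((x_i - y_i) / 2).
    Classically simulable, matching the note above."""
    x, y = np.asarray(x), np.asarray(y)
    return float(np.prod(np.cos((x - y) / 2.0) ** 2))
```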
## Training Data
The model uses 8 hand-crafted examples for demonstration:
- 4 positive sentiment examples
- 4 negative sentiment examples
For production use, retrain with larger datasets.
## Limitations
- Small training set (8 examples)
- Quantum kernel is simulated, not executed on real quantum hardware
- Performance may vary significantly with different inputs
- Designed for English text sentiment analysis only
## Future Improvements
1. Expand training dataset to 100+ examples
2. Implement true quantum kernel execution on IBM Quantum (see the sketch below)
3. Increase quantum circuit complexity (3-4 qubits)
4. Add error mitigation for quantum noise
5. Support multi-language sentiment analysis
6. Fine-tune on domain-specific sentiment data
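A rough starting point for item 2, assuming `qiskit` and `qiskit-machine-learning` are installed; the feature dimension is truncated to a toy qubit count, since local statevector simulation at 125 qubits is infeasible:

```python
import numpy as np
from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel

n_qubits = 4                                   # toy size for local simulation
feature_map = ZZFeatureMap(feature_dimension=n_qubits, reps=2)
kernel = FidelityQuantumKernel(feature_map=feature_map)

X = np.random.rand(8, n_qubits)                # stand-in for reduced embeddings
K = kernel.evaluate(x_vec=X)                   # 8 x 8 Gram matrix
print(K.shape)
```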
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{chronos-o1-1.5b,
title={Chronos o1 1.5B: Quantum-Enhanced Sentiment Analysis},
author={Your Name},
year={2024},
publisher={Hugging Face},
howpublished={\url{https://huggingface.co/squ11z1/chronos-o1-1.5b}}
}
```
## Acknowledgments
- Base model: [VibeThinker-1.5B](https://huggingface.co/WeiboAI/VibeThinker-1.5B) by WeiboAI
- Quantum computing framework: Qiskit
- Inspired by quantum machine learning research
## License
MIT License - See LICENSE file for details
## Contact
For questions or issues, please open an issue on the repository or contact [your email].
---
**Disclaimer**: This is an experimental proof-of-concept model. Performance and accuracy are not guaranteed for production use cases. The quantum component does not currently provide a quantum advantage over classical methods.