cff-version: 1.2.0
title: "Q-TensorFormer: Quantum-Enhanced Tensor Network LLM Compression Engine"
message: "If you use this software in your research, please cite it using the metadata below."
authors:
  - given-names: Premchan369
    affiliation: ""
repository-code: "https://huggingface.co/Premchan369/q-tensorformer"
url: "https://huggingface.co/Premchan369/q-tensorformer"
abstract: >-
  Q-TensorFormer is a hybrid transformer architecture that compresses feed-forward
  layers using Tensor-Train (TT) decomposition and enhances token representations
  via real quantum circuits, with adaptive TT-rank scheduling guided by attention
  entropy. It achieves 50-70% parameter reduction at equivalent perplexity.
keywords:
- tensor networks
- quantum machine learning
- model compression
- transformer
- language modeling
- efficient deep learning
license: Apache-2.0
version: 3.0.0
date-released: 2026-05-06