DeepInnovator GGUF
This repository provides GGUF quantized variants of T1anyu/DeepInnovator.
The original Hugging Face model was converted to GGUF and quantized using llama.cpp.
Original model
- Original model: T1anyu/DeepInnovator
- Model size: 15B parameters
- Original tensor type: BF16
- License: Apache-2.0
Description
DeepInnovator is a large language model for generating novel research ideas. According to the original model card, its training methodology centers on structured scientific knowledge extraction and iterative “Next Idea Prediction,” with the goal of producing innovative and significant research ideas. It is published as a 15B-parameter model under the Apache-2.0 license.
Files
- DeepInnovator-Q2_K.gguf
- DeepInnovator-Q3_K_S.gguf
- DeepInnovator-Q3_K_M.gguf
- DeepInnovator-Q3_K_L.gguf
- DeepInnovator-Q4_K_M.gguf
- DeepInnovator-Q4_K_S.gguf
- DeepInnovator-Q6_K.gguf
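As a rough guide to which file fits your hardware: a GGUF's size is approximately the parameter count times the quant's effective bits per weight. A minimal sketch, using assumed bits-per-weight figures for common k-quants (these vary by model; check the actual file sizes):

```python
# Rough GGUF file-size estimate: parameters * bits-per-weight / 8.
# Ignores metadata and non-quantized tensors, so real files run slightly larger.
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate quantized model size in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

# Assumed effective bits per weight for common k-quants (illustrative only):
for name, bpw in [("Q2_K", 2.6), ("Q3_K_M", 3.9), ("Q4_K_M", 4.8), ("Q6_K", 6.6)]:
    print(f"{name}: ~{approx_gguf_size_gb(15e9, bpw):.1f} GB")
```

For a 15B-parameter model this puts Q4_K_M around 9 GB of weights, before KV-cache and runtime overhead.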
Example: llama.cpp
./llama-cli -m ./DeepInnovator-Q4_K_M.gguf -c 1024 -ngl 20
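Here `-c` sets the context window and `-ngl` the number of transformer layers offloaded to the GPU. A back-of-envelope sketch of the resulting VRAM use, assuming weights dominate, layers are equally sized, and a hypothetical layer count of 48 (read the real count from the GGUF metadata):

```python
# Back-of-envelope VRAM estimate for llama.cpp's -ngl layer offloading.
# Assumes model weights dominate memory and layers are equally sized;
# the 48-layer figure used below is an assumption, not read from the model.
def offloaded_vram_gb(file_size_gb: float, n_layers: int, ngl: int) -> float:
    """Approximate VRAM consumed by offloading `ngl` of `n_layers` layers."""
    return file_size_gb * min(ngl, n_layers) / n_layers

# e.g. 20 of a hypothetical 48 layers of a ~9 GB Q4_K_M file:
print(f"~{offloaded_vram_gb(9.0, 48, 20):.1f} GB")
```

Raise `-ngl` until VRAM is nearly full for the best throughput; KV-cache grows with `-c`, so leave headroom.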
Example: Ollama
Create a Modelfile:
FROM ./DeepInnovator-Q4_K_M.gguf
PARAMETER num_ctx 1024
PARAMETER temperature 0.7
Then run:
ollama create deepinnovator-q4 -f Modelfile
ollama run deepinnovator-q4
Example: Transformers
For the original non-GGUF model:
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "T1anyu/DeepInnovator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
The original model card also provides example prompts and a vLLM usage example.
Notes
These files are quantized GGUF derivatives of the original model. Please refer to the upstream repository for the official model card, usage details, paper, and future updates. The upstream page lists the model as a Qwen2.5-14B-family fine-tune and links the paper DeepInnovator: Triggering the Innovative Capabilities of LLMs (arXiv:2602.18920).
Upstream links
- Original Hugging Face model: https://huggingface.co/T1anyu/DeepInnovator
- GitHub repository: https://github.com/HKUDS/DeepInnovator
Citation
If you use this model, please cite the original work:
@article{fan2026deepinnovator,
  title={DeepInnovator: Triggering the Innovative Capabilities of LLMs},
  author={Fan, Tianyu and Zhang, Fengji and Zheng, Yuxiang and Chen, Bei and Niu, Xinyao and Huang, Chengen and Lin, Junyang and Huang, Chao},
  journal={arXiv preprint arXiv:2602.18920},
  year={2026}
}