*(A preview of the dataset rows is omitted here. Each row has the following columns: `modelId` (string), `author` (string), `last_modified` (timestamp, UTC), `downloads` (int64), `likes` (int64), `library_name` (string), `tags` (list of strings), `pipeline_tag` (string), `createdAt` (timestamp, UTC), and `card` (string, the full model card text).)*
# Dataset Card for Hugging Face Hub Model Cards

This dataset consists of the model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about a model, its performance, its intended uses, and more. The dataset is updated daily and includes publicly available models on the Hugging Face Hub.

This dataset is made available to help support users wanting to work with a large number of model cards from the Hub. We hope that it will support research on model cards and their use, but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.

## Dataset Details

### Uses

There are a number of potential uses for this dataset, including:
- text mining to find common themes in model cards
- analysis of the model card format/content
- topic modelling of model cards
- analysis of the model card metadata
- training language models on model cards
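As an example of metadata analysis, the `card` column can be split into its YAML metadata block and its markdown body. The sketch below uses only the standard library; the `split_card` helper is illustrative and not part of any Hub library.

```python
def split_card(card_text: str) -> tuple[str, str]:
    """Split a model card into (yaml_metadata, markdown_body).

    Hub model cards may start with a YAML block delimited by '---'
    lines; if no such block is present, the metadata part is empty.
    """
    lines = card_text.splitlines()
    if lines and lines[0].strip() == "---":
        for i, line in enumerate(lines[1:], start=1):
            if line.strip() == "---":
                return "\n".join(lines[1:i]), "\n".join(lines[i + 1:])
    return "", card_text


card = """---
license: apache-2.0
tags:
- unsloth
---
# My Model
A short description."""

meta, body = split_card(card)
print(meta)  # the license/tags metadata block
print(body)  # the markdown body, starting at '# My Model'
```

The same split could feed topic modelling on the bodies or frequency counts over the metadata keys.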
### Out-of-Scope Use

[More Information Needed]
## Dataset Structure

This dataset has a single split.
## Dataset Creation

### Curation Rationale

The dataset was created to assist people working with model cards, and in particular to support research on model cards and their use. It is also possible to download model cards using the Hugging Face Hub API or client library; that option may be preferable if you have a very specific use case or require a different format.
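For the API route mentioned above, a model card is the repo's `README.md`, which can be fetched from the Hub's raw endpoint. This is a minimal stdlib sketch (the model id is just an example); the client-library equivalent is noted in the comment.

```python
from urllib.parse import quote


def card_readme_url(repo_id: str, revision: str = "main") -> str:
    """Return the URL of the raw README.md (the model card) for a Hub model repo."""
    return f"https://huggingface.co/{quote(repo_id)}/raw/{quote(revision)}/README.md"


url = card_readme_url("distilbert/distilbert-base-uncased")
print(url)

# With the official client library, the equivalent is roughly:
#   from huggingface_hub import ModelCard
#   card = ModelCard.load("distilbert/distilbert-base-uncased")
```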
### Source Data

The source data is the README.md files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included alongside the model card.
#### Data Collection and Processing

The data is downloaded by a cron job that runs daily.
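A daily job of this kind could be scheduled with an ordinary crontab entry; the script and log paths below are illustrative, not the actual pipeline.

```
# Run the card-collection script every day at 02:00 (illustrative paths)
0 2 * * * /opt/model-cards/update_cards.sh >> /var/log/model-cards.log 2>&1
```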
#### Who are the source data producers?

The source data producers are the creators of the model cards on the Hugging Face Hub: a broad range of community members, from large companies to individual researchers. We do not gather any information in this repository about who created a model card, although that information can be obtained from the Hugging Face Hub API.
### Annotations

There are no additional annotations in this dataset beyond the model card content.

#### Annotation process

N/A

#### Who are the annotators?

N/A
### Personal and Sensitive Information

We make no effort to anonymize the data. While we do not expect the majority of model cards to contain personal or sensitive information, it is possible that some do. Model cards may also link to websites or email addresses.
## Bias, Risks, and Limitations

Model cards are created by the community, and we have no control over their content. We do not review the model cards and make no claims about the accuracy of the information they contain. Some model cards discuss bias themselves, sometimes by giving examples of bias in the training data or in the model's responses; as a result, this dataset may contain examples of bias.

While we do not directly download any images linked to in the model cards, some cards may include image links, and some of those images may not be suitable for all audiences.
### Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation

No formal citation is required for this dataset, but if you use it in your work, please include a link to this dataset page.
## Dataset Card Authors

## Dataset Card Contact