# Dataset Viewer

Auto-converted to Parquet.
| Column | Type | Length range |
|---|---|---|
| `model_id` | string | 8 to 65 |
| `model_card` | string | 0 to 15.7k |
| `model_labels` | list | (not shown in preview) |
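The rows below can be pulled down programmatically with the `datasets` library. A minimal sketch, assuming the `datasets` package is installed and using the repository id `stevenbucaille/image-captioning-models-dataset` from the footer of this page; the helper name and the `train` split are assumptions:

```python
def load_captioning_models_dataset():
    """Load the model-card dataset from the Hugging Face Hub.

    The import is kept inside the function because `datasets` is a heavy,
    optional dependency; calling this downloads the Parquet files on first use.
    """
    from datasets import load_dataset  # assumption: `datasets` is installed

    # Assumption: the dataset exposes a single "train" split.
    ds = load_dataset("stevenbucaille/image-captioning-models-dataset", split="train")
    for row in ds:
        # Each row carries the three columns from the schema above.
        print(row["model_id"], len(row["model_card"] or ""))
    return ds
```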
**model_id:** `Salesforce/blip-image-captioning-large`
**model_card:** # BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT large backbone). | ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d1...
**model_labels:** null

**model_id:** `Salesforce/blip-image-captioning-base`
**model_card:** # BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT base backbone). | ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12...
**model_labels:** null
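The BLIP checkpoints listed here can be driven through the standard `transformers` image-to-text pipeline. A minimal sketch, not taken from any of the cards themselves; the helper name is hypothetical and `image_path` is a placeholder for any local image file:

```python
def caption_with_blip(image_path, model_id="Salesforce/blip-image-captioning-base"):
    """Caption an image with a BLIP checkpoint via the image-to-text pipeline.

    `transformers` is imported lazily because the model weights are
    downloaded on first use.
    """
    from transformers import pipeline  # assumption: `transformers` is installed

    captioner = pipeline("image-to-text", model=model_id)
    # The pipeline returns a list like [{"generated_text": "..."}].
    return captioner(image_path)[0]["generated_text"]
```

Swapping in `Salesforce/blip-image-captioning-large` trades memory for caption quality.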
**model_id:** `Salesforce/blip2-opt-2.7b`
**model_card:** # BLIP-2, OPT-2.7b, pre-trained only BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv...
**model_labels:** null
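BLIP-2 checkpoints are usually loaded through their dedicated processor and model classes rather than the generic pipeline. A minimal sketch under the assumption that `transformers` and a PIL image are available; the helper name, the optional `prompt`, and the token budget are illustrative choices, and the checkpoint is several GB:

```python
def caption_with_blip2(image, prompt=None, model_id="Salesforce/blip2-opt-2.7b"):
    """Caption a PIL image with BLIP-2, optionally conditioned on a text prompt."""
    from transformers import Blip2Processor, Blip2ForConditionalGeneration  # lazy: heavy deps

    processor = Blip2Processor.from_pretrained(model_id)
    model = Blip2ForConditionalGeneration.from_pretrained(model_id)

    # With prompt=None this is plain captioning; a prompt such as
    # "Question: what is in the photo? Answer:" turns it into prompted generation.
    inputs = processor(images=image, text=prompt, return_tensors="pt")
    generated_ids = model.generate(**inputs, max_new_tokens=30)
    return processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
```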
**model_id:** `microsoft/git-base`
**model_card:** # GIT (GenerativeImage2Text), base-sized GIT (short for GenerativeImage2Text) model, base-sized version. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/micr...
**model_labels:** null

**model_id:** `Dataseeds/LLaVA-OneVision-Qwen2-0.5b-ov-DSD-FineTune`
**model_card:** # LLaVA-OneVision-Qwen2-0.5b Fine-tuned on DataSeeds.AI Dataset This model is a LoRA (Low-Rank Adaptation) fine-tuned version of [lmms-lab/llava-onevision-qwen2-0.5b-ov](https://huggingface.co/lmms-lab/llava-onevision-qwen2-0.5b-ov) specialized for photography scene analysis and description generation. The model was ...
**model_labels:** null

**model_id:** `Dataseeds/BLIP2-opt-2.7b-DSD-FineTune`
**model_card:** # BLIP2-OPT-2.7B Fine-tuned on DataSeeds.AI Dataset Code: https://github.com/DataSeeds-ai/DSD-finetune-blip-llava This model is a fine-tuned version of [Salesforce/blip2-opt-2.7b](https://huggingface.co/Salesforce/blip2-opt-2.7b) specialized for photography scene analysis and technical description generation. The mo...
**model_labels:** null

**model_id:** `nlpconnect/vit-gpt2-image-captioning`
**model_card:** # nlpconnect/vit-gpt2-image-captioning This is an image captioning model trained by @ydshieh in [flax ](https://github.com/huggingface/transformers/tree/main/examples/flax/image-captioning) this is pytorch version of [this](https://huggingface.co/ydshieh/vit-gpt2-coco-en-ckpts). # The Illustrated Image Captioning u...
**model_labels:** null
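The ViT+GPT-2 checkpoint above (and several similar rows further down) follows the `VisionEncoderDecoderModel` pattern, where an image processor feeds a vision encoder and a tokenizer decodes the generated ids. A minimal sketch, not copied from the card; the helper name and generation settings are illustrative:

```python
def caption_with_vit_gpt2(image, model_id="nlpconnect/vit-gpt2-image-captioning"):
    """Caption a PIL image with a ViT-encoder / GPT-2-decoder checkpoint."""
    from transformers import (  # lazy: weights download on first use
        AutoTokenizer,
        ViTImageProcessor,
        VisionEncoderDecoderModel,
    )

    model = VisionEncoderDecoderModel.from_pretrained(model_id)
    processor = ViTImageProcessor.from_pretrained(model_id)
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    pixel_values = processor(images=image, return_tensors="pt").pixel_values
    # Illustrative decoding settings; beam search is common for short captions.
    output_ids = model.generate(pixel_values, max_length=16, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```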
**model_id:** `Salesforce/instructblip-vicuna-7b`
**model_card:** # InstructBLIP model InstructBLIP model using [Vicuna-7b](https://github.com/lm-sys/FastChat#model-weights) as language model. InstructBLIP was introduced in the paper [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Dai et al. Disclaimer: Th...
**model_labels:** null
**model_id:** `ogulcanakca/blip-itu-turkish-captions-finetuned`
**model_card:** # Turkish Image Captioning: A Starting Point with BLIP ## Project Overview and Contribution This project takes the `Salesforce/blip-image-captioning-base` model and, on a subset drawn from the "long_captions" section of the `ituperceptron/image-captioning-turkish` dataset, aims to **generate Turkish image captions**...
**model_labels:** null
**model_id:** `adalbertojunior/image_captioning_portuguese`
**model_card:** Image Captioning in Portuguese trained with ViT and GPT2 [DEMO](https://huggingface.co/spaces/adalbertojunior/image_captioning_portuguese) Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC)
**model_labels:** null

**model_id:** `deepklarity/poster2plot`
**model_card:** # Poster2Plot An image captioning model to generate movie/t.v show plot from poster. It generates decent plots but is no way perfect. We are still working on improving the model. ## Live demo on Hugging Face Spaces: https://huggingface.co/spaces/deepklarity/poster2plot # Model Details The base model uses a Vision ...
**model_labels:** null

**model_id:** `gagan3012/ViTGPT2I2A`
**model_card:** <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViTGPT2I2A This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patc...
**model_labels:** null

**model_id:** `bipin/image-caption-generator`
**model_card:** <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Image-caption-generator This model is trained on [Flickr8k](https://www.kaggle.com/datasets/nunenuh/flickr8k) dataset to generat...
**model_labels:** null

**model_id:** `yuewu/toc_titler`
**model_card:** A model that inputs chemistry journal article table of contents (ToC) images and generates appropriate titles. Trained on all JACS ToCs and titles.
**model_labels:** null

**model_id:** `dhansmair/flamingo-tiny`
**model_card:** Flamingo Model (tiny version) pretrained on Image Captioning on the Conceptual Captions (3M) dataset. Source Code: https://github.com/dhansmair/flamingo-mini Demo Space: https://huggingface.co/spaces/dhansmair/flamingo-tiny-cap Flamingo-mini: https://huggingface.co/spaces/dhansmair/flamingo-mini-cap
**model_labels:** null

**model_id:** `dhansmair/flamingo-mini`
**model_card:** Flamingo Model pretrained on Image Captioning on the Conceptual Captions (3M) dataset. Source Code: https://github.com/dhansmair/flamingo-mini Demo Space: https://huggingface.co/spaces/dhansmair/flamingo-mini-cap Flamingo-tiny: https://huggingface.co/spaces/dhansmair/flamingo-tiny-cap
**model_labels:** null

**model_id:** `Zayn/AICVTG_What_if_a_machine_could_create_captions_automatically`
**model_card:** This is an image captioning model training by Zayn ```python from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, AutoTokenizer model = VisionEncoderDecoderModel.from_pretrained("Zayn/AICVTG_What_if_a_machine_could_create_captions_automatically") feature_extractor = ViTFeatureExtractor.from_pr...
**model_labels:** null
**model_id:** `microsoft/git-base-coco`
**model_card:** # GIT (GenerativeImage2Text), base-sized, fine-tuned on COCO GIT (short for GenerativeImage2Text) model, base-sized version, fine-tuned on COCO. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [...
**model_labels:** null

**model_id:** `microsoft/git-base-textcaps`
**model_card:** # GIT (GenerativeImage2Text), base-sized, fine-tuned on TextCaps GIT (short for GenerativeImage2Text) model, base-sized version, fine-tuned on TextCaps. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first relea...
**model_labels:** null

**model_id:** `microsoft/git-large`
**model_card:** # GIT (GenerativeImage2Text), large-sized GIT (short for GenerativeImage2Text) model, large-sized version. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/mi...
**model_labels:** null

**model_id:** `microsoft/git-large-coco`
**model_card:** # GIT (GenerativeImage2Text), large-sized, fine-tuned on COCO GIT (short for GenerativeImage2Text) model, large-sized version, fine-tuned on COCO. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in...
**model_labels:** null

**model_id:** `microsoft/git-large-textcaps`
**model_card:** # GIT (GenerativeImage2Text), large-sized, fine-tuned on TextCaps GIT (short for GenerativeImage2Text) model, large-sized version, fine-tuned on TextCaps. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first rel...
**model_labels:** null
**model_id:** `ybelkada/blip-image-captioning-base-football-finetuned`
**model_card:** # BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT base backbone) - and fine-tuned on [football dataset](https://huggingface.co/datasets/ybelkada/football-dataset). Google ...
**model_labels:** null

**model_id:** `tuman/vit-rugpt2-image-captioning`
**model_card:** # First image captioning model for russian language vit-rugpt2-image-captioning This is an image captioning model trained on translated version (en-ru) of dataset COCO2014. # Model Details Model was initialized `google/vit-base-patch16-224-in21k` for encoder and `sberbank-ai/rugpt3large_based_on_gpt2` for decoder. ...
**model_labels:** null

**model_id:** `microsoft/git-large-r`
**model_card:** # GIT (GenerativeImage2Text), large-sized, R* *R means "re-trained by removing some offensive captions in cc12m dataset". GIT (short for GenerativeImage2Text) model, large-sized version. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14...
**model_labels:** null

**model_id:** `microsoft/git-large-r-coco`
**model_card:** # GIT (GenerativeImage2Text), large-sized, fine-tuned on COCO, R* R = re-trained by removing some offensive captions in cc12m dataset GIT (short for GenerativeImage2Text) model, large-sized version, fine-tuned on COCO. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Languag...
**model_labels:** null

**model_id:** `microsoft/git-large-r-textcaps`
**model_card:** # GIT (GenerativeImage2Text), large-sized, fine-tuned on TextCaps, R* R = re-trained by removing some offensive captions in cc12m dataset GIT (short for GenerativeImage2Text) model, large-sized version, fine-tuned on TextCaps. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and...
**model_labels:** null

**model_id:** `tifa-benchmark/promptcap-coco-vqa`
**model_card:** This is the repo for the paper [PromptCap: Prompt-Guided Task-Aware Image Captioning](https://arxiv.org/abs/2211.09699). This paper is accepted to ICCV 2023 as [PromptCap: Prompt-Guided Image Captioning for VQA with GPT-3](https://openaccess.thecvf.com/content/ICCV2023/html/Hu_PromptCap_Prompt-Guided_Image_Captioning_f...
**model_labels:** null
**model_id:** `Salesforce/blip2-flan-t5-xl`
**model_card:** # BLIP-2, Flan T5-xl, pre-trained only BLIP-2 model, leveraging [Flan T5-xl](https://huggingface.co/google/flan-t5-xl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by ...
**model_labels:** null

**model_id:** `Salesforce/blip2-opt-6.7b`
**model_card:** # BLIP-2, OPT-6.7b, pre-trained only BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 6.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv...
**model_labels:** null

**model_id:** `Salesforce/blip2-opt-2.7b-coco`
**model_card:** # BLIP-2, OPT-2.7b, fine-tuned on COCO BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arx...
**model_labels:** null

**model_id:** `Salesforce/blip2-opt-6.7b-coco`
**model_card:** # BLIP-2, OPT-6.7b, fine-tuned on COCO BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 6.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arx...
**model_labels:** null

**model_id:** `Salesforce/blip2-flan-t5-xl-coco`
**model_card:** # BLIP-2, Flan T5-xl, fine-tuned on COCO BLIP-2 model, leveraging [Flan T5-xl](https://huggingface.co/google/flan-t5-xl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) b...
**model_labels:** null

**model_id:** `Salesforce/blip2-flan-t5-xxl`
**model_card:** # BLIP-2, Flan T5-xxl, pre-trained only BLIP-2 model, leveraging [Flan T5-xxl](https://huggingface.co/google/flan-t5-xxl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) ...
**model_labels:** null
**model_id:** `jaimin/image_caption`
**model_card:** # Sample running code ```python from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, AutoTokenizer import torch from PIL import Image model = VisionEncoderDecoderModel.from_pretrained("jaimin/image_caption") feature_extractor = ViTFeatureExtractor.from_pretrained("jaimin/image_caption") tokenize...
**model_labels:** null

**model_id:** `Tomatolovve/DemoTest`
**model_card:** # nlpconnect/vit-gpt2-image-captioning This is an image captioning model trained by @ydshieh in [flax ](https://github.com/huggingface/transformers/tree/main/examples/flax/image-captioning) this is pytorch version of [this](https://huggingface.co/ydshieh/vit-gpt2-coco-en-ckpts). # The Illustrated Image Captioning u...
**model_labels:** null
**model_id:** `Maciel/Muge-Image-Caption`
**model_card:** ### Features This model generates text descriptions for images. It uses an Encoder-Decoder architecture, with a BEiT model as the encoder and a GPT model as the decoder. It was trained on the Chinese MUGE dataset for 5k steps; the final validation loss was 0.3737, with rouge1 of 20.419, rouge2 of 7.3553, rougeL of 17.3753, and rougeLsum of 17.376. [GitHub project page](https://github.com/Macielyoung/Chinese-Image-Caption) ### How to use ```python from transformers import VisionEncoderDe...
**model_labels:** null
**model_id:** `baseplate/vit-gpt2-image-captioning`
**model_card:** # nlpconnect/vit-gpt2-image-captioning This is an image captioning model trained by @ydshieh in [flax ](https://github.com/huggingface/transformers/tree/main/examples/flax/image-captioning) this is pytorch version of [this](https://huggingface.co/ydshieh/vit-gpt2-coco-en-ckpts). # The Illustrated Image Captioning u...
**model_labels:** null

**model_id:** `memegpt/blip2_endpoint`
**model_card:** # BLIP-2, OPT-2.7b, pre-trained only BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv...
**model_labels:** null

**model_id:** `kpyu/video-blip-opt-2.7b-ego4d`
**model_card:** # VideoBLIP, OPT-2.7b, fine-tuned on Ego4D VideoBLIP model, leveraging [BLIP-2](https://arxiv.org/abs/2301.12597) with [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters) as its LLM backbone. ## Model description VideoBLIP is an augmented BLIP-2 that can handle ...
**model_labels:** null

**model_id:** `kpyu/video-blip-flan-t5-xl-ego4d`
**model_card:** # VideoBLIP, Flan T5-xl, fine-tuned on Ego4D VideoBLIP model, leveraging [BLIP-2](https://arxiv.org/abs/2301.12597) with [Flan T5-xl](https://huggingface.co/google/flan-t5-xl) (a large language model with 2.7 billion parameters) as its LLM backbone. ## Model description VideoBLIP is an augmented BLIP-2 that can han...
**model_labels:** null

**model_id:** `wangjin2000/git-base-finetune`
**model_card:** # GIT (GenerativeImage2Text), base-sized GIT (short for GenerativeImage2Text) model, base-sized version. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/micr...
**model_labels:** null

**model_id:** `Salesforce/instructblip-flan-t5-xl`
**model_card:** # InstructBLIP model InstructBLIP model using [Flan-T5-xl](https://huggingface.co/google/flan-t5-xl) as language model. InstructBLIP was introduced in the paper [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Dai et al. Disclaimer: The team ...
**model_labels:** null
**model_id:** `noamrot/FuseCap_Image_Captioning`
**model_card:** # FuseCap: Leveraging Large Language Models for Enriched Fused Image Captions A framework designed to generate semantically rich image captions. ## Resources - 💻 **Project Page**: For more details, visit the official [project page](https://rotsteinnoam.github.io/FuseCap/). - 📝 **Read the Paper**: You can find the...
**model_labels:** null

**model_id:** `Salesforce/instructblip-flan-t5-xxl`
**model_card:** # InstructBLIP model InstructBLIP model using [Flan-T5-xxl](https://huggingface.co/google/flan-t5-xxl) as language model. InstructBLIP was introduced in the paper [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Dai et al. Disclaimer: The tea...
**model_labels:** null

**model_id:** `Salesforce/instructblip-vicuna-13b`
**model_card:** # InstructBLIP model InstructBLIP model using [Vicuna-13b](https://github.com/lm-sys/FastChat#model-weights) as language model. InstructBLIP was introduced in the paper [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Dai et al. Disclaimer: T...
**model_labels:** null

**model_id:** `muhualing/vit`
**model_card:** # nlpconnect/vit-gpt2-image-captioning This is an image captioning model trained by @ydshieh in [flax ](https://github.com/huggingface/transformers/tree/main/examples/flax/image-captioning) this is pytorch version of [this](https://huggingface.co/ydshieh/vit-gpt2-coco-en-ckpts). # The Illustrated Image Captioning u...
**model_labels:** null

**model_id:** `paragon-AI/blip2-image-to-text`
**model_card:** # BLIP-2, OPT-2.7b, pre-trained only BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv...
**model_labels:** null
**model_id:** `captioner/caption-gen`
**model_card:** null
**model_id:** `movementso/blip-image-captioning-large`
**model_card:** # BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT large backbone). | ![BLIP.gif](https://s3.amazonaws.com/moonup/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c....
**model_labels:** null

**model_id:** `trojblue/blip2-opt-6.7b-coco-fp16`
**model_card:** # BLIP-2, OPT-6.7b, Fine-tuned on COCO - Unofficial FP16 Version This repository contains an unofficial version of the BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b), which has been fine-tuned on COCO and converted to FP16 for reduced model size and memory footprint. The original model...
**model_labels:** null

**model_id:** `LanguageMachines/blip2-flan-t5-xxl`
**model_card:** # BLIP-2, Flan T5-xxl, pre-trained only BLIP-2 model, leveraging [Flan T5-xxl](https://huggingface.co/google/flan-t5-xxl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) ...
**model_labels:** null
End of preview.
README.md exists but content is empty.
Downloads last month: 7

Spaces using `stevenbucaille/image-captioning-models-dataset`: 2