file_name | model_name | model_type | cleaned_config | first_commit | downloads | likes | gated | tags | pipeline_tag | trending_score | _name_or_path | file_list_str |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
ali2066_finetuned_sentence_itr4_2e-05_webDiscourse_27_02_2022-19_01_41.json | ali2066/finetuned_sentence_itr4_2e-05_webDiscourse_27_02_2022-19_01_41 | distilbert | {"_name_or_path": "distilbert-base-uncased-finetuned-sst-2-english", "activation": "gelu", "architectures": ["DistilBertForSequenceClassification"], "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "finetuning_task": "sst-2", "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "n_heads":... | 2022-02-27 | 4 | 0 | null | transformers, pytorch, tensorboard, distilbert, text-classification, autotrain_compatible, endpoints_compatible, region:us | text-classification | 0 | distilbert-base-uncased-finetuned-sst-2-english | .gitattributes, .gitignore, config.json, pytorch_model.bin, runs/Feb27_19-01-43_bb8-lix.polytechnique.fr/1645984911.8966315/events.out.tfevents.1645984911.bb8-lix.polytechnique.fr, runs/Feb27_19-01-43_bb8-lix.polytechnique.fr/events.out.tfevents.1645984911.bb8-lix.polytechnique.fr, special_tokens_map.json, tokenizer.js... |
udon3_xlm-roberta-base-finetuned-panx-de-fr.json | udon3/xlm-roberta-base-finetuned-panx-de-fr | xlm-roberta | {"_name_or_path": "xlm-roberta-base", "architectures": ["XLMRobertaForTokenClassification"], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "classifier_dropout": null, "eos_token_id": 2, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, ... | 2022-12-10 | 7 | 0 | null | transformers, pytorch, xlm-roberta, token-classification, generated_from_trainer, license:mit, autotrain_compatible, endpoints_compatible, region:us | token-classification | 0 | xlm-roberta-base | .gitattributes, .gitignore, README.md, config.json, pytorch_model.bin, sentencepiece.bpe.model, special_tokens_map.json, tokenizer.json, tokenizer_config.json, training_args.bin |
arslan01_bert-base-uncased-finetuned-cola.json | arslan01/bert-base-uncased-finetuned-cola | bert | {"_name_or_path": "bert-base-uncased", "architectures": ["BertForSequenceClassification"], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_... | 2023-05-05 | 11 | 0 | null | transformers, pytorch, tensorboard, bert, text-classification, generated_from_trainer, dataset:glue, license:apache-2.0, model-index, autotrain_compatible, endpoints_compatible, region:us | text-classification | 0 | bert-base-uncased | .gitattributes, .gitignore, README.md, config.json, pytorch_model.bin, runs/May05_17-30-58_c9b85bc1dc43/1683308000.5506227/events.out.tfevents.1683308000.c9b85bc1dc43.258.1, runs/May05_17-30-58_c9b85bc1dc43/1683308166.5842679/events.out.tfevents.1683308166.c9b85bc1dc43.258.4, runs/May05_17-30-58_c9b85bc1dc43/1683308419... |
ljnlonoljpiljm_florence-2-large-docci-flickr30k.json | ljnlonoljpiljm/florence-2-large-docci-flickr30k | florence2 | {"_name_or_path": "ljnlonoljpiljm/florence-2-large-docci-flickr30k", "architectures": ["Florence2ForConditionalGeneration"], "auto_map": {"AutoConfig": "microsoft/Florence-2-large--configuration_florence2.Florence2Config", "AutoModelForCausalLM": "microsoft/Florence-2-large--modeling_florence2.Florence2ForConditionalGe... | 2024-08-07 | 11 | 0 | null | transformers, safetensors, florence2, text-generation, custom_code, arxiv:1910.09700, autotrain_compatible, region:us | text-generation | 0 | ljnlonoljpiljm/florence-2-large-docci-flickr30k | .gitattributes, README.md, added_tokens.json, config.json, generation_config.json, merges.txt, model.safetensors, preprocessor_config.json, special_tokens_map.json, tokenizer.json, tokenizer_config.json, vocab.json |
ljnlonoljpiljm_florence-2-large-docci-flickr30k.json | ljnlonoljpiljm/florence-2-large-docci-flickr30k | florence2_language | {"_name_or_path": "ljnlonoljpiljm/florence-2-large-docci-flickr30k", "architectures": ["Florence2ForConditionalGeneration"], "auto_map": {"AutoConfig": "microsoft/Florence-2-large--configuration_florence2.Florence2Config", "AutoModelForCausalLM": "microsoft/Florence-2-large--modeling_florence2.Florence2ForConditionalGe... | 2024-08-07 | 11 | 0 | null | transformers, safetensors, florence2, text-generation, custom_code, arxiv:1910.09700, autotrain_compatible, region:us | text-generation | 0 | ljnlonoljpiljm/florence-2-large-docci-flickr30k | .gitattributes, README.md, added_tokens.json, config.json, generation_config.json, merges.txt, model.safetensors, preprocessor_config.json, special_tokens_map.json, tokenizer.json, tokenizer_config.json, vocab.json |
ljnlonoljpiljm_florence-2-large-docci-flickr30k.json | ljnlonoljpiljm/florence-2-large-docci-flickr30k | davit | {"_name_or_path": "ljnlonoljpiljm/florence-2-large-docci-flickr30k", "architectures": ["Florence2ForConditionalGeneration"], "auto_map": {"AutoConfig": "microsoft/Florence-2-large--configuration_florence2.Florence2Config", "AutoModelForCausalLM": "microsoft/Florence-2-large--modeling_florence2.Florence2ForConditionalGe... | 2024-08-07 | 11 | 0 | null | transformers, safetensors, florence2, text-generation, custom_code, arxiv:1910.09700, autotrain_compatible, region:us | text-generation | 0 | ljnlonoljpiljm/florence-2-large-docci-flickr30k | .gitattributes, README.md, added_tokens.json, config.json, generation_config.json, merges.txt, model.safetensors, preprocessor_config.json, special_tokens_map.json, tokenizer.json, tokenizer_config.json, vocab.json |
PrunaAI_ajibawa-2023-Uncensored-Jordan-13B-bnb-4bit-smashed.json | PrunaAI/ajibawa-2023-Uncensored-Jordan-13B-bnb-4bit-smashed | llama | {"_name_or_path": "/ceph/hdd/staff/charpent/.cache/modelstnqva3b4jiy8nfxz", "architectures": ["LlamaForCausalLM"], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 2, "hidden_act": "silu", "hidden_size": 5120, "initializer_range": 0.02, "intermediate_size": 13824, "max_position_embe... | 2024-06-20 | 21 | 0 | null | transformers, safetensors, llama, text-generation, pruna-ai, base_model:ajibawa-2023/Uncensored-Jordan-13B, base_model:quantized:ajibawa-2023/Uncensored-Jordan-13B, autotrain_compatible, text-generation-inference, endpoints_compatible, 4-bit, bitsandbytes, region:us | text-generation | 0 | /ceph/hdd/staff/charpent/.cache/modelstnqva3b4jiy8nfxz | .gitattributes, README.md, config.json, generation_config.json, model-00001-of-00002.safetensors, model-00002-of-00002.safetensors, model.safetensors.index.json, smash_config.json, special_tokens_map.json, tokenizer.json, tokenizer_config.json |
ks5531_Mixtral_pretrain0000500.json | ks5531/Mixtral_pretrain0000500 | mixtral | {"architectures": ["MixtralForCausalLM"], "attention_dropout": 0.0, "attn_implementation": "flash_attention_2", "bos_token_id": 1, "eos_token_id": 2, "hidden_act": "silu", "hidden_size": 1024, "initializer_range": 0.02, "intermediate_size": 3584, "max_position_embeddings": 1024, "num_attention_heads": 8, "num_experts_p... | 2024-03-19 | 14 | 0 | null | transformers, safetensors, mixtral, text-generation, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | null | .gitattributes, config.json, generation_config.json, model.safetensors, tokenizer.json, tokenizer.model, tokenizer_config.json |
ekolasky_longformer_result_detection.json | ekolasky/longformer_result_detection | longformer | {"_name_or_path": "allenai/longformer-base-4096", "architectures": ["LongformerForTokenClassification"], "attention_mode": "longformer", "attention_probs_dropout_prob": 0.1, "attention_window": [512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512], "bos_token_id": 0, "eos_token_id": 2, "gradient_checkpointing": ... | 2023-10-06 | 3 | 0 | null | transformers, pytorch, longformer, token-classification, generated_from_trainer, base_model:allenai/longformer-base-4096, base_model:finetune:allenai/longformer-base-4096, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us | token-classification | 0 | allenai/longformer-base-4096 | .gitattributes, README.md, added_tokens.json, config.json, merges.txt, pytorch_model.bin, special_tokens_map.json, tokenizer.json, tokenizer_config.json, training_args.bin, vocab.json |
joyebright_EAMT2023-Baseline-RU-EN.json | joyebright/EAMT2023-Baseline-RU-EN | xlm-roberta | {"_name_or_path": "xlm-roberta-large", "architectures": ["XLMRobertaForSequenceClassification"], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "classifier_dropout": null, "eos_token_id": 2, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 1024, "initializer_range": 0.02, "intermediate_size": 4... | 2023-05-16 | 9 | 0 | null | transformers, pytorch, xlm-roberta, text-classification, autotrain_compatible, endpoints_compatible, region:us | text-classification | 0 | xlm-roberta-large | .gitattributes, added_tokens.json, config.json, optimizer.pt, pytorch_model.bin, rng_state.pth, scaler.pt, scheduler.pt, sentencepiece.bpe.model, special_tokens_map.json, tokenizer.json, tokenizer_config.json, trainer_state.json, training_args.bin |
emedlogixrajan_allmodality_18dec.json | emedlogixrajan/allmodality_18dec | llama | {"_name_or_path": "/home/llmadmin/merge/allmodality_17dec", "architectures": ["LlamaForCausalLM"], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128009, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_po... | 2024-12-18 | 23 | 0 | null | transformers, safetensors, llama, text-generation, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | /home/llmadmin/merge/allmodality_17dec | .gitattributes, README.md, config.json, generation_config.json, model-00001-of-00004.safetensors, model-00002-of-00004.safetensors, model-00003-of-00004.safetensors, model-00004-of-00004.safetensors, model.safetensors.index.json, special_tokens_map.json, tokenizer.json, tokenizer_config.json |
shleeeee_mistral-ko-OpenOrca-Platypus-v2.json | shleeeee/mistral-ko-OpenOrca-Platypus-v2 | mistral | {"_name_or_path": "mistralai/Mistral-7B-v0.1", "architectures": ["MistralForCausalLM"], "bos_token_id": 1, "eos_token_id": 2, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 32768, "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_v... | 2023-12-18 | 1,614 | 0 | null | transformers, safetensors, mistral, text-generation, ko, license:other, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | mistralai/Mistral-7B-v0.1 | .gitattributes, README.md, config.json, generation_config.json, model-00001-of-00003.safetensors, model-00002-of-00003.safetensors, model-00003-of-00003.safetensors, model.safetensors.index.json, special_tokens_map.json, tokenizer.json, tokenizer.model, tokenizer_config.json |
wy2001_storygenratorfinetunedllama3.21b-instruct.json | wy2001/storygenratorfinetunedllama3.21b-instruct | llama | {"architectures": ["LlamaForCausalLM"], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": [128001, 128008, 128009], "head_dim": 64, "hidden_act": "silu", "hidden_size": 2048, "initializer_range": 0.02, "intermediate_size": 8192, "max_position_embeddings": 131072, "mlp_bias": fal... | 2024-11-27 | 0 | 0 | null | peft, safetensors, llama, text-generation, conversational, en, base_model:meta-llama/Llama-3.2-1B-Instruct, base_model:adapter:meta-llama/Llama-3.2-1B-Instruct, license:apache-2.0, region:us | text-generation | 0 | null | .gitattributes, README.md, adapter_config.json, adapter_model.safetensors, config.json, generation_config.json, special_tokens_map.json, tokenizer.json, tokenizer_config.json, training_args.bin |
Israhassan_EncoderDecoder.json | Israhassan/EncoderDecoder | bart | {"_name_or_path": "facebook/bart-large", "activation_dropout": 0.1, "activation_function": "gelu", "add_bias_logits": false, "add_final_layer_norm": false, "architectures": ["BartForSequenceClassification"], "attention_dropout": 0.1, "bos_token_id": 0, "classif_dropout": 0.1, "classifier_dropout": 0.0, "d_model": 1024,... | 2023-03-27 | 4 | 0 | null | transformers, pytorch, bart, text-classification, autotrain_compatible, endpoints_compatible, region:us | text-classification | 0 | facebook/bart-large | .gitattributes, config.json, merges.txt, pytorch_model.bin, special_tokens_map.json, tokenizer.json, tokenizer_config.json, vocab.json |
head-empty-ai_Codename-Omega-Test.json | head-empty-ai/Codename-Omega-Test | llama | {"_name_or_path": "Omega-Test", "architectures": ["LlamaForCausalLM"], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 2, "hidden_act": "silu", "hidden_size": 5120, "initializer_range": 0.02, "intermediate_size": 13824, "max_position_embeddings": 4096, "num_attention_heads": 40, "n... | 2024-05-24 | 25 | 0 | null | transformers, safetensors, llama, text-generation, mergekit, merge, base_model:NeverSleep/X-NoroChronos-13B, base_model:merge:NeverSleep/X-NoroChronos-13B, base_model:Undi95/MLewdBoros-L2-13B, base_model:merge:Undi95/MLewdBoros-L2-13B, base_model:Undi95/MythoMax-L2-Kimiko-v2-13b, base_model:merge:Undi95/MythoMax-L2-Kim... | text-generation | 0 | Omega-Test | .gitattributes, README.md, config.json, mergekit_config.yml, model-00001-of-00006.safetensors, model-00002-of-00006.safetensors, model-00003-of-00006.safetensors, model-00004-of-00006.safetensors, model-00005-of-00006.safetensors, model-00006-of-00006.safetensors, model.safetensors.index.json, special_tokens_map.json, ... |
kadirnar_Llama-3.2-1B-Vision.json | kadirnar/Llama-3.2-1B-Vision | llamavision | {"architectures": ["Llamavision"], "auto_map": {"AutoConfig": "configuration_llamavision.LlamavisionConfig", "AutoModelForCausalLM": "modeling_llamavision.Llamavision"}, "text_config": {"_name_or_path": "meta-llama/Llama-3.2-1B", "architectures": ["LlamaForCausalLM"], "bos_token_id": 128000, "eos_token_id": [128001, 12... | 2024-12-06 | 46 | 0 | null | transformers, safetensors, llamavision, text-generation, conversational, custom_code, base_model:meta-llama/Llama-3.2-1B, base_model:finetune:meta-llama/Llama-3.2-1B, autotrain_compatible, region:us | text-generation | 0 | null | .gitattributes, LICENSE.txt, README.md, config.json, configuration_llamavision.py, generation_config.json, model.safetensors, modeling_llamavision.py, special_tokens_map.json, tokenizer.json, tokenizer_config.json |
kadirnar_Llama-3.2-1B-Vision.json | kadirnar/Llama-3.2-1B-Vision | llama | {"architectures": ["Llamavision"], "auto_map": {"AutoConfig": "configuration_llamavision.LlamavisionConfig", "AutoModelForCausalLM": "modeling_llamavision.Llamavision"}, "text_config": {"_name_or_path": "meta-llama/Llama-3.2-1B", "architectures": ["LlamaForCausalLM"], "bos_token_id": 128000, "eos_token_id": [128001, 12... | 2024-12-06 | 46 | 0 | null | transformers, safetensors, llamavision, text-generation, conversational, custom_code, base_model:meta-llama/Llama-3.2-1B, base_model:finetune:meta-llama/Llama-3.2-1B, autotrain_compatible, region:us | text-generation | 0 | null | .gitattributes, LICENSE.txt, README.md, config.json, configuration_llamavision.py, generation_config.json, model.safetensors, modeling_llamavision.py, special_tokens_map.json, tokenizer.json, tokenizer_config.json |
kadirnar_Llama-3.2-1B-Vision.json | kadirnar/Llama-3.2-1B-Vision | siglip_vision_model | {"architectures": ["Llamavision"], "auto_map": {"AutoConfig": "configuration_llamavision.LlamavisionConfig", "AutoModelForCausalLM": "modeling_llamavision.Llamavision"}, "text_config": {"_name_or_path": "meta-llama/Llama-3.2-1B", "architectures": ["LlamaForCausalLM"], "bos_token_id": 128000, "eos_token_id": [128001, 12... | 2024-12-06 | 46 | 0 | null | transformers, safetensors, llamavision, text-generation, conversational, custom_code, base_model:meta-llama/Llama-3.2-1B, base_model:finetune:meta-llama/Llama-3.2-1B, autotrain_compatible, region:us | text-generation | 0 | null | .gitattributes, LICENSE.txt, README.md, config.json, configuration_llamavision.py, generation_config.json, model.safetensors, modeling_llamavision.py, special_tokens_map.json, tokenizer.json, tokenizer_config.json |
uukuguy_speechless-orca-platypus-coig-lite-4k-0.6e-13b.json | uukuguy/speechless-orca-platypus-coig-lite-4k-0.6e-13b | llama | {"_name_or_path": "/opt/local/llm_models/huggingface.co/Open-Orca/OpenOrca-Platypus2-13B", "architectures": ["LlamaForCausalLM"], "bos_token_id": 1, "eos_token_id": 2, "hidden_act": "silu", "hidden_size": 5120, "initializer_range": 0.02, "intermediate_size": 13824, "max_position_embeddings": 4096, "num_attention_heads"... | 2023-08-31 | 772 | 0 | null | transformers, pytorch, llama, text-generation, en, dataset:garage-bAInd/Open-Platypus, dataset:Open-Orca/OpenOrca, dataset:BAAI/COIG-PC-Lite, arxiv:2308.07317, arxiv:2306.02707, arxiv:2301.13688, license:cc-by-nc-4.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | /opt/local/llm_models/huggingface.co/Open-Orca/OpenOrca-Platypus2-13B | .gitattributes, README.md, added_tokens.json, config.json, generation_config.json, pytorch_model-00001-of-00003.bin, pytorch_model-00002-of-00003.bin, pytorch_model-00003-of-00003.bin, pytorch_model.bin.index.json, special_tokens_map.json, tokenizer.json, tokenizer.model, tokenizer_config.json |
jbj9287_submission2_mistral-7b-qlora-ai2-arc-train-0.3k.json | jbj9287/submission2_mistral-7b-qlora-ai2-arc-train-0.3k | mistral | {"_name_or_path": "/content/drive/MyDrive/huggingface_cache/mistralai/Mistral-7B-v0.1", "architectures": ["MistralForCausalLM"], "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 2, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_positio... | 2024-12-22 | 7 | 0 | null | transformers, safetensors, mistral, text-generation, trl, sft, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, 4-bit, bitsandbytes, region:us | text-generation | 0 | /content/drive/MyDrive/huggingface_cache/mistralai/Mistral-7B-v0.1 | .gitattributes, README.md, config.json, generation_config.json, model.safetensors, special_tokens_map.json, tokenizer.json, tokenizer.model, tokenizer_config.json |
damgomz_ft_2_3e6_base_x1.json | damgomz/ft_2_3e6_base_x1 | albert | {"_name_or_path": "albert-base-v2", "architectures": ["AlbertForSequenceClassification"], "attention_probs_dropout_prob": 0, "bos_token_id": 2, "classifier_dropout_prob": 0.1, "down_scale_factor": 1, "embedding_size": 128, "eos_token_id": 3, "gap_size": 0, "hidden_act": "gelu_new", "hidden_dropout_prob": 0, "hidden_siz... | 2024-06-17 | 32 | 0 | null | transformers, safetensors, albert, text-classification, en, autotrain_compatible, endpoints_compatible, region:us | text-classification | 0 | albert-base-v2 | .gitattributes, README.md, config.json, model.safetensors, special_tokens_map.json, tokenizer.json, tokenizer_config.json |
mllm-dev_gen_test_5.json | mllm-dev/gen_test_5 | gpt2 | {"_name_or_path": "openai-community/gpt2", "activation_function": "gelu_new", "architectures": ["GPT2LMHeadModel"], "attn_pdrop": 0.1, "bos_token_id": 50256, "embd_pdrop": 0.1, "eos_token_id": 50256, "initializer_range": 0.02, "layer_norm_epsilon": 1e-05, "n_ctx": 1024, "n_embd": 768, "n_head": 12, "n_inner": null, "n_... | 2024-03-13 | 12 | 0 | null | transformers, tensorboard, safetensors, gpt2, text-generation, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | openai-community/gpt2 | .gitattributes, checkpoint-1/config.json, checkpoint-1/generation_config.json, checkpoint-1/merges.txt, checkpoint-1/model.safetensors, checkpoint-1/optimizer.pt, checkpoint-1/rng_state.pth, checkpoint-1/scheduler.pt, checkpoint-1/special_tokens_map.json, checkpoint-1/tokenizer_config.json, checkpoint-1/trainer_state.j... |
sujeethav_test_tiny.json | sujeethav/test_tiny | bert | {"_name_or_path": "haisongzhang/roberta-tiny-cased", "architectures": ["BertModel"], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 512, "initializer_range": 0.02, "intermediate_size": 2048, "layer_norm_e... | 2024-05-05 | 12 | 0 | null | transformers, safetensors, bert, feature-extraction, arxiv:1910.09700, endpoints_compatible, region:us | feature-extraction | 0 | haisongzhang/roberta-tiny-cased | .gitattributes, README.md, config.json, model.safetensors |
fenrirgochad_Llama-2-13b-chat-hf-sharded-bf16-4GB.json | fenrirgochad/Llama-2-13b-chat-hf-sharded-bf16-4GB | llama | {"_name_or_path": "meta-llama/Llama-2-13b-chat-hf", "architectures": ["LlamaForCausalLM"], "attention_bias": false, "bos_token_id": 1, "eos_token_id": 2, "hidden_act": "silu", "hidden_size": 5120, "initializer_range": 0.02, "intermediate_size": 13824, "max_position_embeddings": 4096, "num_attention_heads": 40, "num_hid... | 2023-11-08 | 35 | 0 | null | transformers, safetensors, llama, text-generation, conversational, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | meta-llama/Llama-2-13b-chat-hf | .gitattributes, config.json, generation_config.json, model.safetensors, special_tokens_map.json, tokenizer.json, tokenizer.model, tokenizer_config.json |
suryakumar12434567890_fine-tuned_model.json | suryakumar12434567890/fine-tuned_model | t5 | {"_name_or_path": "t5-base", "architectures": ["T5ForConditionalGeneration"], "classifier_dropout": 0.0, "d_ff": 3072, "d_kv": 64, "d_model": 768, "decoder_start_token_id": 0, "dense_act_fn": "relu", "dropout_rate": 0.1, "eos_token_id": 1, "feed_forward_proj": "relu", "initializer_factor": 1.0, "is_encoder_decoder": tr... | 2024-03-08 | 12 | 0 | null | transformers, tensorboard, safetensors, t5, text2text-generation, generated_from_trainer, base_model:google-t5/t5-base, base_model:finetune:google-t5/t5-base, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text2text-generation | 0 | t5-base | .gitattributes, README.md, config.json, generation_config.json, model.safetensors, runs/Mar08_12-44-32_3537f68e3f93/events.out.tfevents.1709901905.3537f68e3f93.5113.0, runs/Mar08_12-49-00_3537f68e3f93/events.out.tfevents.1709902147.3537f68e3f93.5113.1, runs/Mar08_15-43-25_aca6100af4b5/events.out.tfevents.1709912606.aca... |
AmberYifan_Gemma-2-9B-Llama-3.1-8B-mix.json | AmberYifan/Gemma-2-9B-Llama-3.1-8B-mix | gemma2 | {"_name_or_path": "AmberYifan/Gemma-2-9B-sft-ultrachat-safeRLHF", "architectures": ["Gemma2ForCausalLM"], "attention_bias": false, "attention_dropout": 0.0, "attn_logit_softcapping": 50.0, "bos_token_id": 2, "cache_implementation": "hybrid", "eos_token_id": 1, "final_logit_softcapping": 30.0, "head_dim": 256, "hidden_a... | 2024-12-16 | 17 | 0 | null | transformers, safetensors, gemma2, text-generation, generated_from_trainer, trl, dpo, conversational, arxiv:2305.18290, base_model:AmberYifan/Gemma-2-9B-sft-ultrachat-safeRLHF, base_model:finetune:AmberYifan/Gemma-2-9B-sft-ultrachat-safeRLHF, autotrain_compatible, text-generation-inference, endpoints_compatible, region... | text-generation | 0 | AmberYifan/Gemma-2-9B-sft-ultrachat-safeRLHF | .gitattributes, README.md, all_results.json, config.json, generation_config.json, last-checkpoint/config.json, last-checkpoint/generation_config.json, last-checkpoint/global_step189/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt, last-checkpoint/global_step189/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt, last-checkp... |
ScandinavianMrT_distilbert-SARC_withcontext.json | ScandinavianMrT/distilbert-SARC_withcontext | distilbert | {"_name_or_path": "distilbert-base-uncased", "activation": "gelu", "architectures": ["DistilBertForSequenceClassification"], "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "n_heads": 12, "n_layers": 6, "pad_token_id": 0, "problem_type... | 2022-03-11 | 16 | 0 | null | transformers, pytorch, tensorboard, distilbert, text-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us | text-classification | 0 | distilbert-base-uncased | .gitattributes, .gitignore, README.md, config.json, pytorch_model.bin, runs/Mar11_09-03-46_12bf3b7bb1a1/1646989444.7679713/events.out.tfevents.1646989444.12bf3b7bb1a1.78.1, runs/Mar11_09-03-46_12bf3b7bb1a1/events.out.tfevents.1646989444.12bf3b7bb1a1.78.0, special_tokens_map.json, tokenizer.json, tokenizer_config.json, ... |
unhingedpanda_layoutlm-funsd.json | unhingedpanda/layoutlm-funsd | layoutlm | {"_name_or_path": "microsoft/layoutlm-base-uncased", "architectures": ["LayoutLMForTokenClassification"], "attention_probs_dropout_prob": 0.1, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_2d_position_embeddings"... | 2024-04-16 | 7 | 0 | null | transformers, tensorboard, safetensors, layoutlm, token-classification, generated_from_trainer, dataset:funsd, base_model:microsoft/layoutlm-base-uncased, base_model:finetune:microsoft/layoutlm-base-uncased, license:mit, autotrain_compatible, endpoints_compatible, region:us | token-classification | 0 | microsoft/layoutlm-base-uncased | .gitattributes, README.md, config.json, logs/events.out.tfevents.1713243605.a0157e75988e.1551.0, model.safetensors, preprocessor_config.json, special_tokens_map.json, tokenizer.json, tokenizer_config.json, training_args.bin, vocab.txt |
dshin_flan-t5-ppo-user-f-batch-size-8-epoch-2-use-violation.json | dshin/flan-t5-ppo-user-f-batch-size-8-epoch-2-use-violation | t5 | {"_name_or_path": "dshin/flan-t5-base-MIC", "architectures": ["T5ForConditionalGeneration"], "d_ff": 2048, "d_kv": 64, "d_model": 768, "decoder_start_token_id": 0, "dense_act_fn": "gelu_new", "dropout_rate": 0.1, "eos_token_id": 1, "feed_forward_proj": "gated-gelu", "initializer_factor": 1.0, "is_encoder_decoder": true... | 2023-03-13 | 2 | 0 | null | transformers, pytorch, t5, text2text-generation, trl, reinforcement-learning, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | reinforcement-learning | 0 | dshin/flan-t5-base-MIC | .gitattributes, README.md, config.json, generation_config.json, pytorch_model.bin, special_tokens_map.json, tokenizer.json, tokenizer_config.json |
huggingtweets_maxfitemaster.json | huggingtweets/maxfitemaster | gpt2 | {"_name_or_path": "gpt2", "activation_function": "gelu_new", "architectures": ["GPT2LMHeadModel"], "attn_pdrop": 0.1, "bos_token_id": 50256, "embd_pdrop": 0.1, "eos_token_id": 50256, "initializer_range": 0.02, "layer_norm_epsilon": 1e-05, "n_ctx": 1024, "n_embd": 768, "n_head": 12, "n_inner": null, "n_layer": 12, "n_po... | 2022-06-21 | 16 | 0 | null | transformers, pytorch, gpt2, text-generation, huggingtweets, en, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | gpt2 | .gitattributes, README.md, config.json, merges.txt, pytorch_model.bin, special_tokens_map.json, tokenizer.json, tokenizer_config.json, training_args.bin, vocab.json |
jasminejwebb_KeywordIdentifier.json | jasminejwebb/KeywordIdentifier | xlnet | {"_name_or_path": "./reddit_ner/reddit-nerxlnet-base-cased.model/", "architectures": ["XLNetForTokenClassification"], "attn_type": "bi", "bi_data": false, "bos_token_id": 1, "clamp_len": -1, "d_head": 64, "d_inner": 3072, "d_model": 768, "dropout": 0.1, "end_n_top": 5, "eos_token_id": 2, "ff_activation": "gelu", "initi... | 2022-02-03 | 3 | 11 | null | transformers, pytorch, xlnet, token-classification, autotrain_compatible, endpoints_compatible, region:us | token-classification | 0 | ./reddit_ner/reddit-nerxlnet-base-cased.model/ | .gitattributes, config.json, pytorch_model.bin, special_tokens_map.json, tokenizer.json, tokenizer_config.json |
statking_paligemma_bigbird8.json | statking/paligemma_bigbird8 | big_bird | {"_name_or_path": "google/bigbird-roberta-large", "architectures": ["BigBirdForSequenceClassification"], "attention_probs_dropout_prob": 0.1, "attention_type": "block_sparse", "block_size": 64, "bos_token_id": 1, "classifier_dropout": null, "eos_token_id": 2, "gradient_checkpointing": false, "hidden_act": "gelu_new", "... | 2024-06-05 | 13 | 0 | null | transformers, safetensors, big_bird, text-classification, autotrain_compatible, endpoints_compatible, region:us | text-classification | 0 | google/bigbird-roberta-large | .gitattributes, config.json, model.safetensors, special_tokens_map.json, spiece.model, tokenizer.json, tokenizer_config.json, training_args.bin |
TheBloke_WizardLM-1.0-Uncensored-CodeLlama-34B-GPTQ.json | TheBloke/WizardLM-1.0-Uncensored-CodeLlama-34B-GPTQ | llama | {"_name_or_path": "/workspace/models/CodeLlama-34b-hf", "architectures": ["LlamaForCausalLM"], "bos_token_id": 1, "eos_token_id": 2, "hidden_act": "silu", "hidden_size": 8192, "initializer_range": 0.02, "intermediate_size": 22016, "max_position_embeddings": 16384, "num_attention_heads": 64, "num_hidden_layers": 48, "nu... | 2023-09-05 | 41 | 7 | null | transformers, safetensors, llama, text-generation, en, dataset:ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split, base_model:cognitivecomputations/WizardLM-1.0-Uncensored-CodeLlama-34b, base_model:quantized:cognitivecomputations/WizardLM-1.0-Uncensored-CodeLlama-34b, license:llama2, autotrain_compatible,... | text-generation | 1 | /workspace/models/CodeLlama-34b-hf | .gitattributes, LICENSE.txt, Notice, README.md, USE_POLICY.md, config.json, generation_config.json, model.safetensors, quantize_config.json, special_tokens_map.json, tokenizer.json, tokenizer.model, tokenizer_config.json |
alokabhishek_Llama-2-7b-chat-hf-4bit-AWQ.json | alokabhishek/Llama-2-7b-chat-hf-4bit-AWQ | llama | {"_name_or_path": "/root/.cache/huggingface/hub/models--meta-llama--Llama-2-7b-chat-hf/snapshots/92011f62d7604e261f748ec0cfe6329f31193e33", "architectures": ["LlamaForCausalLM"], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 2, "hidden_act": "silu", "hidden_size": 4096, "initiali... | 2024-03-24 | 13 | 1 | null | transformers, safetensors, llama, text-generation, 4bit, AWQ, AutoAWQ, llama-2, facebook, meta, 7b, quantized, conversational, license:llama2, autotrain_compatible, text-generation-inference, endpoints_compatible, 4-bit, awq, region:us | text-generation | 0 | /root/.cache/huggingface/hub/models--meta-llama--Llama-2-7b-chat-hf/snapshots/92011f62d7604e261f748ec0cfe6329f31193e33 | .gitattributes, LICENSE.txt, README.md, config.json, generation_config.json, model.safetensors, special_tokens_map.json, tokenizer.json, tokenizer.model, tokenizer_config.json |
yiII1_gpt2-cnwiki-full_data-P100.json | yiII1/gpt2-cnwiki-full_data-P100 | gpt2 | {"_name_or_path": "uer/gpt2-chinese-cluecorpussmall", "activation_function": "gelu_new", "architectures": ["GPT2LMHeadModel"], "attn_pdrop": 0.1, "bos_token_id": 50256, "embd_pdrop": 0.1, "eos_token_id": 50256, "gradient_checkpointing": false, "initializer_range": 0.02, "layer_norm_epsilon": 1e-05, "n_ctx": 1024, "n_em... | 2024-04-14 | 12 | 0 | null | transformers, tensorboard, safetensors, gpt2, text-generation, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | uer/gpt2-chinese-cluecorpussmall | .gitattributes, config.json, model.safetensors, runs/Apr14_06-02-47_f1d994c4c94e/events.out.tfevents.1713074568.f1d994c4c94e.23.0, special_tokens_map.json, tokenizer.json, tokenizer_config.json, training_args.bin, vocab.txt |
Deev124_hermes-llama3-roleplay-3500-v1.json | Deev124/hermes-llama3-roleplay-3500-v1 | llama | {"_name_or_path": "NousResearch/Hermes-3-Llama-3.1-8B", "architectures": ["LlamaForCausalLM"], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128040, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_positi... | 2024-11-21 | 15 | 0 | null | transformers, safetensors, llama, text-generation, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | NousResearch/Hermes-3-Llama-3.1-8B | .gitattributes, README.md, config.json, generation_config.json, model-00001-of-00004.safetensors, model-00002-of-00004.safetensors, model-00003-of-00004.safetensors, model-00004-of-00004.safetensors, model.safetensors.index.json, special_tokens_map.json, tokenizer.json, tokenizer_config.json |
recruit-jp_japanese-typo-detector-roberta-base.json | recruit-jp/japanese-typo-detector-roberta-base | roberta | {"architectures": ["RobertaForTokenClassification"], "attention_probs_dropout_prob": 0.1, "bos_token_id": 2, "classifier_dropout": null, "eos_token_id": 3, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_e... | 2023-11-09 | 1,211 | 8 | null | transformers, safetensors, roberta, token-classification, ja, base_model:ku-nlp/roberta-base-japanese-char-wwm, base_model:finetune:ku-nlp/roberta-base-japanese-char-wwm, license:cc-by-sa-4.0, autotrain_compatible, endpoints_compatible, region:us | token-classification | 1 | null | .gitattributes, README.md, config.json, model.safetensors, special_tokens_map.json, tokenizer_config.json, vocab.txt |
law-ai_CustomInLawBERT.json | law-ai/CustomInLawBERT | bert | {"_name_or_path": "law-ai/CustomInLawBERT", "architectures": ["BertForMaskedLM"], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps"... | 2023-05-05 | 13 | 3 | null | transformers, pytorch, bert, fill-mask, legal, en, arxiv:2209.06049, arxiv:2112.14731, arxiv:1911.05405, arxiv:2105.13562, license:mit, autotrain_compatible, endpoints_compatible, region:us | fill-mask | 0 | law-ai/CustomInLawBERT | .gitattributes, README.md, config.json, pytorch_model.bin, special_tokens_map.json, tokenizer.json, tokenizer_config.json, vocab.txt |
mnoukhov_pythia160m-sft-tldr.json | mnoukhov/pythia160m-sft-tldr | gpt_neox | {"_name_or_path": "EleutherAI/pythia-160m-deduped", "architectures": ["GPTNeoXForCausalLM"], "attention_bias": true, "attention_dropout": 0.0, "bos_token_id": 0, "classifier_dropout": 0.1, "eos_token_id": 0, "hidden_act": "gelu", "hidden_dropout": 0.0, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size":... | 2024-06-18 | 228 | 0 | null | transformers, safetensors, gpt_neox, text-generation, trl, sft, generated_from_trainer, base_model:EleutherAI/pythia-160m-deduped, base_model:finetune:EleutherAI/pythia-160m-deduped, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | EleutherAI/pythia-160m-deduped | .gitattributes, README.md, config.json, generation_config.json, model.safetensors, special_tokens_map.json, tokenizer.json, tokenizer_config.json, training_args.bin |
kohankhaki_LLAMA-2-13B-201.json | kohankhaki/LLAMA-2-13B-201 | llama | {"_name_or_path": "/home/ubuntu/fk-models/finetuned-llama2-13b-201/", "architectures": ["LlamaModel"], "bos_token_id": 1, "eos_token_id": 2, "hidden_act": "silu", "hidden_size": 5120, "initializer_range": 0.02, "intermediate_size": 13824, "max_position_embeddings": 2048, "num_attention_heads": 40, "num_hidden_layers": ... | 2024-02-05 | 6 | 0 | null | transformers, pytorch, llama, feature-extraction, text-generation-inference, endpoints_compatible, region:us | feature-extraction | 0 | /home/ubuntu/fk-models/finetuned-llama2-13b-201/ | .gitattributes, config.json, pytorch_model-00001-of-00006.bin, pytorch_model-00002-of-00006.bin, pytorch_model-00003-of-00006.bin, pytorch_model-00004-of-00006.bin, pytorch_model-00005-of-00006.bin, pytorch_model-00006-of-00006.bin, pytorch_model.bin.index.json, special_tokens_map.json, tokenizer.json, tokenizer.model,... |
jagr-ai_llama-2-7b-jagr.json | jagr-ai/llama-2-7b-jagr | llama | {"_name_or_path": "NousResearch/Llama-2-7b-chat-hf", "architectures": ["LlamaForCausalLM"], "bos_token_id": 1, "eos_token_id": 2, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 11008, "max_position_embeddings": 4096, "num_attention_heads": 32, "num_hidden_layers": 32, "num_ke... | 2023-11-10 | 12 | 0 | null | transformers, pytorch, llama, text-generation, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | NousResearch/Llama-2-7b-chat-hf | .gitattributes, README.md, config.json, generation_config.json, pytorch_model-00001-of-00002.bin, pytorch_model-00002-of-00002.bin, pytorch_model.bin.index.json, special_tokens_map.json, tokenizer.json, tokenizer_config.json |
winwithpartner_Ecommerce-ChatBot.json | winwithpartner/Ecommerce-ChatBot | llama | {"_name_or_path": "/kaggle/input/llama-3.2/transformers/3b-instruct/1", "architectures": ["LlamaForCausalLM"], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128256, "eos_token_id": 128257, "head_dim": 128, "hidden_act": "silu", "hidden_size": 3072, "initializer_range": 0.02, "intermediate_size": 81... | 2024-10-26 | 8 | 0 | null | transformers, safetensors, llama, text-generation, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | /kaggle/input/llama-3.2/transformers/3b-instruct/1 | .gitattributes, README.md, config.json, generation_config.json, model-00001-of-00002.safetensors, model-00002-of-00002.safetensors, model.safetensors.index.json, special_tokens_map.json, tokenizer.json, tokenizer_config.json |
RichardErkhov_DeepMount00_-_Mistral-Ita-7b-4bits.json | RichardErkhov/DeepMount00_-_Mistral-Ita-7b-4bits | mistral | {"_name_or_path": "Mistral-Ita-7b", "architectures": ["MistralForCausalLM"], "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 2, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 32768, "num_attention_heads": 32, "num_hidden_layers":... | 2024-05-10 | 21 | 0 | null | transformers, safetensors, mistral, text-generation, autotrain_compatible, text-generation-inference, endpoints_compatible, 4-bit, bitsandbytes, region:us | text-generation | 0 | Mistral-Ita-7b | .gitattributes, README.md, config.json, generation_config.json, model.safetensors, special_tokens_map.json, tokenizer.json, tokenizer.model, tokenizer_config.json |
asthaa30_lora_model.json | asthaa30/lora_model | llama | {"_name_or_path": "unsloth/Meta-Llama-3.1-8B", "architectures": ["LlamaForCausalLM"], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 131072, "m... | 2024-07-29 | 4 | 1 | null | transformers, safetensors, llama, text-generation, text-generation-inference, unsloth, trl, en, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us | text-generation | 0 | unsloth/Meta-Llama-3.1-8B | .gitattributes, README.md, adapter_config.json, adapter_model.safetensors, config.json, special_tokens_map.json, tokenizer.json, tokenizer_config.json |
schoenml_bert-emotion.json | schoenml/bert-emotion | distilbert | {"_name_or_path": "distilbert-base-cased", "activation": "gelu", "architectures": ["DistilBertForSequenceClassification"], "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "n_heads": 12, "n_layers": 6, "output_past": true, "pad_token_id... | 2022-05-20 | 9 | 0 | null | transformers, pytorch, tensorboard, distilbert, text-classification, generated_from_trainer, dataset:tweet_eval, license:apache-2.0, model-index, autotrain_compatible, endpoints_compatible, region:us | text-classification | 0 | distilbert-base-cased | .gitattributes, .gitignore, README.md, config.json, pytorch_model.bin, runs/May23_14-58-17_5cc8f86d98f1/1653317964.445483/events.out.tfevents.1653317964.5cc8f86d98f1.74.1, runs/May23_14-58-17_5cc8f86d98f1/events.out.tfevents.1653317964.5cc8f86d98f1.74.0, special_tokens_map.json, tokenizer.json, tokenizer_config.json, t... |
Aishupatil_gemma-Code-Instruct-Finetune-test.json | Aishupatil/gemma-Code-Instruct-Finetune-test | gemma | {"_name_or_path": "google/gemma-2b-it", "architectures": ["GemmaForCausalLM"], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 2, "eos_token_id": 1, "head_dim": 256, "hidden_act": "gelu", "hidden_size": 2048, "initializer_range": 0.02, "intermediate_size": 16384, "max_position_embeddings": 8192, "num... | 2024-03-01 | 9 | 0 | null | transformers, safetensors, gemma, text-generation, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | google/gemma-2b-it | .gitattributes, README.md, config.json, generation_config.json, model-00001-of-00002.safetensors, model-00002-of-00002.safetensors, model.safetensors.index.json, special_tokens_map.json, tokenizer.json, tokenizer.model, tokenizer_config.json |
mpasila_llama-7b-finnish-instruct-v0.2-exl2-4bpw.json | mpasila/llama-7b-finnish-instruct-v0.2-exl2-4bpw | llama | {"_name_or_path": "model_merged_v0.2_option2", "architectures": ["LlamaForCausalLM"], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 64260, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 11008, "max_position_embeddings": 2048, "num_atten... | 2024-03-20 | 10 | 0 | null | transformers, llama, text-generation, finnish, base_model:Finnish-NLP/llama-7b-finnish-instruct-v0.2, base_model:quantized:Finnish-NLP/llama-7b-finnish-instruct-v0.2, license:apache-2.0, autotrain_compatible, endpoints_compatible, 4-bit, exl2, region:us | text-generation | 0 | model_merged_v0.2_option2 | .gitattributes, README.md, config.json, generation_config.json, job_new.json, measurement.json, output.safetensors, special_tokens_map.json, tokenizer.json, tokenizer_config.json |
DeskDown_MarianMixFT_en-my.json | DeskDown/MarianMixFT_en-my | marian | {"_name_or_path": "DeskDown/MarianMix_en-10", "activation_dropout": 0.0, "activation_function": "swish", "add_bias_logits": false, "add_final_layer_norm": false, "architectures": ["MarianMTModel"], "attention_dropout": 0.0, "bad_words_ids": [[65000]], "bos_token_id": 0, "classif_dropout": 0.0, "classifier_dropout": 0.0... | 2022-01-14 | 9 | 0 | null | transformers, pytorch, marian, text2text-generation, autotrain_compatible, endpoints_compatible, region:us | text2text-generation | 0 | DeskDown/MarianMix_en-10 | .gitattributes, config.json, pytorch_model.bin |
kekunh_financial-twhin-bert-large-3labels-pesudo-bb-only.json | kekunh/financial-twhin-bert-large-3labels-pesudo-bb-only | bert | {"_name_or_path": "Twitter/twhin-bert-large", "architectures": ["BertForSequenceClassification"], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 1024, "initializer_range": 0.02, "intermediate_size": 4096, "layer_norm_eps": 1e-12, "max_po... | 2024-02-21 | 13 | 0 | null | transformers, tensorboard, safetensors, bert, text-classification, generated_from_trainer, en, dataset:kekunh/stock-related-tweets, dataset:zeroshot/twitter-financial-news-sentiment, base_model:Twitter/twhin-bert-large, base_model:finetune:Twitter/twhin-bert-large, license:apache-2.0, autotrain_compatible, endpoints_co... | text-classification | 0 | Twitter/twhin-bert-large | .gitattributes, README.md, config.json, model.safetensors, runs/Feb21_23-03-04_1a2e434dca9b/events.out.tfevents.1708556590.1a2e434dca9b.2543.6, runs/Feb21_23-03-04_1a2e434dca9b/events.out.tfevents.1708558048.1a2e434dca9b.2543.7, special_tokens_map.json, tokenizer.json, tokenizer_config.json, training_args.bin |
sashashghome_dummy-model-hey.json | sashashghome/dummy-model-hey | camembert | {"_name_or_path": "camembert-base", "architectures": ["CamembertForMaskedLM"], "attention_probs_dropout_prob": 0.1, "bos_token_id": 5, "classifier_dropout": null, "eos_token_id": 6, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_ep... | 2025-01-06 | 0 | 0 | null | transformers, safetensors, camembert, fill-mask, arxiv:1910.09700, autotrain_compatible, endpoints_compatible, region:us | fill-mask | 0 | camembert-base | .gitattributes, README.md, added_tokens.json, config.json, model.safetensors, sentencepiece.bpe.model, special_tokens_map.json, tokenizer.json, tokenizer_config.json |
Menouar_saqr-7b-merged.json | Menouar/saqr-7b-merged | falcon | {"_name_or_path": "tiiuae/falcon-7b", "alibi": false, "apply_residual_connection_post_layernorm": false, "architectures": ["FalconForCausalLM"], "attention_dropout": 0.0, "auto_map": {"AutoConfig": "tiiuae/falcon-7b--configuration_falcon.FalconConfig", "AutoModel": "tiiuae/falcon-7b--modeling_falcon.FalconModel", "Auto... | 2024-02-16 | 465 | 1 | null | transformers, safetensors, falcon, text-generation, saqr-7b-instrcut, Pytorch, conversational, custom_code, en, dataset:HuggingFaceH4/ultrachat_200k, dataset:openbmb/UltraFeedback, dataset:gsm8k, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | tiiuae/falcon-7b | .gitattributes, README.md, config.json, generation_config.json, model-00001-of-00003.safetensors, model-00002-of-00003.safetensors, model-00003-of-00003.safetensors, model.safetensors.index.json, special_tokens_map.json, tokenizer.json, tokenizer_config.json |
Jeevesh8_multiberts_seed_11_ft_0.json | Jeevesh8/multiberts_seed_11_ft_0 | bert | {"_name_or_path": "google/multiberts-seed_11", "architectures": ["BertForSequenceClassification"], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "finetuning_task": "mnli", "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "laye... | 2022-02-07 | 2 | 0 | null | transformers, jax, tensorboard, bert, text-classification, autotrain_compatible, endpoints_compatible, region:us | text-classification | 0 | google/multiberts-seed_11 | .gitattributes, config.json, events.out.tfevents.1644220213.gr060.hpc.nyu.edu.1331155.0.v2, flax_model.msgpack, special_tokens_map.json, tokenizer.json, tokenizer_config.json, vocab.txt |
Sumail_Barista22.json | Sumail/Barista22 | stablelm | {"_name_or_path": "coffie3/s31x", "architectures": ["StableLmForCausalLM"], "attention_dropout": 0.0, "bos_token_id": 100257, "eos_token_id": 100257, "hidden_act": "silu", "hidden_dropout": 0.0, "hidden_size": 2048, "initializer_range": 0.02, "intermediate_size": 5632, "layer_norm_eps": 1e-05, "max_position_embeddings"... | 2024-04-03 | 11 | 0 | null | transformers, safetensors, stablelm, text-generation, mergekit, merge, conversational, base_model:Sumail/Barista20, base_model:merge:Sumail/Barista20, base_model:coffie3/s31x, base_model:merge:coffie3/s31x, autotrain_compatible, endpoints_compatible, region:us | text-generation | 0 | coffie3/s31x | .gitattributes, README.md, added_tokens.json, config.json, mergekit_config.yml, merges.txt, model-00001-of-00001.safetensors, model.safetensors.index.json, special_tokens_map.json, tokenizer.json, tokenizer_config.json, vocab.json |
SOUMYADEEPSAR_roberta_persuasive.json | SOUMYADEEPSAR/roberta_persuasive | roberta | {"_name_or_path": "roberta-base", "architectures": ["RobertaForSequenceClassification"], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "classifier_dropout": null, "eos_token_id": 2, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "lay... | 2024-05-11 | 35 | 0 | null | transformers, safetensors, roberta, text-classification, arxiv:1910.09700, autotrain_compatible, endpoints_compatible, region:us | text-classification | 0 | roberta-base | .gitattributes, README.md, config.json, merges.txt, model.safetensors, special_tokens_map.json, tokenizer.json, tokenizer_config.json, vocab.json |
Lucia-no_key_16.json | Lucia-no/key_16 | llama | {"_name_or_path": "TdL/key1", "architectures": ["LlamaForCausalLM"], "attention_bias": false, "bos_token_id": 1, "eos_token_id": 2, "hidden_act": "silu", "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "max_position_embeddings": 1024, "num_attention_heads": 8, "num_hidden_layers": 12, "num_key... | 2024-02-05 | 13 | 0 | null | transformers, safetensors, llama, text-generation, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | TdL/key1 | .gitattributes, config.json, generation_config.json, model.safetensors |
evolvingstuff_bert-base-cased-wikitext2.json | evolvingstuff/bert-base-cased-wikitext2 | bert | {"_name_or_path": "bert-base-cased", "architectures": ["BertForMaskedLM"], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12... | 2022-05-16 | 4 | 0 | null | transformers, pytorch, tensorboard, bert, fill-mask, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us | fill-mask | 0 | bert-base-cased | .gitattributes, .gitignore, README.md, config.json, pytorch_model.bin, runs/May16_21-26-01_f21229f4e6c7/1652736372.710687/events.out.tfevents.1652736372.f21229f4e6c7.73.4, runs/May16_21-26-01_f21229f4e6c7/events.out.tfevents.1652736372.f21229f4e6c7.73.3, runs/May16_21-26-01_f21229f4e6c7/events.out.tfevents.1652738345.f... |
yeniguno_democracy-sentiment-analysis-turkish-roberta.json | yeniguno/democracy-sentiment-analysis-turkish-roberta | xlm-roberta | {"_name_or_path": "cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual", "architectures": ["XLMRobertaForSequenceClassification"], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "classifier_dropout": null, "eos_token_id": 2, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0... | 2024-09-09 | 31 | 0 | null | transformers, tensorboard, safetensors, xlm-roberta, text-classification, generated_from_trainer, tr, base_model:cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual, base_model:finetune:cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual, license:mit, autotrain_compatible, endpoints_compatible, region:us | text-classification | 0 | cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual | .gitattributes, README.md, config.json, model.safetensors, runs/Sep09_10-27-40_7729992a872c/events.out.tfevents.1725877663.7729992a872c.638.0, runs/Sep09_10-27-40_7729992a872c/events.out.tfevents.1725879393.7729992a872c.638.1, sentencepiece.bpe.model, special_tokens_map.json, tokenizer.json, tokenizer_config.json, trai... |
connectivity_bert_ft_qqp-32.json | connectivity/bert_ft_qqp-32 | bert | {"architectures": ["BertForSequenceClassification"], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "num_attention_heads"... | 2022-05-21 | 11 | 0 | null | transformers, pytorch, tensorboard, bert, text-classification, autotrain_compatible, endpoints_compatible, region:us | text-classification | 0 | null | .gitattributes, config.json, events.out.tfevents.1651954865.gr011.hpc.nyu.edu, paws_eval@15000steps_bert_ft_qqp-32.json, paws_eval@25000steps_bert_ft_qqp-32.json, paws_eval@34110steps_bert_ft_qqp-32.json, pytorch_model.bin, qqp_eval@34110steps_bert_ft_qqp-32.json, special_tokens_map.json, tokenizer_config.json, vocab.t... |
Multilingual-Perspectivist-NLU_irony_es_Spain.json | Multilingual-Perspectivist-NLU/irony_es_Spain | roberta | {"_name_or_path": "roberta-base", "architectures": ["RobertaForSequenceClassification"], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "classifier_dropout": null, "eos_token_id": 2, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "lay... | 2023-11-06 | 6 | 0 | null | transformers, pytorch, roberta, text-classification, generated_from_trainer, base_model:FacebookAI/roberta-base, base_model:finetune:FacebookAI/roberta-base, license:mit, autotrain_compatible, endpoints_compatible, region:us | text-classification | 0 | roberta-base | .gitattributes, README.md, config.json, pytorch_model.bin, training_args.bin |
WisPerMed_Llama-3.1-SauerkrautLM-70b-Instruct-AWQ.json | WisPerMed/Llama-3.1-SauerkrautLM-70b-Instruct-AWQ | llama | {"_name_or_path": "/home/baheryilmaz/.cache/huggingface/hub/models--VAGOsolutions--Llama-3.1-SauerkrautLM-70b-Instruct/snapshots/e8e74aa789243c25a3a8f7565780a402f5050bbb", "architectures": ["LlamaForCausalLM"], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": [128001, 128008, 1... | 2024-10-17 | 1,259 | 4 | null | safetensors, llama, text-generation, conversational, de, en, base_model:VAGOsolutions/Llama-3.1-SauerkrautLM-70b-Instruct, base_model:quantized:VAGOsolutions/Llama-3.1-SauerkrautLM-70b-Instruct, license:apache-2.0, 4-bit, awq, region:us | text-generation | 1 | /home/baheryilmaz/.cache/huggingface/hub/models--VAGOsolutions--Llama-3.1-SauerkrautLM-70b-Instruct/snapshots/e8e74aa789243c25a3a8f7565780a402f5050bbb | .gitattributes, README.md, config.json, generation_config.json, image.png, model-00001-of-00009.safetensors, model-00002-of-00009.safetensors, model-00003-of-00009.safetensors, model-00004-of-00009.safetensors, model-00005-of-00009.safetensors, model-00006-of-00009.safetensors, model-00007-of-00009.safetensors, model-0... |
NickyNicky_gemma-1.1-2b-it_text_to_sql_format_chatML_V1.json | NickyNicky/gemma-1.1-2b-it_text_to_sql_format_chatML_V1 | gemma | {"_name_or_path": "google/gemma-1.1-2b-it", "architectures": ["GemmaForCausalLM"], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 2, "eos_token_id": 1, "head_dim": 256, "hidden_act": "gelu_pytorch_tanh", "hidden_activation": "gelu_pytorch_tanh", "hidden_size": 2048, "initializer_range": 0.02, "inter... | 2024-04-07 | 13 | 3 | null | transformers, safetensors, gemma, text-generation, conversational, en, dataset:gretelai/synthetic_text_to_sql, dataset:NickyNicky/synthetic_text_to_sql_format_chatML_gemma, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | google/gemma-1.1-2b-it | .gitattributes, README.md, config.json, generation_config.json, model-00001-of-00002.safetensors, model-00002-of-00002.safetensors, model.safetensors.index.json, special_tokens_map.json, tokenizer.json, tokenizer.model, tokenizer_config.json |
mvonwyl_roberta-base-tweet_eval-offensive.json | mvonwyl/roberta-base-tweet_eval-offensive | roberta | {"_name_or_path": "roberta-base", "architectures": ["RobertaForSequenceClassification"], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "classifier_dropout": null, "eos_token_id": 2, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "lay... | 2022-09-29 | 2 | 0 | null | transformers, pytorch, roberta, text-classification, autotrain_compatible, endpoints_compatible, region:us | text-classification | 0 | roberta-base | .gitattributes, config.json, pytorch_model.bin |
Model-SafeTensors_KoboldAI-LLaMA2-13B-Tiefighter-ST.json | Model-SafeTensors/KoboldAI-LLaMA2-13B-Tiefighter-ST | llama | {"_name_or_path": "/home/mixer/koboldai/models/xwin-rodeo-5", "architectures": ["LlamaForCausalLM"], "attention_bias": false, "bos_token_id": 1, "eos_token_id": 2, "hidden_act": "silu", "hidden_size": 5120, "initializer_range": 0.02, "intermediate_size": 13824, "max_position_embeddings": 4096, "num_attention_heads": 40... | 2024-06-21 | 30 | 0 | null | transformers, pytorch, safetensors, llama, text-generation, license:llama2, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | /home/mixer/koboldai/models/xwin-rodeo-5 | .gitattributes, README.md, config.json, generation_config.json, model-00001-of-00003.safetensors, model-00002-of-00003.safetensors, model-00003-of-00003.safetensors, model.safetensors.index.json, pytorch_model-00001-of-00003.bin, pytorch_model-00002-of-00003.bin, pytorch_model-00003-of-00003.bin, pytorch_model.bin.inde... |
tistak_yiehqFUmZalnFLUG.json | tistak/yiehqFUmZalnFLUG | stablelm | {"_name_or_path": "", "architectures": ["StableLmForCausalLM"], "attention_dropout": 0.0, "bos_token_id": 100257, "eos_token_id": 100257, "hidden_act": "silu", "hidden_dropout": 0.0, "hidden_size": 2048, "initializer_range": 0.02, "intermediate_size": 5632, "layer_norm_eps": 1e-05, "max_position_embeddings": 4096, "num... | 2024-06-02 | 22 | 0 | null | transformers, safetensors, stablelm, text-generation, autotrain_compatible, endpoints_compatible, region:us | text-generation | 0 | null | .gitattributes, added_tokens.json, config.json, generation_config.json, merges.txt, model-00001-of-00002.safetensors, model-00002-of-00002.safetensors, model.safetensors.index.json, special_tokens_map.json, tokenizer.json, tokenizer_config.json, vocab.json |
RogerB_afro-xlmr-large-kinre-finetuned-kinre-tweet-finetuned-kin-sent2.json | RogerB/afro-xlmr-large-kinre-finetuned-kinre-tweet-finetuned-kin-sent2 | xlm-roberta | {"_name_or_path": "RogerB/afro-xlmr-large-kinre-finetuned-kinre-tweet-finetuned", "architectures": ["XLMRobertaForSequenceClassification"], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "classifier_dropout": null, "eos_token_id": 2, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 1024, "initi... | 2023-10-09 | 20 | 0 | null | transformers, pytorch, xlm-roberta, text-classification, generated_from_trainer, base_model:RogerB/afro-xlmr-large-kinre-finetuned-kinre-tweet-finetuned, base_model:finetune:RogerB/afro-xlmr-large-kinre-finetuned-kinre-tweet-finetuned, license:mit, autotrain_compatible, endpoints_compatible, region:us | text-classification | 0 | RogerB/afro-xlmr-large-kinre-finetuned-kinre-tweet-finetuned | .gitattributes, README.md, added_tokens.json, config.json, pytorch_model.bin, sentencepiece.bpe.model, special_tokens_map.json, tokenizer.json, tokenizer_config.json, training_args.bin |
tistak_zmxfHsd6qnKwId5W.json | tistak/zmxfHsd6qnKwId5W | stablelm | {"_name_or_path": "/root/top1", "architectures": ["StableLmForCausalLM"], "attention_dropout": 0.0, "bos_token_id": 100257, "eos_token_id": 100278, "hidden_act": "silu", "hidden_dropout": 0.0, "hidden_size": 2048, "initializer_range": 0.02, "intermediate_size": 5632, "layer_norm_eps": 1e-05, "max_position_embeddings": ... | 2024-07-23 | 5 | 0 | null | transformers, safetensors, stablelm, text-generation, conversational, autotrain_compatible, endpoints_compatible, region:us | text-generation | 0 | /root/top1 | .gitattributes, config.json, model-00001-of-00001.safetensors, model.safetensors.index.json, special_tokens_map.json, tokenizer.json, tokenizer_config.json |
melihberky_bert-base-uncased-finetuned-cola.json | melihberky/bert-base-uncased-finetuned-cola | bert | {"_name_or_path": "bert-base-uncased", "architectures": ["BertForSequenceClassification"], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_... | 2023-05-02 | 7 | 0 | null | transformers, pytorch, tensorboard, bert, text-classification, autotrain_compatible, endpoints_compatible, region:us | text-classification | 0 | bert-base-uncased | .gitattributes, .gitignore, config.json, pytorch_model.bin, runs/May02_21-05-38_bd16ec71b09b/1683061565.4684646/events.out.tfevents.1683061565.bd16ec71b09b.201.1, runs/May02_21-05-38_bd16ec71b09b/1683061701.5013251/events.out.tfevents.1683061701.bd16ec71b09b.201.3, runs/May02_21-05-38_bd16ec71b09b/1683062009.9585328/ev... |
jenspt_byt5_ft_error_only.json | jenspt/byt5_ft_error_only | t5 | {"_name_or_path": "google/byt5-small", "architectures": ["T5ForConditionalGeneration"], "d_ff": 3584, "d_kv": 64, "d_model": 1472, "decoder_start_token_id": 0, "dropout_rate": 0.1, "eos_token_id": 1, "feed_forward_proj": "gated-gelu", "gradient_checkpointing": false, "initializer_factor": 1.0, "is_encoder_decoder": tru... | 2021-11-26 | 6 | 0 | null | transformers, pytorch, t5, text2text-generation, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text2text-generation | 0 | google/byt5-small | .gitattributes, config.json, pytorch_model.bin |
Deepakiitm_llama-3-8b-chat-doctor.json | Deepakiitm/llama-3-8b-chat-doctor | llama | {"_name_or_path": "/kaggle/input/llama-3/transformers/8b-chat-hf/1", "architectures": ["LlamaForCausalLM"], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128256, "eos_token_id": 128257, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_e... | 2024-07-24 | 11 | 0 | null | transformers, safetensors, gguf, llama, text-generation, medical, text2text-generation, en, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us, conversational | text2text-generation | 0 | /kaggle/input/llama-3/transformers/8b-chat-hf/1 | .gitattributes, README.md, adapter_config.json, adapter_model.safetensors, config.json, generation_config.json, llama-3-8b-chat-doctr-Q4_K_M.gguf, model-00001-of-00004.safetensors, model-00002-of-00004.safetensors, model-00003-of-00004.safetensors, model-00004-of-00004.safetensors, model.safetensors.index.json, special... |
ddyuudd_dolly-v2-3b.json | ddyuudd/dolly-v2-3b | gpt_neox | {"_name_or_path": "databricks/dolly-v2-3b", "architectures": ["GPTNeoXForCausalLM"], "attention_dropout": 0.0, "bos_token_id": 0, "classifier_dropout": 0.1, "custom_pipelines": {"text-generation": {"impl": "instruct_pipeline.InstructionTextGenerationPipeline", "pt": "AutoModelForCausalLM", "tf": "TFAutoModelForCausalLM... | 2024-02-22 | 18 | 0 | null | transformers, safetensors, gpt_neox, text-generation, arxiv:1910.09700, base_model:databricks/dolly-v2-3b, base_model:finetune:databricks/dolly-v2-3b, license:mit, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | databricks/dolly-v2-3b | .gitattributes, README.md, config.json, generation_config.json, model-00001-of-00003.safetensors, model-00002-of-00003.safetensors, model-00003-of-00003.safetensors, model.safetensors.index.json, special_tokens_map.json, tokenizer.json, tokenizer_config.json |
AyazK_Clinical_bert_DRG_prediction_phase1.json | AyazK/Clinical_bert_DRG_prediction_phase1 | bert | {"_name_or_path": "./experiments/classification/bert-512-16-2e-05-March-23-22-15", "architectures": ["BertForSequenceClassification"], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 30... | 2024-05-16 | 20 | 0 | null | transformers, pytorch, bert, text-classification, autotrain_compatible, endpoints_compatible, region:us | text-classification | 0 | ./experiments/classification/bert-512-16-2e-05-March-23-22-15 | .gitattributes, config.json, optimizer.pt, pytorch_model.bin, rng_state.pth, scheduler.pt, special_tokens_map.json, tokenizer.json, tokenizer_config.json, trainer_state.json, training_args.bin |
LoneStriker_Thespis-34b-DPO-v0.7-8.0bpw-h8-exl2.json | LoneStriker/Thespis-34b-DPO-v0.7-8.0bpw-h8-exl2 | llama | {"_name_or_path": "cgato/Thespis-34b-v0.7", "architectures": ["LlamaForCausalLM"], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 2, "hidden_act": "silu", "hidden_size": 7168, "initializer_range": 0.02, "intermediate_size": 20480, "max_position_embeddings": 200000, "num_attention_... | 2024-01-17 | 9 | 0 | null | transformers, pytorch, llama, text-generation, not-for-all-audiences, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | cgato/Thespis-34b-v0.7 | .gitattributes, README.md, config.json, generation_config.json, output-00001-of-00005.safetensors, output-00002-of-00005.safetensors, output-00003-of-00005.safetensors, output-00004-of-00005.safetensors, output-00005-of-00005.safetensors, pytorch_model.bin.index.json, special_tokens_map.json, tokenizer.json, tokenizer.... |
AJ2036_test.json | AJ2036/test | gpt2 | {"_name_or_path": "uer/gpt2-chinese-cluecorpussmall", "activation_function": "gelu_new", "architectures": ["GPT2LMHeadModel"], "attn_pdrop": 0.1, "bos_token_id": 50256, "embd_pdrop": 0.1, "eos_token_id": 50256, "gradient_checkpointing": false, "initializer_range": 0.02, "layer_norm_epsilon": 1e-05, "n_ctx": 1024, "n_em... | 2023-05-27 | 12 | 0 | null | transformers, pytorch, gpt2, text-generation, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | uer/gpt2-chinese-cluecorpussmall | .gitattributes, config.json, generation_config.json, pytorch_model.bin, special_tokens_map.json, tokenizer_config.json, training_args.bin, vocab.txt |
FartLabs_Stable_A.json | FartLabs/Stable_A | roberta | {"_name_or_path": "seyonec/SMILES_tokenized_PubChem_shard00_160k", "architectures": ["RobertaForSequenceClassification"], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "classifier_dropout": null, "eos_token_id": 2, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size... | 2024-11-05 | 8 | 0 | null | transformers, safetensors, roberta, text-classification, chemistry, dataset:FartLabs/FartDB, autotrain_compatible, endpoints_compatible, region:us | text-classification | 0 | seyonec/SMILES_tokenized_PubChem_shard00_160k | .gitattributes, README.md, added_tokens.json, config.json, merges.txt, model.safetensors, special_tokens_map.json, tokenizer.json, tokenizer_config.json, vocab.json |
espressor_lmsys.vicuna-7b-v1.5.2b90s128g.json | espressor/lmsys.vicuna-7b-v1.5.2b90s128g | llama | {"_name_or_path": "lmsys/vicuna-7b-v1.5", "architectures": ["LlamaForCausalLM"], "attention_bias": false, "bos_token_id": 1, "eos_token_id": 2, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 11008, "max_position_embeddings": 4096, "num_attention_heads": 32, "num_hidden_layers... | 2024-04-04 | 8 | 0 | null | transformers, llama, text-generation, autotrain_compatible, endpoints_compatible, region:us | text-generation | 0 | lmsys/vicuna-7b-v1.5 | compress_config.json, config.json, deltazip-compressed.safetensors |
DOOGLAK_Article_250v6_NER_Model_3Epochs_UNAUGMENTED.json | DOOGLAK/Article_250v6_NER_Model_3Epochs_UNAUGMENTED | bert | {"_name_or_path": "bert-base-cased", "architectures": ["BertForTokenClassification"], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_... | 2022-08-11 | 19 | 0 | null | transformers, pytorch, tensorboard, bert, token-classification, generated_from_trainer, dataset:article250v6_wikigold_split, license:apache-2.0, model-index, autotrain_compatible, endpoints_compatible, region:us | token-classification | 0 | bert-base-cased | .gitattributes, .gitignore, README.md, config.json, pytorch_model.bin, runs/Aug11_19-12-04_DOOGLAKS-PC/1660259539.8862545/events.out.tfevents.1660259539.DOOGLAKS-PC.19932.364, runs/Aug11_19-12-04_DOOGLAKS-PC/events.out.tfevents.1660259539.DOOGLAKS-PC.19932.363, runs/Aug11_19-12-04_DOOGLAKS-PC/events.out.tfevents.166025... |
sheoran95_normal_nodes_augmented_graphs_with_edge_document_level_T5_run3.json | sheoran95/normal_nodes_augmented_graphs_with_edge_document_level_T5_run3 | t5 | {"_name_or_path": "t5-small", "architectures": ["T5ForConditionalGeneration"], "d_ff": 2048, "d_kv": 64, "d_model": 512, "decoder_start_token_id": 0, "dense_act_fn": "relu", "dropout_rate": 0.1, "eos_token_id": 1, "feed_forward_proj": "relu", "initializer_factor": 1.0, "is_encoder_decoder": true, "is_gated_act": false,... | 2023-04-18 | 3 | 0 | null | transformers, pytorch, tensorboard, t5, text2text-generation, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text2text-generation | 0 | t5-small | .gitattributes, .gitignore, added_tokens.json, config.json, pytorch_model.bin, runs/Apr18_12-17-30_7f572eb59841/1681820256.0204103/events.out.tfevents.1681820256.7f572eb59841.156.1, runs/Apr18_12-17-30_7f572eb59841/events.out.tfevents.1681820256.7f572eb59841.156.0, special_tokens_map.json, spiece.model, tokenizer.json,... |
coconana_Qwen-Qwen1.5-0.5B-1717807535.json | coconana/Qwen-Qwen1.5-0.5B-1717807535 | qwen2 | {"_name_or_path": "merged_model", "architectures": ["Qwen2ForCausalLM"], "attention_dropout": 0.0, "bos_token_id": 151643, "eos_token_id": 151643, "hidden_act": "silu", "hidden_size": 1024, "initializer_range": 0.02, "intermediate_size": 2816, "max_position_embeddings": 32768, "max_window_layers": 21, "num_attention_he... | 2024-06-08 | 20 | 0 | null | transformers, safetensors, qwen2, text-generation, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | merged_model | .gitattributes, README.md, added_tokens.json, config.json, generation_config.json, merges.txt, model.safetensors, special_tokens_map.json, tokenizer.json, tokenizer_config.json, vocab.json |
ys7yoo_binary-inference_bert-base_lr1e-03_wd1e-03_bs16_ep10_plant_fold0.json | ys7yoo/binary-inference_bert-base_lr1e-03_wd1e-03_bs16_ep10_plant_fold0 | bert | {"_name_or_path": "klue/bert-base", "architectures": ["BertForSequenceClassification"], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embe... | 2023-12-04 | 11 | 0 | null | transformers, pytorch, bert, text-classification, autotrain_compatible, endpoints_compatible, region:us | text-classification | 0 | klue/bert-base | .gitattributes, config.json, pytorch_model.bin, special_tokens_map.json, tokenizer.json, tokenizer_config.json, training_args.bin, vocab.txt |
jeiku_Soulful_Bepis_9B.json | jeiku/Soulful_Bepis_9B | mistral | {"_name_or_path": "ChaoticNeutrals/Bepis_9B", "architectures": ["MistralForCausalLM"], "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 2, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 32768, "num_attention_heads": 32, "num_hidde... | 2024-03-04 | 19 | 2 | null | transformers, safetensors, mistral, text-generation, mergekit, merge, en, dataset:ChaoticNeutrals/Synthetic_Soul_1k, base_model:ChaoticNeutrals/Bepis_9B, base_model:merge:ChaoticNeutrals/Bepis_9B, base_model:jeiku/Synthetic_Soul_1k_Mistral_128, base_model:merge:jeiku/Synthetic_Soul_1k_Mistral_128, license:other, autotr... | text-generation | 0 | ChaoticNeutrals/Bepis_9B | .gitattributes, README.md, config.json, mergekit_config.yml, model-00001-of-00002.safetensors, model-00002-of-00002.safetensors, model.safetensors.index.json, special_tokens_map.json, tokenizer.json, tokenizer.model, tokenizer_config.json |
NasimB_cbt-rarity-all-guten-rarity-all-end-19k-mixed.json | NasimB/cbt-rarity-all-guten-rarity-all-end-19k-mixed | gpt2 | {"_name_or_path": "gpt2", "activation_function": "gelu_new", "architectures": ["GPT2LMHeadModel"], "attn_pdrop": 0.1, "bos_token_id": 50256, "embd_pdrop": 0.1, "eos_token_id": 50256, "initializer_range": 0.02, "layer_norm_epsilon": 1e-05, "n_ctx": 128, "n_embd": 768, "n_head": 12, "n_inner": null, "n_layer": 12, "n_pos... | 2023-07-17 | 11 | 0 | null | transformers, pytorch, gpt2, text-generation, generated_from_trainer, dataset:generator, license:mit, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | gpt2 | .gitattributes, .gitignore, README.md, config.json, generation_config.json, merges.txt, pytorch_model.bin, special_tokens_map.json, tokenizer.json, tokenizer_config.json, training_args.bin, vocab.json |
mlfoundations-dev_OH_DCFT_V3_wo_evol_instruct_70k.json | mlfoundations-dev/OH_DCFT_V3_wo_evol_instruct_70k | llama | {"_name_or_path": "meta-llama/Llama-3.1-8B", "architectures": ["LlamaForCausalLM"], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 128000, "eos_token_id": 128001, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddin... | 2024-10-31 | 11 | 0 | null | transformers, safetensors, llama, text-generation, llama-factory, full, generated_from_trainer, conversational, base_model:meta-llama/Llama-3.1-8B, base_model:finetune:meta-llama/Llama-3.1-8B, license:llama3.1, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | meta-llama/Llama-3.1-8B | .gitattributes, README.md, all_results.json, config.json, eval_results.json, generation_config.json, model-00001-of-00004.safetensors, model-00002-of-00004.safetensors, model-00003-of-00004.safetensors, model-00004-of-00004.safetensors, model.safetensors.index.json, special_tokens_map.json, tokenizer.json, tokenizer_co... |
AlignmentResearch_robust_llm_pythia-tt-160m-mz-ada-v3-ch-103000.json | AlignmentResearch/robust_llm_pythia-tt-160m-mz-ada-v3-ch-103000 | gpt_neox | {"_name_or_path": "EleutherAI/pythia-160m-deduped", "architectures": ["GPTNeoXForSequenceClassification"], "attention_bias": true, "attention_dropout": 0.0, "bos_token_id": 0, "classifier_dropout": 0.1, "eos_token_id": 0, "hidden_act": "gelu", "hidden_dropout": 0.0, "hidden_size": 768, "initializer_range": 0.02, "inter... | 2024-03-22 | 7 | 0 | null | transformers, safetensors, gpt_neox, text-classification, generated_from_trainer, base_model:EleutherAI/pythia-160m-deduped, base_model:finetune:EleutherAI/pythia-160m-deduped, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-classification | 0 | EleutherAI/pythia-160m-deduped | .gitattributes, README.md, config.json, model.safetensors, special_tokens_map.json, tokenizer.json, tokenizer_config.json, training_args.bin |
Virt-io_Irene-RP-v2-7B.json | Virt-io/Irene-RP-v2-7B | mistral | {"_name_or_path": "Endevor/InfinityRP-v1-7B", "architectures": ["MistralForCausalLM"], "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 2, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 32768, "num_attention_heads": 32, "num_hidde... | 2024-03-20 | 16 | 1 | null | transformers, safetensors, mistral, text-generation, mergekit, merge, roleplay, arxiv:2212.04089, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | Endevor/InfinityRP-v1-7B | .gitattributes, README.md, config.json, mergekit_config.yml, model-00001-of-00008.safetensors, model-00002-of-00008.safetensors, model-00003-of-00008.safetensors, model-00004-of-00008.safetensors, model-00005-of-00008.safetensors, model-00006-of-00008.safetensors, model-00007-of-00008.safetensors, model-00008-of-00008.... |
Vigneshwar-colab_mt5-small-finetuned-amazon-en-es.json | Vigneshwar-colab/mt5-small-finetuned-amazon-en-es | mt5 | {"_name_or_path": "google/mt5-small", "architectures": ["MT5ForConditionalGeneration"], "classifier_dropout": 0.0, "d_ff": 1024, "d_kv": 64, "d_model": 512, "decoder_start_token_id": 0, "dense_act_fn": "gelu_new", "dropout_rate": 0.1, "eos_token_id": 1, "feed_forward_proj": "gated-gelu", "initializer_factor": 1.0, "is_... | 2024-12-05 | 1 | 0 | null | transformers, tensorboard, safetensors, mt5, text2text-generation, translation, generated_from_trainer, base_model:google/mt5-small, base_model:finetune:google/mt5-small, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us | translation | 0 | google/mt5-small | .gitattributes, README.md, config.json, generation_config.json, model.safetensors, runs/Dec05_03-21-36_29e5cb4e18a8/events.out.tfevents.1733368950.29e5cb4e18a8.267.0, runs/Dec05_03-21-36_29e5cb4e18a8/events.out.tfevents.1733370569.29e5cb4e18a8.267.1, runs/Dec05_03-21-36_29e5cb4e18a8/events.out.tfevents.1733372004.29e5c... |
diegozs97_chemprot-seed-1-60k.json | diegozs97/chemprot-seed-1-60k | bert | {"_name_or_path": "models_finetuned_chemprot/seed-1/check-60k/checkpoint-5000/", "architectures": ["BertForMaskedLM"], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_... | 2021-12-07 | 2 | 0 | null | transformers, pytorch, bert, fill-mask, autotrain_compatible, endpoints_compatible, region:us | fill-mask | 0 | models_finetuned_chemprot/seed-1/check-60k/checkpoint-5000/ | .gitattributes, config.json, pytorch_model.bin |
utkarsh4430_ABEX-abstract-expand.json | utkarsh4430/ABEX-abstract-expand | bart | {"_name_or_path": "facebook/bart-base", "activation_dropout": 0.1, "activation_function": "gelu", "add_bias_logits": false, "add_final_layer_norm": false, "architectures": ["BartForConditionalGeneration"], "attention_dropout": 0.1, "bos_token_id": 0, "classif_dropout": 0.1, "classifier_dropout": 0.0, "d_model": 768, "d... | 2024-05-29 | 19 | 1 | null | transformers, pytorch, bart, text2text-generation, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us | text2text-generation | 0 | facebook/bart-base | .gitattributes, README.md, config.json, generation_config.json, merges.txt, optimizer.pt, pytorch_model.bin, rng_state.pth, scheduler.pt, special_tokens_map.json, tokenizer.json, tokenizer_config.json, trainer_state.json, training_args.bin, vocab.json |
Evan-Lin_Bart-abs-amazon-allure.json | Evan-Lin/Bart-abs-amazon-allure | bart | {"_name_or_path": "Evan-Lin/amazon-abs-bart-base-30-words", "activation_dropout": 0.1, "activation_function": "gelu", "add_bias_logits": false, "add_final_layer_norm": false, "architectures": ["BartForConditionalGeneration"], "attention_dropout": 0.1, "bos_token_id": 0, "classif_dropout": 0.1, "classifier_dropout": 0.0... | 2023-08-09 | 6 | 0 | null | transformers, pytorch, bart, text2text-generation, trl, reinforcement-learning, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us | reinforcement-learning | 0 | Evan-Lin/amazon-abs-bart-base-30-words | .gitattributes, README.md, config.json, generation_config.json, merges.txt, pytorch_model.bin, special_tokens_map.json, tokenizer.json, tokenizer_config.json, vocab.json |
RachitD15673_mistral-finetuned-7B-instruct-LLM.json | RachitD15673/mistral-finetuned-7B-instruct-LLM | mistral | {"_name_or_path": "bn22/Mistral-7B-Instruct-v0.1-sharded", "architectures": ["MistralForCausalLM"], "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 2, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 32768, "num_attention_heads": 3... | 2023-11-22 | 14 | 0 | null | transformers, safetensors, mistral, text-generation, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | bn22/Mistral-7B-Instruct-v0.1-sharded | .gitattributes, config.json, generation_config.json, model-00001-of-00003.safetensors, model-00002-of-00003.safetensors, model-00003-of-00003.safetensors, model.safetensors.index.json, special_tokens_map.json, tokenizer.json, tokenizer.model, tokenizer_config.json |
imannrhman_old_code_llama_fixer-7b.json | imannrhman/old_code_llama_fixer-7b | llama | {"_name_or_path": "codellama/CodeLlama-7b-hf", "architectures": ["LlamaForCausalLM"], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 2, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 11008, "max_position_embeddings": 16384, "num_attentio... | 2024-03-16 | 13 | 0 | null | transformers, safetensors, llama, text-generation, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us | text-generation | 0 | codellama/CodeLlama-7b-hf | .gitattributes, config.json, generation_config.json, model-00001-of-00007.safetensors, model-00002-of-00007.safetensors, model-00003-of-00007.safetensors, model-00004-of-00007.safetensors, model-00005-of-00007.safetensors, model-00006-of-00007.safetensors, model-00007-of-00007.safetensors, model.safetensors.index.json,... |
GooKSL_BioM-BERT-PubMed-PMC-Large-GAD.json | GooKSL/BioM-BERT-PubMed-PMC-Large-GAD | electra | {"_name_or_path": "GAD_hf/BioM-BERT-PubMed-PMC-Large", "architectures": ["ElectraModel"], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "embedding_size": 1024, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 1024, "initializer_range": 0.02, "intermediate_size": 4096, "layer_norm_eps"... | 2024-06-07 | 17 | 0 | null | transformers, safetensors, electra, feature-extraction, arxiv:1910.09700, endpoints_compatible, region:us | feature-extraction | 0 | GAD_hf/BioM-BERT-PubMed-PMC-Large | .gitattributes, README.md, config.json, model.safetensors |
unsloth_Qwen2.5-Coder-14B-Instruct-bnb-4bit.json | unsloth/Qwen2.5-Coder-14B-Instruct-bnb-4bit | qwen2 | {"_name_or_path": "Qwen/Qwen2.5-Coder-14B-Instruct", "architectures": ["Qwen2ForCausalLM"], "attention_dropout": 0.0, "bos_token_id": 151643, "eos_token_id": 151645, "hidden_act": "silu", "hidden_size": 5120, "initializer_range": 0.02, "intermediate_size": 13824, "max_position_embeddings": 32768, "max_window_layers": 7... | 2024-11-12 | 5,550 | 0 | null | transformers, safetensors, qwen2, text-generation, unsloth, code, qwen, qwen-coder, codeqwen, conversational, en, arxiv:2409.12186, arxiv:2407.10671, base_model:Qwen/Qwen2.5-Coder-14B-Instruct, base_model:quantized:Qwen/Qwen2.5-Coder-14B-Instruct, license:apache-2.0, autotrain_compatible, text-generation-inference, end... | text-generation | 0 | Qwen/Qwen2.5-Coder-14B-Instruct | .gitattributes, README.md, added_tokens.json, config.json, generation_config.json, merges.txt, model-00001-of-00002.safetensors, model-00002-of-00002.safetensors, model.safetensors.index.json, special_tokens_map.json, tokenizer.json, tokenizer_config.json, vocab.json |
lgrobol_xlm-r-CreoleEval_kon.json | lgrobol/xlm-r-CreoleEval_kon | xlm-roberta | {"_name_or_path": "xlm-roberta-base", "architectures": ["XLMRobertaForMaskedLM"], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "classifier_dropout": null, "eos_token_id": 2, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm... | 2023-09-18 | 5 | 0 | null | transformers, pytorch, xlm-roberta, fill-mask, autotrain_compatible, endpoints_compatible, region:us | fill-mask | 0 | xlm-roberta-base | .gitattributes, config.json, model.safetensors, pytorch_model.bin, special_tokens_map.json, tokenizer.json, tokenizer_config.json |
FreedomIntelligence_LongLLaVA-9B.json | FreedomIntelligence/LongLLaVA-9B | llava_jamba | {"architectures": ["LlavaJambaForCausalLM"], "attention_dropout": 0.0, "attn_layer_offset": 4, "attn_layer_period": 8, "auto_map": {"AutoConfig": "configuration_jamba.JambaConfig", "AutoModel": "modeling_jamba.JambaModel", "AutoModelForCausalLM": "modeling_jamba.JambaForCausalLM", "AutoModelForSequenceClassification": ... | 2024-10-11 | 799 | 4 | null | transformers, safetensors, llava_jamba, text-generation, image-text-to-text, custom_code, arxiv:2409.02889, license:mit, autotrain_compatible, endpoints_compatible, region:us | image-text-to-text | 1 | null | .gitattributes, README.md, assets/NIAH.png, assets/arch.png, assets/assets_NIAH.png, assets/assets_arch.png, assets/assets_dataset.png, assets/assets_diaresult.png, assets/assets_header.png, assets/assets_logo.png, assets/assets_result1.png, assets/assets_singleGPU.png, assets/dataset.png, assets/demo, assets/diaresult... |
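Each row's `cleaned_config` column is a (truncated in this preview) `config.json` blob from the corresponding model repository. A minimal sketch of reading such a blob once the full string is available; the short example string below is an assumption for illustration, built only from fields visible in the `lgrobol/xlm-r-CreoleEval_kon` row above:

```python
import json

# Stand-in for an untruncated `cleaned_config` value (hypothetical,
# reduced to three fields shown in the row above).
raw = (
    '{"_name_or_path": "xlm-roberta-base", '
    '"architectures": ["XLMRobertaForMaskedLM"], '
    '"hidden_size": 768}'
)

config = json.loads(raw)
print(config["_name_or_path"])     # base checkpoint the model derives from
print(config["architectures"][0])  # task-specific head class
print(config["hidden_size"])       # width of the transformer layers
```

Note that `_name_or_path` often records a local training path (e.g. `/kaggle/input/...` or `./experiments/...`) rather than a Hub repo id, so it is not always resolvable as a base model.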