modelId string | author string | last_modified timestamp[us, tz=UTC] | downloads int64 | likes int64 | library_name string | tags list | pipeline_tag string | createdAt timestamp[us, tz=UTC] | card string |
|---|---|---|---|---|---|---|---|---|---|
ihanif/whisper-tiny-minds-en | ihanif | 2023-07-01T00:30:48Z | 88 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-06-30T23:08:45Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-minds-en
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
conf... |
poojakp/output | poojakp | 2023-06-30T23:30:26Z | 57 | 0 | transformers | [
"transformers",
"pytorch",
"RefinedWebModel",
"text-generation",
"generated_from_trainer",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | 2023-06-30T23:01:15Z | ---
tags:
- generated_from_trainer
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [y... |
psymon/QLoRa-polyglot-12.8b-translate | psymon | 2023-06-30T17:53:44Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-30T15:33:36Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_ty... |
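The same `bitsandbytes` quantization settings recur across several of the peft cards in this dump. As a hedged sketch, the listed fields can be held as a plain dict and sanity-checked in Python — the field names come from the card, but the validation rules below are my own assumptions, not part of the bitsandbytes library:

```python
# The quantization config listed in the card, as a plain dict.
# (Field names from the model card; validation logic is an assumption.)
bnb_config = {
    "load_in_8bit": False,
    "load_in_4bit": True,
    "llm_int8_threshold": 6.0,
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
}

def validate_bnb_config(cfg: dict) -> None:
    # 8-bit and 4-bit loading are mutually exclusive modes.
    if cfg["load_in_8bit"] and cfg["load_in_4bit"]:
        raise ValueError("load_in_8bit and load_in_4bit cannot both be True")
    if cfg["llm_int8_threshold"] <= 0:
        raise ValueError("llm_int8_threshold must be positive")

validate_bnb_config(bnb_config)  # passes for the card's 4-bit settings
```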
chaowu/rl_course_vizdoom_health_gathering_supreme | chaowu | 2023-06-30T17:24:35Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-30T17:24:26Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_sup... |
davanstrien/autotrain-color-image-dating-55447129537 | davanstrien | 2023-06-30T15:00:05Z | 190 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"swin",
"image-classification",
"autotrain",
"vision",
"dataset:biglam/dating-historical-color-images",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-05-05T09:01:46Z | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- biglam/dating-historical-color-images
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: T... |
paorph/sentiment_analysis_amazon_echo_reviews | paorph | 2023-06-30T13:52:53Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2023-06-30T12:15:13Z | ---
license: apache-2.0
---
## Sentiment_analysis amazon_echo model
This is a Naive Bayes classifier trained on 32 thousand Amazon Echo reviews for sentiment analysis. This model is suitable for English.
Labels: 0 -> Negative; 1 -> Positive
### Model training
```
import pandas as pd
import numpy as np
import seabo... |
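The card's training snippet is cut off above. A minimal self-contained sketch of a multinomial Naive Bayes sentiment classifier in the same spirit — the toy reviews, function names, and smoothing choice here are my own illustration, not the card's actual code:

```python
import math
from collections import Counter

# Toy corpus standing in for the Amazon Echo reviews (labels: 0 negative, 1 positive).
train = [
    ("love this speaker great sound", 1),
    ("works great and easy to use", 1),
    ("terrible product stopped working", 0),
    ("bad sound and poor quality", 0),
]

def fit_nb(data):
    """Count words per class for multinomial Naive Bayes."""
    word_counts = {0: Counter(), 1: Counter()}
    class_counts = Counter()
    vocab = set()
    for text, label in data:
        words = text.split()
        word_counts[label].update(words)
        class_counts[label] += 1
        vocab.update(words)
    return word_counts, class_counts, vocab

def predict(model, text):
    """Pick the class maximizing log prior + Laplace-smoothed log likelihoods."""
    word_counts, class_counts, vocab = model
    total = sum(class_counts.values())
    scores = {}
    for label in class_counts:
        score = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

model = fit_nb(train)
print(predict(model, "great sound quality"))  # -> 1 (positive)
```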
jondurbin/airoboros-7b-gpt4-1.4.1-qlora | jondurbin | 2023-06-30T12:36:11Z | 1,427 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.4.1",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-30T11:27:44Z | ---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.4.1
---
## Overview
This is a qlora fine-tune of a 7b parameter LLaMA model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
Dataset used: https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1
The p... |
Leszekasdfff/legal-bert-swift | Leszekasdfff | 2023-06-30T12:16:46Z | 0 | 0 | null | [
"coreml",
"en",
"license:cc-by-4.0",
"region:us"
] | null | 2023-06-30T11:15:19Z | ---
license: cc-by-4.0
language:
- en
--- |
heka-ai/e5-10k | heka-ai | 2023-06-30T12:04:58Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-06-30T12:04:53Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# heka-ai/e5-10k
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or seman... |
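The card above describes mapping sentences to a 768-dimensional dense vector space; downstream uses like clustering or semantic search typically compare such vectors with cosine similarity. A minimal sketch (toy 3-d vectors standing in for real 768-d model output — an illustration, not the card's API):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-d stand-ins for 768-d sentence embeddings.
emb_cat = [0.9, 0.1, 0.0]
emb_kitten = [0.8, 0.2, 0.1]
emb_car = [0.0, 0.1, 0.9]

# Semantically close sentences should score higher than unrelated ones.
assert cosine_similarity(emb_cat, emb_kitten) > cosine_similarity(emb_cat, emb_car)
```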
ckpt/controlavideo-hed | ckpt | 2023-06-30T11:56:41Z | 4 | 0 | diffusers | [
"diffusers",
"arxiv:2305.13840",
"license:gpl-3.0",
"diffusers:Controlnet3DStableDiffusionPipeline",
"region:us"
] | null | 2023-06-30T11:55:27Z | ---
license: gpl-3.0
---
- HED control pretrained model for [control-a-video](https://arxiv.org/abs/2305.13840)
- Project page: https://controlavideo.github.io/
|
halffried/gyre_zitspp | halffried | 2023-06-30T11:54:12Z | 0 | 1 | null | [
"region:us"
] | null | 2023-06-30T11:49:49Z | # ZITS-PlusPlus models for Gyre
Models from https://github.com/ewrfcas/ZITS-PlusPlus
Distributed under the Apache-2.0 license
Changes:
- Converted to safetensors
- lsm_hawp config converted to yaml
|
anonymousparrot01/SubmissionModel | anonymousparrot01 | 2023-06-30T09:19:28Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"business",
"finance",
"en",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-30T09:18:41Z | ---
language: en
tags:
- bert
- business
- finance
license: cc-by-4.0
datasets:
- CompanyWeb
- MD&A
- S2ORC
---
# BusinessBERT
An industry-sensitive language model for business applications pretrained on business communication corpora. The model incorporates industry classification (IC) as a pretraining objective bes... |
dhorbach/hfc_rl_course_vizdoom_health_gathering_supreme | dhorbach | 2023-06-30T09:08:29Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-01T13:18:13Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_sup... |
TheBloke/UltraLM-13B-fp16 | TheBloke | 2023-06-30T08:49:01Z | 1,549 | 4 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:stingning/ultrachat",
"arxiv:2305.14233",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-06-29T21:21:38Z | ---
inference: false
license: other
datasets:
- stingning/ultrachat
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between... |
Juardo/bsc_ai_thesis_torgo_model-1 | Juardo | 2023-06-30T08:15:45Z | 160 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-06-30T00:27:24Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bsc_ai_thesis_torgo_model-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complet... |
notillus47/test-1 | notillus47 | 2023-06-30T06:47:54Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-30T06:41:55Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_ty... |
YIMMYCRUZ/Jun30_03-18-57_2565635c8aeb | YIMMYCRUZ | 2023-06-30T03:24:30Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-30T03:21:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: Jun30_03-18-57_2565635c8aeb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation... |
alitair/dqn-SpaceInvadersNoFrameskip-v4 | alitair | 2023-06-29T21:02:18Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-29T21:01:33Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
... |
jmgonzal/gpt2-wikitext2 | jmgonzal | 2023-06-29T18:51:54Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-28T19:19:01Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model ... |
jcnecio/rl_course_vizdoom_health_gathering_supreme | jcnecio | 2023-06-29T13:55:20Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-29T13:55:15Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_sup... |
p120/paul | p120 | 2023-06-29T08:22:40Z | 30 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-06-29T08:19:03Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### paul Dreambooth model trained by p120 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab... |
johacbeg/distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos | johacbeg | 2023-06-29T06:13:07Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-28T15:50:29Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofrea... |
taeminlee/kogpt2 | taeminlee | 2023-06-29T05:17:27Z | 460 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | # KoGPT2-Transformers
KoGPT2 on Huggingface Transformers
### KoGPT2-Transformers
- [KoGPT2 (ver 1.0), released by SKT-AI](https://github.com/SKT-AI/KoGPT2), adapted for use with [Transformers](https://github.com/huggingface/transformers).
- **SKT-AI has released KoGPT2 2.0: https://huggingface.co/skt/kogpt2-base-v2/**
### Demo... |
chaowu/a2c-AntBulletEnv-v0 | chaowu | 2023-06-29T03:45:43Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-29T00:50:29Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
... |
Ahmed007/Dr.Smart_v2 | Ahmed007 | 2023-06-29T03:29:37Z | 63 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"vit",
"image-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-29T02:37:11Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Ahmed007/Dr.Smart_v2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Ahmed007/Dr.S... |
Mozzipa/qlora-koalpaca-polyglot-12.8b-50step | Mozzipa | 2023-06-28T23:43:40Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-28T23:43:37Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_ty... |
Alyss97/bert-base-multilingual-uncased-sentiment-finetuned-MeIA-AnalisisDeSentimientos | Alyss97 | 2023-06-28T23:17:00Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-28T16:34:21Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-base-multilingual-uncased-sentiment-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread... |
rvrtdta/roberta-base-bne-finetuned-MeIA-AnalisisDeSentimientos | rvrtdta | 2023-06-28T22:22:16Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-26T18:21:10Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: roberta-base-bne-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it,... |
klamerE/ppo-Huggy | klamerE | 2023-06-28T22:11:12Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-06-28T22:11:07Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (... |
NasimB/bert-dp-4 | NasimB | 2023-06-28T21:01:05Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:generator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-06-26T01:24:27Z | ---
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: bert-dp-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-dp-4
This model i... |
ahishamm/vit-base-isic-patch-16 | ahishamm | 2023-06-28T19:41:07Z | 191 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-28T19:35:26Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-base-isic-patch-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably pro... |
trevorj/ppo-lunarlander1 | trevorj | 2023-06-28T17:41:35Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-28T17:41:10Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
... |
paust/pko-t5-large | paust | 2023-06-28T17:03:42Z | 751 | 20 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"ko",
"arxiv:2105.09680",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-05-16T11:59:52Z | ---
language: ko
license: cc-by-4.0
---
# pko-t5-large
[Source Code](https://github.com/paust-team/pko-t5)
pko-t5 is a [t5 v1.1 model](https://github.com/google-research/text-to-text-transfer-transformer/blob/84f8bcc14b5f2c03de51bd3587609ba8f6bbd1cd/released_checkpoints.md) trained exclusively on Korean data.
To tokenize Korean, senten... |
sharpbai/open_llama_13b | sharpbai | 2023-06-28T16:14:25Z | 25 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:togethercomputer/RedPajama-Data-1T",
"arxiv:2302.13971",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-22T05:07:23Z | ---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
---
# open_llama_13b
*The weight file is split into chunks with a size of 650MB for convenient and fast parallel downloads*
A 650MB split weight version of [openlm-research/open_llama_13b](https://huggingface.co/openlm-research/open_llama_13b)
T... |
YakovElm/IntelDAOS_15_BERT_Over_Sampling | YakovElm | 2023-06-28T15:55:27Z | 62 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-28T15:54:45Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS_15_BERT_Over_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# In... |
AbhilashGanji/distilbert-base-uncased-finetuned-squad-d5716d28 | AbhilashGanji | 2023-06-28T15:54:44Z | 0 | 0 | null | [
"pytorch",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"license:apache-2.0",
"region:us"
] | question-answering | 2023-06-28T15:50:07Z | ---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of... |
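The card above describes a second step of distillation. The core of knowledge distillation is matching the student to temperature-softened teacher outputs; a minimal sketch of that loss (toy logits and names are my own illustration, not the card's recipe):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T gives softer targets."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between teacher soft targets and student predictions."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [3.0, 1.0, 0.2]
student_good = [2.8, 1.1, 0.3]   # roughly matches the teacher
student_bad = [0.1, 0.2, 3.0]    # disagrees with the teacher
assert distillation_loss(teacher, student_good) < distillation_loss(teacher, student_bad)
```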
amm297/my_awesome_peft_model | amm297 | 2023-06-28T14:48:41Z | 24 | 0 | peft | [
"peft",
"RefinedWebModel",
"generated_from_trainer",
"text-generation",
"custom_code",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-28T10:55:56Z | ---
license: other
library_name: peft
pipeline_tag: text-generation
tags:
- generated_from_trainer
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_f... |
RayTracerGC/RVCModels | RayTracerGC | 2023-06-28T11:46:11Z | 0 | 1 | null | [
"license:openrail",
"region:us"
] | null | 2023-06-27T15:56:50Z | ---
license: openrail
---
RVC models:
- Hal Jordan Green Lantern (From Injustice 2) (RVC v2) (76 Epochs)
- Trained on `mangio-crepe` using 6 minutes of audio
- Batch size: 16
- Crepe hop length: 64
- File: GreenLantern.zip
- Wonder Woman (From Injustice 2) (RVC v2) (150 Epochs)
- Trained on `mangio-crepe` u... |
amittian/setfit_ds_version_0_0_1 | amittian | 2023-06-28T10:23:28Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-06-28T10:23:07Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# amittian/setfit_ds_version_0_0_1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot... |
joydeeph/ppo-LunarLander-v2 | joydeeph | 2023-06-28T08:42:25Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-28T08:41:59Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
... |
yihyeji/hanbok_q | yihyeji | 2023-06-28T04:33:59Z | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-06-28T04:06:26Z |
---
license: creativeml-openrail-m
base_model: /workspace/data3/model_checkpoints/DIFFUSION_DB/Diffusion_models/diffusers/v15/chilloutmix_NiPrunedFp16Fix/
instance_prompt: a photo of 1 girl wearing hanbok_q
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
... |
eluzhnica/mpt-30b-peft-compatible | eluzhnica | 2023-06-27T18:08:52Z | 11 | 8 | transformers | [
"transformers",
"pytorch",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"StreamingDatasets",
"custom_code",
"dataset:allenai/c4",
"dataset:mc4",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:bigcode/the-stack-dedup",
"dataset:allenai/s2orc",
"arxiv:2108.12409... | text-generation | 2023-06-26T20:51:20Z | ---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
- StreamingDatasets
datasets:
- allenai/c4
- mc4
- togethercomputer/RedPajama-Data-1T
- bigcode/the-stack-dedup
- allenai/s2orc
inference: false
---
# MPT-30B
This is MPT-30B, but with added support for finetuning with peft (tested with qlora). It is ... |
maidh/ppo-LunarLander-v2-unit8-v1 | maidh | 2023-06-27T16:41:53Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-27T16:40:37Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metr... |
kesslya1/F1_cars | kesslya1 | 2023-06-27T16:18:10Z | 7 | 0 | keras | [
"keras",
"tf-keras",
"image-classification",
"region:us"
] | image-classification | 2023-06-16T07:08:00Z | ---
metrics:
- accuracy
library_name: keras
pipeline_tag: image-classification
--- |
gongliyu/fine-tuned-t5-small | gongliyu | 2023-06-27T15:44:16Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-06-23T19:00:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: fine-tuned-t5-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove t... |
sdocio/es_trf_ner_cds_xlm-large | sdocio | 2023-06-27T14:00:26Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"PyTorch",
"Transformers",
"Token Classification",
"xlm-roberta-large",
"es",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-06-27T13:45:59Z | ---
language: es
license: gpl-3.0
tags:
- PyTorch
- Transformers
- Token Classification
- xlm-roberta
- xlm-roberta-large
widget:
- text: "Fue antes de llegar a Sigüeiro, en el Camino de Santiago."
- text: "Si te metes en el Franco desde la Alameda, vas hacia la Catedral."
- text: "Y allí precisamente es Santiago el pa... |
aidn/squadBert3Epochs | aidn | 2023-06-27T11:39:42Z | 63 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-06-27T10:47:14Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: aidn/squadBert3Epochs
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# aidn/squadBe... |
maidh/poca-SoccerTwos | maidh | 2023-06-27T10:53:06Z | 17 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2023-06-27T10:52:53Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unit... |
haddadalwi/multi-qa-mpnet-base-dot-v1-finetuned-squad2-all | haddadalwi | 2023-06-27T07:29:46Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mpnet",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-06-27T07:16:35Z | ---
tags:
- generated_from_trainer
model-index:
- name: multi-qa-mpnet-base-dot-v1-finetuned-squad2-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multi-q... |
limcheekin/mpt-7b-storywriter-ct2 | limcheekin | 2023-06-27T07:04:45Z | 4 | 0 | transformers | [
"transformers",
"ctranslate2",
"mpt-7b-storywriter",
"quantization",
"int8",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-01T09:09:40Z | ---
license: apache-2.0
language:
- en
tags:
- ctranslate2
- mpt-7b-storywriter
- quantization
- int8
---
# Model Card for MPT-7B-StoryWriter-65k+ Q8
The model is an int8-quantized version of [mosaicml/mpt-7b-storywriter](https://huggingface.co/mosaicml/mpt-7b-storywriter).
## Model Deta... |
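The card above describes int8 quantization of the weights. As a hedged illustration of the underlying idea — symmetric per-tensor quantization is an assumption here; CTranslate2's actual scheme may differ:

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats to [-127, 127] with one scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [qi * scale for qi in q]

w = [0.5, -1.27, 0.0, 1.0]
q, scale = quantize_int8(w)
restored = dequantize_int8(q, scale)
# Round-trip error is bounded by half a quantization step (scale / 2).
assert all(abs(a - b) <= scale / 2 for a, b in zip(w, restored))
```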
AlgorithmicResearchGroup/flan-t5-xxl-arxiv-math-closed-qa | AlgorithmicResearchGroup | 2023-06-27T04:39:54Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv",
"summarization",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2023-06-24T16:45:00Z | ---
license: apache-2.0
language:
- en
pipeline_tag: summarization
widget:
- text: What is the peak phase of T-eV?
example_title: Question Answering
tags:
- arxiv
---
# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Citation](#citation)
# TL;DR
This... |
microsoft/resnet-152 | microsoft | 2023-06-26T19:49:50Z | 19,045 | 12 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"resnet",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:1512.03385",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-16T14:54:22Z | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
---
# ResNet-152 v1.5
ResNet model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by He et al.
Disclaimer: The team... |
kalyaniAI/autotrain-autotrain-69874137966 | kalyaniAI | 2023-06-26T12:08:29Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain",
"summarization",
"en",
"dataset:kalyaniAI/autotrain-data-autotrain",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2023-06-26T12:07:46Z | ---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- kalyaniAI/autotrain-data-autotrain
co2_eq_emissions:
emissions: 0.025148621653341533
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 69874137966
- CO2 Emissions (in grams): 0.0251
## Va... |
abelko/abel-alpaca | abelko | 2023-06-26T08:39:42Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-26T08:39:41Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_ty... |
RashidNLP/NER-Deberta | RashidNLP | 2023-06-26T07:36:07Z | 199 | 6 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"deberta-v2",
"token-classification",
"deberta-v3",
"en",
"dataset:DFKI-SLT/few-nerd",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-05-19T18:28:34Z | ---
language:
- en
metrics:
- accuracy
- f1
library_name: transformers
pipeline_tag: token-classification
tags:
- deberta-v3
datasets:
- DFKI-SLT/few-nerd
license: mit
---
## Deberta for Named Entity Recognition
I used a pretrained DeBERTa-v3-base and fine-tuned it on Few-NERD, a NER dataset that contains over 180k ex... |
Retrial9842/ppo-cleanrl-LunarLander-v2 | Retrial9842 | 2023-06-26T05:18:56Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-26T04:26:01Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metr... |
cagmfr/q-Taxi-v3 | cagmfr | 2023-06-25T15:35:26Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-25T15:25:40Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71... |
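Q-learning cards like this one report mean reward over evaluation episodes; the custom implementation behind them is usually the tabular Bellman update. A toy sketch (all numbers are made-up illustrations, not this checkpoint's actual hyperparameters):

```python
# One tabular Q-learning step: move Q(s, a) toward r + gamma * max_a' Q(s', a').
# alpha is the learning rate, gamma the discount factor.
def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    best_next = max(q[next_state])
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
    return q[state][action]

# Two states, two actions, all Q-values start at zero.
q = [[0.0, 0.0], [0.0, 0.0]]
value = q_update(q, state=0, action=1, reward=1.0, next_state=1)
# With zero initial values, the update is alpha * reward = 0.5.
```

Repeated over many episodes with an epsilon-greedy policy, this update is what produces the mean-reward figure reported in the card.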
MonkDalma/xlm-roberta-base-finetuned-panx-en | MonkDalma | 2023-06-24T23:39:45Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-06-24T23:37:22Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.en
split: validatio... |
digiplay/ChillyMix_v1 | digiplay | 2023-06-24T21:19:54Z | 291 | 2 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-06-23T16:14:13Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/58772?modelVersionId=63220
Original Author's DEMO image :
... |
hassansoliman/falcon-40b-qlora-utterance-adaptations_v5 | hassansoliman | 2023-06-21T13:03:56Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-21T13:03:09Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_ty... |
moka-ai/m3e-large | moka-ai | 2023-06-21T11:25:23Z | 2,209 | 205 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"embedding",
"text-embedding",
"zh",
"en",
"region:us"
] | null | 2023-06-21T09:07:12Z | ---
language:
- zh
- en
tags:
- embedding
- text-embedding
library_name: sentence-transformers
---
# M3E Models
[m3e-small](https://huggingface.co/moka-ai/m3e-small) | [m3e-base](https://huggingface.co/moka-ai/m3e-base) | [m3e-large](https://huggingface.co/moka-ai/m3e-large)
M3E is short for Moka Massive Mixed Embedding
-... |
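Embedding models such as M3E map text to dense vectors that are typically compared with cosine similarity. A minimal sketch of that comparison, with tiny made-up placeholder vectors rather than real M3E outputs (which are much higher-dimensional):

```python
import math

# Cosine similarity: dot product of the vectors divided by the product
# of their norms; 1.0 means identical direction, 0.0 means orthogonal.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

v1 = [1.0, 0.0, 1.0]  # placeholder "embedding" of sentence A
v2 = [1.0, 0.0, 1.0]  # placeholder "embedding" of a paraphrase of A
v3 = [0.0, 1.0, 0.0]  # placeholder "embedding" of an unrelated sentence
```

In real use the vectors would come from `SentenceTransformer("moka-ai/m3e-large").encode(...)`; only the comparison step is shown here so the sketch stays self-contained.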
wordcab/whisper-large-int8-he | wordcab | 2023-06-21T08:47:20Z | 1 | 0 | transformers | [
"transformers",
"he",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-20T15:35:55Z | ---
license: apache-2.0
language:
- he
---
This is a ctranslate2 `int8` version of the [Shiry/whisper-large-v2-he](https://huggingface.co/Shiry/whisper-large-v2-he) model. |
ThuTrang/distilbert-base-uncased-finetuned-imdb | ThuTrang | 2023-06-20T20:38:14Z | 124 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-06-19T23:36:07Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove ... |
allenai/open-instruct-baize-13b | allenai | 2023-06-20T17:44:30Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"arxiv:2306.04751",
"arxiv:2302.13971",
"arxiv:2304.01196",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-07T17:24:03Z | ---
language:
- en
---
# Open-Instruct Baize 13B
This model is a 13B LLaMA model fine-tuned on the Baize dataset. *Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://ar... |
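A "model diff" release like this one ships only the parameter deltas against the base LLaMA weights; recovering the finetuned model is element-wise addition of diff to base. A toy sketch with made-up numbers (the repo's actual recovery script operates on full weight tensors, not short lists):

```python
# Recover finetuned weights from a base model and a released diff:
# finetuned[i] = base[i] + diff[i], applied parameter-by-parameter.
def apply_diff(base, diff):
    assert len(base) == len(diff), "diff must match base parameter count"
    return [b + d for b, d in zip(base, diff)]

base_weights = [0.10, -0.20, 0.30]  # hypothetical base LLaMA parameters
diff_weights = [0.01, 0.02, -0.03]  # hypothetical released deltas
finetuned = apply_diff(base_weights, diff_weights)
```

Releasing diffs rather than full weights lets the authors respect the base model's license while still distributing the finetune to anyone who already holds the LLaMA weights.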
joncam14/dqn-SpaceInvadersNoFrameskip-v4 | joncam14 | 2023-06-20T13:11:20Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-20T13:10:46Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
... |
MarketingHHM/autotrain-hhmleoom-68141137245 | MarketingHHM | 2023-06-20T09:37:32Z | 98 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"led",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:MarketingHHM/autotrain-data-hhmleoom",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2023-06-20T02:35:25Z | ---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- MarketingHHM/autotrain-data-hhmleoom
co2_eq_emissions:
emissions: 292.1583830151034
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 68141137245
- CO2 Emissions (in grams): 292.1584
## ... |
gokuls/hbertv2-Massive-intent | gokuls | 2023-06-20T09:05:06Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-20T08:56:06Z | ---
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: hbertv2-Massive-intent
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: validation
args: en-US
... |
gurugaurav/lilt-en-funsd | gurugaurav | 2023-06-19T06:08:05Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"lilt",
"token-classification",
"generated_from_trainer",
"dataset:funsd-layoutlmv3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-06-19T05:26:52Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- funsd-layoutlmv3
model-index:
- name: lilt-en-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#... |
arunptp/taxi-v3-00 | arunptp | 2023-06-19T05:37:04Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-19T05:36:56Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3-00
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.7... |
crlandsc/bsrnn-vocals | crlandsc | 2023-06-16T20:25:39Z | 0 | 2 | null | [
"audio source separation",
"music demixing",
"band-split recurrent neural network",
"bsrnn",
"spectrogram",
"vocals",
"region:us"
] | null | 2023-06-16T20:18:04Z | ---
tags:
- audio source separation
- music demixing
- band-split recurrent neural network
- bsrnn
- spectrogram
- vocals
---
# Model Card for bsrnn-vocals
Vocals model for [Music-Demixing-with-Band-Split-RNN](https://github.com/crlandsc/Music-Demixing-with-Band-Split-RNN). |
haytin69/archanafriend | haytin69 | 2023-06-16T13:17:30Z | 34 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-06-16T13:11:34Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### ArchanaFriend Dreambooth model trained by haytin69 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Cola... |
ugiugi/inisw08-RoBERT-mlm-adamw_torch_bs16 | ugiugi | 2023-06-15T19:06:57Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-06-15T15:32:28Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: inisw08-RoBERT-mlm-adamw_torch_bs16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this co... |
peteozegov/a2c-PandaReachDense-v2 | peteozegov | 2023-06-15T04:26:46Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-05T03:30:25Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReach... |
irfanamal/bert-base-uncased-classification-chain-3 | irfanamal | 2023-06-14T10:13:44Z | 31 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-13T15:47:52Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-classification-chain-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then re... |
roy23roy/my_poem_model | roy23roy | 2023-06-14T01:30:00Z | 116 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-13T23:31:15Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: my_poem_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_poem_model
This model is... |
Dylan1999/bert-squad-mrc | Dylan1999 | 2023-06-13T14:11:47Z | 105 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-06-13T14:01:36Z | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-squad-mrc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
This checkpoint is a BRE... |
BlueAvenir/sti_digital_affinity_V_0_1 | BlueAvenir | 2023-06-13T11:19:53Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-06-13T11:19:29Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like cluste... |
keysonya/Reinforce-1 | keysonya | 2023-06-13T10:44:27Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-13T10:43:57Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_rewa... |
Zhejian/llama-7b | Zhejian | 2023-06-12T00:43:20Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-12T00:13:33Z | ---
license: other
---
LLaMA-7B converted to work with Transformers/HuggingFace. This is under a special license; please see the LICENSE file for details.
---
license: other
---
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained betwee... |
Seltion/embeddings | Seltion | 2023-06-10T19:08:51Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-06T01:23:17Z | ---
license: creativeml-openrail-m
---
|
YakovElm/MariaDB15Classic_MSE | YakovElm | 2023-06-10T01:26:22Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-10T01:25:47Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB15Classic_MSE
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaDB15Clas... |
YakovElm/Hyperledger5SetFitModel_Train_balance_ratio_1 | YakovElm | 2023-06-09T12:50:25Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-06-09T12:49:52Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YakovElm/Hyperledger5SetFitModel_Train_balance_ratio_1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using... |
TheBloke/GPT4All-13B-snoozy-GGML | TheBloke | 2023-06-07T22:48:34Z | 0 | 48 | null | [
"license:other",
"region:us"
] | null | 2023-05-05T15:24:23Z | ---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style=... |