# Dataset preview

Column schema:

- `id`: string, 9–104 chars
- `author`: string, 3–36 chars
- `task_category`: 32 classes
- `tags`: sequence, 1–4,050 items
- `created_time`: int64 (Unix epoch milliseconds), ~1,646B to ~1,742B
- `last_modified`: timestamp[s], 2021-02-13 00:06:56 to 2025-03-18 09:30:19
- `downloads`: int64, 0–15.6M
- `likes`: int64, 0–4.86k
- `README`: string, 44–1.01M chars
- `matched_bigbio_names`: sequence, 1–8 items
- `is_bionlp`: 3 classes
### Baiming123/Calcu_Disease_Similarity
- **author:** Baiming123
- **task_category:** sentence-similarity
- **tags:** sentence-transformers, pytorch, bert, sentence-similarity, dataset:Baiming123/MeSHDS, base_model:sentence-transformers/multi-qa-MiniLM-L6-cos-v1, base_model:finetune:sentence-transformers/multi-qa-MiniLM-L6-cos-v1, doi:10.57967/hf/3108, autotrain_compatible, text-embeddings-infe...
- **created_time:** 1,726,847,893,000
- **last_modified:** 2024-12-14T10:10:29
- **downloads:** 0
- **likes:** 3
- **matched_bigbio_names:** MIRNA
- **is_bionlp:** BioNLP

README excerpt:

> ---
> base_model:
> - sentence-transformers/multi-qa-MiniLM-L6-cos-v1
> datasets:
> - Baiming123/MeSHDS
> pipeline_tag: sentence-similarity
> tags:
> - sentence-transformers
> - sentence-similarity
> ---
> # Model Description
> This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensio...
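The model above embeds sentences into 384-dimensional vectors and scores pairs by cosine similarity. As a minimal stand-in for that scoring step (toy 4-dimensional vectors instead of real embeddings; loading the actual model would require the `sentence-transformers` package):

```python
import math

def cosine_similarity(u, v):
    # Cosine similarity between two embedding vectors, the score a
    # sentence-similarity model assigns to a sentence pair.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 4-dimensional stand-ins for the model's 384-dimensional embeddings.
emb_a = [0.1, 0.3, 0.5, 0.7]
emb_b = [0.1, 0.3, 0.5, 0.7]
emb_c = [0.7, -0.5, 0.3, -0.1]

print(cosine_similarity(emb_a, emb_b))  # ≈ 1.0 for identical vectors
print(cosine_similarity(emb_a, emb_c))  # ≈ 0.0 for orthogonal vectors
```

The same computation is what `sentence-transformers` utilities perform under the hood once the two texts have been encoded.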
### johnsnowlabs/JSL-MedMNX-7B-SFT
- **author:** johnsnowlabs
- **task_category:** text-generation
- **tags:** transformers, safetensors, mistral, text-generation, reward model, RLHF, medical, conversational, en, license:cc-by-nc-nd-4.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
- **created_time:** 1,713,245,240,000
- **last_modified:** 2024-04-18T19:25:47
- **downloads:** 2,926
- **likes:** 3
- **matched_bigbio_names:** MEDQA, PUBMEDQA
- **is_bionlp:** BioNLP

README excerpt:

> ---
> language:
> - en
> library_name: transformers
> license: cc-by-nc-nd-4.0
> tags:
> - reward model
> - RLHF
> - medical
> ---
> # JSL-MedMNX-7B-SFT
> [<img src="https://repository-images.githubusercontent.com/104670986/2e728700-ace4-11ea-9cfc-f3e060b25ddf">](http://www.johnsnowlabs.com)
> JSL-MedMNX-7B-SFT is a 7 Billion parameter mod...
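Across all rows, `created_time` is stored as Unix epoch milliseconds while `last_modified` is a human-readable timestamp. A small standard-library sketch of converting the former into the latter, using the value from the record above:

```python
from datetime import datetime, timezone

# created_time values in this dataset are Unix epoch milliseconds,
# e.g. 1,713,245,240,000 for the record above.
created_ms = 1713245240000
created = datetime.fromtimestamp(created_ms / 1000, tz=timezone.utc)
print(created.isoformat())  # 2024-04-16T05:27:20+00:00
```

Note the millisecond-to-second division: passing the raw value to `fromtimestamp` would raise an overflow or produce a date far in the future.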
### RichardErkhov/HPAI-BSC_-_Llama3-Aloe-8B-Alpha-gguf
- **author:** RichardErkhov
- **task_category:** (none)
- **tags:** gguf, arxiv:2405.01886, endpoints_compatible, region:us, conversational
- **created_time:** 1,730,286,893,000
- **last_modified:** 2024-10-30T15:06:18
- **downloads:** 75
- **likes:** 0
- **matched_bigbio_names:** MEDQA, PUBMEDQA
- **is_bionlp:** BioNLP

README excerpt:

> ---
> {}
> ---
> Quantization made by Richard Erkhov.
> [Github](https://github.com/RichardErkhov)
> [Discord](https://discord.gg/pvy7H8DZMG)
> [Request more models](https://github.com/RichardErkhov/quant_request)
> Llama3-Aloe-8B-Alpha - GGUF
> - Model creator: https://huggingface.co/HPAI-BSC/
> - Original model: https://huggingfa...
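GGUF files like the one above hold quantized weights. The real GGUF block formats (Q4_K, Q5_0, and so on) are considerably more involved; the toy symmetric 8-bit scheme below only illustrates the basic idea of storing small integer codes plus a per-block scale:

```python
def quantize_block(values, bits=8):
    # Toy symmetric per-block quantization: integer codes plus one float scale.
    qmax = (1 << (bits - 1)) - 1              # 127 for 8-bit
    peak = max(abs(v) for v in values)
    scale = peak / qmax if peak else 1.0      # avoid div-by-zero on all-zero blocks
    codes = [round(v / scale) for v in values]
    return codes, scale

def dequantize_block(codes, scale):
    # Reconstruct approximate floats from codes and scale.
    return [c * scale for c in codes]

weights = [0.25, -1.0, 0.5, 0.125]
codes, scale = quantize_block(weights)
restored = dequantize_block(codes, scale)
```

Each restored value differs from the original by at most half a quantization step, which is why low-bit quantization trades a small accuracy loss for a large size reduction.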
### Rodrigo1771/bsc-bio-ehr-es-symptemist-word2vec-85-ner
- **author:** Rodrigo1771
- **task_category:** token-classification
- **tags:** transformers, tensorboard, safetensors, roberta, token-classification, generated_from_trainer, dataset:Rodrigo1771/symptemist-85-ner, base_model:PlanTL-GOB-ES/bsc-bio-ehr-es, base_model:finetune:PlanTL-GOB-ES/bsc-bio-ehr-es, license:apache-2.0, model-index, autotrain_com...
- **created_time:** 1,725,476,428,000
- **last_modified:** 2024-09-04T19:11:15
- **downloads:** 13
- **likes:** 0
- **matched_bigbio_names:** SYMPTEMIST
- **is_bionlp:** BioNLP

README excerpt:

> ---
> base_model: PlanTL-GOB-ES/bsc-bio-ehr-es
> datasets:
> - Rodrigo1771/symptemist-85-ner
> library_name: transformers
> license: apache-2.0
> metrics:
> - precision
> - recall
> - f1
> - accuracy
> tags:
> - token-classification
> - generated_from_trainer
> model-index:
> - name: output
>   results:
>   - task:
>       type: token-classification
>       ...
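Token-classification models such as this one emit per-token BIO labels that are usually collapsed into entity spans before evaluation. A generic sketch of that step (the Spanish tokens and the `SINTOMA` label are illustrative, not this model's actual output):

```python
def bio_to_spans(tokens, labels):
    # Collect B-/I- tagged runs of tokens into (entity_type, text) spans.
    spans, current, current_type = [], [], None
    for token, label in zip(tokens, labels):
        if label.startswith("B-"):
            if current:
                spans.append((current_type, " ".join(current)))
            current, current_type = [token], label[2:]
        elif label.startswith("I-") and current and label[2:] == current_type:
            current.append(token)
        else:  # "O" or an inconsistent I- tag closes the open span
            if current:
                spans.append((current_type, " ".join(current)))
            current, current_type = [], None
    if current:
        spans.append((current_type, " ".join(current)))
    return spans

tokens = ["dolor", "de", "cabeza", "y", "fiebre"]
labels = ["B-SINTOMA", "I-SINTOMA", "I-SINTOMA", "O", "B-SINTOMA"]
print(bio_to_spans(tokens, labels))
# [('SINTOMA', 'dolor de cabeza'), ('SINTOMA', 'fiebre')]
```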
### kunkunhu/craft_mol
- **author:** kunkunhu
- **task_category:** (none)
- **tags:** region:us
- **created_time:** 1,737,819,517,000
- **last_modified:** 2025-01-26T09:08:28
- **downloads:** 0
- **likes:** 0
- **matched_bigbio_names:** CRAFT
- **is_bionlp:** Non_BioNLP

README excerpt:

> ---
> {}
> ---
> # CRAFT
> CRAFT: Consistent Representational Fusion of Three Molecular Modalities
### jiey2/DISC-MedLLM
- **author:** jiey2
- **task_category:** text-generation
- **tags:** transformers, pytorch, baichuan, text-generation, medical, custom_code, zh, dataset:Flmc/DISC-Med-SFT, arxiv:2308.14346, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
- **created_time:** 1,699,094,632,000
- **last_modified:** 2023-11-04T10:48:48
- **downloads:** 16
- **likes:** 1
- **matched_bigbio_names:** MEDDIALOG
- **is_bionlp:** BioNLP

README excerpt:

> ---
> datasets:
> - Flmc/DISC-Med-SFT
> language:
> - zh
> license: apache-2.0
> tags:
> - medical
> ---
> This repository contains the DISC-MedLLM, version of Baichuan-13b-base as the base model.
> **Please note that due to the ongoing development of the project, the model weights in this repository may differ from those in our currentl...
### ManoloPueblo/LLM_MERGE_CC4
- **author:** ManoloPueblo
- **task_category:** (none)
- **tags:** safetensors, mistral, merge, mergekit, lazymergekit, llm-merge-cc4, OpenPipe/mistral-ft-optimized-1218, mlabonne/NeuralHermes-2.5-Mistral-7B, license:apache-2.0, region:us
- **created_time:** 1,731,246,930,000
- **last_modified:** 2024-11-10T14:01:19
- **downloads:** 6
- **likes:** 1
- **matched_bigbio_names:** CAS
- **is_bionlp:** Non_BioNLP

README excerpt (translated from French):

> ---
> license: apache-2.0
> tags:
> - merge
> - mergekit
> - lazymergekit
> - llm-merge-cc4
> - OpenPipe/mistral-ft-optimized-1218
> - mlabonne/NeuralHermes-2.5-Mistral-7B
> ---
> # LLM_MERGE_CC4
> LLM_MERGE_CC4 is a merge of the following models, created by ManoloPueblo using [mergekit](https://github.com/cg123/mergekit):
> * [OpenPipe/...
### razent/SciFive-large-Pubmed_PMC-MedNLI
- **author:** razent
- **task_category:** text2text-generation
- **tags:** transformers, pytorch, tf, t5, text2text-generation, mednli, en, dataset:pubmed, dataset:pmc/open_access, arxiv:2106.03598, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
- **created_time:** 1,647,797,073,000
- **last_modified:** 2022-03-22T04:05:21
- **downloads:** 1,302
- **likes:** 2
- **matched_bigbio_names:** MEDNLI
- **is_bionlp:** BioNLP

README excerpt:

> ---
> datasets:
> - pubmed
> - pmc/open_access
> language:
> - en
> tags:
> - text2text-generation
> - mednli
> widget:
> - text: 'mednli: sentence1: In the ED, initial VS revealed T 98.9, HR 73, BP 121/90, RR 15, O2 sat 98% on RA. sentence2: The patient is hemodynamically stable'
> ---
> # SciFive Pubmed+PMC Large on MedNLI
> ## Introduc...
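The widget example above shows SciFive's seq2seq input convention: premise and hypothesis are packed into a single prefixed string that the T5-style model maps to an entailment label. A sketch of that formatting step (the helper name is illustrative):

```python
def mednli_prompt(premise, hypothesis):
    # Build the single-string seq2seq input shown in the model's widget:
    # "mednli: sentence1: <premise> sentence2: <hypothesis>"
    return f"mednli: sentence1: {premise} sentence2: {hypothesis}"

prompt = mednli_prompt(
    "In the ED, initial VS revealed T 98.9, HR 73, BP 121/90, RR 15, O2 sat 98% on RA.",
    "The patient is hemodynamically stable",
)
print(prompt)
```

The string would then be tokenized and passed to the model's `generate` method like any other text2text input.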
### adipanda/makima-simpletuner-lora-2
- **author:** adipanda
- **task_category:** text-to-image
- **tags:** diffusers, flux, flux-diffusers, text-to-image, simpletuner, safe-for-work, lora, template:sd-lora, lycoris, base_model:black-forest-labs/FLUX.1-dev, base_model:adapter:black-forest-labs/FLUX.1-dev, license:other, region:us
- **created_time:** 1,728,694,813,000
- **last_modified:** 2024-10-13T19:26:05
- **downloads:** 16
- **likes:** 0
- **matched_bigbio_names:** BEAR
- **is_bionlp:** Non_BioNLP

README excerpt:

> ---
> base_model: black-forest-labs/FLUX.1-dev
> license: other
> tags:
> - flux
> - flux-diffusers
> - text-to-image
> - diffusers
> - simpletuner
> - safe-for-work
> - lora
> - template:sd-lora
> - lycoris
> inference: true
> widget:
> - text: unconditional (blank prompt)
>   parameters:
>     negative_prompt: blurry, cropped, ugly
>   output:
>     url:...
### sarahmiller137/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ft-ncbi-disease
- **author:** sarahmiller137
- **task_category:** token-classification
- **tags:** transformers, pytorch, safetensors, bert, token-classification, named-entity-recognition, en, dataset:ncbi_disease, license:cc, autotrain_compatible, endpoints_compatible, region:us
- **created_time:** 1,661,184,360,000
- **last_modified:** 2023-03-23T15:57:02
- **downloads:** 24
- **likes:** 0
- **matched_bigbio_names:** NCBI DISEASE
- **is_bionlp:** BioNLP

README excerpt:

> ---
> datasets: ncbi_disease
> language: en
> license: cc
> metrics:
> - precision
> - recall
> - f1
> - accuracy
> tags:
> - named-entity-recognition
> - token-classification
> task:
> - named-entity-recognition
> - token-classification
> widget:
> - text: ' The risk of cancer, especially lymphoid neoplasias, is substantially elevated in A-T pat...
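Several rows, including this one, list precision, recall, f1, and accuracy as their reported metrics. For reference, F1 is the harmonic mean of precision and recall; a minimal sketch:

```python
def f1_score(precision, recall):
    # Harmonic mean of precision and recall; 0.0 when both are zero.
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f1_score(0.8, 0.6))  # ≈ 0.686
```

Because the harmonic mean is dominated by the smaller operand, F1 penalizes models that trade one of the two metrics away entirely.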
### tsavage68/MedQA_L3_1000steps_1e7rate_03beta_CSFTDPO
- **author:** tsavage68
- **task_category:** text-generation
- **tags:** transformers, safetensors, llama, text-generation, trl, dpo, generated_from_trainer, conversational, base_model:tsavage68/MedQA_L3_1000steps_1e6rate_SFT, base_model:finetune:tsavage68/MedQA_L3_1000steps_1e6rate_SFT, license:llama3, autotrain_compatible, text-generati...
- **created_time:** 1,716,190,283,000
- **last_modified:** 2024-05-23T22:54:22
- **downloads:** 5
- **likes:** 0
- **matched_bigbio_names:** MEDQA
- **is_bionlp:** BioNLP

README excerpt:

> ---
> base_model: tsavage68/MedQA_L3_1000steps_1e6rate_SFT
> license: llama3
> tags:
> - trl
> - dpo
> - generated_from_trainer
> model-index:
> - name: MedQA_L3_1000steps_1e7rate_03beta_CSFTDPO
>   results: []
> ---
> <!-- This model card has been generated automatically according to the information the Trainer had access to. You should p...
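The `dpo` tag and the `03beta` fragment of the model name suggest DPO training with β = 0.3. A toy per-pair version of the DPO objective, as a sketch of the math rather than the `trl` implementation (the log-probabilities below are illustrative numbers, not real model outputs):

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.3):
    # DPO loss for one preference pair:
    # -log sigmoid(beta * ((logp_c - ref_c) - (logp_r - ref_r)))
    # where ref_* are the frozen reference model's log-probabilities.
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -math.log(1 / (1 + math.exp(-margin)))

# The loss shrinks as the policy prefers the chosen answer more strongly
# than the reference model does.
print(dpo_loss(-1.0, -5.0, -2.0, -2.0))  # clear preference: small loss
print(dpo_loss(-3.0, -3.0, -2.0, -2.0))  # no preference: loss = log 2
```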
### mradermacher/Llama-3-VNTL-Vectors-i1-GGUF
- **author:** mradermacher
- **task_category:** (none)
- **tags:** transformers, gguf, mergekit, merge, en, base_model:Cas-Warehouse/Llama-3-VNTL-Vectors, base_model:quantized:Cas-Warehouse/Llama-3-VNTL-Vectors, endpoints_compatible, region:us, imatrix, conversational
- **created_time:** 1,741,475,231,000
- **last_modified:** 2025-03-09T01:00:08
- **downloads:** 589
- **likes:** 0
- **matched_bigbio_names:** CAS
- **is_bionlp:** Non_BioNLP

README excerpt:

> ---
> base_model: Cas-Warehouse/Llama-3-VNTL-Vectors
> language:
> - en
> library_name: transformers
> tags:
> - mergekit
> - merge
> quantized_by: mradermacher
> ---
> ## About
> <!-- ### quantize_version: 2 -->
> <!-- ### output_tensor_quantised: 1 -->
> <!-- ### convert_type: hf -->
> <!-- ### vocab_type: -->
> <!-- ### tags: nicoboss -->
> weig...
### ChameleonAI/ChameleonAILoras
- **author:** ChameleonAI
- **task_category:** (none)
- **tags:** region:us
- **created_time:** 1,681,658,960,000
- **last_modified:** 2023-12-19T20:49:50
- **downloads:** 0
- **likes:** 11
- **matched_bigbio_names:** CRAFT
- **is_bionlp:** Non_BioNLP

README excerpt:

> ---
> {}
> ---
> # Chameleon AI Loras
> <!-- Provide a quick summary of what the model is/does. -->
> You can find all my Loras uploaded to civitai here. Feels like the website is mostly down at the moment, so this is mostly a safety net.
> ## Model List
> - [Judgement (Helltaker)](https://huggingface.co/ChameleonAI/ChameleonAI...
### QuantFactory/Dans-PersonalityEngine-V1.1.0-12b-GGUF
- **author:** QuantFactory
- **task_category:** text-generation
- **tags:** transformers, gguf, general-purpose, roleplay, storywriting, chemistry, biology, code, climate, axolotl, text-generation-inference, finetune, text-generation, en, dataset:PocketDoc/Dans-MemoryCore-CoreCurriculum-Small, dataset:AquaV/Energetic-Materials-Sh...
- **created_time:** 1,735,364,211,000
- **last_modified:** 2024-12-28T06:58:45
- **downloads:** 267
- **likes:** 3
- **matched_bigbio_names:** CRAFT
- **is_bionlp:** Non_BioNLP

README excerpt:

> ---
> base_model:
> - mistralai/Mistral-Nemo-Base-2407
> datasets:
> - PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
> - AquaV/Energetic-Materials-Sharegpt
> - AquaV/Chemical-Biological-Safety-Applications-Sharegpt
> - AquaV/US-Army-Survival-Sharegpt
> - AquaV/Resistance-Sharegpt
> - AquaV/Interrogation-Sharegpt
> - AquaV/Multi-Environme...
### solidrust/Newton-7B-AWQ
- **author:** solidrust
- **task_category:** text-generation
- **tags:** transformers, safetensors, mistral, text-generation, axolotl, finetune, qlora, quantized, 4-bit, AWQ, pytorch, instruct, conversational, license:apache-2.0, autotrain_compatible, endpoints_compatible, text-generation-inference, en, dataset:hen...
- **created_time:** 1,709,529,638,000
- **last_modified:** 2024-09-03T08:07:48
- **downloads:** 8
- **likes:** 0
- **matched_bigbio_names:** SCIQ
- **is_bionlp:** Non_BioNLP

README excerpt:

> ---
> base_model: Weyaxi/Newton-7B
> datasets:
> - hendrycks/competition_math
> - allenai/ai2_arc
> - camel-ai/physics
> - camel-ai/chemistry
> - camel-ai/biology
> - camel-ai/math
> - STEM-AI-mtl/Electrical-engineering
> - openbookqa
> - piqa
> - metaeval/reclor
> - mandyyyyii/scibench
> - derek-thomas/ScienceQA
> - sciq
> - TIGER-Lab/ScienceEval
> la...
### arobier/BioGPT-Large-PubMedQA
- **author:** arobier
- **task_category:** (none)
- **tags:** fr, base_model:microsoft/BioGPT-Large-PubMedQA, base_model:finetune:microsoft/BioGPT-Large-PubMedQA, license:mit, region:us
- **created_time:** 1,733,942,707,000
- **last_modified:** 2024-12-12T15:24:49
- **downloads:** 0
- **likes:** 0
- **matched_bigbio_names:** PUBMEDQA
- **is_bionlp:** BioNLP

README excerpt:

> ---
> base_model:
> - microsoft/BioGPT-Large-PubMedQA
> language:
> - fr
> license: mit
> ---
### BigSalmon/InformalToFormalLincoln81ParaphraseMedium
- **author:** BigSalmon
- **task_category:** text-generation
- **tags:** transformers, pytorch, gpt2, text-generation, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
- **created_time:** 1,664,072,062,000
- **last_modified:** 2022-09-25T02:27:53
- **downloads:** 43
- **likes:** 0
- **matched_bigbio_names:** BEAR
- **is_bionlp:** Non_BioNLP

README excerpt:

> ---
> {}
> ---
> data: https://github.com/BigSalmon2/InformalToFormalDataset
> ```
> from transformers import AutoTokenizer, AutoModelForCausalLM
> tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln80Paraphrase")
> model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln80Paraphras...
### GuiGel/xlm-roberta-large-flert-finetune-meddocan
- **author:** GuiGel
- **task_category:** token-classification
- **tags:** flair, pytorch, token-classification, sequence-tagger-model, region:us
- **created_time:** 1,667,842,355,000
- **last_modified:** 2022-11-07T17:36:11
- **downloads:** 6
- **likes:** 0
- **matched_bigbio_names:** MEDDOCAN
- **is_bionlp:** Non_BioNLP

README excerpt:

> ---
> tags:
> - flair
> - token-classification
> - sequence-tagger-model
> ---
> ### Demo: How to use in Flair
> Requires:
> - **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
> ```python
> from flair.data import Sentence
> from flair.models import SequenceTagger
> # load tagger
> tagger = SequenceTagger.load("GuiGel/xlm-ro...
### RichardErkhov/amd_-_AMD-Llama-135m-code-gguf
- **author:** RichardErkhov
- **task_category:** (none)
- **tags:** gguf, arxiv:2204.06745, endpoints_compatible, region:us
- **created_time:** 1,730,998,563,000
- **last_modified:** 2024-11-07T17:00:10
- **downloads:** 25
- **likes:** 0
- **matched_bigbio_names:** SCIQ
- **is_bionlp:** Non_BioNLP

README excerpt:

> ---
> {}
> ---
> Quantization made by Richard Erkhov.
> [Github](https://github.com/RichardErkhov)
> [Discord](https://discord.gg/pvy7H8DZMG)
> [Request more models](https://github.com/RichardErkhov/quant_request)
> AMD-Llama-135m-code - GGUF
> - Model creator: https://huggingface.co/amd/
> - Original model: https://huggingface.co/...