Sentence Similarity
sentence-transformers
PyTorch
Transformers
Korean
bert
feature-extraction
TAACO
text-embeddings-inference
Instructions to use KDHyun08/TAACO_STS with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- sentence-transformers
How to use KDHyun08/TAACO_STS with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("KDHyun08/TAACO_STS")

sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]
embeddings = model.encode(sentences)

similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [3, 3]
```
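Note: `model.similarity` is available on recent sentence-transformers releases (v3.0+). On older versions, the same cosine-similarity matrix can be computed with `util.cos_sim`; a minimal sketch, reusing the `embeddings` from above:

```python
from sentence_transformers import util

# Pre-3.0 equivalent of model.similarity: cosine similarity of all pairs.
similarities = util.cos_sim(embeddings, embeddings)
print(similarities.shape)  # [3, 3]
```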
- Transformers
How to use KDHyun08/TAACO_STS with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("KDHyun08/TAACO_STS")
model = AutoModel.from_pretrained("KDHyun08/TAACO_STS")
```
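`AutoModel` returns token-level outputs rather than one vector per sentence. The model card (see the README diff below) pools them with a `mean_pooling` function; a minimal sketch of that step, reusing the `tokenizer` and `model` loaded above:

```python
import torch

def mean_pooling(model_output, attention_mask):
    # Average token embeddings, ignoring padding positions.
    token_embeddings = model_output[0]  # first element holds the token embeddings
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * mask, 1) / torch.clamp(mask.sum(1), min=1e-9)

encoded = tokenizer(["This is an example sentence"], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    output = model(**encoded)
sentence_embeddings = mean_pooling(output, encoded["attention_mask"])  # shape [1, 768]
```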
- Notebooks
- Google Colab
- Kaggle
Upload with huggingface_hub
- README.md +6 -9
- config.json +1 -1
- pytorch_model.bin +1 -1
- tokenizer_config.json +1 -1
README.md
CHANGED
````diff
@@ -2,16 +2,14 @@
 pipeline_tag: sentence-similarity
 tags:
 - sentence-transformers
+- feature-extraction
 - sentence-similarity
 - transformers
-- TAACO
-
-language: ko
 ---
 
-# 
+# {MODEL_NAME}
 
-This is a 
+This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
 
 <!--- Describe your model here -->
 
@@ -55,9 +53,8 @@ def mean_pooling(model_output, attention_mask):
 sentences = ['This is an example sentence', 'Each sentence is converted']
 
 # Load model from HuggingFace Hub
-tokenizer = AutoTokenizer.from_pretrained(
-model = AutoModel.from_pretrained(
-
+tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
+model = AutoModel.from_pretrained('{MODEL_NAME}')
 
 # Tokenize sentences
 encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
@@ -87,7 +84,7 @@ The model was trained with the parameters:
 
 **DataLoader**:
 
-`torch.utils.data.dataloader.DataLoader` of length 
+`torch.utils.data.dataloader.DataLoader` of length 142 with parameters:
 ```
 {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
 ```
````
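The training section above reports 142 batches of size 32, i.e. roughly 4,500 training pairs per epoch. A sketch of how such a DataLoader is typically constructed for sentence-transformers training (the `InputExample` below is placeholder data, not the model's actual training set):

```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample

# Placeholder pair; the real training data is not part of this card.
train_examples = [InputExample(texts=["문장 A", "문장 B"], label=0.9)]

# shuffle=True yields the RandomSampler/BatchSampler combination listed above.
# model.fit() later installs its own collate_fn for InputExample batches.
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
```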
config.json
CHANGED
```diff
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "
+  "_name_or_path": "C:\\Users\\DESKTOP/.cache\\torch\\sentence_transformers\\KDHyun08_TAACO_STS\\",
   "architectures": [
     "BertModel"
   ],
```
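The local Windows path recorded in `_name_or_path` is a common side effect of re-saving a model that was loaded through the (older) sentence-transformers cache, which lives under `~/.cache/torch/sentence_transformers/`. A plausible, hypothetical sequence that produces it:

```python
from sentence_transformers import SentenceTransformer

# Loading by repo id resolves to the local cache directory; saving afterwards
# writes that resolved path into config.json as "_name_or_path".
model = SentenceTransformer("KDHyun08/TAACO_STS")
model.save("TAACO_STS-local")  # hypothetical output directory
```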
pytorch_model.bin
CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:fc7bbf7951004b83a5907a98ce803a92b540f1a522e429aadb7b56fa079da210
 size 442543599
```
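The `oid` in a Git LFS pointer file is the SHA-256 of the real weight file. A quick integrity check after downloading (assumes the 442,543,599-byte `pytorch_model.bin` is in the working directory):

```python
import hashlib

# Recompute the digest Git LFS stores in the pointer file.
h = hashlib.sha256()
with open("pytorch_model.bin", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
print(h.hexdigest() == "fc7bbf7951004b83a5907a98ce803a92b540f1a522e429aadb7b56fa079da210")
```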
tokenizer_config.json
CHANGED
```diff
@@ -1 +1 @@
-{"do_lower_case": false, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "do_basic_tokenize": true, "never_split": null, "model_max_length": 512, "special_tokens_map_file": "C:\\Users\\DESKTOP/.cache\\huggingface\\transformers\\aeaaa3afd086a040be912f92ffe7b5f85008b744624f4517c4216bcc32b51cf0.054ece8d16bd524c8a00f0e8a976c00d5de22a755ffb79e353ee2954d9289e26", "name_or_path": "
+{"do_lower_case": false, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "do_basic_tokenize": true, "never_split": null, "model_max_length": 512, "special_tokens_map_file": "C:\\Users\\DESKTOP/.cache\\huggingface\\transformers\\aeaaa3afd086a040be912f92ffe7b5f85008b744624f4517c4216bcc32b51cf0.054ece8d16bd524c8a00f0e8a976c00d5de22a755ffb79e353ee2954d9289e26", "name_or_path": "C:\\Users\\DESKTOP/.cache\\torch\\sentence_transformers\\KDHyun08_TAACO_STS\\", "tokenizer_class": "BertTokenizer"}
```
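Two settings in this config matter in practice: `do_lower_case: false` (input is not lower-cased) and `model_max_length: 512` (longer inputs must be truncated, as the usage snippets above do). A short sketch confirming them after loading from the Hub:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("KDHyun08/TAACO_STS")
print(type(tokenizer).__name__)    # a BertTokenizer variant, per "tokenizer_class"
print(tokenizer.model_max_length)  # 512
```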