Sentence Similarity
sentence-transformers
PyTorch
Transformers
Korean
bert
feature-extraction
TAACO
text-embeddings-inference
Instructions to use KDHyun08/TAACO_STS with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- sentence-transformers
How to use KDHyun08/TAACO_STS with sentence-transformers:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("KDHyun08/TAACO_STS")
sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]
embeddings = model.encode(sentences)
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [3, 3]
```

- Transformers
How to use KDHyun08/TAACO_STS with Transformers:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("KDHyun08/TAACO_STS")
model = AutoModel.from_pretrained("KDHyun08/TAACO_STS")
```

- Notebooks
- Google Colab
- Kaggle
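The plain `AutoModel` path above returns token-level hidden states rather than a single sentence vector; sentence-transformers-style models conventionally derive the sentence embedding by mean-pooling the token embeddings under the attention mask. A minimal sketch of that pooling step on toy values (the helper name and numbers are illustrative, not the model's actual tensors):

```python
def mean_pool(token_embeddings, attention_mask):
    """Average token vectors, counting only positions where the mask is 1."""
    dim = len(token_embeddings[0])
    sums = [0.0] * dim
    count = 0
    for vec, m in zip(token_embeddings, attention_mask):
        if m:
            sums = [s + v for s, v in zip(sums, vec)]
            count += 1
    return [s / max(count, 1) for s in sums]

# Two real tokens plus one padding position (mask 0): padding is ignored.
tokens = [[1.0, 2.0], [3.0, 4.0], [99.0, 99.0]]
mask = [1, 1, 0]
print(mean_pool(tokens, mask))  # [2.0, 3.0]
```

With real outputs, `token_embeddings` would come from `model(**inputs).last_hidden_state` for one sentence, and `attention_mask` from the tokenizer.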
README.md (CHANGED)

````diff
@@ -4,13 +4,15 @@ tags:
 - sentence-transformers
 - sentence-similarity
 - transformers
+- TAACO
 language: ko
 ---
 
 # TAACO_Similarity
 
-This model is [
-a measurement tool, K-TAACO (working title); it was built to measure semantic cohesion between sentences, one of its metrics.
+This model is based on [Sentence-transformers](https://www.SBERT.net) and was trained on KLUE's STS (Sentence Textual Similarity) dataset.
+It was built to measure semantic cohesion between sentences, one of the metrics of K-TAACO (working title), a Korean inter-sentence cohesion measurement tool the author is developing.
+Additional training is planned on further data, such as the inter-sentence similarity data of the Modu Corpus.
 
 ## Usage (Sentence-Transformers)
 
@@ -67,7 +69,7 @@ for i, (score, idx) in enumerate(zip(top_results[0], top_results[1])):
 
 Running the Usage example above produces the results below. The closer a score is to 1, the more similar the sentence.
 
-
+```
 Input sentence: To mark the birthday, I started preparing dishes at 8:30 in the morning, intending to make breakfast
 
 <The 10 sentences most similar to the input sentence>
@@ -91,7 +93,7 @@ for i, (score, idx) in enumerate(zip(top_results[0], top_results[1])):
 9: I was not sure my wife would like it, but seeing the plain yogurt in the refrigerator, I felt I should get right to the birthday cooking. The dish came out a success (similarity: 0.2259)
 
 10: My wife likes that kind of steak. But then something completely unexpected happened (similarity: 0.1967)
-
+```
 
 ## Training
 The model was trained with the parameters:
````