thenlper committed
Commit e61197e · 1 Parent(s): 6e9e698

Integrate with Sentence Transformers v5.4 (#24)

- Integrate with Sentence Transformers v5.4.0 (e1775d95f8cf4625eea7879c6edb34beae6c42af)
- Rename CausalScoreHead to LogitScore (52857768aa4d06ae51b9fae1572a90ef18082be5)
- Move 'message_format' into 'modality_config' under 'message' -> 'format' (1f54aa72c421b677caa56ece526856f8c60144a5)

1_LogitScore/config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "true_token_id": 9693,
+   "false_token_id": 2152
+ }
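The two token ids above presumably identify the "yes" and "no" decision tokens, and a LogitScore-style head reduces the model's next-token logits to a single relevance score as their difference. A minimal plain-Python sketch of that reduction (the function name `logit_score` and the toy logits are illustrative, not part of the commit):

```python
# Hypothetical sketch: score = logit("yes" token) - logit("no" token)
# at the final position, using the ids from the config above.
TRUE_TOKEN_ID = 9693   # "yes" token id (from 1_LogitScore/config.json)
FALSE_TOKEN_ID = 2152  # "no" token id (from 1_LogitScore/config.json)

def logit_score(last_token_logits):
    """Reduce one sequence's next-token logits to a scalar relevance score."""
    return last_token_logits[TRUE_TOKEN_ID] - last_token_logits[FALSE_TOKEN_ID]

# Toy logits vector: zero everywhere except the two decision tokens.
logits = [0.0] * 160_000
logits[TRUE_TOKEN_ID] = 3.5
logits[FALSE_TOKEN_ID] = -4.125
print(logit_score(logits))  # 7.625
```

Applying a sigmoid to this difference would give the 0-1 probability scores mentioned in the README below.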
README.md CHANGED
@@ -3,6 +3,8 @@ license: apache-2.0
  base_model:
  - Qwen/Qwen3-0.6B-Base
  library_name: transformers
+ tags:
+ - sentence-transformers
  pipeline_tag: text-ranking
  ---
  # Qwen3-Reranker-0.6B
@@ -51,13 +53,55 @@ For more details, including benchmark evaluation, hardware requirements, and inf
 
  ## Usage
 
+ ### Using Sentence Transformers
+
+ Install Sentence Transformers:
+ ```bash
+ pip install sentence_transformers
+ ```
+
+ ```python
+ from sentence_transformers import CrossEncoder
+
+ model = CrossEncoder("Qwen/Qwen3-Reranker-0.6B")
+
+ query = "What is the capital of China?"
+ documents = [
+     "The capital of China is Beijing.",
+     "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
+ ]
+
+ pairs = [(query, doc) for doc in documents]
+ scores = model.predict(pairs)
+ print(scores)
+ # [ 7.625 -11.375]
+
+ rankings = model.rank(query, documents)
+ print(rankings)
+ # [{'corpus_id': 0, 'score': 7.625}, {'corpus_id': 1, 'score': -11.375}]
+ ```
+
+ By default, scores are raw logit differences. To get 0-1 probability scores, pass a Sigmoid activation function:
+ ```python
+ scores = model.predict([(query, doc) for doc in documents], activation_fn=torch.nn.Sigmoid())
+ ```
+
+ The model uses a default prompt `"query"`, which injects the instruction `"Given a web search query, retrieve relevant passages that answer the query"` into the chat template. You can provide a custom instruction via the `prompts` parameter:
+ ```python
+ model = CrossEncoder(
+     "Qwen/Qwen3-Reranker-0.6B",
+     prompts={"classification": "Classify whether the document matches the query topic"},
+     default_prompt_name="classification",
+ )
+ ```
+
+ ### Using Transformers
+
  With Transformers versions earlier than 4.51.0, you may encounter the following error:
  ```
  KeyError: 'qwen3'
  ```
 
- ### Transformers Usage
-
  ```python
  # Requires transformers>=4.51.0
  import torch
chat_template.jinja ADDED
@@ -0,0 +1,15 @@
+ {%- set instruction = messages | selectattr("role", "eq", "system") | map(attribute="content") | first | default("Given a web search query, retrieve relevant passages that answer the query") -%}
+ {%- set query_text = messages | selectattr("role", "eq", "query") | map(attribute="content") | first -%}
+ {%- set document_text = messages | selectattr("role", "eq", "document") | map(attribute="content") | first -%}
+ <|im_start|>system
+ Judge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|>
+ <|im_start|>user
+ <Instruct>: {{ instruction }}
+ <Query>: {{ query_text }}
+ <Document>: {{ document_text }}<|im_end|>
+ <|im_start|>assistant
+ <think>
+
+ </think>
+
+
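To make the rendered prompt concrete, here is a minimal plain-Python sketch (not the Jinja engine itself) of the string the template above produces from a query and a document; the helper name `render_prompt` is illustrative, and the default instruction is taken from the template's `default(...)` fallback:

```python
# Hypothetical re-implementation of the chat template above as an f-string,
# to show the exact prompt layout the reranker scores.
DEFAULT_INSTRUCTION = ("Given a web search query, retrieve relevant passages "
                       "that answer the query")

def render_prompt(query, document, instruction=DEFAULT_INSTRUCTION):
    """Mirror the template: system preamble, Instruct/Query/Document block,
    then an assistant turn opened with an empty <think> section."""
    return (
        "<|im_start|>system\n"
        "Judge whether the Document meets the requirements based on the Query "
        'and the Instruct provided. Note that the answer can only be "yes" or "no".'
        "<|im_end|>\n"
        "<|im_start|>user\n"
        f"<Instruct>: {instruction}\n"
        f"<Query>: {query}\n"
        f"<Document>: {document}<|im_end|>\n"
        "<|im_start|>assistant\n"
        "<think>\n\n</think>\n\n"
    )

print(render_prompt("What is the capital of China?",
                    "The capital of China is Beijing."))
```

The model's score is then read from the logits of the first token generated after the empty `<think>` block.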
config_sentence_transformers.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "__version__": {
+     "pytorch": "2.10.0+cu128",
+     "sentence_transformers": "5.4.0"
+   },
+   "activation_fn": "torch.nn.modules.linear.Identity",
+   "default_prompt_name": "query",
+   "model_type": "CrossEncoder",
+   "prompts": {
+     "query": "Given a web search query, retrieve relevant passages that answer the query"
+   }
+ }
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.base.modules.transformer.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_LogitScore",
+     "type": "sentence_transformers.cross_encoder.modules.logit_score.LogitScore"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "transformer_task": "text-generation",
+   "modality_config": {
+     "text": {
+       "method": "forward",
+       "method_output_name": "logits"
+     },
+     "message": {
+       "method": "forward",
+       "method_output_name": "logits",
+       "format": "flat"
+     }
+   },
+   "module_output_name": "causal_logits"
+ }