# Eland Stance vLLM - Chinese Stance Detection (Merged Model)

Full merged model for high-throughput stance detection inference with vLLM.

## Task Overview

**Stance Detection (觀點支持度分析):** Determines whether a response supports, opposes, or is neutral toward the main text's viewpoint.

| Label | Description |
|-------|-------------|
| 支持 (Support) | The response genuinely agrees with the main text |
| 反對 (Oppose) | The response genuinely disagrees with the main text |
| 中立 (Neutral) | The position is unclear or balanced |
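For downstream processing it can help to normalize the model's Chinese labels to English codes. A minimal sketch; the `LABEL_MAP` name and the English codes are this example's convention, not part of the model:

```python
# Hypothetical helper: map the model's Chinese output labels to English
# codes. The Chinese strings are the model's label vocabulary; the
# English names are just this example's convention.
LABEL_MAP = {
    "支持": "support",
    "反對": "oppose",
    "中立": "neutral",
}

def to_english_label(raw: str) -> str:
    """Translate a raw model label; raise on anything unexpected."""
    label = raw.strip()
    if label not in LABEL_MAP:
        raise ValueError(f"Unexpected label: {raw!r}")
    return LABEL_MAP[label]
```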

## Performance

### Overall Benchmark (368 samples)

| Metric | Score |
|--------|-------|
| Accuracy | 82.34% |
| Macro-F1 | 81.55% |

### By Category

| Category | Accuracy |
|----------|----------|
| Standard Samples | 85.5% |
| Sarcasm Detection | 90.0% |

## Quick Start with vLLM

### Start Server

```bash
# Start vLLM OpenAI-compatible server
vllm serve p988744/eland-stance-zh-vllm --port 8000
```

### Python Client

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="dummy"
)

# System prompt with few-shot examples for best sarcasm detection
system_prompt = """你是一位專業的觀點支持度分析助手,擅長識別諷刺和反話。

## 任務
分析「回應」對「主文觀點」的立場,判斷回應是支持、反對、還是中立於主文觀點。

## 標籤定義
- 支持:回應真正認同主文觀點
- 反對:回應真正反對主文觀點
- 中立:態度不明確

## 範例

### 範例 1(諷刺 → 反對)
主文:這家餐廳的服務非常好
回應:是啊,特別是等了一小時才有人來點餐的時候
分析:回應使用諷刺,字面附和但實際批評服務差
答案:反對

### 範例 2(真正支持)
主文:這個政策有助於經濟發展
回應:確實,從最近的數據來看效果很明顯
分析:回應提供正面證據,真心認同
答案:支持

## 判斷要點
1. 識別諷刺:「字面肯定 + 負面細節」= 諷刺 = 實際反對
2. 諷刺式附和(是啊、沒錯、對啊)+ 批評性內容 = 反對
3. 按真實意圖判斷,不要被字面意思誤導"""

def analyze_stance(main_text: str, response_text: str) -> str:
    response = client.chat.completions.create(
        model="p988744/eland-stance-zh-vllm",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": f"主文:{main_text}\n回應:{response_text}"}
        ],
        temperature=0,
        max_tokens=20,
        extra_body={"chat_template_kwargs": {"enable_thinking": False}}
    )
    return response.choices[0].message.content.strip()

# Test examples
print(analyze_stance(
    "這家餐廳的服務非常好",
    "是啊,特別是等了一小時才有人來點餐的時候"
))  # Expected: 反對 (sarcasm detected)

print(analyze_stance(
    "這個政策有助於經濟發展",
    "確實,從最近的數據來看效果很明顯"
))  # Expected: 支持
```

### cURL

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "p988744/eland-stance-zh-vllm",
    "messages": [
      {"role": "system", "content": "你是一位專業的觀點支持度分析助手。分析「回應」對「主文觀點」的立場,判斷回應是支持、反對、還是中立。"},
      {"role": "user", "content": "主文:這家餐廳的服務非常好\n回應:是啊,特別是等了一小時才有人來點餐的時候"}
    ],
    "temperature": 0,
    "max_tokens": 20,
    "chat_template_kwargs": {"enable_thinking": false}
  }'
```

### Batch Processing

```python
from openai import OpenAI
from concurrent.futures import ThreadPoolExecutor

client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")

samples = [
    ("這家餐廳的服務非常好", "是啊,特別是等了一小時才有人來點餐的時候"),
    ("這款手機的電池續航力很強", "沒錯,強到每天要充三次電"),
    ("這個政策有助於經濟發展", "確實,從最近的數據來看效果很明顯"),
    ("遠端工作會降低生產力", "我不同意,研究顯示遠端工作反而提高效率"),
]

# Reuses analyze_stance() from the Python Client section above
def process_sample(sample):
    main_text, response_text = sample
    return analyze_stance(main_text, response_text)

# Parallel requests; vLLM batches concurrent requests server-side
with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(process_sample, samples))

for (main_text, _), result in zip(samples, results):
    print(f"主文: {main_text[:20]}... → {result}")
```

## Important: System Prompt Required

Unlike the Ollama version, which ships with a built-in system prompt, the vLLM server requires the system prompt to be sent with every request for reliable sarcasm detection.

**Basic prompt (lower accuracy):**

```
你是一位專業的觀點支持度分析助手。分析「回應」對「主文觀點」的立場,判斷回應是支持、反對、還是中立。
```

**Few-shot prompt (recommended for sarcasm):**

```
你是一位專業的觀點支持度分析助手,擅長識別諷刺和反話。

## 範例(諷刺 → 反對)
主文:這家餐廳的服務非常好
回應:是啊,特別是等了一小時才有人來點餐的時候
答案:反對

## 判斷要點
- 「字面肯定 + 負面細節」= 諷刺 = 實際反對
```

## Qwen3 Thinking Mode

Important: Disable thinking mode for classification tasks, or the model may spend its token budget on `<think>` reasoning instead of the label:

```python
# vLLM (OpenAI-compatible client)
extra_body={"chat_template_kwargs": {"enable_thinking": False}}

# Transformers
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=False
)
```

## Model Details

| Parameter | Value |
|-----------|-------|
| Base Model | Qwen/Qwen3-4B |
| Training Method | SFT + LoRA (merged) |
| Parameters | 4.05B |
| Context Length | 2048 tokens |

## Input Format

```
主文:[main text with viewpoint]
回應:[response to analyze]
```
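The format above can be assembled with a one-line helper; note the fullwidth colons. The `format_input` name is illustrative, not part of the model's API:

```python
def format_input(main_text: str, response_text: str) -> str:
    """Build the user message in the model's expected input format.
    (Hypothetical helper; the fullwidth colons : match the training data.)"""
    return f"主文:{main_text}\n回應:{response_text}"

print(format_input("這家餐廳的服務非常好", "是啊,特別是等了一小時才有人來點餐的時候"))
```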

## Example Inputs & Expected Outputs

| Main Text | Response | Expected | Type |
|-----------|----------|----------|------|
| 這家餐廳的服務非常好 | 是啊,特別是等了一小時才有人來點餐的時候 | 反對 | Sarcasm |
| 這款手機的電池續航力很強 | 沒錯,強到每天要充三次電 | 反對 | Sarcasm |
| 這個政策有助於經濟發展 | 確實,從最近的數據來看效果很明顯 | 支持 | Direct |
| 遠端工作會降低生產力 | 我不同意,研究顯示遠端工作反而提高效率 | 反對 | Direct |
| AI 將取代大部分工作 | 這需要看具體的產業和職位類型 | 中立 | Neutral |
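Since a completion can occasionally carry extra tokens around the label, a defensive extractor can map raw output onto the three valid labels. A sketch; the `extract_label` name and the echoed-prefix case are assumptions, not documented model behaviour:

```python
VALID_LABELS = ("支持", "反對", "中立")

def extract_label(output: str) -> str:
    """Pull the first valid stance label out of a raw completion.
    With temperature=0 and max_tokens=20 the model normally returns the
    bare label; this guards against extra tokens such as an echoed
    答案: prefix (an assumption, not observed behaviour)."""
    for label in VALID_LABELS:
        if label in output:
            return label
    raise ValueError(f"No stance label found in: {output!r}")
```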

## Model Variants

| Version | Repository | Use Case |
|---------|------------|----------|
| LoRA Adapter | p988744/eland-stance-zh | HuggingFace + PEFT |
| GGUF | p988744/eland-stance-zh-gguf | Ollama / llama.cpp |
| Full Merged | p988744/eland-stance-zh-vllm | vLLM (this repo) |

## Dataset

Trained on p988744/eland-stance-zh-data (520 samples across four domains: financial, product, brand, social).

## License

Apache 2.0
