Model Card for parallel-chinese-translation
This model is a fine-tuned version of google/translategemma-12b-it, trained using TRL.
It has been fine-tuned for translation between English and three variants of Chinese: Simplified Chinese (zh-CN), Traditional Chinese as used in Taiwan (zh-TW), and Traditional Chinese as used in Hong Kong (zh-HK).
Although all three variants share the same underlying language, they differ in grammar, vocabulary, and expression. This model is designed to handle these differences, producing translations tailored to each specific variant of Chinese.
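As a concrete illustration of the vocabulary differences, the same English term is often written differently in each variant. The examples below are common, well-known word choices, not output from this model:

```python
# Illustrative vocabulary differences between the three Chinese variants.
# These are common, well-known examples, not output from this model.
VARIANT_EXAMPLES = {
    "software": {"zh-CN": "软件", "zh-TW": "軟體", "zh-HK": "軟件"},
    "taxi":     {"zh-CN": "出租车", "zh-TW": "計程車", "zh-HK": "的士"},
}

for word, variants in VARIANT_EXAMPLES.items():
    print(word, variants)
```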
Background
See GitHub Repository.
Quick start
```python
# Assuming you're using llama.cpp server
import json
import requests

def translate(text: str, source_lang: str = "en", target_lang: str = "zh-CN") -> str:
    # The prompt follows the Gemma chat template; the turn markers
    # (<start_of_turn>, <end_of_turn>) are reconstructed here to match
    # the <end_of_turn> stop token used below.
    prompt = (
        f'<start_of_turn>user\n'
        f'[{{"type": "text", "source_lang_code": "{source_lang}", "target_lang_code": "{target_lang}", "text": "{text}"}}]<end_of_turn>\n'
        f'<start_of_turn>model\n'
    )
    result = ""
    with requests.post(
        "http://127.0.0.1:8080/v1/completions",
        json={
            "prompt": prompt,
            "temperature": 0.1,
            "stop": ["\nuser", "<eos>", "<end_of_turn>"],
            "stream": True,
            "max_tokens": 1024,
        },
        stream=True,
    ) as resp:
        resp.raise_for_status()
        # The server streams Server-Sent Events; each payload line is
        # prefixed with "data: " and carries a JSON chunk.
        for line in resp.iter_lines():
            if not line:
                continue
            line = line.decode("utf-8")
            if not line.startswith("data: "):
                continue
            data = json.loads(line[len("data: "):])
            token = data["choices"][0]["text"]
            print(token, end="", flush=True)
            result += token
            if data["choices"][0].get("finish_reason"):
                break
    print()  # final newline
    return result
```
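Note that interpolating `text` directly into an f-string will produce malformed JSON whenever the input contains a quote or backslash. A more robust sketch of the same payload construction (assuming the same message format as above) builds it with `json.dumps`:

```python
import json

def build_prompt(text: str, source_lang: str = "en", target_lang: str = "zh-CN") -> str:
    # json.dumps escapes quotes, backslashes, and newlines in `text`,
    # so the embedded JSON stays well-formed for any input.
    payload = json.dumps(
        [{"type": "text", "source_lang_code": source_lang,
          "target_lang_code": target_lang, "text": text}],
        ensure_ascii=False,
    )
    return f"<start_of_turn>user\n{payload}<end_of_turn>\n<start_of_turn>model\n"
```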
Translate Example
```shell
./translate.sh "The US government has given chip giant Nvidia the green light to sell its advanced artificial intelligence (AI) processors in China, the Department of Commerce said on Tuesday. The H200, Nvidia's second-most-advanced semiconductor, had been restricted by Washington over concerns that it would give China's technology industry and military an edge over the US. The Commerce Department said the chips can be shipped to China granted that there is sufficient supply of the processors in the US."
```
Translating to zh-HK:
美國商務部周二表示,美國政府已批准半導體巨頭 Nvidia 出售其先進的人工智能 (AI) 處理器予中國。Nvidia 的第二代最先進半導體 H200,曾因美國政府擔心其會令中國科技產業和軍事力量在美國取得優勢而受到限制。商務部表示,只要美國有足夠的處理器供應,這些晶片即可出貨至中國。
Translating to zh-TW:
美國政府已批准晶片巨頭 Nvidia 將其先進的人工智慧 (AI) 處理器銷往中國,美國商務部週二表示。Nvidia 最先進的半導體 H200 曾因擔憂中國科技產業和軍事力量將因此獲得優勢,而受到華盛頓的限制。商務部表示,只要美國有足夠的處理器供應,這些晶片即可運往中國。
Translating to zh-CN:
美国商务部周二宣布,美国政府已批准芯片巨头英伟达向中国出售其先进的人工智能 (AI) 处理器。英伟达第二代先进半导体 H200 曾因担心其会为中国的科技产业和军事力量带来优势,而受到华盛顿的限制。商务部表示,只要美国有足够的处理器供应,这些芯片就可以运往中国。
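The contents of `translate.sh` are not shown here, but conceptually it runs the same request once per target variant. A minimal sketch of that loop, where `translate_fn` stands in for a function with the quick-start `translate` signature:

```python
def translate_all(text, translate_fn, targets=("zh-HK", "zh-TW", "zh-CN")):
    # Run one translation per Chinese variant. `translate_fn` is assumed to
    # accept translate(text, target_lang=...) as in the quick-start example.
    results = {}
    for target in targets:
        print(f"Translating to {target}:")
        results[target] = translate_fn(text, target_lang=target)
    return results
```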
Training procedure
This model was trained with SFT.
Framework versions
- TRL: 0.22.2
- Transformers: 4.56.2
- PyTorch: 2.10.0
- Datasets: 4.3.0
- Tokenizers: 0.22.2
Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year = 2020,
    journal = {GitHub repository},
    publisher = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```