Model Description

waka-gpt is a GPT-based language model for Japanese text generation, constructed by fine-tuning rinna/japanese-gpt2-small.

Features and use cases (samples):

  • Generating classical and waka-style texts
  • Generating Japanese poetry and creative writing
  • Conversation and creative-writing assistance

  • Developed by: supertakerin2
  • Funded by: No one
  • Shared by: No one
  • Model type: Text-generation
  • Language(s) (NLP): Japanese, Classical Japanese
  • License: MIT License
  • Finetuned from model: rinna/japanese-gpt2-small

Model Sources

The Eight Imperially Compiled Anthologies of Waka Poetry, compiled during the golden age of waka culture

Direct Use

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("supertakerin2/waka-gpt")
model = AutoModelForCausalLM.from_pretrained("supertakerin2/waka-gpt")

# Prompt: "A waka poem about the good old Japanese landscape:"
input_text = "古き良き日本の風景を詠む和歌:"
inputs = tokenizer(input_text, return_tensors="pt")

# Generate up to 100 tokens, including the prompt
outputs = model.generate(**inputs, max_length=100)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
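A classical waka follows a 5-7-5-7-7 mora pattern, 31 morae in total. When generating waka-style text, a rough post-processing step is to keep only outputs whose kana length is close to 31. A minimal sketch, assuming kana input; the helper names and tolerance are illustrative and not part of the model:

```python
def mora_count(text: str) -> int:
    """Approximate mora count for kana text.

    Small ya/yu/yo and small vowels combine with the preceding
    kana, so they are not counted as separate morae.
    """
    small_kana = set("ゃゅょぁぃぅぇぉャュョァィゥェォ")
    return sum(1 for ch in text if ch not in small_kana and not ch.isspace())

def looks_like_waka(text: str, target: int = 31, tolerance: int = 2) -> bool:
    """Heuristic filter: accept text whose mora count is near 31."""
    return abs(mora_count(text) - target) <= tolerance

# Example: a waka by Ki no Tomonori from the Kokinshu (5-7-5-7-7)
poem = "ひさかたの ひかりのどけき はるのひに しづこころなく はなのちるらむ"
print(mora_count(poem))       # → 31
print(looks_like_waka(poem))  # → True
```

This is only a length heuristic; it does not check the internal 5-7-5-7-7 segmentation or handle kanji, which would require a reading dictionary.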

Bias, Risks, and Limitations

waka-gpt is a statistical language model trained on large volumes of text, and the following points require attention:

  • Potential for factual errors in output
  • Potential for inclusion of biases
  • Potential to generate inappropriate expressions in specific contexts

Proper filtering and operation under supervision are recommended.
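As one shape the recommended filtering could take, generated text can be checked against a blocklist before being shown to users. A minimal sketch; the blocklist contents and function names are illustrative assumptions, not part of the model:

```python
# Hypothetical blocklist; in practice this would be a curated set of
# terms inappropriate for the deployment context.
BLOCKLIST = {"誹謗", "中傷"}

def is_safe(text: str) -> bool:
    """Return False if the generated text contains a blocked term."""
    return not any(term in text for term in BLOCKLIST)

# Only surface outputs that pass the filter; regenerate otherwise.
candidates = ["山里の春の夕暮れ", "誹謗の言葉"]
safe_outputs = [t for t in candidates if is_safe(t)]
print(safe_outputs)  # → ['山里の春の夕暮れ']
```

A substring blocklist is only a first line of defense; operation under human supervision, as the card recommends, remains important.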

Model Card Contact

xianghuangwugn@gmail.com

Safetensors

  • Model size: 0.1B params
  • Tensor type: F32
