azmnc/translategemma-4b-it-Q4_K_M_GGUF
This model is a quantized version of google/translategemma-4b-it, enabling it to run with the llama-server component of llama.cpp.
Quantization
Q4_K_M
How to use
prompt
source_language_code[sprt]target_language_code[sprt]Text to translate
example
ja[sprt]en[sprt]日本で一番高い山は富士山です。
It will be translated as follows:
The highest mountain in Japan is Mount Fuji.
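The prompt format above can be built with a small helper. This is a minimal sketch based only on the format shown here; the function name is illustrative, not part of any official API:

```python
def build_prompt(source_lang: str, target_lang: str, text: str) -> str:
    """Build a translategemma prompt: source[sprt]target[sprt]text."""
    return f"{source_lang}[sprt]{target_lang}[sprt]{text}"

# The example from above: Japanese -> English
prompt = build_prompt("ja", "en", "日本で一番高い山は富士山です。")
print(prompt)  # → ja[sprt]en[sprt]日本で一番高い山は富士山です。
```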
llama-server
[llama.cpp_dir]/llama-server -m [model_dir]/translategemma-4b-it-Q4_K_M_GGUF.gguf --port 8080 --jinja
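Once the server is running, it exposes llama.cpp's OpenAI-compatible HTTP API. The sketch below assumes the port from the command above and the standard /v1/chat/completions endpoint; the exact response fields may vary with your llama.cpp version:

```python
import json
import urllib.request

def build_payload(prompt: str) -> dict:
    """Chat-completions payload: the prompt goes in as a single user message."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.0,  # deterministic output suits translation
    }

def translate(prompt: str, base_url: str = "http://localhost:8080") -> str:
    """POST to llama-server's OpenAI-compatible /v1/chat/completions endpoint."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# With the server from the command above running:
# print(translate("ja[sprt]en[sprt]日本で一番高い山は富士山です。"))
```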
Link
Details on how this quantized model was created are summarized in the following article.