Best chat template to force transcription mode?
Hi, I'm looking for the best multilingual voice-to-text model I can run locally with llama-server, strictly for transcription, and this looks like it might work for that purpose. I read through your model description and applied the relevant fix to get the model working correctly. Could you please provide the best Jinja template to pass to llama-server to force transcription-only mode for a specific language? Thanks in advance.
I don't use the upstream llama-server, so I don't know anything about how to set templates with it. I implemented my own autotokenizer in a downstream server based on llama.cpp's server. Basically, all you need to do is append `lang:{SL}[TRANSCRIBE]` to the end of the normal assistant prompt, where `{SL}` is the language code such as `en`, `de`, etc.
It seems to work quite well.
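If you are wiring this up yourself, here is a minimal sketch of building that prompt string in Python. The `[INST]`/`[BEGIN_AUDIO]` wrapper is an assumption based on the template posted later in this thread; the essential part is the `lang:{SL}[TRANSCRIBE]` suffix after the closing `[/INST]`:

```python
def transcription_prompt(sl: str = "en") -> str:
    """Build the transcription-forcing prompt.

    The [INST][BEGIN_AUDIO]...[/INST] wrapper is assumed from the
    template shown elsewhere in this thread; your server inserts the
    audio tokens between [BEGIN_AUDIO] and [/INST].
    """
    return f"[INST][BEGIN_AUDIO][/INST]lang:{sl}[TRANSCRIBE]"
```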
You should be aware that the audio path in llama.cpp has been broken since b7410: the end of the audio gets chopped off (https://github.com/ggml-org/llama.cpp/issues/18419). The Voxtral problem (https://github.com/ggml-org/llama.cpp/issues/17868) was also ignored for some unknown reason, so it may always have to be fixed by hand in future builds.
I did a little digging into the audio truncation issue, and after a couple of tests it became clear that the model only transcribes up to the last full 30 s segment: a 48 s audio file stops at 30 s, a 1 m 20 s file at exactly 1 m, and so on.
I made a quick Python wrapper that calculates the time remaining until the next 30 s boundary and appends silence for that duration; now the text is fully transcribed as expected.
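For reference, a rough sketch of that padding step, assuming PCM WAV input (the actual wrapper may differ; the boundary math is just rounding the duration up to the next 30 s multiple):

```python
import wave


def pad_to_30s_boundary(in_path: str, out_path: str, segment_s: int = 30) -> float:
    """Pad a WAV file with trailing silence up to the next 30 s boundary.

    Works around the truncation where only full 30 s segments get
    transcribed (ggml-org/llama.cpp#18419). Returns the padded duration.
    """
    with wave.open(in_path, "rb") as src:
        params = src.getparams()
        frames = src.readframes(src.getnframes())
        duration = src.getnframes() / params.framerate
    # Ceiling to the next multiple of segment_s (48 s -> 60 s, 80 s -> 90 s).
    target = -(-duration // segment_s) * segment_s
    pad_frames = int((target - duration) * params.framerate)
    silence = b"\x00" * (pad_frames * params.sampwidth * params.nchannels)
    with wave.open(out_path, "wb") as dst:
        dst.setparams(params)
        dst.writeframes(frames + silence)
    return target
```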
I'm also using this Jinja template, which lets me pass the desired language dynamically in the request via `chat_template_kwargs`:
```jinja
{%- if messages | length > 0 -%}
<s>[INST][BEGIN_AUDIO]
{%- for message in messages -%}
{%- if message.role == 'user' and loop.last -%}
{{ message.content }}
{%- endif -%}
{%- endfor -%}
[/INST]lang:{{ lang_code | default('en') }}[TRANSCRIBE]
{%- endif -%}
```
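To sketch how the language gets passed per request: a hypothetical payload for llama-server's OpenAI-compatible `/v1/chat/completions` endpoint. The `input_audio` content shape is an assumption and may differ between builds; the relevant part is the `chat_template_kwargs` field feeding the template's `lang_code` lookup:

```python
import base64
import json

# Placeholder audio; in practice read the (silence-padded) WAV bytes.
audio_b64 = base64.b64encode(b"<wav bytes here>").decode()

payload = {
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "input_audio",
                    "input_audio": {"data": audio_b64, "format": "wav"},
                },
            ],
        }
    ],
    # Picked up by the template's `lang_code | default('en')` expression.
    "chat_template_kwargs": {"lang_code": "de"},
}

body = json.dumps(payload)  # POST this to /v1/chat/completions
```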
The model does an alright job, even though the error rate is higher with llama.cpp at any quant level (even bf16) compared to running the full model with transformers or vLLM.
Edit: I have no clue why the content of my post looks like it's been crossed out.
You need to put special characters in a codeblock, as in the template above, or markdown will interpret them as formatting commands.

Silence padding is certainly a workaround for the truncation, but there were also some other hacks in the new code which may or may not be distorting the audio. I just reverted my downstream to the old code from before the hack that broke it; that will work too if you are fluent in C++ rebasing.

Agreed the model is OK but certainly not flawless. Extremely interesting result comparing against transformers/vLLM. Most models here seem to recommend sglang, vLLM, or transformers as the best inference engine; for some reason llama.cpp doesn't come up in the conversation. Ollama seems to have a grand vision of a future unified anything-in, anything-out engine (https://ollama.com/blog/multimodal-models), and they dropped llama.cpp specifically because of its multimodal instability. It's clearly not being maintained even for simple issues that come up, and the Ollama folks think the whole approach used in llama.cpp is wrong and a complete rip-up is needed for proper multimodal. I won't debug it further, since it's a complete waste of time to post issues and just have them ignored. Best of luck with your transcribing and translating.