The template file has a typo in {{ .Response }}: only the literal text "onse }}" is present, causing the model to not respond.
FROM /usr/share/ollama/.ollama/models/blobs/sha256-4dbf02d64b32375935c4dfab74209fb045f9e2eb268d77bb158737c841c58dfe
TEMPLATE
"{{ if .System }}<s>[SYSTEM_PROMPT]Think deeply before answering the user's question. Do the thinking inside ... tags.
{{ .System }}[/SYSTEM_PROMPT]{{ end }}{{ if .Prompt }}[INST]{{ .Prompt }}[/INST]
{{ end }}onse }}</s>"
PARAMETER stop <s>
PARAMETER stop
PARAMETER stop [INST]
As you can see, only "onse }}" remains where {{ .Response }} should be.
Corrected template:
{{ if .System }}<s>[SYSTEM_PROMPT]Think deeply before answering the user's question. Do the thinking inside ... tags.
{{ .System }}[/SYSTEM_PROMPT]{{ end }}{{ if .Prompt }}[INST]{{ .Prompt }}[/INST]
{{ .Response }}</s>{{ end }}
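To see why the truncated template silently drops the reply, here is a minimal sketch using Go's text/template package (the same templating syntax Ollama's Modelfile uses). The renderPrompt helper and the sample values are illustrative, not part of Ollama's API:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderPrompt parses a chat template and fills in sample values for the
// System, Prompt, and Response variables, mirroring how a Modelfile
// TEMPLATE is expanded. (Hypothetical helper for illustration.)
func renderPrompt(tmplText, system, prompt, response string) string {
	tmpl := template.Must(template.New("chat").Parse(tmplText))
	data := struct{ System, Prompt, Response string }{system, prompt, response}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, data); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// Broken template from the Modelfile: "{{ .Resp" was lost, leaving the
	// literal text "onse }}" where the model's reply should be substituted.
	broken := "{{ if .Prompt }}[INST]{{ .Prompt }}[/INST]\n{{ end }}onse }}</s>"
	// Corrected template: {{ .Response }} is a real template action again.
	fixed := "{{ if .Prompt }}[INST]{{ .Prompt }}[/INST]\n{{ .Response }}</s>{{ end }}"

	fmt.Println(renderPrompt(broken, "", "Hi", "Hello!")) // reply is dropped
	fmt.Println(renderPrompt(fixed, "", "Hi", "Hello!"))  // reply appears
}
```

Note that the broken template still parses without error, because "onse }}" is just literal text to the template engine; the model's reply is simply never inserted, which is why the failure looks like the model "not responding" rather than a load error.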
Hi @mashriram
We hadn't tested it on Ollama. We tried the GGUF version directly with llama.cpp, where the template loaded correctly.
We will test with Ollama and get back to you; in the meantime, please try using it with llama.cpp.
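For anyone who wants a local workaround right away, here is a sketch of a patched Modelfile using the corrected template from the report above. The model name in the FROM line is an assumption — point it at whatever tag your local sarvam-m install uses:

```
# Assumed base model name; adjust to your local tag.
FROM sarvam-m
TEMPLATE """{{ if .System }}<s>[SYSTEM_PROMPT]Think deeply before answering the user's question. Do the thinking inside ... tags.
{{ .System }}[/SYSTEM_PROMPT]{{ end }}{{ if .Prompt }}[INST]{{ .Prompt }}[/INST]
{{ .Response }}</s>{{ end }}"""
PARAMETER stop <s>
PARAMETER stop [INST]
```

Then rebuild and test the patched model with `ollama create sarvam-m-fixed -f Modelfile` followed by `ollama run sarvam-m-fixed` (the name `sarvam-m-fixed` is arbitrary).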
Hi, I am building an Android app using llama.cpp, and I was thinking of using Sarvam-M, but it is 25 GB, which is not practical for the Android platform. If you could share a smaller version, even with some capability cut down, it would be a great help ...
Thank you
