Sean (TheOceanBreeze) · theoceanbreeze
1 follower · 1 following
https://theoceanbreeze.github.io/
TheOceanBreeze
theoceanbreeze.aclevo.com
AI & ML interests: None yet
Recent Activity
reacted to omarkamali's post with ❤️ · 2 days ago
We got Qwen 3.5 to count Rs in Strawberry correctly! 🚨 Building on Sawtone, we’ve been testing a different way to feed language into an LLM to build the next generation of multilingual AI.

The usual setup gives the model tokenized text and asks it to perform various linguistic tasks. That works surprisingly well, until it doesn’t. Accents disappear. Words get mangled. Internal structure gets blurred away. And the cost of that gets higher once you move into multilingual and lower-resource settings.

So we tried adding a second path. In addition to the normal text input, the model also receives Sawtone: a byte-level word representation that preserves how a word is written, how it sounds, and how it is structured. Same LLM. Better interface.

In this proof of concept with Qwen 3.5 0.8B, that pushed our eval from 64% to 88%. The gains showed up exactly where tokenized models usually get shaky: diacritics, character order, exact spelling, and other form-sensitive behavior.

Sawtone itself is tokenizer-free, byte-level, and pre-trained across 507 languages. Still early, but promising!
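The post doesn't show Sawtone's actual encoding, so as a hypothetical sketch only: the core idea of a byte-level, tokenizer-free word representation can be illustrated by mapping a word to its raw UTF-8 bytes, which preserves exact spelling, character order, and diacritics that subword tokenizers can blur away.

```python
# Hypothetical sketch (not Sawtone's real encoding): a byte-level
# word representation. Raw UTF-8 bytes preserve the exact written
# form of a word, including diacritics and character order.

def byte_encode(word: str) -> list[int]:
    """Map a word to its UTF-8 byte values (0-255)."""
    return list(word.encode("utf-8"))

# A subword tokenizer may merge or normalize characters, but the
# byte view keeps every character intact:
print(byte_encode("cafe"))  # [99, 97, 102, 101]
print(byte_encode("café"))  # [99, 97, 102, 195, 169] - the accent survives as two bytes

# Form-sensitive tasks like letter counting are trivial at this level:
print(bytes(byte_encode("strawberry")).decode("utf-8").count("r"))  # 3
```

Because the representation is just bytes, it needs no language-specific vocabulary, which is consistent with the post's claim that the approach scales to multilingual and lower-resource settings.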
reacted to omarkamali's post with 👍 · 2 days ago
liked a model about 1 year ago: Aclevo/AclevoGPT-Gemma-2b-CoT-reasoning
Aclevo/AclevoGPT-Gemma-2b-CoT-reasoning: 3B · Updated Mar 9, 2025 · 2 · 2