This model has been trained and validated on external datasets to support medical research workflows. It is designed to provide reproducible benchmarks and serve as a foundation for further exploration in healthcare AI.
Key highlights:
- Built for medical research and diagnostic study contexts
- Validated against external datasets for reliability
- Openly available to empower the community in building stronger, more effective solutions
This release is part of my ongoing effort to make impactful AI research accessible through **Modotte**. A detailed blog post explaining the methodology, dataset handling, and validation process will be published soon.
Just did something I’ve been meaning to try for ages.
In only 3 hours, on 10 billion+ tokens, I trained a custom BPE, tiktoken-style tokenizer using my new library microtok — and it matches Qwen3's token efficiency.
Tokenizers have always felt like black magic to me. We drop them into every LLM project, but actually training one from scratch? That always seemed way too complicated.
Turns out it doesn’t have to be.
microtok makes the whole process stupidly simple — literally just 3 lines of code. No heavy setup, no GPU required. I built it on top of the Hugging Face tokenizers library so it stays clean, fast, and actually understandable.
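microtok's own API isn't shown in this post, but since it is built on the Hugging Face `tokenizers` library, a training run looks roughly like the sketch below. The corpus, vocabulary size, and special token are made-up illustration values, not microtok defaults:

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Tiny in-memory corpus for illustration; a real run streams billions of tokens
corpus = ["the quick brown fox", "tokenizers are not black magic", "the quick start"] * 200

# Byte-level BPE, the same family of tokenizer used by GPT-2/tiktoken-style vocabularies
tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)

# Train merges from the corpus; vocab_size is tiny here just to keep the demo fast
trainer = trainers.BpeTrainer(vocab_size=500, special_tokens=["<|endoftext|>"])
tokenizer.train_from_iterator(corpus, trainer=trainer)

enc = tokenizer.encode("the quick brown fox")
print(enc.tokens)
```

No GPU is involved at any point: BPE training is a CPU-bound merge-counting loop, which is why it stays cheap even at large corpus sizes.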
If you’ve ever wanted to look under the hood and build your own optimized vocabulary instead of just copying someone else’s, this is the entry point you’ve been waiting for.
I wrote up the full story, threw in a ready-to-run Colab template, and dropped the trained tokenizer on Hugging Face.
Introducing Seekify — a truly non‑rate‑limiting search library for Python
Tired of hitting rate limits when building search features? I’ve built Seekify, a lightweight Python library that lets you perform searches without the usual throttling headaches.
🔹 Key highlights
- Simple API — plug it in and start searching instantly
- No rate‑limiting restrictions
- Designed for developers who need reliable search in projects, scripts, or apps
📦 Available now on PyPI:
pip install seekify
👉 Check out the repo: https://github.com/Parveshiiii/Seekify

I'd love feedback, contributions, and ideas for real‑world use cases. Let's make search smoother together!
🚀 Wanna train your own AI Model or Tokenizer from scratch?
Building models isn’t just for big labs anymore — with the right data, compute, and workflow, you can create **custom AI models** and **tokenizers** tailored to any domain. Whether it’s NLP, domain‑specific datasets, or experimental architectures, training from scratch gives you full control over vocabulary, embeddings, and performance.
✨ Why train your own?
- Full control over vocabulary & tokenization
- Domain‑specific optimization (medical, legal, technical, etc.)
- Better performance on niche datasets
- Freedom to experiment with architectures
⚡ The best part?
- Tokenizer training (TikToken / BPE) can be done in **just 3 lines of code**.
- Model training runs smoothly on **Google Colab notebooks** — no expensive hardware required.
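To make the "3 lines" claim concrete: with the Hugging Face `tokenizers` library's `ByteLevelBPETokenizer` wrapper, create, train, and use really do fit in three lines. The toy corpus and vocabulary size below are invented for illustration; swap in your own domain data:

```python
from tokenizers import ByteLevelBPETokenizer

# Toy corpus standing in for your domain data (medical, legal, code, etc.)
corpus = ["patient presented with acute symptoms", "the model trains in minutes"] * 100

tok = ByteLevelBPETokenizer()                    # 1. create a byte-level BPE tokenizer
tok.train_from_iterator(corpus, vocab_size=600)  # 2. learn merges from your corpus
ids = tok.encode("the patient model").ids        # 3. tokenize text to ids
```

The same snippet runs unmodified in a Google Colab notebook cell, since training is pure CPU work.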
📢 The Announcement

Subject: XenArcAI is now Modotte – A New Chapter Begins! 🚀
Hello everyone,
We are thrilled to announce that XenArcAI is officially rebranding to Modotte!
Since our journey began, we’ve been committed to pushing the boundaries of AI through open-source innovation, research, and high-quality datasets. As we continue to evolve, we wanted a name that better represents our vision for a modern, interconnected future in the tech space.
What is changing?
The Name: Moving forward, all our projects, models, and community interactions will happen under the Modotte banner.
The Look: You’ll see our new logo and a fresh color palette appearing across our platforms.
What is staying the same?
The Core Team: It’s still the same people behind the scenes, including our founder, Parvesh Rawal.
Our Mission: We remain dedicated to releasing state-of-the-art open-source models and datasets.
Our Continuity: All existing models, datasets, and projects will remain exactly as they are—just with a new home.
This isn’t just a change in appearance; it’s a commitment to our next chapter of growth and discovery. We are so grateful for your ongoing support as we step into this new era.
Hey everyone! We’re excited to introduce our new Telegram group: https://t.me/XenArcAI
This space is built for **model builders, tech enthusiasts, and developers** who want to learn, share, and grow together. Whether you’re just starting out or already deep into AI/ML, you’ll find a supportive community ready to help with knowledge, ideas, and collaboration.
💡 Join us to:
- Connect with fellow developers and AI enthusiasts
- Share your projects, insights, and questions
- Learn from others and contribute to a growing knowledge base
👉 If you’re interested, hop in and be part of the conversation: https://t.me/XenArcAI
I'm very happy to announce our latest embedding model, sparkembedding-300m, based on embeddinggemma-300m. We fine-tuned it on 1M additional examples spanning over 119 languages, and the result is a model that achieves exceptional cross-lingual retrieval.