NanoChat Telugu 560M

NanoChat Telugu 560M is a Telugu language model designed for conversational AI and text generation. The model is optimized specifically for Telugu text processing, with a custom tokenizer that is significantly more efficient than general-purpose tokenizers.

Model Details

Model Description

  • Model Type: Causal Language Model (Decoder-only Transformer)
  • Language: Telugu (te)
  • Parameters: ~560M
  • Architecture:
    • Layers: 20
    • Model Dimension: 1,280
    • Attention Heads: 10
    • Vocabulary Size: 65,536 tokens
    • Maximum Sequence Length: 2,048 tokens

Tokenizer

The model uses a custom Telugu-specific BPE (Byte-Pair Encoding) tokenizer trained on Telugu text. It achieves a 70-82% reduction in token count compared to general-purpose tokenizers such as those used by GPT-2 and GPT-4, making it highly efficient for Telugu text processing.
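The reduction can be measured directly by encoding the same text with two tokenizers and comparing counts. The helper below is a minimal sketch; the GPT-2 tokenizer is used purely as an illustrative baseline, and the actual figure depends on the input text:

```python
def token_reduction(text, tokenizer_a, tokenizer_b):
    """Percentage reduction in token count when encoding `text`
    with tokenizer_a instead of tokenizer_b."""
    n_a = len(tokenizer_a.encode(text))
    n_b = len(tokenizer_b.encode(text))
    return 100.0 * (n_b - n_a) / n_b

# Example (downloads both tokenizers from the Hugging Face Hub):
# from transformers import AutoTokenizer
# te_tok = AutoTokenizer.from_pretrained("viswamaicoe/nanochat-telugu-560M")
# gpt2_tok = AutoTokenizer.from_pretrained("gpt2")
# print(token_reduction("హలో, మీరు ఎలా ఉన్నారు?", te_tok, gpt2_tok))
```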

How to Use

Basic Text Generation

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "viswamaicoe/nanochat-telugu-560M"
device = "cpu"  # or torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    dtype=torch.float32,
).to(device)

# Example: Generate text
text = "హలో, మీరు ఎలా ఉన్నారు?"
inputs = tokenizer(text, return_tensors="pt").to(device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=50)
    
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)

Using Embeddings

You can extract sentence-level embeddings from the model for various downstream tasks like semantic search, text classification, or similarity computation:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "viswamaicoe/nanochat-telugu-560M"
device = "cpu"  # or torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    dtype=torch.float32,
).to(device)


def get_sentence_embeddings(text, pooling_strategy="mean"):
    """Extract sentence-level embeddings using pooling"""
    inputs = tokenizer(text, return_tensors="pt")
    input_ids = inputs["input_ids"].to(device)
    attention_mask = inputs["attention_mask"].to(device)
    
    with torch.no_grad():
        outputs = model(
            input_ids=input_ids,
            attention_mask=attention_mask,
            output_hidden_states=True
        )
    
    # Get last hidden state
    last_hidden_state = outputs.hidden_states[-1]
    
    if pooling_strategy == "mean":
        # Mean pooling (excluding padding tokens)
        masked_embeddings = last_hidden_state * attention_mask.unsqueeze(-1)
        sentence_embedding = masked_embeddings.sum(dim=1) / attention_mask.sum(dim=1, keepdim=True)
    elif pooling_strategy == "max":
        # Max pooling over non-padding tokens
        masked_embeddings = last_hidden_state.masked_fill(~attention_mask.unsqueeze(-1).bool(), float('-inf'))
        sentence_embedding = masked_embeddings.max(dim=1)[0]
    else:
        raise ValueError(f"Unknown pooling strategy: {pooling_strategy}")
    
    return sentence_embedding

# Example usage
text = "హలో, మీరు ఎలా ఉన్నారు?"
sentence_embedding = get_sentence_embeddings(text, pooling_strategy="mean")
print(f"Sentence embedding shape: {sentence_embedding.shape}")
# Sentence embedding shape: torch.Size([1, 1280])

Pooling Strategies:

  • mean: Averages all token embeddings (excluding padding tokens) - recommended for general use
  • max: Takes the maximum value across each dimension - useful for capturing salient features
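Once pooled, embeddings can be compared with cosine similarity for tasks like semantic search. A minimal sketch (the inputs are assumed to be [1, hidden_dim] tensors such as those returned by get_sentence_embeddings above):

```python
import torch
import torch.nn.functional as F

def embedding_similarity(emb_a: torch.Tensor, emb_b: torch.Tensor) -> float:
    """Cosine similarity between two [1, hidden_dim] sentence embeddings.
    Returns a value in [-1, 1]; higher means more similar."""
    return F.cosine_similarity(emb_a, emb_b, dim=-1).item()

# Example:
# emb_a = get_sentence_embeddings("హలో, మీరు ఎలా ఉన్నారు?")
# emb_b = get_sentence_embeddings("నమస్కారం, మీరు బాగున్నారా?")
# print(embedding_similarity(emb_a, emb_b))
```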

Limitations

  • The model is primarily trained on Telugu text and may not perform well on other languages
  • As with all language models, outputs should be reviewed for accuracy and appropriateness
  • The model may generate text that requires fact-checking for factual claims

Ethical Use and Disclaimer

Important: This model is intended for research and legitimate use cases only. Users are responsible for ensuring their use of this model complies with applicable laws and ethical guidelines.

Prohibited Uses

This model should NOT be used for:

  • Generating harmful, offensive, or discriminatory content
  • Creating misleading or false information
  • Impersonating individuals or entities
  • Generating content that violates privacy or confidentiality
  • Any illegal activities or purposes that violate human rights
  • Automated decision-making in critical domains (healthcare, legal, financial) without human oversight

Responsible Use Guidelines

  • Fact-Checking: Always verify factual claims generated by the model, especially for important decisions
  • Bias Awareness: The model may reflect biases present in its training data. Be aware of potential biases in outputs
  • Human Oversight: Use human review for sensitive applications and important decisions
  • Transparency: Disclose when content is AI-generated, especially in public-facing applications
  • Privacy: Do not input sensitive personal information or confidential data
  • Context Appropriateness: Ensure generated content is appropriate for the intended context and audience

No Warranty

This model is provided "as-is" without any warranties. The model developers and maintainers are not responsible for any misuse, damages, or consequences arising from the use of this model. Users assume full responsibility for their use of this model.

Citation

If you use this model, please cite:

@misc{nanochat-telugu-560m,
  title={NanoChat Telugu 560M: A Telugu Language Model},
  author={Viswam AI COE},
  year={2025},
  howpublished={\url{https://huggingface.co/viswamaicoe/nanochat-telugu-560M}}
}

Contact

For questions or issues, please contact the model maintainers.
