
mlx-community/Ling-2.6-flash-mlx-6bit

Tags: Text Generation · MLX · Safetensors · English · bailing_hybrid · conversational · custom_code · 6-bit

Instructions for using mlx-community/Ling-2.6-flash-mlx-6bit with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.

  • Libraries
  • MLX

    How to use mlx-community/Ling-2.6-flash-mlx-6bit with MLX:

    # Make sure mlx-lm is installed
    # pip install --upgrade mlx-lm
    
    # Generate text with mlx-lm
    from mlx_lm import load, generate
    
    model, tokenizer = load("mlx-community/Ling-2.6-flash-mlx-6bit")
    
    prompt = "Write a story about Einstein"
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
    
    text = generate(model, tokenizer, prompt=prompt, verbose=True)
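
    For token-by-token output you can swap generate for stream_generate. A minimal sketch, assuming a recent mlx-lm release in which stream_generate yields response objects with a .text field (older releases yield plain strings):

    # Stream tokens as they are produced
    from mlx_lm import load, stream_generate

    model, tokenizer = load("mlx-community/Ling-2.6-flash-mlx-6bit")

    messages = [{"role": "user", "content": "Write a story about Einstein"}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

    # Print each chunk of text as soon as it is generated
    for response in stream_generate(model, tokenizer, prompt, max_tokens=512):
        print(response.text, end="", flush=True)
    print()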
  • Notebooks
  • Google Colab
  • Kaggle
  • Local Apps
  • LM Studio
  • Pi

    How to use mlx-community/Ling-2.6-flash-mlx-6bit with Pi:

    Start the MLX server
    # Install MLX LM:
    uv tool install mlx-lm
    # Start a local OpenAI-compatible server:
    mlx_lm.server --model "mlx-community/Ling-2.6-flash-mlx-6bit"
    Configure the model in Pi
    # Install Pi:
    npm install -g @mariozechner/pi-coding-agent
    # Add to ~/.pi/agent/models.json:
    {
      "providers": {
        "mlx-lm": {
          "baseUrl": "http://localhost:8080/v1",
          "api": "openai-completions",
          "apiKey": "none",
          "models": [
            {
              "id": "mlx-community/Ling-2.6-flash-mlx-6bit"
            }
          ]
        }
      }
    }
    Run Pi
    # Start Pi in your project directory:
    pi
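
    Before launching Pi, you can sanity-check that the server is reachable. A minimal sketch, assuming the default mlx_lm.server port of 8080 and a recent mlx-lm that exposes the model-listing endpoint:

    # List the models the local server exposes:
    curl http://localhost:8080/v1/models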
  • MLX LM

    How to use mlx-community/Ling-2.6-flash-mlx-6bit with MLX LM:

    Generate or start a chat session
    # Install MLX LM
    uv tool install mlx-lm
    # Interactive chat REPL
    mlx_lm.chat --model "mlx-community/Ling-2.6-flash-mlx-6bit"
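    For a single completion instead of an interactive session, mlx-lm also ships a one-shot CLI. A minimal sketch; the flags assume a recent mlx-lm release:
    # One-shot generation from the command line
    mlx_lm.generate --model "mlx-community/Ling-2.6-flash-mlx-6bit" \
       --prompt "Write a story about Einstein" --max-tokens 256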
    Run an OpenAI-compatible server
    # Install MLX LM
    uv tool install mlx-lm
    # Start the server
    mlx_lm.server --model "mlx-community/Ling-2.6-flash-mlx-6bit"
    # Calling the OpenAI-compatible server with curl
    curl -X POST "http://localhost:8080/v1/chat/completions" \
       -H "Content-Type: application/json" \
       --data '{
         "model": "mlx-community/Ling-2.6-flash-mlx-6bit",
         "messages": [
           {"role": "user", "content": "Hello"}
         ]
       }'
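
    Because the server speaks the OpenAI chat-completions protocol, any OpenAI client works against it. A minimal sketch using the official openai Python package (pip install openai); the dummy api_key is an assumption, since mlx_lm.server does not validate keys:

    # Call the local mlx_lm server through the OpenAI client
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

    response = client.chat.completions.create(
        model="mlx-community/Ling-2.6-flash-mlx-6bit",
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)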
Community discussions (2):
  • #2: mlx_lm problem. ValueError: Model type bailing_hybrid not supported. (opened 7 days ago by mkawaiUYH)
  • #1: Update README.md (opened 7 days ago by usermma)