Piper TTS - US English (Lessac Medium)

A Piper VITS text-to-speech model for US English, packaged for use with the RunAnywhere SDK.

Format: tar.gz archive (~64 MB) containing the ONNX model, a tokens file, and the bundled espeak-ng-data directory
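After downloading, the archive layout can be verified before handing it to the SDK. A minimal sketch in Python; the member names below follow the usual sherpa-onnx Piper packaging convention and are assumptions, so check them against the real archive:

```python
import io
import tarfile

# Expected top-level members of the tar.gz. These names are assumptions
# based on the common sherpa-onnx Piper layout, not confirmed contents.
EXPECTED = {
    "vits-piper-en_US-lessac-medium/en_US-lessac-medium.onnx",
    "vits-piper-en_US-lessac-medium/tokens.txt",
    "vits-piper-en_US-lessac-medium/espeak-ng-data",
}

def archive_members(fileobj) -> set:
    """Collect member names from a tar.gz, ignoring trailing slashes."""
    with tarfile.open(fileobj=fileobj, mode="r:gz") as tar:
        return {m.name.rstrip("/") for m in tar.getmembers()}

# Build a tiny in-memory stand-in archive with the documented layout;
# the real ~64 MB download would be checked the same way from disk.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    for name in sorted(EXPECTED):
        tar.addfile(tarfile.TarInfo(name))
buf.seek(0)

missing = EXPECTED - archive_members(buf)
```

If `missing` is non-empty for a real download, the archive is incomplete or the packaging convention differs from the assumed one.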

Usage with RunAnywhere SDK

Swift (iOS / macOS)

import RunAnywhere

RunAnywhere.registerModel(
    id: "vits-piper-en_US-lessac-medium",
    name: "Piper TTS US English (Lessac)",
    url: URL(string: "https://huggingface.co/runanywhere/vits-piper-en_US-lessac-medium/resolve/main/vits-piper-en_US-lessac-medium.tar.gz")!,
    framework: .onnx,
    modality: .speechSynthesis,
    artifactType: .archive(.tarGz, structure: .nestedDirectory),
    memoryRequirement: 65_000_000
)

// Synthesize speech
let audioData = try await RunAnywhere.synthesize("Hello, world!", voiceId: "vits-piper-en_US-lessac-medium")

Kotlin (Android / JVM)

import com.runanywhere.sdk.RunAnywhere
import com.runanywhere.sdk.models.*

RunAnywhere.registerModel(
    id = "vits-piper-en_US-lessac-medium",
    name = "Piper TTS US English (Lessac)",
    url = "https://huggingface.co/runanywhere/vits-piper-en_US-lessac-medium/resolve/main/vits-piper-en_US-lessac-medium.tar.gz",
    framework = InferenceFramework.ONNX,
    modality = ModelCategory.SPEECH_SYNTHESIS,
    memoryRequirement = 65_000_000L
)

// Synthesize speech
val audioData = RunAnywhere.synthesize("Hello, world!", voiceId = "vits-piper-en_US-lessac-medium")

Web (TypeScript)

import { RunAnywhere, LLMFramework, ModelCategory } from '@anthropic/runanywhere-web';

RunAnywhere.registerModels([{
  id: 'vits-piper-en_US-lessac-medium',
  name: 'Piper TTS US English (Lessac)',
  url: 'https://huggingface.co/runanywhere/vits-piper-en_US-lessac-medium/resolve/main/vits-piper-en_US-lessac-medium.tar.gz',
  framework: LLMFramework.ONNX,
  modality: ModelCategory.SpeechSynthesis,
  memoryRequirement: 65_000_000,
  artifactType: 'archive',
}]);

// Download & load
await RunAnywhere.downloadModel('vits-piper-en_US-lessac-medium');
await RunAnywhere.loadModel('vits-piper-en_US-lessac-medium');

// Synthesize speech
const audio = await RunAnywhere.synthesize('Hello, world!', 'vits-piper-en_US-lessac-medium');

React Native (TypeScript)

import { RunAnywhere } from 'runanywhere-react-native';

RunAnywhere.registerModel({
  id: 'vits-piper-en_US-lessac-medium',
  name: 'Piper TTS US English (Lessac)',
  url: 'https://huggingface.co/runanywhere/vits-piper-en_US-lessac-medium/resolve/main/vits-piper-en_US-lessac-medium.tar.gz',
  framework: 'onnx',
  modality: 'speechSynthesis',
  memoryRequirement: 65_000_000,
});

const audioData = await RunAnywhere.synthesize('Hello, world!', 'vits-piper-en_US-lessac-medium');

Flutter (Dart)

import 'package:runanywhere_flutter/runanywhere_flutter.dart';

RunAnywhere.registerModel(
  id: 'vits-piper-en_US-lessac-medium',
  name: 'Piper TTS US English (Lessac)',
  url: 'https://huggingface.co/runanywhere/vits-piper-en_US-lessac-medium/resolve/main/vits-piper-en_US-lessac-medium.tar.gz',
  framework: InferenceFramework.onnx,
  modality: ModelCategory.speechSynthesis,
  memoryRequirement: 65000000,
);

final audioData = await RunAnywhere.synthesize('Hello, world!', 'vits-piper-en_US-lessac-medium');

Model Details

Property     Value
Voice        Lessac (en_US)
Quality      Medium
Sample Rate  22050 Hz
Format       ONNX (Piper VITS)
Phonemizer   espeak-ng (bundled)
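Because the model synthesizes at 22050 Hz, raw output can be sized or wrapped for playback from the sample rate alone. A minimal sketch, assuming the SDK returns mono 16-bit PCM; that framing is an assumption, so confirm the actual return type of synthesize on your platform:

```python
import io
import wave

SAMPLE_RATE = 22050  # Hz, from the model details above
SAMPLE_WIDTH = 2     # bytes per sample, assuming 16-bit PCM output
CHANNELS = 1         # assuming mono output

def pcm_duration_seconds(pcm: bytes) -> float:
    """Duration of raw PCM audio at the model's sample rate."""
    return len(pcm) / (SAMPLE_RATE * SAMPLE_WIDTH * CHANNELS)

def pcm_to_wav(pcm: bytes) -> bytes:
    """Wrap raw PCM bytes in a WAV container for playback."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as wav:
        wav.setnchannels(CHANNELS)
        wav.setsampwidth(SAMPLE_WIDTH)
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(pcm)
    return buf.getvalue()

# One second of silence: 22050 samples x 2 bytes each
one_second = bytes(SAMPLE_RATE * SAMPLE_WIDTH)
```

The WAV wrapper is only needed if your playback path expects a container; platform audio APIs that accept raw PCM can use the bytes directly with the parameters above.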

Attribution

Original voice data from the Piper project. Model converted for sherpa-onnx by csukuangfj.
