# mlx-community/Qwen3-TTS-12Hz-0.6B-CustomVoice-8bit

This model was converted to MLX format from Qwen/Qwen3-TTS-12Hz-0.6B-CustomVoice using mlx-audio version 0.3.0.

Refer to the original model card for more details on the model.

## Use with mlx-audio

```bash
pip install -U mlx-audio
```

CLI Example:

```bash
python -m mlx_audio.tts.generate --model mlx-community/Qwen3-TTS-12Hz-0.6B-CustomVoice-8bit --text "Hello, this is a test."
```

Python Example:

```python
from mlx_audio.tts.utils import load_model
from mlx_audio.tts.generate import generate_audio

# Load the 8-bit quantized MLX model from the Hugging Face Hub.
model = load_model("mlx-community/Qwen3-TTS-12Hz-0.6B-CustomVoice-8bit")

generate_audio(
    model=model,
    text="Hello, this is a test.",
    ref_audio="path_to_audio.wav",  # reference clip that supplies the custom voice
    file_prefix="test_audio",       # generated audio is written using this prefix
)
```
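
The loaded model can be reused across multiple generations, so the weights only need to be loaded once. A minimal sketch, assuming the same `load_model` / `generate_audio` interface shown above (the text list and file prefixes here are purely illustrative):

```python
from mlx_audio.tts.utils import load_model
from mlx_audio.tts.generate import generate_audio

# Load once, then reuse the model for several utterances (illustrative loop).
model = load_model("mlx-community/Qwen3-TTS-12Hz-0.6B-CustomVoice-8bit")

lines = [
    "Welcome to the demo.",
    "This voice is cloned from the reference clip.",
]

for i, line in enumerate(lines):
    generate_audio(
        model=model,
        text=line,
        ref_audio="path_to_audio.wav",  # same reference voice for every line
        file_prefix=f"demo_{i:02d}",    # one output prefix per line
    )
```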