Transformer Lab Can Talk Now: Introducing Text-to-Speech, Training & One-Shot Voice Cloning

5 min read

🎉 Transformer Lab just got a voice! We’re thrilled to announce audio modality support so you can generate, clone, and train voices directly in Transformer Lab.

What’s included in this release

  • 🎙️ Turn text into speech (TTS) with CUDA, AMD and MLX
  • 🛠️ Train your own TTS models on CUDA and AMD
  • 🧬 Clone a voice in one shot for lightning-fast replication on CUDA and AMD

Transformer Lab Now Works with AMD GPUs

18 min read

We're excited to announce that Transformer Lab now supports AMD GPUs! Whether you're on Linux or Windows, you can now harness the power of your AMD hardware to run and train models with Transformer Lab.
👉 Read the full installation guide here

TL;DR

If you have an AMD GPU and want to do ML work, just follow our guide above and skip a lot of stress.

Our journey to a reliable PyTorch workspace on AMD was... messy. We've documented everything below.
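Most of that stress comes down to getting a ROCm build of PyTorch installed. As a minimal sketch (the `rocm6.2` version tag is an assumption here; substitute the tag that matches your installed ROCm stack and GPU, per the guide):

```shell
# Install a ROCm build of PyTorch from the official wheel index.
# "rocm6.2" is illustrative -- pick the tag matching your ROCm version.
pip3 install torch torchvision torchaudio \
  --index-url https://download.pytorch.org/whl/rocm6.2

# Sanity check: on AMD, ROCm devices surface through the torch.cuda API.
python3 -c "import torch; print(torch.cuda.is_available())"
```

If the check prints `False`, the usual suspects are a driver/ROCm version mismatch or a CPU-only wheel shadowing the ROCm one.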

Generating Datasets and Training Models with Transformer Lab

· 4 min read

Introduction

In this tutorial, we'll explore how to bridge a knowledge gap in our model by generating custom dataset content and then fine-tuning the model using a LoRA adapter. The process begins with generating data from raw text using the Generate Data from Raw Text Plugin and concludes with fine-tuning via the MLX LoRA Plugin within Transformer Lab.
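Data generation from raw text typically starts by splitting the source into overlapping chunks that a model can then turn into training examples. The sketch below shows that chunking step in generic form; it is not the plugin's actual implementation, and `chunk_size`/`overlap` are illustrative parameters:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split raw text into overlapping word-window chunks.

    Overlap preserves context across chunk boundaries so generated
    examples don't lose information that straddles a split point.
    """
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunk = " ".join(words[start:start + chunk_size])
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(words):
            break
    return chunks
```

Each chunk can then be fed to a model with a prompt asking it to produce question/answer pairs, yielding a dataset suited to LoRA fine-tuning.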