Announcing Transformer Lab GPU Orchestration
Today, we’re launching Transformer Lab GPU Orchestration, a flexible open-source platform for AI/ML teams to manage large-scale training across clusters of GPUs.
🎉 Transformer Lab just got a voice! We’re thrilled to announce audio modality support so you can generate, clone, and train voices directly in Transformer Lab.
Transformer Lab now supports Diffusion image generation and training!
Out of the box, we support major open-weight base models, including:
What can you do? Well...
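To make that concrete, here's roughly what a single text-to-image generation looks like at the library level. This is a minimal sketch using Hugging Face's diffusers package, not Transformer Lab's internal code, and the model ID is just one illustrative open-weight checkpoint:

```python
# pip install diffusers transformers accelerate
import torch
from diffusers import AutoPipelineForText2Image

# Load an open-weight base model (SDXL here, as one illustrative choice).
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)
pipe.to("cuda")

# A single prompt in, a PIL image out.
image = pipe(prompt="a watercolor fox in a snowy forest").images[0]
image.save("fox.png")
```

Transformer Lab wraps this kind of pipeline in its UI, so none of this code is needed to use the feature.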
We're excited to announce that Transformer Lab now supports AMD GPUs! Whether you're on Linux or Windows, you can now harness the power of your AMD hardware to run and train models with Transformer Lab.
👉 Read the full installation guide here
If you have an AMD GPU and want to do ML work, just follow our guide above and skip a lot of stress.
Figuring out how to build a reliable PyTorch workspace on AMD was... messy. We've documented the whole journey below.
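One quick sanity check we'd suggest once you're set up: confirm that the ROCm build of PyTorch actually sees your GPU. The install command below is an example only; the ROCm version tag in the index URL changes between releases, so check pytorch.org for the current one.

```python
# Example install of a ROCm build of PyTorch (version tag may differ):
#   pip install torch --index-url https://download.pytorch.org/whl/rocm6.2
import torch

# ROCm builds expose AMD GPUs through PyTorch's familiar CUDA-named API.
print(torch.cuda.is_available())      # True if the AMD GPU is visible
print(torch.cuda.get_device_name(0))  # e.g. your Radeon / Instinct card
```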
We're excited to announce a significant enhancement to Transformer Lab: integration with Microsoft's open-source MarkItDown library! This update dramatically expands the types of documents you can work with in Transformer Lab, making it more versatile and powerful for your AI projects.
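For context, MarkItDown's own API is tiny: you hand it a file and get Markdown text back. A minimal sketch (the filename is a placeholder; Transformer Lab invokes the library for you when you import documents):

```python
# pip install markitdown
from markitdown import MarkItDown

md = MarkItDown()
# Converts PDFs, Office files, HTML, and more into Markdown text.
result = md.convert("quarterly_report.xlsx")  # placeholder filename
print(result.text_content)
```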
Retrieval-Augmented Generation (RAG) combines the power of retrieval systems with generative AI to create more accurate, factual, and contextually relevant responses. In this hands-on tutorial, we'll walk through building and evaluating a complete RAG pipeline in Transformer Lab using documentation files as our knowledge base.
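Before we dive into the Transformer Lab workflow, here is the retrieve-then-generate loop in miniature. This is a generic sketch using sentence-transformers for embeddings, not the tutorial's actual plugins, and the toy documents are placeholders:

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

# Placeholder knowledge base; the tutorial uses documentation files instead.
docs = [
    "Transformer Lab supports fine-tuning with LoRA adapters.",
    "The Ollama plugin runs inference on your local machine.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, convert_to_tensor=True)

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank documents by cosine similarity to the query embedding.
    q_vec = embedder.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(q_vec, doc_vecs, top_k=k)[0]
    return [docs[h["corpus_id"]] for h in hits]

query = "How do I run models locally?"
context = "\n".join(retrieve(query))
# Prepending retrieved context grounds the model's answer in the documents.
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
print(prompt)  # this prompt then goes to whichever generation model you use
```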
Transformer Lab is excited to announce robust multi-GPU support for fine-tuning large language models. This update allows users to leverage all available GPUs in their system, dramatically reducing training times and enabling work with larger models and datasets.
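If you're curious what the system sees before a run starts, the check below is the plain-PyTorch equivalent of what any multi-GPU launcher does first. Transformer Lab handles the actual distribution for you; this sketch is only illustrative:

```python
import torch

# Count the GPUs visible to this process; a distributed launcher
# (e.g. `torchrun --nproc_per_node=N train.py`) starts one worker per GPU.
n_gpus = torch.cuda.device_count()
print(f"Training can shard across {n_gpus} GPU(s)")

# Limiting a job to specific GPUs works the standard way, e.g. by setting
# CUDA_VISIBLE_DEVICES=0,1 in the environment before launching.
```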
Transformer Lab has recently added an Ollama Server plugin which allows users to run inference through Ollama on their local machine.
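Since Ollama serves a small HTTP API on localhost, you can also poke at it directly to see what the plugin talks to. A minimal sketch (the model name is an example and must already be pulled, and the Ollama server must be running):

```python
import requests

# Ollama listens on port 11434 by default.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",           # any model fetched with `ollama pull`
        "prompt": "Why is the sky blue?",
        "stream": False,               # return one JSON object, not a stream
    },
)
print(resp.json()["response"])
```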
In this guide, we'll walk through creating an evaluator plugin within Transformer Lab named sample-data-print. This plugin will load a dataset and print its contents, along with some sample parameters, using the new tlab_trainer decorator approach.
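As a preview of the shape such a plugin takes, here is a rough sketch. Treat the import path, decorator name, and parameter access below as assumptions to verify against the guide and the Transformer Lab SDK docs, not as the SDK's confirmed API:

```python
# Illustrative sketch only: import path, decorator, and params access are
# assumptions, not verified Transformer Lab SDK API.
from transformerlab.sdk.v1.train import tlab_trainer  # assumed import path
from datasets import load_dataset

@tlab_trainer.job_wrapper()  # assumed: wires up job setup, progress, teardown
def sample_data_print():
    # Parameters configured in the UI are exposed via the decorator (assumed).
    name = tlab_trainer.params.get("dataset_name", "imdb")  # placeholder default
    ds = load_dataset(name, split="train")
    for row in ds.select(range(5)):  # print a small sample of the dataset
        print(row)

sample_data_print()
```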
In this tutorial, we'll explore how to bridge a knowledge gap in our model by generating custom dataset content and then fine-tuning the model using a LoRA adapter. The process begins with generating data from raw text using the Generate Data from Raw Text Plugin and concludes with fine-tuning via the MLX LoRA Plugin within Transformer Lab.
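The hand-off between the two steps is a small instruction-style dataset. Exact schemas vary by trainer, so the snippet below only sketches the general shape of such a file; the plugins produce the real thing for you:

```python
import json

# Hypothetical Q/A pairs distilled from raw text, purely for illustration.
pairs = [
    {
        "prompt": "What does a LoRA adapter change?",
        "completion": "A small set of low-rank weight updates; the base model stays frozen.",
    },
]

# Many fine-tuning tools consume JSONL: one JSON object per line.
with open("train.jsonl", "w") as f:
    for p in pairs:
        f.write(json.dumps(p) + "\n")
```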