Autonomous General Learning Model

mindXtrain Demo is Live — Qwen3-8B on a Single MI300X for Less Than $3

Day 5 of the AMD × lablab.ai Developer Hackathon. The demo URL is live: mindx.pythai.net/hackathon. A trained, FP8-quantized Qwen3-8B (LoRA via mindXtrain) is running on a single MI300X behind vLLM-ROCm and an OpenAI-compatible API. No auth required during the hackathon judging window. This post covers what the pipeline does end-to-end, the cost numbers against the H100 baseline, and the full AMD stack the demo exercises.

1. The pipeline you can poke at

The endpoint is […]
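
Since the endpoint is OpenAI-compatible, a request against it should look like any other vLLM deployment. A minimal sketch follows; the `/v1` route, model id, and api_key handling are assumptions, not confirmed details from the post (the excerpt truncates before the endpoint specifics):

```python
# Hypothetical sketch: the base_url path and model id are assumptions --
# check the demo page for the real endpoint details.
from openai import OpenAI

client = OpenAI(
    base_url="https://mindx.pythai.net/hackathon/v1",  # assumed route
    api_key="not-needed",  # no auth during the judging window
)

resp = client.chat.completions.create(
    model="qwen3-8b",  # assumed model id; list the real one via client.models.list()
    messages=[{"role": "user", "content": "One sentence: what GPU are you running on?"}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```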


The 60-Second AOT Autotune Probe — How mindXtrain Pins MI300X Performance Before Training Starts

Day 2 of the AMD × lablab.ai Developer Hackathon. The 60-second AOT autotune probe — the layer that mindXtrain is built around — runs on real MI300X silicon for the first time. This post explains what the probe measures, why “AOT-only” is the discipline that matters, and how the probe’s output flows into the rest of the pipeline so that training is reproducible across machines and across runs.

1. What the probe is, and what […]
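
To make the AOT idea concrete, here is a hypothetical sketch (not mindXtrain's actual probe code): time a few training-shaped GEMMs once, before training starts, and pin the measurements to a file that every later run consumes verbatim, so nothing re-autotunes at runtime:

```python
# Hypothetical AOT probe sketch -- not mindXtrain's implementation.
import json
import time

import torch  # torch's "cuda" device maps to ROCm/HIP on MI300X

SHAPES = [(4096, 4096, 4096), (8192, 4096, 4096)]  # assumed probe shapes

def time_gemm(m: int, k: int, n: int, iters: int = 50) -> float:
    """Mean seconds per bf16 GEMM of the given shape."""
    a = torch.randn(m, k, device="cuda", dtype=torch.bfloat16)
    b = torch.randn(k, n, device="cuda", dtype=torch.bfloat16)
    for _ in range(5):  # warmup
        a @ b
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    return (time.perf_counter() - t0) / iters

results = {f"{m}x{k}x{n}": time_gemm(m, k, n) for m, k, n in SHAPES}
with open("aot_probe.json", "w") as f:
    json.dump(results, f, indent=2)  # training runs read this pinned file
```

Persisting the probe output, rather than letting each run tune itself, is what makes the measurement reproducible across machines and across runs.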


mindXtrain Day 1 — Why MI300X for Sovereign Cognition

Day 1 of the AMD × lablab.ai Developer Hackathon. Today the scaffolding goes up: mindXtrain, a one-command Qwen3 fine-tuner native to AMD MI300X. This post covers why the MI300X is the right hardware for sovereign cognition work, what the scaffold looks like at end-of-Day-1, and what changes tomorrow when the autotune probe goes live on real silicon.

1. Why MI300X, specifically, for this work

The argument starts with one number: 192 GB of HBM3 per […]
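
To see why that number carries the argument, a back-of-envelope LoRA memory budget for an 8B model helps. Every figure below is a rough assumption for illustration, not a measurement from the post:

```python
# Back-of-envelope only: all figures are illustrative assumptions.
GB = 1e9

base_params = 8e9                             # Qwen3-8B parameter count
base_weights = base_params * 2 / GB           # frozen bf16 weights ~= 16 GB

lora_params = 50e6                            # hypothetical adapter size
# trainable bf16 weights + fp32 grads + two fp32 Adam moments
adapter_state = lora_params * (2 + 4 + 4 + 4) / GB   # well under 1 GB

activations = 40                              # generous placeholder for
                                              # activations + workspace

total = base_weights + adapter_state + activations
print(f"~{total:.0f} GB used of 192 GB HBM3 on one MI300X")
```

Even with generous padding, the whole job fits on a single device, which is the core of the single-GPU argument.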
