I’m thrilled to share that Mako has raised an $8.5M seed round, led by M13, with participation from industry leaders. Additionally, we’re announcing partnerships with AMD and Tenstorrent.
This is a big moment for our team, but more importantly, it marks a turning point in how the industry thinks about performance engineering in the world of AI.
Why now?
GPUs are the workhorses behind modern AI. But writing code for them remains an archaic, manual, and expensive process. Performance still depends on a small group of highly specialized engineers, and the best code is often locked away inside hyperscalers.
We started Mako with one mission:
To make peak GPU performance universally accessible through intelligent, automated code generation.
Instead of requiring teams to master CUDA or hunt for rare kernel engineers, we’re building an AI system that writes and continuously tunes the low-level GPU code for you. Whether you’re deploying to NVIDIA, AMD, or Tenstorrent, Mako helps you get the most out of your hardware automatically.
What we’ve built so far
Mako is an intelligent GPU optimization platform. It’s made up of two core products:
MakoGenerate - Uses AI to generate performant GPU kernels in under 60 seconds.
MakoOptimize - Continuously autotunes those kernels for maximum speed and efficiency, 24/7.
With Mako, engineering teams have achieved:
Up to 3X faster inference performance
Up to 80% infrastructure cost savings
GPU kernels up to 10x faster than torch.compile
Plug-and-play deployment across cloud or on-prem GPUs
No rewriting, no black-box compilers, no lock-in.
Why we raised
This new funding allows us to expand our engineering team, deepen support across diverse hardware environments, and bring Mako to more developers working in AI, graphics, simulation, and scientific computing.
It also enables us to double down on partnerships with GPU vendors and chip designers. We’re proud to be working with AMD on kernel generation for ROCm-compatible hardware, and with Tenstorrent to unlock performance on next-gen silicon.
The future we see
AI infrastructure is entering a new era - one where agents write the infrastructure itself.
Compilers, profilers, debuggers - these tools won’t go away, but they’ll be abstracted by intelligent systems that deeply understand performance constraints and hardware nuances.
We’re building one of those systems. It’s still early, but the future feels close. And at Mako, we’re excited to help shape it.
If you’re building large-scale AI systems and want to eliminate your performance bottlenecks, we’d love to show you what Mako can do.
Thanks for being part of the journey.