We're reaching an inflection point in embedded design.
A year ago, running meaningful AI inference on a microcontroller felt like a niche experiment. Today, it's becoming a baseline expectation from customers in industrial, automotive, and consumer electronics.
The hardware is ready; firmware teams aren't
The hardware is ready: MCUs with dedicated NPUs are shipping at price points that make edge AI viable in volume products. The real bottleneck? Firmware teams.
Most embedded engineers were never trained to think about model optimisation, quantisation, or inference pipelines. And most AI engineers have never had to worry about 256KB of RAM or hard real-time constraints.
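To make that skills gap concrete, quantisation is a representative example of the kind of technique firmware teams now need to reason about. Below is a minimal sketch in plain NumPy (not any specific deployment toolchain) of symmetric int8 post-training quantisation: a float32 weight tensor is mapped onto the int8 range with a single scale factor, cutting storage fourfold at the cost of a bounded rounding error.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantisation: map float32 weights
    onto [-127, 127] using one scale factor for the whole tensor."""
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values from the int8 tensor."""
    return q.astype(np.float32) * scale

# A toy "layer" of 1024 float32 weights: 4 KB shrinks to 1 KB as int8.
w = np.random.randn(1024).astype(np.float32)
q, scale = quantize_int8(w)
print(w.nbytes, q.nbytes)  # 4096 1024

# Rounding error per weight is at most half the scale step.
err = np.max(np.abs(dequantize(q, scale) - w))
print(err <= scale)  # True
```

On a 256KB-RAM part, that 4x reduction is often the difference between a model fitting or not; the trade-off analysis (which layers tolerate it, how accuracy degrades) is exactly where embedded and ML skill sets have to meet.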
Bridging the gap
The companies winning right now are the ones bridging that gap: investing in cross-training, hiring hybrid talent, and building firmware architectures that treat AI as a first-class citizen rather than a bolt-on feature.
A board-level strategic question
From a board perspective, this isn't just a technical hiring challenge. It's a strategic question: does your organisation have the firmware capability to deliver the AI-enabled products your market will demand in 18 months?
If the answer is "we'll figure it out later," later is already here.