Webinars

Beyond the Bench: Scaling AI at the Edge: Engineering for Next-Generation Embedded Systems

Designing AI systems for embedded applications requires strategic constraints and careful hardware selection, transforming resource limitations into innovation drivers while keeping safety paramount as intelligence moves from the cloud to physical devices.


In our recent webinar, we brought together industry experts to dive deep into the challenges and opportunities of implementing AI at the edge. We explored how constraints drive innovation, when to choose between different hardware platforms, and the critical safety considerations for AI-powered physical systems. For those who missed this insightful discussion, here's your front-row seat to the key insights shared.


Meet the Experts

Our panel brought together leaders from both the hardware research and device management worlds: engineers with hands-on experience implementing AI in resource-constrained environments, and builders of the platforms that help deploy and manage these sophisticated systems.

Designing Under Constraints: When Less is More ⚡

The conversation kicked off with a fascinating exploration of how limitations in embedded systems aren't just obstacles — they're rocket fuel for innovation.

"Anybody can dam a river with rocks and drive over it, but it takes an engineer to make a bridge that barely stands," noted Jeff Ciesielski from LeafLabs, perfectly capturing the balancing act that makes embedded engineering both challenging and rewarding.

We discussed how constraints force engineers into territories they'd never otherwise explore, leading to unexpectedly elegant solutions. This creative pressure is particularly evident in resource-constrained environments where engineers might find themselves literally counting individual bytes to implement new features in firmware.

The Reality of Scaling Hardware

When scaling from prototype to production, even small component decisions have massive implications. A 50-cent difference per unit becomes a $5,000 question at 10,000 units — and that quickly escalates with volume.
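The arithmetic here is simple but worth making explicit. A quick back-of-the-envelope sketch (volumes and the 50-cent delta are illustrative, matching the example above):

```python
# How a per-unit BOM difference compounds with production volume.
def bom_delta(per_unit_delta: float, volume: int) -> float:
    """Total cost impact of a per-unit component price difference."""
    return per_unit_delta * volume

for volume in (10_000, 100_000, 1_000_000):
    print(f"{volume:>9,} units: ${bom_delta(0.50, volume):,.2f}")
# 10,000 units turns a $0.50 choice into a $5,000.00 decision.
```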

Our discussion highlighted several critical scaling challenges:

  • Component costs multiply dramatically at volume
  • Design choices become increasingly constrained by availability and supply chain resilience
  • BOM optimization directly impacts business viability

We recommended a practical strategy: aim for the middle of a component family range during design. As Jeff put it, "You can always pay money to solve the problem later if you don't have enough resources, but it's a lot harder engineering-time-wise to claw back those resources later."

The AI Trade-off

Our conversation revealed an important counterpoint to the "AI-first" mentality prevalent today. While AI can solve incredibly complex problems, it also dramatically increases computational requirements and development costs.

"When we try to deploy AI, we try to do it very sparingly. And in situations where AI shines," Jeff explained. "It's not in these well-understood mathematical models. It's where you have this crazy high dimensionality in a function that you're trying to fit to."

We cautioned against using machine learning as a default approach, sharing examples where traditional engineering approaches like Kalman filters for sensor fusion are often more efficient and deterministic than ML alternatives. The key is being strategic — save AI for problems where traditional approaches hit a wall.
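To ground the Kalman-filter point: for low-dimensional, well-modeled signals, a few lines of deterministic code often beat an ML model on both efficiency and predictability. A minimal 1D sketch (noise parameters are illustrative, not from the webinar):

```python
# Minimal 1D Kalman filter: blends a running estimate with noisy
# sensor readings, weighted by their relative uncertainties.
class Kalman1D:
    def __init__(self, q: float, r: float, x0: float = 0.0, p0: float = 1.0):
        self.q = q   # process noise variance
        self.r = r   # measurement noise variance
        self.x = x0  # current state estimate
        self.p = p0  # current estimate variance

    def update(self, z: float) -> float:
        self.p += self.q                 # predict: uncertainty grows
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)       # correct toward measurement z
        self.p *= (1.0 - k)              # uncertainty shrinks after update
        return self.x

kf = Kalman1D(q=1e-4, r=0.25)
estimate = 0.0
for z in [1.2, 0.9, 1.1, 1.05, 0.95]:  # noisy readings of a ~1.0 signal
    estimate = kf.update(z)
```

The whole filter is a handful of multiply-adds per sample: fully deterministic, trivially auditable, and cheap enough for the smallest microcontroller.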

From Microcontrollers to FPGAs: Navigating Hardware Complexity 🤖

As devices get smarter, choosing the right hardware platform becomes increasingly critical.

When to Choose an FPGA

We outlined clear criteria for when FPGAs make sense:

  • For massively parallel data processing, such as reading ADCs at 100 MHz
  • High-speed timing applications requiring nanosecond precision
  • When capturing and correlating multiple simultaneous events
  • Applications requiring sophisticated data processing pipelines

We discussed a fascinating real-world example involving brain imaging technology that required correlating photon captures with external events at nanosecond precision — a perfect application for FPGAs that would be impossible on microcontrollers.

Another surprising insight was that FPGAs aren't just for edge applications: they're also deployed extensively in data centers for high-speed networking, packet routing, and algorithmic trading, where speed is critical. We learned how FPGAs can process network packets without going through traditional networking stacks, dramatically reducing latency.

The Evolution of Edge Computing Hardware

We explored how the FPGA landscape is changing. Companies like Lattice and Efinix have developed extremely affordable, low-power FPGAs specifically for deployment in wearables and edge devices. This democratization is enabling sophisticated parallel processing in consumer-grade products.

"Most smartwatches and similar devices have small FPGAs," Jeff mentioned, explaining how these are often used for sensor fusion and preprocessing to avoid waking power-hungry primary microcontrollers.

FPGA vs. ASIC: The $25 Million Question

Our discussion turned to when companies should consider custom silicon. With modern tape-out costs potentially hitting $25 million for a single iteration at 7nm, the economics only make sense at massive scale.

"You could buy a lot of FPGAs for $25 million," Jeff observed. "And that $25 million is one spin, one mask set. Did you get it right? Maybe, maybe not. So maybe it's $50 million."

For most products, FPGAs provide the perfect balance of performance and flexibility without requiring venture capital just for hardware development.

AI at the Edge: Safety Considerations 🔒

The most thought-provoking segment tackled the often-overlooked topic of safety when AI meets the physical world.

Strategic AI Deployment

We addressed the "just throw AI at it" mentality that's become common. We shared practical experiences where teams initially deployed AI solutions, only to later discover edge cases that required traditional engineering approaches as backups.

One particularly illuminating example involved a key-cutting machine that used computer vision and machine learning. The system initially performed well in controlled environments but struggled with real-world complexity when deployed to locksmiths. The team had to reverse-engineer traditional measurement systems to supplement the AI when confidence levels fell below certain thresholds.

"We went machine learning first and then actually had to reverse engineer all of these different mechanical traits out of keys and build statistical models," Jeff explained. This highlighted an important lesson about potentially starting with traditional engineering approaches and adding AI where it provides clear benefits.

Supervised vs. Unsupervised Learning in Physical Systems

We delved into the critical differences between supervised and unsupervised learning in safety-critical applications. We emphasized that when AI systems interact with the physical world, there needs to be clear accountability.

"If it's unsupervised learning, you're essentially saying no one here is responsible for the outcome... from a liability standpoint, that's just not true," Jeff noted. For systems interacting with humans or the physical world, we advocated for either supervised learning approaches or strong guardrails around unsupervised systems.

We discussed a compelling anecdote about AI trained to play golf in a simulation: while it became extremely accurate, it developed a technique of jumping horizontally and hitting the ball mid-air — effective in simulation but potentially fatal for a human. This illustrated how unsupervised systems optimize for targets without considering real-world consequences.
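The "strong guardrails" mentioned above often reduce to something very simple: never let a learned controller command an actuator outside a physically validated envelope, no matter how clever its strategy looks in simulation. A minimal sketch (limits and names are hypothetical):

```python
# Clamp a learned policy's output to validated physical limits
# before it ever reaches the actuator.
def guarded_action(policy_output: float, lower: float, upper: float) -> float:
    """Hard limit: the command may never leave [lower, upper]."""
    return max(lower, min(upper, policy_output))

# A policy that "discovers" an out-of-envelope trick gets clipped:
safe_cmd = guarded_action(policy_output=47.0, lower=-10.0, upper=10.0)
```

The clamp is deliberately dumb: because it sits outside the learned system, its behavior is provable even when the model's is not.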

Physical Constraints Meet AI Unpredictability

One of the most valuable insights was how embedding AI into physical systems introduces unpredictability into inherently constrained environments. We discussed how traditional embedded systems already face challenges from unexpected physical conditions — from warehouses that exceed component temperature ratings to cosmic ray bit flips.

Adding the unpredictability of machine learning models to these already challenging environments requires particularly thoughtful implementation. As Justin put it, "It's a very deliberate choice to introduce, for the sake of argument, an unconstrained portion to your constrained system."

🚀 What's Next? The Future of Edge AI

Despite the challenges, we expressed genuine excitement about future possibilities. We discussed the potential of sophisticated wearable devices providing unprecedented health insights through thoughtfully implemented edge AI.

"In 10-15 years... what if you could have a wearable with an FPGA that tells you things like 'serotonin levels are dipping'?" Jeff envisioned these technologies transforming healthcare and personal wellness by combining sophisticated sensor networks with edge processing.

However, we tempered this optimism with a call for responsibility: "We're in a Wild West period that's very interesting to watch, but also a little bit scary, where we have a lot of development happening... We're outpacing our ability to ask if we should."

This webinar only scratched the surface of embedded AI systems and the engineering challenges they present. As the field evolves at breakneck speed, staying informed about best practices and emerging technologies is essential for developers and product teams.

Watch the full webinar recording for a deeper dive into these topics and more insights from our expert panel.

Ready to Navigate Edge AI Complexity?

Are you working on embedding AI capabilities into your products? Schedule a technical consultation with our team to discuss your specific hardware constraints and AI integration challenges. Let's Talk!