Embedded Vision, Robotics & AI at the Edge
I specialize in embedded vision, robotics, and AI at the edge, with over 30 years of DSP and FPGA-based embedded design experience. My career began in ASIC design and has evolved through deep learning platforms, reference designs, and accelerated inference on edge devices.
I have had the opportunity to benchmark a wide range of edge AI solutions and publish the results publicly.
Recently, I have been exploring computer vision and agentic AI approaches to help humans interact with robotics.
When not working, I am a passionate rock climber and woodworker.
- Benchmarking — Methodology-first, vendor-neutral evaluation of edge AI accelerators (power, throughput, latency, energy-per-inference, accuracy) and pipeline-level performance on cascaded real-world workloads, with reproducible measurements, published methodology, and head-to-head comparisons.
- Edge AI — Porting models to external AI accelerators (Hailo-8, AzurEngine, Axelera, MemryX, DeepX) and integrated NPUs (AMD Vitis AI, Qualcomm NPU).
- Embedded Vision — In-depth experience building image-capture pipelines on AMD programmable logic platforms (Spartan-6, Zynq-7000 SoC, Zynq UltraScale+, Versal AI Edge), including camera calibration and ISP tuning for mono, dual (stereo), and multi-camera systems.
- Robotics — Hand-controlled robotic arms and mobile robots using MediaPipe, pose estimation, and ASL recognition; LLM-based agents integrated with ROS2 for autonomous robot control.
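The energy-per-inference metric from the benchmarking bullet above follows directly from measured average power and sustained throughput. A minimal sketch (the function name and the example numbers are hypothetical, not measurements from any specific accelerator):

```python
def energy_per_inference_mj(avg_power_w: float, throughput_ips: float) -> float:
    """Energy per inference in millijoules.

    E [J] = P [W] / throughput [inferences/s]; multiply by 1000 for mJ.
    """
    if throughput_ips <= 0:
        raise ValueError("throughput must be positive")
    return avg_power_w / throughput_ips * 1000.0

# Hypothetical example: an accelerator drawing 2.5 W while sustaining
# 500 inferences/s costs 5.0 mJ per inference.
energy = energy_per_inference_mj(2.5, 500.0)  # → 5.0
```

Measuring average power over the same window in which throughput is counted (rather than multiplying a power spec by a latency spec) is what makes the figure comparable across devices with very different batching and pipelining behavior.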



