
Mario Bergeron

Embedded Vision, Robotics & AI at the Edge

I specialize in embedded vision, robotics, and AI at the edge, with over 30 years of DSP and FPGA-based embedded design experience. My career began in ASIC design and has evolved through deep learning platforms, reference designs, and accelerated inference on edge devices.

I have had the unique opportunity to benchmark various AI solutions and have been reporting my results publicly.

Recently, I have been exploring computer vision and agentic AI approaches to help humans interact with robotics.

When not working, I am a passionate rock climber and woodworker.

Areas of Expertise

  • Benchmarking — Methodology-first, vendor-neutral evaluation of edge AI accelerators (power, throughput, latency, energy-per-inference, accuracy) and pipeline-level performance on cascaded real-world workloads. Reproducible measurements, published methodology, head-to-head comparisons.
  • Edge AI — Porting models to external AI accelerators (Hailo-8, AzurEngine, Axelera, MemryX, DeepX) and internal NPUs (AMD Vitis-AI, Qualcomm NPU).
  • Embedded Vision — In-depth experience building image-capture pipelines for AMD programmable logic platforms (Spartan-6, Zynq-7000 SoC, Zynq UltraScale+, Versal AI Edge), including camera calibration and ISP tuning, for mono, dual (stereo), and multi-camera systems.
  • Robotics — Hand-controlled robotic arms and mobile robots using MediaPipe, pose estimation, and ASL recognition; LLM-based agents integrated with ROS2 for autonomous robot control.
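As a sketch of how the benchmarking metrics above relate, energy-per-inference can be derived from average power draw and sustained throughput. The function and the example numbers below are purely illustrative, not measured results from any accelerator.

```python
def energy_per_inference_mj(avg_power_w: float, throughput_fps: float) -> float:
    """Energy per inference in millijoules.

    Average power (watts = joules/second) divided by throughput
    (inferences/second) gives joules per inference; x1000 for mJ.
    """
    return avg_power_w / throughput_fps * 1000.0

# Hypothetical example: an accelerator drawing 5 W at 200 FPS
# spends 25 mJ per inference.
print(energy_per_inference_mj(5.0, 200.0))
```

Reporting energy-per-inference alongside raw throughput is what makes head-to-head comparisons meaningful across accelerators with very different power envelopes.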

Pinned Repositories

  1. tria-vitis-platforms — Tria Vitis platforms and overlays
  2. robotics_docker — Docker container scripts for ROS2 development (Shell)
  3. blaze_app_python — Python application demonstration code for MediaPipe models (blazepalm/hand, blazeface, blazepose)
  4. blaze_app_cpp — C++ application demonstration code for MediaPipe models (blazepalm/hand, blazeface, blazepose)
  5. asl_mediapipe_pointnet — ASL recognition using MediaPipe and PointNet (Python)
  6. hand_controller (Python)