Job Description
Our client is seeking a highly innovative and experienced Senior AI & Machine Learning Engineer specializing in Edge Computing to join their fully remote team. This role is central to developing and deploying intelligent algorithms directly on edge devices, enabling real-time data processing and advanced decision-making without sole reliance on cloud connectivity. You will design, develop, and optimize machine learning models for deployment in resource-constrained environments, focusing on areas such as computer vision, anomaly detection, and predictive maintenance.

The ideal candidate will possess a deep understanding of ML model optimization techniques such as quantization, pruning, and efficient inference engines, coupled with expertise in embedded systems and hardware acceleration. Responsibilities include selecting appropriate ML architectures, performing model training and validation, and building robust deployment pipelines for a range of edge platforms (e.g., microcontrollers, embedded Linux systems, and specialized AI accelerators). You will collaborate closely with hardware engineers, software developers, and data scientists to integrate AI capabilities seamlessly into edge solutions.

This position requires strong programming skills in Python and C++, familiarity with edge AI frameworks and libraries (e.g., TensorFlow Lite, PyTorch Mobile, ONNX Runtime), and a solid grasp of embedded system constraints and performance metrics. The ability to benchmark and profile models for power efficiency, latency, and accuracy is crucial. This is an exceptional opportunity to work at the cutting edge of AI, pushing the boundaries of what’s possible with intelligent edge devices and contributing to next-generation IoT and autonomous systems.
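As a concrete illustration of one optimization technique named above, here is a minimal sketch of symmetric per-tensor int8 post-training quantization. The function names and sample values are illustrative assumptions, not part of the role's actual toolchain:

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization (sketch).

    Maps float weights into [-127, 127] using a single scale factor
    derived from the largest absolute weight.
    """
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

# Illustrative values only
weights = [0.5, -1.27, 0.0, 1.0]
q, scale = quantize_int8(weights)
recovered = dequantize_int8(q, scale)
```

Per-element reconstruction error is bounded by half the scale factor, which is the basic accuracy/footprint trade-off that production frameworks such as TensorFlow Lite automate (with per-channel scales, calibration data, and activation quantization on top).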
Key Responsibilities:
- Design, develop, and optimize machine learning models for edge deployment.
- Implement efficient inference engines and ML pipelines for resource-constrained devices.
- Develop solutions for real-time AI applications such as computer vision, anomaly detection, and predictive analytics on edge hardware.
- Collaborate with hardware engineers to leverage specialized AI accelerators.
- Optimize models for latency, power consumption, and memory footprint.
- Develop and maintain robust MLOps practices for edge AI deployment.
- Stay abreast of advancements in edge AI hardware, software, and algorithms.
- Benchmark and profile ML model performance on target edge devices.
- Contribute to the architectural design of intelligent edge systems.
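The benchmarking and profiling responsibility above can be sketched as a minimal latency-measurement harness. The function name, warm-up count, and percentile choices are assumptions for illustration; real edge profiling would run on the target device and also capture power and memory:

```python
import time
import statistics

def benchmark_latency(fn, warmup=10, iters=100):
    """Measure per-call latency of `fn` in milliseconds (sketch).

    Warm-up iterations are discarded so caches, JIT, or lazy
    initialization do not skew the measured distribution.
    """
    for _ in range(warmup):
        fn()
    latencies = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn()
        latencies.append((time.perf_counter() - t0) * 1000.0)
    latencies.sort()
    return {
        "p50_ms": statistics.median(latencies),
        "p95_ms": latencies[max(0, int(0.95 * len(latencies)) - 1)],
        "mean_ms": statistics.fmean(latencies),
    }

# Illustrative workload standing in for a model's inference call
stats = benchmark_latency(lambda: sum(i * i for i in range(1000)))
```

Reporting percentiles rather than a single average matters on edge devices, where thermal throttling and background tasks make tail latency the figure that breaks real-time budgets.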
Required Qualifications:
- Master’s or Ph.D. in Computer Science, Electrical Engineering, or a related field with a focus on AI/ML and/or embedded systems.
- Minimum of 6 years of experience in AI/ML engineering, with a significant portion focused on edge computing.
- Proven experience in deploying ML models on embedded hardware and edge devices.
- Expertise in model optimization techniques (quantization, pruning, knowledge distillation).
- Strong programming skills in Python and C++.
- Familiarity with edge AI frameworks and libraries (TensorFlow Lite, PyTorch Mobile, ONNX Runtime, etc.).
- Understanding of embedded systems, RTOS, and hardware accelerators.
- Experience with performance profiling and benchmarking tools.
- Excellent problem-solving, analytical, and communication skills.