Senior Machine Learning Engineer – Edge Inference & Optimization

Full-time · Machine Learning · Artificial Intelligence · Software Engineering · DevOps

Job Description

Vision being the dominant human sense, eye tracking constitutes a powerful approach for understanding the human mind! At Pupil Labs, our mission is to provide cutting-edge eye-tracking solutions that are more robust, accurate, accessible, and user-friendly than ever before. Today, our products empower thousands of users in academia and industry, clinical surgeons, elite athletes, astronauts on the International Space Station, and many more. Unlocking the full potential of eye-tracking technology relies on solving hard research problems, ranging from core gaze-estimation algorithms to cloud-based and edge-based tools for the real-time analysis of terabytes of egocentric video and physiological data.

The interdisciplinary R&D team at Pupil Labs, comprising members with backgrounds in Computer Science, Computational Neuroscience, Mathematics, and Physics, is tackling these challenges head-on! In close collaboration with other engineering teams, we identify promising R&D avenues and take pride in seeing our results swiftly integrated into the latest products shipped to our customers.

To support our efforts, we are looking to grow our team in Berlin with a full-time Senior Machine Learning Engineer with a strong background in edge inference, performance optimization, and ML systems design. This is an on-site position (with up to two home-office days per week).

Pupil Labs offers a competitive salary, flexible work arrangements, a great team of coworkers, a young and dynamic company structure, and a culture of participation and feedback.

Are you excited about joining an ambitious, international, diverse, interdisciplinary, young, enthusiastic, and talented team of researchers and engineers? Do you have a growth mindset, thrive in fast-paced work environments, and enjoy working on hard problems? Then we are looking forward to hearing from you!

What you would do

  • Design, implement, and optimize machine learning pipelines for low-latency, energy-efficient inference on edge devices.
  • Collaborate with research teams to bring state-of-the-art models into production, adapting them for resource-constrained environments.