Design and implement detection fusion and tracking algorithms that combine camera, radar, and lidar data to improve real-time perception performance and to support offline pseudo-label generation
Benchmark and evaluate perception performance using quantitative metrics and testing methodologies
Identify, troubleshoot, and resolve real-world perception issues encountered in autonomous driving scenarios
Ensure that your work is performed in accordance with the company’s Quality Management System (QMS) requirements and contribute to continuous improvement efforts
Ensure that technical work meets customer requirements, regulatory standards, and company quality policies
Required Skills:
Proficiency in C/C++
Experience with multi-modal detection fusion and tracking algorithms
Strong understanding of computer vision and perception systems
Strong understanding of camera, radar, and lidar sensing
Excellent problem-solving skills and attention to detail
Preferred Skills:
Master's or Ph.D. in Computer Science, Electrical Engineering, Robotics, or a related field
Experience with perception systems in autonomous vehicles or robotics
Hands-on experience deploying machine learning models into production environments
Proficiency in deep learning frameworks like TensorFlow or PyTorch
Experience with real-time systems, parallel computing, and optimization techniques
Strong knowledge of data structures, algorithms, and software design patterns
Salary Range:
$150,000 - $200,000 a year
Our compensation (cash and equity) is determined based on the position, your location, qualifications, and experience.