Job Description

We are a leading Engineering and IT consultancy operating across 30 countries, paving the way in various sectors, including Aeronautics, Space, Defence, Automotive, Rail & Mobility, Energy & Environment, Life Sciences & Health, and Telecoms.

Our mission is to enable and support our customers' development strategies, technological advancements, and sustainability initiatives. We are united by a common purpose: building tomorrow’s world today.

Our Innovation Lab brings together industry challenges, cutting-edge technologies, and an enthusiastic team of innovation engineers. Our projects are strategically focused on delivering inventive solutions that address sector and customer challenges, drive digital transformation, and enable businesses to thrive in the modern landscape.

Project: Egocentric 4D Perception

The Egocentric 4D Perception Project is a transformative research initiative aimed at advancing first-person multimodal visual and auditory perception to enhance industrial and manufacturing environments. By leveraging cutting-edge video-audio multimodal action recognition methods powered by transformers, this project seeks to enable technicians to perform tasks more effectively while introducing automated systems that monitor, guide, and validate work processes in real time.

  • Preprocess multimodal data (video, audio) to prepare it for training deep learning models, including annotation, synchronization, and augmentation.
  • Assist in designing and implementing video-audio transformer models for real-time action recognition and validation of technician tasks (an illustrative sketch follows this list).
  • Fine-tune existing multimodal deep learning architectures for specific industrial scenarios, such as assembly line monitoring or maintenance tracking.
  • Assist in designing and collecting datasets of egocentric hand-object interactions using wearable cameras, sensors, or VR/AR devices.
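For illustration only, here is a minimal sketch of what such a model could look like: a late-fusion video-audio transformer for clip-level action recognition, written in PyTorch. The class name, feature dimensions, and number of action classes are hypothetical placeholders rather than project specifications; in practice the frame and audio embeddings would come from pretrained backbones.

# Illustrative sketch only: a minimal late-fusion video-audio transformer
# for clip-level action recognition. Module names, dimensions, and the
# number of action classes are hypothetical placeholders, not project specs.
import torch
import torch.nn as nn


class VideoAudioActionClassifier(nn.Module):
    def __init__(self, video_dim=768, audio_dim=128, d_model=256,
                 n_heads=4, n_layers=2, n_classes=10):
        super().__init__()
        # Project per-frame video features and per-window audio features
        # into a shared embedding space.
        self.video_proj = nn.Linear(video_dim, d_model)
        self.audio_proj = nn.Linear(audio_dim, d_model)
        # Learnable modality embeddings and a classification token.
        self.video_type = nn.Parameter(torch.zeros(1, 1, d_model))
        self.audio_type = nn.Parameter(torch.zeros(1, 1, d_model))
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, video_feats, audio_feats):
        # video_feats: (batch, T_v, video_dim), e.g. frame embeddings
        # audio_feats: (batch, T_a, audio_dim), e.g. log-mel windows
        v = self.video_proj(video_feats) + self.video_type
        a = self.audio_proj(audio_feats) + self.audio_type
        cls = self.cls_token.expand(v.size(0), -1, -1)
        tokens = torch.cat([cls, v, a], dim=1)          # joint multimodal sequence
        encoded = self.encoder(tokens)
        return self.head(encoded[:, 0])                 # logits from the CLS token


if __name__ == "__main__":
    model = VideoAudioActionClassifier()
    video = torch.randn(2, 16, 768)   # 16 frame embeddings per clip
    audio = torch.randn(2, 32, 128)   # 32 audio windows per clip
    logits = model(video, audio)
    print(logits.shape)               # torch.Size([2, 10])

Concatenating both modalities with a classification token lets the encoder attend across video and audio before producing a single clip-level prediction; cross-attention or per-modality encoders are equally valid design choices.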

Qualifications

Working towards a Master’s degree (final year) in:

  • Artificial Intelligence, Machine Learning, Data Science, or Computer Science, with a preference for practical skills in NLP, LLMs, Transformers, and/or Computer Vision.
  • And/or full-stack development (front end and back end).
  • And/or Digital Twin / Digital Thread, in an engineering environment.
  • And/or Robotics.

Skills:

  • Proficiency in TensorFlow or PyTorch, with a focus on transformer architectures for video and audio tasks, and experience in synchronizing and processing video-audio datasets (a brief synchronization sketch follows this list).
  • Strong Python programming skills, with experience in developing machine learning pipelines.
  • Experience in evaluating deep learning models using quantitative and qualitative metrics.
  • Basic understanding of manufacturing workflows and human-machine interaction in production lines.
  • English level: C1 minimum
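As a purely illustrative aside on the video-audio synchronization point above, the sketch below shows one common alignment strategy: slicing the audio waveform into fixed-length windows centred on each video frame timestamp. The function name, frame rate, sample rate, and window length are hypothetical example values, not a prescribed pipeline.

# Illustrative sketch only: align audio windows to video frame timestamps.
import numpy as np


def audio_windows_for_frames(waveform, sample_rate, n_frames, fps, window_s=0.64):
    """Return one fixed-length audio window per video frame.

    waveform: 1-D array of audio samples
    sample_rate: audio samples per second (e.g. 16000)
    n_frames: number of video frames in the clip
    fps: video frames per second (e.g. 25)
    window_s: window length in seconds, centred on each frame timestamp
    """
    half = int(window_s * sample_rate / 2)
    # Zero-pad so windows near the clip edges keep the same length.
    padded = np.pad(waveform, (half, half))
    windows = []
    for i in range(n_frames):
        t = i / fps                              # frame timestamp in seconds
        centre = int(t * sample_rate) + half     # index into the padded signal
        windows.append(padded[centre - half:centre + half])
    return np.stack(windows)                     # shape: (n_frames, window samples)


if __name__ == "__main__":
    audio = np.random.randn(16000 * 2)           # 2 s of 16 kHz audio
    windows = audio_windows_for_frames(audio, 16000, n_frames=50, fps=25)
    print(windows.shape)                         # (50, 10240)

The resulting per-frame windows can then be encoded alongside the video frames so that each audio token lines up with a video token of the same timestamp.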

The ideal candidate is curious, positive, creative, collaborative, and looking to challenge themselves. We seek individuals who thrive in dynamic environments, embracing change and ambiguity while demonstrating readiness to contribute to impactful projects within cross-functional teams.

We also celebrate multiple approaches and points of view. We believe diversity drives innovation, so we are building a culture where difference is valued and encouraged.

Location

Sèvres, France

Job Overview
Job Type
Full-Time Internship
