XPENG is a leading smart technology company at the forefront of innovation, integrating advanced AI and autonomous driving technologies into its vehicles, including electric vehicles (EVs), electric vertical take-off and landing (eVTOL) aircraft, and robotics. With a strong focus on intelligent mobility, XPENG is dedicated to reshaping the future of transportation through cutting-edge R&D in AI, machine learning, and smart connectivity.
We are seeking Deep Learning Engineers with strong expertise in machine learning (ML) and deep learning (DL) system design, along with solid software development skills. In this role, you will research, implement, and evaluate a unified end-to-end onboard model leveraging state-of-the-art technologies, including transformer-based architectures, diffusion models, reinforcement learning, and Vision-Language-Action (VLA) models. You will collaborate with a world-class team of experts in computer vision, AI systems, and software engineering to push the boundaries of autonomous vehicle performance. Your work will be powered by vast amounts of real-world multimodal data from our autonomous fleet, enabling the development of next-generation AI-driven driving solutions.
Job Responsibilities:
- Research and develop cutting-edge deep learning algorithms for a unified, end-to-end onboard model that seamlessly integrates perception, prediction, and planning, replacing traditional modular model pipelines.
- Research and develop Vision-Language-Action (VLA) models to enable context-aware, multimodal decision-making, allowing the model to understand visual, textual, and action-based cues for enhanced driving intelligence.
- Design and optimize highly efficient neural network architectures, ensuring they achieve low-latency, real-time execution on the vehicle’s high-performance computing platform, balancing accuracy, efficiency, and robustness.
- Develop and scale an offline machine learning (ML) infrastructure to support rapid adaptation, large-scale training, and continuous self-improvement of end-to-end models, leveraging self-supervised learning, imitation learning, and reinforcement learning.
- Deliver production-quality onboard software, working closely with sensor fusion, mapping, and perception teams to build the industry’s most intelligent and adaptive autonomous driving system.
- Leverage massive real-world datasets collected from our autonomous fleet, integrating multimodal sensor data to train and refine state-of-the-art end-to-end driving models.
- Design, conduct, and analyze large-scale experiments, including sim-to-real transfer, closed-loop evaluation, and real-world testing, to rigorously benchmark model performance and generalization.
- Collaborate with system software engineers to deploy high-performance deep learning models on embedded automotive hardware, ensuring real-world robustness and reliability under diverse driving conditions.
- Work cross-functionally with AI researchers, computer vision experts, and autonomous driving engineers to push the frontier of end-to-end learning, leveraging advances in transformer-based architectures, diffusion models, and reinforcement learning to redefine the future of autonomous mobility.
Minimum Skill Requirements:
- MS or PhD in Engineering or Computer Science with a focus on Deep Learning, Artificial Intelligence, or a related field, or equivalent experience.
- Strong experience in applied deep learning, including model architecture design, model training, data mining, and data analytics.
- 3-5+ years of experience working with DL frameworks such as PyTorch or TensorFlow.
- Strong Python programming experience with software design skills.
- Solid understanding of data structures, algorithms, code optimization, and large-scale data processing.
- Excellent problem-solving skills.
Preferred Skill Requirements:
- Hands-on experience developing DL-based planning engines for autonomous driving.
- Experience applying CNNs/RNNs/GNNs, attention models, or time-series analysis to real-world problems.
- Experience with other ML/DL applications, e.g., reinforcement learning.
- Experience with DL model deployment and optimization tools such as ONNX and TensorRT.
The base salary range for this full-time position is $215,280-$364,320, in addition to bonus, equity and benefits. Our salary ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training.
We are an Equal Opportunity Employer. It is our policy to provide equal employment opportunities to all qualified persons without regard to race, age, color, sex, sexual orientation, religion, national origin, disability, veteran status, marital status, or any other prescribed category set forth in federal or state regulations.