We are seeking a highly skilled Machine Learning Pipeline Expert to design and build end-to-end ML solutions for predictive failure modeling across our distributed and remote asset infrastructure. You will develop robust, scalable, and efficient ML pipelines, using MongoDB/Atlas Data Lake for data storage and processing, and AWS SageMaker with PyTorch for model training, tuning, and deployment.

Requirements:

  • 5+ years of experience in machine learning and data engineering, focused on ML pipeline development and deployment.
  • Hands-on experience with MongoDB/Atlas Data Lake, including querying, aggregation, and schema design.
  • Proficient in PyTorch for deep learning model development.
  • Strong working knowledge of AWS SageMaker, including pipeline orchestration, training jobs, and endpoint management.
  • Experience building ML pipelines for predictive maintenance, anomaly detection, or time-series analysis.
  • Proficiency in Python and its data science and ML libraries (e.g., NumPy, pandas, scikit-learn).
  • Solid understanding of MLOps, DevOps principles, and tools (e.g., Git, Docker, CI/CD pipelines).
  • Excellent problem-solving and communication skills; able to work both independently and within cross-functional teams.

Location

Argentina

Job Type

Full Time
