Dear Colleagues,

We are in the process of identifying a suitable candidate for the role of Data Scientist, Automotive. This position is at level 4, based in Bengaluru, and will report to Sunil Shastri (Product & Data Engineering Manager, Vehicle Aftermarket).

JOB DESCRIPTION

Job Title: Data Scientist

Reports To: Product & Data Engineering Manager, Vehicle Aftermarket

Role Type:        Individual Contributor

Location:        Bengaluru

Role Purpose:

As a Data Scientist, you will have the opportunity to analyze and extract insights from a world-class vehicle aftermarket data asset. You’ll perform in-depth, consumer-level analytics, lead big data research and discovery projects, increase overall system accuracy, and automate important but repetitive tasks. In addition, you’ll design and execute operational tests and provide recommendations based on careful analysis, solid business acumen, and superior problem-solving skills. The ability to turn complicated analytic results into digestible, engaging stories will be important for success, as will a high tolerance for ambiguity. Strong analytic training and background, a desire to create new and unique perspectives on consumer behavior, a collaborative spirit, and a hands-on approach to solving problems will mark you as an ideal candidate for this role.

The Vehicle Aftermarket team leverages data analytics to drive efficiencies and insights in the automotive aftermarket. As a Data Scientist, you will oversee projects from data collection and model development to deployment, providing actionable insights that drive strategic business decisions for the Vehicle Aftermarket division.

Key Responsibilities

  • Pipeline Management & ETL: Partner with Data Engineering to design and manage data pipelines on Snowflake and Azure, using ETL/ELT tools (e.g., DBT, Alteryx, Talend, Informatica).
  • Machine Learning & AI: Deploy advanced AI/ML tools (TensorFlow, PyTorch, AutoML, NLP, Computer Vision) and use predictive modeling to enhance customer experience, revenue, and business outcomes.
  • Performance Monitoring: Implement tools to monitor and analyze model performance and data accuracy.
  • Cross-functional Collaboration: Work with teams across the organization to identify and execute data-driven business solutions.
  • Adaptability: Continuously explore and apply new technologies to enhance data processing and integration.
  • Ask, refine, and answer critical business questions that bolster capabilities and competitive position through the application of advanced analytic methods, logical inference, and creative problem-solving.
  • Build, enhance, and maintain predictive models and analytic strategies focused on customer behavior.
  • Lead modeling projects, operational tests (e.g., A/B and multivariate), and data mining and exploratory work to produce actionable and novel insights and recommendations.
  • Partner with stakeholder groups to understand market needs and drive data initiatives beyond our current product portfolio.
  • Engage in independent research projects to uncover and understand new tools, methods, and approaches that can strengthen the data science platform.
  • Create new data assets to support data science project work.

Metrics

  • Technical metrics: Model accuracy, data quality, feature engineering effectiveness, scalability, efficiency, code quality, deployment time, maintenance, reproducibility, and experimentation.
  • Business metrics: Return on investment (ROI), customer satisfaction, revenue growth, churn rate, business insights, adoption rate, and user engagement.


Competencies

  • Educational Background: Master’s degree in Data Science, Statistics, Computer Science, or a related field.
  • Programming & Analytics: Proficiency in Python (including Pandas, NumPy, scikit-learn) or R for data analysis and machine learning tasks.
  • Machine Learning & AI: Hands-on experience with ML frameworks such as TensorFlow and PyTorch. Familiarity with AutoML tools, NLP libraries (e.g., spaCy, NLTK), and Computer Vision technologies.
  • Data Management & Architecture: Strong SQL skills for data extraction and manipulation. Proficiency with Snowflake for data warehousing and familiarity with Azure infrastructure, including services like Azure Data Factory, Azure Synapse, and Azure Machine Learning.
  • ETL/ELT & Orchestration Tools: Exposure to ETL/ELT tools like DBT, Alteryx, Talend, or Informatica.
  • Cloud Computing: Exposure to Azure (preferred), AWS (S3, Redshift, Aurora, SageMaker), or Google Cloud Platform for cloud storage, processing, and machine learning services.
  • Database Management: Solid understanding of database management systems and data architecture principles to maintain data quality, accuracy, and scalability.
  • Version Control & Agile Development: Experience with Git, DevOps, and Agile methodologies.
  • Soft Skills: Strong problem-solving, teamwork, adaptability, and communication skills, with the ability to present complex findings to non-technical stakeholders.
  • Additional Knowledge: Exposure to data governance, security, and ERP systems (e.g., XA, SAP).  

Candidate Profile:

  • B.E./B.Tech or M.E./M.Tech (preferred) in CSE, IT, or a related discipline
  • 6-10 years of experience (minimum 3 years in data science)

Location

Bengaluru, IN

Job Type

Full Time
