It's fun to work in a company where people truly BELIEVE in what they are doing!

We're committed to bringing passion and customer focus to the business.

Senior Data Engineer – Big Data, with supply chain functional domain experience

Job Description

As a Senior Data Engineer, you will be responsible for designing, developing, and maintaining scalable big data solutions to support the agriculture supply chain operations of our client. Leveraging your extensive experience with Cloudera Hadoop and other big data technologies, you will build robust data pipelines, data warehouses, and analytics platforms that can handle large volumes of structured and unstructured data from across the agriculture supply chain.

Key Responsibilities:

  • Architect and implement end-to-end big data solutions using the Cloudera Hadoop ecosystem (HDFS, Hive, Impala, Spark, etc.)
  • Design and develop efficient data ingestion, transformation, and loading processes to integrate data from various sources across the agriculture supply chain
  • Create scalable data storage and processing frameworks to support real-time and batch analytics
  • Implement data quality checks, data lineage, and data governance practices to ensure data integrity
  • Work closely with data scientists and business analysts to understand requirements and translate them into technical solutions
  • Monitor, optimize, and troubleshoot the big data infrastructure to maintain system performance
  • Mentor and guide junior data engineers on best practices and new technologies
  • Document technical designs, processes, and procedures for knowledge sharing and future maintenance

Required Skills and Experience:

  • 8-12 years of experience as a Data Engineer or Big Data Engineer
  • Expertise in designing and implementing Cloudera Hadoop-based data platforms
  • Proficient in Python, Scala, or Java for developing data processing pipelines
  • Strong understanding of data modeling, ETL, and data warehousing concepts
  • Experience with big data technologies such as Spark, Kafka, Hive, Impala, Sqoop, and Flume
  • Familiarity with cloud platforms (AWS, Azure, GCP) and containerization technologies (Docker, Kubernetes)
  • Hands-on experience in the agriculture or supply chain domain
  • Excellent problem-solving, analytical, and communication skills
  • Ability to work in a team and mentor junior engineers

Preferred Qualifications:

  • Master's degree in Computer Science, Data Science, or a related field
  • Certifications in Cloudera, Hadoop, or other big data technologies
  • Experience in agile software development methodologies

If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!

Not the right fit? Let us know you're interested in a future opportunity by clicking Introduce Yourself in the top-right corner of the page, or create an account to set up email alerts for new job postings that match your interests!

Location

Eindhoven

Job Overview
Job Type
Full Time
