We are looking for an experienced Big Data Developer (immediate joiners only) with a strong background in Kafka, PySpark, Python/Scala, Spark, SQL, and the Hadoop ecosystem.

The ideal candidate has over 5 years of experience in the field. This role requires hands-on expertise in big data technologies and the ability to design and implement robust data processing solutions.

Responsibilities

  • Design, develop, and maintain scalable data processing pipelines using Kafka, PySpark, Python/Scala, and Spark.
  • Work extensively with Kafka and the Hadoop ecosystem, including HDFS, Hive, and related technologies.
  • Write efficient SQL queries for data extraction, transformation, and analysis.
  • Implement and manage Kafka streams for real-time data processing.
  • Use scheduling tools to automate data workflows and processes.
  • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
  • Ensure data quality and integrity by implementing robust data validation processes.
  • Optimize existing data processes for performance and scalability.

Location

Bengaluru, Karnataka, India

Job Overview
Job Type: Full Time
