The Data Scientist and Engineer role reports to the Chief Data Officer (CDO) and is part of the Data Team, which focuses on managing and using data to support decision-making. You’ll contribute to improving data capabilities, work on varied projects, collaborate with teams across the company, and help shape our data strategy. It’s a high-impact role with strong opportunities for growth and development.

Responsibilities

·       Data pipeline development and management: Design, build, and maintain robust and scalable ETL/ELT data pipelines from diverse sources.

·       Data quality & governance: Ensure data quality, integrity, and governance, and automate quality-control (QC) processes. Implement and uphold data security, privacy, and compliance standards.

·       Monitoring and alerting: Implement monitoring and alerting systems for data pipelines and models to ensure reliability, performance, and cost efficiency.

·       Prescriptive modeling: Build and deploy ML models that recommend the next best action (NBA) for each client and lead.

·       AI pipeline development and automation: Create and maintain robust MLOps pipelines for efficient model deployment and management.

·       Stakeholder collaboration: Work closely with business stakeholders to understand their data needs and translate them into technical requirements.

·       Documentation and knowledge sharing: Create and maintain comprehensive documentation of data processes, models, and insights for knowledge sharing across the organization.

Requirements

·       3–7 years of professional experience in data science/engineering roles.

·       BSc plus an MSc or PhD in computer science, statistics, or a related technical field.

·       Demonstrated experience delivering end-to-end data science/engineering projects.

Data skills:

·       Strong proficiency in Python for data analysis and machine learning

·       Experience building and managing ETL pipelines on cloud platforms (GCP preferred)

·       Experience with data analysis libraries like pandas, NumPy, and scikit-learn

·       Expertise in building and deploying machine learning and AI models

·       Knowledge of deep learning frameworks such as TensorFlow or PyTorch

·       Experience with big data technologies such as Spark

·       Proficiency in SQL and working with relational databases

·       Familiarity with data lakes and lakehouses (BigQuery preferred)

·       Experience with version control using Git

Cloud skills:

·       Hands-on experience with GCP for data and ML workflows

·       Familiarity with serverless computing (e.g. Google Cloud Functions)

Soft skills:

·       Strong analytical and problem-solving abilities

·       Excellent communication skills to present findings to technical and non-technical audiences

·       Ability to work independently and as part of a team

·       Solution-oriented

·       Attention to detail

·       Always seeking to grow, learn, and drive the business forward

Additional desired skills:

·       Experience with containerization and orchestration (Docker, Kubernetes)

·       Knowledge of data streaming technologies (e.g. Kafka)

·       Familiarity with NoSQL databases

·       Understanding of data privacy and security best practices

Benefits

What’s in it for you:

·       Work in a fast-paced environment at a leading, trend-setting company with pioneering products.

·       Competitive remuneration based on your skills and experience.

·       Ongoing learning opportunities and free participation in BOUSSIAS events and conferences.

Location

Gerakas, Attica, Greece

Job Type
Full Time
