We Dream. We Do. We Deliver.
About Merkle
Merkle, a dentsu company, powers the experience economy. For more than 35 years, the company has put people at the heart of its approach to digital business transformation. As the only integrated experience consultancy in the world with a heritage in data science and business performance, Merkle delivers holistic, end-to-end experiences that drive growth, engagement, and loyalty. Merkle’s expertise has earned recognition as a “Leader” by top industry analyst firms, in categories such as digital transformation and commerce, experience design, engineering and technology integration, digital marketing, data science, CRM and loyalty, and customer data management. With more than 16,000 employees, Merkle operates in 30+ countries throughout the Americas, EMEA, and APAC. For more information, visit www.merkle.com.
Key Responsibilities
· Collaborate with stakeholders in a cross-functional environment to understand business objectives and define problems that can be addressed through machine learning and artificial intelligence.
· Develop and train machine learning and statistical models using programming languages like Python or R and frameworks such as TensorFlow or PyTorch.
· Write and optimize data pipelines to process large amounts of data.
· Apply MLOps practices to manage the lifecycle of machine learning models throughout projects.
· Write maintainable, reliable, and robust pipelines complete with unit and integration tests.
· Co-lead data initiatives and architect ML systems from scratch.
· Organize work for yourself and other involved consultants in an Agile environment.
· Develop dashboards to monitor key data quality and model metrics, and set up alerts on them.
· Build Machine Learning business cases with business stakeholders.
Qualifications
· Degree in mathematics, engineering, physics, or a related discipline.
· 4+ years' experience in data science, or relevant work experience in creating and using advanced ML, time series, or deep learning algorithms for regression, classification, forecasting, or clustering problems.
· Experience in building and productionizing big data architectures, pipelines and data sets.
· Proficiency in SQL, Python/R, PySpark, and Bash scripting.
· Experience with MS Azure services like Databricks and Azure Machine Learning, AWS services like SageMaker and Redshift, or similar cloud services.
· Experience and proficiency with Agile methodologies.
Preferred Skills
· Self-motivated, with the ability to take ownership of ML topics.
· Understanding of data modelling and data visualization tools.
· Experience with version control, CI/CD tools and general DevOps practices.
· Working knowledge of message queuing, stream processing, and highly scalable real-time data processing using technologies like Apache Beam, Spark Streaming, etc.
· Experience with data pipeline / workflow management tools like AWS Glue, Azure Data Factory, Airflow, AWS Step Functions, NiFi, etc.
· Extensive working experience with relational database systems like MS SQL, Oracle, Postgres, Snowflake, etc., and NoSQL databases like Cassandra, MongoDB, Elasticsearch, etc.
Benefits and Perks
Inclusion & Diversity
We value the strength diversity brings to our business and are working to build a more inclusive workplace through partnerships with Stonewall, Business Disability Forum, and Business in the Community's race and gender equality campaigns. We are happy to discuss flexible approaches to working for all our roles. We can't promise to offer you everything you want or need, but we do promise to discuss it with you openly and honestly.
If you need any reasonable adjustments arising from a disability or medical condition in order to participate in the recruitment process, please discuss this with the recruiter who contacts you.