Drivetrain is on a mission to empower businesses to make better decisions. Our financial planning & decision-making platform helps companies scale and achieve their targets predictably.
Drivetrain is a remote-first company headquartered in the San Francisco Bay Area. Founded in 2021 by a couple of ex-Googlers, Drivetrain is a fast-growing company on a trajectory for success with backing from leading venture capital firms.
Drivetrain provides a great culture for its employees to thrive in and be happy.
💜 Remote-friendly: Drivetrain brings together the best and the brightest, no matter where they are, and provides them a great degree of autonomy. We trust our people.
🗣️ Open & transparent: We know that when our creators have access to all the information they need, their best work will emerge.
👏 Idea-friendly: We provide an environment to explore new ideas, to take risks, to make mistakes, and to learn, so you can succeed. Anyone in the company can come up with great ideas and become a catalyst for positive change. We let the best ideas win.
👥 Customer-centric: We follow a product-led growth strategy, continuously learning from our customers and collaborating to build the amazing software that Drivetrain is.
About the role
Drivetrain is looking for a Data Engineer to join our team. In this role, you will lay the foundation of an exceptional data engineering practice.
The ideal candidate is confident with a programming language of their choice, learns new technologies quickly, has strong software engineering and computer science fundamentals, and has extensive experience with common big data workflow frameworks and solutions.
What you’ll be doing
- Build and monitor large-scale data pipelines that ingest data from a variety of sources.
- Develop and scale our dbt setup for transforming data.
- Work with our data platform team to solve customer problems.
- Use your advanced SQL and big data skills to craft cutting-edge data solutions.
Requirements
- 1–5+ years in a data engineering role (high-growth startup environments highly preferred).
- Expert-level SQL skills.
- At least 1 year of experience with dbt is required.
- At least 1 year of experience with a leading data warehouse is required.
- Expertise with big data technologies such as Hadoop, Spark, or Druid is preferred.
- Track record of success in building new data engineering processes and an ability to work through ambiguity.
- Willingness to roll up your sleeves and fix problems in a hands-on manner.
- Intellectual curiosity and research abilities.
Sound exciting? Apply at careers@drivetrain.ai. It may just be the best decision you ever make!