About Us:

Brain is an AI and interface company founded in 2015. Brain's AI organizes the world's software and makes it human-centric and natural to use. The company invents new technologies, design metaphors, and developer platforms that allow computers to become an extension of our minds. In 2016, Brain pioneered one-shot learning for NLP, a technique that has since become fundamental to many of today's most popular language models. Building on this innovation, Brain invented the world's first consumer generative interface, Natural AI, in 2020. In 2024, Brain.ai unveiled a revolutionary app-less smartphone at Mobile World Congress with one of the world's leading mobile network operators.

As a DevOps Engineer at Brain, you'll join a small but brilliant team of engineers and be responsible for scaling and maintaining our infrastructure, collaborating with a diverse group of engineers and data scientists to bring a new, paradigm-shifting platform to market. In this position, you have the opportunity to be a key contributor to our technology and to solve new and challenging problems each day. We are excited about people who can think critically, collaborate with others, and focus on creating a revolutionary product in a progressive startup setting.

Our Tech Stack: Our backend uses modern technologies including Python, Java, Node.js, Go, Terraform/Ansible, Docker, Kubernetes, ELK, Kibana, PostgreSQL/MySQL, Redis, NATS, and RabbitMQ.

Desired Skills & Qualifications:

  • Write quality Python/Go code
  • Bring a DevOps mindset to the overall software development process
  • Ability to define DevOps best practices and contribute to architecture discussions
  • Think about security for system interaction points and data
  • Figure out which AWS capabilities we should (and shouldn't) use as we grow
  • Challenge the way "things have always been done"
  • Proven experience scaling systems and working with Ansible, Terraform, Docker, and Kubernetes
  • Head off issues before they occur through proactive instrumentation and system metrics
  • Address system problems in real time, then build out the tech so they don't recur
  • Create, implement, and manage scripts that automate everyday operations
  • Manage system migrations and upgrades to create and deploy new cloud environments

Requirements:

  • 5+ years of experience planning and implementing AWS cloud environments
  • Strong experience designing and deploying AWS services such as Elastic Compute Cloud (EC2), storage (S3, EBS, EFS, Glacier, and AWS Storage Gateway), and RDS
  • Experience with logging/monitoring tools such as ELK and Datadog
  • Demonstrated experience with AWS storage encryption for data at rest and data in transit
  • Experience leading development teams to build and deploy microservices-based applications in the cloud using Continuous Integration & Continuous Deployment (CI/CD) tools and processes
  • Experience with GCP is a plus

Location

San Mateo, CA

Job Overview

Job Posted: 1 month ago
Job Expires: 1mo 2d
Job Type: Full Time
