As a Junior Full Stack Engineer specializing in AI, you will work on both backend and AI-driven tasks, designing scalable systems while integrating advanced AI models into our platform. Your responsibilities will span developing robust backend systems and implementing and fine-tuning AI models such as Llama. This is a high-impact role in which you will bridge traditional backend engineering with cutting-edge AI techniques to enhance our product offerings, covering both AI application-layer integration and general backend development. This position will be remote initially, with a possible transition to a hybrid work model in California in the near future.
Key Responsibilities
Backend Development: Design and develop scalable, reliable, and performant backend systems, APIs, and services.
AI Integration: Implement, fine-tune, and deploy AI models (e.g., Llama) into our application layer, ensuring seamless integration with our platform.
Model Fine-tuning: Work directly with AI models, including fine-tuning pre-trained models, evaluating model performance, and optimizing for production use.
AI Application Development: Develop and deploy AI-powered features and applications, with a strong emphasis on neural networks and other advanced AI techniques.
System Optimization: Monitor, test, and optimize AI applications and backend systems for performance, scalability, and reliability.
Stay Current: Continuously deepen your understanding of AI and machine learning concepts, neural networks, and related technologies.
Requirements
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
1–2 years of experience in full-stack or backend software engineering.
Proven experience applying AI technologies, including neural networks, and familiarity with model fine-tuning.
Technical Skills
Strong proficiency in backend technologies (e.g., Node.js, Python, Java, Go).
Solid understanding of AI/ML fundamentals, including neural networks, deep learning, and model training.
Hands-on experience with Llama or other transformer-based models and fine-tuning techniques.
Familiarity with cloud platforms (AWS, GCP, or Azure) for AI model deployment.
Experience with containerization (Docker, Kubernetes) and CI/CD pipelines.
Preferred Qualifications
Experience with large-scale distributed systems.
Contributions to open-source projects or involvement in AI research.
Familiarity with blockchain technologies, as our products are closely integrated with blockchain ecosystems.