ChainOpera AI is the world's first truly decentralized, open AI platform: a simple, scalable, and trustworthy collaborative AI economy and an AI app ecosystem for accessible, democratized AI. Our GPUs, our model, our personal AI.

ChainOpera AI is supported by:

  • Enterprise-level generative AI platform for system scalability, model performance, and security/privacy (ChainOpera AI Platform)

  • Leading open source library in large-scale distributed training, model serving, and federated learning (FedML)

  • Innovative and unique edge-cloud collaborative AI models and systems toward on-device personal AI (Fox LLM)

  • Internet veterans with experience serving billions of end users through cloud computing and the mobile internet

  • Established researchers in blockchain, machine learning, and large-scale distributed systems (80,000+ citations)

  • Ecosystem partnerships with GPU providers, model developers, AI platforms, and AI applications

  • Top-tier investors, angels, and advisors

Responsibilities:

  • Develop and implement novel techniques for ensuring the safety and robustness of AI systems, particularly in decentralized environments

  • Research methods for aligning AI systems with human values and intentions

  • Investigate approaches to make AI systems more interpretable, transparent, and accountable

  • Design and conduct experiments to test the safety and reliability of AI models, especially large language models and autonomous agents

  • Collaborate with blockchain experts to address unique safety challenges in decentralized AI systems

  • Publish research findings in top-tier AI conferences and journals

  • Work closely with engineering teams to implement safety measures in our AI products

  • Stay abreast of the latest developments in AI safety research and contribute to the global discourse on ethical AI

Requirements:

  • Ph.D. in Computer Science, Artificial Intelligence, or a related field with a focus on AI safety

  • Strong background in machine learning, deep learning, and AI ethics

  • Experience with formal verification methods, robustness testing, and uncertainty quantification in AI systems

  • Proficiency in Python and deep learning frameworks (e.g., PyTorch, TensorFlow)

  • Excellent analytical and problem-solving skills with a keen eye for potential risks in AI systems

  • Strong publication record in top-tier AI conferences or journals, particularly in the field of AI safety

Preferred Qualifications:

  • Experience with blockchain technologies and understanding of their implications for AI safety

  • Knowledge of game theory and mechanism design as applied to AI systems

  • Familiarity with regulatory frameworks and policy discussions surrounding AI development

  • Track record of open-source contributions to AI safety projects or tools

  • Experience mentoring junior researchers or leading research projects in AI safety

Location

United States

Job Overview
Job Type
Full Time
