Security is among the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft’s mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers’ heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.

Do you want to find responsible AI failures in Microsoft’s largest AI systems, impacting millions of users? Join Microsoft’s AI Red Team as a Security Researcher, where you'll work alongside security experts to emulate adversaries and induce trust and safety failures in Microsoft’s highest-profile AI systems. We are looking for an AI Safety Researcher to push the boundaries of AI Red Teaming alongside our experts. We are a fast-paced, interdisciplinary group of red teamers, adversarial Machine Learning (ML) researchers, and Responsible AI experts with the mission of proactively finding failures in Microsoft’s big-bet AI systems. Your work will impact Microsoft’s AI portfolio, including the Phi series, Bing Copilot, Security Copilot, GitHub Copilot, Office Copilot, and Windows Copilot, and will help keep Microsoft’s customers safe and secure. More about our approach to AI Red Teaming: https://www.microsoft.com/en-us/security/blog/2023/08/07/microsoft-ai-red-team-building-future-of-safer-ai/

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities

The AI Red Team is looking for security researchers who can combine the development of cutting-edge attack techniques with the ability to deliver complex, time-limited operations as part of a diverse team. This includes the ability to manage several priorities at once, manage stakeholders, and communicate clearly with a range of audiences.

  • Understand the products & services that the AI Red Team is testing, including the technology involved and the intended users, in order to develop plans to test them.
  • Understand the risk landscape of AI Safety & Security including cybersecurity threats, Responsible AI policies, and the evolving regulatory landscape to develop new attack methodologies for these areas.
  • Conduct operations against systems as part of a multi-disciplinary team, delivering against multiple priority areas within a set timeline.
  • Communicate clearly and concisely with stakeholders before, during, and after operations to ensure everyone is clear on objectives, progress, and the outcomes of your work.
  • Coordinate with your team members during operations to ensure that all areas of focus are covered and that stakeholders are clear on the status of your work.
  • Partner with and support all elements of the AI Red Team and our partners, including actively contributing to tool development and long-term research efforts.


Qualifications

Required Qualifications:

 
  • Bachelor's Degree in Statistics, Mathematics, Computer Science, or a related field
    • OR 3+ years of experience in software development lifecycle, large-scale computing, modeling, cybersecurity, and/or anomaly detection.
  • 1+ years of experience in identifying security or safety issues through threat modeling, architecture reviews, penetration testing, or red teaming.

Other Requirements:

  • Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings:
    • Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter. 

Preferred Qualifications:

  • Master's Degree in Statistics, Mathematics, Computer Science, or a related field
    • OR 4+ years of experience in software development lifecycle, large-scale computing, modeling, cybersecurity, and/or anomaly detection.


Security Research IC3 - The typical base pay range for this role across the U.S. is USD $98,300 - $193,200 per year. A different range applies in specific work locations: within the San Francisco Bay Area and the New York City metropolitan area, the base pay range for this role is USD $127,200 - $208,800 per year.

Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-corporate-pay

Microsoft will accept applications and process offers for these roles on an ongoing basis.

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances.  We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request via the Accommodation request form.

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.

#MSFTSecurity #AIRedTeam #AI #AISecurity #AISafety

Salary

USD $98,300 - $208,800 per year

Location

Redmond, Washington, United States

Job Overview
Job Type
Full Time
