Meta is seeking Research Scientists to join its Fundamental AI Research (FAIR) organization, focused on making significant advances in AI. We publish groundbreaking papers and release frameworks and libraries that are widely used in the open-source community. The team conducts industry-leading research on building foundation models for audio understanding and audio generation, and works closely with vision research teams to push the frontier of multimodal (audio, video, language) research. Our team's research focuses on audio and multimodality. Individuals in this role are expected to be recognized experts in research areas such as artificial intelligence, speech and audio generation, and audio-visual learning. Researchers will drive impact by: (1) publishing state-of-the-art research papers, (2) open-sourcing high-quality code and reproducible results for the community, and (3) bringing the latest research to Meta products that connect billions of users. They will work with an interdisciplinary team of scientists, engineers, and cross-functional partners, and will have access to cutting-edge technology, resources, and research facilities.

Fundamental AI Research Scientist, Multimodal Audio (Speech, Sound and Music) - FAIR

Responsibilities

Develop algorithms based on state-of-the-art machine learning and neural network methodologies.
Perform research to advance the science and technology of intelligent machines.
Conduct research that enables learning the semantics of data across multiple modalities (audio, speech, images, video, text, and other modalities).
Work towards long-term ambitious research goals, while identifying intermediate milestones.
Design and implement models and algorithms.
Work with large datasets; train, tune, and scale models; create benchmarks to evaluate performance; open source and publish.

Minimum Qualifications

Bachelor's degree in Computer Science, Computer Engineering, a relevant technical field, or equivalent practical experience.
PhD in AI, computer science, data science, or a related technical field, or equivalent practical experience.
2+ years of experience holding an industry, faculty, academic, or government researcher position.
Research publications reflecting experience in related research fields: audio (speech, sound, or music) generation, text-to-speech (TTS) synthesis, text-to-music generation, text-to-sound generation, speech recognition, speech/audio representation learning, vision perception, image/video generation, video-to-audio generation, audio-visual learning, audio language models, lip sync, lip movement generation/correction, lip reading, etc.
Familiarity with one or more deep learning frameworks (e.g., PyTorch, TensorFlow).
Experience with the Python programming language.
Must obtain work authorization in the country of employment at the time of hire, and maintain ongoing work authorization during employment.

Preferred Qualifications

First-authored publications at peer-reviewed conferences, such as ICML, NeurIPS, ICLR, ICASSP, Interspeech, ACL, EMNLP, CVPR, and other similar venues.
Research and engineering experience demonstrated via publications, grants, fellowships, patents, internships, work experience, open-source code, and/or coding competitions.
Experience solving complex problems and comparing alternative solutions, trade-offs, and diverse points of view.
Experience working and communicating cross-functionally in a team environment.
Experience communicating research findings to public audiences of peers.
For those who live in or expect to work from California if hired for this position, please click here for additional information.

About Meta

Meta builds technologies that help people connect, find communities, and grow businesses. When Facebook launched in 2004, it changed the way people connect. Apps like Messenger, Instagram and WhatsApp further empowered billions around the world. Now, Meta is moving beyond 2D screens toward immersive experiences like augmented and virtual reality to help build the next evolution in social technology. People who choose to build their careers by building with us at Meta help shape a future that will take us beyond what digital connection makes possible today: beyond the constraints of screens, the limits of distance, and even the rules of physics.

$147,000/year to $208,000/year + bonus + equity + benefits

Individual compensation is determined by skills, qualifications, experience, and location. Compensation details listed in this posting reflect the base hourly rate, monthly rate, or annual salary only, and do not include bonus, equity or sales incentives, if applicable. In addition to base compensation, Meta offers benefits. Learn more about benefits at Meta.

Equal Employment Opportunity

Meta is proud to be an Equal Employment Opportunity employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. You may view our Equal Employment Opportunity notice here. Meta is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, fill out the Accommodations request form.

Salary

$147,000 - $208,000 per year

Location

Menlo Park, CA | Seattle, WA | Boston, MA | New York, NY

Job Type
Full Time