AI Research Scientist - Human Understanding (Physiology, Activity, State), Wearables

Meta · Published 3 months ago · First seen 1 month ago

Description

The Reality Labs team at Meta is looking for Research Scientists to join us as we build toward our goal of transforming the way people come together to interact, work, play, and live. We explore, develop, and deliver cutting-edge technologies that serve as the foundations for current and future Reality Labs products, such as MR headsets, AR and smart glasses, and AI assistants. We are committed to driving the state of the art forward through continuous innovation. With your in-depth subject-matter knowledge in Artificial Intelligence and Machine Learning, particularly in areas such as Deep Learning, Real-world Sensing and Multimodal Signal Processing, Behavior Understanding, and Vision Language Models, you will help us develop models, algorithms, datasets, and evaluation methodologies for foundational human understanding technologies that enable new and engaging user experiences, and apply them to large-scale production with the potential to reach billions of users. If you're interested in joining a team of industry-leading engineers and researchers committed to quality and innovation, working on exciting projects with significant impact, we encourage you to apply.

Responsibilities

- Conduct applied research to advance the state of the art in the relevant domain and products
- Drive sustained advancement in your domain by setting and executing long-term roadmaps, achieving research goals through clear intermediate milestones
- Collaborate with cross-functional hardware, software, product, and research teams across the globe
- Advocate for technology, product, and user value; propose new research roadmaps that can lead to new approaches and systems applicable across multiple applications
- Present research findings across the organization and/or as papers at respected conferences
- Prototype and validate research ideas, grounding experiments in data and sound evaluation methodology
- Design and oversee data collection strategies for user-facing features and relevant sensors, ensuring data quality and scalability through effective cross-functional collaboration
- Act as both an educator and a bridge, empowering internal teams with state-of-the-art knowledge and ensuring the organization remains at the forefront of innovation

Qualifications

- PhD in Computer Vision, Machine Learning, Signal Processing, Robotics, Human-Computer Interaction, Biomedical Engineering, or an equivalent field
- Established track record of leading and/or contributing to influential research, as evidenced by high-impact publications at peer-reviewed conferences (e.g. NeurIPS, CVPR, ICML, ICLR, ICCV, IEEE EMBC, ICASSP)
- Proven track record of planning multi-year research and innovation roadmaps in which short-term projects ladder up to the long-term mission
- Experience architecting, training, fine-tuning, and/or experimenting with models, with proven machine-learning development skills in PyTorch or TensorFlow
- Experience developing algorithms or infrastructure in Python or C/C++
- Experience communicating research to audiences of peers as well as non-technical audiences
- Experience working in cross-functional collaborations
- Experience designing and deploying end-to-end systems for complex sensing features on consumer-facing hardware (such as edge devices and wearables), involving algorithms for sensor-derived data streams and machine learning for real-world data
- Experience building scalable multi-modal data pipelines, model training, and deployment frameworks for wearables or mobile devices
- Experience in technical leadership of a team of researchers and engineers
- Experience in model/algorithm optimization for constrained hardware devices