Job description

Requirements

  • Entry level
  • No education requirement listed
  • Salary negotiable
  • London

Description

DeepMind is active in the wider research community through publications and partnerships with many of the world’s top academics and academic institutions. We have built a hardworking and engaging culture, combining the best of academia with product-led environments, providing an ambitious balance of structure and flexibility.

Our approach encourages collaboration across all groups within the Research team, fostering ambitious creativity and the scope for breakthroughs at the forefront of research.

Research Scientists at DeepMind lead our efforts in developing novel algorithmic architectures, working towards the end goal of building Artificial General Intelligence.

Having pioneered research in the world's leading academic and industrial labs during PhDs, post-docs, or professorships, Research Scientists join DeepMind to work collaboratively within and across research fields. They develop solutions to fundamental questions in machine learning, computational neuroscience, and AI.

Drawing on expertise from a variety of disciplines, including deep learning, reinforcement learning, computer vision, language, neuroscience, safety, control, robotics, and multi-agent systems, our Research Scientists are at the forefront of groundbreaking research.

The Role

Conducting research into any transformative technology comes with the responsibility to build mechanisms for safe and reliable development and deployment at every step. Technical safety research at DeepMind investigates questions related to objective specification, robustness, interpretability, and trust in machine learning systems. Dedicated research in these areas is essential to the fulfilment of the long-term goal of DeepMind Research: to build safe and socially beneficial AI systems.

Research on technical AI safety draws on expertise in deep learning, reinforcement learning, statistics, and foundations of agent models. Research Scientists work at the forefront of technical approaches to designing systems that reliably function as intended, discovering and mitigating possible long-term risks in close collaboration with other AI research groups within and outside of DeepMind.

Responsibilities

- Identify and investigate possible failure modes for current and future AI systems, and proactively develop solutions to address them
- Conduct empirical or theoretical research into technical safety mechanisms for AI systems in coordination with the team’s broader technical agenda
- Collaborate with research teams externally and internally to ensure that AI capabilities research is informed by and adheres to the most advanced safety research and protocols
- Report and present research findings and developments to internal and external collaborators with effective written and verbal communication

About you

Minimum qualifications:

- PhD in a technical field or equivalent practical experience

Preferred qualifications:

- PhD in machine learning, computer science, statistics, computational neuroscience, or mathematics.
- Relevant research experience in deep learning, machine learning, reinforcement learning, statistics, or computational neuroscience.
- A real passion for AI.

Competitive salary applies.

The Safety team are looking for PhD Interns for 2019. If you would like to apply, you can do so using the following link: https://deepmind.com/careers/1184837