Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

AI’s cyberattack capability is increasing rapidly; recently, there has been a surge in the number of zero-day exploits found by AI in well-tested software. However, fixing or hardening code often lags behind by months, with human experts struggling to keep up with the long backlog of vulnerability reports. The danger is exacerbated by open-source models, which are becoming more capable at cyberattacks and are readily available to malicious actors. The goal of our project is to leverage AI for defense: (1) automatically fixing discovered vulnerabilities, and (2) hardening code, either through checked annotations or through transformations to safer coding practices in the same language.

Our team collaborates closely with a wide variety of teams across Google/Alphabet, leveraging Google DeepMind’s expertise in advanced machine learning to harden code for Alphabet systems and beyond.
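
To make the second goal concrete, here is a minimal, hypothetical C++ sketch of what a "transformation to safer coding practices in the same language" could look like. It is an illustration only, not an example taken from the project: an unchecked buffer walk with an off-by-one bug is rewritten to use a size-carrying container with bounds-checked access.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical "before" pattern: unchecked indexing into a caller-supplied
// buffer. The off-by-one loop bound reads one element past the end.
// (Shown for illustration; not invoked below, since calling it would be
// undefined behavior.)
int sum_unchecked(const int* data, int count) {
    int total = 0;
    for (int i = 0; i <= count; ++i) {  // bug: should be i < count
        total += data[i];
    }
    return total;
}

// Hardened rewrite in the same language: the container carries its own size,
// and .at() performs a bounds check that throws instead of silently reading
// out of bounds.
int sum_checked(const std::vector<int>& data) {
    int total = 0;
    for (std::size_t i = 0; i < data.size(); ++i) {
        total += data.at(i);
    }
    return total;
}

int main() {
    std::vector<int> values{1, 2, 3, 4};
    std::printf("checked sum: %d\n", sum_checked(values));
    return 0;
}
```

The same kind of rewrite could equally be expressed as checked annotations (for example, contracts or static-analysis attributes) rather than a source transformation; the sketch above simply shows the transformation variant under assumed, simplified conditions.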