Computing systems are becoming ever more complex, with automated decisions increasingly based on deep learning components. A wide variety of applications are being developed, many of them safety-critical, such as self-driving cars and medical diagnosis. Since deep learning is unstable with respect to adversarial perturbations, there is a need for rigorous software development methodologies that encompass machine learning components. This lecture will describe progress in developing automated certification techniques for learnt software components, aiming to ensure the safety and adversarial robustness of their decisions. I will discuss different dimensions of robustness, including bounded perturbations and causal interventions, as well as the role of uncertainty and explainability.
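To make the notion of certified robustness under bounded perturbations concrete, the sketch below checks, exactly, whether a toy linear classifier's decision can change under any perturbation of the input with L-infinity norm at most eps. This is an illustrative example only, not the certification method described in the lecture: for a linear model the worst-case logit margin has a closed form, whereas certification of deep networks requires more sophisticated techniques. All model values here are hypothetical.

```python
# Exact robustness certification for a linear classifier under an
# L-infinity bounded perturbation: ||delta||_inf <= eps.
# Illustrative sketch only; weights, biases and input are made up.

def certify_linear(W, b, x, eps):
    """Return True iff the argmax class of W @ x + b cannot change for
    any perturbation delta with ||delta||_inf <= eps (exact for linear
    models, since the worst-case margin has a closed form)."""
    logits = [sum(wi * xi for wi, xi in zip(row, x)) + bi
              for row, bi in zip(W, b)]
    c = max(range(len(logits)), key=lambda i: logits[i])
    for j in range(len(logits)):
        if j == c:
            continue
        # The adversary shifts each coordinate by +/- eps in the
        # direction favouring class j, costing eps * ||w_c - w_j||_1
        # of the margin between class c and class j.
        diff = [W[c][k] - W[j][k] for k in range(len(x))]
        worst = logits[c] - logits[j] - eps * sum(abs(d) for d in diff)
        if worst <= 0:
            return False
    return True

# Toy 2-class, 2-feature model (hypothetical values).
W = [[1.0, 0.0], [0.0, 1.0]]
b = [0.0, 0.0]
x = [1.0, 0.2]
print(certify_linear(W, b, x, 0.3))  # margin 0.8 survives eps = 0.3
print(certify_linear(W, b, x, 0.5))  # but not eps = 0.5
```

The same worst-case reasoning underlies certification of deep networks, except that the margin can no longer be computed in closed form and must instead be bounded, for example by abstract interpretation or constraint solving.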