Modern AI systems are increasingly deployed in high-stakes, safety-critical settings such as robotics, autonomous systems, manufacturing, healthcare, and scientific computing. In these settings it is not enough for a model to perform well on average: it must also respect constraints, provide calibrated uncertainty estimates, and remain dependable under distribution shift and in closed-loop decision-making. This subject develops a unified toolbox for building safe and reliable machine learning models, bridging methods from control theory, optimization, uncertainty estimation, modern generative modeling, and reinforcement learning.