We believe the most promising path to safe AGI is to automate AI research and code generation, improving models and solving alignment more reliably than humans can alone. To achieve this, we combine frontier-scale pre-training, domain-specific reinforcement learning, ultra-long context, and inference-time compute.