Applied Researcher (Product)
Application deadline: None — we are actively interviewing and aim to fill this role as soon as we find the right candidate.
THE OPPORTUNITY
Join our new AGI safety product team and help transform complex AI research into practical tools that reduce risks from AI. As an applied researcher, you'll work closely with our CEO (also Head of Product), product engineers, and software engineers on the Evals team to build tools that make AI agent safety accessible at scale for our customers. Our current focus is monitoring AI coding agents for safety and security failures. You will join a small team, with significant scope to shape both the team and the technology, and the opportunity to earn responsibility quickly.
You will like this opportunity if you're passionate about using empirical research to make AI systems safer in practice. You enjoy the challenge of translating theoretical AI risks into concrete detection mechanisms. You thrive on rapid iteration and learning from data. And you want your research to directly impact real-world AI safety.
KEY RESPONSIBILITIES
Research & Development
- Systematically collect and catalog coding agent failure modes from real-world instances, public examples, research literature, and theoretical predictions
- Design and conduct experiments to test monitor effectiveness across different failure modes and agent behaviors
- Build and maintain evaluation frameworks to measure progress on monitoring capabilities
- Iterate on monitoring approaches based on empirical results, balancing detection accuracy with computational efficiency
- Stay current with research on AI safety, agent failures, and detection methodologies, as well as research on coding security and safety vulnerabilities
Monitor Design & Optimization
- Develop a comprehensive library of monitoring prompts tailored to specific failure modes (e.g., security vulnerabilities, goal misalignment, deceptive behaviors)
- Experiment with different reasoning strategies and output formats to improve monitor reliability
- Design and test hierarchical monitoring architectures and ensemble approaches
- Optimize log pre-processing pipelines to extract relevant signals while minimizing latency and computational costs
- Implement and evaluate different scaffolding approaches for monitors, including chain-of-thought reasoning, structured outputs, and multi-step verification
Future projects (likely not in the first 6 months)
- Fine-tune smaller open-source models to create efficient, specialized monitors for high-volume production environments
- Design and build agentic monitoring systems that autonomously investigate logs to identify both known and novel failure modes