Job Description
About Us:
Ola Krutrim is building AI computing for the future. Our envisioned AI computing stack encompasses AI computing infrastructure, the AI Cloud, multilingual and multimodal foundational models, and AI-powered end applications. We are India's first AI unicorn, and we built the country's first foundation model.
Our AI stack empowers consumers, startups, enterprises, and scientists across India and the world to build their own AI applications and models. While we are building foundational models across text, voice, and vision for our focus markets, we are also developing AI training and inference platforms that enable AI research and development across industry domains.
The platforms being built by Krutrim have the potential to impact millions of lives in India, across income and education strata, and across languages.
Job Description:
We are seeking an experienced and visionary Principal Research Scientist to lead our AI Alignment efforts, encompassing Trust and Safety, Interpretability, and Red Teaming. In this critical role, you will oversee teams dedicated to ensuring our AI systems are safe, ethical, interpretable, and reliable. You will work at the intersection of cutting-edge AI research and practical implementation, guiding the development of AI technologies that positively impact millions of lives while adhering to the highest standards of safety and transparency.
Responsibilities:
- Provide strategic leadership for the AI Alignment division, encompassing Trust and Safety, Interpretability, and Red Teaming teams.
- Oversee and coordinate the efforts of the Lead AI Trust and Safety Research Scientist and Lead AI Interpretability Research Scientist, ensuring alignment of goals and methodologies.
- Develop and implement comprehensive strategies for AI alignment, including safety measures, interpretability techniques, and robust red teaming protocols.
- Drive the integration of advanced safety and interpretability techniques such as Reinforcement Learning from Human Feedback (RLHF), Group Relative Policy Optimization (GRPO), Reinforcement Learning from Verifiable Rewards (RLVR), Direct Preference Optimization (DPO), Proximal Policy Optimization (PPO), Local Interpretable Model-agnostic Explanations (LIME), and SHapley Additive exPlanations (SHAP) across our AI development pipeline.
- Establish and maintain best practices for red teaming exercises to identify potential vulnerabilities and ensure our models do not generate harmful or undesirable outputs.
- Collaborate with product and research teams to define and implement safety and interpretability requirements that ensure our AI models deliver helpful, honest, and transparent outputs.
- Lead cross-functional initiatives to integrate safety measures and interpretability throughout the AI development lifecycle.
- Stay at the forefront of AI ethics, safety, and interpretability research, fostering a culture of continuous learning and innovation within the team.
- Represent the company in industry forums, conferences, and regulatory discussions related to AI alignment and ethics.
- Manage resource allocation, budgeting, and strategic planning for the AI Alignment division.
- Mentor and develop team members, fostering a collaborative and innovative research environment.
- Liaise with executive leadership to communicate progress, challenges, and strategic recommendations for AI alignment efforts.
Qualifications
- Ph.D. in Computer Science, Machine Learning, or a related field with a focus on AI safety, ethics, and interpretability.
- 7+ years of experience in AI research and development, with at least 3 years in a leadership role overseeing multiple AI research teams.
- Demonstrated expertise in AI safety, interpretability, and red teaming methodologies for large language models and multimodal systems.
- Strong understanding of advanced alignment techniques such as RLHF, GRPO, RLVR, DPO, and PPO, as well as interpretability methods such as LIME, SHAP, and attention-based approaches, as applied to AI safety and interpretability.
- Proven track record of leading teams working on models with tens to hundreds of billions of parameters.
- Experience in designing and overseeing comprehensive red teaming exercises for AI systems.
- Deep knowledge of ethical considerations in AI development and deployment, including relevant regulatory frameworks and industry standards.
- Strong publication record in top-tier AI conferences and journals, specifically in areas related to AI safety, ethics, and interpretability.
- Excellent communication and presentation skills, with the ability to convey complex technical concepts to diverse audiences, including executive leadership and non-technical stakeholders.
- Demonstrated ability to manage and mentor diverse teams of researchers and engineers.
- Strong project management skills with experience in resource allocation and budgeting for large-scale research initiatives.
- Visionary mindset with the ability to anticipate future trends and challenges in AI alignment and ethics.