About Krutrim:
Krutrim is building AI computing for the future. Our envisioned AI computing stack encompasses AI infrastructure, AI Cloud, multilingual and multimodal foundation models, and AI-powered applications. As India's first AI unicorn, we built the country's first foundation models in the LLM and VLM domains, empowering consumers, startups, enterprises, and researchers to develop AI applications. We focus on foundation models across text, voice, and vision while developing AI training and inference platforms to drive innovation. Our teams, spanning Bangalore, Singapore, and San Francisco, bring expertise across AI research, applied AI, cloud engineering, and semiconductor design.
Job Description: We are seeking experienced Multimodal and Vision AI Engineers/Scientists to research, develop, optimize, and deploy Vision-Language Models (VLMs), multimodal generative models, diffusion models, and traditional computer vision techniques. You will work on foundation models that integrate vision, language, and audio, optimize AI architectures, and push the boundaries of multimodal AI research.
Responsibilities:
- Research, design, and train multimodal vision-language models (VLMs) built on deep learning, transformer, and attention-based architectures.
- Distill VLMs into compact models for efficient deployment on resource-constrained devices.
- Implement state-of-the-art object detection (YOLO, Faster R-CNN), segmentation (e.g., panoptic segmentation), classification (ResNets, Vision Transformers), and image generation (Stable Diffusion, Stable Cascade).
- Train or fine-tune vision models for representation (e.g., Vision Transformers, Q-Former, CLIP, SigLIP), generation, and video representation (e.g., Video Swin Transformer).
- Work with diffusion models and generative models for conditional image generation and multimodal applications.
- Optimize CNN-based architectures for computer vision tasks like recognition, tracking, and feature extraction.
- Implement and optimize audio models for representation (e.g., w2v-BERT) and generation (e.g., HiFi-GAN, SeamlessM4T).
- Innovate on multimodal fusion techniques such as early fusion and deep fusion, and on efficient transformer components such as Mixture-of-Experts (MoE), FlashAttention, and multi-query, grouped-query, and multi-head latent attention (MQA, GQA, MLA).
- Advance video analysis, video summarization, and video question-answering models to enhance multimedia understanding.
- Implement optimization techniques like quantization, distillation, sparsity, streaming, and caching for scalable model deployment.
- Integrate and tailor deep learning frameworks and training stacks such as PyTorch, TensorFlow, DeepSpeed, Lightning, and FSDP, including support for Habana accelerators.
- Deploy large-scale distributed AI models using MLOps and infrastructure tools such as Airflow, MosaicML, Anyscale, Kubeflow, and Terraform.
- Publish research in top-tier conferences (NeurIPS, CVPR, ICCV, ICLR, ICML) and contribute to open-source AI projects.
- Collaborate with engineering teams to productionize research advancements into scalable services and products.
Qualifications:
- Ph.D. or Master's degree with 2+ years of experience in Vision-Language Models (VLMs), multimodal AI, diffusion models, CNN-based architectures (e.g., ResNets), computer vision, and generative models.
- Demonstrated expertise in high-performance computing and proficiency in Python, C/C++, CUDA, and kernel-level programming for AI applications.
- Experience in optimizing training and inference of large-scale AI models, with knowledge of quantization, distillation, and LLMOps.
- Hands-on experience with object detection (YOLO, Faster R-CNN), image segmentation (e.g., panoptic segmentation), and video understanding (Video Swin Transformer, TimeSformer).
- Experience with generative models, including diffusion models (Stable Diffusion, Stable Cascade) and conditional image generation.
- Familiarity with audio models for representation and generation is a plus.
- Research contributions in multimodal AI, vision-language integration, NLP, or generative modeling, demonstrated through publications and products.
- Proficiency in AI toolkits like PyTorch, TensorFlow, OpenCV, and familiarity with MLOps frameworks.
- Strong programming skills and practical experience with distributed AI model deployment.
- Excellent communication and collaboration skills to work across interdisciplinary teams.