Job Description
We're looking for a diverse group of collaborators who believe data has the power to improve society. Adventurous minds who value solving some of the world's most challenging problems. Here, employees are encouraged to push their boldest ideas forward, united by a passion to create a world where data improves the quality of life for people and businesses everywhere.
About the Role
As a Principal DevOps Engineer at Informatica, you will be instrumental in advancing our Cloud Data Access Management (CDAM) service within the Intelligent Data Management Cloud (IDMC) platform. Building on our recent acquisition of Privitar, you will help expand our advanced access controls for data privacy and security. Collaborating closely with cross-functional teams—including development, QE, product management, and architects—you will enhance our service, Kubernetes environments, and CI/CD pipelines, driving programs that accelerate engineering velocity across the department.
You will influence the architecture of both our applications and infrastructure to improve performance, scalability, and reliability. You'll work on automation, performance testing, and cloud cost optimization, and you'll implement best practices in Kubernetes and cloud technologies to ensure efficient use of resources.
You will report to a Development Manager.
Key Responsibilities
- Enhance CI/CD Infrastructure: Design, deploy, and optimize CI/CD pipelines to improve engineering efficiency and product delivery speed.
- Collaborate Across Teams: Work with development, QE, product teams, and architects to improve processes, remove bottlenecks, and influence the architecture for better performance and scalability.
- Establish CDAM within IDMC: Play a key role as we continue to deepen the integration of the Cloud Data Access Management service into the IDMC platform.
- Optimize Performance and Scalability: Participate in performance testing and work to improve the scalability and reliability of our services.
- Implement Infrastructure Automation: Use Terraform and Helm for scalable and reliable infrastructure deployments.
- Improve Observability: Implement observability solutions to monitor system health and performance.
- Cloud Cost Optimization: Analyze and improve cloud resource usage to reduce costs while maintaining performance and reliability.
- Contribute to Architectural Decisions: Contribute to discussions that shape the architecture of our applications and infrastructure.
- Drive Engineering Initiatives: Lead projects aimed at improving engineering practices and team productivity across the department.
- Maintain Security Posture: Enforce our standards for web, application, and infrastructure security so we can continue to deliver services to our customers.
- Stay Updated on Industry Trends: Participate in professional development activities to stay current with the latest technologies and methodologies.
Technology You'll Use
- CI/CD: Jenkins, Harness, and GitHub Actions.
- Infrastructure Automation: Terraform and Helm.
- Containerization: Docker and Kubernetes.
- Build Tools: Maven, Gradle, Bazel, and Yarn.
- Observability: Grafana, Prometheus, ELK Stack (Elasticsearch, Logstash, Kibana), Loki, and Mimir.
- Programming Languages: Python, Java, Go, TypeScript.
- Test Utilities: JUnit, Cypress, Playwright, Karate, and Jest.
- Cloud Services: AWS, Azure, GCP.
Qualifications
- Education:
  - Bachelor's degree in Computer Science or a related field, or an equivalent combination of relevant education and experience.
- Experience:
  - 8+ years working with AWS, Azure, or GCP.
  - 8+ years of relevant professional experience in DevOps or related fields.
  - Experience enhancing CI/CD pipelines and improving engineering velocity.
  - Experience with performance testing and optimization.
  - Experience in cloud cost optimization.
  - Software design experience with a product engineering background.
  - Experience working in an Agile environment with geographically dispersed teams.
  - Willingness to stay updated on industry knowledge and trends.