Job Description
Azira is in search of a Senior Data Engineer to oversee all Data Engineering activities. In this role, you will utilize your extensive knowledge to design, define, develop, deploy, and manage secure, scalable data pipelines. Responsibilities include planning and executing the data engineering roadmap and implementing improvements. You will work independently to tackle challenging problems and set the pace.
Join Azira, a rapidly growing company specializing in advanced ad targeting and location visit measurement. Experience a true startup culture with the freedom to innovate and experiment. At Azira, we prioritize a balanced work-life culture, empowering employees to dream big and providing them with the freedom and tools to do so.
This position offers the opportunity to join Azira’s Engineering team, providing exposure to large-scale data and cutting-edge technology stacks. Your role involves enhancing data techniques, collaborating with Data Scientists, Software Engineers, and UI Engineers to solve complex problems as part of a high-performance team.
A Day in the Life
- Design and implement data processing pipelines for different kinds of data sources, formats, and content for the Azira Platform (at petabyte scale).
- Participate in product design and development activities supporting Azira’s suite of products (Operational Intelligence & Marketing Intelligence).
- Apply proficient SQL for data manipulation, querying, and analysis.
- Develop and execute thorough integration tests to ensure the delivery of top-notch products.
- Write code daily and maintain a hands-on approach, with an emphasis on coding and design responsibilities.
- Participate in code and design reviews of modules across the Data Engineering team.
- Plan capacity and specify resource requirements for different deployment scenarios.
- Assist the documentation team in providing clear and concise documentation, support, and deployment guides.
- Adopt agile practices to track and update the status of assigned tasks/stories.
What You Bring to the Role
- A Bachelor's or Master's degree in Computer Science or a related field.
- 4+ years of experience, including at least 2 years at a data-driven company or platform.
- Extensive hands-on experience is required with at least one distributed big data processing framework or related technology, such as Apache Spark, Apache Flink, Hadoop, Apache NiFi, Apache Kafka, or Apache HBase.
- Experience with designing and optimizing ETL workflows for efficiency and scalability.
- Proficiency in Java is essential, with additional experience in Scala, Go, or Python being advantageous.
- A strong grasp of algorithms, data structures, design patterns, and distributed systems is necessary.
- An extensive understanding of big data technologies and NoSQL databases (e.g., Kafka, HBase, Spark, Cassandra, MongoDB) is required.
- Additional experience with the AWS cloud platform, Spring Boot, and API development would be highly beneficial.