Software Engineer 2

Job Description

  • Bengaluru, Karnataka

Company Description

When you’re one of us, you get to run with the best. For decades, we’ve been helping marketers from the world’s top brands personalize experiences for millions of people with our cutting-edge technology, solutions and services. Epsilon’s best-in-class identity gives brands a clear, privacy-safe view of their customers, which they can use across our suite of digital media, messaging and loyalty solutions. We process 400+ billion consumer actions each day and hold many patents of proprietary technology, including real-time modeling languages and consumer privacy advancements. Thanks to the work of every employee, Epsilon India is now Great Place to Work-Certified™. Epsilon has also been consistently recognized as industry-leading by Forrester, Adweek and the MRC. Positioned at the core of Publicis Groupe, Epsilon is a global company with more than 8,000 employees around the world. For more information, visit epsilon.com/apac or our LinkedIn page.

Job Description

About the Data team:

At the heart of everything we do is data and this team. Our premium data assets empower the team to drive desirable outcomes for leading brands across industries. Armed with high volumes of transactional data, digital expertise and unmatched data quality, the team plays a key role in improving all our product offerings. Our data artisans are keen on embracing the latest in technology and trends, so there’s always room to grow and something new to learn here.

Why we are looking for you

  • You have hands-on experience with Kafka, Flume, Spark, Java/Scala, Hadoop, HDFS, Hive and SQL.
  • You will work with the Epsilon Marketplace team.
  • You have hands-on coding experience in languages such as Python and Scala.
  • You have hands-on experience in fine-tuning Spark jobs.
  • You work well with key project stakeholders and keep them informed about the overall health of projects affecting the data platforms.
  • You have exposure to Airflow, Docker containers, and NoSQL databases such as HBase.

What you will enjoy in this role

  • As part of the Data Pipeline team, you will process billions of records per day from multiple regions and data centers.
  • Processing ad-server data into the storage layer, where further analytics are performed.
  • Working with big data technologies such as Flume, Kafka and Spark, and loading the aggregated/processed data into HDFS.
  • Working on data pipelining, a key area of ownership for the team.
  • Building intraday (hourly, 5-minute and 15-minute) aggregations using Spark Structured Streaming jobs.
  • Working on data assets that feed performance measurement and efficacy analysis of the defined solutions, as well as business analytics and data mining.

What you will do

  • End-to-end development of automated receipt of anonymized data
  • End-to-end development of log-data processing
  • Data center to data center replication
  • Data processing using Flume, Kafka, Spark jobs, Airflow, Docker, etc.
  • Migration of production data assets to downstream consuming systems
  • Disaster Recovery and Business Continuity implementations
  • Workload Performance Management and Tuning
  • Ensuring that coded solutions meet functional business requirements for ad serving and measurement
  • Application specific controls and scheduling
  • Building custom solutions for syndicated and third-party datasets

Qualifications

  • Bachelor’s Degree in Computer Science or equivalent degree is required
  • 3–5 years of data engineering experience in database marketing technologies and data management, with a solid technical understanding of these areas.
  • Experience coding in Java or Scala.
  • Experience with scheduling applications such as Airflow and Oozie, including jobs with complex interdependencies.
  • Good hands-on experience with open-source components such as Kafka, Flume, Spark, Hadoop and HDFS.
  • Willingness to participate in code reviews, providing constructive feedback and accepting feedback on your own code.
  • Experience with NoSQL databases (e.g., HBase) and Docker containers is a plus.
  • Excellent written and verbal communication skills.
  • Excellent analytical and problem-solving skills
  • Ability to diagnose and troubleshoot problems quickly

Skills

  • Java
  • Scala
  • NoSQL Databases
  • Kafka
  • Docker
  • Problem Solving

Education

  • Master's Degree
  • Bachelor's Degree

Job Information

Job Posted Date

Nov 09, 2023

Experience

3 to 5 Years

Compensation (Annual in Lacs)

Best in the Industry

Work Type

Permanent

Type Of Work

8 hour shift

Category

Information Technology
