Job Description
Our Mission:
6sense is on a mission to revolutionize how B2B organizations create revenue by predicting the customers most likely to buy and recommending the best course of action to engage anonymous buying teams. 6sense Revenue AI is the only sales and marketing platform to unlock the ability to create, manage, and convert high-quality pipeline to revenue.
Our People:
People are the heart and soul of 6sense. We serve with passion and purpose. We live by our Being 6sense values of Accountability, Growth Mindset, Integrity, Fun and One Team. Every 6sensor plays a part in defining the future of our industry-leading technology. 6sense is a place where difference-makers roll up their sleeves, take risks, act with integrity, and measure success by the value we create for our customers.
We want 6sense to be the best chapter of your career.
About the role:
We are looking for self-motivated, dynamic, and like-minded individuals who have the following:
Responsibilities:
- Own and manage core data assets, which are the company's intellectual property and form its data moat
- Create, validate, and maintain optimal big data pipelines involving large-scale, complex data sets that meet functional and non-functional business requirements
- Adapt and implement new technologies to improve the data ecosystem, e.g., complex data processing and optimal extraction, transformation, and loading of data from a wide variety of sources using SQL, Elasticsearch, MongoDB, and various AWS and in-house services
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing systems for greater scalability
- Improve our current data pipelines, i.e., improve performance and freshness and ensure data health (quality and accuracy) at scale
- Debug issues that arise in data pipelines, especially performance issues, and improve diagnostics and resolution timelines within defined SLAs
Requirements:
- 3+ years of experience in a Data Engineer role
- Must have SQL knowledge and experience working with relational and non-relational data stores, e.g., MySQL, MongoDB, Cassandra, and Athena
- Must have experience with Python or Scala
- Must have experience with Big Data technologies like Apache Spark
- Must have experience with Apache Airflow
- Proficiency in Linux
- Experience with data pipeline and ETL tools like AWS Glue
- Experience working with AWS cloud services (EC2, S3, RDS, Redshift) and other data solutions, e.g., Databricks and Snowflake
Desired Skills and Experience:
- Python, SQL, Scala, Apache Spark, ETL