8+ years of experience designing and developing data pipeline automation that extracts data from cloud API-based sources. Experience developing complex automation frameworks, queries, data models, and extract, transform, and load (ETL) processes in SQL, Python, or similar languages.
Deep experience with scripting languages such as Python, and with cloud data warehouses such as Snowflake and Redshift, to facilitate rapid ingestion and dissemination of key data.
Proficiency in transforming structured and unstructured data. Experience with CI/CD tools such as Jenkins and GitLab CI/CD, and with source control platforms such as GitHub.
Experience identifying and resolving data management issues to improve data quality, and cleaning, preparing, and optimizing data for ingestion and consumption. You will work with your Data Engineering teammates to review designs, code, and test plans, increasing knowledge and application of key frameworks and methodologies.
Work with internal and external stakeholders to resolve data-related technical issues and support data infrastructure needs.
Bachelor's Degree in Computer Science, Information Technology, Computer Engineering, or related IT discipline; or equivalent experience.