Job Description
About Fusemachines
Fusemachines is a 10+ year-old AI company dedicated to delivering state-of-the-art AI products and solutions to a diverse range of industries. Founded by Sameer Maskey, PhD, an Adjunct Associate Professor at Columbia University, our company is on a steadfast mission to democratize AI and harness the power of global AI talent from underserved communities. With a robust presence in four countries and a dedicated team of over 400 full-time employees, we are committed to fostering AI transformation journeys for businesses worldwide. At Fusemachines, we not only bridge the gap between AI advancement and its global impact but also strive to deliver the most advanced technology solutions to the world.
About the role:
This is a remote consulting position (3-month contract, with the possibility of extension) responsible for designing, building, and maintaining the infrastructure required for data integration, storage, processing, and analytics (BI, visualization, and Advanced Analytics) to optimize digital channels and technology innovations, with the end goal of creating competitive advantages for the food services industry around the globe. We’re looking for a mid-level engineer who brings fresh ideas from past experience and is eager to tackle new challenges.
We’re in search of a candidate who knows and loves working with modern data integration frameworks, big data, and cloud technologies. Candidates must also be proficient in data programming languages (Python and SQL), AWS Cloud, and the Snowflake Data Platform. The data engineer will build a variety of data pipelines and models to support advanced AI/ML analytics projects, with the goal of elevating the customer experience and driving revenue and profit growth globally.
Qualifications & Experience:
- Must have a full-time Bachelor's degree in Computer Science or a similar field from an accredited institution.
- At least 3 years of experience as a data engineer with strong expertise in Python, Snowflake, PySpark, and AWS.
- Proven experience delivering large-scale data and analytics projects and products as a data engineer.
Skill Set Requirements:
- 3+ years of data engineering experience with Snowflake and AWS (certifications preferred).
- Proficient in Python and other programming languages for efficient data integration and automation.
- Strong command of ELT/ETL tools, including custom-built solutions for APIs, databases, flat files, and event streaming (Informatica, Matillion, dbt; Informatica CDI is a plus).
- Experience with distributed data technologies (Spark/PySpark, dbt, Kafka) for processing large datasets.
- Advanced SQL skills for optimized data integration and manipulation.
- Skilled in AWS Data Warehousing and Snowflake development, including Snowflake architecture, services (Snowpipe, stages, stored procedures, etc.), and optimization.
- Familiar with data security in Snowflake (RBAC, encryption).
- Experience with streaming technologies (Kafka, Pulsar) and task orchestration (Apache Airflow preferred).
- Strong in AWS Cloud Computing (Lambda, Kinesis, S3, EKS, API Gateway, Lake Formation, etc.).
- Understanding of data modeling, database design, data quality, and governance.
- Strong problem-solving skills for data pipeline troubleshooting and performance optimization.
Responsibilities:
- Build and maintain data pipelines (streaming/batch) from diverse sources to Snowflake.
- Support Data Operations by developing scalable data solutions for global growth.
- Standardize and extend data pipeline frameworks across regions and business units.
- Use a mix of open-source frameworks (PySpark, Airflow) and SaaS tools (Informatica, Snowflake) on a data platform.
- Ensure data reliability, scalability, and efficiency; manage data lifecycle, quality, and integration.
- Configure and support Snowflake data warehousing and data lake solutions.
- Collaborate with cross-functional teams to meet data requirements.
- Implement data validation, governance strategies, and document processes.
- Take ownership of database management, schema design, indexing, and performance tuning.
- Resolve data engineering issues promptly and optimize system performance.
- Participate actively in Agile processes and continuously improve practices.