Job Description
About Onehouse
Onehouse is a mission-driven company dedicated to democratizing data management by making it universally interoperable. We deliver the universal data lakehouse through a cloud-native managed lakehouse service built on Apache Hudi, which was created by the founding team while they were at Uber.
We are a team of self-driven, inspired, and seasoned builders who have created large-scale data systems and globally distributed platforms that sit at the heart of some of the largest enterprises out there, including Uber, LinkedIn, Confluent, Amazon, Azure, Databricks, and many more. With $33M in total funding and a fresh Series A backed by Greylock/Addition, we are quickly expanding and looking for rising talent to grow with us and become future leaders of the team. Come help us build the world's best fully managed and self-optimizing data lake platform!
The Community You Will Join
When you join Onehouse, you're joining a team of passionate professionals tackling the deeply technical challenges of building a two-sided engineering product. Our engineering team serves as the bridge between the worlds of open source and enterprise: contributing directly to and growing Apache Hudi (already used at scale by global enterprises such as Uber, Amazon, and ByteDance) while concurrently defining a new industry category, the transactional data lake.
A Typical Day:
- Be the thought leader for all things data engineering within the company: schemas, frameworks, and data models.
- Implement new sources and connectors to seamlessly ingest data streams.
- Build scalable job management on Kubernetes to ingest, store, manage, and optimize petabytes of data on cloud storage.
- Optimize Spark or Flink applications to run flexibly in batch or streaming modes based on user needs, balancing latency against throughput.
- Tune clusters for resource efficiency and reliability to keep costs low while still meeting SLAs.
What You Bring to the Table:
- 3+ years of experience in building and operating data pipelines in Apache Spark or Apache Flink.
- 2+ years of experience with workflow orchestration tools such as Apache Airflow or Dagster.
- Proficient in Java, as well as Maven, Gradle, and other build and packaging tools.
- Adept at writing efficient SQL queries and troubleshooting query plans.
- Experience managing large-scale data on cloud storage.
- Great problem-solving skills and an eye for detail; able to debug failed jobs and queries in minutes.
- Operational excellence in monitoring, deploying, and testing job workflows.
- Open-minded, collaborative, self-starter, fast-mover.
- Nice-to-haves (but not required):
- Hands-on experience with Kubernetes (k8s) and its related toolchain in a cloud environment.
- Experience operating and optimizing terabyte-scale data pipelines.
- Deep understanding of Spark, Flink, Presto, Hive, Parquet internals.
- Hands-on experience with open source projects like Hadoop, Hive, Delta Lake, Hudi, NiFi, Drill, Pulsar, Druid, Pinot, etc.
- Operational experience with stream processing pipelines using Apache Flink or Kafka Streams.