Katana Graph builds an enterprise graph computing system and storage engine. Our technology is the world’s fastest graph processing engine, providing compelling scalability and programmability advantages.
Building on decades of experience developing state-of-the-art distributed systems, Katana Graph brings together experts in hardware acceleration, cloud computing, storage systems, and high-performance computing to create the platform of the future for data processing and analysis in this new world of specialized hardware and revitalized algorithms.
Katana Graph recently completed a $28.5 million Series A financing round led by Intel Capital with participation from existing and new investors including WRVI Capital, Nepenthe Capital, Dell Technologies Capital, and Redline Capital.
As a Senior Software Engineer focused on Data Integration, you will develop the infrastructure for our core compute and storage platform to integrate and interoperate with external infrastructure and services. You will be responsible for deployment architecture and tooling to improve the productivity of data engineers and data scientists who use our product.
Responsibilities:

- Drive projects from idea formulation through design to implementation.
- Develop tools and infrastructure to support and accelerate the construction of ETL/ML integrations.
- Develop integrations for customer infrastructure and pipelines.
- Develop features that improve the effectiveness of the product for data engineers and data scientists.
- Collaborate with product managers, customers, business groups, and other engineering teams to build data services and data integrations.
Qualifications:

- 5+ years of related experience.
- Experience with C++, Go, Python, or Scala, along with JDBC, SQL, and REST.
- Preferred: experience building, shipping, and operating multi-geo data pipelines at scale.
- Preferred: experience working with and operating workflow or orchestration frameworks, whether open-source tools such as Airflow or commercial enterprise tools.
- Preferred: experience with large-scale messaging systems such as Kafka, RabbitMQ, or commercial alternatives.
- Preferred: experience developing parallel data integration pipelines for big data systems and/or distributed databases.