One of the most successful FinTech companies in Berlin is looking for Data Engineers. If you would like to work in a fun, driven, and ambitious atmosphere, this is your chance.
Benefits:
- An attractive base salary
- Participation in our virtual stock option program
- A fun and driven office atmosphere in the heart of Berlin
- A great, diverse team
- Flexible working hours
- Close collaboration with our CEO/founder, our Chief of Staff, and seasoned industry experts with a rich network
Tasks:
- Designing and building fault-tolerant, horizontally scalable systems using a Microservice architecture powered by Docker and Kubernetes
- Building event-driven applications using Kafka
- Developing data cleaning, data transformation, data aggregation, and schema management components
- Working on improving the reliability of the data infrastructure
- Developing innovative tooling to enable low-maintenance data pipelines, rich data tooling for product and operations, and user-focused dashboards
- Working with an agile methodology with flexible processes
- Working with cutting-edge technologies (Go, Kafka, Looker) and no legacy codebase
Your Profile:
- Prior experience with distributed systems and data pipelines
- Love for clean and structured data and experience with schema management
- Experience in Go (or willingness to make Go your day-to-day language)
- Fluency in analytical SQL and other data languages (e.g., LookML, numpy/pandas, Airflow DAGs, GraphQL)
- Fluency with event-driven architecture and concurrent algorithms
- Being proactive in learning new stacks and having a high sense of ownership
- Being able to present your work and enjoying teaching newly learned skills and technologies to your colleagues
Nice to have:
- Experience with investment products or the FinTech domain in general
- Experience with managing end-to-end data pipelines (from the creation of an event to making it visible as a KPI to the CEO)
- Experience in event-driven architectures and event sourcing systems
- Experience with ETL/ELT tools and technologies
- Experience with Kafka Streams, KSQL, or related streaming aggregation technologies
- Experience with Docker, Kubernetes, and modern monitoring and tracing tools
If you are interested in discussing this role in more detail, click apply. Even if you are not actively looking at this time but are interested in other opportunities we may have, please reach out to Eva Sassnick.