About Us

Swvl is a revolutionary idea born from passion, loyalty, and the persistence to face every challenge that comes our way. It started with an observation that turned into a realization: too many cars on the streets, wasting our limited resources of time, space, and money.

Our main goal is not just to facilitate commuting, but to strive for solutions, encourage the contribution of youth to innovation, and inspire change.

Within three years, Swvl has grown to operate in six cities across three countries: Cairo and Alexandria in Egypt; Nairobi in Kenya; and Lahore, Karachi, and Islamabad in Pakistan.

We are seeking a Senior Data Engineer to work collaboratively with a multidisciplinary and extremely talented team. The ideal candidate thrives in an agile environment and has a strong passion for distributed systems, Big Data, ETL, and production-quality software.

Responsibilities

  • Design, implement, maintain, and scale our data pipelines.
  • Design, implement, maintain, and scale our ETL processes and ETL framework.
  • Deliver robust, scalable, reusable, efficient, production-quality solutions to challenging problems.
  • Brainstorm ideas, and write proofs of concept and benchmarks.
  • Implement analytics using big data platforms and frameworks.
  • Document the architecture and the technical details of each project clearly.
  • Collaborate and communicate closely with the relevant teams throughout the lifecycle of each project.

Skills & Requirements

  • 3–5 years of experience.
  • Solid knowledge of algorithms and data structures.
  • Thorough understanding of concurrency concepts.
  • Strong understanding of distributed systems.
  • Expertise in one or more object-oriented programming languages (Python, Java, C++).
  • Good knowledge of Scala.
  • Strong SQL and relational database experience.
  • Experience with relational and non-relational data modeling.
  • Experience with one or more distributed batch data processing platforms (Hadoop, Spark, etc.).
  • Experience with one or more distributed real-time data processing platforms (Spark Streaming, Storm, Flink, etc.).
  • Experience with Kafka and other pub/sub data buses.
  • Good knowledge of at least one non-relational database technology (columnar, graph, or document databases).
  • Experience with Protocol Buffers, Apache Thrift, or Apache Avro.
  • Experience with BigQuery is a plus.
  • Eager to learn and improve.