About Us

Swvl is a revolutionary idea born from passion, loyalty, and the persistence to face every challenge that comes our way. It started with an observation that turned into a realization: too many cars on the streets were wasting our limited resources of time, space, and money.

In four years, Swvl became the first $1.5 billion unicorn in the Middle East to list on NASDAQ, and it is currently the second best-funded startup in the MENA region. We have a presence and operations in up to 10 countries worldwide, with a vision to be active on six continents.

Our main goal is not just to facilitate commuting, but to strive for better solutions, encourage the contribution of youth to innovation, and inspire change.

We are looking for an engaged and enthusiastic Engineer to join our team of talented engineers who share a common interest in distributed systems, their scalability, and their continued development.

Responsibilities:

  • Implement, maintain, and scale our data pipeline.
  • Implement, maintain, and scale our ETL processes and ETL framework.
  • Devise efficient solutions to challenging problems with robust, scalable, reusable, production-quality software.
  • Brainstorm ideas and write proofs of concept and benchmarks.
  • Implement analytics using big data platforms and frameworks.
  • Document the architecture and the technical details of each project clearly.
  • Collaborate and communicate closely with the relevant teams throughout the lifecycle of each project.

Skills & Requirements:

  • Solid knowledge of algorithms and data structures.
  • Thorough understanding of concurrency concepts.
  • Good understanding of distributed systems.
  • Expertise in one or more object-oriented programming languages (Python, Java, C++).
  • Strong SQL and relational DB experience.
  • Experience with relational and non-relational data modeling.
  • Experience with one or more distributed batch data processing platforms (e.g., Hadoop, Spark).
  • Experience with one or more distributed real-time data processing platforms (e.g., Spark Streaming, Storm, Flink).
  • Experience with one or more pub/sub data buses, such as Kafka.
  • Good knowledge of at least one NoSQL database technology (columnar, graph, or document databases).
  • Experience with Protocol Buffers, Apache Thrift, or Apache Avro.
  • Experience with BigQuery is a plus.
  • Eager to learn and improve.