Job Description:
* 5+ years of experience owning and building data pipelines.
* Extensive knowledge of data engineering tools, technologies, and approaches
* Ability to understand business problems and determine how to meet the resulting data needs
* Experience designing and operating robust distributed systems
* Proven experience building data platforms from scratch for data consumption across a wide variety of use cases (e.g., data science, ML, scalability)
* Demonstrated ability to build complex, scalable systems with high quality
* Experience with specific AWS technologies (such as S3, Redshift, EMR, and Kinesis)
* Experience with multiple data technologies and concepts, such as Airflow, Kafka, Hadoop, Hive, Spark, MapReduce, SQL, NoSQL, and columnar databases, is a plus
* Experience in one or more of Java, Scala, Python, and Bash.