Job Description:
* Minimum 9 years of relevant IT experience required
* Design, build, and deploy streaming and batch data pipelines capable of processing and storing petabytes of data quickly and reliably
* Create data pipelines that ingest ETL and streaming data, such as log or tool/sensor data, into indexes. Experience with Splunk forwarders, the ELK stack (Elasticsearch, Logstash, Kibana), Kafka, Beats, or Elasticsearch/Splunk Python libraries preferred.
* Deep experience with stream-processing frameworks such as Spark, Flink, Kafka Streams, or Kinesis Analytics
* Proficient in Scala, Java, and/or Python, as well as SQL and KSQL, with hands-on Kafka experience
* Experience with AWS and Terraform (or similar), with deep familiarity with Lambda, S3, RDS, Aurora, and Athena
* Develop data catalogs and data validations to ensure clarity and correctness of key business metrics
* Experience with Kubernetes and containerization in order to support existing teams
* Ability to quickly analyze and comprehend new or unfamiliar technologies or ideas
* Experience in automation and the development of automation tools
* Ability to understand and work with distributed data systems
* Track record of interpreting requirements from Data Scientists and Machine Learning Engineers
* Collaborate with product teams, data analysts and data scientists to design and build data-forward solutions

Client: MG