Job Description:
Mandatory experience in AWS technologies such as EC2, CloudFormation, EMR, S3, Glue, Athena, and AWS analytics services
Mandatory experience in Python or PySpark scripting for data extraction, transformation, and loading (ETL)
Experience with big data and related AWS technologies such as Hive, Spark, Presto, Hadoop, and Amazon Redshift, including ETL workloads
Mandatory experience in handling CI/CD pipelines and CloudFormation templates (CFTs) using AWS CodeCommit
Well versed in big data/Spark architecture for high-volume datasets, with expertise in ETL performance tuning
Strong experience in SQL and PL/SQL

Client: Client