Job Description:
Must-have requirements:
At least 5 years of experience working in the data infrastructure area
Experience provisioning and operationally supporting Spark clusters on Kubernetes (EKS preferred)
Experience working with AWS
Experience using "infrastructure as code" tools such as Terraform or Pulumi (see the illustrative sketch below)
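
For illustration only (not part of the original posting): a minimal Pulumi sketch in Python of the kind of infrastructure-as-code work described above, provisioning an EKS cluster that Spark-on-Kubernetes jobs could be scheduled onto. The resource name "spark-eks", the instance type, and the node counts are placeholder assumptions.

    import pulumi
    import pulumi_eks as eks

    # Small illustrative EKS cluster; node type and counts would be tuned
    # for real Spark executor workloads.
    cluster = eks.Cluster(
        "spark-eks",                 # placeholder name
        instance_type="m5.xlarge",   # assumed worker-node size
        desired_capacity=3,
        min_size=1,
        max_size=6,
    )

    # Export the kubeconfig so spark-submit or the Spark operator can target the cluster.
    pulumi.export("kubeconfig", cluster.kubeconfig)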

Nice-to-have requirements:
Hadoop knowledge and experience
Batch-processing technologies and data formats, e.g., Hive, Iceberg, Hudi, Avro, Parquet
Apache Spark committer (i.e., someone who knows Spark internals)