Job Description

  • Experience in the Big Data space
  • Experience operationalizing Spark, Kafka, Cassandra, and graph databases in the AWS cloud
  • Familiar with Identity & Access Management (such as AWS IAM)
  • Experience in Python scripting
  • Infrastructure-as-code experience (CloudFormation / Terraform)
  • Understanding of the CI/CD process
  • Knowledge of cloud ecosystems (AWS, Azure, or GCP); AWS preferred
  • Familiarity with deploying Big Data technologies such as Hadoop, MapReduce, Spark
  • Familiarity with architectural patterns for data-intensive solutions
  • Experience with Agile methodology is a plus


  • Possess excellent interpersonal and organizational skills
  • Able to manage your own time and work well both independently and as part of a team
  • Motivated self-starter with the ability to learn quickly
  • Excellent written and verbal communication skills
  • Display sound problem-solving abilities in the face of challenges
  • A hands-on individual who is comfortable leading by example

Additional Requirements:

  • Experience with Kubernetes, Docker Swarm, or another container orchestration framework
  • Experience building and operating large-scale Hadoop/Spark data infrastructure used for machine learning
  • Experience with VPC networking and automated metrics collection and monitoring
