Job Description:

· Proficiency with Big Data processing technologies (Hadoop, Spark, AWS) and experience building data pipelines and analysis tools using Python, PySpark, and Scala
· Create Scala/Spark jobs for data transformation and aggregation (see the sketch after this list)
· Produce unit tests for Spark transformations and helper methods (a test sketch also follows)
· Write Scaladoc-style documentation for all code
· Design data processing pipelines
· Spark certification is good to have
· A Java background is preferred
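
For illustration, here is a minimal sketch of the kind of Scala/Spark transformation-and-aggregation job described above, documented Scaladoc-style. The object name, column names (userId, eventDate, amount), and command-line paths are hypothetical placeholders, not details of the actual role:

    import org.apache.spark.sql.{DataFrame, SparkSession}
    import org.apache.spark.sql.functions._

    /** Aggregates raw event data into per-user daily totals.
      *
      * Column names (`userId`, `eventDate`, `amount`) are hypothetical
      * placeholders used only for this sketch.
      */
    object DailyTotalsJob {

      /** Sums `amount` per user per day.
        *
        * @param events raw events with `userId`, `eventDate`, and `amount` columns
        * @return one row per (userId, eventDate) with the summed `totalAmount`
        */
      def dailyTotals(events: DataFrame): DataFrame =
        events
          .groupBy(col("userId"), col("eventDate"))
          .agg(sum(col("amount")).as("totalAmount"))

      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("DailyTotalsJob")
          .getOrCreate()

        // Hypothetical input/output Parquet paths passed on the command line.
        val input  = args(0)
        val output = args(1)

        val events = spark.read.parquet(input)
        dailyTotals(events).write.mode("overwrite").parquet(output)

        spark.stop()
      }
    }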
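And a matching unit-test sketch for that transformation, assuming ScalaTest's AnyFunSuite and a local SparkSession; the expected rows follow directly from the toy input:

    import org.apache.spark.sql.SparkSession
    import org.scalatest.funsuite.AnyFunSuite

    class DailyTotalsJobSuite extends AnyFunSuite {

      // A local SparkSession is enough for unit-testing pure transformations.
      private val spark = SparkSession.builder()
        .master("local[2]")
        .appName("DailyTotalsJobSuite")
        .getOrCreate()

      import spark.implicits._

      test("dailyTotals sums amounts per user and day") {
        val events = Seq(
          ("u1", "2024-01-01", 10.0),
          ("u1", "2024-01-01", 5.0),
          ("u2", "2024-01-01", 7.0)
        ).toDF("userId", "eventDate", "amount")

        val result = DailyTotalsJob.dailyTotals(events)
          .collect()
          .map(r => (r.getString(0), r.getString(1), r.getDouble(2)))
          .toSet

        assert(result == Set(
          ("u1", "2024-01-01", 15.0),
          ("u2", "2024-01-01", 7.0)
        ))
      }
    }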