Job Description:
Skills:
- Good experience implementing Big Data solutions using Hadoop and Spark (using PySpark)
- Strong data warehousing concepts and experience designing data models for reporting and analytical solutions
- Spark development experience; prior Hadoop experience is preferred

Responsibilities:
- Design an ETL framework using Spark
- Develop code, with proficiency in performance tuning
- Participate hands-on in developing and reviewing code