Job Description:
Relevant Experience (Yrs):
3+ years of experience in Big Data (Hadoop, Hive, and Spark), with hands-on expertise in the design and implementation of high-volume data solutions
Necessary Skills:
Strong in building Spark Scala pipelines (both ETL and streaming)
Proficient in Spark architecture
At least 1 year of experience migrating MapReduce processes to the Spark platform
3 years of experience in design and implementation using Hadoop and Hive
Able to optimize and performance-tune Hive queries
Proficiency in at least one programming language (Java or Python) is a must
Experience designing ETL and streaming pipelines in Spark Scala
Good experience in requirements gathering, design, and development
Experience working with cross-functional teams to meet strategic goals
Experience in high-volume data environments
Critical thinking and excellent verbal and written communication skills
Strong problem-solving and analytical abilities
Good knowledge of data warehousing concepts
Roles & Responsibilities:
As listed under Necessary Skills above