Job Description:

3+ years of experience with Big Data technologies, i.e., the Hadoop platform (Hive, HDFS, and Spark).
Must have experience developing big data systems using the Spark API and UDFs in Scala/Java.
Hands-on experience with UNIX, SQL, the Spark framework, and Java.

Expertise in functional programming in Scala.
Hands-on experience in Scala development; Spark experience is a must.
Adept at Apache Spark programming in Scala.
Good knowledge of configuring and working on multi-node clusters and the Spark distributed data processing framework.
Experience designing data pipelines, complex event processing, and analytics components using big data technologies (Scala/Spark).
Experience working with large volumes of data (terabytes), analyzing data structures, and designing effectively in a Hadoop cluster.