Job Description:
Experience: 6 to 10 years
Thorough knowledge of Scala programming
Should lead and guide a team of Spark developers
Implementation of Spark Core, Spark SQL and Spark Streaming
Experience with Apache Kafka integration with Apache Spark
Implementing security for Spark Applications
Working with Spark in combination with the Hadoop ecosystem
Good working knowledge of Hive
Developing Hive UDFs in Python or Java and integrating with Spark
Should be able to understand Spark deployments in various environments such as Standalone, YARN and Mesos
Integrating Spark with NoSQL DBs such as Cassandra and MongoDB (at least one is mandatory)
Working with Apache Kafka
Should be able to fine-tune the performance of Spark applications
Knowledge of cloud platforms such as AWS; hands-on experience desirable
Programming knowledge of Java and Python desirable
Good knowledge of RDBMSs such as MySQL, Oracle and PostgreSQL, and experience interacting with these databases from Spark
Knowledge of one domain such as Health Care or Inventory Management is mandatory
Guiding the team on implementing coding standards, preparing technical documentation and other related activities
Good understanding of Hadoop ecosystem components such as Pig, Oozie, Sqoop, Impala, Kudu and Tez
Should be able to quickly learn, and guide the team on, new developments in Spark, Big Data and NoSQL DBs.