Job Description:
Must have real-time data processing experience with Kafka.

Experience with Spark using Scala, Spark SQL, Hive, HBase, and NiFi; performance tuning and best practices for writing Spark code; experience with Kafka and real-time data processing.

Responsible for the design, development, and operation of systems that store and manage large amounts of data.
Computer software background, with a degree in information systems, software engineering, computer science, or mathematics.