Job Description:
Strong knowledge of Hadoop architecture and its implementation.
Strong understanding of best practices for Talend development on large-scale Hadoop clusters.
Proficiency with the Software Development Lifecycle (SDLC).
Solid knowledge of the programming language(s), application server, database server, and/or architecture of the system being developed.
Good communication skills and a problem-solver mentality.
Solid understanding of current programming languages, employing any or all of them to meet the business needs of the client's internal customers.
Strong professional experience with functional programming in Scala and Java.
Strong experience with Talend Big Data Real-Time or other functional languages.
Excellent understanding of data engineering concepts.
Experience working with Spark for data manipulation, preparation, and cleansing.
Experience across the Hadoop ecosystem, including HDFS, Hive, YARN, Flume, Oozie, Cloudera Impala, ZooKeeper, Hue, Sqoop, Kafka, Storm, Spark, and Spark Streaming, along with NoSQL database knowledge.
Good knowledge of Windows, Linux, and Solaris operating systems and shell scripting.
Strong desire to learn a variety of technologies and processes with a "can do" attitude.

8-10 years of hands-on experience in handling large-scale software development and integration projects.
2+ years of experience working with Hadoop cluster environments and the tools ecosystem: Spark/Spark Streaming/Sqoop/HDFS/Kafka/ZooKeeper.
Experience with Java, Python, Pig, Hive, or other languages is a plus.

Experience working with RDBMSs and Java.
Exposure to NoSQL databases such as MongoDB and Cassandra.
Experience with cloud technologies (AWS).
Certification in Hadoop development is desired.