Job Description:
Position: Hadoop Developer
Location: United States - Missouri - Maryland Heights
Area of Interest: Information Technology Services
Position Type: Full Time


JOB SUMMARY
Responsible for the design, development, and implementation of Big Data projects using Spark with Scala. Resolves issues regarding development, operations, implementation, and system status.

MAJOR DUTIES AND RESPONSIBILITIES
Strong knowledge of Hadoop architecture and its implementation.
Strong understanding of best practices for Talend coding on large-scale Hadoop clusters.
Proficiency with the Software Development Lifecycle (SDLC).
Solid knowledge of the programming language(s), application server, database server, and/or architecture of the system being developed.
Good communication skills and a problem-solving mentality.
Solid understanding of current programming languages and the ability to employ any or all of them to solve the business needs of the Client's internal customers.
Strong professional experience in functional programming using Scala and Java.
Strong experience with Talend Big Data Real-Time or other functional languages.
Excellent understanding of data engineering concepts.
Experience working with Spark for data manipulation, preparation, and cleansing.
Experience across the Hadoop ecosystem, including HDFS, Hive, YARN, Flume, Oozie, Cloudera Impala, ZooKeeper, Hue, Sqoop, Kafka, Storm, Spark, and Spark Streaming, as well as knowledge of NoSQL databases.
Good knowledge of Windows, Linux, and Solaris operating systems and shell scripting.
Strong desire to learn a variety of technologies and processes with a "can-do" attitude.


REQUIRED QUALIFICATIONS
Skills / Abilities and Knowledge
Ability to read, write, speak and understand English.

Ability to communicate orally and in writing in a clear and straightforward manner.
Ability to communicate with all levels of management and company personnel.
Ability to handle multiple projects and tasks.
Ability to make decisions and solve problems while working under pressure.
Ability to prioritize and organize effectively.
Ability to show judgment and initiative and to accomplish job duties.
Ability to use a personal computer and software applications (e.g., word processing, spreadsheets, etc.).
Ability to work independently.
Ability to work with others to resolve problems, handle requests, or address situations.
Ability to effectively consult with department managers and leaders.


RELATED WORK EXPERIENCE
8-10 years of hands-on experience handling large-scale software development and integration projects.
2+ years of experience working with Hadoop cluster environments and the surrounding tools ecosystem: Spark, Spark Streaming, Sqoop, HDFS, Kafka, and ZooKeeper.
Experience with Java, Python, Pig, Hive, or other languages is a plus.

PREFERRED QUALIFICATIONS
Experience working with RDBMS and Java.
Exposure to NoSQL databases such as MongoDB and Cassandra.
Experience with cloud technologies (AWS).
Certification in Hadoop development is desired.