Job Description:


Position: Hadoop Application/Systems Architect

Location: Chicago, IL

Duration: 6 Months

Accepted Visa: USC / GC / H1B / EAD-GC / TN



Responsibilities:



Design data flows from Kafka event streams to HDFS, HBase, and Hive data stores
Create frameworks and reference implementations of data pipelines and other Hadoop components
Support Scrum teams day to day with code reviews and detailed design walkthroughs
Build PoCs and assist with tool evaluations
Lead performance-tuning efforts



Skills Required:



3-5 years of hands-on architecture and development experience with Hadoop technologies such as Spark, Hive, MapReduce, and NoSQL databases like HBase
Experience designing and developing data ingestion and processing/transformation frameworks leveraging Hadoop Open Source tools/technologies
At least 3 years of experience in the big data space on the Hortonworks distribution
Experience with a variety of data ingestion tools, e.g., NiFi, StreamSets, Sqoop, and Flume
Experience with big data processing frameworks
Hands-on experience with Spark Streaming, Spark SQL, and Kafka for real-time data processing (required)
Well-versed in the development challenges inherent with highly scalable, highly available, and highly resilient systems
Expert-level understanding of Hadoop ecosystem components (Hive, Oozie, Spark, HBase, Tez, Kerberos), including their internals, interactions, and debugging techniques (required)
Experience designing security architectures involving LDAP, AD, Kerberos, Knox, and Ranger
Performance tuning of various Hadoop components
Implementation of Hadoop best practices
Deep knowledge of Hadoop file formats (e.g., Avro, Parquet, ORC)
Working experience in DevOps/Agile environments (highly desired)
Experience with Bitbucket
Working knowledge of microservice, event-driven, and Lambda architectures
Working knowledge of MPP parallel data processing design, SQL, BI tools, and data management
Coding experience with Scala or Python (Scala preferred)
Demonstrated success working with cross-functional teams
Analytics implementation experience (nice to have)
Ability to train and coach an agile team in Hadoop skills and best practices



Wishing you a great day ahead.