Job Description:
Hadoop and Kafka Developer

Raleigh, North Carolina

12+ Months

Position Description

The work is on the integration layer with multiple connection points, giving great exposure to modular architecture as the industry trends toward it. The team also works in Agile, so the work will be well planned and the resource will have a good work-life balance.

Your future duties and responsibilities

7-8 years of Hadoop, Kafka, MongoDB, and Spark experience.

Hadoop, Kafka, MongoDB, and Spark development and implementation.

Loading from disparate data sets.

Pre-processing using Hive and Pig.

Designing, building, installing, configuring and supporting Hadoop.

Translate complex functional and technical requirements into detailed design.

Perform analysis of vast data stores and uncover insights.

Maintain security and data privacy.

Create scalable and high-performance web services for data tracking.

High-speed querying.

Support the effort to build new Hadoop clusters.

Test prototypes and oversee handover to operational teams.

Propose best practices/standards.

Required qualifications to be successful in this role

7+ years of experience in:

Experience using Hive, Pig, Sqoop, Impala, Spark, MapReduce, Flume, Avro, and HDFS.

Experience with Kafka

Experience with MongoDB

Experience with Python and OpenShift.

Should have a Java development background

Experience with Data Warehouse/Data Integration

Experience with CDH (Cloudera's Hadoop distribution)

Experience with Oracle Database

Design and requirements-gathering experience


Agile experience

Strong database SQL, ETL, and data analysis skills.

Nice to have – Informatica experience



Primary Skillset – Kafka, Hadoop (Hive/HDFS), Avro, Flume, Spark, MongoDB, Python, Java development background.

Nice to have – OpenShift, Informatica ETL



Java – 6 years

Hadoop – 4 years

Kafka – 3 years

MongoDB – 3 years

Healthcare – 2 years


Bachelor's Degree
