Job Description:
· Java coding experience in a production-level app (all our work is in Java)

· Writing MapReduce jobs (our batch imports use MapReduce)

· Hive (our data resides there)

· Hadoop in general

· Experience in migrating data to/from Hadoop

As a plus, experience with Elasticsearch and DynamoDB would be great.


1. Strong technical expertise in Hadoop, Apache Hive, and Apache Spark

2. Proficient in writing MapReduce jobs

3. Strong in the Java programming language, with the ability to work on production-level applications. Understanding of cloud and distributed-systems principles, including load balancing, networking, scaling, and in-memory vs. on-disk storage. Experience with large-scale big-data methods such as MapReduce, Hadoop, Spark, Hive, Impala, and HBase

4. Good understanding of SQL and NoSQL data stores

5. Experience writing and tuning HiveQL queries

6. Good understanding of Java design patterns

7. Ability to work efficiently in a Unix/Linux environment, with experience using source-control systems such as Git

8. Good understanding of object-oriented design and design patterns. Familiarity with agile software development practices and testing strategies, with solid unit-testing skills. Experience with cloud computing and virtualization, persistence technologies (both relational and NoSQL), and multi-layered distributed applications

9. Experience with Elasticsearch and DynamoDB would be a plus

10. Candidate should have a good overall understanding of data and analytics
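Several items above ask for MapReduce experience. As a rough illustration of the level expected, the map/shuffle/reduce pattern can be sketched in plain Java using only standard collections (a hypothetical word-count sketch; a real Hadoop job would instead extend `Mapper` and `Reducer` from `org.apache.hadoop.mapreduce` and run on a cluster):

```java
import java.util.*;
import java.util.stream.*;

// Hypothetical sketch of the MapReduce model in plain Java.
// Not a Hadoop job: it only illustrates the map and reduce phases.
public class WordCountSketch {

    // "Map" phase: emit a (word, 1) pair for every word in every line.
    static List<Map.Entry<String, Integer>> map(List<String> lines) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String line : lines) {
            for (String word : line.toLowerCase().split("\\s+")) {
                if (!word.isEmpty()) {
                    pairs.add(Map.entry(word, 1));
                }
            }
        }
        return pairs;
    }

    // "Shuffle + reduce" phase: group the pairs by key and sum the counts,
    // mirroring what the framework does between mappers and reducers.
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        return pairs.stream().collect(Collectors.groupingBy(
                Map.Entry::getKey,
                Collectors.summingInt(Map.Entry::getValue)));
    }

    public static void main(String[] args) {
        List<String> input = List.of("hive on hadoop", "hadoop batch imports");
        // Prints the word counts, e.g. hadoop -> 2
        System.out.println(reduce(map(input)));
    }
}
```

In a real Hadoop deployment the same two phases would be distributed across the cluster, with the framework handling partitioning, shuffling, and fault tolerance.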

Client: Ktek Resourcing