Job Description:
Summary: Looking for a hands-on developer who has worked extensively with Big Data technologies. No architecture or design work is required: the project is in sustenance mode, so there is no new development to be done. The candidate will take ownership of the existing code, keep the system running, and handle minor enhancements and bug fixes. The entire infrastructure is built on AWS.





Job Functions

· Monitoring

· SLA-driven service restoration

· Ownership of the code and the services they support

· Problem solving, bug fixes, and minor enhancements (major enhancements will go to the Development team)

· Root cause analysis for Sev 1 and Sev 2 incidents

· Backups and patching

· Performance tuning and optimization (e.g., Hive query performance; a brief sketch of this kind of work follows this list)

· Ad hoc service requests

· Deployment

· Sustenance of the existing Big Data pipeline

· Assist in driving root cause analysis for software defects and interface issues
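
For illustration only, here is a minimal sketch in Scala of the kind of Hive performance work listed above, run through Spark's Hive support. The table name (web_logs), columns, and partition layout are all hypothetical:

import org.apache.spark.sql.SparkSession

object HiveTuningSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hive-tuning-sketch")
      .enableHiveSupport()
      .getOrCreate()

    // Filtering on the partition column lets Hive/Spark prune partitions
    // instead of scanning the whole table (web_logs is assumed to be
    // partitioned by event_date).
    val hits = spark.sql(
      """SELECT user_id, COUNT(*) AS hits
        |FROM web_logs
        |WHERE event_date = '2024-01-01'
        |GROUP BY user_id""".stripMargin)

    hits.show(20)
    spark.stop()
  }
}

Partition pruning is one of the cheapest and most common Hive query optimizations; other levers (file formats, join strategies) depend on the actual workload.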





Must Have:



· 6-8 years of professional experience working with Hadoop and the Big Data technology stack

· Minimum 3 years of solid experience in Big Data technologies such as Hadoop, Kafka, Spark, Spark Streaming, HBase, Scala, Hive, Java, and Oozie

· Experience developing end-to-end Hadoop pipelines using Kafka, Spark, Spark Streaming, Hive, HBase, and other Big Data technologies (see the sketch after this list)

· Tools: Maven, Git, and build processes

· Good to have: Cassandra, AWS, basic Hadoop administration
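
As a hedged illustration of the pipeline experience asked for above: a minimal Scala sketch of a Spark Structured Streaming job that reads from Kafka and lands raw events on S3. The broker address, topic, and bucket paths are placeholders, and it assumes the spark-sql-kafka connector is on the classpath:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object PipelineSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-to-s3-sketch")
      .getOrCreate()

    // Subscribe to a Kafka topic (broker and topic names are placeholders).
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()
      .select(col("value").cast("string").as("raw"))

    // Land the raw events on S3 as Parquet; checkpointing makes the
    // stream restartable after failures or redeployments.
    val query = events.writeStream
      .format("parquet")
      .option("path", "s3://example-bucket/raw-events/")
      .option("checkpointLocation", "s3://example-bucket/checkpoints/raw-events/")
      .start()

    query.awaitTermination()
  }
}

On this project, this pattern would only become relevant once Kafka and Spark Streaming are adopted (see the future-technologies note below).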





Here are the technologies the client currently uses:



· Amazon EMR, S3, Hive, Spark, Scala (over 95% of our code is in Scala, so this is a must-have), Java, Oozie, Hue, Sqoop, Lambda, and shell scripting. Experience in all of these areas is a must-have (a brief illustrative sketch of this stack appears at the end of this description).



· They plan to use the following technologies in the future. This is just FYI and not a criterion for selecting or rejecting candidates; however, experience with these will be a huge plus.



· Spark Streaming, Kafka, Redshift, and HBase (all planned for the future)
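
For orientation on the current stack, here is a minimal sketch of a typical batch job on it: a Scala Spark job on EMR that reads raw files from S3 and publishes them to a Hive table. Bucket, prefix, and table names are all hypothetical:

import org.apache.spark.sql.SparkSession

object DailyLoadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("s3-to-hive-sketch")
      .enableHiveSupport() // on EMR this can point at the cluster's Hive metastore
      .getOrCreate()

    // Read one day's raw files from S3 (bucket and prefix are placeholders).
    val orders = spark.read
      .option("header", "true")
      .csv("s3://example-bucket/incoming/orders/2024-01-01/")

    // Publish into a Hive table so downstream Hive/Hue queries can use it.
    orders.write
      .mode("overwrite")
      .saveAsTable("analytics.orders_staging")

    spark.stop()
  }
}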