Job Description:
RESPONSIBILITIES

Design, build, and maintain Big Data workflows/pipelines to process billions of records into and out of our data lake.
Fine-tune application performance.
Troubleshoot and resolve data processing issues.
Engage in application design and data modeling discussions.
Participate in developing and enforcing data security policies.
Implement, troubleshoot, and optimize distributed solutions based on modern big data technologies such as Hive, Hadoop, Spark, Elasticsearch, Storm, and Kafka, in both on-premises and cloud deployment models, to solve large-scale processing problems.
Provide technical leadership in big data systems development, including data ingestion, data curation, data storage, high-throughput data processing, analytics, user access, and security.
Demonstrate proficiency in AWS big data technologies, including S3, RDS, Redshift, Elasticsearch, and Lambda.
Conduct code reviews in accordance with team processes and standards.
Work on a geographically dispersed team, embracing Agile and DevOps practices and driving their adoption to enable greater technology and business value.


Required Skills:

Hadoop, HDFS, Hive, Python, REST/SOAP APIs, Spark 2.
Advanced-level expertise in at least one modern software development language, e.g., Java, Python, Ruby, or Node.js.
Keen understanding of big data and parallelization, accompanied by a stellar record of delivery.
Experience working within the AWS Big Data/Hadoop ecosystem (EMR preferred).
Experience with on-premises-to-cloud migrations, including re-hosting, re-platforming, and re-factoring.
Experience with orchestration template technologies such as AWS CloudFormation.

Basic Qualifications:

Bachelor's degree in Computer Science, MIS, Engineering, or a related field, or equivalent work experience.
5+ years of experience working within the AWS Big Data/Hadoop ecosystem (EMR preferred).
Experience managing traditional enterprise platforms for application runtimes, integration middleware, and relational databases.