Job Description:
Position: Hadoop Spark Developer
Location: Cedar Rapids, IA
Duration: 6-12 months
Type: Contract

Job Responsibilities
Participate in Agile development on a large Hadoop-based data platform as a member of a distributed team.
Code programs to load data from diverse data sources into Hive structures using Sqoop and other tools.
Translate complex functional and technical requirements into detailed designs.
Analyze vast data stores.
Code business logic using Python/Scala on Apache Spark.
Create workflows using Oozie.
Code and test prototypes.
Code to existing frameworks where applicable.
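The ingestion and orchestration work described above might look like the following sketch. All connection strings, credentials, table names, script names, and workflow paths are hypothetical placeholders, not details from this posting:

```shell
# Import a table from a relational source into Hive using Sqoop.
# The JDBC URL, credentials, and table names are illustrative only.
sqoop import \
  --connect jdbc:mysql://db.example.com:3306/sales \
  --username etl_user --password-file /user/etl/.pw \
  --table transactions \
  --hive-import --hive-table staging.transactions \
  --num-mappers 4

# Submit a Spark job (Python or Scala) that applies business logic
# to the imported data; transform_transactions.py is a placeholder.
spark-submit --master yarn --deploy-mode cluster transform_transactions.py

# Run an Oozie workflow that ties the steps above together;
# job.properties points at a hypothetical workflow.xml on HDFS.
oozie job -oozie http://oozie.example.com:11000/oozie -config job.properties -run
```

These commands assume access to a configured Hadoop cluster with Sqoop, Spark on YARN, and an Oozie server; in practice the Oozie workflow would wrap the Sqoop and Spark steps as actions in workflow.xml rather than running them ad hoc.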

Skills Required
Back-end programming skills - specifically Java, shell scripting, Scala, and Python.
Knowledge of object-oriented design principles.
Ability to factor scalability and performance considerations into design and coding.
Knowledge of Spark and MapReduce.
Knowledge of relational databases - principles, SQL, etc.
Hands-on experience with HiveQL.
Familiarity with data loading tools such as Sqoop.
Knowledge of workflow schedulers such as Oozie and AutoSys.
Knowledge of security and data privacy considerations.
Knowledge of existing best practices/standards.
Prior experience working in the banking and financial services domain would be a plus.
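As an illustration of the hands-on HiveQL skill listed above, a minimal sketch of a non-interactive query; the database, table, and column names are hypothetical:

```shell
# Run a HiveQL aggregation from the command line; staging.daily_txn
# and its columns are placeholder names, not from this posting.
hive -e "
  SELECT account_id, SUM(amount) AS total
  FROM staging.daily_txn
  WHERE txn_date = '2024-01-15'
  GROUP BY account_id
  ORDER BY total DESC
  LIMIT 10;
"
```

This assumes a working Hive installation with access to the metastore; the same statement could equally be run through beeline against HiveServer2.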