Job Description:
Big Data Engineer/Senior Developer
Location : McLean, VA & Richmond, VA
Duration: 6+ Months

Top Skills:
1. Spark & Scala
2. Kafka
3. AWS/EMR
4. Java/Python

The Job:
Collaborating as part of a cross-functional Agile team to create and enhance software that enables state-of-the-art, next-generation Big Data & Fast Data applications
Building efficient storage for structured and unstructured data
Developing and deploying distributed computing Big Data applications using open-source frameworks like Apache Spark, Apex, Flink, NiFi, Storm, and Kafka on the AWS Cloud
Utilizing programming languages like Java, Scala, and Python; open-source RDBMS and NoSQL databases; and cloud-based data warehousing services such as Redshift
Utilizing Hadoop modules such as YARN & MapReduce, and related Apache projects such as Hive, HBase, Pig, and Cassandra
Leveraging DevOps techniques and practices like Continuous Integration, Continuous Deployment, Test Automation, Build Automation, and Test-Driven Development to enable the rapid delivery of working code, utilizing tools like Jenkins, Maven, Nexus, Chef, Terraform, Ruby, Git, and Docker
Performing unit testing and conducting code reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance

Basic Qualifications:
Bachelor’s Degree or military experience
At least 3 years of professional work experience in data warehousing or Data Engineering
At least 3 years of experience in open source programming languages for data analysis
At least 2 years of experience in data modeling and development
At least 1 year of experience working with cloud data capabilities

Preferred Qualifications:
4+ years of experience with Relational Database Systems and SQL (PostgreSQL or Redshift)
2+ years of Agile engineering experience
2+ years of experience with the Hadoop Stack
2+ years of experience with Cloud computing (AWS)
1+ years of experience with Spark