Job Description:
Title: Big Data Engineer
Location: McLean, VA
Length: 11 months+

*Possibility of conversion for the right candidate; must be currently eligible to work in the US*

Responsible for the fundamental organization of a system as embodied in its components, their relationships to each other and to the environment, and the principles guiding its design and evolution. Includes creating the conceptual, high-level, and detailed design of the software solution.
Applies extensive technical expertise and has full knowledge of other related disciplines.
Guides the successful completion of major projects and may function in a project leadership role.
Develops technical solutions to complex problems, which require the regular use of ingenuity and creativity.
Work is performed without appreciable direction.

Experience in developing and deploying distributed Big Data applications on the AWS Cloud using open-source frameworks such as Apache Spark, Apex, Flink, NiFi, Storm, and Kafka
Utilizing programming languages such as Java, Scala, and Python; open-source RDBMS and NoSQL databases; and cloud-based data warehousing services such as Redshift
Utilizing Hadoop modules such as YARN and MapReduce, and related Apache projects such as Hive, HBase, Pig, and Cassandra
Leveraging DevOps practices such as Continuous Integration, Continuous Deployment, and Test Automation, using tools like Jenkins, Maven, Nexus, Chef, Terraform, Ruby, Git, and Docker
At least 5 years of Java development for modern data engineering
4+ years' experience with Relational Database Systems and SQL (PostgreSQL or Redshift)
4+ years of UNIX/Linux experience
2+ years of experience with Cloud computing (AWS)
1+ years of experience with supervised machine learning
Good understanding of the CAP theorem and experience addressing non-functional requirement (NFR) constraints
Experience in developing large and complex event processing architectures
Experience in developing high-volume transaction processing solutions that scale to millions of daily transactions