Job Description:
Spark/Scala (Big Data) Engineering Consultant
Location: McLean, VA or Richmond, VA
Duration: 6-12 Months

Senior Data/Big Data/Python Developer with strong Python, PySpark/Scala/Spark, AWS (EC2), Docker, Terraform, Groovy, Kafka, CloudFormation (CFT), and Kubernetes (McLean, VA)
Big Data Engineer with Spark, Kafka, and Java/Scala/Python (any one); AWS a plus
Data Engineer with strong Scala/Java/Python, Spark, UNIX, and SQL; AWS/Cloud a plus (Richmond, VA)

Job Description
Collaborating as part of a cross-functional Agile team to create and enhance software that enables state-of-the-art, next-generation Big Data & Fast Data applications
Building efficient storage for structured and unstructured data
Developing and deploying distributed computing Big Data applications using open-source frameworks such as Apache Spark, Apex, Flink, NiFi, Storm, and Kafka on the AWS Cloud
Utilizing programming languages such as Java, Scala, Python, and NodeJS
Building microservices and REST APIs
Working within the Java development ecosystem using Spring (Spring Boot, Spring Core, Spring Data, Spring Kafka)
Working with open-source RDBMS (PostgreSQL) and NoSQL databases (AWS DynamoDB, MongoDB)
Leveraging DevOps techniques and practices such as Continuous Integration, Continuous Deployment, Test Automation, Build Automation, and Test-Driven Development to enable the rapid delivery of working code, utilizing tools like Jenkins, Maven, Chef, Terraform, Ruby, CloudFormation, Kubernetes, and Docker
Configuring cloud monitoring tools such as Splunk, DataDog, ELK, and New Relic
Comfortable discussing technology stack selection with team members, actively participating in code reviews, diligently performing unit and integration testing, and consciously establishing the performance baseline of your code.