Job Description :
Title: Big Data / Machine Learning / Data Engineer
Location: McLean, VA
Duration: 3 Months CTH

Top 3 Skills:
1. Python / Java / Scala (Python & Java preferred)
2. Kafka / Spark / Hadoop
3. Big Data technologies


Collaborating as part of a cross-functional Agile team to create and enhance software that enables state-of-the-art, next-generation Big Data & Fast Data applications.
Building efficient and scalable storage for structured and unstructured data.
Developing and deploying distributed-computing Big Data applications using open-source frameworks such as Apache Spark, Apex, Flink, NiFi, Storm, and Kafka on the AWS Cloud.
Building and running large-scale NoSQL databases such as Elasticsearch and Cassandra.
Utilizing programming languages such as Java, Scala, and Python.
Designing and building applications for the cloud (AWS, Azure, GCP, DO).
Leveraging DevOps techniques and practices such as Continuous Integration, Continuous Deployment, Test Automation, Build Automation, and Test-Driven Development to enable the rapid delivery of working code, utilizing tools like Jenkins, Maven, Nexus, Chef, Terraform, Ruby, Git, and Docker.
Performing unit tests and conducting reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance.

B.A./B.S. in Computer Science or a related technical discipline
3+ years of professional programming experience in Java, Scala, Python, C++, or Golang
3+ years of professional experience working on data-streaming (Apache Spark, Flink, Storm, and/or Kafka) or data-warehousing (Snowflake Analytics, Presto, AWS Athena, AWS Redshift) applications
2+ years working with Linux-based OSes (Red Hat preferred)
2+ years working with scripting languages (Shell, Python, Perl)
Experience working within cloud environments (AWS preferred)
Experience with streaming analytics, complex event processing, and probabilistic data structures.
Experience with columnar data stores and MPP (massively parallel processing) databases