Job Description:
Location: Chevy Chase, MD
Interview: phone, then Skype
Duration: 12+ months
GC/USC only

Note: Excellent communication skills required

Big Data Engineer (Hadoop, Java, or Spark)


Minimum Requirements
3 years of hands-on experience in the Hadoop ecosystem (HDFS, YARN, MapReduce, Oozie, and Hive)
1 year of hands-on experience in Spark Core and Spark SQL
5 years of hands-on programming experience in either core Java or Spark
3 years of hands-on experience in data warehousing, data marts, data/dimensional modeling, and ETL
1 year of hands-on experience in HBase, Cassandra, or another NoSQL database
Understanding of distributed computing design patterns, algorithms, data structures, and security protocols

Desired Skills
Understanding of Kafka and Spark Streaming
Experience with at least one ETL tool such as Talend, Kettle, Informatica, or Ab Initio
Exposure to Hadoop or NoSQL performance optimization and benchmarking using tools such as HiBench or YCSB
Experience with performance monitoring tools such as Ganglia, Nagios, Splunk, or DynaTrace
Experience with continuous build and test processes using tools such as Maven and Jenkins
HortonWorks or Cloudera certification not required but strongly desired
