Job Description: Big Data Engineer
Jericho, NY

The Big Data Engineer is responsible for the design, architecture, and development of projects powered by Google BigData and the MapR Hadoop distribution.

Must Have Skills/Experience:
Bachelor's degree required
2 years of solution architecture experience with Hadoop
Demonstrated experience in the architecture, engineering, and implementation of enterprise-grade, production big data use cases
Extensive hands-on experience with MapReduce, Hive, Java, HBase, and the following Hadoop ecosystem products: Sqoop, Flume, Oozie, Storm, Spark, and/or Kafka
Extensive experience in shell scripting
Solid understanding of file formats and data serialization formats such as ProtoBuf, Avro, and JSON
Hands-on delivery experience on popular Hadoop distribution platforms such as Cloudera, Hortonworks, or MapR (MapR preferred)
Excellent communication skills
Nice to have:
Experience coordinating the movement of data from original data sources into NoSQL data lakes and cloud environments
Hands-on experience with Talend used in conjunction with Hadoop MapReduce, Spark, or Hive
Experience with Google Cloud Platform (Google BigQuery)
Source control experience (preferably GitHub)
Knowledge of agile development methodologies
Experience with IDE/notebook environments such as Hue, Jupyter, and Zeppelin
Solid experience with ETL technologies and data warehousing concepts