Job Description:
Big Data Engineer needed for a 12-month contract in Franklin Lakes, NJ.

Job Title: Big Data Engineer
Technical/Functional Skills:
• 4+ years of industry experience implementing big data solutions on Hadoop
• Proficient understanding of distributed computing principles
• Proficiency with Hadoop v2, MapReduce, and HDFS
• Experience building stream-processing systems using solutions such as Storm, or Kafka with Spark Streaming
• Good knowledge of Big Data querying tools such as Pig, Hive, and Phoenix
• Experience with Spark
• Experience integrating data from multiple data sources
• Experience with one or two NoSQL/graph databases, such as HBase, Cassandra, MongoDB, or Neo4j
• Proficiency in programming languages such as Scala, Java, or Python
• Experience with Linux and shell scripting
• Experience with relational databases (SQL)
• Experience working with real-time data feeds
• Experience working with unstructured data
• Experience implementing Sqoop jobs to import/export data to and from Hadoop
• Knowledge of various ETL techniques and frameworks, such as Pig, Hive, or Flume
• Experience with various messaging systems, such as Kafka or RabbitMQ
• Experience with Big Data ML toolkits, such as Mahout, Spark MLlib, or H2O
• Good understanding of the Lambda Architecture, along with its advantages and drawbacks
• Experience with the Hortonworks Data Platform (HDP)

Good-to-have skills:
• Experience with some or all of the following frameworks supporting Hadoop administration and security: HCatalog, Drill, NiFi, Oozie, Falcon, Ranger, Ambari, and Zeppelin
Roles & Responsibilities: An experienced Big Data Engineer who will work on collecting, storing, processing, and analyzing huge sets of data.
The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them.
Responsible for integrating them with the architecture used across the company.
The ability to share knowledge, conduct education workshops, and train other employees is expected.

Essential Functions
• Selecting and integrating the Big Data tools and frameworks required to provide requested capabilities
• Implementing data ingestion and ETL processes on Hadoop
• Monitoring performance and advising on any necessary infrastructure changes
• Defining data retention policies
• Designing and building data processing pipelines for structured and unstructured data using tools and frameworks in the Hadoop ecosystem
• Developing applications that scale to handle millions of events/records
• Designing and launching scalable, reliable, and efficient processes to move, transform, and report on large amounts of data
• Participating in meetings with the business (account/product management, data scientists) to gather new requirements
• Following our Agile software development process, with daily scrums and monthly sprints
• Working collaboratively on a cross-functional team with a wide range of experience levels
Generic Managerial Skills
Education: Bachelor's degree and 8+ years of relevant experience, or Master's degree and 6+ years of relevant experience
Start date (dd-mmm-yy): 01-Aug-17
Duration of assignment: 12 months
Work Location: Franklin Lakes, NJ
Rate payable to vendor: $55/hr
Keywords to search in resume: Big Data, Hadoop, Spark, Kafka, Pig, Hive, Phoenix, NoSQL/graph databases