Job Description:
Hadoop Data Engineer (Onsite)
Santa Clara, CA
Long-term

At least 3 years of experience working in the Hadoop ecosystem and big data technologies
Build data pipelines and ETL from heterogeneous sources into Hadoop using Kafka, Flume, Sqoop, Spark Streaming, etc.
Experience in batch processing (Spark, Scala) or real-time data streaming (Kafka)
Ability to adapt to conventional big data frameworks and open-source tools as the project demands
Knowledge of design strategies for developing scalable, resilient, always-on data lakes
Experience in Agile (Scrum) development methodology
Strong development/automation skills
Must be very comfortable reading and writing Scala, Python, or Java code
