Job Description :
Any visa/tax term accepted.

Hadoop Developer w/Spark

San Jose CA

6 months+


Minimum of 4 years' experience in the architecture, design, and development of
Big Data systems, working on systems that are distributed, highly
available, performant, and scalable.

Strong experience with MapReduce, HDFS, and Hive is required.
Experience with Spark SQL is needed.
Experience designing and implementing large-scale systems that
process terabytes to petabytes of data.
Relational database experience (preferably Teradata) and
demonstrated ability in SQL and data modeling are required. Proficiency
with NoSQL databases is desired as well.
Strong experience with data deep dives and strong product analytics
skills are required.
Experience with data visualization in Tableau or other business
intelligence tools is desirable.
Experience with end-to-end (E2E) automation of data pipelines is required.
Experience working in a UNIX environment and scripting in
Shell/Perl/Python is required.
SEO domain knowledge is a plus.
Ability to take requirements from design through to
implementation, both independently and as part of larger teams.
Strong problem-solving and debugging skills are required.
ETL development experience is required.