Job Description:
Location: Sunnyvale, CA


Hadoop, Hive and Spark
Description: Must have:
- Development experience on Hadoop: Hive, Oozie, MapReduce, Sqoop.
- Development experience in Teradata
- Experience with design and development of ETL processes
- Proficient in writing advanced SQL, with expertise in performance tuning of SQL and Hive queries.
- Programming/scripting experience (Unix shell, Java, or Python).
- Experience with version control systems such as Git.

Work closely with Product teams to understand data and analyze requirements.
Understand the business process, the relationship between various data elements, aspects of ETL and logic behind a business solution.
Translate complex business requirements into scalable technical solutions with data quality controls.
Benchmark application performance periodically and fix performance issues; understand Hadoop platform capabilities and limitations.
Should have strong communication skills.
Experience in developing large-scale data (ETL) platforms, pipelines, warehousing, mining, or analytics systems is preferred.
Experience in developing automated test scripts to help with regression testing. Sharp troubleshooting skills to identify and fix issues quickly.
Should be able to think outside the box, drive for excellence, and be self-motivated.

Experience with Spark, Kafka, and Cassandra is a huge plus.
             
