Job Description:
Key Qualifications

· Minimum of 4 years' experience in the architecture, design, and development of Big Data systems using Java/Scala, working on systems that are distributed, highly available, performant, and scalable.

· Strong experience with MapReduce, HDFS, and Hive is required. Experience with Spark would be a plus.

· Experience designing and implementing large-scale systems that process terabytes to petabytes of data.

· Relational database experience (preferably Teradata) and demonstrated ability in SQL and data modeling are required. Proficiency with NoSQL databases is also desired.

· Strong data deep-dive experience and product analytics skills are required.

· Experience with data visualization in Tableau or other business intelligence tools is desirable.

· Experience with end-to-end (E2E) automation of data pipelines is required.

· Experience working in a UNIX environment and scripting in Shell/Perl/Python is required.

· SEO domain knowledge is a plus.

· Ability to take requirements from design through implementation, both independently and as part of larger teams.

· Strong problem solving and debugging skills are required.

· Ability to communicate effectively, both in writing and verbally, with technical and non-technical cross-functional teams.

· Results-oriented and deadline-driven.

Top 3 skills:

· Data analyst with engineering experience; Hadoop, Big Data
· SDLC: design, development, production
· Teradata, SQL, Hive, Python
