Job Description:
Sr. Data Engineer - Big Data Platform

San Jose, CA

12 Months

In-Person interview Required

Scope:

• Experience building from scratch highly scalable, available, fault-tolerant data processing systems using AWS technologies that can handle batch & real-time processing of over 10 terabytes of data ingested daily & a petabyte-scale data warehouse – HDFS, YARN, MapReduce, Hive, Spark, Kafka, etc.

• Low-level system debugging, performance measurement, and optimization on large production clusters.

• Be involved in architecture discussions, influence the product roadmap, & take ownership of & responsibility for new projects.

• Maintain/support the existing platform & evolve it to newer tech stacks/architectures.

Required:

• 5-10 years of data or software engineering experience. (Not looking for a 20-year veteran; need someone with hands-on, day-to-day experience, ideally gained at well-known or industry-related companies.)

• Experience working with large-scale, high-throughput, multi-tenant distributed systems using 2 or more of the technologies listed above.

• Hadoop 2.x/YARN-based platform experience.

• SQL & NoSQL database experience.

• Software development experience – Java, C, C++, or Scala.

• Self-driven – takes full ownership of initiatives.

• Work cross-functionally with developers, QA, & operations to execute on deliverables.

• BS or MS in CS is a plus.