Job Description:
·         Minimum 6-8 years of big data development experience

·         Proven, working expertise with big data technologies: Presto, Spark, Druid, Hadoop, and HBase

·         Demonstrates up-to-date expertise in data engineering and complex data pipeline development

·         Design, develop, implement, and tune large-scale distributed systems and pipelines that process large volumes of data, focusing on scalability, low latency, and fault tolerance in every system built

·         Experience with Java and Python for writing data pipelines and data processing layers

·         Experience writing MapReduce jobs

·         Demonstrates expertise in writing complex, highly optimized queries across large data sets

·         Highly proficient in SQL

·         Experience with cloud technologies (GCP, Azure)

·         Experience with relational, NoSQL, and in-memory data stores (Oracle, Cassandra, Druid) is a big plus

·         Provides and supports the implementation and operation of data pipelines and analytical solutions

·         Experience performance-tuning systems that work with large data sets

·         Experience with clickstream data processing

·         Experience with metadata management tools such as MITI (Metadata Integration) and monitoring tools such as Ambari

·         Experience developing REST API data services

·         Retail experience is a huge plus