Job Description

12+ years' experience in Advanced Analytics and Big Data, with a minimum of 2-3 enterprise-level project implementations. Hands-on experience architecting big data solutions at scale.

Solid functional understanding of open-source big data technologies, including the major Hadoop projects, Kafka, NiFi, and NoSQL databases.
Experience building and maintaining a Hadoop cluster in a multi-tenant environment.
Work experience with the Hortonworks distribution is preferred.
Proficient with HDFS, Druid, Hive, HBase, Sqoop, Oozie, NiFi, Kafka, Spark, and SQL.
Database design and modeling, both logical and physical.
Performance tuning: table partitioning, indexing, and process threading.
Hands-on experience "commercializing" Hadoop applications (e.g., administration, security, configuration management, monitoring, debugging, and performance tuning).
Support multiple Agile Scrum teams with planning, scoping, and creating technical solutions for new product capabilities, through to continuous delivery to production.
Must have worked with downstream machine learning and data science applications, with very strong experience in Python, the Spark framework, and deploying applications in the cloud.
Detail-oriented with strong analytical and problem-solving skills.
Effective communicator, both verbal and written.