Job Description:
· Experience designing data science clusters on Hadoop for distributed and non-distributed environments
· Identify end-to-end baseline architecture (tool-agnostic) based on data science best practices and use cases
· Collaborate with a team of passionate data engineers, developers, and domain experts from different functional areas to solve complex problems
· Create innovative frameworks and solutions for extracting value from client data
· High level of proficiency in communicating and presenting both verbally and visually to stakeholders. Ability to clearly convey complex concepts in plain language.
· Fluency in analytical programming, including libraries for cleaning, exploring, and visualizing data. Demonstrated expertise in statistical, analytical, and data visualization software, especially R and related languages and tools (SQL, Python, etc.). Note: R expertise is a must.
· Familiarity with big data and distributed computing technologies such as Apache Hadoop, Apache Spark, Hive, and Pig
· Experience operationalizing business and policy problems through mathematical, statistical and computational techniques
· Establish an operating model for productionizing data science as a platform, defining roles and guidelines to ensure execution support, model validity, model maintenance, etc.

Experience: 14 to 20 years, with a minimum of 5 years as a Big Data Architect.
             
