Job Description:
We are looking for an experienced, hands-on Big Data and Analytics Architect for a project opportunity. Responsibilities include leading the design and implementation of the enterprise data architecture, recommending enhancements and innovations, developing reference architectures, leading the acquisition of structured and unstructured batch and streaming data, data cleansing, data integration, data modelling, creating distributed storage solutions, and data management.

This role will be required to architect a scalable data integration framework for integrating data from multiple source systems into a Hadoop-based cluster, with the goal of increasing the automation and frequency of data integration.

Requirements:
Leading the development of processes for ingesting and processing multi-sourced structured and unstructured data, at rest and in motion, on premises and in the cloud, using data ingestion tools from the Hadoop ecosystem such as Apache Kafka, Apache Sqoop, Apache Flume, Syncsort, Talend, and other evolving technologies
Building data storage solutions on premises and in the cloud – data warehouses, data marts, NoSQL databases, Hive and Impala schemas
Building data workflows, monitoring and managing jobs and resources
Scripting in Unix/Linux shells
Proficiency in working with the Alteryx data preparation tool