Job Description Summary:
Under limited supervision, creates project-level data artifacts for specific projects. Working with the customer and other architects, facilitates the creation of frameworks, including the data lake and ingestion framework, by reviewing requirements and use cases to recommend project-level artifacts aligned with strategic initiatives and standard development methodologies.
Responsibilities include:
Participate in and understand customers' data ingestion needs, and translate them into the data pipeline architecture
Architect optimized, reusable, and automated pipeline frameworks, drawing on experience architecting applications on the Hortonworks Data Platform
Develop a highly scalable, extensible Big Data platform that enables data collection, storage, modeling, and analysis of massive data sets from numerous sources
Provide architecture and technology leadership across batch and streaming data processing platforms in Big Data
Design data ingestion pipelines and staging components
Mandatory Skills to have:
Hands-on experience with Big Data components/frameworks such as Hadoop, HDFS, Hive, HBase, Apache NiFi, and Kafka
Strong skills in designing data ingestion pipelines and staging components

Knowledge and Skills Considered a Plus:
Hands-on experience with Big Data components/frameworks such as Hadoop, Spark, Storm, Sqoop, Atlas, and Ranger
Hands-on experience in real-time data streaming and processing with Spark Streaming and Storm
Experience in Java/Python development
Experience implementing complex ETL transformations, data catalogs, metadata management, and data lineage concepts
Experience with user security and entitlements in the Hadoop environment using the Hortonworks Data Platform

Minimum Qualifications:
Bachelor's degree in computer science, engineering, information systems, or equivalent formal training or work experience.
Eight or more (8+) years of equivalent work experience in an information technology or engineering environment. A related advanced degree may offset the related experience requirement.