Job Description:
Good experience working with Hadoop platform components such as NiFi, Sqoop, and Oozie
Strong hands-on knowledge of various ETL techniques and frameworks, such as Flume
Knowledge of Big Data tools such as ZooKeeper, Oozie, Sqoop, Hive, and Spark
Strong hands-on experience with scripting languages such as Batch and Shell
Experience integrating data from multiple sources such as DB2, Sybase, Oracle, and SQL Server
Good working knowledge of Hadoop data lakes
Experience working with structured and unstructured data
Previous experience in business intelligence and data analytics is a plus
Knowledge of programming in Python is a plus
PDFxStream, PDFBox, MicroStrategy, Tableau, HBase
Responsibilities
Ability to develop data flow processes in NiFi from various types of data sources
Develop scalable and reliable data solutions to move data across systems from multiple sources, in both real-time and batch modes
Construct data staging layers and fast real-time systems to feed BI applications
Utilize expertise in models that leverage the newest data sources, technologies, and tools, such as machine learning, Python, Hadoop, Spark, Azure/AWS, and other cutting-edge Big Data tools and applications
Investigate the impact of new technologies, applications, and data sources on the future secondary mortgage business
Demonstrated ability to quickly learn new tools and paradigms to deploy cutting-edge solutions
Develop both deployment architecture and scripts for automated system deployment in Azure/AWS
Create large-scale deployments using newly researched methodologies
Work in an Agile environment
Experience mentoring junior engineers.