Top Requirements:
Hadoop – Hive, Spark, Scala
Python
ETL
Java

ILabor: 62533


Job Description:

In this technical role, you will be responsible for:

Design high-performing data models on big-data architecture, delivered as data services.
Design and build a high-performing, scalable data pipeline platform using Hadoop, Apache Spark, and Amazon S3-based object storage (see the sketch after this list).
Partner with enterprise data teams, such as Data Management & Insights and the Enterprise Data Environment (Data Lake), to identify the best place to source the data.
Work with business analysts, development teams, and project managers to gather requirements and business rules.
Collaborate with source system and approved provisioning point (APP) teams, architects, data analysts, and modelers to build scalable, performant data solutions.
Work effectively in a hybrid environment where legacy ETL and data warehouse applications coexist with new big-data applications.
Work with infrastructure engineers and system administrators, as appropriate, to design the big-data infrastructure.
Work with DBAs in the Enterprise Database Management group to troubleshoot problems and optimize performance.
Support ongoing data management efforts for the Development, QA, and Production environments.
Utilize a thorough understanding of available technology, tools, and existing designs.
Leverage knowledge of industry trends to build best-in-class technology that provides a competitive advantage.
Act as an expert technical resource to programming staff during program development, testing, and implementation.
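
As a rough illustration of the pipeline responsibility above, here is a minimal Spark/Scala sketch of the kind of job involved: read raw data from S3, apply a simple transformation, and publish the result as a Hive table. The bucket, column, and table names are hypothetical placeholders, not details of this role's actual systems.

    // Minimal sketch (hypothetical paths and names): S3 in, Hive table out.
    import org.apache.spark.sql.{SparkSession, functions => F}

    object PipelineSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("s3-to-hive-sketch")
          .enableHiveSupport() // assumes a configured Hive metastore
          .getOrCreate()

        // "s3a://example-bucket/raw/events/" is a placeholder source path.
        val raw = spark.read.parquet("s3a://example-bucket/raw/events/")

        val cleaned = raw
          .filter(F.col("event_ts").isNotNull) // drop incomplete rows
          .withColumn("event_date", F.to_date(F.col("event_ts")))

        // Write partitioned output to a (hypothetical) Hive table.
        cleaned.write
          .mode("overwrite")
          .partitionBy("event_date")
          .saveAsTable("analytics.events_cleaned")

        spark.stop()
      }
    }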

10+ years of application development and implementation experience
10+ years of experience delivering complex enterprise-wide information technology solutions
10+ years of ETL (Extract, Transform, Load) Programming experience
10+ years of reporting experience, analytics experience or a combination of both
5+ years of Hadoop experience
5+ years of operational risk, conduct risk or compliance domain experience
5+ years of experience delivering ETL, data warehouse and data analytics capabilities on big-data architecture such as Hadoop
5+ years of Java or Python experience
Excellent verbal, written, and interpersonal communication skills
Ability to work effectively in virtual environment where key team members and partners are in various time zones and locations
Knowledge and understanding of project management methodologies used in waterfall or Agile development projects
Knowledge and understanding of DevOps principles
Ability to interact effectively and confidently with senior management
Experience designing and developing data analytics solutions using object data stores such as S3
Experience with Hadoop ecosystem tools for real-time and batch data ingestion, processing, and provisioning, such as Apache Flume, Apache Kafka, Apache Sqoop, Apache Flink, Apache Spark, or Apache Storm (see the streaming sketch below)
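
As a hedged illustration of the ingestion tooling named in the last item, the following Spark Structured Streaming sketch consumes a Kafka topic and lands it on S3. The broker address, topic, and paths are hypothetical, and the job assumes the spark-sql-kafka connector is on the classpath.

    // Sketch only: Kafka topic in, raw parquet files on S3 out.
    import org.apache.spark.sql.SparkSession

    object KafkaIngestSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("kafka-ingest-sketch")
          .getOrCreate()

        // Hypothetical broker address and topic name.
        val stream = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load()

        // Kafka records arrive as binary key/value pairs; cast value to string.
        val values = stream.selectExpr("CAST(value AS STRING) AS value")

        // Land the raw stream on S3 as parquet (paths are placeholders).
        val query = values.writeStream
          .format("parquet")
          .option("path", "s3a://example-bucket/raw/events/")
          .option("checkpointLocation", "s3a://example-bucket/checkpoints/events/")
          .start()

        query.awaitTermination()
      }
    }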