Job Description:
Required (Individual Role):

Extensive experience in engineering and designing data management solutions using Hadoop platform tools and technologies such as Apache HDFS, Sqoop, Spark, Hive, Impala, HBase, Kafka, as well as Python, Java, or Scala.

Excellent written and oral communication skills.

Proficient in the data ingestion pipeline process, exception handling, and metadata management on Hadoop platforms.

Hands-on experience building automated data integration applications from data models, data mappings, and business-rules specifications to load data warehouses, operational data stores, data marts, and data lakes, while programmatically handling exceptions such as late-arriving, missing, or erroneous data.

Experience in UNIX shell scripting.

Desired (Individual Role):

Advanced degree in MIS, computer science, statistics, marketing, management, finance, or a related field.

Ability to interpret complex mainframe copybooks.

Experience using Syncsort DMX-h.

Experience configuring and using APIs.

Experience using Informatica PowerExchange CDC.

Experience providing technical and data leadership to application development teams, IT, and the enterprise.

Prior experience in the financial industry, including large banks.

Client: Federal client