Job Description:
Responsibilities:
Build out data pipelines in alignment with the solution architect's design and project specifications
Fulfill data modeling requests through to realization
Perform unit and integration testing of data pipelines
Create repeatable data pipeline patterns/templates
Required Skills/Experience
2+ years of experience with NiFi
Knowledge of and expertise in open-source technologies: Hadoop (Hortonworks), HDFS, Hive, Spark, NiFi, Sqoop, Pig, Flume, MapReduce
Experience implementing production data pipelines and creating repeatable ingestion patterns
Experience with various databases and platforms, including but not limited to DB2, Oracle, and SQL Server
Demonstrated knowledge and use of the following languages and formats: Python, Scala, shell scripting, JSON, SQL
Familiarity with general data modeling concepts and processes that support business intelligence solutions
Demonstrated performance in all areas of the SDLC, particularly as it relates to ETL solutions
Preferred Skills/Experience
Experience using the following tools/utilities: Trifacta (or other data wrangling tools), Informatica, Alation (or other data catalog tools), Kylo, IBM IGC (or other data modeling/governance tools), cron