Job Description:
Proven professional experience with data architecture, data modeling and ETL development, minimum 5 years (mandatory)
Proven experience working with big data technologies to ingest, process and store structured and unstructured data at rest and in motion using Hadoop, Kafka, Spark, Microsoft Azure and/or Amazon Web Services (AWS) technologies, minimum 3 years (mandatory)
Experience in delivering data organization and storage solutions in traditional and Big Data environments:
Mandatory Sqoop, Flume, Kafka skills minimum 3 years
Mandatory SQL coding skills minimum 5 years
Mandatory Python coding skills minimum 3 years
Java software development skills preferred
Spark (Scala and Java) coding skills highly desired
Mandatory Hive, HBase, Impala coding skills minimum 3 years
Mandatory UNIX / Linux shell scripting skills minimum 3 years
Microsoft Azure PowerShell scripting highly desired
Experience in BI reporting and data visualization minimum 5 years
One or more of Tableau, QlikView, Microsoft Power BI or other big data visualization tools is highly desired
Track record with an outsourcing firm or system integrator driving client engagements focused on Big Data and Analytics preferred
Ability to communicate and present effectively, both orally and in writing, and comfort with ambiguity, uncertainty and change

Client: Libsys Inc