Job Description:
We are looking for a Sr Software Engineer (Hadoop) with expertise in NiFi for one of our clients based in Mounds View, MN. Please find the job description below and let me know if you are interested and available. This is a 10+ month contract role with a REMOTE option.
NiFi experience is most critical; Kafka is nice to have.

4-year degree required

Project overview: The VA is retiring a home-built EMR system and migrating to Cerner. Because of our contracts and monitoring devices, we need to update our software systems to gather patient data from their Cerner instance.

Sr Software Engineer
10+ Month Contract
*Background check required

Responsible for software development alongside other engineers and developers, collaborating on the various layers of the application stack for MCMS Care Management Solutions. The role will be part of an Agile development team.
Write well-designed, testable, efficient code using best practices.
Maintain quality and ensure responsiveness of applications.
Integrate data from various back-end services and databases.
Collaborate with the rest of the engineering team to design and launch new features.
Work together with cross-functional teams to define, design, and develop new features.
Troubleshoot and fix software defects.
Create and maintain unit tests.
Participate in Agile sprint planning, daily standup, demo, and retrospective meetings.
B.S. in Computer Science, Engineering, Information Systems, Software Engineering, or Software Development, or comparable on-the-job experience with another B.S. degree.
5+ years of hands-on experience in medium to large scale enterprise data pipeline development using Big Data technologies
5+ years of experience in Data Engineering and/or Software Development
3+ years of experience working with the Apache Hadoop ecosystem, preferably Cloudera/Hortonworks
3+ years of experience with MS SQL Server or other RDBMS
3+ years of experience developing and supporting data pipeline applications with Spark, Kafka, NiFi, and other streaming data tools in medium to large scale enterprises
Strong knowledge of the Hadoop ecosystem, including HDFS, MapReduce, Sqoop, YARN, Hive, and Oozie
Experience with Git and Jenkins
Experience with unit testing tools
Ability to interface with all groups and levels to solve problems and create solutions
High level of verbal and written communication skills
Ability to work in an Agile delivery team
Experience setting up, configuring, and maintaining Big Data technologies such as Apache Hadoop and Cloudera in on-premises datacenters
Healthcare experience working with HL7 or FHIR interfaces