Job Description:
2+ years of experience working in the healthcare industry (hospital, HIE, RHIO, insurance provider). Has worked with clinical and payor data and understands clinical workflows.
Intermediate-level knowledge of the HL7 V2 messaging standard (e.g., ADT, SIU, ORM, ORU) and of enterprise integration patterns and technologies (e.g., publish/subscribe messaging, APIs, REST and SOAP web services). Knowledge of clinical terminologies.
Familiarity with the HL7 FHIR standard.
Experience building HL7 interfaces.

Additional input from the customer: I am updating the job description to more strongly reflect Databricks, MuleSoft, and/or HL7. In short, we are looking for people with ETL experience who know SQL; Python would be a plus. Experience with healthcare data would be ideal.

Technical Stack for Big Data:
Qualifications for Data Engineer
· Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.
· 2+ years of experience in a Data Engineer role, with HL7 and clinical data expertise.
· Advanced working knowledge of SQL, including query authoring and experience with a variety of relational databases.
· Experience with big data tools: Hadoop, Spark (preferably Databricks), Kinesis, etc.
· Experience with relational SQL databases, including Snowflake and Postgres.
· Experience with stream-processing systems: Spark Streaming, etc.
· Experience with object-oriented/functional scripting languages: Scala, Python, etc.
· Experience with data pipeline and workflow management tools: Jenkins, Airflow, etc.
· Experience with AWS cloud services: EC2, RDS, etc.
· Experience building and optimizing "big data" pipelines, architectures, and data sets.
· Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
· Experience building processes supporting data transformation, data structures, metadata, dependency management, and workload management.
· A successful history of manipulating, processing and extracting value from large datasets.
· Working knowledge of message queuing, stream processing, and highly scalable "big data" stores.