Job Description:

Role: Sr. Data Engineer

Location: Orange, CA (1-2 Months Remote)

Position Summary:

A data- and technology-driven healthcare company focused on partnering with health systems, health plans, and provider groups to provide care delivery that is preventive, convenient, coordinated, and results in improved clinical outcomes for seniors.
We are experiencing rapid growth (backed by top private equity firms), and our Data Services and BI team is looking for the best and brightest leaders. Data drives the way we make decisions. We love our customers, and understanding them better makes it possible to provide the best clinical outcomes and care experience.
This position will play a key role in building and operating a cloud-based data platform and its pipelines using big data technologies.
As a Data Engineer, you will develop a new data engineering platform that leverages a new cloud architecture, and you will extend or migrate our existing data pipelines to this architecture as needed.
You will also assist with integrating the SQL data warehouse platform as our primary processing platform to create the curated enterprise data model for the company to leverage. You will be part of a team building the next-generation data platform and driving the adoption of new technologies and practices in existing implementations. You will be responsible for designing and implementing complex ETL pipelines in the cloud data platform and other solutions to support the rapidly growing and dynamic business demand for data, and for delivering data as a service that will have an immediate influence on day-to-day decision making.

General Duties/Responsibilities:
(May include but are not limited to)
•Interfacing with business customers, gathering requirements, and developing new datasets in the data platform
•Building and migrating complex ETL pipelines from on-premises systems to the cloud and Hadoop/Spark so the system can scale elastically
•Identifying and promptly addressing data quality issues to provide a great user experience
•Extracting and combining data from various heterogeneous data sources
•Designing, implementing, and supporting a platform that can provide ad-hoc access to large datasets
•Modeling data and metadata to support machine learning and AI

Minimum Requirements:
To perform this job successfully, an individual must be able to perform each essential duty satisfactorily.  The requirements listed below are representative of the knowledge, skill, and/or ability required.  Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
1. Minimum Experience:
•3+ years of relevant experience in cloud-based data engineering
•Demonstrated ability in data modeling, ETL development, and data warehousing
•Data warehousing experience with SQL Server, Oracle, Redshift, Teradata, etc.
•Experience with big data technologies (NoSQL databases, Hadoop, Hive, HBase, Pig, Spark, Elasticsearch, etc.)
•Experience using Scala, Python, .NET, Java, and/or other data engineering languages
2. Education/Licensure:
•Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, Statistics, or a related field
3. Other:
•Knowledge of and experience with SQL Server and SSIS
•Excellent communication, analytical, and collaborative problem-solving skills
4. Preferred:
•Healthcare domain and data experience
•Healthcare EDI experience is a plus
•API development experience is a plus
•Industry experience as a Data Engineer or in a related specialty (e.g., Software Engineer, Business Intelligence Engineer, Data Scientist) with a track record of manipulating, processing, and extracting value from large datasets
•Experience building and operating highly available, distributed systems for the extraction, ingestion, and processing of large data sets
•Experience building data products incrementally and integrating and managing datasets from multiple sources
•Experience leading large-scale data warehousing and analytics projects, including using Azure or AWS technologies such as SQL Server, Redshift, S3, EC2, Data Pipeline, Data Lake, Data Factory, and other big data technologies
•Experience providing technical leadership and mentoring other engineers on data engineering best practices
•Experience with Linux/UNIX, including using it to process large data sets
•Experience with Azure, AWS, or GCP is a plus
•Microsoft Azure Certification is a plus
•Demonstrable track record of dealing well with ambiguity, prioritizing needs, and delivering results in an agile, dynamic startup environment
•Problem-solving skills and the ability to meet deadlines are a must

             
