Job Description :

    Project: Post-pay data mining (claims data). Insurance companies (payers and providers) send data, which is made available to downstream apps. Built in C# and SQL Server.

    DTMS (data transformation management loader).

    The majority of data feeds run through it, and they want to migrate everything to the cloud (Azure, Databricks, and Snowflake) using the Data at Scale framework (written in Scala).

    Responsibilities:

    • ADF, Databricks, Spark
    • Development experience in Python or NodeJS
    • The MQ Common Data Model provides JSON files; take these from the cloud and load them into Databricks.
    • Poll the message queue to know when to grab the message files from Azure (see the sketch after this list)
    • Build data pipelines
    • In the legacy system they receive files, apply some transformations, and load into a database; they want to do all of this in the cloud
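
    As a rough illustration of that queue-driven load, the sketch below polls an Azure Storage Queue for file notifications and appends each referenced JSON file to a Delta table in Databricks. The queue name, message shape, file-path field, and target table are all hypothetical assumptions, not details from this posting:

        import json
        from azure.storage.queue import QueueClient
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.getOrCreate()

        # Hypothetical queue that receives a message whenever the MQ Common
        # Data Model drops a new JSON file into cloud storage.
        queue = QueueClient.from_connection_string(
            conn_str="<storage-connection-string>",   # placeholder
            queue_name="cdm-file-notifications",      # assumed queue name
        )

        for msg in queue.receive_messages():
            body = json.loads(msg.content)            # assumes a JSON message body
            df = spark.read.json(body["file_path"])   # assumed field holding the file path
            (df.write.format("delta")
               .mode("append")
               .saveAsTable("claims.post_pay_raw"))   # hypothetical target table
            queue.delete_message(msg)                 # ack only after a successful write

    Deleting the message only after the write succeeds keeps the load at-least-once, so a failed write leaves the notification on the queue instead of silently dropping the file.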

    Must have:

    • .NET development background, having since moved into Azure data engineering
    • CI/CD: won't be building pipelines, but needs to be able to troubleshoot them
    • Python (OOP)
    • NodeJS
    • Message queuing (MQ): Kafka or similar

    Nice to have:

    • Azure Functions
    • Scala

    Interview process:

    • 2-3 rounds
    • First round: 30 min
    • 2-hour technical (design and coding)

    Remote? Yes, but strong preference for Eden Prairie, MN or Franklin, TN

    Primary Responsibilities

    Build high-performing, scalable data systems, applications, and data pipelines to process very large volumes of data from multiple sources

    Develop services, controls, and reusable patterns that enable the team to deliver value safely, quickly, and sustainably in the public cloud and on-prem

    Collaborate on Big Data systems and features within an Agile environment

    Collaborate with cross-functional teams of developers, senior architects, product managers, DevOps, and project managers

    Deliver solutions that are devoid of significant security vulnerabilities

    Foster high-performance, collaborative technical work resulting in high-quality output

    Proactively automate infrastructure, applications, and services to enable automated delivery through the CI/CD pipelines to the cloud and on-prem

    Help convert current on-premises ETL pipelines from .NET to the cloud using Azure Data Factory/Databricks (a sketch of triggering such a pipeline follows these responsibilities)

    Help maintain the current .NET framework until conversions are complete

    Excellent time management, communication, decision making, and presentation skills

    Display a strong desire to achieve and attain high levels of both internal and external customer satisfaction
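
    For the ADF/Databricks conversion work above, here is a minimal sketch of kicking off a converted pipeline from Python and reading back its status, the kind of run you would inspect when troubleshooting a CI/CD deployment. The subscription, resource group, factory, pipeline, and parameter names are hypothetical:

        from azure.identity import DefaultAzureCredential
        from azure.mgmt.datafactory import DataFactoryManagementClient

        client = DataFactoryManagementClient(
            credential=DefaultAzureCredential(),
            subscription_id="<subscription-id>",     # placeholder
        )

        # Trigger one run of a converted ETL pipeline.
        run = client.pipelines.create_run(
            resource_group_name="rg-data-platform",  # hypothetical
            factory_name="adf-claims",               # hypothetical
            pipeline_name="pl_post_pay_ingest",      # hypothetical
            parameters={"load_date": "2024-01-01"},  # example parameter
        )

        # Read back the run status, as you would when a deployment
        # succeeded but the pipeline produced no data.
        status = client.pipeline_runs.get(
            "rg-data-platform", "adf-claims", run.run_id
        )
        print(status.status)  # e.g. InProgress / Succeeded / Failed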


    Required Skills : .NET development experience, Databricks, Azure Data Factory, Python
    Basic Qualification :
    Additional Skills :
    Background Check :Yes
    Drug Screen :Yes
    Notes :
    Selling points for candidate :
    Project Verification Info :
    Candidate must be your W2 Employee :Yes
    Exclusive to Apex :No
    Face to face interview required :No
    Candidate must be local :No
    Candidate must be authorized to work without sponsorship :No
    Interview times set :No
    Type of project :Development/Engineering
    Master Job Title :DBA: Other
    Branch Code :Minneapolis
                 
