Location: Omaha, Nebraska. Needs to live and work on-site in Omaha 3 days a week; will consider relocation candidates as well.
Duration: 6-month contract-to-hire
Job Title: Sr. Data Engineer
Interview Process: 2 rounds of video interviews
- Key skillset note: SAP experience is important; the ideal candidate has worked in an environment where SAP is one of the data sources.
- Other must-have skills are Databricks, Python, and PySpark
Priority Technical Skills to Focus on
- The manager was very upfront that they expect a lot from their Sr. Engineers
- Design & delivery // code-base integrations in Databricks and Python-based code structures
- All their pipelines are written in Python and driven by config files; the hire must understand what was written and be able to scale it (a minimal sketch of this pattern follows this list). They will also be given a couple of offshore resources to help in this space
- They will need to lead // not just be a task taker
- Databricks experience is helpful, along with Python/PySpark
- ETL integrations // real-time ingestions // data modeling for functional use
- Palantir AI automation for data-driven decision making // can build reports and operational datasets
- Her team owns the Databricks and Palantir platforms and the PySpark code behind both
- A Python and PySpark background will help
- Her team does not build reports // they provide the consumption data other teams use to build reports
- Wants to see career progression (drive and passion)
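To illustrate the config-driven Python pipeline pattern the manager describes, here is a minimal sketch; every name in it (load_config, the YAML keys, the table names, the config path) is a hypothetical assumption for illustration, not the client's actual code:

```python
# Minimal sketch of a config-driven PySpark pipeline (hypothetical names;
# the client's actual config schema and tables are not known).
import yaml
from pyspark.sql import SparkSession

def load_config(path: str) -> dict:
    """Read pipeline settings (source, target, columns) from a YAML file."""
    with open(path) as f:
        return yaml.safe_load(f)

def run_pipeline(config_path: str) -> None:
    cfg = load_config(config_path)
    spark = SparkSession.builder.appName(cfg["job_name"]).getOrCreate()

    # Read the source table named in the config (e.g. an SAP extract landed in the lake).
    df = spark.read.table(cfg["source_table"])

    # Keep only the columns the config asks for, then write to the target table.
    (df.select(*cfg["columns"])
       .write.mode(cfg.get("write_mode", "overwrite"))
       .saveAsTable(cfg["target_table"]))

if __name__ == "__main__":
    run_pipeline("pipelines/sap_orders.yaml")  # hypothetical config path
```

Scaling in this pattern usually means adding or editing config files rather than writing new pipeline code, which is why the manager stresses understanding the existing Python before extending it.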
Job Description:
Position Overview: As a Senior Data Engineer, reporting to the Director of Information Technology, you will be responsible for leading and owning the Enterprise Data Platform and its data products. In today's data-driven world, Enterprise Data Products are an absolute necessity, streamlining the process of accessing crucial data, ensuring data accuracy and reliability, enabling efficient decision-making, and empowering teams across the organization to collaborate effectively. You will work with cross-functional Portfolio and Business Teams, ensuring data accessibility, quality, and efficient decision-making. You will be the subject matter expert and mentor for data-related activities, driving the success of our data initiatives.
Key Responsibilities:
- Lead the development and implementation of data-related features and stories within the Enterprise Data Platform.
- Collaborate with data scientists, business analysts, and cross-functional teams to understand data requirements and priorities.
- Break down complex data tasks into manageable chunks and assign tasks to team members for delivery.
- Ensure high data quality standards and maintain the accuracy and reliability of data.
- Create and manage data models, and validate them with key business representatives, data owners, data architects, and end-users.
- Build automations to validate source-to-target data accuracy during build, testing, and support phases (see the sketch after this list).
- Identify and resolve data-related issues proactively or based on input from end-users.
- Create complex and efficient SQL data transformations and views for various use cases.
- Perform SQL performance tuning, including identifying and resolving performance bottlenecks, join optimization, data model optimization, and data pre-processing.
- Oversee the creation of automated test cases to ensure data values are within expected ranges, complete, and within quality parameters.
- Develop job schedules for data flows and ensure seamless data integration.
- Mentor junior team members and offshore data engineers.
- Operate within a DevOps framework for maintaining source control and deploying database objects; create automated test cases when checking in DB objects.
- Ensure the health and quality of operational support provided by third-party services.
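As a rough illustration of the source-to-target validation responsibility above, the sketch below compares row counts and a column-sum checksum in PySpark; the table names, column name, and helper functions are hypothetical assumptions, not the client's actual framework:

```python
# Sketch of source-to-target validation checks in PySpark (table and column
# names are hypothetical assumptions, not the client's framework).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def validate_counts(spark: SparkSession, source: str, target: str) -> bool:
    """Compare row counts between a source table and its target."""
    return spark.read.table(source).count() == spark.read.table(target).count()

def validate_column_sum(spark: SparkSession, source: str, target: str,
                        column: str) -> bool:
    """Compare a numeric column's sum across source and target as a checksum."""
    src_sum = spark.read.table(source).agg(F.sum(column)).first()[0]
    tgt_sum = spark.read.table(target).agg(F.sum(column)).first()[0]
    return src_sum == tgt_sum

spark = SparkSession.builder.appName("s2t_validation").getOrCreate()
assert validate_counts(spark, "raw.sap_orders", "curated.orders")
assert validate_column_sum(spark, "raw.sap_orders", "curated.orders", "net_amount")
```

Checks like these are typically wired into a job schedule so that every pipeline run reports pass/fail before downstream consumers see the data.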
Position Qualifications:
- Bachelor's degree from an accredited university preferred, or 5+ years of experience in an IT data-related role.
- Ability to communicate effectively with analysts, offshore resources, and leadership; prior experience leading junior team members.
- Demonstrated knowledge of business concepts and processes related to supported data domains.
- Deep experience (5+ years) in writing and debugging complex SQL on RDBMS platforms like Snowflake, SQL Server, and Oracle.
- Proficient in Python and PySpark for data processing and transformation.
- Experience with DevOps frameworks and source control.
- Proficient in using ETL tools like Informatica IICS, Talend, Data Services, and SSIS.
- Experience with cloud-based technologies like Databricks, Snowflake, and Palantir.
- Experience operating within an Agile framework to break down, estimate, assign, and complete work.