Job Description:

Responsibilities

The purpose of this job is to provide technical expertise for the research, development, and modification of data engineering processes and jobs in support of a Big Data infrastructure for PFJ Energy in the commodities and energy space.

Research, develop, document, and modify Data Engineering processes and jobs per data architecture and modeling requirements

Collaborate with Data Analytics team members such as Data Strategists and Data Scientists

Collaborate with business stakeholders to understand data needs, including data velocity, veracity, and access patterns

Provide technical expertise to implement Data and Analytics specifications

Serve on cross-functional project teams and provide perspective for the Data Analytics team on executing key deliverables

Troubleshoot complex, escalated issues, including connection, failed jobs, application errors, server alerts and space thresholds within predefined service level agreements (SLAs)

Proactively maintain and tune all code according to internally documented Data Engineering standards and best practices

Review and ensure appropriate documentation for all new development and modifications of the Data Lake processes and jobs

Perform code and process reviews and oversee testing for solutions developed, and ensure integrity and security of institutional data

Educate business stakeholders on the usage and benefits of the Data Lake / Lakehouse and related technologies

Mentor and guide less experienced team members and provide feedback on project work

Ensure all activities are in compliance with rules, regulations, policies, and procedures

Complete other duties as assigned

Qualifications

Bachelor's degree in computer science, engineering, information technology, or related field, required

Minimum eight years of technology operations experience required

Experience in the commodities and energy space is required

Strong knowledge of relational databases such as Oracle, Postgres, or SQL Server required

Experience with Apache Spark or Spark Streaming, message queue technologies, and Python required

Strong knowledge of enterprise data warehouse (EDW) data models with a focus on Star Schema data modeling techniques required

Must Haves:

  1. Trading floor/commodities experience
  2. Willingness to work onsite at our Houston office five days a week, 8-5
  3. Knowledge of Tableau, SQL, Databricks, AWS, and Snowflake
  4. Big Data principles

Nice to Have:

  1. Right Angle
  2. Experience working in a Scrum framework

Required Skills : Trading floor/commodities experience; experience with Tableau, SQL, Databricks, AWS, and Snowflake; Big Data principles
Basic Qualification :
Additional Skills :
Background Check :Yes
Drug Screen :Yes
Notes :
Selling points for candidate :
Project Verification Info :
Candidate must be your W2 Employee :No
Exclusive to Apex :No
Face to face interview required :No
Candidate must be local :Yes
Candidate must be authorized to work without sponsorship :No
Interview times set :Yes
Type of project :Development/Engineering
Master Job Title :Big Data: Other
Branch Code :Nashville