Job Description:

Client: TE

Position: Big Data Engineer

Location: Harrisburg, PA (100% Remote)

Duration: 6+ months

Interview: Video

Key Skills:

We need someone who has worked in ETL using BODS, Talend, or PL/SQL and has focused on AWS Cloud technologies over the last 2 to 3 years. As we are moving from on-prem to AWS, we need someone who can help us through this journey over the next 3 to 5 years.

Our client is looking for a Big Data Engineer for a position in Harrisburg, PA.
This is a 6-month position with a possible extension.
The position will be 100% remote.


You will be responsible for helping define ROI, analyze requirements, and design and implement data solutions on-premise and in the cloud. The candidate will work closely with project managers, vendor partners, business unit representatives, project sponsors, and Segment CIO teams to deliver the solutions. The candidate is expected to communicate project status, issues, and change control to all levels of management.
Primary Responsibilities:

  • Design and develop ETL solutions using data warehouse design best practices for the Next Generation Analytics platform.
  • Analyze data requirements, complex source data, and data models, and determine the best methods for extracting, transforming, and loading the data into staging, the warehouse, and other system integration projects.
  • Analyze business requirements and outline solutions.
  • Apply deep working knowledge of on-prem and cloud ESB architecture to address the client's requirements for scalability, reliability, security, and performance.
  • Provide technical assistance in identifying, evaluating, and developing systems and procedures.
  • Document all ETL and data warehouse processes and flows.
  • Develop and deploy ETL job workflow with reliable error/exception handling and rollback.
  • Manage foundational data administration tasks such as scheduling jobs, troubleshooting job errors, identifying issues with job windows, and assisting with database backups and performance tuning.
  • Design, develop, test, and adapt ETL code and jobs to accommodate changes in source data and new business requirements.
  • Create or update technical documentation for transition to support teams.
  • Develop automated data audit and validation processes
  • Provide senior technical leadership in the design, architecture, integration, and support of the entire data sourcing platform, with a focus on high availability, performance, scalability, and maintainability.
  • Manage automation of file processing as well as all ETL processes within a job workflow.
  • Contribute to and adhere to the development of standards and sound procedural practices.
  • Proactively communicate innovative ideas, solutions, and capabilities over and above the specific task request
  • Effectively communicate status and workloads, and offer to assist other areas.
  • Work collaboratively within a team and independently; continuously strive for high-performing business solutions.
  • Perform and coordinate unit and system integration testing.
  • Participate in design review sessions and ensure all solutions are aligned to pre-defined architectural specifications.
  • Ensure data quality throughout the entire ETL process, including audits and feedback loops to sources of truth.


Competencies & Experience Required/Desired:

  • 6+ years of data engineering experience in ETL design, development, optimization, and testing using PL/SQL, SAP Data Services (BODS), or Talend on databases such as Oracle, HANA, Redshift, and Aurora
  • 5+ years of experience with PL/SQL, complex SQL tuning, stored procedures, and data warehousing best practices.
  • 3+ years of experience in relational and Cloud database design, optimization and performance; preferably with AWS (S3 and Redshift), SAP HANA, BW, Oracle, and Hadoop
  • 3+ years of experience with AWS tools such as S3, EC2, ECS, EKS, SageMaker, Aurora, Redshift, RDS, Lambda functions, AMI, ELB, ALB, NLB, VPC, Auto Scaling configurations, DMS, Amazon FW, API Gateway, IAM, CloudTrail, and CloudFront.
  • Solid understanding of ETL pipeline and workflow management tools such as Airflow, AWS Glue, Amazon Kinesis, or AWS Step Functions
  • Solid understanding of AWS cloud computing services such as Lambda functions, ECS, Batch and Elastic Load Balancer and other compute frameworks such as Spark, EMR and Dask
  • Solid understanding of container strategies using Docker, Fargate, and ECR
  • Proficiency with modern software development methodologies such as Agile, source control, CI/CD, project management and issue tracking with JIRA
  • Experience in a life sciences research environment a plus
  • 3+ years of experience in developing flows using batch, real-time, and streaming processing to personalize experiences for our customers.
  • 2+ years of experience in designing service-oriented architecture (SOA), RESTful APIs and enterprise application integration (EAI) solutions utilizing MuleSoft Platform
  • 2+ years of experience with CI/CD tools such as Jenkins and Git, along with Java and shell scripting
  • Strong problem-solving capabilities. Results oriented. Relies on fact-based logic for decision-making.
  • Ability to work with multiple projects and work streams at one time. Must be able to deliver results based upon project deadlines.
  • Willing to flex daily work schedule to allow for time-zone differences for global team communications
  • Strong interpersonal and communication skills


Education

  • A master's or bachelor's degree in computer science, applied mathematics, software engineering, physics, or a related quantitative discipline.

Physical Demands

  • Ability to safely and successfully perform the essential job functions consistent with the ADA and other federal, state and local standards
  • Sedentary work that involves sitting or remaining stationary most of the time with occasional need to move around the office to attend meetings, etc.
  • Ability to conduct repetitive tasks on a computer, utilizing a mouse, keyboard and monitor
             
