Job Description:

Technical Skills: Python, PySpark

Domain Skills:

Roles & Responsibilities

- Should have solid experience working in an Agile methodology.

- Provide support for the PG&E Palantir Foundry application.

- Build data ingestion pipelines using Python/Spark and Foundry toolsets and libraries.

- Analyze and troubleshoot data quality issues and data discrepancies in the source system, and provide data profiling results.

- Conduct data cleanup and address data quality (DQ) issues for data ingestion.

- Interpret business questions and translate them into technical requirements and their underlying data requirements.

- Interact effectively with the QA and UAT teams during code testing, and migrate code across environments.

- Data engineer with hands-on PySpark experience building data pipelines in AWS environments.

- Should be able to understand design documents and independently build data pipelines based on the defined source-to-target mappings.

- The candidate should have strong RDBMS exposure and be able to convert complex stored procedure, SQL trigger, and similar logic to PySpark on the cloud platform.

- The candidate should be open to learning new technologies and implementing solutions quickly on the cloud platform.

- Any knowledge of the Palantir Foundry platform is a big plus.

             
