Job Description:
  • Analyze relational data using SQL (preferably Oracle)
  • Build data pipelines using Python and PySpark
  • Build procedures and packages for ETL applications (preferably PL/SQL)
  • Work with large volumes of data (Hadoop)
  • Work closely with customers on ad-hoc data delivery

  • Automate data applications using scheduling tools such as Autosys and Airflow

