JOB DESCRIPTION:
· Bug fixes in the Databricks environment
· Ability to monitor, transform, and optimize ETL pipelines in Databricks; knowledge of Data Lakehouse architecture and PySpark (at least mid-level)
· Experience with complex data migration is a plus
· Ensure data accessibility and integrity for the migrated objects
· Collaborate effectively with cross-functional teams.
· Communicate progress and challenges clearly to stakeholders.
QUALIFICATIONS:
· Experience in SQL and Big Data.
· Proficiency in Spark and Impala/Hive.
· Experience with Databricks and cloud platforms, particularly Azure.
· Good understanding of data modeling concepts and data warehouse designs.
· Excellent problem-solving skills and a passion for data accessibility.
· Effective communication and collaboration skills.
· Experience with Agile methodologies.
We are an equal opportunity employer. All aspects of employment, including the decision to hire, promote, discipline, or discharge, will be based on merit, competence, performance, and business needs. We do not discriminate on the basis of race, color, religion, marital status, age, national origin, ancestry, physical or mental disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, citizenship/immigration status, veteran status, or any other status protected under federal, state, or local law.