Job Description:

Job Title: Databricks Developer

Location: Hybrid; candidates local to Atlanta preferred

Duration: 12 - 24 months

Job Description: A PySpark and Databricks developer with a strong understanding of the full ETL and Azure lifecycle and a background in data projects.

Responsibilities

  • Design, develop, and maintain scalable data pipelines and ETL processes using Azure Databricks, Data Factory, and other Azure services
  • Implement and optimize Spark jobs, data transformations, and data processing workflows; manage Databricks notebooks and Delta Lake using Python and Spark SQL in Databricks
  • Leverage Azure DevOps and CI/CD best practices to automate the deployment and management of data pipelines and infrastructure, including Databricks Asset Bundle (DAB) deployments
  • Implement data integrity and data quality checks to ensure error-free production deployments
  • Stay current with new Databricks features such as Unity Catalog, Lakeflow, DAB deployments, and Catalog Federation
  • Hands-on experience with data extraction (schemas, corrupt-record handling, error handling, parallelized code), transformations and loads (user-defined functions, join optimizations), and production optimization (automated ETL)

Qualifications

  • Bachelor’s degree in Computer Science, Information Technology, or a related field.
  • Minimum of 5 years of experience in data engineering or similar roles.
  • Proven expertise with Azure Databricks and data processing frameworks.
  • Strong understanding of data warehousing, ETL processes, and data pipeline design.
  • Experience with SQL, Python, and Spark.
  • Excellent problem-solving and analytical skills.
  • Effective communication and teamwork abilities.

Skills

  • Azure Databricks
  • Python
  • Apache Spark
  • SQL
  • ETL processes
  • Data Warehousing
  • Data Pipeline Design
  • Cloud Architecture
  • Performance Tuning
