Job Description:

Job Title: Cloud Data Engineer Lead Specialist
Location: Durham, North Carolina
Experience Required: 12+ Years
Employment Type: Contract
Interview Type: In-Person or Webcam

Job Description

We are seeking an experienced Cloud Data Engineer Lead Specialist to design, develop, and optimize secure, scalable cloud-based data solutions. This role involves leading data engineering initiatives, building modern data platforms, and implementing advanced data pipelines across cloud environments. The ideal candidate will have extensive hands-on expertise in cloud data ecosystems, large-scale data processing, and enterprise integration, along with a strong technical leadership background.

Key Responsibilities
  • Lead the architecture, design, and implementation of cloud-based data platforms and data pipelines.

  • Develop and maintain large-scale ETL/ELT processes that support business analytics and reporting.

  • Build and optimize data ingestion, transformation, and storage frameworks for structured and unstructured data.

  • Ensure data quality, governance, security, and compliance across cloud and hybrid data environments.

  • Evaluate new cloud technologies, tools, and methodologies to improve data engineering capabilities.

  • Lead a team of data engineers, provide mentoring, and oversee delivery execution.

  • Collaborate with stakeholders including data architects, analysts, DevOps, and business users.

  • Improve data processing performance and reliability through monitoring, automation, and best practices.

  • Troubleshoot and resolve complex data pipeline or infrastructure issues in production environments.

  • Participate in architecture reviews, design sessions, and solution planning meetings.

Required Skills and Qualifications
  • 12+ years of professional experience in data engineering and large-scale enterprise data solutions.

  • Strong hands-on experience with at least one major cloud platform (AWS, Azure, or Google Cloud).

  • Expertise in cloud-native data services such as AWS Glue, Lambda, S3, EMR, Redshift, and Lake Formation; Azure Synapse and Data Factory; Databricks; or Google BigQuery.

  • Advanced knowledge of SQL, Python, Spark, PySpark, and distributed data frameworks.

  • Proven background in building ETL/ELT systems and high-performance batch and streaming data pipelines.

  • Experience with data warehousing and data lake design concepts.

  • Strong understanding of CI/CD practices, DevOps, and infrastructure-as-code tools (Terraform, CloudFormation, etc.).

  • Knowledge of data security, access control, and governance standards.

  • Excellent communication, leadership, and problem-solving skills.

  • Experience working in Agile development environments.

