Job Description:

Must:

  1. Hadoop experience
  2. Led a small team or mentored junior developers (preferred, but not required)

We are seeking an accomplished Tech Lead – Data Engineer to architect and drive the development of large-scale, high-performance data platforms supporting critical customer and transaction-based systems. The ideal candidate will have a strong background in data pipeline design, Hadoop ecosystem, and real-time data processing, with proven experience building data solutions that power digital products and decisioning platforms in a complex, regulated environment.

As a technical leader, you will guide a team of engineers to deliver scalable, secure, and reliable data solutions enabling advanced analytics, operational efficiency, and intelligent customer experiences.

Key Roles & Responsibilities

  • Lead and oversee the end-to-end design, implementation, and optimization of data pipelines supporting key customer onboarding, transaction, and decisioning workflows.
  • Architect and implement data ingestion, transformation, and storage frameworks leveraging Hadoop, Avro, and distributed data processing technologies.
  • Partner with product, analytics, and technology teams to translate business requirements into scalable data engineering solutions that enhance real-time data accessibility and reliability.
  • Provide technical leadership and mentorship to a team of data engineers, ensuring adherence to coding, performance, and data quality standards.
  • Design and implement robust data frameworks to support next-generation customer and business product launches.
  • Develop best practices for data governance, security, and compliance aligned with enterprise and regulatory requirements.
  • Drive optimization of existing data pipelines and workflows for improved efficiency, scalability, and maintainability.
  • Collaborate closely with analytics and risk modeling teams to ensure data readiness for predictive insights and strategic decision-making.
  • Evaluate and integrate emerging data technologies to future-proof the data platform and enhance performance.

Must-Have Skills

  • 8–10 years of experience in data engineering, with at least 2–3 years in a technical leadership role.
  • Strong expertise in the Hadoop ecosystem (HDFS, Hive, MapReduce, HBase, Pig, etc.).
  • Experience working with Avro, Parquet, or other serialization formats.
  • Proven ability to design and maintain ETL / ELT pipelines using tools such as Spark, Flink, Airflow, or NiFi.
  • Proficiency in Python and Scala for large-scale data processing.
  • Strong understanding of data modeling, data warehousing, and data lake architectures.
  • Hands-on experience with SQL and both relational and NoSQL data stores.
  • Cloud data platform experience with AWS.
  • Deep understanding of data security, compliance, and governance frameworks.
  • Excellent problem-solving, communication, and leadership skills.

We are an equal opportunity employer. All aspects of employment, including the decision to hire, promote, discipline, or discharge, will be based on merit, competence, performance, and business needs. We do not discriminate on the basis of race, color, religion, marital status, age, national origin, ancestry, physical or mental disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, citizenship/immigration status, veteran status, or any other status protected under federal, state, or local law.
