Job Description:

Responsibilities:


Responsible for solution design and development of real-time data access frameworks and data streaming services on the operations data platform, and for integration with other applications and source systems to provide reporting and analytics solutions. Requires a strong understanding of data lake and data warehouse approaches, industry standards, and industry best practices, with experience in enterprise-scale implementations following the Software Development Lifecycle (SDLC) process.


Qualifications:


  • 3+ years of hands-on experience in design, development, and support of Apache Kafka data streaming services.
  • Experience with custom Kafka connectors and with creating producers, consumers, and consumer groups.
  • Ability to design event-driven systems and understand the design of pub-sub models.
  • Experience designing, developing, and operationalizing applications.
  • Proficiency with Python.
  • Very good understanding of configuring SSL/TLS for Kafka consumers (see the sketch after this list).
  • Good understanding of different file storage formats and how to choose among them for performance based on the use case.
  • Documents data flow diagrams, security access, data quality, and data availability across all business systems.
  • Knowledge of and experience with DevOps tooling: Azure DevOps, Git, Jenkins, and/or Ansible.
  • Experience with the software development lifecycle/agile methodology and Jira/Confluence.
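
As an illustration of the Kafka, Python, and SSL items above, the following is a minimal sketch of a Python consumer that joins a consumer group over an SSL-encrypted connection using the kafka-python library. The topic name, broker address, group id, and certificate paths are illustrative placeholders, not details of this role.

    import json

    from kafka import KafkaConsumer  # pip install kafka-python

    # Broker address, topic, group id, and certificate paths below are placeholders.
    consumer = KafkaConsumer(
        "orders",                                        # topic to subscribe to
        bootstrap_servers=["broker1.example.com:9093"],  # broker's SSL listener
        security_protocol="SSL",                         # encrypt traffic to the broker
        ssl_cafile="/etc/kafka/certs/ca.pem",            # CA that signed the broker certificate
        ssl_certfile="/etc/kafka/certs/client.pem",      # client certificate (mutual TLS)
        ssl_keyfile="/etc/kafka/certs/client.key",       # client private key
        group_id="orders-reporting",                     # consumers sharing this id form a group
        auto_offset_reset="earliest",                    # start from the oldest record if no offset is committed
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

    # Poll indefinitely; each record exposes its topic, partition, offset, and value.
    for message in consumer:
        print(message.topic, message.partition, message.offset, message.value)

Consumers that share a group_id split the topic's partitions among themselves, with each partition read by exactly one consumer in the group at a time; this is how Kafka consumer groups scale out processing.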
