Job Description:

Please Note: As of July 22, 2021, our team requires that all candidate submissions include a LinkedIn profile. Please do not submit any candidates who do not have a LinkedIn profile.

Kforce's client is searching for a Data Engineer with the following qualifications:

  • Experience productionizing various big data technologies, both open source and cloud-native, AWS preferred (Kafka, Airflow, Dremio, etc.)
  • Expertise in data model design with sensitivity to usage patterns and goals - schema, scalability, immutability, idempotency, etc.
  • Expertise in at least two of the following languages: Python, Go, Scala, Java
  • Experience handling large-scale time-series data
  • Experience with GraphQL, Apollo, and Hasura
  • Track record of choosing the right transit, storage, and analytical technology to simplify and optimize user experience.
  • Real-world experience developing highly scalable solutions using microservice architecture designed to democratize data to everyone in the organization.
  • Put your passion for CI/CD to work and enjoy the impact it has on software quality and customers!
  • Live and love Docker, EKS, GitLab, and Terraform.
  • Build Terraform scripts and other deployment and configuration automation.
  • Live, laugh, and love some flavor of Agile, with a side of Scrum.
  • Work closely with other teams and individuals to plan, coordinate, and seek feedback.
  • Pitch in where needed as a valued team member. There is no "I" in team, but there are two in "idiot."
  • Docker, K8s, cloud, microservices, containerization, web services, DB/SQL, etc. (You get it.)
  • Strong analytical, problem-solving, and troubleshooting skills. Let's face it, you are one of the smartest people you know.
  • Experienced with modern coding, testing, debugging, and automation techniques.
  • Rave about the benefits of CI/CD, unless manual deployments really are your thing.
  • Have a high bar for user experience and quality.
  • You are data-driven and customer-obsessed.
  • Good communication skills.

Bonuses to include as part of your application:

  • Links to online profiles you use, such as GitHub, Twitter, etc.
  • A description of your work history

Required Skills: See the list of qualifications above.
Basic Qualification:
Additional Skills:
Rank: B1
Requested Date: 2022-09-01
             
