Job Description:
Greetings from IT-SCIENT LLC.
Hope you are doing well.
Kindly go through the requirement below, and if you find it interesting, please reply with your updated resume as soon as possible.
Position: Kafka Developer
Location: Charlotte, NC
Minimum years of experience: 8+ Years

The candidate will need to visit the client office to collect a client laptop and may then work remotely for a short period (about 1 month). After that, they must either move fully onsite or at minimum work onsite on a two-week rotation (two weeks in the office, then two weeks remote).

Must Have:
Is the candidate comfortable working onsite, or at least willing to work onsite on a two-week rotation (two weeks in the office, then two weeks remote)? – YES/NO
Kafka Development
Hadoop Development
Python Scripting
Scala Development
Spark Development

Detailed Job Description:
Candidate must be located within commuting distance of Charlotte, NC or be willing to relocate to the area. This position may require travel in the US and Canada.
Bachelor’s Degree or foreign equivalent required; work experience will be considered in lieu of a degree
4+ years of experience with Information Technology
3+ years in Kafka programming, with working knowledge of Big Data technologies (Hadoop, HBase, Hive, Scala, Spark, Python, etc.)
Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets using Kafka. Experience in Core Java/OOP.
Experience with Apache/Confluent Kafka components (Connect, Schema Registry, KSQL, Control Center, brokers, Kafka Streams)
Provide expertise and hands-on experience with Kafka connectors (such as MQ connectors, Elasticsearch connectors, JDBC connectors, FileStream connectors, and JMS source connectors) and with tasks, workers, converters, and transforms.
Create stubs for producers, consumers, and consumer groups to help onboard applications from different languages/platforms. Leverage Hadoop ecosystem knowledge to design and develop solutions using Spark, Scala, Python, Hive, Kafka, and other components of the Hadoop ecosystem.
Build processes supporting data transformation, data structures, metadata, dependency management, and workload management.
A successful history of manipulating, processing, and extracting value from large, disconnected datasets. Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
Good experience in end-to-end implementation of DW/BI projects, especially data warehouse and data mart development
Strong knowledge and hands-on experience in Unix shell scripting
Knowledge of and experience with the full SDLC
Experience supporting and working with cross-functional teams in a dynamic environment.
Able to independently debug issues and provide support to the production support team as needed
Experience with Lean / Agile development methodologies
