Job Description:
Position: Big Data Architect (with Kafka)

Location: Chandler, AZ OR Fremont, CA

Experience: 10+ Years

Duration: 6+ months

Project Summary: Architecture, design and proofs of concept.



Responsibilities:

Architecture and design of large programs
Lead the team and take responsibility for overall deliverables
Analyze, design and support SIT, UAT and PFIX
Create proofs of concept/technology as required

Technical Skills - In Detail

Good understanding of data pipelines, including extraction, acquisition, transformation and visualization

Prior experience working with RDBMS and Big Data distributions

Experience with requirements gathering, systems development, systems integration and designing/developing APIs

Experience with Linux and shell programming

Experience with frameworks such as Anaconda and with developing ETL using PySpark on any major Big Data distribution

Good understanding of XML processing using Python, Spark RDDs and DataFrames
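
For context, the sketch below shows the kind of RDD-to-DataFrame XML processing this refers to, assuming one self-contained XML record per input line; the file paths, tag names and columns are hypothetical, and a production pipeline might instead use a dedicated package such as spark-xml.

from xml.etree import ElementTree as ET
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("xml-etl-sketch").getOrCreate()

def parse_order(xml_string):
    # Pull a few fields out of one <order> record; the tags are hypothetical.
    root = ET.fromstring(xml_string)
    return (root.findtext("id"),
            root.findtext("customer"),
            float(root.findtext("amount", "0")))

# Assume each line of the input files holds one complete <order> element.
raw_rdd = spark.sparkContext.textFile("/data/raw/orders/*.xml")
parsed_rdd = raw_rdd.map(parse_order)

# Promote the RDD to a DataFrame for downstream transformation and loading.
orders_df = parsed_rdd.toDF(["order_id", "customer", "amount"])
orders_df.groupBy("customer").sum("amount") \
    .write.mode("overwrite").parquet("/data/curated/order_totals")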

Performance tuning, unit testing and integration testing

Excellent communication and articulation skills

Self-starter with the ability to work in a dynamic and agile environment

Experience working with the Hadoop ecosystem – MapReduce, Hive, HBase

Experience working with at least one NoSQL database, such as Cassandra (C*) or MongoDB

Elasticsearch

Experience designing for scale, with considerations such as sharding
Experience with various query types – structured, proximity, relevance – and the query DSL
Data modelling with various hierarchies – nested, parent-child – and schema design
Understanding of ES analyzers and their use
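
For context, a minimal sketch covering the Elasticsearch points above (sharding, a custom analyzer, a nested hierarchy and a query DSL search) is shown below, assuming a local unauthenticated development cluster; the index name and fields are hypothetical.

import requests

ES = "http://localhost:9200"   # assumed local dev cluster, no auth

# Index with an explicit shard count, a custom analyzer and a nested "reviews" hierarchy.
index_body = {
    "settings": {
        "number_of_shards": 3,   # shard count chosen up front for scale
        "analysis": {
            "analyzer": {
                "en_folding": {"type": "custom", "tokenizer": "standard",
                               "filter": ["lowercase", "asciifolding"]}
            }
        },
    },
    "mappings": {
        "properties": {
            "title":   {"type": "text", "analyzer": "en_folding"},
            "reviews": {"type": "nested",
                        "properties": {"author": {"type": "keyword"},
                                       "body":   {"type": "text"}}},
        }
    },
}
requests.put(f"{ES}/products", json=index_body).raise_for_status()

# Query DSL: relevance-scored match on title plus a nested filter on reviews.
query = {
    "query": {
        "bool": {
            "must":   [{"match": {"title": "wireless headphones"}}],
            "filter": [{"nested": {"path": "reviews",
                                   "query": {"term": {"reviews.author": "alice"}}}}],
        }
    }
}
print(requests.post(f"{ES}/products/_search", json=query).json())
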
Kafka

Experience designing Kafka topics, partitioning and topic hierarchies
Experience with various Kafka consumers and Kafka-SQL
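
For context, a minimal sketch of topic creation with an explicit partition count and a consumer-group reader is shown below, using the kafka-python client against an assumed local broker; the topic name, partition count and group id are hypothetical.

from kafka import KafkaConsumer
from kafka.admin import KafkaAdminClient, NewTopic

BOOTSTRAP = "localhost:9092"   # assumed local dev broker

# Create a topic with a partition count chosen for the expected throughput.
admin = KafkaAdminClient(bootstrap_servers=BOOTSTRAP)
admin.create_topics([NewTopic(name="orders.created",
                              num_partitions=6,
                              replication_factor=1)])

# Consumer in a group; partitions of orders.created are balanced across group members.
consumer = KafkaConsumer(
    "orders.created",
    bootstrap_servers=BOOTSTRAP,
    group_id="order-enrichment",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: b.decode("utf-8"),
)
for msg in consumer:
    print(f"partition={msg.partition} offset={msg.offset} value={msg.value}")
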
Experience Level: Minimum 10 years