Job Description:
Big Data Architect (with Kafka)
Location: Chandler, AZ OR Fremont, CA
Duration: 6+ months
Key Technical Skills: Kafka, Elasticsearch
Required Experience/Skills:
- Minimum 10 years of experience
- Experience designing Kafka topics, partitioning strategies, and topic hierarchies
- Experience with various Kafka consumers and Kafka SQL (KSQL)
- Experience working with the Hadoop ecosystem – MapReduce, Hive, HBase
- Experience working with at least one NoSQL database such as Cassandra (C*) or MongoDB
- Experience designing for scale, with considerations such as sharding
- Experience with various query types – structured, proximity, relevance, and the Elasticsearch query DSL
- Data modelling with various hierarchies – nested, parent-child, and schema design
- Understanding of Elasticsearch analyzers and their use
- Good understanding of data pipelines, including extraction, acquisition, transformation, and visualization
- Prior experience working with RDBMS and Big Data distributions
- Experience with requirements gathering, systems development, systems integration, and designing/developing APIs
- Experience with Linux and shell programming
- Experience with platforms such as Anaconda and developing ETL pipelines using PySpark on any major Big Data distribution
- Good understanding of XML processing using Python, Spark RDDs, and DataFrames
- Experience with performance tuning, unit testing, and integration testing
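As a minimal illustration of the XML-processing skill listed above, the sketch below parses XML records into row dictionaries using only Python's standard library; in a PySpark ETL job the same per-record function would typically run inside an RDD transformation (e.g. `flatMap`) before the rows are handed to `spark.createDataFrame()`. The element and field names here are hypothetical, not from the posting.

```python
import xml.etree.ElementTree as ET

def xml_to_rows(xml_payload):
    """Parse an XML document of <record> elements into row dicts.

    In a PySpark job, this function would be applied per XML payload
    (e.g. rdd.flatMap(xml_to_rows)) and the resulting rows passed to
    spark.createDataFrame(); it is shown standalone here.
    """
    root = ET.fromstring(xml_payload)
    rows = []
    for rec in root.findall("record"):
        rows.append({
            "id": rec.get("id"),            # attribute on <record>
            "value": rec.findtext("value"), # text of child <value>
        })
    return rows

sample = """<records>
  <record id="1"><value>alpha</value></record>
  <record id="2"><value>beta</value></record>
</records>"""

rows = xml_to_rows(sample)
```

Keeping the parsing logic in a plain Python function like this also makes it straightforward to unit-test outside of Spark, which is relevant to the testing requirement above.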

Client : Confidential