Job Description:
Responsibilities
Develop a highly scalable and extensible Big Data platform (Cloudera) that enables collection, storage, modeling, and analysis of massive data sets from numerous channels.
Load data from disparate data sets.
Develop ETL code using Hive and Spark.
Translate complex functional and technical requirements into detailed design.
Maintain security, data privacy, and data quality.
Continuously evaluate new technologies, innovate, and deliver solutions for business-critical applications.
Test prototypes and oversee handover to operational teams.
Propose best practices/standards.

Qualifications
2+ years of experience with the Hadoop ecosystem (HDFS, HBase, Spark, Kafka, ZooKeeper, Impala, Flume, Parquet, Avro) for high-volume platforms and scalable distributed systems.
Experience working with data models, frameworks, and open-source software; RESTful API design and development; and software design patterns.
Experience with Agile/Scrum methodologies, FDD (Feature-Driven Development), TDD (Test-Driven Development), Elasticsearch (ELK stack), SRE automation for Hadoop technologies, Cloudera, Kerberos, encryption, performance tuning, and CI/CD (continuous integration and deployment).
Excellent customer service attitude, communication skills (written and verbal), and interpersonal skills.
Excellent analytical and problem-solving skills.
Experience in the financial services industry preferred.
             
