Job Description:
Summary:
Minimum 6 years of hands-on experience in Big Data and/or Analytics, and all phases of the SDLC
Big Data:

Work experience in ingestion, storage, querying, processing, and analysis of Big Data, with hands-on experience in Big Data ecosystem technologies such as MapReduce, Spark, and HDFS
Hands-on experience with Object-Oriented Programming (OOP)
In-depth knowledge of Hadoop architecture and its components, including the High Availability architecture, along with a good understanding of workload management, scalability, and distributed platform architectures
Good understanding of Spark machine-learning algorithms such as classification, clustering, and regression
Experience working with Spark features such as RDD transformations and Spark MLlib
Experience working with monitoring tools
Hands-on experience with specifications, technical design, and development

BI and Analytics:
Solid experience creating PL/SQL packages, procedures, functions, triggers, and views, and handling exceptions when retrieving, manipulating, validating, and migrating complex data sets
Experience identifying and resolving performance issues
Team player, motivated to learn new technologies, able to grasp new concepts quickly, with strong analytical and problem-solving skills
Experience working in Agile teams as well as working independently