Job Description:

Candidate should have a good understanding of Big Data/Hadoop
technologies and should be strong in developing with Unix/Shell, PL/SQL, and
Scala frameworks.
Expertise in Java/J2EE and big data technologies such as Hadoop, Apache Spark,
and Hive is required. Must have applied these skills continuously over the last
2-3 years.
Industry experience is preferred
Designing ETL processes
Monitoring & evaluating performance and advising any necessary infrastructure
changes including changing the cloud platform
Defining data retention policies
Proficient understanding of distributed computing principles
Management of a Hadoop cluster, with all included services
Ability to solve any ongoing issues with operating the cluster
Proficiency with Hadoop v2, MapReduce, HDFS
Experience with building stream-processing systems using solutions such as
Storm or Spark Streaming
Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
Experience with Spark and NoSQL databases such as HBase, Cassandra, and MongoDB
Experience with integration of data from multiple data sources
Knowledge of various ETL techniques and frameworks, like Flume
Experience with Cloudera/MapR/Hortonworks
Experience in migrating from Exadata to Amazon Aurora
Experience in writing technical documents such as technical designs and
production runbooks
Insurance industry experience.
Excellent verbal and written communication skills.
Experience with the onshore-offshore delivery model
Agile and DevOps experience

Naveen
Ext 228
             
