Job Description:

Excellent understanding and hands-on implementation experience with the Hadoop architecture, including the following technologies:
- Data storage: HDFS, HBase, Hive
- Data processing, analysis & integration: Spark (Python), R-based environments, MapReduce, Impala, Sqoop
- A strong understanding of data lake, data warehouse, visualization, and analytics concepts is essential.
- Very strong SQL skills, in both RDBMS and big data environments, are required for this role.
- Experience working with distributed file systems, BDaaS, SaaS, and PaaS.
- Work on MPP and big data technologies such as Hadoop, including data engineering and data governance implementations and support.
- Gather and process raw data at scale, including writing scripts, calling APIs, and writing SQL queries (see the sketch after this list).
- Design and develop data structures that support high-performing, scalable analytic applications.
- Implement automation and related integration technologies with Ansible, Chef, or Puppet.
- Knowledge of predictive analytics and mixed-model analysis, along with appropriate modeling techniques, is a big plus.
- Work closely with the engineering team to integrate innovations and algorithms into data lake systems.
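For illustration only, a minimal PySpark sketch of the "gather and process raw data at scale" duty above, combining file ingestion with a SQL query; the HDFS paths, view name, and column names (events, event_date) are hypothetical and not part of the posting:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("raw-data-ingest").getOrCreate()

    # Gather raw data at scale: read delimited files from a (hypothetical) HDFS path.
    raw = spark.read.option("header", True).csv("hdfs:///data/raw/events/")

    # Expose the data to SQL, as the role calls for strong SQL in big data environments.
    raw.createOrReplaceTempView("events")

    daily_counts = spark.sql("""
        SELECT event_date, COUNT(*) AS n_events
        FROM events
        GROUP BY event_date
    """)

    # Persist in a columnar format suited to downstream Hive/Impala queries.
    daily_counts.write.mode("overwrite").parquet("hdfs:///data/curated/daily_counts/")

    spark.stop()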