Job Description:
Long-term opportunity based out of our Little Rock, AR location. Contract-to-hire preferred.
Key Requirements:
7-8 years of experience required.
Experience in HDP 2.x / 3.x or Cloudera, and in Hadoop/system monitoring tools (e.g., Kafka Manager, Nagios, etc.).
CI/CD and DevOps methodology for containerization.
Hands-on experience deploying and tracking jobs, scheduling them (via Oozie, cron, or another scheduler; see the example after this list), configuring them appropriately, and taking backups.
Capable of monitoring critical parts of the cluster, performing cleanup activities, auditing the cluster, setting up a cluster from scratch, and upgrading the cluster.
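
As an illustration of the scheduling item above, a minimal cron sketch; the backup script path, schedule, and log file are hypothetical placeholders:

    # Run a hypothetical HDFS backup script every night at 01:30 and capture its output
    30 1 * * * /opt/scripts/hdfs_backup.sh >> /var/log/hdfs_backup.log 2>&1
    # List the current user's scheduled jobs to verify the entry was installed
    crontab -l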
Primary Skills Needed:

Hands-on experience in a Unix/Linux environment.
Administration experience in a Unix/Linux environment.
Experience writing shell scripts.
Commissioning and decommissioning of nodes in the Hadoop environment (see the sketch after this list).
Hands-on experience with GitLab, Ansible, Kubernetes, and Docker.
General operational expertise, such as good troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networking.
Able to perform capacity planning and advise the client on hardware requirements or upgrades.
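
As an illustration of the commissioning/decommissioning item above, a minimal sketch for gracefully decommissioning a DataNode on a plain HDFS cluster; the hostname and exclude-file path are assumed examples, and Ambari / Cloudera Manager wrap these steps in their UIs:

    # Add the host to the exclude file referenced by dfs.hosts.exclude in hdfs-site.xml
    echo "worker-node-07.example.com" >> /etc/hadoop/conf/dfs.exclude
    # Ask the NameNode to re-read the include/exclude lists and begin decommissioning
    hdfs dfsadmin -refreshNodes
    # Check node status until the host reports "Decommissioned" before removing it
    hdfs dfsadmin -report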

Key Responsibilities:

Responsible for implementation and ongoing administration of Hadoop infrastructure.
Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments.
Working with data delivery teams to set up new Hadoop users. This includes creating Linux users, setting up Kerberos principals, and testing HDFS, Hive, Pig, and MapReduce access for the new users (see the sketch after this list).
Cluster maintenance, as well as creation and removal of nodes, using tools such as Ganglia, Nagios, and Ambari / Cloudera Manager Enterprise.
Performance tuning of Hadoop clusters and of Spark, HBase, and Hive routines.
Monitoring Hadoop cluster connectivity and security.
Managing and reviewing Hadoop log files.
Diligently teaming with the infrastructure, network, database, application and business intelligence teams to guarantee high data quality and availability.
Collaborating with application teams to install operating system and Hadoop updates, patches, and version upgrades when required.
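
As an illustration of the new-user onboarding responsibility above, a minimal sketch for a Kerberized HDP/Cloudera cluster; the username, realm, and HiveServer2 host are hypothetical, and exact steps vary by environment:

    # Create the Linux account on the relevant hosts (username is an example)
    useradd -m analyst1
    # Create a matching Kerberos principal in the cluster's realm (realm is an example)
    kadmin.local -q "addprinc analyst1@EXAMPLE.COM"
    # Create the user's HDFS home directory and hand over ownership
    hdfs dfs -mkdir /user/analyst1
    hdfs dfs -chown analyst1:analyst1 /user/analyst1
    # Smoke-test access: obtain a ticket, then list HDFS and connect to Hive
    kinit analyst1@EXAMPLE.COM
    hdfs dfs -ls /user/analyst1
    beeline -u "jdbc:hive2://hiveserver.example.com:10000/default;principal=hive/_HOST@EXAMPLE.COM" -e "show databases;"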