Job Description:
JOB DUTIES

Manage Hadoop and Spark cluster environments on bare-metal and container infrastructure, including service allocation and configuration, capacity planning, performance tuning, and ongoing monitoring.

Excellent knowledge of Linux, AIX, or other Unix flavors

Deep understanding of Hadoop and Spark cluster security, network connectivity, and I/O throughput, along with other factors that affect distributed system performance

Strong working knowledge of disaster recovery, incident management, and security best practices

Working knowledge of containers (e.g., Docker) and major orchestrators (e.g., Mesos, Kubernetes, Docker Datacenter)

Working knowledge of automation tools (e.g., Puppet, Chef, Ansible)

Working knowledge of software-defined networking

Working knowledge of parcel-based upgrades with Hadoop (i.e., Cloudera)

Working knowledge of hardening Hadoop with Kerberos, TLS, and HDFS encryption

Ability to quickly perform critical analysis and apply creative approaches to solving complex problems

Excellent written and verbal communication skills

EXPERIENCE

5+ years of hands-on experience supporting Linux production environments

3+ years of hands-on experience supporting Hadoop and/or Spark ecosystem technologies in production

3+ years of hands-on scripting experience with Bash, Perl, Ruby, or Python

2+ years of hands-on development/administration experience with Kafka, HBase, Solr, and Hue

Experience with networking infrastructure, including VLANs and firewalls

Proven track record with Red Hat Enterprise Linux administration

Proven track record with Cloudera Distribution of Hadoop (CDH) administration

Proven track record troubleshooting YARN jobs

Proven track record with HBase administration, including tuning

Proven track record with Apache Spark development and/or administration
