Job Description:
Position: Hortonworks Administrator
Location: Baltimore, MD
Duration: 6+ months

Description:
This position calls for a highly skilled technology consultant and leader in the area of Hadoop processing. The individual will work closely with and lead customers and development teams to create, manage, upgrade, and secure Hadoop clusters.

Required skills:

Minimum of 3 years of Linux/Unix administration.
Minimum of 3 years of experience with Cloudera/Hortonworks Hadoop administration.
Extensive experience in the Hadoop ecosystem, including Spark, MapReduce, HDFS, Hive, HBase, and Zeppelin.
1 year of experience with Hadoop-specific automation (e.g., blueprints).
1 year of technical experience managing Hadoop cluster infrastructure environments (e.g., data center infrastructure).
Demonstrable scripting experience in one or more of Python, Bash, PowerShell, or Perl.
1 year of experience with Puppet and/or Chef.
1 year of virtualization experience with any of VMware, Hyper-V, or KVM.

Desired skills:
Certified Hadoop Administrator (Cloudera/Hortonworks).
Networking (TCP/IP, routers, IP addressing, use of network tools).
Analyzing data with Hive, Pig, and/or HBase.
Data ingestion, streaming, or importing/exporting RDBMS data using Sqoop.
DBA experience.
RDBMS SQL development.
Manage cluster hardening activities through the implementation and maintenance of security and governance components across various clusters.

Education: Bachelor's degree and 11 years of work experience.
Key success metrics for this individual include: 1) proven experience with automation; 2) advanced Linux/Windows system administration capabilities; and 3) the ability to thrive within a dynamic technology environment.

Responsibilities:

Create Hadoop ecosystem components (Hadoop, Hive, Pig, Oozie, Hue, HBase/Cassandra, Flume) using both automated toolsets and manual processes.
Maintain, support, and upgrade Hadoop clusters.
Monitor jobs, queues, and HDFS capacity.
Balance, commission, and decommission cluster nodes.
Apply security (Kerberos/OpenLDAP), linking with Active Directory and/or LDAP.
Enable users to view job progress via a web interface.
Onboard users to Hadoop: configuration, access control, disk quotas, permissions, etc.
Address all issues; apply upgrades and security patches.
Commission/decommission nodes; back up and restore.
Apply "rolling" cluster node upgrades in a production-level environment.
Assemble newly purchased hardware into racks with switches; assign IP addresses properly; configure firewalls, enable/disable ports, VPN, etc.
Work with the virtualization team to provision and manage HDP cluster components.
