Job Description:
Title: Hadoop Admin
Location: Southborough, MA 01772 (moving to Brighton in Q4—candidates need to be comfortable with both locations)
Team: Enterprise Information Management/Database Operations & Support
Team Size: manager plus 2-3 admins, team of architects and developers
Why Open: Set up new environments
Duration: 6 months, may extend
The key is a strong Hadoop Admin with Hortonworks and AWS experience; the best case is someone coming from a DBA or Sys Admin background. Any Hadoop/AWS/Hortonworks certifications will be a huge plus.
Important Skills:
Technical – Hadoop Admin, Hortonworks, AWS
Soft – communication skills, able to self-start and work with minimal supervision
Interview Process: phone then in-person; local only for now
Background Check (if applicable): criminal, employment verification, education (must be completed prior to start)
What the environment / Sell: newly formed group, setting up a new environment, investing in technology
Hours / Shift: regular shift, 9 a.m. to 5 p.m.
Must Haves:
Need someone to help them administer the AWS environment: write CloudFormation scripts and perform configuration management (a sketch of this scripting follows this list).
They will also need to help set up Hadoop clusters, with an understanding of the architecture.
Need to be the point of contact for Amazon/AWS to deal with budgeting, adding/removing space and features, etc.
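For illustration of the kind of CloudFormation scripting involved, below is a minimal Python/boto3 sketch; the stack name, template path, parameter names, and key pair are hypothetical, not taken from the client's environment.

    # Hypothetical sketch: stand up a CloudFormation stack for a Hadoop edge node.
    # Assumes AWS credentials are already configured and the template file exists.
    import boto3

    def deploy_stack(stack_name, template_path, key_name):
        cfn = boto3.client("cloudformation")
        with open(template_path) as f:
            template_body = f.read()
        cfn.create_stack(
            StackName=stack_name,
            TemplateBody=template_body,
            Parameters=[{"ParameterKey": "KeyName", "ParameterValue": key_name}],
            Capabilities=["CAPABILITY_NAMED_IAM"],
        )
        # Wait until the stack finishes before handing the node to the cluster build.
        cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)

    deploy_stack("hadoop-edge-node", "templates/hadoop_edge_node.yaml", "ops-keypair")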
Day to Day:
Responsible for the build-out, day-to-day management, and support of Big Data clusters based on Hadoop and other technologies, both on premises and in the cloud. Responsible for cluster availability.
Responsible for implementation and support of the Enterprise Hadoop environment. Involves designing, capacity planning, cluster setup, monitoring, structure planning, scaling, and administration of the Hadoop environment (YARN, MapReduce, HDFS, HBase, ZooKeeper, Storm, Kafka, Spark, Pig, and Hive).
Manage large scale Hadoop cluster environments, handling all Hadoop environment builds, including design, capacity planning, cluster setup, security, performance tuning and ongoing monitoring.
Evaluate and recommend systems software and hardware for the enterprise system including capacity modeling.
Contribute to the evolving architecture of our storage service to meet changing requirements for scaling, reliability, performance, manageability, and price.
Automate deployment and operation of the big data infrastructure. Manage, deploy and configure infrastructure with automation toolsets.
Work with data delivery teams to set up new Hadoop users. This includes setting up Linux users, setting up Kerberos principals, and testing HDFS, Hive, Pig, and MapReduce access for the new users (a sketch of this workflow follows this list).
Responsible for implementation and ongoing administration of Hadoop infrastructure.
Identify hardware and software technical problems, storage and/or related system malfunctions.
Team diligently with the infrastructure, network, database, application, and business intelligence teams to guarantee high data quality and availability.
Expand and maintain our Hadoop environments (MapR distribution, HBase, Hive, YARN, ZooKeeper, Oozie, Spyglass, etc.) and Apache stack environments (Java, Spark/Scala, Kafka, Elasticsearch, Drill, Kylin, etc.).
Collaborate with application teams to install operating system and Hadoop updates, patches, and version upgrades when required.
Work closely with Technology Leadership, Product Managers, and the Reporting Team to understand the functional and system requirements.
Expertise in cluster maintenance, as well as creation and removal of nodes, using tools such as Ganglia, Nagios, and Amazon Web Services.
Excellent troubleshooting and problem-solving abilities.
Facilitate POCs for new related technology versions, toolsets, and solutions, both built and bought, to prove viability for given business cases.
Maintain user accounts, access requests, node configurations/buildout/teardown, cluster maintenance, log files, file systems, patches, upgrades, alerts, monitoring, HA, etc.
Manage and review Hadoop log files.
File system management and monitoring
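A rough Python sketch of the user-onboarding workflow mentioned above (Linux account, Kerberos principal, HDFS home directory, access smoke test); the realm, group name, and paths are assumptions rather than the client's actual configuration.

    # Hypothetical sketch of onboarding a new Hadoop user on a Kerberized cluster.
    # Run as root on a node with kadmin.local and the HDFS client installed.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def onboard_hadoop_user(user, realm="EXAMPLE.COM"):
        # 1. Linux account on the edge/gateway node.
        run(["useradd", "-m", "-g", "hadoop-users", user])
        # 2. Kerberos principal with a randomized key (keytab or password handled separately).
        run(["kadmin.local", "-q", "addprinc -randkey {}@{}".format(user, realm)])
        # 3. HDFS home directory owned by the new user.
        run(["hdfs", "dfs", "-mkdir", "-p", "/user/" + user])
        run(["hdfs", "dfs", "-chown", user + ":hadoop-users", "/user/" + user])
        # 4. Smoke-test HDFS access as the new user.
        run(["sudo", "-u", user, "hdfs", "dfs", "-ls", "/user/" + user])

    onboard_hadoop_user("analyst1")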
Qualifications:
10 years of experience, with 4 years as a DBA (any database) or Unix Admin.
2 years of application database programming experience.
5 years of professional experience working with Hadoop technology stack.
5 years of proven experience in AWS and the Hortonworks Distribution.
Prior experience with performance tuning, capacity planning, and workload mapping.
Expert experience with at least one of the following: Python or Unix shell scripting.
A deep understanding of Hadoop design principles, cluster connectivity, security, and the factors that affect distributed system performance.


Client: Global Atlantic

             
