Job Description:
Position: Application Architect-Hadoop Administrator
Location: Jacksonville, FL
Duration: Full Time
The Hadoop Enterprise Environment Management team is seeking a Hadoop Analyst who will be responsible for providing technical and administrative support for Linux and Hadoop platforms in a fast-paced operations environment, supporting 24x7 business-critical applications that use HDFS, Sentry, Impala, Spark, and Hive.

The Analyst will troubleshoot issues across Hadoop ecosystem services, perform performance analysis, ensure security, and develop and test Unix shell scripts, along with Perl and Java code, as required for Hadoop administration and the associated core Hadoop ecosystem.
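For illustration only (not a requirement of the role), a minimal sketch of the kind of routine shell scripting and log review this administration work typically involves; the HDFS commands are standard CLI calls, but the NameNode log path is an assumed default and varies by environment:

    #!/usr/bin/env bash
    # Illustrative sketch: routine HDFS health check and log review.
    # The log path below is an assumed default and differs by distribution.

    # Summarize cluster capacity, usage, and live/dead DataNodes.
    hdfs dfsadmin -report

    # Check filesystem integrity and list any corrupt blocks.
    hdfs fsck / -list-corruptfileblocks

    # Surface the most recent ERROR/FATAL entries from the NameNode log.
    NN_LOG=/var/log/hadoop-hdfs/hadoop-hdfs-namenode-$(hostname).log   # assumed path
    grep -E "ERROR|FATAL" "$NN_LOG" | tail -n 20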

Responsibilities Include:
Proven understanding of Cloudera Hadoop, Impala, Hive, Flume, HBase, Sqoop, Apache Spark, Apache Storm, etc.
Administer clusters, troubleshoot and isolate problems, and correct issues discovered in them
Performance-tune Hadoop clusters, ecosystem components, and jobs, including the management and review of Hadoop log files
Demonstrated ability utilizing core Hadoop (e.g., Hadoop, MapReduce, Hive, Pig, Oozie, HDFS, Unix shell, Java) and/or ETL tools (e.g., Informatica, DMX-H) and/or relational databases (e.g., SQL, Teradata, Oracle, DB2)
Provide 24x7 on-call support for production code migrations and lower-environment platform outages/service disruptions, on a rotational and as-needed basis
Provide code deployment support for Test and Production environments
Diagnose and address database performance issues using performance monitors and various tuning techniques
Interact with Storage and Systems administrators on Linux/Unix/VM operating systems and Hadoop Ecosystems
Troubleshoot platform problems and connectivity issues
Ability to work well both as part of a team and independently with minimal supervision
Excellent communication and project management skills

Required Skills
More than 3 years of experience in Big Data technologies and concepts
Bachelor’s Degree in Information Technology, Engineering, Computer Science, or a related field, or equivalent work experience
At least 5-7 years of experience in a large Data Warehouse environment.
3-4 years of experience in Linux/Unix.
2-3 years of experience with scheduling tools such as Autosys.
Experience with developer tools for code management, ticket management, performance monitoring, and automated testing
Strong understanding of Data Warehouse/MDM concepts.
Solid understanding of Big Data technology – Hadoop
Deep knowledge of industry-standard, enterprise-class best practices for a large DFS environment
Good understanding of Linux/VM platform
Experience supporting and auditing Linux/Unix security (Kerberos and Active Directory); an illustrative access check follows this list
Experience with the Unix/Linux OS platform, Unix shell/Perl scripting, and automation
Proven understanding of Hadoop administration
Document programming problems and resolutions for future reference.
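For illustration only, a minimal sketch of a Kerberos-secured access check of the kind referenced in the security requirement above; the keytab path and principal are placeholders, not values from this environment:

    #!/usr/bin/env bash
    # Illustrative sketch: confirm Kerberos authentication against HDFS.
    # Keytab location and principal are placeholders, not real values.

    KEYTAB=/etc/security/keytabs/hdfs.headless.keytab   # assumed location
    PRINCIPAL="hdfs@EXAMPLE.COM"                         # placeholder principal

    # Obtain a ticket from the keytab, then list the cached credentials.
    kinit -kt "$KEYTAB" "$PRINCIPAL"
    klist

    # A simple authenticated operation; fails if Kerberos is misconfigured.
    hdfs dfs -ls /user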

Business Rationale
HEEM - HaaS resources needed for ramp-up of initiative work.