Job Description:

Hi,

Hope you are doing well.

I have an urgent opening for a Hadoop Admin.

Role: Hadoop Admin
Duration: 6+ months, then conversion to full time
Location: 100% remote
Visa Status: USC/GC
Years of Experience: 5+ years as a Hadoop Admin, 8+ years overall
Job Title: Hadoop Admin/System Engineer
Required Technologies: Hadoop, Cloudera, Linux; healthcare background required
Interview Process: 2 rounds
Availability for video screening:
Note: Make sure to include a LinkedIn profile with the submission.
 
Job Description:
We are currently seeking candidates with expertise in Hadoop infrastructure design, configuration, installation, security, and ongoing support.
Responsibilities
Maintain large-scale Hadoop clusters using standard methodologies.
Implement and support backup and recovery strategies on Hadoop clusters based on current processes and procedures.
Install, configure, and maintain highly available, fully secured end-to-end environments.
Proactively identify opportunities to implement automation and monitoring solutions.
Propagate knowledge and operational readiness through documentation and mentorship.
Perform analysis of competing technologies and products.
Build out platforms across on-premise, private-cloud, public-cloud, and hybrid architectures, leveraging current technologies.
Required Experience
5+ years of experience architecting, administering, configuring, installing, and maintaining Big Data technologies, with emphasis on Hadoop (CDH or HDP)
5+ years of hands-on Linux administration (log searching, troubleshooting, tuning)
Expert understanding of Hadoop ecosystem technologies (Apache Spark, Impala, HDFS, Hive, Cloudera Manager, Sentry/Ranger, HBase, Solr, Kudu, etc.)
Advanced knowledge of Linux, servers, and shell scripting
Demonstrated experience with databases (e.g., SQL, NoSQL, Hive, in-memory, HBase)
Install and configure Hadoop clusters with full security (Kerberos, TLS, encryption at rest)
Effective communication skills to partner closely with citizen analysts and data scientists
Plan and execute major platform software and operating system upgrades and maintenance across physical environments
Ensure proper resource utilization across globally used multi-tenant clusters
Review performance stats and query execution/explain plans; recommend changes for tuning
Create and maintain detailed, up-to-date technical documentation
Ability to work in a fast-paced, team-oriented environment
Understanding of network optimization and DR strategies
Minimum Education, Experience, & Specialized Knowledge Required
Degree in Information Systems, Computer Science, or a related field, OR equivalent working experience
Ability to complete the full software development lifecycle and deliver in an Agile/Scrum environment, leveraging Continuous Integration/Continuous Delivery
Strong interpersonal skills, including a positive, solution-oriented attitude
Must be passionate, flexible, and innovative in using tools, experience, and other resources to deliver effectively against challenging and constantly changing business requirements
Must be able to interface with various solution/business areas to understand requirements and support development
This role supports the Big Data Factory (BDF), including the HDSC (Human Data Science Cloud) commercialization project.
Responsibilities:
• Serve as the SME for the Human Data Science Cloud (HDSC): expert in building and supporting all aspects of the platform.
• Serve as the SME for Cloudera Data Platform.
• Support Data Science and Analytics teams on complex code deployment, debugging, and performance optimization problems.
• Become the SME for the Big Data and Hadoop Data Science stack, including CDSW, Jupyter, Conda, RStudio, etc.
• Proficient in robust shell scripting and coding; languages such as Python will be used, and Scala is a plus.
• Troubleshoot and debug Hadoop ecosystem run-time issues.
• Troubleshoot and debug VMware, Triton, and other virtualization/container management systems.
• Communicate and integrate with a wide set of teams, including hardware, network, Linux kernel, JVM, Big Data vendors, and cloud vendors.
• Automate deployment and rollback of packages and applications for container technologies.
• Work with operating system internals, file systems, disk/storage technologies, and storage protocols.
• Lead projects, set accurate expectations for scope of work and time to complete, and communicate status effectively through the completion of work.
• Assist with management and configuration of Hadoop clusters.
• Work with enterprise security solutions such as LDAP, AD, and/or Kerberos.
• Consult on database administration and design.
• Work with and develop on underlying infrastructure for Big Data solutions (clustered/distributed computing, storage, data center networking).
• Experience with, or a good understanding of, other data lake technologies.

Nitin Gupta
Technical Recruiter, Cybertec, Inc.
11710 Plaza America, Suite 2000
Reston, VA 20190