Hadoop Administration Engineer
Phoenix, AZ 85086
Date: Jul-23-18
Work Authorization: US Citizen, GC, H1B, EAD (OPT/CPT/GC/H4)
Preferred Employment: Corp-Corp, W2-Permanent, W2-Contract, 1099-Contract, Contract to Hire

Job Details
Experience: Senior
Rate/Salary ($): open
Duration: 6 months
Sp. Area: BigData, NoSQL
Sp. Skills: Hadoop
Consulting / Contract (CONTRACTOR)
Certification Preferred
Required Skills: Hadoop
Preferred Skills: MapReduce, HDFS, OOZIE, HIVE, SQOOP, PIG
Domain: IT/Software, Financial
Company: Value Info Tech, Phoenix, AZ
Job Description:
- Responsible for implementation and support of the Enterprise Hadoop environment.
- Involves designing, capacity planning, cluster setup, monitoring, structure planning, scaling and administration of Hadoop components (YARN, MapReduce, HDFS, HBase, Zookeeper, Storm, Kafka, Spark, Pig and Hive).
- Work closely with infrastructure, network, database, business intelligence and application teams to ensure business applications are highly available and performing within agreed-upon service levels.
- Accountable for performance tuning and resource management of Hadoop clusters and MapReduce routines.
- Strong experience with Linux-based systems and scripting (Shell, Perl or Python).
- Experience with configuration management tools such as Puppet, Chef or Salt.
- Strong experience configuring security in Hadoop using Kerberos or PAM (a client-side sketch follows this list).
- Good knowledge of directory services such as LDAP and ADS, and of monitoring tools such as Nagios or Icinga2.
- Strong troubleshooting skills for Hive, Pig, HBase and Java MapReduce code/jobs.
- Evaluate technical aspects of any change requests pertaining to the cluster.
- Research, identify and recommend technical and operational improvements that raise the reliability and efficiency of the cluster.
Similar Jobs you may be interested in:
Big Data Engineer, Phoenix, AZ
Mar-16-24
SRI Tech Solutions Inc
Rate/Salary ($): DOE
The client is ready to schedule an L1 interview this week. If you are unable to submit local consultants, submit non-local consultants who are strong with GCP and comfortable taking the customer interview face to face, with flexibility to work onsite from day one without fail. Job: Big Data Engineer with GCP Experience. Location: Phoenix, AZ (onsite role with a face-to-face client interview). Duration: Contract. Job Description: This role requires a seasoned developer having 9+ years of
GCP Data Engineer, Phoenix, AZ
Mar-28-24
Technogen, Inc
Rate/Salary ($): USD 72 / Hourly / C2
GCP Data Engineer, Phoenix, AZ. Must-have qualifications: Bachelor's degree in Engineering or Computer Science or equivalent, OR Master's in Computer Applications or equivalent. 5+ years of software development experience, including leading teams of engineers and scrum teams. 3+ years of hands-on experience working with MapReduce, Hive and Spark (core, SQL and PySpark). Hands-on experience writing and understanding complex SQL (Hive/PySpark dataframes), optimizing joins while processing huge amount of
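Not part of the listing, but since the posting above calls out optimizing Spark joins over large datasets, here is a minimal sketch of the standard broadcast-join technique in Java Spark; the table paths and column names are hypothetical, chosen only to show the pattern.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.broadcast;

public class JoinOptimizationSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("join-optimization-sketch")
                .master("local[*]") // local run for illustration only
                .getOrCreate();

        // Hypothetical inputs: a large fact table and a small dimension table.
        Dataset<Row> events = spark.read().parquet("/data/events");       // large
        Dataset<Row> customers = spark.read().parquet("/data/customers"); // small

        // Broadcasting the small side replaces a full shuffle of the large
        // table with a map-side hash join, which is the usual first step
        // when optimizing Spark joins on big data.
        Dataset<Row> joined = events.join(
                broadcast(customers),
                events.col("customer_id").equalTo(customers.col("id")));

        joined.explain(); // inspect the plan for BroadcastHashJoin
        spark.stop();
    }
}
```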