Job Description:
Hi,

Hope you are doing great.

We have the below positions open with one of our clients. If interested, kindly send me your updated resume in Word format.



Position 1:
Job Title: Big Data Hadoop Developer
Location: Atlanta, GA / Austin, TX / Bay Area, CA / Bethpage, NY / Buffalo, NY / Charlotte, NC / Chicago, IL / Denver, CO / Durham, NC / Fort Worth, TX / Hartford, CT / Houston, TX / Milwaukee, WI / New Jersey / Peoria, IL / Phoenix, AZ / Richmond, VA / Seattle, WA / St. Louis, MO
Duration: Full-time/Permanent

Qualifications
Basic
Bachelor’s degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
At least 4 years of experience with IT Big Data skills

Preferred
At least 4 years of experience in technology consulting, enterprise and solutions architecture, and architectural frameworks
At least 2 years of experience with Hadoop, Hive, and HBase
At least 3 years of experience in project execution
Experience defining new architectures and the ability to drive an independent project from an architectural standpoint
Analytical skills
At least 2 years of experience in thought leadership, white papers, and leadership/mentoring of staff and internal consulting teams


Position 2: Hadoop Admin
Location: Foster City, CA / Houston, TX / Chicago, IL
Duration: Full-time

Qualifications
Basic
Bachelor’s degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
At least 4 years of overall IT experience


Preferred
At least 4 years of experience in the implementation and administration of Hadoop infrastructure
At least 2 years of experience in architecting, designing, implementing, and administering Hadoop infrastructure
At least 2 years of experience in Project life cycle activities on development and maintenance projects.
Should be able to provide consultancy to client/internal teams on which product/flavor is best for which situation/setup
Operational expertise in troubleshooting; understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networks
Experience with Hadoop, MapReduce, HBase, Hive, Pig, and Mahout
Hadoop administration skills: experience working with Cloudera Manager or Ambari, Ganglia, and Nagios
Experience using Hadoop schedulers: FIFO, Fair Scheduler, and Capacity Scheduler

Position 3: Sr. Big Data Engineer
Location: Chicago, IL / Piscataway, NJ
Duration: Full-time

3+ years of demonstrable experience designing technological solutions to complex data problems, and developing and testing modular, reusable, efficient, and scalable code to implement those solutions.

Ideally, this would include work on the following technologies:

Expert-level proficiency in at least one of Java, C++, or Python (preferred); Scala knowledge is a strong advantage.
Strong understanding of and experience with distributed computing frameworks, particularly Apache Hadoop 2.0 (YARN; MR & HDFS) and associated technologies: one or more of Hive, Sqoop, Avro, Flume, Oozie, ZooKeeper, etc.
Hands-on experience with Apache Spark and its components (Streaming, SQL, MLlib) is a strong advantage.
Operating knowledge of cloud computing platforms (AWS, especially the EMR, EC2, S3, and SWF services, and the AWS CLI)
Experience working within a Linux computing environment and using command-line tools, including knowledge of shell/Python scripting for automating common tasks
Ability to work on a team in an agile setting; familiarity with JIRA and a clear understanding of how Git works
In addition, the ideal candidate would have great problem-solving skills, and the ability & confidence to hack their way out of tight corners.


Must-have (hands-on) experience:

Java, Python, or C++ expertise
Linux environment and shell scripting
Distributed computing frameworks (Hadoop or Spark)
Cloud computing platforms (AWS)


Position 4: Lead Big Data
Location: Chicago, IL / Piscataway, NJ
Duration: Full-time


8-10 years of demonstrable experience designing, developing, and testing modular, efficient, and scalable code in a Big Data and analytics environment
Expert-level proficiency in at least one of Python (preferred), Scala, Java (preferred), or C++
Minimum 2-3 years of working experience with distributed computing frameworks, particularly Apache Hadoop 2.0+ (YARN; MR & HDFS) and associated technologies: one or more of Sqoop, Avro, Flume, Oozie, ZooKeeper, etc.
Hands-on experience with Apache Hive, and with Apache Spark and its components (Streaming, SQL, MLlib)
Operating knowledge of cloud computing platforms (AWS, especially the EMR, EC2, S3, and SWF services, and the AWS CLI)
Experience working within a Linux computing environment and using command-line tools, including knowledge of shell/Python scripting for automating common tasks
Ability to work on a team in an agile setting; familiarity with JIRA and a clear understanding of Git repositories
Ability to manage client deliverables and communicate effectively with internal and external teams
Excellent communication skills to effectively translate and articulate findings to technology, analytics, and business stakeholders


Company:
A global IT consulting firm with several large customer engagements across Europe and the US. It provides strategic business consulting, technology, engineering, and outsourcing services to help clients leverage technology and create impactful and measurable business value for every IT investment.

About us:
Avance Consulting Services is a global talent acquisition and executive search company. We work exclusively with some of the most reputed and admired clients across various sectors and geographies.


Client: Avance Consulting
