Job Description:
Hi,

Hope you are doing great.

We have the below positions open with one of our clients. If interested, kindly send me your updated resume in Word format.

Position 1:
Job Title: Big Data Hadoop Developer
Location: Atlanta, GA
Duration: Full-time/Permanent

Qualifications

Basic

Bachelor’s degree or foreign equivalent required. Will also consider one year of relevant work experience in lieu of every year of education.
At least 3 years of experience working with Big Data technologies: Hadoop, Hortonworks (with Spark and Scala)
At least 3 years of experience in data warehousing
At least 1 year of experience in core Java and its ecosystem

Preferred

At least 3 years of experience implementing Big Data applications, including requirements gathering, architecture, and functional and technical design
At least 2 years of experience leading technical teams, with client-facing skills
Experience in understanding new architectures
Analytical and problem-solving skills
Ability to learn new technologies and business requirements
Strong communication skills, both written and oral; ability to build and deliver presentations to all levels of the business and effectively explain complex issues and concepts in simple, understandable language



Position 2:
Job Title: Big Data Hadoop Developer
Location: St. Louis, MO / Atlanta, GA
Duration: Full-time/Permanent

Qualifications

Basic

Bachelor’s degree or foreign equivalent required. Will also consider one year of relevant work experience in lieu of every year of education.
At least 7 years of experience working with Big Data technologies: Hadoop, Hortonworks (with Spark and Scala)
At least 4 years of experience in data warehousing
At least 2 years of experience in core Java and its ecosystem

Preferred

At least 4 years of experience implementing Big Data applications, including requirements gathering, architecture, and functional and technical design
At least 3 years of experience leading technical teams, with client-facing skills
Experience in understanding new architectures and the ability to drive an independent project from an architectural standpoint
Analytical and problem-solving skills
Ability to learn new technologies and business requirements
Strong communication skills, both written and oral; ability to build and deliver presentations to all levels of the business and effectively explain complex issues and concepts in simple, understandable language



Position 3:

Job Title: Big Data Hadoop Developer
Location: Plano, TX
Duration: Full-time/Permanent

Qualifications

At least 2 years of experience in software development life cycle stages
At least 2 years of experience with Big Data technologies and the Hadoop ecosystem
At least 2 years of experience in project life cycle activities on development and maintenance projects
At least 2 years of experience in design and architecture review
At least 2 years of experience in application support and maintenance (including some on-call support experience)
Minimum of 2 years of work experience in the Information Technology field
Minimum of 2 years of hands-on experience with Big Data technologies
Expertise in Hadoop ecosystem products such as HDFS, MapReduce, Hive, Avro, and ZooKeeper
Expertise with the Hadoop ecosystem and experience with Hive, Oozie, Flume, Impala, and Sqoop
Expertise in building distributed systems, query processing, database internals, or analytic systems
Expertise with data schemas: logical and physical data modeling
Experience with Spark, HBase, Java (MapReduce), and Python (Linux shell-style scripting) development
Experience in the full software development life cycle of a data warehousing project
Experience in loading data into HDFS from heterogeneous databases (DB2, Oracle, and SQL Server) using Apache Sqoop (see the sketch after this list)
Experience in analyzing data using Hive and Impala and in managing and navigating data and tables using Hue
Experience working with Oozie, Flume, Sqoop, Spark, and Solr for data loading and analytics
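
For reference, the following is a minimal sketch, in Scala, of the kind of ingestion task this list describes. The posting names Apache Sqoop, a command-line tool (for example: sqoop import --connect jdbc:oracle:thin:@//dbhost:1521/ORCL --table SALES.ORDERS --target-dir /data/raw/orders -m 4); the Spark JDBC job below illustrates the same pattern of pulling a relational table into HDFS, and is shown only as an equivalent, not as Sqoop itself. The connection URL, table name, column bounds, and paths are placeholder assumptions, not details from the posting.

    import org.apache.spark.sql.{SaveMode, SparkSession}

    object RdbmsToHdfs {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("rdbms-to-hdfs")
          .getOrCreate()

        // Read the source table over JDBC; the vendor driver jar must be on the classpath.
        val orders = spark.read
          .format("jdbc")
          .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL") // placeholder URL
          .option("dbtable", "SALES.ORDERS")                     // placeholder table
          .option("user", sys.env("DB_USER"))
          .option("password", sys.env("DB_PASSWORD"))
          .option("partitionColumn", "ORDER_ID") // parallel reads, like Sqoop's -m flag
          .option("lowerBound", "1")
          .option("upperBound", "1000000")
          .option("numPartitions", "4")
          .load()

        // Land the data on HDFS as Parquet, one of the file formats named in these postings.
        orders.write
          .mode(SaveMode.Overwrite)
          .parquet("hdfs:///data/raw/orders")

        spark.stop()
      }
    }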


Position 4:

Job Title: Technology Architect - Big Data/Cloud Services (Azure)
Location: Redmond, WA / San Francisco, CA / Sunnyvale, CA / Seattle, WA
Duration: Full-time/Permanent


Qualifications


Basic
Bachelor’s degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
At least 7 years of experience with Information Technology.

Preferred

4+ years of experience with Big Data and Cloud applications
4+ years of experience architecting, designing, and developing cloud infrastructure and applications
2+ years of experience working with clients on cloud projects, with lead experience or cloud product skills

Technical skills

Experience with the Big Data technology stack: Hadoop, Spark, Hive, MapR, Storm, Pig, Oozie, Kafka, etc.
Experience with one or more cloud products: Amazon AWS, Azure, OpenStack, Cloud Foundry, Mesos, and Docker
Hands-on experience with Azure HDInsight, Spark, and Event Hubs (see the sketch after this list)
Azure Virtual Machines, Blob Storage, Azure SQL Database, StorSimple, Azure DNS, Virtual Network, DocumentDB, Redis Cache, Azure App Service
A strong background in infrastructure or HPC is preferable
Experience with one or more database servers is an added advantage
Excellent interpersonal and communication skills
Experience in delivering large enterprise-level applications
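
For illustration, here is a minimal sketch, in Scala, of the HDInsight/Spark skill above: a Spark job reading CSV data from Azure Blob Storage (the storage layer HDInsight clusters commonly use, addressed via the wasbs:// scheme) and summarizing it with Spark SQL. The storage account, container, path, and column name are placeholder assumptions; credentials are assumed to be configured cluster-side.

    import org.apache.spark.sql.SparkSession

    object BlobSummary {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("blob-summary")
          .getOrCreate()

        // wasbs:// is the scheme HDInsight uses to address Blob Storage containers.
        val events = spark.read
          .option("header", "true")
          .csv("wasbs://data@mystorageaccount.blob.core.windows.net/events/*.csv") // placeholder path

        // Summarize with Spark SQL; the event_type column is a placeholder.
        events.createOrReplaceTempView("events")
        spark.sql("SELECT event_type, COUNT(*) AS n FROM events GROUP BY event_type").show()

        spark.stop()
      }
    }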


Position 5:
Job Title: Big Data Developer
Location: Phoenix, AZ
Duration: Full-time/Permanent


Qualifications
Basic

Bachelor’s degree or foreign equivalent required. Will also consider three years of relevant work experience in lieu of every year of education.
At least 7 years of design and development experience in Java-related technologies
At least 4 years of hands-on design and development experience with Big Data-related technologies: Hadoop, Pig, Hive, and core Java
At least 2 years of hands-on architecture design/deployment/integration experience
Should be a strong communicator and able to work independently with minimal involvement from client SMEs



Preferred Skills:

MapReduce, HDFS, HBase, YARN, Spark, Oozie, and shell scripting
Background in all aspects of software engineering, with strong skills in parallel data processing, data flows, REST APIs, JSON, XML, and microservice architecture
Must have strong programming knowledge of core Java or Scala: objects and classes, data types, arrays and string operations, operators, control-flow statements, inheritance and interfaces, exception handling, serialization, collections, and reading and writing files
Must have hands-on experience in the design, implementation, and build of applications or solutions using core Java/Scala
Strong understanding of Hadoop fundamentals
Strong understanding of RDBMS concepts, with good knowledge of writing SQL and of interacting with RDBMS and NoSQL databases such as HBase programmatically (see the sketch after this list)
Strong understanding of file formats: Parquet and other Hadoop file formats
Proficient with application build and continuous integration tools: Maven, SBT, Jenkins, SVN, Git
Experience working in Agile and with the Rally tool is a plus
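
As an illustration of the "interacting with HBase programmatically" item above, here is a minimal sketch in Scala using the standard HBase client API: one put and one get against a table. The table name, row key, column family, and values are placeholders, and a reachable HBase cluster (hbase-site.xml on the classpath) is assumed.

    import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
    import org.apache.hadoop.hbase.client.{ConnectionFactory, Get, Put}
    import org.apache.hadoop.hbase.util.Bytes

    object HBaseRoundTrip {
      def main(args: Array[String]): Unit = {
        val conf = HBaseConfiguration.create() // reads hbase-site.xml from the classpath
        val connection = ConnectionFactory.createConnection(conf)
        try {
          val table = connection.getTable(TableName.valueOf("customers")) // placeholder table

          // Write one cell: row key "row-1", column family "cf", qualifier "name".
          val put = new Put(Bytes.toBytes("row-1"))
          put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("name"), Bytes.toBytes("Ada"))
          table.put(put)

          // Read the same cell back.
          val result = table.get(new Get(Bytes.toBytes("row-1")))
          val name = Bytes.toString(result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("name")))
          println(s"name = $name")

          table.close()
        } finally {
          connection.close()
        }
      }
    }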



Position 6:

Job Title: Big Data Hadoop Developer
Location: Phoenix, AZ
Duration: Full-time/Permanent

Qualifications
Basic

Bachelor’s degree or foreign equivalent required. Will also consider three years of relevant work experience in lieu of every year of education.
At least 4 years of design and development experience in Java-related technologies
At least 1 year of hands-on design and development experience with Big Data-related technologies: Hadoop, Pig, Hive, and core Java
Should be a strong communicator and able to work independently with minimal involvement from client SMEs
Preferred Skills:

MapReduce, HDFS, HBase, YARN, Spark, Oozie, and shell scripting
Background in all aspects of software engineering, with strong skills in parallel data processing, data flows, REST APIs, JSON, XML, and microservice architecture
Must have strong programming knowledge of core Java or Scala: objects and classes, data types, arrays and string operations, operators, control-flow statements, inheritance and interfaces, exception handling, serialization, collections, and reading and writing files
Must have hands-on experience in the design, implementation, and build of applications or solutions using core Java/Scala
Strong understanding of Hadoop fundamentals
Strong understanding of RDBMS concepts, with good knowledge of writing SQL and of interacting with RDBMS and NoSQL databases such as HBase programmatically
Strong understanding of file formats: Parquet and other Hadoop file formats
Proficient with application build and continuous integration tools: Maven, SBT, Jenkins, SVN, Git
Experience working in Agile and with the Rally tool is a plus
Strong hands-on programming/scripting skills: UNIX shell, Python, Perl, and JavaScript
Should have worked on large data sets, with experience in performance tuning and troubleshooting
Knowledge of JavaBeans, annotations, logging (log4j), and generics is a plus
Knowledge of design patterns (Java and/or GoF) is a plus
Knowledge of Spark, Spark Streaming, Spark SQL, and Kafka is a plus (see the sketch after this list)
Experience in the financial domain is preferred
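
To make the Spark/Kafka item above concrete, here is a minimal sketch, in Scala, of a Spark Structured Streaming job that reads a Kafka topic and maintains running counts per key. The broker address, topic name, and checkpoint path are placeholder assumptions, and the spark-sql-kafka-0-10 package is assumed to be on the classpath.

    import org.apache.spark.sql.SparkSession

    object KafkaEventCounts {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("kafka-event-counts")
          .getOrCreate()
        import spark.implicits._

        // Subscribe to a Kafka topic as a streaming source.
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092") // placeholder broker
          .option("subscribe", "transactions")              // placeholder topic
          .load()

        // Kafka keys arrive as binary; cast to string before aggregating.
        val counts = events
          .selectExpr("CAST(key AS STRING) AS key")
          .groupBy($"key")
          .count()

        // Print running counts to the console; a production job would write to HDFS or HBase.
        val query = counts.writeStream
          .outputMode("complete")
          .format("console")
          .option("checkpointLocation", "hdfs:///checkpoints/event-counts") // placeholder
          .start()

        query.awaitTermination()
      }
    }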



Company:
A global IT consulting firm with several large customer engagements across Europe and the US. It provides strategic business consulting, technology, engineering, and outsourcing services to help clients leverage technology and create impactful, measurable business value from every IT investment.

About us:
Avance Consulting Services is a global talent acquisition and executive search company. We work exclusively with some of the most reputed and admired clients across various sectors and geographies.


Client: Avance Consulting
