Job Description:
Hi,

Hope you are doing great.

We have the below positions open with one of our clients. If interested, kindly send me your updated resume in Word format.


Position 1:
Job Title: Big Data Hadoop Developer
Location: Plano, TX or Atlanta, GA
Duration: Full-time/Permanent

Qualifications

Basic

Bachelor’s degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
At least 4 years of experience with Big Data/Hadoop


Preferred

4+ years of experience designing and implementing large, complex Big Data and Hadoop solutions involving Ab Initio or similar products
4+ years of strong coding skills in Java
2+ years of experience with an ETL tool, with hands-on HDFS work on a Big Data/Hadoop platform
2+ years of experience implementing ETL/ELT processes with Big Data tools such as Hadoop, YARN, HDFS, Pig, and Hive
1+ years of hands-on experience with NoSQL (e.g., key-value stores, graph databases, document databases)
2+ years of solid experience in performance tuning and Shell/Perl/Python scripting
Experience with Spark
Experience with integration of data from multiple data sources
Knowledge of various ETL techniques and frameworks
3+ years of experience in project life cycle activities on development and maintenance projects
Ability to work in a team in diverse, multi-stakeholder environments




Position 2:
Job Title: Big Data Hadoop Developer
Location: Sunnyvale, CA or Morris Plains, NJ
Duration: Full-time/Permanent

Qualifications

Basic

Bachelor’s degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
At least 4 years of relevant experience in Information Technology



Preferred

Strong development experience in Big Data and NoSQL databases
Strong development experience in Hadoop, Storm, Spark, HDFS, Hive, and related Big Data components
Deep understanding of Java and Linux/Unix
Analysis and understanding of upstream and downstream data needs for analytics
Ability to work and liaise with teams across business units
Good experience using SQL queries and other tools to perform data analysis
Strong communication skills and the ability to liaise with different teams
Ability to review developed code and flag any issues with respect to customer data
Cloud exposure preferred, especially Azure


Position 3:

Job Title: Big Data Hadoop Developer
Location: Plano, TX
Duration: Full-time/Permanent

Qualifications

At least 2 years of experience across software development life cycle stages
At least 2 years of experience with Big Data technologies and the surrounding ecosystem
At least 2 years of experience in project life cycle activities on development and maintenance projects
At least 2 years of experience in design and architecture review
At least 2 years of experience in application support and maintenance (including some on-call support)
Minimum of 2 years of work experience in the Information Technology field
Minimum of 2 years of hands-on experience with Big Data technologies
Expertise in Hadoop ecosystem products such as HDFS, MapReduce, Hive, Avro, and ZooKeeper
Expertise with the Hadoop ecosystem and experience with Hive, Oozie, Flume, Impala, and Sqoop
Expertise in building distributed systems, query processing, database internals, or analytic systems
Expertise with data schemas – logical and physical data modeling
Experience with Spark, HBase, Java (MapReduce), and Python (Linux shell-style scripting) development
Experience in the full software development life cycle of data warehousing projects
Experience loading data into HDFS from heterogeneous databases – DB2, Oracle, and SQL Server – using Apache Sqoop
Experience analyzing data with Hive and Impala, and managing and navigating data and tables with Hue
Experience with Oozie, Flume, Sqoop, Spark, and Solr for data loading and analytics


Position 4:

Job Title: Technology Architect - Big Data/Cloud Services (Azure)
Location: Redmond, WA; San Francisco, CA; Sunnyvale, CA; or Seattle, WA
Duration: Full-time/Permanent


Qualifications


Basic
Bachelor’s degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
At least 7 years of experience with Information Technology.

Preferred

4+ years of experience with Big Data and cloud applications
4+ years of experience architecting, designing, and developing cloud infrastructure and applications
2+ years of experience working with clients on cloud projects, with lead experience or cloud product skills

Technical skills

Experience with the Big Data technology stack: Hadoop, Spark, Hive, MapR, Storm, Pig, Oozie, Kafka, etc.
Experience with one or more cloud products: Amazon AWS, Azure, OpenStack, Cloud Foundry, Mesos, and Docker
Hands-on experience with Azure HDInsight, Spark, and Event Hubs
Experience with Azure Virtual Machines, Blob Storage, Azure SQL Database, StorSimple, Azure DNS, Virtual Network, DocumentDB, Redis Cache, and Azure App Service
A strong background in infrastructure or HPC is preferable
Experience with one or more database servers is an added advantage
Excellent interpersonal and communication skills
Experience delivering large enterprise-level applications


Position 5:
Job Title: Big Data Developer
Location: Phoenix, AZ
Duration: Full-time/Permanent


Qualifications
Basic

Bachelor’s degree or foreign equivalent required. Will also consider three years of relevant work experience in lieu of every year of education.
At least 7 years of design and development experience in Java-related technologies
At least 4 years of hands-on design and development experience with Big Data-related technologies – Hadoop, Pig, Hive, Core Java
At least 2 years of hands-on architecture design/deployment/integration experience
Should be a strong communicator and able to work independently with minimal involvement from client SMEs



Preferred Skills:

MapReduce, HDFS, HBase, YARN, Spark, Oozie, and shell scripting
Background in all aspects of software engineering with strong skills in parallel data processing, data flows, REST APIs, JSON, XML, and microservice architecture
Must have strong programming knowledge of Core Java or Scala: objects and classes, data types, arrays and string operations, operators, control flow statements, inheritance and interfaces, exception handling, serialization, collections, and reading and writing files
Must have hands-on experience in the design, implementation, and build of applications or solutions using Core Java/Scala
Strong understanding of Hadoop fundamentals
Strong understanding of RDBMS concepts; must have good knowledge of writing SQL and of interacting with RDBMS and NoSQL databases such as HBase programmatically
Strong understanding of file formats – Parquet and other Hadoop file formats
Proficient with application build and continuous integration tools – Maven, SBT, Jenkins, SVN, Git
Experience working in Agile and with the Rally tool is a plus



Position 6:

Job Title: Big Data Hadoop Developer
Location: Phoenix, AZ
Duration: Full-time/Permanent

Qualifications
Basic

Bachelor’s degree or foreign equivalent required. Will also consider three years of relevant work experience in lieu of every year of education.
At least 4 years of design and development experience in Java-related technologies
At least 1 year of hands-on design and development experience with Big Data-related technologies – Hadoop, Pig, Hive, Core Java
Should be a strong communicator and able to work independently with minimal involvement from client SMEs

Preferred Skills:

MapReduce, HDFS, HBase, YARN, Spark, Oozie, and shell scripting
Background in all aspects of software engineering with strong skills in parallel data processing, data flows, REST APIs, JSON, XML, and microservice architecture
Must have strong programming knowledge of Core Java or Scala: objects and classes, data types, arrays and string operations, operators, control flow statements, inheritance and interfaces, exception handling, serialization, collections, and reading and writing files
Must have hands-on experience in the design, implementation, and build of applications or solutions using Core Java/Scala
Strong understanding of Hadoop fundamentals
Strong understanding of RDBMS concepts; must have good knowledge of writing SQL and of interacting with RDBMS and NoSQL databases such as HBase programmatically
Strong understanding of file formats – Parquet and other Hadoop file formats
Proficient with application build and continuous integration tools – Maven, SBT, Jenkins, SVN, Git
Experience working in Agile and with the Rally tool is a plus
Strong understanding of and hands-on programming/scripting experience with UNIX shell, Python, Perl, and JavaScript
Should have worked on large data sets and have experience with performance tuning and troubleshooting
Knowledge of JavaBeans, annotations, logging (Log4j), and generics is a plus
Knowledge of design patterns – Java and/or GoF – is a plus
Knowledge of Spark, Spark Streaming, Spark SQL, and Kafka is a plus
Experience in the financial domain is preferred


Client: Avance Consulting

