Job Description :
Hi,
Hope you are doing great.
We have the below positions open with one of our clients. If interested, kindly send me your updated resume in Word format with the below details filled in as soon as possible.


Job Title : Big Data Hadoop Developer
Location : Plano, TX
Duration : Full/Permanent

· At least 2 years of experience in software development life cycle stages
· At least 2 years of experience with Big Data technologies and the surrounding ecosystem
· At least 2 years of experience in project life cycle activities on development and maintenance projects
· At least 2 years of experience in design and architecture review
· At least 2 years of experience in application support and maintenance (including some on-call support experience)
· Minimum of 2 years of work experience in the Information Technology field
· Minimum of 2 years of hands-on experience with Big Data technologies
· Expertise in Hadoop ecosystem products such as HDFS, MapReduce, Hive, Avro, and ZooKeeper
· Expertise with the Hadoop ecosystem and experience with Hive, Oozie, Flume, Impala, and Sqoop
· Expertise in building distributed systems, query processing, database internals, or analytic systems
· Expertise with data schemas: logical and physical data modeling
· Development experience with Spark, HBase, Java (MapReduce), and Python (Linux shell-style scripts)
· Experience in the full software development life cycle of data warehousing projects
· Experience in loading data into HDFS from heterogeneous databases (DB2, Oracle, and SQL Server) using Apache Sqoop
· Experience in analyzing data using Hive and Impala, and in managing and navigating data and tables using Hue (see the sketch after this list)
· Experience working with Oozie, Flume, Sqoop, Spark, and Solr for data loading and analytics
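For illustration of the Hive-based analysis mentioned above, here is a minimal Spark sketch in Scala. It assumes Spark is configured against the Hive metastore; the table name (orders) and columns (customer_id, order_total) are hypothetical examples, not part of the role:

import org.apache.spark.sql.SparkSession

object HiveAnalysisSketch {
  def main(args: Array[String]): Unit = {
    // Spark session with Hive support so spark.sql() can see metastore tables.
    val spark = SparkSession.builder()
      .appName("hive-analysis-sketch")
      .enableHiveSupport()
      .getOrCreate()

    // Aggregate a table previously imported into the warehouse (e.g., via Sqoop).
    val totals = spark.sql(
      "SELECT customer_id, SUM(order_total) AS total FROM orders GROUP BY customer_id")
    totals.show(20)

    spark.stop()
  }
}

The same query could be run directly in Hive or Impala; Spark is shown here only because the role lists it alongside them.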

Job Title : Technology Architect - Big Data/Cloud Services (Azure)
Location : Redmond, WA, San Francisco, CA, Sunnyvale, CA, Seattle, WA
Duration : Full/Permanent


Qualifications


Basic
Bachelor’s degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
At least 7 years of experience with Information Technology.


Preferred
4+ years of experience with Big Data and cloud applications
4+ years of experience architecting, designing, and developing cloud infrastructure and applications
2+ years of experience working with clients on cloud projects, with lead experience or cloud product skills


Technical skills
Experience with Big Data Technology Stack: Hadoop/Spark/Hive/MapR/Storm/Pig/Oozie/Kafka, etc.
Experience with one or more of the cloud products: Amazon AWS, Azure, OpenStack, Cloud Foundry, Mesos, and Docker.
Hands-on experience with Azure HDInsight, Spark, and Event Hubs (see the sketch after this list)
Experience with Azure Virtual Machines, Blob Storage, Azure SQL Database, StorSimple, Azure DNS, Virtual Network, DocumentDB, Redis Cache, and Azure App Service
A strong background in infrastructure or HPC is preferred.
Experience with one or more database servers is an added advantage.
Excellent interpersonal and communication skills
Experience in delivering large enterprise level applications
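As a rough illustration of the HDInsight and Spark skills above, here is a minimal Scala sketch reading CSV data from Azure Blob Storage through the wasbs:// scheme that HDInsight attaches to its storage account. The container, storage account, path, and column names are placeholders only:

import org.apache.spark.sql.SparkSession

object AzureBlobReadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("azure-blob-read-sketch")
      .getOrCreate()

    // HDInsight clusters resolve wasbs:// paths against their attached Blob Storage.
    val events = spark.read
      .option("header", "true")
      .csv("wasbs://mycontainer@myaccount.blob.core.windows.net/events/")

    events.groupBy("event_type").count().show()

    spark.stop()
  }
}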



Job Title : Big Data Developer
Location : Phoenix, AZ
Duration : Full/Permanent


Qualifications
Basic
Bachelor’s degree or foreign equivalent required. Will also consider three years of relevant work experience in lieu of every year of education
At least 7 years of design and development experience in Java-related technologies
At least 4 years of hands-on design and development experience with Big Data technologies: Hadoop, Pig, Hive, Core Java
At least 2 years of hands-on architecture design/deployment/integration experience
Should be a strong communicator and able to work independently with minimal involvement from client SMEs


Preferred Skills:
MapReduce, HDFS, HBase, YARN, Spark, Oozie, and shell scripting
Background in all aspects of software engineering, with strong skills in parallel data processing, data flows, REST APIs, JSON, XML, and microservice architecture.
Must have strong programming knowledge of Core Java or Scala - Objects & Classes, Data Types, Arrays and String Operations, Operators, Control Flow Statements, Inheritance and Interfaces, Exception Handling, Serialization, Collections, Reading and Writing Files.
Must have hands on experience in design, implementation, and build of applications or solutions using Core Java/Scala.
Strong understanding of Hadoop fundamentals.
Strong understanding of RDBMS concepts; must have good knowledge of writing SQL and of interacting programmatically with RDBMS and NoSQL databases such as HBase.
Strong understanding of file formats: Parquet and other Hadoop file formats (see the sketch after this list).
Proficient with application build and continuous integration tools – Maven, SBT, Jenkins, SVN, Git.
Experience working in Agile and with the Rally tool is a plus.
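As a small illustration of the JSON and Parquet skills above, here is a hedged Scala sketch that reads JSON with Spark and writes partitioned Parquet. The paths and the event_date partition column are assumptions made for the example only:

import org.apache.spark.sql.SparkSession

object JsonToParquetSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("json-to-parquet-sketch")
      .getOrCreate()

    // Read newline-delimited JSON; Spark infers the schema.
    val records = spark.read.json("hdfs:///data/incoming/records")

    // Write columnar Parquet, partitioned for downstream query pruning.
    records.write
      .mode("overwrite")
      .partitionBy("event_date") // assumes the input carries an event_date field
      .parquet("hdfs:///data/curated/records")

    spark.stop()
  }
}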


Company:
A global IT consulting firm with several large customer engagements across Europe and the US. It provides strategic business consulting, technology, engineering, and outsourcing services to help clients leverage technology and create impactful, measurable business value from every IT investment.

About us:
Avance Consulting Services is a global talent acquisition and executive search company. We work exclusively with some of the most reputed and admired clients across various sectors and geographies.


Client : Avance Consulting
