Job Description :
Greetings,

I would like to update you on a few open positions with our esteemed client. Please find the job descriptions below. If you are interested, please forward me your updated resume along with your contact details so we can discuss further.


Job Title : Technology Lead - Big data Hadoop
Location : Bellevue, WA / Atlanta, GA / Cary, NC / Moline, IL / Sunnyvale, CA / San Francisco, CA
Duration : Full-time / Permanent


Qualifications
Bachelor’s degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
At least 4 years of relevant experience in Information Technology.

Must Have:

At least 3 years of experience analyzing streaming data with emerging Hadoop-based Big Data and NoSQL technologies
Strong knowledge of and working experience with the Hadoop ecosystem (MapReduce, Hive, Pig, HBase, Sqoop)
Experience analyzing data with Hive, Pig, and HBase; must have a strong understanding of the data and be able to write complex queries in Hive
Good understanding of AWS and Spark for data manipulation, preparation, and cleansing
Strong knowledge of Java/Python and shell scripting to understand and write UDFs for business requirements (a minimal Hive UDF sketch follows this list)
Working experience with NiFi and the ability to work with third-party APIs such as Google and Adobe
Good communication skills.
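
As a rough illustration of the UDF work described above, here is a minimal sketch of a Hive UDF written in Java. It uses the classic org.apache.hadoop.hive.ql.exec.UDF interface; the package, class, and behavior are hypothetical examples, not part of this job description.

    // Minimal Hive UDF sketch (classic UDF API; names are illustrative only).
    package com.example.hive.udf;  // hypothetical package

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Normalizes a free-text code: trims whitespace and upper-cases it.
    public final class NormalizeCode extends UDF {
        public Text evaluate(Text input) {
            if (input == null) {
                return null;  // Hive convention: NULL in, NULL out
            }
            return new Text(input.toString().trim().toUpperCase());
        }
    }

Packaged into a JAR, a UDF like this would typically be registered in Hive with ADD JAR and CREATE TEMPORARY FUNCTION before being called from HiveQL queries.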


Job Title : Technology Architect - Big data Hadoop
Location : NYC, NY
Duration : Full-time / Permanent


Qualifications
Bachelor’s degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
At least 4 years of relevant experience in Information Technology.

Must Have:

6-10 years of total experience, including at least 2 years of experience in Hadoop
Thorough understanding of Hadoop distributions (Cloudera, Hortonworks, MapR) and ecosystem components
Thorough understanding of NoSQL databases such as HBase, MongoDB, and Cassandra
Experience gathering requirements and designing and developing scalable big data solutions with Hadoop
Strong technical skills in Spark, HBase, Hive, Sqoop, Oozie, Flume, Java, Scala, Pig, Python, etc.
Practical implementation experience with Spark, the MapReduce framework, databases, and SQL (see the Spark sketch after this list)
Strong analytical and problem-solving skills; proven teamwork and communication skills
Must show initiative and a desire to learn the business
Able to work independently and mentor team members
Desirable: Hands-on experience with analytical tools, languages, or libraries (e.g. R)
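
As a rough illustration of the Spark implementation experience called for above, here is a minimal sketch of a batch data-cleansing job in Java. It assumes Spark 2.x+ with the DataFrame API; the paths and column names are hypothetical.

    // Minimal Spark data-cleansing sketch (DataFrame API; names are illustrative).
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import static org.apache.spark.sql.functions.col;

    public final class CleanseEvents {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("cleanse-events")
                    .getOrCreate();

            // Read raw events, drop rows missing the key, and deduplicate on it.
            Dataset<Row> raw = spark.read().parquet("hdfs:///data/raw/events");  // hypothetical path
            Dataset<Row> clean = raw
                    .filter(col("event_id").isNotNull())
                    .dropDuplicates("event_id");

            clean.write().mode("overwrite").parquet("hdfs:///data/clean/events");
            spark.stop();
        }
    }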

Job Title : Big data Hadoop (All Levels)
Location : Phoenix, AZ / Sunnyvale, CA / Plano, TX / Austin, TX
Duration : Full-time / Permanent


Qualifications
Bachelor’s degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
At least 4 years of relevant experience in Information Technology.

Basic Qualifications

Bachelor’s degree or foreign equivalent required. Will also consider one year of relevant work experience in lieu of every year of education
At least 5 years of design and development experience in Big Data, Java, or data warehousing related technologies
At least 3 years of hands-on design and development experience with Big Data technologies – Pig, Hive, MapReduce, HDFS, HBase, YARN, Spark, Oozie, Java, and shell scripting (a sample MapReduce mapper follows this list)
Should be a strong communicator and able to work independently with minimal involvement from client SMEs
Should be able to work in a team in a diverse, multi-stakeholder environment
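
As a rough illustration of the hands-on MapReduce development noted above, below is a minimal word-count mapper sketch in Java; the class name is illustrative, and the driver and reducer are omitted for brevity.

    // Minimal MapReduce mapper sketch (Hadoop mapreduce API; names are illustrative).
    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Emits (token, 1) for every whitespace-separated token in each input line.
    public final class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }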

Mandatory Technical Skills
Background in all aspects of software engineering with strong skills in parallel data processing, data flows, REST APIs, JSON, XML, and microservice architecture.
Must have strong programming knowledge of Core Java or Scala – objects and classes, data types, arrays and string operations, operators, control flow statements, inheritance and interfaces, exception handling, serialization, collections, and reading and writing files.
Must have hands-on experience in the design, implementation, and build of applications or solutions using Core Java/Scala.
Strong understanding of Hadoop fundamentals.
Must have experience working on Big Data Processing Frameworks and Tools – MapReduce, YARN, Hive, Pig.
Strong understanding of RDBMS concepts; must have good knowledge of writing SQL and of interacting with RDBMS and NoSQL databases – HBase – programmatically (see the HBase client sketch after this list).
Strong understanding of file formats – Parquet and other Hadoop file formats.
Proficient with application build and continuous integration tools – Maven, SBT, Jenkins, SVN, Git.
Experience working in Agile and with the Rally tool is a plus.
Strong understanding of and hands-on programming/scripting experience with UNIX shell, Python, Perl, and JavaScript.
Should have worked on large data sets and have experience with performance tuning and troubleshooting.
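
As an illustration of the programmatic HBase access mentioned above, here is a minimal sketch using the standard HBase Java client API; the table, column family, and row key are hypothetical.

    // Minimal HBase client sketch (HBase 1.x/2.x client API; names are illustrative).
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public final class HBaseRoundTrip {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();  // reads hbase-site.xml from the classpath
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("customers"))) {

                // Write one cell: row "cust-001", family "profile", qualifier "name".
                Put put = new Put(Bytes.toBytes("cust-001"));
                put.addColumn(Bytes.toBytes("profile"), Bytes.toBytes("name"), Bytes.toBytes("Alice"));
                table.put(put);

                // Read the same cell back.
                Result result = table.get(new Get(Bytes.toBytes("cust-001")));
                byte[] name = result.getValue(Bytes.toBytes("profile"), Bytes.toBytes("name"));
                System.out.println(Bytes.toString(name));
            }
        }
    }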

Preferred

Knowledge of Java Beans, Annotations, Logging (log4j), and Generics is a plus.
Knowledge of Design Patterns - Java and/or GOF is a plus.
Knowledge of Spark, Spark Streaming, Spark SQL, and Kafka is a plus (see the streaming sketch after this list).
Experience in the financial domain is preferred.
Experience in, and a desire to work in, a global delivery environment.
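
As a rough illustration of the Spark Streaming and Kafka skills listed as a plus, here is a minimal Structured Streaming sketch in Java. It assumes the spark-sql-kafka connector is on the classpath; the broker and topic names are hypothetical.

    // Minimal Kafka-to-console streaming sketch (Structured Streaming; names are illustrative).
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.streaming.StreamingQuery;

    public final class KafkaStreamPeek {
        public static void main(String[] args) throws Exception {
            SparkSession spark = SparkSession.builder()
                    .appName("kafka-stream-peek")
                    .getOrCreate();

            // Subscribe to a topic and decode the raw key/value bytes as strings.
            Dataset<Row> events = spark.readStream()
                    .format("kafka")
                    .option("kafka.bootstrap.servers", "broker1:9092")  // hypothetical broker
                    .option("subscribe", "events")                      // hypothetical topic
                    .load()
                    .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");

            // Print the stream to the console for inspection.
            StreamingQuery query = events.writeStream().format("console").start();
            query.awaitTermination();
        }
    }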

Job Title : Principal Technology Architect - Big data
Location : Torrance, CA
Duration : Full-time / Permanent


Qualifications
Bachelor’s degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
At least 4 years of relevant experience in Information Technology.

16+ years of experience as a Big Data and Analytics Architect, with very good communication skills
Experience in real-time data ingestion with Kafka, Spark, CEP engines, Storm, and the Hadoop stack (see the Kafka producer sketch after this list)
Hands-on working knowledge of Azure
Hands-on Java skills are required
Experience in solution architecture and in information architecture planning and design
Experience in tool evaluation, roadmap definition, and architectural roadmap recommendations
Experience managing large Big Data platforms, applications, and programs, including performance tuning and cluster management
Experience setting up a Big Data governance framework and implementing it across organizations with Big Data platforms
Ability to conduct workshops and solution review sessions with business and IT stakeholders and drive clarity as required
Experience managing and strategizing user consumption of Big Data and BI applications
Experience in Big Data and NoSQL databases
Understanding of statistical modeling and analytics with R, SAS, Spark ML, etc. is preferable
Should be proficient in data architecture, data marts, and dimensional modeling
Experience in the automotive/connected-car domain is an added advantage
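
As a rough illustration of the real-time ingestion experience described above, here is a minimal Kafka producer sketch in Java using the standard kafka-clients API; the broker, topic, and payload are hypothetical (a connected-car telemetry record is used only as an example).

    // Minimal Kafka producer sketch (kafka-clients API; names are illustrative).
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public final class TelemetryIngest {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");  // hypothetical broker
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (Producer<String, String> producer = new KafkaProducer<>(props)) {
                // The key (e.g. a vehicle ID) controls partitioning; the value is the event payload.
                producer.send(new ProducerRecord<>("vehicle-telemetry", "vin-123", "{\"speed\": 72}"));
            }
        }
    }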

Company:
A global IT services and consulting company with $9.5 billion in revenue and several large customer engagements across the USA. It provides strategic business consulting, technology, engineering, and outsourcing services to help clients leverage technology and create impactful, measurable business value from every IT investment.

About Us:
Avance Consulting Services is a global talent acquisition and executive search company. We work exclusively with some of the most reputed and admired clients across various sectors and geographies.


Client : Avance Consulting

