Job Description:
Hello,

Hope you are doing well today.

I came across your profile online.

We are assisting our client, an NYSE-listed global IT consulting organization, in hiring for the positions listed below.

If this is of interest, please email your resume along with your current work/visa status, compensation/salary expectations, and any other relevant details for immediate consideration.
Role: Hadoop Developer
Location: San Ramon, CA
Type of Hire: Contract / Full-time

Job Description:

Skype interview day (10 AM to 1 PM PST) on Sept 15th. Submit only out-of-state candidates for Skype; we want all local CA candidates to come in person to our San Ramon office on the 22nd.
Position 1:
Must Have:
8 to 10 years of IT experience, with at least 2 to 4 years of hands-on experience in Big Data.
Work directly with customers' technical resources to devise and recommend solutions based on the understood requirements.
Experience working in complex Big Data environments, including parallel streaming platform build-outs.
Experience with microservices / REST APIs.
Hands-on programming experience with Kafka, Avro, Spark, Hadoop, Python, and Scala/Java.
DevOps experience
Good To Have:
Elasticsearch
AWS or other cloud platform experience
Experience with relational (SQL), MPP, and NoSQL databases.
Experience with one or more statistical and machine learning packages or frameworks, such as R, scikit-learn, Spark ML (MLlib), or TensorFlow.
Position 2:
Hands-on in digging into the data, doing analysis, and guiding developers to code for a requirement.
Good SQL experience (preferably Teradata and Hive/SparkSQL) and techno-functional knowledge
Work with the developers to take requirements through to the final product
Experience in analyzing data and data modeling
Support users during UAT testing
Support the audit team with documentation and walk-throughs of the changes
Prior development experience in data warehousing projects is a plus
Manage user escalations
A data warehousing, Teradata, or Hadoop background is a plus
MBA is a must

Position 3:
Scala JD:

Designing, developing, and implementing real-time data integration using Big Data technologies such as Spark with the Scala programming language.
Strong in Spark/Scala pipelines (both ETL and streaming)
Experience designing distributed data systems and pipelines
Experience with Big Data methodologies involving Hive/Hadoop/Spark
Experience in an object-oriented programming language (Java/Python)
Good experience in requirements gathering, design, and development
Working with cross-functional teams to meet strategic goals
Experience in high volume data environments
Critical thinking and excellent verbal and written communication skills
Strong problem-solving and analytical abilities
Good knowledge of data warehousing concepts
Position 4:

Same requirements as Position 1 (see above).

Position 5:

"Must Have
1. Strong in Hive and Impala query languages
2. Data Warehousing skills such a creating and maintaining data models
3. Ability to troubleshoot long running Hive/Impala queries and optimize the performance (prior experience in this is a must)
4. Hands on experience in Python with good coding standards
5. Experienced in GIT, Jenkins and Jira platforms
6. Expertise in shell scripting
7. Hands on experience in Apache Spark
8. Experience in developing ETL scripts using Python, PySpark and shell scripts
9. Job scheduling with cron
10. Experience in Cloudera Enterprise Platform and components
Added Advantage:
1. Prior experience in SAP-to-Hadoop data migration
2. Experience/knowledge of Attunity
3. Experience/knowledge of AtScale
4. Experience/knowledge of RedWood
5. Experience/knowledge of Solr indexing and Solr-Hadoop integration

Position 6:
Same requirements as Position 5 (see above).

Positions 7, 8, 9:

8+ years of demonstrated experience working as part of large information technology teams and/or consulting organizations, partnering with clients/business groups to support complex analytical and business intelligence environments.
Proven technical expertise with Hadoop (Hortonworks/Apache) and its corresponding tools is a must; certification is a plus. (offshore)
Proven technical expertise with Linux (and a code base of Python or Java) and an interest in moving into the Big Data space.
Demonstrated ability to produce high quality technical documentation.
Strong knowledge of systems development and Agile project management methodologies/processes.
Familiarity with various other platforms and databases (e.g., SQL Server). Able to write complex SQL and leverage backend databases.
Participate in an on-call rotation and be available to work off-hours and weekends.
Experience with other tools listed above, such as Paxata, Tableau, SPSS, R, Python, CPLEX, PCI, Planning Analytics, DB2 BLU, and Linux, is a plus.
Strong interpersonal skills, with a demonstrated ability to make effective decisions while working through complex system issues.
Must be able to utilize and effectively communicate functional and technical components of an initiative to applicable parties both verbally and through documentation.
Attention to detail, good analytical and problem-solving skills, and critical thinking
Self-starter/self-motivator with a proactive, agile, and strategic mindset.
Bachelor’s degree or higher in a technical field.
Position 10:
Role: QlikView Developer
Location: San Ramon, CA
Type of Hire: Contract / Full-time
QlikView and Qlik Sense developer
Position 11:
Same requirements as Position 2 (see above), with one clarification: the MBA is mandatory (the client will accept a Master's in another discipline, but the chances of selection are much lower).
Positions 12 & 13:
Role: Big Data Engineer
Location: San Ramon, CA
Type of Hire: Contract / Full-time
5 to 8 years of post-college work experience in end-to-end data warehouse application development and deployment
2+ years of experience architecting Hadoop/Spark/Big Data applications and environments
Expertise and hands-on experience with Hive and/or SparkSQL is a must
Sound knowledge of relational databases (SQL)
Experience with large SQL-based systems such as Teradata or Oracle, and with Unix/Linux shell scripting, is a plus
Familiarity with industry best practices and how to drive efficiency while maintaining a robust service offering


Client: Cognizant

             
