Job Title: Sr. Hadoop Developer
Location: Boston, MA
Duration: 6 months+
Description: Candidate will work on a migration project and be involved in all aspects of the migration.
Responsibilities:
Ownership of the code, including moving/migrating your code into test, UAT, and production.
At least 3 years of implementation experience with Hadoop in a Data Lake environment.
Experience developing data ingestion and integration flows using Big Data ecosystem tools: Hadoop, Hive, Sqoop, Spark,
|
 |
Job Title: Hadoop Developer
Location: Alpharetta, GA (Onsite)
Duration: Long Term
Position Type: Contract
LinkedIn ID is a must
Job Description:
Strong in Spark, Scala, Hadoop
AWS/Azure experience
Financial domain experience will be a plus
Strong experience in Java
|
 |
Title: Big Data
Location: Austin, TX (Onsite)
Duration: Long Term
Key Qualification:
Big data engineering and Hadoop ecosystem
Spark, Spark SQL, PySpark
Familiarity with SQL and database concepts. Ability to read and understand moderately complex SQL, and to write data extraction SQL including basic selection, projection, joins, and simple subqueries.
Experience developing in and for Linux environments. Ability to develop code and data files and to investigate processing errors/logs.
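The SQL skills listed above can be sketched in a few queries. This is a hypothetical illustration only: the tables, columns, and values are invented, and an in-memory SQLite database stands in for whatever warehouse the role actually uses. It shows selection, projection, a join, and a simple subquery.

```python
import sqlite3

# Invented schema for the sketch: customers and their orders.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, region TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Acme', 'East'), (2, 'Globex', 'West');
    INSERT INTO orders VALUES (10, 1, 250.0), (11, 1, 75.0), (12, 2, 40.0);
""")

# Selection (WHERE), projection (named columns), and a join.
rows = cur.execute("""
    SELECT c.name, o.amount
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    WHERE c.region = 'East'
""").fetchall()

# Simple subquery: customers whose total order amount exceeds 100.
big = cur.execute("""
    SELECT name FROM customers
    WHERE id IN (SELECT customer_id FROM orders
                 GROUP BY customer_id HAVING SUM(amount) > 100)
""").fetchall()

print(rows)  # [('Acme', 250.0), ('Acme', 75.0)]
print(big)   # [('Acme',)]
```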
|
 |
Job Title: Hadoop Developer
Location: Customer expects people onsite from July at either Alpharetta, GA or New York
Duration: 12 Months
Responsibilities
Responsible for the hands-on design and development of Java, Scala, Spark, and Hive jobs as part of the DBA Agile Squad/Fleet.
Ensure developed code aligns with system architecture and integration design standards, working within an enterprise framework.
Participate in design discussions and contribute to
|
 |
Title : Cloudera Hadoop Cluster – L3 Support
Location : Hybrid – 2-3 days onsite in a week (Whippany, NJ)
Duration : 6 months (possible extension)
Requirements would be:
Liaise directly with users and application teams to service requests and troubleshoot issues as they arise with
|
 |
Role: Hadoop Developer
Location: Remote to start, but will need to eventually relocate to Charlotte, NC; Christiana, DE; or Dallas, TX
Duration: 14-month contract
VISA: USC/GC/GC-EAD/H4-EAD (only on W2)
Must Haves:
Hadoop
Python, Scala, or Java (any one language)
Spark
Unix
Top Needs:
Needs strong hands-on experience with Hadoop development
Needs strong Spark and Unix experience
Needs experience creating Hive tables and working with metadata documents
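"Creating Hive tables from metadata documents" can be sketched as DDL generation. This is an assumption about the workflow, not the client's actual process: the database, table, columns, and HDFS location below are invented, and the metadata document is assumed to reduce to a list of (column, Hive type) pairs. The sketch builds the DDL as a string, so it runs without a Hive installation.

```python
def hive_create_table(db, table, columns, partition_cols, location):
    """Build a CREATE EXTERNAL TABLE statement from column metadata.

    columns / partition_cols: lists of (name, hive_type) pairs, e.g.
    taken from a metadata document.
    """
    col_defs = ",\n  ".join(f"`{name}` {htype}" for name, htype in columns)
    part_defs = ", ".join(f"`{name}` {htype}" for name, htype in partition_cols)
    return (
        f"CREATE EXTERNAL TABLE IF NOT EXISTS {db}.{table} (\n"
        f"  {col_defs}\n)\n"
        f"PARTITIONED BY ({part_defs})\n"
        f"STORED AS PARQUET\n"
        f"LOCATION '{location}';"
    )

# Invented example metadata; a real job would parse this from the document.
ddl = hive_create_table(
    db="sales", table="orders",
    columns=[("order_id", "BIGINT"), ("amount", "DECIMAL(10,2)")],
    partition_cols=[("ds", "STRING")],
    location="/data/sales/orders",
)
print(ddl)
```

The resulting statement would then be submitted via `beeline` or `spark.sql(...)` in the target environment.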
|
 |
Role: Hadoop Developer
Vendor/Client: Apex/Bank of America
Duration: 14-month contract
Location: Remote to start, but will need to eventually relocate to Charlotte, NC; Christiana, DE; or Dallas, TX
Top Needs:
Needs strong hands-on experience with Hadoop development (2 years minimum)
Needs strong Spark and Unix experience
Needs experience creating Hive tables and working with metadata documents
Needs strong coding in at least one Spark language (Python, Scala, or Java)
|
 |
Hello,
Hope you are doing well!
We have an urgent requirement for Data Engineers in Irvine, CA (onsite from day one).
If you are interested in this position, please send your updated resume to
Job Role: Data Engineers
Location: Irvine, CA (onsite from day one)
Job Description:
· Big Data ecosystem Hadoop, Hive, HBase, Spark, etc.
· DataStage and Informatica as ETL or ELT
· Python and/or Scala as the language
· Tableau and/or Power BI as visualization tools
· Non-Relational
|
 |
Write software to interact with HDFS and MapReduce.
Assess requirements and evaluate existing solutions.
Build, operate, monitor, and troubleshoot Hadoop infrastructure.
Develop tools and libraries, and maintain processes for other engineers to access data and write MapReduce programs.
Develop documentation and playbooks to operate Hadoop infrastructure.
Evaluate and use hosted solutions on AWS / Google Cloud / Azure.
Write scalable and maintainable code.
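The "write MapReduce programs" duty above can be sketched as a word count in the Hadoop Streaming style. This is an assumption about tooling (the posting names no framework): the mapper and reducer are written as pure functions so they can be tested locally; in a real Streaming job each would read stdin and write tab-separated lines to stdout, with Hadoop's shuffle sorting the pairs between the two phases.

```python
from itertools import groupby
from operator import itemgetter

def mapper(lines):
    """Map phase: emit (word, 1) for every word in the input lines."""
    for line in lines:
        for word in line.strip().split():
            yield word.lower(), 1

def reducer(pairs):
    """Reduce phase: sum counts per word.

    Expects pairs sorted by key, which Hadoop's shuffle guarantees.
    """
    for word, group in groupby(pairs, key=itemgetter(0)):
        yield word, sum(count for _, count in group)

# Locally, sorted() stands in for the shuffle between map and reduce.
shuffled = sorted(mapper(["the quick fox", "the lazy dog"]))
counts = dict(reducer(shuffled))
print(counts)  # {'dog': 1, 'fox': 1, 'lazy': 1, 'quick': 1, 'the': 2}
```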
|
 |