Job Description:

Big Data Engineer (Java, Spark)

Job Location: Phoenix, AZ


Minimum Qualifications
# 5 years of work experience in software design and implementation using Java.
# Experience in distributed data processing and analysis using Elasticsearch and Spark (see the sketch below).
# Experience in designing, implementing, and operating NoSQL databases such as Cassandra, Couchbase, or Redis.
# Experience with container platforms such as Docker and Kubernetes; OpenShift is a plus.
# Clear understanding of the design patterns, threading, and memory models supported by the language VM.
# Excellent written and verbal communication skills; able to create and deliver effective presentations to senior leadership.
Preferred Qualifications
# Experience with distributed messaging systems such as Kafka.
# Experience in building microservices and service meshes is a plus.
# Proven experience with Go.
# Experience with continuous integration, continuous delivery, and DevOps systems.
# Experience architecting large-scale distributed data systems for scalability, reliability, security, performance, and flexibility.
# Able to mentor and provide technical guidance to other engineers.
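
For candidates gauging the stack, below is a minimal sketch of the kind of Spark-over-Elasticsearch job this role describes. It is illustrative only: it assumes the elasticsearch-hadoop (elasticsearch-spark) connector is on the classpath, and the node address and index name are hypothetical.

    import org.apache.spark.sql.SparkSession

    object EsSparkSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("es-spark-sketch")
          // elasticsearch-hadoop connector settings; the node address is hypothetical
          .config("es.nodes", "es.internal.example.com")
          .config("es.port", "9200")
          .getOrCreate()

        // Read an Elasticsearch index as a DataFrame (the index name is hypothetical)
        val events = spark.read
          .format("org.elasticsearch.spark.sql")
          .load("app-events")

        // The aggregation runs distributed across Spark partitions
        events.groupBy("status").count().show()

        spark.stop()
      }
    }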

Role – Lead Data Engineer

Location: Phoenix, AZ

Client – EWS (please do not publish)

Rate - $70/hr

Indent – 598884

 

Must have: Scala, Spark, Kafka, AWS

 

JD:

 You have a Bachelor’s degree in Computer Science or equivalent

You have 8+ years of relevant software development experience

You have hands-on experience with Spark, Scala, Kafka, and AWS; this is critical.

You have working knowledge of Java.

You have prior experience in distributed computing.

You have experience with development tools and agile methodologies

You have strong verbal, written, and interpersonal communication skills

You have experience leading teams and providing technical mentoring

You work creatively and analytically in a problem-solving environment

You direct and guide team members

 

Key Responsibilities:

Design and development of data pipelines on the AWS cloud

Data pipeline development using Spark, Scala, Kafka, and the AWS cloud

Developing Spark streaming applications (see the sketch after this list)

Highly analytical and data-oriented

Experience with SQL and NoSQL databases

Strong bias for action and an ability to navigate ambiguity

Able to see the bigger picture while balancing short-term deliverables

Strong verbal, written, and presentation skills
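
As a concrete, purely illustrative reference for the responsibilities above, here is a minimal sketch of a Spark Structured Streaming application in Scala that consumes a Kafka topic and lands micro-batches on S3. The broker address, topic, and bucket paths are hypothetical.

    import org.apache.spark.sql.SparkSession

    object KafkaToS3Sketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("kafka-to-s3-sketch")
          .getOrCreate()

        // Subscribe to a Kafka topic (broker and topic names are hypothetical)
        val raw = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker-1:9092")
          .option("subscribe", "payments")
          .load()

        // Kafka keys/values arrive as binary; cast to strings for downstream parsing
        val events = raw.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")

        // Write micro-batches to S3 as Parquet; the checkpoint makes the file
        // sink restartable with exactly-once output (bucket paths are hypothetical)
        val query = events.writeStream
          .format("parquet")
          .option("path", "s3a://example-bucket/payments/")
          .option("checkpointLocation", "s3a://example-bucket/checkpoints/payments/")
          .start()

        query.awaitTermination()
      }
    }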

 

Role – Senior Data Engineer

Location: Phoenix, AZ

Client – EWS (please do not publish)

Rate - $65/hr

Indent – 598887

 

Must have: Scala, Spark, Kafka, AWS

 

JD and key responsibilities: identical to the Lead Data Engineer role above.

 

Data Engineering Lead/Architect
Onsite in the NJ/NYC office or within traveling distance.

Able to communicate with business and technical stakeholders to propose solutions, create a work breakdown, and estimate the effort for execution.

Able to coordinate and collaborate with cross-functional teams, stakeholders, and vendors.

Job description

Overall 12-14 years of experience
Worked with Spark, Python, and SQL for at least 3 years in a professional setting
Has experience deploying machine learning models into production and building systems for these processes; experience with Databricks MLOps or AWS SageMaker is a plus
Is comfortable with containerization technologies (Fargate, ECS, Docker)
Is an intellectually curious and self-directed problem solver, keen to work on a variety of data projects and to search independently for answers
Has worked with common Python data libraries (pandas, NumPy)
Has worked with Databricks and RDS as well as NoSQL databases (Cassandra, HBase)
Has experience with DevOps tools such as Git, Jenkins, and Terraform
Has hands-on experience with SQL and can write complex queries with ease; has experience building batch and real-time data pipelines (a minimal batch example is sketched after this list)
Has familiarity with AWS services (Kinesis, Fargate, Lambda, S3)
Cares deeply about the quality of their code, but is also aware of timelines and doesn't spend countless hours trying to bring things to perfection
Communicates clearly and effectively
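
To make the batch-pipeline expectation above concrete, here is a minimal, illustrative Spark sketch (in Scala, keeping one language across the sketches in this mail) that loads a day of raw events from S3, expresses the transformation in SQL, and writes a curated table back. All bucket paths and column names are hypothetical.

    import org.apache.spark.sql.SparkSession

    object DailyBatchSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("daily-batch-sketch")
          .getOrCreate()

        // Load one day of raw events from S3 (the path is hypothetical)
        spark.read
          .parquet("s3a://example-lake/raw/events/dt=2024-01-01/")
          .createOrReplaceTempView("events")

        // Express the transformation as SQL, as the role calls for complex queries
        val daily = spark.sql(
          """SELECT user_id, COUNT(*) AS event_count, MAX(ts) AS last_seen
            |FROM events
            |GROUP BY user_id""".stripMargin)

        // Write the curated output back to the lake for downstream consumers
        daily.write
          .mode("overwrite")
          .parquet("s3a://example-lake/curated/daily_user_activity/dt=2024-01-01/")

        spark.stop()
      }
    }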

 

Anish Antrive

Senior Recruiter

Nityo Infotech Corp.
Suite 1285, 666 Plainsboro Road
Plainsboro, NJ 08536

Cell

Desk EXT 4005

Email:

LinkedIn: 

 

“If you feel you received this email by mistake or wish to unsubscribe, kindly reply to this email with ‘UNSUBSCRIBE’ in the subject line.”



Client: Nityo Infotech

             
