Job Description:

Visa/Tax terms: Any

Position: Big Data Engineer

Location: Sunnyvale, CA (local candidates only, F2F interview is mandatory)

Duration: 1 year +

*LinkedIn profile is required*

Key skills: Big Data and software development/programming experience in
Java and Python/Perl. Candidates with a machine learning background are
also a fit, since they typically have strong scripting/programming
experience along with big data. Experience building data pipelines is
very important.

o Basic big data stack: Pig, Hive, HBase
o Advanced big data: Kafka, Storm, or similar

A Little About Us

The Data Engineering and Integrations team provides vital data processing
and application services that are used for a variety of critical functions
and strategic initiatives. We deal with a variety of data sources ranging
from enterprise to big data and work on the latest technology within the
grid ecosystem. We also engineer applications that enable our users to
gain better insights into their data and to act on it.

A Lot About You

You are a self-driven and motivated individual who wants to make an impact
by offering innovative solutions in the big data and custom applications
space. You are a free thinker with a hunger to learn, willing to try out
new tools and technologies and to look for the best and most practical
solution to any given problem (rather than the quickest or easiest). You
take on challenges head-on and work well under pressure. No task is too
big or small for you, and you treat everything with equal merit. You are
a good team player and thrive as part of a team of equally talented and
motivated individuals. You are a strong communicator.

Responsibilities:

o Work on development initiatives as part of a scrum team on sprint cycles.
o Closely interact with our stakeholders (Product Owners/Managers, Business Analysts, others) for clarity on sprint items and for verification of developed solutions.
o Participate in team activities such as sprint grooming sessions, project or product discussions, brown bags, as well as the occasional team outing.
o Follow appropriate coding standards and best practices as applicable.
o Document your work well.
o Participate in code reviews for your peers.
o Collaborate with your peers to find solutions to complex problems. Share knowledge with your peers and learn from them as required.
o Work on operational and production support for the applications we build and maintain.
o Work towards quarterly team and organizational goals that are result-oriented and measurable.

You Must Have

o 5+ years of overall experience in software development.
o Strong data engineering experience with demonstrable skills building data pipelines from structured and semi-structured data sources, cleansing, formatting, and storing data into reporting tables.
o Strong scripting experience using Python/Perl/Shell.
o Strong programming experience with Java.
o Strong experience with relational database systems such as Oracle and MySQL.
o Strong, demonstrable experience working on real-time data pipelines using technologies such as Kafka and Storm.

Preferred

o Experience with big data technologies: HDFS, Pig, Hive, Oozie, HBase, Spark, etc.
o Experience working with RESTful APIs.