Job Description:
Job Title: Hadoop Data Engineer

Location: San Jose, CA

Duration: 8+ months

Rate: DOE

The Challenge!
The Adobe IDS Data Engineering team is looking for innovative data and software engineers to build and evolve our industry leading data products and applications that empower Adobe’s Data Driven Operating Model and next generation customer experiences.
What you’ll do

Design, develop & tune data products, applications, and integrations on large-scale data platforms (Hadoop, Kafka streaming, HANA, SQL Server, etc.), with an emphasis on performance, reliability, scalability, and, above all, quality.

Analyze business needs, profile large data sets, and build custom data models and applications that drive Adobe's business decision-making and customer experience.

Develop and extend design patterns, processes, standards, frameworks, and reusable components across data engineering functions.

Collaborate with key stakeholders, including business teams, engineering leads, architects, BSAs, and program managers.

Working at Adobe, you will have the opportunity to extend your network and collaborate with engineers, architects, and leaders across the Adobe data management space, Adobe product engineering, and business leadership teams. We are looking to win our next CIO 100 award, and we need you!

The ideal candidate will have:

MS/BS in Computer Science or a related technical field, with 4+ years of strong hands-on experience in enterprise data warehousing / big data implementations and complex data solutions and frameworks.

Strong SQL, ETL, scripting, and/or programming skills, with a preference for Python, Java, Scala, and shell scripting.

Demonstrated ability to clearly form and communicate ideas to both technical and non-technical audiences.

Strong problem-solving skills, with the ability to isolate, deconstruct, and resolve complex data and engineering challenges.

Results-driven, with attention to detail, a strong sense of ownership, and a commitment to up-leveling the broader IDS engineering team through mentoring, innovation, and thought leadership.

Desired skills:

Familiarity with streaming applications.

Experience with development methodologies such as Agile/Scrum.

Strong experience with Hadoop ETL / data ingestion: Sqoop, Flume, Hive, Spark, HBase.

Strong experience with SQL and PL/SQL.

Nice to have: experience in real-time data ingestion using Kafka, Storm, Spark, or complex event processing.

Experience with Hadoop data consumption and other components: Hive, Hue, HBase, Spark, Pig, Impala, Presto.

Experience monitoring, troubleshooting, and tuning services and applications, along with operational expertise: strong troubleshooting skills and an understanding of systems capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networking.

Experience in the design and development of API frameworks using Python/Java is a plus.

Experience developing BI dashboards and reports is a plus.