Job Description:

Title- Big Data Engineer

Location- Phoenix, AZ

(Long term)


Primary Skills:

1 Very strong server-side Java experience, especially in open-source, data-intensive, distributed environments.

2 Should have worked on open-source products; contributions to them would be an added advantage.

3 Hands-on implementation experience and in-depth knowledge of various Java, J2EE, and EAI patterns.

4 Implemented complex projects dealing with considerable data sizes (GB/PB) and high complexity.

5 Well versed in various architectural concepts (multi-tenancy, SOA, SCA, etc.) and NFRs (performance, scalability, monitoring, etc.).

6 Good understanding of algorithms, data structures, and performance optimization techniques.

7 Knowledge of database principles and SQL, and experience working with large databases beyond just data access.

8 Exposure to the complete SDLC and PDLC.

9 Capable of working both as an individual contributor and within a team.

10 Self-starter and resourceful, with the ability to manage high-pressure situations.

11 Should have experience/knowledge working with batch-processing and real-time systems using various open-source technologies such as Solr, Hadoop, NoSQL databases, Storm, Kafka, etc.


Role & Responsibilities

Implementation of solutions for large-scale data processing (GBs/PBs) over various NoSQL, Hadoop, and MPP-based products
Active participation in architecture and design calls with Big Data customers
Working with senior architects and providing implementation details to offshore teams
Conducting sessions and writing whitepapers and case studies pertaining to Big Data
Responsible for timely, quality deliverables
Fulfilling organizational responsibilities: sharing knowledge and experience with other groups in the organization, and conducting technical sessions and trainings