Job Description:
Responsibilities Include:
Expertise - Collaborate with the customer's business & technology teams to understand the problem statement, gather requirements, and design scalable, robust, and sustainable architectures and solutions leveraging AWS services such as Amazon Elastic Compute Cloud (EC2), AWS Data Pipeline, Amazon S3, Amazon DynamoDB, Amazon Relational Database Service (RDS), Amazon Elastic MapReduce (EMR), and Amazon Redshift.
Solutions - Deliver on-site technical engagements with partners and customers, including participating in pre-sales on-site visits, understanding customer requirements, creating consulting proposals, and developing packaged Data & Analytics service offerings.
Delivery - Engagements include short on-site projects proving the use of AWS services to support new distributed computing solutions that often span private and public cloud services. Engagements will include migration of existing applications (built on Informatica, Oracle, and IBM Netezza) and development of new applications using AWS cloud services.
Insights - Work with the customer's engineering and support teams to convey partner and customer needs and feedback as input to technology roadmaps. Share real-world implementation challenges and recommend new capabilities that would simplify adoption and drive greater value from the use of AWS cloud services.

Basic Qualifications:
Bachelor's degree, or equivalent experience, in Computer Science, Engineering, Mathematics, or a related field.
9+ years of overall experience, including 5+ years of Data Lake/Hadoop platform implementation and 3+ years of hands-on experience implementing and performance-tuning Hadoop/Spark deployments.
Ability to think strategically about business, product, and technical challenges in an enterprise environment.
Experience with analytic solutions applied to the mortgage domain or banking & financial services.
Highly technical and analytical, possessing 5 or more years of IT platform implementation experience.
Experience developing data & analytics solutions leveraging the end-to-end AWS data management technology stack (S3, Kinesis, EMR, Redshift, Glue, Athena, Lambda, Data Pipeline, CloudWatch, SageMaker, and other data/analytics tools).
Understanding of Apache Hadoop and the Hadoop ecosystem. Experience with one or more relevant tools (Sqoop, Flume, Kafka, Oozie, Hue, ZooKeeper, HCatalog, Solr, Avro).
Familiarity with one or more SQL-on-Hadoop technologies (Hive, Impala, Spark SQL, Presto).
Experience developing software in one or more programming languages (Java, Python, PySpark, Scala); current hands-on implementation experience required.
Hands-on experience leading large-scale global data warehousing and analytics projects.
Ability to lead effectively across organizations.
Understanding of database and analytical technologies in the industry including MPP and NoSQL databases, Data Warehouse design, BI reporting and Dashboard development.
Demonstrated industry leadership in the fields of database, data warehousing or data sciences.
Implementation and tuning experience, specifically with Amazon Elastic MapReduce (EMR).
Experience implementing AWS services in a variety of distributed computing and enterprise environments.
Customer-facing skills to represent AWS/cloud computing well within the customer's environment and drive discussions with senior personnel regarding trade-offs, best practices, project management, and risk mitigation. Should be able to interact with C-level executives as well as staff within customer organizations.


Client: Confidential
