Job Description:
- Experience in Snowflake administration activities such as creating databases, roles, and virtual warehouses, and managing an enterprise data warehouse in Snowflake
- Administer the data warehouse and guide the team in implementation using Snowflake, SnowSQL, and other big data technologies
- Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, and big data modeling techniques using Python
- Experience in performance tuning of Snowflake pipelines, with the ability to troubleshoot issues quickly
- Extensive experience with relational and NoSQL data stores, methods, and approaches (star and snowflake schemas, dimensional modeling)
- Experience in migration projects from traditional databases to Snowflake
- Understanding of data pipelines and modern approaches to automating and testing them using cloud-based implementations; able to clearly document requirements to create technical and functional specs
- Strong leadership skills with a willingness to lead, generate ideas, and be assertive
- Perform performance tuning, application support, and user acceptance training
- Identify process improvement opportunities
- Maintain confidentiality of sensitive information
- Document and communicate risk assessments pertaining to new functionality and enhancements
- Collect, analyze, and report data for early detection, correction, and continual improvement
- Recognize and attend to important details with accuracy and efficiency

Required Qualifications:
- Minimum of 8 years of IT experience
- At least 3 years of experience designing, implementing, and administering a fully operational Snowflake Data Warehouse solution
- Excellent understanding of Snowflake internals and of integrating Snowflake with other data processing and reporting technologies
- Good presentation and communication skills, both written and verbal
- Ability to solve problems and to convert requirements into designs
- Ability to troubleshoot issues as they arise
- Ability to test developed jobs and prepare test documentation
- Work experience optimizing the performance of Spark jobs
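For context on the Snowflake administration activities listed above (creating databases, roles, and virtual warehouses, and granting access), a minimal sketch of typical DDL a Snowflake admin runs is shown below. All object names (ENTERPRISE_DWH, ANALYST_ROLE, REPORTING_WH) are hypothetical placeholders, not taken from this posting:

```sql
-- Create a database for the enterprise data warehouse (name is illustrative)
CREATE DATABASE IF NOT EXISTS ENTERPRISE_DWH;

-- Create a custom role for analysts
CREATE ROLE IF NOT EXISTS ANALYST_ROLE;

-- Create a virtual warehouse (VWH); size and auto-suspend are example settings
CREATE WAREHOUSE IF NOT EXISTS REPORTING_WH
  WITH WAREHOUSE_SIZE = 'XSMALL'
       AUTO_SUSPEND   = 300      -- suspend after 5 minutes idle to save credits
       AUTO_RESUME    = TRUE;

-- Grant the role access to the warehouse and database
GRANT USAGE ON WAREHOUSE REPORTING_WH TO ROLE ANALYST_ROLE;
GRANT USAGE ON DATABASE ENTERPRISE_DWH TO ROLE ANALYST_ROLE;
```

These statements would be run through SnowSQL (Snowflake's CLI) or the Snowflake web UI by a user holding a sufficiently privileged role such as SYSADMIN.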