Job Description:
This position involves working with cutting-edge database technologies. Successful candidates must be creative problem-solvers, assertive and proactive learners, and able to work both independently and collaboratively with associates at all levels.


Responsibilities:

Responsible for designing and implementing a data lake and the applications that ingest data, perform ETL, and run analytics on top of it. Able to evaluate and recommend query engines, reporting tools, and tools for data ingestion and management, and to support movement of high volumes of data with the best possible performance.

Required Skills

10 or more years of hands-on experience with systems, databases, programming and architecture

Experience with database and data warehousing systems such as Oracle, SQL Server, Teradata, and AWS Database Migration Service

Experience designing and working with cloud-based data stores such as Amazon Redshift, Snowflake, and Azure SQL Database

Experience with BI tools such as Tableau, QlikView, Power BI, or Spotfire

Experience with Big Data Technologies:

SQL knowledge and familiarity with tools such as Impala, Tez, Presto, Drill, etc.
Data integration experience - integration of a Big Data lake with systems of record (ERP, CRM, BPM, Salesforce, etc.)


Experience with high-availability and high-performance systems

Strong communication, presentation, and collaboration skills

Minimum Qualifications

Bachelor's Degree in Computer Science

5+ years of hands-on experience working with extremely high-throughput, high-availability data processing systems

A focus on high-quality deliverables and meeting deadlines

A willingness to bring ideas to the table

A collaborative style and a focus on continuous improvement, quality, planning, teamwork, and strong communication

A good understanding of big data database architecture, indexing, and partitioning