Job Description:
Hope you are doing well!



Below is the JD. If interested, kindly share your resume with contact details and let me know your expected hourly rate on C2C/W2. If you are working through an employer, please share the employer's details as well.



Title        : ETL Developer (SDET)

Location     : Long Island, NY

Duration     : 12-Month Contract



Note:

Kindly share resumes for quick submittal.

Job Description:


Responsible for writing SQL queries to validate data quality and integrity in SQL Server, Oracle, APS, PDW, MDM, etc.
Responsible for writing Scala and SQL code on Spark to ensure data quality and integrity in Hadoop-based data stores such as Azure Cosmos DB, Azure Blob Storage, Hive, etc., using an HDI cluster in the cloud.
Responsible for ensuring data quality and integrity are maintained in ETL processes built using traditional tools such as SSIS and Informatica, and cloud-based tools such as Azure Data Factory and Spark jobs.
Responsible for reviewing and assessing analytical data models built on traditional Inmon or Kimball methodology (star schema, snowflake, and denormalized database models).
Responsible for reviewing and assessing JSON data models built for NoSQL databases such as Cosmos DB.
Responsible for ensuring technical and business requirements are met in building a Data Lake on Azure Blob Storage using Spark SQL on HDI / Databricks Spark clusters.
Test and ensure the quality of data presented in reports built using SSRS, Spotfire, etc.
Drive the data quality efforts for enterprise data migration projects to ensure data quality and data integrity from the legacy application to the new platform.
Perform data analysis and data quality assessment, and provide guidance on data profiling and data cleansing.
Assist data architects, system analysts, and the development team with data analysis. Identify, troubleshoot, and provide solutions for potential issues in data transformation, conceptual data modeling, and metadata management.
Ensure that the ETL application correctly rejects, substitutes default values for, corrects, or ignores invalid data, and reports it, using SQL/Scala scripts.
Work with large volumes of complex data, preferably in distributed frameworks such as Spark.
Responsible for creating complete test cases, test plans, test data, and test scripts.
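To illustrate the kind of validation work described above, here is a minimal sketch of three common ETL data-quality checks (source-to-target row-count reconciliation, invalid-row detection, and default-value substitution). It is written in plain Python for brevity; the role itself calls for Scala/SQL on Spark, and all function and field names here are hypothetical, not from any specific project.

```python
from decimal import Decimal

def row_counts_match(source_rows, target_rows):
    """Source-to-target reconciliation: row counts must match after the load."""
    return len(source_rows) == len(target_rows)

def invalid_rows(rows):
    """Flag rows with a missing key or a negative amount for rejection/reporting."""
    return [
        r for r in rows
        if r.get("id") is None
        or (r.get("amount") is not None and r["amount"] < 0)
    ]

def with_defaults(rows, default=Decimal("0")):
    """Substitute a default for missing amounts instead of rejecting the row."""
    return [
        {**r, "amount": r["amount"] if r.get("amount") is not None else default}
        for r in rows
    ]
```

In a real pipeline these checks would typically run as Spark SQL queries or DataFrame transformations against the actual source and target stores, with failures logged to a data-quality report.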

Thanks & Regards



Manjunath M.S // Assistant Manager (Staffing) //