Hi,
Greetings from XTGlobal, Inc.!
We at XTGlobal, Inc. are currently sourcing for a Data Architect requirement. Please review the job description below and reply if you are interested in pursuing this opportunity.
Title: Data Architect [XTGL_ 65790]
Location: Las Vegas, NV (100% Remote)
Type: 6+ Months Contract
Job Description:
Experience architecting cloud-native data integration pipelines, preferably on the Microsoft Azure platform (AWS experience may be considered for strong candidates)
At least 10 years of data engineering/data warehouse implementation experience
At least 4 years of cloud computing platform experience
Design/implementation experience with Synapse, ADF V2, ADF Data Flows, Azure Databricks, SQL DB/Hyperscale, SQL DW, ADLS Gen2, and other Azure services
Spark/PySpark/Python-based design and development
SQL development, including procedural SQL (T-SQL or PL/SQL); SQL Server preferred, but another RDBMS may be acceptable
Good communication (verbal & written) and documentation skills
Preferred Experience:
ETL design/development using tools such as DataStage, Informatica, or Talend
ADF source code version control using Azure DevOps or Git
Migration of on-premises data applications to the Azure cloud platform
Experience with Snowflake
Collaboration with business teams
Experience in fast-paced, complex environments, including large enterprise settings and/or product development
Responsibilities:
Architect an Azure cloud-native enterprise data solution with emphasis on data integration and data governance, serving machine learning and reporting use cases
Design plug-and-play components that can be orchestrated into data flows
Design data ingestion patterns based on source systems and data formats
Design a data quality framework
Optimize the code base for automation, performance, scalability, reliability, operational efficiency, cost, and code promotion across environments
Streamline design and coding standards and best practices
Required Skills:
Azure cloud architecture experience, with at least 10 years in data engineering/data warehousing.
Candidates must have experience architecting data pipelines in Azure Data Factory (ADF), Spark/PySpark, and Databricks.
Experience architecting Azure Data Lake Storage (ADLS) integration with Databricks and/or feeding data into ADLS.
If you are interested in pursuing this opportunity, please reply with your resume attached in Word format.
Please refer any friends or colleagues who may be looking for job opportunities.