Job Description:
  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Azure data technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
  • Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.
  • Extract data from identified databases, perform initial data quality checks on the extracted data, and build data pipelines that automate the flow and transformation of data across the ecosystem.
  • Strong understanding of data architecture, modeling, and infrastructure.
  • Experience with distributed data technologies (e.g. Spark) and NoSQL databases (e.g. Cassandra, Cosmos DB, etc.)
  • Create reports and dashboards that make complex data easily consumable by the users.
  • Debug and optimize the performance of queries, data pipelines, reports, and dashboards.
  • Knowledge of Azure-based data processing technologies such as Event Hubs, ADF, Synapse, ADLS, and Azure Databricks, or their equivalents.
  • Experience with build systems such as Jenkins or Azure DevOps, and with source control systems.
  • Experience with Python, PySpark/Scala, SQL, and Power BI.
             
