Job Description:
Title: Senior Data Analyst
Location: Washington, DC/New York
Duration: 12 Months
Mode: Round 1: Phone | Round 2: Video
Start: Immediately (The consultant will work remotely until the COVID situation improves.)
Auth: USC, GC, H-1B only.

Position Summary:
The Senior Data Analyst is responsible for building and deploying Enterprise Data Management applications, including the Data Quality framework, master data management, and metadata management. As a Senior Data Analyst, you will lead collaboration with enterprise data management teams, product teams, data analysts, and data engineers to design and build data-forward solutions using ETL tools, SQL and NoSQL databases, and cloud technologies. This involves rapid innovation in the design and development of data pipelines, data marts, and graph databases to ensure enterprise data management solutions are delivered in a timely manner. We are looking for someone with strong hands-on coding and design experience in all layers of the full data stack. The Senior Data Analyst plays a significant role in Agile planning, providing advice and guidance, and monitoring emerging technologies.

Duties and Responsibilities:
You will be working with your team, peers, partners, cross-functional teams and vendors to:
Build and deploy data pipelines and database processes, including SQL and NoSQL databases for enterprise data management applications.
Collaborate with enterprise data management teams, product teams, data analysts, and data engineers to design and build data-forward solutions.
Gather and process all types of data including raw, structured, semi-structured, and unstructured data.
Integrate with enterprise data catalog to retrieve or update meta-data and attributes of the enterprise data assets.
Build and maintain dimensional data warehouses in support of business intelligence tools.
Develop data catalogs and data validations to ensure clarity and correctness of key business metrics.
Design, code, test, correct and document programs and scripts using agreed standards and tools to achieve a well-engineered result.
Derive an overall strategy of data management, within an established information architecture (including both structured and unstructured data), that supports the development and secure operation of existing and new information and digital services.
Plan effective data storage, security, sharing and publishing within the organization.
Ensure data quality and implement tools and frameworks for automating the identification of data quality issues.
Collaborate with internal and external data providers on data validation providing feedback and making customized changes to data feeds and data mappings.
Mentor and lead junior data analysts by providing technical guidance and oversight.
Provide ongoing support, monitoring, and maintenance of deployed products.
Drive and maintain a culture of quality, innovation and experimentation.
Functional areas: Metadata Management, Graph Databases, Master Data Management, 2nd & 3rd Party Data Management, Data Quality, Data Controls, and Partner Operations

Minimum Qualifications:
An advanced degree in a relevant field of study is strongly desirable, particularly computer science or data science.
5+ years of professional experience working with data extraction/manipulation logic.
5+ years of professional experience with data design and SQL databases.
7+ years of professional experience in development, R&D, or information technology.
3+ years working with a public cloud big data ecosystem (AWS certification a plus).
2+ years working with graph database design and implementation (experience with Neo4j a plus).
2+ years of professional experience with APIs and dashboard reporting.

Technical Skills:
Experience deploying and running AWS-based data solutions, and familiarity with tools such as CloudFormation, IAM, Athena, and Kinesis.
Experience engineering big-data solutions using technologies like EMR, S3, and Spark, and an in-depth understanding of data partitioning and sharding techniques.
Experience loading and querying both on-premises and cloud-hosted databases such as Teradata and Redshift.
Experience designing, querying and developing graph databases such as Neo4j.
Experience building streaming data pipelines using Kafka, Spark, or Flink.
Familiarity with binary data serialization formats such as Parquet, Avro, and Thrift.
Experience deploying data notebook and analytics environments such as Jupyter and Databricks.
Knowledge of the Python data ecosystem using pandas and numpy.
Knowledge of data modeling, data access, and data storage techniques.
Appreciation of agile software processes, data-driven development, reliability, and responsible experimentation.
Familiarity with metadata management, data catalogs, data lineage, and principles of data governance.

Strong and thorough knowledge of the following:
ETL/ELT Tools
BI tools
Data Catalog / MDM / Reference Data
RDBMS, NoSQL and NewSQL
MS Office Suite

Client: Telecom