Job Description:

The Cloud Engineer will take a strong leadership role in developing the enterprise data architecture of the Digital and Retail businesses across business units, providing technical guidance and architecture and enforcing technical standards. The role includes assessing and prototyping new concepts and technologies, often developing capabilities to be handed off to operational development teams, analyzing complex problems, and actively participating in enterprise technology design decisions.

Responsibilities will include, but will not be limited to, the following:

  • Accountable for modernization, migration/transformation to a cloud data platform
  • Design and build reliable, scalable data infrastructure with leading privacy and security techniques to safeguard data
  • Architect scalable, secure, low latency, resilient and cost-effective solutions for enabling predictive and prescriptive analytics across the organization
  • Design and architect frameworks to operationalize ML models through serverless architecture, and support unsupervised continuous-training models
  • Take ownership of and scale our data models (Tableau, DynamoDB, Kibana)
  • Communicate data-backed findings to a diverse constituency of internal and external stakeholders
  • Build frameworks for both real-time and batch data ingestion pipelines using best practices in data modeling and ETL/ELT processes, and hand off to data engineers
  • Participate in technical decisions and collaborate with talented peers
  • Review code, implementations and give meaningful feedback that helps others build better solutions
  • Help drive technology direction and technology choices by making recommendations based on experience and research
  • Additional duties as assigned
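The "operationalize ML models through serverless architecture" responsibility above can be illustrated with a minimal AWS Lambda-style handler. This is a hedged sketch, not a definitive implementation: the model, its feature weights, and the event shape are hypothetical stand-ins for a real model that would typically be loaded from S3 once per container.

```python
import json

def load_model():
    """Return a trivial scoring function standing in for a real ML model.
    In practice this would deserialize a trained model artifact (e.g. from S3)
    once per Lambda container so warm invocations can reuse it."""
    weights = {"recency": 0.5, "frequency": 0.3, "monetary": 0.2}  # hypothetical
    def score(features):
        return sum(weights[k] * features.get(k, 0.0) for k in weights)
    return score

MODEL = load_model()  # loaded at import time, shared across warm invocations

def handler(event, context=None):
    """AWS Lambda-style entry point: scores one record per invocation.
    Accepts either an API Gateway-style event with a JSON string "body",
    or a plain feature dict for direct invocation."""
    body = event.get("body")
    features = json.loads(body) if isinstance(body, str) else event
    return {
        "statusCode": 200,
        "body": json.dumps({"score": round(MODEL(features), 4)}),
    }
```

Invoked directly, `handler({"recency": 1.0, "frequency": 2.0, "monetary": 3.0})` returns a 200 response whose body carries the computed score; keeping model load outside the handler is the standard pattern for keeping serverless inference latency low on warm starts.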

Requirements

  • 7 or more years of experience working directly with enterprise data solutions
  • Hands-on experience working in a public cloud environment and with on-prem infrastructure
  • Specialization in columnar databases such as Redshift Spectrum, time-series data stores such as Apache Pinot, and AWS cloud infrastructure
  • Experience with in-memory, serverless, streaming technologies and orchestration tools such as Spark, Kafka, Airflow, Kubernetes
  • Current hands-on implementation experience required, with 7 or more years of IT platform implementation experience
  • AWS Certified Big Data - Specialty certification is desirable
  • Experience designing and implementing AWS big data and analytics solutions in large digital and retail environments is desirable
  • Advanced knowledge of and experience with online transaction processing (OLTP) and online analytical processing (OLAP) databases, data lakes, and schemas
  • Experience with AWS cloud data lake technologies and operational experience with Kinesis/Kafka, S3, Glue, and Athena
  • Experience with any of the message/file formats: Parquet, Avro, ORC
  • Design and development experience with streaming services, EMS, MQ, Java, XSD, file adapters, and ESB-based applications
  • Experience in distributed architectures such as Microservices, SOA, RESTful APIs, and data integration architectures
  • Experience with a wide variety of modern data processing technologies, including:
    • Big Data stack (Spark, Spectrum, Flume, Kafka, Kinesis, etc.)
    • Data streaming (Kafka, SQS/SNS queuing, etc.)
    • Columnar databases (Redshift, Snowflake, Firebolt, etc.)
    • Commonly used AWS services (S3, Lambda, Redshift, Glue, EC2, etc.)
    • Expertise in Python, PySpark, or similar programming languages
    • BI tools (Tableau, Domo, MicroStrategy)
    • Understanding of Continuous Integration/Continuous Delivery (CI/CD)
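As a concrete illustration of the S3/Glue/Athena data lake experience listed above, the sketch below builds Hive-style partitioned object keys (`year=/month=/day=`), the convention Glue crawlers and Athena use for partition discovery and pruning. The bucket prefix, table name, and filename are hypothetical examples.

```python
from datetime import datetime, timezone

def partitioned_key(table: str, event_time: datetime, filename: str,
                    bucket_prefix: str = "datalake") -> str:
    """Build a Hive-style partitioned S3 object key so the data is
    discoverable by a Glue crawler and prunable in Athena queries.
    All names here are illustrative, not a prescribed layout."""
    return (
        f"{bucket_prefix}/{table}/"
        f"year={event_time.year:04d}/month={event_time.month:02d}/"
        f"day={event_time.day:02d}/{filename}"
    )

# Example: a batch Parquet file landing for 15 Mar 2024
key = partitioned_key(
    "orders",
    datetime(2024, 3, 15, tzinfo=timezone.utc),
    "orders-00001.parquet",
)
# key == "datalake/orders/year=2024/month=03/day=15/orders-00001.parquet"
```

Zero-padding the month and day keeps partitions lexicographically sortable, which matters when Athena queries filter on partition-key ranges.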
