Job Description:

Job Title: Big Data Engineer
Location: Bloomfield, Connecticut, United States
Duration: Long Term Contract

Job Description:
Join this exciting new initiative as a Big Data Engineer with skills in Tableau, Big Data, and SQL! With your data engineering expertise, you will work with a high-performing team to design and develop Big Data solutions.
Qualified candidates for this position must have a deep understanding of data warehousing.
You will work with a high-performing team to develop solutions for a major client, supporting and interacting with the client daily in an Agile way of working. You will have the opportunity to grow with technology in a customer-facing role. You should be able to work independently under limited supervision and apply your knowledge, with sufficient experience and maturity to deal effectively with technical issues and support the broader team.

Responsibilities:

  • Ingest data using Kafka, CDC, or similar technologies.
  • Implement and support streaming technologies such as Kafka, KSQL, and Spark.
  • Implement and support big data tools and frameworks such as HDFS, Hive, and HBase or Cassandra.
  • Design and develop ETL on the Hadoop big data platform using technologies such as Scala, Spark, Python, and Oozie to support a variety of requirements and applications.
  • Warehouse Design and Development – Set the standards for warehouse and schema design in massively parallel processing engines such as Hadoop and columnar databases while collaborating with analysts and data scientists to create efficient data models.
  • Implement a CI/CD DevOps strategy.
  • Implement end-to-end data lake solutions both on premises and in cloud environments, preferably Azure or AWS.
  • Deliver an AI framework that supports analysts and data scientists in advanced analytics.

Requirements / Qualifications:

  • 7+ years of work experience with ETL, business intelligence, and big data architectures.
  • 5+ years of experience with the Hadoop ecosystem (MapReduce, Hive, Oozie, YARN, HBase, etc.) and big data ecosystems (Kafka, Cassandra, etc.).
  • 3+ years of hands-on Spark/Scala/Python development experience.
  • Experience developing and managing data warehouses at terabyte or petabyte scale.
  • Core competencies in data structures, REST/SOAP APIs, JSON, etc.
  • Strong experience with massively parallel processing and columnar databases.
  • Expert in writing SQL, PL/SQL, or NoSQL queries.
  • Deep understanding of advanced data warehousing concepts and a track record of applying these concepts on the job.
  • Experience with common software engineering tools (e.g., Git, JIRA, Confluence, or similar).
  • Ability to manage numerous requests concurrently and strategically, prioritizing when necessary.
  • Good communication and presentation skills.
  • Dynamic team player.

Skill Set / Years of Experience / Proficiency Level

  • Data Engineering
  • Tableau
  • Big Data
  • SQL