A strong understanding of Microsoft SQL Server and Structured Query Language (SQL) is crucial. The role requires working with databases, querying data, and optimizing performance.
Strong experience with data warehousing and data modeling concepts is essential for designing efficient and effective database structures. Understanding relationships, normalization, and denormalization is expected.
Experience with ETL (and ELT) processes is vital. The role is responsible for extracting data from various sources, transforming it, and loading it into target systems.
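To make the extract-transform-load flow concrete, here is a minimal Python sketch; the CSV source, table schema, and the SQLite stand-in for SQL Server are all illustrative assumptions, not details from the posting.

```python
# Minimal ETL sketch (hypothetical names throughout): extract rows from a
# CSV export, transform them in memory, and load them into a SQL table.
# sqlite3 stands in for SQL Server to keep the example self-contained.
import csv
import sqlite3

def extract(path):
    # Extract: read raw rows from a source file.
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    # Transform: normalize types and drop incomplete records.
    out = []
    for r in rows:
        if not r.get("order_id"):
            continue
        out.append((int(r["order_id"]), r["customer"].strip().upper(), float(r["amount"])))
    return out

def load(rows, conn):
    # Load: write the cleaned rows into the target table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id INTEGER PRIMARY KEY, customer TEXT, amount REAL)"
    )
    conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect("warehouse.db")
    load(transform(extract("orders.csv")), conn)
```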
---
Job Title: Senior Data Engineer
Location: Cary, NC
Experience: 12+ Years
About the Role
We are looking for an experienced Senior Data Engineer who will lead the design and development of modern data platforms and scalable data pipelines. The ideal candidate has strong hands-on expertise in cloud data engineering, big data technologies, ELT/ETL architecture, and data modeling, along with the ability to mentor teams and work closely with business stakeholders to deliver high-quality data solutions.
---
Job Title: Databricks Data Engineer
Required Skills:
• Azure Databricks data engineering (Databricks Data Engineer Professional)
• Understanding of Databricks Unity Catalog (including Delta Lake/Delta tables)
• Understanding of structured, unstructured, and semi-structured data processing
• Understanding of the financial services cyber security and fraud data domain
• Highly proficient Python and SQL skills
• PySpark and Spark SQL
Preferred Experience:
• 3+ years of Databricks data engineering experience (Databricks Data Engineer Professional)
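As a hedged illustration of the PySpark/Delta/Unity Catalog skills listed above, the sketch below reads a Delta table through Unity Catalog's three-level namespace and applies a simple Spark SQL-style transform; every catalog, schema, table, and column name is a placeholder.

```python
# Hypothetical sketch: read a Delta table registered in Unity Catalog
# (catalog.schema.table namespace) and derive a simple fraud-screening table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fraud-screening").getOrCreate()

# Unity Catalog addresses tables as catalog.schema.table.
txns = spark.read.table("prod.payments.card_transactions")

# Flag high-value transactions per card using Spark SQL functions.
flagged = (
    txns
    .withColumn("is_high_value", F.col("amount") > 10_000)
    .groupBy("card_id")
    .agg(F.sum(F.col("is_high_value").cast("int")).alias("high_value_count"))
    .filter(F.col("high_value_count") >= 3)
)

# Write the result back as a managed Delta table.
flagged.write.format("delta").mode("overwrite").saveAsTable("prod.fraud.flagged_cards")
```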
---
We are seeking a highly skilled AWS Glue Data Engineer to design, develop, and optimize large-scale data pipelines and ETL workflows on AWS. The ideal candidate will have strong expertise in AWS cloud-native data services, data modeling, and pipeline orchestration, with hands-on experience building robust and scalable data solutions for enterprise environments.
Key Responsibilities
• Design, develop, and maintain ETL pipelines using AWS Glue, Glue Studio, and Glue Catalog.
• Ingest, transform, and load data.
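For context on what such a Glue pipeline looks like, here is a minimal sketch following the standard PySpark-based Glue job pattern; the database, table, and S3 bucket names are hypothetical.

```python
# Sketch of a standard AWS Glue PySpark job: read from the Glue Data
# Catalog, apply a transform, write Parquet to S3. Database, table, and
# bucket names are placeholders.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalog table as a DynamicFrame.
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Drop rows missing a primary key, then project the fields we need.
cleaned = source.filter(lambda row: row["order_id"] is not None)
projected = cleaned.select_fields(["order_id", "customer_id", "amount"])

# Write the result to S3 as Parquet.
glue_context.write_dynamic_frame.from_options(
    frame=projected,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)
job.commit()
```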
---
Job Title: Data Engineer
Location: Onsite / Hybrid - Jersey City, NJ (first preference); otherwise Charlotte, NC; St. Louis, MO; or Minneapolis, MN
Required Qualifications:
• 10+ years of Software Engineering experience.
• Excellent verbal and interpersonal communication skills.
• 5+ years' experience with object-oriented programming (Java/Scala).
• 3+ years' experience in developing and deploying applications to public/private cloud.
---
Data Engineer
Position Summary
The Data Engineer is responsible for building and maintaining scalable data pipelines, data warehousing solutions, and data platforms that support analytics, machine learning, and business intelligence. This role focuses on data integration, ETL/ELT workflows, and ensuring data quality and availability.
Key Responsibilities
Develop, maintain, and optimize data pipelines (batch and streaming)
Build and manage data lakes, data warehouses, and analytics platforms
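As a sketch of the streaming half of those responsibilities, the example below uses Spark Structured Streaming to land a Kafka topic in a data lake; the broker, topic, and paths are invented, and the Kafka source assumes the spark-sql-kafka connector is available.

```python
# Sketch of a streaming ingestion pipeline with Spark Structured Streaming:
# consume a Kafka topic and land it in a data lake as Parquet.
# Broker, topic, and paths are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Read the raw event stream (requires the spark-sql-kafka connector).
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clickstream")
    .load()
)

# Kafka delivers key/value as binary; cast the payload to a string column.
payload = events.selectExpr("CAST(value AS STRING) AS json_body", "timestamp")

# Continuously append micro-batches to the lake, tracking progress in a checkpoint.
query = (
    payload.writeStream.format("parquet")
    .option("path", "s3a://example-lake/raw/clickstream/")
    .option("checkpointLocation", "s3a://example-lake/checkpoints/clickstream/")
    .start()
)
query.awaitTermination()
```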
---
Salesforce Developer
100% Remote
Required Technologies:
• JavaScript language experience
• 2+ years of experience with Salesforce's Business Rules Engine
• 6+ years of experience with OmniStudio components (DataRaptors, OmniScripts, Integration Procedures, FlexCards, Vlocity Industry Data Models, etc.)
• 6+ years of experience with native configuration, Apex, LWC, and Visualforce
---
Data Engineer // Omaha, NE (100% Remote)
Job Description:
Position Overview: As a Senior Data Engineer, reporting to the Director of Information Technology, you will be responsible for leading and owning the Enterprise Data Platform and its data products. In today's data-driven world, Enterprise Data Products are an absolute necessity: they streamline access to crucial data, ensure data accuracy and reliability, enable efficient decision-making, and empower teams across the organization.
---
Role: Big Data Engineer
Bill Rate: $78/hour C2C
Location: Houston, TX
Duration: 12+ months / long-term
Interview Criteria: Telephonic + Zoom
Direct Client Requirement
The senior developer's role in this project will involve designing and implementing the following platform components (sketched below):
Ingesting data.
Processing and normalizing the data.
Distributing the data to different stakeholders.
Building and improving a common framework, including monitoring, CI/CD pipelines, testing, performance, and resilience.
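A minimal plain-Python sketch of the first three components; the record shape, field names, and stakeholder callbacks are hypothetical.

```python
# Sketch of the ingest -> normalize -> distribute flow. Field names and
# the stakeholder handlers are invented for illustration.
import json
from datetime import datetime, timezone

def ingest(lines):
    # Ingest: parse raw JSON lines from an upstream feed.
    return [json.loads(line) for line in lines]

def normalize(records):
    # Normalize: standardize field names, timestamps, and types across sources.
    out = []
    for r in records:
        out.append({
            "id": str(r.get("id") or r.get("ID")),
            "ts": datetime.fromtimestamp(r["epoch"], tz=timezone.utc).isoformat(),
            "value": float(r["value"]),
        })
    return out

def distribute(records, subscribers):
    # Distribute: fan each normalized record out to stakeholder callbacks.
    for record in records:
        for deliver in subscribers:
            deliver(record)

raw = ['{"ID": 1, "epoch": 1700000000, "value": "42.5"}']
distribute(normalize(ingest(raw)), [print])
```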
---
The ideal candidate will possess deep expertise in SQL, Python, and key AWS data services, particularly AWS Glue, Amazon Redshift, and AWS Glue DataBrew.
Key Technical Responsibilities
I. Data Quality Framework Development & Automation
• Design, develop, and maintain end-to-end data quality frameworks using Python to automate testing, validation, and analysis of data pipelines and data warehouse tables.
• Build and implement custom data quality checks (e.g., uniqueness, completeness, validity, consistency).
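As a hedged sketch of such custom checks in Python, the example below implements uniqueness, completeness, and validity tests with pandas; the table and column names are illustrative only.

```python
# Sketch of custom data-quality checks over a warehouse table pulled into
# pandas. Column names and thresholds are placeholders.
import pandas as pd

def check_uniqueness(df, key_cols):
    # Key columns should uniquely identify each row.
    dupes = int(df.duplicated(subset=key_cols).sum())
    return {"check": "uniqueness", "passed": dupes == 0, "duplicate_rows": dupes}

def check_completeness(df, col, max_null_rate=0.0):
    # Null rate in a required column must not exceed the threshold.
    null_rate = float(df[col].isna().mean())
    return {"check": f"completeness:{col}", "passed": null_rate <= max_null_rate, "null_rate": null_rate}

def check_validity(df, col, allowed):
    # Values must come from an allowed domain.
    invalid = int((~df[col].isin(allowed)).sum())
    return {"check": f"validity:{col}", "passed": invalid == 0, "invalid_rows": invalid}

orders = pd.DataFrame({
    "order_id": [1, 2, 2],
    "status": ["shipped", "pending", "unknown"],
    "amount": [10.0, None, 7.5],
})
for result in (
    check_uniqueness(orders, ["order_id"]),
    check_completeness(orders, "amount"),
    check_validity(orders, "status", {"shipped", "pending", "cancelled"}),
):
    print(result)
```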