Need senior profile
We are seeking a highly skilled Data Engineer with a strong background in SQL optimization and database performance engineering. This role will focus on upgrading, tuning, and enhancing the reliability, scalability, and efficiency of our SQL-based data infrastructure. You will partner with engineering, analytics, and operations teams to ensure that our data pipelines, queries, and reporting systems run efficiently at scale with minimal downtime.
Key Responsibilities
Hi, hope you are doing well! I have an urgent position. Kindly go through the job description and let me know if this would be of interest to you.
Job Title: Sr. Data Engineer
Location: Hybrid work in Raleigh, NC (local candidates needed)
Duration: 12+ Months Contract
While sharing your resume, please mention consultant location and visa status.
Job Description: The client is requesting that candidates provide 2 references prior to final selection. The references must be recent and from a Sr. Manager/Director.
Data Engineer/MLE
Reston, VA (hybrid)
Top Skills' Details
AWS (EMR/Glue, S3, Lambda, CloudWatch), Python, SQL
Python use: scripting, building new data pipelines (not app dev)
Data warehousing experience (Snowflake)
APIs
Agile engineering practices & JIRA usage
Need someone who will pick up a problem, solve it immediately, and communicate back succinctly.
Note: one position requires Kubernetes experience; rates will vary for each position based on Kubernetes experience.
JOB DESCRIPTION:
·Bug fixes in the Databricks environment
·Ability to monitor, transform, and optimize ETL pipelines for Databricks
·Knowledge of Data Lakehouse architecture and PySpark (at least mid-level)
·Experience with complex data migrations is a plus
·Ensure data accessibility and integrity for the migrated objects
·Collaborate effectively with cross-functional teams.
·Communicate progress and challenges clearly to stakeholders.
Role: Senior Data Engineer
Bill Rate: $80/hour C2C
Location: Scottsdale, AZ
Duration: 12+ months/long-term
Interview Criteria: Telephonic + Zoom
Direct Client Requirement
Hold a bachelor's degree in Computer Science, Information Systems, or a related field.
Bring 10+ years of experience in data engineering or closely related roles.
Extensive experience building and maintaining ETL/ELT pipelines for both batch and streaming data.
Advanced SQL skills (complex joins, window functions, CTEs)
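The "complex joins, window functions, CTEs" requirement above can be illustrated with a minimal, self-contained sketch using Python's built-in sqlite3 module; the table name and values are invented purely for illustration:

```python
import sqlite3

# Hypothetical sales table, invented for this example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 100), ("east", 200), ("west", 50)],
)

# A CTE plus a window function: rank each sale within its region.
query = """
WITH ranked AS (
    SELECT region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
)
SELECT region, amount, rnk FROM ranked ORDER BY region, rnk
"""
rows = conn.execute(query).fetchall()
print(rows)  # → [('east', 200, 1), ('east', 100, 2), ('west', 50, 1)]
```

Window functions require SQLite 3.25+, which ships with any recent CPython build; the same query shape carries over directly to warehouse SQL dialects such as Snowflake or Teradata.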
Responsibilities:
Lead the design, development, and implementation of data solutions using AWS and Snowflake.
Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
Develop and maintain data pipelines, ensuring data quality, integrity, and security.
Optimize data storage and retrieval processes to support data warehousing and analytics.
Provide technical leadership and mentorship to junior data engineers.
Job Details
ETL Data Engineer
Tech Stack & Core Skills:
Advanced SQL (Big Data focused)
Data Visualization (preferably Tableau)
AWS (EMR, Lambda, S3, Step Functions, Athena)
ETL Data Engineering
Data Analysis
HeavyAI
Soft Skills:
Effective Communication
Analysis Presentation
Self-starter
Curious
Growth Mindset
Collaboration
Additional Preferences:
Python
PySpark
Data Modeling
Statistical Analysis
We are an equal opportunity employer. All aspects of employment including the decision to hire, promo
Role: AWS Data Engineer
Bill Rate: $83/hour C2C
Location: Houston, TX
Duration: 12+ months/ long-term
Interview Criteria: Telephonic + Zoom
Direct Client Requirement
We are seeking a skilled AWS Data Engineer who has experience working with Python, PySpark, Lambda, Airflow, and Snowflake.
Responsibilities:
Design, build, and optimize ETLs using Python, PySpark, Lambda, Airflow, and other AWS services.
Create SQL queries to segment, manipulate, and f
Job Responsibilities:
·Teradata knowledge, SQL, DWH/ETL process, Data management & Shell Scripting
·Collaborate with cross-functional teams to gather and analyze data requirements.
·Design, develop, and optimize complex SQL queries and scripts for data extraction and reporting.
·Work extensively with Teradata for querying and performance tuning.
·Analyze large datasets in Big Data environments (e.g., Hadoop, Hive, Spark)
·Design and support ETL workflows, data pipelines, and processes
Hi, hope you are doing well! I have an urgent position. Kindly go through the job description and let me know if this would be of interest to you.
Title: 100% Remote Lead Data Engineer
Duration: 6+ Months
Location: 100% Remote
Job Requirements:
Required Skills:
Lead a team on the research and implementation of a larger project, which may consist of multiple data models, maps, and workflows. Be the contact for the team and participate in prioritization and execution of work. Serve as SME,