Job Description
Teradata, SQL, DWH/ETL processes, data management & shell scripting
Collaborate with cross-functional teams to gather and analyze data requirements.
Design, develop, and optimize complex SQL queries and scripts for data extraction and reporting.
Work extensively with Teradata for querying and performance tuning.
Analyze large datasets in Big Data environments (e.g., Hadoop, Hive, Spark).
Design and support ETL workflows, data pipelines, and related processes.
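For a rough flavor of the Teradata extraction work described above, here is a minimal Python sketch using Teradata's teradatasql driver. The host, credentials, and the sales_db.orders table are hypothetical placeholders, not details from this posting.

import teradatasql

# Hypothetical connection details; replace with a real host and credentials.
with teradatasql.connect(host="td.example.com", user="etl_user", password="***") as con:
    cur = con.cursor()
    # Parameterized extract (qmark style). QUALIFY keeps only the latest row
    # per order_id, a common Teradata pattern for deduplicating change feeds.
    cur.execute(
        """
        SELECT order_id, customer_id, order_ts, amount
        FROM sales_db.orders
        WHERE order_ts >= CAST(? AS TIMESTAMP(0))
        QUALIFY ROW_NUMBER() OVER (PARTITION BY order_id ORDER BY order_ts DESC) = 1
        """,
        ["2024-01-01 00:00:00"],
    )
    for row in cur.fetchall():
        print(row)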
---
Role: Data Engineer
Bill Rate: $94/hour C2C
Location: Remote
Duration: 12+ months (long-term)
Interview Criteria: Telephonic + Skype
Direct Client Requirement
Job Details
· Experience in modeling BI solutions for data from SAP ERP modules such as FI, CO, FSCM, PP, MM, PM, QM, etc.
· Experience implementing SAP HANA Analytics, plus SAP S/4HANA and BOBJ
· Strong understanding of Data Warehousing, Data Modelling, HANA modeling, and HANA Studio; hands-on experience in SAP HANA
· Good understanding …
---
Hi, hope you are doing well! I have an urgent position. Kindly go through the job description and let me know if this would be of interest to you.
Title: 100% Remote Lead Data Engineer
Duration: 6+ months
Location: 100% Remote
Job Requirements / Required Skills:
Lead a team on the research and implementation of a larger project, which may consist of multiple data models, maps, and workflows.
Be the contact person for the team and participate in prioritization and execution of work.
Serve as SME.
---
Job Description
Overall 10+ years of experience.
Hands-on experience with the Hadoop stack of technologies (Hadoop, Spark, HBase, Hive, Pig, Sqoop, Scala, Flume, HDFS, MapReduce).
Hands-on experience with Python & Kafka.
Good understanding of database concepts, data design, data modeling, and ETL.
Hands-on experience analyzing, designing, and coding ETL programs, covering data pre-processing, data extraction, data ingestion, data quality, data normalization, and data loading.
Working e
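As a loose sketch of the kind of Spark-based ETL program this posting describes, covering extraction, data quality, normalization, and loading (the paths, table names, and columns are hypothetical):

from pyspark.sql import SparkSession, functions as F

# Minimal PySpark ETL sketch: extract raw JSON events, clean and normalize
# them, and load the result into a Hive table. All names are placeholders.
spark = SparkSession.builder.appName("etl-sketch").enableHiveSupport().getOrCreate()

raw = spark.read.json("hdfs:///data/raw/events/")           # extraction
clean = (
    raw.dropDuplicates(["event_id"])                        # data quality: dedupe
       .filter(F.col("event_ts").isNotNull())               # drop malformed rows
       .withColumn("event_date", F.to_date("event_ts"))     # normalization
)
clean.write.mode("append").insertInto("analytics.events")   # loading (table must exist)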
---
Job Title: GCP Data Engineer
Location: New York, NY
Duration: 6+ months, with extension
Key Responsibilities:
Design, build, and maintain scalable data pipelines on GCP, primarily using BigQuery.
Develop and manage DBT models to transform raw data into clean, tested, and documented datasets.
Write complex and optimized SQL queries for data extraction, transformation, and analysis.
Implement and maintain data warehousing solutions, ensuring performance and scalability.
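For a flavor of the BigQuery side of such a pipeline, here is a minimal Python sketch using the google-cloud-bigquery client; the project, dataset, and table names are hypothetical, and in a real setup this transformation would typically live in a version-controlled DBT model rather than inline SQL.

from google.cloud import bigquery

# Hypothetical project name; credentials come from the environment
# (e.g., GOOGLE_APPLICATION_CREDENTIALS or gcloud auth).
client = bigquery.Client(project="my-analytics-project")

# Transform raw events into a clean, date-partitioned reporting table.
sql = """
CREATE OR REPLACE TABLE analytics.daily_orders
PARTITION BY order_date AS
SELECT DATE(order_ts) AS order_date,
       customer_id,
       SUM(amount)    AS total_amount
FROM   raw.orders
GROUP  BY order_date, customer_id
"""
client.query(sql).result()   # .result() blocks until the job completes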
---
Hi, hope you are doing well. Please find the job description below and let me know your interest.
Position: Data Engineer
Location: Scottsdale, AZ or Chicago, IL (3 days onsite; local candidates only)
Interview: Onsite interview is a must
Duration: 6+ months
Visa: No H1B
Note: Must have LinkedIn; attach your DL when sharing your resume
Job Description
Key Skills: Hadoop Admin / Spark Admin; Linux & Windows; SAS & Python (Nice to have: A
---
Qualifications:
1. Associate's degree or equivalent training required in Computer Science, Electronic Engineering, Physics, Bioinformatics, or other STEM subjects.
2. Prior industrial experience in software development and testing and/or research experience in human-computer interaction is preferred.
3. Verbal and written communication skills, problem-solving skills, and interpersonal skills.
4. Attention to detail and an aptitude for experimental investigation.
5. Basic ability to work independently
---
Data Engineer (Databricks), REMOTE
Client is seeking a talented and collaborative Data Engineer to join our North American Commercial Unit. In this role, you will work closely with analytics, data science, marketing, and sales teams to design, build, and operationalize efficient data pipelines and infrastructure that power insightful data products, dashboards, and advanced analytics. You will play a key role in automating, deploying, and maintaining production-grade data workflows.
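As a rough sketch of one production pipeline step of the sort this role involves, assuming a Databricks workspace with Delta Lake (the table names are hypothetical):

from pyspark.sql import SparkSession, functions as F

# On Databricks a `spark` session already exists; getOrCreate() keeps the
# sketch self-contained elsewhere. Table names are placeholders.
spark = SparkSession.builder.getOrCreate()

orders = spark.read.table("raw.orders")

daily = (
    orders.withColumn("order_date", F.to_date("order_ts"))
          .groupBy("order_date", "region")
          .agg(F.sum("amount").alias("total_amount"))
)

# Write a Delta table so downstream dashboards get ACID updates and time travel.
(daily.write.format("delta")
      .mode("overwrite")
      .saveAsTable("analytics.daily_orders_by_region"))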
---
Title: Data Engineer (Developer)
Location: Boston, MA (on-site)
Duration: 6 months (possibility of extension)
Implementation Partner: Infosys
End Client: To be disclosed
JD:
Minimum Years of Experience: 4+ years
Strong experience in writing SQL queries
Hands-on expertise in Java/Scala/Unix/Shell programming
Skilled in data analysis, exploration, and modeling
Experience working in an Agile delivery environment
Expertise in Azure
Preferred Skills: AWS, Databricks, Finance domain knowledge
---
Title: Lead Data Engineer
Location: Boston, MA (on-site)
Duration: 6 months (possibility of extension)
Implementation Partner: Infosys
End Client: To be disclosed
JD:
Minimum Years of Experience: 7+ years
Strong experience in writing SQL queries
Expertise in Spark/Java/Scala/Unix/Shell programming
Skilled in data analysis, exploration, and modeling
Experience in an Agile delivery environment
Strong Databricks experience
Preferred Skills: AWS, Azure