Job Description:
IT - Consultant | Cloud Integration | Azure Data Factory (ADF)

ATTENTION ALL SUPPLIERS!!! READ BEFORE SUBMITTING

- An UPDATED CONTACT NUMBER and EMAIL ID is a MANDATORY REQUEST from our client for all submissions.
- Limited to 1 submission per supplier. Please submit your best.
- We prioritize endorsing candidates with complete and accurate information.
- Avoid submitting duplicate profiles; duplicates will be rejected/disqualified immediately.
- Make sure the candidate's interview schedule is up to date, and ask the candidate to keep their lines open.
- Please submit profiles within the max proposed rate.
- Please TAG profiles correctly if the candidate has WORKED FOR INFOSYS as a SUBCON or FTE.

MANDATORY: Please include in the resume the candidate's complete and updated contact information (phone number, email address, and Skype ID), as well as a set of 5 interview timeslots over the 72-hour period after submitting the profile during which the hiring managers could reach them. PROFILES WITHOUT THE REQUIRED DETAILS and TIME SLOTS will be REJECTED.

Job Title: Consultant | Cloud Integration | Azure Data Factory (ADF)
Work Location & Reporting Address: Washington, DC 20001 (Onsite-Hybrid; will consider candidates willing to relocate to the client's location)
Contract Duration: 12 months
Max Vendor Rate: [not specified] per hour max
Target Start Date: 01 Oct 2025
Does this position require visa-independent candidates only? No

Detailed Job Description:
- Design and implement scalable data pipelines using Databricks (PySpark/Scala) on Azure.
- Develop and maintain ETL workflows using Informatica.
- Collaborate with data architects, analysts, and business stakeholders to understand data requirements.
- Optimize data processing performance and ensure data quality and integrity.
- Implement data governance, lineage, and metadata management practices.
- Monitor and troubleshoot data pipeline issues and ensure timely resolution.
- Participate in code reviews, performance tuning, and best-practice enforcement.

Required Skills:
- Databricks: Hands-on experience with notebooks, Delta Lake, and Spark (PySpark).
- Informatica: Strong experience with Informatica.
- Cloud Platforms: Experience with Azure Data Lake.
- Data Modeling: Understanding of dimensional modeling.
- SQL: Advanced SQL skills for data extraction, transformation, and analysis.
- CI/CD: Exposure to DevOps practices and tools such as Azure DevOps.

Minimum Years of Experience: 8-10 years

Certifications Needed: BE

Top 3 responsibilities you would expect the Subcon to shoulder and execute:
- Hands-on experience with Spark, Delta Lake, and PySpark.
- Strong communication and customer-facing skills.
- Perform root cause analysis and performance tuning using Spark UI, DAGs, and logs.

Interview Process (Is face-to-face required?): No

Any additional information you would like to share about the project specs/nature of work:
Drug test details:
Rate Details / Work location: Washington, DC 20001
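Illustrative note for suppliers (not part of the client's requirements): a minimal PySpark/Delta Lake sketch of the kind of pipeline work described above, which may help when screening candidates for hands-on familiarity. All paths, table names, and columns are hypothetical placeholders, not taken from the client.

# Read raw CSV landed in Azure Data Lake, apply a basic data-quality step,
# and persist the result as a Delta table. Assumes a Databricks runtime
# where the "delta" format is available; all identifiers are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

raw = (spark.read.option("header", "true")
       .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/"))  # hypothetical lake path

clean = (raw.dropna(subset=["order_id"])                            # drop rows missing the key
            .withColumn("amount", F.col("amount").cast("double")))  # enforce column type

clean.write.format("delta").mode("overwrite").saveAsTable("analytics.orders_clean")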