Job Description:

This is Navneet from E-Solutions. Hope you are doing well! Kindly go through the position below and send me your updated resume and hourly rate expectation, if interested.

Job Title: Talend Developer
Duration: Long Term Position
Location: Hartford, CT (initially remote due to Covid)
Interview Mode: WebEx (video and audio)

Hiring Manager Notes: Candidates should have Talend experience plus experience with either Redshift or PySpark (either one is acceptable). Architect-level candidates can also be considered for an Architect role.

JOB DESCRIPTION:

Responsibilities:
- Translate data and technology requirements into our ETL/ELT architecture.
- Develop real-time and batch data ingestion and stream-analytics solutions leveraging technologies such as Kafka, Apache Spark, Java, NoSQL databases, and AWS EMR.
- Develop data-driven solutions utilizing current and next-generation technologies to meet evolving business needs.
- Develop custom cloud-based data pipelines.
- Provide support for deployed data applications and analytical models by identifying data problems and guiding issue resolution with partner data engineers and source data providers.
- Provide subject matter expertise in the analysis and preparation of specifications and plans for the development of data processes.

Qualifications:
- Strong experience with data ingestion, gathering, wrangling, and cleansing tools such as Apache NiFi, Kylo, scripting, Power BI, Tableau, and/or Qlik.
- Experience with data modeling, data architecture design, and large-scale data ingestion from complex data sources.
- Experience building and optimizing 'big data' pipelines, architectures, and data sets.
- Advanced SQL knowledge and experience working with relational databases and query authoring, as well as working familiarity with a variety of databases.
- Strong knowledge of analysis tools such as Python, R, Spark, or SAS, plus shell scripting; R/Spark on Hadoop or Cassandra preferred.
- Strong knowledge of data pipelining software, e.g., Talend, Informatica.

The developer will perform hands-on development to create, enhance, and maintain data solutions enabling seamless integration and flow of data across our data ecosystem. Projects will include designing and developing data ingestion and processing/transformation frameworks leveraging open-source tools such as Python, Spark, and PySpark.
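For candidates weighing the stack, here is a minimal, illustrative PySpark sketch of the kind of streaming ingestion pipeline the responsibilities describe (Kafka into Spark, landed as files for downstream warehouse loads). The broker address, topic name, event schema, and S3 paths are hypothetical placeholders, not details of this role.

    # Minimal sketch of a Kafka -> Spark Structured Streaming ingestion job.
    # Requires the spark-sql-kafka connector package on the Spark classpath.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import from_json, col
    from pyspark.sql.types import (StructType, StructField, StringType,
                                   DoubleType, TimestampType)

    spark = SparkSession.builder.appName("ingest-sketch").getOrCreate()

    # Hypothetical schema for the incoming JSON event payloads.
    schema = StructType([
        StructField("event_id", StringType()),
        StructField("amount", DoubleType()),
        StructField("event_ts", TimestampType()),
    ])

    # Read a stream from Kafka; server and topic are placeholders.
    raw = (spark.readStream
           .format("kafka")
           .option("kafka.bootstrap.servers", "broker:9092")
           .option("subscribe", "events")
           .load())

    # Kafka delivers the payload as bytes in the `value` column; parse as JSON.
    events = (raw.selectExpr("CAST(value AS STRING) AS json")
              .select(from_json(col("json"), schema).alias("e"))
              .select("e.*"))

    # Land parsed events as Parquet (e.g., on S3 for later Redshift loads).
    query = (events.writeStream
             .format("parquet")
             .option("path", "s3://example-bucket/events/")
             .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
             .outputMode("append")
             .start())

    query.awaitTermination()

This is only a sketch under the stated assumptions; a production pipeline of this kind would add schema management, monitoring, and error handling.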
