Job Description:
  • Looking for an experienced Tech Lead to design, build, and maintain Big Data workflows/pipelines for collecting, storing, processing, and analyzing very large data sets moving into and out of the data lake
  • Engage in application design and data modeling discussions
  • Build and test data workflows/pipelines
  • Troubleshoot and resolve application and data processing issues
  • Optimize code and fine-tune application performance

QUALIFICATIONS

BS/BA degree in Computer Science, Information Systems, or a related field

•  10+ years of experience with data integration tools such as Talend to develop data pipelines and workflows

•  Strong understanding of data quality processes and procedures such as definition, discovery, profiling, remediation, and monitoring

•  Experience designing and developing ETL processes using Talend, including data load performance optimization

•  Knowledge of storage design concepts, including partitioning

•  Maintain, modify, and improve large sets of structured and unstructured data

•  Monitoring and troubleshooting data integration jobs

•  Handling of JSON data sources and ingestion of API responses using Talend

•  Must have programming experience in the Big Data/Hadoop technology area

•  Highly skilled in Spark and Scala, preferably on the Databricks platform

•  Experience working in an AWS environment, storing and processing data on S3 and transforming data with complex computations into other data models

•  Strong knowledge of SQL and Unix/Linux scripting

•  Exposure to other Hadoop ecosystem technologies such as YARN, ZooKeeper, HDFS, Avro, Parquet, etc.

•  Experience cleansing and preparing large, complex data sets for reporting and analytics

•  Must have used data integration tools such as Talend to develop data pipelines and workflows

 

ADDITIONAL VALUABLE SKILLS

•  Exposure to Databricks is highly preferred

•  NoSQL databases such as HBase

•  Distributed messaging systems such as Kafka

•  Data architecture

•  Experience in a DevOps environment

•  Cloud platforms such as AWS

             
