Job Description:

• Bachelor’s or Master’s degree in Computer Science or a related discipline.
• Minimum of 12 years of experience in the design, development, and deployment of large-scale, distributed, cloud-deployed software services.
• Must have been part of at least two end-to-end big data projects and have handled defined modules independently.
• Expert in SQL, with strong data modelling skills for relational, analytical, and big data workloads.
• Advanced programming skills in Python, Scala, or Java.
• Strong knowledge of data structures, algorithms, and distributed systems.
• Strong experience with and deep understanding of Spark internals.
• Expert in Hive.
• Hands-on experience with at least one cloud platform (AWS, Azure, or GCP).
• Hands-on experience with at least one NoSQL database (e.g., HBase, Cassandra, MongoDB).
• Experience working with both batch and streaming datasets.
• Knowledge of at least one ETL tool such as Informatica, Apache NiFi, Airflow, or DataStage.
• Experience working with Kafka or a related message queue technology.
• Hands-on experience writing shell scripts to automate processes.
• Knowledge of building RESTful services and REST API endpoints for data consumption would be an added advantage.
• Willingness to learn and adapt.
• Delivery-focused, with a willingness to work in a fast-paced environment.
• Ability to take initiative and own the delivery of complex software.
• Experience building self-service analytics tools would be a plus.
• Knowledge of the ELK stack would be a plus.
• Knowledge of implementing CI/CD for data pipelines is a plus.
• Knowledge of containerization (Docker/Kubernetes) would be a plus.
• Excellent oral and written communication skills are a must.
• Well-versed in Agile methodologies, with experience working in Scrum teams.
