Job description:
- Teradata
- Databricks
- Spark/PySpark
- Takes initiative and has command of the tools; works in a consultative manner rather than waiting for direction and orders.
- Experience working with both business and IT leaders
Duties:
- Collaborate with business and technical stakeholders to gather and understand requirements.
- Design scalable data solutions and document technical designs.
- Develop production-grade, high-performance ETL pipelines using Spark and PySpark.
- Perform data modeling to support business requirements.
- Write optimized SQL queries using Teradata SQL, Hive SQL, and Spark SQL across platforms such as Teradata and Databricks Unity Catalog.
- Implement CI/CD pipelines to deploy code artifacts to platforms like AWS and Databricks.
- Orchestrate Databricks jobs using Databricks Workflows.
- Monitor production jobs, troubleshoot issues, and implement effective solutions.
- Actively participate in Agile ceremonies including sprint planning, grooming, daily stand-ups, demos, and retrospectives.
Skills:
- Strong hands-on experience with Spark, PySpark, Shell scripting, Teradata, and Databricks.
- Proficiency in writing complex and efficient SQL queries and stored procedures.
- Solid experience with Databricks for data lake/data warehouse implementations.
- Familiarity with Agile methodologies and DevOps tools such as Git, Jenkins, and Artifactory.
- Experience with Unix/Linux shell scripting (KSH) and basic Unix server administration.
- Knowledge of job scheduling tools like CA7 Enterprise Scheduler.
- Hands-on experience with AWS services including S3, EC2, SNS, SQS, Lambda, ECS, Glue, IAM, and CloudWatch.
- Expertise in Databricks components such as Delta Lake, Notebooks, Pipelines, cluster management, and cloud integration (Azure/AWS).
- Proficiency with collaboration tools like Jira and Confluence.
- Demonstrated creativity, foresight, and sound judgment in planning and delivering technical solutions.
Additional Skills:
- AWS SQS
- Foresight
- Sound Judgment
- SQL
- Stored Procedures
- Databricks for Data Lake/Data Warehouse Implementations
- Agile Methodologies
- Git
- Jenkins
- Artifactory
- Unix/Linux Shell Scripting
- Unix Server Administration
- CA7 Enterprise Scheduler
- AWS S3
- AWS EC2
- AWS SNS
- AWS Lambda
- AWS ECS
- AWS Glue
- AWS IAM
- AWS CloudWatch
- Databricks Delta Lake
- Databricks Notebooks
- Databricks Pipelines
- Databricks Cluster Management
- Databricks Cloud Integration (Azure/AWS)
- Jira
- Confluence
- Creativity
We are an equal opportunity employer. All aspects of employment, including the decision to hire, promote, discipline, or discharge, will be based on merit, competence, performance, and business needs. We do not discriminate on the basis of race, color, religion, marital status, age, national origin, ancestry, physical or mental disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, citizenship/immigration status, veteran status, or any other status protected under federal, state, or local law.