Job Description:

Role: Database Developer

Location: Austin, TX

Visa Status: USC, GC

Looking for an experienced Senior Data Developer/Engineer with deep, hands-on expertise in large-scale PostgreSQL, SQL Server, and AWS technologies to help build out and support large-scale, low-latency, end-to-end data pipeline initiatives and surrounding services. This high-impact work enables agile decision making, using real-time customer/user data and intelligence to develop insights that engage and convert new business and potential customers.

  • Building out an end-to-end, high-scale, high-performance data pipeline and surrounding services
  • Hands-on development creating data streams, data objects, and data reporting, with scripting in SQL and in JSON or YAML
  • Designing, developing, and implementing a real-time streaming data pipeline and presenting data in a consumable form for a variety of uses and customers
  • Ensuring data compliance and working within modern standards to ensure that the data services meet or exceed all regulatory expectations
  • Collaborating with the existing data team and internal customers and partners to design, develop and deliver solutions and key business insights
  • Hands-on experience working with complex end-to-end data solutions for massively scaled products and services.
  • Deep expertise in AWS, PostgreSQL, Microsoft SQL Server, and Kafka technologies
  • A strong communicator with great organizational skills: there's a high level of cross-team collaboration, information from many verticals flows into this data pipeline, and there are a variety of stakeholders involved
  • Self-starter and innovator who likes to solve complex problems
  • Exceptional skills working autonomously and collaboratively; strong drive and self-direction
  • Be a good person; we're proud to work with every single OAD developer and engineer and you will be too!
  • Advanced hands-on engineering and development experience with the following tools and technologies required: SQL, PostgreSQL, Microsoft SQL Server, AWS Lambda, Amazon S3, QlikView, Apache Kafka, Snowflake
  • Intermediate hands-on engineering and development experience with the following tools and technologies required: JSON or YAML

Hands-on experience with the following tools and technologies highly preferred:

TypeScript, Node.js, PowerShell, Python, Salesforce, MuleSoft, Amazon Kinesis, Apache Spark, Cassandra, DynamoDB, Redis, Redshift, Amplitude, Adabas, Avaya/SAS


Work in the US, with the ability to work from home a minimum of 40 hours per week during our regular business hours

  • 1+ years' experience working in an agile development environment as part of a sprint team
  • 5+ years engineering and developing large-scale database, data-stream, and data-reporting solutions with high availability and redundancy
  • 5+ years engineering and developing large-scale PostgreSQL and SQL Server database systems, both virtualized and on AWS
  • 2+ years engineering and development with AWS Lambda and Amazon S3
  • 2+ years hands-on data streaming development/engineering experience with Apache Kafka
  • 6+ months developing QlikView, Tableau, or Power BI reports
  • 6+ months developing on Snowflake
