Job Description:

Position: Big Data Engineer with AWS, Python

Location: Minneapolis, MN

 

Responsibilities: 

·        Collaborate within and across Agile teams to design, develop, test, implement, and support technical solutions using data engineering development tools and technologies

·        Work with a team of developers with deep experience in Spark, Hive, machine learning, distributed microservices, and full-stack systems

·        Create and maintain overall optimal data pipeline architecture.

·        Assemble large, complex data sets that meet functional / non-functional business requirements.

·        Manage data migrations/conversions and troubleshoot data processing issues.

·        Utilize programming languages such as Python, Java, and Scala; RDBMS and NoSQL databases; and cloud-based data warehousing services such as Snowflake

·        Share your passion for staying on top of tech trends, experimenting with and learning new technologies, participating in internal & external technology communities, and mentoring other members of the engineering community

·        Perform unit testing and conduct reviews with other team members to ensure your code is rigorously designed, elegantly coded, and effectively tuned for performance

 

Required Qualifications:

·        5 to 10 years of experience

·        Experience working with distributed teams

·        Familiarity with the Agile/Scrum development process.

Programming/languages:

·        Python, SQL

·        JavaScript, Java, or Scala, a plus

Data engineering:

·        Spark/PySpark

·        Kafka, queue/messaging paradigms, a plus

AWS:

·        Security/networking basics, as well as S3, Kafka, Kinesis, Glue, RDS, NoSQL databases, Lambda, and Step Functions

·        Redshift and Athena, a plus

Not required, but bonus if you have:

·        Machine learning academic or work experience.

·        Kubernetes or Docker knowledge.

 

             
