Job Description:

Hi,

Greetings from Smart Folks!

My name is Hari Prasad, and we have a Data Engineer job opportunity for you. Please find the job description below; if you are available and interested, please send us a Word copy of your resume to or call me on .

Job Title: Data Engineer

Job Location: Portland, OR (ONSITE)

Job Description:

  • Establishes database management systems, standards, guidelines, and quality assurance for database deliverables such as conceptual design, logical database, capacity planning, external data interface specification, data loading plan, data maintenance plan, and security policy.
  • Documents and communicates database design.
  • Evaluates and installs database management systems.
  • Codes complex programs and derives logical processes on technical platforms.
  • Builds windows, screens, and reports.
  • Assists in the design of user interfaces and business application prototypes.
  • Participates in quality assurance and develops test application code in a client-server environment.
  • Provides expertise in devising, negotiating, and defending the tables and fields provided in the database.
  • Adapts business requirements developed by modeling/development staff and systems engineers, and develops the data, database specifications, and table and element attributes for an application.
  • At more experienced levels, helps to develop an understanding of the client's original data and storage mechanisms.
  • Determines the appropriateness of data for storage and the optimum storage organization.
  • Determines how tables relate to each other and how fields interact within the tables for a relational model.
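
As a minimal illustration of the relational-modeling work described above, the sketch below defines two related tables and joins them through a foreign key. It is written in Python with SQLite only to stay self-contained; the table and column names are hypothetical.

# Hypothetical example: how a foreign key makes two tables relate in a
# relational model. SQLite is used only so the sketch runs standalone.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(customer_id),
        order_total REAL NOT NULL
    );
""")
conn.execute("INSERT INTO customers VALUES (1, 'Acme Corp')")
conn.execute("INSERT INTO orders VALUES (10, 1, 250.0)")

# The foreign key is what lets the orders table join back to customers.
row = conn.execute("""
    SELECT c.name, o.order_total
    FROM orders o JOIN customers c ON c.customer_id = o.customer_id
""").fetchone()
print(row)  # ('Acme Corp', 250.0)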

Job Requirements:

5+ years of relevant work experience in the data engineering field

3+ years of experience working with Hadoop and big data processing frameworks (Spark, Hive, Flink, Airflow, etc.)

2+ years of strong experience with relational SQL and at least one programming language such as Python, Scala, or Java

Experience working in an AWS environment, primarily with EMR, S3, Kinesis, Redshift, Athena, etc.

Experience building scalable, real-time, and high-performance cloud data lake solutions (a brief illustrative sketch follows this list)

Experience with source control tools such as Git/GitHub and related CI/CD processes.

Experience working with Big Data streaming services such as Kinesis, Kafka, etc.

Experience working with NoSQL data stores such as HBase, DynamoDB, etc.

Experience with data warehouses/RDBMSs such as Snowflake and Teradata
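
As a minimal sketch of the kind of pipeline implied by the requirements above, the example below uses PySpark to read raw JSON events from S3, aggregate them with Spark SQL, and write partitioned Parquet back to a curated data-lake location. The bucket names, paths, and columns are hypothetical placeholders, and it assumes an environment (such as EMR) where Spark can read and write S3 directly.

# Hypothetical PySpark job: raw S3 events -> Spark SQL aggregation -> partitioned
# Parquet in a curated data-lake zone. Paths and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-to-lake").getOrCreate()

# Read raw JSON events previously landed in S3 (e.g., via Kinesis Firehose).
raw = spark.read.json("s3://example-raw-bucket/events/")

# Derive an event_date column to partition by.
events = raw.withColumn("event_date", F.to_date(F.col("event_ts")))

# Relational-SQL style aggregation through Spark SQL.
events.createOrReplaceTempView("events")
daily_counts = spark.sql("""
    SELECT event_date, event_type, COUNT(*) AS event_count
    FROM events
    GROUP BY event_date, event_type
""")

# Write partitioned Parquet to the curated zone of the data lake.
daily_counts.write.mode("overwrite").partitionBy("event_date") \
    .parquet("s3://example-curated-bucket/daily_event_counts/")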

Top Skills:

  • Python
  • SQL
  • AWS
  • Spark