Job Description:
The engineer should have a background as a senior Hadoop developer.
The position supports data curation, ingestion, management, and client consumption.
The individual must be well versed in Big Data fundamentals such as HDFS and YARN.
More than a working knowledge of Sqoop and Hive is required, including an understanding of partitioning, data formats, compression, and performance tuning (see the sketch below).
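As a hedged illustration of the Hive skills above, the sketch below creates a partitioned, Parquet-backed table with Snappy compression through a Hive-enabled Spark session; the table name, columns, and properties are hypothetical examples, not details of the actual environment.

```python
from pyspark.sql import SparkSession

# Hive-enabled Spark session; the app name is illustrative only.
spark = (SparkSession.builder
         .appName("hive-partitioning-sketch")
         .enableHiveSupport()
         .getOrCreate())

# Partitioned, Parquet-backed table with Snappy compression
# (hypothetical schema for illustration).
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales (
        order_id BIGINT,
        amount   DOUBLE
    )
    PARTITIONED BY (ingest_date STRING)
    STORED AS PARQUET
    TBLPROPERTIES ('parquet.compression' = 'SNAPPY')
""")

# Partition pruning: the ingest_date predicate limits the scan to
# matching HDFS partition directories rather than the whole table.
spark.sql("SELECT SUM(amount) FROM sales WHERE ingest_date = '2024-01-01'").show()
```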
Basic Spark knowledge is required; strong knowledge of Spark in either Python or Scala is preferred (a minimal sketch follows).
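A minimal sketch of the kind of basic Spark (PySpark) work implied above; the HDFS paths and column names are assumptions for illustration, not part of the role's actual systems.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("spark-basics-sketch").getOrCreate()

# Hypothetical input location; substitute a real HDFS path.
df = spark.read.parquet("hdfs:///data/curated/sales")

# A typical aggregation: total amount per partition date.
daily = (df.groupBy("ingest_date")
           .agg(F.sum("amount").alias("total_amount")))

# Hypothetical output location for downstream client consumption.
daily.write.mode("overwrite").parquet("hdfs:///data/reports/daily_sales")
```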
SQL for Teradata/Oracle is required. Knowledge of other industry ETL tools (including NoSQL) such as Cassandra, Drill, and Impala is a plus.
The candidate should be comfortable with Unix and standard enterprise environment tools and technologies such as ftp, scp, ssh, Java, Python, and SQL.