Job Description:
Job Title: Big Data Architect
Location: Estero, FL
Duration: 3 Months contract to Hire

Years of Experience: 12-15 (at least 3 years in Big Data)
Mandatory Skills: Spark, Python, AWS (EMR, EC2, S3, Redshift)
Good communication skills; onsite experience

12-15 years of experience in designing, architecting, and implementing large-scale data processing, data storage, and data distribution systems
Extensive experience working with large data sets with hands-on technology skills to design and build robust Big Data solutions
Ability to work with multi-technology/cross-functional teams and customer stakeholders to guide/manage the full life cycle of a Hadoop and/or Spark solution
Extensive experience in data modeling and database design involving any combination of:
Data warehousing and Business Intelligence systems and tools
Relational and MPP database platforms such as Netezza and Teradata
Open source Hadoop stack
Hands-on administration, configuration management, monitoring, and performance tuning of Hadoop/distributed platforms
Strong understanding of Big Data Analytics platforms and ETL in the context of Big Data
Ability to frame architectural decisions, provide technology leadership & direction
Excellent problem-solving, hands-on engineering, and communication skills
Professional Experience Preferred
Knowledge/experience of cloud computing infrastructure (e.g. Amazon Web Services EC2, Elastic MapReduce)
Considerable understanding and experience of real-time analytics
Should be willing to independently code and review modules as required (role is not limited to solution architecture)

Any combination of the technical skills below:
Hadoop: HDFS, MapReduce, Hive, HBase, Pig, Mahout, Avro, Oozie
Stream Processing: Spark, Storm, Samza
NoSQL: Cassandra, MongoDB, HBase
Appliances: Teradata, Netezza, Greenplum, Aster Data, Vertica
Languages/Platforms: Java, Scala, Perl/Python/PHP; Linux, Apache
Cloud: AWS, IBM SoftLayer, Microsoft Azure, Google Cloud
Any RDBMS/DWBI technologies