Job Description:
Job Title: NOC System Admin I / Splunk Admin
Job Location: San Francisco, CA 94105
8-Month Contract

Job Description:
5 years of Linux and Windows system administration
1-2 years of Splunk administration
The candidate will work on-site in San Francisco for the first 2-4 weeks and can then work remotely from anywhere in the US
Prior customer service experience is required
Intermediate Linux and Windows system administration, with working experience on the Splunk platform.
Craft and manage queries of complex data sets, and assist junior engineers in creating interactive apps and dashboards on the Splunk platform.
Maintain the Production System list and as-built documentation, and keep documentation up to date.
Assist the NOC supervisor in developing and documenting materials required for training key personnel
Manage the Change Request (CR) process from end to end.
Respond to customer calls, emails, and other forms of communication promptly and with courtesy; always treat the customer well.
Provide technical support, perform first-level problem isolation and root-cause analysis, and escalate as required
Systems Overview
Understand and use essential automation with Puppet and shell scripting
Operate running systems, including booting into different run levels, identifying processes, starting and stopping virtual machines, and controlling services
Configure local storage using partitions and logical volumes
Create and configure file systems and file system attributes, such as permissions, encryption, access control lists, and network file systems
Deploy, configure, and maintain systems, including software installation, update, and core services
Manage users and groups, including use of a centralized directory for authentication
Manage security, including basic firewall and SELinux configuration
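For illustration only, a minimal shell sketch of the storage and security tasks listed above; the device name, volume size, mount point, and ports are placeholders, not details of this environment.
    # Carve out a logical volume and mount it (placeholder device /dev/sdb)
    pvcreate /dev/sdb
    vgcreate vg_data /dev/sdb
    lvcreate -n lv_splunk -L 100G vg_data
    mkfs.xfs /dev/vg_data/lv_splunk
    mkdir -p /opt/splunk && mount /dev/vg_data/lv_splunk /opt/splunk

    # Open typical Splunk ports in firewalld and confirm the SELinux mode
    firewall-cmd --permanent --add-port=8000/tcp
    firewall-cmd --permanent --add-port=9997/tcp
    firewall-cmd --reload
    getenforce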
Splunk Overview
Identify Splunk components
Identify Splunk system administrator role
Experience with Splunk ES a plus
Splunk Apps
Describe Splunk apps and add-ons
Install an app on a Splunk instance
Manage app accessibility and permissions
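As a rough sketch of the app tasks above, assuming a placeholder app package and permission scheme:
    # Install an app package on a Splunk instance
    $SPLUNK_HOME/bin/splunk install app /tmp/my_app.tgz -auth admin:changeme

    # etc/apps/my_app/metadata/local.meta - share the app's objects globally,
    # readable by everyone and writable only by admins
    []
    access = read : [ * ], write : [ admin ]
    export = system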
Splunk Configuration Files
Manage configurations with major version control systems such as Git
Describe Splunk configuration directory structure
Understand configuration layering process
Use btool to examine configuration settings
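Because of configuration layering, settings in an app's local directory override the same settings in its default directory; btool shows the merged result. A quick sketch (the stanza shown is a placeholder):
    # Show the merged inputs configuration and the file each setting comes from
    $SPLUNK_HOME/bin/splunk btool inputs list --debug

    # Narrow the output to a single (placeholder) stanza
    $SPLUNK_HOME/bin/splunk btool inputs list monitor:///var/log/secure --debug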
Splunk Indexes
Describe index structure
List types of index buckets
Create new indexes
Monitor indexes with Monitoring Console
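A minimal indexes.conf sketch for creating a new index; the index name, paths, and size limit below are placeholders.
    # indexes.conf on an indexer (restart or reload required after changes)
    [web_prod]
    homePath   = $SPLUNK_DB/web_prod/db
    coldPath   = $SPLUNK_DB/web_prod/colddb
    thawedPath = $SPLUNK_DB/web_prod/thaweddb
    maxTotalDataSizeMB = 500000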
Index Management
Apply a data retention policy
Back up data on indexers
Delete data from an index
Restore frozen data
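For illustration, one way the retention and restore items above can look in practice; the index name, archive path, and bucket name are placeholders.
    # indexes.conf - roll buckets older than ~90 days to a frozen archive
    [web_prod]
    frozenTimePeriodInSecs = 7776000
    coldToFrozenDir = /backup/frozen/web_prod

    # Restore frozen data: copy a frozen bucket into thaweddb and rebuild it
    cp -r /backup/frozen/web_prod/db_1389230491_1389230488_5 $SPLUNK_DB/web_prod/thaweddb/
    $SPLUNK_HOME/bin/splunk rebuild $SPLUNK_DB/web_prod/thaweddb/db_1389230491_1389230488_5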
Getting Data In
Describe the basic settings for an input
List Splunk forwarder types
Configure the forwarder
Add an input to a Universal Forwarder (UF) using the CLI
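A minimal CLI sketch of the forwarder items above; the indexer hostname, index, and sourcetype are placeholders.
    # On a Universal Forwarder: point it at an indexer, then add a monitor input
    $SPLUNK_HOME/bin/splunk add forward-server idx01.example.com:9997 -auth admin:changeme
    $SPLUNK_HOME/bin/splunk add monitor /var/log/messages -index os_linux -sourcetype syslog
    $SPLUNK_HOME/bin/splunk list monitor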
Distributed Search
Describe how distributed search works
Explain the roles of the search head and search peers
Configure a distributed search group
List search head scaling options
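As a sketch of the distributed search items above (peer hostnames, credentials, and the group name are placeholders):
    # On the search head: register an indexer as a search peer
    $SPLUNK_HOME/bin/splunk add search-server https://idx01.example.com:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme

    # distsearch.conf - define a named group of search peers
    [distributedSearch:production]
    servers = idx01.example.com:8089, idx02.example.com:8089
Searches can then be restricted to that group with splunk_server_group=production.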
Understand and Explain Data Administration
Splunk Overview
Identify Splunk data administrator role
Getting Data In - Staging
o List the four phases of the Splunk indexing process
o List Splunk input options
o Describe the basic settings for an input
Configuring Forwarders
o Understand the role of production Indexers and Forwarders
o Understand the functionality of Universal Forwarders and Heavy Forwarders
o Configure Forwarders
o Identify additional Forwarder options
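A minimal outputs.conf sketch for the forwarder configuration items above; the indexer names and ports are placeholders.
    # outputs.conf on a Universal or Heavy Forwarder
    [tcpout]
    defaultGroup = primary_indexers

    [tcpout:primary_indexers]
    server = idx01.example.com:9997, idx02.example.com:9997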
Forwarder Management
o Explain the use of Forwarder Management
o Describe Splunk Deployment Server
o Manage forwarders using deployment apps
o Configure deployment clients
o Configure client groups
o Monitor forwarder management activities
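For illustration, a deployment server pairing sketched with assumed hostnames and class names (all placeholders):
    # deploymentclient.conf on each forwarder
    [target-broker:deploymentServer]
    targetUri = ds01.example.com:8089

    # serverclass.conf on the deployment server - map clients to a deployment app
    [serverClass:linux_uf]
    whitelist.0 = web*.example.com

    [serverClass:linux_uf:app:outputs_to_indexers]
    restartSplunkd = true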
Monitor Inputs
o Create file and directory monitor inputs
o Use optional settings for monitor inputs
o Deploy a remote monitor input
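A small inputs.conf sketch of the monitor items above; the paths, index, sourcetype, and whitelist pattern are placeholders.
    [monitor:///var/log/secure]
    index = os_linux
    sourcetype = linux_secure
    disabled = 0

    [monitor:///opt/app/logs]
    whitelist = \.log$
    ignoreOlderThan = 7d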
Network and Scripted Inputs
o Create network (TCP and UDP) inputs
o Describe optional settings for network inputs
o Create a basic scripted input
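A sketch of a network input and a basic scripted input, assuming a syslog feed on UDP 514 and a placeholder disk-check script:
    # inputs.conf
    [udp://514]
    sourcetype = syslog
    connection_host = ip

    [script://$SPLUNK_HOME/etc/apps/my_app/bin/check_disk.sh]
    interval = 300
    sourcetype = disk_usage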
Agentless Inputs
o Identify Windows input types and uses
o Understand additional options to get data into Splunk
o HTTP Event Collector
o Splunk App for Stream
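To illustrate the HTTP Event Collector item above, a test event sent with curl; the hostname and token are placeholders.
    curl -k https://splunk.example.com:8088/services/collector/event \
         -H "Authorization: Splunk 11111111-2222-3333-4444-555555555555" \
         -d '{"event": "hello from HEC", "sourcetype": "manual"}'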
Fine-Tuning Inputs
o Understand the default processing that occurs during input phase
o Configure input phase options, such as sourcetype fine-tuning and character set encoding
Parsing Phase and Data
o Understand the default processing that occurs during parsing
o Optimize and configure event line breaking
o Explain how timestamps and time zones are extracted or assigned to events
o Use Data Preview to validate event creation during the parsing phase
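A props.conf sketch of the parsing items above, assuming a placeholder sourcetype whose events begin with an ISO-style timestamp:
    [my_app:log]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%d %H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 19
    TZ = UTC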
Manipulating Raw Data
o Explain how data transformations are defined and invoked
o Use transformations with props.conf and transforms.conf to:
o Mask or delete raw data as it is being indexed
o Override sourcetype or host based upon event values
o Route events to specific indexes based on event content
o Prevent unwanted events from being indexed
o Use SEDCMD to modify raw data
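For illustration, one way the masking items above can be wired together; the sourcetype, field names, and patterns are placeholders.
    # props.conf
    [my_app:log]
    TRANSFORMS-mask = mask_account
    SEDCMD-mask_ssn = s/\d{3}-\d{2}-(\d{4})/xxx-xx-\1/g

    # transforms.conf - rewrite _raw as it is indexed
    [mask_account]
    REGEX  = ^(.*)account=\d{8}(.*)$
    FORMAT = $1account=xxxxxxxx$2
    DEST_KEY = _raw
Routing events to a specific index uses the same mechanism, with DEST_KEY = _MetaData:Index.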
Supporting Knowledge Objects
o Create field extractions
o Configure collections for KV Store
o Manage Knowledge Object permissions
o Control automatic field extraction
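A brief sketch of a search-time field extraction and a KV Store collection definition; the names and fields are placeholders.
    # props.conf - inline search-time field extraction
    [my_app:log]
    EXTRACT-user = user=(?<user_name>\w+)

    # collections.conf in an app - define a KV Store collection
    [asset_lookup]
    field.owner = string
    field.count = number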
Creating a Diag
o Identify the purpose of a Splunk diag
o Use Splunk diag to collect diagnostic information
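For reference, generating a diag from the CLI (the exclude pattern is only an example):
    # Creates a diag tarball in $SPLUNK_HOME for Splunk Support
    $SPLUNK_HOME/bin/splunk diag

    # Exclude files matching a glob from the bundle
    $SPLUNK_HOME/bin/splunk diag --exclude "*/passwd"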
             
