
Setting up Splunk Forwarder

How to set up Splunk Forwarder. In this article, we will walk step by step through configuring Splunk Forwarder using the network data we acquired from Snort in the previous article.

Installing and configuring Splunk Forwarder

Splunk Forwarder is one of the components of the Splunk infrastructure. It acts as an agent that collects data from a remote machine and forwards it to an Indexer for further processing and storage.

Splunk Forwarder has a very small hardware footprint, typically consuming only 1-2% of CPU. It provides reliable, secure data collection from remote machines, indexing and consolidating the data so the Search Head can access it. All of this is done with minimal impact on system performance, even while the client machine is running other programs.

There are two types of Splunk Forwarders:

  1. Universal Forwarder - Contains only the components that are necessary to forward data.
  2. Heavy Forwarder - Full Splunk Enterprise instance that can index, search, and change data as well as forward it.

Comparison of the Universal Forwarder and the Heavy Forwarder, from the official Splunk website:

Features and capabilities                     | Universal forwarder                                                | Heavy forwarder
Type of Splunk Enterprise instance            | Dedicated executable                                               | Full Splunk Enterprise, with some features disabled
Footprint (memory, CPU load)                  | Smallest                                                           | Medium-to-large (depending on enabled features)
Bundles Python?                               | No                                                                 | Yes
Handles data inputs?                          | All types (but scripted inputs might require Python installation)  | All types
Forwards to Splunk Enterprise?                | Yes                                                                | Yes
Forwards to 3rd-party systems?                | Yes                                                                | Yes
Serves as intermediate forwarder?             | Yes                                                                | Yes
Indexer acknowledgment (guaranteed delivery)? | Optional                                                           | Optional (version 4.2 and later)
Load balancing?                               | Yes                                                                | Yes
Data cloning?                                 | Yes                                                                | Yes
Per-event filtering?                          | No                                                                 | Yes
Event routing?                                | No                                                                 | Yes
Event parsing?                                | Sometimes                                                          | Yes
Local indexing?                               | No                                                                 | Optional, by setting the indexAndForward attribute in outputs.conf
Searching/alerting?                           | No                                                                 | Optional
Splunk Web?                                   | No                                                                 | Optional

Splunk can transfer data in three different forms:

  1. Raw
  2. Unparsed
  3. Parsed

Raw data - the forwarder sends unaltered data over a TCP stream; it does not convert the data into Splunk's communication format. This is particularly useful for sending data to a non-Splunk system.

Unparsed data - a universal forwarder performs minimal processing, tagging the data stream with metadata that identifies the source, source type, and host (these are also known as "keys"). The data is also divided into 64-kilobyte blocks and timestamped if a timestamp is not already present.

Parsed data - a heavy forwarder breaks the data into individual events, then examines and annotates each one with key-value pairs, which may differ from event to event.

Both unparsed and parsed data are known as cooked data. By default, a forwarder sends cooked data: a universal forwarder sends unparsed data, and a heavy forwarder sends parsed data. If raw data is needed, this can be changed in outputs.conf by setting sendCookedData=false.
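As a minimal sketch, an outputs.conf that sends raw data to an indexer might look like the following (the group name raw_out is illustrative, and the indexer address is a placeholder for your own):

    [tcpout]
    defaultGroup = raw_out

    [tcpout:raw_out]
    server = <Indexer-IP-Address>:9997
    # Send the unaltered TCP stream instead of the default cooked format
    sendCookedData = false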

Configuring Splunk to collect and forward logs

We will use the data we gathered with Snort IDS and forward it to the Splunk Search Head.

  1. Download the Splunk Forwarder Debian package from the official website.
  2. Move the downloaded file to /opt, the optional software package directory.
  3. Navigate to the /opt directory, then install the package by entering sudo apt install ./splunkforwarder-x.x.x-xxx.deb
  4. Start the Splunk Forwarder by navigating to /opt/splunkforwarder/bin, then run sudo ./splunk start --accept-license. This will start the Splunk daemon.
  5. Enable boot-start so that Splunk starts automatically at boot: sudo /opt/splunkforwarder/bin/splunk enable boot-start
  6. Forward the data to the Splunk server by entering sudo ./splunk add forward-server <Splunk-Server-IP-Address>:9997
  7. Navigate to the /opt/splunkforwarder/etc/system/local directory, then check the configuration in the outputs.conf file. Ensure that the server and tcpout-server entries point to the correct server (see the sample outputs.conf after this list).
  8. Navigate back to the /opt/splunkforwarder/bin directory, then specify the file we want Splunk to monitor: sudo ./splunk add monitor /var/log/snort/alert
  9. Change the settings in the inputs.conf file in /opt/splunkforwarder/etc/apps/search/local as shown below.
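Regarding step 7: after running add forward-server in step 6, the generated outputs.conf should look roughly like the following (the group name default-autolb-group is what our install generated, and the IP address is from our lab setup; yours may differ):

    [tcpout]
    defaultGroup = default-autolb-group

    [tcpout:default-autolb-group]
    server = 172.20.10.4:9997

    [tcpout-server://172.20.10.4:9997]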
Note: you will need root permission to edit files in the local folder. Get root permission with the su root or su - command.
Edit the inputs.conf file as shown below.
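A minimal sketch of the monitor stanza in inputs.conf, assuming an index named snort and a sourcetype of snort_alert (both names are illustrative; use whatever suits your environment):

    [monitor:///var/log/snort/alert]
    disabled = false
    # Index - the name given to the data
    index = snort
    # Source type - tells Splunk what type of data this is
    sourcetype = snort_alert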

These settings control the data forwarded to the server:

  • TCP port - default port 9997
  • Server - 172.20.10.4 (yours may differ)
  • Data file - /var/log/snort/alert (the log file)
  • Index - the name given to the data
  • Source type - tells Splunk what type of data this is
  • Source - where the data is coming from

Once the inputs.conf file in the local folder has been edited, we can change back to the normal user with su <user>

10. Then navigate to /opt/splunkforwarder/bin/ and enter sudo ./splunk restart to restart the Splunk Forwarder.

11. On the Splunk server, add a new receiving port under Settings > Forwarding and receiving.

12. Once the Splunk Forwarder has restarted, we can log in to the web portal and check the indexed data.
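For quick reference, the full command sequence on the forwarder host looks roughly like this (the indexer address 172.20.10.4:9997 is from our lab setup; substitute your own):

    # Install the forwarder package from /opt
    cd /opt
    sudo apt install ./splunkforwarder-x.x.x-xxx.deb

    # Start the forwarder and enable start at boot
    cd /opt/splunkforwarder/bin
    sudo ./splunk start --accept-license
    sudo ./splunk enable boot-start

    # Point the forwarder at the indexer and monitor the Snort alert file
    sudo ./splunk add forward-server 172.20.10.4:9997
    sudo ./splunk add monitor /var/log/snort/alert

    # Apply the inputs.conf changes
    sudo ./splunk restart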


Visual Guide: Configuring Splunk Forwarder

Step 1 - Download the Splunk Universal Forwarder 64-bit (.deb) package
Steps 2-3 - Move the downloaded package to the /opt/ folder, then install Splunk Forwarder
Step 4 - Start Splunk and accept the licence
Step 6 - Forward the data to the Splunk server. We are using the data from Snort from here on
Step 7 - Ensure the server and port settings in the outputs.conf file are correct
Step 8 - Specify the log file you want the Splunk server to monitor
Step 9 - Update the inputs.conf file in /opt/splunkforwarder/etc/apps/search/local as shown above
Step 10 - Restart the Splunk Forwarder with the sudo ./splunk restart command
Step 11 - Set the receiving data port to 9997 and enable it
Step 12 - Log in to the Splunk server GUI and check the Data Summary to see whether the data has been fetched from Snort
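If the illustrative index name from the inputs.conf sketch above was used, a quick search on the Search Head confirms that events are arriving:

    index=snort sourcetype=snort_alert | head 10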

We have successfully ingested the Snort log file into the Splunk server.

We have looked at how to forward the Snort log data from the previous article. In future articles, we will look at how to extract data from the ingested log file, and how to analyse, visualise, and measure the data we have gathered.