
Tale of the Fluent Bit INPUT tail

This is the story of how to practically use the Fluent Bit tail input plugin. You might wonder whether there is official documentation on this. Yes, there is. But I guarantee this post will explain the content in a simpler and clearer manner.

Before going into the configuration, let me tell you about the versions and the environment in which I tried these tests out.

  • Fluent Bit version : 1.6.10


fluent-bit --version


  • OS version : Ubuntu 20.04.1 LTS


cat /etc/os-release


The Fluent Bit tail plugin is similar to the tail command you encounter on Unix, Unix-like systems, FreeDOS and MSX-DOS. Therefore, in order to begin, we need a file to read. We could use a default system file, but I prefer to create my own, even though that is out of the scope of this topic. The shell script below reads a file and writes each line to another file with a one-second delay. That gives us better control over what is written and what is missed.



# $input must hold the path of the file to read from
: > OutputFile.txt

while IFS= read -r line; do
  echo "$line" >> OutputFile.txt
  sleep 1
done < "$input"
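To try the writer out, you need a source file for `$input` to point at. A minimal way to set one up and run the loop is sketched below; the input file name and its contents are my own choice, since the original post does not say what `$input` held (the sleep is dropped so the demo finishes instantly):

```shell
# Create a small source file to feed the writer (name is my assumption)
printf 'first line\nsecond line\nthird line\n' > InputFile.txt

# Same loop as above, without the one-second delay
input="InputFile.txt"
: > OutputFile.txt
while IFS= read -r line; do
  echo "$line" >> OutputFile.txt
done < "$input"

cat OutputFile.txt   # prints the three lines
```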


Let’s begin the journey with the simplest configuration file. I have saved the above script, and the content of my working directory is as below.



First I ran the script to write to the file.



Then I started Fluent Bit with my configuration file.

fluent-bit -c example.conf


My configuration file looks like below.

[INPUT]
    Name    tail
    Path    ./OutputFile.txt

[OUTPUT]
    Name   stdout
    Match  *


If you are copying and pasting, please ensure you indent the code properly; otherwise you will get an “Invalid indentation level” error. The above code is the simplest configuration you can have with the tail input plugin. The output will be something like below.

[3] tail.0: [1609993104.104741564, {"log"=>"some text here"}]


In a more generalised form:

[number] tail.0: [Unix epoch time, {"log"=>"line content"}]
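The first field inside the brackets is the record timestamp, as seconds (plus nanoseconds) since the Unix epoch. On Ubuntu you can turn it into a readable date with GNU date; the value below is the one from the sample output:

```shell
# Convert the epoch seconds from the sample record to a UTC date
date -u -d @1609993104 +'%Y-%m-%d %H:%M:%S UTC'
# → 2021-01-07 04:18:24 UTC
```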


Now we shall try to modify the configuration file.

[INPUT]
   Name tail
   Path ./OutputFile.txt
   Tag  mytag


Then the output will look like below:

[number] mytag: [Unix epoch time, {"log"=>"line content"}]


What has happened here is that tail.0 has been replaced by the tag we introduced.


Before explaining another configuration parameter, I advise you to try the following scenario.

  • First start the script that writes to a file
  • Then run the Fluent Bit with above configuration file
  • Stop the Fluent Bit process for about a minute and re-run it 
  • Note down the last log line written in the first run of Fluent Bit and the first log line written in the rerun session
  • Compare them with the original log file from which your shell script is reading. You will see that Fluent Bit has missed some log lines during the time it was not running.
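The last step can also be checked mechanically rather than by eye, by diffing what Fluent Bit emitted against the source file. The sketch below simulates that comparison: fb.log stands in for Fluent Bit's captured stdout (in a real test you would redirect fluent-bit's output there instead of faking it), and the grep/sed pair assumes the stdout record format shown earlier:

```shell
# Simulated stdout from the two Fluent Bit runs; line "b" was written
# while Fluent Bit was stopped, so it never appears here
printf '[0] tail.0: [1609993104.1, {"log"=>"a"}]\n' >  fb.log
printf '[1] tail.0: [1609993106.1, {"log"=>"c"}]\n' >> fb.log
printf 'a\nb\nc\n' > OutputFile.txt   # the source file

# Pull the raw log lines back out of the stdout records
grep -o '"log"=>"[^"]*"' fb.log | sed 's/^"log"=>"//; s/"$//' | sort > seen.txt
sort OutputFile.txt > all.txt

# Lines present in the source but never emitted, i.e. the gap
comm -23 all.txt seen.txt   # prints: b
```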


What happens is that, with only the above configuration parameters, Fluent Bit reads only the log lines written after the Fluent Bit process has started. It does not keep track of the last read line of the file. In order to address this issue, a parameter called DB has been introduced. You can update your configuration file as below.

[INPUT]
    Name   tail
    Path   ./OutputFile.txt
    Tag    mytag
    DB     ./file_status.db


You will be able to see a database file created in the same location as your configuration file. You can open that file using the DB Browser for SQLite application.
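If you prefer the command line over DB Browser, the sqlite3 CLI can show the same data. The sketch below recreates the tracking table in a throwaway database so it is runnable anywhere; the in_tail_files table name and its columns are my reading of a 1.6.x DB file, so verify them against your own file with `.tables` and `.schema` first:

```shell
# Recreate the tracking table in a demo DB (schema is my assumption for 1.6.x)
db=demo_status.db
rm -f "$db"
sqlite3 "$db" 'CREATE TABLE in_tail_files (id INTEGER PRIMARY KEY, name TEXT, offset INTEGER, inode INTEGER, created INTEGER, rotated INTEGER);'
sqlite3 "$db" "INSERT INTO in_tail_files (name, offset) VALUES ('./OutputFile.txt', 1234);"

# On a real setup, run this against ./file_status.db instead
sqlite3 "$db" 'SELECT name, offset FROM in_tail_files;'   # prints: ./OutputFile.txt|1234
```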





The offset value is the position up to which Fluent Bit has last read the OutputFile.txt document. You can compare it with the last line written in the Fluent Bit log: open the OutputFile.txt file using vim and run the below command.

:goto offset_value


If you restart the Fluent Bit service, it will read from the very next line, which will solve the issue I described earlier.
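If you do not want to open vim, the same check works from the shell: `tail -c +N` prints a file starting at byte N, so offset + 1 shows exactly what Fluent Bit has not yet consumed. Below is a self-contained sketch; the file content and the recorded offset are made up for the demo:

```shell
# Build a small file and pretend the DB recorded offset 10
# ("first one\n" is exactly 10 bytes)
printf 'first one\nsecond one\nthird one\n' > demo.txt
offset=10   # bytes already consumed, as stored in the DB

# Everything from byte offset+1 onward is what a restart would pick up
tail -c +$((offset + 1)) demo.txt   # prints "second one" then "third one"
```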


What I am going to explain next is a little complex to recreate. In a real-world scenario, if we add another log file that already has some content written, will Fluent Bit be able to read that file from the beginning?

Follow the below steps thoroughly:

  • Edit the fluent-bit configuration file as below.
    [INPUT]
        Name tail
        Path ./OutputFile*.txt
        Tag mytag
        DB ./file_status.db


What I have changed is the Path configuration parameter in order to read all the files that match the given pattern.
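Before relying on the wildcard, you can preview from the shell which files a pattern will pick up; for a simple pattern like this one, Fluent Bit's Path globbing behaves like ordinary shell globbing:

```shell
# Create the two log files used in this section, then preview the match
: > OutputFile.txt
: > OutputFile2.txt
ls ./OutputFile*.txt   # lists both files, one per line
```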

  • I assume the database file created above already exists, holding the last read location of OutputFile.txt
  • Start the second script, which writes to OutputFile2.txt. Its content is almost the same as the script I used at the very beginning: I have changed the output file name and added a prefix to the log lines written to that file, in order to separate them in the Fluent Bit logs. For your convenience, the changes are shown below.



: > OutputFile2.txt

while IFS= read -r line; do
  echo "2 $line" >> OutputFile2.txt
  sleep 1
done < "$input"


  • Stop the above process after about ten seconds
  • Next, start the original script, which writes to OutputFile.txt
  • Start Fluent Bit service
  • Modify the second script so that the new lines it writes next can be identified.


echo "2 2 $line" >> OutputFile2.txt


  • Restart the second script
  • Check whether Fluent Bit has read the very first line of the OutputFile2.txt

What you are able to see is that Fluent Bit has started reading only from the new lines written after the process started. To solve that issue, Fluent Bit provides the Read_from_Head configuration parameter.

Let’s modify our configuration file like below.


[INPUT]
    Name     tail
    Path     ./OutputFile*.txt
    Tag      mytag
    DB       ./file_status.db
    Read_from_Head on


Okay, now you have the solution for that issue as well. I have shown you how to create an environment in which you can test this by yourself. We will meet soon in a new post to discuss how to send logs to a particular Splunk index using Fluent Bit.



© 2021 Creative Software. All Rights Reserved