Using Netflow, you can visualize your network traffic and use the collected data to analyze connections when troubleshooting (which is what I use it for). All kinds of collectors are on the market, most of them paid applications, but why not use ELK for this and visualize your traffic using Kibana?
I assume you already have a working ELK stack, with Elasticsearch, Logstash and Kibana all up and running.
First, we need to install the Netflow codec for Logstash:
# /opt/logstash/bin/logstash-plugin install logstash-codec-netflow
Note: Logstash 2.3.4 ships with logstash-codec-netflow 2.1.0, which has a bug in combination with this Logstash version. A fix is already available, so if you’re on Logstash 2.3.4, please install the newer codec version:
# /opt/logstash/bin/logstash-plugin install --version 2.1.1 logstash-codec-netflow
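To double-check which codec version actually ended up installed, you can list the installed plugins with their versions (the grep is just for convenience):
# /opt/logstash/bin/logstash-plugin list --verbose | grep netflow
If the fixed codec is active, this should print something like logstash-codec-netflow (2.1.1).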
Once the codec is installed, we need to create an input for the netflow data. This could be quite simple, as most of the work is handled by the codec.
This is what my input looks like:
input {
  udp {
    port => 9996
    type => "netflow"
    codec => netflow {
      versions => [5, 9, 10]
    }
  }
}

output {
  if [type] == "netflow" {
    elasticsearch {
      hosts => localhost
      index => "netflow-%{+YYYY.MM.dd}"
    }
  }
}
This will listen for Netflow data on UDP port 9996 and will export the data to its own daily index, netflow-YYYY.MM.dd in this case.
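Before restarting anything, it doesn’t hurt to check the configuration syntax first. This is only a sketch, assuming you saved the snippet above in the default configuration directory of a package-based install (/etc/logstash/conf.d/); adjust the paths if yours differ:
# /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/
Logstash should answer with “Configuration OK” when the files parse correctly.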
Now we can configure the Cisco ASA to export its netflow data to Logstash.
Please replace 10.1.2.3 with the IP address of your Logstash host:
access-list global_mpc extended permit ip any any
flow-export destination inside 10.1.2.3 9996
class-map global_class
 match access-list global_mpc
policy-map global_policy
 class global_class
  flow-export event-type all destination 10.1.2.3
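If you want to verify on the ASA side that flows are actually being exported, the firewall keeps export counters you can inspect (a quick check; the exact output differs per software version):
show flow-export counters
A rising packet counter here means the ASA is sending NSEL records towards your collector.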
That’s all! After reloading Logstash to activate the new input, your netflow data should be coming in.
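On a package-based install, reloading usually just means restarting the service, after which you can check whether the daily index appears in Elasticsearch. The service name and the curl target below assume a default install with Elasticsearch listening on localhost:
# service logstash restart
# curl -s 'localhost:9200/_cat/indices/netflow-*?v'
Once flows arrive, the _cat output should list an index matching the netflow-YYYY.MM.dd pattern with a growing document count.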
I haven’t created any Kibana Netflow dashboards yet, but feel free to share yours once you’ve created one! 🙂
Once you start to analyze the data, this table might come in handy:
ID | Type | Length (bytes) | Description
Connection ID Field
NF_F_CONN_ID | 148 | 4 | An identifier of a unique flow for the device
Flow ID Fields (L3 IPv4)
NF_F_SRC_ADDR_IPV4 | 8 | 4 | Source IPv4 address
NF_F_DST_ADDR_IPV4 | 12 | 4 | Destination IPv4 address
NF_F_PROTOCOL | 4 | 1 | IP protocol value
Flow ID Fields (L3 IPv6)
NF_F_SRC_ADDR_IPV6 | 27 | 16 | Source IPv6 address
NF_F_DST_ADDR_IPV6 | 28 | 16 | Destination IPv6 address
Flow ID Fields (L4)
NF_F_SRC_PORT | 7 | 2 | Source port
NF_F_DST_PORT | 11 | 2 | Destination port
NF_F_ICMP_TYPE | 176 | 1 | ICMP type value
NF_F_ICMP_CODE | 177 | 1 | ICMP code value
NF_F_ICMP_TYPE_IPV6 | 178 | 1 | ICMP IPv6 type value
NF_F_ICMP_CODE_IPV6 | 179 | 1 | ICMP IPv6 code value
Flow ID Fields (INTF)
NF_F_SRC_INTF_ID | 10 | 2 | Ingress interface SNMP ifIndex
NF_F_DST_INTF_ID | 14 | 2 | Egress interface SNMP ifIndex
Mapped Flow ID Fields (L3 IPv4)
NF_F_XLATE_SRC_ADDR_IPV4 | 225 | 4 | Post-NAT source IPv4 address
NF_F_XLATE_DST_ADDR_IPV4 | 226 | 4 | Post-NAT destination IPv4 address
NF_F_XLATE_SRC_PORT | 227 | 2 | Post-NAT source transport port
NF_F_XLATE_DST_PORT | 228 | 2 | Post-NAT destination transport port
Mapped Flow ID Fields (L3 IPv6)
NF_F_XLATE_SRC_ADDR_IPV6 | 281 | 16 | Post-NAT source IPv6 address
NF_F_XLATE_DST_ADDR_IPV6 | 282 | 16 | Post-NAT destination IPv6 address
Status or Event Fields
NF_F_FW_EVENT | 233 | 1 | High-level event code: 0 = Default (ignore), 1 = Flow created, 2 = Flow deleted, 3 = Flow denied, 4 = Flow alert, 5 = Flow update
NF_F_FW_EXT_EVENT | 33002 | 2 | Extended event code; these values provide additional information about the event
Timestamp and Statistics Fields
NF_F_EVENT_TIME_MSEC | 323 | 8 | The time the event occurred, which comes from IPFIX. Use 324 for time in microseconds and 325 for time in nanoseconds. Time is counted in milliseconds since 00:00 UTC, January 1, 1970
NF_F_FLOW_CREATE_TIME_MSEC | 152 | 8 | The time the flow was created, included in extended flow-teardown events where the flow-create event was not sent earlier. The flow duration can be determined from the flow-teardown and flow-create event times
NF_F_FWD_FLOW_DELTA_BYTES | 231 | 4 | The delta number of bytes from source to destination
NF_F_REV_FLOW_DELTA_BYTES | 232 | 4 | The delta number of bytes from destination to source
ACL Fields
NF_F_INGRESS_ACL_ID | 33000 | 12 | The input ACL that permitted or denied the flow. All ACL IDs are composed of three four-byte values: the hash value or ID of the ACL name; the hash value, ID, or line of an ACE within the ACL; and the hash value or ID of an extended ACE configuration
NF_F_EGRESS_ACL_ID | 33001 | 12 | The output ACL that permitted or denied a flow
AAA Fields
NF_F_USERNAME | 40000 | 20 | AAA username
NF_F_USERNAME_MAX | 40000 | 65 | AAA username of maximum permitted size
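As a quick sanity check you can query these fields straight from Elasticsearch. Keep in mind that the Logstash netflow codec stores the decoded fields under a netflow.* prefix and that the exact field names depend on the codec version and on the template your ASA exports, so treat the field name and the address below as placeholders:
# curl -s 'localhost:9200/netflow-*/_search?pretty' -d '{ "query": { "term": { "netflow.ipv4_dst_addr": "192.0.2.1" } }, "size": 5 }'
This returns up to five flow records with the given destination address, which is an easy way to confirm that the fields from the table are being indexed.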
This table was taken from the Cisco ASA NetFlow Implementation Guide. Take a look at it, as it contains a lot of interesting information!
If you’d like to know more about Netflow and get a more thorough understanding of how it works and what you can use it for, I recommend this Cisco Press book: Network Security with NetFlow and IPFIX: Big Data Analytics for Information Security.
Did you use that version or this one?
1. logstash-all-plugins-2.3.4-1.noarch.rpm
2. logstash-2.3.4-1.noarch.rpm
Before you install the plugin, you need to delete the old one:
# /opt/logstash/bin/plugin uninstall logstash-codec-netflow
Everyone says that this is enough to get decoded flow data. But when applying the command
# /opt/logstash/bin/logstash -e 'input { udp { port => 9996 codec => netflow } } output { stdout { codec => rubydebug } }'
I saw only endless errors:
:message=>"No matching template for flow id 265", :level=>:warn}
:message=>"No matching template for flow id 263", :level=>:warn}
:message=>"No matching template for flow id 256", :level=>:warn}
:message=>"No matching template for flow id 260", :level=>:warn}
I waited for 20 minutes and logstash.log had grown to 12 megabytes, consisting only of these "No matching template for flow id" records.
Hi,
I use Debian and installed logstash-2.3.4 from the Elastic repository.
What version of Netflow events are you sending to logstash? And are you sending from an ASA or from a Catalyst switch?
Thank you for your answer! I have solved the problem!
I use NetFlow v9 on a Cisco ASA. The solution was to use the same package that you used:
- logstash-2.3.4-1.noarch.rpm
- # /opt/logstash/bin/logstash-plugin install logstash-codec-netflow (in my case I had uninstalled it, which was a mistake)
- # /opt/logstash/bin/logstash-plugin install --version 2.1.1 logstash-codec-netflow
I understand that this is free content and there are a lot of different errors out there. In any case, thanks for your guide, it works 100%. It would be nice if you added this command to it:
# /opt/logstash/bin/logstash -e 'input { udp { port => 9996 codec => netflow } } output { stdout { codec => rubydebug } }'
It helps to see your results in real time.
Good luck!
I would like to ask: do you have any ready-made dashboards for visualizing NetFlow v9 metrics in Kibana?
But a day later I got this error, has anyone else faced it?
UDP listener died {:exception=>#, :backtrace=>["org/jruby/ext/socket/RubyUDPSocket.java:160:in `bind'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-2.0.5/lib/logstash/inputs/udp.rb:67:in `udp_listener'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-2.0.5/lib/logstash/inputs/udp.rb:50:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:342:in `inputworker'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:336:in `start_input'"], :level=>:warn}
^CSIGINT received. Shutting down the agent. {:level=>:warn}
No, I didn’t create any visualizations yet. When I do, I’ll post them here.
I am also working in this direction! I look forward to your success!
I have met the same problem as you, like this:
No matching template for flow id 261 {:level=>:warn}
How do I solve it?
You need to use the specific version, Logstash 2.3.4, not the all-plugins package. And do not remove the old netflow codec.