
Configure one-time data import using Netflow for testing purposes

Configure and test Service Mapping discovery process based on data collected using the Netflow protocol.

Before you begin

Learn about Traffic-based discovery in Service Mapping.

Role required: admin or sm_admin

About this task

In base systems, traffic-based discovery uses only TCP-related data collected with the help of the netstat and lsof commands. Discovery based on Netflow and VPC logs requires additional configuration. You can enrich traffic-based discovery by configuring Service Mapping to use the Netflow protocol. For more information about how Service Mapping collects Netflow data, see Data collection and discovery using Netflow.
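To illustrate the baseline, the netstat and lsof commands mentioned above expose the kind of TCP connection data that base-system traffic-based discovery works from. A minimal sketch (option flags shown are common Linux forms and may vary by platform; run with sufficient privileges to see process information):

```shell
# Established TCP connections from the kernel connection table.
if command -v netstat >/dev/null 2>&1; then
    netstat -tn | head -5
fi

# The same view via lsof: open, established TCP sockets per process.
if command -v lsof >/dev/null 2>&1; then
    lsof -iTCP -sTCP:ESTABLISHED -nP | head -5
fi
```

Netflow-based discovery supplements this host-local view with flow records observed on the network itself.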

For testing purposes, install the Netflow collector (nfdump) on a Unix server inside your organization. This Unix server must be different from the server hosting the MID Server.


  1. Download and install the Netflow collector (nfdump) on a Linux or Ubuntu server inside your organization.
    • For a Linux server, follow operational instructions at
    • For an Ubuntu server, open the command-line window and run the following command:

      sudo apt-get install nfdump

  2. Configure the Netflow collector to save nfdump files to a dedicated directory, for example /data/nfdump.
    For operational information, refer to
  3. Configure the Netflow collector to save data for one day:
    1. Open the command-line window on the server hosting the Netflow collector.
    2. Create a cron job by using the following command:
      crontab -e
    3. Enter the following command using the correct paths:
      */10 * * * * /usr/local/bin/nfexpire -e /data/nfdump -t 1d
  4. Create a file with the nfdump data. For example, use the following command:
    nfdump -q -m -R /data/nfdump/ -o extended -t 2016/07/06.07:00:00-2016/07/06.07:10:00 'inet and proto tcp' >> /tmp/my_file
  5. If the file is very large, you can compress it using the gzip format. Use the following command:
    gzip /tmp/my_file
  6. Copy the nfdump data file to the MID Server.
  7. Configure Service Mapping to receive data collected by the Netflow collector:
    1. Navigate to Service Mapping > Administration > Flow Connectors.
    2. Click New.
    3. Click nfdump file.
    4. On the nfdump file page, configure parameters as follows:
      Name: A descriptive name for the connector.
      nfdump data path: The path to the location on the MID Server where you saved the nfdump data file in step 6.
      MID Server: The MID Server to which you copied the nfdump file.
      Gzipped file: If you converted the nfdump file to gzip format before saving it on the MID Server, set this parameter to true so that the file is unzipped.
    5. Click Submit.
  8. Verify that Service Mapping collects data using Netflow:
    1. On the nfdump file form, select the newly configured connector and click Run now to start the data collection flow and populate the Flow Connection [sa_flow_connection] table.
    2. Navigate to System Definitions > Tables.
    3. Click the Flow Connection [sa_flow_connection] table.
    4. Under Related Links, click Show List.
    5. Verify that the table contains data.
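The data-preparation steps 4 through 6 above can be combined into a single script on the collector host. A minimal sketch, reusing the example paths from the procedure (/data/nfdump, /tmp/my_file); the "mid-host" name and the destination directory on the MID Server are placeholder assumptions, not values from the product:

```shell
#!/bin/sh
# Sketch of steps 4-6: export a ten-minute window of TCP flow records,
# compress the file, and copy it to the MID Server.
# Paths, the time window, and "mid-host" are examples; adjust for your environment.

NFDUMP_DIR=/data/nfdump
OUT=/tmp/my_file

# Skip gracefully on machines without the collector installed.
command -v nfdump >/dev/null 2>&1 || { echo "nfdump not installed"; exit 0; }

# Step 4: extract TCP flows for the chosen time window in extended format.
nfdump -q -m -R "$NFDUMP_DIR/" -o extended \
    -t 2016/07/06.07:00:00-2016/07/06.07:10:00 \
    'inet and proto tcp' >> "$OUT"

# Step 5: compress the export (then set "Gzipped file" to true on the connector form).
gzip -f "$OUT"

# Step 6: copy the compressed file to the MID Server (hypothetical host and path).
scp "$OUT.gz" mid-host:/opt/midserver/nfdump/
```

Scheduling this script itself via cron, alongside the nfexpire job from step 3, keeps a rolling window of fresh test data available for the connector.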

What to do next

If you are satisfied with the results of the test, configure Netflow-based data collection as described in Configure data collection using Netflow.