Today's network technologies can handle throughputs of up to 100 Gbps, transporting 200 million packets per second on a single link. Such high bandwidths complicate network flow analysis and consequently demand significantly more powerful hardware. Current methods concentrate mainly on analyzing data flows and traffic patterns; actively searching network packets and flows for anomalies at these rates is nearly impossible, and even a small change in monitoring patterns can cause a large increase in potentially false-positive incidents. This paper focuses on multi-criteria analysis of system-generated data in order to predict incidents. We show that system-generated monitoring data are an appropriate source for analysis and allow much more focused and less computationally intensive monitoring operations. By applying appropriate mathematical methods to the stored data, useful information can be obtained. During our work, several interesting network anomalies were found by computing simple data correlations in the Zabbix monitoring system. We then prepared and pre-processed the data to classify servers and hosts by their behavior. We conclude that deeper analysis is feasible thanks to the Zabbix monitoring system and its features: an open-source core, a documented API, and an SQL backend for the data. The result of this work is a new analysis approach containing algorithms that identify significant items in the monitoring system.
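As a minimal sketch of the kind of "simple data correlation" described above, the snippet below computes pairwise Pearson correlations between item histories and flags strongly correlated pairs as candidate significant items. The item names and sample values are hypothetical; in a real deployment the series would be fetched from Zabbix via its JSON-RPC `history.get` method or read directly from the SQL `history` tables.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def flag_correlated_pairs(items, threshold=0.9):
    """Return pairs of item names whose histories are strongly correlated.

    `items` maps an item name to a list of sampled values. Pairs with
    |r| >= threshold are reported as candidates for closer inspection.
    """
    names = sorted(items)
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r = pearson(items[a], items[b])
            if abs(r) >= threshold:
                pairs.append((a, b, round(r, 3)))
    return pairs

# Hypothetical hourly samples: CPU load and inbound traffic on one host,
# plus an unrelated flat series from a second host.
history = {
    "host1.cpu.load": [0.2, 0.4, 0.9, 1.5, 1.4, 0.5],
    "host1.net.in":   [10, 22, 48, 80, 75, 26],
    "host2.cpu.load": [0.3, 0.3, 0.2, 0.3, 0.2, 0.3],
}
print(flag_correlated_pairs(history))
```

Thresholding on the absolute correlation keeps the operation cheap relative to per-packet inspection: only aggregated monitoring series are processed, not raw traffic.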
Part of the book: Proceedings of the 3rd Czech-China Scientific Conference 2017