Open access peer-reviewed chapter - ONLINE FIRST

Perspective Chapter: Distinguishing Encrypted from Non-Encrypted Data

Written By

Eric Järpe and Quentin Gouchet

Reviewed: January 25th, 2022. Published: March 25th, 2022

DOI: 10.5772/intechopen.102856

Cryptography - Modern Theory and Practices, edited by Srinivasan Ramakrishnan




Discriminating between encrypted and non-encrypted information is desirable for many purposes. Much of the effort in this direction in the literature focuses on deploying machine learning methods for the discrimination of streamed data transmitted in packets over communication networks. Here, however, the focus and the methods are different. The retrieval of data from computer hard drives seized in police raids against suspected criminals is sometimes not straightforward. Typically the incriminating code, which may be important evidence in subsequent trials, is encrypted and quick deleted. The cryptanalysis of what can be recovered from such hard drives is then subject to time-consuming brute forcing and password guessing. To this end, methods for accurate classification of what is encrypted code and what is not are essential. Here a procedure for discriminating encrypted code from non-encrypted is derived. Two methods to detect where encrypted data are located on a hard disk drive are detailed, using passive change-point detection. Measures of performance of such methods are discussed and a new property for evaluation is suggested. The methods are then evaluated and discussed according to these performance measures.


  • likelihood ratio
  • change-point detection
  • cryptology
  • compression
  • uniform distribution

1. Introduction

The discrimination of encrypted data from other kinds of data is of interest in many areas of application. One instance is making other applications work on the communication traffic in a network, where the appropriate means of processing may depend on whether the traffic data is plaintext/cleartext, compressed, encrypted or otherwise encoded. There may also be security reasons (e.g. an uncontrolled flow of encrypted data, some of which may be transmitted for malicious purposes) arguing for better network supervision tools. To these ends, various methods, mainly from machine learning, have been suggested over the last decades.

1.1 Methods for discrimination in case of streamed data

As a foundation for the treatment, some kind of preprocessing is due. Among methods for this step are the chi-square statistic, Shannon entropy, Kolmogorov-Smirnov statistic, Lilliefors test, Cramér-von Mises statistic, discrete runs test, autocorrelation test and the index of coincidence test, to mention some [1, 2].

In [1] the problem of filtering encrypted data when monitoring traffic outbound from a computer or a network, so-called egress filtering, is considered. Many aspects and properties of the problem are brought to attention and an extensive account of various evaluation techniques is given.

Different measures for discriminating a given distribution from the normal distribution are the topic in [2]. For discrimination between encrypted and non-encrypted data other distributions than the normal are more relevant but some of the measures considered in this paper can also be used for the more general case.

In [3] machine learning methods are deployed in a fully automated method for distinguishing between encrypted and compressed data, claimed to be the first of its kind. A dataset for further evaluation and benchmarking is also provided.

A method for discriminating between compressed and encrypted data in the case when the data is voice over IP with constant and variable bit rate codecs is proposed in [4]. The method is evaluated utilizing the NIST [5] and ENT [6] test suites.

Reference [7] does not suggest a solution to the discrimination problem but rather a method for fast and distribution-true simulation of data reflecting the properties of encrypted and compressed data, respectively.

In Ref. [8] a classification procedure based on Gaussian mixtures and hidden Markov models is detailed. An elaborate comparative evaluation is carried out, taking 9 other attempts to solve the same problem into account. The study concludes that the proposed method renders a better classification at a lower computational complexity.

A lucid convolutional neural network method to classify data as encrypted or non-encrypted is presented in [9]. The proposed method is thoroughly evaluated on various kinds of network traffic and against several performance measures from the field of machine learning.

1.2 Methods for discrimination in case of static data

A different need for discrimination between encrypted and unencrypted data is the following. In police investigations concerning serious and organized crime, where evidence of criminal activity resides as data on a computer hard disk drive (HDD), the capability to distinguish encrypted files from non-encrypted ones may be paramount [10, 11, 12]. When the police seize an HDD containing data belonging to a suspected criminal, that data can be material evidence in a subsequent trial. But criminals often try to make sure that the police will not be able to use that data. Possible actions to obstruct data access are to encrypt and/or to quick delete that data. In case of a quick delete, the pointers to the files are destroyed but the contents of the files are still left. The other action is to encrypt files. Using state-of-the-art tools for encryption of files (such as VeraCrypt, Bitlocker, Ciphershed and the ciphers they provide) there is no general procedure for breaking the encryption other than a brute force attack, guessing the cipher algorithm and systematically guessing the encryption key [13]. If some of the files are encrypted and others not, it is then a delicate matter to distinguish between the two once the code has been quick deleted. The importance of discriminating between encrypted code and plaintext is also stressed by the fact that brute force attacking all clusters of data on the hard drive would be an impossible task, while being able to separate the encrypted code from the non-encrypted substantially improves the chances of successful cryptanalysis. The police authorities usually have software to brute force such encrypted data, but this procedure may be very time consuming if the amount of data is so large that different parts of it have been encrypted with different keys.
In juridical cases, time is typically of the essence: since the chances of success in prosecution and proceedings against a criminal depend on deadlines (such as time of arrest, time of trial, etc.), any time savings in the procedure of extracting evidence from the HDDs are essential. Thus time has to be spent on the appropriate tasks: code-breaking only on the encrypted data rather than trying to decipher non-encrypted data. Given a single file, the task of determining whether it is encrypted or not is usually easy; but given a whole HDD without knowing where the encrypted files are, this is trickier.

There are several software solutions for indicating whether a file is encrypted or not, mostly checking the header of the file and looking for some known pattern such as the EFS (Encrypting File System on Windows), BestCrypt, or other software headers. Such alternatives cannot be used when the user has performed a (quick) delete of the HDD, because then the pointers to all files are lost. Nevertheless, the files remain on the HDD: upon deleting a file on an HDD, the pointers to the beginning and end of the physical space on the HDD containing the information of the file are removed. Still, the physical space itself is not overwritten, because that operation would be slow, i.e. as slow as writing the same file again. The operating system's designers rather leave the file on the HDD intact but indicate that this location is free to host other data. As long as nothing has been written over it, the file is thus still stored on the HDD for some time, which allows recovery software to recover it.

Recovery software might help to simply recover the files but might also overwrite the data contained in HDD which could result in loss of evidence in case of an investigation. In this case, the police would have to locate the encrypted files without using recovery software or “header-detector” software.

Here methods to locate quick deleted encrypted data are presented and detailed. First, a description of how encrypted data is different from other data is presented. This is followed by the introduction to statistical change-point methods for discrimination between encrypted and non-encrypted data. Finally, the results of these procedures are presented along with some experimental values to evaluate the methods.

However, such a method will only work on mechanical HDDs and not on flash memory devices: in flash memory (such as USB memory sticks or solid state drives (SSDs)), data cannot simply be overwritten in place. Therefore, as soon as data is deleted, the operating system will delete both the pointers and the data, to save time for when the user decides to save new data at this location. Erasing in flash memory consequently takes longer, since the pointers have to be deleted and all the file contents removed, which takes as long as copying new files onto the device.

This application differs from the one mentioned in Subsection 1.1. In the previous case, data is commonly considered to be transmitted in packets in a network, while here there is static access to the data. In the previous situation, the data consists of sometimes small files, where statistical methods for indicating whether the data is encrypted or not become weaker [14], as opposed to the situation here, where there is a large amount of data, possibly of both kinds, encrypted and non-encrypted. In the previous case, with data dynamic in all senses (variable in size, access, time, kind), methods of machine learning that can pick up on the current circumstances are preferable, while here, in the case of fixed data, fast and efficient methods of change-point detection become a far more advantageous choice. The machine learning methods need training data, while the change-point methods can start from scratch with pre-defined parameter values optimized for the situation, or with very little run-in data for calibrating the levels. The performance of the machine learning methods is still a matter of research, since these are highly structured procedures, sometimes black-box techniques whose properties are impossible to fully evaluate, while the change-point detection methods have long since been optimized and very thoroughly evaluated with respect to all kinds of performance aspects.


2. Method

2.1 Description

If encrypted data were not uniformly distributed, the cryptosystem used to cipher the data would have a bias and could therefore be attacked. For this reason, the characters of the ciphertext produced by any modern high-quality cryptosystem are uniformly distributed [15, 16], i.e. the values of the bytes of the ciphertext are uniformly distributed on some character interval. Indeed, a cryptosystem that did not have this property would be weak, since it would be possible to attack it via this bias (distinguishing attacks, such as on the RC4 encryption algorithm). Other types of files do not possess this feature, although the contents of some types of files are close to being uniformly distributed. The files coming closest without being encrypted are compressed/zipped files: those files are indeed very close to cipher files in terms of the distribution of their characters' byte values. Albeit small, there is a difference in distribution, making it possible to tell compressed files and encrypted files apart.

A technique to quantify this distribution difference is to use the chi-square statistic, more or less performing a chi-square test (see [17]) to tell whether the data is suspiciously uniform or not. Another classical method is to calculate the Kolmogorov-Smirnov distance; see e.g. [1, 2] for a more extensive account of ways to indicate whether data are encrypted or not. Procedures building on such preprocessors can then be defined.

One attempt to exploit this distribution difference utilizing the chi-square statistic is [18], where the discrimination between encrypted and non-encrypted (i.e. most critically compressed) data is automated into a method with impressive performance. Another approach could be to use anomaly detection from the theory of machine learning to develop an adaptive method. The method proposed here, however, stems from statistical change-point detection. The anticipated advantage of this is the mathematically proven optimality of these methods in terms of efficiency and accuracy under the given assumptions. In many applications, assumptions relying entirely on the in-control and out-of-control distributions of the data are a subject of great controversy. Here, the situation is different though, since the hard drive and its data are not a dynamic system where the content changes its distribution owing to outer time-dependent circumstances.

Before going into detail about these methods the preprocessing techniques are introduced.

2.2 Distribution of encrypted data

The working hypothesis is that the data (i.e. characters) constituting encrypted files are uniformly distributed, while the data of non-encrypted files are not (i.e. they are differently distributed depending on the type of non-encrypted file). The goal now is to be able to tell an encrypted file from a non-encrypted one.

Let us assume the data consists of characters divided into clusters, $c_1, c_2, c_3, \ldots$, with $N$ characters in each cluster. The characters range over some alphabet, a set of possible forms. Merging these forms into $K$ classes, the counts $O_{kt}$ of occurrences of class $k$ characters in cluster $t$ are observed. One method of measuring distribution agreement is by means of a $\chi^2$ test statistic, $Q_t = \sum_{k=1}^{K} (O_{kt} - E_{kt})^2 / E_{kt}$, where $E_{kt}$ are the expected counts of occurrences of characters in class $k$, cluster $t$. Under the hypothesis of uniformly distributed characters, the expected count within each class is $E_{kt} = N/K$, and the statistic simplifies to

$$Q_t = \frac{K}{N} \sum_{k=1}^{K} O_{kt}^2 - N,$$
the values of which are henceforth referred to as $Q$ scores. Large $Q$ scores indicate deviance from the corresponding expected frequencies $E_{kt}$. The smallest possible $Q$ score, 0, would be attained if $O_{kt} = E_{kt}$ for all $k$. The expected value, $E_{kt}$, in each class should not be smaller than 5 for the statistic to be relevant (5 is a commonly used value; other values like 2 or 10 are sometimes used depending on how stable the test statistic should be to small deviances in tail probabilities). Therefore one should use at least $5K$ bytes of data in each cluster for this test to be relevant. But the larger the number of bytes in each cluster, the worse the precision in locating encrypted data: if clusters containing encrypted bytes are classified as non-encrypted, a large amount of encrypted data might go undetected in the procedure.

Here, the alphabet used was the numbers $0, 1, \ldots, 255$, representing the possible values of the bytes in the data. These numbers were divided into $K = 8$ classes (class 1: byte values in $\{0, 1, \ldots, 31\}$, up to class 8: byte values in $\{224, 225, \ldots, 255\}$), and the clusters were of size $N = 64$ bytes, making the expected count in each class $E_{kt} = 8$. Assuming that encrypted data are uniformly distributed, the $Q$ scores based on counts of characters in encrypted data are $\chi^2$ distributed, see Figure 1, and since 8 classes were chosen, the number of degrees of freedom is $8 - 1 = 7$.
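As an illustration of the cluster statistic above, the following minimal Python sketch computes the $Q$ score of a 64-byte cluster with $K = 8$ classes using the simplified form $Q_t = (K/N)\sum_k O_{kt}^2 - N$. The function name and the uniform-random stand-in for encrypted data are illustrative, not part of the original method:

```python
import random

K = 8     # number of byte-value classes
N = 64    # cluster size in bytes

def q_score(cluster):
    """Chi-square Q score of one cluster against the uniform hypothesis,
    using the simplified form Q = (K/N) * sum(O_k^2) - N."""
    counts = [0] * K
    for b in cluster:
        counts[b // (256 // K)] += 1   # byte value 0-255 mapped to class 0-7
    return (K / N) * sum(o * o for o in counts) - N

# Encrypted-like data modeled as uniformly random bytes (illustrative)
random.seed(0)
cluster = bytes(random.randrange(256) for _ in range(N))
print(q_score(cluster))   # typically near E[chi2(7)] = 7 for uniform data
```

A perfectly class-balanced cluster gives $Q = 0$, while clusters with skewed byte classes give large scores.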

Figure 1.

Distribution of the $Q$ scores of encrypted files (obtained by using more than 5000 files) together with the distribution function of $Q \sim \chi^2(7)$.

2.3 Distribution of non-encrypted data

For non-encrypted data, the distribution is more complicated. Each type of file has its own distribution. Consequently, the standardized squared deviances from the expected counts under the uniform-distribution assumption are larger, and so are the $Q$ scores of the $\chi^2$ statistic. However, two problems emerge. Firstly, the size of these increased deviances depends on the type of data (is the data a text file, an image, a compiled program, a compressed file or some other kind of file?), so how should this information be properly taken into account? Secondly, what is the distribution of the $Q$ score in the case when the data are not encrypted?

To develop a method for distinguishing between encrypted and non-encrypted data, it is sufficient to focus on the non-encrypted data most similar to encrypted data, which turns out to be compressed data. Other types of files, such as images, compiled programs etc., commonly render higher $Q$ scores and are therefore indirectly distinguished from encrypted data by a method calibrated for discriminating between encrypted and compressed data. The second question is not readily answered. Rather, the $Q$ score is simply modeled as being scaled $\chi^2$ distributed, i.e. the $Q$ score is assumed to have the distribution of the random variable $\alpha X$ where $\alpha > 1$ and $X \sim \chi^2(7)$. The validity of this approach is sustained by an empirical evaluation based on more than 5000 compressed files. The resulting empirical distribution of their $Q$ scores, together with the distribution of $\alpha X$ where $X$ is $\chi^2$ distributed and the value $\alpha = 1.7374$ was estimated by the least squares method, is plotted in Figure 2.
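The least squares estimation of the scale $\alpha$ can be sketched as follows, assuming only the scaled-$\chi^2$ model described above. The grid of candidate $\alpha$ values, the sample size and the series-based $\chi^2$ CDF are illustrative choices, not the authors' actual implementation:

```python
import math
import random

def chi2_cdf(x, k=7):
    """Chi-square CDF via the power series of the regularized lower
    incomplete gamma function P(k/2, x/2)."""
    if x <= 0:
        return 0.0
    a, y = k / 2, x / 2
    term = math.exp(a * math.log(y) - y - math.lgamma(a + 1))
    total, n = term, 0
    while term > 1e-12 * total:
        n += 1
        term *= y / (a + n)
        total += term
    return total

def fit_alpha(q_scores, k=7):
    """Least squares fit of the scale alpha: empirical CDF of the Q scores
    against the CDF of alpha*X, X ~ chi2(k), over a grid of alpha values."""
    qs = sorted(q_scores)
    n = len(qs)
    emp = [(i + 0.5) / n for i in range(n)]
    best, best_err = 1.0, float("inf")
    for j in range(1, 101):          # grid alpha = 1.02, 1.04, ..., 3.00
        alpha = 1.0 + 0.02 * j
        err = sum((emp[i] - chi2_cdf(qs[i] / alpha, k)) ** 2 for i in range(n))
        if err < best_err:
            best, best_err = alpha, err
    return best

# Simulated "compressed" Q scores: alpha * chi2(7) built from squared normals
random.seed(1)
sample = [1.74 * sum(random.gauss(0.0, 1.0) ** 2 for _ in range(7))
          for _ in range(500)]
print(fit_alpha(sample))   # should land near the simulated scale 1.74
```

In practice the empirical CDF would of course come from the $Q$ scores of real compressed files rather than simulated data.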

Figure 2.

Distribution of the $Q$ scores of compressed files (obtained by using more than 5000 files) together with the distribution function of $Q = 1.7374 X$ where $X \sim \chi^2(7)$.

2.4 Change-point detection

Change-point detection [19, 20, 21, 22, 23] is a field of mathematical statistics where the objective is to quickly and accurately detect a shift in distribution from on-line observation of a random process. This can be done actively (stop collecting the data as soon as a shift is detected) or passively (continue collecting the data even if a shift is detected, in order to detect more shifts). Here, passive on-line change-point detection was used to detect whether the data from an HDD shifts from non-encrypted to encrypted and vice versa. The change-point detection method is a stopping rule

$$\tau = \min\{t \ge 1 : a_t > C\}$$

where $a_t$ is called the alarm function and $C$ the threshold. The design of the alarm function defines different change-point detection methods, while the value of the threshold reflects the degree of sensitivity of the stopping rule. The alarm function may be based on the likelihood ratio

$$L_{st} = \frac{f_{Q^{(t)}}(q^{(t)} \mid \theta = s)}{f_{Q^{(t)}}(q^{(t)} \mid \theta > t)}$$

where $f_{Q^{(t)}}(q^{(t)} \mid A)$ is the conditional joint density function of the random variables $Q^{(t)} = (Q_1, \ldots, Q_t)$ given $A$, $q^{(t)}$ is the vector of observed values of $Q^{(t)}$, and $\theta$ denotes the change-point. Assuming independence of the variables $Q_1, \ldots, Q_t$, the likelihood ratio simplifies to

$$L_{st} = \prod_{u=s}^{t} \frac{f_1(q_u)}{f_0(q_u)}$$

where $f_0(q_u)$ is the marginal conditional density function of $Q_u$ given that the shift has not occurred by time $u$, and $f_1(q_u)$ is the marginal conditional density function of $Q_u$ given that the shift occurred in or before time $u$.

The conditional density function of the $Q$ score at time $t$, given that the data is encrypted (i.e. uniformly distributed), is

$$f_E(q) = \frac{1}{2^{k/2}\,\Gamma(k/2)}\, q^{k/2 - 1} e^{-q/2}, \qquad q > 0,$$

where $k$ is the number of degrees of freedom, determined by the number of classes (here $K = 8$ classes, giving $k = K - 1 = 7$ degrees of freedom as explained above).

For the non-encrypted files, the conditional density function of the $Q$ score is modeled by that of $\alpha X$, where $X \sim \chi^2(k)$ and $\alpha > 1$, supposedly reflecting the inflated deviances from the uniform distribution compared to encrypted data. Thus

$$f_{NE}(q) = \frac{1}{\alpha} f_E(q/\alpha) = \frac{1}{(2\alpha)^{k/2}\,\Gamma(k/2)}\, q^{k/2 - 1} e^{-q/(2\alpha)}, \qquad q > 0,$$

is the density function of the $Q$ score of non-encrypted data. This means that two cases of shift in distribution are possible:

  • Shift from non-encrypted to encrypted data, in which case

$$L_{st} = \prod_{u=s}^{t} \frac{f_E(q_u)}{f_{NE}(q_u)} = \alpha^{k(t-s+1)/2} \exp\left(-\frac{\alpha - 1}{2\alpha} \sum_{u=s}^{t} q_u\right)$$

  • Shift from encrypted to non-encrypted data, in which case

$$L_{st} = \prod_{u=s}^{t} \frac{f_{NE}(q_u)}{f_E(q_u)} = \alpha^{-k(t-s+1)/2} \exp\left(\frac{\alpha - 1}{2\alpha} \sum_{u=s}^{t} q_u\right)$$

To detect whether the shift in distribution has occurred or not according to the stopping rule $\tau$ mentioned above, an alarm function must be specified. Two of the most common choices here are:

  • CUSUM [24]: $a_t = \max_{1 \le s \le t} L_{st}$,

  • Shiryaev [25]: $a_t = \sum_{s=1}^{t} L_{st}$.

Other possible choices are e.g. the Shewhart method, the Exponentially Weighted Moving Average (EWMA), the full Likelihood Ratio method (LR) and others; see e.g. [20] for a more extensive presentation of different methods.

For the CUSUM alarm function, since $\operatorname{arg\,max}_{1 \le s \le t} L_{st} = \operatorname{arg\,max}_{1 \le s \le t} \ln L_{st}$, the alarm function is simplified without any loss of generality by using the log-likelihood values instead.

For both cases the alarm functions can be expressed recursively, which facilitates collecting and treating the data, as follows.

  • The alarm function for a shift from non-encrypted to encrypted data for the CUSUM method is

$$a_t = \begin{cases} 0 & \text{for } t = 0 \\ \max(a_{t-1}, 0) + \frac{k}{2}\ln\alpha - \frac{\alpha-1}{2\alpha}\, q_t & \text{for } t = 1, 2, 3, \ldots \end{cases}$$

and for the Shiryaev method is

$$a_t = \begin{cases} 0 & \text{for } t = 0 \\ \alpha^{k/2}\, e^{-\frac{\alpha-1}{2\alpha} q_t} (1 + a_{t-1}) & \text{for } t = 1, 2, 3, \ldots \end{cases}$$

  • The alarm function for a shift from encrypted to non-encrypted data for the CUSUM method is

$$a_t = \begin{cases} 0 & \text{for } t = 0 \\ \max(a_{t-1}, 0) - \frac{k}{2}\ln\alpha + \frac{\alpha-1}{2\alpha}\, q_t & \text{for } t = 1, 2, 3, \ldots \end{cases}$$

and for the Shiryaev method is

$$a_t = \begin{cases} 0 & \text{for } t = 0 \\ \alpha^{-k/2}\, e^{\frac{\alpha-1}{2\alpha} q_t} (1 + a_{t-1}) & \text{for } t = 1, 2, 3, \ldots \end{cases}$$
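The recursions above can be collected into a few lines of Python; the fitted $\alpha = 1.7374$ and $k = 7$ are taken from the modeling above, while the function names are illustrative:

```python
import math

K_DF = 7          # degrees of freedom of the chi-square model
ALPHA = 1.7374    # fitted scale for non-encrypted (compressed) data

def llr_to_encrypted(q):
    """log f_E(q) - log f_NE(q): per-cluster evidence that the data is encrypted."""
    return (K_DF / 2) * math.log(ALPHA) - (ALPHA - 1) / (2 * ALPHA) * q

def cusum_step(a_prev, q, to_encrypted=True):
    """One recursive update of the CUSUM alarm function in log form."""
    llr = llr_to_encrypted(q) if to_encrypted else -llr_to_encrypted(q)
    return max(a_prev, 0.0) + llr

def shiryaev_step(a_prev, q, to_encrypted=True):
    """One recursive update of the Shiryaev alarm function: a_t = r(q_t)(1 + a_{t-1})."""
    r = math.exp(llr_to_encrypted(q))
    if not to_encrypted:
        r = 1.0 / r
    return r * (1.0 + a_prev)

# A run of low Q scores (typical for encrypted data) drives both
# non-encrypted -> encrypted alarm functions upward
a_cusum = a_shir = 0.0
for q in [6.0, 7.5, 5.0, 8.0]:
    a_cusum = cusum_step(a_cusum, q)
    a_shir = shiryaev_step(a_shir, q)
print(a_cusum, a_shir)
```

An alarm is raised as soon as the running value exceeds the threshold $C$.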

3. Results

3.1 Evaluation

To quantify the quality of different methods, their performance is compared with regard to relevant properties such as the time until a false alarm, the delay of a motivated alarm, the credibility of an alarm and so on. The threshold is commonly calibrated against the average run length $\mathrm{ARL}^0$, which is defined as the expected time until an alarm when no parameter shift has occurred (meaning that the alarm is false). It is crucial to have the right threshold values for the methods to perform as specified. Setting the threshold such that $\mathrm{ARL}^0$ is $100$, $500$, $2500$ and $10000$ respectively (the most common values are $\mathrm{ARL}^0 = 100$ and $500$, but the higher values are also considered since the value of $\mathrm{ARL}^0$ determines the number of clusters/time points treated before a false alarm, and the shift could occur very far into the HDD), properties of the methods regarding delay and credibility of a motivated alarm can be compared. Of course, a low threshold will lead to more false alarms (detection of a change when there is none), but setting too high a threshold will lead to a drop in the sensitivity of the method to detect a shift (a higher delay between a shift and its detection) and consequently an increased probability of missing a real shift in distribution.

Usually the expected delay, $\mathrm{ED}(\nu) = E_\nu(\tau - \theta \mid \theta < \tau)$ (the expectation of the delay of a motivated alarm; see Figure 1), or the conditional expected delay, $\mathrm{CED}(t) = E(\tau - \theta \mid \tau > \theta = t)$ (the expectation of the delay when the change-point is fixed equal to $t$), are very important because, for example in health care, the goal is to quickly detect problems in order to be able to save lives (Table 1).


Table 1.

Values of the expected delay $\mathrm{ED}$ for the CUSUM and Shiryaev methods for $\mathrm{ARL}^0 = 100$, $500$, $2500$, $10000$, for a shift from encrypted to compressed data.

However, in the case of detecting encrypted code, expected delays are less relevant as a measure of performance since the data can be handled without any time aspect: the goal is to detect accurately where the encrypted data is located. A method with a high expected or conditional expected delay merely means a slightly less efficient procedure.

A more relevant performance indicator in this case is for instance the predictive value, $\mathrm{PV} = P(\theta < \tau)$ (the probability that the change-point has occurred when the method signals an alarm; see Figure 5), or the percentage of encrypted files that is detected while running the process, and how to improve it (see Figure 2).

While running the process, the method will stop at some time, $\tau$, and then estimate the change-point, $\theta$, by maximizing the likelihood function using the data between the last previous alarm and the newly detected alarm. This estimated change-point, $\hat\theta$, can be either before or after the true change-point $\theta$. One could also enlarge the intervals where encrypted data were discovered. This would lead to missing less encrypted data (see Figure 2) but also to brute-forcing more non-encrypted data (Table 2).
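Maximizing the likelihood over candidate change-points amounts to maximizing the cumulative log-likelihood ratio over the tail of the data since the last alarm. A minimal sketch for the shift from non-encrypted to encrypted (function names and parameter values as assumed earlier, not the authors' implementation):

```python
import math

K_DF, ALPHA = 7, 1.7374

def llr(q):
    """log f_E(q) - log f_NE(q) for one Q score."""
    return (K_DF / 2) * math.log(ALPHA) - (ALPHA - 1) / (2 * ALPHA) * q

def estimate_changepoint(q_scores):
    """ML estimate of a non-encrypted -> encrypted change point: the index s
    maximizing sum_{u=s}^{t} llr(q_u) over the data since the last alarm."""
    best_s, best_val, tail = 0, float("-inf"), 0.0
    # scan from the end so every candidate s costs O(1)
    for s in range(len(q_scores) - 1, -1, -1):
        tail += llr(q_scores[s])
        if tail >= best_val:
            best_s, best_val = s, tail
    return best_s

# Ten compressed-like clusters (high Q) followed by ten encrypted-like ones (low Q)
print(estimate_changepoint([12.0] * 10 + [6.0] * 10))   # prints 10
```

For the opposite shift direction the sign of the log-likelihood ratio is simply reversed.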


Table 2.

Percentage of encrypted files that are detected when the interval of detected change-points $(\tau_1, \tau_2)$ is replaced by $(\tau_1 - i, \tau_2 + i)$ for an arbitrary margin $i$. Typically, with large $i$, the values are very close but not exactly equal to 1. This happens when two change-points are very close (less than 10 clusters apart, for example) and the method does not detect any change; then no ciphertext is detected at all.

Therefore the difference between the change-points and the alarms given by the method is calculated. Since the proportion of encrypted data relative to the total amount of data on the HDD is unknown, the expected proportion of error is suggested. That is to say, given two change-points, $\theta_1$ and $\theta_2$, and two stopping times, $\tau_1$ and $\tau_2$, the expected proportion of error is $E\left(\frac{(\tau_1 - \theta_1) + (\tau_2 - \theta_2)}{\theta_2 - \theta_1}\right)$. However, this value is only meaningful when there are no false alarms between $\tau_1$ and $\tau_2$.

If there are false alarms between $\tau_1$ and $\tau_2$, the proportion of undetected encrypted data is added to the proportion of error to determine the proportion of error made relative to the size of the encrypted data. Assuming that there are $n$ false alarms $\tilde\tau_1 < \cdots < \tilde\tau_n$ between $\tau_1$ and $\tau_2$, the resulting measure, called the expected inaccuracy, or $\mathrm{EI}$ for short, is defined as follows.


The $\mathrm{EI}$ was measured for different values of the parameter $\nu$ in the geometric distribution of the change-points, for the different methods (see Table 3 and Figure 3).


Table 3.

Expected inaccuracy, $\mathrm{EI}$ [see Eq. (1)], for different values of the parameter $\nu$ and for the two methods. For the application of discriminating between encrypted and non-encrypted data on a hard drive, $\nu$ may be considered to reflect the degree of fragmentation of the disk: the less fragmentation, the farther apart the change-points and thus the smaller the value of $\nu$, and vice versa.

Figure 3.

Expected inaccuracy $\mathrm{EI}$ for the CUSUM (blue graph) and Shiryaev (red graph) procedures. One can see that the Shiryaev procedure is a little less accurate as $\nu$ increases, but slightly more accurate than the CUSUM procedure for small $\nu$.

3.2 Complete procedure

The complete process is the method returning a segmentation separating suspiciously encrypted data from most likely non-encrypted data on an HDD; information to be further used to target the brute force cryptanalysis efficiently. This procedure runs a likelihood ratio based change-point detection method and, as soon as it detects a change, calculates the maximum likelihood estimator of the change-point to determine where the change-point most likely is located. It then starts over from the location of this estimated change-point with the same method for on-line change-point detection, except that the likelihood ratio is reversed, modifying the alarm function to fit the opposite change-point situation, and so on.
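A sketch of this alternating procedure, here with CUSUM in log form and the running restart point used as the change-point estimate; the threshold $C = 5$ is purely illustrative and would in practice be calibrated to a target $\mathrm{ARL}^0$:

```python
import math

K_DF, ALPHA = 7, 1.7374
C = 5.0   # illustrative threshold

def llr(q):
    """log f_E(q) - log f_NE(q) for one Q score."""
    return (K_DF / 2) * math.log(ALPHA) - (ALPHA - 1) / (2 * ALPHA) * q

def segment(q_scores):
    """Alternate CUSUM passes to split the Q-score stream into
    (start, end, label) segments."""
    segments, start, t = [], 0, 0
    to_encrypted = True   # start in a non-encrypted regime, watching for encryption
    a, cp = 0.0, 0        # CUSUM value and running change-point candidate
    while t < len(q_scores):
        if a <= 0.0:
            a, cp = 0.0, t   # CUSUM restart: new best candidate change point
        a += llr(q_scores[t]) if to_encrypted else -llr(q_scores[t])
        if a > C:            # alarm: close the segment at the estimated change point
            segments.append((start, cp,
                             "non-encrypted" if to_encrypted else "encrypted"))
            start, to_encrypted = cp, not to_encrypted
            a, t = 0.0, cp   # resume just after the estimated change point
        t += 1
    segments.append((start, len(q_scores),
                     "non-encrypted" if to_encrypted else "encrypted"))
    return segments

# Twenty compressed-like clusters followed by twenty encrypted-like ones
print(segment([12.0] * 20 + [6.0] * 20))
```

The resulting "encrypted" segments are the ones worth targeting with brute force cryptanalysis.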

3.3 Thresholds and experimental values

The first step is to determine the thresholds rendering $\mathrm{ARL}^0 = 100$, $500$, $2500$ and $10000$, for both the CUSUM and Shiryaev methods, for a shift from encrypted to non-encrypted and vice versa. The change-points are commonly modeled as geometrically distributed with parameter $\nu$. Here the average time before a change-point is expected to be rather high (several hundred or thousand, maybe), as the method deals with 64-byte clusters on an HDD of surely several hundred gigabytes or terabytes. Thus, since $E(\theta) = 1/\nu$, the focus is on very small values of $\nu$ for the methods to be reasonably sensitive.

Common values of $\mathrm{ARL}^0$ are 100 or 500, to make other properties relevant for comparisons. In the case of this application, however, large values of $\mathrm{ARL}^0$ such as $2500$ and $10000$ are studied, because the first change-point might not occur until far into the HDD. Adjusting the threshold by simulating data can take very long if $\mathrm{ARL}^0$ is large ($2500$ or $10000$, especially for the Shiryaev method); in this case, it can take several hours or even days to compute the threshold. Therefore, a way of predicting the threshold by extrapolation would be desirable, i.e. an explicit relation between $\mathrm{ARL}^0$ and the threshold $C$. Intuitively, if $\mathrm{ARL}^0$ is larger, more data will be taken into account, implying a proportionally larger threshold. In the CUSUM case, since the alarm function is defined via the log-likelihood ratio, this results in the threshold growing as $\ln \mathrm{ARL}^0$ (Figures 4 and 5; Tables 4 and 5):

  • for a shift from compressed data to encrypted data:


  • for a shift from encrypted data to compressed data:

Figure 4.

Expected delays $\mathrm{ED}$ for a shift from encrypted to compressed data for the CUSUM procedure (blue) and the Shiryaev procedure (red).

Figure 5.

Predictive values for a shift from compressed to encrypted data for the CUSUM procedure (left) and for the Shiryaev procedure (right).


Table 4.

Values of the thresholds for the CUSUM and Shiryaev methods for $\mathrm{ARL}^0 = 100, 500, 2500, 10000$, specified for detecting a shift from non-encrypted to encrypted data (NE→E for short) and for a shift from encrypted to non-encrypted data (E→NE), respectively.


Table 5.

Predictive value $\mathrm{PV}(\nu) = P_\nu(\theta < \tau)$, i.e. the probability that a shift has occurred when an alarm is signaled, for the CUSUM and Shiryaev methods, for $\mathrm{ARL}^0 = 100$, $500$, $2500$, $10000$ and for different values of the parameter $\nu$ in the geometric distribution of the change-points.

For the Shiryaev method, the threshold is a linear function of $\mathrm{ARL}^0$:

  • for a shift from compressed data to encrypted data:


  • for a shift from encrypted data to compressed data:

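The threshold calibration by simulation described above can be sketched as follows: for a given threshold, in-control (compressed-like) $Q$ scores are simulated and the average time to a false CUSUM alarm is recorded; repeating this for several thresholds and regressing the threshold on $\ln \mathrm{ARL}^0$ yields the extrapolation suggested in the text. Sample sizes and threshold values below are illustrative:

```python
import math
import random

K_DF, ALPHA = 7, 1.7374

def llr(q):
    """log f_E(q) - log f_NE(q): per-cluster evidence for 'encrypted'."""
    return (K_DF / 2) * math.log(ALPHA) - (ALPHA - 1) / (2 * ALPHA) * q

def run_length(c, rng, max_t=20000):
    """Time until a false CUSUM alarm on purely in-control (compressed) data."""
    a = 0.0
    for t in range(1, max_t + 1):
        # in-control Q score: alpha * chi2(K_DF), built from squared normals
        q = ALPHA * sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(K_DF))
        a = max(a, 0.0) + llr(q)
        if a > c:
            return t
    return max_t

def estimate_arl0(c, n=100, seed=0):
    """Monte Carlo estimate of ARL0 for threshold c."""
    rng = random.Random(seed)
    return sum(run_length(c, rng) for _ in range(n)) / n

# A higher threshold yields a longer average run length before a false alarm
print(estimate_arl0(1.0), estimate_arl0(2.5))
```

Bisecting on $c$ until `estimate_arl0(c)` hits the target value gives the calibrated threshold for one method and one shift direction.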

4. Conclusion

Using change-point statistical methodology, a method to detect encrypted data on an HDD was successfully designed. The method uses the fact that encrypted data is uniformly distributed, as opposed to other types of files. It was designed to detect a change even against the files closest to encrypted ones, namely compressed data. As the method detects even this small change in the data, any bigger change will be detected even more easily. Therefore the process is likely to detect encrypted data among any type of data.

Quick and accurate detection of a change is commonly the desired property of change-point detection methods. In many applications, such as medicine, finance, environmental science etc., time aspects of the methods are of interest, e.g. the expected delay in detecting a shift or the probability of detecting a shift within a specified time interval. Here, however, the time aspect is not of primary interest, since the data remain the same during the whole process. The need here is instead to correctly locate the encrypted data, so the probability of correctly detecting encrypted data is more relevant. This probability shows that the method detects more than 96% of the encrypted data, which is good, and by extending the intervals, the method detects more than 99% of the encrypted data. Assuming that the change-points are not too close (a plausible assumption, since files are unlikely to be that small if the device is not too fragmented), the method, by adding a small margin to the intervals, quickly detects 100% of the encrypted data.

The Shiryaev method turns out to be slightly better than the CUSUM method in the more important respects. Although its expected delay $\mathrm{ED}$ is bigger than that of CUSUM for large values of the parameter $\nu$ in the distribution of the change-points, it is smaller for small values of $\nu$, which is the most relevant case for detecting encrypted data on an HDD. The Shiryaev method also detects more encrypted data than the CUSUM method and has a slightly higher predictive value $\mathrm{PV}$.

    All in all, this means that both methods, designed according to the suggested modeling, perform very well, with a slight preference for the Shiryaev method for detecting encrypted data on an HDD.

    As for future work, a thorough comparison between the proposed method and the aforementioned methods [3, 4, 8, 9, 18] for the situation with streamed data would be the obvious next step in this research. Other methods, potentially building on the Kolmogorov-Smirnov statistic or the Shannon entropy, or using anomaly-detection techniques from machine learning, could also be interesting candidates in such a comparison.



    Acknowledgments

    The authors wish to express their gratitude to Mattias Weckstén at Halmstad University for good ideas and for reading earlier versions of the manuscript, and to Linus Nissi (previously Linus Barkman) at the Police Department of Southern Sweden for earlier work in the area.


    Conflict of interest

    The authors declare no conflict of interest.




    Nomenclature

    ARL⁰: average run length in control
    ARL¹: average run length out of control
    ED: expected delay of motivated alarm
    CED: conditional expected delay of motivated alarm, given that the change occurred at a specified time-point
    PV: predictive value


    References

    1. Malhotra P. Detection of encrypted streams for egress monitoring [thesis]. UMI Microform 1447482. Ames, Iowa: Iowa State University; 2007
    2. Razali NM, Wah YB. Power comparisons of Shapiro-Wilk, Kolmogorov-Smirnov, Lilliefors and Anderson-Darling tests. Journal of Statistical Modeling and Analytics. 2011;2(1):21-33. Available from: [Accessed: December 29, 2021]
    3. Hahn D, Apthorpe N, Feamster N. Detecting Compressed Cleartext Traffic from Consumer Internet of Things Devices [Internet]. 2018. Available from: [Accessed: December 29, 2021]
    4. Choudhury P, Kumar KR, Nandi S, Athithan G. An empirical approach towards characterization of encrypted and unencrypted VoIP traffic. Multimedia Tools and Applications. 2020;79:603-631. DOI: 10.1007/s11042-019-08088-w
    5. Bassham LE, Rukhin A, Soto J, Nechvatal J, Smid M, Barker E, et al. A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications, National Institute of Standards and Technology, Special Publication 800-22 [Internet]. 2010. Available from: [Accessed: December 29, 2021]
    6. Walker J. Pseudorandom number sequence test program [Internet]. 2008. Available from: [Accessed: December 29, 2021]
    7. Kozachok AV, Spirin AA. Model of pseudo-random sequences generated by encryption and compression algorithms. Programming and Computer Software. 2021;47(4):249-260. DOI: 10.1134/S0361768821040058
    8. Yao Z, Ge J, Wu Y, Lin X, He R, Ma Y. Encrypted traffic classification based on Gaussian mixture models and hidden Markov models. Journal of Network and Computer Applications. 2021;166:1-13. DOI: 10.1016/j.jnca.2020.102711
    9. Li Y, Lu Y. ETCC: Encrypted two-label classification using CNN. Security and Communication Networks. 2021;2021:1-11. DOI: 10.1155/2021/6633250
    10. Andersson F, Nelander Hedqvist K, Ring J, Skarp A. It-inslag i brottsligheten och rättsväsendets förmåga att hantera dem. Brottsförebyggande rådet, report. 2016;17:1-152. Available from: [Accessed: December 29, 2021]
    11. Swedish Civil Contingencies Agency. Informationssäkerhet – trender 2015, Myndigheten för Samhällsskydd och Beredskap [Internet]. 2015. Available from: [Accessed: December 29, 2021]
    12. The Swedish Police. Polisens rapport om organiserad brottslighet 2015, National operations department [Internet]. 2015. Available from: [Accessed: December 29, 2021]
    13. Bischoff P. Best Disk Encryption Software – the Top 5 Tools to Secure Your Data, Comparitech, May 13 [Internet]. 2020. Available from: [Accessed: December 29, 2021]
    14. Paninski L. Estimating entropy on m bins given fewer than m samples. IEEE Transactions on Information Theory. 2004;50(9):2200-2203. DOI: 10.1109/TIT.2004.833360
    15. Barkman L. Detektering av krypterade filer [thesis]. diva2:428544. Halmstad, Sweden: Halmstad University; 2011
    16. Westfeld A, Pfitzmann A. Attacks on steganographic systems: breaking the steganographic utilities EzStego, Jsteg, Steganos, and S-Tools, and some lessons learned. In: Information Hiding, Third International Workshop IH'99, September-October, Dresden. Germany: Springer; 2000. pp. 61-76
    17. Pearson K. On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling. Philosophical Magazine. 1900;50(302):157-175. DOI: 10.1080/14786440009463897
    18. Casino F, Choo KKR, Patsakis C. HEDGE: Efficient traffic classification of encrypted and compressed packets. IEEE Transactions on Information Forensics and Security. 2019;14(11):2916-2926. DOI: 10.1109/TIFS.2019.2911156
    19. Frisén M. Properties and use of the Shewhart method and its followers. Sequential Analysis. 2007;26(2):171-193. DOI: 10.1080/07474940701247164
    20. Frisén M. Statistical surveillance. Optimality and methods. International Statistical Review. 2003;71(2):403-434. DOI: 10.1111/j.1751-5823.2003.tb00205.x
    21. Frisén M, de Maré J. Optimal surveillance. Biometrika. 1991;78(2):271-280. DOI: 10.2307/2337252
    22. Järpe E, Wessman P. Some power aspects of methods for detecting different shifts in the mean. Communications in Statistics, Computation and Simulation. 2000;29(2):633-646. DOI: 10.1080/03610910008813632
    23. Järpe E. Surveillance, environmental. In: El-Shaarawi AH, Piegorsch WA, editors. Encyclopedia of Environmetrics. 2nd ed. Chichester: Wiley; 2013. pp. 2150-2153. DOI: 10.1002/9780470057339.vas065.pub2
    24. Page ES. Continuous inspection schemes. Biometrika. 1954;41(1/2):100-115. DOI: 10.1093/biomet/41.1-2.100
    25. Shiryaev AN. On optimum methods in quickest detection problems. Theory of Probability and Its Applications. 1963;8(1):22-46. DOI: 10.1137/1108002
