Open access

Automatic Identification and Interpretation of Animal Sounds, Application to Livestock Production Optimisation

Written By

Vasileios Exadaktylos, Mitchell Silva and Daniel Berckmans

Submitted: 26 April 2012 Published: 05 March 2014

DOI: 10.5772/56040

From the Edited Volume

Soundscape Semiotics - Localization and Categorization

Edited by Herve Glotin


1. Introduction

In modern livestock production, biosecurity and improved disease monitoring are of great importance to safeguard public and animal health. In addition, citizens expect that farm animals have been reared and killed humanely with minimal environmental impact and that food from animals is safe. These global threats and concerns can be combated through surveillance and research networks for early detection of animal diseases and better scientific cooperation between countries and research teams. The use of information technology (IT) can provide new possibilities for continuous automatic monitoring (Fig. 1) and management of livestock farming, according to multiple objectives.

Surveillance relies on vast streams of data to identify and manage risks. Currently, data collection nearly always depends on manual methods. While this may be acceptable in R&D projects, it is unrealistic at the scale needed on commercial farms. Manual scoring of animal-based information by human inspectors is often inaccurate, time consuming and expensive when implemented at farm level. It is clear that a multidisciplinary and integrated approach, using modern IT systems, is needed to optimise the use of expensive inspectors.

Figure 1.

Sound analysis can be used in a variety of applications. Pigs, chickens and chicken embryos are considered in this chapter.

One current advantage provided by technology in livestock production is the development of sensors and sensing technologies to automatically monitor and evaluate data from processes in real-time. Collecting data from livestock and their environment is now possible using innovative, simple, low-cost IT systems; data are then integrated in real time by using knowledge-based computer models [1].

The initial application of technology in livestock has been in monitoring the growth of housed pigs and poultry though, in principle, this approach could be applied to any farmed species, including animals farmed extensively [2]. Stockmen routinely gather auditory, olfactory and visual information from their animals to evaluate health, welfare and productivity. New technology can aid this task, even with large flocks or herds, thanks to the (r)evolution in sensors and sensing techniques, e.g. developments in micro- and nano-electronics [3], [4].

Other sensors include pedometers for monitoring oestrus behaviour in dairy cows [5]. Automatic weighing systems for broilers, laying hens and turkeys have been used for a number of years to estimate the average weight of a flock [6]. Telemetry sensors for measuring heart rate, body temperature and activity have also been developed [7]. Sensors for quantifying milk conductivity and yield of individual cows are available and may be used to optimise production and provide early detection of poor welfare in individuals [8]. The above examples are not exhaustive, but demonstrate the present and future possibilities in monitoring animal disease, welfare and performance.

Over the last decade, a new field of research has appeared in relation to livestock production and animal welfare. Precision Livestock Farming (PLF) has emerged as a tool that uses continuous, automatic, real-time techniques to monitor and control/manage animal production, health, welfare and environmental load in livestock production. The first step in PLF is the measurement and interpretation of animal bioresponses. In this direction, sound has been used extensively as an important bioresponse that can provide useful information about the status of the animals.

In this chapter, three examples of monitoring animal sounds as a tool to determine animal status are presented, namely chicken embryos, chickens and pigs.


2. Interpretation of chicken embryo sounds

Industrial egg incubators vary in size and capacity from 10,800 to 129,600 eggs, with the larger machines being commercially more attractive to hatchery managers due to the lower investment and operational costs per egg. It has been shown in the relevant literature that the spread of hatch of the eggs in an industrial incubator can be as long as 48 h [9]. This has serious implications for the later life of the chicks because, for operational reasons, access to feed and water is delayed for the early hatchlings.

To study this effect further, techniques for monitoring the hatch window have been developed. For example, monitoring climate variables (e.g. temperature and humidity) along with biological variables (e.g. heart rate) has shown potential for estimating the hatching process. More specifically, [10] studied the CO2 balance as an indirect measure of thermoregulation. In [11], heart rate fluctuations were studied and were shown to have specific patterns during the pipping and hatching stages. The resonance frequency of incubating eggs was also linked to hatching and could even be used to predict hatching [12]. More recently, it was shown [13] that magnetic resonance imaging (MRI) can be used to monitor hatching.

The above techniques are valuable tools in a research environment. However, their invasive nature and/or high implementation cost does not allow their use in an industrial or commercial setup. To overcome such problems, sound was used to study the hatching process [14]. Sound can be measured non-invasively and a single microphone can acquire information about the complete incubator, which makes it attractive for the industry. At the same time, current technology makes it possible to use microphones in different environments at low cost.

In parallel, research has been conducted on reducing the hatch window (i.e. the time between the first and the last chick that hatches) [15]-[17]. However, successful application of many of these techniques requires accurate identification of Internal Pipping (IP, the moment at which the embryo penetrates the air cell inside the egg), which is an important milestone in the hatching process. Once this stage has been reached, chicks inside the egg start to vocalise.

To address the problem of identifying the IP stage, an algorithm has been developed in which sound is recorded in an industrial incubator [18] and analysed in real-time using a Digital Signal Processor (DSP), so that the moment when all the chicks in the incubator have reached the IP stage (i.e. IP100) can be detected. Although the collected sound comes from the whole incubator, the developed technique is able to isolate the sounds coming from the eggs and infer information about the embryo state. The algorithm is based on the observation that the frequency content of the embryo sounds during different hatching stages is significantly different [18]. This is visually illustrated in Fig. 2.

Figure 2.

Boxplot showing the peak frequency of the collected sounds when the embryo is in Internal Pipping (IP), External Pipping (EP) and when it has hatched (HT). The box shows the 25th and 75th percentile of the data, the whiskers are the most extreme points not considered outliers and the crosses are the outliers.

2.1. Algorithm description

Although the vocalisations at different stages of the hatching process show statistically significant differences in terms of peak frequency, a successful classification algorithm requires a robust feature that is not (or only minimally) affected by other sounds acquired by the microphone. For this reason, the peak frequency of a sound is not used directly by the algorithm. Based on the results of Fig. 2 and on experimentation, the ratio of the total energy in the 2-3 kHz band to that in the 3-4 kHz band was found to provide a robust measure that can easily be calibrated, as explained below. This difference is visualised in Fig. 3, where the mean spectra of the collected sounds are presented, and a sketch of the ratio computation is given after the figure.

Figure 3.

Frequency content of the mean of vocalisations during Internal Pipping (dashed black line) and the mean of vocalisations during Hatch (solid red line). A line visualising the limit of 3 kHz is also shown (vertical blue dotted line)
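As an illustration of how such a band-energy ratio could be computed from a recorded sound fragment, a minimal sketch is given below (Python with NumPy/SciPy). The 2-3 kHz and 3-4 kHz bands follow the description above; the sampling rate, the use of Welch's method and the function names are assumptions made for this example, not the exact implementation of [18].

```python
import numpy as np
from scipy.signal import welch

def band_energy(freqs, psd, f_lo, f_hi):
    """Integrate the power spectral density between f_lo and f_hi (Hz)."""
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return np.trapz(psd[mask], freqs[mask])

def embryo_band_ratio(sound, fs=44100):
    """Ratio of the energy in the 2-3 kHz band to that in the 3-4 kHz band.

    Because embryo vocalisations at different hatching stages have different
    peak frequencies (Fig. 2 and 3), this ratio changes once the embryos
    start to vocalise and can be tracked over time.
    """
    freqs, psd = welch(sound, fs=fs, nperseg=2048)
    e_23 = band_energy(freqs, psd, 2000.0, 3000.0)
    e_34 = band_energy(freqs, psd, 3000.0, 4000.0)
    return e_23 / (e_34 + 1e-12)  # small constant guards against division by zero
```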

The classification of the frequency ratio is based on an adaptive threshold. This threshold reflects the background noise characteristics during a period in which IP has not yet started. At the beginning of the sound recording (e.g. Incubation Day 16), when the embryos have not yet penetrated the air cell, the algorithm defines a baseline. The threshold is then set to 90% of this baseline. Once the algorithm output remains below this threshold value continuously for more than 5 minutes, IP100 has been reached and the algorithm gives a signal. On the training dataset, the algorithm was shown to have an accuracy of ±3.1 h in the detection of IP100.

A block diagram description of the algorithm is shown in Fig. 4.

Figure 4.

Block diagram of the algorithm for automatic identification of IP100
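The sketch below illustrates the logic of Fig. 4 under simplifying assumptions: a baseline is estimated from the first part of the recording, the threshold is set at 90% of that baseline, and IP100 is signalled once the band-energy ratio stays below the threshold for five consecutive minutes. The window length and the variable names are assumptions for the example.

```python
import numpy as np

def detect_ip100(ratio_series, baseline_windows, windows_per_minute):
    """Detect IP100 from a time series of band-energy ratios.

    ratio_series       : one ratio value per analysis window, in time order
    baseline_windows   : number of initial windows used for the baseline
                         (recorded before Internal Pipping starts, e.g. day 16)
    windows_per_minute : number of ratio values produced per minute
    """
    baseline = float(np.mean(ratio_series[:baseline_windows]))
    threshold = 0.9 * baseline               # adaptive threshold at 90% of the baseline
    needed = 5 * windows_per_minute          # five minutes of consecutive windows

    below = 0
    for k in range(baseline_windows, len(ratio_series)):
        below = below + 1 if ratio_series[k] < threshold else 0
        if below >= needed:
            return k                         # window index at which IP100 is signalled
    return None                              # IP100 not detected in this recording
```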

2.2. Validation results

The algorithm was developed and tested using data collected during 12 incubations in an industrial incubator (Petersime, Belgium). The algorithm had an absolute deviation of 3.2 h from the actual IP100 point. An example of the output of the algorithm is presented in Fig. 5. As explained in the previous section, the threshold shown in Fig. 5 is automatically defined as 90% of the mean value during the initialisation of the algorithm. This value was chosen after experimentation with the data and ensures that short variations in the algorithm output caused by non-incubation sounds are not treated as an alarm.

Figure 5.

The output of the algorithm (black solid line) and the threshold that was automatically chosen (horizontal blue dotted line). Once the algorithm output remains below the threshold for more than 5 minutes, IP100 is detected.

For the validation, the algorithm was implemented on a DSP (TMS320C6416T by Texas Instruments) using the Real-Time Workshop® of the MATLAB® environment. Once it signalled that IP100 had been reached, a sample of 300 eggs was manually checked for IP. This experimental setup is shown in Fig. 6. In total, 5 experiments were performed during 5 incubations and the results are shown in Table 1.

Trial no. | Time of IP100 detected by the algorithm (incubation time in h) | Manual count of IP (%)
1 | 467 | 93
2 | 468 | 97
3 | 470 | 96
4 | 469 | 96
5 | 470 | 98

Table 1.

Validation results of the algorithm for IP100 detection in industrial incubators

The results of Table 1 show that IP100 was identified correctly. Although this may seem an inaccurate claim (since the manual IP count was between 93% and 98%), it is important to note the nature of the validation trial. Once the algorithm provides a signal, IP is manually checked in the incubator. In the hypothetical case that IP is found to be 100% (i.e. all embryos have been through the IP stage), it is not certain when this happened; it is perfectly possible that IP100 was reached long before the manual count. However, if an IP level of 98% is found manually, this means that almost all the embryos have reached the IP stage and therefore, from a practical point of view, the detection is very close to the actual IP100.

Figure 6.

Schematic diagram of the experimental setup for the validation trial. The sound is acquired by a microphone placed in the left part of the incubator and the signal is directed to a Digital Signal Processor (DSP). The DSP executes the algorithm and indicates by means of LEDs whether IP100 has been reached.


3. Monitoring of vocalisations of chickens

Analysis and interpretation of the vocalisations of chickens is a powerful tool when studying their behaviour and welfare. It has been presented in the literature that different types of calls (such as distress or pleasure calls) can be identified based on their frequency characteristics [19], [20]. More recently, attempts have been made to automatically classify these calls and relate them to animal welfare [21].

In relation to thermoregulation, chicken vocalisations have been shown to have specific patterns when chicks are exposed to cold stress [22]. Furthermore, vocal solicitation of heat forms an integral part of the development of the thermoregulatory system of young chickens [23]. Finally, the thermal comfort of chicks has been shown to be linked to the amplitude and frequency of their calls [24].

On a social level, chick vocalisations are directly linked to stress and welfare parameters. During social separation of the chick, stress behaviour is mediated by novelty, whereas distress calls are mediated by isolation [25]. Lastly, a method based on multi-parametric sound analysis has been developed that could be used for stress monitoring of a flock [26].

It should be noted that chickens, being living organisms, are Complex, Individually different, Time-varying and Dynamic (CITD) [27]. As such, the characteristics of their vocalisations are expected to change with time. This is confirmed by experimental results in which the peak frequency of the vocalisations of the same chickens at 1 day and at 20 days of age was estimated at 3603 (±303) Hz and 3441 (±289) Hz respectively. This parameter was shown to be significantly different between the two ages.

It should be noted that no distinction was made between different types of vocalisations. For example, in [19] different types of calls (e.g. distress or pleasure calls) were studied and it was shown that there are differences in terms of frequency content. A more detailed analysis is needed in terms of the evolution of the frequency characteristics with age.


4. Pig cough identification for health monitoring

As in humans, respiratory diseases in pigs result in coughing, and the sound of the cough differs according to the response of the respiratory system to the pathogenic agent involved. In humans, an experienced physician can identify over 100 different respiratory diseases based on the sound timbre [28]. In animals, veterinarians use a similar approach to detect sick animals when they enter a farm: their initial impression of the herd is based on visual and auditory observation, during which they collect information about the welfare, health and productive status of the animals. In this direction, pig vocalisations related to pain were studied in [29], while vocalisation analysis in livestock farms as a measure of welfare has been employed in [30], [31].

The frequency characteristics of pig vocalisations have been extensively studied under different conditions [32], [33], [34]. These approaches have been further extended to develop algorithms for the automatic classification of pig coughing under commercial farming conditions. For this, both frequency-domain [35] and time-domain [36] sound analysis techniques have been used. Furthermore, the distinction between dry and productive coughs has been made to further enhance the capabilities of the algorithm. The latter has been studied in detail in [37], where the energy envelope of a sound was shown to differ between dry and productive coughs. The system is therefore able to provide information not only about the number of coughing incidents, but also about the quality of the produced cough.

4.1. Algorithm description

As mentioned above, the objective of the algorithm is to detect productive and non-productive coughs in a commercial piggery. Sound is acquired in the pig house by a single microphone placed in the middle of the compartment at a height of 2 meters above the ground. The overall algorithm is shown in Fig. 7 and each block is subsequently described in more detail.

Figure 7.

Block diagram of the algorithm to identify cough and distinguish between productive and dry cough.

The first step of the algorithm is to identify individual sounds before using a cough classifier. A procedure based on the energy envelope of the sound signal has been described in [36]. Briefly explained, this approach identifies the parts of the signal with sufficiently high energy to be potential cough incidents. An example is shown in Fig. 8 for a cough attack (a series of coughing events), and a simplified sketch of such an envelope-based extraction step is given after the figure. It should be noted that each detected sound is treated as an event for further processing; there is no guarantee that the sound originates from a single animal, and overlap of sounds is possible (especially during busy periods of the day).

Figure 8.

Extraction of individual sounds (bottom plot) from a cough attack (top plot). All 4 sound incidents are successfully extracted.
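A minimal sketch of an energy-envelope based extraction step is given below. The moving-RMS envelope, the median-based threshold and the minimum event length are assumptions made for this example and stand in for the adaptive procedure of [36].

```python
import numpy as np

def extract_sound_events(signal, fs, win_s=0.02, thresh_factor=3.0, min_len_s=0.05):
    """Return (start, stop) sample indices of high-energy sound events.

    A moving RMS envelope is compared with a multiple of its median;
    contiguous regions above the threshold are kept as candidate events.
    """
    win = max(1, int(win_s * fs))
    envelope = np.sqrt(np.convolve(signal ** 2, np.ones(win) / win, mode="same"))
    threshold = thresh_factor * np.median(envelope)   # simple stand-in for the adaptive threshold
    active = envelope > threshold

    events, start = [], None
    for i, flag in enumerate(active):
        if flag and start is None:
            start = i                                  # event begins
        elif not flag and start is not None:
            if (i - start) >= int(min_len_s * fs):     # discard very short blips
                events.append((start, i))
            start = None
    if start is not None:
        events.append((start, len(active)))
    return events
```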

It can be argued that such a simple sound extraction algorithm would require accurate calibration of the microphone and would be prone to errors caused by high background noise. To tackle this, depending on the situation, noise filters can be applied before the sound extraction algorithm in order to remove any structured and known background noise. In addition, an adaptive threshold procedure has been integrated [36] that allows only sound events to be selected, without including noise. It was further shown in [36] that, despite the background noise, classification is still possible using this sound extraction method.

Next, the algorithm applies a classifier in order to identify coughs. At this stage it is not important to determine whether the cough is productive or not. For this, either the approach described in [34] or the approach described in [35] can be used. Both methods are based on frequency analysis and exploit the characteristics of the frequency content of cough sounds in different frequency bands, compared with other sounds that can occur in a pig house (e.g. screams, grunts, feeding systems, contact with doors, etc.). Table 2 provides an overview of the frequency bands and indicates which of them are used by the two algorithms; a simple illustration of such a band-based classifier is sketched after the table. It should further be noted that both algorithms employed fuzzy c-means clustering to reach their results and identified fixed thresholds for their classifiers.

Frequency range (Hz) Reference [34] Reference [35]
100-10000
100-6000
6000-10000
100-2000
2000-5000
2000-4000
4000-6000
2300-3200
7000-9000

Table 2.

Frequency ranges used for the classification of cough. Differences between [34] and [35].
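Purely as an illustration of such a band-based classifier, the sketch below computes the fraction of the 100-10000 Hz energy that falls in a few of the bands of Table 2 and applies a fixed decision rule. The band subset, the thresholds and the function names are placeholders; in [34] and [35] the bands and thresholds were derived from labelled data using fuzzy c-means clustering.

```python
import numpy as np
from scipy.signal import welch

# Subset of the frequency bands listed in Table 2 (Hz)
BANDS = {"low": (100, 2000), "mid": (2000, 5000), "high": (6000, 10000)}

def band_fractions(sound, fs):
    """Fraction of the 100-10000 Hz energy contained in each band."""
    freqs, psd = welch(sound, fs=fs, nperseg=2048)
    full = (freqs >= 100) & (freqs < 10000)
    total = np.trapz(psd[full], freqs[full])
    fractions = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        fractions[name] = np.trapz(psd[mask], freqs[mask]) / (total + 1e-12)
    return fractions

def looks_like_cough(sound, fs, mid_min=0.4, high_max=0.2):
    """Placeholder decision rule; the actual thresholds come from training."""
    frac = band_fractions(sound, fs)
    return frac["mid"] > mid_min and frac["high"] < high_max
```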

Once a sound has been identified as a cough, the distinction between a productive and a non-productive cough has to be made. For this, the algorithms presented in [36] or [37] can be used. Both are based on time-domain characteristics of the coughs. The algorithm presented in [37] focuses on the decay of the cough sound. To do so, the cough is described by its energy envelope and a mathematical model is subsequently fitted to the decay of the amplitude. It was shown that productive coughs have a longer decay (as expressed by the time constant) than dry coughs. This is visualised in Fig. 9. It has been hypothesised that this difference is due to differences in lung plasticity, which changes with the occurrence of a respiratory disease. However, further research is needed to confirm this hypothesis.
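As a rough sketch of how such a decay constant could be estimated, the example below fits an exponential decay to the falling part of the energy envelope by linear regression on its logarithm. The envelope computation and the fitting interval are assumptions for the example, not the exact procedure of [37].

```python
import numpy as np

def decay_time_constant(envelope, fs):
    """Estimate the decay time constant (in seconds) of a cough energy envelope.

    The part of the envelope after its peak is modelled as A*exp(-t/tau);
    taking the logarithm turns this into a straight line whose slope is -1/tau.
    """
    envelope = np.asarray(envelope, dtype=float)
    peak = int(np.argmax(envelope))
    tail = envelope[peak:]
    tail = tail[tail > 1e-9]                  # keep strictly positive samples for the log
    t = np.arange(len(tail)) / fs
    slope, _ = np.polyfit(t, np.log(tail), 1)
    return -1.0 / slope if slope < 0 else float("inf")
```

In line with Fig. 9, a longer estimated time constant would then point towards a productive cough and a shorter one towards a dry cough.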

An alternative method to classify a cough as productive or not is to use the algorithm described in [36]. In this approach, a third order Auto-Regression (AR) model is used (e.g. [38]) to estimate the sound signal. This model has the following form:

y_k = a_1 y_{k-1} + a_2 y_{k-2} + a_3 y_{k-3}                (1)

where y_k is the value of the signal at sample time k and a_1, a_2 and a_3 are the three model parameters. It was shown that productive coughs can be distinguished from dry coughs using these three parameters. More specifically, a polyhedron is defined in the (a_1, a_2, a_3) space that is subsequently used as a classifier for productive coughs. Alternatively, the centre of the dry cough cluster can be defined and it can be assumed that coughs outside this cluster are productive coughs.
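The sketch below estimates the three parameters of model (1) by least squares and classifies a cough from its position relative to the dry-cough cluster; the per-parameter distance rule and its scaling factor are assumptions for the example rather than the polyhedron used in [36].

```python
import numpy as np

def ar3_parameters(y):
    """Least-squares estimate of (a1, a2, a3) in y_k = a1*y_{k-1} + a2*y_{k-2} + a3*y_{k-3}."""
    y = np.asarray(y, dtype=float)
    regressors = np.column_stack([y[2:-1], y[1:-2], y[0:-3]])   # lagged samples
    targets = y[3:]
    params, *_ = np.linalg.lstsq(regressors, targets, rcond=None)
    return params                                               # array([a1, a2, a3])

def is_productive(params, centre, spread, k=3.0):
    """Call a cough productive if its AR parameters fall outside the dry-cough
    cluster, here taken as centre +/- k*spread in each parameter direction."""
    return bool(np.any(np.abs(params - centre) > k * spread))
```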

Figure 9.

The difference in the decay between a dry (red dashed line) and a productive (black solid line) cough, as studied in [37].

In the literature it was shown [39] that the vocal tract can act as a first order filter that has the form:

y_k = [b_1 / (1 + a_1 z^{-1})] u_k                (2)

where z^{-1} is the backward time-shift operator (i.e. z^{-1} y_k = y_{k-1}) and u_k is the input to the system. Similarly, (1) can be re-written in the form:

y_k = [1 / (1 + a_1 z^{-1} + a_2 z^{-2} + a_3 z^{-3})] e_k                (3)

where e_k is white noise.

Since coughing can be considered involuntary in the studied cases, white noise can be considered the input to the system. From the above, it can be hypothesised that the classifier was able to identify and exploit some higher-order dynamics of the effect of the vocal tract on the produced sound response. Of course, this claim has to be studied further.

During the calibration phase of the algorithm, the centre of the dry cough cluster has to be defined. For this, the first five dry coughs need to be identified and the average of their parameters is used as the centre of the dry cough cluster (the edges of the cluster are defined by the variance of the parameters). It can be assumed that at the beginning of the growing period no respiratory disease is present and therefore only dry coughs occur. Hence, the first five coughs that are detected are dry coughs and can be used to define the dry cough cluster.
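A short sketch of this calibration step, assuming that the AR parameter vectors of the first five detected (dry) coughs are available:

```python
import numpy as np

def calibrate_dry_cluster(first_five_params):
    """Define the dry-cough cluster from the first five coughs of the growing period.

    first_five_params : list of five (a1, a2, a3) vectors, assumed to come from
                        dry coughs because no respiratory disease is expected
                        at the start of the period.
    """
    params = np.asarray(first_five_params, dtype=float)   # shape (5, 3)
    centre = params.mean(axis=0)                          # cluster centre
    spread = params.std(axis=0, ddof=1)                   # per-parameter spread (from the variance)
    return centre, spread
```

The centre and spread returned here would then be used by the classification step sketched earlier.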

4.2. Validation results

The algorithm has been applied on a continuous recording with a total of 671 sounds (291 productive coughs, 231 dry coughs and 149 other sounds) collected under laboratory conditions.

Table 3 shows the classification performance of the algorithm.

Sound | Total sounds | True positive identifications | True negative identifications | False positive identifications | False negative identifications
Dry cough | 231 | - | 200 | 31 | -
Productive cough | 281 | 231 | - | - | 50
Other | 144 | - | 130 | 14 | -

Table 3.

Performance of the productive vs. dry cough identification algorithm

The above algorithm has been integrated in a commercial system by SoundTalks nv. (Belgium – www.soundtalks.be).


5. Conclusion

This chapter has presented three cases in which sound has been used to identify and interpret animal vocalisations. The studied cases focus on identifying the status of animals in livestock production, which allows the caretaker of the animals to quickly identify health, welfare and production issues and take action. On the positive side, sound can be acquired from a large space and without any contact with the animal. The latter is an important factor in the potential of such technology to be applied in practice: issues such as the cost of installing sensors, the loss of sensors through misuse by the animals and the need to recover sensors at the slaughter line are immediately avoided. On the negative side, the acquired sound signal cannot immediately be linked to a specific animal, since it is the combination of all sounds and vocalisations in the area. As such, conclusions can be reached about a group of animals, but not necessarily about individual ones.

Sound analysis is an excellent showcase for the merits of Precision Livestock Farming (PLF) and can already demonstrate the benefit that PLF can have on intensive livestock production. Further research in the field should focus on identifying critical areas for livestock production and work towards providing automatic tools for early diagnosis. For practical implementation, issues that are not related to animal vocalisations will also have to be dealt with. These include, but are not limited to, room acoustics, positioning of the microphones for better sound acquisition, identification of vocalisations in a noisy environment, etc.

In the direction of implementing PLF systems, an EU project (EU-PLF, project number 311825) is currently running with the objective of creating a Blueprint that describes the process of starting from a scientific idea, developing a commercial prototype, implementing the system on farm and creating value for the farmer. In this framework, the results presented in section 4 of this chapter will be applied in a practical setting.


Acknowledgments

We would like to thank the European Union, the IWT (agentschap voor Innovatie door Wetenschap en Technologie), the KU Leuven and Petersime nv. for funding parts of the research described in this chapter. We would also like to thank all the researchers and technicians involved in the research throughout the years.

References

  1. Berckmans D. Precision Livestock Farming (Preface). Computers and Electronics in Agriculture 2008.
  2. Frost A.R. An overview of integrated management systems for sustainable livestock production. In: Wathes C.M., Frost A.R., Gordon F., Wood J.D. (eds.) Edinburgh: British Society of Animal Science; Occasional Publication 28, 2001, 45-50.
  3. Frost A., Schofield C., Beaulah S., Mottram T., Lines J., Wathes C. A review of livestock monitoring and the need for integrated systems. Computers and Electronics in Agriculture 1997;17(2):139-159.
  4. Berckmans D. Automatic on-line monitoring of animals by precision livestock farming. In: ISAH Conference "Animal Production in Europe: The Way Forward in a Changing World", October 2004, Saint-Malo, France; 2004, 27-31.
  5. Brehme U., Stollberg E., Holz R., Schleusener T. Safer oestrus detection with sensor aided ALT-pedometer. In: Book of Abstracts of the Third International Workshop on Smart Sensors in Livestock Monitoring, 10-11 September 2004, Leuven, Belgium; 2004, 43-46.
  6. Vranken E., Chedad A., Aerts J.-M., Berckmans D. Estimation of average body weight of broilers using automatic weighing in combination with image analysis. In: Book of Abstracts of the Third International Workshop on Smart Sensors in Livestock Monitoring, 10-11 September 2004, Leuven, Belgium; 2004, 68-70.
  7. Mitchell K.D., Stookey J.M., Laturnas D.K., Watts J.M., Haley D.B., Huyde T. The effects of blindfolding on behavior and heart rate in beef cattle during restraint. Applied Animal Behaviour Science 2004.
  8. Kohler S.D., Kaufmann O. Quarter-related measurements of milking and milk parameters in an AMS-herd. Milk Science International 2003;58:3-6.
  9. Tona K. Effects of age of broiler breeders, incubation egg storage duration and turning during incubation on embryo physiology and broiler production parameters. PhD Thesis, KU Leuven; 2003.
  10. Nichelmann M. Importance of prenatal temperature experience on development of the thermoregulatory control system in birds. Thermochimica Acta 2004.
  11. Moriya K., Pearson J.T., Burggren W.W., Ar A., Tazawa H. Continuous measurements of instantaneous heart rate and its fluctuations before and after hatching in chickens. Journal of Experimental Biology 2000;203(5):895-903.
  12. Kemps B., De Ketelaere B., Bamelis F., Decuypere E., De Baerdemaeker J. Vibration analysis on incubating eggs and its relation to embryonic development. Biotechnology Progress 2003;19(3):1022-1025.
  13. Bain M., Fagan A., Mullin M., McNaught I., McLean J., Condon B. Noninvasive monitoring of chick development in ovo using a 7T MRI system from day 12 of incubation through to hatching. Journal of Magnetic Resonance Imaging 2007;26(1):198-201.
  14. Bamelis F., Kemps B., Mertens K., De Ketelaere B., Decuypere E., De Baerdemaeker J. An automatic monitoring of the hatching process based on the noise of the hatching chicks. Poultry Science 2005;84:1101-1107.
  15. Van Brecht A., Aerts J.-M., Degraeve P., Berckmans D. Quantification and control of the spatiotemporal gradients of air speed and air temperature in an incubator. Poultry Science 2003;82:1677-1687.
  16. De Smit L., Bruggeman V., Tona J.K., Debonne M., Onagbesan O., Arckens L., De Baerdemaeker J., Decuypere E. Embryonic developmental plasticity of the chick: Increased CO2 during early stages of incubation changes the developmental trajectories during prenatal and postnatal growth. Comparative Biochemistry and Physiology A: Molecular & Integrative Physiology 2006;145:166-175.
  17. Mortola J.P. Metabolic response to cooling temperatures in chicken embryos and hatchlings after cold incubation. Comparative Biochemistry and Physiology A: Molecular & Integrative Physiology 2006;145:441-448.
  18. Exadaktylos V., Silva M., Berckmans D. Real-time analysis of chicken embryo sounds to monitor different incubation stages. Computers and Electronics in Agriculture 2011;75:321-326.
  19. Marx G., Leppelt J., Ellendorff F. Vocalisation in chicks (Gallus gallus dom.) during stepwise social isolation. Applied Animal Behaviour Science 2001;75(1):61-74.
  20. Ghesquiere K., Van Hirtum A., Buyse J., Berckmans D. Non-invasive quantification of stress in the laying hen: a vocalization analysis approach. Mededelingen Faculteit Landbouwkundige en Toegepaste Biologische Wetenschappen, Universiteit Gent 2002;67(4):55-58.
  21. Pereira E.M., Naas I.A., Jacob F.C. Using vocalization pattern to assess broiler's well-being. In: Lokhorst K., Berckmans D. (eds.) Precision Livestock Farming '11, 11-14 July 2011, Prague, Czech Republic; 2011.
  22. Bugden S.C., Evans R.M. The development of a vocal thermoregulatory response to temperature in embryos of the domestic chicken. Wilson Bulletin 1999;111(2):188-194.
  23. Bugden S.C., Evans R.M. Vocal solicitation of heat as an integral component of the developing thermoregulatory system in young domestic chickens. Canadian Journal of Zoology 1997;75(12):1949-1954.
  24. De Moura J., Naas I., Alves E., Carvalho T., Vale M., Lima K. Noise analysis to evaluate chick thermal comfort. Scientia Agricola 2008.
  25. Feltenstein M., Ford N., Freeman K., Sufka K. Dissociation of stress behaviors in the chick social-separation-stress procedure. Physiology & Behavior 2002;75(5):675-679.
  26. Mair J., Marx G., Petersen J., Mennicken L. Development of multi-parametric sound analysis parameters as welfare indicator in chicken flocks. Proceedings of the International Seminar on Modal Analysis 2001;3:1523-1528.
  27. Exadaktylos V., Berckmans D., Aerts J.-M. Non-invasive methods for monitoring individual bioresponses in relation to health management. In: Smigorski K. (ed.) Health Management: Different Approaches and Solutions. Rijeka: InTech; 2011, 143-160.
  28. Korpáš J., Sadlonová J., Vrabec M. Analysis of the cough sound: an overview. Pulmonary Pharmacology 1996;9(5-6):261-268.
  29. Marx G., Horn T., Thielebein J., Knubel B., Von Borell E. Analysis of pain-related vocalization in young pigs. Journal of Sound and Vibration 2003;266(3):687-698.
  30. Manteuffel G., Puppe B., Schön P.C. Vocalization of farm animals as a measure of welfare. Applied Animal Behaviour Science 2004.
  31. Schön P.C., Puppe B., Manteuffel G. Automated recording of stress vocalisations as a tool to document impaired welfare in pigs. Animal Welfare 2004;13(2):105-110.
  32. Schön P.C., Puppe B., Manteuffel G. Linear prediction coding analysis and self-organizing feature map as tools to classify stress calls of domestic pigs (Sus scrofa). Journal of the Acoustical Society of America 2001;110:1425-1431.
  33. Marx G., Horn T., Thielebein J., Knubel B., Von Borell E. Analysis of pain-related vocalisation in young pigs. Journal of Sound and Vibration 2003;266:687-698.
  34. Van Hirtum A., Berckmans D. Fuzzy approach for improved recognition of citric acid induced piglet coughing from continuous registration. Journal of Sound and Vibration 2003;266(3):677-686.
  35. Exadaktylos V., Silva M., Aerts J.-M., Taylor C.J., Berckmans D. Real-time recognition of sick pig cough sounds. Computers and Electronics in Agriculture 2008;63:207-214.
  36. Exadaktylos V., Silva M., Ferrari S., Guarino M., Taylor C.J., Aerts J.-M., Berckmans D. Time-series analysis for online recognition and localization of sick pig (Sus scrofa) cough sounds. Journal of the Acoustical Society of America 2008;124(6):3803-3809.
  37. Silva M., Exadaktylos V., Ferrari S., Guarino M., Aerts J.-M., Berckmans D. The influence of respiratory disease on the energy envelope dynamics of pig cough sounds. Computers and Electronics in Agriculture 2009;69(1):80-85.
  38. Young P.C. Recursive Estimation and Time-Series Analysis: An Introduction for the Student and Practitioner. Berlin: Springer; 2011.
  39. Hannon B., Ruth M. Modeling Dynamic Biological Systems. New York: Springer; 1997.
