Open access peer-reviewed chapter

The German Vision of Industry 4.0 Applied in Organic Farming

Written By

F. J. Knoll and V. Czymmek

Submitted: 14 May 2017 Reviewed: 24 November 2017 Published: 14 March 2018

DOI: 10.5772/intechopen.72708

From the Edited Volume

Automation in Agriculture - Securing Food Supplies for Future Generations

Edited by Stephan Hussmann


Abstract

The first industrial revolution was driven by the invention of the steam engine. With the advent of electricity and the conveyor belt, the second industrial revolution arose. After the third revolution, automation, the fourth industrial revolution is now taking place: the complete networking of all machines, workers, consumers, and products. In Germany, this is called Industry 4.0. Increasing digitization makes it possible to collect, store, analyze, and communicate large amounts of data. By digitizing farms, a network of different sensors can analyze the nutrient content and the soil texture in real time. This information can be evaluated, and the plant distribution can be managed across all networked farms. This leads to the right field being used for the right plant at the right time. Real-time data processing makes it possible to monitor and control the nutrient intake over the entire growth period. This allows the field, in effect, to ask for water or the right fertilizer for its plants, which saves resources and protects the environment. All the prepared information can give the farmer an exact status report about his products and fields via an interface. This horizontal networking within the farm and the vertical networking across different farms can lead to increased efficiency and cheaper products. The use of robots can create a fully automatic farm. For this undertaking, it is necessary to process the complex information of a farm with a self-learning system. At the West Coast University of Applied Sciences, for example, a robot is being developed to remove weeds automatically. The prototype moves fully autonomously across the field, classifies the plants, and destroys the weeds.

Keywords

  • CNN
  • AI
  • robot
  • industry 4.0
  • organic farming
  • plant detection

1. Introduction

For the classification of the plants, a self-learning system is used, which learns to recognize different plants [1-3]. These artificial neural networks are part of the most recent artificial intelligence research and are used for a variety of applications, such as autonomous driving [4, 5], airline passenger profiling [6], or information processing on the Internet. The application makes it possible to minimize the use of herbicides or to avoid them completely through nonchemical destruction. Until now, this type of environmentally friendly agriculture has only been possible with many workers who weed the fields by hand. However, the resulting costs do not allow the production of bio-economic products for the mass market. With a fully connected system, the farmer is always up to date. For example, a drone could tell the robots which field needs to be processed at a given moment. Other robots could take over sowing and harvesting. As a result of the progressing hardware miniaturization at the West Coast University of Applied Sciences [7], these agents could soon be placed directly in small drones, and a swarm could handle the fields. In this chapter, a vision of the autonomous farm is described. The basic vision, and its necessity due to the increasing lack of workers, is explained in Section 2. Section 3 presents the weed detection project of the West Coast University of Applied Sciences as an example of the digitization of a work step on the farm, together with the sensors investigated for plant detection. Section 4 describes the segmentation algorithms used to separate plants from the background. Section 5 describes the use of convolutional neural networks, which correspond to a mathematical reproduction of a brain, for the information processing of the robot. Section 6 explains the environmentally friendly destruction of the weeds. In Section 7, future objectives, such as the swarm intelligence of different drones, are described. The last section discusses the advantages and dangers of the digital revolution.


2. What Industry 4.0 means in agriculture

The German vision of Industry 4.0 is made up of the following components:

  1. A complete network of machines, sensors, devices, people, and products via the Internet.

  2. A virtual image of the real world is created via sensor data, thus building up an extended information system.

  3. Assistance systems help people, through visualized interfaces, to preprocess and filter the acquired information so that faster decisions can be made. In addition, people should be physically relieved of monotonous, strenuous, and dangerous work.

  4. An information processing system should make all decisions independently and autonomously control the entire process. Only in the case of problems or conflicting targets should control be transferred to a higher authority [8].

If the concept of Industry 4.0 is transferred to agriculture, a central learning system takes over all the decisions, processes, and problems of the farm. Distributed autonomous units measure the fields via sensors, for example, the ground conditions; the moisture or nutrient content is then sent to a database. This creates a virtual, real-time image of the farm. In addition, global information could be obtained via the Internet. The learning system processes past and up-to-date information in real time to create optimal growth conditions for each plant. The system can continually focus on each individual plant. For example, it could determine the best position for each crop on a field; it is conceivable that different plant species are sown on the same field in order to optimally exploit the soil texture. It could access global weather data to calculate the optimal timing of the growth cycle. Automated machines could travel independently to the fields and process them. Drones could monitor everything and gather further information about the growth behavior or the environment of the plants. The plants could be sown in new, more precise patterns in order to better organize detection and nutrient distribution. Nutrients could be targeted at individual plants so that each grows optimally. Using historical records, the system could find connections between soil and plant species that a human employee cannot capture. The customer could receive a biography of the whole life of the plant, with all influences from the seed to the fruit. All these networked, small, self-sufficient systems would take over the farm completely and manage it automatically.
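As an illustration of this sensor-to-database flow, the following is a minimal Python sketch of such a "virtual image of the farm"; the record fields, class names, and threshold are illustrative assumptions, not part of any system described in this chapter.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SoilReading:
    # One measurement from a networked field sensor (illustrative schema)
    sensor_id: str
    field_id: str
    moisture_pct: float       # volumetric soil moisture in percent
    nitrate_mg_per_kg: float  # nutrient content
    taken_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class FarmTwin:
    """Keeps the latest reading per field: a tiny 'virtual image' of the farm."""
    def __init__(self) -> None:
        self.latest: dict[str, SoilReading] = {}

    def ingest(self, reading: SoilReading) -> None:
        self.latest[reading.field_id] = reading

    def needs_water(self, field_id: str, threshold_pct: float = 20.0) -> bool:
        # The field 'asking for water': below-threshold moisture triggers irrigation
        r = self.latest.get(field_id)
        return r is not None and r.moisture_pct < threshold_pct

twin = FarmTwin()
twin.ingest(SoilReading("probe-07", "carrot-north", moisture_pct=14.2, nitrate_mg_per_kg=38.0))
print(twin.needs_water("carrot-north"))  # True -> request irrigation
```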

Furthermore, with augmented reality glasses such as Microsoft's HoloLens, human coworkers could get important information about individual plants.

The system could independently process global economic data and develop new economic sectors through the Internet.

Further, the system could develop its own improved devices to optimize the work. Since such a system can work 7 days a week and 24 hours a day without getting exhausted, it can deal much more thoroughly with cultivation than a farmer in mass production. Food production for the current human lifestyle has become increasingly polluting. In conventional cultivation, harmful herbicides and pesticides are used to increase efficiency. It is true that very efficient pest control is carried out and the yield is thus increased, but this use of chemicals also has side effects. Among the best-known consequences are the increasing pollution of groundwater and drinking water and the death of bees. These two problems alone could face mankind with unprecedented challenges. In addition, the herbicides and pesticides pollute the food itself. Therefore, some farmers are trying to establish a more biologically neutral agriculture, like the Westhof farm in northern Germany, one of the largest farms for organic products in the country. Because it farms organically, it is prohibited by law from using herbicides in Germany. The farm therefore employs a group of workers every season to free approximately 170 hectares of carrots from weeds. The weeds must be removed because the plants compete for nutrients and thus reduce the yield. The Westhof spends more than 170,000 € per year on weeding. Getting workers for these jobs is becoming more and more difficult due to the growing standard of living. Although some agricultural technologies have already been automated, for example, self-propelled tractors or various attachments, at present weeding can only be done by hand, especially for crops like the carrot. For this reason, the West Coast University of Applied Sciences in Germany wants to develop a fully automatic weeding robot. This research project is called high-precision weed recognition in organic farming. The autonomously working robot is supposed to take over the weeding day and night in any weather conditions. The project is already in an advanced phase. This development is the automation of a single work step, but it can also be counted under point 3 of the definition list as an assistance system. With its many facets, it also provides an insight into the difficulties of digitizing a farm. Therefore, this project is used in the following sections as an example of Industry 4.0 in farming.


3. Project high-precision weed detection: sensors

The first step toward autonomously eliminating weeds is to find out which type of sensor is the most efficient for classification and detection. It is also important to determine the ideal time for weeding. Human workers weed at a late growth stage, when the plants already overlap each other, so it is not possible to see the weeds under a plant without mechanical effort. For the workers, this is the ideal time, because the advanced growth allows them to differentiate individual plants by their distinct characteristics. The robot weeds at an earlier date, when the crop and the weed look very similar. This is shown in Figure 1. While this allows an unrestricted view of the plants, it is more difficult to distinguish between them because of their less pronounced characteristics.

Figure 1.

Plants recorded by the sensor; weeds encircled in red [2].

Figure 2 shows the robot. The BoniRob has four freely adjustable arms on which the wheels are suspended. Each wheel is driven by its own electric motor and the arm adjustment allows any axis spacing to be set. In addition, the robot has a mounting slot in the center, in which various measuring instruments can be installed [2].

Figure 2.

BoniRob on the field of the Westhof Bio farm in Germany.

For reproducible results, the sensors were shielded from ambient light, so that the influence of the sun on the measurement results is prevented. An LED lighting system, consisting of six high-power LEDs mounted on a profile, was implemented to illuminate the plants properly. The LEDs have a color temperature of 6000 K, which corresponds to daylight. The lamps are arranged in a circle at a certain angle so that they have a fixed focal point on the field. By this arrangement, shadows are prevented and a homogeneous illumination is achieved [2].

With this configuration, the robot runs autonomously at a speed of 20 mm/s over the field. This speed allows an uninterrupted recording of the field at one image per second, with an overlap of the respective sensor data of approximately 20%. The ridge on which the plants grow has an average width of about 15 cm. In order to determine a suitable sensor for recording the plants, the following sensors were investigated [2]:
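As a plausibility check of these figures, the required along-track image footprint $L$ follows from the travel speed $v$, the capture interval $T$, and the desired overlap fraction $o$ (a generic geometric relation, not taken from [2]):

$$
o = 1 - \frac{vT}{L}
\quad\Longrightarrow\quad
L = \frac{vT}{1 - o} = \frac{20\,\mathrm{mm/s} \cdot 1\,\mathrm{s}}{1 - 0.2} = 25\,\mathrm{mm}.
$$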

  • Bispectral JAI camera.

    1. This camera has a resolution of 1296 × 966 pixels. In addition, it allows the acquisition of congruent, simultaneous images from the color range and the near-infrared range. Two bispectral cameras were always examined at the same time at different angles [2].

  • Nikon camera D5300

    1. The Nikon D5300 camera has a resolution of 6000 × 4000 pixels. These images do not have an infrared color channel [2].

  • Kinect II

    1. The Kinect II offers the possibility to record a color image with a size of 1920 × 1080 pixels. The time-of-flight technology delivers a depth image of 512 × 424 pixels with 16 bits. An infrared image of 512 × 424 pixels and 11 bits can be read out via the same sensor. However, the depth image and the color image are not congruent and are recorded from two different perspectives [2].

  • CamCube 3

    1. The CamCube uses time-of-flight technology to create depth images and has a resolution of 204 × 204 pixels. Unlike the Kinect, it is possible to use one's own lenses; as a result, optimal image sections can be selected. A congruent gray image is provided next to the depth image [2].

  • LMI Gocator 2350

    1. The LMI Gocator 2350 is a laser scanner with a resolution in the z direction of 0.019–0.060 mm and a resolution in the x direction of 0.150–0.300 mm. The field of view (FOV) is 158–365 mm [2].

These sensors were evaluated every day between 22 July 2015 and 14 August 2015, under different weather conditions, over 1000 meters of carrot ridges. By means of these data, it was possible to determine which sensors, which configuration, and which stage of growth are best suited for the classification of root plants and weeds [2].

The uneven background and the low resolution of the CamCube 3 prevent a distinction between a plant and dirt. The Kinect II has a similar problem. A 3D image of the Kinect is shown in Figure 3. Although it has a much higher resolution and an RGB camera, this sensor is intended for use in a living room as a game interface and therefore has a small focal length. As a result, over two thirds of the resolution is lost to unimportant areas adjacent to the root rows. The focal point of the RGB camera is not suitable and produces blurred images due to the distance to the object [2].

Figure 3.

Depth map of the Kinect II [2].

Figure 4 shows a picture from the LMI Gocator 2350. In the picture, different plants can be seen clearly, but the sensor cannot correctly detect the plants in the lower half of the image. Among other things, this is due to the fine structure of the plants and the measurement resolution of the sensor. In addition, it is difficult to use the LMI Gocator 2350 for plants that are still in the bud stage. These plants have a very small area and are located very close to the ground, which means they get lost in the overall information. If the ground is not flat, it is almost impossible to filter out these small structures [2].

Figure 4.

Recording of the field with the LMI Gocator 2350 [2].

The best results are provided by the JAI camera and the Nikon camera. In contrast to the Nikon camera, the JAI has an infrared channel in addition to the RGB channels. The RGB sensor and the IR sensor use the same optics; thus, it is not possible to focus both images at the same time. However, with this 4-channel image, as described in the next section, the plants can be extracted from the background by a vegetation index method. The Nikon is a high-resolution color camera; later, instead of the Nikon camera, a 42-megapixel camera with 7716 × 5364 pixels was used. As described in the following section, the project succeeded in creating a robust, real-time segmentation algorithm for RGB images. The RGB camera used is, in contrast to the JAI, a standard camera and thus easier to interchange, so the choice came down to RGB image recordings. A detailed evaluation of the sensors, as well as a description of the measurement setup, can be found in [2].


4. Segmentation

The next step toward classification is to separate the plants from the background. This separation allows the classifier to examine only the important plant pixels in the image, which saves time when classifying a whole picture. The state-of-the-art method uses a bispectral camera. Besides an RGB image, this camera also delivers an image from the near-infrared region. Both images are congruent because they are taken through the same lens. The plants can be extracted from the background by simply subtracting the red color channel from the infrared channel and then applying a global threshold. This state-of-the-art vegetation index method works because the chlorophyll content of living plants causes a much greater reflection in the near-infrared spectrum than other objects. The big advantage of this method is that dead materials like stones, sticks, or wilted leaves are filtered out; the extracted plants can then be easily selected and classified.

The harsh environment in the field, however, requires robust sensor systems. The mechanical setup of bispectral cameras is more complex compared to standard RGB color cameras, so standard RGB cameras are more suitable for organic farming applications. Furthermore, they are cheaper and more readily available, as a wide range of RGB cameras exists on the market. Hence, not only is the robustness better but the total system costs also decrease. This is important because organic farmers will only accept such systems if the price is much lower than the cost of manual weeding. One way to extract the plants from the background using only an RGB image is to process the HSV color space. The proposed workflow is shown in Figure 5. In the first step, the captured RGB image is transformed into the HSV color space [3].
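As a minimal sketch of the bispectral state-of-the-art method described above: subtract the red channel from the near-infrared channel and apply a global threshold. The channel scaling and the threshold value are assumptions for illustration.

```python
import numpy as np

def vegetation_mask_nir(red: np.ndarray, nir: np.ndarray, thresh: float = 0.1) -> np.ndarray:
    """Vegetation index from a bispectral camera: living plants reflect strongly in NIR.

    red, nir: congruent channels scaled to [0, 1].
    Returns a boolean mask that is True on (living) plant pixels.
    """
    diff = nir.astype(np.float32) - red.astype(np.float32)
    return diff > thresh  # global threshold; stones, sticks, wilted leaves fall below it
```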

Figure 5.

Workflow of the first alternative vegetation index determination [3].

The HSV color space describes colors with three components: H for hue, S for saturation, and V for value, which stands for the brightness. The exact working method is presented in [9], and the result of the algorithm can be seen in Figure 6 [3].
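The exact thresholds of the HSV method are derived in [9]; the following sketch only illustrates the principle, with assumed hue/saturation/value bounds for green vegetation, using OpenCV.

```python
import cv2
import numpy as np

def vegetation_mask_hsv(bgr: np.ndarray) -> np.ndarray:
    """Segment green plants from the soil background via the HSV color space.

    bgr: 8-bit color image as returned by cv2.imread (OpenCV stores BGR).
    The bounds below are illustrative assumptions, not the values from [9].
    """
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([30, 40, 40], dtype=np.uint8)    # lower green bound (H, S, V)
    upper = np.array([90, 255, 255], dtype=np.uint8)  # upper green bound
    return cv2.inRange(hsv, lower, upper) > 0

# usage: mask = vegetation_mask_hsv(cv2.imread("field_image.png"))
```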

Figure 6.

Result of the segmentation.

For a better quantitative result, a ground truth was generated manually: the plants were colored by hand in multiple images. It should be noted that this method is a subjective one. With the ground truth, it was possible to build statistics over more than 250 plants. Afterward, the Dice score [10] was calculated as

DS = 2 |X ∩ Y| / (|X| + |Y|)          (1)

The set X is the ground truth and the set Y is the mask of the vegetation index method used (state-of-the-art, proposed HSV, or RGB algorithm). If the sets X and Y are identical, the calculated mask and the ground truth are equal, hence DS = 1. The Dice score and its standard deviation are listed in Table 1. As can be seen there, the proposed methods are better than the state-of-the-art vegetation index method described in [11].
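Eq. (1) translates directly into code; a minimal implementation for boolean masks (not code from the project):

```python
import numpy as np

def dice_score(ground_truth: np.ndarray, mask: np.ndarray) -> float:
    """DS = 2|X intersect Y| / (|X| + |Y|) for two boolean plant masks."""
    x = ground_truth.astype(bool)
    y = mask.astype(bool)
    intersection = np.logical_and(x, y).sum()
    return 2.0 * float(intersection) / float(x.sum() + y.sum())

# identical masks yield DS = 1.0; disjoint masks yield DS = 0.0
```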

Method           | DS    | Standard deviation
State-of-the-art | 0.812 | 0.063
HSV              | 0.962 | 0.014
RGB              | 0.943 | 0.018

Table 1.

Dice score and standard deviation of the state-of-the-art and the proposed vegetation index methods [9].

A high-resolution camera (6000 × 4000 pixels) was used in a further series of measurements. Again, all the plants could be cut out with the two presented methods without any problems and with a high level of detail; only the state-of-the-art vegetation index method does not work here, because no IR image is present. With the higher level of detail, the classification task becomes easier as more features are available. This is a significant advantage of the two proposed methods [9].

A more thorough investigation of the algorithm, as well as a mathematical derivation and optimization, can be found in [3, 9], where a further RGB algorithm is also presented.


5. Classification

After filtering out the background, the plants need to be classified. For this, the images of the extracted plants first have to be labeled manually to create a learning mask, which can then be used to train a classifier. There are different classification algorithms, for example, the state-of-the-art random forest classification procedure [11]. One problem of the random forest classifier is finding and calculating suitable features. Another method, which is used in this chapter, is a convolutional neural network classifier, which has to be trained as well and is used in two different ways, as described in the following paragraphs [9].

The first approach evaluates each pixel individually and specifies whether it belongs to a carrot plant or a weed [9].

5.1. Setup of the proposed convolutional neural network

The convolutional neural network for pixelwise classification consists of 11 layers and uses an RGB image of 101 × 101 pixels as input. It has the same structure as in Figure 7, but the output logic is able to classify three classes.

Figure 7.

Example CNN for classifying carrot plants in organic farming [9].

The convolutional neural network consists of:

  • Input layer, size of 3 × 101 × 101

  • Convolution layer, size of 16 × 5 × 5 (Leaky rectify)

  • Convolution layer, size of 32 × 7 × 7 (Leaky rectify)

  • Pooling layer, size of 2 × 2, stride 2

  • Convolution layer, size of 32 × 5 × 5 (Leaky rectify)

  • Convolution layer, size of 64 × 7 × 7 (Leaky rectify)

  • Pooling layer, size of 2 × 2, stride 2

  • Dense layer, size of 64 (Tanh)

  • Dense layer, size of 64 (Tanh)

  • Dense layer, size of 64 (Tanh)

  • Output layer (Dense layer), size of 3 (Softmax)

The output layer consists of three neurons; as a result, three classes can be distinguished: carrot, weed, and background [9]. A minimal sketch of this architecture is given below.
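The following PyTorch sketch follows the layer list above. Padding, pooling type, leak slope, and the flattened feature size are not specified in the text, so they are assumptions here; an nn.LazyLinear layer avoids committing to a particular flattened size.

```python
import torch
import torch.nn as nn

class CarrotWeedCNN(nn.Module):
    """Sketch of the proposed network for 3 x 101 x 101 RGB input patches."""

    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5), nn.LeakyReLU(0.01),   # convolution 16 x 5 x 5
            nn.Conv2d(16, 32, kernel_size=7), nn.LeakyReLU(0.01),  # convolution 32 x 7 x 7
            nn.MaxPool2d(kernel_size=2, stride=2),                 # pooling 2 x 2, stride 2
            nn.Conv2d(32, 32, kernel_size=5), nn.LeakyReLU(0.01),  # convolution 32 x 5 x 5
            nn.Conv2d(32, 64, kernel_size=7), nn.LeakyReLU(0.01),  # convolution 64 x 7 x 7
            nn.MaxPool2d(kernel_size=2, stride=2),                 # pooling 2 x 2, stride 2
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(64), nn.Tanh(),   # dense 64
            nn.Linear(64, 64), nn.Tanh(),   # dense 64
            nn.Linear(64, 64), nn.Tanh(),   # dense 64
            nn.Linear(64, num_classes),     # output: carrot, weed, background
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))  # logits; apply softmax for probabilities

probs = torch.softmax(CarrotWeedCNN()(torch.randn(1, 3, 101, 101)), dim=1)
```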

5.2. Training of the proposed convolutional neural network

The proposed classifier was trained using CUDA on a GTX Titan with 6 GB of memory, with a batch size of 500 images. After a training period of about 3 weeks and 950 million iterations, the convolutional neural network was tested [9].
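A corresponding training step might look as follows; only the batch size of 500 comes from the text, while the optimizer, learning rate, and dummy data are illustrative assumptions (CarrotWeedCNN is the sketch from Section 5.1).

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# dummy stand-in data; the project used manually labeled field-image patches
patches = torch.randn(2000, 3, 101, 101)
labels = torch.randint(0, 3, (2000,))  # 0 carrot, 1 weed, 2 background
loader = DataLoader(TensorDataset(patches, labels), batch_size=500, shuffle=True)

model = CarrotWeedCNN().to(device)
model(torch.zeros(1, 3, 101, 101, device=device))  # materialize the lazy layer first
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed optimizer
criterion = torch.nn.CrossEntropyLoss()  # applies softmax internally

for x, y in loader:  # one epoch; the project trained for about 3 weeks
    x, y = x.to(device), y.to(device)
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```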

To evaluate the proposed CNN algorithms, a huge image database was built up. The images in the database were taken with the robot shown in Figure 8. The robot, named BoniRob, drove at a constant 200 mm/s over the field and took an image every second. These measurements were repeated daily for several weeks under different weather conditions. The images were taken with the Nikon D5300. Seven hundred plants were evaluated: if the majority of the pixels of a plant are identified as one class, the plant is assigned to this class. The resulting classified images of the plants were compared manually to the input images to determine the error rate, which was less than 1% over the 700 plants. An error here means the classification of a weed as a carrot; a carrot was never classified as a weed, which is important for the farmers [9].
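The plant-level decision described above is a simple majority vote; a sketch, assuming per-pixel class indices for one segmented plant region:

```python
import numpy as np

def plant_class(pixel_classes: np.ndarray) -> int:
    """Majority vote over per-pixel CNN predictions (0 carrot, 1 weed, 2 background)."""
    counts = np.bincount(pixel_classes.ravel(), minlength=3)
    return int(np.argmax(counts[:2]))  # decide between the two plant classes only
```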

Figure 8.

BoniRob on the field of the Westhof Bio farm in Germany.

Figure 9 shows the weeds in red and the carrots in green. This image was created pixelwise, but only every tenth pixel was evaluated to save time. The white circle marks a position where the CNN classified a carrot within a weed plant. We had manually labeled this position as weed only, which was wrong; hence, the network corrected us [9].

Figure 9.

First result of the CNN [9].

A more detailed analysis of the accuracy is again done via the Dice score, Eq. (1) [10]. Several convolutional neural networks have been designed to perform the classification task; their exact operation is presented in [9]. To test the CNN, images other than the training images were used, that is, unknown test data. The images were first classified by hand so that a ground truth was available for comparison. An original image, a ground truth image, and an image classified by the proposed pixelwise CNN from the same area can be seen in Figure 10.

Figure 10.

Top: the original image. Bottom left: the ground truth, classified by hand. Bottom right: the image classified by the proposed pixelwise convolutional neural network.

The image classified with the pixelwise CNN in Figure 10 is smaller than the original and the ground truth image. The three convolutional neural networks proposed in [9] (pixelwise, skeleton, and area classification) are compared to each other, and the results are shown in Table 2. It should be noted that the calculation of the Dice score was based only on the pixels that were not classified as background in both the ground truth image and the images of the presented algorithms. This is because, in the marginal area of a plant, the threshold value does not always produce exactly the same edge lines, and a misleading error would otherwise be calculated.

Method    | Class  | DS    | Standard deviation | Number of images | Number of classifications
Pixelwise | Carrot | 0.986 | 0.020              | 45               | 1,339,190
Pixelwise | Weed   | 0.881 | 0.102              | 45               | 188,869
Skeleton  | Carrot | 0.983 | 0.024              | 44               | 164,781
Skeleton  | Weed   | 0.889 | 0.077              | 44               | 25,148
Area      | Carrot | 0.996 | 0.005              | 25               | 808,069
Area      | Weed   | 0.968 | 0.043              | 25               | 120,840

Table 2.

Dice score and its standard deviation for the three proposed convolutional neural networks.

In Table 2, it can be seen that the numbers of classifications for carrots and weeds differ. For example, weed plants occur less often in Figure 10 than carrots. While the carrots are relatively homogeneous in size, most weeds are relatively small; therefore, their number of pixels is lower than that of the carrots. However, it was ensured that a relatively equal number of small and large weeds were used in training. This prevents the convolutional neural network from keying only on size [9].

It can be seen from Table 2 that the accuracy for the recognition of a carrot pixel is over 98% for all the presented methods, with a standard deviation of approximately 2%. The presented area classifier even reaches an accuracy of 99.6% with a standard deviation of only 0.5%. For the weeds, on the other hand, the pixelwise method has an accuracy of 88.1% and the skeleton method of 89%, which means that around 11% of all weed pixels are classified as carrot plants. In the area classification, the accuracy is over 96% for weed detection and thus almost as good as for the carrots, with a standard deviation of only 4.3%. The area classifier shows the best results. A complete statistical evaluation can be found in [9].


6. Destruction

The classification results are then transferred to a destruction unit, which should remove the weeds in an environmentally friendly way. How exactly this destruction unit will work is still being investigated; there are many different considerations. The method must have a number of properties. The most important is the environmental compatibility required by organic farming; the use of herbicides is therefore forbidden. Furthermore, the method must be very robust: it has to work over several thousand kilometers of carrot rows without major maintenance. A method in which the weed is plucked, for example, would be too complicated; looking at the individual steps of the plucking process, it is noticeable that a variety of sensory and motor skills are required to pull a plant out of the soil without tearing it apart. Therefore, mechanically simple approaches remain. Some candidate methods are listed in the following:

  • Stamping

    1. While stamping, the plant is pressed into the soil. It is not completely eliminated, but the crop gains an advantage in the distribution of nutrients.

  • Hot bolt

    1. Similar to stamping, a hot tip is used here in order to burn the growth center of the plant.

  • Mill

    1. A small milling cutter mills the weed out of the ground.

  • Laser

    1. The plant is burned by a laser. Problems could be the required power and the lens system, which might become stained in the harsh environment.

  • Maser

    1. Similar to the laser, bundled microwave beams could be used. The advantage would be that this radiation also penetrates the soil. It is questionable, however, whether microwave radiation is accepted in ecological agriculture. In addition, the question arises whether necessary soil life would be destroyed.

  • Electricity

    1. An induced current could burn the plant or cause a chemical process in the plant so that it dies. First experiments with a Tesla coil have been initiated. The Tesla beam could be targeted by means of ionized air.

  • Matter beam

    1. A water cutter or sandblaster could destroy the weed. A problem would be carrying the required materials.

Another important property of the destruction method is speed. In addition, as explained in the next section, further miniaturization is needed; therefore, it would be desirable for the destruction unit to have small space and energy requirements.


7. Future objectives

One of the most difficult problems in the use of artificial neural networks is the computational capacity. Although large search engine companies own specially developed hardware to provide the necessary computing power, the conventional user is left with the state-of-the-art method of using a graphics processing unit (GPU) as the computational basis. Although these processors are well suited for large matrix computations, they need massive amounts of energy. Therefore, a new processor on the basis of an FPGA was developed. The power consumption of an FPGA is only a fraction of that of a GPU application, so it should be well suited for autonomous, self-contained calculations.

The increasing computing power of recent years allows more and more complex algorithms for the implementation of adaptive artificial intelligence. The developed hardware processor consists of three main components: the control unit, the data path, and the data manager. The architecture of the proposed hardware processor is illustrated in Figure 11. The complete configuration of the CPU is shown in [7].

Figure 11.

The architecture of the proposed CPU for the calculation of convolutional neural networks [7].

The control unit is responsible for determining which operation the processor should currently execute. Each individual step is stored in the program memory. As shown in Figure 11, the core of the control unit is the instruction register (IR), into which the individual steps for each task are written. With each step, a program line is loaded from the program memory [7].

The data manager contains the memory management unit (MMU). It stores the memory addresses and the associated information, for example, the start addresses of the different feature maps [7].

The data path is also shown in Figure 11. The accumulator (AKKU) contains all the data that have to be processed; the data manager knows where to find the information in the AKKU. The arithmetic logic unit (ALU) is responsible for the calculations. Figure 11 shows the multiply and accumulate (MAC), threshold, and subsampling units. The MAC unit is used, for example, for the convolution and dense layers [7].
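To make the role of the MAC unit concrete, the sketch below models the multiply-accumulate chain it performs for one output value of a convolution or dense layer; this is a behavioral model for illustration, not the hardware implementation from [7].

```python
import numpy as np

def mac_accumulate(window: np.ndarray, kernel: np.ndarray, bias: float = 0.0) -> float:
    """One convolution output value as a chain of MAC operations.

    window: input patch from the feature map, same shape as kernel.
    The hardware MAC unit computes acc += a * b once per clock cycle.
    """
    acc = bias
    for a, b in zip(window.ravel(), kernel.ravel()):
        acc += float(a) * float(b)  # multiply and accumulate
    return acc

# numerically equivalent to np.sum(window * kernel) + bias
```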

In the future, the CPU will be further developed so that it can be used for plant recognition. The current problem is that the FPGA used has too little memory to process networks as large as those needed for plant classification. Therefore, a more suitable FPGA should be used [7].

More in-depth information about the developed CPU can be found in [7].

In upcoming projects, the existing robot will be optimized with a more cost-effective and more efficient design. Through miniaturization in industry and research, a large number of new technologies have been developed over the past decades. As mentioned at the beginning, it is always difficult to find coworkers for weeding in the field because of the hard working conditions: several workers, lying face down on a rack pulled by a tractor, are drawn across the field and weed by hand. In order to relieve the employees of this harmful work, a transport device operated by electric motors and a solar panel on the roof was developed. This drives the workers over the field without pollutants and noiselessly.

The robot presented here was the first step toward the automation of agriculture. However, the current BoniRob, with a weight of about one ton and the size of a passenger car, is very bulky. After a long trial phase in the field, several disadvantages have emerged: due to its dimensions, transport is difficult, and maneuvering in bad weather is not always possible due to the soil conditions. Another disadvantage is the high price of approximately 240,000 €.

To address these criteria, the BoniRob will be miniaturized in the future. First, the principle of the electric rack should be transferred to the BoniRob: an extermination unit is installed at the position of the coworker, a central computing system classifies the recorded images and passes the positions of the weeds to the destruction unit, and several batteries and a solar panel supply these units. A guidance system allows the robot to run autonomously over the field.

A possible further miniaturization in the future is the use of a drone swarm. The first experiments can be seen in Figure 12. Cameras at the bottom of the drones take pictures of the plants. In order to save energy, the images and their GPS coordinates are sent wirelessly to a central server, where the energy- and performance-intensive classification and detection are performed. Subsequently, the results are sent back to the drone, and the weed is destroyed by an extermination unit. A sketch of such a message round trip is given after Figure 12.

Figure 12.

First drone experiments on the field of the Westhof Bio farm in Germany.
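The offloading round trip described above (image plus GPS position to the server, weed positions back to the drone) could be captured by a message pair like the following sketch; all field names and the server stub are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class DroneObservation:
    drone_id: str
    latitude: float     # GPS position where the image was taken
    longitude: float
    jpeg_bytes: bytes   # compressed camera image, sent wirelessly to the server

@dataclass
class WeedTarget:
    latitude: float     # position the extermination unit should treat
    longitude: float
    confidence: float   # classifier confidence that this is a weed

def classify_on_server(obs: DroneObservation) -> list[WeedTarget]:
    """Server side: run the energy-intensive segmentation and CNN classification."""
    return []  # placeholder: apply Sections 4 and 5, map weed pixels to GPS targets
```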

Through the complete networking of the sensors in Industry 4.0, the robots would receive, for example, weather data and decide when the best time is to process the fields and when they have to fly back to their station to protect themselves or recharge their batteries.


8. Risks of automation

Through complete automation, human workers would no longer be necessary, and it is possible that knowledge would be lost. It would seem that no one still needs the extensive knowledge of a farmer; eventually, no one would know how the weeds differ from the crops, as this is handled by the machines. If, for example, a solar storm destroyed all electronics, we would be thrown back to the Stone Age, and at that moment we would not have the knowledge needed to manage the fields. This example applies, of course, to any other branch of industry where artificial intelligence takes over. Therefore, it is imperative to pass on this knowledge, despite the seemingly nonexistent necessity. Here again, the autonomous systems could help us: through special interfaces like the HoloLens, the system could inform us about its fieldwork experience and explain new insights. Humanity would thus remain involved in the future and ready to take over the work. Another disadvantage is the high capital investment needed for automation: the completely connected automation system described in this chapter would nowadays cost millions of euros. With the increasing number of connected systems and their communication, manipulation and data safety are further aspects to consider.


9. Conclusion

The robot is only the first step in the development of a fully automated farm. Since it is very large and expensive, drones or small beetle-like robots are already being considered in future visions. These small autonomous systems could support each other by means of swarm intelligence and process the fields. Not only weeding should be considered, but also the targeted removal of pests; this would enable a completely pesticide-free and herbicide-free agriculture. The robot presented here makes it possible to distinguish the plants at early stages of growth with a higher precision than humans can, and it processes the first plant with the same concentration as the last. This project represents a step toward an organic agriculture efficient enough for the mass market, in which the environment is spared and more natural food can be produced. Through these robots and the arrival of Industry 4.0 in agriculture, production prices could fall and high-quality foodstuffs could be produced for the whole world.


Acknowledgements

This work was supported by the Federal State of Schleswig-Holstein, Germany. The authors are grateful for the financial support.

Biographies

F. J. Knoll received his first degree from the West Coast University of Applied Sciences (FHW), Heide, Germany, in 2009. After studying physics at the University of Hamburg, he received his M.A. degree at the West Coast University of Applied Sciences (FHW), Heide, Germany. He is currently working on his doctoral thesis. His research areas include convolutional neural networks, real-time 2D/3D image processing, embedded systems design, and algorithm design for FPGA implementation.

Vitali Czymmek received the M.Sc. degree from the West Coast University of Applied Sciences, Heide, Germany, in 2017. He joined the project High-Precision Weed Recognition as a working student in 2015 and now works as a research associate. His main areas of research interest are digital image processing, industrial automation solutions, and their control systems.

References

  1. Federal Ministry for Economic Affairs and Energy. 2017. URL: http://www.plattform-i40.de/I40/Navigation/EN/Industrie40/WhatIsIndustrie40/what-is-industrie40.html
  2. Knoll F, Holtorf T, Hußmann S. Investigation of different sensor systems to classify plant and weed in organic farming applications. In: SAI Computing Conference; 2016. pp. 343-348
  3. Knoll F, Holtorf T, Hußmann S. Vegetation index determination method based on color room processing for weed control applications in organic farming. In: I2MTC 2016, Proceedings of the 33rd International Conference on Instrumentation and Measurement Technology. IEEE Instrumentation and Measurement Society; 2016. pp. 1024-1029
  4. Jung S, Lee U, Jung J, Shim D. Real-time traffic sign recognition system with deep convolutional neural network. In: 13th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI). IEEE; 2016
  5. Li J, Mei X, Prokhorov D. Deep neural network for structural prediction and lane detection in traffic scene. IEEE Transactions on Neural Networks and Learning Systems; 2016
  6. Zheng YJ, Sheng WG, Sun XM, Chen SY. Airline passenger profiling based on fuzzy deep machine learning. IEEE Transactions on Neural Networks and Learning Systems; 2016
  7. Knoll F et al. CPU architecture for a fast and energy-saving calculation of convolution neural networks. In: SPIE Digital Optical Technologies International Symposium; 2017
  8. Hermann M, Pentek T, Otto B. Design principles for Industrie 4.0 scenarios. In: 2016 49th Hawaii International Conference on System Sciences (HICSS); 2016. pp. 3928-3937. DOI: 10.1109/HICSS.2016.488
  9. Knoll F et al. Vision based measurement system for organic farming applications using deep learning. IEEE Transactions on Instrumentation and Measurement; submitted for publication, 2017
  10. Dice LR. Measures of the amount of ecologic association between species. Ecology. 1945;26(3):297-302
  11. Haug S et al. Plant classification system for crop/weed discrimination without segmentation. In: IEEE Winter Conference on Applications of Computer Vision (WACV); 2014. pp. 1142-1149
