Open access peer-reviewed chapter

Image Acquisition for Biometric: Face Recognition

Written By

Siddharth B. Dabhade, Nagsen S. Bansod, Yogesh S. Rode, Narayan P. Bhosale, Prapti D. Deshmukh and Karbhari V. Kale

Submitted: 24 September 2021 Reviewed: 19 January 2022 Published: 05 March 2022

DOI: 10.5772/intechopen.102767

From the Edited Volume

Recent Advances in Biometrics

Edited by Muhammad Sarfraz


Abstract

Biometrics is mostly used for authentication in security applications. Owing to the COVID-19 pandemic, authentication systems that work at a distance have received increased attention. Face recognition is one of the best approaches for authentication at a distance, but it is a challenging task in varied environments, and taking input from a camera is essential for real-time applications. In this chapter, we focus on how to acquire a face image using MATLAB. The chapter is divided into five main sections: introduction, definitions of biometrics, the biometrics model, image acquisition devices, and the image acquisition process in MATLAB.

Keywords

  • face recognition
  • biometric
  • image acquisition
  • image processing
  • imtool

1. Introduction

Biometrics is the science of establishing the identity of an individual based on the physical, chemical or behavioral attributes of the person [1, 2]. Attributes that are unique to an individual are called biometric identifiers. Physical characteristics of a person, such as the face, fingerprint, retina and iris, do not vary much over time. Behavioral biometrics, such as voice, signature and keystroke dynamics, identify a person by measuring how certain actions are performed through the body, for example via voice-scan and signature-scan. The element of time is essential to behavioral biometrics because such traits may change with time [3].

In the internet world, many companies run their business on a client–server basis and authenticate a client's request with a username and password. With this approach there is a chance that an illegal entry is made or that private and confidential data are accessed.

When an end-user relies on additional materials or information for authentication, such as a smart card, username and password, tokens or IDs, a passport or a driving license, there is a chance that these items are lost or stolen, or that passwords and IDs are guessed or forgotten [4]. Therefore, we require a system that does not depend on such external resources for authentication. Fortunately, a biometric authentication system provides an alternative and robust identification system for these problems, because the user must be personally present at the time of identification or verification. As far as security is concerned, there are three approaches to authenticating a person. The first is something you know, such as a password, PIN or security questions. The second is something you have, such as a key, RFID tag, ATM card or smart card. The third is something you are: information that is always with you, cannot be forgotten or stolen, and requires your presence, i.e. biometrics. Among these approaches, biometrics is the most suitable because it is always with the person; it can never be borrowed, stolen or forgotten [5].

Biometrics is the process of identifying unique patterns in the physical, behavioral or chemical properties of a person for authentication. Face, fingerprint, iris, palm, retina, hand geometry, etc. are physical biometric traits, whereas voice, gait and dynamic keystrokes are behavioral traits, and DNA, saliva, body odor, etc. are chemical biometric traits [6, 7].

The process of how a biometric system works (shown in Figure 1) is as follows, with a toy MATLAB sketch of these steps given after the figure:

  1. Capture the biometric data from the appropriate sensor;

  2. Extract the features from the captured image and store them as a template;

  3. The biometric template can be stored on a smart card, a local machine or a server for future use;

  4. Scan the current biometric trait data;

  5. For processing, extract the features from the scanned image and retrieve the stored template;

  6. For matching, compare the processed input features with the existing biometric template;

  7. On the basis of the matching (ranking) score, make the business-level decision; and

  8. Evaluate the security of the system for proper use.

Figure 1.

How a biometric system works.
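To make steps 2 through 7 concrete, here is a toy MATLAB sketch, not the method of any particular system: the feature extractor (a coarse grayscale thumbnail), the negated Euclidean distance used as a matching score and the threshold value are all arbitrary illustrations, and enrolledImage and scannedImage stand for face images captured at enrollment and verification time.

% Toy feature extractor (illustration only): a coarse grayscale thumbnail as a column vector
extractFeatures = @(img) double(reshape(imresize(rgb2gray(img), [16 16]), [], 1));

% Enrollment: extract features from the captured image and store them as a template
template = extractFeatures(enrolledImage);

% Verification: extract features from the freshly scanned image
probe = extractFeatures(scannedImage);

% Matching: a higher score means a closer match (negated Euclidean distance)
score = -norm(probe - template);

% Decision: accept or reject against a predefined, application-specific threshold
threshold = -500;                  % arbitrary example value
isAccepted = (score >= threshold);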


2. Definitions of biometric

Biometrics is a way of automatically identifying or verifying a person based on their physical, chemical or behavioral properties. It is a scientific way of analyzing the unique biological patterns of an individual through the use of advanced technology and of finding a unique representation of that person. Biometrics depends purely on the physical, behavioral or chemical properties of the human being for secure access, identification and verification; without such properties, biometric devices are of no use in authentication. In other words, biometrics is the science of identifying or verifying a person by measuring unique patterns in the face, fingerprint, voice, etc. These unique patterns, called features, are stored in embedded devices or smart cards and are known as templates or bio-prints. They are used to verify the identity of the person by comparing them to the previously stored bio-prints [8].


3. Biometrics model

In general, the biometric model (Figure 2) is divided into five parts: data collection, transmission, data storage, signal processing and decision. They are described as follows.

Figure 2.

The block diagram of the biometric model.

3.1 Data collection

The first part is data collection, which consists of the biometric presentation and the sensor. In this part, the biometric modality is captured through the biometric sensor and represented in an equivalent format that the system can process. Biometric data samples are collected for various physical or behavioral traits, and the samples taken from one instance should remain consistent across multiple impressions, iterations or repeated captures over time. At the time of data acquisition through the sensor, technical issues may arise, such as background noise while speech samples are being taken or faults in the sensing capacity of the sensor. Sometimes the user does not cooperate while the samples are being collected, or too much pressure is applied to a fingerprint device, and noisy data are captured.

3.2 Transmission and data storage

When the data volume is large, the data sometimes need to be stored in a compressed format for fast transmission. We need to be careful while selecting the compression algorithm, otherwise there is a chance of adding artifacts to the original data samples.

It is not mandatory to store the data on the capture device; the data may be stored on a local machine or on a server, depending on the application requirements and cost-effectiveness. Sometimes there is no need to store the data on a server at all, and the application takes care of storing it in a secure format on the same device.

3.3 Signal processing

The core component of any biometric system is signal processing, in which we check the quality of the image and perform feature extraction and pattern matching. If distortion in the input produces a noisy image or poor-quality data, the image or biometric sample needs to be recaptured. Once good-quality data are ensured, feature extraction proceeds with a technique suitable for the application. Pattern matching plays a key role: the stored data template is matched against the given input sample, and the pattern matcher sends the matching result to the decision module for the final decision.

3.4 Decision

Based on the pattern matching score, the decision module decides whether to accept or reject the person by using a predefined threshold value [9].


4. Types of image acquisition devices

The camera is one of the most common image acquisition devices. Cameras are mainly divided into two types, i.e. analog and digital cameras. Digital cameras can be further classified into parallel digital, Camera Link and IEEE 1394 cameras [10, 11].


5. Image acquisition process in MATLAB

MathWorks has developed a proprietary multi-paradigm programming language and numeric computing environment known as MATLAB, an abbreviation of Matrix Laboratory. In MATLAB, we can perform matrix operations, plot various graphs, develop functions and interfaces, and interface with programs written in other programming languages.

MATLAB provides a programming and numeric computing platform for data analysis, algorithm development and model creation; hence, it is widely used by scientists, engineers and researchers. If you wish to learn more about the image acquisition process and its capabilities (Image Acquisition Toolbox), the MATLAB documentation is the best source.

The Image Acquisition Toolbox provides the ability to handle these acquisition tasks through predefined functions. The toolbox defines a wide range of functions that support the following image acquisition operations (a short sketch combining several of them follows this list):

  • Acquiring images from many types of image acquisition devices, from professional-grade frame grabbers to USB-based webcams

  • Viewing a preview of the live video stream

  • Triggering acquisitions (includes external hardware triggers)

  • Configuring callback functions that execute when certain events occur

  • Bringing the image data into the MATLAB workspace
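The following sketch combines several of these operations, assuming a camera available through the 'winvideo' adaptor with device ID 1; the frame counts and timeout are arbitrary example values.

vid = videoinput('winvideo', 1);                      % create a video input object
set(vid, 'FramesPerTrigger', 30);                     % frames to log per trigger
set(vid, 'FramesAcquiredFcnCount', 10);               % run the callback every 10 acquired frames
set(vid, 'FramesAcquiredFcn', @(obj, event) ...
    fprintf('%d frames acquired so far\n', get(obj, 'FramesAcquired')));
preview(vid);                                         % view a preview of the live video stream
start(vid);                                           % trigger the acquisition
wait(vid, 30);                                        % block (up to 30 s) until logging finishes
frames = getdata(vid, get(vid, 'FramesAvailable'));   % bring the image data into the workspace
stoppreview(vid);
delete(vid);                                          % release the device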

MATLAB allows you to extend the Image Acquisition Toolbox in your own code or to combine it with other toolboxes, such as the Image Processing Toolbox and the Data Acquisition Toolbox. It also provides the Image Acquisition Blockset, a Simulink interface; this blockset extends Simulink with a block that lets you bring live video data into a model. To get live image data from plug-in acquisition boards, MATLAB provides the Data Acquisition Toolbox, through which we can communicate with the boards.

Functions for image processing, analysis and algorithm development are defined in the Image Processing Toolbox. Functions for control of, and communication with, test and measurement equipment are defined in the Instrument Control Toolbox. You can also use the Video and Image Processing Blockset from a Simulink model.

5.1 Basic image acquisition procedure

To develop a motion detection application, certain basic steps are required, which are shown in Figure 3. In such an application, pixel-to-pixel variations between acquired image frames indicate a change in the scene. Sometimes the frame stays constant, which means there is no change in the pixel values of the incoming frame; when variation is found in the incoming frame's pixel values, the scene has changed, and the application can display this. Only a few lines of code are required to acquire image frame data with the help of the toolbox, as described in the examples in Section 5.2 (a minimal frame-differencing sketch is also given after Figure 4). To execute the code in the example, an image acquisition device must be connected to your system. Devices range from professional frame grabbers, which deliver high-quality image data, to generic Windows webcams. The sample code can capture images from different types of image acquisition devices, sometimes with only minor changes. Figure 3 shows how to acquire image data with the help of the Image Acquisition Toolbox and Figure 4 shows the Image Acquisition Toolbox components.

Figure 3.

Image acquisition basic steps.

Figure 4.

Image acquisition toolbox components.
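As a minimal sketch of the frame-differencing idea described above, the following loop flags motion when the mean absolute change between consecutive frames exceeds a threshold. It assumes a color camera reachable through the 'winvideo' adaptor with device ID 1, and the threshold is an arbitrary example value.

vid = videoinput('winvideo', 1);                 % adjust adaptor/ID for your camera
motionThreshold = 10;                            % example value: mean gray-level change that counts as motion
previousFrame = rgb2gray(getsnapshot(vid));      % reference frame (assumes an RGB camera)
for k = 1:100                                    % inspect 100 consecutive frames
    currentFrame = rgb2gray(getsnapshot(vid));
    frameDiff = abs(double(currentFrame) - double(previousFrame));
    if mean(frameDiff(:)) > motionThreshold      % pixel-to-pixel variation indicates a scene change
        fprintf('Motion detected at frame %d\n', k);
    end
    previousFrame = currentFrame;
end
delete(vid);                                     % release the device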

5.2 Example: acquiring 10 seconds of image data

In this example, you can configure time-based acquisition using the number of frames per trigger.

5.2.1 Create an image acquisition object

Before taking input from the camera connected to your system, you first need to create an object. The camera delivers a continuous stream of data; to display this information in a window and capture images from it, you create a video input object for your camera device and access the device through its properties. You can check the list of image acquisition devices by using the imaqhwinfo function, which also returns the syntaxes and formats available for the respective devices as structured data. By selecting the appropriate device information, you can generate a window-based video output for your camera. The videoinput() function creates the video input object. You can pass two parameters when calling this function: the first is the adaptor (type of camera) and the second is the camera ID. The syntax for the creation of the camera object is:

vid = videoinput('winvideo', 1);

Here, vid is the camera object, videoinput is the function, winvideo is the image acquisition adaptor (device category) and 1 is the camera ID number.
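For example, you can discover which adaptors, devices and formats are available before creating the object; the 'YUY2_640x480' format string below is only a typical example, and the actual formats depend on your camera.

info = imaqhwinfo;                                 % installed adaptors, e.g. 'winvideo'
info.InstalledAdaptors
dev = imaqhwinfo('winvideo');                      % details for the winvideo adaptor
dev.DeviceInfo(1).SupportedFormats                 % video formats offered by device ID 1
vid = videoinput('winvideo', 1, 'YUY2_640x480');   % optionally pass an explicit format (device dependent)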

5.2.2 Configure properties

Once the camera object has been created, you can acquire image data for a specific length of time. If you wish to acquire 10 or 20 seconds of data from your camera, you have to set the FramesPerTrigger property. To calculate FramesPerTrigger, first check the camera's frame rate (frames per second) and multiply it by the number of seconds; the result is used in the camera configuration.

Example: if the frame rate of the camera is 20 frames per second and you want to acquire 10 seconds of data, the value becomes 20 * 10 = 200. To set this configuration, the set() function is available in MATLAB. This function receives three arguments: the first is the video object, i.e. vid; the second is the configuration property, i.e. FramesPerTrigger; and the third is the value of FramesPerTrigger, i.e. 200 in this example.

set(vid, 'FramesPerTrigger', 200);
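Equivalently, the frame count can be computed from the camera's frame rate rather than hard-coded; in this sketch, 20 frames per second is only the example value used above.

frameRate = 20;                                             % frames per second reported for your camera
captureSeconds = 10;                                        % desired acquisition length in seconds
set(vid, 'FramesPerTrigger', frameRate * captureSeconds);   % 20 * 10 = 200 frames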

5.2.3 Start the image acquisition object

To acquire images from the camera into our system, we have to start the camera object's frame grabbing; the start() function is available in MATLAB for this purpose.

Example: start(vid);

After the start() function is called, the video object starts and stores temporary data in a memory buffer. It acquires image data continuously until the specified number of frames, 200 in this example, has been received. The acquisition is triggered when start() is called in our program and stops when the specified number of frames is in the memory buffer.

Figure 5 shows the image preview when you start the video object.

Figure 5.

Image preview.

5.2.4 Bring the acquired data into the workspace

Load your data to verify that the acquired data are complete and match the planned resolution and configuration. In MATLAB, the getdata() function returns the frames acquired within the specified time slot together with their timestamps. We can verify the amount of acquired data from the timestamps by taking the difference between the first frame and the last frame.
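A short sketch of this verification step is shown below, assuming the 200-frame configuration from above; getdata() returns the frame data together with timestamps in seconds.

[frames, timeStamps] = getdata(vid, 200);          % bring all 200 logged frames into the workspace
size(frames)                                       % height x width x bands x 200
elapsed = timeStamps(end) - timeStamps(1);         % should be close to the 10 seconds requested
fprintf('Acquired %d frames spanning %.2f seconds\n', size(frames, 4), elapsed);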

Start Camera Code:

global vid;                                            % share the video object across GUI callbacks
vid = videoinput('winvideo', 1);                       % create the video input object
vidRes = get(vid, 'VideoResolution');                  % [width height] of the video stream
nBands = get(vid, 'NumberOfBands');                    % number of color bands (3 for RGB)
set(gcf, 'CurrentAxes', handles.axes1);                % draw the preview in the GUI axes
hImage = image(zeros(vidRes(2), vidRes(1), nBands));   % placeholder image of matching size
preview(vid, hImage);                                  % start the live preview in that image

Once you have started the video object and set the bands, you can preview live camera acquisition data in the window, as shown in Figure 6. You can then position the face in the camera preview and follow the next steps to capture the previewed image, as shown in Figure 7.

Figure 6.

Image preview in GUI.

Figure 7.

Image preview in GUI after capturing the image.

Capture Image Code:

global vid;                                % the running video object from the preview code
global im;                                 % captured frame, shared with the rest of the GUI
%% Image capture from the current video preview
im = getsnapshot(vid);                     % grab a single frame from the live stream
set(gcf, 'CurrentAxes', handles.axes2);    % switch to the second GUI axes
imshow(im);                                % display the captured frame

In this way, we have successfully captured images using MATLAB code. Now we have to develop the face database for the face recognition application [12, 13, 14, 15], and then proceed to feature extraction, classification and recognition using your preferred techniques [16, 17, 18].
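As a final, hypothetical sketch, the captured frame can be written into a per-subject folder to grow the face database and the device released afterwards; the folder layout and file naming here are only an illustration, not a prescribed convention.

subjectDir = fullfile('FaceDB', 'subject_01');     % illustrative database layout: one folder per subject
if ~exist(subjectDir, 'dir')
    mkdir(subjectDir);
end
fileName = sprintf('face_%s.png', datestr(now, 'yyyymmdd_HHMMSS'));
imwrite(im, fullfile(subjectDir, fileName));       % im is the frame captured with getsnapshot above
stoppreview(vid);                                  % stop the live preview
delete(vid);                                       % release the camera
clear vid im;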


6. Conclusion

Biometrics is mostly used for authentication in security applications. Face recognition in real time is itself a challenging task, and taking input from the camera is very important for real-time applications. In this chapter, we have mainly focused on how to acquire a face image using MATLAB. The chapter is divided into five main sections: introduction, definitions of biometrics, the biometrics model, image acquisition devices, and the image acquisition process in MATLAB. Each section is explained in detailed steps for upcoming young researchers.

References

  1. Jain AK, Ross AA, Nandakumar K. Introduction to Biometrics. Vol. XVI. 312. Boston, MA: Springer; 2011. p. 196. DOI: 10.1007/978-0-387-77326-1
  2. Jain AK, Patrick F, Ross AA, editors. Handbook of Biometrics. Vol. X, 556. Boston, MA: Springer; 2008. p. 60. DOI: 10.1007/978-0-387-71041-9
  3. Adeoye OS. A survey of emerging biometric technologies. International Journal of Computer Applications (0975-8887). 2010;9(10):1
  4. Tripathi KP. A comparative study of biometric technologies with reference to human interface. International Journal of Computer Applications (0975-8887). 2011;14(5):10
  5. Dhir V et al. Biometric Recognition: A Modern Era for Security. International Journal of Engineering Science and Technology. 2010;2(8):3364-3380
  6. Liu S, Sullivan J, Ormaner J. A practical approach to enterprise IT security. IT Professional. Vol. 3, No. 5. Manhattan, New York, U.S.: IEEE; 2001. pp. 35-42. DOI: 10.1109/6294.952979
  7. Rosenzweig P, Kochems A, Schwartz A. Biometric Technologies: Security, Legal, and Policy Implications. Vol. 12. Washington, D.C., U.S.: Legal Memorandum, The Heritage Foundation; 2004. Available from: https://www.heritage.org/homeland-security/report/biometric-technologies-security-legal-and-policy-implications#
  8. Dahiya N, Kant C. Biometrics Security Concerns. In: 2012 Second International Conference on Advanced Computing & Communication Technologies, Rohtak, India. Manhattan, New York, U.S.: IEEE; 2012. pp. 297-302. DOI: 10.1109/ACCT.2012.36
  9. Tiwari S, Zhai G, Carter SA, Tiwari S. Evaluating the capability of biometric technology. International Journal of Advanced Research in Computer Engineering & Technology. 2012;1(2):18
  10. Image Acquisition. White Paper. Available from: https://www.ni.com/en-in/innovations/white-papers/06/image-acquisition.html [Accessed: June 21, 2021]
  11. Nudelman S. Image Acquisition Devices and Their Application to Diagnostic Medicine. In: Höhne KH, editor. Pictorial Information Systems in Medicine. NATO ASI Series (Series F: Computer and Systems Sciences). Vol. 19. Berlin, Heidelberg: Springer; 1986. DOI: 10.1007/978-3-642-82384-8_2
  12. Kazi MM, Rode YS, Dabhade SB, Al-Dawla NNH, Mane AV, Manza RR, et al. Multimodal Biometric System Using Face and Signature: A Score Level Fusion Approach. Advances in Computational Research. 2012;4(1):99-103
  13. Dabhade SB, Bansod N, Kazi MM, Rode YS, Kale KV. Hyper Spectral Face Recognition Using KPCA. International Journal of Scientific & Engineering Research (IJSER). 2016;7(10):548-550
  14. Dabhade S, Bansod N, Kazi M, Rode Y, Kale K. Hyper spectral face recognition using Gabor + KPCA. IOSR Journal of Computer Engineering (IOSR-JCE). 2017;2:61-64. Available from: https://www.iosrjournals.org/iosr-jce/papers/Conf.17003/Volume-2/12.%2061-64.pdf
  15. Siddharth BD, Bansod NS, Rode YS, Mkazi M, Kale KV. Performance Evaluation on KVKR-Face Database using Multi Algorithmic Multi Sensor Approach. International Journal of Computer Applications. 2018;180(13):30-36
  16. Dabhade SB, Bansod NS, Rode YS, Kazi MM, Kale KV. Multi sensor, multi algorithm based face recognition & performance evaluation. In: International Conference on Global Trends in Signal Processing, Information Computing and Communication (ICGTSPICC), Jalgaon, India. IEEE; 2016. pp. 113-118. DOI: 10.1109/ICGTSPICC.2016.7955280
  17. Siddharth BD, Bansod N, Naveena M, Khobragade K, Rode YS, Kazi MM, et al. Double Layer PCA based Hyper Spectral Face Recognition using KNN Classifier. In: International Conference on Current Trends in Computer, Electrical, Electronics and Communication (ICCTCEEC), Vidyavardhaka College of Engineering, Mysuru, Karnataka, India. IEEE; 2017. pp. 119-122
  18. Dabhade SB, Rode YS, Kazi MM, Manza RR, Kale KV. Face Recognition using Principle Component Analysis and Linear Discriminant Analysis: Comparative Study. In: 2nd National Conference on Advancements in the Multi-Disciplinary Systems, Technology Education & Research Integrated Institutions (TERII), Kurukshetra, Haryana. Elsevier; 2013. pp. 196-202
