
Automated Camera-Based Assessment of Short Physical Performance Battery (SPPB) for Older Adults with Cancer

Written By

Larry Duncan, Shaotong Zhu, Mackenzi Pergolotti, Smith Giri, Hoda Salsabili, Miad Faezipour, Sarah Ostadabbas and S. Abdollah Mirbozorgi

Submitted: 24 July 2023 Reviewed: 16 August 2023 Published: 11 April 2024

DOI: 10.5772/intechopen.112899

From the Edited Volume

Human Gait - Recent Findings and Research [Working Title]

Edited by Manuel Jesus Domínguez-Morales and Francisco Luna-Perejón


Abstract

This chapter introduces a motorized camera system designed for monitoring and evaluating the tests of the Short Physical Performance Battery (SPPB). The system targets physical performance assessment for older patients undergoing cancer treatment. The device is self-contained, incorporating a small computer, three cameras, and two motors. The core algorithms utilize three object recognition techniques: template matching, Haar cascades, and Channel and Spatial Reliability Tracking. To facilitate user interaction, graphical user interfaces (GUIs) are developed on the small computer, enabling test execution and camera adjustments via a cell phone and its hotspot. The system delivers precise results: gait speed tests cover a range of 0.041–1.92 m/s with average speed and distance accuracies exceeding 95%, and the standing balance and 5 times sit-stand (5TSS) tests achieve average time accuracies exceeding 97%. This novel camera-based device holds promise for enhancing the evaluation of lower-extremity fitness in elderly people receiving cancer care, offering a reliable and efficient solution for monitoring their progress and well-being.

Keywords

  • 5 times sit-stand
  • gait speed
  • image processing
  • object recognition
  • Short Physical Performance Battery (SPPB)
  • standing balance

1. Introduction

Approximately 60% of new cancer diagnoses and 70% of cancer-related deaths are found in adults aged 65 years and above [1]. The number of cancer survivors in the United States is expected to exceed 20 million by 2026 [2], with many experiencing a negative impact on their physical, social, and cognitive quality of life, potentially leading to disability [3]. Older adults undergoing cancer treatment are particularly vulnerable to functional decline, disability, increased healthcare use, and increased time in the hospital [4]. To address these challenges, there is a pressing need to fine-tune cancer treatment for older adults by evaluating their physical fitness and estimating treatment tolerability [5].

Physical performance tests, such as the Short Physical Performance Battery (SPPB), are recommended for determining treatment tolerability, but they are not routinely included in normal clinical practice because of time constraints and the need for manual testing [6, 7]. The SPPB, which objectively measures lower extremity function, comprises a walking-speed test, a 5-repetition chair-stand test, and 10-second standing balance tests [8].

To overcome the limitations of manual SPPB assessments and to improve initial evaluation, treatment preparation, and progress monitoring, automated tools are needed [9, 10]. In this context, as illustrated in Figure 1, vision-based systems, force-based mechanical approaches, acoustic sensor systems, and microelectromechanical systems (MEMS) have been explored. Vision-based systems in particular can collect a large amount of data [17, 18, 19].

Figure 1.

An overview of various technologies employed for gait speed, standing balance, and 5 times sit-stand tests. Row 1 presents the camera design implemented in this study, utilizing three object recognition techniques. The small square camera is the Raspberry Pi Camera V2.1 [11]. Row 2 shows camera and optical sensing systems, featuring lasers [12] and an infrared camera (Xbox One Kinect) [13]. Row 3 demonstrates mechanical systems, including shoe and air bladder sensors [14]. Row 4 includes ultrasonic sensor systems [15]. Row 5 has electrical accelerometer, gyroscope, and magnetometer sensors [16].

In this work we propose a standalone Camera-based Body Motion Tracking (CBMT) system, inspired by earlier studies [20, 21] and designed for use in clinics and hospitals to enable easier fitness evaluation of patients using the SPPB tests. The CBMT system performs object recognition and tracking of people, accurately processes all SPPB statistics, can be monitored via a cell phone, and allows report generation and cloud storage for medical professionals’ access. The design overview, hardware and software details, software implementation, experimental results, and a comparison with other systems are presented in subsequent sections. The chapter concludes with a discussion and implications of the proposed design.


2. Hardware design and overview

We developed a portable and standalone platform using a Raspberry Pi (RPi) computer instead of a regular computer for its compact size and ability to control multiple cameras and motors without adding other devices. Figure 2 illustrates the process of running tests with the CBMT system, depicting the coverage angles and range for the gait speed, standing balance, and 5 times sit-stand (5TSS) tests of the SPPB. This platform enables the evaluation of physical performance and supports personalized cancer treatment for older adults.

Figure 2.

Process of camera-based device performing short physical performance battery tests.

The CBMT’s cameras have an optimal detection range of 2–7.2 m, with a 60° viewing angle and a total horizontal rotation of 180°, providing a total view angle of 240°. The motors can also tilt the cameras up and down if required (±20°).

Figure 3 presents the block diagram representing the connecting operations of the hardware and software of the proposed CBMT system. It consists of two main parts: 1) the camera system (comprising the Raspberry Pi computer, motors, and cameras) and 2) the control smartphone that remotely operates the RPi and the system.

Figure 3.

The block diagram outlining the interconnections and interactions between the hardware and software components of the CBMT system.

The CBMT platform’s hardware components include: 1) one RPi 3B+ computer with 1 GB of RAM, 2) two DC motors, 3) two driver boards (L9110H) for the motors, 4) two USB cameras (Zealinno 1080P Webcam), 5) one Raspberry Pi V2.1 camera module, 6) a few acrylic sheets for assembly, 7) two camera fine adjustment pieces, and 8) a tripod. This physical setup performs all the necessary functionalities, with a processor, cameras, and angle rotation apparatus, to record needed videos for subsequent image processing used in gait speed, standing balance, and 5TSS calculations.

Figure 4 displays images of the constructed CBMT device, complete with Raspberry Pi, cameras, and motors. The platform’s physical dimensions are 13 cm × 19 cm × 36 cm (excluding the tripod). The gap between the left and right cameras is set at 273.5 mm to enhance distance accuracy. A marker symbol, located in the view area of the cameras, helps calculate the subject’s overall angle, regardless of the platform rotation angle (see Figure 4 inset). The CBMT system employs template matching to detect the marker and adjust the platform angle left/right or up/down [21]. For experiments requiring a 180° rotation of the platform, multiple markers can be used.

Figure 4.

The fully implemented platform features a Raspberry Pi 3B+ computer, three cameras, and two DC motors. (a) Front view of camera system with inset of angle marker used in the platform angle detection mechanism. The software adds a red rectangle to show detection of the marker. (b) Side view of camera system. (c) Overall view of cameras, tripod, and subject. Cameras are at head level.


3. Software design and implementation

3.1 Software in RPi

The block diagram in Figure 3 presents the internal functions of the hardware and software of the CBMT system. The Raspberry Pi block incorporates Thonny, an integrated development environment, facilitating Python script editing and execution. The graphical user interfaces (GUIs) are also developed as Python scripts, utilizing the tkinter (Tk interface) module. To enable cloud storage and easy accessibility of data online, Rclone, a command line program, is utilized [22].

In our design, we have installed Virtual Network Computing (VNC) Server and VNC Viewer in the Raspberry Pi, setting up a graphical desktop-sharing system. This setup allows the RPi Desktop to be viewed remotely on a cell phone screen through VNC Viewer [23]. The cell phone serves as the interface, remotely controlling the CBMT system and collecting data, which is also saved in Dropbox and Google Drive. The image and data processing take place within the Raspberry Pi, with the cell phone only showing the results. The cell phone has access to all data, images, and video files on the Raspberry Pi.

For the multifunctional software of the proposed CBMT system, we utilized various tools: the Python 3 programming language (version 3.7.3), the OpenCV (cv2) computer vision library (version 3.4.6), the Imutils image processing functions (version 0.5.3), the Channel and Spatial Reliability Tracking (CSRT) function within OpenCV, the tkinter module for GUI creation, the Thonny development environment on the RPi, VNC Server and VNC Viewer on the RPi and cell phone respectively, the Rclone command line program, and the RPi.GPIO library for General Purpose Input-Output through the pins on the Raspberry Pi. The software is outlined in Figure 5, which illustrates the steps and main equations involved in conducting the gait speed, standing balance, and 5TSS tests. All the Python scripts written for this device are publicly available on Zenodo [24].

Figure 5.

(a) Equations for each of the three SPPB tests, (b) gait speed test flowchart, (c) standing balance test flowchart, and (d) 5TSS test flowchart. In the camera-based tracking system three types of object recognition are utilized: template matching (TM), Haar cascades (HC), and Channel and Spatial Reliability Tracking (CSRT).

3.2 Cell phone and graphical user interfaces

To establish a connection between the RPi and the cell phone for remote control of the CBMT system, Virtual Network Computing is utilized through the cell phone’s Wi-Fi hotspot. The RPi’s GUIs, including the control GUI and camera GUI, are displayed on the cell phone screen, and both GUIs are accessed with touch commands. Essentially, the cell phone acts as the remote monitor, keyboard, mouse, and router for the RPi.

In Figure 6(a), the designed control GUI is presented, featuring a menu with options for gait speed, standing balance, and 5TSS tests. The control GUI includes buttons for running tests, displaying results on the phone, and saving data in comma-separated values (CSV) files. For each test, essential data such as the doctor’s name, patient’s ID, time stamp, and elapsed time are recorded. Additionally, for the gait speed test, supplementary data is kept for walking distance, walking distance error, average speed, and average of errors in a series of runs. The GUI also includes a button for syncing data to Google Drive and Dropbox, facilitating easy access to result files on those cloud platforms. Help buttons are available for user assistance, and the results are displayed in the control GUI while also being saved in CSV files and sent to the cloud with a single button press. Moreover, buttons are provided to delete video and plot files, and data for runs.
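To give a sense of how such a control GUI can be assembled with tkinter, the minimal sketch below wires a few buttons to test and cloud-sync actions. It is an illustrative outline only, not the actual script from [24]: the callback bodies are placeholders, and the Rclone remote name "gdrive:sppb_results" is a hypothetical example of a configured cloud destination.

```python
import subprocess
import tkinter as tk

def run_gait_speed_test():
    # Placeholder: the real script would start recording and processing video.
    print("Gait speed test started...")

def sync_to_cloud():
    # Copy result files to a cloud remote with Rclone.
    # "gdrive:sppb_results" is a hypothetical remote/path set up in `rclone config`.
    subprocess.run(["rclone", "copy", "/home/pi/results", "gdrive:sppb_results"])

root = tk.Tk()
root.title("CBMT Control GUI (sketch)")
tk.Button(root, text="Run gait speed test", command=run_gait_speed_test).pack(fill="x")
tk.Button(root, text="Sync results to cloud", command=sync_to_cloud).pack(fill="x")
tk.Button(root, text="Quit", command=root.destroy).pack(fill="x")
root.mainloop()
```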

Figure 6.

(a) The control GUI displaying menu options for the three SPPB tests. (b) The camera GUI, with an inset showing degree axes on the left and top, showcasing the cameras centered horizontally and vertically on the angle marker. (c) The case of a cell phone being used for control.

In Figure 6(b), the camera GUI has commands to operate the motors to adjust the angles of the camera platform. The first and second buttons allow the platform to turn a certain amount according to the number entered. The third and fourth buttons turn the platform’s direction to a specified angle relative to the marker. Horizontal and vertical angle axes with the marker are displayed (see Figure 6 inset). This setup process is crucial for preparing the system to commence a new experiment. Figure 6(c) illustrates the camera GUI when utilizing a cell phone for system control.
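Behind these buttons, the motors are driven through the L9110H boards via the RPi.GPIO library. The sketch below is a bare-bones, assumption-laden illustration of one pan-motor channel: the BCM pin numbers are hypothetical, and the fixed rotation time stands in for the real behavior, where the rotation amount is derived from the requested angle and the detected marker position.

```python
import time
import RPi.GPIO as GPIO

# Hypothetical BCM pins wired to one L9110H input pair (IA/IB);
# the actual CBMT wiring may differ.
MOTOR_IA, MOTOR_IB = 20, 21

GPIO.setmode(GPIO.BCM)
GPIO.setup(MOTOR_IA, GPIO.OUT)
GPIO.setup(MOTOR_IB, GPIO.OUT)

def rotate(direction, seconds):
    """Spin the pan motor 'left' or 'right' for a fixed duration, then stop."""
    GPIO.output(MOTOR_IA, direction == "left")
    GPIO.output(MOTOR_IB, direction == "right")
    time.sleep(seconds)
    GPIO.output(MOTOR_IA, False)  # both inputs low = motor off
    GPIO.output(MOTOR_IB, False)

try:
    rotate("left", 0.5)  # e.g., nudge the platform a small amount to the left
finally:
    GPIO.cleanup()
```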

3.3 Object recognition techniques

This work utilizes three distinct object recognition (object detection) techniques: template matching (TM), Haar cascades (HC), and Channel and Spatial Reliability Tracking.

The template matching algorithm needs only one image as a template to search for a similar pattern of the same size and orientation within a larger image. A squared-difference measure is used to locate the best match: the grayscale template (T) is compared with all possible positions in a grayscale image (I). For a candidate position in I whose upper-left corner pixel is at (x, y), Eq. (1) gives the sum R(x, y) of squared differences of pixel values, as computed by the TM_SQDIFF method of the OpenCV function matchTemplate() [25, 26].

R(x, y) = \sum_{x', y'} \left[ T(x', y') - I(x + x', y + y') \right]^2 \quad (1)

The pixel location (x, y) where the R(x, y) sum is the minimum represents the best match for the template location in the image [25, 26]. This technique is applied to locate the angle marker in Figure 4(a), aiding in determining the camera platform’s angle in relation to the marker. Additionally, template matching is used to locate the subject’s head in the right frame during detected walking path calculation in a gait speed test.
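As a minimal illustration of this minimum-of-SSD search with OpenCV, the sketch below locates a marker template in a frame and draws a rectangle around the best match. The image file names are placeholders, not files from this project.

```python
import cv2

# Placeholder file names: a captured frame and the marker template, both grayscale.
image = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("marker_template.png", cv2.IMREAD_GRAYSCALE)

# R(x, y) from Eq. (1): sum of squared differences at every candidate position.
result = cv2.matchTemplate(image, template, cv2.TM_SQDIFF)

# With TM_SQDIFF the best match is the global minimum of the result map.
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)
h, w = template.shape
top_left = min_loc
bottom_right = (top_left[0] + w, top_left[1] + h)

# Outline the detected marker, analogous to the red rectangle in Figure 4(a).
cv2.rectangle(image, top_left, bottom_right, 255, 2)
cv2.imwrite("frame_with_marker.png", image)
```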

The second technique, Haar cascades, enables searches for more general patterns, such as human faces. It employs machine learning to create Haar cascade XML classifier files, which contain the necessary training for classification. The cv2 module’s methods allow the identification of desired objects, such as faces, at different scales with the same orientation [27, 28]. Training involves using numerous known positive images with faces and known negative images without faces [28]. Haar features, which act as simple classifiers, are applied to the images; they involve summing the intensities of adjacent rectangular areas and taking their differences. Integral images simplify these calculations by enabling fast summations of pixel values in rectangles of any size [29]. To create more complex classifiers, a machine learning process called AdaBoost is employed, which applies weights to the simple classifiers and adjusts them until candidate image regions are accurately classified as face or non-face, considering different face sizes [27, 30]. These weighted classifiers are stored in the XML model and applied to new images to locate face positions. The term “cascades” refers to the multiple filtering passes that progressively eliminate large areas of the image and proceed with increased features and accuracy until the most probable face location is found [27].
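A minimal face-detection sketch using one of OpenCV’s pretrained Haar cascade XML files is shown below. The input file name and the detectMultiScale parameters are typical illustrative values, not necessarily those used in the CBMT scripts.

```python
import cv2

# Pretrained frontal-face classifier shipped with the opencv-python package.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

frame = cv2.imread("frame.png")                # placeholder input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Search for faces at multiple scales within the grayscale frame.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Draw a green box around each detected face.
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("frame_with_faces.png", frame)
```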

The third technique, Channel and Spatial Reliability Tracking, is a C++ implementation of the CSR-DCF (Channel and Spatial Reliability of Discriminative Correlation Filter) tracking algorithm within the OpenCV library [31]. The CSRT tracker trains the compressed features of Histogram of Oriented Gradients (HOG) and color names (colornames) using a correlation filter [32]. Compressed features in neural networks refer to the resulting dimensions after removing unnecessary features [33]. HOG represents the gradients of light intensity and their orientations, being higher at edges [34]. Colornames is a mapping from pixel values to color names, utilized to obtain features for color matching [35]. The CSRT tracking is initiated by selecting a region containing the subject’s face in a video frame. In subsequent frames, a search is conducted near the same region by identifying features of color patches with their edges from the object and its surrounding region. This targeted approach accelerates the process by avoiding a search of the entire frame [32].
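The sketch below shows the typical OpenCV usage pattern for CSRT: initialize the tracker on a bounding box in the first frame, then update it frame by frame. The video file name and the initial box are placeholders (in the CBMT system the box comes from Haar cascade face detection), and depending on the OpenCV build the constructor may instead live under cv2.legacy.TrackerCSRT_create.

```python
import cv2

cap = cv2.VideoCapture("walk.avi")        # placeholder video file
ok, frame = cap.read()

# Region containing the subject's face in the first frame (x, y, width, height).
# Hardcoded here for illustration; normally obtained from a face detector.
bbox = (300, 120, 60, 60)

tracker = cv2.TrackerCSRT_create()
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)   # search only near the previous location
    if found:
        x, y, w, h = (int(v) for v in bbox)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cap.release()
```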

3.4 SPPB tests

In our implementation, we have employed three algorithms, each utilizing different object recognition techniques, to measure all SPPB tests. The flowcharts corresponding to these algorithms are presented in Figure 5. With these three tests, three types of motion are being tracked: 1) walking motion (gait speed), 2) small motions in two directions (standing balance), and 3) up and down motion (5 times sit-stand).

3.4.1 Gait speed test

First stage: recording videos: the gait speed test is recorded using both left and right cameras. At the beginning of the recording stage, the subject faces the cameras and starts walking along a planned path. This movement is simultaneously recorded by the two cameras, generating two video files.

Second stage: processing video frames for gait speed and walking path calculation: frames from the two video files are processed. Initially, the left frame is upscaled by a factor of three, both horizontally and vertically, so that Haar cascades can accurately detect the face at distances of 6–7.2 m. When the face is found, the CSRT tracker is initialized with the face location, and Haar cascades are no longer needed. The CSRT tracker maintains the subject’s pixel location in subsequent left video frames, and no increased resolution is required for CSRT.
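This detect-then-track handoff can be sketched as follows, assuming a face cascade loaded as in the earlier example. The factor of three matches the text, while the helper name and detection parameters are illustrative.

```python
import cv2

def init_tracker_from_face(frame, face_cascade):
    """Detect a distant face on a 3x-upscaled copy of the frame, then start a
    CSRT tracker on the corresponding region of the original-resolution frame.
    Returns the tracker, or None if no face was found."""
    big = cv2.resize(frame, None, fx=3, fy=3, interpolation=cv2.INTER_LINEAR)
    gray = cv2.cvtColor(big, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    bbox = (x // 3, y // 3, w // 3, h // 3)   # map back to un-scaled coordinates
    tracker = cv2.TrackerCSRT_create()
    tracker.init(frame, bbox)
    return tracker
```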

For each frame pair from initialization to the end, the subject’s pixel location in the right frame is determined by taking a template from the left frame and finding its match in the right frame through template matching. The left and right cameras are angled in nonparallel directions toward a point 4 m away, keeping the subject better centered in each frame. Pixel adjustments are made to the subject’s pixel positions in the left and right frames to correct for the nonparallel camera setup.

The subject’s pixel locations from both cameras undergo filtering using customized median and moving average filters. These filters are applied minimally to remove noise and straighten the trajectory from any side-to-side swaying, resulting in a more accurate calculation of the distance moved.
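A simple stand-in for this light filtering stage is sketched below: a short median filter removes single-frame outliers and a short moving average flattens sway. The window sizes are illustrative defaults, not the values tuned in the CBMT code.

```python
import numpy as np

def smooth_track(xs, median_window=5, average_window=5):
    """Lightly denoise a 1-D pixel trajectory (one coordinate over frames)."""
    xs = np.asarray(xs, dtype=float)
    half = median_window // 2
    padded = np.pad(xs, half, mode="edge")
    # Median filter: suppress isolated tracking glitches.
    medianed = np.array([np.median(padded[i:i + median_window])
                         for i in range(len(xs))])
    # Moving average: straighten small side-to-side sway.
    kernel = np.ones(average_window) / average_window
    return np.convolve(medianed, kernel, mode="same")
```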

In previous work [21], the triangulation technique was employed to obtain polar coordinate positions from subject pixel position pairs (see Figure 5(a)). To reduce depth inaccuracies in the gait speed calculation, the system was calibrated using Eqs. (2) and (3), where cubic equations were fitted to data obtained from 64 locations on an 8-by-8 rectangular grid covering the experiment area.

Eq. (2) involves creating a function, depth_f, using the interp2d function from SciPy’s interpolate module for 2D interpolation, with lists X and Y representing rectangular coordinates for detected depths and angles, and R representing the correct depths corresponding to the other two lists.

depth_f = interp2d(X, Y, R, kind = 'cubic') \quad (2)

Eq. (3) calculates the correct depth, dep, for a single case using rectangular coordinates x and y for a detected depth and angle.

dep = depth_f(x, y) \quad (3)
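Assuming the calibration grid is stored as flattened coordinate arrays, a minimal sketch of this correction step with SciPy looks like the following. The grid values here are synthetic placeholders, and interp2d is deprecated in recent SciPy releases (newer code would use, e.g., RectBivariateSpline).

```python
import numpy as np
from scipy.interpolate import interp2d

# Synthetic stand-in for the 8-by-8 calibration grid covering the test area:
# detected horizontal coordinates, detected depths, and measured true depths.
gx = np.linspace(-2.0, 2.0, 8)             # detected x coordinates (m)
gy = np.linspace(2.0, 7.2, 8)              # detected depths (m)
X, Y = (a.ravel() for a in np.meshgrid(gx, gy))
R = Y + 0.1 * np.sin(X)                    # placeholder for the true measured depths

depth_f = interp2d(X, Y, R, kind="cubic")  # Eq. (2): fit a cubic surface

dep = depth_f(0.4, 5.1)                    # Eq. (3): corrected depth at one point
print(dep)
```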

3.4.2 Standing balance test

To assess standing balance, the center camera records the subject while they remain stationary. In the standing balance test the subject must hold three foot positions: side-by-side, semi-tandem, and tandem, each for a duration of 10 seconds without moving their feet or head [8]. There is an option to ignore head movements during the tests.

To prepare for the test, distances are measured on the floor from the camera: masking tape marks the feet’s location marker at 3.5 m and the standing position at 4 m. The cameras are directed toward the subject’s crotch. To center the subject in the view and increase the frame rate, pixels are automatically trimmed from the left and right sides of the frame. From the video recording, the developed code utilizes Haar cascades to detect the subject’s face. By using template matching, an angle marker is located to determine the relative positions of the left and right feet (see Figure 7(a)).

Figure 7.

Measurement from various tests. (a) Video footage displays three green tracking boxes: the head (outer green box) and feet (lower green boxes). Additionally, two yellow independent frames at the bottom are for the feet trackers, and a small green box surrounds a marker at the bottom helping determine the initial location of the feet. (b) A plot representing a successful stand lasting over 10 seconds during the standing balance test. (c) A plot showing a failing stand of 4.67 seconds because the left foot moved about 30 mm to the left. This is evident from the thick blue L foot x curve, which crosses the foot threshold at −25 mm (marked with a red circle). (d) During the gait speed test, the subject is walking toward the camera. Vertical green lines mark the maximum and minimum horizontal pixel values of the tracked head. Vertical orange lines indicate the start and finish pixel positions when the subject is in motion, making the start and finish times more distinct. (e) The 5 times sit-stand test is displayed with the sitting and standing data normalized from 0 to 1.

For the tandem position, the subject stands with feet slightly angled toward the side of the forward foot, ensuring both feet are visible to the camera. The analysis involves three CSRT tracking boxes for the head [36], left foot, and right foot, as shown in Figure 5(c). Each foot tracker operates independently within its designated yellow frame (Figure 7(a)) to avoid interference from the other foot. The software monitors movements of these body parts and triggers alerts if any of them exceed the distance thresholds set (feet: >25 mm, head: >35 mm, see Figure 7). These movements are represented by horizontal x-type and vertical y-type curve labels.

The results can be classified as Pass or Fail scores or given numerical scores within a range (e.g., 1–10). Success is achieved if the subject maintains the required posture for 10 or more seconds, and failure or partial success if the duration is less than 10 seconds. The average frame rate is 22 frames per second (fps), and the time resolution for standing balance tests is under 0.055 seconds.
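The pass/fail logic reduces to checking when any tracked displacement first exceeds its threshold. A hedged sketch of that check is given below; the function name, the data layout, and the assumption that displacements are already expressed in millimetres relative to the starting position are all illustrative rather than taken from the CBMT scripts.

```python
THRESHOLDS_MM = {"foot": 25, "head": 35}   # movement limits from the text

def balance_hold_time(times, displacements_mm):
    """Return how long (s) the subject held the pose before any tracked body
    part exceeded its displacement threshold. `displacements_mm` maps labels
    such as 'left_foot_x' or 'head_y' to per-frame offsets from the start."""
    for i, t in enumerate(times):
        for label, series in displacements_mm.items():
            limit = THRESHOLDS_MM["head" if label.startswith("head") else "foot"]
            if abs(series[i]) > limit:
                return t                   # movement detected at this instant
    return times[-1]                       # pose held for the whole recording

# A stand passes when the returned hold time is 10 s or more.
```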

3.4.3 5 times sit-stand test

The 5TSS test algorithm measures the elapsed time for the subject to perform 5 stand-sit cycles. Figure 8 displays prompts for the test, which can be viewed by both the user and the subject. Using Haar cascades, the software identifies the level of the top of the face and increments the count each time the subject completes a stand-sit cycle. The test operates at an average frame rate of 25 frames per second (fps), with a time resolution of under 0.050 seconds for 5TSS tests.
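Counting the five cycles amounts to watching the head height cross a fixed level, like the yellow line in Figure 8. The sketch below shows one way to do this on a per-frame series of head heights; the threshold value and data layout are illustrative, not taken from the CBMT scripts.

```python
def count_sit_stand_cycles(head_heights, threshold=0.5):
    """Count completed stand-sit cycles from the vertical position of the top
    of the head (e.g., normalized 0-1 as in Figure 7(e)). One cycle is a rise
    above the threshold followed by a fall back below it."""
    cycles = 0
    standing = False
    for h in head_heights:
        if not standing and h > threshold:
            standing = True               # the head rose above the line
        elif standing and h < threshold:
            standing = False              # dropped back below: one full cycle
            cycles += 1
    return cycles

# The timer starts at the first rise and stops once cycles reaches 5.
```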

Figure 8.

The 5 times sit-stand test. (a) The top of subject’s head box rises above the yellow line, initiating the timer. (b) The top of subject’s head box falls below the yellow line for the fifth time, leading to the timer’s stop at 9.436 s.


4. Experimental results, SPPB

4.1 Human subject population and testing environments

The experiments were conducted on two groups of healthy subjects without cancer: young and old adults. The CBMT platform (shown in Figure 4) was utilized to perform in vivo experiments on eight human volunteers. The first group comprised four individuals aged 60–95, consisting of three males and one female, all with light complexions. The second group included four individuals aged 22–42, comprising three males and one female, with three having light complexions and one with a dark complexion. All SPPB tests were conducted using the same CBMT system.

The CBMT system is powered by a battery power bank, ensuring continuous operation for up to 8 hours. It can be initialized and ready to execute tests in less than 45 seconds after being turned on. The simplicity of using the CBMT system is highlighted by the fact that the operator also served as the subject in some of the experiments.

4.2 Results, gait speed test

Figure 9 presents the distances walked and walking speeds observed during the gait speed test using the CBMT prototype. In Figure 9(a), the two vertical light blue lines indicate the start and end of timing for the walk, automatically generated by the software through angle analysis over time. Start and finish points are defined more distinctly by recording them with the subject in motion. Also see Figure 7(d) and its explanation in the caption. The smooth and steady curve in Figure 9(a) suggests that the subject had good balance and maintained a consistent pace without faltering or pausing. Minimal noise filtering was applied to the data.

Figure 9.

(a) Total distance walked with blue vertical lines marking the start and finish points; (b) walking speed data, where Ave Speed 1 is indicated by the red line and Ave Speed 2 is shown by orange dots; (c) unfiltered and uncorrected orange walking trajectory; and (d) final adjusted red walking trajectory.

During the test, the subject walked at a normal pace, taking approximately 4 seconds to cover about 4 meters. The corresponding instantaneous walking speed is depicted in Figure 9(b), showing a gradual increase in speed until the middle of the plot, followed by a decrease toward the end. The red horizontal line represents the average speed (Ave Speed 1), calculated by dividing the total distance walked by the total time.

Additionally, the orange horizontal dotted line corresponds to the average speed (Ave Speed 2), calculated by determining the total distance covered based on the green area under the curve. This distance is then divided by the total time, as shown in Eq. (4) where t denotes time, s represents speed, and the subscripts i and i + 1 denote indices of time and speed values.

\text{AveSpeed2} = \frac{\sum_{i=1}^{n-1} \left( t_{i+1} - t_i \right) \left( s_i + s_{i+1} \right)/2}{\text{total time}} \quad (4)
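Eq. (4) is simply the trapezoidal rule applied to the instantaneous speed samples, so a small sketch with NumPy reproduces it; the sample values below are made up for illustration.

```python
import numpy as np

def average_speed_trapezoid(times, speeds):
    """Ave Speed 2 from Eq. (4): trapezoidal integration of the instantaneous
    speed curve gives the distance walked, which is divided by elapsed time."""
    times = np.asarray(times, dtype=float)
    speeds = np.asarray(speeds, dtype=float)
    distance = np.trapz(speeds, times)          # area under the speed curve
    return distance / (times[-1] - times[0])

# Made-up samples: roughly 4 m covered in 4 s gives close to 1 m/s.
t = [0.0, 1.0, 2.0, 3.0, 4.0]
v = [0.6, 1.0, 1.2, 1.1, 0.7]
print(average_speed_trapezoid(t, v))            # about 1.0
```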

Furthermore, Figure 9(c) displays a polar plot of a walking crosswise trajectory before filtering out the left-and-right jitters and before applying 2D interpolation for depth correction. Figure 9(d) illustrates the trajectory after these adjustments.

4.3 Results, standing balance test

The results for the standing balance tests are presented in the graphs of Figures 7(b) and (c). These plots show the horizontal (x-type label) and vertical (y-type label) movements in millimeters for the head, left foot, and right foot of each subject. In Figure 7(b), the subject successfully maintains a steady stance for over 10 seconds. However, in Figure 7(c), the subject exhibits slight movement to the left, and thus, is credited with standing still for 4.67 seconds. To account for the small jitter detected even when standing still, a distance threshold is applied to determine whether the subject has moved or remained steady.

4.4 Results, 5 times sit-stand test

Figure 7(e) displays a normalized plot showing the height of the top of the subject’s face as a function of time during a 5TSS test. The subject performed the test smoothly, as is evident from the five even rises and falls of the regularly spaced curve. After each sit, there is a natural head-position adjustment caused by leaning back in the chair; a brown oval marks one of these adjustments in the lower part of the plot.

4.5 Human subject population results

The CBMT platform was utilized to gather data for the gait speed, standing balance, and 5TSS tests, performed in multiple runs to assess the system’s precision and consistency (Table 1 and Figure 10). Table 1 presents the measured results for all the tests. A total of 25 runs were conducted for gait speed walking, with 17 runs performed crosswise to the cameras and eight runs toward the cameras. The first column provides details on frame rates and pixel resolutions for each test. The second column displays the average gait speed ranges for all 25 runs, while the third column shows the gait speed resolutions, which were obtained by multiplying the absolute values of the decimal errors by the average speeds. The distances walked and time ranges for the gait speed tests are shown in the second-to-last column. However, there is no walking involved in the standing balance or 5TSS tests. The last data column gives the measured average distance accuracies (>95%) for the gait speed tests and time accuracies (>97%) for the other tests.

| Type of test | Frame rate (frames/s), resolution (pixels) | Average speed range (m/s) | Gait speed resolution (m/s) | Distance (m)/time range (s) | Average accuracy |
|---|---|---|---|---|---|
| Gait speed, Crosswise, 8 subjects, 17 runs, Figure 9 | 53; 640 × 96 × 2 frames = 122,880 | 0.191–1.259 | <0.127 | 2.832–3.836 / 2.75–20.06 | Distance accuracy: 95.28% |
| Gait speed, Radial, 1 subject, 8 runs, Figure 7(d) | 39; 640 × 144 × 2 frames = 184,320 | 0.136–1.371 | <0.067 | 3.866 / 2.82–28.35 | Distance accuracy: 97.47% |
| Standing balance (3 types), 8 subjects, 18 runs, Figures 7(a)–(c) | 22; 304 × 848 = 257,792 | N/A | N/A | 0.0 / 0.75–10.09 | Time accuracy: 97.25% |
| 5 Times sit-stand, 7 subjects, 11 runs, Figure 7(e) | 25; 400 × 480 = 192,000 | N/A | N/A | 0.0 / 6.93–16.94 | Time accuracy: 99.10% |

Table 1.

Results of 54 test runs with eight volunteers performed by CBMT system.

Figure 10.

(a) Confidence intervals of 95% for percent errors in distance, time, and speed for gait speed tests (two types), as well as the percentage errors in time for standing balance and 5 times sit-stand tests. As references for the tests, a stopwatch was used for time and floor measurements were used for distance. (b) Test data accuracies for distance, time, and speed in gait speed tests, and time accuracies for standing balance and 5 times sit-stand tests. Bar graph categories are the age groups: younger, older, and all.

In Figure 10(a), the 95% confidence intervals for errors in distance, time, and average speed for gait speed tests, as well as time errors for standing balance and 5 times sit-stand tests, are presented. In Figure 10(b), the accuracies of test data for both younger and older age groups, as well as overall, are depicted. The differences between age groups are relatively small.


5. Discussion

In this section, we compare the performance of the proposed CBMT technology, which enables gait speed, standing balance, and 5TSS tests, with various commercialized technologies and state-of-the-art published works. The results are summarized in Table 2, focusing on the accuracies or errors achieved by the different technologies for the three tests.

| Parameters | Gaitspeedometers, 4 types [37] | MetaWear CPro sensor [38] | Microsoft Kinect V2 [39] | Unsupervised Screening System [40] | APDM ML system (version 2) [41, 42] | This work (54 test runs) |
|---|---|---|---|---|---|---|
| Sensor type | Laser, infrared, ultrasonic | IMU | Camera, infrared | LB, aTUG, IMU, camera | IMU | Visible light cameras |
| Gait speed test | YES | YES | YES | NO | YES | YES |
| Distance Acc/Err (%) | | 97.57% & 89.3% Ave Acc | N/A | | | 95.98% Ave Acc |
| Gait speed Acc/Err (%) | 98.3%, 86.1%, 98.6%, 95.9% Ave Acc | 97.57% & 89.3% Ave Acc | N/A | | ICC = 0.928 | 95.84% Ave Acc |
| Time Acc/Err (%) | 98.3%, 86.1%, 98.6%, 95.9% Ave Acc | | 99.01% Ave Acc | | N/A | 98.59% Ave Acc |
| Coverage area (m, m²) | 4 m | ~38 m | 3 m | ~3.5 m | 6 m × 0.6 m | 2 to 7.2 m, 60° view |
| Standing balance test | NO | NO | NO | NO | YES | YES |
| Time (s) Acc/Err (%) | N/A | N/A | N/A | N/A | | 97.25% Ave Acc |
| 5 Times sit-stand test | NO | NO | NO | YES | NO | YES |
| Time (s) Acc/Err (%) | N/A | N/A | N/A | LB CC 0.73, IMU CC 0.87 | N/A | 99.10% Ave Acc |
| Data handling | Phone displays, microcontrollers | Bluetooth, phone | USB | RFID, cables | Wireless | Wi-Fi, cloud storage |
| Processor | Microcontrollers | Android phone, app | Computer, Kinect V2 | Computer | Laptop | Raspberry Pi |

Table 2.

Technologies used for gait speed, standing balance, and 5 times sit-stand tests.

Several technologies, such as MetaWear CPro sensor, Unsupervised Screening System, and APDM ML system/Opal sensors, employ Inertial Measurement Units (IMUs) or accelerometers. While some wearable accelerometer-based devices may provide relatively accurate displacement after calibration, they suffer from displacement error accumulation, which can impact overall accuracy. The accuracy of displacement measurement from IMU devices, based on acceleration and its conversion to gait speed, is complex and varies based on sensor placement, subjects’ ages, and IMU performance.

Other technologies like Gaitspeedometers utilize ultrasound. Time-of-flight sensors may use infrared, lasers, or ultrasound to measure subject depth. The Kinect device can detect body postures and utilizes both infrared and visible light. However, devices with infrared cameras can be expensive, and setups involving lasers or ultrasound for light barriers are less mobile.

In Table 2, accuracy is calculated using the formula in Eq. (5) [38]:

\text{Accuracy} = 100\% - \frac{\left| \text{True Value} - \text{Measured Value} \right|}{\text{True Value}} \times 100\% \quad (5)
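As a quick worked example of Eq. (5), assuming the error term is taken as an absolute value, a tiny helper and one sample computation:

```python
def accuracy_percent(true_value, measured_value):
    """Eq. (5): 100% minus the absolute percent error of the measurement."""
    return 100.0 - abs(true_value - measured_value) / true_value * 100.0

# Example: a true 4.00 m walk measured as 3.88 m gives 97.0% accuracy.
print(accuracy_percent(4.00, 3.88))
```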

Tech refers to Technology. IMU stands for Inertial Measurement Unit. APDM ML refers to Ambulatory Parkinson’s Disease Monitoring MobilityLab/Opal sensors. LB represents Light Barrier Sensor. aTUG stands for Ambient Timed “Up & Go” chair, which measures TUG automatically using an infrared Light Barrier, four Force Sensors (on chair), and a Laser Range-Scanner. Acc/Err denotes Accuracy/Error. ICC refers to Intra-class Correlation Coefficients, where the GAITRite pressure sensor walkway serves as the reference system. Ave Acc stands for Average Accuracy. Coverage area also includes distance walked on a treadmill. LB CC refers to Light Barrier Sensor Correlation Coefficient (between it and a stopwatch). IMU CC denotes Inertial Measurement Unit Correlation Coefficient (between it and a stopwatch). RFID stands for Radio-frequency Identification.

Notably, our proposed CBMT system is the only one that covers all three SPPB tests. It offers results in real-time or after short processing through an intuitive graphical user interface and allows easy data uploading to the cloud with a simple click. The system automatically generates data, plots, and videos, which can be effortlessly removed when no longer needed.


6. Conclusion

We have successfully designed and implemented a camera-based system capable of evaluating all three SPPB tests for the physical assessment of cancer patients, including gait speed, standing balance, and 5TSS tests. Our CBMT software comprises codes and algorithms with GUIs that process images captured by the three cameras to assess SPPB. Additionally, we have incorporated remote control functionality via smartphones, enabling easy access to GUIs and automatic data collection for evaluating a patient’s physical status. The system efficiently stores data on the Raspberry Pi and allows for cloud uploading.

Through in vivo experiments involving a human subject population of eight volunteers, including four older individuals aged 60–95 (both male and female), we have obtained promising results. For gait speed measurement tests, we achieved an average accuracy of over 95% for distance traveled and average gait speed. Furthermore, the average time accuracies for standing balance and 5TSS tests exceeded 97%. These outcomes demonstrate the system’s effectiveness in providing reliable and precise evaluations of physical performance. Such data can play a vital role in personalizing cancer treatment strategies, particularly for older adults.


Notes

This chapter is based on and continues two previously published works: “Camera-Based Short Physical Performance Battery and Timed Up and Go Assessment for Older Adults with Cancer” [20] and “Camera-Based Human Gait Speed Monitoring and Tracking for Performance Assessment of Elderly Patients with Cancer” [21].

References

  1. Smith BD, Smith GL, Hurria A, Hortobagyi GN, Buchholz TA. Future of cancer incidence in the United States: Burdens upon an aging, changing nation. Journal of Clinical Oncology. 2009;27(17):2758-2765
  2. Miller KD, Siegel RL, Lin CC, Mariotto AB, Kramer JL, Rowland JH, et al. Cancer treatment and survivorship statistics, 2016. CA: A Cancer Journal for Clinicians. 2016;66(4):271-289
  3. Wu HS, Harden JK. Symptom burden and quality of life in survivorship: A review of the literature. Cancer Nursing. 2015;38(1):E29-E54
  4. Neo J, Fettes L, Gao W, Higginson IJ, Maddocks M. Disability in activities of daily living among adults with cancer: A systematic review and meta-analysis. Cancer Treatment Reviews. 2017;61:94-106
  5. Pergolotti M, Deal AM, Williams GR, Bryant AL, McCarthy L, Nyrop KA, et al. Older adults with cancer: A randomized controlled trial of occupational and physical therapy. Journal of the American Geriatrics Society. 2019;67(5):953-960
  6. Mohile SG, Dale W, Somerfield MR, Schonberg MA, Boyd CM, Burhenn PS, et al. Practical assessment and management of vulnerabilities in older patients receiving chemotherapy: ASCO guideline for geriatric oncology. Journal of Clinical Oncology. 2018;36(22):2326-2347
  7. Giri S, Al-Obaidi M, Harmon C, Dai C, Smith CY, Gbolahan OB, et al. Impact of geriatric assessment (GA) based frailty index (CARE-Frailty Index) on mortality and treatment-related toxicity among older adults with gastrointestinal (GI) malignancies. Journal of Clinical Oncology. 2021;39(15_suppl):12046
  8. Marsh AP, Wrights AP, Haakonssen EH, Dobrosielski MA, Chmelo EA, Barnard RT, et al. The virtual short physical performance battery. The Journals of Gerontology. Series A, Biological Sciences and Medical Sciences. 2015;70(10):1233-1241
  9. Dale W, Williams GR, MacKenzie AR, Soto-Perez-de-Celis E, Maggiore RJ, Merrill JK, et al. How is geriatric assessment used in clinical practice for older adults with cancer? A survey of cancer providers by the American Society of Clinical Oncology. JCO Oncology Practice. 2021;17(6):336-344
  10. Ezzatvar Y, Ramírez-Vélez R, Sáez de Asteasu ML, Martínez-Velilla N, Zambom-Ferraresi F, Izquierdo M, et al. Physical function and all-cause mortality in older adults diagnosed with cancer: A systematic review and meta-analysis. The Journals of Gerontology. Series A, Biological Sciences and Medical Sciences. 2021;76(8):1447-1453
  11. Seeed Technology Co., Ltd. Raspberry Pi Camera Module V2 [Internet]. Seeed Studio; 2022. Available from: https://www.seeedstudio.com/Raspberry-Pi-Camera-Module-V2.html
  12. Wikipedia. Laser [Internet]. Wikipedia, the free encyclopedia; 2021. Available from: https://en.wikipedia.org/wiki/Laser
  13. Wikipedia. Kinect [Internet]. Wikipedia, the free encyclopedia; 2021. Available from: https://en.wikipedia.org/wiki/Kinect
  14. Kong K, Tomizuka M. A gait monitoring system based on air pressure sensors embedded in a shoe. IEEE/ASME Transactions on Mechatronics. 2009;14(3):358-370
  15. SainSmart. Ultrasonic Ranging Detector Mod HC-SR04 Distance Sensor [Internet]. SainSmart.com; 2023. Available from: https://www.sainsmart.com/products/ultrasonic-ranging-detector-mod-hc-sr04-distance-sensor
  16. SparkFun Electronics. SparkFun 9DoF IMU Breakout-LSM9DS1-SEN-13284-SparkFun Electronics [Internet]. SparkFun Electronics. Available from: https://www.sparkfun.com/products/retired/13284 [Accessed: June 14, 2023]
  17. Ostadabbas S, Sebkhi N, Zhang M, Rahim S, Anderson LJ, Lee FEH, et al. A vision-based respiration monitoring system for passive airway resistance estimation. IEEE Transactions on Biomedical Engineering. 2016;63(9):1904-1913
  18. Wang Z, Mirbozorgi SA, Ghovanloo M. An automated behavior analysis system for freely moving rodents using depth image. Medical & Biological Engineering & Computing. 2018;56(10):1807-1821
  19. Aloudat M, Faezipour M, El-Sayed A. Automated vision-based high intraocular pressure detection using frontal eye images. IEEE Journal of Translational Engineering in Health and Medicine. 2019;7:3800113
  20. Duncan L, Zhu S, Pergolotti M, Giri S, Salsabili H, Faezipour M, et al. Camera-based short physical performance battery and timed up and go assessment for older adults with cancer. IEEE Transactions on Biomedical Engineering. 2023;70(9):2529-2539
  21. Duncan L, Gulati P, Giri S, Ostadabbas S, Abdollah Mirbozorgi S. Camera-based human gait speed monitoring and tracking for performance assessment of elderly patients with cancer. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2021. 2021. pp. 3522-3525
  22. Jarrod. Project: RaspberryPi Google Drive Sync [Internet]. Jarrod’s Tech; 2018. Available from: https://jarrodstech.net/project-raspberrypi-google-drive-sync/ [Accessed: June 14, 2023]
  23. Srinivasulu B, Kuamari DS, Srinivas G, Rao GJ. VNC viewer authentication and security using remote frame buffer protocol. International Journal of Engineering Research and Applications. 2013;3(6):1007-1011
  24. Duncan LD. Raspberry Pi Camera Tracking [Internet]. Zenodo; 2023. Available from: https://zenodo.org/record/7644881 [Accessed: June 12, 2023]
  25. OpenCV. Template Matching [Internet]. OpenCV, Open Source Computer Vision; 2019. Available from: https://docs.opencv.org/3.4.6/de/da9/tutorial_template_matching.html
  26. Basulto-Lantsova A, Padilla-Medina JA, Perez-Pinal FJ, Barranco-Gutierrez AI. Performance comparative of OpenCV template matching method on Jetson TX2 and Jetson Nano developer kits. In: 2020 10th Annual Computing and Communication Workshop and Conference (CCWC) [Internet]. Las Vegas, NV, USA: IEEE; 2020. pp. 0812-0816. Available from: https://ieeexplore.ieee.org/document/9031166/ [Accessed: June 12, 2023]
  27. Viola P, Jones M. Rapid object detection using a boosted cascade of simple features. In: Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition CVPR 2001 [Internet]. Kauai, HI, USA: IEEE Computer Society; 2001. pp. I-511-I-518. Available from: http://ieeexplore.ieee.org/document/990517/ [Accessed: June 12, 2023]
  28. OpenCV. Face Detection Using Haar Cascades [Internet]. OpenCV; 2019. Available from: https://docs.opencv.org/3.4.6/d7/d8b/tutorial_py_face_detection.html#gsc.tab=0
  29. Rahmad C, Asmara RA, Putra DRH, Dharma I, Darmono H, Muhiqqin I. Comparison of Viola-Jones Haar cascade classifier and histogram of oriented gradients (HOG) for face detection. IOP Conference Series: Materials Science and Engineering. 2020;732(1):012038
  30. Wang R. AdaBoost for feature selection, classification and its relation with SVM, a review. Physics Procedia. 2012;25:800-807
  31. Farkhodov K, Lee SH, Kwon KR. Object tracking using CSRT tracker and RCNN. In: Proceedings of the 13th International Joint Conference on Biomedical Engineering Systems and Technologies [Internet]. Valletta, Malta: SCITEPRESS-Science and Technology Publications; 2020. pp. 209-212. Available from: http://www.scitepress.org/DigitalLibrary/Link.aspx?doi=10.5220/0009183802090212 [Accessed: June 12, 2023]
  32. Lukežič A, Vojíř T, Čehovin Zajc L, Matas J, Kristan M. Discriminative correlation filter tracker with channel and spatial reliability. International Journal of Computer Vision. 2018;126(7):671-688
  33. Taylor P, Griffiths N, Hall V, Xu Z, Mouzakitis A. Feature selection for supervised learning and compression. Applied Artificial Intelligence. 2022;36(1):2034293
  34. Dalal N, Triggs B. Histograms of oriented gradients for human detection. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05) [Internet]. San Diego, CA, USA: IEEE; 2005. pp. 886-893. Available from: http://ieeexplore.ieee.org/document/1467360/ [Accessed: June 12, 2023]
  35. Van De Weijer J, Khan FS. An overview of color name applications in computer vision. In: Trémeau A, Schettini R, Tominaga S, editors. Computational Color Imaging [Internet]. Cham: Springer International Publishing; 2015. pp. 16-22. (Lecture Notes in Computer Science; vol. 9016). Available from: http://link.springer.com/10.1007/978-3-319-15979-9_2 [Accessed: June 12, 2023]
  36. Al-Rahayfeh A, Faezipour M. Eye tracking and head movement detection: A state-of-art survey. IEEE Journal of Translational Engineering in Health and Medicine. 2013;1:2100212
  37. Jung HW, Roh HC, Kim SW, Kim S, Kim M, Won CW. Cross-comparisons of gait speeds by automatic sensors and a stopwatch to provide converting formula between measuring modalities. Annals of Geriatric Medicine and Research. 2019;23(2):71-76
  38. Anwary AR, Yu H, Vassallo M. An automatic gait feature extraction method for identifying gait asymmetry using wearable sensors. Sensors (Basel). 2018;18(2):676
  39. Dubois A, Bihl T, Bresciani JP. Automating the timed up and go test using a depth camera. Sensors. 2017;18(2):14
  40. Fudickar S, Hellmers S, Lau S, Diekmann R, Bauer JM, Hein A. Measurement system for unsupervised standardized assessment of timed “up & go” and five times sit to stand test in the community-A validity study. Sensors (Basel). 2020;20(10):2824
  41. Morris R, Stuart S, McBarron G, Fino PC, Mancini M, Curtze C. Validity of mobility lab (version 2) for gait assessment in young adults, older adults and Parkinson’s disease. Physiological Measurement. 2019;40(9):095003
  42. Fino PC, Horak FB, El-Gohary M, Guidarelli C, Medysky ME, Nagle SJ, et al. Postural sway, falls, and self-reported neuropathy in aging female cancer survivors. Gait & Posture. 2019;69:136-142
