Open access peer-reviewed chapter

Video Surveillance-Based Intelligent Traffic Management in Smart Cities

Written By

Fozia Mehboob, Muhammad Abbas, Abdul Rauf, Shoab A. Khan and Richard Jiang

Submitted: 28 October 2017 Reviewed: 12 March 2018 Published: 13 March 2019

DOI: 10.5772/intechopen.76386

From the Edited Volume

Intelligent Video Surveillance

Edited by António J. R. Neves


Abstract

Visualization of video is an important part of visual analytics. Massive video content raises several challenges that can be addressed through data analytics, which is consequently gaining significance. The rapid progression of digital technologies has caused an explosion of video data, creating a need to generate visualization and computer graphics from videos. In this chapter, a state-of-the-art algorithm is proposed for the 3D conversion of traffic video content and its display on Google Maps. Time-stamped, glyph-based visualization is employed in surveillance videos and used for event detection. This visualization method can reduce data complexity while providing a complete view of a video collection. The effectiveness of the proposed system is demonstrated on numerous unprocessed videos, and the algorithm is tested on these videos without regard to field conditions. The proposed visualization technique produces promising results and proves effective in conveying meaningful information while alleviating the need to exhaustively search colossal amounts of video data.

Keywords

  • video visualization
  • traffic surveillance
  • smart cities
  • glyph-based visualization
  • Google Maps

1. Video visualization in smart cities

The growing number of surveillance cameras in public places has led to an increase in automated analysis of video content, of which traffic video surveillance [43] is one application. These automated systems identify a number of traffic rule violations. Video features at the object, pixel, and semantic levels are extracted for analysis [53, 56, 59, 60]. The basic purposes of surveillance video-based systems are vehicle tracking, analyzing vehicle patterns and behaviors, abnormal event prediction, and detecting anomalies before they occur. This research aims to develop a glyph-based system for real-time video visualization covering a comprehensive set of traffic videos along the full length of highways.

Intelligent monitoring has progressed rapidly over the last 10 years and is intended to provide situational awareness and semantic information for understanding environmental activity [14, 69]. Video visualization (VV) denotes the joint process of video analysis and the subsequent derivation of a representative presentation of the essence of the visual content [2, 4, 19, 34, 45, 54, 57, 68]. VV is gaining attention because it addresses the data-analysis challenges arising from video camera content [1, 15, 16]. Over the past decade, researchers have effectively demonstrated the usefulness of VV for traffic surveillance [17, 18] applications [3, 75, 76].

VV offers a spatio-temporal summary and overview of a large collection of videos, and its abstract representation of meaningful information assists users in understanding video content [3, 35]. Conversely, conventional visual representation techniques [67], such as time-series plots, have difficulty conveying impressions of a large video collection [3].

In addition, there is a need to present the visual content of videos in compact form so that the user can quickly navigate through different segments of a video sequence, locate segments of interest, and zoom in to different levels of detail [1]. Viewing videos is a time-consuming process; consequently, it is desirable to develop methods for highlighting and extracting interesting features in videos. Numerous techniques have been designed for data analysis in images, along with a variety of statistical indicators for data processing. On the contrary, there is a lack of effective techniques for conveying complex statistical information intuitively to a layperson, such as a security officer, apart from using line graphs to portray 1D signal levels [1]. Many researchers have studied video processing in the context of video surveillance [16], vehicle monitoring, and crowd monitoring. However, the main problem in automatic video processing is communicating its results to a human operator, since statistical results are not easily comprehensible, whereas sequences of difference images again require sequential viewing [1].

Conventional video surveillance systems rely heavily on human operators to monitor activity and determine the actions to be taken when an incident occurs. Several actionable incidents go undetected in such a manual system due to the inherent limitations of deploying human operators to eyeball CCTV screens [58]. Hence, automatic VV [56] can prove very beneficial for improved traffic management. Missed detections may be caused by the excessive number of video screens to monitor, as shown in Figure 1, and by tiredness from prolonged monitoring. In fact, numerous studies have shown the limits of human-dependent surveillance. In a study conducted by the United States Sandia National Laboratories, most people's attention fell below an adequate level after only 20 minutes of monitoring video surveillance screens [67]. The video content analysis paradigm is shifting from a fully human-operated model to an intelligent, machine-assisted automated model [58].

Figure 1.

Video wall.


2. State of the art

In the field of visualization, Borgo et al. [51] carried out a comprehensive survey of video visualization. Daniel and Chen [1] demonstrated the effectiveness of VV for conveying the meaningful information contained in video sequences. Andrienko and Andrienko [47] illustrated a visual analytics technique for visualizing huge amounts of video data; the data were clustered, aggregated, and displayed on a map using colored arrows. Wang et al. [48] presented a situational understanding approach that combines video frames with a 3D environment. Romero et al. [49] used a visualization approach to analyze human behavior and explored activity visualization in normal settings over time.

Hoummady [40] surveyed the shortcomings of the sensory devices used for real-time collection of traffic information and also proposed the use of video cameras for data collection in traffic management. This approach relies on a computational device mainly for recognizing pedestrians, vehicles, two-wheel vehicles, etc.

For traffic visualization, a commonly employed approach is to color the areas representing roads on the map [44]. Ang et al. [46] presented an analytical approach for traffic management from multiple cameras, in which vehicle trajectories were estimated and features extracted. Subsequently, Jiang et al. [62] demonstrated an analytical technique for visualizing huge video data; the data were clustered, aggregated, and displayed on a map using colored arrows. Afterward, Botchen et al. [53] proposed a technique for flow and volume signature visualization and discovered that ordinary viewers can recognize events on the basis of event signatures rather than viewing the entire video content.

End users and technology providers recognize that a manual process is inadequate for comprehensively and promptly searching massive amounts of video content. To lessen these visualization issues, we project camera activity onto Google Maps to obtain a summarized, holistic view of video content. Massive video data render manual analysis ineffectual; moreover, current automatic video analytics techniques still suffer from limited performance.

A state-of-the-art visualization technique for surveillance videos is presented and tested on several traffic videos. It derives suitable visual representations to assist the decision-making process. One can perceive the level and pattern of recorded activities from the visualization, as it offers more spatial information than statistical indicators. Semantic information obtained from numerous surveillance videos is connected to Google Maps in order to perform 3D association. At the same time, glyphs [5, 20] are familiar and convey multi-field video visualization [10]. A well-developed glyph-based visualization approach is proposed that enables efficient and effective information encoding and visual communication.


3. Glyph-based semantic information visualization

The proposed approach aims to visualize the semantic information of traffic videos using time-stamped glyphs. Input video frames are processed continuously to detect changes in visual information. The approach consists of several steps for estimating traffic flow.

3.1. Preprocessing

The first step involves segmenting objects from the surveillance video by thresholding, converting each grayscale frame to a binary image. Parts of the road are thinned out, and holes in the video frames are filled using morphological operations, as shown in Figure 2.

Figure 2.

Vehicle segmentation from video space.
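The chapter gives no code for this step; the following is a minimal sketch of the thresholding and morphological cleanup described above, assuming OpenCV as the library, with an illustrative threshold value and kernel size:

```python
import cv2

def segment_frame(frame_bgr, thresh=60, kernel_size=5):
    """Sketch of the preprocessing step: grayscale -> binary threshold,
    then morphological opening/closing to thin out road fragments and
    fill holes in the blobs (parameter values are illustrative)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (kernel_size, kernel_size))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # thin out
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)    # fill holes
```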

Object segmentation is a vital preprocessing step in image understanding. Its purpose is to divide the image into regions of interest; objects are identified in each video frame using a region growing method. Segmentation produces a binary image containing connected components that represent the individual objects, and connected component analysis is performed to distinguish between them. Features are then extracted to track the moving objects across successive frames. The image is scanned pixel by pixel, and the gray value of the central pixel is compared with those of its top and left neighbors. A surface or region grows until all connected pixels have been found. The value of each pixel is compared with its 3 × 3 neighborhood. If there is a break in the connected pixels and the gap is greater than a threshold value, the algorithm assigns the pixel to a new region. The threshold is user defined, chosen on the basis of the distance between pixels. All pixels that are part of an object are set to 1, and all others are set to zero. In the region growing method, a 3 × 3 window finds all neighboring pixels with value 1 and keeps growing the region until a pixel with value zero is found. The algorithm keeps measuring gaps; if a gap exceeds the threshold, it classifies the pixel as the start of a new region, and isolated pixels are marked as outliers. The proposed system is robust to problems such as occlusion and illumination variation encountered in surveillance videos. On a sunny day, the moving shadows of vehicles can produce false alarms, but the proposed system estimates the vehicle size, predicts the shadow size accordingly, and removes the extracted shadow. The system has been tested on several surveillance videos covering different scenarios, such as different weather conditions and densities; the data set contains a diverse set of scenarios in terms of traffic density and violations.
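A minimal sketch of the region growing just described is given below; the neighborhood size, gap threshold, and outlier marking follow the text, but the implementation details are our own assumptions rather than the authors' code:

```python
from collections import deque
import numpy as np

def region_grow(binary, gap_thresh=1):
    """Label foreground regions by region growing. gap_thresh = 1 gives the
    3x3 window described in the text; larger values bridge gaps of up to
    gap_thresh pixels, while bigger gaps start a new region. Isolated
    pixels are marked -1 as outliers. Illustrative sketch only."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    current = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] and labels[y, x] == 0:
                current += 1
                labels[y, x] = current
                queue, size = deque([(y, x)]), 0
                while queue:
                    cy, cx = queue.popleft()
                    size += 1
                    # Grow through the (2*gap_thresh + 1)^2 neighborhood.
                    for dy in range(-gap_thresh, gap_thresh + 1):
                        for dx in range(-gap_thresh, gap_thresh + 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and binary[ny, nx] and labels[ny, nx] == 0):
                                labels[ny, nx] = current
                                queue.append((ny, nx))
                if size == 1:
                    labels[y, x] = -1          # isolated pixel -> outlier
    return labels
```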

Traffic flow is assessed on each video frame by counting the number of vehicles in every frame. For every vehicle, the mean speed is computed, and the flow rate is found by dividing the total number of vehicles by time. A top-level flow diagram of the proposed approach is depicted in Figure 3.

Figure 3.

Top level diagram of proposed approach.

Figure 3 shows the flow of the proposed approach. Object tracking [7, 8] collects temporal and spatial information about each object of interest from the video sequence. Semantic information, such as the trajectories of detected objects, is acquired by motion tracking and passed as input to the mapping and 3D computation stage, with the outcomes displayed on Google Maps. As Google space and video space use different coordinates, a 3D mapping is performed between the two spaces. A time-based glyph is created to represent semantic information in both Google space and video space.

The layout of the table storing vehicle coordinates is shown in Figure 4. Blobs detected within a frame signify the number of vehicles; Figure 4 illustrates a case where a single vehicle exists in the current video frame. An array is defined for storing vehicle coordinates. The first two columns of the array hold the y and x coordinates of the vehicle in the first frame, while the following y and x columns hold its coordinates in the next frame. The fifth column records the number of frames during which the vehicle is visible in the field of view. A vanishing flag in the last column indicates the vehicle's status, for example, its departure: the flag remains zero while the vehicle is in the field of view and turns to 1 when the vehicle disappears. The last column is significant because the reshuffling of values in the array depends on this flag.

Figure 4.

Vehicle tracking information.
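As a concrete illustration of this array layout, the column order below follows the description of Figure 4; the helper function is hypothetical:

```python
import numpy as np

# One row per tracked vehicle, mirroring the array of Figure 4:
# [y_prev, x_prev, y_curr, x_curr, frames_in_view, vanish_flag]
table = np.array([[120.0, 64.0, 118.0, 70.0, 1.0, 0.0]])

def update_row(table, i, y_new, x_new, visible=True):
    """Shift the newest coordinates into the 'previous' slots, store the
    fresh detection, bump the frame counter, and raise the vanish flag
    once the vehicle leaves the field of view (hypothetical helper)."""
    table[i, 0:2] = table[i, 2:4]
    table[i, 2:4] = (y_new, x_new)
    table[i, 4] += 1
    table[i, 5] = 0.0 if visible else 1.0
    return table
```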

Vehicle trajectories are of different lengths even when vehicles travel on the same route, since they travel at different mean speeds [8, 12]. Motion vectors [77] are used to represent this information, as motion information has a strong relationship with semantic occurrences; different events are identified by analyzing motion features. A path demonstrates vehicle movement, and the dynamical measurements along it constitute the raw vehicle trajectory. A common trajectory depiction is the flow sequence, for example,

$$F_T = \{f_1, f_2, \ldots, f_T\} \tag{E1}$$

where the flow vectors

$$f_t = [x_t, y_t, v_{x_t}, v_{y_t}, a_{x_t}, a_{y_t}]^T \tag{E2}$$

represent the object position [x, y], velocity [vx, vy], and direction [ax, ay] at time t, extracted by tracking the object.
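A short sketch of how the flow sequence of Eqs. (E1) and (E2) could be assembled from tracked positions follows; estimating the velocity and direction terms by finite differences is our assumption, since the chapter does not specify the estimator:

```python
import numpy as np

def flow_sequence(xy, dt=1.0):
    """Build the flow sequence F_T of Eqs. (E1)-(E2) from tracked positions.
    Velocity and direction terms are estimated by finite differences,
    one plausible choice that the chapter leaves open."""
    xy = np.asarray(xy, dtype=float)           # shape (T, 2): [x_t, y_t]
    v = np.gradient(xy, dt, axis=0)            # [v_x_t, v_y_t]
    a = np.gradient(v, dt, axis=0)             # [a_x_t, a_y_t]
    return np.hstack([xy, v, a])               # shape (T, 6): each row is f_t
```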

3.2. Bezier fitting for glyph generation

The Bezier curve is widely employed for modeling and smoothing chaotic vehicle trajectories. A Bezier curve is defined by control points, which have a geometric modeling interpretation and can model trajectory inconsistency [61]. The curve is confined within its control points, which are shown graphically and can be used to manipulate the curve. Given two points P0 and P1, the Bezier curve reduces to the straight line between them:

$$B(t) = P_0 + t(P_1 - P_0) = (1-t)P_0 + tP_1, \qquad 0 \le t \le 1 \tag{E3}$$

which is equivalent to linear interpolation. The Bezier curve is used to smooth the chaotic vehicle trajectories obtained by motion tracking. As each car moves at a different speed, the lengths of the trajectories vary.
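The sketch below smooths a tracked trajectory with a Bezier curve via de Casteljau's algorithm, which reduces to the linear interpolation of Eq. (E3) for two control points; treating the raw trajectory points directly as control points is an illustrative simplification, not necessarily the authors' fitting procedure:

```python
import numpy as np

def bezier_point(control_pts, t):
    """Evaluate a Bezier curve at t in [0, 1] with de Casteljau's algorithm;
    for two control points this is exactly Eq. (E3)."""
    pts = np.asarray(control_pts, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]   # repeated interpolation
    return pts[0]

def smooth_trajectory(raw_xy, samples=50):
    """Smooth a chaotic tracked trajectory by treating its points as Bezier
    control points and resampling the curve (illustrative sketch)."""
    ts = np.linspace(0.0, 1.0, samples)
    return np.array([bezier_point(raw_xy, t) for t in ts])
```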

Figure 5(c) depicts a video taken in the area around Northumbria University, with a frame rate of 30 fps and a resolution of 1920 × 1080; the clip consists of 25 frames. Figure 5(a) illustrates the chaotic trajectories of different vehicles, which are smoothed using the Bezier curve to visualize the traffic pattern, as shown in Figure 5(b). Time-stamped semantic information is represented using glyphs: the vehicle trajectory is tracked over time, and the semantic information is delivered as presented in Figure 5. A red outer circle of the glyph denotes that the vehicle changed lane even though it was a small vehicle; if a small vehicle does not change lane within the field of view, the outer circle of the glyph is green.

Figure 5.

(a) Chaotic vehicle trajectory, (b) smooth vehicle trajectory, and (c) time stamped glyph.

3.3. Motion tracking and semantic event display

The significance of motion tracking [9, 39, 64, 66, 71, 73] in surveillance videos is unquestionable, as it is valuable in countless applications. Semantic analysis [62, 63] of video is used to extract vital information from the video [38, 41], particularly the vehicle type, speed, lane changes, and trajectory. This semantic information is extracted automatically to support indexing, high-level description, retrieval, and search of video content. Vehicle tracking comprises the velocity, appearance maintenance, and position of each detected object over time. Vehicles are detected by linking each object to the most similar object in consecutive video frames.

Flow vectors constitute the common trajectory representation that is the basis of further analysis. Figure 6(a) shows chaotic vehicle trajectories taken from different surveillance videos; every trajectory is obtained by individually tracking a detected vehicle. Figure 6(b) displays the smoothed curves acquired by applying the Bezier curve to the chaotic trajectories.

Figure 6.

Proposed approach vehicle tracking.


4. 3D conversion and perspective view from video space to Google Maps

Capturing real-time information is considered the main challenge in dynamic VV [39]. Recovering 3D information from surveillance video is essential to acquire significant information from the videos; since each video frame is a projection of 3D space, abstracting vital information is a difficult task. In the proposed approach, the 3D transformation from surveillance video onto Google Maps is performed using a homographic transformation, in which a projective transformation maps points from one plane to another. Estimating the homography between the video image space and the map space requires four point correspondences [42]. Image calibration is obtained through the transformation H, which maps image pixels on the ground plane to the latitude and longitude coordinates of the map.

The location of the vehicle in each video frame is marked by a plus symbol in Figure 7 and is computed via the homography matrix relating map and video space coordinates. Under perspective projection, locations or points in the two spaces are alike but not equivalent because of global scale ambiguity. The homography [6, 11, 45, 72, 79] in camera-view geometry admits a particular interpretation H = KE, where E is the Euclidean transformation matrix defining the camera pose and K is the camera perspective matrix, known as the intrinsic parameters. A pair of corresponding points, p = (x1, y1, z1)^T and u = (x2, y2, z2)^T, is related by the homography H:

$$\begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix} \simeq \begin{pmatrix} h_{11} & h_{12} & h_{13} & h_{14} \\ h_{21} & h_{22} & h_{23} & h_{24} \\ h_{31} & h_{32} & h_{33} & h_{34} \end{pmatrix} \begin{pmatrix} x_1 \\ y_1 \\ z_1 \\ 1 \end{pmatrix} \tag{E4}$$

Figure 7.

Homographic computation and perspective view of video and Google Maps.

Thus, each correspondence p ↔ u yields two linear equations in the unknowns h = (h11, h12, …, h34)^T. With multiple correspondences, the resulting pairs of linear constraints are stacked to obtain the coefficient matrix A. The least-squares solution h is acquired by solving

$$A^T A \, h = 0 \tag{E5}$$

The solution h is the eigenvector corresponding to the smallest eigenvalue of A^T A. After computing H, the corner points of the video are projected onto the corresponding points on Google Maps. The position of each pixel in image space is estimated on the map using the H matrix, and the resulting longitude and latitude coordinates are stored. The inverse of H is also computed to map the map-space coordinates back onto the video.
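The least-squares estimation of H described above is the standard direct linear transform; a sketch for the planar 3 × 3 case, consistent with the four-point correspondence mentioned earlier, follows (the 3 × 4 form of Eq. (E4) would be handled analogously with 12 unknowns):

```python
import numpy as np

def homography_dlt(src_pts, dst_pts):
    """Direct linear transform: each correspondence contributes two linear
    equations in the entries of H; per Eq. (E5), the least-squares solution
    is the eigenvector of A^T A with the smallest eigenvalue. Four or more
    correspondences are required for the planar 3x3 case sketched here."""
    A = []
    for (x1, y1), (x2, y2) in zip(src_pts, dst_pts):
        A.append([-x1, -y1, -1, 0, 0, 0, x2 * x1, x2 * y1, x2])
        A.append([0, 0, 0, -x1, -y1, -1, y2 * x1, y2 * y1, y2])
    A = np.asarray(A)
    eigvals, eigvecs = np.linalg.eigh(A.T @ A)   # ascending eigenvalues
    h = eigvecs[:, 0]                            # smallest-eigenvalue vector
    return (h / h[-1]).reshape(3, 3)             # normalize so h33 = 1
```

With four or more pixel-to-(longitude, latitude) correspondences, the returned matrix maps video coordinates onto the map, and its inverse maps map coordinates back onto the video.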


5. Time stamped semantic glyph representation

Glyph-based visualization is a common visual design procedure in which a group of graphical objects, known as glyphs, is employed to represent a data set [35]. The glyph method is used here to visualize motion vectors overlaid on the video stream frames. Our main concern is to collect the visual information that appears in all frames of the video while the object remains in view. A time-stamped glyph is generated to signify the type of car, the speed with distinctive colors, and event information such as lane changes. The proposed system accurately determines a vehicle's lane change at a specific time thanks to precise localization. Abnormal event detection is performed by analyzing the vehicle trajectories [52]: trajectory analysis and interaction with scene features allow interesting events to be recognized. For any image point, the position of the corresponding scene point is determined in every video frame until the vehicle leaves the field of view.

Vehicle speed varies even in the absence of obstacles because of curves and turns. Experimental data confirm the common insight that speed is one of the most significant factors in safe driving, and variation in vehicle speed is considered a likely factor in congestion and accidents [37, 51, 94]. Therefore, the proposed algorithm determines the speed variation of vehicles in each frame on the basis of trajectory analysis. Trajectories with different speeds are identified and represented using glyphs. At each time frame, if the vehicle speed stays below the threshold, the same color is retained; if the speed changes abruptly, a different color is assigned at that instant of time. With this time-stamped identification method, the precise instant of a speed variation that disrupts the flow of traffic is identified in the video frame.
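A minimal sketch of this time-stamped color assignment, with an illustrative palette and speed-jump threshold (both assumptions, since the chapter does not specify them):

```python
def glyph_colors(speeds, jump_thresh=5.0):
    """Assign one color per time step: the same color is kept while speed
    changes stay below the threshold, and a new color is assigned at each
    abrupt change, so the exact instant of disruption is visible."""
    palette = ["green", "orange", "red", "purple", "blue"]
    colors, idx = [palette[0]], 0
    for prev, cur in zip(speeds, speeds[1:]):
        if abs(cur - prev) > jump_thresh:      # abrupt speed change
            idx = (idx + 1) % len(palette)
        colors.append(palette[idx])
    return colors
```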


6. Association between Google Maps and video visualization

To properly visualize the analysis results on Google Maps, the output must be aligned to the map coordinates [13]. The camera image is rectified automatically and mapped onto the map. Activity is detected in each frame of the surveillance video, and the vehicle's ground location, obtained through correspondence points and trajectory learning, is mapped onto the map. Consequently, video inspection across several road cameras is improved by projecting the activity of each outdoor surveillance camera onto Google Maps. To localize vehicle coordinates on Google Maps, that is, to associate the video space with the Google Maps space, the homography is computed and its perspective view is drawn. The transformation matrix provides the association and its mapping, and as events occur, the corresponding video is visualized on Google Maps, as shown in Figure 8.

Figure 8.

Holistic view of videos on Google Maps.
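As a small sketch of this association step, a pixel location can be projected to map coordinates with the homography estimated earlier; the coordinate conventions (pixel source, longitude/latitude destination) are assumptions:

```python
import numpy as np

def pixel_to_latlng(H, px, py):
    """Project a video-space pixel onto map coordinates with the homography
    H from homography_dlt above; the inverse mapping uses np.linalg.inv(H).
    Assumes H was estimated against (longitude, latitude) points."""
    q = H @ np.array([px, py, 1.0])
    lng, lat = q[0] / q[2], q[1] / q[2]        # dehomogenize
    return lat, lng
```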


7. Holistic view of video using Google Maps

A surveillance video naturally captures a perspective view of the visual scene, which is recognized as quasi-3D. Significant information is gathered from the different videos and displayed to represent unusual events, as depicted in Figure 8.

In a video surveillance-based system, the identification of unusual events is considered the most significant task. Anomalous behavior can be either drastic or subtle [36, 58, 63]; changing lanes on highways, for instance, is hazardous. The proposed system precisely identifies a vehicle's lane change at a specific time because of precise localization. Anomalous event detection [40] can be performed from the trajectory [62]: trajectory analysis reveals when a vehicle exhibits alarming behavior. Different glyph colors in the visualization portray the vehicle type, position, and event information within the video frame.

7.1. Small scale

The proposed technique has been tested on a small scale, for example, the area around Northumbria University City Campus, Newcastle upon Tyne, UK. The trajectories of detected objects are shown, using semantic glyphs, for as long as the objects remain in the scene, as shown in Figure 9.

Figure 9.

Time stamped glyph-based video visualization on Google Maps.

There is room for future work in this area of visualization. The proposed visualization approach can be used for a city-level traffic management system, providing a precise view of a bigger city. A spatio-temporal view of a collection of videos can be acquired by mapping the trajectories onto Google Maps, as shown in Figure 10.

Figure 10.

Multiple video visualization on large scale.

To interpret data in a real-time system, visualization of video data offers intuitive information that can be used to identify trends and patterns. Conversely, gathering statistics automatically from video data is computationally costly. Walton et al. [39] visualized traffic video data on Google Maps to display traffic information; however, displaying numerous traffic videos simultaneously was challenging because of the heavy transmission load, and human intelligence was used to gather semantic features from the surveillance videos in their graphic mapping scheme. Lately, Hsieh and Wang [50] proposed a traffic system for visualizing traffic information by inferring vehicle data and reconstituting a video from the database. Traffic flow was assessed from surveillance videos, and a mapping was created between vehicle detector data and videos. When visualizing traffic information, the approach was ineffective in simulating all types of kinematics and dynamics because driving behavior varies across regions.


8. Conclusion

VV is concerned with the visual illustration of input surveillance video to see through to the vital features and events it contains. It is intended to assist intellectual reasoning while easing the load of observing videos. A novel glyph-based visualization approach has been proposed that can be efficiently used for road surveillance videos. A visual analysis based on motion tracking is performed to monitor live road traffic on highways. The proposed approach has been verified on numerous video frame rates and resolutions for visualizing traffic flows. Experimental outcomes illustrate that the approach can be employed in field conditions and permits better use of existing traffic management systems.

References

  1. Daniel G, Chen M. Video visualization. In: Proceedings of the 14th IEEE Visualization; IEEE; 2003
  2. Höferlin M et al. Evaluation of fast-forward video visualization. IEEE Transactions on Visualization and Computer Graphics; 2012. pp. 2095-2103
  3. Duffy B et al. Glyph-based video visualization for semen analysis. IEEE Transactions on Visualization and Computer Graphics; 2015. pp. 980-993
  4. Morris BT et al. Real-time video-based traffic measurement and visualization system for energy/emissions. IEEE Transactions on Intelligent Transportation Systems; 2012. pp. 1667-1678
  5. Fuchs J et al. Evaluation of alternative glyph designs for time series data in a small multiple setting. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; ACM; 2013
  6. Ranganathan P, Olson E. Locally-weighted homographies for calibration of imaging systems. In: IEEE/RSJ International Conference on Intelligent Robots and Systems; IEEE; 2014
  7. Chincholkar AA. Moving object tracking and detection in videos using MATLAB: A review. International Journal of Advent Research in Computer and Electronics (IJARCE). 2014;1(5):2348-5523
  8. Morris BT, Trivedi MM. Learning, modeling, and classification of vehicle track patterns from live video. IEEE Transactions on Intelligent Transportation Systems; 2008. pp. 425-437
  9. Kappe CP et al. Reconstruction and visualization of coordinated 3D cell migration based on optical flow. IEEE Transactions on Visualization and Computer Graphics; 2016. pp. 995-1004
  10. Borgo R, Kehrer J, Chung DH, Maguire E, Laramee RS, Hauser H et al. Glyph-based visualization: Foundations, design guidelines, techniques and applications. In: Eurographics State of the Art Reports; 2013. pp. 39-63
  11. Dubrofsky E. Homography estimation [Doctoral dissertation]. Vancouver: University of British Columbia; 2009
  12. Morris BT, Trivedi MM. A survey of vision-based trajectory learning and analysis for surveillance. IEEE Transactions on Circuits and Systems for Video Technology. 2008;18(8):1114-1127
  13. Morris B, Trivedi MM. Contextual Activity Visualization from Long-Term Video Observations. San Diego: University of California Transportation Center; 2010
  14. Dee HM, Velastin SA. How close are we to solving the problem of automated visual surveillance? Machine Vision and Applications. 2008;19(5):329-343
  15. Cavallaro A, Ebrahimi T. Change detection based on color edges. In: IEEE International Symposium on Circuits and Systems; IEEE; 2001. pp. 141-144
  16. Collins RT, Lipton AJ, Kanade T, Fujiyoshi H, Duggins D, Tsin Y et al. A system for video surveillance and monitoring. Technical Report CMU-RI-TR-00-12; Robotics Institute, Carnegie Mellon University; 2000
  17. Dollár P, Rabaud V, Cottrell G, Belongie S. Behavior recognition via sparse spatio-temporal features. In: IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance; IEEE; 2005. pp. 65-72
  18. Girgensohn A, Kimber D, Vaughan J, Yang T, Shipman F, Turner T et al. DOTS: Support for effective video surveillance. In: Proceedings of the 15th ACM International Conference on Multimedia; ACM; 2007. pp. 423-432
  19. Ward MO. Multivariate data glyphs: Principles and practice. In: Handbook of Data Visualization. Berlin, Heidelberg: Springer; 2008. pp. 179-198
  20. Ward MO. A taxonomy of glyph placement strategies for multidimensional data visualization. Information Visualization. 2002;1(3/4):194-210
  21. Bertin J. Semiology of Graphics: Diagrams, Networks, Maps. Madison, WI: University of Wisconsin Press; 1983
  22. Maguire E, Rocca-Serra P, Sansone S-A, Davies J, Chen M. Taxonomy-based glyph design—With a case study on visualizing workflows of biological experiments. IEEE Transactions on Visualization and Computer Graphics. 2012;18(12):2603-2612
  23. Post F, Vrolijk B, Hauser H, Laramee R, Doleisch H. The state of the art in flow visualisation: Feature extraction and tracking. Computer Graphics Forum. 2003;22(4):775-792
  24. Wong PC, Foote H, Kao DL, Leung R, Thomas J. Multivariate visualization with data fusion. Information Visualization. 2002;1(3/4):182-193
  25. Hlawitschka M, Scheuermann G, Hamann B. Interactive glyph placement for tensor fields. In: International Symposium on Visual Computing. Berlin, Heidelberg: Springer; 2007. pp. 331-340
  26. Fuchs R, Hauser H. Visualization of multi-variate scientific data. Computer Graphics Forum. 2009;28(6):1670-1690
  27. Pearlman J, Rheingans P. Visualizing network security events using compound glyphs from a service-oriented perspective. In: VizSEC 2007. Berlin, Heidelberg: Springer; 2008. pp. 131-146
  28. Aigner W, Miksch S, Schumann H, Tominski C. Visualization of Time-Oriented Data. Berlin: Springer-Verlag; 2011
  29. Hlawatsch M, Leube P, Nowak W, Weiskopf D. Flow radar glyphs—Static visualization of unsteady flow with uncertainty. IEEE Transactions on Visualization and Computer Graphics. 2011;17(12):1949-1958
  30. Peng Z, Grundy E, Laramee R, Chen G, Croft N. Mesh-driven vector field clustering and visualization: An image-based approach. IEEE Transactions on Visualization and Computer Graphics. 2012;18(5):283-298
  31. Ropinski T, Preim B. Taxonomy and usage guidelines for glyph-based medical visualization. In: SimVis; 2008. pp. 121-138
  32. Ropinski T, Oeltze S, Preim B. Survey of glyph-based visualization techniques for spatial multivariate medical data. Computers & Graphics. 2011;35(2):392-401
  33. Chung DH, Legg PA, Parry ML, Bown R, Griffiths IW, Laramee RS et al. Glyph sorting: Interactive visualization for multi-dimensional data. Information Visualization. 2015;14(1):76-90
  34. Botchen RP, Bachthaler S, Schick F, Chen M, Mori G, Weiskopf D et al. Action-based multifield video visualization. IEEE Transactions on Visualization and Computer Graphics. 2008;14(4):885-899
  35. Parry ML, Legg PA, Chung DHS, Griffiths IW, Chen M. Hierarchical event selection for video storyboards with a case study on snooker video visualization. IEEE Transactions on Visualization and Computer Graphics. 2011;17:1747-1756
  36. Venugopal KR, Patnaik LM. Moving vehicle identification using background registration technique for traffic surveillance. In: Proceedings of the International MultiConference of Engineers and Computer Scientists (Vol. 1); 2008
  37. Johnson C, Sanderson A. A next step: Visualizing errors and uncertainty. IEEE Computer Graphics and Applications. 2003;23(5):6-10
  38. Liu S, Yi H, Chia LT, Rajan D, Chan S. Semantic analysis of basketball video using motion information. In: Pacific-Rim Conference on Multimedia. Berlin, Heidelberg: Springer; 2004. pp. 65-72
  39. Walton S, Chen M, Ebert D. LiveLayer: Real-time traffic video visualisation on geographical maps
  40. Hoummady B. U.S. Patent No. 6,366,219. Washington, DC: U.S. Patent and Trademark Office; 2002
  41. Beymer D, McLauchlan P, Coifman B, Malik J. A real-time computer vision system for measuring traffic parameters. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition; IEEE; 1997. pp. 495-501
  42. Arrospide J, Salgado L, Nieto M, Mohedano R. Homography-based ground plane detection using a single on-board camera. IET Intelligent Transport Systems. 2010;4(2):149-160
  43. Kumar P, Ranganath S, Weimin H, Sengupta K. Framework for real-time behavior interpretation from traffic video. IEEE Transactions on Intelligent Transportation Systems. 2005;6(1):43-53
  44. Shekhar S, Lu CT, Liu R, Zhou C. CubeView: A system for traffic data visualization. In: Proceedings of the IEEE 5th International Conference on Intelligent Transportation Systems; IEEE; 2002. pp. 674-678
  45. Lu CT, Boedihardjo AP, Zheng J. AITVS: Advanced interactive traffic visualization system. In: 22nd International Conference on Data Engineering (ICDE'06); IEEE; 2006. pp. 167-167
  46. Ang D, Shen Y, Duraisamy P. Video analytics for multi-camera traffic surveillance. In: Proceedings of the Second International Workshop on Computational Transportation Science; ACM; 2009. pp. 25-30
  47. Andrienko G, Andrienko N. A visual analytics approach to exploration of large amounts of movement data. In: International Conference on Advances in Visual Information Systems. Berlin, Heidelberg: Springer; 2008. pp. 1-4
  48. Wang Y, Krum DM, Coelho EM, Bowman DA. Contextualized videos: Combining videos with environment models to support situational understanding. IEEE Transactions on Visualization and Computer Graphics. 2007;13(6):1568-1575
  49. Romero M, Summet J, Stasko J, Abowd G. Viz-A-Vis: Toward visualizing video through computer vision. IEEE Transactions on Visualization and Computer Graphics. 2008;14(6):1261-1268
  50. Hsieh C-Y, Wang Y-S. Traffic situation visualization based on video composition. Computers & Graphics. 2016;54:1-7
  51. Borgo R, Chen M, Daubney B, Grundy E, Janicke H, Heidemann G et al. A survey on video-based graphics and video visualization. In: Proceedings of the EuroGraphics Conference, State of the Art Report; 2011. pp. 1-23
  52. Denman H. Video visualization. In: Proceedings of IEEE Visualization; Seattle, WA; 2003. pp. 409-416
  53. Botchen R, Hashim R, Weiskopf D, Ertl T, Thornton IM. Visual signatures in video visualization. IEEE Transactions on Visualization and Computer Graphics. 2006;12(5):1093-1100
  54. Lam H. IEEE Transactions on Visualization and Computer Graphics. 2008;14(6):1261-1268
  55. Chen W. Automatic animation for time-varying data visualization. Computer Graphics Forum. 2010;29(7):2271-2280
  56. Robinson JA. Techniques for automated reverse storyboarding. IEE Proceedings - Vision, Image and Signal Processing. 2005;152(4):425-436
  57. Yeung MM, Yeo B-L. Video visualization for compact presentation and fast browsing of pictorial content. IEEE Transactions on Circuits and Systems for Video Technology. 1997;7(5):771-785
  58. Loy CC. Activity understanding and unusual event detection in surveillance videos [Doctoral dissertation]; 2010
  59. Cavallaro A, Steiger O, Ebrahimi T. Semantic video analysis for adaptive content delivery and automatic description. IEEE Transactions on Circuits and Systems for Video Technology. 2005;15(10):1200-1209
  60. Papadopoulos GT et al. Statistical motion information extraction and representation for semantic video analysis. IEEE Transactions on Circuits and Systems for Video Technology. 2009;19(10):1513-1528
  61. Faraway JJ, Reed MP, Wang J. Modelling three-dimensional trajectories by using Bézier curves with application to hand motion. Journal of the Royal Statistical Society: Series C: Applied Statistics. 2007;56(5):571-585
  62. Jiang F, Wu Y, Katsaggelos AK. Abnormal event detection from surveillance video by dynamic hierarchical clustering. In: IEEE International Conference on Image Processing (Vol. 5, pp. V-145); IEEE; 2007
  63. Medioni G et al. Event detection and analysis from video streams. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2001;23(8):873-889
  64. Buch N, Velastin S, Orwell J. A review of computer vision techniques for the analysis of urban traffic. IEEE Transactions on Intelligent Transportation Systems. 2011;12(3):920-939
  65. Zhu F, Li L. An optimized video-based traffic congestion monitoring system. In: Third International Conference on Knowledge Discovery and Data Mining (WKDD'10); IEEE; 2010. pp. 150-153
  66. Cheung S-CS, Kamath C. Robust background subtraction with foreground validation for urban traffic video. EURASIP Journal on Advances in Signal Processing. 2005;2005:2330-2340
  67. Goldgof DB, Sapper D, Candamo J, Shreve M. Evaluation of Smart Video for Transit Event Detection. Report No. 2117-7807-00; 2009
  68. Flagg M, Rehg JM. Video-based crowd synthesis. IEEE Transactions on Visualization and Computer Graphics. 2013;19(11):1935-1947
  69. Chao Q, Shen J, Jin X. Video-based personalized traffic learning. Graphical Models. 2013;75(6):305-317
  70. Horn BKP, Schunck BG. Determining optical flow. Artificial Intelligence. 1981;17(1-3):185-203
  71. Nagel H-H. On the estimation of optical flow: Relations between different approaches and some new results. Artificial Intelligence. 1987;33(3):299-324
  72. Xu C, Liu J, Benjamin K. Motion segmentation by learning homography matrices from motor signals. In: Canadian Conference on Computer and Robot Vision (CRV); IEEE; 2011
  73. Aslani S. Optical flow based moving object detection and tracking for traffic surveillance. World Academy of Science, Engineering and Technology, International Journal of Electrical, Computer, Energetic, Electronic and Communication Engineering. 2013;7(9)
