Open access peer-reviewed chapter

Positioning Techniques with Smartphone Technology: Performances and Methodologies in Outdoor and Indoor Scenarios

Written By

Paolo Dabove, Vincenzo Di Pietra and Andrea Maria Lingua

Submitted: 13 December 2016 Reviewed: 09 May 2017 Published: 02 November 2017

DOI: 10.5772/intechopen.69679

From the Edited Volume

Smartphones from an Applied Research Perspective

Edited by Nawaz Mohamudally


Abstract

Smartphone technology is widespread both in academia and in the commercial world. Almost everyone today carries a smartphone, used not only to call other people but also to share their location on social networks or to plan activities. Today a smartphone can compute its position using the sensors inside the device, which may include accelerometers, gyroscopes, magnetometers, proximity sensors, a barometer, and a GPS/GNSS chipset. In this chapter we analyze the state of the art of positioning with smartphone technology, considering both outdoor and indoor scenarios. Particular attention is paid to the latter, where accuracy can be improved by fusing information coming from more than one sensor. In particular, we investigate an innovative image recognition-based (IRB) method, particularly useful in GNSS-denied environments, taking into account the two main problems that arise when IRB positioning methods are considered: the first is battery optimization, which implies minimizing the frame rate; the second is the latency of the image processing required for visual-search solutions, driven by the size of the database of 3D environment images.

Keywords

  • positioning techniques
  • image navigation
  • GNSS
  • sensors
  • GPS

1. Introduction

Nowadays, thanks to new technologies, information about our position is available almost anywhere and at almost any moment through mobile devices such as smartphones or tablets. These devices may include many sensors, such as a global positioning system (GPS)/global navigation satellite system (GNSS) chipset, inertial measurement unit (IMU) platforms, a barometer, an altimeter, cameras, etc., that allow users to plan their activities (e.g., to know how long they must wait for a train) or to share their location on social networks (e.g., Facebook) [13]. With these kinds of sensors and the rise of new positioning techniques, it is possible to obtain a position in both outdoor and indoor scenarios. In the first case, GPS/GNSS chipsets are the most useful sensors for obtaining a fast position, even if some problems arise, especially in harsh environments, due to multipath or satellite obstructions. In the second case, these sensors become useless because no satellites are visible: positioning can instead be performed with other sensors, such as IMUs and cameras, using techniques such as image recognition-based (IRB) positioning or pedestrian dead reckoning (PDR).

In this chapter, we will investigate the positioning performances and methodologies in outdoor and indoor scenarios considering smartphone technology. In particular, the goal of this work is to analyze the state‐of‐the‐art of the precisions and accuracies that can be achieved with these instruments for positioning and navigation purposes, in both scenarios.

Section 2 analyzes the most common sensors installed in smartphones, as well as the methodology for determining the smartphone's reference system. We discuss the GNSS chipsets available today and the positioning accuracy currently obtainable with them (Section 2.1) and with INS platforms (Section 2.2). A short description of the cameras installed in today's smartphones is also given, since positioning can be performed using images as well (Section 2.3).

Subsequently, Section 3 describes the positioning techniques achievable today with smartphones and provides some practical examples: the tests performed and the results obtained are presented, focusing on outdoor (Section 3.1) and indoor (Section 3.2) scenarios.

Finally, some conclusions will be drawn in Section 4.


2. Sensors on smartphones

Many sensors are available today on smartphones: most of them serve internal applications (proximity sensor, light sensor, etc.), while others (e.g., GNSS, INS, and cameras) can be used to obtain a position. One of the biggest problems is the operating system (OS) installed on the smartphone: each OS manages the data coming from the internal sensors differently, not to mention how individual apps use these data.

While sensor availability varies from device to device, it can also vary between iOS and Android versions. The biggest changes were made in Android, due to its several platform releases: Android 1.5 (API Level 3) introduced many sensors, even if some of them were not usable or accessible before Android 2.3 (API Level 9). Similarly, Android 2.3 (API Level 9) and Android 4.0 (API Level 14) introduced further sensors, while others were removed and replaced by newer ones.

Figure 1 shows the availability of each sensor on a platform-by-platform basis, considering only the four platforms that involved sensor changes.

Figure 1.

Availability of each sensor in different Android systems (available at: http://rowdysites.msudenver.edu/~gordona/cs390-mobile/lects/summer14_day07-touch+sensing/summer14_day07-sensors/summer14_day07-sensors.html).

In this chapter, we focus only on the sensors useful for positioning: the GNSS and INS chipsets and the cameras. Hereinafter, a brief description of these sensors is provided.

2.1. GPS/GNSS chipsets

The GPS/GNSS chipset is the most widespread positioning sensor installed in smartphones. Many chipsets are available on the market today, and very often each manufacturer installs a few different versions of the same GNSS brand [2]. For example, Apple installs chips provided by Broadcom Corporation, while Samsung smartphones with the Android OS use u-blox AG chipsets. Before 2016, no GNSS raw data acquired by mobile platforms such as smartphones or tablets were available, but since then it has been possible to extract pseudorange and carrier-phase measurements from smartphones running Android 7.0. The announcement came from Google during I/O 2016, the three-day developer conference held from 18 to 20 May. This is a very strong innovation, destined to bring a revolution in the field of surveying and geolocalization: with these kinds of measurements, accuracies of a few centimeters will be obtainable even with mobile devices. Nevertheless, this possibility will not be analyzed in this chapter.

2.2. INS

Inertial measurement unit (IMU) platforms are increasingly being used integrated either with other instruments, typically GNSS receivers, odometers, and magnetometers, or with storage units [34]. Together they form an inertial navigation system (INS).

In general, INS instruments comprise three accelerometers, three gyroscopes, and three magnetometers. The characteristics of these sensors are briefly described below.

Accelerometers are instruments that measure acceleration (the rate of change of velocity) and help the phone distinguish up from down.

All accelerometers have two fundamental parts:

  1. A housing attached to the object whose acceleration we want to measure.

  2. A mass that, while tethered to the housing, can still move.

For example, consider a spring and a heavy ball. If you move the housing up, the ball lags behind, stretching the spring. By measuring how much that spring stretches, we can calculate the force acting on the mass, such as gravity.
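This spring-mass principle can be sketched numerically; the stiffness, mass, and stretch values below are illustrative, not taken from any real sensor:

```python
# Hooke's-law sketch of a spring-mass accelerometer (illustrative values).
# The proof mass (m) stretches the spring (stiffness k) by x; the measured
# stretch gives the acceleration along the spring axis: a = k * x / m.

def acceleration_from_stretch(k: float, x: float, m: float) -> float:
    """Acceleration (m/s^2) inferred from a spring stretch x (m)."""
    return k * x / m

# A 1 g proof mass on a spring with k = 9.81 N/m stretched by 1 mm
# corresponds to an acceleration of 9.81 m/s^2 (i.e., 1 g):
a = acceleration_from_stretch(k=9.81, x=0.001, m=0.001)
```

Real MEMS accelerometers measure the proof-mass displacement capacitively rather than with a visible spring, but the inference from displacement to acceleration is the same.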

Gyroscopes are sensors that also provide orientation information, but with greater precision. Thanks to this sensor, Android's Photo Sphere camera feature can tell how much a phone has been rotated and in which direction.

The digital compass is normally based on a sensor called a magnetometer, which provides a simple orientation relative to the Earth's magnetic field. Consequently, every smartphone knows where North is, so it can automatically rotate digital maps to match its physical orientation.
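As an illustration of how a heading could be derived from magnetometer readings, the sketch below assumes the device is held flat and uses one common axis convention; real devices differ in axis orientation and also require tilt compensation with the accelerometer:

```python
import math

def heading_deg(mx: float, my: float) -> float:
    """Compass heading in degrees, clockwise from magnetic North,
    for a device held flat (one common axis convention assumed)."""
    return math.degrees(math.atan2(my, mx)) % 360.0

# Field along +x (device facing magnetic North in this convention):
h = heading_deg(1.0, 0.0)
```

Declination correction (magnetic vs. true North) would still be needed for map alignment.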

2.3. Cameras

Like the GPS/GNSS chipset, the camera sensor is a mandatory component for any kind of mass-market communication device, and in particular for smartphones. The CMOS image sensor (CIS) has always been one of the most important features of our phones, to the point that it moved the market toward a new category of smartphone, the so-called camera phone. Google Pixel, Apple iPhone 7 Plus, Samsung Galaxy S7, Huawei P9, and Sony Xperia X are some of the best camera phones according to international mobile-industry events such as the International Consumer Electronics Show (CES) 2017 in Las Vegas and the Mobile World Congress (MWC) 2017 in Barcelona.

Mobile image sensing is the most direct way for the commercial user to represent reality, share information, and create social interactions. For these reasons, in recent years the CIS market has put particular emphasis on the quality of camera modules, reaching high resolution levels and making them usable as low-cost tools for numerous applications. Jointly with the numerous embedded sensors, like the gyroscope, accelerometer, proximity sensor, GPS receiver, and Wi-Fi connectivity, these new camera chipsets have boosted R&D efforts toward new kinds of applications, such as 3D sensing technology (e.g., Google's Tango), automotive self-driving, drone products, and virtual and augmented reality. Numerous technological upgrades in chipset architecture, like backside illumination (BSI), and in sensor setups, like dual-camera implementations, have moved the market in favor of one company rather than another. Nowadays, the production and technology leader in image chipsets is Sony, covering 35% of the entire market. Sony's sensors are mounted in numerous smartphones and tablets, like the Samsung Galaxy S7, Huawei P9, and Sony Xperia X. Then come Samsung (Samsung Galaxy S7, Lenovo Vibe Shot) and OmniVision (Huawei P8, Lenovo K3 Note); according to many CIS market research firms, the three together hold about 70% of the world market.

CMOS stands for complementary metal-oxide semiconductor. It uses the same manufacturing technologies as CCD sensors, the dominant technology until recently, but needs much less power and is less expensive to produce. The main advantage of CMOS imagers is their compatibility with mainstream silicon chip technology, which allows on-chip processing and consequently miniaturization. With the technological development of the semiconductor industry, the gap between CCD and CMOS has narrowed and the quality of the resulting images is competitive.

A typical CMOS sensor is an integrated circuit with an array of pixel sensors and has the following main parts:

  • Micro lenses

  • Color filter

  • Pixel array

  • ADC (analog to digital converter)

  • Digital controller

Looking at the best camera phones of 2017, it is possible to list the specifications of their camera sensors and give an overview of the best characteristics. Table 1 summarizes some smartphone camera specifications.

CMOS imaging sensor characteristics

| Smartphone | Sensor name | Size (diagonal) [mm] | dpix [µm] | CMOS technology | Sensor dimensions [mm × mm] | Image dimension [pix × pix] | MP |
| Google Pixel / BlackBerry KeyOne | Sony Exmor RS IMX378 | 7.81 | 1.55 | BSI CMOS | 6.25 × 4.69 | 4032 × 3024 | 12.2 |
| Apple iPhone 6S | Sony Exmor RS IMX315 | 6.15 | 1.22 | BSI CMOS | 4.92 × 3.70 | 4032 × 3024 | 12.2 |
| Apple iPhone 7 | Sony Exmor RS IMX* | 6.15 | n/a | n/a | n/a | n/a | 12 |
| Apple iPhone 7 Plus | Sony Exmor RS IMX* | 5 | n/a | n/a | n/a | n/a | 12 |
| Samsung Galaxy S6, S6 Edge(+) | Sony Exmor RS IMX240 | 6.83 | 1.12 | BSI CMOS | 5.95 × 3.35 | 5312 × 2988 | 15.9 |
| Samsung Galaxy S7, S7 Edge | Sony Exmor RS IMX260 | 7.06 | 1.4 | BSI CMOS | 5.64 × 4.23 | 4032 × 3024 | 12.2 |
| Samsung Galaxy S7, S7 Edge (variant) | Samsung Isocell S5K2L1 | 7.06 | 1.4 | ISOCELL | 5.64 × 4.23 | 4033 × 3024 | 12.2 |
| Huawei P9 | Sony Exmor RS IMX286 | 6.2 | 1.25 | BSI CMOS | 4.96 × 3.72 | 3968 × 2976 | 11.8 |
| OnePlus 3T / LG V20 / Huawei Mate 8 / Asus Zenfone 3 | Sony Exmor RS IMX298 | 6.4 | 1.12 | BSI CMOS | 5.16 × 3.87 | 4608 × 3456 | 15.9 |
| Sony Xperia XZ | Sony Exmor RS IMX300 | 7.87 | 1.08 | BSI CMOS | 6.46 × 4.47 | 5984 × 4140 | 24.8 |
| Sony Xperia XZ (4:3 mode) | — | — | — | — | 5.96 × 4.47 | 5520 × 4140 | 22.8 |
| Sony Xperia XZ (16:9 mode) | — | — | — | — | 6.46 × 3.64 | 5984 × 3366 | 20.1 |
| Sony Xperia XZ Premium (coming soon) | Sony Exmor RS IMX400 | 7.73 | 1.22 | BSI CMOS | 6.17 × 4.63 | 5056 × 3792 | 19.2 |
| LG G4 and G5 | Sony Exmor RS IMX234 | 6.83 | 1.12 | BSI CMOS | 5.95 × 3.35 | 5312 × 2988 | 15.9 |

Table 1.

CMOS image sensor characteristics for commercial camera phones.



3. Positioning with smartphones: outdoor and indoor scenarios

When outdoor scenarios are considered, smartphone technology can provide positions with quite a good level of accuracy, using the assisted GPS (A-GPS) system. Nevertheless, the received GPS/GNSS signal may be too noisy or not available at all, for example if the user is in an urban canyon or inside a building: in these cases, GNSS positioning is not possible.

Starting from that, many researchers have investigated alternative solutions that consider different sensors (such as INS and images) and other technologies (e.g., Wi-Fi, pedestrian tracking systems, Bluetooth) in order to improve position accuracy and availability. The following subsections give a brief overview of the accuracies obtainable today with a generic smartphone (chosen as representative).

3.1. Outdoor scenarios

3.1.1. GPS/GNSS only

As said in Section 2.1, since the end of 2016 it has been possible to acquire raw GNSS measurements from smartphones: the main problem is that only the Android Nougat OS allows this information to be extracted. Thus, in this section the attention is focused only on the internal solutions provided by the software installed on smartphones. In order to analyze the precision obtainable today with internal GNSS chipsets, some tests were performed.

The tests took place in the same places described in [11], considering two different scenarios: an open outdoor area representing "ideal" conditions (Figure 2, left) and another area (one of the courts of the Politecnico di Torino campus) with urban canyon characteristics (Figure 2, right). The line in Figure 2 (right) shows a particular track along which there is an area with limited satellite visibility (similar to urban canyon conditions) and many windows that create multipath due to their high reflectivity.

Figure 2.

Test site and track: an open sky area (left) and an urban canyon (right).

Dynamic tests were performed in these areas by walking along the same path with the smartphones mounted on a special "two-hands" support, as shown in Figure 3. The entire data collection system includes:

  • a smartphone (a)

  • a 360-degree retro-reflector (d)

and it also allows an external IMU platform (b) and an external GNSS antenna (c) to be installed.

Figure 3.

The two-hands support system developed at Politecnico di Torino.

GNSS positions were recorded during the surveys at a one-second sampling rate, using a dedicated app that stores the National Marine Electronics Association (NMEA) GGA messages in an ASCII file. All results were compared with a "ground truth" obtained through continuous tracking of the smartphone position with a total station, thanks to the retro-reflector installed on the "two-hands" support. In this way, millimeter accuracy was obtained, after estimating and accounting for the lever-arm offset between the instruments.

The NMEA sentences were analyzed and compared with the reference trajectories using software written in MATLAB.
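As a sketch of what that comparison pipeline starts from, a GGA sentence can be decoded as follows (shown here in Python rather than MATLAB; the sample sentence is illustrative, not from the survey data, and checksum validation is omitted):

```python
def parse_gga(sentence: str):
    """Extract latitude, longitude (decimal degrees) and fix quality
    from an NMEA GGA sentence. Checksum handling is omitted."""
    f = sentence.split(',')
    lat = float(f[2][:2]) + float(f[2][2:]) / 60.0   # ddmm.mmmm -> degrees
    if f[3] == 'S':
        lat = -lat
    lon = float(f[4][:3]) + float(f[4][3:]) / 60.0   # dddmm.mmmm -> degrees
    if f[5] == 'W':
        lon = -lon
    return lat, lon, int(f[6])                        # f[6] = fix quality

# Hypothetical GGA sentence (illustrative, not from the survey):
gga = "$GPGGA,123519,4507.036,N,00739.683,E,1,08,0.9,280.2,M,46.9,M,,*47"
lat, lon, quality = parse_gga(gga)
```

Each parsed fix can then be transformed into the total-station reference frame and differenced against the interpolated ground-truth trajectory.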

The horizontal positioning errors of the representative receiver are shown in Figure 4 for the urban canyon environment.

Figure 4.

2D performances of internal GPS sensor.

In order to have a more complete analysis from a statistical point of view, the most significant statistical parameters have been summarized in Table 2 for the urban canyon and open area test locations.

| Environment | Mean E (m) | Mean N (m) | Mean H (m) | St. dev. E (m) | St. dev. N (m) | St. dev. H (m) |
| Urban canyon | 0.4 | −7.3 | −2.1 | 4.5 | 4.7 | 5.0 |
| Open area | −0.5 | 1.6 | −1.9 | 2.6 | 2.5 | 4.5 |

Table 2.

Error statistics in urban canyon and open area environments.

As expected, it is generally possible to affirm that some environmental characteristics, such as obstacles and multipath effects, coupled with the number of trackable satellites, play a crucial role in determining the accuracy of smartphone positioning.

However, precision and accuracy could be further improved by computing a differential positioning solution, using the raw measurements obtainable from the internal sensors.

3.1.2. GNSS + INS

In the literature, it is common to find two different methods for GNSS + INS positioning: the loosely coupled (LC) and the tightly coupled (TC) approaches (Figure 5).

Figure 5.

GNSS + INS processing approaches.

In the first case (LC), the software integrates accelerations and angular velocities and updates all the state parameters. These include the positions and angular attitudes, but also the instrumental biases, using GNSS positions and IMU measurements.

In the latter (TC) method, the input parameters are the same, but both the GNSS observations (pseudoranges, carrier phases, Doppler) and the IMU observations enter the extended Kalman filter, each with its own rate and precision and its associated biases [8], in order to provide a unique solution.

Although this is a computationally heavier process, it can exploit even one or only a few visible GNSS satellites, which is a typical situation in urban canyons [13].

It is important to underline that only the LC method is available today, because this approach does not require raw GNSS measurements, whereas for the TC method these observations are fundamental.

Starting from this, this subsection briefly analyzes the results obtainable with the GPS + INS instruments installed on smartphones, following the LC approach.
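The LC idea, propagating the state with IMU data and correcting it with GNSS position fixes, can be illustrated with a deliberately simplified one-dimensional filter; the noise parameters and the scalar state are illustrative assumptions, not the actual implementation used in the tests:

```python
# Minimal 1-D loosely-coupled sketch: the INS propagates position by
# integrating acceleration; each GNSS position fix is blended in with a
# scalar Kalman gain. Illustrative only -- real LC filters estimate full
# 3-D position and attitude plus sensor biases.

def lc_filter(accels, gnss, dt=1.0, q=0.5, r=4.0):
    x, v, p = 0.0, 0.0, 1.0           # position, velocity, position variance
    track = []
    for a, z in zip(accels, gnss):
        v += a * dt                   # INS mechanization (1-D)
        x += v * dt
        p += q                        # uncertainty grows during propagation
        if z is not None:             # GNSS position update (LC: position only)
            k = p / (p + r)           # scalar Kalman gain
            x += k * (z - x)
            p *= (1.0 - k)
        track.append(x)
    return track

# Stationary receiver: zero acceleration, noisy-but-near-zero GNSS fixes
# keep the estimate bounded, unlike INS-only integration, which drifts.
est = lc_filter(accels=[0.0] * 5, gnss=[0.1, None, -0.1, None, 0.0])
```

The `None` entries stand in for epochs where no GNSS fix is available, the situation in which the INS bridges the gap.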

The tests were carried out in our campus in the same two different test sites described in the previous subsection, considering the same special support created in the Geomatics Lab at the Politecnico di Torino.

Using the Inertial Explorer® software to postprocess all the data acquired in the field, it is possible to obtain a horizontal loop-closure error of 4.21 m and a vertical loop-closure error of 3.73 m, considering a 3-min session duration. Obviously, the results differ slightly if different smartphones are considered, but it is possible to affirm that these values are representative of the technology available today.

3.2. Indoor scenarios

The spread of smartphone devices with various embedded sensors, increased computational power, and advanced connectivity features has led to the introduction of numerous application services based on awareness of the user's position, which provide information and assistance for navigation in the environment, pose estimation, tracking, and any kind of service related to the spatial context. Many location-based services (LBS) are implemented as information systems that use the position of a mobile device as prior information [22]. The number of companies deploying commercial LBS solutions reveals that location-based solutions are finally meeting market needs and will soon be implemented in mass-market applications. The principal fields of application are medical care [15], ambient assisted living [40], environmental monitoring [33], transportation [38], marketing [1], etc.

Most of these services require accurate localization of people, instruments, vehicles, animals, and assets. As is well known, GNSS positioning provides good accuracies only in open-sky environments. Conversely, in an indoor space or an urban canyon, GNSS positioning is not possible and it is necessary to overcome this issue with different techniques and sensors. In recent years, some indoor location-based services have been developed integrating different technologies and measurements [22]: cameras [27], infrared (Kinect), ultrasound [20], WLAN/Wi-Fi [6], RFID [23], and mobile communication [10] are examples of the technologies that the scientific community has put at the service of indoor localization. Despite the ample panorama of solutions, mass-market applications for indoor positioning require the use of the sensors embedded in commercial smartphones, without supplementary physical components. For this reason, major modifications to the devices are forbidden and the types of technology usable in these applications are reduced. Ref. [36] summarizes the user requirements for mass-market localization systems, reported in Table 3.

All these indoor positioning systems have pros and cons that make each more useful in specific scenarios. One of the most useful but complex localization methods is the inertial navigation system. This system is based on dead reckoning, which computes locations employing the inertial measurement units installed inside the smartphone, i.e., accelerometers and gyroscopes. The main advantage of a system using an IMU is that nowadays every kind of mobile device already has one, and no external infrastructure is required. Moreover, with inertial systems, the only input information needed is the starting position. Since no other external information is required, this technology is not affected by adverse weather conditions, security vulnerabilities, or jamming. However, these systems suffer from integration drift, making errors accumulate, and therefore they must be corrected by some other system. LBSs based on the camera sensor also have strong advantages and do not require installing any network of chipsets in the environment: all the primary sensors are already installed in the user device. In this case, the system can be considered low cost. Moreover, the positioning accuracy of these systems is usually higher than that of other systems. On the other hand, most systems based on triangulation cannot determine the orientation of the user, which seriously limits their ability to support many useful applications like augmented reality.
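The dead-reckoning update at the heart of such an inertial (PDR-style) system can be sketched as a single position step along the current heading; the step length and heading values below are illustrative:

```python
import math

def pdr_step(x, y, heading_deg, step_len=0.7):
    """Advance the position by one detected step along the current heading
    (heading in degrees clockwise from North; step length in metres)."""
    h = math.radians(heading_deg)
    return x + step_len * math.sin(h), y + step_len * math.cos(h)

# Two steps North, then two steps East (illustrative 0.7 m step length):
x, y = 0.0, 0.0
for heading in (0, 0, 90, 90):
    x, y = pdr_step(x, y, heading)
```

Each step compounds heading and step-length errors, which is exactly the integration drift that must be corrected by an absolute positioning source.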

It is evident that providing reliable and stable position information in a complex and changing environment is a very challenging task. Sensor fusion may be an option to combine the advantages of two or more different techniques (e.g., angle of arrival (AoA), time of flight (ToF), received signal strength indication (RSSI)) and technologies (e.g., GPS, Wi-Fi, Bluetooth, camera sensors, ultrasound) and to minimize their limitations. Some methods and technologies are ideal candidates to support or complete other navigation or localization systems in a multimodal approach, in order to obtain accuracy and reliability in the location information superior to that obtainable by any single technique, technology, or system. Multimodal solutions employing different sensors would not be feasible for low-end handsets unable to connect to more than one technology or lacking the hardware enhancements required to apply different techniques. For these reasons, the positioning solutions considered here exploit the sensors already embedded in smartphones: the INS, the CMOS image sensor, and Wi-Fi.

In this chapter, we focus on image recognition-based (IRB) solutions that use the CIS as the main sensor. In particular, after a general overview of existing methods, we investigate an innovative IRB localization method based on retrieving, for each smartphone picture acquired in real time, the corresponding synthetically generated 3D image or RGB-D image extracted from a database. We then evaluate the integration of the INS into a multimodal extension of this method for indoor navigation, and finally offer some considerations on systems using Wi-Fi as the main positioning technology.

3.2.1. Cameras + INS

Indoor positioning and navigation by optical sensors is becoming one of the dominant techniques, able to cover a large number of fields of application at all levels of accuracy. The success of these techniques is due to the improvement and miniaturization of CMOS sensors. Simultaneously, there has been an increase in data transfer speed and smartphone computational capabilities, as well as remarkable development in the field of image processing.

As seen before, LBSs based on the camera sensor have strong advantages. First, these systems do not require installing any network of chipsets in the environment, as the primary sensor (the CIS) is already installed in the user device. This allows a low-cost service to be developed without designing and deploying an onsite network. Moreover, the positioning accuracy of these systems is usually higher than that of other systems. In industrial processes, for example, computer vision systems based on object detection algorithms are used in production lines to track objects and check quality. These kinds of systems have accuracies of a few millimeters. Of course, image-based positioning with smartphones cannot reach these levels of accuracy, but it can perfectly match the requirements for navigation purposes.

There are many previous research studies on indoor image-based localization that pursue different goals and use different methods and technologies, also as a function of the research groups' fields of interest. There are visual odometry approaches [19], simultaneous localization and mapping (SLAM) [24], structure from motion, and approaches investigating semantic features [29]. Some interesting work exploits computer vision algorithms, in particular neural networks and transfer learning, for visual indoor positioning and classification [35]. Others use RGB-D images to perform object recognition [25]. On the use of a smartphone as a navigation device, some interesting research can be found in [27, 39].

As seen in the literature, there are many image-based LBSs, whose accuracy and coverage area are a function of the application. Some accuracy ranges may be useful for applications in very large indoor spaces like museums or fairs, while others may require sub-room accuracies, for example in the field of logistics and optimization. For indoor positioning and navigation in more complex spaces, with "search and rescue" tasks or on construction sites, the coverage area decreases and higher accuracies are required.

A possible solution, considering the sensors installed in the smartphone, is the image recognition-based approach, where the localization of the device is based on photogrammetric principles [16]. Image recognition-based (IRB) positioning is a good technology for smartphone indoor localization. The aim of these procedures is to match a user-generated query image, acquired via a mobile device, against an existing image database with position information [41].

Some tests have been carried out on our campus, following the methodologies presented in [9, 28]. The use of IRB positioning in mobile applications is characterized by the availability of a single camera; under this constraint, in order to estimate the camera parameters (position and orientation), prior knowledge of the 3D environment has to be available, in the form of a database of images with associated spatial information. A terrestrial LiDAR (Light Detection and Ranging) survey (TLS) with an associated camera can be executed to acquire the 3D model of the environment used to generate the image database (RGB-D images). Once the retrieval of the reference image is completed, it is possible to extract the 3D information of the selected features from the image and to estimate the external parameters (position and attitude) of the query image according to the collinearity equations (Figure 6).

Figure 6.

The IRBL procedure.
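The forward form of the collinearity equations underlying this estimation can be sketched as follows; the camera pose, focal length, and object point are illustrative values, and lens distortion and principal-point offsets are ignored:

```python
# Collinearity-equation sketch: project an object point into the image
# frame given the camera position X0 and rotation matrix R. The IRB
# solution inverts these equations (space resection) to estimate the
# position and attitude of the query image from matched 3-D features.

def project(point, X0, R, c):
    """Image coordinates (x, y) of a 3-D object point, focal length c."""
    # Object point expressed in the camera frame:
    d = [sum(R[i][j] * (point[j] - X0[j]) for j in range(3)) for i in range(3)]
    return -c * d[0] / d[2], -c * d[1] / d[2]

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity rotation (axis-aligned camera)
# Camera at the origin, 4 mm focal length; object point 1 m ahead (along -Z)
# and 0.5 m to the side:
x, y = project([0.5, 0.0, -1.0], [0.0, 0.0, 0.0], I3, c=0.004)
```

With at least three well-distributed 2D-3D correspondences, the six exterior-orientation parameters can be solved by least squares on these same equations.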

A priori information is necessary for these techniques; nowadays, however, an accurate 3D model, always available for further upgrading and usable for collateral tasks, is obtainable thanks to the integration of geomatics techniques such as photogrammetry, LiDAR, and mobile mapping systems.

Table 4 summarizes the accuracy results, in terms of discrepancies between ground truth and estimated values, for an indoor trial with a good level of similarity between the query image and the reference image extracted from the database.

| Criterion | Description | Value |
| Horizontal accuracy | 2D position for the detection of a shelf in a supermarket | 1 m |
| Vertical accuracy | Selection of the correct floor and visualization | Floor detection |
| Update rate | Minimum for navigation | 1 Hz |
| Latency | Delay with which position is available to the user | None |
| TTFF | Time-to-first-fix, latency after switching on the device | Without delay |
| Privacy | Maintenance of the user privacy | According to user-set policy |

Table 3.

Summary of requirements for mass-marked localization according to Wirola et al. [36].

| Param. | ΔX [m] | ΔY [m] | ΔZ [m] | Δω [rad] | Δϕ [rad] | Δk [rad] |
| Max (abs) | 0.164 | 0.149 | 0.063 | 0.4646 | 0.5288 | 0.2396 |
| Mean | 0.018 | 0.010 | 0.015 | 0.0975 | 0.0544 | 0.0573 |
| St. dev. | 0.084 | 0.086 | 0.020 | 0.1850 | 0.1689 | 0.1022 |

Table 4.

Accuracy results in indoor trial for position (ΔX, ΔY, ΔZ) and attitude (Δω, Δϕ, Δk).

It is important to note that the entire procedure can be executed in real time on a commercial smartphone and can provide the device position in a few seconds. This holds for a one-spot positioning task; when the methodology is transposed to indoor navigation, three fundamental problems must be taken into account: energy consumption, the latency of the image processing, and Internet data consumption. Acquiring images at a given frame rate for navigation applications consumes a great deal of energy, with a consequent battery-optimization problem. Furthermore, as each query image has to be sent to a server for the image retrieval procedure, a certain amount of Internet traffic is needed. Finally, the rate of positioning information is subordinated to the latency of the entire IRB methodology.

To allow a reduced frame rate and to compensate for latencies, inertial (INS) platforms built with MEMS (micro-electro-mechanical systems) technology can be integrated into the IRB positioning [12]. When fusing IRB position and attitude measurements with INS measurements, accelerations and angular velocities are integrated to provide real-time relative position and attitude information, while the inner INS variables (velocity at the starting point, accelerometer biases, and gyro drifts) are estimated using the absolute IRB inputs (position and attitude).

When MEMS technology is used together with IRB positioning, it is important to analyze the obtainable precisions and accuracies. The procedure was tested on our campus by walking along a predefined path using two different smartphones (a) mounted on the special support described in Figure 3.

The procedure starts with the analysis of the raw inertial sensor data (acceleration, angular velocity, and magnetic field magnitude), registered directly by the smartphone. It is necessary to filter these data to estimate and remove the noise. After that, it is possible to use the INS raw data in real time for positioning purposes with a Kalman filter approach, in order to reduce the number of frames that must be acquired for geolocalization. This means that the time interval between two images can be extended from 2 s up to 5 s, depending on the requested accuracy.
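A minimal example of the noise-filtering step is an exponential moving average; the smoothing factor and sample values are illustrative, and a production implementation would instead use the properly tuned Kalman filter described above:

```python
def low_pass(samples, alpha=0.2):
    """Exponential moving-average filter, often used to suppress
    high-frequency noise in raw accelerometer streams."""
    out, y = [], samples[0]
    for s in samples:
        y = alpha * s + (1.0 - alpha) * y   # blend new sample into estimate
        out.append(y)
    return out

# A constant 9.81 m/s^2 gravity signal with one noise spike: the filtered
# value at the spike is pulled back toward the steady signal.
filtered = low_pass([9.81, 9.81, 12.0, 9.81, 9.81])
```

The smoothing factor `alpha` trades responsiveness against noise rejection and would be tuned to the sensor's sampling rate.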

Table 5 shows the positioning results in terms of accuracy, considering an image-based navigation (IBN) approach. With an interval of 1 s between images, the mean planimetric error was 21.3 cm at 67% reliability, while at 95% this error was 37 cm.

| Error [m] | Mean (1 s) | 67% (1 s) | 95% (1 s) | Mean (2 s) | 67% (2 s) | 95% (2 s) | Mean (5 s) | 67% (5 s) | 95% (5 s) |
|---|---|---|---|---|---|---|---|---|---|
| E | 0.130 | 0.148 | 0.353 | 0.387 | 0.485 | 0.960 | 1.673 | 2.221 | 4.158 |
| N | 0.130 | 0.141 | 0.412 | 0.380 | 0.409 | 1.162 | 1.574 | 1.677 | 3.952 |

Table 5.

Results obtained with drift estimation coming from images.

When the positioning obtained with an interval of 2 s between images is analyzed, the mean planimetric error increases to 61 cm at 67% and 1.49 m at 95%.
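Accuracy figures of this kind (mean and 67th/95th percentiles of the planimetric error) can be reproduced from a set of East/North residuals with a few lines; the sample residuals below are illustrative, not the chapter's data:

```python
import math

def error_stats(east_res, north_res):
    """Mean and 67th/95th nearest-rank percentiles of the planimetric (2D)
    errors computed from per-epoch East/North residuals (input units)."""
    planimetric = sorted(math.hypot(e, n) for e, n in zip(east_res, north_res))
    n = len(planimetric)

    def pct(p):
        # nearest-rank percentile: smallest value covering fraction p
        return planimetric[min(n - 1, math.ceil(p * n) - 1)]

    return sum(planimetric) / n, pct(0.67), pct(0.95)

# Illustrative residuals [m]
east = [0.10, -0.15, 0.05, 0.20, -0.08, 0.12, -0.30, 0.02]
north = [-0.12, 0.10, 0.18, -0.05, 0.09, -0.22, 0.15, 0.04]
mean_err, p67, p95 = error_stats(east, north)
```

Note that the planimetric error combines the E and N components through `math.hypot`, so its percentiles cannot be read directly off the per-axis columns of Table 5.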

IBN allows the final residuals to be reduced by 50% and the tolerable outages to be extended up to 90 s, while also improving the quality of the estimated angles. At the moment, however, IBN requires a high‐performance server to compute the solution, together with a well‐defined image database (DB).

3.2.2. Wi‐Fi et al.

Over the last decade, wireless location estimation has been an active field of research, becoming the most widespread approach to indoor localization in GNSS‐denied environments. A WLAN (wireless local area network, IEEE 802.11 standard), otherwise known as Wi‐Fi (a trademark of the Wi‐Fi Alliance), is a wireless network of devices that uses high‐frequency radio signals (2.4 GHz, in the ISM band) to transmit and receive data within a limited area. As the connection between the nodes of the network is continuously maintained, communication is preserved even if a device moves around within that limited area (50–100 m) [37]. For these reasons, WLAN technology can be used to estimate the location of a mobile device within the network. The positioning accuracy required to offer satisfactory LBSs is on the order of 1 m, and a great R&D effort is still needed. This field of research is expected to keep expanding for years, alongside numerous commercial applications, because it is a low‐cost solution providing proper connectivity and high‐speed links. In fact, the WLAN infrastructure is nowadays widespread in many indoor environments and is already standardized for commercial smartphone communication.

An indoor environment is often complex and characterized by non‐line‐of‐sight (NLOS) conditions with respect to target objects; in these situations, WLAN positioning technologies can be very helpful because they do not require line of sight. Unfortunately, compared to the IRBL procedure, WLAN positioning is affected by a larger estimation error, which depends on the number and position of the nodes in the network. Other challenging issues are power consumption and signal attenuation.

The pros and cons of WLAN positioning depend on the positioning technique used. The most popular WLAN positioning method is based on the received signal strength indicator (RSSI), because it is easy to extract from any device connected to a Wi‐Fi network [17]. The RSSI method relies on the received signal power and on the relation between signal attenuation and the distance between nodes. Knowing the strength of the emitted signal and the strength of the received signal, it is possible to calculate the attenuation and consequently the distance between emitter and receiver. These techniques can be combined with different positioning strategies, such as propagation modeling, fingerprinting, cell of origin, and multilateration [30]. To obtain a more precise localization, it is necessary to combine them with the fingerprinting technique [37], which consists of an a priori survey mapping the observed signal strength of the fixed routers in every place of the indoor environment. With these data it is possible to generate a database (i.e., a radio map). The limitation of this method is the need for a priori information, which implies an increased workload and requires a well‐spread router network. The propagation model differs from fingerprinting in that it tries to determine the RSSI map analytically instead of empirically. Of course, the major issues are related to the correct description and modeling of environmental effects (moving objects, signal attenuation, multipath) [5].
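The two RSSI strategies can be sketched in a few lines. The log-distance path-loss model converts an RSSI reading into a range, while fingerprinting matches an observed RSSI vector against a surveyed radio map; the reference power, path-loss exponent, and radio-map values below are illustrative assumptions that would have to be calibrated on site:

```python
import math

def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, n=2.5):
    """Invert the log-distance path-loss model
    RSSI(d) = RSSI(1 m) - 10 * n * log10(d).
    n is ~2 in free space and typically 2.5-4 indoors."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * n))

def fingerprint_match(observed, radio_map):
    """Nearest-neighbour fingerprinting: return the surveyed point whose
    stored RSSI vector is closest (Euclidean, in dB) to the observed one."""
    def signal_distance(stored):
        # APs missing from a fingerprint are treated as very weak (-100 dBm).
        return math.sqrt(sum((observed[ap] - stored.get(ap, -100.0)) ** 2
                             for ap in observed))
    return min(radio_map, key=lambda point: signal_distance(radio_map[point]))

# Illustrative radio map: (x, y) position -> RSSI per access point [dBm]
radio_map = {
    (0.0, 0.0): {"AP1": -42.0, "AP2": -70.0},
    (5.0, 0.0): {"AP1": -65.0, "AP2": -48.0},
    (2.5, 4.0): {"AP1": -58.0, "AP2": -60.0},
}
position = fingerprint_match({"AP1": -60.0, "AP2": -59.0}, radio_map)
```

The sketch makes the trade-off concrete: the path-loss model needs only two calibrated parameters but is brittle under multipath, whereas fingerprinting absorbs the environment into the radio map at the cost of the a priori survey.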

Another way to locate a device in a Wi‐Fi positioning system is the cell of origin (CoO) method, in which the receiver position is made to coincide with the coordinates of the access point (AP) generating the highest RSSI value. Owing to the spatial distribution of the APs in an indoor environment, this type of technique reaches location errors of around 10–20 m [14].

Finally, multilateration methods, based on the time of arrival, time difference of arrival, angle of arrival, and so forth, are less common for WLAN positioning, owing to the computational complexity of these kinds of measurements on mobile devices [26].
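For completeness, the geometric core of multilateration is simple once ranges are available, whatever their source (time of arrival or RSSI-derived). Subtracting the first range equation from the others linearizes the problem, which can then be solved by least squares; the AP coordinates below are illustrative:

```python
import math

def multilaterate(anchors, distances):
    """Least-squares 2D multilateration. Subtracting the first range
    equation (x - x0)^2 + (y - y0)^2 = d0^2 from the others yields a
    linear system A [x, y]^T = b, solved here via the 2x2 normal
    equations; three or more non-collinear anchors are required."""
    (x0, y0), d0 = anchors[0], distances[0]
    rows, rhs = [], []
    for (xi, yi), di in zip(anchors[1:], distances[1:]):
        rows.append((2 * (xi - x0), 2 * (yi - y0)))
        rhs.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Normal equations: (A^T A) p = A^T b
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * v for r, v in zip(rows, rhs))
    b2 = sum(r[1] * v for r, v in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Three APs with exact ranges to the point (3, 4): the solver recovers it.
aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ranges = [math.hypot(3 - x, 4 - y) for x, y in aps]
est = multilaterate(aps, ranges)
```

In practice the ranges are noisy, so the least-squares formulation (rather than an exact circle intersection) is what makes the method usable; the computational cost cited above lies mainly in obtaining accurate time measurements, not in this solve.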

A literature review on WLAN systems for indoor positioning was published by He et al. in 2016 [18]. Many previous research studies on indoor Wi‐Fi localization pursue different goals and use different methods and technologies, depending also on the field of interest of the research groups. In particular, besides the numerous interesting works on positioning and navigation with self‐made mobile devices and sensor integration, some research exploits the sensors embedded in COTS (commercial off‐the‐shelf) smartphones. Some interesting works exploit the integration of inertial sensor‐based positioning with the Wi‐Fi capability of smartphones [7, 31]. For example, in [7] the authors propose a sensor fusion framework combining Wi‐Fi, pedestrian dead reckoning (PDR), and landmarks; the whole system runs on a smartphone, and an Android app was developed for real‐time indoor localization and navigation, with a reported accuracy of 1 m. An interesting multimodal approach to Wi‐Fi navigation is described in [21], where PDR carried out with only low‐cost sensors and Wi‐Fi smartphones is used in a cooperative positioning operation involving a certain number of participants; the error becomes smaller as the number of participants rises (5 m for 50 devices). Some works use GPS integration for cloud‐based LBSs [3], while other researchers introduce sensor fusion between Wi‐Fi and cameras for accurate indoor positioning [32] or for augmented‐reality navigation [1].

A comprehensive and complete view of the indoor positioning systems implemented today, with their applications and obtainable positioning accuracies, is given in [18].


4. Conclusion

In this chapter, the authors have described the positioning performances and methodologies achievable in outdoor and indoor scenarios with smartphone technology. In particular, the state of the art of smartphone technology, in terms of the precisions and accuracies obtainable with these instruments for positioning and navigation purposes, has been analyzed in both scenarios.

While in outdoor scenarios the obtainable accuracy is better than 5 m under open‐sky conditions considering only the GNSS sensors, in indoor environments it is possible to reach accuracies of 10–50 cm if sensors such as INS, cameras, or Wi‐Fi technology are considered. Interesting results are obtained by fusing the IRB technique with MEMS technology: considering an interval of 2 s between images, the mean planimetric error is about 61 cm at 67% and 1.49 m at 95% reliability. A further interesting alternative for indoor positioning could be the fusion of range cameras and INS instruments.

This overview is not exhaustive, also because the technology is evolving more quickly than the minds producing it: this chapter is therefore intended as a starting point for future work on these instruments for positioning and mapping applications.

References

1. Alnabhan A, Tomaszewski B. INSAR: Indoor navigation system using augmented reality. In: ISA '14 Proceedings of the Sixth ACM SIGSPATIAL International Workshop on Indoor Spatial Awareness; 4 November 2014; Dallas/Fort Worth, Texas. New York, NY, USA: ACM; 2014. pp. 36-43
2. Bauer C. On the (in‐)accuracy of GPS measures of smartphones: A study of running tracking applications. In: Proceedings of the 11th International Conference on Advances in Mobile Computing & Multimedia (MoMM 2013); 2-4 December 2013; Vienna, Austria. New York, NY, USA: ACM; 2013. pp. 335-340
3. Bisio I, Lavagetto F, Marchese M, Sciarrone A. GPS/HPS‐ and Wi‐Fi fingerprint‐based location recognition for check‐in applications over smartphones in cloud‐based LBSs. IEEE Transactions on Multimedia. 2013;15(4):858-869
4. Bornaz L, Dequal S. A new concept: The solid image. In: Proceedings of the XIXth International Symposium, CIPA 2003: New Perspectives to Save Cultural Heritage; 30 September–4 October 2003; Antalya, Turkey
5. Bose A, Foh CH. A practical path loss model for indoor WiFi positioning enhancement. In: 6th International Conference on Information, Communications & Signal Processing; 10-13 December 2007; Singapore: IEEE; 2007. pp. 1-5
6. Bumgon K, Wonsun B, Kim YC. Indoor localization for Wi‐Fi devices by cross‐monitoring AP and weighted triangulation. In: Proceedings of the IEEE Consumer Communications and Networking Conference (CCNC); 9-12 January 2011; Las Vegas, NV, USA: IEEE; 2011
7. Chen LH, Wu EHK, Jin MH, Chen GH. Intelligent fusion of Wi‐Fi and inertial sensor‐based positioning systems for indoor pedestrian navigation. IEEE Sensors Journal. 2014;14(11):4034-4042
8. Chu CH, Chiang KW, Liao JK, Rau JY, Tseng YH, Chen JH, Chen JC. The performance of a tight INS/GNSS/photogrammetric integration scheme for land based MMS applications in GNSS denied environments. In: Shortis M, El‐Sheimy N, editors. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Melbourne: Springer‐Verlag; 2012. pp. 479-484
9. Chiabrando F, Di Pietra V, Lingua A, Cho Y, Jeon J. An original application of image recognition based location in complex indoor environments. ISPRS International Journal of Geo‐Information. 2017;6:56
10. Cui X, Gulliver TA, Song H, Li J. Real‐time positioning based on millimeter wave device to device communications. IEEE Access. 2016;4:5520-5530
11. Dabove P. What are the actual performances of GNSS positioning using smartphone technology? Inside GNSS. 2014;2014:35-37
12. Dabove P, Ghinamo G, Lingua AM. Inertial sensors for smartphones navigation. SpringerPlus. 2015;4(1):834
13. Dabove P, Manzino A. Accurate real‐time GNSS positioning assisted by tablets: An innovative method for positioning and mapping. GEAM. Geoingegneria Ambientale e Mineraria. 2016;148(2):17-22
14. Farid Z, Nordin R, Ismail M. Recent advances in wireless indoor localization techniques and system. Journal of Computer Networks and Communications. 2013:1-13
15. Fisher JA. Indoor positioning and digital management: Emerging surveillance regimes in hospitals. In: Surveillance and Security: Technological Politics and Power in Everyday Life. New York, USA: Taylor & Francis; 2006. pp. 77-88
16. Gupta A, Garg R, Kaminsky R. An image‐based positioning system. 2010. pp. 1-8. Available at: courses.cs.washington.edu
17. Hatami A, Pahlavan K. A comparative performance evaluation of RSS‐based positioning algorithms used in WLAN networks. In: IEEE Wireless Communications and Networking Conference; 13-17 March 2005; New Orleans, LA, USA: IEEE; 2005
18. He S, Chan SHG. Wi‐Fi fingerprint‐based indoor positioning: Recent advances and comparisons. IEEE Communications Surveys & Tutorials. 2016;18(1):466-490
19. Huang AS, Bachrach A, Henry P, Krainin M, Maturana D, Fox D, Roy N. Visual odometry and mapping for autonomous flight using an RGB‐D camera. In: Robotics Research. Berlin, Germany: Springer; 2017. pp. 235-252
20. Ijaz F, Yang HK, Ahmad AW, Lee C. Indoor positioning: A review of indoor ultrasonic positioning systems. In: Proceedings of the 15th International Conference on Advanced Communication Technology (ICACT); 27-30 January 2013; Pyeongchang, Korea: IEEE; 2013. pp. 1146-1150
21. Kang W, Han Y. SmartPDR: Smartphone‐based pedestrian dead reckoning for indoor localization. IEEE Sensors Journal. 2015;15(5):2906-2916
22. Koyuncu H, Yang SH. A survey of indoor positioning and object locating systems. IJCSNS International Journal of Computer Science and Network Security. 2010;10(5):121-128
23. Lau EEL, Chung WY. Enhanced RSSI‐based real‐time user location tracking system for indoor and outdoor environments. In: Proceedings of the International Conference on Convergence Information Technology; 21-23 November 2007; Gyeongju, Korea
24. Levchev P, Krishnan MN, Yu C, Menke J, Zakhor A. Simultaneous fingerprinting and mapping for multimodal image and WiFi indoor positioning. In: Proceedings of the 2014 International Conference on Indoor Positioning and Indoor Navigation (IPIN); 27-30 October 2014; Busan, Korea: IEEE; 2014
25. Li X, Fang M, Zhang JJ, Wu J. Learning coupled classifiers with RGB images for RGB‐D object recognition. Pattern Recognition. 2017;61:433-446
26. Li B, Salter J, Dempster AG, Rizos C. Indoor positioning techniques based on wireless LAN. In: First IEEE International Conference on Wireless Broadband and Ultra Wideband Communications; 13-16 March 2006; Sydney, Australia; 2006
27. Liang JZ, Corso N, Turner E, Zakhor A. Image‐based positioning of mobile devices in indoor environments. In: Multimodal Location Estimation of Videos and Images. Berlin, Germany: Springer; 2005. pp. 85-99
28. Lingua A, Aicardi I, Ghinamo G, Corbi C, Francini G, Lepsoy S, Lovisolo P. The MPEG7 visual search solution for image recognition based positioning using 3D models. In: Proceedings of the 27th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2014); 8-12 September 2014; Tampa, FL, USA; 2014. pp. 2078-2088
29. Liu L, Sisi Z. A semantic data model for indoor navigation. In: Proceedings of the Fourth ACM SIGSPATIAL International Workshop on Indoor Spatial Awareness; 6 November 2012; Redondo Beach, CA, USA. New York, NY, USA: ACM; 2012
30. Mautz R. Indoor Positioning Technologies. Habilitation thesis, ETH Zurich; 2012
31. Panyov AA, Golovan AA, Smirnov AS. Indoor positioning using Wi‐Fi fingerprinting pedestrian dead reckoning and aided INS. In: 2014 International Symposium on Inertial Sensors and Systems (ISISS); 25-26 February 2014; Laguna Beach, CA, USA: IEEE; 2014. pp. 1-2
32. Papaioannou S, Wen H, Markham A, Trigoni N. Fusion of radio and camera sensor data for accurate indoor positioning. In: 2014 IEEE 11th International Conference on Mobile Ad Hoc and Sensor Systems (MASS); 28-30 October 2014; Philadelphia, PA, USA: IEEE; 2014. pp. 109-117
33. Tellez M, El‐Tawab S, Heydari HM. Improving the security of wireless sensor networks in an IoT environmental monitoring system. In: Proceedings of the 2016 Systems and Information Engineering Design Symposium (SIEDS); 29 April 2016; Charlottesville, VA, USA: IEEE; 2016
34. Titterton DH, Weston JL. Strapdown Inertial Navigation Technology. 2nd ed. Reston, VA, USA: The American Institute of Aeronautics and Astronautics; 2004
35. Werner M, Hahn C, Schauer L. DeepMoVIPS: Visual indoor positioning using transfer learning. In: Proceedings of the 7th International Conference on Indoor Positioning and Indoor Navigation (IPIN); 5-7 October 2016; Madrid, Spain: IEEE; 2016
36. Wirola L, Laine TA, Syrjärinne J. Mass‐market requirements for indoor positioning and indoor navigation. In: 2010 International Conference on Indoor Positioning and Indoor Navigation (IPIN); 15-17 September 2010; Zurich, Switzerland: IEEE; 2010. pp. 1-7
37. Au WS. RSS‐based WLAN indoor positioning and tracking system using compressive sensing and its implementation on mobile devices [thesis]. University of Toronto; 2010. p. 136
38. Xiao Z, Havyarimana V, Li T, Wang D. A nonlinear framework of delayed particle smoothing method for vehicle localization under non‐Gaussian environment. Sensors. 2016;16:692
39. Yang J, Xu R, Lv Z, Song H. Analysis of camera arrays applicable to the internet of things. Sensors. 2016;16:421
40. Zetik R, Shen G, Thomä R. Evaluation of requirements for UWB localization systems in home‐entertainment applications. In: Proceedings of the 2010 International Conference on Indoor Positioning and Indoor Navigation (IPIN); 15-17 September 2010; Zurich, Switzerland: IEEE; 2010
41. Zhang Q, Niu X, Gong L, Zhang H, Shi C, Liu C, Wang J, Coleman M. Development and evaluation of GNSS/INS data processing software. In: China Satellite Navigation Conference (CSNC) 2013 Proceedings. Springer; 2013. pp. 685-696
