Open access peer-reviewed chapter

Virtual Reality for Urban Sound Design: A Tool for Architects and Urban Planners

Written By

Josep Llorca

Submitted: 26 October 2017 Reviewed: 26 February 2018 Published: 12 April 2018

DOI: 10.5772/intechopen.75957

From the Edited Volume

Artificial Intelligence - Emerging Trends and Applications

Edited by Marco Antonio Aceves-Fernandez


Abstract

Urban sound is one of the main concerns of architects and urban planners in contemporary cities: how to control it, what to do about noise pollution, where silent areas should be situated, or which urban decisions must be made. These questions, among others, are based on spatial sound. Virtual reality is a powerful technology that can serve as a design tool to find some answers to these questions. Due to its power to generate realistic images of the environments that are studied, it is easy to see that virtual reality could contribute to the visualization and auralization of spaces before their construction. This task is one of architects’ responsibilities, and such a tool could be very useful to them. This chapter highlights the principles and some applications of virtual reality in urban sound design.

Keywords

  • virtual reality
  • architect
  • urban planner
  • sound design
  • sound object

1. Introduction

Sound continuously surrounds and envelops us, whether we are indoors or out, at work or play, in cities or the country. We hear birds, voices, machines, wind, water, thunder, whispers, steps, calls, whimpers, doors, windows, floors, chairs, etc. Some of these sounds are heard clearly, others overlap. Sometimes they come together in a yell, at other times they succeed one another. Then, you can hear noises, sounds, music, background, rhythms, accelerando, harmonies, dissonances, cacophonies, echoes, vibratos, repetition, melodies, etc. At times sounds remain masked by distance; at other times, some frequencies are highlighted. They are modified by echo, reverberation, coupling, absorption, brightness or localization. They have the capacity to make us rejoice, to sadden, pacify, sweeten, amaze, frighten, annoy, alert, stress, upset, attack or madden. At the same time, they can make you dance, swing, feel dizzy, sing, whistle, imitate, keep quiet, laugh or cry. When you try to emulate them, you can beat, rub, blow, strum, nip, strike, haul, move or pat something. Some sounds can be anticipated, others come as a surprise. Sometimes, you know how they are going to sound, but you do not know when. At other times you know when you will hear them, but not how they are going to sound. In some cases, the sound has been heard, but you do not know where it has come from. At certain times, it is known that a sound is going to happen, but you do not know why. Thus, anticipation, prevision, effect, surprise, timbre or production techniques contribute to hearing a sound as a source of pleasure, such as when it turns into music.

This chapter describes some attributes of sound object design in architecture and urbanism, and explains a useful tool for designing and experiencing a sound object. The first part of the chapter defines the sound object in the built environment through some considerations of its temporal composition, and the treatment of the city as a sonic instrument. The second part of the chapter translates these concepts into a powerful representation tool––virtual reality––as a useful instrument for architects and urbanists of today, and then reviews the software that meets these needs and describes some case studies. Finally, the last part of the chapter provides an overview of how acoustic virtual reality can take advantage of some current developments in artificial intelligence.


2. Sound objects

2.1. Some features of the architectural and urban sound object

To define what an architectural and urban sound object is, we must refer to the creator of the term “sound object”. In 1966, in his famous Traité des objets musicaux, Pierre Schaeffer broke with the academic classifications of noise, sound and music, and created a new musicology. His work presented a phenomenology of the audible. The key concept was not defined as a musical object, but as a sound object that could represent any environmental sound. The notion is quite complex and its richness cannot be demonstrated in a few words [1]. Nevertheless, Pierre Schaeffer himself defined what a sound object is, pointing out that:

It is obvious that in saying “that’s a violin” or “that’s a creaking door” we are alluding to the sound produced by the violin or the creak of the door. But the distinction that we want to establish between the instrument and the sound object is even more radical: when we are presented with a magnetic tape on which an unknown sound is recorded, what are we listening to? It is precisely what we call a sound object, regardless of every causal reference designated by the terms sound body, sound source or instrument [2].

In this way, the term “sound object” is grounded in our subjectivity, even though it is not modified by individual variations in hearing, or by continuous variations in our attention and sensitivity. Far from being subjective in the individualist, incommunicable and practically elusive sense, sound objects can be well described and analyzed [2]. In other words, the sound object is the sound that reaches the listener’s ear, analyzed just before entering it. The sound object is never better revealed than in the effect and content of blind listening.

Pierre Schaeffer defines sound objects by comparing illuminating and sonic phenomena. This comparison is reproduced here, as it provides a clear explanation:

Two big differences separate the experience of illuminating and sonic phenomena. The first consists of the fact that most visual objects are not sources of light, but simply objects, in the usual sense of the word, with light shining on them. Physicists are therefore quite accustomed to distinguishing light from the objects that reflect it. If the object itself gives out light, then we say it is a light “source”.

With sound there is nothing like this. In the overwhelming majority of sonic phenomena, sound as originating from “sources” is emphasized. However, the classic distinction in optics between sources and objects has not been imposed in acoustics. Attention has been given to the sound (as we say the light) considered as an emanation from a source, its paths and deformations, without the appreciation of the shapes and contours of this sound apart from the reference to its source [2].

It can be easily assumed that the distinction between sound source and deformations of the sound source can refer to a probable distinction between sound source and modifications to the sound added by the architecture. In other words, we can refer to the distinction between music and architecture in the process of hearing. However, before a conclusion is drawn, another quote by Schaeffer could provide clarification:

This attitude has been reinforced by the fact that sound (prior to the discovery of recording) has always been linked in time with the energy phenomenon that was its origin, to the extent that it has been practically confused with it. However, a fleeting sound is only accessible to one sense and remains under single control: the sense of hearing. In contrast, a visual object has something stable, and this is the second of the aforementioned differences. It is not confused with the light that illuminates it, it appears with permanent contours under different lights, and it is accessible to other senses: it can be felt, weighed and smelled; there is a form that our hands feel, a surface that touch explores, a weight and an odour.

It is understood that the notion of object barely had the strength to impose itself on the physicist’s attention. As the natural tendency of physics is to lead facts to their causes, great satisfaction is found in the energetic evidence of the sound source. There is no reason why the ear, at the end of the propagation of the mechanical relations in an elastic medium (the air), should perceive anything other than the sound source itself.

In fact, there is nothing false in the reasoning. Let us just say that, while it is valid for a physicist or an electroacoustic device builder, it is not, however, suitable for a musician or even for an acoustic ear [or an architect]. In fact, the latter do not concern themselves with the way a sound is born and propagated, but only with the way it is heard. Now, what the ear hears is neither the source nor the “sound”, but the true sound objects in the same way that the eye does not see directly the source, or even its “light”, but the luminous objects.

What Schaeffer wants us to realize is that what the ear hears is not the source, the “sound”, or even the pure music, but the real sound objects, which are the-music-with-the-architecture, in the same way that the eye does not see the source directly, or even its “light”, but the illuminated architecture. What the ear hears is, in fact, musicalized architecture.

Notably, when virtual reality represents the sound world of an environment, it always does so from a subjective position of the listener, with an individual and unique point of view. The listener, as receiver, perceives the effects and the sound content there, and in interactive virtual reality the sound is heard only once it has been mixed with the architecture. Therefore, the listener’s listening point includes not only the sound source, but also the modifications caused by the place through which the sound moves: that is, the sound object. In this context, the notion of the sound object provides an appropriate theoretical framework for this type of representation, since the sound object is revealed in the blind listening of the effects and the sound content, as explained previously [2].

2.2. The city as a sound instrument and the architect as a luthier of the city

Another issue in considering the sound object is the identification of the sound source. While it is true that a sound sounds different in each architectural or urban space, we can also say that we continue to recognize the original sound and distinguish it from other sounds, despite the effects with which the architectural space modifies the source. This can be explained as follows.

We can say that an acoustic musical instrument has three elements, of which the first two are essential. These are: the vibrator that starts to vibrate, and the exciter that causes the initial vibration or prolongs it in the case of maintained sounds. The third element, which is accessory, but always present, is the resonator, that is, a device designed to add its own effects to those of the body in vibration in order to amplify them, prolong them or modify them in some way [2].

If we transfer this concept to the city, we can also distinguish three elements: the vibrator, or the elements of construction (stony floors, street furniture, walls, etc.); the exciter, or people with their footsteps, movements and actions, together with the wind and the water; and the resonator, or the architecture itself. From this point of view, we consider architecture as a large-scale instrument. This classification into three elements presents the city as a great sound instrument and the architect as an unwitting maker of musical instruments.

Thus, we can easily compare a church, a street and a square. All of them have elements of vibration: the stone walls of the church, the asphalt of the street, and the sheet of water on the fountain of the square. The exciter in the church is the people who pray; in the street, it is the friction of the car wheels; and in the square, the jet of water that falls from the top of the fountain. Finally, the street and the square resonate in the open air, while the church is a closed resonant space.

This classification introduces great clarity in the approach, so that we can move on to another more difficult classification: that of the sound objects themselves, obtained from sources or sound bodies. The murmur of the people praying in the church is infinitely closer to the sound of the fountain in the square than to the shrill sound of a shout in the church, which in turn can approach the braking of a car on the street.

Once a sound source has been discovered, two possibilities are offered to the instrument manufacturer: repeat the same source and multiply it in different measures, or keep the same source and try to vary it. Schaeffer argues that the second procedure is not the simplest, because it inextricably links the three elements: vibrator, exciter and resonator. It is likely that contingencies force the instrumentalist not to use these variations in mutual independence, but to associate them immediately with the level of esthetics of the object [2].

However, if we refer to multiple instruments composed of a collection of vibrating bodies, like the city, we see at once that each of them repeats the triple combination of the elements. Architecture proposes a change in the collection of vibrators (each piece of architecture has different construction elements), a change in the collection of exciters (people change), and a change in the resonator (although the architecture does not move, there are as many pieces of architecture as places; due to the movement of the spectator, the architectural scene changes). Therefore, it is an instrument that varies greatly.

If we listen carefully to the same sound––an oboe––from different points of the same architecture, there is hardly anything in common between the various results produced by the same reproduction-resonance device. But this does not prevent the musician from speaking about the “timbre” of the oboe as an identity. Certainly, the oboe’s timbre is recognizable, and the most disfigured of the halls allows the oboe to be identified by an uneducated listener. So, we can state a priori that, although the entire room has a timbre, each of its spatial positions also has its own timbre: the same word with two different meanings. Therefore, we define architecture as an acoustic instrument:

Any device that allows us to obtain a varied collection of sound objects or varied sound objects, maintaining in spirit the presence of a cause, is an instrument of music in the traditional sense of experience common to all civilizations [2].


3. Virtual reality as a tool for acoustic design

Virtual reality is the simultaneous representation and perception of reality and its physical attributes in an interactive computer-generated environment [3]. One of these physical attributes is sound. The following diagram shows the operation of acoustic virtual reality applied to a closed architectural environment. Three elements can be distinguished that act in the process: the emission of a sound source, the addition of sound effects, and the perception of the final sound object (Figure 1).

Figure 1.

Representation of the operation of acoustic virtual reality applied to a closed architectural environment. A: real environment. B: virtual environment. C: zoom into the sound object.

The process described above is based on auralization. Following the concepts of simulation in acoustics and vibrations, Vorländer, in [3], describes auralization as (a) the separation of the process of sound generation, propagation and reproduction into three separate blocks, and (b) the representation of these blocks with systems theory tools (Figure 2).

Figure 2.

Representation of the operation of auralization. Redrawn from [3]. Generation and propagation of sound and its representation in the physical domain (A and B), and in the domain of acoustic signal processing (C). In the physical domain, sound source characterization and wave propagation can be either modeled or measured. The components will be combined in a synthesis of source signals and impulse responses.

The primitive signal, s(n), is called a “dry recording”. It contains the mono sound signal without any reverberation; normally it is a sound source recorded at a fixed distance and in a specific direction in an anechoic chamber. The resulting sound signal after propagation in (or between) rooms, g(n), contains characteristics of both the sound source and the transmission system. The propagation of sound in a room usually adds the phenomenon of reverberation to the source signal, while a sound event transmitted through walls is characterized by a lower sound pressure and a dull, low-pass sound. The behavior of a sound transmission system is represented physically by the impulse response of the system, h(n). The sound signal at the position of the receiver is obtained by convolving the original “dry recording” with the impulse response, g(n) = (s ∗ h)(n), where the impulse response is usually realized as a digital filter [3]. This method can be understood as an acoustic filter that contains the impulse response as a function of the position in the hall [4].
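The convolution step just described can be sketched in a few lines of Python with NumPy and SciPy. This is only a minimal illustration of g(n) = (s ∗ h)(n), not production auralization code; the file names and the assumption of a mono source at a single sampling rate are placeholders.

```python
# Minimal auralization sketch: convolve a "dry" anechoic signal s(n) with a
# room impulse response h(n) to obtain the receiver signal g(n) = (s * h)(n).
# File names are hypothetical; both files are assumed mono at the same rate.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

fs, s = wavfile.read("dry_oboe_anechoic.wav")      # s(n): dry recording
_, h = wavfile.read("room_impulse_response.wav")   # h(n): measured or simulated IR

s = s.astype(np.float64)
h = h.astype(np.float64)

g = fftconvolve(s, h)            # g(n): the source as heard through the room

g /= np.max(np.abs(g))           # normalize to avoid clipping
wavfile.write("auralized.wav", fs, (g * 32767).astype(np.int16))
```

For binaural output, the same convolution is simply performed once per ear with a pair of binaural room impulse responses.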

In the framework of a tool for the architect and urban planner, we can summarize the requirements that acoustic virtual reality must meet to satisfy their design needs:

  1. The tool must be able to correctly represent the location of the sound sources.

  2. It must allow the attenuation of sound with distance (see the sketch after this list).

  3. It must be able to include the effects added to the source by the geometry of the built environment. To do this, it must compute the reflections off the adjacent geometry.

  4. It must change the resulting sound depending on the materials of the building elements.

  5. It must be interactive, that is, allow movement through the environment, and even modification and testing in real time of the elements in the scenario.
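As a toy illustration of requirements 1 and 2, the sketch below places a point source in 3D and derives the free-field gain and propagation delay for a listener position. The function name and positions are hypothetical; real engines add air absorption, source directivity and smooth interpolation for moving listeners.

```python
# Requirements 1 and 2 in miniature: locate a point source, attenuate its
# level with distance (1/r pressure law, -6 dB per doubling of distance)
# and delay it by the travel time. Positions are illustrative only.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees Celsius

def free_field_gain_and_delay(source_pos, listener_pos, ref_dist=1.0):
    """Return (linear gain, delay in seconds) for a point source."""
    r = np.linalg.norm(np.asarray(listener_pos) - np.asarray(source_pos))
    r = max(r, ref_dist)          # clamp to avoid infinite gain near the source
    gain = ref_dist / r           # spherical spreading of sound pressure
    delay = r / SPEED_OF_SOUND
    return gain, delay

gain, delay = free_field_gain_and_delay((0.0, 0.0, 1.5), (10.0, 0.0, 1.7))
print(f"gain = {gain:.3f} ({20 * np.log10(gain):.1f} dB), delay = {delay * 1e3:.1f} ms")
```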

3.1. Virtual acoustics software: a quick review

The market offers a series of virtual reality tools that address virtual acoustics. These can be divided into two types: auralization engines for computer games, and acoustic simulation programs.

Regarding the first type, the sound of a computer game is the result of work done by the sound designer. A sound designer usually creates audio content (sound effects and music) and then creates sound events that trigger this content. Sound events are normally managed with tools such as Wwise or FMOD, which launch them in the game.

Audiokinetic Wwise (https://www.audiokinetic.com/) is a solution for sound design in computer games that consists of a powerful application to create audio and animation structures; define propagation; control sound, music and motion integration; profile playback; and create sound banks. In addition, Wwise is a sophisticated audio engine that handles audio processing, animation and a series of functions optimized for each platform. The program interprets Lua scripts and reproduces exactly how sounds and animation behave in the game, allowing specific behaviors to be validated and Wwise’s behavior on each platform to be assessed before sound is integrated into the game. It also contains a series of plug-ins, divided into those that generate audio and motion, such as the tone generator, and those that create audio effects, such as reverberation. Finally, it provides an interface between Wwise and programs for visualizing the three-dimensional world.

FMOD Studio (https://www.fmod.com/) is, like Wwise, an application dedicated to sound in video games. This software, developed by Firelight Technologies since 2002, is one of the industry’s standards and has been the basis of award-winning projects. FMOD Studio is a flexible and intuitive solution for audio in video games: it allows game sound to be designed through a DAW-style interface without requiring programming knowledge. FMOD can be incorporated into almost all platforms. In addition, the program allows real-time mixing and balancing, so changes can be made and auditioned without re-recording the scene. FMOD comprises several separate tools, such as “FMOD Designer”, the main window of the program, where the main work of creating the events and parameters called by the video game is done [5].

Pure Data (https://puredata.info/) is an open-source visual programming environment that runs on devices ranging from personal computers to smartphones. It is one of the major branches of the family of programming languages known as Max, originally developed by Miller Puckette at IRCAM. Pure Data allows musicians, visual artists, researchers and developers to create programs graphically, without writing a line of code. It can be used to process and generate sound, video and 2D/3D graphics, and to interface with sensors and MIDI devices.

Propagate (https://www.assetstore.unity3d.com/en/#!/content/40200) is a system for Unity that incorporates immersive audio that propagates realistically through the scene geometry, quickly and efficiently. Its simple interface allows audio to be propagated in real time, even when the sound sources move. The program is based on three propagation subsystems: the occlusion system simulates the transmission of sound waves through walls and other geometry, taking their materials into account and modifying the volume and frequency distribution accordingly; the diffraction system simulates the passage of sound waves through holes and around corners between geometries; and the perception system simulates how the perception of sound changes with position in the geometry.
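Propagate’s internals are not documented here, so the following is only a generic sketch of the kind of occlusion test such systems perform: a line-of-sight check from source to listener against the scene, with a crude material-dependent transmission loss when the path is blocked. The obstacle representation, function names and coefficients are all illustrative assumptions.

```python
# Generic occlusion sketch (not Propagate's actual code): test the straight
# path from source to listener against scene geometry; if blocked, apply a
# material-dependent transmission loss. All coefficients are illustrative.
import numpy as np

# Hypothetical per-material transmission factors (fraction of energy passed).
TRANSMISSION = {"glass": 0.30, "brick": 0.05, "concrete": 0.01}

def ray_hits_sphere(origin, target, center, radius):
    """Cheap stand-in for real mesh intersection: does the segment
    origin->target pass through a spherical obstacle?"""
    d = np.asarray(target, float) - np.asarray(origin, float)
    f = np.asarray(origin, float) - np.asarray(center, float)
    a, b, c = d @ d, 2 * (f @ d), f @ f - radius ** 2
    disc = b * b - 4 * a * c
    if disc < 0:
        return False                       # ray line misses the sphere
    t1 = (-b - disc ** 0.5) / (2 * a)
    t2 = (-b + disc ** 0.5) / (2 * a)
    return (0 <= t1 <= 1) or (0 <= t2 <= 1)  # hit within the segment?

def occlusion_gain(source, listener, obstacles):
    gain = 1.0
    for center, radius, material in obstacles:
        if ray_hits_sphere(source, listener, center, radius):
            gain *= TRANSMISSION[material]
    return gain

# A brick obstacle sits squarely between source and listener -> 0.05.
print(occlusion_gain((0, 0, 0), (10, 0, 0), [((5, 0, 0), 1.0, "brick")]))
```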

Acoustic simulation programs, the second group of virtual reality programs that treat acoustics, are completely dedicated to the faithful reproduction of sound in a space. They usually pay little attention to the visual aspect of the space, so their three-dimensional representations are schematic rather than realistic. Here we review three powerful examples: RAVEN, EVERTims and CATT-Walker.

RAVEN. Developed at the Institute of Technical Acoustics (ITA) of RWTH Aachen University, RAVEN is based on present-day acoustic simulation techniques and allows fairly faithful physical auralization of sound propagation in complex environments, including important effects such as sound scattering, sound insulation between rooms and sound diffraction. In the rendered sound field, sources can be distributed and moved freely, and receivers listen to them in real time; manipulations and modifications of the environment itself are also supported. RAVEN’s acoustic simulation combines the method of deterministic image sources (IS) [6] with a stochastic ray-tracing algorithm [7]. This framework allows physical computation of high-quality impulse responses in real time in which, apart from the specularly reflected components of the sound field, the phenomena of sound scattering, sound transmission and diffraction are taken into account. The environment is written entirely in C++, supports Windows, Linux and Mac OS X, and allows parallel computing on shared-memory machines, on machines with memory distributed over the network, or on a combination of both [8].
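To give a flavor of the deterministic image-source half of such engines, here is a textbook sketch (not RAVEN code) that computes the six first-order image sources of a shoebox room, together with the delay and spreading loss of each resulting specular reflection. Room dimensions and positions are arbitrary examples.

```python
# First-order image sources for a shoebox room (textbook sketch): each wall
# mirrors the real source; every image contributes a specular reflection
# with its own distance, hence its own delay and attenuation.
import numpy as np

def first_order_image_sources(src, room):
    """src = (x, y, z); room = (Lx, Ly, Lz) with walls at 0 and L per axis.
    Returns the six first-order mirror positions."""
    images = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = list(src)
            img[axis] = 2 * wall - img[axis]   # mirror across the wall plane
            images.append(tuple(img))
    return images

src, listener = (2.0, 3.0, 1.5), (6.0, 4.0, 1.7)
room = (8.0, 6.0, 3.0)

for img in first_order_image_sources(src, room):
    r = np.linalg.norm(np.subtract(listener, img))
    delay_ms = r / 343.0 * 1000.0
    level_db = 20 * np.log10(1.0 / r)   # spreading loss only; real engines
                                        # also apply wall absorption per hit
    print(f"image {img}: delay {delay_ms:5.1f} ms, level {level_db:5.1f} dB")
```

Higher reflection orders mirror the images recursively, which is why production systems such as RAVEN prune the image tree with spatial data structures and hand the late, diffuse part of the response to stochastic ray tracing.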

EVERTims (http://evertims.github.io/) is an open-source framework for the auralization of 3D models, which offers real-time feedback on how a room’s acoustics sound while the room is being designed. The framework is based on three components: a Blender plug-in, a C++ ray tracer, and a JUCE auralization engine. While a 3D model is being designed in Blender, the plug-in continuously streams the geometry and material details to the ray tracer. On the basis of this information, the ray tracer simulates how sound waves propagate through the model. The result of this simulation is then passed to the auralization engine, which reconstructs the Ambisonic sound field at any position for binaural listening. The framework takes advantage of the Blender Game Engine to support in-game auralization for interactive exploration of the designed model [9].

Finally, with the CATT-Walker module (https://www.catt.se/walker.htm), the CATT-Acoustic software offers a powerful tool for real-time auralization on a personal computer. CATT-Walker achieves this dynamic auditory rendering by continuously interpolating impulse responses previously calculated in B-format (surround coding). Listening is binaural, obtained through appropriate Ambisonic decoding. To keep latency sufficiently low, CATT uses Lake Technology’s split FIR convolution filtering technique. To reproduce the simulated acoustic environment as faithfully as possible, the modeled space must be sampled more or less densely by distributing reception points around the source and throughout the zone in which the listener moves.
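CATT-Walker’s interpolation scheme is proprietary, but the general idea can be sketched under strong simplifying assumptions: pre-compute impulse responses at grid points and cross-fade between the nearest ones as the listener walks. The toy impulse responses below are fabricated for illustration; real systems interpolate per B-format channel and align the direct sound to avoid comb-filter artifacts.

```python
# Idea behind walk-through auralization (simplified; not CATT-Walker's
# actual algorithm): pre-compute IRs at grid points, then cross-fade
# between the two nearest responses as the listener moves between them.
import numpy as np

def interpolate_ir(ir_a, ir_b, t):
    """Linear cross-fade between two equal-length IRs, t in [0, 1]."""
    t = np.clip(t, 0.0, 1.0)
    return (1.0 - t) * ir_a + t * ir_b

# Toy IRs: a direct impulse plus one reflection at different delays.
fs = 48000
ir_a, ir_b = np.zeros(fs // 2), np.zeros(fs // 2)
ir_a[0], ir_a[int(0.020 * fs)] = 1.0, 0.5   # reflection after 20 ms
ir_b[0], ir_b[int(0.035 * fs)] = 1.0, 0.5   # reflection after 35 ms

ir_mid = interpolate_ir(ir_a, ir_b, 0.5)    # halfway between the grid points
print(np.nonzero(ir_mid)[0] / fs)  # direct sound plus both reflections,
                                   # each at reduced strength
```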


4. Five virtual acoustics applications for architects and urban planners

Having described the properties of virtual acoustics and the available software that can support virtual acoustic simulation, we now propose five applications of virtual acoustics in the process of architectural and urban design and analysis.

4.1. Invisible sound objects

The first case concerns sound objects that are heard but cannot be seen (incongruence), visible objects that are seen but cannot be heard (incongruence), and sound objects that are seen in a different place from where they are heard (delocalization). Do such phenomena generate any design problems?

In contrast to experimental conditions, a listener immersed in a real environment relies on all the senses to structure a representation of the environment [10]. Attention paid to one sensory modality can also be captured by a different modality, and one modality can even influence the resulting percept more strongly than another. This raises the question of whether attentional resources are controlled by a supramodal system or by separate attention systems for the various modalities. Under focused attention, it is difficult to judge each signal (sound and vision) separately when incongruent signals occur in the same place; at least much more difficult than when the incongruent signals come from different points and attention is divided [11]. The most plausible model, on current knowledge, is a multilevel attention mechanism with a multimodal component above the sensory component. In the context of perception of the sound environment, this can be interpreted as a stronger emphasis on visible sources but, at the same time, a lower probability of identifying deviant sounds if they come from the same place as the visual stimulus.

The mechanisms of multisensory attention also have a very strong temporal component. Sound stimuli presented in temporal congruence with the appearance of a visual target make that target stand out in the scene [12]. Based on this knowledge of multisensory perception, one can at least partially answer a nineteenth-century concern of sound landscape designers: is it good to hide the sources of unwanted sounds from view? From the perspective of attention, we can conclude that when a sound is not very prominent, and therefore does not attract much attention, we can avoid noticing it by eliminating congruent visual stimuli. Similarly, a desired sound should be accompanied by a visual stimulus to ensure that it receives sufficient attention. In contrast, we must emphasize that in the case of very prominent sounds, which will attract attention anyway, the absence of a visual stimulus can come as a surprise, which in turn influences perception [13].

In this context, the virtual reality tool offers a scenario for testing the congruence between the sounds that are seen and heard. To do so, the possible options to be evaluated must be simulated (a small sketch after the list shows how these cases can be encoded for systematic testing):

  • A not very prominent and seen sound source: there is weak incongruence.

  • A not very prominent and not seen sound source: there is weak congruence.

  • A very prominent and not seen sound source: there is strong incongruence.

  • A very prominent and seen sound source: there is strong congruence, and control of the situation.

  • A desired sound source and view: there is strong congruence, and control of the situation.

  • A desired and unseen sound source: there is strong incongruence, discomfort and lack of control of the situation.

  • An unwanted and seen sound source: there is strong congruence, annoyance and control of the situation.

  • An unwanted and unseen sound source: there is strong incongruence, discomfort and lack of control of the situation.
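The checklist above can be encoded as a small decision table so that every source in a virtual scene is tagged and its expected outcome looked up before auralizing the variants. This is our own illustrative encoding; the field names come from no particular tool.

```python
# The checklist above as a decision table: tag each source in the scene and
# look up the expected perceptual outcome to decide which variants to test.
from dataclasses import dataclass

@dataclass
class SoundSource:
    name: str
    prominent: bool   # salient enough to attract attention?
    visible: bool     # is the visual counterpart in view?
    desired: bool     # wanted (fountain) or unwanted (traffic)?

def expected_outcome(s: SoundSource) -> str:
    if not s.prominent:
        return "weak incongruence" if s.visible else "weak congruence"
    if s.visible:
        base = "strong congruence, control of the situation"
        return base if s.desired else base + ", but annoyance"
    return "strong incongruence, discomfort, lack of control"

for src in [SoundSource("fountain", True, True, True),
            SoundSource("hidden traffic", True, False, False)]:
    print(f"{src.name}: {expected_outcome(src)}")
```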

4.2. The influence of materials

Architects and urban planners are constantly concerned about the visual impact that finishes will have on the built work. Although the visual impact affects the perception of space [14] and can affect the mood of the users [14, 15], the sound impact has a no less important effect on the perception of space [16, 17] and on the users’ mood [18].

We also know that visual perception in virtual reality environments depends on the geometric representation of the space and on the material representation applied to that geometry. If we focus on the representation of materials, their treatment and properties are paramount: base color, glossiness, roughness, normal or bump mapping, and displacement mapping are some of the techniques that virtual reality software has developed to simulate the materials of the represented reality.

Even though such fine detailing might also seem necessary for acoustic virtual reality, there is a big difference between acoustic and visual materials: small details that matter in visual perception are negligible in acoustic perception. For this reason, a flat wall behaves acoustically much like a rough wall at low frequencies [19]. However, some phenomena linked to materiality, such as acoustic porosity and absorption, are related to visual glossiness and roughness and could therefore have a big influence both visually and acoustically. In this context, a faithful representation of both the visual and the acoustical properties of materials would make virtual reality a convincing way of representing the environments designed by architects and urban planners.
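One concrete way in which materials enter an acoustic model is through their absorption coefficients. As a back-of-the-envelope illustration, Sabine’s formula RT60 = 0.161·V/A estimates how a change of finish alters reverberation time; the coefficients below are rough mid-frequency textbook values, used only for illustration.

```python
# How a change of finish alters reverberation: Sabine's formula
# RT60 = 0.161 * V / A, where A = sum(alpha_i * S_i) is the equivalent
# absorption area. Absorption coefficients are approximate mid-frequency
# textbook values, for illustration only.
def sabine_rt60(volume_m3, surfaces):
    """surfaces: list of (area_m2, absorption_coefficient) pairs."""
    A = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / A

# A 10 x 8 x 3 m room finished in hard plaster everywhere (alpha ~ 0.03)...
room_volume = 10 * 8 * 3
total_surface = 10*8*2 + 10*3*2 + 8*3*2            # floor+ceiling+walls, m^2
print(sabine_rt60(room_volume, [(total_surface, 0.03)]))   # ~4.8 s: very live

# ...versus the same room with an absorbent ceiling (alpha ~ 0.70).
ceiling = 10 * 8
rest = total_surface - ceiling
print(sabine_rt60(room_volume, [(ceiling, 0.70), (rest, 0.03)]))  # ~0.6 s
```

The same swap of finish that changes a rendered image’s glossiness thus changes the simulated reverberation by almost an order of magnitude, which is why material assignments should drive both the visual and the acoustic model.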

4.3. Concavity and convexity in architectural forms

There has always been debate among architects about the suitability of curved versus straight forms. At the two extremes, these tendencies are the purest rationalism, with its rectangular forms, straight lines and repeated, constant rhythms [20]; and organicism, whose formal referents lie in nature [21, 22, 23]. Experimentally, curvilinear, sinuous, parabolic or circular shapes have been shown to affect neural activity more strongly than rectangular or quadrangular ones [24]. The visual influence of this type of geometry, in architectural interiors as well as in urban exteriors, is matched by its importance for the acoustics of these spaces. Graphical ray-tracing acoustics [25, 26] shows how concave shapes reinforce sound at a point or in a concentrated area, while convex shapes scatter sound in multiple directions.
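The focusing effect of concave geometry can be demonstrated with a few lines of 2D ray tracing. In this sketch (our own illustration, with arbitrary dimensions), parallel rays reflected off a concave circular wall of radius R all cross the axis near the paraxial focus at R/2, which is exactly the concentration described above.

```python
# 2D demonstration of concave focusing: parallel rays reflected off a
# concave circular wall of radius R cross the axis near R/2, the paraxial
# focus, which is why curved walls can concentrate whispers or noise.
import numpy as np

R = 10.0                                  # wall radius, centred at the origin
for y0 in (-3.0, -1.5, 1.5, 3.0):         # incoming rays parallel to the x-axis
    hit = np.array([np.sqrt(R**2 - y0**2), y0])  # hit point on x^2 + y^2 = R^2
    n = -hit / R                          # inward unit normal at the hit point
    d = np.array([1.0, 0.0])              # incoming ray direction
    r = d - 2 * np.dot(d, n) * n          # specular reflection of d about n
    t = -hit[1] / r[1]                    # parameter where the ray meets y = 0
    x_cross = (hit + t * r)[0]
    print(f"ray at y = {y0:+.1f} crosses the axis at x = {x_cross:.2f}")
# Output clusters around x = 5.0 = R/2: the reflected rays converge.
```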

This affects the way sound is perceived in interiors and can produce undesired effects. A clear example of this phenomenon can be experienced at specific seats in Coderch’s new building for the Barcelona School of Architecture: the curved walls concentrate students’ whispers in certain areas, reinforcing the perceived noise at those points. The same effect can occur in an urban environment when a curved arrangement of walls concentrates the sound rays in one area.

Acoustic virtual reality should treat this effect as a matter of design. For this purpose, a rough approximation of the architectural geometry is not enough: more detail in the representation of these nuances would lead to a more realistic and credible representation of reality with this tool.

4.4. The sound object as a reason for the architectural project

This case study deals with the sound object not as a pretext for comfort in architecture (through congruence with the visual, acoustic comfort or sound impact), but as a topic of design in its own right. The design of the sound object must meet all the requirements that the solution to a specific problem demands, as in any architectural design process. Architectural design is a creative process, but its additional emphasis on the problem-definition phase [27] places it in a special category. When the term “creative” is used in its most general sense, to describe a process in which an agent interacts with the material and the environment to form a new synthesis of essential novelty, it embraces the entire design process, but is too general to be particularly useful. When used in a more specific sense, relating to the pure arts and sciences, it usually implies a more self-original and self-motivated input, deriving from sensitive perception of the chosen phenomenon in art, and from its critical observation in science. It is, then, a creative process of synthesis, preceded by analysis and followed by validation (especially in science), which ends in a work of art or in a validated hypothesis or theory [28]. This process of synthesis-analysis-synthesis, which summarizes all attitudes of observation of reality [29], is the same as occurs in auralization: synthesis of the sound emitted by the source, whose sound waves leave from a single point; analysis, or separation into parts, of the different types of waves that bounce off, pass through or diffract around the elements of the built environment; and a final synthesis, collected at a single point, which we have called the “sound object”. Below is a series of examples of acoustic targets, extracted from [13]:

  • Moving water or sounds of nature should be the dominant sound heard.

  • Only the sound of nature should be heard.

  • A specific sound should be clearly audible in some areas.

  • Mostly (nonmechanical, nonamplified) sounds made by people should be heard.

  • The sounds of people cannot be heard.

  • Suitable for hearing unamplified/amplified speech (or music).

  • Acoustic sculpture/installation sounds should be clearly audible.

  • Sounds conveying a city’s vitality should be the dominant sounds heard.

  • Sounds that convey the identity of a place should be the dominant sounds heard.

However, the process of designing the sound object, like any architectural design process, requires more than a solution that merely meets the functional requirements. In addition to fulfilling the requirements satisfactorily, the solution should integrate with the pre-existing environment and have its own entity as a newly created element. Here are some possible guidelines for achieving these goals:

  • A solution integrated with the pre-existing environment: it may contain vernacular sounds of the area or neighboring territory, replicating or imitating them. It can present contrasting sounds, highlighting those of the pre-existing environment. It can modulate pre-existing sounds, reinforcing some frequencies or switching off others.

  • A solution that has its own entity as a newly created element: it can present a characteristic rhythm, whether regular or irregular. It can present a characteristic tonic, either monophonic or polyphonic. It can present characteristic timbres or textures that are produced by concrete materials.


5. Artificial intelligence and acoustic virtual reality

Even though the framework depicted here may seem easy to implement, the reality is rather different. In 2003, Kang et al. highlighted the introduction of new EU noise policies [30] and noted that noise-mapping software and techniques were being widely used in European cities [31]. Nevertheless, they observed that these techniques provide an overall picture of macro-scale urban areas, whereas the micro scale, for example an urban street or a square, could be studied more effectively with detailed acoustic simulation techniques. In addition, applications that predict and measure micro-scale environments, such as auralization techniques for indoor spaces [32], are still not sufficiently user-friendly, and their computation times are rather long. Kang et al. presented two computer models, based on the radiosity and image-source methods, in an attempt to offer urban designers an interface that could be useful at the design stage, using simple formulae to estimate sound propagation in micro-scale urban areas. However, a definitive answer has still not been found.

Artificial intelligence can be applied to virtual reality in many ways. In the user interface, we need systems that behave rationally, e.g. reflect user movements as accurately as possible. Content production needs tools for optimizing the layout of virtual worlds, and virtual world simulation needs methods for approximating the behavior of the environment [33]. This last challenge strongly relates acoustic virtual reality to artificial intelligence and might be the key factor in progress in the simulation of both closed spaces and open environments. Some studies have already brought artificial intelligence into the field of acoustics. In particular, new methods identify room acoustic properties using evolutionary algorithms (EA) [34]. Addressing the problem of learning from real acoustics, Kendrick et al. [35] developed a new method employing machine learning techniques and a modified low-frequency envelope spectrum estimator to estimate important room acoustic parameters, including reverberation time (RT) and early decay time (EDT), from received music signals. The field known as machine audition [36] therefore offers promising methods that can complement and enhance classical acoustics. More specifically, architects and urban planners always need comprehensive visualizations of the reality that they are designing, so approximating the behavior of the environment on which they are working is a major concern in their representations. For this reason, artificial intelligence could be a good way to solve some problems that remain open today. These questions suggest possible future paths for investigation, which we can sum up in the two following points:

  • The wide variety of cases that an architect deals with in everyday practice requires an easy method for acoustic representation. Otherwise, decisions are taken only by approximation, or the effort of representation is too great for a single studio. For this reason, a database of urban spaces with their acoustic features defined would be useful. The acoustic features could serve as the hidden layer in a neural network framework for rapid prediction of the behavior of an environment under future modifications.

  • Measuring the acoustic properties of an architectural space requires highly specialized equipment and special conditions, and not every architectural studio has the opportunity or the means to make such measurements. Research into extracting acoustic information from everyday recordings could help not only to analyze the current environment, but also to predict the behavior of new architectural designs (the sketch below shows the classical baseline such methods build on).
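As context for the learning-based estimators mentioned above, the classical way to obtain RT and EDT from a measured impulse response is Schroeder backward integration followed by a line fit to the decay curve; machine-audition approaches such as [35] aim to approximate these values from ordinary recordings instead of controlled measurements. A minimal sketch, using a synthetic impulse response:

```python
# Classical baseline behind learning-based estimators: Schroeder backward
# integration of an impulse response yields the energy decay curve (EDC),
# from which RT60 and EDT are read off by fitting a line to the decay.
import numpy as np

def decay_curve_db(ir):
    """Energy decay curve (Schroeder integral) in dB, normalized to 0 dB."""
    energy = np.cumsum(ir[::-1] ** 2)[::-1]
    return 10 * np.log10(energy / energy[0])

def rt_from_edc(edc_db, fs, lo_db, hi_db, extrapolate_to=60.0):
    """Fit a line to the EDC between lo_db and hi_db (e.g. -5 and -25 for
    T20) and extrapolate the slope to a 60 dB decay."""
    idx = np.where((edc_db <= lo_db) & (edc_db >= hi_db))[0]
    t = idx / fs
    slope, _ = np.polyfit(t, edc_db[idx], 1)   # dB per second (negative)
    return -extrapolate_to / slope

# Synthetic exponentially decaying IR with a true RT60 of 1.2 s.
fs, rt_true = 48000, 1.2
t = np.arange(int(fs * rt_true)) / fs
ir = np.random.randn(t.size) * 10 ** (-3 * t / rt_true)   # -60 dB at rt_true

edc = decay_curve_db(ir)
print("T20 estimate:", rt_from_edc(edc, fs, -5, -25))   # ~1.2 s
print("EDT estimate:", rt_from_edc(edc, fs, 0, -10))    # early decay time
```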


6. Conclusions

Acoustic architectural and urban design is an area that still needs to be addressed. Although a considerable amount of research has been done on architectural acoustics, urban soundscapes and noise treatment, few real applications of these theories can be found in built projects. This chapter has linked Schaeffer’s theory of sound objects to acoustic virtual reality, to describe a potential way of understanding the role of acoustic virtual reality in the field of architecture and urbanism. The main tools that could help practitioners have been presented, and five specific applications have been proposed. However, much more work is needed to bring the theory into daily practice, and all efforts at carrying scientific research into everyday activity are welcome. The direction of artificial intelligence seems a plausible one for the future. Sound design is an old concern, and it remains as elusive and unpredictable today as it can be when it turns into music.


Acknowledgments

This research was supported by the National Programme of Research, Development and Innovation aimed at Society Challenges, BIA2016-77464-C2-1-R and BIA2016-77464-C2-2-R, of the National Plan for Scientific Research, Development and Technological Innovation 2013-2016, Government of Spain, titled “Gamificación para la enseñanza del diseño urbano y la integración en ella de la participación ciudadana (EduGAME4CITY)” and “Diseño Gamificado de visualización 3D con sistemas de realidad virtual para el estudio de la mejora de competencias motivacionales, sociales y espaciales del usuario (EduGAME4CITY)”.

References

  1. Augoyard JF, Torgue H. Sonic Experience: A Guide to Everyday Sounds. Montreal: McGill-Queen’s University Press; 2010
  2. Schaeffer P. Treatise on Musical Objects: An Essay Across Disciplines. Oakland: University of California Press; 2017 (first published 1966)
  3. Vorländer M, Schröder D, Pelzer S, Wefers F. Virtual reality for architectural acoustics. Journal of Building Performance Simulation. 2014;8(1):15-25
  4. Llorca J, Redondo E, Valls F, Fonseca D, Villagrasa S. Acoustic Filter. Cham: Springer; 2017. pp. 22-33
  5. Rehren C, Cárdenas J. Motores de Audio para Video Juegos. Síntesis Tecnológica. 2011;4:81-99
  6. Schröder D, Lentz T. Real-time processing of image sources using binary space partitioning. Journal of the Audio Engineering Society. 2006;54(7/8):604-619
  7. Schröder D, Dross P, Vorländer M. A fast reverberation estimator for virtual environments. In: Proceedings of the AES 30th International Conference on Intelligent Audio Environments; March 15-17, 2007; Saariselkä, Finland. New York, NY: Audio Engineering Society; 2007
  8. Schröder D, Vorländer M. RAVEN: A real-time framework for the auralization of interactive virtual environments. In: Proceedings of Forum Acusticum; 2011; Aalborg, Denmark. ISBN: 978-84-694-1520-7
  9. Poirier-Quinot D, Noisternig M, Katz BFG. EVERTims: Open source framework for real-time auralization in architectural acoustics and virtual reality. In: Proceedings of the 20th International Conference on Digital Audio Effects (DAFx-17); September 2017; Edinburgh, United Kingdom
  10. Driver J, Spence C. Attention and the crossmodal construction of space. Trends in Cognitive Sciences. 1998;2(7):254-262
  11. Santangelo V, Fagioli S, Macaluso E. The costs of monitoring simultaneously two sensory modalities decrease when dividing attention in space. NeuroImage. 2010;49(3):2717-2727
  12. Talsma D, Senkowski D, Soto-Faraco S, Woldorff MG. The multifaceted interplay between attention and multisensory integration. Trends in Cognitive Sciences. 2010;14(9):400-410
  13. Kang J, Schulte-Fortkamp B. Soundscape and the Built Environment. Boca Raton: CRC Press, Taylor & Francis Group; 2016
  14. Filbrich L, Alamia A, Blandiaux S, Burns S, Legrain V. Shaping visual space perception through bodily sensations: Testing the impact of nociceptive stimuli on visual perception in peripersonal space with temporal order judgments. PLoS One. 2017;12(8):e0182634
  15. Vartanian O et al. Impact of contour on aesthetic judgments and approach-avoidance decisions in architecture. Proceedings of the National Academy of Sciences. 2013;110(Suppl. 2):10446-10453
  16. Al-barrak L, Kanjo E, Younis EMG. NeuroPlace: Categorizing urban places according to mental states. PLoS One. 2017;12(9):e0183890
  17. Plack CJ. The Sense of Hearing. 2nd ed. London: Taylor & Francis Group; 2014
  18. Zhang Y, Kang J, Kang J. Effects of soundscape on the environmental restoration in urban natural environments. Noise & Health. 2017;19(87):65-72
  19. Kuttruff H. Acoustics. New York: Taylor & Francis; 2004
  20. Frampton K. Modern Architecture: A Critical History. Oxford: Oxford University Press; 1980
  21. Mumford M. Form follows nature: The origins of American organic architecture. Journal of Architectural Education. 1989;42(3):26-37
  22. Krause LR. Frank Lloyd Wright: Organic architecture for the 21st century. Journal of Architectural Education. 2011;65(1):82-84
  23. Dennis JM, Wenneker LB. Ornamentation and the organic architecture of Frank Lloyd Wright. Art Journal. 1965;25(1):2-14
  24. Banaei M, Hatami J, Yazdanfar A, Gramann K. Walking through architectural spaces: The impact of interior forms on human brain dynamics. Frontiers in Human Neuroscience. 2017;11:477
  25. Officer CB. Introduction to the Theory of Sound Transmission: With Application to the Ocean. New York: McGraw-Hill; 1958
  26. Kinsler LE. Fundamentals of Acoustics. New York: Wiley; 2000
  27. Kneller GF. The Art and Science of Creativity. New York: Holt, Rinehart and Winston; 1965
  28. Herbert G. The architectural design process. British Journal of Aesthetics. 1966;6(2):152
  29. Condillac. La lógica o los primeros elementos del arte de pensar. Imprenta d. Barcelona; 1827
  30. EUR-Lex 32002L0049 (EU Environmental Noise Directive) [Online]. Available from: http://eur-lex.europa.eu/legal-content/GA/TXT/?qid=1399875039336&uri=CELEX%3A32002L0049 [Accessed: November 9, 2017]
  31. Welcome to Schal [Online]. Available from: http://www.tpsconsult.co.uk/schal.aspx [Accessed: November 9, 2017]
  32. Jing Y, Xiang N. A modified diffusion equation for room-acoustic predication. The Journal of the Acoustical Society of America. 2007;121(6):3284-3287
  33. Laukkanen S, Karanta I, Kotovirta V, Markkanen J, Rönkkö J. Adding intelligence to virtual reality. In: ECAI 2004: Proceedings of the 16th European Conference on Artificial Intelligence; 2004. p. 1136
  34. Poteralski A, Szczepanik M, Ptaszny J, Kuś W, Burczyński T. Hybrid artificial immune system in identification of room acoustic properties. Inverse Problems in Science and Engineering. 2013;21(6):957-967
  35. Kendrick P, Cox TJ, Zhang Y, Chambers JA, Li FF. Room acoustic parameter extraction from music signals. In: Proceedings of the 2006 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); Toulouse, France. Vol. 5. pp. V-801-V-804
  36. Wang W. Machine Audition: Principles, Algorithms, and Systems. Hershey: IGI Global; 2011
