Open access peer-reviewed chapter

The Evolution of Translation Theory: From Prescriptive to Descriptive Approaches

Written By

Erick Nkululeko Nzimande

Submitted: 01 June 2023 Reviewed: 01 June 2023 Published: 02 October 2023

DOI: 10.5772/intechopen.1002611


Abstract

Translation theory has undergone several paradigm shifts since the inception of translation studies as an empirical discipline (Holmes 2000). These shifts can be ascribed to the influence of other disciplines, such as literary studies, post-colonial studies and cultural studies. The aim of this chapter is to describe the major developments that have taken place in translation theory, from the prescriptive approaches of the 1960s to the more descriptive paradigms of the 2000s. The chapter commences with the word-for-word and sense-for-sense approaches of the 1950s and 1960s. It then proceeds to the equivalence-based approach of the 1960s, the functional approach which dawned in the 1970s and the polysystem theory which also emerged in the 1970s. This is followed by an account of the descriptive translation studies (DTS) model, which emerged in the 1980s, and then the cultural turn of the 1990s, which encapsulates several approaches, including patronage and translation, gender and translation and post-colonial translation theory. The chapter concludes with a description of the corpus-based translation studies (CTS) approach, which dawned in the late 1990s.

Keywords

  • translation theory
  • descriptive translation studies
  • functional approach
  • polysystem theory
  • corpus-based translation studies

1. Introduction

Various disciplines have shown interest in translation studies as an empirical field [1]. These include disciplines that are closely related to the field, such as literary studies, linguistics and linguistic philosophy, as well as more remote ones such as cultural studies, mathematics, information theory and post-colonial studies. These disciplines have introduced new paradigms and methodologies into translation studies in an attempt to advance it as a developing discipline [1]. This is what sparked the many paradigm shifts in translation theory during the period from the 1950s to the 2000s.

This chapter aims to provide an account of the major developments that have occurred in translation theory, from the 1950s and 1960s, in which prescriptive approaches reigned, to the 1970s–2000s, in which more descriptive paradigms dawned. The chapter commences with the earlier approaches, namely the word-for-word and sense-for-sense approaches of the 1950s and 1960s. A description of the equivalence-based approach, which emerged in the 1960s, follows, and then the functional approach of the 1970s. The chapter proceeds to the polysystem theory, which also reigned in the 1970s, and then to descriptive translation studies (DTS), which dawned in the 1980s. A discussion of the cultural turn of the 1990s is provided next, and a description of the corpus-based translation studies (CTS) approach of the late 1990s concludes the chapter.


2. The word-for-word versus sense-for-sense approach

The period of the 1950s–1960s in the West was characterised by a heated debate on whether to employ the word-for-word or the sense-for-sense approach when translating a text [2]. Even today, remnants of this controversy surface when translators render texts of various types. The distinction between the word-for-word and sense-for-sense approaches was first drawn by Cicero and Horace in the first century BC, and later taken up by St Jerome [2]. Cicero explains the strategy he employed when rendering Greek oratory into Latin, and he clearly indicates that he used the sense-for-sense approach.

Jerome [in 2] also indicates that he favoured the sense-for-sense paradigm when translating various text types from Greek into Latin. St Jerome is well known for his translations from Greek; he was commissioned by Pope Damasus in 382 AD to revise the earlier Latin translations of the New Testament. The aim of the revision was to produce an official, standard Latin translation that could be used in churches [2]. His approach is best explicated in a letter he wrote in 395 AD to his friend Pammachius:

Now I not only admit but freely announce that in translating from the Greek — except of course in the case of the holy scripture, where even the syntax contains a mystery — I render not word-for-word, but sense-for-sense.

[Jerome quoted in 3, p. 3]

From the above quote, it is apparent that Jerome favoured the sense-for-sense approach, with the exception of Biblical texts. He contends that the sense-for-sense paradigm allows the message contained in the ST to be transferred successfully, whereas the word-for-word approach results in translations that are stilted and may carry a distorted message [2]. Concerning Biblical texts, however, Jerome [in 2] holds that the word-for-word approach becomes useful, as such texts require a rendering that preserves their form. He foregrounds the point that the syntax and message of the holy scriptures are sometimes obscure, and that using the sense-for-sense approach may produce translations with distorted meaning. In the past, altering the meaning of sacred texts was considered a serious offence that could even warrant the death sentence [2, 3].

Most early translators agreed with Jerome that the word-for-word approach comes in handy when rendering Biblical texts. Biblical texts were seen as inspired, and some were even believed to have been written by God himself. Altering the word of God was considered a serious offence and was forbidden [3]. The consequences could be severe: the French humanist Dolet, for instance, was charged with blasphemy for adding the words ‘rien du tout’ (meaning ‘nothing at all’) when rendering Plato’s dialogues, and was consequently sentenced to death [2].

Similar arguments concerning the translation of Biblical texts also surfaced in other parts of the world. For example, Hung and Pollard [in 2] proffer an account of the strategies employed by Chinese translators when translating Buddhist sutras from Sanskrit. Their account is divided into three phases: the first phase (the Eastern Han Dynasty) was marked by the use of the word-for-word approach; the second phase (the Jin Dynasty and the Northern and Southern Dynasties) was characterised by the sense-for-sense approach; and in the third phase, a combination of both approaches was observed [2].

The controversy was also observed elsewhere, for example in the Arab world, England, France and Germany [2]. In England, Dryden and Tytler were the most prominent scholars; Dolet was the most popular scholar in France, and in Germany it was Schleiermacher [2]. However, their arguments cannot all be exhausted within the scope of this chapter.

In the case of Africa, no trace of the word-for-word versus sense-for-sense controversy can be found. This can be ascribed to two factors: (a) Translation Studies on the African continent is still very new, and (b) the culture of writing was historically foreign to Africa, as everything in the past was transmitted by word of mouth [4]. When European colonialists reached the African continent, they held the view that Africans were uncivilised and had to be ‘trained’ in everything so they could be brought up to European standards. Translation, therefore, did not exist during this period, as it was only Europeans who were imparting their languages and culture to Africans [5].

However, Danquah [in 4] indicates that the word-for-word versus sense-for-sense debate might have existed in Ghana, where Ashanti professional linguists interpreted and translated for the king. He argues that the linguists had to ‘perfect’ the speech of the king by adding where necessary and deleting where they saw fit. This shows that they favoured the sense-for-sense approach, as adding new text and omitting unnecessary text are part of this approach.

Several studies have been conducted to reveal how the word-for-word and sense-for-sense approaches have been applied, some of which have already been highlighted in the foregoing discussion. It is important to note that these approaches differ from the other approaches discussed in the following sections in that they are more like translation strategies than tools for explicating translation phenomena. Researchers have therefore investigated them as strategies and not as theories per se. The study conducted by Hung and Pollard [in 2], which investigated the approach employed by Chinese translators when rendering Buddhist sutras from Sanskrit, is among the studies already discussed. More recent studies on the two approaches have also been conducted. A classic example is the study by Baker and Hanna [6], which investigated the translation strategies employed by translators in the Arab world in the period 750–1250 AD. Their study showed that the translators of this period initially employed the word-for-word approach and later changed to the sense-for-sense paradigm, after realising that the former was producing unfruitful results. Other researchers have opted for more modern terminology to explore the same trends. For instance, Madadzhe and Mashamba [7] investigated the translation of medical terms from English into Tshivenda (one of the official languages of South Africa). The researchers retained word-for-word as an approach or strategy and used ‘adaptation’ and ‘idiomatic translation’ to refer to a sense-for-sense kind of translation. The results show that the use of adaptation and idiomatic translation produces translations that are comprehensible to the target reader. The researchers therefore recommend these strategies, which entails that they favour the sense-for-sense approach.

The above studies clearly indicate that the word-for-word and sense-for-sense approaches have been useful tools for researchers in explicating translation phenomena.

More systematic approaches dawned in the second half of the twentieth century in an endeavour to revisit the word-for-word and sense-for-sense poles. It is in this period that the equivalence-based approach emerged, and the following section aims to provide a discussion of this approach.


3. The equivalence-based approach

In the 1960s, some scholars began to scrutinise the word-for-word and sense-for-sense concepts [2]. Several scholars consequently took an interest in the concept of ‘equivalence’ in an attempt to develop a more systematic paradigm for translation studies, and this marked the emergence of the so-called ‘equivalence-based approach’. Its primary focus was on correspondence between the ST and the TT in terms of form (style) and content (meaning). The most controversial issue during the period of this approach concerned meaning and equivalence [2]. One of the prominent scholars in this debate was Jakobson [8], who was famous for his contention that no complete equivalence can be established between words (code-units). Equivalence between the ST and TT should rather focus on the message than on individual words [8]. Jakobson [8] further posits that scholars who investigate the controversy of meaning and equivalence focus on code-units and structure and do not consider the message that should be transferred from the ST to the TT.

Eugene Nida also did an incredible amount of work on the concept of equivalence. He became most popular for introducing two types of equivalence, namely formal equivalence and dynamic equivalence. These two concepts clearly indicate that Nida was moving away from the word-for-word and sense-for-sense concepts [2]. In formal equivalence, translators strive to establish equivalence in content and form in a more literal manner [9]. This is, in a way, related to the word-for-word approach. However, Nida [9] was well aware that such a strategy would produce a translation that is highly stilted and unintelligible to the target readership, necessitating the use of footnotes if the translation were to be comprehensible to the target readers. Dynamic equivalence, on the other hand, strives for naturalness in translation and also considers the culture within which the translation functions [9]. This type of equivalence is therefore more connected with the sense-for-sense approach.

Nida [9] further introduced ‘the principle of equivalent effect’. According to this principle, a good translation is expected to induce in its readers the same response that the ST produced in its original readers.

Later, Nida reformulated his concepts of equivalence, renaming formal equivalence ‘formal correspondence’ and dynamic equivalence ‘functional equivalence’ [10]. However, this modification had no effect on the principles and meaning of the two concepts.

Several researchers have used the equivalence-based approach in an attempt to understand and explain translation phenomena. For instance, Nzimande [11] conducted a study investigating the use, in the English translation of the isiZulu novel ‘UMamazane’, of the strategies Mona Baker devised to address the problem of non-equivalence in translation. The results showed that most of Baker’s strategies were very effective in helping the translator address the problem of non-equivalence. Furthermore, Moropa and Kruger [12] evaluated the strategies used in the English translation of isiXhosa (one of the official languages of South Africa) culture-specific terms. They found several mistranslations and corrected them using strategies such as the cultural equivalent, functional equivalent and descriptive equivalent, which various scholars have devised to address the challenge of non-equivalence [12]. The equivalence-based approach thus played a significant role in these two studies, both in understanding the problem of non-equivalence and in identifying the strategies that can be employed to mitigate it.

Although the equivalence-based approach made a tremendous contribution to the field of translation studies, it is not without shortcomings. For instance, Nida has been heavily criticised for confining equivalence to word level, and other scholars have raised the concern that his principle of equivalent effect cannot be measured [2]. Furthermore, Kenny [13] has questioned the definition of equivalence, arguing that it is circular (translation is used to define equivalence and equivalence, in turn, is used to define translation). Moreover, Bassnett [14] criticises the approach for being limited to the word and grammatical level: many factors influence the translation process beyond the mere replacement of lexical units between the ST and the TT.

In the 1970s, another paradigm shift took place in translation theory, and the ‘functional approach’ emerged. This new approach is discussed in the following section.


4. The functional approach

The 1970s marked the dawn of a new paradigm in translation theory, namely the ‘functional approach’. Its most notable proponent was Vermeer, who was best known for developing the ‘skopos theory’. The term ‘skopos’ originates from Greek and means ‘purpose’, ‘function’ or ‘aim’ [15, 16]. Proponents of the skopos theory contend that it is the function of the target text that determines the translation strategy to be employed by the translator; this is why skopos theory is also called the functional approach [15]. Proponents further argue that translation is a form of action and that every action has a purpose attached to it [16]. Vermeer collaborated with Reiss [16] to formulate the so-called ‘skopos rule’, which stipulates that ‘an action is underpinned by its purpose, that is, it is a function of its purpose or skopos’ ([16], p. 90). According to them, this is the most prominent rule governing the translation process. Translation as an action then produces what Vermeer calls the ‘translatum’, which is the translated text [15].

As indicated in the foregoing discussion, it is the purpose or function of the target text that determines the translation strategy to be adopted. This entails that the first step in the translation process is to determine the function of the target text before commencing with the translation task [16]. This introduces the concept of the ‘commission’ or ‘translation brief’. Vermeer ([15], p. 229) defines a commission as ‘the instruction, given by oneself or by someone else, to carry out a given action – here: to translate’. Ideally, the purpose or skopos is clearly stated in the commission. However, the purpose can often be determined from the translation brief even if it is not explicitly stated; in that case, we speak of an implied or implicit skopos [15].

Moreover, the skopos or commission must conform to the norms of the target language and culture. This then indicates that the functional approach introduces the concept of ‘culture’ into translation. Therefore, there is a marked distinction between the equivalence-based approach and the functional model since the former perceives the source text as paramount, whereas the latter is target text-oriented [17]. The functional model considers language to be part of culture and the function or purpose the translation fulfils in the target culture takes precedence [15, 17].

It can therefore be argued that the emergence of the functional model marks an official shift from the source text-oriented approaches that reigned in the 1950s and 1960s to target text-oriented approaches, which conceptualise translation as a process that takes place within the culture of the target language. Approaches prior to the 1970s conceived of translation as a retrospective process; the functional model, on the contrary, considers it a prospective process [17].

Nevertheless, proponents of the functional model posit that translators should not disregard the ST altogether [18]. The skopos should be the determining factor as to how the ST should be rendered [2]. Nord [18] introduces the concept of ‘loyalty’, which seeks to explicate the kind of relationship that should exist between the ST and the TT. Loyalty binds the translator to both the ST and the TT. It endeavours to ensure that the purpose of the TT does not completely depart from the intentions of the ST author [2]. Loyalty, therefore, ‘limits the range of justifiable target-text functions for one particular source text and raises the need for a negotiation of the translation assignment between translators and their clients’ ([18], p. 126).

Research employing the functional approach to interpret findings is very limited. One study was conducted by Wallmach and Kruger [19], who explored the translation strategies used by students when translating a text in an examination situation. The source text was in English, and students had to translate it into their home/first languages. The findings demonstrated that the problem of a lack of terminology in African languages can be addressed if a more functional approach is used in explaining the notion of translation, rather than the prescriptive approach, which dictates how translation should be done [19]. Another study was conducted by Moropa [20], who examined the role played by the initiator/commissioner in the translation of the novel ‘The Prophet’ into the Indigenous languages of South Africa. The findings revealed that the initiator and the translation brief he provided to the translators played a vital role in ensuring that the novel was translated efficiently into the Indigenous languages [20]. These two studies show that the functional approach has played an instrumental role in investigating translated texts.

The immense contribution of the functional approach to translation theory cannot be denied. Its most notable contributions are the introduction of the concept of culture into translation and the consideration of the norms and conventions of the target language in the translation process. Nonetheless, the model has also been criticised. The main criticism raised is that not all texts have a purpose; literary and religious texts, for instance, have no clearly defined purpose [15]. However, Vermeer [15] contends that all actions somehow possess an aim; if the action to be performed is seen as having no purpose, then it is technically not an action [15]. Literary texts may not possess an explicitly stated purpose, but on closer scrutiny they can be seen to possess an implicit purpose that the translation serves [18]. Furthermore, the approach has been criticised for introducing unnecessary jargon, such as ‘translatum’, when terms such as ‘target text’ already exist and could be used instead [2]. Finally, the approach has been criticised for its overreliance on the purpose of the target text and its complete disregard of the nature of the source text [2].

The functional approach was not the only paradigm that reigned during the 1970s; another approach, the ‘polysystem theory’, also existed. This paradigm is explained in the following section.


5. The polysystem theory

The 1970s were marked by the co-existence of two approaches, namely the functional approach and the polysystem theory. The polysystem theory was founded by Even-Zohar, who built his theory on the work of the Russian formalists and Czech structuralists of the 1930s and 1940s [2]. Even-Zohar engages critically with functionalism, which suggests that his model developed alongside the functional approach. He maintains, for example, that ‘functionalism has profoundly altered both structures and methods, questions and answers, of every discipline into which it was introduced’ ([21], p. 10).

Proponents of the polysystem theory are of the view that literature is part of the social, cultural, historical and literary system. These function together to form what is called the ‘polysystem’ [2]. Even-Zohar defines polysystem as:

a multiple system, a system of various systems, which intersect with each other and partly overlap, using concurrently different options, yet functioning as one structured whole, whose members are interdependent.

([21], p. 11)

The polysystem is marked by a high level of hierarchy among its sub-systems, in which some sub-systems occupy the peripheral position while others enjoy the central position [21]. The sub-systems are in continuous competition to occupy the central position [21]. As a result, some sub-systems find their way from the periphery into the centre, while others are moved from the centre into the periphery [21]. The peripheral position is known as the ‘secondary position’, whereas the central one is referred to as the ‘primary position’ [21].

Normally, it is the innovatory systems that are found in the centre of the polysystem, and these systems actively shape it [2]. Innovatory systems or strata continually bring a new repertoire into the polysystem (i.e., the models, laws and elements responsible for producing new texts) [21, 22]. The secondary position, on the other hand, is marked by a high level of conservatism, in which old models, laws and elements are constantly reused [2, 22]. Moreover, multiple centres and peripheries exist within the polysystem [21]. It is possible for a system to move from the periphery of one system to the periphery of an adjacent system, and then make its way into the centre of that system [21].

According to Even-Zohar [22], translated literature usually occupies the peripheral position within the polysystem. This entails that it plays a conservative role and does not participate in bringing new repertoires into the polysystem. Nevertheless, translated literature may progress from the periphery into the centre, and when occupying the primary position it can contribute to bringing in new laws, models and elements [22].

There are three cases in which translated literature can find itself in the central position of the polysystem. Firstly, it may make its way into the centre when the literature is still young and being developed [22]; in this case, translation serves as a tool through which models, elements, principles, etc. are brought into the polysystem [2]. Secondly, it may find its way into the primary position when the literature is ‘weak’, ‘occupies the peripheral position’ or ‘both’. Literature in the secondary position usually lacks resources and vital repertoires, and it is through translation that more resources and repertoires can be brought into the literature [2, 22]. Thirdly, translated literature may make its way into the centre when there is a literary vacuum or a turning point in the literature [22]. It may be the case that the existing sub-systems within the polysystem can no longer maintain its existence, and translation is then used to introduce new and vibrant sub-systems to keep the polysystem going [22]. These three instances are presented diagrammatically in Figure 1 below.

Figure 1.

Conditions under which translated literature assumes the central position (adapted from [2], p. 173).

Nevertheless, Even-Zohar [22] indicates that translated literature is also stratified within the polysystem. This entails that it occupies different positions in the polysystem. Some strata of translated literature may occupy the peripheral position, while others may be found in the centre of the polysystem [22].

Furthermore, proponents of the polysystem theory contend that when literature is translated between hegemonic and minority languages, translators tend to favour the norms and conventions of the hegemonic language. This view is shared by other scholars, such as Venuti [23], especially concerning translation in the colonial era. Scholars therefore contend that translators in the post-colonial epoch should counter this dominance of hegemonic languages by adopting a source text-oriented approach (foreignisation in Venuti’s terms) when translating into a hegemonic language, and by deliberately favouring a target text-oriented paradigm (domestication in Venuti’s terminology) when translating from a hegemonic language [2, 23].

Only a limited number of studies have employed the polysystem theory as a theoretical underpinning. The first was conducted by Ntuli [24], who sought to test the hypothesis of the polysystem theory that when literature is translated between hegemonic and minority languages, translators tend to favour the norms of the hegemonic language. This hypothesis was tested within the poles of domestication and foreignisation. The results supported the hypothesis, as they showed that the domestication approach was predominant in the translation: Ntuli was investigating a translation into a hegemonic language (English), and the dominance of domestication indicates that the translator favoured English norms. A similar investigation was conducted by Nzimande [11] as part of his study of Baker’s strategies of non-equivalence. His research sought to test the same hypothesis, also within the poles of domestication and foreignisation, but in the English translation of the isiZulu novel ‘UMamazane’. The results again showed the dominance of domestication, further strengthening the hypothesis of the polysystem theory. These two studies clearly demonstrate the pivotal role the polysystem theory has played in advancing the field of Translation Studies.

Though it made a great contribution to the advancement of translation theory, the polysystem theory is not immune to criticism. For instance, it has been criticised for its heavy reliance on the formalist approach, which may no longer be relevant to Translation Studies in the 1970s and beyond [Gentzler in 2]. Furthermore, the model has been criticised for being too abstract and for conceptualising literature only as a polysystem that also encompasses translated literature, thereby ignoring the practical nature of translation and texts [Gentzler in 2]. Lastly, the model’s confinement to literature has been viewed as another shortcoming: it is unknown how far the approach can go in the case of other types of text, such as scientific texts [Gentzler in 2].

Following the polysystem theory, the ‘descriptive translation studies’ approach emerged. This approach is discussed in the section to follow.


6. The descriptive translation studies

‘Descriptive translation studies’ (DTS) emerged in the 1980s as a new paradigm in translation theory. It was developed by Gideon Toury, who built on his earlier work of the 1970s as well as on the polysystem theory [2, 25]. The approach is of a descriptive nature: it seeks to describe how translations are actually done, rather than prescribing how they ought to be done [26]. Clearly, this signals another shift in translation theory, away from the prescriptive models that existed before the 1970s, which provided a ‘prescription’ as to how the translation task should be executed [25].

Furthermore, DTS perceives translations as ‘facts of the target culture’. This means that it is the norms and conventions of the target culture and language that determine how a translation is to be executed [26]; it is for this reason that DTS is considered ‘target-oriented’ [26]. This marks yet another shift in translation theory, away from the source text-oriented paradigms of the 1950s and 1960s, in which the source text was paramount [25]. The move from source text-oriented to target text-oriented paradigms had, however, already begun in the 1970s with the emergence of the functional approach, and since Toury started working on DTS in the 1970s, that period can be regarded as marking the official shift from source text-oriented to target text-oriented approaches.

Toury brings back the concept of equivalence, which was introduced by the equivalence-based model, but redefines it as a descriptive, variable, functional-relational and historical concept, as opposed to its earlier definition as a prescriptive, a-historical and invariant concept [25]. Furthermore, Toury [26] posits that texts are viewed as translations if they function as translations in the target culture. Equivalence is then seen as the relationship between the ST and the TT, provided the TT is regarded as a translation of the ST [26]. Equivalence, in this case, displays a variable profile, which is determined by the target culture [25].

Moreover, three kinds of research are realised within the DTS model [1]: function-oriented, product-oriented and process-oriented research. Function-oriented DTS focuses on contexts rather than texts; it is concerned with investigating the function a translation fulfils in the target culture, as well as its value and influence within the target context [1, 25]. Product-oriented DTS, on the other hand, focuses on describing translations or comparing translations to their original texts [1]. The focus of process-oriented DTS is on the act or process of translation itself [1]. Toury [26], however, emphasises the interdependence of these three kinds of research and contends that considering all three should be mandatory if one seeks to explicate translation phenomena adequately.

Furthermore, Toury [26] conceives of translation as a norm-governed activity, and it is the norms that determine the type and extent of equivalence between the ST and the TT. Norms are, in essence, socio-cultural constraints specific to a society, culture and time. Toury [26] identifies three kinds of norms and contends that these are observed at different stages of the translation process: initial, preliminary and operational norms. By the initial norm Toury [27] means the choice either to favour the norms of the target culture or those of the source culture. Preliminary norms are concerned with translation policy and the directness of translation [27]. Operational norms have to do with the factors governing the translation task itself [27]. They encompass ‘matricial norms’ and ‘textual-linguistic norms’: matricial norms concern the completeness of the translation [2, 27], while textual-linguistic norms govern the selection of TT textual and linguistic material to be used in the translation [2, 27]. The different types of norms described here are represented diagrammatically in Figure 2 below.

Figure 2.

Toury’s different kinds of translation norms.

A large volume of research has employed the DTS as a theoretical framework. One study, by Nokele [28], was a comparative analysis of the isiXhosa and isiZulu translations of Mandela’s autobiography ‘Long Walk to Freedom’. The research focused specifically on the translation of metaphors, and the findings revealed that similar strategies were adopted in the two translations in the rendering of metaphors [28]. Another study was done by Ngcobo [29], who explored the speech act of naming in the isiZulu translation of the English novel ‘Cry, the Beloved Country’. The results revealed that the translator employed strategies such as cultural substitution, omission and addition in an attempt to make the isiZulu text appealing to the target readership [29]. These two studies clearly demonstrate the practical application of the DTS in the field of translation studies.

The DTS has contributed immensely to the field of translation studies, most notably through its introduction of norms into translation theory and its descriptive stance. However, it has also received several criticisms from various scholars. The approach has been lambasted for adopting only a target text-oriented view, which disregards other factors that might influence the translation process, such as politics and ideology [30]. Furthermore, other scholars have argued that the concept of norms is rather abstract, as norms can only be identified by analysing the very translations that are supposed to be governed by them [2]. The strategy adopted by the translator may be unconscious, and analysing a translated text cannot guarantee that the norms at play during the translation process will be recovered.

Further developments took place in translation theory and concepts such as politics, ideology, gender, power relations, etc. were introduced. This marked the beginning of the ‘cultural turn’ approaches. These are highlighted in the following section.


7. The cultural turn

In the 1990s, a group of approaches encapsulated under the term ‘the cultural turn’ dawned, marking yet another paradigm shift in translation theory. Bassnett and Lefevere were instrumental in introducing the approach into translation studies through their collection of essays ‘Translation, History and Culture’, which came out in 1990 [2]. The cultural turn began to consider aspects of culture, politics and ideology as influencing factors in the translation process [31]. It is concerned with how culture, history, context and convention exert an influence on the translation process [Bassnett & Lefevere quoted in 2]. According to Snell-Hornby [quoted in 2], it is this consideration of culture and politics in the translation process that is termed ‘the cultural turn’.

Nevertheless, it should be noted that the concept of culture in translation theory was introduced as early as the 1970s, with the dawn of the functional approach and polysystem theory. Certain scholars support the view that glimpses of the cultural turn were already visible at that time; Mizani [31], for instance, indicates that the concept of culture was initially introduced by the polysystem theory and the DTS.

However, it cannot be denied that the concept was officially introduced in the 1990s with the emergence of the cultural turn approaches. Gentzler [quoted in 17] supports this argument when he maintains that the earlier introduction of the concept was only an indication of the move towards the cultural turn, and that it was only in the 1990s that Bassnett and Lefevere took a firm stand and officially introduced it. The cultural turn encapsulates a number of approaches, including those focusing on feminism, colonialism, power relations and so forth [2]. This chapter focuses on the approaches concerned with patronage and translation, gender and translation and post-colonial translation theory, since not all the approaches encompassed within the cultural turn can be covered within its limited scope.

7.1 Patronage and translation

Lefevere [32] worked within the DTS paradigm and the polysystem theory but went beyond these approaches to introduce the ideological, social and cultural contexts in which translation as an activity is embedded [2, 33]. Lefevere’s main focus is on the aspects of power, ideology and institution that are seen to exercise control over the reception, approval or disapproval of literary texts at large and translations in particular [32]. The key concept here is ‘patronage’, which refers to the institutions or persons that control the translation process [33]. These institutions and persons are called ‘patrons’. Patrons encompass individuals possessing certain powers, groups of people (for example, political parties, the media, religious bodies and publishers) and institutions (for example, critical journals and academies). The educational establishment has a major influence since it controls the dissemination of literature [32].

Lefevere [32] is of the view that patrons decide which texts are to be translated, as well as how they should be disseminated to the public. The drive to translate literature can be poetological or ideological: if it is poetological, the translation will either reject or conform to the reigning poetics [2]; if it is ideological, it will reject or accept the reigning ideology. Lefevere [32] indicates that ideological factors take precedence over their poetological counterparts. Ideological considerations refer to the patron’s ideology imposed upon the translator or to the translator’s own ideology [2], while poetological considerations refer to the reigning poetics in the target culture [2].

Having discussed patronage and translation, it is perhaps vital to move on to issues of gender and translation.

7.2 Gender and translation

The cultural turn approach also considers issues of gender representation or portrayal in translated literature. Two paradigms seek to explain this [24]. The first paradigm has to do with the unequal power relations between men and women observed in many societies. Women are generally viewed as inferior to men and are often subordinated to men in society [34]. Snell-Hornby [17] concurs with this sentiment when she posits that in gender studies, as in post-colonial studies, feminist translation studies developed as an attempt to counter the dominance of men as portrayed in translated literature. Research on gender and translation sought to expose the power differentials found in translated literature, as well as the false image of women portrayed in translation [34].

Feminist translation theory proposes that there is a connection between the status of translated texts and that of women in society. Translated texts are perceived as inferior to their originals, in the same way that women are subordinated and treated as inferior to men in society [2]. Therefore, the primary purpose of this approach is to expose and counter the subordination of women and of translated literature in society and in literature [35]. Simon is the most prominent proponent of this approach and has produced a large volume of research on translation and gender, some of which has focused on women who have played a major role in the field of translation [2, 34].

The second paradigm concerning gender and translation, as proposed by Von Flotow [34], focuses on identity and translation and considers issues of diversity in sexual orientation. Von Flotow [34] contends that the recognition of gay and lesbian identities makes it difficult to classify people simply as male or female in this day and age. Therefore, proponents of feminist translation theory have sought to investigate the portrayal of identity in general in translation. For instance, Harvey [36] conducted research in which he explored the portrayal of gay identity in American English and French translated texts. For the French texts, the research demonstrated that terms denoting gay identity were either rendered with derogatory terms or omitted completely in the translations [36]. Harvey [36] posited that this translation strategy reflects a general tendency to reject gay identity in France. In the case of the American English texts, the findings indicated a tendency to add text and use terms that reveal gay identity in the translations [36].

It was indicated earlier that the cultural turn also encapsulates the post-colonial translation theory. The following section is, therefore, dedicated to this approach.

7.3 Post-colonial translation theory

The cultural turn has also been concerned with issues of translation and post-colonialism. In the 1990s, scholars began to investigate the role that translation, and language in general, played in colonisation [17]. This led to the development of post-colonial translation theory. The primary focus of this new paradigm has been to explore and explicate the power imbalances between different languages as reflected in the translated literature of the colonial and post-colonial epochs [2, 17]. The investigation has mainly been concerned with the dominance of the colonisers’ languages and how this is depicted in translated literature. Munday [2] contends that there is an observed parallel between the status of translated texts and of women in society, on the one hand, and that of the colonised, on the other. In translated literature, hegemonic languages are seen or portrayed as superior to minority languages. These power differentials are therefore viewed as a reflection of the asymmetrical power relations between the colonised and the coloniser [2].

Two arguments have sought to explicate this observed phenomenon. The first concerns the fact that translation between hegemonic and minority languages has largely been a unidirectional process: a large amount of translation work has been done from hegemonic into minority languages [23]. This can be ascribed to the fact that minority languages have been viewed as lacking vital resources, and translation was therefore used as a vehicle through which to import those missing resources from dominant languages. The second argument centres on translation from minority into dominant languages: translation into hegemonic languages serves the purpose of adding to the anthropological knowledge that the coloniser keeps about the colonies in its knowledge centres [33].

Research has shown that translation between major and minor languages during and after the colonial era has often created a distorted and false image of the colonised [2, 23]. Snell-Hornby [17] provides India and South Africa as prime examples of countries in which local languages have existed concomitantly with English as a major, hegemonic language. Spivak [37] has conducted research in this area, investigating the dominance of English as a hegemonic language in the colonial as well as the post-colonial era. She focused specifically on Third World literature translated from Bengali (a language spoken in Bangladesh and India) into English, to investigate the impact this had on Bengali as a language and on its speakers [2]. Spivak [37] maintains that translations into English have always favoured English and marginalised the languages from which the translation was done. She is of the view that such an approach to translation portrays a false image of less influential languages [2]. Furthermore, Nzimande [38] has investigated the two English translations of the isiZulu novel ‘Insila KaShaka’. One of the objectives of this research was to determine whether power differentials between English and isiZulu played out in the first translation, done in the colonial period (1951), as well as in the second, produced in the post-colonial era (2017). The results demonstrated that power dynamics between isiZulu and English indeed exerted some influence on the approach adopted by the translators.

Other scholars who have done an incredible amount of work on post-colonial translation theory include, but are not limited to, Bassnett and Trivedi, Niranjana and Venuti. However, their contributions cannot be discussed here due to space limitations.

Several criticisms have been raised against the cultural turn approach as a whole. For instance, Munday [2] contends that the introduction of the cultural turn into translation studies might be viewed as an attempt to colonise the field, since it was still undergoing development when the approach was brought in. Munday [2] further posits that proponents of the post-colonial paradigm seem to have been more concerned with advancing their own personal agendas. For instance, Cronin proposes that English-speaking Irish translators can ‘make a distinctive contribution to world culture as a non-imperial English-speaking bridge for the European audiovisual industry’ [Cronin quoted in 2, p. 214], something these translators could achieve through the use of relevant translation strategies [2]. This can therefore be conceived of as an attempt to use translation as a tool for political and economic gain [2].

After the cultural turn, another paradigm emerged in translation studies, namely the corpus-based translation studies (CTS) approach. The following section discusses this approach.


8. Corpus-based translation studies

In 1998, corpus linguistics began making inroads into translation studies, and during this period the corpus-based approach was proposed as a new theory of translation [Laviosa in 2]. It is perhaps important to note, however, that it was Toury who first proposed the corpus-based paradigm, arguing that translation studies needed a new approach that would allow studies to be replicable and transparent. In response to this proposition, Baker brought corpus linguistics into translation studies [39]. Her initial indication of the importance of corpora in translation studies came when she contended that ‘the availability of large corpora of both original and translated text, together with the development of a corpus-driven methodology, will enable translation scholars to uncover the nature of translated text as a mediated communicative event’ ([40], p. 243). This led to the dawn of yet another approach in translation theory, the corpus-based translation studies (CTS) paradigm [39]. Laviosa [quoted in ([41], p. 20)] aptly defines CTS as follows:

a branch of the discipline that uses corpora of original and/or translated texts for the empirical study of the product and process of translation, the elaboration of theoretical constructs and the training of translators. […] It uses both inductive and deductive approaches to the investigation of translation and translating.

Baker [42] contends that the tools provided by the field of corpus linguistics can enable scholars to reveal universal features of translated texts with relative ease. In order to investigate such features, Baker [43] compiled her own corpus. Furthermore, CTS is used to investigate not only universal features of translation but also specific features.
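To make this kind of investigation concrete, the sketch below computes a type-token ratio, a simple measure often used as one proxy for ‘simplification’, a proposed universal feature of translated text (a lower ratio suggests less lexical variety). The two one-sentence ‘corpora’ are invented purely for illustration; real CTS work uses large electronic corpora and dedicated query tools.

```python
def type_token_ratio(text: str) -> float:
    """Ratio of distinct word forms (types) to running words (tokens)."""
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens)

# Hypothetical miniature 'corpora' (invented data, for demonstration only).
original_corpus = "the storm scattered the fleet and the sailors despaired utterly"
translated_corpus = "the storm hit the fleet and the sailors were very very sad"

ttr_original = type_token_ratio(original_corpus)      # 8 types / 10 tokens
ttr_translated = type_token_ratio(translated_corpus)  # 9 types / 12 tokens

print(f"original TTR:   {ttr_original:.2f}")
print(f"translated TTR: {ttr_translated:.2f}")
print("simplification pattern?", ttr_translated < ttr_original)
```

On these toy sentences the translated ‘corpus’ shows the lower ratio; in actual research such a difference would only be meaningful across large, comparable corpora.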

Translators employ different strategies when engaging in the translation process, and the differences observed in the styles of individual translators can be the result of the norms governing the translation task or the translation process itself [2]. This is in line with the argument that CTS was introduced in an endeavour to develop the DTS, as the notion of norms was first introduced by Toury in the DTS model. Besides norms, several other commonalities between the DTS and CTS can be observed. For example, both paradigms adopt a target text-oriented and descriptive stance in explicating translation phenomena [2]. Since they are both target text-oriented, they both mark a shift from the prescriptiveness that characterised the approaches of the 1950s and 1960s to the descriptiveness introduced with the approaches of the 1970s [44, 45]. Moreover, Tymoczko [in 44] states that CTS concerns itself with both the process and the product of translation, similarly to the DTS with its different branches (process-oriented, product-oriented and function-oriented DTS).

Moreover, CTS relies on the use of corpora, as its name (corpus-based) suggests. Different types of corpora are used, including monolingual, bilingual, multilingual and parallel/comparable corpora; these types, however, cannot be explicated within the scope of this chapter. Most scholars who have undertaken corpus-based translation research have made use of a parallel/comparable corpus. A corpus can be defined as a large collection of authentic text that is available in electronic format and can be manipulated using corpus query tools to reveal various linguistic phenomena in a text [39].
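The most basic query such corpus tools offer is the concordance, or key-word-in-context (KWIC) display, which lists every occurrence of a search word with its surrounding co-text. The following minimal sketch (with an invented sample ‘corpus’) illustrates the idea; real tools add sorting, frequency counts and parallel alignment.

```python
def kwic(corpus: str, keyword: str, window: int = 3) -> list[str]:
    """Return each occurrence of keyword with `window` words of context on each side."""
    tokens = corpus.lower().split()
    lines = []
    for i, tok in enumerate(tokens):
        if tok == keyword:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"{left} [{tok}] {right}")
    return lines

# Invented sample text, purely for demonstration.
corpus = ("the translator rendered the term faithfully although the term "
          "had no equivalent in the target language")

for line in kwic(corpus, "term"):
    print(line)
```

Running the query on the sample text lists both occurrences of ‘term’ with three words of context on either side, the kind of evidence a researcher would scan when describing how a given item is handled across a corpus.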

Previously, a corpus was any large collection of text, not necessarily in electronic format. Baker [43] further indicates that corpora were not always large in the past and could be small collections of text manipulated manually. With rapid advancements in the field of corpus linguistics, corpora are now mainly available in electronic format and have become larger in size.

Laviosa [46] identifies three main branches of the CTS, namely the descriptive, theoretical and applied CTS. The descriptive CTS refers to any research that employs the descriptive method in an attempt to explicate translation phenomena [46]. Theoretical CTS encompasses those studies that seek to test a specific hypothesis or to prove or refute a specific theory [46]. Applied CTS is that research conducted to gain knowledge that can be applied in real-life situations [46].

A large number of researchers have employed CTS in their studies. It is also worth noting that CTS is more a methodology than a theory for interpreting the findings of a study, and most studies have used the approach as such. For instance, Nzabonimpa [47] examined the use of simplification, a proposed universal feature of translation, in the rendering of Latin loan words in an English-French parallel corpus of legal texts. The results revealed that translators used simplification in rendering most of the Latin loan words appearing in the source texts. Furthermore, Ndlovu [48] explored the strategies employed in the translation of medical terms in an English-isiZulu parallel corpus. The findings revealed that translators used strategies such as paraphrasing, the use of general words, coinage and familiar words to make English medical terms accessible to isiZulu readers. These studies clearly indicate that CTS has been instrumental in advancing the field of translation studies.

CTS has also been criticised on several counts. The approach has been heavily lambasted because the corpus on which it bases its analysis and conclusions cannot be a true representation of the language as a whole: there are phenomena and aspects of language that may not necessarily be present in the corpus [2]. Some conclusions drawn from the corpus may thus be inaccurate, as the corpus may not reveal certain linguistic features relevant to the analysis [46]. Moreover, the issue of balance and representativeness has always been problematic. It is never an easy task to develop a corpus that is both balanced and representative [45]; researchers are likely to face situations in which a compromise has to be made between balance and representativeness [45], and such a compromise may render the results of the analysis inaccurate and invalid.


9. Conclusion

The aim of this chapter has been to highlight the developments that have taken place in translation theory, from the prescriptive approaches of the 1950s and 1960s to the paradigms of the 1970s–2000s, which were marked by descriptivism. The chapter opened with the ‘word-for-word’ and ‘sense-for-sense’ approaches that reigned in the 1950s and 1960s, and the debate that characterised these two approaches was also discussed. Furthermore, a description of the equivalence-based approach of the 1960s was provided. This paradigm emphasised the importance of the concept of equivalence and focused on correspondence in form (style) and meaning (content) between the ST and the TT [2]. The discussion then proceeded to the functional approach of the 1970s. The most prominent proponent of this paradigm is Vermeer, who became best known for coining the term ‘skopos theory’ as a synonym of ‘functional theory’. According to the functional or skopos theory, it is the function or purpose of the text that takes precedence in the translation process [15, 16].

The chapter proceeded to a description of the polysystem theory developed by Even-Zohar, which existed concurrently with the functional approach in the 1970s. According to this paradigm, literature and translation form part of a polysystem, which consists of multiple systems [21, 22]. Furthermore, descriptive translation studies (DTS), which dawned in the 1980s as an offspring of the polysystem theory, was described. The DTS was seen as the official shift from prescriptiveness to descriptiveness in translation theory. According to the DTS, translations are perceived as facts of the target culture, and any text is viewed as a translation if it functions as a translation in the target culture [26]. Proponents of the DTS also conceive of translation as an activity governed by norms, which operate at different stages of the translation process [26].

The chapter then proceeded to a discussion of the cultural turn approach of the 1990s. The cultural turn covers a wide range of approaches, such as patronage and translation, gender and translation and post-colonial translation theory, and these formed part of the discussion [2, 17]. The chapter closed with the corpus-based translation studies (CTS) approach, which emerged towards the end of the 1990s. CTS was brought into translation studies from the field of corpus linguistics [39]. It relies on the use of corpora in explaining translation phenomena, and its introduction into translation studies was an attempt to develop the DTS [39]. The practical applications, as well as the shortcomings, of each of these approaches were also highlighted.

References

  1. Holmes JS. The name and nature of translation studies. In: Venuti L, editor. The Translation Studies Reader. London & New York: Routledge; 2000. pp. 172-185
  2. Munday J. Introducing Translation Studies: Theories and Applications. 4th ed. London & New York: Routledge; 2016
  3. Munday J. The Routledge Companion to Translation Studies. London & New York: Routledge; 2009
  4. Bandia P. Part II: History and traditions. In: Baker M, Saldanha G, editors. Routledge Encyclopedia of Translation Studies. 2nd ed. London & New York: Routledge; 2009. pp. 311-320
  5. Olsen BC. An overview of translation history in South Africa 1652-1860 [thesis]. Johannesburg: University of the Witwatersrand; 2008
  6. Baker M, Hanna S. Arabic tradition. In: Baker M, Saldanha G, editors. Routledge Encyclopedia of Translation Studies. 2nd ed. London & New York: Routledge; 2009. pp. 328-337
  7. Madadzhe RN, Mashamba M. Translation and cultural adaptation with specific reference to Tshivenda and English: A case of medical terms and expressions. South African Journal of African Languages. 2014;34(1):51-56
  8. Jakobson R. On linguistic aspects of translation. In: Venuti L, editor. The Translation Studies Reader. London & New York: Routledge; 2000. pp. 113-118
  9. Nida EA. Principles of correspondence. In: Venuti L, editor. The Translation Studies Reader. London & New York: Routledge; 2000. pp. 126-140
  10. Nida EA, Taber CR. The Theory and Practice of Translation. Leiden: American Bible Society; 1974
  11. Nzimande EN. The translator as a cultural mediator in the translation of Mthembu’s UMamazane into English [thesis]. Pretoria: University of Pretoria; 2017
  12. Moropa K, Kruger A. Mistranslation of culture-specific terms in Kropf's Kafir-English dictionary. South African Journal of African Languages. 2000;20(1):70-79
  13. Kenny D. Equivalence. In: Baker M, Saldanha G, editors. Routledge Encyclopedia of Translation Studies. 2nd ed. London & New York: Routledge; 2009. pp. 96-99
  14. Bassnett S. Translation Studies. London & New York: Routledge; 2013
  15. Vermeer HJ. Skopos and commission in translational action. In: Venuti L, editor. The Translation Studies Reader. London & New York: Routledge; 2000. pp. 223-232
  16. Reiss K, Vermeer HJ. Towards a General Theory of Translational Action: Skopos Theory Explained. Trans. Nord C; English reviewed by Dudenhöffer M. Manchester: St Jerome Publishing; 2013
  17. Snell-Hornby M. The Turns of Translation Studies: New Paradigms or Shifting Viewpoints? Amsterdam & Philadelphia: John Benjamins; 2006
  18. Nord C. Translating as a Purposeful Activity: Functionalist Approaches Explained. Manchester: St Jerome Publishing; 1997
  19. Wallmach K, Kruger A. ‘Putting a sock on it’: A contrastive analysis of problem-solving translation strategies between African and European languages. South African Journal of African Languages. 1999;19(4):276-289
  20. Moropa K. The initiator in the translation process: A case study of The Prophet by Kahlil Gibran in the indigenous languages of South Africa. South African Journal of African Languages. 2012;32(2):99-109
  21. Even-Zohar I. Polysystem theory. Poetics Today. 1990;11(1):9-26
  22. Even-Zohar I. The position of translated literature within the literary polysystem. In: Venuti L, editor. The Translation Studies Reader. London & New York: Routledge; 2000. pp. 192-197
  23. Venuti L. The Translator’s Invisibility. London: Routledge; 1995
  24. Ntuli ID. Zulu literature in the global book market: The English translation of Inkinsela YaseMgungundlovu [thesis]. Johannesburg: University of the Witwatersrand; 2016
  25. Rosa AA. Descriptive translation studies (DTS). In: Gambier Y, van Doorslaer L, editors. Handbook of Translation Studies. Amsterdam: John Benjamins; 2010. pp. 94-104
  26. Toury G. Descriptive Translation Studies and Beyond. Amsterdam: John Benjamins; 1995
  27. Toury G. The nature and role of norms in translation. In: Venuti L, editor. The Translation Studies Reader. New York & London: Routledge; 2000. pp. 198-211
  28. Nokele ABB. Metaphor in Mandela’s Long Walk to Freedom: A cross-cultural comparison. Southern African Linguistics and Applied Language Studies. 2011;29(3):327-341
  29. Ngcobo S. Cry, the Beloved Country’s isiZulu translation: Speech act of naming. Southern African Linguistics and Applied Language Studies. 2015;33(1):99-110
  30. Hermans T. Revisiting the classics: Toury’s empiricism version one. The Translator. 1995;1(2):215-223
  31. Mizani S. Cultural translation. In: Zainnurahman SS, editor. The Theories of Translation: From History to Procedures. Personal Journal of Philosophy of Language and Education; 2009. pp. 49-60
  32. Lefevere A. Translation, Rewriting and the Manipulation of Literary Fame. London & New York: Routledge; 1992
  33. Hermans T. Translation, ethics, politics. In: Munday J, editor. The Routledge Companion to Translation Studies. London & New York: Routledge; 2009. pp. 93-105
  34. Von Flotow L. Gender and translation. In: Kuhiwczak P, Littau K, editors. A Companion to Translation Studies. Clevedon: Multilingual Matters; 2009. pp. 92-105
  35. Simon S. Gender in Translation: Cultural Identity and the Politics of Transmission. London & New York: Routledge; 1996
  36. Harvey K. Translating camp talk: Gay identities and cultural transfer. In: Venuti L, editor. The Translation Studies Reader. London & New York: Routledge; 2000. pp. 446-467
  37. Spivak GC. The politics of translation. In: Venuti L, editor. The Translation Studies Reader. London & New York: Routledge; 2000. pp. 397-416
  38. Nzimande EN. A critical analysis of the two English translations of the first isiZulu novel Insila KaShaka: A corpus-based approach [thesis]. Pretoria: University of South Africa; 2022
  39. Kenny D. Corpora. In: Baker M, Saldanha G, editors. Routledge Encyclopedia of Translation Studies. London & New York: Routledge; 2011. pp. 59-62
  40. Baker M. Corpus linguistics and translation studies: Implications and applications. In: Baker M, Francis G, Tognini-Bonelli E, editors. Text and Technology: In Honour of John Sinclair. Amsterdam: John Benjamins; 1993. pp. 233-250
  41. Nokele ABB. Translating conceptual metaphor in Mandela’s Long Walk to Freedom: A cross-cultural comparison [thesis]. Pretoria: University of South Africa; 2015
  42. Kenny D. Corpora in translation studies. In: Baker M, editor. Routledge Encyclopedia of Translation Studies. London: Routledge; 2009. pp. 50-53
  43. Baker M. Corpora in translation studies: An overview and some suggestions for future research. Target. 1995;7(2):223-243
  44. Kruger A. Corpus-based translation studies: Its development and implications for general, literary and Bible translation. Acta Theologica Supplementum. 2002;2002(2):70-106
  45. Laviosa S. Corpus-based translation studies: Where does it come from? Where is it going? Language Matters. 2004;35(1):6-27
  46. Laviosa S. The corpus-based approach: A new paradigm in translation studies. Meta. 1998;43(4):474-479
  47. Nzabonimpa JP. Investigating lexical simplification of Latin-based loan terms in English to French legal translations: A corpus-based study [thesis]. Pretoria: University of South Africa; 2009
  48. Ndlovu V. The accessibility of translated Zulu health texts: An investigation of translation strategies [thesis]. Pretoria: University of South Africa; 2009
