Open access peer-reviewed chapter

Quantum Communication Processes and Their Complexity

By Noboru Watanabe

Submitted: July 9th 2012. Reviewed: March 11th 2013. Published: July 24th 2013.

DOI: 10.5772/56356


1. Introduction

Complex systems and their dynamics can be treated in various ways. Ohya looked for a synthesizing method to treat complex systems and established Information Dynamics [36], a new concept unifying the dynamics of a state and the complexity of the system itself. By applying Information Dynamics, one can discuss in a unified frame problems in mathematics, physics, biology, information science and so on. Information Dynamics is growing as a research field; for instance, the international journal "Open Systems and Information Dynamics" appeared in 1992. In ID, there are two types of complexity: (a) a complexity of the state describing the system itself and (b) a transmitted complexity between two systems. The entropies of classical and quantum information theory are examples of the complexities (a) and (b), respectively.

Shannon [52] found that the entropy, introduced in physical systems by Clausius and Boltzmann, can be used to express the amount of information carried by communication processes, and he proposed the so-called information communication theory in the middle of the 20th century. In his information theory, the entropy and the mutual entropy (information) are the most important concepts. The entropy relates to the complexity of ID, measuring the amount of information of the state of a system. The mutual entropy (information) corresponds to the transmitted complexity of ID, representing the amount of information correctly transmitted from the initial system to the final system through a channel; it was extended to the mutual entropy on continuous probability spaces by Gelfand, Kolmogorov and Yaglom [17,23], defined by means of the Kullback-Leibler relative entropy of two states [26].

Lasers are now widely used in communication, so a formulation of information theory able to treat quantum effects is necessary; this is the so-called quantum information theory. Quantum information theory is important in both mathematics and engineering. It has been developed together with quantum entropy theory and quantum probability. In quantum information theory, one of the important problems is to investigate how much information is exactly transmitted from the input system to the output system through a quantum channel. The amount of information of the quantum input system is described by the quantum entropy defined by von Neumann [29] in 1932. The C*-entropy was introduced in [33,35] and its properties are discussed in [28,21]. The quantum relative entropy was introduced by Umegaki [55] and extended to general quantum systems by Araki [4,5], Uhlmann [54] and Donald [14]. Furthermore, it was necessary to extend Shannon's mutual entropy (information) of classical information theory to the quantum setting. The classical mutual entropy is defined by using the joint probability expressing a correlation between the input system and the output system. However, it was shown by Urbanik [56] that in quantum systems a joint probability distribution does not generally exist. The semi-classical mutual entropy was introduced by Holevo, Levitin and Ingarden [18,20] for classical input and output passing through a possibly quantum channel. By introducing a new notion, the so-called compound state, Ohya formulated in 1983 the mutual entropy [31,32] in a completely quantum mechanical system (i.e., input state, output state and channel are all quantum mechanical), which is called the Ohya mutual entropy. It was generalized to C*-algebras in [Oent84]. The quantum capacity [40] is defined by taking the supremum of the Ohya mutual entropy.
By using the Ohya quantum mutual entropy, one can discuss the efficiency of information transmission in quantum systems [28,27,44,34,35], which allows a detailed analysis of optical communication processes. Concerning quantum communication processes, several studies have been done in [31,32,35,40,41]. Recently, several mutual entropy type measures (the Lindblad-Nielsen entropy [10] and the coherent entropy [6]) were defined by using the entropy exchange. One can classify these mutual entropy type measures by calculating them for concrete quantum channels. These entropy type complexities are explained in [39,43].

The entangled state is an important concept in quantum theory and has been studied recently by several authors. One of the remarkable formulations for investigating entangled states is Jamiolkowski's isomorphism [22]. It might be related to the construction of the compound state in quantum communication processes. One can discuss the entangled states generated by beam splittings and squeezed states.

The classical dynamical (or Kolmogorov-Sinai) entropy S(T) [23] for a measure preserving transformation T was defined on a message space through finite partitions of the measurable space. The classical coding theorems of Shannon are important tools to analyze communication processes which have been formulated by the mean dynamical entropy and the mean dynamical mutual entropy. The mean dynamical entropy represents the amount of information per one letter of a signal sequence sent from the input source, and the mean dynamical mutual entropy does the amount of information per one letter of the signal received in the output system. In this chapter, we will discuss the complexity of the quantum dynamical system to calculate the mean mutual entropy with respect to the modulated initial states and the attenuation channel for the quantum dynamical systems [59].

The quantum dynamical entropy (QDE) was studied by Connes-Størmer [13], Emch [15], Connes-Narnhofer-Thirring [12], Alicki-Fannes [3], and others [9,48,19,57,11]. Their dynamical entropies were defined on spaces of observables. Recently, the quantum dynamical entropy and the quantum dynamical mutual entropy were studied by the present authors [34,35]: (1) the dynamical entropy is defined in the state spaces through the complexity of Information Dynamics [36]; (2) the definition through the quantum Markov chain (QMC) was given in [2]; (3) the dynamical entropy for a completely positive (CP) map was defined in [25]. In this chapter, we will discuss the complexity of the quantum dynamical process by calculating the generalized AOW entropy given by the KOW entropy for the noisy optical channel [58].

2. Quantum channels

The signal of the input quantum system is transmitted through a physical device, called a quantum channel. The concept of channel has played an important role in the progress of quantum information communication theory. Mathematically, a quantum channel is a mapping from the input state space to the output state space. In particular, the attenuation channel [31] and the noisy optical channel [44] are remarkable examples of quantum channels describing quantum optical communication processes. These channels are related to the mathematical description of the beam splitter.

Here we review the definition of the quantum channels.

Let (B(ℋ₁), S(ℋ₁)) and (B(ℋ₂), S(ℋ₂)) be input and output systems, respectively, where B(ℋ_k) is the set of all bounded linear operators on a separable Hilbert space ℋ_k and S(ℋ_k) is the set of all density operators on ℋ_k (k = 1, 2). A quantum channel Λ* is a mapping from S(ℋ₁) to S(ℋ₂).

  1. Λ* is called a linear channel if Λ* satisfies Λ*(λρ₁ + (1 − λ)ρ₂) = λΛ*(ρ₁) + (1 − λ)Λ*(ρ₂) for any ρ₁, ρ₂ ∈ S(ℋ₁) and any λ ∈ [0, 1].

  2. Λ* is called a completely positive (CP) channel if Λ* is linear and its dual map Λ from B(ℋ₂) to B(ℋ₁) satisfies

∑_{i,j=1}^{n} A_i* Λ(B_i* B_j) A_j ≥ 0

for any n ∈ ℕ, any B_j ∈ B(ℋ₂) and any A_j ∈ B(ℋ₁), where the dual map Λ of Λ* is defined by tr Λ*(ρ)B = tr ρΛ(B) for any ρ ∈ S(ℋ₁) and any B ∈ B(ℋ₂). Almost all physical transformations can be described by CP channels [30,39,21,46,43].
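The duality relation tr Λ*(ρ)B = tr ρΛ(B) can be checked numerically. The following is a minimal sketch, not taken from the chapter: a qubit dephasing channel with projection Kraus operators, where the matrices, helper functions and the channel choice are all illustrative assumptions.

```python
# Illustrative check of the duality tr Λ*(ρ)B = tr ρΛ(B) for a simple
# CP channel on a qubit (Kraus operators A_j chosen as basis projections).

def dag(M):  # conjugate transpose
    return [[M[j][i].conjugate() for j in range(len(M))] for i in range(len(M[0]))]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A))] for i in range(len(A))]

def tr(M):
    return sum(M[i][i] for i in range(len(M)))

# Kraus operators of a dephasing channel: projections onto |0> and |1>
kraus = [[[1, 0], [0, 0]], [[0, 0], [0, 1]]]

def channel(rho):  # Λ*(ρ) = Σ_j A_j ρ A_j*
    out = [[0, 0], [0, 0]]
    for A in kraus:
        out = add(out, mul(mul(A, rho), dag(A)))
    return out

def dual(B):       # Λ(B) = Σ_j A_j* B A_j
    out = [[0, 0], [0, 0]]
    for A in kraus:
        out = add(out, mul(mul(dag(A), B), A))
    return out

rho = [[0.7, 0.2 + 0.1j], [0.2 - 0.1j, 0.3]]  # an input density matrix
B = [[1, 2 - 1j], [2 + 1j, -1]]               # an observable

lhs = tr(mul(channel(rho), B))  # tr Λ*(ρ)B
rhs = tr(mul(rho, dual(B)))     # tr ρΛ(B)
assert abs(lhs - rhs) < 1e-12
```

The same check works for any Kraus family, since the dual map is defined exactly by this trace identity.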

3. Quantum communication processes

Let 𝒦₁ and 𝒦₂ be two Hilbert spaces expressing noise and loss systems, respectively. A quantum communication process including the influence of noise and loss is described by the following scheme [31]: let ρ be an input state in S(ℋ₁) and ζ be a noise state in S(𝒦₁).

S(ℋ₁)  —Λ*→  S(ℋ₂)
  γ*↓             ↑a*
S(ℋ₁⊗𝒦₁)  —Π*→  S(ℋ₂⊗𝒦₂)

The above maps γ*, a* are given as

γ*(ρ) = ρ ⊗ ζ, ρ ∈ S(ℋ₁),   a*(σ) = tr_{𝒦₂} σ, σ ∈ S(ℋ₂⊗𝒦₂).

The map Π* is a CP channel from S(ℋ₁⊗𝒦₁) to S(ℋ₂⊗𝒦₂) determined by the physical properties of the device transmitting signals. Hence the channel for the above process is given as

Λ*(ρ) ≡ tr_{𝒦₂} Π*(ρ ⊗ ζ) = (a* ∘ Π* ∘ γ*)(ρ)

for any ρ ∈ S(ℋ₁). Based on this scheme, the noisy optical channel is constructed as follows.

4. Noisy optical channel

The noisy optical channel Λ* with a noise state ζ was defined by Ohya and Watanabe [44] as

Λ*(ρ) ≡ tr_{𝒦₂} Π*(ρ ⊗ ζ) = tr_{𝒦₂} V(ρ ⊗ ζ)V*,

where ζ = |m₁⟩⟨m₁| is the m₁ photon number state in S(𝒦₁) and V is a mapping from ℋ₁⊗𝒦₁ to ℋ₂⊗𝒦₂ given by

V(|n₁⟩ ⊗ |m₁⟩) = ∑_{j=0}^{n₁+m₁} C_j^{n₁,m₁} |j⟩ ⊗ |n₁ + m₁ − j⟩,

where

C_j^{n₁,m₁} = ∑_{r=L}^{K} (−1)^{n₁+j−r} · √(n₁! m₁! j! (n₁ + m₁ − j)!) / (r! (n₁ − r)! (j − r)! (m₁ − j + r)!) · α^{m₁−j+2r} (β̄)^{n₁+j−2r},

and |n₁⟩ is the n₁ photon number state vector in ℋ₁, and α, β are complex numbers satisfying |α|² + |β|² = 1. K and L are the constants K = min{n₁, j}, L = max{j − m₁, 0}. We have the following theorem.

Theorem The noisy optical channel Λ* with noise state |m⟩⟨m| is described by

Λ*(ρ) = ∑_{i=0}^{∞} O_i V Q^{(m)} ρ Q^{(m)*} V* O_i*,

where Q^{(m)} ≡ ∑_{l=0}^{∞} (|y_l⟩ ⊗ |m⟩)⟨y_l|, O_i ≡ ∑_{k=0}^{∞} |z_k⟩(⟨z_k| ⊗ ⟨i|), {|y_l⟩} is a CONS in ℋ₁, {|z_k⟩} is a CONS in ℋ₂ and {|i⟩} is the set of number states in 𝒦₂.

In particular, for the coherent input state

ρ = |ξ⟩⟨ξ| ⊗ |κ⟩⟨κ| ∈ S(ℋ₁⊗𝒦₁),

the output state of Π* is obtained by

Π*(|ξ⟩⟨ξ| ⊗ |κ⟩⟨κ|) = |αξ + βκ⟩⟨αξ + βκ| ⊗ |−β̄ξ + ᾱκ⟩⟨−β̄ξ + ᾱκ|.


5. Attenuation channel

The noisy optical channel with a vacuum noise is called the attenuation channel, introduced in [31] by

Λ₀*(ρ) ≡ tr_{𝒦₂} Π₀*(ρ ⊗ ζ₀) = tr_{𝒦₂} V₀(ρ ⊗ |0⟩⟨0|)V₀*,

where ζ₀ = |0⟩⟨0| is the vacuum state in S(𝒦₁) and V₀ is a mapping from ℋ₁⊗𝒦₁ to ℋ₂⊗𝒦₂ given by

V₀(|n₁⟩ ⊗ |0⟩) = ∑_{j=0}^{n₁} C_j^{n₁} |j⟩ ⊗ |n₁ − j⟩,   C_j^{n₁} = √(n₁! / (j! (n₁ − j)!)) α^j (β̄)^{n₁−j}.

In particular, for the coherent input state

ρ = |ξ⟩⟨ξ| ⊗ |0⟩⟨0| ∈ S(ℋ₁⊗𝒦₁),

one can obtain the output state

Π₀*(|ξ⟩⟨ξ| ⊗ |0⟩⟨0|) = |αξ⟩⟨αξ| ⊗ |β̄ξ⟩⟨β̄ξ|.

The lifting ℰ₀* from S(ℋ) to S(ℋ⊗𝒦) in the sense of Accardi and Ohya [1] is given by

ℰ₀*(|ξ⟩⟨ξ|) = |αξ⟩⟨αξ| ⊗ |βξ⟩⟨βξ|.

ℰ₀* (or Π₀*) is called a beam splitting. Based on liftings, the beam splitting was studied by Accardi and Ohya [1] and by Fichtner, Freudenberg and Liebscher [16].
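As a sanity check on the attenuation channel, the coefficients C_j^{n₁} define an isometry on number states and reproduce binomial photon loss with transmissivity |α|². The pure-Python sketch below uses illustrative values of α and β; any pair with |α|² + |β|² = 1 works.

```python
# Illustrative check of the attenuation-channel coefficients
# C_j^{n} = sqrt(n!/(j!(n-j)!)) α^j β̄^{n-j}: the vector V0(|n>⊗|0>)
# is normalized and the output mean photon number is |α|² n.
import cmath
import math

alpha = cmath.rect(math.sqrt(0.8), 0.3)   # |alpha|^2 = 0.8 (transmissivity)
beta = cmath.rect(math.sqrt(0.2), -0.7)   # |beta|^2  = 0.2, so |α|²+|β|² = 1

def C(j, n):
    binom = math.factorial(n) / (math.factorial(j) * math.factorial(n - j))
    return math.sqrt(binom) * alpha**j * beta.conjugate()**(n - j)

n = 7
probs = [abs(C(j, n))**2 for j in range(n + 1)]    # photon statistics of the output mode
assert abs(sum(probs) - 1) < 1e-12                 # V0 is an isometry on |n>⊗|0>

mean_out = sum(j * p for j, p in enumerate(probs))
assert abs(mean_out - abs(alpha)**2 * n) < 1e-12   # binomial loss: <n_out> = |α|² n
```

The binomial photon statistics are exactly what one expects from a beam splitter that transmits each photon independently with probability |α|².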

6. Information dynamics

We are interested in studying the dynamics of state change and the complexity of states for several systems. Information Dynamics (ID) is a concept introduced by Ohya [36] to synthesize these investigating schemes in one framework. In ID, a complexity of the state describing the system itself and a transmitted complexity between two systems are used. Examples of these complexities are Shannon's entropy and the mutual entropy (information) in classical entropy theory. In quantum entropy theory, the von Neumann entropy and the Ohya mutual entropy correspond to these complexities. Recently, several mutual entropy type measures (the Lindblad-Nielsen entropy [10] and the coherent entropy [6]) were proposed by means of the entropy exchange for an input state and a channel.

7. Concept of information dynamics

Ohya introduced Information Dynamics (ID) synthesizing the dynamics of state change and the complexity of states. Based on ID, one can study various problems of physics and other fields. The channel and two complexities are the key concepts of ID. Two kinds of complexities, C^S(ρ) and T^S(ρ; Λ*), are used in ID: C^S(ρ) is the complexity of a state ρ measured from a subset S, and T^S(ρ; Λ*) is the transmitted complexity associated with the state change from ρ to Λ*ρ. Let S, S̄, S_t be subsets of S(ℋ₁), S(ℋ₂), S(ℋ₁⊗ℋ₂), respectively. These complexities should fulfill the following conditions:

8. Complexity of system

  1. For any ρ ∈ S, C^S(ρ) is nonnegative (i.e., C^S(ρ) ≥ 0).

  2. For a bijection j from ex S(ℋ₁) to ex S(ℋ₁),

C^S(ρ) = C^S(j(ρ))

holds, where ex S(ℋ₁) is the set of all extremal points of S(ℋ₁).

  3. For ρ ⊗ σ ∈ S_t with ρ ∈ S, σ ∈ S̄,

C^{S_t}(ρ ⊗ σ) = C^S(ρ) + C^{S̄}(σ).

This means that the complexity of the state ρ ⊗ σ of totally independent systems is given by the sum of the complexities of the states ρ and σ.

9. Transmitted complexity

(1′) For any ρ ∈ S and a channel Λ*, T^S(ρ; Λ*) is nonnegative (i.e., T^S(ρ; Λ*) ≥ 0).

(4) C^S(ρ) and T^S(ρ; Λ*) satisfy the inequality 0 ≤ T^S(ρ; Λ*) ≤ C^S(ρ).

(5) If the channel Λ* is given by the identity map id, then T^S(ρ; id) = C^S(ρ) holds.

Examples of the above complexities are the Shannon entropy S(p) for C^S(p) and the classical mutual entropy I(p; Λ*) for T^S(p; Λ*), respectively. Let us consider these complexities for quantum communication processes.

10. Quantum entropy

Since present optical communication uses optical signals exhibiting quantum effects, it is necessary to construct a new information theory dealing with such quantum phenomena in order to discuss the efficiency of information transmission of optical communication processes rigorously. Quantum information theory is important in both mathematics and engineering, and it contains several topics, for instance, quantum entropy theory, quantum communication theory, quantum teleportation, quantum entanglement, quantum algorithms, quantum coding theory and so on. It has been developed together with quantum entropy theory and quantum probability. In quantum information theory, one of the important problems is to investigate how much information is exactly transmitted from the input system to the output system through a quantum channel. The amount of information of the quantum communication system is described by the quantum mutual entropy defined by Ohya [31], based on the quantum entropy of von Neumann [29] and the quantum relative entropy of Umegaki [55], Araki [4] and Uhlmann [54]. Quantum information theory relates directly to quantum communication theory, for instance [40,41,45]. One of the most important communication processes is quantum teleportation, whose new treatment was studied in [24]. It is also important to classify quantum states; one such classification is the study of entanglement and separability of states (see [7,8]). There have been many attempts in finite dimensional Hilbert spaces, but quantum mechanics should basically be discussed in infinite dimensional Hilbert spaces, so such a classification has to be studied there as well.

10.1. Von Neumann entropy

The study of entropy in quantum systems was begun by von Neumann [29] in 1932. For any state given by a density operator ρ, the von Neumann entropy is defined by

S(ρ) = −tr ρ log ρ,  ρ ∈ S(ℋ).

Since the von Neumann entropy satisfies the conditions (1), (2), (3) of the complexity of state of ID, it can be considered as an example of the complexity of state: C^S(ρ) = S(ρ).
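As a small illustration, S(ρ) can be computed from the spectrum of ρ. The sketch below handles 2×2 density matrices with a closed-form eigenvalue formula; all matrices and helper names here are illustrative, not from the chapter.

```python
# Minimal sketch of S(ρ) = -tr ρ log ρ for 2x2 density matrices,
# computed from the eigenvalues of ρ (closed form for Hermitian 2x2).
import math

def eigvals2(rho):
    (a, b), (_, d) = rho[0], rho[1]
    m = (a + d) / 2
    r = math.sqrt(((a - d) / 2)**2 + abs(b)**2)
    return [m - r, m + r]

def von_neumann_entropy(rho):
    # 0 log 0 is taken as 0, so zero eigenvalues are skipped
    return -sum(l * math.log(l) for l in eigvals2(rho) if l > 1e-15)

pure = [[1.0, 0.0], [0.0, 0.0]]    # pure state: S = 0
mixed = [[0.5, 0.0], [0.0, 0.5]]   # maximally mixed qubit: S = log 2
assert abs(von_neumann_entropy(pure)) < 1e-12
assert abs(von_neumann_entropy(mixed) - math.log(2)) < 1e-12
```

Pure states have zero complexity and the maximally mixed state attains the maximum log(dim ℋ), matching the interpretation of S(ρ) as a complexity of state.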

10.2. Entropy for general systems

Here we briefly explain general entropies of states in C*-dynamical systems. The C*-entropy (S-mixing entropy) was introduced by Ohya in [33,35] and its properties are discussed in [28,21].

Let (𝒜, S(𝒜), α(G)) be a C*-dynamical system and S be a weak* compact and convex subset of S(𝒜). For example, S is given by S(𝒜) (the set of all states on 𝒜), I(α) (the set of all invariant states for α), K(α) (the set of all KMS states), and so on. Every state φ ∈ S has a maximal measure μ pseudosupported on ex S such that

φ = ∫_S ω dμ,

where ex S is the set of all extreme points of S. The measure μ giving the above decomposition is not unique unless S is a Choquet simplex. The set of all such measures is denoted by M_φ(S), and D_φ(S) is the subset of M_φ(S) consisting of the measures

D_φ(S) = {μ ∈ M_φ(S); ∃{μ_k} ⊂ ℝ₊ and {φ_k} ⊂ ex S s.t. ∑_k μ_k = 1, μ = ∑_k μ_k δ(φ_k)},

where δ(φ) is the Dirac measure concentrated on the state φ. For a measure μ ∈ D_φ(S), the entropy type functional H(μ) is given by

H(μ) = −∑_k μ_k log μ_k.

For a state φ ∈ S, Ohya introduced the C*-entropy (S-mixing entropy) [33,35] defined by

S^S(φ) = inf{H(μ); μ ∈ D_φ(S)}  (D_φ(S) ≠ ∅),   S^S(φ) = +∞  (D_φ(S) = ∅).

It describes the amount of information of the state φ measured from the subsystem S. If S = S(𝒜), then S^{S(𝒜)}(φ) is denoted by S(φ). This entropy is an extension of the von Neumann entropy mentioned above.

10.3. Quantum relative entropy

The classical relative entropy on continuous probability spaces was defined by Kullback and Leibler [26] and later developed in noncommutative probability spaces. The quantum relative entropy was first defined by Umegaki [55] for σ-finite von Neumann algebras; it expresses a certain difference between two states. It was extended by Araki [4] and Uhlmann [54] to general von Neumann algebras and *-algebras, respectively.

10.4. Umegaki relative entropy

The relative entropy of two states was introduced by Umegaki in [55] for σ-finite and semi-finite von Neumann algebras. Corresponding to the classical relative entropy, for two density operators ρ and σ it is defined as

S(ρ, σ) = tr ρ(log ρ − log σ)  (s(ρ) ≤ s(σ)),   S(ρ, σ) = +∞  (else),

where s(ρ) ≤ s(σ) means that the support projection s(σ) of σ is greater than the support projection s(ρ) of ρ. It expresses a certain difference between the two quantum states ρ and σ. The Umegaki relative entropy satisfies (1) positivity, (2) joint convexity, (3) symmetry, (4) additivity, (5) lower semicontinuity and (6) monotonicity. Araki [4] and Uhlmann [54] extended this relative entropy to more general quantum systems.
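For commuting density operators the Umegaki relative entropy reduces to the classical Kullback-Leibler divergence of the spectra. The following hedged sketch (illustrative values, diagonal states only) also exercises the support condition s(ρ) ≤ s(σ).

```python
# Sketch: S(ρ,σ) = tr ρ(log ρ - log σ) for diagonal ρ = diag(p), σ = diag(q),
# which is the classical Kullback-Leibler divergence; +inf when supp ρ ≤ supp σ fails.
import math

def relative_entropy_diag(p, q):
    if any(pi > 0 and qi == 0 for pi, qi in zip(p, q)):
        return math.inf                     # support condition violated
    return sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q) if pi > 0)

p = [0.7, 0.3]
q = [0.5, 0.5]
assert relative_entropy_diag(p, p) == 0                           # S(ρ,ρ) = 0
assert relative_entropy_diag(p, q) > 0                            # positivity
assert relative_entropy_diag([1.0, 0.0], [0.0, 1.0]) == math.inf  # disjoint supports
```

For non-commuting ρ and σ one would diagonalize both operators and evaluate tr ρ(log ρ − log σ) in a common matrix representation; the diagonal case above already displays positivity and the support condition.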

10.5. Relative entropy for general systems

The relative entropy for two general states was introduced by Araki [4,5] in von Neumann algebras and by Uhlmann [54] in *-algebras. The above properties also hold for these relative entropies.

10.5.1. Araki's relative entropy [4,5]

Let 𝒩 be a σ-finite von Neumann algebra acting on a Hilbert space ℋ and φ, ψ be normal states on 𝒩 given by φ(·) = ⟨x, ·x⟩ and ψ(·) = ⟨y, ·y⟩ with x, y ∈ 𝒦 (a positive natural cone). On the domain 𝒩y + (I − s^{𝒩′}(y))ℋ, the operator S_{x,y} is defined by

S_{x,y}(Ay + z) = s^{𝒩}(y)A*x,  A ∈ 𝒩  (z satisfying s^{𝒩′}(y)z = 0),

where s^{𝒩}(y) (the 𝒩-support of y) is the projection from ℋ onto the closure of 𝒩′y. Using this S_{x,y}, the relative modular operator Δ_{x,y} is defined as Δ_{x,y} = (S_{x,y})* S̄_{x,y}, whose spectral decomposition is denoted by ∫₀^∞ λ de_{x,y}(λ) (S̄_{x,y} is the closure of S_{x,y}). Then the Araki relative entropy is given by:

Definition The Araki relative entropy of φ and ψ is defined by

S(ψ, φ) = −∫₀^∞ log λ d⟨y, e_{x,y}(λ)y⟩  (ψ ≪ φ),   S(ψ, φ) = +∞  (otherwise),

where ψ ≪ φ means that φ(A*A) = 0 implies ψ(A*A) = 0 for A ∈ 𝒩.

10.5.2. Uhlmann's relative entropy [54]

Let ℒ be a complex linear space and p, q be two semi-norms on ℒ. H(ℒ; p, q) is the set of all positive Hermitian forms α on ℒ satisfying |α(x, y)| ≤ p(x)q(y) for all x, y ∈ ℒ. For x ∈ ℒ, the quadratical mean QM(p, q) of p and q is defined by

QM(p, q)(x) = sup{α(x, x)^{1/2}; α ∈ H(ℒ; p, q)}.

For each x ∈ ℒ, there exists a family of semi-norms p_t(x), t ∈ [0, 1], called the quadratical interpolation from p to q, satisfying the following conditions:

  1. For any x ∈ ℒ, p_t(x) is continuous in t,

  2. p_{1/2} = QM(p, q),

  3. p_{t/2} = QM(p, p_t)  (t ∈ [0, 1]),

  4. p_{(t+1)/2} = QM(p_t, q)  (t ∈ [0, 1]).

This semi-norm p_t is denoted by QI_t(p, q). It is shown that for any positive Hermitian forms α, β, there exists a unique function QF_t(α, β) of t ∈ [0, 1] with values in the set H(ℒ; p, q) such that QF_t(α, β)(x, x)^{1/2} is the quadratical interpolation from α(x, x)^{1/2} to β(x, x)^{1/2}. For x ∈ ℒ, the relative entropy functional S(α, β)(x) of α and β is defined as

S(α, β)(x) = lim inf_{t→+0} (1/t){QF_t(α, β)(x, x) − α(x, x)}.

Let 𝒜 be a *-algebra. For positive linear functionals φ, ψ on 𝒜, two Hermitian forms φ^L, ψ^R are given by φ^L(A, B) = φ(A*B) and ψ^R(A, B) = ψ(BA*).

Definition The Uhlmann relative entropy of φ and ψ is defined by

S(ψ, φ) = S(ψ^R, φ^L)(I).

10.5.3. Ohya mutual entropy [31]

The Ohya mutual entropy [31] with respect to an initial state ρ and a quantum channel Λ* is given by

I(ρ; Λ*) ≡ sup{∑_n λ_n S(Λ*E_n, Λ*ρ); ρ = ∑_n λ_n E_n},

where S(·,·) is the Umegaki relative entropy and ρ = ∑_n λ_n E_n represents a Schatten-von Neumann (one dimensional orthogonal) decomposition [49] of ρ. Since the Schatten-von Neumann decomposition of a state ρ is not unique unless the eigenvalues of ρ are nondegenerate, the Ohya mutual entropy is defined by taking the supremum over all Schatten-von Neumann decompositions of ρ. The Ohya mutual entropy satisfies the following Shannon-type inequality [31]:

0 ≤ I(ρ; Λ*) ≤ min{S(ρ), S(Λ*ρ)},

where S(ρ) is the von Neumann entropy. This inequality shows that the Ohya mutual entropy represents the amount of information correctly carried from the input system to the output system through the quantum channel. The capacity expresses the ability of information transmission of the communication process; it was studied in [40,41,45].
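As a toy illustration of the definition (not a computation from the chapter), consider a state with nondegenerate spectrum and a channel that dephases in its eigenbasis. Everything then commutes, the Schatten decomposition is unique, and the sum ∑_n λ_n S(Λ*E_n, Λ*ρ) equals S(ρ), saturating the Shannon-type inequality; the spectrum below is an arbitrary illustrative choice.

```python
# Toy computation of I(ρ;Λ*) = Σ_n λ_n S(Λ*E_n, Λ*ρ) when Λ* dephases in the
# eigenbasis of ρ: then Λ*ρ = ρ, Λ*E_n = E_n, and everything is classical.
import math

def shannon(p):
    return -sum(x * math.log(x) for x in p if x > 0)

def rel_ent(p, q):  # classical relative entropy; valid since all operators commute here
    return sum(x * (math.log(x) - math.log(y)) for x, y in zip(p, q) if x > 0)

lam = [0.5, 0.3, 0.2]   # nondegenerate spectrum -> unique Schatten decomposition

def spec_E(n):          # spectrum of the n-th eigenprojection E_n
    return [1.0 if m == n else 0.0 for m in range(len(lam))]

mutual = sum(l * rel_ent(spec_E(n), lam) for n, l in enumerate(lam))

S_in, S_out = shannon(lam), shannon(lam)   # S(ρ) = S(Λ*ρ) for this channel
assert abs(mutual - S_in) < 1e-12          # I(ρ;Λ*) = S(ρ): full transmission
assert 0 <= mutual <= min(S_in, S_out) + 1e-12
```

This saturation is consistent with ID condition (5): a channel acting as the identity on ρ transmits the whole complexity S(ρ).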

For a certain set S ⊂ S(ℋ₁) satisfying some physical conditions, the capacity of a quantum channel Λ* [40] is defined by

C_q^S(Λ*) ≡ sup{I(ρ; Λ*); ρ ∈ S}.

If S = S(ℋ₁) holds, then the capacity is denoted by C_q(Λ*). The following theorem for the attenuation channel was proved in [40].

Theorem For the subset S_n ≡ {ρ ∈ S(ℋ₁); dim s(ρ) = n}, the capacity of the attenuation channel Λ₀* satisfies

C_q^{S_n}(Λ₀*) = log n,

where s(ρ) is the support projection of ρ.

10.6. Mutual entropy for general systems

Based on the classical relative entropy, the mutual entropy was discussed by Shannon to study information transmission in classical systems, and it was extended by Ohya [33,34,35] to fully general quantum systems.

Let (𝒜, S(𝒜), α(G)) be a unital C*-system and S be a weak* compact convex subset of S(𝒜). For an initial state φ ∈ S and a channel Λ*: S(𝒜) → S(ℬ), two compound states are

Φ_μ^S = ∫_S ω ⊗ Λ*ω dμ,   Φ₀ = φ ⊗ Λ*φ.

The compound state Φ_μ^S expresses the correlation between the input state φ and the output state Λ*φ. The mutual entropy with respect to S and μ is given by

I_μ^S(φ; Λ*) = S(Φ_μ^S, Φ₀)

and the mutual entropy with respect to S is defined by Ohya [33] as

I^S(φ; Λ*) = sup{I_μ^S(φ; Λ*); μ ∈ M_φ(S)}.

10.7. Mutual entropy type complexity

Shor [53] and Bennett et al. [6,10] proposed mutual entropy type measures, the so-called coherent entropy and Lindblad-Nielsen entropy, by using the entropy exchange [50] defined by

S_e(ρ, Λ*) = −tr W log W,

where W is the matrix W = (W_ij)_{i,j} with

W_ij ≡ tr A_i ρ A_j*

for a state ρ concerning a Stinespring-Sudarshan-Kraus form

Λ*(·) ≡ ∑_j A_j · A_j*

of a channel Λ*. Then the coherent entropy I_C(ρ; Λ*) [53] and the Lindblad-Nielsen entropy I_L(ρ; Λ*) [10] are given by

I_C(ρ; Λ*) ≡ S(Λ*ρ) − S_e(ρ, Λ*),

I_L(ρ; Λ*) ≡ S(ρ) + S(Λ*ρ) − S_e(ρ, Λ*).

In this section, we compare these mutual entropy type measures. By calculating them for quantum information communication processes, we have the following theorem [47]:

Theorem Let {A_j} be a projection valued measure with dim A_j = 1. For an arbitrary state ρ and the quantum channel Λ*(·) ≡ ∑_j A_j · A_j, one has

  1. 0 ≤ I(ρ; Λ*) ≤ min{S(ρ), S(Λ*ρ)} (Ohya mutual entropy),

  2. I_C(ρ; Λ*) = 0 (coherent entropy),

  3. I_L(ρ; Λ*) = S(ρ) (Lindblad-Nielsen entropy).
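This theorem can be verified numerically for a qubit. In the sketch below (the input state is an arbitrary illustrative choice), the projections A_j = |j⟩⟨j| give the entropy-exchange matrix W_ij = tr A_i ρ A_j* = δ_ij ρ_jj, hence S_e = S(Λ*ρ) and therefore I_C = 0 and I_L = S(ρ).

```python
# Hedged qubit check: for A_j = |j><j|, W = diag(ρ_00, ρ_11) = Λ*ρ,
# so I_C = S(Λ*ρ) - S_e = 0 and I_L = S(ρ) + S(Λ*ρ) - S_e = S(ρ).
import math

def h(p):  # Shannon entropy of a probability list
    return -sum(x * math.log(x) for x in p if x > 0)

rho = [[0.7, 0.2 + 0.1j], [0.2 - 0.1j, 0.3]]   # illustrative input density matrix

# spectrum of rho (2x2 Hermitian closed form) -> von Neumann entropy S(ρ)
m = (rho[0][0].real + rho[1][1].real) / 2
r = math.sqrt(((rho[0][0].real - rho[1][1].real) / 2)**2 + abs(rho[0][1])**2)
S_rho = h([m - r, m + r])

diag = [rho[0][0].real, rho[1][1].real]   # Λ*ρ = Σ_j A_j ρ A_j is diagonal
S_out = h(diag)                           # S(Λ*ρ)
S_e = h(diag)                             # spectrum of W = diag(ρ_00, ρ_11)

I_C = S_out - S_e                         # coherent entropy
I_L = S_rho + S_out - S_e                 # Lindblad-Nielsen entropy
assert abs(I_C) < 1e-12                   # I_C(ρ;Λ*) = 0
assert abs(I_L - S_rho) < 1e-12           # I_L(ρ;Λ*) = S(ρ)
```

The Kronecker delta in W_ij comes from ⟨j|i⟩ inside the trace, which is why the entropy exchange coincides with the output entropy for this channel.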

For the attenuation channel Λ₀*, one can obtain the following theorems [47]:

Theorem For any state ρ = ∑_n λ_n |n⟩⟨n| and the attenuation channel Λ₀* with |α|² = |β|² = 1/2, one has

  1. 0 ≤ I(ρ; Λ₀*) ≤ min{S(ρ), S(Λ₀*ρ)} (Ohya mutual entropy),

  2. I_C(ρ; Λ₀*) = 0 (coherent entropy),

  3. I_L(ρ; Λ₀*) = S(ρ) (Lindblad-Nielsen entropy).

Theorem For the attenuation channel Λ₀* and the input state ρ = λ|0⟩⟨0| + (1 − λ)|θ⟩⟨θ|, we have

  1. 0 ≤ I(ρ; Λ₀*) ≤ min{S(ρ), S(Λ₀*ρ)} (Ohya mutual entropy),

  2. −S(ρ) ≤ I_C(ρ; Λ₀*) ≤ S(ρ) (coherent entropy),

  3. 0 ≤ I_L(ρ; Λ₀*) ≤ 2S(ρ) (Lindblad-Nielsen entropy).

The above theorem shows that the coherent entropy I_C(ρ; Λ₀*) takes negative values for |α|² < |β|² and that the Lindblad-Nielsen entropy I_L(ρ; Λ₀*) exceeds the von Neumann entropy of the input state ρ for |α|² > |β|². From these theorems and other results [47], we conclude that the Ohya mutual entropy is the most suitable measure for discussing the efficiency of information transmission in quantum communication processes; it can thus be considered as the transmitted complexity for such processes.

11. Quantum dynamical entropy

The classical dynamical (or Kolmogorov-Sinai) entropy S(T) [23] for a measure preserving transformation T was defined on a message space through finite partitions of the measurable space.

The classical coding theorems of Shannon are important tools to analyse communication processes which have been formulated by the mean dynamical entropy and the mean dynamical mutual entropy. The mean dynamical entropy represents the amount of information per one letter of a signal sequence sent from an input source, and the mean dynamical mutual entropy does the amount of information per one letter of the signal received in an output system.

The quantum dynamical entropy (QDE) was studied by Connes-Størmer [13], Emch [15], Connes-Narnhofer-Thirring [12], Alicki-Fannes [3], and others [9,48,19,57,11]. Their dynamical entropies were defined on spaces of observables. Recently, the quantum dynamical entropy and the quantum dynamical mutual entropy were studied by the present authors [34,35]: (1) the dynamical entropy is defined in the state spaces through the complexity of Information Dynamics [36]; (2) the definition through the quantum Markov chain (QMC) was given in [2]; (3) the dynamical entropy for a completely positive (CP) map was introduced in [25].

12. Mean entropy and mean mutual entropy

The classical Shannon coding theorems are important tools for studying communication processes, formulated by means of the mean entropy and the mean mutual entropy based on the classical dynamical entropy. The mean entropy shows the amount of information per letter of a signal sequence of an input source, and the mean mutual entropy denotes the amount of information per letter of the signal received in an output system. These mean entropies were extended to general quantum systems.

In this section, we briefly explain a new formulation of quantum mean mutual entropy of K-S type given by Ohya [35,27].

In quantum information theory, a stationary information source is described by a C*-triple (𝒜, S(𝒜), θ_𝒜) with a stationary state φ with respect to θ_𝒜; that is, 𝒜 is a unital C*-algebra, S(𝒜) is the set of all states over 𝒜, θ_𝒜 is an automorphism of 𝒜, and φ ∈ S(𝒜) is a state over 𝒜 with φ ∘ θ_𝒜 = φ.

Let an output C*-dynamical system be the triple (ℬ, S(ℬ), θ_ℬ), and Λ*: S(𝒜) → S(ℬ) be a covariant channel which is the dual of a completely positive unital map Λ: ℬ → 𝒜 such that Λ ∘ θ_ℬ = θ_𝒜 ∘ Λ.

In this section, we explain the functionals S_μ^S(φ; α^M), S^S(φ; α^M), I_μ^S(φ; α^M, β^N) and I^S(φ; α^M, β^N) introduced in [35,27] for a pair of finite sequences α^M = (α₁, α₂, …, α_M), β^N = (β₁, β₂, …, β_N) of completely positive unital maps α_m: 𝒜_m → 𝒜, β_n: ℬ_n → ℬ, where 𝒜_m and ℬ_n (m = 1, …, M, n = 1, …, N) are finite dimensional unital C*-algebras.

Let S be a weak* convex subset of S(𝒜) and φ be a state in S. We denote by M_φ(S) the set of all regular Borel probability measures μ on the state space S(𝒜) of 𝒜 such that μ is maximal in the Choquet ordering and μ represents φ = ∫_{S(𝒜)} ω dμ(ω). Such measures are the extremal decomposition measures for φ; using Choquet's theorem, one can show that such a measure exists for any state φ ∈ S(𝒜). For a given finite sequence of completely positive unital maps α_m: 𝒜_m → 𝒜 from finite dimensional unital C*-algebras 𝒜_m (m = 1, …, M) and a given extremal decomposition measure μ of φ, the compound state of α₁*φ, α₂*φ, …, α_M*φ on the tensor product algebra ⊗_{m=1}^{M} 𝒜_m is given by [35,27]

Φ_μ^S(α^M) = ∫_{S(𝒜)} ⊗_{m=1}^{M} α_m*ω dμ(ω).

Furthermore, Φ_μ^S(α^M ∪ β^N) is a compound state of Φ_μ^S(α^M) and Φ_μ^S(β^N) with α^M ∪ β^N ≡ (α₁, α₂, …, α_M, β₁, β₂, …, β_N), constructed as

Φ_μ^S(α^M ∪ β^N) = ∫_{S(𝒜)} (⊗_{m=1}^{M} α_m*ω) ⊗ (⊗_{n=1}^{N} β_n*ω) dμ.

For any pair (α^M, β^N) of finite sequences α^M = (α₁, …, α_M) and β^N = (β₁, …, β_N) of completely positive unital maps α_m: 𝒜_m → 𝒜, β_n: ℬ_n → 𝒜 from finite dimensional unital C*-algebras and any extremal decomposition measure μ of φ, the entropy functional S_μ and the mutual entropy functional I_μ are defined in [35,27] as

S_μ^S(φ; α^M) = ∫_{S(𝒜)} S(⊗_{m=1}^{M} α_m*ω, Φ_μ^S(α^M)) dμ(ω),   I_μ^S(φ; α^M, β^N) = S(Φ_μ^S(α^M ∪ β^N), Φ_μ^S(α^M) ⊗ Φ_μ^S(β^N)),

where S(·,·) is the relative entropy.

For a given pair of finite sequences of completely positive unital maps α^M = (α₁, …, α_M), β^N = (β₁, …, β_N), the functional S^S(φ; α^M) (resp. I^S(φ; α^M, β^N)) is given in [35,27] by taking the supremum of S_μ^S(φ; α^M) (resp. I_μ^S(φ; α^M, β^N)) over all possible extremal decompositions μ of φ:

S^S(φ; α^M) = sup{S_μ^S(φ; α^M); μ ∈ M_φ(S)},   I^S(φ; α^M, β^N) = sup{I_μ^S(φ; α^M, β^N); μ ∈ M_φ(S)}.

Let 𝒜 (resp. ℬ) be a unital C*-algebra with a fixed automorphism θ_𝒜 (resp. θ_ℬ), Λ be a covariant completely positive unital map from ℬ to 𝒜, and φ be an invariant state over 𝒜, i.e., φ ∘ θ_𝒜 = φ. Put

α^N ≡ (α, θ_𝒜 α, …, θ_𝒜^{N−1} α),   β_Λ^N ≡ (Λβ, Λθ_ℬ β, …, Λθ_ℬ^{N−1} β).

For each completely positive unital map α: 𝒜₀ → 𝒜 (resp. β: ℬ₀ → ℬ) from a finite dimensional unital C*-algebra 𝒜₀ (resp. ℬ₀) to 𝒜 (resp. ℬ), S̃^S(φ; θ_𝒜, α) and Ĩ^S(φ; Λ, θ_𝒜, θ_ℬ, α, β) are given in [35,27] by

S̃^S(φ; θ_𝒜, α) = lim inf_{N→∞} (1/N) S^S(φ; α^N),   Ĩ^S(φ; Λ, θ_𝒜, θ_ℬ, α, β) = lim inf_{N→∞} (1/N) I^S(φ; α^N, β_Λ^N).

The functionals S̃^S(φ; θ_𝒜) and Ĩ^S(φ; Λ, θ_𝒜, θ_ℬ) are defined by taking the supremum over all possible 𝒜₀'s, α's, ℬ₀'s and β's:

S̃^S(φ; θ_𝒜) = sup_α S̃^S(φ; θ_𝒜, α),   Ĩ^S(φ; Λ, θ_𝒜, θ_ℬ) = sup_{α,β} Ĩ^S(φ; Λ, θ_𝒜, θ_ℬ, α, β).

Then the fundamental inequality of information theory holds for S̃^S(φ; θ_𝒜) and Ĩ^S(φ; Λ, θ_𝒜, θ_ℬ) [35]:

12.1. Proposition

0 ≤ Ĩ^S(φ; Λ, θ_𝒜, θ_ℬ) ≤ min{S̃^S(φ; θ_𝒜), S̃^S(Λ*φ; θ_ℬ)}.

These functionals S̃^S(φ; θ_𝒜) and Ĩ^S(φ; Λ, θ_𝒜, θ_ℬ) are constructed from the functionals S_μ^S(φ; α^N) and I_μ^S(φ; α^N, β^N) coming from information theory, and they are obtained by using a channel transformation, so that they contain the dynamical entropy as a special case [35,27]. Moreover, these functionals contain the usual K-S entropies as follows [35,27].

Proposition If 𝒜_k, 𝒜 are abelian C*-algebras and each α_k is an embedding, then these functionals coincide with the classical K-S entropies:

S_μ^{S(𝒜)}(φ; α^M) = S_μ^{classical}(⋁_{m=1}^{M} Ã_m),   I_μ^{S(𝒜)}(φ; α^M, β_id^N) = I_μ^{classical}(⋁_{m=1}^{M} Ã_m, ⋁_{n=1}^{N} B̃_n)

for any finite partitions Ã_m, B̃_n of a probability space (Ω, ℱ, φ).

In general quantum structure, we have the following theorems [35,27].

Theorem Let α_m be a sequence of completely positive maps α_m: 𝒜_m → 𝒜 such that there exist completely positive maps α_m′: 𝒜 → 𝒜_m satisfying α_m ∘ α_m′ → id_𝒜 in the pointwise topology. Then:

S̃^S(φ; θ_𝒜) = lim_{m→∞} S̃^S(φ; θ_𝒜, α_m).

Theorem Let α_m and β_m be sequences of completely positive maps α_m: 𝒜_m → 𝒜 and β_m: ℬ_m → ℬ such that there exist completely positive maps α_m′: 𝒜 → 𝒜_m and β_m′: ℬ → ℬ_m satisfying α_m ∘ α_m′ → id_𝒜 and β_m ∘ β_m′ → id_ℬ in the pointwise topology. Then one has

Ĩ^S(φ; Λ, θ_𝒜, θ_ℬ) = lim_{m→∞} Ĩ^S(φ; Λ, θ_𝒜, θ_ℬ, α_m, β_m).

The above theorem is a Kolmogorov-Sinai type convergence theorem for the mutual entropy [35,27,28,34].

In particular, a quantum extension of the classical formulation of information transmission, giving a basis for Shannon's coding theorems, can be considered in the case that 𝒜 = ⊗_ℤ 𝒜₀, ℬ = ⊗_ℤ ℬ₀, S = S(𝒜), and θ_𝒜, θ_ℬ are shift operators, both denoted by θ. In this case, the channel capacity is defined as [40,41,45,46,38,39,42,43]

C̃(Λ*) ≡ sup{Ĩ^S(φ; Λ, θ); φ ∈ S}.

Using this capacity, one can consider Shannon's coding theorems in fully quantum systems.

13. Computations of mean entropies for modulated states

Based on the paper [59], we here explain general modulated states and briefly review some examples of modulated states (PPM, OOK, PSK).

Let {a₁, …, a_N} be an alphabet set constructing the input signals and 𝒩 ≡ {E₁, …, E_N} be a set of one dimensional projections on a Hilbert space ℋ satisfying

  1. E_n ⊥ E_m (n ≠ m),

  2. E_n corresponds to the alphabet a_n.

We denote the set of all density operators on ℋ generated by 𝒩 by

S₀ ≡ {ρ₀ = ∑_{n=1}^{N} λ_n E_n; ρ₀ ≥ 0, tr ρ₀ = 1},

where an element of S₀ represents a state of the quantum input system. The state is transmitted from the quantum input system to the quantum modulator in order to send information effectively; the transmitted state is called the quantum modulated state. The quantum modulated states are described as follows: let M be an ideal modulator and 𝒩^{(M)} ≡ {E₁^{(M)}, …, E_N^{(M)}} be the set of one dimensional projections on a Hilbert space ℋ_M for modulated signals satisfying E_n^{(M)} ⊥ E_m^{(M)} (n ≠ m), and we represent the set of all density operators on ℋ_M by

S₀^{(M)} ≡ {ρ₀^{(M)} = ∑_{n=1}^{N} μ_n E_n^{(M)}; ρ₀^{(M)} ≥ 0, tr ρ₀^{(M)} = 1},

where an element of S₀^{(M)} represents a modulated state of the quantum input system. There are many expressions for the modulations. In this section, we take the modulated states by means of the photon number states.

γ_M* is called a modulator M if γ_M*(E_n) = E_n^{(M)} defines a map from S₀ to S₀^{(M)} such that γ_M is a completely positive unital map from 𝒜₀ to 𝒜. Moreover, γ_{IM}* is called an ideal modulator IM if (1) γ_{IM}*(E_n) = E_n^{(M)} is a modulator from S₀ to S₀^{(M)} and (2) γ_{IM}*(E_n) ⊥ γ_{IM}*(E_m) for any orthogonal E_n, E_m ∈ S₀. Some examples of ideal modulators are given as follows:

  1. For any $E_n\in S_0$, the PPM (Pulse Position Modulation) is defined by

$$\gamma^{(PPM)}(E_n)\equiv E_n^{(PPM)}=E_0^{(PAM)}\otimes\cdots\otimes E_0^{(PAM)}\otimes E_d^{(PAM)}\otimes E_0^{(PAM)}\otimes\cdots\otimes E_0^{(PAM)},$$

with $E_d^{(PAM)}$ in the $n$-th position, where $E_0^{(PAM)}$ is the vacuum state on $\mathcal{H}^{(PAM)}$.

  2. For $E_1,E_2\in S_0$, the OOK (On-Off Keying) is defined by

$$\gamma^{(OOK)}(E_1)\equiv E_1^{(OOK)}=|0\rangle\langle0|,\qquad\gamma^{(OOK)}(E_2)\equiv E_2^{(OOK)}=|\kappa\rangle\langle\kappa|,$$

where $|\kappa\rangle\langle\kappa|$ is the coherent state on $\mathcal{H}^{(OOK)}$.

  3. For $E_1,E_2\in S_0$, the PSK (Phase Shift Keying) is defined by

$$\gamma^{(PSK)}(E_1)\equiv E_1^{(PSK)}=|\kappa\rangle\langle\kappa|,\qquad\gamma^{(PSK)}(E_2)\equiv E_2^{(PSK)}=|-\kappa\rangle\langle-\kappa|,$$

where $|\kappa\rangle\langle\kappa|$ and $|-\kappa\rangle\langle-\kappa|$ are the coherent states on $\mathcal{H}^{(PSK)}$.

Now we briefly review the calculation of the mean mutual entropy of K-S type for the modulated state (PSK) by means of the coherent states. The other calculations are given in [59].

$\alpha_{(IM)}^N$ and $\beta_{(IM)}^N$ are given by

$$\alpha_{(IM)}^N\equiv\big(\alpha\circ\tilde{\gamma}^{(IM)},\,\theta_A\circ\alpha\circ\tilde{\gamma}^{(IM)},\dots,\theta_A^{N-1}\circ\alpha\circ\tilde{\gamma}^{(IM)}\big),\qquad\beta_{(IM)}^N\equiv\big(\tilde{\gamma}^{(IM)}\circ\Lambda\circ\beta,\,\tilde{\gamma}^{(IM)}\circ\Lambda\circ\theta\circ\beta,\dots,\tilde{\gamma}^{(IM)}\circ\Lambda\circ\theta^{N-1}\circ\beta\big),$$

where $\tilde{\Lambda}_i=\Lambda$ and $\tilde{\gamma}_i^{(IM)}=\gamma^{(IM)}$ hold.

PSK. For an initial state $\rho=\bigotimes_{i=-\infty}^{\infty}\rho_i\in\bigotimes_{i=-\infty}^{\infty}S_i$, let $\rho_i=\nu|\kappa\rangle\langle\kappa|+(1-\nu)|-\kappa\rangle\langle-\kappa|\ (0\leq\nu\leq1)$. The Schatten decomposition of $\rho_i$ is obtained as

$$\rho_i=\sum_{n_i=1}^{2}\lambda_{n_i}E_{n_i}^{(PSK)},$$

where the eigenvalues $\lambda_{n_i}$ of $\rho_i$ are

$$\lambda_1=\frac{1}{2}\Big\{1+\sqrt{1-4\nu(1-\nu)\big(1-\exp(-|2\kappa|^2)\big)}\Big\},\qquad\lambda_2=\frac{1}{2}\Big\{1-\sqrt{1-4\nu(1-\nu)\big(1-\exp(-|2\kappa|^2)\big)}\Big\}.$$

The two projections $E_{n_i}^{(PSK)}\ (n_i=1,2)$ and the eigenvectors $|e_{n_i}^{(PSK)}\rangle$ associated with $\lambda_{n_i}\ (n_i=1,2)$ are given by

$$E_{n_i}^{(PSK)}=|e_{n_i}^{(PSK)}\rangle\langle e_{n_i}^{(PSK)}|,$$

$$|e_{n_i}^{(PSK)}\rangle=a_{n_i}|\kappa\rangle+b_{n_i}|-\kappa\rangle\quad(n_i=1,2),$$

where

$$|b_{n_i}|^2=\frac{1}{\tau_{n_i}^2+2\exp(-2|\kappa|^2)\tau_{n_i}+1},\qquad|a_{n_i}|^2=\tau_{n_i}^2|b_{n_i}|^2,\qquad a_{n_i}\overline{b_{n_i}}=\overline{a_{n_i}}b_{n_i}=\tau_{n_i}|b_{n_i}|^2,$$

$$\tau_1=\frac{-(1-2\nu)+\sqrt{1-4\nu(1-\nu)\big(1-\exp(-|2\kappa|^2)\big)}}{2(1-\nu)\exp(-2|\kappa|^2)},\qquad\tau_2=\frac{-(1-2\nu)-\sqrt{1-4\nu(1-\nu)\big(1-\exp(-|2\kappa|^2)\big)}}{2(1-\nu)\exp(-2|\kappa|^2)}.$$

For the above initial state $E_{n_i}^{(PSK)}$, one can obtain the output state through the attenuation channel $\Lambda$ as follows:

$$\Lambda E_{n_i}^{(PSK)}=\sum_{n_i'=1}^{2}\lambda_{n_i,n_i'}E_{n_i,n_i'}^{(PSK)}\quad(n_i=1,2),$$

where the eigenvalues $\lambda_{n_i,n_i'}$ of $\Lambda E_{n_i}^{(PSK)}$ are given by $(n_i=1,2)$

$$\lambda_{n_i,1}=\frac{1}{2}\Big\{1+\sqrt{1-4\mu_{n_i}(1-\mu_{n_i})\big(1-|\langle u_{n_i,1},u_{n_i,2}\rangle|^2\big)}\Big\},\qquad\lambda_{n_i,2}=\frac{1}{2}\Big\{1-\sqrt{1-4\mu_{n_i}(1-\mu_{n_i})\big(1-|\langle u_{n_i,1},u_{n_i,2}\rangle|^2\big)}\Big\},$$

with

$$\mu_{n_i}=\frac{1}{2}\big(1+\exp(-2(1-\eta)|\kappa|^2)\big)\,\frac{\tau_{n_i}^2+2\exp(-2|\alpha\kappa|^2)\tau_{n_i}+1}{\tau_{n_i}^2+2\exp(-2|\kappa|^2)\tau_{n_i}+1},$$

$$|\langle u_{n_i,n_i'},u_{n_i,n_i'}\rangle|^2=1,\qquad\langle u_{n_i,1},u_{n_i,2}\rangle=\frac{\tau_{n_i}^2-1}{\sqrt{(\tau_{n_i}^2+1)^2-4\exp(-|2\alpha\kappa|^2)\tau_{n_i}^2}}\quad(n_i=1,2).$$

$E_{n_i,n_i'}^{(PSK)}$ are the eigenstates with respect to $\lambda_{n_i,n_i'}$. Then we have

$$\Phi_E\big(\alpha_{(PSK)}^N\big)=\bigotimes_{i=0}^{N-1}\gamma^{(PSK)}\circ\alpha\circ\theta_A^i(\rho)=\bigotimes_{i=0}^{N-1}\gamma^{(PSK)}(\rho_i)=\sum_{n_0=1}^{2}\cdots\sum_{n_{N-1}=1}^{2}\Big(\prod_{k=0}^{N-1}\lambda_{n_k}\Big)\Big(\bigotimes_{i=0}^{N-1}E_{n_i}^{(PSK)}\Big).$$

When $\Lambda$ is given by the attenuation channel, we get

$$\Phi_E\big(\beta\Lambda_{(PSK)}^N\big)=\bigotimes_{i=0}^{N-1}\beta\circ\theta_B^i\circ\Lambda\circ\gamma^{(PSK)}(\rho)=\sum_{n_0=1}^{2}\cdots\sum_{n_{N-1}=1}^{2}\Big(\prod_{k=0}^{N-1}\lambda_{n_k}\Big)\Big(\bigotimes_{i=0}^{N-1}\Lambda E_{n_i}^{(PSK)}\Big).$$

The compound states through the attenuation channel $\Lambda$ become

$$\Phi_E\big(\alpha_{(PSK)}^N\otimes\beta\Lambda_{(PSK)}^N\big)=\sum_{n_0=1}^{2}\cdots\sum_{n_{N-1}=1}^{2}\Big(\prod_{k=0}^{N-1}\lambda_{n_k}\Big)\sum_{n_0'=1}^{2}\cdots\sum_{n_{N-1}'=1}^{2}\Big(\prod_{k'=0}^{N-1}\lambda_{n_{k'},n_{k'}'}\Big)\times\Big(\bigotimes_{i=0}^{N-1}E_{n_i}^{(PSK)}\Big)\otimes\Big(\bigotimes_{i=0}^{N-1}E_{n_i,n_i'}^{(PSK)}\Big),$$

$$\Phi_E\big(\alpha_{(PSK)}^N\big)\otimes\Phi_E\big(\beta\Lambda_{(PSK)}^N\big)=\sum_{n_0=1}^{2}\cdots\sum_{n_{N-1}=1}^{2}\Big(\prod_{k=0}^{N-1}\lambda_{n_k}\Big)\sum_{m_0=1}^{2}\cdots\sum_{m_{N-1}=1}^{2}\Big(\prod_{k'=0}^{N-1}\lambda_{m_{k'}}\Big)\sum_{m_0'=1}^{2}\cdots\sum_{m_{N-1}'=1}^{2}\Big(\prod_{k''=0}^{N-1}\lambda_{m_{k''},m_{k''}'}\Big)\times\Big(\bigotimes_{i=0}^{N-1}E_{n_i}^{(PSK)}\Big)\otimes\Big(\bigotimes_{i=0}^{N-1}E_{m_i,m_i'}^{(PSK)}\Big).$$

Lemma. For an initial state $\rho=\bigotimes_{i=-\infty}^{\infty}\rho_i\in\bigotimes_{i=-\infty}^{\infty}S_i$, we have

$$I_E\big(\rho;\alpha_{(PSK)}^N,\beta_{(PSK)}^N\big)=\sum_{n_0=1}^{2}\cdots\sum_{n_{N-1}=1}^{2}\sum_{n_0'=1}^{2}\cdots\sum_{n_{N-1}'=1}^{2}\Big(\prod_{k=0}^{N-1}\lambda_{n_k}\lambda_{n_k,n_k'}\Big)\log\frac{\prod_{k=0}^{N-1}\lambda_{n_k,n_k'}}{\sum_{m_0=1}^{2}\cdots\sum_{m_{N-1}=1}^{2}\prod_{k=0}^{N-1}\lambda_{m_k}\lambda_{m_k,n_k'}}.$$

By using the above lemma, we have the following theorem.

Theorem. For an initial state $\rho=\bigotimes_{i=-\infty}^{\infty}\rho_i\in\bigotimes_{i=-\infty}^{\infty}S_i$, we have

$$\tilde{S}\big(\rho;\theta_A,\alpha_{(PSK)}\big)=\lim_{N\to\infty}\frac{1}{N}S\big(\rho;\alpha_{(PSK)}^N\big)=-\sum_{n=1}^{2}\lambda_n\log\lambda_n$$

and

$$\tilde{I}\big(\rho;\Lambda,\theta_A,\theta_B,\alpha_{(PSK)},\beta_{(PSK)}\big)=\sum_{n=1}^{2}\sum_{n'=1}^{2}\lambda_n\lambda_{n,n'}\log\frac{\lambda_{n,n'}}{\sum_{m=1}^{2}\lambda_m\lambda_{m,n'}}.$$

14. KOW dynamical entropy

In this section, we briefly explain the definition of the KOW entropy according to [25].

For a normal state $\omega$ on $B(\mathcal{K})$ and a normal, unital CP linear map $\Gamma$ from $B(\mathcal{K})\otimes B(\mathcal{H})$ to $B(\mathcal{K})\otimes B(\mathcal{H})$, one can define a transition expectation $\mathcal{E}_{\Gamma,\omega}$ from $B(\mathcal{K})\otimes B(\mathcal{H})$ to $B(\mathcal{H})$ by

$$\mathcal{E}_{\Gamma,\omega}(\tilde{A})=\omega\big(\Gamma(\tilde{A})\big)=\mathrm{tr}_{\mathcal{K}}\,\tilde{\omega}\,\Gamma(\tilde{A}),\qquad\tilde{A}\in B(\mathcal{K})\otimes B(\mathcal{H}),$$

in the sense of [1,25], where $\tilde{\omega}\in S(\mathcal{K})$ is the density operator associated to $\omega$. The dual map $\mathcal{E}_{\Gamma,\omega}^{*}$ is a lifting from $S(\mathcal{H})$ to $S(\mathcal{K}\otimes\mathcal{H})$ given by

$$\mathcal{E}_{\Gamma,\omega}^{*}(\rho)=\Gamma^{*}(\tilde{\omega}\otimes\rho)$$

in the sense of Accardi and Ohya [1]. For a normal, unital CP map $\Lambda$ from $B(\mathcal{H})$ to $B(\mathcal{H})$ and the identity map $\mathrm{id}$ on $B(\mathcal{K})$, the transition expectation is

$$\mathcal{E}_{\Lambda\Gamma,\omega}(\tilde{A})=\omega\big((\mathrm{id}\otimes\Lambda)\Gamma(\tilde{A})\big),\qquad\tilde{A}\in B(\mathcal{K})\otimes B(\mathcal{H}),$$

and the corresponding lifting is defined by

$$\mathcal{E}_{\Lambda\Gamma,\omega}^{*}(\rho)=\Gamma^{*}\big(\tilde{\omega}\otimes\Lambda^{*}(\rho)\big),\qquad\rho\in S(\mathcal{H}),$$

where $\mathrm{id}\otimes\Lambda$ is a normal, unital CP map from $B(\mathcal{K})\otimes B(\mathcal{H})$ to $B(\mathcal{K})\otimes B(\mathcal{H})$ and $\Lambda^{*}$ is a quantum channel [30,21,31,39,44,46,43] from $S(\mathcal{H})$ to $S(\mathcal{H})$ with respect to an input signal state $\rho$ and a noise state $\tilde{\omega}$. Based on the relation

$$\mathrm{tr}_{(\otimes_{1}^{n}\mathcal{K})\otimes\mathcal{H}}\,\Phi_{\Lambda,n}^{\Gamma,\omega}(\rho)\big(A_1\otimes\cdots\otimes A_n\otimes B\big)\equiv\mathrm{tr}\,\rho\,\mathcal{E}_{\Lambda\Gamma,\omega}\Big(A_1\otimes\mathcal{E}_{\Lambda\Gamma,\omega}\big(A_2\otimes\cdots\otimes A_{n-1}\otimes\mathcal{E}_{\Lambda\Gamma,\omega}(A_n\otimes B)\big)\Big)$$

for all $A_1,A_2,\dots,A_n\in B(\mathcal{K})$, $B\in B(\mathcal{H})$ and any $\rho\in S(\mathcal{H})$, a lifting $\Phi_{\Lambda,n}^{\Gamma,\omega}$ from $S(\mathcal{H})$ to $S\big((\otimes_{1}^{n}\mathcal{K})\otimes\mathcal{H}\big)$ and marginal states are given by

$$\rho_{\Lambda,n}^{\Gamma,\omega}\equiv\mathrm{tr}_{\mathcal{H}}\,\Phi_{\Lambda,n}^{\Gamma,\omega}(\rho)\in S\big(\otimes_{1}^{n}\mathcal{K}\big)\quad\text{and}\quad\bar{\rho}_{\Lambda,n}^{\Gamma,\omega}\equiv\mathrm{tr}_{\otimes_{1}^{n}\mathcal{K}}\,\Phi_{\Lambda,n}^{\Gamma,\omega}(\rho)\in S(\mathcal{H}),$$

where $\Phi_{\Lambda,n}^{\Gamma,\omega}(\rho)$ is a compound state with respect to $\bar{\rho}_{\Lambda,n}^{\Gamma,\omega}$ and $\rho_{\Lambda,n}^{\Gamma,\omega}$ in the sense of [25,31].
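The duality between the transition expectation and its lifting can be made concrete in the simplest case. A minimal sketch with $\mathcal{K}=\mathcal{H}=\mathbb{C}^2$ and $\Gamma$ the identity map (a choice made purely for illustration): then $\mathcal{E}_{\Gamma,\omega}(A\otimes B)=\omega(A)B$, the lifting is $\mathcal{E}_{\Gamma,\omega}^{*}(\rho)=\tilde{\omega}\otimes\rho$, and the defining pairing $\mathrm{tr}\,\mathcal{E}^{*}(\rho)(A\otimes B)=\mathrm{tr}\,\rho\,\mathcal{E}(A\otimes B)$ can be verified directly:

```python
import numpy as np

# noise density operator on B(K) and input state on B(H); values illustrative
omega_t = np.diag([0.6, 0.4])            # \tilde{omega}
rho     = np.diag([0.25, 0.75])

# with Gamma = id, the lifting is E*(rho) = omega_t (x) rho
lift = np.kron(omega_t, rho)

A = np.array([[1.0, 0.2], [0.2, 0.0]])   # observable on K
B = np.array([[0.5, 0.0], [0.0, 1.5]])   # observable on H

# transition expectation: E(A (x) B) = tr_K(omega_t A) * B
E_AB = np.trace(omega_t @ A) * B

lhs = np.trace(lift @ np.kron(A, B))     # tr E*(rho) (A (x) B)
rhs = np.trace(rho @ E_AB)               # tr rho E(A (x) B)
print(lhs, rhs)
```

With a nontrivial $\Gamma$ the lifting correlates the noise and signal systems instead of producing a product state.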

Definition. The quantum dynamical entropy with respect to $\Lambda$, $\rho$, $\Gamma$ and $\omega$ is defined by

$$\tilde{S}(\Lambda;\rho,\Gamma,\omega)\equiv\limsup_{n\to\infty}\frac{1}{n}S\big(\rho_{\Lambda,n}^{\Gamma,\omega}\big),$$

where $S\big(\rho_{\Lambda,n}^{\Gamma,\omega}\big)$ is the von Neumann entropy of $\rho_{\Lambda,n}^{\Gamma,\omega}\in S(\otimes_{1}^{n}\mathcal{K})$, defined by $S\big(\rho_{\Lambda,n}^{\Gamma,\omega}\big)=-\mathrm{tr}\,\rho_{\Lambda,n}^{\Gamma,\omega}\log\rho_{\Lambda,n}^{\Gamma,\omega}$. The dynamical entropy with respect to $\Lambda$ and $\rho$ is defined as

$$\tilde{S}(\Lambda;\rho)\equiv\sup\big\{\tilde{S}(\Lambda;\rho,\Gamma,\omega)\,;\,\Gamma,\omega\big\}.$$

15. Formulation of generalized AF and AOW entropies by KOW entropy

In this section, we briefly explain the generalized AF and AOW entropies based on the KOW entropy [25].

For a finite operational partition of unity $\gamma_1,\dots,\gamma_d\in B(\mathcal{H})$, i.e., $\sum_{i=1}^{d}\gamma_i^{*}\gamma_i=I$, and a normal unital CP map $\Lambda$ from $B(\mathcal{H})$ to $B(\mathcal{H})$, transition expectations $\mathcal{E}_{\Lambda}^{\gamma}$ from $M_d\otimes B(\mathcal{H})$ to $B(\mathcal{H})$ and $\mathcal{E}_{\Lambda}^{\gamma(0)}$ from $M_d^{0}\otimes B(\mathcal{H})$ to $B(\mathcal{H})$ are defined by

$$\mathcal{E}_{\Lambda}^{\gamma}\Big(\sum_{i,j=1}^{d}E_{ij}\otimes A_{ij}\Big)\equiv\sum_{i,j=1}^{d}\Lambda\big(\gamma_i^{*}A_{ij}\gamma_j\big),\qquad\mathcal{E}_{\Lambda}^{\gamma(0)}\Big(\sum_{i,j=1}^{d}E_{ij}\otimes A_{ij}\Big)\equiv\sum_{i=1}^{d}\Lambda\big(\gamma_i^{*}A_{ii}\gamma_i\big),$$

where $E_{ij}=|e_i\rangle\langle e_j|$ with normalized vectors $e_i\ (i=1,2,\dots,d\leq\dim\mathcal{H})$, $M_d$ is the $d\times d$ matrix algebra and $M_d^{0}$ is the subalgebra of $M_d$ consisting of diagonal elements of $M_d$. Then the quantum Markov states are given by

$$\rho_{\Lambda,n}^{\gamma}=\sum_{i_1,\dots,i_n=1}^{d}\ \sum_{j_1,\dots,j_n=1}^{d}\mathrm{tr}\Big(\rho\,\Lambda\big(W_{j_1i_1}\big(\Lambda\big(W_{j_2i_2}\big(\cdots\Lambda\big(W_{j_ni_n}(I)\big)\cdots\big)\big)\big)\big)\Big)\times E_{i_1j_1}\otimes\cdots\otimes E_{i_nj_n}$$

and $\rho_{\Lambda,n}^{\gamma(0)}$ is obtained by

$$\rho_{\Lambda,n}^{\gamma(0)}=\sum_{i_1,\dots,i_n=1}^{d}\mathrm{tr}\Big(\rho\,\Lambda\big(W_{i_1i_1}\big(\Lambda\big(W_{i_2i_2}\big(\cdots\Lambda\big(W_{i_ni_n}(I)\big)\cdots\big)\big)\big)\big)\Big)\,E_{i_1i_1}\otimes\cdots\otimes E_{i_ni_n}=\sum_{i_1,\dots,i_n=1}^{d}p_{i_1,\dots,i_n}\,E_{i_1i_1}\otimes\cdots\otimes E_{i_ni_n},$$

where

$$W_{ij}(A)\equiv\gamma_i^{*}A\gamma_j\ \ (A\in B(\mathcal{H})),\qquad W_{ij}^{*}(\rho)\equiv\gamma_j\rho\gamma_i^{*}\ \ (\rho\in S(\mathcal{H})),$$
$$p_{i_1,\dots,i_n}\equiv\mathrm{tr}\Big(\rho\,\Lambda\big(W_{i_1i_1}\big(\Lambda\big(W_{i_2i_2}\big(\cdots\Lambda\big(W_{i_ni_n}(I)\big)\cdots\big)\big)\big)\big)\Big)=\mathrm{tr}\,W_{i_ni_n}^{*}\Big(\Lambda^{*}\big(\cdots W_{i_2i_2}^{*}\big(\Lambda^{*}\big(W_{i_1i_1}^{*}\big(\Lambda^{*}(\rho)\big)\big)\big)\cdots\big)\Big).$$

Therefore the generalized AF entropy $\tilde{S}(\Lambda;\rho)$ and the generalized AOW entropy $\tilde{S}^{(0)}(\Lambda;\rho)$ of $\Lambda$ and $\rho$ with respect to a finite-dimensional subalgebra of $B(\mathcal{H})$ are obtained as

$$\tilde{S}(\Lambda;\rho)\equiv\sup_{\{\gamma_i\}}\tilde{S}\big(\Lambda;\rho,\{\gamma_i\}\big),\qquad\tilde{S}^{(0)}(\Lambda;\rho)\equiv\sup_{\{\gamma_i\}}\tilde{S}^{(0)}\big(\Lambda;\rho,\{\gamma_i\}\big),$$

where the dynamical entropies $\tilde{S}\big(\Lambda;\rho,\{\gamma_i\}\big)$ and $\tilde{S}^{(0)}\big(\Lambda;\rho,\{\gamma_i\}\big)$ are given by

$$\tilde{S}\big(\Lambda;\rho,\{\gamma_i\}\big)\equiv\limsup_{n\to\infty}\frac{1}{n}S\big(\rho_{\Lambda,n}^{\gamma}\big),\qquad\tilde{S}^{(0)}\big(\Lambda;\rho,\{\gamma_i\}\big)\equiv\limsup_{n\to\infty}\frac{1}{n}S\big(\rho_{\Lambda,n}^{\gamma(0)}\big).$$
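The diagonal Markov state $\rho_{\Lambda,n}^{\gamma(0)}$ and the limit defining the generalized AOW entropy can be explored numerically. A sketch on a single qubit (illustrative choices, not from the chapter: $\{\gamma_i\}$ the computational-basis PVM and $\Lambda$ conjugation by a rotation, hence unital and CP): each probability $p_{i_1,\dots,i_n}$ is obtained by alternating the channel with the pinching $\rho\mapsto\gamma_i\rho\gamma_i$, and $S(\rho_{\Lambda,n}^{\gamma(0)})/n$ is tracked in $n$:

```python
import numpy as np
from itertools import product

theta = 0.4
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Lam = lambda r: U @ r @ U.T                       # channel: unitary conjugation
gam = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]  # PVM {gamma_i}

rho = np.diag([0.7, 0.3])                         # input state

def p_string(seq, rho):
    """p_{i1,...,in}: apply the channel, pinch with gamma_i, repeat, trace."""
    r = rho
    for i in seq:
        r = gam[i] @ Lam(r) @ gam[i]
    return float(np.trace(r))

for n in (1, 2, 3, 4, 5, 6):
    ps = np.array([p_string(s, rho) for s in product((0, 1), repeat=n)])
    Sn = -np.sum(ps[ps > 0] * np.log(ps[ps > 0]))
    print(n, Sn / n)      # S(rho^{gamma(0)}_{Lambda,n}) / n
```

Since $\rho_{\Lambda,n}^{\gamma(0)}$ is diagonal, its von Neumann entropy is the Shannon entropy of $\{p_{i_1,\dots,i_n}\}$, so the printed ratios approximate $\tilde{S}^{(0)}(\Lambda;\rho,\{\gamma_i\})$.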

16. Computations of generalized AOW entropy for modulated states

We have the following theorem [25]:

16.1. Theorem

$$\tilde{S}(\Lambda;\rho)\leq\tilde{S}^{(0)}(\Lambda;\rho).$$

$\tilde{S}^{(0)}(\Lambda;\rho)$ is equal to the AOW entropy if $\{\gamma_i\}$ is a PVM (projection valued measure) and $\Lambda$ is given by an automorphism $\theta$. $\tilde{S}(\Lambda;\rho)$ is equal to the AF entropy if $\{\gamma_i^{*}\gamma_i\}$ is a POVM (positive operator valued measure) and $\Lambda$ is given by an automorphism $\theta$. For the noisy optical channel, the generalized AOW entropy is obtained in [58] as follows.

Theorem [58]. When $\rho$ is given by $\rho=\lambda|0\rangle\langle0|+(1-\lambda)|\xi\rangle\langle\xi|$ and $\Lambda$ is the noisy optical channel with the coherent noise $|\kappa\rangle\langle\kappa|$ and parameters $\alpha,\beta$ satisfying $|\alpha|^2+|\beta|^2=1$, the quantum dynamical entropy with respect to $\Lambda$, $\rho$ and $\{\gamma_j\}$ is obtained as

$$\tilde{S}^{(0)}\big(\Lambda;\rho,\{\gamma_j\}\big)=-\sum_{j,k}q_{k,j}\,q_j\log q_{k,j},$$

where

$$q_j=\lambda\,|\langle\beta\kappa,x_j\rangle|^2+(1-\lambda)\,|\langle\alpha\xi+\beta\kappa,x_j\rangle|^2,$$

$$q_{k,j}=\nu_j^{+}\,|\langle x_k,y_j^{+}\rangle|^2+(1-\nu_j^{+})\,|\langle x_k,y_j^{-}\rangle|^2,\qquad|y_j^{+}\rangle=a_j^{+}|\beta\kappa\rangle+b_j^{+}|\alpha\xi+\beta\kappa\rangle,\quad|y_j^{-}\rangle=a_j^{-}|\beta\kappa\rangle-b_j^{-}|\alpha\xi+\beta\kappa\rangle,$$

$$a_j^{+}=\varepsilon_j^{+}a_j,\qquad a_j^{-}=\varepsilon_j^{-}a_j,\qquad b_j^{+}=\varepsilon_j^{+}b_j,\qquad b_j^{-}=\varepsilon_j^{-}b_j,$$

$$\varepsilon_j^{+}=\sqrt{\frac{\tau_j^2+2\exp(-\frac{1}{2}|\xi|^2)\tau_j+1}{\tau_j^2+2\exp(-\frac{1}{2}|\alpha\xi|^2)\tau_j+1}},\qquad\varepsilon_j^{-}=\sqrt{\frac{\tau_j^2+2\exp(-\frac{1}{2}|\xi|^2)\tau_j+1}{\tau_j^2-2\exp(-\frac{1}{2}|\alpha\xi|^2)\tau_j+1}},\qquad\nu_j^{+}=\frac{1}{2}\Big(1+\exp\big(-\tfrac{1}{2}(1-|\alpha|^2)|\xi|^2\big)\Big)\frac{1}{(\varepsilon_j^{+})^{2}},$$

$$\tau_j=\frac{-(1-2\lambda)+(-1)^{j}\sqrt{1-4\lambda(1-\lambda)\big(1-\exp(-|\xi|^2)\big)}}{2(1-\lambda)\exp(-\frac{1}{2}|\xi|^2)},$$

$$|b_j|^2=\frac{1}{\tau_j^2+2\exp(-\frac{1}{2}|\xi|^2)\tau_j+1},\qquad|a_j|^2=\tau_j^2|b_j|^2,\qquad\bar{a}_jb_j=a_j\bar{b}_j=\tau_j|b_j|^2.$$

Theorem [58]. For $n\geq3$, the above compound state $\rho_{\Lambda,n}^{\gamma(0)}$ is written as

$$\rho_{\Lambda,n}^{\gamma(0)}=\sum_{j_1,\dots,j_n=1}^{2}q_{j_1,\dots,j_n}\bigotimes_{k=1}^{n}|x_{j_k}\rangle\langle x_{j_k}|,$$

where

$$q_{j_1,\dots,j_n}\equiv\mathrm{tr}\,W_{j_nj_n}^{*}\Big(\Lambda^{*}\big(\cdots\Lambda^{*}\big(W_{j_2j_2}^{*}\big(\Lambda^{*}\big(W_{j_1j_1}^{*}\big(\Lambda^{*}(\rho)\big)\big)\big)\big)\cdots\big)\Big),$$
$$\Lambda^{*}(\rho)=\lambda|\beta\kappa\rangle\langle\beta\kappa|+(1-\lambda)|\alpha\xi+\beta\kappa\rangle\langle\alpha\xi+\beta\kappa|,$$
$$W_{jj}^{*}\big(\Lambda^{*}(\rho)\big)\equiv\gamma_j\Lambda^{*}(\rho)\gamma_j^{*}=\big(\lambda\,|\langle\beta\kappa,x_j\rangle|^2+(1-\lambda)\,|\langle\alpha\xi+\beta\kappa,x_j\rangle|^2\big)|x_j\rangle\langle x_j|.$$

Based on [40,41,45,46], one can obtain

$$\Lambda^{*}\big(|x_j\rangle\langle x_j|\big)=\nu_j^{+}|y_j^{+}\rangle\langle y_j^{+}|+(1-\nu_j^{+})|y_j^{-}\rangle\langle y_j^{-}|.$$

Thus we have

$$q_{j_1,\dots,j_n}=\prod_{k=2}^{n}\Big(\nu_{j_{k-1}}^{+}|\langle x_{j_k},y_{j_{k-1}}^{+}\rangle|^2+(1-\nu_{j_{k-1}}^{+})|\langle x_{j_k},y_{j_{k-1}}^{-}\rangle|^2\Big)\times\big(\lambda\,|\langle\beta\kappa,x_{j_1}\rangle|^2+(1-\lambda)\,|\langle\alpha\xi+\beta\kappa,x_{j_1}\rangle|^2\big)=\Big(\prod_{k=2}^{n}q_{j_k,j_{k-1}}\Big)q_{j_1}.$$

If $\sum_{j}q_{k,j}\,q_j=q_k$ holds, then we get the dynamical entropy with respect to $\Lambda$, $\rho$ and $\{\gamma_j\}$ as

$$\tilde{S}^{(0)}\big(\Lambda;\rho,\{\gamma_j\}\big)=-\sum_{j,k}q_{k,j}\,q_j\log q_{k,j}.$$
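This last expression is exactly the entropy rate of a classical Markov chain with transition probabilities $q_{k,j}$ and stationary distribution $q_j$. A generic sanity check (with a hypothetical $2\times2$ transition matrix, not the optical-channel values above): for a stationary chain, consecutive block entropies differ by exactly $-\sum_{j,k}q_{k,j}\,q_j\log q_{k,j}$:

```python
import numpy as np
from itertools import product

# hypothetical transition matrix: Q[k, j] = q_{k,j} = P(next = k | current = j)
Q = np.array([[0.8, 0.3],
              [0.2, 0.7]])

# stationary distribution q_j: eigenvector of Q for eigenvalue 1, normalized
w, v = np.linalg.eig(Q)
q = np.real(v[:, np.argmin(np.abs(w - 1))])
q /= q.sum()

# entropy rate in the form of the theorem
h = -sum(Q[k, j] * q[j] * np.log(Q[k, j]) for j in range(2) for k in range(2))

def block_entropy(n):
    """Shannon entropy of the length-n path distribution of the chain."""
    total = 0.0
    for s in product((0, 1), repeat=n):
        p = q[s[0]]
        for a, b in zip(s, s[1:]):
            p *= Q[b, a]
        total -= p * np.log(p)
    return total

print(block_entropy(2) - block_entropy(1), h)
```

For a stationary Markov chain $H(X_1,\dots,X_n)=H(X_1)+(n-1)h$, so the per-symbol entropy $S_n/n$ converges to $h$, mirroring the $\limsup$ in the definition of $\tilde{S}^{(0)}$.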

© 2013 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Noboru Watanabe (July 24th, 2013). Quantum Communication Processes and Their Complexity. In: Andrzej Jamiolkowski (Ed.), Open Systems, Entanglement and Quantum Optics. IntechOpen. DOI: 10.5772/56356.
