Open access peer-reviewed chapter

Joint EigenValue Decomposition for Quantum Information Theory and Processing

Written By

Gilles Burel, Hugo Pillin, Paul Baird, El-Houssaïn Baghious and Roland Gautier

Reviewed: 27 January 2022 Published: 02 April 2022

DOI: 10.5772/intechopen.102899

From the Edited Volume

Matrix Theory - Classics and Advances

Edited by Mykhaylo Andriychuk


Abstract

The interest in quantum information processing has given rise to the development of programming languages and tools that facilitate the design and simulation of quantum circuits. However, since quantum theory is fundamentally based on linear algebra, these high-level languages partially hide the underlying structure of quantum systems. We show that in certain cases of practical interest, keeping a handle on the matrix representation of the quantum systems is a fruitful approach because it allows the use of powerful tools of linear algebra to better understand their behavior and to better implement simulation programs. We especially focus on the Joint EigenValue Decomposition (JEVD). After giving a theoretical description of this method, which aims at finding a common basis of eigenvectors of a set of matrices, we show how it can easily be implemented in a matrix-oriented programming language, such as Matlab (or, equivalently, Octave). Then, through two examples taken from the quantum information domain (quantum search based on a quantum walk, and quantum coding), we show that JEVD is a powerful tool both for elaborating new theoretical developments and for simulation.

Keywords

  • quantum information
  • quantum coding
  • quantum walk
  • quantum search
  • joint eigenspaces
  • joint eigenvalues
  • joint eigenvectors

1. Introduction

The field of quantum information is experiencing a resurgence of interest due to the recent implementation of secure transmission systems [1] based on the teleportation of quantum states in metropolitan networks and in the context of satellite transmissions, further underscored by the development of quantum computers. A new path for intercontinental quantum communication opened up in 2017 when a source onboard a Chinese satellite made it possible to distribute entangled photons between two ground stations, separated by more than 1000 km [2, 3]. Experiments using optical fibers [4] and terrestrial free-space channels [5] have also proved that the use of quantum entanglement can be achieved over large distances.

Quantum programming languages, such as Q# [6], have been developed to facilitate the design and simulation of quantum circuits. The underlying quantum theory is quite complex and often counter-intuitive because it relies on linear algebra and tensor products. For instance, the state of a set of three independent qubits (quantum bits) is not described by a 3-dimensional vector, as would be the case for classical bits, but by a $2^3$-dimensional vector that lives in a Hilbert space constructed by tensor products of lower-dimensional spaces. These programming languages are therefore helpful for people who do not need to bother with the underlying theory.

However, since the quantum theory is fundamentally based on linear algebra, there are cases of practical interest for the researcher in which keeping a handle on the matrix representation of the quantum systems is a fruitful approach because it allows the use of powerful tools of linear algebra to better understand their behavior and to better implement simulation programs.

In this chapter, our objective is to illustrate how the concept of Joint EigenValue Decomposition (JEVD) can provide interesting results in the domain of quantum information. The chapter is organized as follows. In Section 2, we give some mathematical background and in Section 3, we provide basic elements to understand quantum information. Then, in Section 4, we show an example of the application of JEVD to quantum coding, more precisely we propose an algorithm, based on JEVD, to identify a quantum encoder matrix from a collection of given Pauli errors. Finally, in Section 5, we show that JEVD is a powerful tool for the analysis of a quantum walk search. More precisely, we prove that, while the quantum walk operates in a huge state space, there exists a small subspace that captures all the essential elements of the quantum walk, and this subspace can be determined thanks to JEVD.


2. Mathematical background

2.1 Matrices and notations

We note $U^T$ the transpose of a matrix $U$ and $U^\dagger$ the conjugate transpose of $U$.

$H$ is the normalized $2\times2$ Hadamard matrix and $H_N$ the $N\times N$ Hadamard matrix obtained by the Kronecker product (defined in the next subsection):

$$H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \qquad\text{and}\qquad H_N = H^{\otimes n},\quad N = 2^n \tag{1}$$

$I_N$ is the $N\times N$ identity matrix (which will sometimes be noted $I$ when its dimension is implicit).

In the domain of quantum information processing, we mainly have unitary matrices. A square matrix $U$ is unitary [7] if $U^\dagger U = UU^\dagger = I$. The columns of a unitary matrix are orthonormal and its eigenvalues are of modulus 1. If the unitary matrix is real, its complex eigenvalues come in conjugate pairs.

We call "shuffle matrix" the permutation matrix $P_{a,b}$ which represents the permutation obtained when one writes elements row by row in an $a\times b$ matrix and reads them column by column. For instance, set $a=2$ and $b=3$. If one writes the elements $1,2,3,4,5,6$ row by row in a $2\times3$ matrix and reads them column by column, the order becomes $1,4,2,5,3,6$. The shuffle matrix is then the permutation matrix such that $(1\;4\;2\;5\;3\;6) = (1\;2\;3\;4\;5\;6)\,P_{2,3}$. The inverse of $P_{a,b}$ is $P_{b,a} = P_{a,b}^T$.
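In a matrix-oriented language, this definition translates into a few lines. The sketch below is our own illustration (the helper name shuffle_matrix is our choice, not the chapter's), written for Octave/Matlab:

    function P = shuffle_matrix(a, b)
      % Write 1..a*b row by row into an a-by-b matrix,
      M = reshape(1:a*b, b, a).';
      % then read the elements column by column.
      perm = M(:).';
      % P permutes the columns of the identity accordingly,
      % so that (1:a*b) * P returns the shuffled order.
      I = eye(a*b);
      P = I(:, perm);
    end

For instance, (1:6) * shuffle_matrix(2, 3) returns (1 4 2 5 3 6), and one can check numerically that shuffle_matrix(2, 3) * shuffle_matrix(3, 2) is the identity, in accordance with $P_{a,b}^{-1} = P_{b,a}$.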

$G_n$ is the $n\times n$ Grover diffusion matrix defined by [8]:

$$G_n = -I_n + 2\,\theta_n\theta_n^T \tag{2}$$

where $\theta_n$ is the $n\times1$ vector defined by $\theta_n = (1\;1\;\cdots\;1)^T/\sqrt{n}$. It is easy to see that $G_n\theta_n = \theta_n$. Therefore, $\theta_n$ is an eigenvector of $G_n$ with eigenvalue $+1$. We can also see that for any vector $v$ orthogonal to $\theta_n$ we have $G_nv = -v$. It follows that $G_n$ has two eigenvalues, $-1$ and $+1$, and the dimensions of the associated eigenspaces are $n-1$ and $1$.
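These spectral properties are easy to check numerically. A short Octave sketch (ours, with an arbitrary size n):

    n = 4;
    theta = ones(n, 1) / sqrt(n);
    G = -eye(n) + 2 * (theta * theta');  % Grover diffusion matrix, Eq. (2)
    norm(G * theta - theta)              % ~0: theta is an eigenvector for +1
    V = null(theta');                    % orthonormal basis orthogonal to theta
    norm(G * V + V)                      % ~0: this subspace is the -1 eigenspace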

2.2 Kronecker product

The Kronecker product, denoted by $\otimes$, is a bilinear operation on two matrices. If $A$ is a $k\times l$ matrix and $B$ is an $m\times n$ matrix, then the Kronecker product is the $km\times ln$ block matrix $C$ below:

$$C = A\otimes B = \begin{pmatrix} a_{11}B & \cdots & a_{1l}B \\ \vdots & & \vdots \\ a_{k1}B & \cdots & a_{kl}B \end{pmatrix} \tag{3}$$

Assuming the sizes are such that one can form the matrix products AC and BD, an interesting property, known as the mixed-product property, is:

$$(A\otimes B)(C\otimes D) = (AC)\otimes(BD) \tag{4}$$

The Kronecker product is associative, but not commutative. However, there exist permutation matrices (the shuffle matrices defined in the previous subsection) such that, if A is an a×a square matrix and B a b×b square matrix, then [9]:

$$(A\otimes B)\,P_{a,b} = P_{a,b}\,(B\otimes A) \tag{5}$$
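Using the shuffle_matrix sketch of Section 2.1, Eq. (5) can be checked numerically (our illustration):

    a = 3; b = 2;
    A = randn(a); B = randn(b);
    P = shuffle_matrix(a, b);
    norm(kron(A, B) * P - P * kron(B, A))   % ~0, in accordance with Eq. (5)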

2.3 Singular value decomposition, image, and kernel

The Singular Value Decomposition (SVD) of an m×n matrix A is [7]:

$$A = U S V^\dagger \tag{6}$$

where $U$ and $V$ are unitary matrices, and $S$ is diagonal. The diagonal of $S$ contains the singular values, which are real nonnegative numbers, ranked in decreasing order. The sizes of the matrices are $U$: $m\times m$, $S$: $m\times n$, and $V$: $n\times n$. The SVD is a very useful linear algebra tool because it reveals a great deal about the structure of a matrix.

The image and the kernel of A are defined by:

$$\mathrm{Im}(A) = \{\,y\in\mathbb{C}^m : y = Ax \text{ for some } x\in\mathbb{C}^n\,\} \tag{7}$$
$$\ker(A) = \{\,x\in\mathbb{C}^n : Ax = 0\,\} \tag{8}$$

When used in an algorithm, the notation $\mathrm{null}(A)$ will also denote a procedure that computes a matrix whose columns are an orthonormal basis of the kernel of $A$.

The complement of a subspace $\mathcal{A}$ within a vector space $\mathcal{H}$ is defined by:

$$\mathcal{A}^c = \{\,y\in\mathcal{H} : \langle x,y\rangle = 0 \text{ for all } x\in\mathcal{A}\,\} \tag{9}$$

In an algorithm, if the columns of a matrix $A$ are an orthonormal basis of $\mathcal{A}$, then the columns of $B = \mathrm{null}(A^\dagger)$ provide an orthonormal basis of $\mathcal{A}^c$.

The rank of $A$ is its number of nonzero singular values. When programmed on a computer, determination of the rank must take finite-precision arithmetic into account, which means that "zero" is replaced by "extremely small" (less than a given tolerance value). Let us note $r = \mathrm{rank}(A)$. We have:

$$\dim \mathrm{Im}(A) = r \tag{10}$$
$$\dim \ker(A) = n - r \tag{11}$$

An orthonormal basis of $\ker(A)$ is obtained by taking the last $n-r$ columns of the matrix $V$.

2.4 Joint eigenspaces and joint eigenvalue decomposition (JEVD)

The eigenvalue decomposition of a unitary matrix A is:

$$A = VDV^\dagger \tag{12}$$

where D is a diagonal matrix, the diagonal of which contains the eigenvalues, and V is a unitary matrix whose columns are the eigenvectors.

Let us note $E_\lambda(A)$ the eigenspace of an operator $A$ associated with an eigenvalue $\lambda$. The joint eigenspace $E_{\lambda,\mu}(A,B)$ is:

$$E_{\lambda,\mu}(A,B) = E_\lambda(A)\cap E_\mu(B) \tag{13}$$

A property of great interest in quantum information processing is that within $E_{\lambda,\mu}(A,B)$ (and even within any union of joint eigenspaces) the operators $A$ and $B$ commute.

The joint eigenspace may be determined on a computer through the complement, because:

$$E_\lambda(A)\cap E_\mu(B) = \big(E_\lambda(A)^c + E_\mu(B)^c\big)^c \tag{14}$$

Using matrix-oriented programming languages, such as Matlab or Octave, this requires only a few lines. Let us note $A_\lambda$ and $B_\mu$ matrices whose columns are orthonormal bases of $E_\lambda(A)$ and $E_\mu(B)$, and $[\,\cdot\;\;\cdot\,]$ the horizontal concatenation of matrices. The following computation procedure provides a matrix $C$ whose columns are an orthonormal basis of $E_{\lambda,\mu}(A,B)$:

$$C = \mathrm{null}\Big(\big[\,\mathrm{null}(A_\lambda^\dagger)\;\;\mathrm{null}(B_\mu^\dagger)\,\big]^\dagger\Big) \tag{15}$$

However, it is not efficient in terms of complexity and in the next sections we will propose faster computational procedures, adapted to each context.
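Despite this caveat, Eq. (15) is worth sketching because it is so short. In the Octave sketch below (ours), Al and Bm are assumed to have orthonormal columns spanning $E_\lambda(A)$ and $E_\mu(B)$:

    function C = joint_eigenspace(Al, Bm)
      NA = null(Al');       % orthonormal basis of the complement of Im(Al)
      NB = null(Bm');       % orthonormal basis of the complement of Im(Bm)
      C = null([NA, NB]');  % complement of the sum of the complements, Eq. (14)
    end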

A lower bound on the dimension of a joint eigenspace can be obtained as follows. Let us note n the dimension of the full space. We have, obviously:

$$\dim\big(E_\lambda(A) + E_\mu(B)\big) \le n \tag{16}$$

and we know that:

$$\dim\big(E_\lambda(A)+E_\mu(B)\big) = \dim E_\lambda(A) + \dim E_\mu(B) - \dim\big(E_\lambda(A)\cap E_\mu(B)\big) \tag{17}$$

Combining both equations, we obtain:

$$\dim E_{\lambda,\mu}(A,B) \ge \dim E_\lambda(A) + \dim E_\mu(B) - n \tag{18}$$

3. Quantum information principles

A quantum system is described by a state vector $\psi\in\mathbb{C}^N$, where $N$ is the dimension of the system. Since in the quantum formalism the states $\psi$ and $\gamma\psi$ are equivalent for any nonzero complex number $\gamma$, the state is usually represented by a normed vector and the global phase is considered irrelevant.

As long as it remains isolated, the evolution of a quantum system is driven by the Schrödinger equation. The latter is a first-order differential equation operating on the quantum state. Its integration shows that the quantum states at times $t_1$ and $t_2$ are linked by a unitary matrix $U$ such that $\psi_2 = U\psi_1$. The norm is preserved because $U$ is unitary.

The second kind of evolution, called "measurement," may occur if the system interacts with its environment. A measurement consists of the projection of the state onto a subspace of $\mathbb{C}^N$. When the measurement is controlled, it consists in defining a priori a decomposition of the state space into a direct sum of orthogonal subspaces $\bigoplus_i \mathcal{H}_i$. The measurement randomly selects one subspace. The result of the measurement is an identifier of the selected subspace (for instance, its index $i$). After measurement, the state is projected onto $\mathcal{H}_i$. If $P_i$ is the projection matrix onto $\mathcal{H}_i$, then the state becomes $P_i\psi$ (which is then renormalized because the projection does not preserve the norm). The probability of $\mathcal{H}_i$ being selected is the squared norm of $P_i\psi$.

It is worth noting that a measurement may destroy a part of quantum information (because usually, a projection is an irreversible process), while the unitary evolution is reversible, and as such, preserves quantum information. Consequently, measurements must be used with extreme caution—how to design the system and the measurement device to measure only what is strictly required and not more is one of the difficult problems encountered in quantum information processing.

Quantum systems of special interest for quantum information processing are qubits (quantum bits) and qubit registers. A qubit is the state of a two-dimensional quantum system, that is, a normed vector of $\mathbb{C}^2$. To highlight links with classical digital computation, it is convenient to note $|0\rangle$ and $|1\rangle$ the orthonormal basis of $\mathbb{C}^2$. Physically, any two-dimensional quantum system can carry a quantum bit. For instance, the spin of an electron is such a system, and the spins up and down can be associated with the basis states $|0\rangle$ and $|1\rangle$. A general qubit has the expression:

$$\psi = \alpha_0|0\rangle + \alpha_1|1\rangle \tag{19}$$

where $\alpha_0$ and $\alpha_1$ are complex numbers subject to $|\alpha_0|^2 + |\alpha_1|^2 = 1$.

A qubit register is a $2^n$-dimensional quantum system which, for convenience, is usually described in a standard orthonormal basis noted $|0\cdots00\rangle, |0\cdots01\rangle, \ldots, |1\cdots11\rangle$; by analogy with classical digital processing, $n$ is the number of qubits. For instance, for $n=2$ the basis is $|00\rangle, |01\rangle, |10\rangle, |11\rangle$, where $|ab\rangle = |a\rangle\otimes|b\rangle$, and the quantum state of the register is:

$$\psi = \sum_{ab\in\{0,1\}^2} \gamma_{ab}\,|ab\rangle \tag{20}$$

Note that, contrary to classical digital registers, the qubits are usually not separable; hence, the register must be considered as a whole. We say that the qubits are entangled. However, in the special case where the coefficients $\gamma_{ab}$ can be decomposed in the form $\gamma_{ab} = \alpha_a\beta_b$, the state can be written as a tensor product of the states of two qubits, which can be considered separately. Then, we have:

$$\psi = \big(\alpha_0|0\rangle + \alpha_1|1\rangle\big)\otimes\big(\beta_0|0\rangle+\beta_1|1\rangle\big) \tag{21}$$
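The separability condition $\gamma_{ab} = \alpha_a\beta_b$ means that the $2\times2$ matrix of coefficients $\gamma_{ab}$ has rank 1, which is easy to test numerically. A small Octave illustration (ours):

    alpha = [1; 2i] / sqrt(5);  beta = [3; 4] / 5;
    psi = kron(alpha, beta);       % separable two-qubit state, Eq. (21)
    Gamma = reshape(psi, 2, 2).';  % Gamma(a+1, b+1) = gamma_ab
    rank(Gamma)                    % 1: separable (rank 2 would mean entangled)

For the Bell state $(|00\rangle + |11\rangle)/\sqrt{2}$, the same test returns rank 2: the qubits are entangled.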

4. Application of JEVD to quantum coding

4.1 Principle of quantum coding

The objective of quantum coding is to protect quantum information [10]. In the classical domain, the information can be protected using redundancy. For instance, if we want to transmit bit 0 on a noisy communication channel, we can instead transmit 000 (and, similarly, transmit 111 instead of 1). On the receiver side, if one error has occurred on the channel, for instance, if the second bit is false, we receive 010 instead of 000, from which we can still guess that the most probable hypothesis is that the transmitted word was 000. Of course, if there were two errors, the transmitted word could have been 111, but it is assumed that the probability of error is low; hence, two errors are less likely than one error. More elaborate channel codes have been proposed, but fundamentally they are all based on the idea of adding redundancy and assuming that the probability of channel error is low.

In the quantum domain, it is impossible to use plain redundancy because it is impossible to copy a quantum state (this is the "no-cloning theorem" [11]). However, we can use entanglement to produce the quantum equivalent of classical redundancy. The principle of quantum coding is shown in Figure 1. Assume we want to protect the quantum state $\psi$ of a $k$-qubit register. We add $r$ ancillary qubits initialized to $|0\rangle$ to form an $n$-qubit register ($n = k+r$). The encoder is represented by a unitary $2^n\times2^n$ matrix $U$. Then, errors may occur on the encoded state: they are represented by a unitary matrix $E$. The decoder is represented by the unitary matrix $U^\dagger$, the conjugate transpose of the encoding matrix. Finally, we measure the last $r$ qubits of the decoded state and, depending on the result of the measurement, we apply the appropriate restoration matrix $U_c$ (a unitary matrix of size $2^k\times2^k$) to the $k$-qubit register composed of the first $k$ qubits of the decoded state.

Figure 1.

Principle of quantum coding.

As an illustration, let us consider n=2, k=1 and the very simple quantum encoder shown in Figure 2. It is a basic quantum circuit known as the CNOT quantum gate, and it is represented by the unitary matrix below:

Figure 2.

CNOT quantum gate.

$$U = \begin{pmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&0&1 \\ 0&0&1&0 \end{pmatrix} \tag{22}$$

A quantum error on a qubit is described by a 2×2 unitary matrix. It is convenient to decompose the error as a linear sum of the identity and the Pauli matrices below [12]:

$$Z = \begin{pmatrix} 1&0 \\ 0&-1 \end{pmatrix} \qquad X = \begin{pmatrix} 0&1 \\ 1&0 \end{pmatrix} \qquad Y = \begin{pmatrix} 0&-i \\ i&0 \end{pmatrix} \tag{23}$$

Let us consider that an error may appear on the first encoded qubit and that this error, if present, is represented by the unitary Pauli matrix X. Then, the error matrix which acts on the encoded state is:

$$E = X\otimes I = \begin{pmatrix} 0&0&1&0 \\ 0&0&0&1 \\ 1&0&0&0 \\ 0&1&0&0 \end{pmatrix} \tag{24}$$

It is easy to check that:

$$F = U^\dagger E\,U = \begin{pmatrix} 0&0&0&1 \\ 0&0&1&0 \\ 0&1&0&0 \\ 1&0&0&0 \end{pmatrix} = X\otimes X \tag{25}$$

The state at the input of the encoder is $(\alpha_0\;\alpha_1)^T\otimes(1\;0)^T = (\alpha_0\;0\;\alpha_1\;0)^T$. The state at the output of the decoder is, therefore, $(0\;\alpha_1\;0\;\alpha_0)^T$.

Measuring the second qubit at the output of the decoder consists in decomposing the state space into a direct sum $\mathcal{H}_0\oplus\mathcal{H}_1$ of two subspaces spanned by $\{|00\rangle, |10\rangle\}$ and $\{|01\rangle, |11\rangle\}$. The result of the measurement will be either 0 or 1 (the index of the selected subspace); by analogy with classical decoding, this result is called the "syndrome." The projections of the state on these subspaces are $(0\;0)^T$ and $(\alpha_1\;\alpha_0)^T$. The probability of obtaining syndrome 1 is therefore 1.

The measurement then projects the state onto $\mathcal{H}_1$. Note that in this particular case, the information is preserved by the projection. Then, applying the operator $U_c = X$ to the projected state restores the initial state.

Similarly, if there is no error, we can see that $F$ is the identity matrix; the projections on the subspaces are then $(\alpha_0\;\alpha_1)^T$ and $(0\;0)^T$. In that case, the syndrome is 0 and the state is projected onto $\mathcal{H}_0$. Correction is done by applying the operator $I$ to the projected state, which is equivalent to doing nothing.
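This walk-through can be reproduced in a few lines of Octave (our verification sketch):

    X = [0 1; 1 0];
    U = [1 0 0 0; 0 1 0 0; 0 0 0 1; 0 0 1 0];  % CNOT encoder, Eq. (22)
    E = kron(X, eye(2));                       % error on the first qubit, Eq. (24)
    F = U' * E * U;                            % equals kron(X, X), Eq. (25)
    a = [0.6; 0.8i];                           % an arbitrary normed qubit state
    out = F * kron(a, [1; 0]);                 % decoder output: (0, a1, 0, a0)'
    p1 = norm(out([2 4]))^2                    % probability of syndrome 1: equals 1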

The very simple code used above as an illustration cannot correct more complex errors (for instance, an error $Z$ on the first qubit). However, there exist efficient quantum codes, such as the Steane code [13] and the Shor code [14]. A remarkable result of quantum coding theory is that a linear combination of correctable errors is correctable [15].

Figure 3 shows the Steane encoder, which is an $(n=7,\,k=1,\,t=1)$ quantum encoder. This means that it encodes $k=1$ qubit on $n=7$ qubits and it is able to correct any error occurring on $t=1$ encoded qubit. It is built with Hadamard (Eq. (1)) and CNOT (Eq. (22)) quantum gates. From this circuit description, it is possible to obtain the coding matrix $U$.

Figure 3.

Steane encoder.

4.2 Determination of encoder matrix using JEVD

The problem we address can be stated as follows (see Figure 1 for the notations): given a list of $n$ independent Pauli errors $E_i$ with corresponding diagonal outer errors $F_i$, determine the unitary operator $U$ (quantum encoder) such that:

$$U^\dagger E_i\,U = F_i, \qquad i\in\{1,\ldots,n\} \tag{26}$$

This equation shows that the columns of $U$ are eigenvectors of the $E_i$ (indeed, it is equivalent to $E_iU = UF_i$ with $F_i$ diagonal). Specification of the code by a small set of Pauli errors is very convenient, and the interest of an automatic determination of the matrix $U$ is to allow further simulations of the behavior of the quantum code in various configurations.

To illustrate and validate the approach developed below, let us consider the collection of $n=7$ Pauli errors shown in Table 1. Here, to be able to check the results, this collection has been chosen to correspond to the Steane encoder (Figure 3), while in a standard application of the method it would be given a priori. The interest is that here we can compute the encoder matrix from the circuit, which will allow us to check that our method produces the correct encoder matrix.

E_i        F_i
ZZZZZZZ    ZIIIIII
XXIIIXX    IZIIIII
XIXIXIX    IIZIIII
XIIXXXI    IIIZIII
ZZIIIZZ    IIIIZII
ZIZIZIZ    IIIIIZI
ZIIZZZI    IIIIIIZ

Table 1.

Collection of Pauli errors.

We use $n$ independent equations in which each $F_i$ is a tensor product of $I$ and $Z$ only (including exactly one $Z$). Therefore, the matrices $F_i$ are diagonal, and their diagonal elements are $+1$ and $-1$ in equal numbers.

Figure 4 shows the diagonals of the matrices $F_i$ (each row corresponding to one diagonal). Values $-1$ and $+1$ are represented, respectively, by black and white dots.

Figure 4.

Diagonals of matrices $F_i$.

Since the matrix $U$ does not depend on $i$ in Eq. (26), its columns are joint eigenvectors of the $E_i$. For instance, in the example above, the 20th column of $U$ is a joint eigenvector of $E_1, E_2, \ldots, E_7$ associated with the eigenvalues $(+1,+1,-1,+1,+1,-1,-1)$ (see Figure 4). In the general case, the set of $n$ eigenvalues corresponding to column $c$ of $U$ is easily obtained by taking the binary representation of $c-1$ with the mapping $0\to+1$ and $1\to-1$.

Now, let us consider the determination of column $c$ of $U$. We know that it is a vector spanning a joint eigenspace of the $E_i$ corresponding to a given set of eigenvalues $\{\lambda_i\}_{i=1,\ldots,n}$. For each $E_i$, let $A_i$ denote the $2^n\times2^{n-1}$ matrix whose orthonormal columns span the eigenspace associated with $\lambda_i$, and $B_i$ the $2^n\times2^{n-1}$ matrix whose orthonormal columns span the kernel of $A_i^\dagger$ (which is the eigenspace associated with $-\lambda_i$).

Let $\mathcal{Y}_k$ denote the joint eigenspace corresponding to the eigenvalues $\{\lambda_j\}_{j=1,\ldots,k}$, with $k\in\{1,\ldots,n\}$. We propose Algorithm 1 to efficiently compute the column of $U$. It computes a series of matrices $Y_k$ whose columns are orthonormal bases of the $\mathcal{Y}_k$. Obviously, the searched column of $U$ is $Y_n$. For the moment, let us consider that $K_c = 1$ (the optimal value will be discussed later).

if $K_c = 1$ then $Y_1 = A_1$ and $K_c \leftarrow 2$ end
for $k = K_c$ to $n$ do
    $C_k = B_k^\dagger\,Y_{k-1}$
    $Z_k = \mathrm{null}(C_k)$
    $Y_k = Y_{k-1}\,Z_k$
end

Algorithm 1: Algorithm for determination of a joint eigenspace.

The sizes of the matrices decrease with $k$:

$C_k$: $2^{n-1}\times2^{n-k+1}$; $\quad Z_k$: $2^{n-k+1}\times2^{n-k}$; $\quad Y_k$: $2^n\times2^{n-k}$.

The intuitive ideas behind the algorithm are the following:

  • $C_k = B_k^\dagger Y_{k-1}$: the orthonormal basis of $\mathcal{Y}_{k-1}$ is projected on the kernel of $A_k^\dagger$. The components of the projected vectors are expressed in the orthonormal basis $B_k$ of that kernel. Consequently, $\mathrm{Im}(C_k)$ is the projection of $\mathcal{Y}_{k-1}$ on the kernel of $A_k^\dagger$, expressed in that kernel.

  • $Z_k = \mathrm{null}(C_k)$: a matrix whose columns form an orthonormal basis of the combinations of the columns of $Y_{k-1}$ whose projection vanishes is determined.

  • Finally, the components of the basis vectors are restored to the original space by $Y_k = Y_{k-1}Z_k$.
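For the default case $K_c = 1$, a direct Octave transcription of Algorithm 1 may look as follows (our sketch; the cell arrays A{k} and B{k} are assumed to hold orthonormal bases of the eigenspaces of $E_k$ associated with $\lambda_k$ and $-\lambda_k$, as obtained, for instance, from eig):

    function Y = joint_eigenvector(A, B, n)
      Y = A{1};          % K_c = 1: start from the first eigenspace
      for k = 2:n
        C = B{k}' * Y;   % C_k: projection onto the kernel of A{k}'
        Z = null(C);     % Z_k: combinations whose projection vanishes
        Y = Y * Z;       % Y_k = Y_{k-1} Z_k
      end
    end                  % Y is now a single column: the joint eigenvector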

Let us prove that the matrices $Y_k$ have orthonormal columns. This is obviously the case for $k=1$. Then, by induction, we have:

$$Y_k^\dagger Y_k = Z_k^\dagger\,Y_{k-1}^\dagger Y_{k-1}\,Z_k = Z_k^\dagger Z_k = I \tag{27}$$

Now, let us prove, by induction, that $\mathrm{Im}(Y_k) = \mathcal{Y}_k$.

Obviously, this is the case for $k=1$. Assume this is the case for $k-1$. We have:

$$\mathrm{Im}(Y_k)\subseteq\mathrm{Im}(Y_{k-1}) = \mathcal{Y}_{k-1} \tag{28}$$

We have also:

$$B_k^\dagger Y_k = B_k^\dagger\,Y_{k-1}Z_k = C_kZ_k = 0 \tag{29}$$

Then

$$\mathrm{Im}(Y_k)\subseteq\ker\big(B_k^\dagger\big) = \mathrm{Im}(A_k) \tag{30}$$

From $\mathrm{Im}(Y_k)\subseteq\mathcal{Y}_{k-1}$ and $\mathrm{Im}(Y_k)\subseteq\mathrm{Im}(A_k)$, we deduce $\mathrm{Im}(Y_k)\subseteq\mathcal{Y}_k$.

Conversely, assume that a vector $x$ belongs to $\mathcal{Y}_k$. Because $\mathcal{Y}_k\subseteq\mathcal{Y}_{k-1}$, there exists a vector $b$ such that $x = Y_{k-1}b$; and because $x\in\mathrm{Im}(A_k)$, we also have $B_k^\dagger x = 0$.

Then

$$B_k^\dagger Y_{k-1}b = 0 \;\Rightarrow\; C_kb = 0 \;\Rightarrow\; \exists\,a : b = Z_ka$$

Therefore, $x = Y_{k-1}b = Y_{k-1}Z_ka = Y_ka$, so $x\in\mathrm{Im}(Y_k)$; hence $\mathcal{Y}_k\subseteq\mathrm{Im}(Y_k)$.

After execution of the algorithm to determine each column of $U$, an indetermination remains because the joint eigenvectors (i.e., the columns of $U$) are determined only up to a phase factor. This has no consequence on the performance of the quantum code. However, if we want to fix this residual indetermination, we proposed a fast and simple procedure in ref. [16]. The procedure requires an additional set of $n$ Pauli errors in which each additional $F_i$ is a tensor product of $I$ and $X$ only. As an example, for the Steane code, we use Table 2.

E_i        F_i
XXXXXXX    XIIIIII
ZIZZZZZ    IXIIIII
ZZIZZZZ    IIXIIII
ZZZIZZZ    IIIXIII
XIIIXII    IIIIXII
XIIIIXI    IIIIIXI
XIIIIIX    IIIIIIX

Table 2.

Additional collection of Pauli errors.

After these remaining indeterminations have been removed, we obtain an estimated matrix $U$ that is equal to the true matrix up to a global phase (Figure 5). This remaining indetermination does not matter because, as said before, the global phase has no significance in quantum physics. Here we have chosen the global phase so that the encoder matrix is real.

Figure 5.

Estimated Matrix U for the Steane encoder.

Figure 5 shows the matrix computed by our method. We have checked that it is equal to the matrix directly computed from the circuit description.

The programmer may speed up the computation by taking into account the fact that, when computing column $c$ of $U$, some matrices $Y_k$ have already been computed for other columns and can be reused. For instance, in Figure 4, we see that the joint eigenvalues corresponding to columns 19 and 20 are the same, except the last one. Then, when computing column 20, we can set $K_{20} = n$ in Algorithm 1 instead of the default value $K_{20} = 1$, because the $Y_{n-1}$ for column 20 is the same as for column 19. More generally, Algorithm 2, written in pseudo-Octave code, computes the optimal values of the $K_c$.

K = [1 1];
for k = 2:n
    K = reshape([K ; k*ones(1, 2^(k-1))], 1, 2^k);
end

Algorithm 2: Algorithm for computation of the optimal values Kc.

For instance, for $n=3$ the algorithm produces $K = (1\;3\;2\;3\;1\;3\;2\;3)$.


5. Application of JEVD to quantum walk search

5.1 Principle of quantum walk search

Let us consider a particle that can move on a graph. In the classical world, at time $t$ this particle is localized at a node of the graph. It can then randomly choose one of the edges linked to this node to reach one of the adjacent nodes at time $t+1$. The repeated iteration of this process is the concept of the classical random walk.

A quantum walk [17] also relies on a graph but, contrary to the classical walk, here the particle may be located at many nodes at the same time and can choose many edges simultaneously. At time $t$, the state of the particle is described by a state vector $\psi_t$, and the evolution between times $t$ and $t+1$ is given by a unitary matrix $U = SC$ such that $\psi_{t+1} = U\psi_t$. The unitary matrices $C$ and $S$ represent, respectively, the choice of the edges and the movement to the adjacent nodes.

In the following, we will consider graphs associated with hypercubes [18]. We will note $n$ the dimension of the hypercube and $N = 2^n$ the number of nodes. Figure 6 shows the graph corresponding to a hypercube of dimension $n=3$. It is convenient to label the nodes by binary words. In quantum language, these binary words $\kappa$ are also used to label the basis vectors of the so-called position space $\mathcal{H}_S$.

Figure 6.

Hypercube for n=3.

The quantum state lives in a Hilbert space built by the tensor product of the position space $\mathcal{H}_S$ (corresponding to the nodes) and the coin space $\mathcal{H}_C$ (corresponding to the possible movements along the edges): $\mathcal{H} = \mathcal{H}_S\otimes\mathcal{H}_C$. The dimensions of these state spaces are, respectively, $N_e = nN$, $N$, and $n$.

It is usual to define C as [19]:

$$C = I_N\otimes G_n \tag{31}$$

where $G_n$ is the $n\times n$ Grover diffusion matrix defined in Section 2. The matrix $C$ obtained for $n=3$ is shown in Figure 7.

Figure 7.

Matrices C (left) and O (right) for n=3.

The structure of $S$ is more complex. It is convenient to first define it in $\mathcal{H}_C\otimes\mathcal{H}_S$ and then to transport it to $\mathcal{H}$ using the shuffle matrix $P = P_{N,n}$ (defined in Section 2.1). Then:

$$S = P\,\hat{S}\,P^T \tag{32}$$

where

$$\hat{S} = \mathrm{diag}\big(\hat{S}_1,\ldots,\hat{S}_n\big) \qquad\text{and}\qquad \hat{S}_d = I_{2^{n-d}}\otimes X\otimes I_{2^{d-1}} \tag{33}$$

The last equation just means that, because a movement along direction $d$ corresponds to an inversion of the $d$th bit of $\kappa$, the shift operator permutes the values associated with nodes that are adjacent along that dimension.

A quantum walk search can be described by repeated application of a unitary evolution operator Q, which can be written:

$$Q = UO \tag{34}$$

Here $O$ is the oracle, whose role is to mark the solutions. An example of oracle structure is shown in Figure 7. It is a block-diagonal matrix, whose blocks are $-G_n$ when they correspond to a solution and $I_n$ otherwise. Denote $M$ the number of solutions and assume that $M \ll N$ (otherwise the quantum walk search would serve no purpose, because the probability of rapidly finding a solution with a classical search would be high). In the example shown in the figure, there are $M=2$ solutions (located at positions 1 and 4 in Figure 6).

Let $t$ denote the number of iterations until a measurement is performed. Starting from an initial state $\psi_0$, repeated iterations lead to the state $\psi_t = Q^t\psi_0$, which is then measured. The theory of quantum walk search [19] shows that the probability of success (that is, the probability of obtaining a solution by measurement) oscillates as a function of $t$. Theoretical tools that help to understand and simulate the quantum walk search are therefore valuable for determining the optimal time of measurement.
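To make this concrete, here is a compact Octave simulation sketch (ours). It assumes the position-major ordering $\mathcal{H} = \mathcal{H}_S\otimes\mathcal{H}_C$, an oracle with $-G_n$ blocks at the solutions, and the uniform initial state used in [19]; the configuration matches the one studied in Section 5.5:

    n = 7; N = 2^n; Ne = n*N;
    solutions = [2 8 9];                 % 0-based node labels
    X = [0 1; 1 0];
    theta = ones(n, 1) / sqrt(n);
    G = -eye(n) + 2*(theta*theta');      % Grover diffusion matrix, Eq. (2)
    C = kron(eye(N), G);                 % coin operator, Eq. (31)
    S = zeros(Ne);                       % shift operator, built directly in the
    for d = 1:n                          % position-major ordering (equivalent to Eq. (32))
      Sd = kron(kron(eye(2^(n-d)), X), eye(2^(d-1)));   % Eq. (33)
      ed = zeros(n, 1); ed(d) = 1;
      S = S + kron(Sd, ed*ed');          % coin state d flips the d-th bit of kappa
    end
    U = S*C;
    O = eye(Ne);
    for s = solutions                    % oracle: -G_n blocks at the solutions
      idx = s*n + (1:n);
      O(idx, idx) = -G;
    end
    Q = U*O;                             % evolution operator, Eq. (34)
    psi = ones(Ne, 1) / sqrt(Ne);        % uniform initial state
    for t = 1:50
      psi = Q*psi;
      p = 0;                             % success probability after t iterations
      for s = solutions
        p = p + norm(psi(s*n + (1:n)))^2;
      end
      % p oscillates with t, as predicted by the theory [19]
    end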

In the sequel, we will show that JEVD is a fruitful tool in this context. Indeed, let $E$ be the union of the joint eigenspaces of $U$ and $O$, and $E^c$ its complement. Inside $E$, the operators commute. So, if we note with index $E$ the restrictions of the operators to $E$, we have:

$$Q_E^2 = U_EO_E\,U_EO_E = U_EO_E^2U_E = U_E^2 \tag{35}$$

where we used the commutation of $U_E$ and $O_E$ within $E$, together with $O^2 = I$.

Then, inside $E$, there is no significant difference between the effective quantum walk $Q$ and the uniform quantum walk $U$ because, after each pair of successive iterations, the evolution is identical. Since the uniform quantum walk has no reason to converge to a solution, we deduce that the interesting part of the process lives in the complement of $E$, that is, in $E^c$.

After establishing results about the dimensions of the eigenspaces of U and O, we will show that the concept of joint eigenspaces allows us to establish an upper bound on the dimension of the complement, with the remarkable result that this dimension grows only linearly with n. Then, we propose an algorithm for efficient computation of the joint eigenspaces and, finally, use it to check our theoretical upper bound.

5.2 Eigenspaces of U and O

Set

$$F = H_N\otimes I_n \tag{36}$$

Then matrix F diagonalizes S:

$$FSF = (H_N\otimes I_n)\,P_{N,n}\,\hat{S}\,P_{n,N}\,(H_N\otimes I_n) \tag{37}$$
$$= P_{N,n}\,(I_n\otimes H_N)\,\hat{S}\,(I_n\otimes H_N)\,P_{n,N} \tag{38}$$
$$= P\,\mathrm{diag}\big(H_N\hat{S}_d H_N\big)_{d=1,\ldots,n}\,P^T \tag{39}$$

The latter term is diagonal because the mixed-product property, together with $H^2 = I$ and $HXH = Z$, shows that:

$$H_N\,\hat{S}_d\,H_N = I_{2^{n-d}}\otimes Z\otimes I_{2^{d-1}} \tag{40}$$

Once more, using the mixed-product property, we can also prove that $F$ leaves $C$ unchanged, that is:

$$FCF = C \tag{41}$$

The diagonal of $FSF$ is the concatenation of the binary representations of the numbers $0$ to $N-1$, with the mapping $0\to+1$ and $1\to-1$. That is:

$$FSF = \mathrm{diag}\big(S_0,\ldots,S_\kappa,\ldots,S_{N-1}\big) \tag{42}$$

Note that the diagonal of $S_\kappa$ contains $k$ times $-1$ and $n-k$ times $+1$ (where $k$ is the Hamming weight of $\kappa$).

Then, because $F^2 = I$, $FUF$ is a block-diagonal matrix:

$$FUF = (FSF)(FCF) \tag{43}$$
$$= \mathrm{diag}\big(S_\kappa\big)\,C \tag{44}$$

Block $\kappa$ is then:

$$U_\kappa = S_\kappa G_n \tag{45}$$

Indeed, any $v\in E_{+1,-1}(S_\kappa,G_n)$ satisfies $U_\kappa v = S_\kappa G_n v = -S_\kappa v = -v$. Hence, using Eq. (18), we have:

$$\dim E_{-1}(U_\kappa) \ge \dim E_{+1,-1}(S_\kappa,G_n) \tag{46}$$
$$\ge \dim E_{+1}(S_\kappa) + \dim E_{-1}(G_n) - n \tag{47}$$
$$= (n-k) + (n-1) - n \tag{48}$$
$$= n-k-1 \tag{49}$$

and

$$\dim E_{+1}(U_\kappa) \ge \dim E_{-1,-1}(S_\kappa,G_n) \tag{50}$$
$$\ge \dim E_{-1}(S_\kappa) + \dim E_{-1}(G_n) - n \tag{51}$$
$$= k + (n-1) - n \tag{52}$$
$$= k-1 \tag{53}$$

These two eigenspaces already account for $(n-k-1)+(k-1) = n-2$ dimensions, so there is only room left for at most two further eigenvalues; since $U_\kappa$ is real unitary, they form, at most, a conjugate pair.

Assume that this pair of eigenvalues exists. Since the diagonal entries of $G_n$ are equal to $-1+\frac{2}{n}$, the trace of $U_\kappa$ is:

$$\mathrm{trace}(U_\kappa) = (n-k)\Big({-1}+\frac{2}{n}\Big) + k\Big(1-\frac{2}{n}\Big) \tag{54}$$
$$= -n+2k+2\Big(1-\frac{2k}{n}\Big) \tag{55}$$

The sum of the eigenvalues is equal to the trace, and we already have the eigenvalue $-1$ with multiplicity $n-k-1$ and the eigenvalue $+1$ with multiplicity $k-1$. The sum of these $n-2$ eigenvalues is $-n+2k$. Then the sum of the two missing eigenvalues must be $2\big(1-\frac{2k}{n}\big)$. Let us denote them by $\lambda_k$ and $\lambda_k^*$. We must have $\mathrm{Re}(\lambda_k) = 1-\frac{2k}{n}$. Then, since $|\lambda_k| = 1$, we have:

$$\lambda_k = 1-\frac{2k}{n} + i\,\frac{2\sqrt{k(n-k)}}{n} \tag{56}$$
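Eq. (56) can be checked numerically in a few lines (our sketch; by permutation symmetry, the spectrum of $U_\kappa$ depends only on the Hamming weight $k$ of $\kappa$):

    n = 7; k = 3;
    theta = ones(n, 1) / sqrt(n);
    G = -eye(n) + 2*(theta*theta');         % Grover diffusion matrix, Eq. (2)
    Sk = diag([-ones(1, k), ones(1, n-k)]); % a diagonal S_kappa of Hamming weight k
    eig(Sk*G)                               % -1 (n-k-1 times), +1 (k-1 times), and
                                            % 1 - 2*k/n +/- i*2*sqrt(k*(n-k))/n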

Considering the eigenvalues of $-G_n$ and $I_n$, it is straightforward to show that the dimensions of the eigenspaces of the oracle are:

$$\dim E_{-1}(O) = M \qquad\text{and}\qquad \dim E_{+1}(O) = N_e - M \tag{57}$$

5.3 Upper bound on the dimension of the complement

The eigenvalues of $U$ belong to $\{-1,+1\}\cup\{\lambda_k,\lambda_k^*\}$ with $k\in\{1,\ldots,n-1\}$. Then, there are $2 + 2(n-1) = 2n$ eigenspaces of $U$.

For $j\in\{1,\ldots,2n\}$, let $\alpha_j$ be the dimensions of these eigenspaces and $\beta_j$ the dimensions of their intersections with $E_{+1}(O)$. An eigenvector of $U$ is in an intersection if and only if it is orthogonal to $E_{-1}(O)$. Then, because the dimension of $E_{-1}(O)$ is $M$, we have $\beta_j \ge \alpha_j - M$. Consequently:

$$\sum_{j=1}^{2n}\beta_j \ge \sum_{j=1}^{2n}\alpha_j - 2nM \tag{58}$$

Obviously, we have $\sum_{j=1}^{2n}\alpha_j = N_e$, so that:

$$\sum_{j=1}^{2n}\beta_j \ge N_e - 2nM \tag{59}$$

It follows that the dimension of the complement has an upper bound:

$$\dim E^c \le 2nM \tag{60}$$

This is a remarkable result: despite the fact that the dimension of the Hilbert space grows exponentially ($N_e = n2^n$), the dimension of the complement grows only linearly with $n$.

5.4 Fast computation of the joint eigenspaces

5.4.1 Introduction

To check our theoretical upper bound, we propose an efficient algorithm for fast computation of the joint eigenspaces.

We have to compute orthonormal bases of the joint eigenspaces of $U$ and $O$. The dimension of $E_{-1}(O)$ is small; hence, it makes sense to define it by an orthonormal basis generating the eigenspace. However, the dimension of $E_{+1}(O)$ is large (greater than $N_e/2$); hence, it is computationally more efficient to define it by an orthonormal basis of its complement (which is $E_{-1}(O)$). Indeed, $\dim E_{-1}(O) \ll \dim E_{+1}(O)$. We then have to design an algorithm adapted to each case.

5.4.2 Intersection of two eigenspaces defined by orthonormal bases

Let us consider a matrix $A$ whose columns are an orthonormal basis of an eigenspace of $U$, and a matrix $B$ whose columns are an orthonormal basis of $E_{-1}(O)$. Set $p$ and $q$ to be the numbers of columns of these matrices (their number of rows being $N_e$). We want to compute an $N_e\times r$ matrix $J$ whose columns are an orthonormal basis of the joint eigenspace (whose dimension we denote $r$). We propose the algorithm below, which is a straightforward adaptation of Theorem 1 in ref. [20].

First, we compute the p×q matrix C below:

$$C = A^\dagger B \tag{61}$$

Then, we compute the SVD of C:

$$C = U_c S_c V_c^\dagger \tag{62}$$

Denote by $s_k$ the singular values (the diagonal elements of $S_c$) and determine $r$ such that $s_k \ge 1-\varepsilon$ for $k = 1,\ldots,r$, and $s_k < 1-\varepsilon$ for $k > r$. Here $\varepsilon \ll 1$ is a very small positive value introduced to take into account the presence of small errors due to the computer's finite-precision arithmetic. Finally:

$$J = A\,U_c(:,1{:}r) \tag{63}$$

Or, equivalently, $J = B\,V_c(:,1{:}r)$.
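A possible Octave implementation of this procedure (our sketch; eps_tol plays the role of $\varepsilon$):

    function J = intersect_bases(A, B, eps_tol)
      [Uc, Sc, Vc] = svd(A' * B);   % SVD of C, Eqs. (61) and (62)
      s = diag(Sc);
      r = sum(s >= 1 - eps_tol);    % singular values close to 1 signal common directions
      J = A * Uc(:, 1:r);           % orthonormal basis of the intersection, Eq. (63)
    end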

5.4.3 Intersection of two eigenspaces, one of them being defined by an orthonormal basis of its complement

Let us consider a matrix $A$ whose columns are an orthonormal basis of an eigenspace of $U$, and a matrix $B$ whose columns are an orthonormal basis of the complement of $E_{+1}(O)$ (that is, of $E_{-1}(O)$). First, we compute the $p\times q$ matrix $C$ (Eq. (61)). Then, we compute the $p\times r$ matrix ($r\le p$) $Z$ below:

$$Z = \mathrm{null}(C^\dagger) \tag{64}$$

and we obtain an $N_e\times r$ matrix $J$ whose columns are an orthonormal basis of $E_{\lambda,+1}(U,O)$ from:

$$J = AZ \tag{65}$$

The justification of the algorithm is as follows. The $q$ columns of $C$ span the projection of $\mathrm{Im}(B)$ onto $\mathrm{Im}(A)$, the components being expressed in the basis of $\mathrm{Im}(A)$. The complement of $\mathrm{Im}(C)$ within $\mathrm{Im}(A)$ is the desired intersection (still expressed in the basis of $\mathrm{Im}(A)$); the columns of $Z$ are an orthonormal basis of it. Finally, Eq. (65) restores the components to the original space.
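The corresponding Octave sketch (ours) is even shorter:

    function J = intersect_with_complement(A, B)
      C = A' * B;     % projection of Im(B) onto Im(A), in the basis of A, Eq. (61)
      Z = null(C');   % complement of Im(C) within Im(A), Eq. (64)
      J = A * Z;      % restoration to the original space, Eq. (65)
    end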

5.5 Simulation results

Consider a hypercube of dimension $n=7$ with $M=3$ solutions located at nodes 2, 8, and 9. The dimension of the state space is then $N_e = n2^n = 896$. From the discussion above, we know that the dimension of the complement is upper bounded by $2nM = 42$.

The algorithm gives us the dimensions of the joint eigenspaces of $U$ and $O$ (Table 3). The sum of the dimensions of the joint eigenspaces is then $\sum_{j=1}^{2n}\beta_j = 858$, from which we obtain the dimension of the complement:

λ_O    λ_U                  dim E_{λ_O, λ_U}(O, U)
-1     any λ_k or λ_k*      0
+1     λ_0 = +1             321
+1     λ_1 or λ_1*          4 (each)
+1     λ_2 or λ_2*          18 (each)
+1     λ_3 or λ_3*          32 (each)
+1     λ_4 or λ_4*          32 (each)
+1     λ_5 or λ_5*          18 (each)
+1     λ_6 or λ_6*          4 (each)
+1     λ_7 = -1             321

Table 3.

Joint eigenspaces of O and U for n=7 and M=3 solutions located at nodes 2,8,9.

$$\dim E^c = N_e - \sum_{j=1}^{2n}\beta_j = 38 \tag{66}$$

We can see that, as expected, this dimension ($\dim E^c = 38$) is much smaller than the dimension of the original state space ($N_e = 896$), and that it is below the theoretical upper bound ($2nM = 42$).


6. Conclusions

The recent growth of research on quantum communications and quantum information processing opens new challenges. In this chapter, we have shown that matrix theory concepts, such as JEVD, are powerful tools to propose new theoretical results as well as efficient simulation algorithms.

In the domain of quantum coding, we have shown how to determine the encoding matrix of a quantum code from a collection of Pauli errors. On a more speculative note, as future work concerning the interception of quantum channels, such a method might also be used to identify the quantum encoder employed by a noncooperative transmitter.

In the domain of quantum walk search, thanks to JEVD we have proved that there exists a small subspace of the whole Hilbert space which captures the essence of the search process, and we have given an algorithm that allows us to check this result by simulation.


Acknowledgments

The authors thank the IBNM (Institut Brestois du Numérique et des Mathématiques), CyberIoT Chair of Excellence, for its support.


Abbreviations

qubit: quantum bit
JEVD: Joint EigenValue Decomposition
SVD: Singular Value Decomposition

References

  1. Humble TS. Quantum security for the physical layer. IEEE Communications Magazine. 2013;51(8):56-62
  2. Liao SK et al. Satellite-to-ground quantum key distribution. Nature. 2017;549:43-47
  3. Ren JG et al. Ground-to-satellite quantum teleportation. Nature. 2017;549:70-73
  4. Valivarthi R et al. Quantum teleportation across a metropolitan fibre network. Nature Photonics. 2016;10:676-680
  5. Yin J et al. Quantum teleportation and entanglement distribution over 100-kilometre free-space channels. Nature. 2012;488:185-188
  6. Microsoft. The Q# programming language user guide [Internet]. 2022. Available from: https://docs.microsoft.com/en-us/azure/quantum/user-guide/?view=qsharp-preview [Accessed: January 11, 2022]
  7. Golub GH, Van Loan CF. Matrix Computations. 3rd ed. Baltimore and London: The Johns Hopkins University Press; 1996
  8. Grover LK. Quantum mechanics helps in searching for a needle in a haystack. Physical Review Letters. 1997;79(2):325
  9. D'Angeli D, Donno A. Shuffling matrices, Kronecker product and discrete Fourier transform. Discrete Applied Mathematics. 2017;233:1-18
  10. Raussendorf R. Key ideas in quantum error correction. Philosophical Transactions of the Royal Society A. 2012;370:4541-4565
  11. Wootters W, Zurek W. A single quantum cannot be cloned. Nature. 1982;299(5886):802-803. DOI: 10.1038/299802a0
  12. Nielsen MA, Chuang IL. Quantum Computation and Quantum Information. Cambridge, UK: Cambridge University Press; 2010
  13. Steane A. Multiple-particle interference and quantum error correction. Proceedings of the Royal Society of London. 1996;452(1954):2551-2577. DOI: 10.1098/rspa.1996.0136
  14. Shor PW. Scheme for reducing decoherence in quantum computer memory. Physical Review A. 1995;52(4):R2493-R2496. DOI: 10.1103/PhysRevA.52.R2493
  15. Calderbank AR, Shor PW. Good quantum error-correcting codes exist. Physical Review A. 1996;54(2):1098-1105. DOI: 10.1103/physreva.54.1098
  16. Burel G, Pillin H, Baghious EH, Baird P, Gautier R. Identification of quantum encoder matrix from a collection of Pauli errors. Ho Chi Minh City, Vietnam: Asia-Pacific Conference on Communications; 2019
  17. Kempe J. Quantum random walks: An introductory overview. Contemporary Physics. 2003;44(4):307-327. DOI: 10.1080/00107151031000110776
  18. Moore C, Russell A. Quantum walks on the hypercube. In: Lecture Notes in Computer Science. Vol. 2483. New York: Springer; 2002. DOI: 10.1007/3-540-45726-7_14
  19. Shenvi N, Kempe J, Whaley KB. Quantum random-walk search algorithm. Physical Review A. 2003;67:052307
  20. Björck Å, Golub GH. Numerical methods for computing angles between linear subspaces. Mathematics of Computation. 1973;27(123):579-594. DOI: 10.2307/2005662
