## Abstract

In this chapter, we consider special compound 4*n* × 4*n* magic squares. We determine a (2*n* − 3)-dimensional subspace of the nullspace of these 4*n* × 4*n* squares. Every vector in this subspace has the property that the sum of all its entries equals zero.

### Keywords

- null space
- magic squares
- mathematical induction

## 1. Introduction

A semi-magic square is an *n* × *n* matrix in which every row and every column sums to the same constant, called the magic constant; if both main diagonals also attain this sum, the square is magic, and if all broken diagonals attain it as well, the square is pandiagonal (panmagic). A 4 × 4 pandiagonal magic square can be written in the form

$$\begin{pmatrix} A & B & C & D \\ E & F & G & H \\ s-C & s-D & s-A & s-B \\ s-G & s-H & s-E & s-F \end{pmatrix}$$

where 2s denotes the magic constant.

This form was developed by Rosser and Walker. Hendricks proved that the determinant of a pandiagonal magic square is zero. We note that every antipodal pair of elements adds up to one-half of the magic constant. Al-Amerie considered some of the present results in his M.Sc. thesis. There are three fundamental primitive 4 × 4 pandiagonal squares; Kraitchik (see [3, 8]) has shown how to derive all pandiagonal squares from these three particular ones.
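These structural facts are easy to check by machine. The following sketch verifies, for a sample 4 × 4 pandiagonal square (an illustrative choice of ours, not one of the three primitives), that all rows, columns, and broken diagonals share one constant and that every antipodal pair sums to half of it:

```python
# Check the pandiagonal and antipodal-pair properties on a sample
# 4x4 pandiagonal magic square (an illustrative choice).
P = [[1, 8, 13, 12],
     [14, 11, 2, 7],
     [4, 5, 16, 9],
     [15, 10, 3, 6]]

magic = sum(P[0])  # magic constant, 34 for this square

rows = [sum(r) for r in P]
cols = [sum(P[i][j] for i in range(4)) for j in range(4)]
# Broken diagonals in both directions (column indices taken mod 4).
diags_down = [sum(P[i][(j + i) % 4] for i in range(4)) for j in range(4)]
diags_up = [sum(P[i][(j - i) % 4] for i in range(4)) for j in range(4)]
assert all(s == magic for s in rows + cols + diags_down + diags_up)

# Antipodal pairs (cells two steps apart in each direction, mod 4)
# sum to one-half of the magic constant.
antipodal = [P[i][j] + P[(i + 2) % 4][(j + 2) % 4]
             for i in range(4) for j in range(4)]
assert all(2 * a == magic for a in antipodal)
```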

We define a certain class of

Definition 1: A

The following matrix is a possible form for this kind of square:

where

Note that we have the following relations:

Using Maple we can show that the

where

Note that the sum of all entries of the vectors is zero. For example:

$$\begin{pmatrix} -51 & 39 & 26 & 0 & 9 & 13 \\ 54 & -10 & -2 & -5 & 4 & -5 \\ -5 & 1 & 2 & 3 & 17 & 18 \\ 12 & 3 & -1 & 63 & -27 & -14 \\ 17 & 8 & 17 & -42 & 22 & 14 \\ 9 & -5 & -6 & 17 & 11 & 10 \end{pmatrix}$$

has as nullspace
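The sum-zero property is no accident: every column of the square sums to the (nonzero) magic constant s, so left-multiplying Mv = 0 by the all-ones row vector gives s times the entry sum of v, which must therefore vanish. A sketch verifying this for the 6 × 6 example above, using sympy:

```python
from sympy import Matrix

# The 6x6 magic square displayed above (magic constant 36).
M = Matrix([
    [-51, 39, 26, 0, 9, 13],
    [54, -10, -2, -5, 4, -5],
    [-5, 1, 2, 3, 17, 18],
    [12, 3, -1, 63, -27, -14],
    [17, 8, 17, -42, 22, 14],
    [9, -5, -6, 17, 11, 10],
])

# Rows, columns, and both main diagonals all sum to 36.
assert all(sum(M.row(i)) == 36 for i in range(6))
assert all(sum(M.col(j)) == 36 for j in range(6))
assert sum(M[i, i] for i in range(6)) == 36
assert sum(M[i, 5 - i] for i in range(6)) == 36

# Every nullspace basis vector is annihilated by M and has entry sum 0.
for v in M.nullspace():
    assert all(x == 0 for x in M * v)
    assert sum(v) == 0
```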

Definition 2: A

is called a compound magic square if the following relation holds:

It is easy to check that the last relation guarantees that the square is a magic 8 × 8 square. In the same manner we can combine four panmagic squares into a magic square.
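As a minimal illustration of the combination step (using four copies of a single 4 × 4 panmagic square, an assumption that trivially satisfies any compatibility relation between the blocks), the resulting 8 × 8 compound square is again magic:

```python
# Combine four 4x4 panmagic squares into an 8x8 compound square and
# check that the result is magic. Here all four blocks are copies of
# one square -- an illustrative special case.
P = [[1, 8, 13, 12],
     [14, 11, 2, 7],
     [4, 5, 16, 9],
     [15, 10, 3, 6]]

# Block layout [[P, P], [P, P]].
M = [P[i % 4] + P[i % 4] for i in range(8)]

magic = sum(M[0])  # 68 = 2 * 34
assert all(sum(r) == magic for r in M)
assert all(sum(M[i][j] for i in range(8)) == magic for j in range(8))
assert sum(M[i][i] for i in range(8)) == magic
assert sum(M[i][7 - i] for i in range(8)) == magic
```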

Definition 3: Let

is called the compound

The condition

## 2. Main results

We prove first a simple result for a compound square of

Proposition 1: The compound

Proof: First we note that the vector

is a nonzero vector, which belongs to the nullspace of the square, since the squares have the same magic constant.
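The displayed vector is lost here, but the argument suggests the sign pattern (1, …, 1, −1, …, −1)ᵀ: each row of the compound square then contributes (magic constant) − (magic constant) = 0. A sketch under that assumption, compounding a 4 × 4 pandiagonal square with its transpose (both have magic constant 34):

```python
# v = (1,1,1,1,-1,-1,-1,-1)^T annihilates a 2x2-block compound square
# whenever all blocks share one magic constant. The blocks below are an
# illustrative choice: a pandiagonal square A and its transpose B.
A = [[1, 8, 13, 12],
     [14, 11, 2, 7],
     [4, 5, 16, 9],
     [15, 10, 3, 6]]
B = [list(col) for col in zip(*A)]  # transpose, also magic constant 34

# Compound layout [[A, B], [B, A]].
M = [A[i] + B[i] for i in range(4)] + [B[i] + A[i] for i in range(4)]

v = [1] * 4 + [-1] * 4
Mv = [sum(M[i][j] * v[j] for j in range(8)) for i in range(8)]
assert Mv == [0] * 8  # v lies in the nullspace of the compound square
```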

Now, the square

belongs to the nullspace of the square. To do this we compute the following matrix multiplication:

According to the choice of

Note that we used the relation

According to Al-Ashhab (see [3]) we can assume that the vectors in the nullspace of the pandiagonal magic square are

Further, we can assume that

Hence, we can assume that:

Since the sum of two pandiagonal magic squares is pandiagonal magic, we deduce that four rows in the matrix in Eq. (2) are redundant. Since we have the relations

the application of elementary row operations to the matrix in Eq. (2) yields

where

This analysis enables us to conclude the following relations from (2):

If we set

which is consistent with the previous relations, we conclude that the vector

belongs to the nullspace of the square. We can make another choice as follows.

and we obtain a vector belonging to the nullspace of the square, which is

Now, the vector

is linearly independent of the last two vectors, since its first two entries are not the opposites of its third and fourth entries. □

For example, the following square is a compound

$$\begin{pmatrix} 0 & 14 & -19 & 13 & 10 & 5 & -22 & 15 \\ -12 & 6 & 7 & 7 & -20 & 13 & 12 & 3 \\ 23 & -9 & 4 & -10 & 26 & -11 & -6 & -1 \\ -3 & -3 & 16 & -2 & -8 & 1 & 24 & -9 \\ -16 & 25 & -17 & 16 & -6 & 16 & -20 & 18 \\ 1 & -2 & 2 & 7 & -7 & 5 & 7 & 3 \\ 21 & -12 & 20 & -21 & 24 & -14 & 10 & -12 \\ 2 & -3 & 3 & 6 & -3 & 1 & 11 & -1 \end{pmatrix}$$

For this square we can construct as described the following two vectors in its nullspace

In fact, its nullity is 3. Thus, these two vectors together with

form a basis of its nullspace.
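These claims can be checked mechanically. The sketch below verifies that the displayed square is magic with constant 16 and that the sign-pattern vector (1, 1, 1, 1, −1, −1, −1, −1)ᵀ lies in its nullspace; the nullity itself can be obtained the same way from an exact rank computation (e.g., with sympy):

```python
# The compound 8x8 example square from the text (magic constant 16;
# each 4x4 block has magic constant 8).
M = [
    [0, 14, -19, 13, 10, 5, -22, 15],
    [-12, 6, 7, 7, -20, 13, 12, 3],
    [23, -9, 4, -10, 26, -11, -6, -1],
    [-3, -3, 16, -2, -8, 1, 24, -9],
    [-16, 25, -17, 16, -6, 16, -20, 18],
    [1, -2, 2, 7, -7, 5, 7, 3],
    [21, -12, 20, -21, 24, -14, 10, -12],
    [2, -3, 3, 6, -3, 1, 11, -1],
]

assert all(sum(r) == 16 for r in M)
assert all(sum(M[i][j] for i in range(8)) == 16 for j in range(8))
assert sum(M[i][i] for i in range(8)) == 16
assert sum(M[i][7 - i] for i in range(8)) == 16

# Since every half-row sums to 8, the sign-pattern vector is annihilated.
v = [1] * 4 + [-1] * 4
assert all(sum(M[i][j] * v[j] for j in range(8)) == 0 for i in range(8))
```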

We prove now a similar result to the previous proposition, where we replace the

Proposition 2: The compound

Proof: First we note that the vector

is a nonzero vector, which belongs to the nullspace of the square, since the squares have the same magic constant.

We look for scalars

We transform this equation into a linear system, in which we eliminate the redundant equations. The system becomes

From the definition of the panmagic square we know that

Thus, due to Eqs. (3)–(5), we can reduce the linear system to the following

We can verify using the computer that the coefficient matrix of this system has, in general, rank four. Hence, we deduce that

Remark: Here we did not make use of the relation

For example, the following square is a compound

$$\begin{pmatrix} -51 & 39 & 26 & 0 & 9 & 13 & 6 & 17 & 15 & -6 & 0 & 4 \\ 54 & -10 & -2 & -5 & 4 & -5 & 20 & 5 & 2 & 0 & 9 & 0 \\ -5 & 1 & 2 & 3 & 17 & 18 & -24 & 6 & 7 & 8 & 19 & 20 \\ 12 & 3 & -1 & 63 & -27 & -14 & 18 & 12 & 8 & 6 & -5 & -3 \\ 17 & 8 & 17 & -42 & 22 & 14 & 12 & 3 & 12 & -8 & 7 & 10 \\ 9 & -5 & -6 & 17 & 11 & 10 & 4 & -7 & -8 & 36 & 6 & 5 \\ 2 & 53 & 45 & -131 & 33 & 34 & 59 & 31 & 34 & -137 & 24 & 25 \\ -10 & 0 & 10 & 11 & 12 & 13 & -44 & 15 & 14 & 16 & 17 & 18 \\ -89 & 21 & 22 & 23 & 29 & 30 & -108 & 26 & 27 & 28 & 31 & 32 \\ 143 & -21 & -22 & 10 & -41 & -33 & 149 & -12 & -13 & -47 & -19 & -22 \\ 1 & 0 & -1 & 22 & 12 & 2 & -4 & -5 & -6 & 56 & -3 & -2 \\ -11 & -17 & -18 & 101 & -9 & -10 & -16 & -19 & -20 & 120 & -14 & -15 \end{pmatrix}$$

Using the computer we can verify that its nullity is 3. In other words, the constructed subspace is the nullspace itself.
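As with the 8 × 8 example, the stated properties can be confirmed by machine. The sketch below checks that the 12 × 12 square is magic with constant 72 (its top-left 6 × 6 block is the earlier example with constant 36) and that the sign-pattern vector (1, …, 1, −1, …, −1)ᵀ lies in its nullspace; the nullity follows from an exact rank computation:

```python
# The compound 12x12 example square from the text (magic constant 72;
# each 6x6 block has magic constant 36).
M = [
    [-51, 39, 26, 0, 9, 13, 6, 17, 15, -6, 0, 4],
    [54, -10, -2, -5, 4, -5, 20, 5, 2, 0, 9, 0],
    [-5, 1, 2, 3, 17, 18, -24, 6, 7, 8, 19, 20],
    [12, 3, -1, 63, -27, -14, 18, 12, 8, 6, -5, -3],
    [17, 8, 17, -42, 22, 14, 12, 3, 12, -8, 7, 10],
    [9, -5, -6, 17, 11, 10, 4, -7, -8, 36, 6, 5],
    [2, 53, 45, -131, 33, 34, 59, 31, 34, -137, 24, 25],
    [-10, 0, 10, 11, 12, 13, -44, 15, 14, 16, 17, 18],
    [-89, 21, 22, 23, 29, 30, -108, 26, 27, 28, 31, 32],
    [143, -21, -22, 10, -41, -33, 149, -12, -13, -47, -19, -22],
    [1, 0, -1, 22, 12, 2, -4, -5, -6, 56, -3, -2],
    [-11, -17, -18, 101, -9, -10, -16, -19, -20, 120, -14, -15],
]

n = 12
assert all(sum(r) == 72 for r in M)
assert all(sum(M[i][j] for i in range(n)) == 72 for j in range(n))
assert sum(M[i][i] for i in range(n)) == 72
assert sum(M[i][n - 1 - i] for i in range(n)) == 72

# Each half-row sums to 36, so the sign-pattern vector is annihilated.
v = [1] * 6 + [-1] * 6
assert all(sum(M[i][j] * v[j] for j in range(n)) == 0 for i in range(n))
```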

We can generalize the previous result for an arbitrary number of squares involved in the compound square.

Theorem 1: Let

such that

possesses a

and

Proof: We will check first that these vectors belong to the nullspace of the matrix. When we multiply the first vector with the matrix, we obtain a vector having in the first row

Since we know that

we obtain zero in the second entry of the vector. Since the third and fourth rows of the squares are complementary to the first two rows, we deduce that the third and fourth entries of the vector are also zero. Now, the fifth entry of the vector is

We use the following relations according to our assumption

and obtain

We continue checking the entries until we reach the last entry, which is

We use

in order to obtain this value of the entry

This completes the check for the first vector.

Now, we turn our attention to the second vector. When we multiply the matrix by it, we obtain as the first entry

Using the relations

we deduce that the second entry is also zero. In a similar manner we can deal with the third and fourth entries. The fifth entry will be

We use the relations

to obtain zero for the fifth entry.

We continue checking the entries until we reach the last entry, which is

Using the relations

we get

Hence, the second vector belongs to the nullspace of the (4*n* × 4*n*)-matrix.

Similarly, we can check that all the other vectors are included in the nullspace of the (4*n* × 4*n*)-matrix. The first entry obtained by the matrix multiplication is:

As before we deduce also that the second, third, and fourth entries are zero. The fifth entry is

We use the relations

Therefore, this entry is

When we reach the (

We use the relations

to prove that this entry is

We prove now that the vectors are linearly independent. Let

This leads us to the following vector, which must equal the zero vector.

From the (

According to our assumptions we must have

Hence, we conclude that