
PREFACE

This solutions manual is designed to accompany the eighth edition of Linear Algebra with Applications by Steven J. Leon. The answers in this manual supplement those given in the answer key of the textbook. In addition, this manual contains the complete solutions to all of the nonroutine exercises in the book.

At the end of each chapter of the textbook there are two chapter tests (A and B) and a section of computer exercises to be solved using MATLAB. The questions in each Chapter Test A are to be answered as either true or false. Although the true-false answers are given in the Answer Section of the textbook, students are required to explain or prove their answers. This manual includes explanations, proofs, and counterexamples for all Chapter Test A questions. The chapter tests labeled B contain problems similar to the exercises in the chapter. The answers to these problems are not given in the Answers to Selected Exercises Section of the textbook; however, they are provided in this manual. Complete solutions are given for all of the nonroutine Chapter Test B exercises.

In the MATLAB exercises most of the computations are straightforward, so they have not been included in this solutions manual. The text also includes questions related to the computations, whose purpose is to emphasize the significance of the computations. This manual does provide the answers to most of these questions. Some questions do not admit a single answer; for example, some exercises involve randomly generated matrices, and in those cases the answers may depend on the particular random matrices that were generated.

Steven J. Leon

sleon@umassd.edu

Copyright © 2010 Pearson Education, Inc. Publishing as Prentice Hall.


Contents

1 Matrices and Systems of Equations 1

1 Systems of Linear Equations 1

2 Row Echelon Form 3

3 Matrix Arithmetic 4

4 Matrix Algebra 7

5 Elementary Matrices 13

6 Partitioned Matrices 19

MATLAB Exercises 23

Chapter Test A 25

Chapter Test B 28

2 Determinants 31

1 The Determinant of a Matrix 31

2 Properties of Determinants 34

3 Additional Topics and Applications 38

MATLAB Exercises 40

Chapter Test A 40

Chapter Test B 42

3 Vector Spaces 44

1 Definition and Examples 44

2 Subspaces 49

3 Linear Independence 53

4 Basis and Dimension 57

5 Change of Basis 60

6 Row Space and Column Space 60

MATLAB Exercises 69

Chapter Test A 70

Chapter Test B 72


4 Linear Transformations 76

1 Definition and Examples 76

2 Matrix Representations of Linear Transformations 80

3 Similarity 82

MATLAB Exercise 84

Chapter Test A 84

Chapter Test B 86

5 Orthogonality 88

1 The Scalar Product in R^n 88

2 Orthogonal Subspaces 91

3 Least Squares Problems 94

4 Inner Product Spaces 98

5 Orthonormal Sets 104

6 The Gram-Schmidt Process 113

7 Orthogonal Polynomials 115

MATLAB Exercises 119

Chapter Test A 120

Chapter Test B 122

6 Eigenvalues 126

1 Eigenvalues and Eigenvectors 126

2 Systems of Linear Differential Equations 132

3 Diagonalization 133

4 Hermitian Matrices 142

5 Singular Value Decomposition 150

6 Quadratic Forms 153

7 Positive Definite Matrices 156

8 Nonnegative Matrices 159

MATLAB Exercises 161

Chapter Test A 165

Chapter Test B 167

7 Numerical Linear Algebra 171

1 Floating-Point Numbers 171

2 Gaussian Elimination 171

3 Pivoting Strategies 173

4 Matrix Norms and Condition Numbers 174

5 Orthogonal Transformations 186

6 The Eigenvalue Problem 188

7 Least Squares Problems 192

MATLAB Exercises 195

Chapter Test A 197

Chapter Test B 198


Chapter 1

Matrices and Systems of Equations

1 SYSTEMS OF LINEAR EQUATIONS

2. (d)
        [ 1  1  1   1   1 ]
        [ 0  2  1  −2   1 ]
        [ 0  0  4   1  −2 ]
        [ 0  0  0   1  −3 ]
        [ 0  0  0   0   2 ]

5. (a) 3x1 + 2x2 = 8

x1 + 5x2 = 7
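
Since the chapter's computer exercises are carried out in MATLAB, a quick check of 5(a) there may be helpful. The following sketch is an illustration only (the solve step is not part of the printed answer): it enters the coefficients of the system and solves it with the backslash operator; substituting the result back reproduces the right-hand sides.

    A = [3 2; 1 5];          % coefficient matrix of the system in 5(a)
    b = [8; 7];              % right-hand sides
    x = A \ b                % returns x = [2; 1], and A*x reproduces b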


Section 1 • Definition and Examples 47

Therefore

(α + β)f = αf + βf

A7. For each x in [a, b],

[(αβ)f](x) = αβf(x) = α[βf(x)] = [α(βf)](x)

Therefore

(αβ)f = α(βf)

A8. For each x in [a, b]

1f(x) = f(x)

Therefore

1f = f

6. The proof is exactly the same as in Exercise 5.

9. (a) If y = β0 then

y + y = β0 + β0 = β(0 + 0) = β0 = y

and it follows that

(y + y) + (−y) = y + (−y)

y + [y + (−y)] = 0

y + 0 = 0

y = 0

(b) If αx = 0 and α ≠ 0, then it follows from part (a), A7, and A8 that
        0 = (1/α)0 = (1/α)(αx) = ((1/α)α)x = 1x = x

10. Axiom 6 fails to hold.

(α + β)x = ((α + β)x1, (α + β)x2)

αx + βx = ((α + β)x1, 0)

12. A1. x ⊕ y = x · y = y · x = y ⊕ x

A2. (x ⊕ y) ⊕ z = x · y · z = x ⊕ (y ⊕ z)

A3. Since x ⊕ 1 = x · 1 = x for all x, it follows that 1 is the zero vector.

A4. Let
        −x = (−1) ◦ x = x^(−1) = 1/x
    It follows that
        x ⊕ (−x) = x · (1/x) = 1   (the zero vector)
    Therefore 1/x is the additive inverse of x for the operation ⊕.
A5. α ◦ (x ⊕ y) = (x ⊕ y)^α = (x · y)^α = x^α · y^α
    α ◦ x ⊕ α ◦ y = x^α ⊕ y^α = x^α · y^α
A6. (α + β) ◦ x = x^(α+β) = x^α · x^β
    α ◦ x ⊕ β ◦ x = x^α ⊕ x^β = x^α · x^β
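
Because the chapter's MATLAB exercises encourage numerical experimentation, a brief spot check of A5 and A6 may be useful. The sketch below is an illustration only, not part of the printed solution; it uses the operations x ⊕ y = x·y and α ◦ x = x^α on a few positive numbers, and both differences are zero up to roundoff.

    x = 2; y = 3; alpha = 1.5; beta = -0.4;
    (x*y)^alpha - (x^alpha)*(y^alpha)       % A5: alpha o (x (+) y) versus (alpha o x) (+) (alpha o y)
    x^(alpha + beta) - (x^alpha)*(x^beta)   % A6: (alpha + beta) o x versus (alpha o x) (+) (beta o x)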


Section 3 • Least Squares Problems 97

10. (a) By the Consistency Theorem, Ax = b is consistent if and only if b is in
        R(A). We are given that b is in N(A^T). So if the system were consistent,
        then b would be in R(A) ∩ N(A^T) = {0}. Since b ≠ 0, the system must
        be inconsistent.

(b) If A has rank 3, then A^T A also has rank 3 (see Exercise 13 in Section 2).

The normal equations are always consistent and in this case there will be

2 free variables. So the least squares problem will have infinitely many

solutions.

11. (a) P^2 = A(A^T A)^(−1) A^T A(A^T A)^(−1) A^T = A(A^T A)^(−1) A^T = P
    (b) Prove: P^k = P for k = 1, 2, . . . .
        Proof: The proof is by mathematical induction. In the case k = 1 we
        have P^1 = P. If P^m = P for some m, then
            P^(m+1) = P P^m = P P = P^2 = P
    (c) P^T = [A(A^T A)^(−1) A^T]^T
            = (A^T)^T [(A^T A)^(−1)]^T A^T
            = A[(A^T A)^T]^(−1) A^T
            = A(A^T A)^(−1) A^T
            = P
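
Both properties are easy to confirm numerically in MATLAB. The sketch below is an illustration only, under the assumption that A has full column rank so that A^T A is invertible; it is not part of the printed solution.

    A = randn(6, 3);                 % a generic matrix, full column rank with probability 1
    P = A * ((A'*A) \ A');           % P = A (A^T A)^(-1) A^T
    norm(P*P - P)                    % essentially zero: P is idempotent
    norm(P' - P)                     % essentially zero: P is symmetric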

12. If
        [ A   I  ] [ x̂ ]   [ b ]
        [ O  A^T ] [ r  ] = [ 0 ]
    then
        Ax̂ + r = b
        A^T r = 0
    We have then that
        r = b − Ax̂
        A^T r = A^T b − A^T A x̂ = 0
    Therefore
        A^T A x̂ = A^T b
    So x̂ is a solution to the normal equations and hence is a least squares
    solution to Ax = b.
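
For readers who want to verify this in MATLAB, the following sketch builds the block system above for a randomly generated A and b (an assumption made only for illustration) and checks that the computed x̂ satisfies the normal equations.

    A = randn(6, 3);  b = randn(6, 1);         % a generic overdetermined problem
    M = [A, eye(6); zeros(3, 3), A'];          % block coefficient matrix [A I; O A^T]
    z = M \ [b; zeros(3, 1)];
    xhat = z(1:3);  r = z(4:9);                % split z into xhat and the residual r
    norm(A'*A*xhat - A'*b)                     % essentially zero: the normal equations hold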

13. If xˆ is a solution to the least squares problem, then xˆ is a solution to the

normal equations

A

TAx = A

T b

It follows that a vector y ∈ R

n will be a solution if and only if

y = xˆ + z

for some z ∈ N(ATA). (See Exercise 26, Chapter 3, Section 2). Since

N(A

TA) = N(A)


Section 4 • Hermitian Matrices 147

18. (a) A and T are similar and hence have the same eigenvalues. Since T is

triangular, its eigenvalues are t11 and t22.

(b) It follows from the Schur decomposition of A that

AU = UT

where U is unitary. Comparing the first columns of each side of this

equation we see that

Au1 = Ut1 = t11u1

Hence u1 is an eigenvector of A belonging to t11.

(c) Comparing the second column of AU = UT, we see that

Au2 = Ut2

= t12u1 + t22u2

Since u1 and u2 are linearly independent and t12 ≠ 0, t12u1 + t22u2 cannot be
equal to a scalar multiple of u2. So u2 is not an eigenvector of A.
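
Part (b) can be checked numerically with MATLAB's schur function, which returns the factorization A = UTU^H used here. The sketch below uses a randomly generated matrix purely for illustration.

    A = randn(4);
    [U, T] = schur(A, 'complex');     % A*U = U*T with U unitary, T upper triangular
    u1 = U(:, 1);
    norm(A*u1 - T(1,1)*u1)            % essentially zero: u1 is an eigenvector belonging to t11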

19. (a) If the eigenvalues are all real, then there will be five 1 × 1 blocks. The
        blocks can occur in any order depending on how the eigenvalues are
        ordered.
    (b) If A has three real eigenvalues and one pair of complex conjugate
        eigenvalues, then there will be three 1 × 1 blocks corresponding to the
        real eigenvalues and one 2 × 2 block corresponding to the pair of
        complex conjugate eigenvalues. The blocks may appear in any order on
        the diagonal of the Schur form matrix T.
    (c) If A has one real eigenvalue and two pairs of complex conjugate
        eigenvalues, then there will be a single 1 × 1 block and two 2 × 2 blocks.
        The three blocks may appear in any order along the diagonal of the
        Schur form matrix T.

20. If A has Schur decomposition UTU^H and the diagonal entries of T are all
    distinct, then by Exercise 20 in Section 3 there is an upper triangular matrix
    R that diagonalizes T. Thus we can factor T into a product RDR^(−1), where
    D is a diagonal matrix. It follows that
        A = UTU^H = U(RDR^(−1))U^H = (UR)D(R^(−1)U^H)
    and hence the matrix X = UR diagonalizes A.
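
A numerical illustration of this factorization: the sketch below assumes the randomly generated A has distinct eigenvalues (almost surely true), so the eigenvectors of the triangular factor T form an upper triangular R, and X = UR diagonalizes A.

    A = randn(4);                      % eigenvalues are distinct with probability 1
    [U, T] = schur(A, 'complex');      % A = U*T*U' with U unitary, T upper triangular
    [R, D] = eig(T);                   % R is upper triangular when diag(T) is distinct
    X = U * R;
    norm(A*X - X*D)                    % essentially zero: X diagonalizes A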

21. M^H = (A − iB)^T = A^T − iB^T
    −M = −A − iB
    Therefore M^H = −M if and only if A^T = −A and B^T = B.

22. If A is skew Hermitian, then A^H = −A. Let λ be any eigenvalue of A and
    let z be a unit eigenvector belonging to λ. It follows that
        z^H A z = λ z^H z = λ ‖z‖^2 = λ
    and hence
        λ̄ = λ^H = (z^H A z)^H = z^H A^H z = −z^H A z = −λ
    This implies that λ is purely imaginary.
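
The conclusion is easy to see numerically as well. The following MATLAB sketch builds a random skew Hermitian matrix (for illustration only) and checks that its eigenvalues have negligible real parts.

    B = randn(4) + 1i*randn(4);
    A = B - B';                 % A' = -A, so A is skew Hermitian
    lambda = eig(A);
    max(abs(real(lambda)))      % essentially zero: the eigenvalues are purely imaginary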


Chapter Test A 197

(d) Both AX and U1(U1)^T are projection matrices onto R(A). Since the
    projection matrix onto a subspace is unique, it follows that
        AX = U1(U1)^T
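
As a numerical illustration, the sketch below takes X to be the pseudoinverse of A and U1 to be an orthonormal basis for R(A) obtained from the singular value decomposition; these are assumptions based on the context of the exercise, which is not reproduced in full here.

    A = randn(6, 3);
    X = pinv(A);                  % X assumed here to be the pseudoinverse of A
    [U, S, V] = svd(A);
    U1 = U(:, 1:rank(A));         % the first r columns of U span R(A)
    norm(A*X - U1*U1')            % essentially zero: both are the projection onto R(A)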

16. (b) The disk centered at 50 is disjoint from the other two disks, so it contains

exactly one eigenvalue. The eigenvalue is real so it must lie in the interval

[46, 54]. The matrix C is similar to B and hence must have the same

eigenvalues. The disks of C centered at 3 and 7 are disjoint from the

other disks. Therefore each of the two disks contains an eigenvalue.

These eigenvalues are real and consequently must lie in the intervals

[2.7, 3.3] and [6.7, 7.3]. The matrix C^T has the same eigenvalues as C
and B. Using the Gerschgorin disk corresponding to the third row of C^T,
we see that the dominant eigenvalue must lie in the interval [49.6, 50.4].

Thus without computing the eigenvalues of B we are able to obtain nice

approximations to their actual locations.
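
Gerschgorin intervals like the ones quoted above are easy to compute in MATLAB. The sketch below works with a generic real matrix, since the particular matrix B of the exercise is not reproduced here; applying the same two lines to the transpose gives the column disks used for C^T above.

    A = 10*diag(1:4) + 0.3*randn(4);            % a generic matrix with well-separated diagonal
    centers = diag(A);
    radii   = sum(abs(A), 2) - abs(centers);    % row sums of off-diagonal magnitudes
    [centers - radii, centers + radii]          % real intervals containing the row disks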

CHAPTER TEST A

1. The statement is false in general. For example, if
        a = 0.11 × 10^0,   b = 0.32 × 10^(−2),   c = 0.33 × 10^(−2)
   and 2-digit decimal arithmetic is used, then
        fl(fl(a + b) + c) = a = 0.11 × 10^0
   and
        fl(a + fl(b + c)) = 0.12 × 10^0
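
The 2-digit arithmetic above can be mimicked in MATLAB by rounding every intermediate result to two significant digits. The helper chop2 below is a hypothetical name introduced only for this sketch.

    chop2 = @(x) round(x, 2, 'significant');   % keep two significant decimal digits
    a = 0.11;  b = 0.0032;  c = 0.0033;
    chop2(chop2(a + b) + c)                    % returns 0.11, i.e., a
    chop2(a + chop2(b + c))                    % returns 0.12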

2. The statement is false in general. For example, if A and B are both 2 × 2

matrices and C is a 2 × 1 matrix, then the computation of A(BC) requires

8 multiplications and 4 additions, while the computation of (AB)C requires

12 multiplications and 6 additions.
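
The counts quoted here follow from the rule that multiplying an m × n matrix by an n × p matrix costs mnp scalar multiplications. A short MATLAB tally of the two orderings, for illustration only:

    m = 2; n = 2; p = 2; q = 1;        % A is m x n, B is n x p, C is p x q
    mults_AB_C = m*n*p + m*p*q         % (AB)C: 8 + 4 = 12 multiplications
    mults_A_BC = n*p*q + m*n*q         % A(BC): 4 + 4 = 8 multiplications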

3. The statement is false in general. It is possible to have a large relative error

if the coefficient matrix is ill-conditioned. For example, the n × n Hilbert

matrix H is defined by

        hij = 1/(i + j − 1)

For n = 12, the matrix H is nonsingular, but it is very ill-conditioned. If you

tried to solve a nonhomogeneous linear system with this coefficient matrix

you would not get an accurate solution.
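
This behavior is easy to reproduce with MATLAB's built-in hilb function. The sketch below solves a system whose exact solution is known and reports the relative error, which is large even though the elimination algorithm itself is carried out carefully.

    n = 12;
    H = hilb(n);                   % the 12 x 12 Hilbert matrix
    cond(H)                        % enormous condition number, on the order of 10^16
    x = ones(n, 1);
    b = H * x;                     % right-hand side with known exact solution x
    xhat = H \ b;
    norm(x - xhat) / norm(x)       % large relative error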

4. The statement is true. For a symmetric matrix the eigenvalue problem is well

conditioned. (See the remarks following Theorem 7.6.1.) If a stable algorithm

is used then the computed eigenvalues should be the exact eigenvalues of a

nearby matrix, i.e., a matrix of the form A + E where ‖E‖ is small. Since

the problem is well conditioned the eigenvalues of nearby matrices will be

good approximations to the eigenvalues of A.
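
A small MATLAB experiment, included only as an illustration, makes the point: perturbing a random symmetric matrix by a tiny symmetric E moves its eigenvalues by no more than about ‖E‖.

    A = randn(5);  A = (A + A')/2;             % a random symmetric matrix
    E = 1e-8*randn(5);  E = (E + E')/2;        % a tiny symmetric perturbation
    max(abs(sort(eig(A + E)) - sort(eig(A))))  % comparable to norm(E), about 1e-8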
