Front Cover
Title Page
Copyright
Contents
1 Introduction
1.1 Preliminary Remarks
Significant Digits of Precision: Examples
Errors: Absolute and Relative
Accuracy and Precision
Rounding and Chopping
Nested Multiplication
Pairs of Easy/Hard Problems
First Programming Experiment
Mathematical Software
Summary
Additional References
Problems 1.1
Computer Problems 1.1
1.2 Review of Taylor Series
Taylor Series
Complete Horner’s Algorithm
Taylor’s Theorem in Terms of (x – c)
Mean-Value Theorem
Taylor’s Theorem in Terms of h
Alternating Series
Summary
Additional References
Problems 1.2
Computer Problems 1.2
2 Floating-Point Representation and Errors
2.1 Floating-Point Representation
Normalized Floating-Point Representation
Floating-Point Representation
Single-Precision Floating-Point Form
Double-Precision Floating-Point Form
Computer Errors in Representing Numbers
Notation fl(x) and Backward Error Analysis
Historical Notes
Summary
Problems 2.1
Computer Problems 2.1
2.2 Loss of Significance
Significant Digits
Computer-Caused Loss of Significance
Theorem on Loss of Precision
Avoiding Loss of Significance in Subtraction
Range Reduction
Summary
Additional References
Problems 2.2
Computer Problems 2.2
3 Locating Roots of Equations
3.1 Bisection Method
Introduction
Bisection Algorithm and Pseudocode
Examples
Convergence Analysis
False Position (Regula Falsi) Method and Modifications
Summary
Problems 3.1
Computer Problems 3.1
3.2 Newton’s Method
Interpretations of Newton’s Method
Pseudocode
Illustration
Convergence Analysis
Systems of Nonlinear Equations
Fractal Basins of Attraction
Summary
Additional References
Problems 3.2
Computer Problems 3.2
3.3 Secant Method
Secant Algorithm
Convergence Analysis
Comparison of Methods
Hybrid Schemes
Fixed-Point Iteration
Summary
Additional References
Problems 3.3
Computer Problems 3.3
4 Interpolation and Numerical Differentiation
4.1 Polynomial Interpolation
Preliminary Remarks
Polynomial Interpolation
Interpolating Polynomial: Lagrange Form
Existence of Interpolating Polynomial
Interpolating Polynomial: Newton Form
Nested Form
Calculating Coefficients a_i Using Divided Differences
Algorithms and Pseudocode
Vandermonde Matrix
Inverse Interpolation
Polynomial Interpolation by Neville’s Algorithm
Interpolation of Bivariate Functions
Summary
Problems 4.1
Computer Problems 4.1
4.2 Errors in Polynomial Interpolation
Dirichlet Function
Runge Function
Theorems on Interpolation Errors
Summary
Problems 4.2
Computer Problems 4.2
4.3 Estimating Derivatives and Richardson Extrapolation
First-Derivative Formulas via Taylor Series
Richardson Extrapolation
First-Derivative Formulas via Interpolation Polynomials
Second-Derivative Formulas via Taylor Series
Noise in Computation
Summary
Additional References for Chapter 4
Problems 4.3
Computer Problems 4.3
5 Numerical Integration
5.1 Lower and Upper Sums
Definite and Indefinite Integrals
Lower and Upper Sums
Riemann-Integrable Functions
Examples and Pseudocode
Summary
Problems 5.1
Computer Problems 5.1
5.2 Trapezoid Rule
Uniform Spacing
Error Analysis
Applying the Error Formula
Recursive Trapezoid Formula for Equal Subintervals
Multidimensional Integration
Summary
Problems 5.2
Computer Problems 5.2
5.3 Romberg Algorithm
Description
Pseudocode
Euler-Maclaurin Formula
General Extrapolation
Summary
Additional References
Problems 5.3
Computer Problems 5.3
6 Additional Topics on Numerical Integration
6.1 Simpson’s Rule and Adaptive Simpson’s Rule
Basic Simpson’s Rule
Simpson’s Rule
Composite Simpson’s Rule
An Adaptive Simpson’s Scheme
Example Using Adaptive Simpson Procedure
Newton-Cotes Rules
Summary
Problems 6.1
Computer Problems 6.1
6.2 Gaussian Quadrature Formulas
Description
Change of Intervals
Gaussian Nodes and Weights
Legendre Polynomials
Integrals with Singularities
Summary
Additional References
Problems 6.2
Computer Problems 6.2
7 Systems of Linear Equations
7.1 Naive Gaussian Elimination
A Larger Numerical Example
Algorithm
Pseudocode
Testing the Pseudocode
Residual and Error Vectors
Summary
Problems 7.1
Computer Problems 7.1
7.2 Gaussian Elimination with Scaled Partial Pivoting
Naive Gaussian Elimination Can Fail
Partial Pivoting and Complete Pivoting
Gaussian Elimination with Scaled Partial Pivoting
A Larger Numerical Example
Pseudocode
Long Operation Count
Numerical Stability
Scaling
Summary
Problems 7.2
Computer Problems 7.2
7.3 Tridiagonal and Banded Systems
Tridiagonal Systems
Strict Diagonal Dominance
Pentadiagonal Systems
Block Pentadiagonal Systems
Summary
Additional References
Problems 7.3
Computer Problems 7.3
8 Additional Topics Concerning Systems of Linear Equations
8.1 Matrix Factorizations
Numerical Example
Formal Derivation
Pseudocode
Solving Linear Systems Using LU Factorization
LDL^T Factorization
Cholesky Factorization
Multiple Right-Hand Sides
Computing A^{-1}
Example Using Software Packages
Summary
Problems 8.1
Computer Problems 8.1
8.2 Iterative Solutions of Linear Systems
Vector and Matrix Norms
Condition Number and Ill-Conditioning
Basic Iterative Methods
Pseudocode
Convergence Theorems
Matrix Formulation
Another View of Overrelaxation
Conjugate Gradient Method
Summary
Problems 8.2
Computer Problems 8.2
8.3 Eigenvalues and Eigenvectors
Calculating Eigenvalues and Eigenvectors
Mathematical Software
Properties of Eigenvalues
Gershgorin’s Theorem
Singular Value Decomposition
Numerical Examples of Singular Value Decomposition
Application: Linear Differential Equations
Application: A Vibration Problem
Summary
Problems 8.3
Computer Problems 8.3
8.4 Power Method
Power Method Algorithms
Aitken Acceleration
Inverse Power Method
Software Examples: Inverse Power Method
Shifted (Inverse) Power Method
Example: Shifted Inverse Power Method
Summary
Additional References
Problems 8.4
Computer Problems 8.4
9 Approximation by Spline Functions
9.1 First-Degree and Second-Degree Splines
First-Degree Spline
Modulus of Continuity
Second-Degree Splines
Interpolating Quadratic Spline Q(x)
Subbotin Quadratic Spline
Summary
Problems 9.1
Computer Problems 9.1
9.2 Natural Cubic Splines
Introduction
Natural Cubic Spline
Algorithm for Natural Cubic Spline
Pseudocode for Natural Cubic Splines
Using Pseudocode for Interpolating and Curve Fitting
Space Curves
Smoothness Property
Summary
Problems 9.2
Computer Problems 9.2
9.3 B Splines: Interpolation and Approximation
Interpolation and Approximation by B Splines
Pseudocode and a Curve-Fitting Example
Schoenberg’s Process
Pseudocode
Bézier Curves
Summary
Additional References
Problems 9.3
Computer Problems 9.3
10 Ordinary Differential Equations
10.1 Taylor Series Methods
Initial-Value Problem: Analytical versus Numerical Solution
An Example of a Practical Problem
Solving Differential Equations and Integration
Vector Fields
Taylor Series Methods
Euler’s Method Pseudocode
Taylor Series Method of Higher Order
Types of Errors
Taylor Series Method Using Symbolic Computations
Summary
Problems 10.1
Computer Problems 10.1
10.2 Runge-Kutta Methods
Taylor Series for f(x, y)
Runge-Kutta Method of Order 2
Runge-Kutta Method of Order 4
Pseudocode
Summary
Problems 10.2
Computer Problems 10.2
10.3 Stability and Adaptive Runge-Kutta and Multistep Methods
An Adaptive Runge-Kutta-Fehlberg Method
An Industrial Example
Adams-Bashforth-Moulton Formulas
Stability Analysis
Summary
Additional References
Problems 10.3
Computer Problems 10.3
11 Systems of Ordinary Differential Equations
11.1 Methods for First-Order Systems
Uncoupled and Coupled Systems
Taylor Series Method
Vector Notation
Systems of ODEs
Taylor Series Method: Vector Notation
Runge-Kutta Method
Autonomous ODE
Summary
Problems 11.1
Computer Problems 11.1
11.2 Higher-Order Equations and Systems
Higher-Order Differential Equations
Systems of Higher-Order Differential Equations
Autonomous ODE Systems
Summary
Problems 11.2
Computer Problems 11.2
11.3 Adams-Bashforth-Moulton Methods
A Predictor-Corrector Scheme
Pseudocode
An Adaptive Scheme
An Engineering Example
Some Remarks about Stiff Equations
Summary
Additional References
Problems 11.3
Computer Problems 11.3
12 Smoothing of Data and the Method of Least Squares
12.1 Method of Least Squares
Linear Least Squares
Linear Example
Nonpolynomial Example
Basis Functions {g_0, g_1, ..., g_n}
Summary
Problems 12.1
Computer Problems 12.1
12.2 Orthogonal Systems and Chebyshev Polynomials
Orthonormal Basis Functions {g_0, g_1, ..., g_n}
Outline of Algorithm
Smoothing Data: Polynomial Regression
Summary
Problems 12.2
Computer Problems 12.2
12.3 Other Examples of the Least-Squares Principle
Use of a Weight Function w(x)
Nonlinear Example
Linear and Nonlinear Example
Additional Details on SVD
Using the Singular Value Decomposition
Summary
Additional References
Problems 12.3
Computer Problems 12.3
13 Monte Carlo Methods and Simulation
13.1 Random Numbers
Random-Number Algorithms and Generators
Examples
Uses of Pseudocode Random
Summary
Problems 13.1
Computer Problems 13.1
13.2 Estimation of Areas and Volumes by Monte Carlo Techniques
Numerical Integration
Example and Pseudocode
Computing Volumes
Ice Cream Cone Example
Summary
Problems 13.2
Computer Problems 13.2
13.3 Simulation
Loaded Die Problem
Birthday Problem
Buffon’s Needle Problem
Two Dice Problem
Neutron Shielding
Summary
Additional References
Computer Problems 13.3
14 Boundary-Value Problems for Ordinary Differential Equations
14.1 Shooting Method
Shooting Method Algorithm
Modifications and Refinements
Summary
Problems 14.1
Computer Problems 14.1
14.2 A Discretization Method
Finite-Difference Approximations
The Linear Case
Pseudocode and Numerical Example
Shooting Method in the Linear Case
Pseudocode and Numerical Example
Summary
Additional References
Problems 14.2
Computer Problems 14.2
15 Partial Differential Equations
15.1 Parabolic Problems
Some Partial Differential Equations from Applied Problems
Heat Equation Model Problem
Finite-Difference Method
Pseudocode for Explicit Method
Crank-Nicolson Method
Pseudocode for the Crank-Nicolson Method
Alternative Version of the Crank-Nicolson Method
Stability
Summary
Problems 15.1
Computer Problems 15.1
15.2 Hyperbolic Problems
Wave Equation Model Problem
Analytic Solution
Numerical Solution
Pseudocode
Advection Equation
Lax Method
Upwind Method
Lax-Wendroff Method
Summary
Problems 15.2
Computer Problems 15.2
15.3 Elliptic Problems
Helmholtz Equation Model Problem
Finite-Difference Method
Gauss-Seidel Iterative Method
Numerical Example and Pseudocode
Finite-Element Methods
More on Finite Elements
Summary
Additional References
Problems 15.3
Computer Problems 15.3
16 Minimization of Functions
16.1 One-Variable Case
Unconstrained and Constrained Minimization Problems
One-Variable Case
Unimodal Functions F
Fibonacci Search Algorithm
Golden Section Search Algorithm
Quadratic Interpolation Algorithm
Summary
Problems 16.1
Computer Problems 16.1
16.2 Multivariate Case
Taylor Series for F: Gradient Vector and Hessian Matrix
Alternative Form of Taylor Series
Steepest Descent Procedure
Contour Diagrams
More Advanced Algorithms
Minimum, Maximum, and Saddle Points
Positive Definite Matrix
Quasi-Newton Methods
Nelder-Mead Algorithm
Method of Simulated Annealing
Summary
Additional References
Problems 16.2
Computer Problems 16.2
17 Linear Programming
17.1 Standard Forms and Duality
First Primal Form
Numerical Example
Transforming Problems into First Primal Form
Dual Problem
Second Primal Form
Summary
Problems 17.1
Computer Problems 17.1
17.2 Simplex Method
Vertices in K and Linearly Independent Columns of A
Simplex Method
Summary
Problems 17.2
Computer Problems 17.2
17.3 Approximate Solution of Inconsistent Linear Systems
ℓ_1 Problem
ℓ_∞ Problem
Summary
Additional References
Problems 17.3
Computer Problems 17.3
Appendix A: Advice on Good Programming Practices
A.1 Programming Suggestions
Case Studies
On Developing Mathematical Software
Appendix B: Representation of Numbers in Different Bases
B.1 Representation of Numbers in Different Bases
Base β Numbers
Conversion of Integer Parts
Conversion of Fractional Parts
Base Conversion 10 ↔ 8 ↔ 2
Base 16
More Examples
Summary
Problems B.1
Computer Problems B.1
Appendix C: Additional Details on IEEE Floating-Point Arithmetic
C.1 More on IEEE Standard Floating-Point Arithmetic
Appendix D: Linear Algebra Concepts and Notation
D.1 Elementary Concepts
Vectors
Matrices
Matrix-Vector Product
Matrix Product
Other Concepts
Cramer’s Rule
D.2 Abstract Vector Spaces
Subspaces
Linear Independence
Bases
Linear Transformations
Eigenvalues and Eigenvectors
Change of Basis and Similarity
Orthogonal Matrices and Spectral Theorem
Norms
Gram-Schmidt Process
Answers for Selected Problems
Bibliography
Index
Formulas from Algebra

\log_a x = (\log_a b)(\log_b x)
\bigl| |x| - |y| \bigr| \le |x \pm y| \le |x| + |y|
1 + r + r^2 + \cdots + r^{n-1} = \dfrac{r^n - 1}{r - 1}
1 + 2 + 3 + \cdots + n = \tfrac{1}{2} n(n + 1)
1^2 + 2^2 + 3^2 + \cdots + n^2 = \tfrac{1}{6} n(n + 1)(2n + 1)
Cauchy-Schwarz Inequality: \Bigl( \sum_{i=1}^{n} x_i y_i \Bigr)^2 \le \Bigl( \sum_{i=1}^{n} x_i^2 \Bigr) \Bigl( \sum_{i=1}^{n} y_i^2 \Bigr)

Formulas from Geometry

Area of circle: A = \pi r^2  (r = radius)
Circumference of circle: C = 2\pi r
Area of trapezoid: A = \tfrac{1}{2} h(a + b)  (h = height; a and b are the parallel bases)
Area of triangle: A = \tfrac{1}{2} b h  (b = base, h = height)

Formulas from Trigonometry

\sin^2 x + \cos^2 x = 1, \qquad 1 + \tan^2 x = \sec^2 x
\sin x = 1/\csc x, \qquad \cos x = 1/\sec x, \qquad \tan x = 1/\cot x, \qquad \tan x = \sin x / \cos x
\sin x = -\sin(-x), \qquad \cos x = \cos(-x)
\sin\bigl(\tfrac{\pi}{2} - x\bigr) = \cos x, \qquad \cos\bigl(\tfrac{\pi}{2} - x\bigr) = \sin x
\sin(x + y) = \sin x \cos y + \cos x \sin y
\cos(x + y) = \cos x \cos y - \sin x \sin y
\sin x + \sin y = 2 \sin\tfrac{1}{2}(x + y) \cos\tfrac{1}{2}(x - y)
\cos x + \cos y = 2 \cos\tfrac{1}{2}(x + y) \cos\tfrac{1}{2}(x - y)
\sinh x = \tfrac{1}{2}(e^x - e^{-x}), \qquad \cosh x = \tfrac{1}{2}(e^x + e^{-x})

Graphs

[Figure: graphs of sin x, cos x, and tan x over [0, 2π], and of arcsin x, arccos x, and arctan x; only axis ticks and labels survived extraction.]
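As a quick sanity check on two of the algebra formulas above (an illustrative worked instance added here, not part of the book's endpapers): with r = 3 and n = 4, the geometric sum gives

1 + 3 + 3^2 + 3^3 = 40 = \dfrac{3^4 - 1}{3 - 1},

and the Cauchy-Schwarz inequality with x = (1, 2) and y = (3, 4) gives

(1 \cdot 3 + 2 \cdot 4)^2 = 121 \le (1^2 + 2^2)(3^2 + 4^2) = 125.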
Formulas from Analytic Geometry

Slope of line through two points (x_1, y_1) and (x_2, y_2): m = \dfrac{y_2 - y_1}{x_2 - x_1}
Equation of line: y - y_1 = m(x - x_1)
Distance formula: d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}
Circle: (x - x_0)^2 + (y - y_0)^2 = r^2  (r = radius, (x_0, y_0) = center)
Ellipse: \dfrac{(x - x_0)^2}{a^2} + \dfrac{(y - y_0)^2}{b^2} = 1  (a and b semiaxes)

Definitions from Calculus

The limit statement \lim_{x \to a} f(x) = L means that for any \varepsilon > 0, there is a \delta > 0 such that |f(x) - L| < \varepsilon whenever 0 < |x - a| < \delta.
A function f is continuous at x if \lim_{h \to 0} f(x + h) = f(x).
If \lim_{h \to 0} \frac{1}{h}[f(x + h) - f(x)] exists, it is denoted by f'(x) and is termed the derivative of f at x.

Formulas from Differential Calculus

(f \pm g)' = f' \pm g'
(fg)' = f'g + fg'
(f/g)' = \dfrac{f'g - fg'}{g^2}
(f \circ g)' = (f' \circ g)\, g'
\frac{d}{dx} x^a = a x^{a-1}
\frac{d}{dx} e^x = e^x, \qquad \frac{d}{dx} e^{ax} = a e^{ax}, \qquad \frac{d}{dx} a^x = a^x \ln a
\frac{d}{dx} x^x = x^x (1 + \ln x)
\frac{d}{dx} \ln x = x^{-1}, \qquad \frac{d}{dx} \log_a x = x^{-1} \log_a e
\frac{d}{dx} \sin x = \cos x, \qquad \frac{d}{dx} \cos x = -\sin x
\frac{d}{dx} \tan x = \sec^2 x, \qquad \frac{d}{dx} \cot x = -\csc^2 x
\frac{d}{dx} \sec x = \tan x \sec x, \qquad \frac{d}{dx} \csc x = -\cot x \csc x
\frac{d}{dx} \arcsin x = \dfrac{1}{\sqrt{1 - x^2}}, \qquad \frac{d}{dx} \arccos x = \dfrac{-1}{\sqrt{1 - x^2}}
\frac{d}{dx} \arctan x = \dfrac{1}{1 + x^2}, \qquad \frac{d}{dx} \operatorname{arccot} x = \dfrac{-1}{1 + x^2}
\frac{d}{dx} \operatorname{arcsec} x = \dfrac{1}{|x|\sqrt{x^2 - 1}}, \qquad \frac{d}{dx} \operatorname{arccsc} x = \dfrac{-1}{|x|\sqrt{x^2 - 1}}
\frac{d}{dx} \sinh x = \cosh x, \qquad \frac{d}{dx} \cosh x = \sinh x
\frac{d}{dx} \tanh x = \operatorname{sech}^2 x, \qquad \frac{d}{dx} \coth x = -\operatorname{csch}^2 x
\frac{d}{dx} \operatorname{sech} x = -\operatorname{sech} x \tanh x, \qquad \frac{d}{dx} \operatorname{csch} x = -\operatorname{csch} x \coth x
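The derivative definition above is the starting point for the finite-difference formulas of Section 4.3: replacing the limit by a fixed step h gives the forward-difference approximation f'(x) \approx [f(x + h) - f(x)]/h. As an illustrative worked instance (added here, not taken from the book), for f(x) = \sin x at x = 1 with h = 0.01,

\frac{\sin(1.01) - \sin(1)}{0.01} \approx \frac{0.846832 - 0.841471}{0.01} \approx 0.5361,

while the exact value is f'(1) = \cos 1 \approx 0.5403. The discrepancy, about 0.0042, is consistent with the O(h) truncation error \tfrac{h}{2} |f''(\xi)| \approx \tfrac{h}{2} \sin 1 \approx 0.0042.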
SIXTH EDITION

NUMERICAL MATHEMATICS AND COMPUTING

Ward Cheney, The University of Texas at Austin
David Kincaid, The University of Texas at Austin

Australia • Brazil • Canada • Mexico • Singapore • Spain • United Kingdom • United States
Numerical Mathematics and Computing, Sixth Edition
Ward Cheney, David Kincaid

Dedicated to David M. Young

Publisher: Bob Pirtle
Development Editor: Stacy Green
Editorial Assistant: Elizabeth Rodio
Technology Project Manager: Sam Subity
Marketing Manager: Amanda Jellerichs
Marketing Assistant: Ashley Pickering
Marketing Communications Manager: Darlene Amidon-Brent
Project Manager, Editorial Production: Cheryll Linthicum
Creative Director: Rob Hugel
Art Director: Vernon T. Boes
Print Buyer: Doreen Suruki
Permissions Editor: Bob Kauser
Production Service: Matrix Productions
Text Designer: Roy Neuhaus
Photo Researcher: Terri Wright
Copy Editor: Barbara Willette
Illustrator: ICC Macmillan Inc.
Cover Designer: Denise Davidson
Cover Image: Glowimages/Getty Images
Cover Printer: R.R. Donnelley/Crawfordsville
Compositor: ICC Macmillan Inc.
Printer: R.R. Donnelley/Crawfordsville

© 2008, 2004 Thomson Brooks/Cole, a part of The Thomson Corporation. Thomson, the Star logo, and Brooks/Cole are trademarks used herein under license.

ALL RIGHTS RESERVED. No part of this work covered by the copyright hereon may be reproduced or used in any form or by any means—graphic, electronic, or mechanical, including photocopying, recording, taping, web distribution, information storage and retrieval systems, or in any other manner—without the written permission of the publisher.

Printed in the United States of America
1 2 3 4 5 6 7   11 10 09 08 07

For more information about our products, contact us at:
Thomson Learning Academic Resource Center
1-800-423-0563

For permission to use material from this text or product, submit a request online at http://www.thomsonrights.com. Any additional questions about permissions can be submitted by e-mail to thomsonrights@thomson.com.

Thomson Higher Education
10 Davis Drive
Belmont, CA 94002-3098
USA

Library of Congress Control Number: 2007922553

Student Edition:
ISBN-13: 978-0-495-11475-8
ISBN-10: 0-495-11475-8
Preface

In preparing the sixth edition of this book, we have adhered to the basic objective of the previous editions—namely, to acquaint students of science and engineering with the potentialities of the modern computer for solving numerical problems that may arise in their professions. A secondary objective is to give students an opportunity to hone their skills in programming and problem solving. A final objective is to help students arrive at an understanding of the important subject of errors that inevitably accompany scientific computing, and to arm them with methods for detecting, predicting, and controlling these errors.

Much of science today involves complex computations built upon mathematical software systems. The users may have little knowledge of the underlying numerical algorithms used in these problem-solving environments. By studying numerical methods one can become a more informed user and be better prepared to evaluate and judge the accuracy of the results. What this implies is that students should study algorithms to learn not only how they work but also how they can fail. Critical thinking and constant skepticism are attitudes we want students to acquire. Any extensive numerical calculation, even when carried out by state-of-the-art software, should be subjected to independent verification, if possible.

Since this book is to be accessible to students who are not necessarily advanced in their formal study of mathematics and computer sciences, we have tried to achieve an elementary style of presentation. Toward this end, we have provided numerous examples and figures for illustrative purposes and fragments of pseudocode, which are informal descriptions of computer algorithms.

Believing that most students at this level need a survey of the subject of numerical mathematics and computing, we have presented a wide diversity of topics, including some rather advanced ones that play an important role in current scientific computing. We recommend that the reader have at least a one-year study of calculus as a prerequisite for our text. Some knowledge of matrices, vectors, and differential equations is helpful.

Features in the Sixth Edition

Following suggestions and comments by a dozen reviewers, we have revised all sections of the book to some degree, and a number of major new features have been added as follows:

• We have moved some items (especially computer codes) from the text to the website so that they are easily accessible without tedious typing. This endeavor includes all of the Matlab, Mathematica, and Maple computer codes as well as the Appendix on Overview of Mathematical Software available on the World Wide Web.
• We have added more figures and numerical examples throughout, believing that concrete codes and visual aids are helpful to every reader.
• New sections and material have been added to many topics, such as the modified false position method, the conjugate gradient method, Simpson's method, and some others.
• More exercises involving applications are presented throughout.
• There are additional citations to recent references, and some older references have been replaced.
• We have reorganized the appendices, adding some new ones and omitting some older ones.

Suggestions for Use

Numerical Mathematics and Computing, Sixth Edition, can be used in a variety of ways, depending on the emphasis the instructor prefers and the inevitable time constraints. Problems have been supplied in abundance to enhance the book's versatility. They are divided into two categories: Problems and Computer Problems. In the first category, there are more than 800 exercises in analysis that require pencil, paper, and possibly a calculator. In the second category, there are approximately 500 problems that involve writing a program and testing it on a computer. Students can be asked to solve some problems using advanced software systems such as Matlab, Mathematica, or Maple. Alternatively, students can be asked to write their own code. Readers can often follow a model or example in the text to assist them in working out exercises, but in other cases they must proceed on their own from a mathematical description given in the text or in the problems. In some of the computer problems, there is something to be learned beyond simply writing code—a moral, if you like. This can happen if the problem being solved and the code provided to do so are somehow mismatched. Some computing problems are designed to give experience in using either mathematical software systems, precoded programs, or black-box library codes.

A Student's Solution Manual is sold as a separate publication. Also, teachers who adopt the book can obtain from the publisher the Instructor's Solution Manual.

Sample programs based on the pseudocode displayed in this text have been coded in several programming languages. These codes and additional material are available on the textbook websites:

www.thomsonedu.com/math/cheney
www.ma.utexas.edu/CNA/NMC6/

The arrangement of chapters reflects our own view of how the material might best unfold for a student new to the subject. However, there is very little mutual dependence among the chapters, and the instructor can order the sequence of presentation in various ways. Most courses will certainly have to omit some sections and chapters for want of time. Our own recommendations for courses based on this text are as follows:

• A one-term course carefully covering Chapters 1 through 11 (possibly omitting Chapters 5 and 8 and Sections 4.2, 9.3, 10.3, and 11.3, for example), followed by a selection of material from the remaining chapters as time permits.
• A one-term survey rapidly skimming over most of the chapters in the text and omitting some of the more difficult sections.
• A two-term course carefully covering all chapters.
Student Research Projects

Throughout the book there are some computer problems designated as Student Research Projects. These suggest opportunities for students to explore topics beyond the scope of the textbook. Many of these involve application areas for numerical methods. The projects should include programming and numerical experiments. A favorable aspect of these assignments is to allow students to choose a topic of interest to them, possibly something that may arise in their future profession or their major study area. For example, any topic suggested by the chapters and sections in the book may be delved into more deeply by consulting other texts and references on that topic. In preparing such a project, the students have to learn about the topic, locate the significant references (books and research papers), do the computing, and write a report that explains all this in a coherent way. Students can avail themselves of mathematical software systems such as Matlab, Maple, or Mathematica, or do their own programming in whatever language they prefer.

Acknowledgments

In preparing the sixth edition, we have been able to profit from advice and suggestions kindly offered by a large number of colleagues, students, and users of the previous edition. We wish to acknowledge the reviewers who have provided detailed critiques for this new edition: Krishan Agrawal, Thomas Boger, Charles Collins, Gentil A. Estévez, Terry Feagin, Mahadevan Ganesh, William Gearhart, Juan Gil, Xiaofan Li, Vania Mascioni, Bernard Maxum, Amar Raheja, Daniel Reynolds, Asok Sen, Ching-Kuang Shene, William Slough, Thiab Taha, Jin Wang, Quiang Ye, Tjalling Ypma, and Shangyou Zhan. In particular, Jose Flores was most helpful in checking over the manuscript.

Reviewers from previous editions were Neil Berger, Jose E. Castillo, Charles Cullen, Elias Y. Deeba, F. Emad, Terry Feagin, Leslie Foster, Bob Funderlic, John Gregory, Bruce P. Hillam, Patrick Lang, Ren Chi Li, Wu Li, Edward Neuman, Roy Nicolaides, J. N. Reddy, Ralph Smart, Stephen Wirkus, and Marcus Wright.

We thank those who have helped in various capacities. Many individuals took the trouble to write us with suggestions and criticisms of previous editions of this book: A. Aawwal, Nabeel S. Abo-Ghander, Krishan Agrawal, Roger Alexander, Husain Ali Al-Mohssen, Kistone Anand, Keven Anderson, Vladimir Andrijevik, Jon Ashland, Hassan Basir, Steve Batterson, Neil Berger, Adarsh Beohar, Bernard Bialecki, Jason Brazile, Keith M. Briggs, Carl de Boor, Jose E. Castillo, Ellen Chen, Edmond Chow, John Cook, Roger Crawfis, Charles Cullen, Antonella Cupillari, Jonathan Dautrich, James Arthur Davis, Tim Davis, Elias Y. Deeba, Suhrit Dey, Alan Donoho, Jason Durheim, Wayne Dymacek, Fawzi P. Emad, Paul Enigenbury, Terry Feagin, Leslie Foster, Peter Fraser, Richard Gardner, John Gregory, Katherine Hua Guo, Scott Hagerup, Kent Harris, Bruce P. Hillam, Tom Hogan, Jackie Hohnson, Christopher M. Hoss, Kwang-il In, Victoria Interrante, Sadegh Jokar, Erni Jusuf, Jason Karns, Grant Keady, Jacek Kierzenka, S. A. (Seppo) Korpela, Andrew Knyazev, Gary Krenz, Jihoon Kwak, Kim Kyungjin, Minghorng Lai, Patrick Lang, Wu Li, Grace Liu, Wenguo Liu, Mark C. Malburg, P. W. Manual, Juan Meza, F. Milianazzo, Milan Miklavcic, Sue Minkoff, George Minty, Baharen Momken, Justin Montgomery, Ramon E. Moore, Aaron Naiman, Asha Nallana, Edward Neuman, Durene Ngo, Roy Nicolaides, Jeff Nunemacher, Valia Guerra Ones, Tony Praseuth, Rolfe G. Petschek, Mihaela Quirk, Helia Niroomand Rad, Jeremy Rahe, Frank Roberts, Frank Rogers, Simen Rokaas, Robert