Optimization Algorithms on Matrix Manifolds
Optimization Algorithms on Matrix Manifolds offers techniques with broad applications in linear algebra, signal processing, data mining, computer vision, and statistical analysis. It can serve as a graduate-level textbook and will be of interest to applied mathematicians, engineers, and computer scientists.
Practical Optimization: Algorithms and Engineering Applications

Advancements in the efficiency of digital computers and the evolution of reliable software for numerical computation during the past three decades have led to a rapid growth in the theory, methods, and algorithms of numerical optimization. This body of knowledge has motivated widespread applications of optimization methods in many disciplines, e.g., engineering, business, and science, and has subsequently led to problem solutions that were considered intractable not too long ago. Key features:

- extensively class-tested
- provides a complete teaching package, with MATLAB exercises and online solutions to end-of-chapter problems
- includes recent methods of emerging interest, such as semidefinite programming and second-order cone programming
- presents a unified treatment of unconstrained and constrained optimization
- offers a practical treatment of optimization accessible to a broad audience, from college students to scientists and industry professionals
- provides a thorough appendix with background theory so that non-experts can understand how applications are solved from the point of view of optimization
Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods

Templates have three distinct advantages: they are general and reusable, they are not language specific, and they exploit the expertise of both the numerical analyst, who creates a template reflecting in-depth knowledge of a specific numerical technique, and the computational scientist, who then provides "value-added" capability to the general template description, customizing it for specific needs. For each template that is presented, the authors provide a mathematical description of the flow of the algorithm, discussion of convergence and stopping criteria to use in the iteration, suggestions for applying a method to special matrix types, advice for tuning the template, tips on parallel implementations, and hints as to when and why a method is useful.
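To make the template idea concrete, here is a minimal sketch (an illustration of my own, not code from the book) of one such building block, the conjugate gradient method. The matrix enters only through a user-supplied matrix-vector product and the stopping tolerance is an explicit parameter; these are exactly the points a template description leaves open for customization. The function and parameter names are chosen for this example.

```python
import numpy as np

def conjugate_gradient(matvec, b, x0=None, tol=1e-8, maxiter=1000):
    """Minimal CG sketch for symmetric positive definite systems.

    `matvec` is any callable computing A @ x, so the same loop works for
    dense, sparse, or matrix-free operators (a hypothetical interface,
    mirroring the reusable-template idea).
    """
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - matvec(x)              # initial residual
    p = r.copy()                   # initial search direction
    rs_old = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs_old / (p @ Ap)  # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:  # stopping criterion on the residual norm
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Usage: solve a small SPD system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(lambda v: A @ v, b)
```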
Numerical Methods for Least Squares Problems

In the last 20 years there has been a great increase in the capacity for automatic data capturing and computing. Least squares problems of large size are now routinely solved. Tremendous progress has been made in numerical methods for least squares problems, in particular for generalized and modified least squares problems and direct and iterative methods for sparse problems. Until now there has not been a monograph that covers the full spectrum of relevant problems and methods in least squares. This volume gives an in-depth treatment of topics such as methods for sparse least squares problems, iterative methods, modified least squares, weighted problems, and constrained and regularized problems. The more than 800 references provide a comprehensive survey of the available literature on the subject.
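For orientation, a least squares problem asks for the x minimizing the residual norm ||Ax - b||, and the regularized (Tikhonov) variant adds a penalty lambda*||x||^2. The short sketch below is not drawn from the book; it solves both variants with standard NumPy routines, and the random test matrix is just a stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))   # overdetermined system: 100 equations, 5 unknowns
b = rng.normal(size=100)

# Ordinary least squares: minimize ||A x - b||, solved via an SVD-based routine.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# Tikhonov-regularized least squares: minimize ||A x - b||^2 + lam * ||x||^2,
# solved here through its normal equations (A^T A + lam I) x = A^T b.
lam = 0.1
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
```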
Convex Optimization
Matrix Preconditioning Techniques and Applications
Introduction to Algorithms

In its new edition, Introduction to Algorithms continues to provide a comprehensive introduction to the modern study of algorithms. The revision has been updated to reflect changes in the years since the book's original publication. New chapters on the role of algorithms in computing and on probabilistic analysis and randomized algorithms have been included. Sections throughout the book have been rewritten for increased clarity, and material has been added wherever a fuller explanation has seemed useful or new information warrants expanded coverage. As in the classic first edition, this new edition of Introduction to Algorithms presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers. Further, the algorithms are presented in pseudocode to make the book easily accessible to students from all programming language backgrounds. Each chapter presents an algorithm, a design technique, an application area, or a related topic. The chapters are not dependent on one another, so the instructor can organize his or her use of the book in the way that best suits the course's needs. Additionally, the new edition offers a 25% increase over the first edition in the number of problems, giving the book 155 problems and over 900 exercises that reinforce the concepts the students are learning.
Partial Differential Equations (Graduate Studies in Mathematics, Vol. 19)

Included are complete treatments of the method of characteristics; energy methods within Sobolev spaces; regularity for second-order elliptic, parabolic, and hyperbolic equations; maximum principles; the multidimensional calculus of variations; viscosity solutions of Hamilton-Jacobi equations; shock waves and entropy criteria for conservation laws; and much more. The author summarizes the relevant mathematics required to understand current research in PDEs, especially nonlinear PDEs. While he has reworked and simplified much of the classical theory (particularly the method of characteristics), he emphasizes the modern interplay between functional analytic insights and calculus-type estimates within the context of Sobolev spaces. Treatment of all topics is complete and self-contained. The book's wide scope and clear exposition make it a suitable text for a graduate course in PDEs.
Practical Methods of Optimization
Matrix Computations
Scientific Computing

Changes for the second edition include: expanded motivational discussions and examples; formal statements of all major algorithms; expanded discussions of existence, uniqueness, and conditioning for each type of problem so that students can recognize "good" and "bad" problem formulations and understand the corresponding quality of results produced; and expanded coverage of several topics, particularly eigenvalues and constrained optimization. The book contains a wealth of material and can be used in a variety of one- or two-term courses in computer science, mathematics, or engineering. Its comprehensiveness and modern perspective, as well as the software pointers provided, also make it a highly useful reference for practicing professionals who need to solve computational problems.
Iterative Methods for Linear and Nonlinear Equations
Art of Computer Programming, Volume 1: Fundamental Algorithms
Art of Computer Programming, Volume 2: Seminumerical Algorithms
Art of Computer Programming, Volume 3: Sorting and Searching
Numerical Optimization

For this new edition the book has been thoroughly updated throughout. There are new chapters on nonlinear interior methods and derivative-free methods for optimization, both of which are used widely in practice and are the focus of much current research. Because of the emphasis on practical methods, as well as the extensive illustrations and exercises, the book is accessible to a wide audience. It can be used as a graduate text in engineering, operations research, mathematics, computer science, and business. It also serves as a handbook for researchers and practitioners in the field. The authors have strived to produce a text that is pleasant to read, informative, and rigorous: one that reveals both the beautiful nature of the discipline and its practical side. A selected solutions manual for the new edition is available to instructors.
Numerical Recipes in C: The Art of Scientific Computing
Iterative Methods for Sparse Linear Systems, Second Edition

Iterative Methods for Sparse Linear Systems, Second Edition gives an in-depth, up-to-date view of practical algorithms for solving large-scale linear systems of equations. These equations can number in the millions and are sparse in the sense that each involves only a small number of unknowns. The methods described are iterative, i.e., they provide sequences of approximations that will converge to the solution. This new edition includes a wide range of the best methods available today. The author has added a new chapter on multigrid techniques and has updated material throughout the text, particularly the chapters on sparse matrices, Krylov subspace methods, preconditioning techniques, and parallel preconditioners. Material on older topics has been removed or shortened, numerous exercises have been added, and many typographical errors have been corrected. The updated and expanded bibliography now includes more recent works emphasizing new and important research topics in this field.

Audience: This book can be used to teach graduate-level courses on iterative methods for linear systems. Engineers and mathematicians will find its contents easily accessible, and practitioners and educators will value it as a helpful resource. The preface includes syllabi that can be used for either a semester- or quarter-length course in both mathematics and computer science.

Contents: Preface to the Second Edition; Preface to the First Edition; Chapter 1: Background in Linear Algebra; Chapter 2: Discretization of Partial Differential Equations; Chapter 3: Sparse Matrices; Chapter 4: Basic Iterative Methods; Chapter 5: Projection Methods; Chapter 6: Krylov Subspace Methods, Part I; Chapter 7: Krylov Subspace Methods, Part II; Chapter 8: Methods Related to the Normal Equations; Chapter 9: Preconditioned Iterations; Chapter 10: Preconditioning Techniques; Chapter 11: Parallel Implementations; Chapter 12: Parallel Preconditioners; Chapter 13: Multigrid Methods; Chapter 14: Domain Decomposition Methods; Bibliography; Index.
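As a concrete illustration of the basic idea (a sketch of my own, not code from the book), the loop below applies the classical Jacobi iteration to a sparse one-dimensional Poisson matrix: each pass produces the next approximation in a sequence that converges to the solution. Its slow convergence on this model problem is precisely what motivates the Krylov subspace, preconditioning, and multigrid techniques the book develops. The problem size and tolerance are arbitrary choices for the example.

```python
import numpy as np
import scipy.sparse as sp

# Sparse 1-D Poisson (tridiagonal) matrix, a standard model problem from
# discretizing a PDE; each equation involves at most three unknowns.
n = 100
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi iteration: split A = D + R and repeat x <- D^{-1} (b - R x).
d = A.diagonal()
x = np.zeros(n)
for k in range(100_000):
    x = (b - (A @ x - d * x)) / d
    residual = np.linalg.norm(b - A @ x)
    if residual < 1e-8 * np.linalg.norm(b):   # relative residual stopping test
        break

print(f"Jacobi stopped after {k + 1} iterations, residual {residual:.2e}")
```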