"Mitchell knows what she’s talking about. Even better, she’s a clear, cogent and interesting writer . . . Artificial Intelligence has significantly improved my knowledge when it comes to automation technology, [but] the greater benefit is that it has also enhanced my appreciation for the complexity and ineffability of human cognition."John Warner, Chicago Tribune
"Without shying away from technical details, this survey provides an accessible course in neural networks, computer vision, and natural-language processing, and asks whether the quest to produce an abstracted, general intelligence is worrisome . . . Mitchell’s view is a reassuring one." The New Yorker
"In Mitchell’s telling, artificial intelligence (AI) raises extraordinary issues that have disquieting implications for humanity. AI isn’t for the faint of heart, and neither is this book for nonscientists . . . she is a good writer with broad knowledge of the topic . . . and a canny mindfulness of both the merits and problems of AI." Howard Schneider, Undark
"Artificial intelligence can trounce you at chess, but will mistake a school bus for an ostrich or make bizarre connections between birds and hydrants. Mitchell cuts through the hype that the field of A.I. is often prone to and lays out what it does well, where it fails, and how it might do better." George Musser, author of Spooky Action at a Distance
"The recent resurgence of AI has led to predictions of everything from the end of the world to immortality. Melanie Mitchell’s very intelligent, clear and sensible book is a welcome corrective to the exaggerated fears and hopes for AI, and the prefect primer to start understanding how the systems actually work." Alison Gopnik, professor of Psychology at UC Berkeley, and author of The Philosophical Baby
"Melanie Mitchell writes about AI with a warm, friendly voice and an unpretentious brilliance that no machine could hope to match...for now." Steven Strogatz, professor of mathematics, Cornell University, and author of Infinite Powers
"Melanie Mitchell’s book is a must read for anyone interested in the emerging revolution of AI, machine learning and big data. She provides a remarkably lucid and comprehensive overview not just of their power and potential in shaping life in the 21st century but, perhaps more importantly, of their shortcomings and dangers. Mitchell brings a holistic, integrated perspective for understanding what these terms actually mean and the capabilities they promise in a non-technical language that any of us can appreciate. At the same time, she lays bare the hyperbole and misconceptions that are being propagated in the media. This book can be, and should be, read by the proverbial man or woman-on-the-street, the silicon valley guru, members of congress, or a student of the humanities, as well as by professional scientists and engineers. They will all profit enormously from it." Geoffrey West, distinguished professor at the Santa Fe Institute, and author of Scale: The Universal Laws of Life, Growth, and Death in Organisms, Cities, and Companies
“If you think you understand AI and all of the related issues, you don’t. By the time you finish this exceptionally lucid and riveting book you will breathe more easily and wisely.” Michael S. Gazzaniga, Director of the SAGE Center for the Study of Mind, University of California, Santa Barbara, and author of The Consciousness Instinct
"Computers are capable of feats of astonishing intelligence, while at the same time lacking any semblance of common sense. Melanie Mitchell takes us through an enlightening tour of how artificial intelligence currently works, and how it falls short of true human understanding. The challenges and opportunities discussed in this book will be crucial in shaping the future of humanity and technology." Sean Carroll, author of Something Deeply Hidden: Quantum Worlds and the Emergence of Spacetime
“Melanie Mitchell deftly provides the reader with a keen, clear-sighted account of the history of AI and neural networks. She explores refinements of the Turing Test, Ray Kurzweil’s Singularity (a little dubiously), deep machine learning, computer vision, translation programs, ethical issues, and many other topics, their history, modern development, and the ebb and flow of the hype surrounding their various incarnations. What is most impressive is that without getting too technical, Mitchell sketches enough details and clever illustrations that one gets a good intuitive understanding of AI, both its special-purpose machines and its attempts at developing a more general intelligence. A wonderfully informative book.” John Allen Paulos, Professor of Mathematics, Temple University, and author of Innumeracy: Mathematical Illiteracy and Its Consequences
"Melanie Mitchell nails it: current AI does all kinds of neat tricks, but there’s no real understanding there, and until there is, we will never get to the real promise of AI." Gary Marcus, Founder and CEO of Robust.AI and co-author of Rebooting AI
A nonmathematical yet still somewhat technical explanation of how researchers are going about achieving artificial intelligence.
This is not another cheerful or alarming exercise in futurology. Science writer Mitchell (Computer Science/Portland State Univ.; Complexity: A Guided Tour, 2011, etc.) begins by wondering if an intelligent machine would "require us to reverse engineer the human in all its complexity or is there a shortcut, a clever set of yet unknown algorithms, that will produce what we recognize as full intelligence." She then explains what researchers have done so far. Beginning in the 1950s, when success seemed just around the corner, there was symbolic AI, which involved programmers using symbols that humans could understand to solve straightforward logical problems. This led to "expert systems," which used massively detailed instructions to make decisions in narrow fields, such as disease diagnosis, better than human experts could. By the 1980s, the limitations of AI became more obvious. Today, systems based on "deep learning," which rely on artificial neural networks, evaluate information without following rigid instructions. Despite the name and hype (and accomplishments—e.g., being unbeatable at Jeopardy), machine and human learning are not comparable. Highly advanced computers are "trained" on immense inputs, made possible only with the advent of 21st-century "big data." After evaluating their outputs, programmers retrain them to improve their accuracy. Like humans, they are not perfect. Mitchell maintains that true superintelligence will not happen until machines acquire human qualities such as common sense and consciousness. These are nowhere in sight despite recent spectacular advances—in translation, facial recognition, etc.—and the author believes that this absence makes it unlikely that one anticipated breakthrough, true driverless cars, will happen any time soon. "It's worth remembering," she writes, "that the first 90 percent of a complex technology project takes 10 percent of the time and the last 10 percent takes 90 percent of the time."
Although sometimes too abstruse, this is mostly a surprisingly lucid introduction to techniques that are making computers smarter.
Mitchell (computer science, Portland State Univ.; Complexity: A Guided Tour) aims to impart understanding of the new wave of artificial intelligence (AI) influencing all aspects of digital living. The content straddles both a historical and a contemporary perspective, detailing approaches to AI development in the post-World War II era, including expert systems and reasoning, while also covering the now-popular approach of deep learning, its early dismissal by the field, and its subsequent validation. This historical grounding makes for a worthy and compelling narrative in itself. There are also ample contemporary topics explored in great detail, such as AI applications in image recognition, autonomous vehicles, voice recognition, and the impressive translation that today's popular search engines now provide. VERDICT This work will mainly interest technologists who are exploring the computational and technological foundations of AI and the present implications these bring to the digital era.—Jim Hahn, Univ. Lib., Univ. of Illinois, Urbana