Machines That Think: The Future of Artificial Intelligence
Overview

A scientist who has spent a career developing Artificial Intelligence takes a realistic look at the technological challenges and assesses the likely effect of AI on the future.

How will Artificial Intelligence (AI) impact our lives? Toby Walsh, one of the leading AI researchers in the world, takes a critical look at the many ways in which "thinking machines" will change our world. Based on a deep understanding of the technology, Walsh describes where Artificial Intelligence is today, and where it will take us.

· Will automation take away most of our jobs?
· Is a "technological singularity" near?
· What is the chance that robots will take over?
· How do we best prepare for this future?

The author concludes that, if we plan well, AI could be our greatest legacy, the last invention human beings will ever need to make.

Product Details

ISBN-13: 9781633883765
Publisher: Prometheus Books
Publication date: 02/20/2018
Sold by: Barnes & Noble
Format: eBook
Pages: 335
File size: 765 KB

About the Author

Toby Walsh is one of the world's leading experts in artificial intelligence (AI). Professor Walsh's research focuses on how computers can interact with humans to optimize decision-making for the common good. He is also a passionate advocate for limits to ensure AI is used to improve, not take, lives. In 2015, Professor Walsh was one of the people behind an open letter calling for a ban on autonomous weapons or "killer robots" that was signed by more than 3,000 AI researchers and high-profile scientists, entrepreneurs, and intellectuals. He was subsequently invited by Human Rights Watch to talk at the United Nations in both New York and Geneva. Professor Walsh is a Fellow of the Australian Academy of Science and of the Association for the Advancement of Artificial Intelligence, and was recently awarded the 2016 NSW Premier's Prize for Excellence in Engineering and Information and Communications Technologies.

Walsh has been interviewed several hundred times, appearing on NPR (US), BBC (UK), CCTV (China), CNN (US), RT (Russia), and in publications including the Guardian, New York Times, Washington Post, and New Scientist. He also regularly writes for outlets like American Scientist, New Scientist, and The Conversation.

Read an Excerpt

From the Introduction

Computers are transforming our lives today at a remarkable pace. As a result, there is a considerable appetite globally to learn more about Artificial Intelligence. Many commentators have predicted great things. In May 2016 the chief envisioning officer for Microsoft UK, Dave Coplin, put it very boldly: Artificial Intelligence is “the most important technology that anybody on the planet is working on today,” he said. “[It] will change how we relate to technology. It will change how we relate to each other. I would argue that it will even change how we perceive what it means to be human.”

A month earlier, Google’s CEO, Sundar Pichai, described how AI is at the center of Google’s strategy. “A key driver . . . has been our long-term investment in machine learning and AI . . . Looking to the future . . . we will move from mobile first to an AI first world.”

Yet many other commentators have predicted that AI carries with it many dangers, even to the extent that it may hasten the end of humankind if we’re not careful. In 2014 Elon Musk warned an audience at MIT that “we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that.” Musk is the serial entrepreneur, inventor and investor famous for founding PayPal, Tesla and SpaceX. He’s shaken up the banking sector, the car industry and space travel with his innovations, so you might expect him to know a thing or two about the ability of technology, especially computing, to disrupt the world. And Musk has backed his opinion that AI poses a serious existential threat to humankind with his own money. At the start of 2015 he donated $10 million to the Future of Life Institute to fund researchers studying how to keep Artificial Intelligence safe. Now, $10 million may not be a huge amount of money for someone as rich as Musk, whose net worth of around $10 billion puts him in the world’s top 100 wealthiest people. But later in 2015 he raised his bet 100-fold, announcing that he would be one of the main backers of the $1 billion OpenAI project. The goals of this project are to build safe Artificial Intelligence and then to open-source it to the world.

Following Musk’s warning, the physicist Stephen Hawking pitched in on the dangers of Artificial Intelligence. Not without irony, Hawking welcomed a software update for his speech synthesizer with a warning that came in the electronic voice of that technology: “The development of full artificial intelligence could spell the end of the human race.”

Several other well-known technologists, including Microsoft’s Bill Gates and Apple’s Steve Wozniak (aka “Woz”), have predicted a dangerous future for AI. The father of information theory, Claude Shannon, wrote in 1987: “I visualize a time when we will be to robots what dogs are to humans . . . I’m rooting for the machines!” Even Alan Turing himself, in a broadcast on the BBC Third Programme in 1951, offered a cautionary prediction:

"If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled . . . It . . . is certainly something which can give us anxiety."

Of course, not every technologist and technocrat is concerned about the impact of thinking machines on humanity. In January 2016 Facebook’s Mark Zuckerberg dismissed these sorts of fears: “I think we can build AI so it works for us and helps us. Some people fear-monger about how AI is a huge danger, but that seems far-fetched to me and much less likely than disasters due to widespread disease, violence, etc.” Andrew Ng, one of the leading AI researchers at China’s internet giant Baidu, has said: “Worrying about AI is like worrying about overpopulation on Mars.” (Don’t forget that one of Elon Musk’s other “moonshot” projects is to populate Mars . . .)

So who should you believe? If technologists like Musk and Zuckerberg can’t agree, doesn’t that mean that there’s at least something we need to worry about? Fears about Artificial Intelligence go some way back. One of the finest visionaries of the future, science-fiction writer Arthur C. Clarke, foretold the dangerous consequences of Artificial Intelligence back in 1968. Clarke has an amazing track record of foreseeing the technologies of the future. He predicted the use of geosynchronous satellites, a global digital library (which we now call the internet), machine translation and more. But the HAL 9000 computer in his novel 2001: A Space Odyssey famously demonstrated the consequences of AI taking control.

Inspired by Clarke and other visionaries, I started dreaming about Artificial Intelligence as a young boy. And I’ve worked all my life in the field, trying to make these dreams come true. It is a little concerning, then, to have people outside the field, especially when they are very smart physicists and successful tech entrepreneurs, predicting that AI will be the end of humankind. Perhaps the people closest to the action should contribute to this debate? Or are we just too embedded in what we are doing to see the risks? And why would we work on something that could destroy our very own existence?

Some concerns about Artificial Intelligence perhaps come from some deep-seated parts of our psyches. These are fears captured in stories such as the Prometheus myth, the story of the Greek deity who gave man the gift of fire, which has subsequently been the cause of so much good and so much bad. The same fear is present in Mary Shelley’s Frankenstein—that our creations may one day hurt us. Just because the fear is old does not mean it is without reason. There are many technologies we have invented that should give, and have given, us pause for thought: nuclear bombs, cloning, blinding lasers and social media, to name just a few. One of my goals in this book is to help you understand how much you should welcome and how much you should worry about the coming of thinking machines.

Some of the responsibility lies with us, the scientists working on Artificial Intelligence. We haven’t communicated enough, and when we have, we have often used misleading language. We need to communicate better what we are doing and where this might be taking society. It is our responsibility as scientists to do so. And it is even more essential for us to do so when, as I argue in this book, much of the change will be societal, and society changes much more slowly than technology. Like most technologies, AI is morally neutral. It can lead to good or bad outcomes.

One of the problems of the debate is that there are a lot of misconceptions about Artificial Intelligence. I hope to dispel some of these. One of my arguments is that people, especially those outside the field, tend to overestimate the capabilities of Artificial Intelligence today and in the near future. They see a computer playing Go better than any human can, and as they themselves cannot play Go well, they imagine the computer can also do many other intelligent tasks. Or at least that it would not be hard to get it to do many other intelligent tasks. However, that Go-playing program, like all the other computer programs we make today, is an idiot savant. It can only do one thing well. It cannot even play other games such as chess or poker. It would take a significant engineering effort by us humans to get it to play any other game. It certainly isn’t going to wake up one morning and decide that it’s bored of beating us at Go and wants instead to win some money playing online poker. And there’s absolutely no chance it will wake up one morning and start dreaming of world domination. It has no desires. It is a computer program and can only do what it is programmed to do—which is to play Go exceptionally well.

On the other hand, I will also argue that all of us tend to underestimate the long-term changes technology can bring. We’ve had smartphones for just a decade, and look how they have transformed our lives. Think how the internet, which is only around two decades old, has changed almost every aspect of our lives—and imagine therefore what changes the next two decades might bring. Because of the multiplying effects of technology, the next twenty years are likely to see even greater changes than the last twenty years. We humans are rather poor at understanding exponential growth, since evolution has optimized us to deal with immediate dangers. We are not good at understanding long-term risks, or at expecting black swans. If we really understood the long term well, we’d all stop buying lottery tickets and save for much bigger pensions. The improvements that compound growth brings are hard for our pleasure-seeking, pain-avoiding brains to comprehend. We live in the moment.

Before you get any further into this book, I have to warn you: predicting the future is an inexact science. The Danish physicist and Nobel Prize–winner Niels Bohr wrote: “Prediction is very difficult, especially if it’s about the future.” I expect that my broad brushstrokes will be correct, but some of the details are certain to be wrong. But in exploring these ideas, I hope you will understand why I and thousands of my colleagues have devoted our lives to exploring the exciting path that will take us to thinking machines. And I hope you will understand why it is a path that we should—indeed, must—explore if we are to continue to improve the quality of our lives on this planet. There are several areas in which there is a moral imperative for us to develop Artificial Intelligence, as many lives can be saved.

Above all, I hope you will consider how society itself may need to change. The ultimate message of this book is that Artificial Intelligence can lead us down many different paths, some good and some bad, but society must choose which path to take, and act on that choice. There are many decisions we can hand over to the machines. But I argue that only some decisions should be—even when the machines can make them better than we can. As a society, we need to start making some choices as to what we entrust to the machines.

Table of Contents

Prologue 1

Part I AI's Past

1 The AI Dream 17

2 Measuring AI 61

Part II AI's Present

3 The State of AI Today 85

4 The Limits of AI 125

5 The Impact of AI 186

Part III AI's Future

6 Technological Change 255

7 Ten Predictions 272

Epilogue 291

Notes 295

Bibliography 317

About the Author 331
