Making AI Intelligible: Philosophical Foundations
by Herman Cappelen, Josh Dever
eBook | $28.69
Overview

Can humans and artificial intelligences share concepts and communicate? Making AI Intelligible shows that philosophical work on the metaphysics of meaning can help answer these questions. Herman Cappelen and Josh Dever use the externalist tradition in philosophy to create models of how AIs and humans can understand each other. In doing so, they illustrate ways in which that philosophical tradition can be improved. The questions addressed in the book are not only theoretically interesting; the answers also have pressing practical implications. Many important decisions about human life are now influenced by AI. In giving that power to AI, we presuppose that AIs can track features of the world that we care about (for example, creditworthiness, recidivism, cancer, and combatants). If AIs can share our concepts, that will go some way towards justifying this reliance on AI. This ground-breaking study offers insight into how to take the first steps towards achieving interpretable AI.

Product Details

ISBN-13: 9780192647566
Publisher: OUP Oxford
Publication date: 04/22/2021
Sold by: Barnes & Noble
Format: eBook
Pages: 200
File size: 393 KB

About the Author

Herman Cappelen is Chair Professor of Philosophy at The University of Hong Kong. He has written and co-authored several books, and he works in all areas of systematic philosophy. Josh Dever is Professor of Philosophy at the University of Texas at Austin and Professorial Fellow at the Arché Research Centre at the University of St Andrews.

Table of Contents

  • PART I: INTRODUCTION AND OVERVIEW
  • 1: Introduction
  • 2: Alfred (The Dismissive Sceptic): Philosophers, Go Away!
  • PART II: A PROPOSAL FOR HOW TO ATTRIBUTE CONTENT TO AI
  • 3: Terminology: Aboutness, Representation, and Metasemantics
  • 4: Our Theory: De-Anthropocentrized Externalism
  • 5: Application: The Predicate 'High Risk'
  • 6: Application: Names and the Mental Files Framework
  • 7: Application: Predication and Commitment
  • PART III
  • 8: Four Concluding Thoughts
  • Bibliography