Summaries
Confused about AI and worried about what it means for your future and the future of the world? You’re not alone. AI is everywhere—and few things are surrounded by so much hype, misinformation, and misunderstanding. In AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor cut through the confusion to give you an essential understanding of how AI works, why it often doesn’t, where it might be useful or harmful, and when you should suspect that companies are using AI hype to sell AI snake oil—products that don’t work, and probably never will.
While acknowledging the potential of some AI, such as ChatGPT, AI Snake Oil uncovers rampant misleading claims about the capabilities of AI and describes the serious harms AI is already causing in how it’s being built, marketed, and used in areas such as education, medicine, hiring, banking, insurance, and criminal justice. The book explains the crucial differences between types of AI, why organizations are falling for AI snake oil, why AI can’t fix social media, why AI isn’t an existential risk, and why we should be far more worried about what people will do with AI than about anything AI will do on its own. The book also warns of the dangers of a world where AI continues to be controlled by largely unaccountable big tech companies.
By revealing AI’s limits and real risks, AI Snake Oil will help you make better decisions about whether and how to use AI at work and home.
Comments on this book
This book is about the types of AI that are problematic in some way, because you wouldn’t want to read three hundred pages on the virtues of spell-check. But it’s important to recognize that not all AI is problematic—far from it.
Chapters
This book mentions ...
This book presumably does not mention ...
Terms not mentioned: 2D barcodes, Apple, argumentum ad hominem, argumentum ad hominem circumstantial, argumentum ad hominem tu quoque, blockchain, Bluesky, chatbot, data, digitization, parents, Fediverse, GMLS & education, GMLS & school, teachers, learning, negative feedback, positive feedback / vicious circle, school, Siri, slippery slope, Snapchat, voice assistants, straw man, tablet, Ukraine, teaching
Tag cloud
Citation graph
1 mention
- Education for the Age of AI (Charles Fadel, Alexis Black, Robbie Taylor, Janet Slesinski, Katie Dunn) (2024)
Full text of this document
Search elsewhere
Beat and this book
Beat added this book to the Biblionetz during his time at the Institut für Medien und Schule (IMS). He does not own a physical copy, but he does have a digital one (which, for copyright reasons, he is not allowed to simply pass on). Judging by the many links in the Biblionetz, he appears to have engaged with it quite intensively. So far, only a few objects in the Biblionetz cite this work.

Algorithm
amazon
Unemployment
bias
bitcoin
Bullshit
Chat-GPT
China
cloud computing
Database
deep learning
deepfake
Democracy
E-Assessment
E-Learning
Eliza
Recommendation algorithm
facebook
Family
Error
Generation Alpha
Generative machine learning systems (GMLS)
Generative Pretrained Transformer 3 (GPT-3)
Generative Pretrained Transformer 4 (GPT-4)
Society
Facial recognition
Google
Hype Cycle
influencer
Instagram
Internet
iPad
iPhone
Journalism
Children
Artificial intelligence (AI)
machine learning
Mastodon
Microsoft
Perceptron
PowerPoint
Predictive Analytics
Privacy
Psychology
QR Code
Ranking
Robot
Russia
Chess
social media
Language
Tiktok
Twitter
Uber
Weapon
WhatsApp
Science
WWW (World Wide Web)
Future


Biblionetz-History