
ORIGINALLY RECORDED: September 8, 2021
ORIGINALLY BROADCAST: September 12, 2021
We rely on computers for everything from games, to avoiding traffic, to curing disease. This is sped up by machine-learning: the process by which computers adjust their programming without human input. But providing conclusions isn’t the same as explaining them, and offering answers isn’t a substitute for teaching. What more do we need from machine learning and how does our relationship with computers mirror the difficulties we have in understanding one another?
Emily Sullivan is an Assistant Professor of philosophy and Irène Curie Fellow at both the Eindhoven University of Technology and the Eindhoven Artificial Intelligence Systems Institute, in The Netherlands. She is also a fellow in the Ethics of Socially Disruptive Technologies Research Program consortium, as well as an Associate Editor for the European Journal for the Philosophy of Science.
To subscribe in another app or platform, copy and paste the following RSS feed into your program:
http://news.prairiepublic.org/podcasts/11132/rss.xml.
Follow us on our social networks.
Want more philosophy?
Listen to Philosophical Currents, a philosopher’s take on this month’s biggest news stories.
Join Ashley Thornberg as she interviews Why? Radio’s host Jack Russell Weinstein for a philosopher’s look at the news, cultural trends, and controversies everyone is talking about. No arguments, just good-humored and trustworthy conversation from two people who like and respect one another.

What worries me is that our social biases and our various political confirmation biases will hamstring machine learning that could provide us with medical answers that save lives. Will some answers be rejected because we find them politically distasteful rather than epistemologically faulty? Will some questions, or some good answers, be cancelled because we do not like them? I still, for example, take my favorite route home rather than the Google Maps route. Why? Because I erroneously think my route just MUST be right, though when I humble myself and defer to the AI route, it is always quicker.
This is a super interesting comment. Thank you! I remember when I was in Indianapolis a few summers ago and had a real commute, I used Google Maps to manage the time. I was astonished at how accurate it was, but also at how much faith I had to place in its judgment. If I thought I knew better, I’d probably override it, too.
Thanks for listening!