Congratulations to Lizzy Stewart who recently defended her thesis: Alexa, Should I Trust You? A Theory of Trustworthiness for Artificial Intelligence.
Stewart also recently published "Detecting Fake News: Two Problems for Content Moderation" in Philosophy & Technology.
As people turn to AI-driven technologies for help with everything from meal planning to choosing a mate, it is increasingly important for individuals to gauge the trustworthiness of available technologies. However, most philosophical theories of trustworthiness focus on interpersonal trust and are inappropriate for non-agents. What, then, does it mean for non-agents such as AI-driven technologies to be trustworthy? I distinguish two different forms of trustworthiness: naive trustworthiness and robust trustworthiness. An agent is naively trustworthy to the extent that it would be likely to meet the truster's expectations with respect to a given domain. An agent is robustly trustworthy to the extent that it would be likely to meet the truster's needs with respect to a given domain. I argue that it is possible for AI-driven technologies to be both naively and robustly trustworthy, but that this trustworthiness is not a stable feature of trustees; rather, it is relative to a truster's expectations and vulnerabilities.