Book Review
Book: The Risk Perception of Artificial Intelligence by Hugo Neri
Review by: Dr. Johanny Pestalozzi, 17.12.2021
The historical view of AI perception and the contrast between Western and Eastern relationships with robots was very interesting. It placed Mary Douglas's socio-cultural theory of risk into context. This analysis helps us understand how cultural elements and historical events shape risk perception differently across societies. For example, Shintoism, the impact of World War II, and strict immigration policies foster robophilia in Japanese culture, while the prevalence of science-fiction robot villains in the West builds a robophobic attitude.
The title is a bit too general in my opinion. I expected a discussion of theories and their respective experiments, contrasted to draw conclusions on the topic. However, the content is written in the form of an essay, elaborating on the author's interpretation of the historical development of AI and the influence of science fiction and futurists on risk perception.
The author proposes that experts dominate the risk-perception narrative around AI. However, people do not build a risk attitude from expert opinions alone, but also from their lived experiences with AI and anything they understand as such. AI is already here, and critical, damaging events from its use are already evident to people, e.g., the Cambridge Analytica, Facebook, and Twitter scandals. People rely on past events to build a risk attitude towards AI, regardless of their exposure to expert opinions. Besides, it is not only experts who spread a risk perception of AI; laypeople also do so effectively through social media, a dynamic captured by the Social Amplification of Risk Framework (SARF).
Moreover, the author assumes that people actively and consciously seek expert opinions to build a risk perception, which I doubt is the case. In general terms, risk-perception information is fluid and transfers best within close, culturally related groups and their sustained perceptions, rather than being suspended in time, waiting for experts to set the nature and intensity of a risk attitude. In other words, laypeople and experts construct risk perceptions in parallel, and given the uncertainty, the unknowns, and the interdisciplinary nature and applications of AI, preconceptions and heuristics play a role in shaping the risk perception of both groups; see Pestalozzi et al. (2019).
At times the line between the risk perception and the actual risks of AI was blurred. It is important to consistently distinguish between the two, since they are indeed different concepts.
In summary, I enjoyed the reading and the refreshing contact with risk-perception literature that specifically addresses AI. I appreciated the clarity of the concepts presented and the intellectual reflection, which helped me build a bridge between the fields of risk perception, user experience, and human factors in AI.