CIOReview | MARCH 2025

affect people's trust in AI-generated content and how they compare to human-generated content. The framework consists of four components: source, message, receiver, and situation. Each component has several subcomponents representing the specific variables influencing people's trust. The framework concludes with the interactions and feedback loops among the components and subcomponents.

Conceptual framework of the psychology of AI credibility

· Perceived Objectivity - AI is merely perceived to be objective
· Consistency and Reliability - trust based on consistently high-quality content
· Authority Attribution - AI uses advanced technologies, and most people do not realize AI goes back decades
· Lack of Emotional Biases - AI lacks emotions, reducing the concerns associated with them
· Transparency - trust is achieved through explanations users perceive as transparent
· Accuracy and Precision - users believe AI is accurate and precise
· Social Proof - widespread adoption of AI and positive user experiences
· Confirmation Bias Mitigation - content may mitigate confirmation biases by presenting information objectively

Discussion

The conceptual framework I propose can help us understand the psychological mechanisms that underlie people's trust in AI-generated content and why people may accept it as true more readily than human-generated content. The framework can also inform the design and regulation of AI systems and the education and empowerment of users.
Some of the possible implications are:

· AI systems should be transparent and accountable about their sources, methods, and goals, and should provide clear and accurate information about the quality, reliability, and limitations of their outputs.
· AI systems should be ethical and responsible in generating content that respects human values, rights, and dignity, and should avoid producing content that is misleading, biased, or harmful.
· AI systems should be adaptable and responsive to users' feedback and preferences, and should allow users to control and customize their interactions with the systems.
· Users should be aware and informed about the existence and potential effects of AI-generated content, and should develop the skills and competencies to critically evaluate and verify the content they encounter.
· Users should be empowered and engaged in the co-creation and governance of AI systems, and should have the opportunity to express their opinions and concerns about the systems and their outputs.

In this article, we explored the psychology of AI credibility and why people trust AI-generated content more than human-generated content. We reviewed the existing literature on the topic and proposed a conceptual framework that explains the main cognitive and affective processes involved. We also discussed the implications of these findings for the design and regulation of AI systems and the education and empowerment of users. I hope this article can contribute to the advancement of research and practice in this important and emerging field.

Enrique Leon