Artificial presence, real-life influence? Effects of CGI influencers on young adults’ health behavior intentions

Vol. 19, No. 2 (2025)

Abstract

Computer Generated Imagery (CGI) influencers—also known as virtual influencers—are an increasingly influential phenomenon on social media. Some CGI influencers are presented as cartoon characters and are thus clearly recognizable as non-human. Other CGI influencers, however, are almost indistinguishable from real humans. Although CGIs can elicit parasocial interaction (PSI), we lack research distinguishing cartoon-look CGI influencers from human-look CGI influencers. Also, we do not know whether CGIs can lead to persuasive effects. This is particularly relevant regarding health topics because CGIs cannot have health issues. In an experimental study with a quota-based sample of N = 443 young adults (62.8% female) aged 16 to 26 from the United Kingdom, we compared the effects of a real human influencer, a human-look CGI influencer, and a cartoon-look CGI influencer advising about insomnia (i.e., sleeping problems) on young adults’ PSI and health behavior intentions. We found that PSI was strongest for the real human and weakest for the cartoon-look CGI influencer and was significantly positively related to young adults’ health behavior intentions. Personal affectedness by insomnia and gender did not moderate these relationships. Overall, findings suggest that the persuasive power of CGIs is limited, at least regarding topics such as health. Implications are discussed.


Keywords:
CGI influencers; virtual influencers; PSI; health communication; social media; AI
Author biographies

Melanie Saumer

Department of Communication, University of Vienna, Vienna, Austria

Melanie Saumer is a PhD candidate in the Department of Communication at the University of Vienna. Her research focuses on media psychology, political participation, and digital hate.

Ariadne Neureiter

Department of Communication, University of Vienna, Vienna, Austria

Ariadne Neureiter is a postdoctoral researcher in the Department of Communication at the University of Vienna. Her research focuses on strategic (sustainability) communication, information processing strategies, and effects of green advertising.

Édua Mária Varga

Department of Communication, University of Vienna, Vienna, Austria

Édua Mária Varga is a former master’s student in the Communication Science master’s program at the Department of Communication at the University of Vienna.

Veronika Gataric

Department of Communication, University of Vienna, Vienna, Austria

Veronika Gataric is a former master’s student in the Communication Science master’s program at the Department of Communication at the University of Vienna.

Chelsea Yupu Liu

Department of Communication, University of Vienna, Vienna, Austria

Chelsea Yupu Liu is a former master’s student in the Communication Science master’s program at the Department of Communication at the University of Vienna.

Jörg Matthes

Department of Communication, University of Vienna, Vienna, Austria

Jörg Matthes (PhD, University of Zurich) is professor of communication science in the Department of Communication at the University of Vienna, where he chairs the division of advertising research and media psychology. His research focuses on advertising effects, the process of public opinion formation, news framing, and empirical methods.

References

Ahn, R. J., Cho, S. Y., & Sunny Tsai, W. (2022). Demystifying computer-generated imagery (CGI) influencers: The effect of perceived anthropomorphism and social presence on brand outcomes. Journal of Interactive Advertising, 22(3), 327–335. https://doi.org/10.1080/15252019.2022.2111242

Al-Harbi, B. F., & Al-Harbi, M. F. (2017). Eliciting salient beliefs about physical activity among female adolescent in Saudi Arabia: A qualitative study. Public Health International, 2(4), 116–123. https://www.researchgate.net/publication/320004674_Eliciting_Salient_Beliefs_About_Physical_Activity_Among_Female_Adolescent_in_Saudi_Arabia_A_Qualitative_Study

Arsenyan, J., & Mirowska, A. (2021). Almost human? A comparative case study on the social media presence of virtual influencers. International Journal of Human-Computer Studies, 155, Article 102694. https://doi.org/10.1016/j.ijhcs.2021.102694

Balcombe, L., & De Leo, D. (2022). Human-computer interaction in digital mental health. Informatics, 9(1), Article 14. https://doi.org/10.3390/informatics9010014

Biswas, S. S. (2023). Role of Chat GPT in public health. Annals of Biomedical Engineering, 51(5), 868–869. https://doi.org/10.1007/s10439-023-03172-7

Block, E., & Lovegrove, R. (2021). Discordant storytelling, ‘honest fakery’, identity peddling: How uncanny CGI characters are jamming public relations and influencer practices. Public Relations Inquiry, 10(3), 265–293. https://doi.org/10.1177/2046147X211026936

Bond, B. J. (2016). Following your “friend”: Social media and the strength of adolescents’ parasocial relationships with media personae. Cyberpsychology, Behavior, and Social Networking, 19(11), 656–660. https://doi.org/10.1089/cyber.2016.0355

Burke-Garcia, A. (2019). Influencing health: A comprehensive guide to working with online influencers. Productivity Press.

Burke-Garcia, A., & Soskin Hicks, R. (2024). Scaling the idea of opinion leadership to address health misinformation: The case for “health communication AI”. Journal of Health Communication, 29(6), 396–399. https://doi.org/10.1080/10810730.2024.2357575

Chaiken, S. (1980). Heuristic versus systematic information processing and the use of source versus message cues in persuasion. Journal of Personality and Social Psychology, 39(5), 752–766. https://doi.org/10.1037/0022-3514.39.5.752

Chattaraman, V., Kwon, W. S., Gilbert, J. E., & Ross, K. (2019). Should AI-based, conversational digital assistants employ social- or task-oriented interaction style? A task-competency and reciprocity perspective for older adults. Computers in Human Behavior, 90, 315–330. https://doi.org/10.1016/j.chb.2018.08.048

Chen, C. P. (2016). Forming digital self and parasocial relationships on YouTube. Journal of Consumer Culture, 16(1), 232–254. https://doi.org/10.1177/1469540514521081

Chugunova, M., & Sele, D. (2022). We and it: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics, 99, Article 101897. https://doi.org/10.1016/j.socec.2022.101897

Chung, D., Wang, J., & Meng, Y. (2024). Examining the impact of virtual health influencers on young adults’ willingness to engage in liver cancer prevention: Insights from parasocial relationship theory. Social Sciences, 13(6), Article 319. https://doi.org/10.3390/socsci13060319

Da Silva Oliveira, A. B., & Chimenti, P. (2021). “Humanized robots”: A proposition of categories to understand virtual influencers. Australasian Journal of Information Systems, 25. https://doi.org/10.3127/ajis.v25i0.3223

Daly, R. (2019, July 17). Meet Lil Miquela, the real-life Ashley O. NME. https://www.nme.com/features/interview-lil-miquela-real-life-ashley-o-2530049

Dibble, J. L., Hartmann, T., & Rosaen, S. F. (2016). Parasocial interaction and parasocial relationship: Conceptual clarification and a critical assessment of measures. Human Communication Research, 42(1), 21–44. https://doi.org/10.1111/hcre.12063

Djafarova, E., & Rushworth, C. (2017). Exploring the credibility of online celebrities’ Instagram profiles in influencing the purchase decisions of young female users. Computers in Human Behavior, 68, 1–7. https://doi.org/10.1016/j.chb.2016.11.009

Donadello, I., & Dragoni, M. (2022). AI-enabled persuasive personal health assistant. Social Network Analysis and Mining, 12(1), Article 106. https://doi.org/10.1007/s13278-022-00935-3

Drenten, J., & Brooks, G. (2020). Celebrity 2.0: Lil Miquela and the rise of a virtual star system. Feminist Media Studies, 20(8), 1319–1323. https://doi.org/10.1080/14680777.2020.1830927

Eyal, K., Te’eni-Harari, T., & Katz, K. (2020). A content analysis of teen-favored celebrities’ posts on social networking sites: Implications for parasocial relationships and fame-valuation. Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 14(2), Article 7. https://doi.org/10.5817/CP2020-2-7

Freberg, K., Graham, K., McGaughey, K., & Freberg, L. A. (2011). Who are the social media influencers? A study of public perceptions of personality. Public Relations Review, 37(1), 90–92. https://doi.org/10.1016/j.pubrev.2010.11.001

Gong, L. (2008). The boundary of racial prejudice: Comparing preferences for computer-synthesized White, Black, and robot characters. Computers in Human Behavior, 24(5), 2074–2093. https://doi.org/10.1016/j.chb.2007.09.008

Gräve, J. (2017). Exploring the perception of influencers vs. traditional celebrities. Proceedings of the 8th International Conference on Social Media & Society, Article 36. https://doi.org/10.1145/3097286.3097322

Hayes, A. F. (2013). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. Guilford Press.

Hendry, N., Hartung, C., & Welch, R. (2022). Health education, social media, and tensions of authenticity in the ‘influencer pedagogy’ of health influencer Ashy Bines. Learning, Media and Technology, 47(4), 427–439. https://doi.org/10.1080/17439884.2021.2006691

Horton, D., & Wohl, R. R. (1956). Mass communication and para-social interaction. Psychiatry, 19(3), 215–229. https://doi.org/10.1080/00332747.1956.11023049

Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55. https://doi.org/10.1080/10705519909540118

Hudders, L., & De Jans, S. (2022). Gender effects in influencer marketing: An experimental study on the efficacy of endorsements by same-vs. other-gender social media influencers on Instagram. International Journal of Advertising, 41(1), 128–149. https://doi.org/10.1080/02650487.2021.1997455

Jin, S. V., Ryu, E., & Muqaddam, A. (2021). I trust what she’s #endorsing on Instagram: Moderating effects of parasocial interaction and social presence in fashion influencer marketing. Journal of Fashion Marketing and Management, 25(4), 665–681. https://doi.org/10.1108/JFMM-04-2020-0059

Kaňková, J., Binder, A., & Matthes, J. (2024). Health-related communication of social media influencers: A scoping review. Health Communication, 1–14. https://doi.org/10.1080/10410236.2024.2397268

Kenny, D. A., Kaniskan, B., & McCoach, D. B. (2015). The performance of RMSEA in models with small degrees of freedom. Sociological Methods & Research, 44(3), 486–507. https://doi.org/10.1177/0049124114543236

Khamis, S., Ang, L., & Welling, R. (2017). Self-branding, ‘micro-celebrity’ and the rise of social media influencers. Celebrity Studies, 8(2), 191–208. https://doi.org/10.1080/19392397.2016.1218292

Kim, E., Xie, Q., Hong, J. W., & Kim, H. M. (2024). Prosocial campaigns with virtual influencers: Stories, messages, and beyond. International Journal of Human–Computer Interaction, 1–12. https://doi.org/10.1080/10447318.2024.2387399

Kim, H., & Park, M. (2023). Virtual influencers’ attractiveness effect on purchase intention: A moderated mediation model of the product–endorser fit with the brand. Computers in Human Behavior, 143, Article 107703. https://doi.org/10.1016/j.chb.2023.107703

Kim, Y., Chung, S., & So, J. (2020). Success expectancy: A mediator of the effects of source similarity and self-efficacy on health behavior intention. Health Communication, 35(9), 1063–1072. https://doi.org/10.1080/10410236.2019.1613475

Klassen, K., Borleis, E., Brennan, L., Reid, M., McCaffrey, T., & Lim, M. (2018). What people “like”: Analysis of social media strategies used by food industry brands, lifestyle brands, and health promotion organizations on Facebook and Instagram. Journal of Medical Internet Research, 20(6), Article E10227. https://doi.org/10.2196/10227

Kolo, C., & Haumer, F. (2018). Social media celebrities as influencers in brand communication: An empirical study on influencer content, its advertising relevance and audience expectations. Journal of Digital & Social Media Marketing, 6(3), 273–282. https://doi.org/10.69554/bvcw5365

Kühn, J., & Riesmeyer, C. (2021). Brand endorsers with role model function: Social media influencers’ self-perception and advertising literacy. MedienPädagogik: Zeitschrift für Theorie und Praxis der Medienbildung, 43, 67–96. https://doi.org/10.21240/mpaed/43/2021.07.25.X

Kühne, R., & Peter, J. (2023). Anthropomorphism in human–robot interactions: A multidimensional conceptualization. Communication Theory, 33(1), 42–52. https://doi.org/10.1093/ct/qtac020

Liu, M. T., & Brock, J. L. (2011). Selecting a female athlete endorser in China: The effect of attractiveness, match-up, and consumer gender difference. European Journal of Marketing, 45(8), 1214–1235. https://doi.org/10.1108/03090561111137688

Lou, C., & Kim, H. K. (2019). Fancying the new rich and famous? Explicating the roles of influencer content, credibility, and parental mediation in adolescents’ parasocial relationship, materialism, and purchase intentions. Frontiers in Psychology, 10, Article 2567. https://doi.org/10.3389/fpsyg.2019.02567

Lucas, G. M., Gratch, J., King, A., & Morency, L. P. (2014). It’s only a computer: Virtual humans increase willingness to disclose. Computers in Human Behavior, 37, 94–100. https://doi.org/10.1016/j.chb.2014.04.043

McCosker, A. (2018). Engaging mental health online: Insights from beyondblue’s forum influencers. New Media & Society, 20(12), 4748–4764. https://doi.org/10.1177/1461444818784303

McCreery, M. P., Krach, S. K., Schrader, P. G., & Boone, R. (2012). Defining the virtual self: Personality, behavior, and the psychology of embodiment. Computers in Human Behavior, 28(3), 976–983. https://doi.org/10.1016/j.chb.2011.12.019

Molenaar, K. (2021). Discover the top 15 virtual influencers for 2022 – listed and ranked. Influencer Marketing Hub. https://influencermarketinghub.com/virtual-influencers/

Morin, C. M., Vallières, A., & Ivers, H. (2007). Dysfunctional Beliefs and Attitudes About Sleep (DBAS): Validation of a brief version (DBAS-16). Sleep, 30(11), 1547–1554. https://doi.org/10.1093/sleep/30.11.1547

Moustakas, E., Lamba, N., Mahmoud, D., & Ranganathan, C. (2020). Blurring lines between fiction and reality: Perspectives of experts on marketing effectiveness of virtual influencers. 2020 International Conference on Cyber Security and Protection of Digital Services (Cyber Security) (IEEE). https://doi.org/10.1109/CyberSecurity49315.2020.9138861

Pauw, L. S., Sauter, D. A., van Kleef, G. A., Lucas, G. M., Gratch, J., & Fischer, A. H. (2022). The avatar will see you now: Support from a virtual human provides socio-emotional benefits. Computers in Human Behavior, 136, Article 107368. https://doi.org/10.1016/j.chb.2022.107368

Petty, R. E., & Cacioppo, J. T. (1986). The elaboration likelihood model of persuasion. Advances in Experimental Social Psychology, 19, 123–205. https://doi.org/10.1016/S0065-2601(08)60214-2

Pilgrim, K., & Bohnet-Joschko, S. (2019). Selling health and happiness how influencers communicate on Instagram about dieting and exercise: Mixed methods research. BMC Public Health, 19(1), Article 1054. https://doi.org/10.1186/s12889-019-7387-8

Qu, C., Brinkman, W. P., Ling, Y., Wiggers, P., & Heynderickx, I. (2014). Conversations with a virtual human: Synthetic emotions and human responses. Computers in Human Behavior, 34, 58–68. https://doi.org/10.1016/j.chb.2014.01.033

Rosenthal-von der Pütten, A. M., Krämer, N. C., Hoffmann, L., Sobieraj, S., & Eimler, S. C. (2013). An experimental study on emotional reactions towards a robot. International Journal of Social Robotics, 5(1), 17–34. https://doi.org/10.1007/s12369-012-0173-8

Roth, T. (2007). Insomnia: Definition, prevalence, etiology, and consequences. Journal of Clinical Sleep Medicine, 3(5), 7–10. https://doi.org/10.5664/jcsm.26929

Sakib, M. N., Zolfagharian, M., & Yazdanparast, A. (2020). Does parasocial interaction with weight loss vloggers affect compliance? The role of vlogger characteristics, consumer readiness, and health consciousness. Journal of Retailing and Consumer Services, 52, Article 101733. https://doi.org/10.1016/j.jretconser.2019.01.002

Schramm, H., & Hartmann, T. (2008). The PSI-Process Scales. A new measure to assess the intensity and breadth of parasocial processes. Communications, 33, 385–401. https://doi.org/10.1515/COMM.2008.025

Shin, D. (2016). Do users experience real sociability through social TV? Analyzing parasocial behavior in relation to social TV. Journal of Broadcasting & Electronic Media, 60(1), 140–159. https://doi.org/10.1080/08838151.2015.1127247

Shin, D. (2022). The perception of humanness in conversational journalism: An algorithmic information-processing perspective. New Media & Society, 24(12), 2680–2704. https://doi.org/10.1177/1461444821993801

Shin, D. (2023). Algorithms, humans, and interactions: How do algorithms interact with people? Designing meaningful AI experiences. Taylor & Francis. https://doi.org/10.1201/b23083

Shin, D., Jitkajornwanich, K., Lim, J. S., & Spyridou, A. (2024). Debiasing misinformation: How do people diagnose health recommendations from AI? Online Information Review, 48(5), 1025–1044. https://doi.org/10.1108/oir-04-2023-0167

Shin, D., Koerber, A., & Lim, J. S. (2024). Impact of misinformation from generative AI on user information processing: How people understand misinformation from generative AI. New Media & Society. https://doi.org/10.1177/14614448241234040

Siegel, M., Breazeal, C., & Norton, M. I. (2009). Persuasive robotics: The influence of robot gender on human behavior. 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2563–2568. https://doi.org/10.1109/iros.2009.5354116

Singh, K., Fox, J. R. E., & Brown, R. J. (2016). Health anxiety and internet use: A thematic analysis. Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 10(2), Article 4. https://doi.org/10.5817/CP2016-2-4

Song, W., & Fox, J. (2016). Playing for love in a romantic video game: Avatar identification, parasocial relationships, and Chinese women’s romantic beliefs. Mass Communication and Society, 19(2), 197–215. https://doi.org/10.1080/15205436.2015.1077972

Tajfel, H. (1982). Social psychology of intergroup relations. Annual Review of Psychology, 33(1), 1–39. https://doi.org/10.1146/annurev.ps.33.020182.000245

Thomas, V. L., & Fowler, K. (2021). Close encounters of the AI kind: Use of AI influencers as brand endorsers. Journal of Advertising, 50(1), 11–25. https://doi.org/10.1080/00913367.2020.1810595

Tian, Q., & Hoffner, C. A. (2010). Parasocial interaction with liked, neutral, and disliked characters on a popular TV series. Mass Communication & Society, 13(3), 250–269. https://doi.org/10.1080/15205430903296051

Tian, Y., & Yoo, J. H. (2015). Connecting with the biggest loser: An extended model of parasocial interaction and identification in health-related reality TV shows. Health Communication, 30(1), 1–7. https://doi.org/10.1080/10410236.2013.836733

Tikka, P., & Oinas-Kukkonen, H. (2019). Tailoring persuasive technology: A systematic review of literature of self-schema theory and transformative learning theory in persuasive technology context. Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 13(3), Article 6. https://doi.org/10.5817/CP2019-3-6

Torres, P., Augusto, M., & Matos, M. (2019). Antecedents and outcomes of digital influencer endorsement: An exploratory study. Psychology and Marketing, 36(12), 1267–1276. https://doi.org/10.1002/mar.21274

Tran, G. A., & Strutton, D. (2014). Has reality television come of age as a promotional platform? Modelling the endorsement effectiveness of celebreality and reality stars. Psychology & Marketing, 31(4), 294–305. https://doi.org/10.1002/mar.20695

Tsai, W. H. S., & Men, L. R. (2013). Motivations and antecedents of consumer engagement with brand pages on social networking sites. Journal of Interactive Advertising, 13(2), 76–87. https://doi.org/10.1080/15252019.2013.826549

Tsvetkova, M., Yasseri, T., Pescetelli, N., & Werner, T. (2024). A new sociology of humans and machines. Nature Human Behaviour, 8(10), 1864–1876. https://doi.org/10.1038/s41562-024-02001-8

Von Mettenheim, W., & Wiedmann, K. P. (2022). The role of fashion influencers’ attractiveness: A gender-specific perspective. Communication Research and Practice, 7(3), 263–290. https://doi.org/10.1080/22041451.2021.2013087

Windahl, S., & McQuail, D. (2015). Communication models for the study of mass communications. Taylor & Francis.

Xu, K., & Shi, J. (2024). Visioning a two-level human–machine communication framework: Initiating conversations between explainable AI and communication. Communication Theory, 34(4), 216–229. https://doi.org/10.1093/ct/qtae016

Yi, J. (2023). Female-oriented dating sims in China: Players’ parasocial relationships, gender attitudes, and romantic beliefs. Psychology of Popular Media, 12(1), 58–68. https://doi.org/10.1037/ppm0000386

Zhang, J., Conway, J., & Hidalgo, C. A. (2023). Why people judge humans differently from machines: The role of perceived agency and experience. 2023 14th IEEE International Conference on Cognitive Infocommunications, 159–166. https://doi.org/10.1109/coginfocom59411.2023.10397474

Zimmermann, D., Noll, C., Gräßer, L., Hugger, K. U., Braun, L. M., Nowak, T., & Kaspar, K. (2022). Influencers on YouTube: A quantitative study on young people’s use and perception of videos about political and societal topics. Current Psychology, 41, 6808–6824. https://doi.org/10.1007/s12144-020-01164-7

Additional information

Authors’ Contribution

Melanie Saumer: conceptualization, methodology, investigation, data curation, project administration, writing—original draft, writing—review & editing. Ariadne Neureiter: conceptualization, methodology, investigation, data curation, writing—original draft, writing—review & editing. Édua Mária Varga: conceptualization, methodology, writing—original draft. Veronika Gataric: conceptualization, methodology, writing—original draft. Chelsea Yupu Liu: conceptualization, methodology, writing—original draft. Jörg Matthes: conceptualization, methodology, project administration, formal analysis, writing—original draft, writing—review & editing, supervision, funding acquisition.

 

Editorial Record

First submission received:
November 16, 2023

Revisions received:
December 4, 2024
April 1, 2025
April 7, 2025

Accepted for publication:
April 7, 2025

Editor in charge:
Lenka Dedkova

Full text

Introduction

The online presence of influencers has developed rapidly in recent years and has become a dominant phenomenon on social media (e.g., Djafarova & Rushworth, 2017; Kaňková et al., 2024). The persuasive power of influencers also rests on sharing intimate aspects of their lives, including mental health struggles and how to cope with them (McCosker, 2018). Even though communicating about personal health topics has become more prevalent in influencer communication, it is not limited to human influencers. Ever since the rise of artificial intelligence (AI) systems like Alexa, Siri (Thomas & Fowler, 2021), and most recently ChatGPT (Biswas, 2023), the influencer sphere has been infiltrated by “virtual” influencers (Daly, 2019).

It remains unclear whether virtual influencers elicit the same persuasive effects as human influencers, particularly in health communication (Hendry et al., 2022). CGI (computer-generated imagery) influencers are computer-generated influencers that can vary in appearance—e.g., cartoon-like vs. human-like—yet their impact on parasocial interaction (PSI) and health-related persuasion is unknown. PSI may depend on user characteristics like personal affectedness and gender, as perceived similarity could enhance persuasion (Tikka & Oinas-Kukkonen, 2019). To address this gap, this study presents an experiment comparing cartoon-like and human-like CGIs giving health advice on social media. We examine whether both types evoke PSI and how this influences health behavior intentions, considering personal affectedness by insomnia1 and gender as moderators. This study specifically focuses on CGI influencers, a subset of virtual influencers, to explore their effects on health communication.

Being a subgroup of virtual influencers, CGI influencers are AI entities rapidly emerging as a new trend within social media. While virtual influencers encompass all digital personas that engage audiences, including CGI influencers, animated characters, and AI-driven avatars, CGI influencers specifically refer to influencers based on computer-generated 3D models. In most instances they appear to be real humans and learn to imitate human communication styles but are, in fact, completely artificial avatars that operate on different levels of AI (Moustakas et al., 2020; Xu & Shi, 2024). Thus, CGI communication styles become similar to human influencer communication and consequently, CGI influencers have already gained a strong social standing within the social media sphere (e.g., Chattaraman et al., 2019; Thomas & Fowler, 2021).

Zooming in, AI-based health information constitutes an especially pivotal research gap in the virtual influencer context, as AI can provide personalized, empathic (health) messages with accurate information (Shin, 2023). AI as a personal health assistant is praised for the perceived accuracy of its data-based assumptions (Donadello & Dragoni, 2022) and for combating health misinformation often spread by human social media influencers (Burke-Garcia & Soskin Hicks, 2024). Even though human-computer interaction (HCI) research highlights barriers to AI integration such as limited reliability and ethical concerns (e.g., Balcombe & De Leo, 2022), scholars also see potential in real-time machine learning within social media, provided it ensures trust and perceptions of humanness (e.g., Balcombe & De Leo, 2022).

Parasocial interactions (PSI)—one-sided, illusionary relationships in which individuals feel connected to a media figure or character despite this connection not being reciprocated (Horton & Wohl, 1956; for application to digital spaces, see Lou & Kim, 2019; Q. Tian & Hoffner, 2010)—might function similarly for virtual and human influencers when conveying health messages. The intensity of such PSI is tightly bound to users’ perceived similarity with the influencer and the influencer’s likeability, physical appearance, and trustworthiness (e.g., Bond, 2016; Schramm & Hartmann, 2008), a rationale originally based on the Elaboration Likelihood Model (ELM; Petty & Cacioppo, 1986) and the Heuristic-Systematic Model (HSM; Chaiken, 1980). Research has shown that, when the influencer appears convincingly human (e.g., E. Kim et al., 2024; Shin, 2022), CGIs can evoke similar PSIs as human influencers (Song & Fox, 2016; Yi, 2023) and that social media users perceive the relationship with CGI influencers not only as fandom but as a kind of friendship (e.g., Block & Lovegrove, 2021; Dibble et al., 2016).

Despite this first evidence, it remains unknown whether CGI influencers evoke the same or similar persuasive outcomes as human influencers do and how current human-computer interaction concepts relate to a CGI health communication setting. There are different types of CGI influencers—cartoon-look and human-look CGIs—and we do not know how these differ in terms of PSI. Likewise, it remains unclear whether health advice from CGI influencers can benefit recipients’ health behavior intentions (Hendry et al., 2022). This is critical, as CGIs, by definition, do not experience health issues, whereas online health information can both alleviate and heighten health concerns (Singh et al., 2016). Finally, we do not know how such persuasive processes may depend on the characteristics of the influencer and the recipients within the CGI context (Tikka & Oinas-Kukkonen, 2019); for instance, personal affectedness and gender likely influence PSI due to perceived similarity.

Against this background, we conducted an experimental study investigating the effects of a cartoon-look CGI, a human-look CGI, and a human influencer advising on social media about insomnia. We theorize that both CGI types can elicit PSI, which, in turn, may affect health behavior intentions. Further, we examined the moderating effects of personal affectedness by the respective health topic (i.e., insomnia) and gender. General social media use and CGI attitudes served as additional control variables.

The Persuasive Power of Social Media Influencers

Influencers on social media can be described as grassroots individuals who have established a public presence among their followers by creating viral social media content over time (Lou & Kim, 2019). If successful, they tend to quickly reach the stage of micro-celebrity, since their authenticity creates a personal brand around the influencer (Khamis et al., 2017). Characteristic of influencers are a medium to high number of followers on social media, extensive connections with others, and involvement in a variety of networks, which enables them to give all sorts of advice and product recommendations or even promote ideologies by increasing their outreach and communicative power (Windahl & McQuail, 2015). Influencers can have similar or even greater persuasive power than “real” celebrity endorsers, since both act along the blurred lines of seemingly personal and/or sponsored content (Gräve, 2017). Yet celebrities tend to present a controlled, work-focused picture online, while the appeal of influencers lies more in the depth of personal, self-disclosing content. Research suggests that the latter has considerable persuasive appeal, especially among adolescents (Eyal et al., 2020). In line with that, the persuasive effectiveness of (human) influencer communication styles has been thoroughly examined, commonly leading to the conclusion that through authenticity, relatability, and closeness, influencers tend to have even more persuasive power than traditional information sources (e.g., Djafarova & Rushworth, 2017; Drenten & Brooks, 2020).

Recently, the online influencer realm has gained new technological advances by drawing on artificial intelligence (AI). Thus, the lines between real humanness and virtual humanness are increasingly blurred.

The CGI Factor and How It Compares to Human Influencers

The social media influencer appeal is not necessarily bound to human entities. As Freberg et al. (2011) stated: “An independent third-party endorser who shapes audience attitudes through blogs, tweets, and the use of other social media” might as well be a non-human artificial being (p. 90), like, for example, a CGI influencer. As mentioned above, CGI influencers can broadly be differentiated into human-look CGIs and cartoon-look CGIs (Torres et al., 2019). Human-look CGIs have a realistic human look and are involved in “human” behavior (e.g., snapshots with friends) and “human” discourses (e.g., the sexism debate). Examples on social media are lilmiquela, a “19-year-old robot living in LA” (@lilmiquela on Instagram), or Shudu, “the world’s first digital supermodel” (@shudu.gram on Instagram). Cartoon-look CGIs, with their clearly non-human looks (i.e., cartoon-look optics), increase curiosity about the “mystery of the unknown” in social media users (Torres et al., 2019). An example on social media is noonoouri, a cartoon-look model who labels herself as “activist” and “vegan” (@noonoouri on Instagram).

Since CGI influencers often generate a level of publicity beyond a personal brand, they can be categorized as a new form of micro-celebrity (Drenten & Brooks, 2020). Just like human influencers, they post about their “lives”, endorse brands and products, and interact with their followers. Further, what makes CGI influencers especially persuasive for their followers is their adaptable but unique looks, which create an entire phenomenon around them and are a valuable asset for marketing strategies employing CGIs (Torres et al., 2019). This, in turn, piques the wider audience’s interest and hence increases CGIs’ messaging power (Da Silva Oliveira & Chimenti, 2021). The mystery of the unreal and virtual can, to some recipients at least, be more appealing than human influencers. This is underlined by cartoon-look CGI influencers sometimes being even more successful than human-look CGIs or human influencers (Molenaar, 2021). Since modern influencer culture is based on digital editing and applying filters, the jump to completely unreal depictions does not always evoke immediate irritation (Drenten & Brooks, 2020). Even a human’s social media presence is not (always) an adequate depiction of reality. CGI influencers are just a few small steps further removed from it and thus blend smoothly into the “fake” social media reality.

From a recipient’s perspective, studies show a tendency toward manufacturing one’s virtual self in online spaces to be perceived as more agreeable (McCreery et al., 2012). If this premise is reciprocal, that would also mean that recipients expect some forms of distortion from others and hence accept completely fabricated virtual personae to some degree (Drenten & Brooks, 2020).

Sociologically, human-machine interactions occur on a micro level but are part of broader societal changes shaped by AI (Tsvetkova et al., 2024). It has become evident that humans tend to treat machines like humans—showing politeness, empathizing with them when they are in distress (Rosenthal-von der Pütten et al., 2013), and even applying stereotypes to them (Siegel et al., 2009). Yet, when AI is recognized as such, neurophysiological responses toward AI differ from those toward humans due to the lack of perceived agency and morality in AI (Zhang et al., 2023). AI is viewed as less intentional, less self-interested, and less biased, with a narrower emotional range that elicits reduced gratitude, anger, and fairness responses (Chugunova & Sele, 2022)—thus, possibly, as not as authentic as humans. Accordingly, perceived humanness is hypothesized to be a key factor of authentic human interaction with AI (Kühne & Peter, 2023; Shin, 2022). However, when the AI nature of virtual influencers is not disclosed, it remains unclear how different degrees of humanness in AI influence recipients’ perceptions. The lines of realism are potentially blurred, given artificial filters on humans on the one hand and somewhat realistic CGI influencer appearances on the other.

Another relatively recent phenomenon might play into those perceptions as well: the uncanny valley. This psychological concept describes the discomfort or eeriness people experience when an artificial entity appears almost human but exhibits subtle imperfections in appearance or behavior. These imperfections—such as unnatural facial expressions or overly smooth movements—can make the entity seem unsettling rather than relatable (Arsenyan & Mirowska, 2021). CGI influencers, particularly those with highly stylized or cartoon-like designs, might evoke such reactions, as their human-like yet artificial features may create a sense of unease. Research suggests that this discomfort can hinder perceived closeness, as viewers may struggle to emotionally connect with an entity that feels “off” (Block & Lovegrove, 2021). Consequently, the uncanny valley effect could be a critical factor in determining the effectiveness of CGI influencers, particularly in contexts where trust and authenticity are essential—such as health communication.

To sum up the current state of the art: several perspectives, and even somewhat contradictory tendencies, make CGI influencer perceptions and their underlying mechanisms a complex research matter. CGIs build large follower bases either through their sheer uniqueness (i.e., CGIs clearly recognizable as such) or because they are hard to distinguish from real humans (i.e., human-like CGIs that might at first glance appear to be pictures of heavily edited real humans), and they have been shown to achieve similar or even greater success than their human counterparts (Molenaar, 2021). In contrast, considering sensitive human perception processes, recent AI research also shows that this kind of blind trust shifts quickly as soon as CGIs are a) disclosed as artificial, and/or b) behave in slightly non-human ways—referred to as the uncanny valley effect (Arsenyan & Mirowska, 2021). Although research on human-machine interaction is still relatively young, these patterns must be considered when exploring the complex interplay of multiple psychological mechanisms, which might even operate simultaneously, alternate, or change over time.

Parasocial Interactions With Influencers

A PSI is a form of social involvement with media personae (Horton & Wohl, 1956). It is characterized by being one-sided and non-reciprocal since the recipient considers the persona as a familiar acquaintance. In contrast, the online persona does not know the recipients. Extensive studies reveal that the highly interactive social media context fosters intense PSI experiences to a greater extent than traditional media such as TV (e.g., Chen, 2016; Tsai & Men, 2013). Due to the possibility of “following” the everyday lives of social media influencers and being able to comment and respond to their content, it is postulated that influencers can evoke stronger PSIs than other media personae (Tsai & Men, 2013). The power of PSI is illustrated by the fact that, in some cases, social media influencers can have more influence on adolescents than their peers and family members (Al-Harbi & Al-Harbi, 2017). Compared to traditional communication (i.e., news broadcasts, TV channels, newspapers), personal influencer communication leads to a stronger connection of the recipients with both the content and the (virtual or human) persona. As a result, recipients perceive influencers and their messaging as more genuine (Tran & Strutton, 2014). Further, this bond leads to an increase in perceived trust in the influencer, which explains why influencers can generate higher persuasion levels in their followers compared to, for instance, official institution accounts on social media platforms (Jin et al., 2021).

Aspects that determine the power of a PSI are similarity, physical appearance, and trustworthiness (e.g., Bond, 2016; Lou & Kim, 2019; Q. Tian & Hoffner, 2010). Accordingly, virtual characters, too, can possess those characteristics, meaning that CGI influencers can likewise ignite intense PSIs with their followers (e.g., Song & Fox, 2016; Yi, 2023). That being said, they could (but do not have to) be identified as purely virtual personae, meaning that recipients’ perceptions, evaluations, and relationships might vary between human-look CGI influencers, cartoon-look CGI influencers, and human influencers.

This aligns with the Elaboration Likelihood Model (ELM; Petty & Cacioppo, 1986). The ELM explains how viewers process information through two cognitive routes: the central route, involving thoughtful scrutiny, and the peripheral route, based on surface-level cues. In the context of PSIs and CGI influencers, we argue, based on the ELM, that the humanness of influencers might serve as a peripheral cue that leads to recipients’ low-effort, intuitive processing of the influencer. As viewers automatically, without much scrutiny of the influencer, perceive them as authentic and relatable because of their perceived humanness, stronger PSIs might be elicited (Petty & Cacioppo, 1986; Shin, 2022). On the other hand, less realistic (cartoon-like) CGI influencers may disrupt this intuitive processing, leading viewers to engage in central processing, where they critically assess the influencer’s authenticity. This could weaken PSI with CGI influencers by encouraging more evaluative judgments. Thus, the ELM suggests that PSI will be stronger the more human-like the influencer appears, as variations in appearance shape the depth and nature of viewers’ cognitive engagement.

The uncanny valley phenomenon might play a role as well when evaluating (CGI) influencers and engaging in PSI with them. This phenomenon is often observed with digital characters or robots that appear to be nearly human but possess subtle imperfections—such as comic-like appearances—which make them appear “creepy” or unsettling (Arsenyan & Mirowska, 2021). Hence, the uncanny valley of CGI influencers could trigger discomfort in some viewers, thereby impacting parasocial interactions (Block & Lovegrove, 2021).

Since similarity, likeability, and physical appearance are central to strong PSI (e.g., Schramm & Hartmann, 2008) and are thus bound to human-look appearances and behavior, and considering that the uncanny valley effect appears when non-human aspects are sensed, we argue that human influencers exceed both human-look and cartoon-look CGI influencers in PSI intensity. Based on the same rationale, we further assume that the human-look CGI influencer will outdo the cartoon-look CGI influencer in terms of PSI intensity. Again, drawing on the ELM, perceived humanness should theoretically ignite PSI more strongly by leading to peripheral, low-effort processing (e.g., Petty & Cacioppo, 1986). Moreover, as outlined, the strongest driver of PSI is similarity (Schramm & Hartmann, 2008), which means that suspicion about the artificiality of an influencer should lead to a decrease in PSI (Gong, 2008). When CGI influencers are depicted in cartoon form, it can be expected that recipients detect these artificial nuances to a stronger extent than with human-look CGIs and hence react with lower levels of PSI, while human influencers are expected to outperform any form of CGI regarding PSI intensity:

H1: If young adults are exposed to (a) a cartoon-look CGI influencer and (b) a human-look CGI influencer talking about insomnia, PSI will be lower compared to a human influencer.

H2: When young adults are exposed to a human-look CGI talking about insomnia, PSI will be higher compared to a cartoon-look CGI.

Effects on Health Behavior Intentions

Influencer communication may offer new possibilities for health communication. As a downside, however, health communication by social media influencers is often criticized by health scholars, since many influencers lack expertise and relevant qualifications (Klassen et al., 2018). Because influencers are not subject to peer-review systems and health regulations, their health messages might cause harm when wrong advice is distributed (Hendry et al., 2022). Yet the perceived authenticity of a health message is not only tied to adequate qualifications but also to perceived similarity and shared personal affectedness between the communicating influencer and the recipient—namely, to PSI (Hendry et al., 2022). As elaborated above, PSI can increase influencers’ perceived authenticity in communicating health issues (Sakib et al., 2020). This suggests that social media influencers who communicate health messages have the potential to be perceived as authentic and trustworthy, even if they lack proper health qualifications.

Adding to this, the personal narratives that influencers employ in their health messaging (for instance, talking about their health-related experiences) increase their credibility and, in turn, make their health messages more effective (Burke-Garcia, 2019). To increase likeability and similarity, influencers post about their daily lives, including challenging aspects like personal health struggles (Lou & Kim, 2019). However, the perception of this personal information depends on the recipient’s reception situation. Content relevance through personal affectedness is another important factor in forming a strong PSI (Kolo & Haumer, 2018), especially in health-related contexts (Y. Tian & Yoo, 2015). Influencers disclosing such private information are perceived as “opening up” personally, which is considered a PSI-enhancing feature (Hendry et al., 2022). Health topics additionally carry highly personal nuances, as someone who is not affected by certain issues will not be able to relate as much as someone who is (in)directly affected by them (Burke-Garcia, 2019).

So far, a few studies have shown that (health) messages of virtual influencers, such as CGIs, have similar persuasion effects as those of human health influencers (Thomas & Fowler, 2021), at least as long as they are perceived as somewhat human (Shin, 2022). Moreover, AI-based personal health assistants tend to be evaluated positively because accuracy is attributed to their data-based assumptions (Donadello & Dragoni, 2022). They can also help combat health misinformation—often spread by social media influencers (Burke-Garcia & Soskin Hicks, 2024). Further, human-computer interaction (HCI) research has shown that integrating AI in mental healthcare faces several barriers, such as problems with reliability, usability, safety, ethics, and socio-cultural adaptability. Simultaneously, real-time machine learning algorithms embedded into social media hold great potential for advancing human-computer interaction. Yet there is a need to design them humanely to ensure users’ trust and desirable behavioral outcomes (Balcombe & De Leo, 2022).

In the context of health communication, Shin, Jitkajornwanich, et al. (2024) used the Heuristic-Systematic Model (HSM) to explain how individuals process online health information. Like the ELM, the HSM posits two distinct processing routes: heuristic (intuitive, low-effort) and systematic (deliberate, high-effort). Empirical evidence for this model in the context of AI and health communication showed that a) the impact of AI-generated misinformation on perceived health information accuracy depended on whether people processed it heuristically or systematically, and b) systematic processing enhanced the likelihood of identifying misinformation (Shin, Koerber, & Lim, 2024). Most importantly for our study, these theoretical models imply that when the relevance of the communicated topic is high for recipients, they are more likely to process information systematically, leading to stronger persuasion effects. In other words, the more relevant the health topic of insomnia is for young adults, the more systematically they process the influencers’ content and possibly engage with the influencers or even adopt the suggested health behavior.

To sum it up, influencers’ personal narratives and recipients’ personal affectedness by the respective health issue can increase perceived similarity to, identification with, and trust in the influencer. Those perceptions are known to positively determine PSI development (Burke-Garcia, 2019; Schramm & Hartmann, 2008). We thus argue that if recipients are affected by the same health struggles as the influencer, such as insomnia, stronger PSI is induced compared to those who are not personally affected (Y. Tian & Yoo, 2015). We hypothesize the following:

H3: The relationships described in H1 are stronger for young adults with insomnia than those without insomnia.

According to Social Identity Theory (Tajfel, 1982), one’s identity is constructed by the social group one relates to most, leading to stronger identification processes with in-group members. This also applies to gender, meaning that women tend to identify with other women more than with men (e.g., Liu & Brock, 2011). Since perceived similarity is one of the central factors of PSI (Schramm & Hartmann, 2008), a matching gender of influencer and recipient might increase similarity perceptions. While results are mixed for men (e.g., Hudders & De Jans, 2022; Von Mettenheim & Wiedmann, 2022), previous research showed that women tend to perceive themselves as more similar to female influencers than to male influencers. This could lead to stronger PSI and, thus, positively impact attitudes and post-engagement (e.g., Hudders & De Jans, 2022; Liu & Brock, 2011). Hence, we argue that when the (CGI) influencer is female, female recipients might display stronger PSI than male recipients. Following this logic, our hypothesis states the following:

H4: The relationships described in H1 are stronger for young adults matching the gender of the influencer compared to young adults with no match.

Previous research showed that stronger PSI of followers with communicators leads to higher compliance with health advice given by the communicators (Sakib et al., 2020). Empirical evidence further suggests that PSI positively impacts young adults’ acceptance of influencers’ appeals because they view them as peer advice (e.g., Bond, 2016; Lou & Kim, 2019; Q. Tian & Hoffner, 2010). Since CGI influencers can be similar to human influencers (i.e., human-look CGIs; Thomas & Fowler, 2021), such virtual influencers can also become health role models. This then motivates users to follow the CGI’s (health) advice and copy their (health) behavior, even though they are not human (Kühn & Riesmeyer, 2021). In some instances, influencers have stronger effects on adolescents than their peers and family, making adolescents even more prone to seeing virtual influencers as role models and imitating their lifestyles (Al-Harbi & Al-Harbi, 2017).

As Shin (2016) highlighted, parasocial experiences (so-called parasociability) must comprise users’ perceptions of social presence and usefulness to shape their emotional and cognitive engagement. Since health advice normally contains utility and practical applications for users, users’ PSI with digital influencers could enhance their engagement with the health content. Thus, these dynamics could foster intentions to engage in health-related behaviors, particularly when the (CGI) influencer is perceived as socially interactive and useful. Hence, we argue that it is plausible that PSI with CGI influencers is positively associated with health behavior intentions among young adults (e.g., Gong, 2008; Schramm & Hartmann, 2008; Shin, 2016). To sum it up, we assume that a strong PSI with (CGI) influencers could motivate followers to imitate health behaviors. Hence, we propose the following hypothesis:

H5: PSI increases young adults’ health behavior intentions.

Methods

We conducted a 1 × 3 (type of influencer: real human influencer, human-look CGI influencer, and cartoon-look CGI influencer) between-subjects online experiment2. We employed a sample of young adults with quotas for age (N = 443; Mage = 21.53, SDage = 2.82, range: 16–26) and gender (62.8% female). The educational background was diverse: 16.7% had low education (no education completed or lower secondary education), 47.2% middle education (high school graduate), and 35.2% high education (college degree). Participants took part in the study in October 2021 and were recruited with the help of a professional polling company in the United Kingdom that offered incentives for participation. The online experiment was part of a larger survey with unrelated topics.

Procedure

The participants were randomly assigned to one of our three conditions. After giving informed consent, they were presented with the stimulus. Next, we assessed the mediator and the dependent variables. Finally, participants were thanked and thoroughly debriefed regarding the fabricated stimulus material. The study was conducted following the ethical guidelines of the University of Vienna and was ethically approved by the Institutional Review Board of the Department of Communication (ID 20210721_062).

Stimulus Material

Each participant was exposed to a fictitious female Instagram influencer called “ava.gram”. The influencer “Ava” posted three consecutive postings about her sleeping issues due to insomnia and how she deals with them (a full display of all used stimuli can be found in the Appendix). Depending on the experimental condition, the pictures of “Ava” varied in their degree of realism. The stimuli in the real human influencer condition (n = 146) were not photoshopped and depicted a real human woman. The stimuli in the human-look CGI condition (n = 153) were edited with Photoshop to make “Ava” look artificial, in line with typical CGIs. The postings in the cartoon-look CGI condition (n = 144) were edited in a cartoonish style. None of the three versions of the fictitious influencer “Ava” contained any disclosure regarding the nature of the influencer (human vs. CGI). We wanted participants to detect (or not detect) the realness of the respective stimulus on their own based on visual cues, such as photoshopped facial features (human-look CGI) or complete “cartoonification” (cartoon-look CGI). Each respondent thus saw three consecutive postings within their assigned condition (human influencer vs. human-look CGI influencer vs. cartoon-look CGI influencer). In each posting, the caption beneath the respective picture contained a text about insomnia: the first posting included a rough definition of insomnia, the second included information about some of the symptoms of insomnia and how they affect the daily life of the influencer, and the third gave advice on how the influencer tries to combat insomnia (for the exact wording, see the stimulus material in the Appendix).

Measures

Unless indicated otherwise, all items were measured on a 7-point Likert scale from 1 (strongly disagree) to 7 (strongly agree) and all scales resulted in a one-factor solution.

PSI was measured with 15 items based on Schramm and Hartmann (2008). One item had to be excluded due to an unsatisfactory factor loading. All remaining items loaded on one factor with standardized factor loadings > .65, and a confirmatory factor analysis (CFA) showed acceptable model fit, RMSEA = .117, CFI = .919, TLI = .904, SRMR = .046; χ²(77) = 541.70, p < .001; see Table 1. The scale was reliable (Cronbach’s α = .98, ω = .96; M = 3.45, SD = 1.49).

Table 1. Factor Loadings of the Parasocial Interaction (PSI) Variable Items.

Item Wording

Factor Loadinga

The influencer is very similar to me.

.867

After seeing these three posts, I feel inclined to follow the influencer.

.866

The story of the influencer touched my feelings.

.861

I can identify with the influencer.

.858

The influencer has a role model effect on me.

.857

I would like to be very similar to the influencer and the way she deals with insomnia.

.855

I feel emotionally connected to the influencer.

.854

I have many things in common with the influencer.

.848

I would share this content and information with people in my life after having read the posts.

.831

I am interested in the influencer (their photos, the way they write/express themselves).

.800

I can connect with her and her content.

.793

After seeing these three posts, I am interested in seeing the influencer’s Instagram feed and other photos they post.

.786

The influencer and her content feel relatable to me and/or my lifestyle.

.744

The influencer feels approachable.

.653

Note. a One factor extracted.

Health behavior intentions were measured with the following three items based on Y. Kim et al. (2020): I will try to live a healthy lifestyle in order to avoid sleep problems; I will try to avoid alcohol and caffeine because they negatively affect sleep quality; A healthy diet is important to me in order to avoid sleep problems (Cronbach’s α = .79, ω = .80; M = 4.75, SD = 1.43). Standardized factor loadings ranged from .79 to .88, indicating strong convergence with the latent construct.

Personal affectedness by insomnia was measured with the following four items based on Morin et al. (2007): I have experienced sleep-related problems in the last few months; The sleeping problems I am experiencing are more severe compared to what my friends, family and acquaintances experience; When experiencing sleeping problems, I research my problems and try to look for solutions; I feel like no one understands the problems I face when it comes to sleeping (Cronbach’s α = .70, ω = .76; M = 4.06, SD = 1.76). A CFA showed good model fit, RMSEA = .124, CFI = .974, TLI = .923, SRMR = .030; χ²(2) = 15.73, p < .001, and standardized factor loadings were > .65.3

Besides the sociodemographic variables of age, gender, and education, social media use was measured with the following four items: Using social media to consume content is a part of my daily routine; In general, I interact with social media platforms on a regular basis; I use (a) social media platform(s); I use Instagram on a regular basis (Cronbach’s α = .81, ω = .82; M = 5.45, SD = 1.37); the CFA showed excellent model fit: RMSEA = .030, CFI = .999, TLI = .996, SRMR = .012; χ²(2) = 2.77, p = .250. As another covariate, CGI attitudes were measured with the following three items inspired by Da Silva Oliveira and Chimenti (2021): Computer-generated influencers are creepy; Computer-generated influencers are somehow scary; I am suspicious towards computer-generated influencers (Cronbach’s α = .91, ω = .91; M = 5.14, SD = 1.65). Standardized factor loadings ranged from .90 to .94.
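All scales above report Cronbach’s α and McDonald’s ω as reliability indices. As a minimal, self-contained sketch of how these two coefficients are computed (using simulated 7-point Likert responses and hypothetical factor loadings, not the study’s data), consider:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def mcdonald_omega(loadings: np.ndarray) -> float:
    """McDonald's omega from standardized one-factor loadings."""
    residuals = 1 - loadings ** 2  # residual variances under a one-factor model
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + residuals.sum())

# Simulated respondents answering four 7-point Likert items (illustrative only)
rng = np.random.default_rng(42)
latent = rng.normal(size=(300, 1))
responses = np.clip(np.rint(4 + 1.2 * latent + rng.normal(0, 0.8, (300, 4))), 1, 7)

print(f"alpha = {cronbach_alpha(responses):.2f}")
print(f"omega = {mcdonald_omega(np.array([.79, .82, .85, .88])):.2f}")  # hypothetical loadings
```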

Manipulation and Randomization Check

Our manipulation check consisted of four items referring to the perceived realness of the influencer on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree). The items captured the degree to which participants perceived the images as computer-generated imagery (CGI) and not as photos of a real person (M = 3.97, SD = 2.16), perceived that the influencer does not exist in reality (M = 3.84, SD = 2.02), perceived that the influencer is computer generated (CGI; M = 3.88, SD = 2.10), and had the impression that the influencer is not real (M = 3.95, SD = 2.10; Cronbach’s α = .99, M = 3.91, SD = 1.92). Results showed that participants assigned to the cartoon-look CGI influencer condition (n = 144) scored significantly higher on these items than participants assigned to the human-look CGI influencer condition, n = 153; t(442) = 1.61, p < .001, d = 3.83, or the real human influencer condition, n = 146; t(442) = 2.25, p < .001, d = 4.96. The real human influencer was perceived as less fabricated than the human-look CGI influencer, n = 153; t(442) = 0.64, p = .005, d = 1.19.
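A comparison of this kind can be sketched as follows, assuming per-condition manipulation-check score arrays with hypothetical names; Welch’s t-test is used here for illustration, though the authors do not specify the exact test variant.

```python
import numpy as np
from scipy import stats

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d using the pooled standard deviation of two independent groups."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Hypothetical usage with perceived-realness scores per condition:
# t, p = stats.ttest_ind(cartoon_scores, human_look_scores, equal_var=False)
# d = cohens_d(cartoon_scores, human_look_scores)
```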

Across all conditions, randomization checks regarding age, F(2, 442) = 1.162, p = .314, η² = 0.005, gender, χ²(1) = 0.142, p = .753, V = .018, educational level, χ²(1) = 5.141, p = .455, V = .108, social media use, F(2, 442) = 0.125, p = .883, η² = 0.001, and CGI attitudes, χ²(2) = 0.466, p = .918, V = .023, showed no systematic differences.

Thus, we deemed both the manipulation and the randomization successful.
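Balance checks of this kind can be sketched as follows; the condition and covariate columns are hypothetical, and the synthetic data serve only to make the snippet runnable.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical synthetic data standing in for the real survey data
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "condition": ["human", "cgi_human", "cgi_cartoon"] * 30,
    "gender": rng.choice(["female", "male"], 90),
    "age": rng.integers(16, 27, 90),
})

# Categorical covariate (e.g., gender): chi-square test of independence
chi2, p_chi, dof, _ = stats.chi2_contingency(pd.crosstab(df["condition"], df["gender"]))

# Continuous covariate (e.g., age): one-way ANOVA across the three conditions
f_stat, p_anova = stats.f_oneway(*[g["age"].to_numpy() for _, g in df.groupby("condition")])
```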

Data Analysis

We performed a moderated mediation analysis using Model 9 of the PROCESS macro (Hayes, 2013) within the Statistical Package for the Social Sciences (SPSS). To test H1 and H2, we ran two models, one with the real human influencer condition and one with the human-look CGI influencer condition as the reference group. We used PSI as the mediator in both models and health behavior intentions as the dependent variable. Further, we entered personal affectedness by insomnia and gender (male, female) as moderator variables. Finally, we controlled for age, education, social media use, and CGI attitudes. The 95% confidence intervals for indirect effects were derived via bootstrapping with 5,000 resamples.
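For readers working outside SPSS, the core logic of the percentile-bootstrap indirect effect can be sketched in Python. This is a simplified simple-mediation sketch under assumed column names (psi, intent, cgi_dummy), not the authors’ full Model 9 specification with moderators and covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def bootstrap_indirect(df: pd.DataFrame, n_boot: int = 5000, seed: int = 42):
    """Percentile-bootstrap 95% CI for the indirect effect a*b of a condition
    dummy on health behavior intentions via PSI (simple mediation only)."""
    rng = np.random.default_rng(seed)
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        resample = df.iloc[rng.integers(0, len(df), len(df))]
        # a-path: condition dummy -> PSI
        a = smf.ols("psi ~ cgi_dummy", data=resample).fit().params["cgi_dummy"]
        # b-path: PSI -> intentions, controlling for the condition dummy
        b = smf.ols("intent ~ psi + cgi_dummy", data=resample).fit().params["psi"]
        estimates[i] = a * b
    return estimates.mean(), np.percentile(estimates, [2.5, 97.5])
```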

Results

As assumed in H1, the level of PSI in the cartoon-look CGI condition, b = −0.59, SE = 0.14, p < .001; 95% CI [−.86, −.32], as well as in the human-look CGI condition, b = −0.31, SE = 0.14, p = .028; 95% CI [−.58, −.03], was lower than in the real human influencer condition. Thus, H1 is supported.

H2, stating that PSI in young adults is higher when a human-look CGI posts about insomnia as compared to a cartoon-look CGI, was also confirmed by our data, b = −0.29, SE = 0.14, p = .041; 95% CI [−.56, −.01].

Our H3, assuming that personal affectedness by insomnia plays a moderating role in the association between (CGI) influencers and PSI, was not supported by our data. Results showed that being affected by insomnia did not influence the level of PSI with the influencer, regardless of whether the influencer was a human-look CGI, b = −0.15, SE = 0.09, p = .103; 95% CI [−.34, .03], a cartoon-look CGI, b = −0.12, SE = 0.09, p = .213; 95% CI [−.30, .07], or a real human influencer, b = 0.15, SE = 0.09, p = .103; 95% CI [−.03, .34]. Hence, H3 is not confirmed. Interestingly, results showed a positive main effect of personal affectedness by insomnia on PSI, b = 0.64, SE = 0.07, p < .001; 95% CI [.50, .77], independently of exposure to the influencer addressing insomnia.

Our H4, stating that the relationship described in H1 is stronger for young adults matching the gender of the influencer compared to young adults with no match, was also not supported by our data. Results showed that the gender of the participants did not moderate the level of PSI, regardless of real human influencer exposure, b = 0.03, SE = 0.29, p = .912; 95% CI [−.54, .60], human-look CGI influencer exposure, b = −0.03, SE = 0.29, p = .912; 95% CI [−.60, .54], or cartoon-look CGI influencer exposure, b = 0.24, SE = 0.29, p = .403; 95% CI [−.33, .81].

Further, confirming our H5, PSI was positively associated with young adults’ health behavior intentions: the stronger the PSI, the stronger the willingness to engage in health behavior, b = 0.28, SE = 0.04, p < .001; 95% CI [.20, .37]. In addition, analyses showed no direct effect of the type of influencer on health behavior intentions; cartoon-look CGI: b = 0.13, SE = 0.15, p = .389; 95% CI [−.17, .43]; human-look CGI: b = 0.19, SE = 0.15, p = .214; 95% CI [−.11, .48]; real human: b = −0.19, SE = 0.15, p = .214; 95% CI [−.48, .11]. Thus, the effect of influencer type on health behavior intentions is fully mediated by PSI. Accordingly, H5 is supported by the data.

For PSI, age (b = 0.01, SE = 0.02, p = .694), education (dummy-coded into low and high education; high-education dummy: b = 0.38, SE = 0.13, p = .004), CGI attitudes (b = −0.16, SE = 0.04, p < .001), and social media use (b = 0.07, SE = 0.05, p = .123) were entered as control variables. Additionally, for health behavior intentions, age (b = −0.01, SE = 0.02, p = .605), education (high-education dummy: b = 0.13, SE = 0.14, p = .336), CGI attitudes (b = 0.14, SE = 0.04, p < .001), and social media use (b = 0.08, SE = 0.05, p = .117) were entered as control variables.

The results are visualized in Figure 1.

Figure 1. Visualized Results.

Note. Reference group for the Human-Look and Cartoon-Look CGI Influencer groups: Real Human Influencer; reference group for the Real Human Influencer group: Human-Look CGI Influencer. *p < .05, **p < .01, ***p < .001.

For a summary of our moderated mediation model, see Table 2.

Table 2. Summary of the Moderated Mediation Model.

 

                                                                       PSI                          Health Behavior Intentions
                                                                b       SE        p             b       SE        p
^aHuman-Look CGI Influencer                                  −0.31     0.14      .027          0.19     0.15      .215
^aCartoon-Look CGI Influencer                                −0.59     0.14     < .001         0.13     0.15      .389
^bReal Human Influencer                                       0.31     0.14      .027         −0.19     0.15      .214
^bCartoon-Look CGI Influencer                                −0.29     0.14      .039         −0.06     0.15      .706
^aPersonal Affectedness by Insomnia                           0.64     0.07     < .001
^aHuman-Look CGI Influencer * Personal Affectedness by Insomnia      −0.15     0.09      .103
^aCartoon-Look CGI Influencer * Personal Affectedness by Insomnia    −0.12     0.09      .213
^bReal Human Influencer * Personal Affectedness by Insomnia           0.15     0.09      .105
^aGender                                                     −0.04     0.21      .841
^aHuman-Look CGI Influencer * Gender                         −0.03     0.29      .916
^aCartoon-Look CGI Influencer * Gender                        0.24     0.29      .403
^bReal Human Influencer * Gender                              0.03     0.29      .992
Age                                                           0.01     0.02      .687         −0.01     0.02      .618
Education_high (dummy)                                        0.38     0.13      .004          0.13     0.14      .350
CGI Attitudes                                                −0.16     0.04     < .001         0.14     0.04     < .001
Social Media Use                                              0.07     0.05      .111          0.08     0.05      .104
PSI                                                                                            0.28     0.04     < .001
Explained Variance                                            0.38                             0.14

Note. PROCESS macro 4.0, Model 9 with 5,000 bootstrap samples, N = 443. ^aReal Human Influencer inserted as the reference group; ^bHuman-Look CGI Influencer inserted as the reference group.

Discussion

The aim of this study was to shed light on the persuasive power of CGI influencers on young social media users. With the increasing popularity of CGI influencers, empirically understanding the mechanisms of CGI influencer communication is crucial. This has important theoretical implications at both a psychological micro-level and a sociological macro-level, and it allows us to derive practical implications for CGI influencer health communication.

We investigated the effects of cartoon-look and human-look CGIs on young adults’ PSI and health behavior intentions. Additionally, we examined how personal affectedness by insomnia and young adults’ gender influenced the PSI with the (CGI) influencer. Our findings demonstrate that PSI with a CGI influencer (human-look or cartoon-look) is generally lower compared to a real human influencer, supporting the notion of human “realism” as a key driver of parasocial interactions (e.g., Gong, 2008) and human-AI interaction (Shin, 2022). Since real human influencers fulfill criteria such as similarity, likeability, and physical appearance to a greater extent than CGI influencers, they seem to evoke more PSI than CGIs (e.g., Schramm & Hartmann, 2008). Moreover, the cartoon-look CGI led to significantly less PSI than the human-look CGI. Our findings thus highlight the Uncanny Valley phenomenon as a key limitation of CGI influencers, yet its connection to PSI warrants further elaboration. Specifically, participants who perceived CGI influencers as less human-like reported weaker PSI, suggesting that heightened “unrealness” disrupted social bonding. This aligns with prior research indicating that human-like cues enhance social connection (e.g., Shin, 2022). The uncanny nature of CGI influencers may have triggered cognitive dissonance, reducing emotional engagement. Future research should explore how varying degrees of realism shape PSI and whether factors like disclosure of CGI status moderate these effects.

As for moderating variables, we theorized that PSI is stronger for recipients who suffer from insomnia themselves. This hypothesis was not supported. Even though personal relatability is known to be interconnected with stronger PSI (e.g., Kolo & Haumer, 2018; Y. Tian & Yoo, 2015), personal affectedness by insomnia did not make a difference in our study. This contradicts recent health research, where several scholars have suggested that personal affectedness increases PSI levels towards health communicators posting about health topics (e.g., Burke-Garcia, 2019; Sakib et al., 2020; Y. Tian & Yoo, 2015). However, even if personal affectedness by the respective health issue is given, recent human-AI interaction research points toward an uncanny valley effect (Arsenyan & Mirowska, 2021) in the context of virtual influencers. This could lead to irritation about the artificial visual cues of an influencer, which might, in consequence, undermine the credibility of health information, even though the topic addressed by the influencer might be personally relevant to the user. Another possible explanation for our finding might be related to the topic of the study, as the prevalence of insomnia was relatively high in our sample (M = 4.06 on a 7-point Likert scale). This suggests that most participants could relate to insomnia directly or indirectly by knowing someone who suffers from it (Roth, 2007). For topics that are less prevalent in our society (e.g., rare illnesses), personal affectedness might moderate the influence of CGI influencers’ communication on PSI. However, we found a main effect of personal affectedness by insomnia on PSI independently of exposure to (CGI) influencers. This effect needs further elaboration, possibly pointing to individual personality traits of recipients that heighten PSI with influencers, independently of whether they are CGI or human. Thus, the question of why personal affectedness by insomnia leads to stronger PSI formation is yet to be determined by future studies.

We also found no moderating role of gender. Some women may not have related to “Ava” due to her looks and likability, despite the matching gender; notably, there is also evidence that attractiveness is not as important for recipients’ evaluations (H. Kim & Park, 2023). Since we only displayed one female influencer, gender-matching effects should be retested with a variety of female influencers (and a completely female sample). Moreover, because our stimuli only showed a female influencer, gender-matching effects for men remain unexplored. Future studies should therefore vary the gender of the (CGI) influencers.

Finally, we found that PSI is positively associated with young adults’ health behavior intentions. This is in line with prior research suggesting that PSI and behavioral outcomes are interrelated constructs due to the persuasive power of communicators igniting an intense PSI in their recipients (e.g., Bond, 2016; Lou & Kim, 2019; Sakib et al., 2020; Q. Tian & Hoffner, 2010). Since PSIs generally have a beneficial effect on persuasive outcomes, it is unsurprising that this also holds for young adults’ health behavior intentions when receiving health advice from a young influencer. Further, this implies that parasocial relationships (PSR; a stronger, accumulative, and long-lasting version of PSI; see, e.g., Dibble et al., 2016) might have even stronger effects on health behavior outcomes. Future studies should test this in a longitudinal setting where recipients follow a fictitious influencer over a longer time to build strong PSRs. On that note, shedding light on different (e.g., older) age groups than our rather young sample is a promising future direction as well, seeing that the patterns distilled here might change with varying ages and across larger time spans.

In addition, analyses showed no direct effect of influencer type on health behavior intentions, suggesting that young adults’ PSI fully mediates the influencer’s effects. There is a line of research suggesting that, especially regarding sensitive health topics, interaction with non-humans might even be beneficial for the willingness to disclose illness (Lucas et al., 2014) and the readiness to receive emotional support (Pauw et al., 2022; Qu et al., 2014). However, this direct relation was not visible in the context of CGI influencers and insomnia. It could be that the one-sided nature of the exposure undermined the possible benefits of non-human influencer interaction. Even though the content of our stimuli was directly related to insomnia, a quite common health issue, the mere influencer content about insomnia did not impact young adults’ health behavior outcomes directly. This, once more, underlines the importance of PSI in social media communication.

Implications for Theory and Practice: Multiperspectivity on CGI Influencers

Overall, our study suggests that the persuasive potential of CGIs is limited regarding real-world topics such as human health. This is in line with previous research indicating that influencers’ “human” authenticity, as well as their approachability and similarity to the recipients, are central factors determining recipients’ attraction to social media influencers generally (e.g., Khamis et al., 2017), and to AI entities more specifically (Shin, 2022).

Our study holds implications for both theory and practice. Theoretically, our study backs up current empirical evidence for the uncanny valley effect emerging for human-like CGI influencers in social media settings (e.g., Arsenyan & Mirowska, 2021). We not only showed that human-look CGI influencers appear to ignite less PSI than real human influencers but also did so within the context of insomnia, a health topic that many can relate to but an inherently human one nonetheless. CGI influencers intersect multiple fields, from media studies to psychology, offering insights into how AI and human-computer interaction shape audience perceptions of the credibility, trust, and influence of virtual communicators. By showing that PSI of young users is more easily created by human influencers compared to CGI influencers, at least within the health context, this study contributes theoretically to prior persuasion research in health communication.

Regarding practical implications, using CGI influencers for target-tailored, empathic health information free from misinformation might become a valuable and cost-effective asset in health campaigns. CGI influencers challenge traditional ideas of identity and authenticity, reshaping beauty, health, and lifestyle norms by blurring reality and fiction. In health communication, these virtual personas represent innovative tools to engage recipients across diverse demographics, possibly fostering trust and behavioral change (e.g., Chung et al., 2024). From a marketing perspective, CGI influencers are cost-effective and controllable, avoiding risks associated with human influencers, like scandals or unpredictability. Politically, however, CGI influencers, entirely shaped by those who create them, could subtly influence public opinion or policy debates, which warrants scrutiny (Ahn et al., 2022).

Beyond health communication, our findings hold broader implications for digital influence, particularly in political, marketing, and ethics contexts. As CGI influencers become increasingly indistinguishable from real humans, their role in shaping public opinion and trust demands closer scrutiny, for instance by disclosing their AI nature when they are used in marketing. Research indicates that hyperrealistic AI-generated influencers can appear at least as authentic as real humans, suggesting that audiences may engage with CGI figures in ways that challenge traditional media literacy. This evolving realism raises ethical and regulatory concerns, particularly regarding misinformation and persuasion in political campaigns. Future research should examine how these hyperrealistic digital figures influence credibility, trust, and engagement across different domains.

To summarize, CGI influencers may come to play a transformative role in shaping the communication and engagement of young people, particularly in the rapidly evolving digital health landscape.

Limitations and Future Research

This study comes with certain limitations. First, our sample only consisted of young adults. Results might look different for children or older adults. Hence, replicating this study with samples of children or older adults could be of promising value. This would give additional scientific insight into CGI influencer effects for different age groups, as age is known to influence social media usage and perceptions tremendously (e.g., Zimmermann et al., 2022).

Second, future studies should examine degrees of CGI influencer realism in more detail. Besides cartoon-look CGI influencers that resemble human cartoons, CGI influencers in the form of animals or alien-like avatars are on the rise (Molenaar, 2021). This calls for a more differentiated investigation of the different types of CGI personae in an experimental setting. Moreover, although our manipulation worked according to the manipulation check, the human-look CGI influencer could have been perceived as a strongly photoshopped human influencer instead of a CGI. More differentiated manipulations involving a variety of human-look CGI influencer depictions should be tested in follow-up studies. Our study focused on natural effects and deliberately did not include disclosures, which represent a distinct research avenue. Future research could explore how AI disclosures impact the perception of CGI influencers.

Third, the mental health topic of insomnia is only one fragment of the multi-faceted health communication area. Besides mental health topics, further health-related topics concerning physical health (e.g., obesity) should be explored in future studies. Mental and physical health are closely intertwined and can be influenced by lifestyle influencers on social media (e.g., Pilgrim & Bohnet-Joschko, 2019). Future experimental manipulations in this context should cover broader (mental) health topics. On a similar note, we also acknowledge that the present methodology only allowed us to measure health behavior intentions, not actual health behavior. While this was not the goal of the study, we encourage future research measuring actual behavior in response to CGI influencer health advice, for example by employing a multi-wave longitudinal study to estimate actual behavior changes over time.

Fourth, as mentioned in the discussion section, we only depicted one female influencer in our stimuli. Confounding variables such as physical appearance preferences may have biased our results. Such variables could be eliminated in future research by including male and diverse influencers as well as female, male, and diverse characters with varied looks.

Lastly, at the time of data collection, CGI influencers’ content was probably still at least partially created by humans, since generative AI was not as advanced as it is now. With advancements in AI, future studies should consider how autonomously generated content may influence user perceptions and engagement.

Conclusion

To our knowledge, this is the first study in the context of health communication that investigated different types of (CGI) influencers giving health advice on social media. Our findings suggest that cartoon-look CGI influencers struggle to establish PSI compared to human or human-look counterparts, reinforcing the role of perceived realism. However, a paradox emerges: CGI influencers, unable to experience health issues, may lack credibility in health communication due to the uncanny valley phenomenon. This raises ethical concerns about their use in health promotion, where authenticity is crucial for persuasion. Nevertheless, their advantages in producing (sub-)audience-tailored health content might enhance modern health communication, since they can still have persuasive effects similar to those of human influencers. Ultimately, however, the results of the present study suggest that real human influencers are still superior to CGI influencers (human-look CGIs and cartoon-look CGIs) regarding their effects on young adults’ PSI formation and, indirectly, on young adults’ health behavior.

Footnotes

1 Insomnia is the medical term for sleeping problems of clinical relevance due to their severity. Insomnia is listed as a mental health problem since sleeping issues usually originate in mental health struggles and/or psychological stress (Morin et al., 2007).

2 The data set used is available on OSF.

3 Although the RMSEA values for the PSI and insomnia scales exceeded conventional cutoffs (Hu & Bentler, 1999), they should be interpreted with caution. For the insomnia scale, the elevated RMSEA is likely due to the small number of items and degrees of freedom, which can inflate RMSEA in smaller samples (Kenny et al., 2015). Notably, the CFI and SRMR met recommended thresholds, and all standardized loadings exceeded .65, supporting convergent validity. For the PSI scale, despite the higher-than-ideal RMSEA, the other fit indices were acceptable (CFI > .90, SRMR < .08), and loadings were consistently strong. Also, the chi-square test is sensitive to even small model misfits, and in larger samples it is common to see a significant p-value. Thus, the overall model fit is considered acceptable and theoretically meaningful despite the elevated RMSEA.
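For reference, a common formulation of RMSEA makes the role of the degrees of freedom explicit (the exact denominator varies slightly across software):

\[
\mathrm{RMSEA} = \sqrt{\frac{\max(\chi^{2} - df,\ 0)}{df\,(N - 1)}}
\]

Holding the misfit χ² − df constant, a small df in the denominator inflates the index, which is why short scales with few degrees of freedom can show elevated RMSEA values despite otherwise acceptable fit.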

Conflict of Interest

The authors have no conflicts of interest to declare.

Use of AI Services

The authors declare they have not used any AI services to generate or edit any part of the manuscript or data.

Data Availability

The data set used in this study was uploaded to OSF and can be accessed via the following link.

Appendix

Stimuli Material

Instagram Postings

Figure A1. Condition 1: Real Human Influencer.

Figure A2. Condition 2: Human-Look CGI Influencer.

Figure A3. Condition 3: Cartoon-Look CGI Influencer.

Insomnia-Related Content of the Postings

Post 1. “As many of you know, I have been dealing with on-and-off insomnia for a while now, and loads of you have been asking me about it lately, so I wanted to give you a quick summary of what exactly insomnia is. So! Insomnia is a common sleep disorder that can make it hard to fall asleep, hard to stay asleep, or cause one to wake up too early and not be able to get back to sleep. Because it robs you of vital rest, functioning properly during the day can be very challenging, as insomnia can sap not only your energy level and mood but also your health, work performance, and quality of life. And yes, it is as rough as it sounds! #insomnia #sleephealthawareness”

Post 2. “After my last post about insomnia and what exactly it is, I got so many messages from you guys identifying with the symptoms that I listed and worrying about unknowingly having insomnia. So, I wanted to clear it up some more. First of all: don’t worry! Insomnia is much more common and widespread than most people realize! At some point in their lives, most adults WILL experience short-term (acute) insomnia, which lasts for days or weeks. It’s usually the result of stress, lifestyle, or a traumatic event. But some people have long-term (chronic) insomnia that lasts for a month or more. This can be triggered by things like screen time, blue light exposure, as well as anxiety and diet. Either way, you don’t have to put up with non-stop sleepless nights! Simple changes in your daily habits can often help and I’ll talk about this in my next post! #insomnia #sleephealthawareness”

Post 3. “HOW TO TACKLE INSOMNIA: As promised, here are some useful tips I make use of whenever I am experiencing a rough patch with my insomnia: 1) Wake up at the same time each day. Even if you barely slept or fell asleep way too late, this helps to ensure you are creating a regular sleeping pattern your body can get used to. 2) Eliminate naps, alcohol, and stimulants like caffeine or nicotine to the best of your ability. 3) Get all your worrying over with before you go to bed. If you find yourself lying in bed thinking about tomorrow, consider setting a time before bed to review the day and make plans for the next day. The goal is to avoid doing these things while trying to fall asleep. 4) Commit to exercising regularly, as it is known to improve sleep quality and duration. And lastly, treat yourself to a cup of chamomile tea before bed, like I am in this picture, as chamomile is commonly regarded as a mild tranquilizer or sleep inducer. #insomnia #sleephealthawareness”.
