Protective behavior against personalized ads: Motivation to turn personalization off

Vol. 13, No. 2 (2019)
Special Issue: Digital Advertising and Consumer Empowerment

Abstract

Data collection and processing for personalized advertising has become a common practice in the industry. For this reason, regulators have been aiming to empower consumers to exercise more control over their data. Companies that collect and process data for personalized advertising are required to be transparent and, among other things, provide consumers with technical knowledge about the personalization process. At the same time, they have started offering settings to withdraw consent for processing data for personalization purposes by opting out from personalized advertising. However, such opt-out functions remain mostly unused. Thus, this study investigates, first, whether technical knowledge about personalization empowers consumers to use such opt-out functions and, second, what mechanisms can explain the empowering impact of knowledge. Drawing on Rogers’ (1975) protection motivation theory (PMT) and applying an innovative combination of traditional (online experiment, N = 425, Mage = 48) and computational (online behavior tracking, N = 80, Mage = 48) research methods, the study shows that technical knowledge has no empowering effect on consumers; instead, it indirectly lowers opt-out motivation and behavior. The results also demonstrate that perceived severity and response efficacy increase motivation to opt out, while a positive attitude towards personalization and perceived self-efficacy lower it. As one of the first studies to apply PMT to the personalization context and computational methods to measure opt-out behavior, it offers several important societal and theoretical implications regarding consumer empowerment and personalized advertising online.


Keywords:
Personalized digital advertising; consumer empowerment; consumer knowledge; protection motivation theory; computational research
Author biographies

Joanna Strycharz

Amsterdam School of Communication Research, University of Amsterdam, Amsterdam, Netherlands

Joanna Strycharz (MSc.) is a PhD Candidate of Persuasive Communication at the Amsterdam School of Communication Research, University of Amsterdam. Her 3-year PhD project is part of the Research Priority Area ‘Personalised Communication,’ an interdisciplinary cooperation between the Institute for Information Law and ASCoR, funded by the University of Amsterdam. She examines consumer knowledge of personalized marketing and its influence on the impact of such persuasion tactics as well as consumer empowerment.

Guda van Noort

Amsterdam School of Communication Research, University of Amsterdam, Amsterdam, Netherlands

Guda van Noort (Ph.D) is a professor of Persuasive Communication at the Amsterdam School of Communication Research, University of Amsterdam. She is the director of SWOCC, is involved in the EAA and the DDMA Privacy Authority, and is an honorary research associate at the Tilburg Centre for Cognition and Communication. Her work has been published in leading journals (e.g., International Journal of Advertising, Journal of Interactive Marketing), receiving awards and grants (e.g., MSI, AAA). She reviews for journals, serves on NWO and dissertation committees, supervises PhD candidates, and teaches.

Edith Smit

Amsterdam School of Communication Research, University of Amsterdam, Amsterdam, Netherlands

Edith Smit (Ph.D) is director of the Graduate School of Communication and professor of Persuasive Communication at the Amsterdam School of Communication Research, University of Amsterdam. She has been actively involved with the EAA, the DDMA Privacy Authority, and SWOCC. She has a proven track record of publishing in leading journals (e.g., Computers in Human Behavior, Journal of Media Psychology), receiving awards and grants (e.g., MSI, AAA), reviewing for journals, serving on NWO and dissertation committees, supervising PhD candidates, and teaching.

Natali Helberger

Institute for Information Law (IViR), University of Amsterdam, Amsterdam, Netherlands

Natali Helberger (Ph.D) is a professor in Information Law at the Institute for Information Law, University of Amsterdam. She specializes in the regulation of converging information and communications markets, and the interface between technology and information law, user rights and the role of the user in information law and policy. For her research, she has been awarded a VENI Grant from the NWO, and an ERC Grant. She is a co-founder of the Research Priority Area ‘Personalised Communication’, a cooperation between IVIR and ASCoR.

References

Acquisti, A., & Grossklags, J. (2005). Privacy and rationality in individual decision making. IEEE Security and Privacy Magazine, 3(1), 26-33. https://doi.org/10.1109/MSP.2005.22

Awad, N. F., & Krishnan, M. S. (2006). The personalization privacy paradox: An empirical evaluation of information transparency and the willingness to be profiled online for personalization. MIS Quarterly, 30, 13-28. https://doi.org/10.2307/25148715

Baek, T. H., & Morimoto, M. (2012). Stay away from me: Examining the determinants of consumer avoidance of personalized advertising. Journal of Advertising, 41(1), 59–76. https://doi.org/10.2753/JOA0091-3367410105

Bang, H., & Wojdynski, B. W. (2016). Tracking users' visual attention and responses to personalized advertising based on task cognitive demand. Computers in Human Behavior, 55, 867-876. https://doi.org/10.1016/j.chb.2015.10.025

Baruh, L., & Popescu, M. (2017). Big data analytics and the limits of privacy self-management. New Media & Society, 19, 579–596. https://doi.org/10.1177/1461444815614001

Bergkvist, L., & Rossiter, J. R. (2007). The predictive validity of multiple-item versus single-item measures of the same constructs. Journal of Marketing Research, 44, 175-184. https://doi.org/10.1509/jmkr.44.2.175

Bleier, A., & Eisenbeiss, M. (2015). The importance of trust for personalized online advertising. Journal of Retailing, 91, 390-409. https://doi.org/10.1016/j.jretai.2015.04.001

Boehmer, J., Larose, R., Rifon, N., Alhabash, S., & Cotten, S. (2015). Determinants of online safety behaviour: Towards an intervention strategy for college students. Behaviour & Information Technology, 34, 1022-1035. https://doi.org/10.1080/0144929X.2015.1028448

Boerman, S., Kruikemeier, S., & Zuiderveen Borgesius, F. (2017). Online behavioral advertising: A literature review and research agenda. Journal of Advertising, 46, 363-376. https://doi.org/10.1080/00913367.2017.1339368

Boerman, S., Kruikemeier, S., & Zuiderveen Borgesius, F. (2018). Exploring motivations for online privacy protection behavior: Insights from panel data. Communication Research. Advance online publication. https://doi.org/10.1177/0093650218800915

Bol, N., Dienlin, T., Kruikemeier, S., Sax, M., Boerman, S. C., Strycharz, J., . . .de Vreese, C. H. (2018). Understanding the effects of personalization as a privacy calculus: Analyzing self-disclosure across health, news, and commerce contexts. Journal of Computer-Mediated Communication, 23, 370-388. https://doi.org/10.1093/jcmc/zmy020

Brandimarte, L., Acquisti, A., & Loewenstein, G. (2013). Misplaced confidences: Privacy and the control paradox. Social Psychological and Personality Science, 4, 340-347. https://doi.org/10.1177/1948550612455931

Centraal Bureau voor de Statistiek. (2015). Bevolking; Kerncijfers [Population; core statistics]. Retrieved from http://statline.cbs.nl/StatWeb/publication/?VW=T&DM=SLNL&PA=37296ned&D1=a&D2=0,10,20,30,40,50,60,%28l-1%29,l&HD=130605-0924&HDR=G1&STB=T

Cerulus, L., & Scott, M. (2018, June 25). Europe’s new privacy rules: 1 month in, 7 takeaways. Politico. Retrieved from https://www.politico.eu/article/gdpr-europe-new-privacy-rules-7-takeaways/

Cranor, L. F. (2012). Can users control online behavioral advertising effectively? IEEE Security & Privacy, 10(2), 93-96. https://doi.org/10.1109/MSP.2012.32

Dinev, T., & Hart, P. (2006). An extended privacy calculus model for e-commerce transactions. Information Systems Research, 17, 61-80. https://doi.org/10.1287/isre.1060.0080

Ermakova, T., Fabian, B., Kelkel, S., Wolff, T., & Zarnekow, R. (2015). Antecedents of health information privacy concerns. Procedia Computer Science, 63, 376-383. https://doi.org/10.1016/j.procs.2015.08.356

GDPR. (2018). General data protection regulation (GDPR). Retrieved from https://gdpr-info.eu/

Gerber, N., Gerber, P., & Volkamer, M. (2018). Explaining the privacy paradox: A systematic review of literature investigating privacy attitude and behavior. Computers & Security, 77, 226-261. https://doi.org/10.1016/j.cose.2018.04.002

Gironda, J. T., & Korgaonkar, P. K. (2018). iSpy? Tailored versus invasive ads and consumers’ perceptions of personalized advertising. Electronic Commerce Research and Applications, 29, 64-77. https://doi.org/10.1016/j.elerap.2018.03.007

Goldfarb, A., & Tucker, C. (2011). Online display advertising: Targeting and obtrusiveness. Marketing Science, 30, 389-404. https://doi.org/10.1287/mksc.1100.0583

Ham, C.-D. (2017). Exploring how consumers cope with online behavioral advertising. International Journal of Advertising, 36, 632-658. https://doi.org/10.1080/02650487.2016.1239878

Hayes, A. F. (2012). PROCESS: A versatile computational tool for observed variable mediation, moderation, and conditional process modeling. Retrieved from https://www.afhayes.com/public/process2012.pdf

Katz, M. L., Heaner, S., Reiter, P., Van Putten, J., Murray, L., McDougle, L., . . . Paskett, E. D. (2009). Development of an educational video to improve patient knowledge and communication with their healthcare providers about colorectal cancer screening. American Journal of Health Education, 40, 220-228. https://doi.org/10.1080/19325037.2009.10599097

Kim, L. (2012, November 2). How many ads does Google serve in a day? Business 2 Community. Retrieved from https://www.business2community.com/online-marketing/how-many-ads-does-google-serve-in-a-day-0322253

Kim, Y. J., & Han, J. (2014). Why smartphone advertising attracts customers: A model of Web advertising, flow, and personalization. Computers in Human Behavior, 33, 256-269. https://doi.org/10.1016/j.chb.2014.01.015

Kim, H., & Huh, J. (2017). Perceived relevance and privacy concern regarding online behavioral advertising (OBA) and their role in consumer responses. Journal of Current Issues & Research in Advertising, 38, 92-105. https://doi.org/10.1080/10641734.2016.1233157

Lee, C. H., & Cranage, D. A. (2011). Personalisation-privacy paradox: The effects of personalisation and privacy assurance on customer responses to travel Web sites. Tourism Management, 32, 987-994. https://doi.org/10.1016/j.tourman.2010.08.011

Lee, D., Larose, R., & Rifon, N. (2008). Keeping our network safe: A model of online protection behaviour. Behaviour and Information Technology, 27, 445-454. https://doi.org/10.1080/01449290600879344

Maddux, J. E., & Rogers, R. W. (1983). Protection motivation and self-efficacy: A revised theory of fear appeals and attitude change. Journal of Experimental Social Psychology, 19, 469-479. https://doi.org/10.1016/0022-1031(83)90023-9

Meppelink, C. S., Van Weert, J., Haven, C. J., & Smit, E. G. (2015). The effectiveness of health animations in audiences with different health literacy levels: An experimental study. Journal of Medical Internet Research, 17(1), e11. https://doi.org/10.2196/jmir.3979

Milne, G. R., & Culnan, M. J. (2004). Strategies for reducing online privacy risks: Why consumers read (or don’t read) online privacy notices. Journal of Interactive Marketing, 18(3), 15-29. https://doi.org/10.1002/dir.20009

Milne, G. R., Labrecque, L. I., & Cromer, C. (2009). Toward an understanding of the online consumer's risky behavior and protection practices. Journal of Consumer Affairs, 43, 449-473. https://doi.org/10.1111/j.1745-6606.2009.01148.x

Milne, S., Sheeran, P., & Orbell, S. (2000). Prediction and intervention in health-related behavior: A meta-analytic review of protection motivation theory. Journal of Applied Social Psychology, 30, 106-143. https://doi.org/10.1111/j.1559-1816.2000.tb02308.x

Morman, M. T. (2000). The influence of fear appeals, message design, and masculinity on men’s motivation to perform the testicular self-exam. Journal of Applied Communication Research, 28, 91-116. https://doi.org/10.1080/00909880009365558

Norberg, P. A., Horne, D. R., & Horne, D. A. (2007). The privacy paradox: Personal information disclosure intentions versus behaviors. Journal of Consumer Affairs, 41, 100-126. https://doi.org/10.1111/j.1745-6606.2006.00070.x

Pan, B., Hembrooke, H., Joachims, T., Lorigo, L., Gay, G., & Granka, L. (2007). In Google we trust: Users’ decisions on rank, position, and relevance. Journal of Computer-Mediated Communication, 12, 801-823. https://doi.org/10.1111/j.1083-6101.2007.00351.x

Robles, P. (2018, January 26). In a blow to marketers, Google will let users opt-out of remarketing ads. Econsultancy. Retrieved from https://econsultancy.com/in-a-blow-to-marketers-google-will-let-users-opt-out-of-remarketing-ads/

Rogers, R. W. (1975). A protection motivation theory of fear appeals and attitude change. The Journal of Psychology, 91, 93-114. https://doi.org/10.1080/00223980.1975.9915803

Smit, E. G., Van Noort, G., & Voorveld, H. (2014). Understanding online behavioural advertising: User knowledge, privacy concerns and online coping behaviour in Europe. Computers in Human Behavior, 32, 15-22. https://doi.org/10.1016/j.chb.2013.11.008

Strycharz, J., Van Noort, G., Helberger, N., & Smit, E. (2019). Contrasting perspectives–practitioner’s viewpoint on personalised marketing communication. European Journal of Marketing, 53, 635-660. https://doi.org/10.1108/EJM-11-2017-0896

Strycharz, J., Van Noort, G., Smit, E., & Helberger, N. (2018). Consumer view on personalized advertising: Overview of self-reported benefits and concerns. In Proceedings of ICORIA 2018. 148.

Tucker, C. E. (2014). Social networks, personalized advertising, and privacy controls. Journal of Marketing Research, 50, 546-562. https://doi.org/10.1177/002224371305000501

Tugend, A. (2015, December 20). Key to opting out of personalized ads, hidden in plain view. The New York Times. Retrieved from https://www.nytimes.com/2015/12/21/business/media/key-to-opting-out-of-personalized-ads-hidden-in-plain-view.html

Turow, J., Hennessy, M., & Draper, N. A. (2015). The tradeoff fallacy: How marketers are misrepresenting American consumers and opening them up to exploitation. SSRN Electronic Journal. Advance online publication. https://doi.org/10.2139/ssrn.2820060

Ur, B., Leon, P. G., Cranor, L. F., Shay, R., & Wang, Y. (2012). Smart, useful, scary, creepy: Perceptions of online behavioral advertising. In Proceedings of the Eighth Symposium on Usable Privacy and Security (SOUPS) (article 4). Washington, DC, US: ACM. https://doi.org/10.1145/2335356.2335362

Van Noort, G., Kerkhof, P., & Fennis, B. M. (2008). The persuasiveness of online safety cues: The impact of prevention focus compatibility of Web content on consumers’ risk perceptions, attitudes, and intentions. Journal of Interactive Marketing, 22(4), 58-72. https://doi.org/10.1002/dir.20121

Witte, K. (1992). Putting the fear back into fear appeals: The extended parallel process model. Communication Monographs, 59, 329-349. https://doi.org/10.1080/03637759209376276

Wottrich, V. M., Van Reijmersdal, E. A., & Smit, E. G. (2018). App users unwittingly in the spotlight: A model of privacy protection in mobile apps. Journal of Consumer Affairs. Advance online publication. https://doi.org/10.1111/joca.12218

Xiao, H., Li, S., Chen, X., Yu, B., Gao, M., Yan, H., & Okafor, C. N. (2014). Protection motivation theory in predicting intention to engage in protective behaviors against schistosomiasis among middle school students in rural China. PLoS Neglected Tropical Diseases, 8(10), e3246. https://doi.org/10.1371/journal.pntd.0003246

Xu, H., Dinev, T., Smith, H. J., & Hart, P. (2008). Examining the formation of individual’s privacy concerns: Toward an integrative view. In Proceedings of the Twenty Ninth International Conference on Information Systems, ICIS 2008. 6. Retrieved from https://aisel.aisnet.org/icis2008/6

Additional information

Editorial Record:

First submission received:
October 30, 2018

Revisions received:
February 20, 2019
April 12, 2019
May 13, 2019

Accepted for publication:
May 14, 2019

The article is part of the Special Issue "Digital Advertising and Consumer Empowerment" guest edited by:
Liselot Hudders (Ghent University, Ghent, Belgium), Karolien Poels (University of Antwerp, Antwerp, Belgium), and Eva van Reijmersdal (University of Amsterdam, Amsterdam, The Netherlands).

Full text

Introduction

Today, internet users are constantly confronted with personalized digital advertising, i.e., advertising shown to them based on their data (Boerman, Kruikemeier, & Zuiderveen Borgesius, 2017). One way of dealing with the associated risks is to empower users to exercise more control. In fact, the General Data Protection Regulation (GDPR) introduced in the EU aims, among other things, to empower consumers. To achieve this, it requires organizations to provide two types of information to consumers: first, technical information (i.e., about data collection and processing), and second, rights information (i.e., about the existence of different consumer rights) (GDPR, Art. 13).

As a consequence, companies that apply personalization have been working to become more transparent (Cerulus & Scott, 2018). They have started giving consumers a more active role in the personalization process: consumers can decide whether they want to be a target of personalized ads in the first place or whether they actively want to avoid them. For example, many personalized ads online include a small AdChoices logo. Clicking on it redirects the user to a page that explains both data collection and opt-out options. However, this effort, while effective, has been claimed to be rather complex and to put a high burden on the consumer (Tugend, 2015). A simpler solution is provided by Google: it offers a website with a detailed description of the data that is collected and used to personalize ads. On this website, Google users can also inspect and adjust, for instance, predicted interests, as well as opt out from personalized services completely. In other words, the movement towards consumer empowerment is noticeable not only in laws and regulations, but also in the more technical information and possibility to control this processing given to consumers by advertising platforms themselves.

Considering that personalization has been portrayed in consumer research as one of the most controversial marketing practices (Turow, Hennessy, & Draper, 2015), one would expect such control functions to be popular. However, this is not the case. For example, the predecessor of the functionality offered by Google, namely muting single ads, was used about 5 billion times in 2017 (Robles, 2018), which may seem a lot but represents only a small fraction of ads served, as Google is said to have over 24 billion advertising impressions a day (Kim, 2012). Such low opt-out rates may have multiple causes. First, previous studies have shown that consumers are both unsure how to control personalization (Ur, Leon, Cranor, Shay, & Wang, 2012) and do not know what personalization entails (Boerman et al., 2017). Lack of technical knowledge may thus impede their agency. At the same time, the opt-out functions themselves are not widely known; only 9% of consumers understand what AdChoices entails (Tugend, 2015). The impact of the two types of knowledge (about processing and about control) may thus be important in this context. Second, studies have shown that personalization leads to mixed feelings among consumers: on the one hand, they consider it creepy; on the other hand, they see multiple benefits of it (Lee & Cranage, 2011). Thus, why would consumers opt out when they may actually like personalized ads?

Research hence hints at multiple reasons why consumers may not opt out, but the central factors remain unclear, as opting out from personalization has yet to be systematically studied. At the same time, we do not know if knowledge about processing and about control actually empowers consumers to take an active role in personalization processes. The current study focuses on technical knowledge and aims to close this gap by investigating: 1) to what extent providing information about data processing for personalization drives opt-out behavior, and 2) how we can explain the (lack of) empowering impact of knowledge. To reach these aims, we first manipulate respondents’ knowledge about data processing for personalization by Google (the technical knowledge required by regulators), then inform all respondents about the existence of the opt-out function offered by Google, and subsequently measure whether they are motivated to use it and whether they actually do so. As technical knowledge is the focus of this study, it is manipulated, while information about the opt-out is treated as a baseline requirement for action and is offered to all respondents. To explain the impact of technical knowledge, the study theoretically draws on protection motivation theory (PMT; Rogers, 1975), which has previously been applied to privacy protection (e.g., Boehmer, Larose, Rifon, Alhabash, & Cotten, 2015), although its applications to digital spaces remain scarce. Furthermore, the theory has not yet been applied to personalized advertising and consumer empowerment. Methodologically, the study applies both traditional and computational methods to combine actual online actions with personality traits and motivations.

Our study makes multiple contributions. Theoretically, it moves beyond the extensive body of research on the impact of personalization and offers a different angle on this phenomenon: instead of investigating the phenomenon itself and the opportunities it offers, we look at personalization as a threat that consumers can control. By applying the PMT to a privacy-sensitive context, the study challenges the notion of the empowering impact of technical knowledge about data processing on consumers. Finally, this is one of the first studies examining both behavioral intentions and actual online behavior, which has been called for in past personalization research (Boerman et al., 2017). From a methodological perspective, this study applies innovative computational methods to combine self-reported measures with data on actual online behavior. Practically, the findings offer insights into the actual effects of legal and practical consumer empowerment measures on consumer behavior.

Personalized Advertising in Consumer Research

Personalization is defined as “the strategic creation, modification, and adaptation of content and distribution to optimize the fit with personal characteristics, interests, preferences, communication styles, and behaviors” (Bol et al., 2018, p. 373). It commonly appears in online advertising, taking more rudimentary forms, such as addressing people by name, or more advanced ones, such as personalizing the content or distribution of an ad (Strycharz, Van Noort, Helberger, & Smit, 2019).

Personalized advertising has commonly been seen as one of the more questionable practices applied by advertisers. More specifically, a stream of consumer research has consistently shown that consumers feel uncomfortable with it and reject it (Smit, Van Noort, & Voorveld, 2014; Ur et al., 2012). Along these lines, Strycharz, Van Noort, Smit, and Helberger (2018) concluded that consumers name significantly more negative than positive aspects of personalized ads. On the other hand, modern data-driven forms of personalization can also be experienced as more relevant and useful (Bleier & Eisenbeiss, 2015). Indeed, the personalization paradox holds that personalization has both positive and negative sides (Awad & Krishnan, 2006): it contributes to perceived relevance and usefulness, but also to users’ susceptibility and privacy concerns. This suggests that consumers have reasons both to perceive data collection and processing as a severe problem and to enjoy its benefits. Table 1 gives an overview of studies that investigated consumer perceptions of personalized advertising with a focus on related benefits, concerns, and the paradoxical nature of personalization.

Table 1. Overview of Consumer Research Regarding Consumer Perceptions of Personalized Advertising.

Baek & Morimoto (2012)
- Privacy concern and ad irritation increase ad skepticism and avoidance for personalized advertising.
+ Perceived personalization decreases ad skepticism and ad avoidance.

Bang & Wojdynski (2016)
+ Consumers pay more attention to personalized compared to generic ads.

Bleier & Eisenbeiss (2015)
+ Personalization improves usefulness of ads for trusted retailers.
- Personalization increases reactance and privacy concerns for less trusted retailers.
Click-through intention is directly influenced by ad usefulness, reactance, and privacy concern.

Bol et al. (2018)
- Personalization of advertising decreases trust and expected benefits of personalization.

Gironda & Korgaonkar (2018)
+ Perceived usefulness of personalized advertising increases its effectiveness.
- Invasiveness of personalized advertising decreases its effectiveness.

Goldfarb & Tucker (2011)
- Personalization of advertising combined with advertising obtrusiveness lowers advertising effectiveness for consumers concerned about their privacy.

Kim & Han (2014)
+ Personalization increases informativeness, credibility, and entertainment of advertising.
+ Personalization lowers ad irritation.

Kim & Huh (2017)
- Personalized advertising is not experienced as more relevant.
- Privacy concern lowers consumer attitude towards personalized advertising.
+ Perceived relevance of personalized advertising improves consumer attitude towards it.

Tucker (2014)
+ Perceived control over privacy positively impacts effectiveness of personalized advertising.

Note: Findings regarding concerns and negative effects of personalization are indicated with “-”; findings regarding benefits and positive effects of personalization are indicated with “+”.

Empowering Impact of Knowledge on Consumers

A number of academic studies have concluded that consumers know little about different personalization practices (e.g., Smit et al., 2014). This lack of knowledge has a negative effect on their agency; it impedes them from taking control over their personal data (Cranor, 2012). At the same time, past research has shown that consumers do want to have such control and take an active part in the personalization process (Turow et al., 2015), which suggests that lack of knowledge may be the barrier.

At the same time, knowledge is vital for protection behavior. For example, in the health domain, knowledge about a certain type of disease was found to be positively correlated with the motivation to perform self-exams (Morman, 2000). More protection motivation leads, in turn, to more protective behavior (see Milne, Sheeran, & Orbell, 2000, for a meta-analysis). Similarly, in the context of privacy protection, higher levels of objective knowledge about a phenomenon are related to higher engagement in general protective behavior (Baruh & Popescu, 2017), while behavioral intention was found to predict management of privacy settings (Gerber, Gerber, & Volkamer, 2018). Applying this to the personalization context, we expect that someone who possesses technical knowledge about personalized advertising will be more motivated and thus more likely to act. Thus, we hypothesize that:

H1: Technical knowledge about the personalization process positively impacts a) personalization opt-out motivation and thus indirectly positively influences b) opt-out behavior.

Explaining Empowering Impact of Knowledge through PMT

In other contexts, the empowering impact of objective knowledge about a phenomenon on motivation to, e.g., perform self-exams has been explained by protection motivation theory (PMT) (Xiao et al., 2014). PMT was originally developed to understand why people are motivated to protect themselves from health threats (Rogers, 1975). In the light of this theory, protection motivation can be defined as the desire of individuals to protect themselves from threats. In past research, such threats have been operationalized in varied ways, e.g., as risky behaviors such as smoking (Maddux & Rogers, 1983). More recently, risky online behaviors, such as self-disclosure, have been researched through the lens of the PMT. Milne, Labrecque, and Cromer (2009, p. 450) defined risky behavior in the online context as “specific computer-based actions that put people at risk,” and protection motivation as motivation to exercise “specific computer-based actions that consumers take to keep their information safe.” Considering the concerns related to personalized advertising, we define ‘protection motivation’ as internet users’ desire to adjust the settings offered by advertising platforms so that they do not receive personalized ads (which also means that their data is not processed for this purpose).

PMT identifies mechanisms that motivate a person to act. It posits that one’s protection motivation stems from two cognitive processes: threat and coping appraisal. Knowledge has been portrayed as a trigger of these processes. While the threat appraisal assesses the individual’s belief that the threat is noxious (perceived severity) and that it is likely to happen (perceived susceptibility), the coping appraisal includes the belief that one is able to perform the protective behavior (perceived self-efficacy) and that the protective action is effective (perceived response efficacy). In a later version, Maddux and Rogers (1983) added the value of the risky behavior (e.g., attitude towards it). The relation between these factors and motivation is linear: while threat and coping appraisal relate positively to motivation, the value of the risky behavior relates negatively to it (Rogers, 1975). Regarding actual behavior, the theory posits that it follows from intentions: more protection motivation leads to more protective behavior (Milne et al., 2000). Thus, throughout the study we hypothesize a direct relation between motivation and behavior and indirect relations between the other variables and the behavior.

In the light of the PMT, knowledge activates threat and coping appraisal, as has been shown in the health context. More specifically, individuals knowledgeable about an illness and those aware of how one can get infected experience higher levels of susceptibility and show higher self- and response efficacy (Xiao et al., 2014). How knowledge impacts threat and coping appraisal in the digital context has not yet been systematically investigated. However, building on past research, one can expect that the empowering effect of technical knowledge can be partially explained by the activation of threat and coping appraisal.

Furthermore, we propose an extension to the original theory to make it more suitable to the personalization context. We argue that privacy concern is vital for the belief that personalization is noxious. First, technical knowledge has been identified as one of the main predictors of health information privacy concern (Ermakova, Fabian, Kelkel, Wolff, & Zarnekow, 2015). Second, more concerned individuals have been found to refrain from using certain apps (Wottrich, Van Reijmersdal, & Smit, 2018) or to reject cookies that enable personalization (Milne & Culnan, 2004). Thus, we argue that privacy concern needs to be added as a factor within the threat appraisal that is impacted by technical knowledge and motivates users to opt out from personalization. In the following sections, we present the serial mediation hypotheses. An overview of the predicted paths can be found in Figure 2 in the Results section.

Knowledge and Perceived Severity of and Susceptibility to Data Processing

Perceived severity can be defined as an individual’s perception of the seriousness of a threat (Witte, 1992). PMT holds that individuals who perceive a threat as severe are more likely to be motivated to protect themselves from it (Maddux & Rogers, 1983). In the health context, perceived severity of a disease has indeed been shown to trigger protection motivation (Katz et al., 2009). Similar findings have been reported in the context of general privacy protection (Boerman et al., 2018): consumers who consider online data collection a problem are more motivated to protect themselves.

At the same time, technical knowledge and awareness have been shown to be related to perceived severity (Xiao et al., 2014). In this study, we apply the concept of severity to explain the empowering impact of technical knowledge on protection against personalized digital advertising. We thus expect that consumers who know how data is collected and processed come to believe that data collection and processing for personalization is a serious issue, are therefore more motivated to protect themselves, and consequently do so. Thus, we hypothesize:

H2: The impact of technical knowledge about the personalization process on motivation, and consequently on opting out, is positively mediated by perceived severity.

Even when individuals believe that a threat is severe, they will not be motivated to act if they do not believe that it can affect them. Perceived susceptibility indicates to what extent an individual feels that the threat is likely to occur (Lee, Larose, & Rifon, 2008). It has been shown to increase one’s motivation to protect oneself from various health threats (Milne et al., 2000). In the digital context, higher levels of perceived susceptibility result in more motivation to protect oneself from computer viruses (Lee et al., 2008) or to use pop-up blockers (Boehmer et al., 2015).

The relation between knowledge and susceptibility has been investigated in the health context: more knowledgeable individuals feel more strongly that the threat can happen to them (Xiao et al., 2014). Thus, we expect that in the context of personalization, when people know about data collection and processing practices for personalization, they are more likely to believe this can happen to them, are therefore more motivated to do something against it, and eventually act more. Therefore, we hypothesize:

H3: The impact of technical knowledge about the personalization process on motivation, and consequently on opting out, is positively mediated by perceived susceptibility.

Finally, in the context of personalized advertising, we argue that privacy concern has to be included as a part of the threat appraisal. It can be defined as “concerns about possible loss of privacy as a result of information disclosure” (Xu, Dinev, Smith, & Hart, 2008, p. 4) and takes various forms, for example, concern about data collection by unauthorized parties (e.g., Dinev & Hart, 2006). While this construct is rather irrelevant to health protection, it has been shown to be crucial in the data processing context. In fact, more concerned individuals have been found to refrain from using certain apps (Wottrich et al., 2018) or to reject cookies (Milne & Culnan, 2004).

At the same time, knowledge has commonly been linked to privacy concern. In fact, Ermakova et al. (2015) identified knowledge about technologies as one of the main predictors of health information privacy concern. Similarly, Gerber et al. (2018) concluded that awareness and literacy predicted privacy attitudes and concerns. Building on these findings, we expect that consumers informed about the technical aspects of personalization practices will also be more concerned about their privacy, which will activate the threat appraisal. Thus, we hypothesize that:

H4: The impact of technical knowledge about the personalization process on motivation, and consequently on opting out, is positively mediated by privacy concern.

Knowledge and Perceived Self-Efficacy and Efficacy of Opt-Out Functions

Besides the threat appraisal, the PMT maintains that the evaluation of the protective behavior also impacts protection motivation. First, coping appraisal includes one’s belief that one is able to protect oneself, i.e., self-efficacy (Maddux & Rogers, 1983). In the health context, this factor has been documented as having the strongest impact on motivation (Milne et al., 2000). We use this concept to describe users’ perceived confidence (and not their actual skills) in preventing companies from collecting and processing their data for personalization purposes. Past research in the online context indeed showed that increased levels of self-efficacy led to higher motivation for different protective behaviors, such as installing anti-virus programs (Lee et al., 2008). In the advertising context, self-efficacy has been shown to lead to avoidance of online behavioral advertising (Ham, 2017). At the same time, more knowledgeable and aware consumers have been shown to feel more able to protect themselves (regardless of whether they were actually able to do so; Xiao et al., 2014). Building on this research, we hypothesize that:

H5: The impact of technical knowledge about the personalization process on motivation, and consequently on opting out, is positively mediated by perceived self-efficacy.

Second, coping appraisal involves an evaluation of the response. According to the PMT, perceived response efficacy is necessary to trigger protection motivation and behavior (Maddux & Rogers, 1983). While the impact of self-efficacy has been widely investigated in the online data protection context, response efficacy has received considerably less attention. Boerman et al. (2018) showed a significant positive impact of response efficacy on protection motivation for different types of online privacy protection. In fact, technical knowledge of how data is collected and used puts consumers in the position to ask the follow-up question of how to protect themselves and to consider their options, e.g., the opt-out functions offered by AdChoices and Google. Therefore, we hypothesize that:

H6: The impact of technical knowledge about the personalization process on motivation, and consequently on opting out, is positively mediated by perceived response efficacy.

Knowledge and Attitude Towards the Risky Behavior as a Part of PMT

A later version of the PMT included the value of the threat as a construct that lowers motivation to stop it (Maddux & Rogers, 1983). In the mobile app context, Wottrich et al. (2018) concluded that when users like an app, they are less motivated to prevent it from collecting their data. Similarly, personalization has numerous benefits for consumers, such as the informativeness, credibility, and entertainment of advertising (Kim & Han, 2014), which may make them less motivated to opt out. At the same time, it is not clear how receiving technical information about personalization practices impacts attitude; we cannot say whether it fosters the positive or the negative sides of the phenomenon. Thus, we hypothesize that whether one is in favor of or against personalized ads impacts the motivation to stop seeing them, but we pose an open research question regarding the impact of the knowledge intervention on attitude:

H7: Attitude towards personalized advertising will be negatively related to a) personalization opt-out motivation and thus will be indirectly negatively related to b) opt-out behavior.

RQ1: How does technical knowledge about the personalization process impact one’s attitude towards the phenomenon?

Methods

Research Design

To test and explain the impact of technical knowledge on opt-out behavior, we used a unique design combining data from an online tracking tool and an online experiment. The tracking tool was used to unobtrusively collect actual opt-out behavior, while in the online experiment, participants’ technical knowledge was manipulated, they were informed about the opt-out, and their responses were measured with self-report measures.

Participants and Data Collection

Tracking data. Between February and June 2017, 712 members of CentERdata’s LISSPANEL agreed to participate in an online tracking study as part of a larger research project, Personalised Communication, and to install a plug-in on their computers that registered their online behavior. They were informed about the methods and extent of data collection and about privacy protocols. The plug-in was self-installed; respondents were given verbal and visual instructions, and in case of technical issues they could contact the researchers by email. Respondents who agreed to participate installed the Google Chrome or Mozilla Firefox plug-in, which, for an a priori determined set of 317 white-listed domains, tracked all incoming and outgoing traffic, including Google’s Ad Settings. Additionally, a blacklist was constructed of websites that would never be tracked (such as bank-transaction pages). The plug-in routed all HTTP/HTTPS traffic related to white-listed domains through a VPN proxy that served as a data-collection point. HTTPS traffic was decrypted and re-encrypted to ensure privacy and allow for analysis. Data collection captured all webpage content, external libraries, images, and banners, as well as all user-provided information. Before storage, best-effort anonymization scripts removed sensitive information such as passwords. Participants could at all times deactivate the plug-in or use private mode to browse untracked. In this study, we are only interested in whether participants, within two weeks after completing the online experiment, used the website adsettings.google.com, which allowed them to manage their personalization preferences.
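
To make the routing rule concrete, the sketch below illustrates the kind of whitelist/blacklist decision described above. It is a minimal illustration under our own assumptions (the function names and example domains are hypothetical), not the project’s actual plug-in code.

```python
# A minimal sketch of the routing rule described above; function names and
# example domains are hypothetical, not the project's plug-in code.
from urllib.parse import urlparse

WHITELIST = {"google.com", "adsettings.google.com"}  # 317 domains in the study
BLACKLIST = {"mybank.example"}                       # e.g., bank-transaction pages

def matches(host: str, domains: set) -> bool:
    """True if host equals a listed domain or is one of its subdomains."""
    return any(host == d or host.endswith("." + d) for d in domains)

def should_track(url: str) -> bool:
    """Route through the measurement proxy only if the host is white-listed
    and not black-listed (black-listed pages are never recorded)."""
    host = urlparse(url).hostname or ""
    return not matches(host, BLACKLIST) and matches(host, WHITELIST)
```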

Experiment. To test whether technical knowledge triggers PMT processes, an online experiment was administered. A single-factor (technical knowledge about the personalization process by Google) between-subjects design with two conditions (i.e., technical knowledge about data collection and processing for personalization vs. baseline knowledge about personalization) was used. Participants were recruited between May 7 and 29, 2018. The research institute invited the members of its panel who had installed the plug-in to participate in an online experiment; 514 of them took part in the experimental study. Participants’ age ranged between 18 and 87 years (M = 49, SD = 18), and 51% were female. The sample approximates the country’s adult population (Centraal Bureau voor de Statistiek, 2015) except for higher levels of education. Figure 1 depicts the sampling and data cleaning procedure in detail. The final experimental dataset included 425 participants (Mage = 48, SDage = 18, 50% female). Of these participants, 80 generated behavioral data. Descriptives for the final sample are presented in Table 2.

Figure 1. Data collection and cleaning process.

 

Table 2. Descriptives of Full Sample (N = 425), Tracking Sample (N = 80) and Survey Only Sample (N = 345).

                                            Full sample   Tracking sample   Survey only sample   t-test (df)   p-value
Perceived severity                          5.00 (1.50)   4.77 (1.53)       5.05 (1.47)          1.56 (423)    .121
Perceived susceptibility                    5.92 (1.12)   6.02 (1.00)       5.90 (1.14)          -.84 (423)    .402
Privacy concern                             4.72 (1.41)   4.68 (1.38)       4.73 (1.41)          .27 (423)     .789
Self-efficacy                               3.31 (1.52)   3.36 (1.52)       3.30 (1.52)          -.31 (423)    .754
Response efficacy                           5.03 (1.40)   4.99 (1.20)       5.04 (1.45)          .36 (423)     .748
Attitude towards personalized advertising   3.67 (1.38)   3.70 (1.25)       3.66 (1.41)          -.23 (423)    .819
Opt-out motivation                          5.04 (1.92)   4.84 (2.11)       5.09 (1.87)          1.06 (423)    .289

Note: Means with standard deviations within parentheses are presented. All variables range from 1-7. Independent samples t-tests were conducted to compare the tracking sample with the survey only sample.
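
As a minimal illustration of the comparison reported in Table 2, the snippet below runs an independent-samples t-test on simulated scores whose means and standard deviations match the perceived severity row; the data are synthetic, so the resulting statistic will only approximate the reported t(423) = 1.56.

```python
# Illustrative only: simulated data matching the reported means/SDs for
# perceived severity; not the study's raw data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tracking = rng.normal(4.77, 1.53, 80)       # tracking sample, N = 80
survey_only = rng.normal(5.05, 1.47, 345)   # survey-only sample, N = 345

t, p = stats.ttest_ind(tracking, survey_only)  # df = 80 + 345 - 2 = 423
print(f"t(423) = {t:.2f}, p = {p:.3f}")
```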

Manipulation and stimulus material. Technical knowledge about the personalization process was manipulated rather than merely measured, as earlier studies have shown low knowledge and little variation in it among the country’s population (Smit et al., 2014). The two experimental conditions were: 1) exposure to a technical knowledge intervention about the personalization process by Google; 2) exposure to general material about personalization. Google was the subject of the knowledge manipulation as the company personalized ads on more than two million websites and offered its users an option to turn personalization off.

To ensure external validity, the knowledge intervention material was designed using the information that Google provided on its data usage practices for personalized advertising (as this was the material available to consumers). This material complies with legal requirements regarding informing consumers about data collection and processing and underlines neither benefits nor risks stemming from personalization.

To choose the optimal format of the intervention, a pre-test was conducted. We aimed for material that would look most professional and lead to the highest recall. Based on the literature on the effectiveness of various message formats on recall (Meppelink, Van Weert, Haven, & Smit, 2015), three different manipulations were designed: an animation with spoken text, an animation with written text, and an article. To choose the manipulation with the most professional and convincing look and the strongest effect on recall, a pre-test among 62 participants was conducted (convenience sample, 66% female, age: M = 34, SD = 17). In this pre-test, participants watched one of the animations or read the article, answered questions about the professionalism of the material and how convincing it was, and were presented with multiple true/false questions. The results showed no significant differences (only the video with voiceover (M = 5.27) was rated as marginally more professional than the article (M = 4.65), t(37) = -1.7, p = .09). However, it has to be noted that the sample was rather small and, on average, higher educated than the research population. In fact, past research has shown a stronger impact of video on recall for lower-educated respondents (Meppelink et al., 2015), who were underrepresented in the sample. Thus, based on the pre-test and past literature, the video with voiceover was chosen for the knowledge stimulus.

For the experimental condition, a technical knowledge intervention video about personalization process by Google was created. This video was 2.5 minutes long and included information on what data is used by Google to personalize ads in the search engine and on more than two million websites and how Google uses first-party data of advertisers. For the baseline condition, a short filler video was constructed. It contained general information about personalization online (that companies base their online communication on what they know about users and that two users may thus see different ads online, with no technical information) and lasted 30 seconds (see Appendix for transcripts).

Procedure. Participants began the online experiment by being exposed to the stimulus described above. They were randomly assigned to one of the two conditions (initial Ncontrol = 250, Nknowledge = 264; final Ncontrol = 202 (Mage = 48, SDage = 18, 45% female, 35 plug-in users), Nknowledge = 223 (Mage = 49, SDage = 17, 55% female, 45 plug-in users)). Next, participants were asked a filter question, namely whether they use any services by Google (see Table 3), and a manipulation check was included. First, we asked participants if they had watched the video. Participants who answered yes were asked what they had learned from the video (open question). After the manipulation check, participants completed the remaining part of the questionnaire, which included the mediating variables. Finally, all participants were provided with information about the opt-out function (as they were expected not to know about it, and knowledge about the opt-out was treated as a baseline requirement for opt-out motivation and action) without explicitly being asked to use it, and were asked about the perceived response efficacy of this action and their motivation to carry it out.

Table 3. Google Services Used by the Respondents.

Service                                                              Users   Non-users
Gmail                                                                350     75
PlayStore on an Android phone (e.g., Samsung)                        215     210
YouTube account                                                      208     217
Log-in functionality on Google Maps (when you are logged in,
you can see, e.g., your home or add favorite locations)              200     225
Google Drive                                                         165     260
Google Plus account                                                  72      353

Note: N = 425

Measures

Manipulation check. As a manipulation check, we first asked respondents if they had watched the video (96% yes, 2% at least half, 2% no). Respondents who indicated having watched at least half of the video were asked an open question about what they had learned from it. The open answers were coded according to a codebook based on the content of the video. Answers that included technical information about the personalization process (as included in the intervention) were coded 1 (N = 175), while all other answers were coded 0 (N = 250).

Perceived severity. Perceived severity, i.e., people’s perceptions of the severity of the collection, usage, and sharing of their online behavior for personalization purposes, was measured with three statements (1 = Strongly disagree, 7 = Strongly agree) derived from Boerman et al. (2018). The mean of the three items is used as the measure of perceived severity (Cronbach’s alpha = .90, M = 5.00, SD = 1.50). Details of all measurement items are described in the Appendix, Table A1.
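
For reference, the reliability coefficients reported for these scales can be computed with the standard Cronbach’s alpha formula; the sketch below is a generic implementation, not the authors’ analysis code.

```python
# Generic Cronbach's alpha; `items` is an (n_respondents x k_items) array
# of scale responses.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of scale totals
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```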

Perceived susceptibility. To measure the perceived susceptibility, we used a three-item scale derived from Boerman et al. (2018) (1 = Strongly disagree, 7 = Strongly agree). The mean of the three items is used as a measure of perceived susceptibility (Cronbach’s alpha = .91, M = 5.92, SD = 1.12).

Privacy concerns. To assess privacy concerns, we used a five-item instrument developed by Baek and Morimoto (2012) and adopted by Bol et al. (2018). The scale ranged from 1 (totally disagree) to 7 (totally agree). The five items were averaged to form the online privacy concerns scale with a Cronbach’s alpha of .92 (M = 4.72, SD = 1.41).

Self-efficacy. Self-efficacy was measured using two statements (1 = Strongly disagree, 7 = Strongly agree) based on Boerman et al. (2018). The mean score of the two items is used as a scale of self-efficacy (Cronbach’s alpha = .77, M = 3.31, SD = 1.52).

Response efficacy. To measure response efficacy, we asked respondents to indicate (1 = Strongly disagree, 7 = Strongly agree) if the opt-out option presented to them in the questionnaire would be an effective way to eliminate the usage of data for personalized advertising (M = 5.03, SD = 1.40).

Attitude towards personalized advertising. In order to measure attitude towards personalized advertising directly, we operationalized it with a single question. After watching the videos, the respondents were presented with the statement: ‘Personalization in advertising is for me:’ with answer options ranging from 1 (‘a very negative development’) to 7 (‘a very positive development’) (M = 3.67, SD = 1.38).

Opt-out motivation. Opt-out motivation was measured with one item (1 = very unlikely, 7 = very likely) inspired by Wottrich et al. (2018): ‘Over the next two weeks, I intend to protect my privacy by using the settings available on adsettings.google.com’ (M = 5.04, SD = 1.92).

Opt-out behavior. To measure opt-out behavior, the aforementioned plug-in was used. We identified the Ajax (Asynchronous JavaScript and XML) request that Google issues when logged-in users use the opt-out function on adsettings.google.com (request: https://play.google.com/log?format=json&authuser=0) and searched for it in all data collected by the plug-in between May 7 and June 12. For participants for whom we had no opt-out data (those who deactivated the plug-in) or who did not opt out, 0 was imputed. Out of all participants, 37 used the opt-out website within this timeframe (and thus scored 1 on the dichotomous variable).
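
This coding step can be sketched as follows; the per-participant log representation (timestamped URL records) is an assumption made for illustration and does not reflect the plug-in’s actual data structures.

```python
# Hypothetical coding of the dichotomous opt-out variable from tracked
# traffic; the (timestamp, url) record format is assumed for illustration.
from datetime import datetime

OPT_OUT_REQUEST = "https://play.google.com/log?format=json&authuser=0"
WINDOW = (datetime(2018, 5, 7), datetime(2018, 6, 12))

def opt_out_score(requests):
    """requests: iterable of (timestamp, url) tuples for one participant.
    Returns 1 if the opt-out request occurs within the study window, else 0
    (participants without tracking data are scored 0, as described above)."""
    return int(any(WINDOW[0] <= ts <= WINDOW[1] and url.startswith(OPT_OUT_REQUEST)
                   for ts, url in requests))
```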

Control variables. Multiple control variables were measured as well. First, we included plug-in usage as a control variable, as both plug-in users and non-users were included in the final model. Moreover, as age and gender have often been named as factors that influence both the level of privacy concerns and online protection behavior (e.g., Milne et al., 2009), we included age measured in years and a dichotomous variable for gender in our analyses.

Data Analysis

To test the hypotheses, mediation analyses were conducted using the PROCESS macro (Hayes, 2012; Model 6). This macro offers the possibility to test both direct and indirect effects and provides bootstrapped confidence intervals for the mediated effects. Moreover, it allows the inclusion of dichotomous dependent variables: it uses ordinary least squares (OLS) regression to estimate the variables on the left-hand sides of the model equations, except for the model of the outcome variable, which is estimated using logistic regression.

Before conducting the mediation analysis in PROCESS, we tested all assumptions of OLS and logistic regression; only violated assumptions are reported. Concerning normality, perceived susceptibility was somewhat negatively skewed (-1.6) and showed a slightly high kurtosis (3.56). As neither a log nor a square-root transformation solved the issue, we use bootstrapping for all our analyses. Moreover, a plot of standardized residuals against predicted (fitted) values showed a negative trend, indicating heteroscedasticity. Thus, in all analyses we use Huber-White standard errors.
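
To make the estimation strategy concrete, the sketch below bootstraps a single indirect effect (e.g., knowledge → perceived severity → motivation) with OLS. It is a simplified stand-in for PROCESS Model 6, which chains multiple mediators in series and estimates the dichotomous behavior outcome with logistic regression; the column names and data frame layout are assumptions for illustration.

```python
# Simplified percentile-bootstrap mediation (one mediator, OLS only), as a
# stand-in for PROCESS Model 6; column names are assumed for illustration.
import numpy as np
import pandas as pd

def indirect_effect(df: pd.DataFrame) -> float:
    """a*b: effect of X on the mediator times effect of the mediator on Y,
    controlling for X (a simple X -> M -> Y chain)."""
    icept = np.ones(len(df))
    a = np.linalg.lstsq(np.column_stack([icept, df["knowledge"]]),
                        df["severity"], rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([icept, df["severity"], df["knowledge"]]),
                        df["motivation"], rcond=None)[0][1]
    return a * b

def bootstrap_ci(df, n_boot=5000, alpha=.05, seed=1):
    """Percentile-bootstrap confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(df)
    boots = np.array([indirect_effect(df.iloc[rng.integers(0, n, n)])
                      for _ in range(n_boot)])
    return np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```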

Results

Manipulation Check

To examine whether the participants perceived the manipulation as intended, the open answers were coded. Next, a chi-square test was conducted with the coded answers (binary variable) and condition (binary variable). The test shows that the two groups scored significantly differently on the manipulation check, χ2(1) = 263.93, p < .001. Indeed, participants in the manipulation condition learned more about the technical details of the personalization process by Google than participants in the baseline condition.
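
As an illustration, the test can be approximated with a 2 × 2 contingency table. The cell counts below are hypothetical: only the marginals (202 vs. 223 participants per condition; 250 vs. 175 coded answers) match the reported figures, so the statistic is reproduced only approximately.

```python
# Hypothetical 2x2 cell counts (marginals match the reported Ns; the
# within-cell split is illustrative, not the study's data).
import numpy as np
from scipy.stats import chi2_contingency

counts = np.array([[201, 1],     # baseline condition: coded 0, coded 1
                   [49, 174]])   # knowledge condition: coded 0, coded 1
chi2, p, dof, expected = chi2_contingency(counts, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4g}")
```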

Hypotheses Testing

A summary of the results is presented in Figure 2, while all direct effects are reported in Table A2 in the Appendix.

Figure 2. Observed path model of the hypothesized mediated effect. This figure shows results of the mediation analysis. Standardized coefficients (B) are portrayed. *p < 0.10, **p < 0.05, ***p < 0.001. aF(9, 415) = 55.11; bF(9, 415) = 5.38; cF(9, 415) = 30.83; dF(9, 415) = 4.64; eF(9, 415) = 4.17; fF(9, 415) = 11.17; gF(10, 414) = 20.13; h χ2(11) = 117.08.

 

H1 proposed that individuals exposed to the technical knowledge manipulation would be more motivated to opt out and would do so more. The results show that individuals who were informed about these practices were not more motivated to opt out (B = -.27, SE = .16, BC 95% CI[-.57, .04]). Exposure to knowledge also did not indirectly affect the behavior (B = -.53, SE = .48, BC 95% CI[-1.47, .08]). Thus, H1 is not supported.

H2 stated that the impact of technical knowledge on motivation, and consequently on opting out, was positively mediated by perceived severity. Knowledge was found to have a negative effect on perceived severity (B = -.31, SE = .10, BC 95% CI[-.51, -.10]). Perceived severity had a significant positive effect on motivation (B = .26, SE = .08, BC 95% CI[.1, .41]) and an indirect positive effect on opt-out behavior (B = .11, SE = .07, BC 95% CI[.03, .3]). Exposure to the technical knowledge intervention thus had an indirect, significant negative effect on opt-out behavior, first through perceived severity and then through motivation (B = -.03, SE = .03, BC 95% CI[-.12, -.01]). Thus, H2 is not supported.

H3 proposed that the impact of technical knowledge on motivation, and consequently on opting out, was positively mediated by perceived susceptibility. The manipulation did not have a significant effect on susceptibility (B = -.12, SE = .11, BC 95% CI[-.33, .09]), and susceptibility did not impact motivation to opt out (B = -.07, SE = .03, BC 95% CI[-.20, .08]) nor, indirectly, opt-out behavior (B = -.03, SE = .05, BC 95% CI[-.13, .05]). The indirect mediated effect of knowledge on opt-out behavior was not significant (B = .003, SE = .007, BC 95% CI[-.01, .02]). Therefore, we reject H3.

H4 proposed that the impact of technical knowledge on motivation, and consequently on opting out, was positively mediated by privacy concern. The manipulation did not have a significant effect on privacy concern (B = .11, SE = .11, BC 95% CI[-.10, .32]); privacy concern did not significantly impact motivation to opt out (B = .14, SE = .09, BC 95% CI[-.02, .31]) nor, indirectly, the behavior (B = .06, SE = .06, BC 95% CI[-.01, .21]). The indirect mediated effect of knowledge on opt-out behavior was not significant (B = .007, SE = .01, BC 95% CI[-.01, .04]). Hence, we reject H4.

According to H5, the impact of technical knowledge on motivation, and consequently on opting-out, was positively mediated by perceived self-efficacy. The manipulation did not have a significant effect on self-efficacy (B = -.07, SE = .15, BC 95% CI [-.35, .22]). Self-efficacy significantly decreased opt-out motivation (B = -.17, SE = .06, BC 95% CI [-.29, -.04]) and, indirectly, the behavior (B = -.07, SE = .05, BC 95% CI [-.21, -.01]). The indirect mediated effect of knowledge on opt-out behavior was not significant (B = .004, SE = .02, BC 95% CI [-.02, .04]). Thus, we reject H5.

H6 proposed that the impact of technical knowledge on motivation, and consequently on opting-out, was positively mediated by perceived response efficacy. The manipulation did not have a significant effect on response efficacy (B = .05, SE = .13, BC 95% CI [-.21, .31]). Response efficacy significantly increased opt-out motivation (B = .32, SE = .07, BC 95% CI [.19, .46]) and, indirectly, the behavior (B = .14, SE = .08, BC 95% CI [.05, .35]). The indirect mediated effect of knowledge on opt-out behavior was not significant (B = .01, SE = .03, BC 95% CI [-.04, .07]). Thus, we reject H6.

H7 proposed that attitude towards personalized advertising was negatively related to personalization opt-out motivation and thus indirectly negatively related to opt-out behavior. There was a negative direct effect of attitude on motivation to opt-out (B = -.24, SE = .07, BC 95% CI [-.37, -.10]), and attitude indirectly had a significant negative effect on the behavior (B = -.10, SE = .06, BC 95% CI [-.27, -.03]). Thus, H7 was supported.

Finally, the open RQ1 was posed to investigate the impact of the technical knowledge intervention on attitude towards personalized advertising. The relation between the manipulation and the attitude was not significant (B = -.16, SE = .12, BC 95% CI [-.40, .08]). Thus, we conclude that technical knowledge neither improves nor worsens one’s attitude.

Discussion

The aim of this research was to examine the empowering effect of technical knowledge about the personalization process on consumers’ motivation to opt-out from personalized advertising and the subsequent behavior, as well as the processes behind it. We applied and extended the PMT. The experiment, which combined traditional and computational research methods, demonstrated that technical knowledge did not have the expected empowering effect. In fact, we found a negative effect on perceived severity; thus, individuals exposed to the technical knowledge manipulation were less motivated and less likely to opt-out. While not triggered by knowledge, the elements of PMT themselves explained why consumers were (not) motivated to opt-out. Finally, motivation to opt-out was found to strongly drive the actual behavior.

To our surprise, we did not find the expected effects of technical knowledge. We made sure to account for the generally low knowledge level (Smit et al., 2014) by not simply measuring, but manipulating it. We also hoped to maximize the effect by narrowing the manipulation down to a specific context, i.e., Google (as past studies have concluded that insignificant effects could be explained by overly general measures, e.g., Wottrich et al., 2018). However, the effect was small and opposite to our expectation. A similarly surprising effect was found by Wottrich et al. (2018) in the context of mobile apps. There are two possible explanations for why knowledge makes people less likely to act. First, well-informed users may realize what issue they face (Wottrich et al., 2018). Thus, as explained by the extended parallel process model (Witte, 1992), participants may have activated fear control processes and denied the issue. Alternatively, we may have manipulated not only participants’ knowledge, but also their feeling of safety. Past research on privacy seals has demonstrated that such seals make users feel more secure (Van Noort, Kerkhof, & Fennis, 2008). In the current study, the manipulation material explained the unknown and may similarly have given a “false feeling of security”. In the context of online risks, Brandimarte, Acquisti, and Loewenstein (2013) have identified a so-called control paradox: control over sharing private information decreases one’s privacy concerns and increases one’s willingness to publish sensitive information. The current study suggests that this paradox also occurs in the context of transparency about data collection and processing. At the same time, how the knowledge was offered to consumers should be considered. The current study took the GDPR as its starting point, with its requirement to purely inform consumers about the technical process of data collection and use. Possibly, while having such objective knowledge does not trigger action, knowledge about the risks and potential downsides of personalization would have the expected effect.

It is important to note that our study manipulated only one type of knowledge: technical knowledge about the personalization process. Companies are legally required to share such information as included in our manipulation video. However, consumers also have to be informed about their rights. In fact, 84% of participants who had a Google account did not know about the existence of the opt-out function. We received numerous surprised comments and thank-you notes from Google users participating in the study who did not have this information. While the current study focused on technical knowledge and one type of protective behavior (and, in order to be able to measure opt-out efficacy, motivation, and behavior, all respondents were informed about the opt-out functionality), future studies should investigate the effect of both knowledge types mandated by the GDPR. In fact, the law requires companies to provide consumers with broad information about different rights (see Art. 13 of the GDPR), which goes far beyond simply offering an opt-out function. While technical knowledge did not have an empowering effect, it is possible that information about consumer rights plays a more important role in triggering consumer action.

Worth noting is the strong effect of motivation on behavior. This is in line with past findings on the application of PMT to the health context: elements of the theory impact behavior indirectly by increasing motivation (Milne et al., 2000). In the current study, the innovative data collection allowed us to measure how many participants actually used the opt-out function. As many as 46% of participants who had the tracking software installed turned off personalization by Google. This high percentage suggests that, once informed about a simple and accessible function, internet users do act. This optimistic conclusion is contrary to past studies that have argued for the existence of a privacy paradox: people who worry still do not act (e.g., Norberg, Horne, & Horne, 2007).
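Although the implementation of the plug-in is beyond the scope of this paper, the logic of deriving an opt-out indicator from tracked browsing data can be illustrated as follows; the file, column names, and URL pattern are assumptions made for this sketch, not a description of the actual software.

```python
# Toy sketch of screening tracked browsing data for opt-out behavior;
# not the authors' plug-in. The file, column names, and URL pattern
# are assumptions made purely for illustration.
import pandas as pd

log = pd.read_csv("tracking_log.csv")  # assumed columns: participant_id, url
OPTOUT_URL = "adssettings.google.com"  # assumed pattern for Google's ad settings page
visited = log["url"].str.contains(OPTOUT_URL, regex=False)
opted = log.loc[visited, "participant_id"].unique()
share = len(opted) / log["participant_id"].nunique()
print(f"{share:.0%} of tracked participants visited the ad settings page")
```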

Regarding the application of the PMT to the personalization context, our study found that three factors significantly predicted the opt-out behavior as expected. In line with prior studies (Milne et al., 2000), the more severe people think data processing for personalization is, the more likely they are to act. Thus, for future research, the question arises of who perceives personalization as a problem and why. Interestingly, privacy concern was found to have no effect on motivation, but a direct negative effect on the opt-out behavior. In the past, privacy concern was found to predict privacy protection behavior (e.g., Milne & Culnan, 2004). Thus, there might be omitted-variable bias: as concerned users are more likely to use various advanced privacy-enhancing technologies, they may not need the setting offered by Google. Future research should focus on a broader spectrum of protection from personalization to determine whether the effect differs depending on how much people value the measure. Our findings with regard to coping appraisal are mixed. Response efficacy is the strongest predictor of the motivation and behavior: people need to believe that the measure works in order to use it. Conversely, people who are more confident in their skills are less motivated and less likely to opt-out. While this finding is surprising, those who are more confident may also be able to take other measures.

The attitude towards personalization was not related to knowledge: having technical information does not emphasize the positive or negative sides of personalization. At the same time, attitude had a negative effect on opt-out motivation. Indeed, past research on more general privacy issues has shown that people accept threats to their privacy in exchange for convenience, functionality, or financial gains (Acquisti & Grossklags, 2005).

Limitations, Implications, and Future Directions

Despite the intriguing findings, the present study also has some limitations that need to be mentioned. First, it assumes that personalization can be seen as a threat by consumers. While past research indeed suggests that this may be the case, consumers who do not see it this way will not activate threat and coping appraisal. Thus, while PMT takes both negative (severity, privacy concern) and positive (attitude) sides of the phenomenon into account, other frameworks, such as social exchange theory, could be applied in future research to further investigate the role of knowledge in consumer empowerment.

Second, our innovative method is one of the biggest contributions of this study, but it was also limiting. While all the participants were invited to install the traffic-monitoring plug-in, only 80 used it actively. This is a lesson for future studies that use self-designed tracking technology: a high number of drop-outs can be expected and, even with extensive privacy and information procedures, cannot be avoided. At the same time, we are optimistic about the findings. Even with the small sample size, we were able to find a strong significant relation between motivation and behavior. Moreover, we compared plug-in users with the other participants and found no significant differences between the groups (see Table 2).

Third, our manipulation focused on one knowledge type and on one specific company, namely Google. While it allowed us to measure the constructs related to one behavior, choosing Google as the subject has its limitations. The choice was motivated by the fact that Google is currently the biggest search engine and is seen as a reliable source of information (Pan et al., 2007). While this shows that Google is a highly relevant brand, it also indicates that Google has become a part of daily life. Future studies should take a broader view on personalization and test if users are equally motivated to opt-out from personalization from other sources, such as advertising networks or Facebook.

Fourth, while we made sure to design the knowledge intervention video following recommendations from past literature, it has to be noted that, although both groups were exposed to a video to minimize the differences, the video in the manipulation condition was longer than the filler video. The baseline video was kept shorter in order to keep the study design clean, but the length difference might have led to a higher cognitive load in the manipulation condition. Future studies should take this into account and, for example, compare the empowering effects of different types of knowledge on consumer agency while keeping the cognitive load of the information constant.

Finally, it has to be noted that three variables in the current study were measured using single questions. This approach has its advantages, but also limitations. When it comes to response efficacy and opt-out motivation, we deliberately limited ourselves to one option in order to limit the scope of the study and maximize effects. Concerning attitude towards personalized advertising, we chose to measure it directly in order to avoid priming affective and, in particular, behavioral elements, which could especially affect the motivation to opt-out. Moreover, past research on advertising measures has shown no difference in the predictive validity of multiple-item and single-item measures of attitude towards advertising (Bergkvist & Rossiter, 2007).

Despite its limitations, the current study carries a number of implications. Methodologically, the innovative way of registering behavior can serve as an inspiration. The use of computational methods makes it possible to explore and empirically test hypotheses that could not be tested with classical methods. Moreover, in the digital world, in order to keep up with the industry, researchers have to start using digital analytics to move beyond measuring motivations and intentions. From a practical perspective, our findings cast doubt on the role of transparency about data collection and processing. Informing consumers did not activate their threat or coping appraisal and did not predict their motivation to act. This is good news for marketers who commonly dread the transparency requirements: purely being informed does not make consumers negative by default. Simultaneously, more research is needed to determine the type of transparency necessary to activate motivation. The positive reaction of respondents to the information about their rights hints at the fact that knowledge is a more complex notion. Finally, the findings provide practical insights into what can be done to encourage consumers to exercise their rights. First, providing information about a simple and accessible form of protection resulted in a relatively high number of opt-outs, which suggests that transparency needs to be very targeted to drive behavior. Second, to empower consumers, policy makers need to focus on increasing the motivation to act. This can be achieved by addressing perceived severity and response efficacy. Thus, consumers need to know what threats they face online regarding data processing and what effective ways there are to protect themselves.

Acknowledgments

This research was funded by and made possible through the University of Amsterdam Research Priority Area ‘Personalised Communication’ (personalised-communication.net), as well as through the support of the Stichting Wetenschappelijk Onderzoek Commerciële Communicatie (SWOCC). Furthermore, we would like to thank Bob van de Velde, Mats Willemsen, Maarten de Jonge, and Mykola Makhortykh for their work on the behavior-tracking plug-in.

Appendix

Scripts of Videos Used in the Online Experiment

Manipulation condition. Many companies base communication with their customers on what they know about the customer. It is thus possible that two clients of the same company see different banners on news websites or receive different newsletters. Such a phenomenon is called personalized advertising.

Google is one such company. Google displays advertisements in many different locations that can be divided into two categories. First, Google shows advertisements within its own services (think of the search engine or YouTube). Second, it is responsible for ads on more than two million websites and apps that work together with Google to show personalized ads to visitors, for example buienradar.nl.

When you search for something on Google.com, you not only see ads based on the search term you have used, but also ads based on your location, which Google collects through your mobile phone, on ads that you have clicked on before, or on websites and apps that you have used in the past.

Google is also responsible for many banners that you see online, for example on nu.nl. Such banners are based on data that Google collects about you: your Google profile (e.g., age, gender), the types of websites that you commonly visit, the apps that you use on your mobile phone, websites and apps that you have used and that cooperate with Google, and also your browsing history from another device, for example your tablet.

Finally, advertisers can also share information with Google. For example, they can make your home address available to Google. By comparing this with the location history of your phone, Google can find a match and thus know whether you are a client of that company and when you are home.

Using your personal data, Google can show you personalized ads on various websites and in the search engine.

Baseline condition. Many companies base communication with their customers on what they know about the customer. It is thus possible that two clients of the same company see different banners on news websites or receive different newsletters. Such a phenomenon is called personalized advertising.

Table A1. Summary of Measurement Model Statistics.

| Construct | Measurement item | Mean | SD | Factor loading |
|---|---|---|---|---|
| Perceived severity | Having companies collect my online behavior is a problem for me. | 4.73 | 1.68 | .94 |
| | Having companies use my online behavior for personalized advertising purposes is a problem for me. | 4.71 | 1.70 | .94 |
| | Having companies share my online behavior is a problem for me. | 5.57 | 1.52 | .85 |
| Perceived susceptibility | I believe that companies collect information about my online behavior. | 6.05 | 1.20 | .95 |
| | I believe that companies use information about my online behavior to show me personalized ads. | 6.12 | 1.14 | .94 |
| | I believe that companies share information about my online behavior with other companies. | 5.60 | 1.32 | .87 |
| Privacy concern | I am worried that my personal data (such as browsing behavior, name or location) may be misused by others. | 4.62 | 1.66 | .93 |
| | When I am online, I have the feeling that others keep track of what I click on and what websites I visit. | 4.77 | 1.70 | .92 |
| | I am afraid that my personal data that I share online is not stored safely. | 4.66 | 1.56 | .91 |
| | I am afraid that my personal data online is distributed without my permission. | 4.86 | 1.59 | .83 |
| | I am afraid that my personal data online can be accessed by people I do not know. | 4.69 | 1.62 | .76 |
| Self-efficacy | I am able to protect my personal information from companies on the Internet. | 3.58 | 1.71 | .91 |
| | I feel confident that I can protect myself online from data use for personalized advertising. | 3.04 | 1.64 | .91 |
| Attitude towards personalized advertising | Personalization in advertising is for me (1 – ‘a very negative development’ to 7 – ‘a very positive development’) | 3.67 | 1.37 | |
| Response efficacy | I believe that the opt-out function offered by Google is an effective way of protection against personalized advertising online. | 5.03 | 1.40 | |
| Opt-out motivation | Over the next two weeks, I intend to protect my privacy by using the settings available on adsetting.google.com | 5.04 | 1.92 | |

Table A2. Linear Regression Models for Direct Effects Including Control Variables (N = 425): Perceived Severity, Perceived Susceptibility, and Privacy Concern.

| Predictor | Perceived severity b (SE) | t | 95% CI | Perceived susceptibility b (SE) | t | 95% CI | Privacy concern b (SE) | t | 95% CI |
|---|---|---|---|---|---|---|---|---|---|
| Manipulation (1 = present) | -0.31 (0.10) | -2.96 | -0.51, -0.10 | -0.12 (0.11) | -1.12 | -0.33, 0.09 | 0.11 (0.11) | 1.06 | -0.10, 0.32 |
| Perceived severity | | | | 0.01 (0.06) | 0.15 | -0.11, 0.13 | 0.57 (0.05) | 12.21 | 0.48, 0.66 |
| Perceived susceptibility | 0.01 (0.06) | 0.15 | -0.12, 0.14 | | | | 0.18 (0.06) | 2.78 | 0.05, 0.30 |
| Privacy concern | 0.56 (0.05) | 11.34 | 0.46, 0.66 | 0.17 (0.06) | 2.83 | 0.05, 0.29 | | | |
| Efficacy of opt-out | 0.05 (0.04) | 1.21 | -0.03, 0.12 | 0.03 (0.04) | 0.78 | -0.05, 0.11 | -0.01 (0.04) | 0.33 | -0.07, 0.10 |
| Self-efficacy | -0.01 (0.04) | -0.31 | -0.09, 0.07 | -0.11 (0.04) | -2.63 | -0.19, -0.03 | -0.04 (0.04) | -0.94 | -0.12, 0.04 |
| Attitude towards personalization | -0.27 (0.05) | -5.25 | -0.37, -0.17 | 0.001 (0.05) | 0.02 | -0.09, 0.09 | -0.05 (0.05) | -0.91 | -0.14, 0.05 |
| Age | 0.01 (0.003) | 2.50 | 0.002, 0.01 | -0.004 (0.003) | -1.19 | -0.01, 0.003 | -0.001 (0.003) | -0.30 | -0.01, 0.01 |
| Gender | 0.30 (0.11) | 2.70 | 0.08, 0.51 | -0.41 (0.11) | -3.61 | -0.63, -0.19 | -0.10 (0.11) | -0.92 | -0.32, 0.12 |
| Plug-in use | -0.27 (0.13) | -2.08 | -0.53, -0.01 | 0.01 (0.06) | 0.16 | -0.11, 0.13 | 0.11 (0.13) | 0.85 | -0.14, 0.36 |
| F(df) | 55.11 (9, 415) | | | 5.38 (9, 415) | | | 30.83 (9, 415) | | |
| R2 | .49 | | | .11 | | | .43 | | |

Table A3. Linear Regression Models for Direct Effects Including Control Variables (N = 425): Efficacy of Opt-Out, Self-Efficacy, and Attitude towards Personalized Advertising.

| Predictor | Efficacy of opt-out b (SE) | t | 95% CI | Self-efficacy b (SE) | t | 95% CI | Attitude towards personalized advertising b (SE) | t | 95% CI |
|---|---|---|---|---|---|---|---|---|---|
| Manipulation (1 = present) | 0.05 (0.13) | 0.40 | -0.21, 0.31 | -0.07 (0.15) | -0.44 | -0.35, 0.22 | -0.16 (0.12) | -1.33 | -0.40, 0.08 |
| Perceived severity | 0.08 (0.06) | 1.22 | -0.05, 0.20 | -0.02 (0.08) | -0.32 | -0.18, 0.13 | -0.35 (0.06) | -5.48 | -0.48, -0.22 |
| Perceived susceptibility | 0.05 (0.07) | 0.77 | -0.08, 0.18 | -0.21 (0.08) | -2.76 | -0.36, -0.06 | 0.002 (0.06) | 0.02 | -0.12, 0.12 |
| Privacy concern | 0.02 (0.07) | 0.33 | -0.11, 0.16 | -0.07 (0.08) | -0.93 | -0.23, 0.08 | -0.06 (0.06) | -0.92 | -0.18, 0.07 |
| Efficacy of opt-out | | | | 0.20 (0.05) | 3.71 | 0.09, 0.31 | 0.07 (0.05) | 1.42 | -0.03, 0.17 |
| Self-efficacy | 0.17 (0.05) | 3.65 | 0.08, 0.26 | | | | 0.05 (0.05) | 1.09 | -0.04, 0.14 |
| Attitude towards personalization | 0.09 (0.06) | 1.42 | -0.03, 0.21 | 0.07 (0.07) | 1.11 | -0.06, 0.20 | | | |
| Age | 0.02 (0.004) | 4.39 | 0.01, 0.03 | -0.01 (0.004) | -1.23 | -0.01, 0.003 | -0.01 (0.003) | -2.94 | -0.02, -0.003 |
| Gender | 0.11 (0.13) | 0.82 | -0.15, 0.38 | -0.16 (0.16) | -1.02 | -0.47, 0.15 | -0.09 (0.13) | -0.68 | -0.34, 0.17 |
| Plug-in use | -0.07 (0.15) | -0.49 | -0.36, 0.21 | 0.11 (0.18) | 0.59 | -0.25, 0.46 | -0.04 (0.14) | -0.30 | -0.32, 0.24 |
| F(df) | 4.64 (9, 415) | | | 4.17 (9, 415) | | | 11.17 (9, 415) | | |
| R2 | .09 | | | .08 | | | .22 | | |

Table A4. Linear and Logistic Regression Models for Direct Effects Including Control Variables (N = 425): Opt-Out Motivation and Opt-Out Behavior.

| Predictor | Opt-out motivation b (SE) | t | 95% CI | Opt-out behavior b (SE) | Z | 95% CI |
|---|---|---|---|---|---|---|
| Manipulation (1 = present) | -0.27 (0.16) | -1.69 | -0.57, 0.04 | -0.53 (0.48) | -1.11 | -1.47, 0.41 |
| Perceived severity | 0.26 (0.08) | 3.22 | 0.10, 0.41 | 0.39 (0.24) | 1.64 | -0.08, 0.86 |
| Perceived susceptibility | -0.06 (0.07) | -0.80 | -0.20, 0.08 | 0.29 (0.27) | 1.07 | -0.24, 0.81 |
| Privacy concern | 0.14 (0.09) | 1.68 | -0.02, 0.31 | -0.59 (0.24) | -2.49 | -1.06, -0.13 |
| Efficacy of opt-out | 0.32 (0.07) | 4.65 | 0.19, 0.46 | 0.06 (0.21) | 0.29 | -0.36, 0.48 |
| Self-efficacy | -0.17 (0.06) | -2.66 | -0.29, -0.04 | -0.01 (0.16) | -0.06 | -0.32, 0.30 |
| Attitude towards personalization | -0.24 (0.07) | -3.36 | -0.37, -0.10 | -0.01 (0.20) | -0.03 | -0.40, 0.39 |
| Opt-out motivation | | | | 0.45 (0.15) | 2.92 | 0.15, 0.75 |
| Age | 0.01 (0.01) | 2.34 | 0.002, 0.02 | -0.001 (0.01) | -0.10 | -0.03, 0.03 |
| Gender | 0.23 (0.16) | 1.38 | -0.10, 0.55 | 0.003 (0.52) | 0.01 | -1.01, 1.02 |
| Plug-in use | -0.15 (0.22) | -0.68 | -0.58, 0.28 | 4.40 (0.59) | 7.43 | 3.24, 5.57 |
| F(df) / χ2(df) | 20.13 (10, 414) | | | 117.08 (11) | | |
| R2 / McFadden R2 | .28 | | | .47 | | |
