Juxtaposing the persuasion knowledge model and privacy paradox: An experimental look at advertising personalization, public policy and public understanding
Vol. 10, No. 1 (2016)
Special issue: Online Self-disclosure and Privacy
Recent studies suggest the expanding collection and use of big data by advertisers to target messages to consumers based on their location, demographics and online behaviors is escalating information privacy concerns and negatively impacting campaign outcomes. For communication scholars and practitioners, this attitudinal shift indicates a critical need to better understand consumer perceptions related to personalized advertising in the era of big data. It is currently assumed that U.S. self-regulatory initiatives, including the AdChoices Icon, reduce perceived risk by giving consumers a greater sense of control over the exchange of their personal information online (Castro, 2011). However, less than 37% of U.S. Internet users are familiar with the AdChoices Icon (eMarketer, 2015), and 52% incorrectly believe that privacy policies ensure the confidentiality of their personal information (Pew, 2014). To examine the complexities of the privacy paradox, the present study utilizes a 2x2x2 experiment (N = 382) to measure attitudes toward personalized advertising with and without the presence of the AdChoices Icon. A univariate GLM analysis of the data indicates that when controlling for demographics, online trust, message credibility, and perceived risks and benefits, advertising personalization did not have a significant effect on attitude toward the ad, but inclusion of the AdChoices Icon did. Further, respondents indicating no knowledge of the AdChoices Icon reported less favorable attitudes toward the ad compared to those who were knowledgeable of its meaning. Exploring these complex relationships promises to advance research and practice by extending the Persuasion Knowledge Model to examine the effects of personalized online message delivery, as well as offering practitioners actionable insights to improve their personalized advertising outcomes.
Privacy paradox; personalized advertising; information privacy; AdChoices icon; privacy policy
Nancy Howell Brinson
The University of Texas, Austin, TX, USA
Nancy Howell Brinson is a doctoral student at the Stan Richards School of Advertising & Public Relations, University of Texas at Austin. Building on a 25-year career in advertising, her research interests lie in examining the effects of data mining across multiple platforms and contexts to understand how increasingly personalized advertising messages impact audience perceptions.
Matthew S. Eastin
The University of Texas, Austin, TX, USA
Matthew S. Eastin is an Associate Professor at the Stan Richards School of Advertising & Public Relations, University of Texas at Austin. His research focuses on new media behavior, and from this perspective, he has investigated information processing as well as the social and psychological factors associated with new media.
Acquisti, A., & Gross, R. (2006). Imagined communities: Awareness, information sharing, and privacy on the Facebook. In Privacy enhancing technologies (pp. 36-58). Springer Berlin Heidelberg.
Ayeh, J., Au, N., & Law, R. (2013). Do we believe in TripAdvisor? Examining credibility perceptions and online travelers’ attitude toward using user-generated content. Journal of Travel Research, 52, 437-452. http://dx.doi.org/10.1177/0047287512475217
Baek, T., & Morimoto, M. (2012). Stay away from me: Examining the determinants of consumer avoidance of personalized advertising. Journal of Advertising, 41, 59-76. http://dx.doi.org/10.2753/JOA0091-3367410105
Castro, D. (2011). Benefits and limitations of industry self-regulation for online behavioral advertising. The Information Technology & Innovation Foundation. Retrieved from http://www.itif.org/files/2011-self-regulation-online-behavioral-advertising.pdf
Cisco (2014). VNI Global IP Traffic Forecast, 2013–2018. Retrieved from http://www.cisco.com/c/en/us/solutions/service-provider/visual-networking-index-vni/index.html
Cleff, E. (2007). Privacy issues in mobile advertising. International Review of Law Computers and Technology, 21, 225-236. http://dx.doi.org/10.1080/13600860701701421
Culnan, M., & Armstrong, P. (1999). Information privacy concerns, procedural fairness, and impersonal trust: An empirical investigation. Organizational Science, 10, 104-115. http://dx.doi.org/10.1287/orsc.10.1.104
Davis, W. (2015, October 7). Lawmakers call for stronger do-not-track standards. Mediapost Policy Blog. Retrieved from http://www.mediapost.com/publications/article/259971/lawmakers-call-for-stronger-do-not-track-standards.html
Dix, S., Bellman, S., Haddad, H., & Varan, D. (2010). Using interactive program-loyalty banners to reduce TV ad avoidance: Is it possible to give viewers a reason to stay tuned during commercial breaks? Journal of Advertising Research, 50, 154-161. http://dx.doi.org/10.2501/S0021849910091312
Ducoffe, R. (1996). Advertising value and advertising on the web. Journal of Advertising Research, 36(5), 21-35.
Dutta, S., & Bilbao-Osorio, B. (2014). The Global Information Technology Report 2014: Rewards and risks of big data. INSEAD and World Economic Forum, 35-93.
Eastin, M. S. (2001). Credibility assessments of online health information: The effects of source expertise and knowledge of content. Journal of Computer-Mediated Communication, 6(4). http://dx.doi.org/10.1111/j.1083-6101.2001.tb00126.x
eMarketer. (2015). AdChoices: Do consumers know they can control the creepiness? Retrieved from http://www.emarketer.com/Article/AdChoices-Do-Consumers-Know-They-Control-Creepiness/1012623
Federal Trade Commission. (2010). Protecting consumer privacy in an era of rapid change. FTC report.
Friestad, M., & Wright, P. (1994). The persuasion knowledge model: How people cope with persuasion attempts. Journal of Consumer Research, 21(1), 1-31. http://dx.doi.org/10.1086/209380
Golembiewski, R. T., & McConkie, M. (1975). The centrality of interpersonal trust in group processes. In Theories of group processes (pp. 131-185).
Greengard, S. (2015). The Internet of Things. Cambridge, MA: MIT Press.
Hastak, M., & Culnan, M. (2010). Future of Privacy Forum online behavioral advertising “icon” study. Retrieved from https://fpf.org/final_report.pdf
Hovland, C. I., Janis, I. L., & Kelley, H. H. (1953). Communication and persuasion. New Haven, CT: Yale University Press.
IAB. (2011). Self-regulatory program for online behavioral advertising factsheet. Retrieved from https://www.iab.com/wp-content/uploads/2015/06/OBA_OneSheet_Final.pdf
IBM (2013). Big data. Retrieved from http://ibm.com/big-data/us/en/
Interactive Advertising Bureau (IAB). (2015). Internet advertising revenue report. Retrieved from http://www.iab.net/media/file/IAB_Internet_Advertising_Revenue_Report_HY_2015_PDF.pdf
Jensen, C., Potts, C., & Jensen, C. (2005). Privacy practices of Internet users: Self report versus observed behavior. International Journal of Human-Computer Studies, 63, 203-227. http://dx.doi.org/10.1016/j.ijhcs.2005.04.019
John, L. (2015, October 16). We say we want privacy online, but our actions say otherwise. Harvard Business Review. Retrieved from https://hbr.org/2015/10/we-say-we-want-privacy-online-but-our-actions-say-otherwise
Kachersky, L., & Kim, H. (2011). When consumers cope with price-persuasion knowledge: The role of topic knowledge. Journal of Marketing Management, 27, 28-40. http://dx.doi.org/10.1080/02672571003647719
Karjaluoto, H., & Alatalo, T. (2007). Consumers' attitudes towards and intention to participate in mobile marketing. International Journal of Services Technology and Management, 8(2), 155-173. http://dx.doi.org/10.1504/IJSTM.2007.012866
Kim, P., Ferrin, D., Cooper, C., & Dirks, K. (2004). Removing the shadow of suspicion: The effects of apology versus denial for repairing competence-versus integrity-based trust violations. Journal of Applied Psychology, 89, 104-118. http://dx.doi.org/10.1037/0021-9010.89.1.104
Lafferty, B. A., Goldsmith, R. E., & Newell, S. J. (2002). The dual credibility model: The influence of corporate and endorser credibility on attitudes and purchase intentions. Journal of Marketing Theory and Practice, 10(3), 1-12. http://dx.doi.org/10.1080/10696679.2002.11501916
LaRose, R., & Eastin, M. (2004). A social cognitive theory of Internet uses and gratifications: Toward a new model of media attendance. Journal of Broadcasting & Electronic Media, 48, 358–377. http://dx.doi.org/10.1207/s15506878jobem4803_2
Lerman, K. (2014). Beyond the bulls-eye: Building meaningful relationships in the age of big data. Retrieved from: https://www.cspace.com/blog/building-meaningful-relationships-in-the-age-of-big-data/
Malhotra, N., Kim, S., & Agarwal, J. (2004). Internet users' information privacy concerns (IUIPC): the construct, the scale, and a causal model. Information Systems Research, 15, 336-355. http://dx.doi.org/10.1287/isre.1040.0032
Mayer, J., & Narayanan, A. (2015). Do Not Track: Universal web tracking opt out. Retrieved from http://donottrack.us/
Mayer-Schoenberger, V., & Cukier, K. (2013). Big data. New York, NY: Houghton Mifflin Harcourt.
McCann. (2013). Truth about privacy study. Retrieved from http://truthcentral.mccann.com/truth/
McKnight, D. H., Choudhury, V., & Kacmar, C. (2002). Developing and validating trust measures for e-commerce: An integrative typology. Information Systems Research, 13, 334-359. http://dx.doi.org/10.1287/isre.13.3.334.81
Mir, I. (2011). Consumer attitude towards m-advertising acceptance: A cross-sectional study. Journal of Internet Banking & Commerce, 16(1), 1-22.
Morrison, M., & Peterson, T. (2015, September 14). Yes, there is a war on advertising. Now what? Advertising Age. Retrieved from http://adage.com/article/print-edition/a-war-advertising/300336/
Mozilla (2015). Lightbeam for FireFox. Retrieved from https://www.mozilla.org/en-US/lightbeam/
Nelson, M., Keum, H., & Yaros, R. (2004). Advertainment or adcreep: Game players’ attitudes toward advertising product placements in computer games. Journal of Interactive Advertising, 5, 3-21. http://dx.doi.org/10.1080/15252019.2004.10722090
OECD. (2013). The 2013 OECD privacy guidelines. Retrieved from http://bit.ly/166TxHy
Ohanian, R. (1990). Construction and validation of a scale to measure celebrity endorsers’ perceived expertise, trustworthiness and attractiveness. Journal of Advertising, 19, 39-52. http://dx.doi.org/10.1080/00913367.1990.10673191
PageFair. (2015). The 2015 Ad blocking report. Retrieved from https://blog.pagefair.com/2015/ad-blocking-report/
Pavlou, P. A. (2003). Consumer acceptance of electronic commerce: Integrating trust and risk with the technology acceptance model. International Journal of Electronic Commerce, 7, 101-134.
Peterson, T. (2015, September 4). IAB explores its options to fight ad blockers, including lawsuits. Advertising Age. Retrieved from http://adage.com/article/digital/iab-surveys-options-fight-ad-blockers-including-lawsuits/300228/
Pew Research Center. (2016). The state of privacy in America: What we learned. Retrieved from http://www.pewresearch.org/fact-tank/2016/01/20/the-state-of-privacy-in-america/
Pew Research Center. (2014). Half of online Americans don’t know what a privacy policy is. Retrieved from http://www.pewresearch.org/fact-tank/2014/12/04/half-of-americans-dont-know-what-a-privacy-policy-is/
Pornpitakpan, C. (2004). The persuasiveness of source credibility: A critical review of five decades’ evidence. Journal of Applied Social Psychology, 34, 243-281.
Purcell, K., Brenner, J., & Rainie, L. (2012). Search engine use 2012. Pew Internet & American Life. Retrieved from http://pewinternet.org/Reports/2012/Search-Engine-Use-2012.aspx
Richards, N. (2014). Why data privacy law is (mostly) constitutional. Intellectual Privacy. Oxford University Press.
Rocket Fuel. (2014). Quantified self digital tools: A CPG marketing opportunity. Retrieved from http://rocketfuel.com/blog/quantified-self
Roeber, B., Rehse, O., Knorrek, R., & Thomsen, B. (2015). Personal data: How context shapes consumers’ data sharing with organizations from various sectors. Electronic Markets, 25, 95-108. http://dx.doi.org/10.1007/s12525-015-0183-0
Schwartz, P., & Solove, D. (2011). The PII problem: Privacy and a new concept of personally identifiable information. NYU Law Review, 86, 1814.
Slefo, G. (2015, October 15). IAB to advertisers and content providers: ‘We messed up’. Advertising Age. Retrieved from http://adage.com/article/digital/iab-advertisers-content-providers-messed/300919/
Smith, H. J., Milberg, S. J., & Burke, S. J. (1996). Information privacy: Measuring individuals' concerns about organizational practices. MIS Quarterly, 20, 167-196. http://dx.doi.org/10.2307/249477
Sunyaev, A., Dehling, T., Taylor, P., & Mandl, K. (2015). Availability and quality of mobile health app privacy policies. Journal of the American Medical Informatics Association (JAMIA), 22, e28-e33.
Tummarello, K. (2013, September 17). Do-not-track effort in trouble. The Hill. Retrieved from http://thehill.com/policy/technology/322701-do-not-track-group-should-give-up-departing-online-ad-reps-say
Utz, S., & Kramer, N. (2009). The privacy paradox on social network sites revisited: The role of individual characteristics and group norms. Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 3(2), article 1.
Way, H. (2014). Harnessing the power of big data: New media and advertising. Retrieved from https://www.parksassociates.com/report/advertising-big-data
Wessels, B. (2012). Identification and the practices of identity and privacy in everyday digital communication. New Media & Society, 14, 1251-1268. http://dx.doi.org/10.1177/1461444812450679
White, T., Zahay, D., Thorbjornsen, H., & Shavitt, S. (2008). Getting too personal: Reactance to highly personalized e-mail solicitations. Marketing Letters, 19, 39-50. http://dx.doi.org/10.1007/s11002-007-9027-9
Zhang, J., & Wedel, M. (2009). The effectiveness of customized promotions in online and offline stores. Journal of Marketing Research, 46, 190-206. http://dx.doi.org/10.1509/jmkr.46.2.190
Introduction
U.S. consumers are increasingly barraged with online advertising messages targeted to them based on their personal data. The 2015 Internet Advertising Revenue Report reveals that targeted online advertising revenues hit a historic high of $27.5 billion for the first half of 2015, a 19% increase over the same period in 2014. Moreover, the volume and depth of personal information being collected and mined about consumers is staggering. According to the Global Information Technology Report (Dutta & Bilbao-Osorio, 2014), over two and a half quintillion bytes of data are created each day, and 90% of the world’s total stored data was created in the last two years alone. The accelerated growth of this so-called “big data” is attributed in part to the proliferation of smartphones and quantified-self devices, which track up to 100 data points about individual users, including their precise location, online behavior, past purchase history, email and text communications, social contacts and even biometrics (IBM, 2013). Consequently, marketers now have the ability to aggregate multiple information sources into consumer profiles, which can be used to narrowly target individuals with various forms of personalized advertising.
Numerous national surveys (e.g., McCann, 2013; Purcell, Brenner & Rainie, 2012) indicate U.S. consumers’ increasing concerns related to their privacy in this big data environment, particularly within contexts they consider more sensitive, such as health. It is currently assumed by U.S. policy makers that self-regulatory initiatives, including the AdChoices Icon, reduce perceived risk by giving consumers a greater sense of control over the exchange of their personal information online (Castro, 2011). However, less than 37% of Internet users are familiar with the AdChoices Icon (eMarketer, 2015; Way, 2014), and 52% incorrectly believe that privacy policies ensure the confidentiality of their personal information online (Pew, 2014).
To examine the complexities of digital identity and the Privacy Paradox (a noted discrepancy between individuals’ stated privacy concerns and their actual online privacy behaviors), the present study utilizes a 2x2x2 experimental design to estimate awareness of the AdChoices Icon as well as to determine attitudes toward advertising within the context of the AdChoices Icon and personalized advertising. Exploring these relationships promises to advance research and practice, as well as inform developing policy standards. First, extending consumer behavior theory (the Persuasion Knowledge Model) to a personalized online context encourages scholars to consider a framework that more accurately reflects the dynamics associated with computer-mediated communication (CMC) in the age of big data. Second, an increased understanding of consumers’ perceptions about the risks and benefits associated with personalized advertising will enable marketers to improve their personalized advertising outcomes. Moreover, given increasing efforts by lawmakers and privacy advocates to initiate “Do Not Track” standards, along with advertising industry efforts to restrict ad blocking software (Morrison & Peterson, 2015), the collection and use of consumers’ personal data is of mounting concern to government regulators as well.
Previous research in this area has primarily focused on best practices for companies attempting to capitalize on personalized data collection. Fewer studies address consumers’ awareness of current data collection practices along with their privacy concerns related to the access and availability of increasing levels of personal information. Therefore, the objective of this study is to explore the key drivers of online information sharing and propose public policy recommendations designed to increase consumer awareness about how their personal data are being used, granting them greater choice and control over their aggregated information while protecting the interests of advertisers and publishers.
Literature Review
Big Data. Since the earliest days of the information age, scholars have noted the increasing capability of information systems to monitor the communications and activities of individuals. It is projected that all digital data created, replicated or consumed—known as the “digital universe”—will expand by a factor of 30 from 2005 to 2020, doubling in size each year (IBM, 2013). Mobile technology, and the targeted, specific, and constant access to consumers it permits, will be a fundamental contributor to the big data universe. While total traffic over IP networks is forecasted to triple from 2012 to 2017, mobile data traffic is projected to grow thirteen-fold, representing a more significant share of all data created and transmitted (Cisco, 2014). Proliferating social network sites, which quantify many aspects of social life including friendships, business relationships, interests, conversations and sentiments, are also producing enormous volumes of data (Mayer-Schoenberger & Cukier, 2013). Moreover, it is not just communication between people but communication between objects that is expanding the data universe. The development of “smart,” connected devices including cars, televisions, appliances, and so on (referred to as “The Internet of Things” [IoT]) promises to redefine the way we interact with each other and the world (Greengard, 2015).
Personalized Advertising and Perceived Benefits. Marketers claim to segment audiences into tailored clusters utilizing big data in the form of demographics, geographic location, and previous online behaviors in an effort to secure their interest and satisfy their needs. Indeed, specific monetary benefits offered to consumers by personalized advertising may include such incentives as discount coupons, cash incentives, special offers, prior knowledge of sales, shopping rewards, customized offers, enhanced customer service, time savings, purchase reminders and personalized product recommendations. A key difference between modern ad personalization and what was possible just a few years ago is the development of hyper-targeted message delivery based on real-time location as well as recent online and offline behavior powered by big data (Dutta & Bilbao-Osorio, 2014). Advancing technologies have expanded personalization capabilities to every imaginable venue, including targeted e-mail messages addressing private information about the recipient (White, Zahay, Thorbjornsen, & Shavitt, 2008); search engine, banner and mobile advertising tailored to an individual’s specific online behaviors and location (Zhang & Wedel, 2009); and television ads delivered to individuals based on their viewing preferences and online behaviors (Dix, Bellman, Haddad & Varan, 2010).
Information Privacy. Conceptualized as an individual’s right to choose which information is communicated to others, information privacy and the protection of personal data have long been viewed as fundamental human rights (Schwartz & Solove, 2011). Currently, the ability to recognize a specific person (via “personally identifiable information” [PII]) is portrayed as the legal threshold for the loss of anonymity or privacy; however, the nature of digital communication suggests a need to rethink this definition for the modern age. An individual’s digital identity encompasses a wide range of traceable offline characteristics (e.g., age, residence, income, etc.), in addition to a variety of online profiles, passwords, PIN numbers, access codes, and behaviors – all of which establish concrete links between social and technological aspects of identity (Wessels, 2012). Today’s digital consumer is no longer entirely anonymous, since virtually every form of communication and behavior generates data that can be collected, aggregated and analyzed. Information gathered benignly for one purpose can be readily retrieved for another, and the possible linkage between mass amounts of aggregated data about an individual conceivably makes almost every point of collected data personally identifiable. Indeed, in its 2010 report, the Federal Trade Commission (FTC) recognized and addressed the “diminishing distinction between personally identifiable information…and supposedly anonymous or de-identified information” (p. 93).
Currently, protection of consumers’ data privacy in the U.S. relies on Fair Information Practices (FIPs). Originally embodied in U.S. law via the Privacy Act (1974), FIPs regulate the relationships between individuals and the business and government entities that collect, use, and disclose personal information about them (Richards, 2014). The basic principles established by FIPs have been remarkably durable, encompassing standards for data quality, transparency, enforcement and the protection of consumers’ sensitive data. Moreover, they serve as the foundation for the global Organisation for Economic Co-operation and Development (OECD) Privacy Guidelines (2013), which include an expanded list of principles related to personal data privacy and security. However, advancing technology related to personalized advertising fueled by online behavior tracking raises concerns about individual privacy that are not fully addressed by current regulation. Consequently, the AdChoices Icon was conceived.
The AdChoices Icon. In response to criticism from U.S. consumers and privacy advocates demanding greater oversight of personal data use by advertisers, the FTC in 2009 charged industry groups to work together to develop better ways to disclose information to consumers about online advertising and privacy. The resulting coalition, the Digital Advertising Alliance (DAA), launched a study to determine the optimal method of communicating with consumers about how their data was tracked and used by advertisers, and the resulting AdChoices Icon program was launched in March 2011. The program promotes the use of an icon (see figure 1) and accompanying language to be displayed in or near online advertisements or on webpages where data is collected and used for online behavioral advertising. Its purpose is not only to inform consumers of data tracking practices, but also to offer them the ability to conveniently opt out of some or all of the participating companies’ online behavioral ads (IAB, 2011).
Figure 1. The AdChoices Icon.
At the time the AdChoices Icon was launched, Hastak and Culnan (2010) found that the majority of their more than 2,600 study respondents were not comfortable with receiving targeted advertising based on their personal data, although providing transparency and control minimized this discomfort to some degree. They further noted that the AdChoices Icon alone did not communicate key points well, concluding that consumer education would be needed to improve awareness. Since its launch five years ago, consumer groups such as the Electronic Privacy Information Center (EPIC) and the National Information Infrastructure Task Force (NIITF) have criticized the AdChoices Icon campaign as largely ineffective. They point to research showing that not only is awareness of the AdChoices Icon limited, but even when consumers are aware of it, they do not clearly understand its meaning and purpose (e.g., Way, 2014; eMarketer, 2015).
Ad Blockers and Do-Not-Track. Adding to the complexity of this situation is a growing sentiment that online advertising has become increasingly annoying and intrusive, prompting greater numbers of consumers (approximately 198 million as of June 2015) to install various forms of ad-blocking software (PageFair, 2015). Advertising industry leaders and publishers estimate that ad blocking cost them more than $22 billion in revenue in the first half of 2015, a loss that threatens the economic viability of the media (Morrison & Peterson, 2015). Privacy advocates welcome this trend of consumer empowerment, encouraging the adoption of new protective technologies such as Firefox’s “Lightbeam” (Mozilla, 2015), which reveals tracking activity by first parties with whom consumers directly interact as well as by third parties who receive data indirectly, in addition to pressing lawmakers to enact “Do Not Track” regulation (Davis, 2015). Although most browsers offer do-not-track headers, advertisers are currently free to choose whether or not to honor them; and even when they are honored, users find the convenience and usability features they associate with their favorite websites no longer work (Mayer & Narayanan, 2015).
These actions have provoked the advertising and publishing industries to consider lawsuits restricting the use of ad-blocking software by browser companies (Peterson, 2015), and prompted the IAB to propose improvements to the AdChoices Icon program with an initiative it calls LEAN (Light, Encrypted, Ad Choice Supported and Non-Invasive). LEAN represents an overhaul of standard advertising principles intended to emphasize the importance of the user experience (Slefo, 2015). Although not yet a mandated requirement, these guidelines promise to limit ad file size, restrict data usage, and assure user choice and data security. Other options being explored by publishers include blocking content to users who have ad-blocking software installed and testing subscription-based access for those who do not wish to receive personalized ads (Morrison & Peterson, 2015).
As a result of this escalating conflict, in 2011 the FTC formed a working group of U.S. advertising industry leaders and privacy advocates under the guidance of the World Wide Web Consortium (W3C) to develop do-not-track standards that would protect consumer rights while ensuring the viability of publishers and the advertising infrastructure that supports them. Despite pressure from members of the U.S. Congress and the FTC, the talks resulted in some key stakeholders withdrawing from the group (e.g., co-chairman Peter Swire and the DAA), as well as multiple missed deadlines (Tummarello, 2013). However, in its latest report, published July 15, 2015, the W3C proposed tentative standards for implementing do-not-track requests that limit tracking by third parties (with whom the consumer does not directly interact), but do not address data tracking by first parties (including Google, Facebook and Amazon, which collect enormous amounts of data related to consumers’ behavior online). Initial reactions indicate that neither side of the debate is satisfied with this proposal. Privacy advocates, along with U.S. Senators Ed Markey (D-Mass.) and Al Franken (D-Minn.) and Representative Joe Barton (R-Texas), say it does not go far enough to protect consumer privacy rights; and a group of third party advertising technology companies say it will result in “a dramatic concentration of market power in the hands of first parties that have shown themselves …to be historically poor stewards of privacy” (Davis, 2015). If enacted, this proposal would create privacy restrictions secondary only to rules concerning health and financial data in the U.S. (McCabe, 2016). Discussions are still ongoing, and an official response from the FTC is expected by July of 2016.
Online Trust and Perceived Risk. While numerous studies provide useful insight into individuals’ perceptions about information privacy (e.g., Cleff, 2007; Dutta & Bilbao-Osorio, 2014), they shed limited light on the determinants of information disclosure to particular recipients in online settings. Trust and perceived risk are considered the two principal components that individuals weigh when attempting to balance the costs and benefits involved in privacy disclosure in interpersonal relationships. Historically, trust works in tandem with perceived risk to predict behaviors, and together the trust-risk equation is considered the most influential variable driving behavior in interpersonal relationships (Golembiewski & McConkie, 1975). Trust represents a “willingness to make oneself vulnerable to another in the presence of risk” (Kim, Ferrin, Cooper & Dirks, 2004, p. 104), and involves a cognitive element as well as a behavioral element. Recent literature related to mass communication (e.g., McKnight, Choudhury & Kacmar, 2002; Pavlou, 2003) conceptually groups trust into three dimensions: ability, benevolence and integrity. Pavlou (2003) defined “ability” as a consumer’s confidence that an agent has the resources and capabilities to perform whatever activities are required to complete the job; “benevolence” indicates the confidence consumers have that an agent is positively oriented toward their interests rather than its own; and “integrity” indicates the belief that an agent abides by a moral or professional code. Though they may be considered separately, all three elements are typically combined as a measure of consumers’ trust in online partners. Further, studies related to attitudes toward data collection and personalized advertising in a variety of contexts (e.g., Karjaluoto & Alatalo, 2007; Mir, 2011) suggest that trust and credibility play a key role in determining consumers’ attitudes and behaviors toward these practices.
Message Credibility. Extant literature has demonstrated that credibility, rooted in various objective and subjective components attributed to a source or a message, is directly related not only to believability, but also to persuasion. Ohanian (1990) examined three key dimensions contributing to credibility (trustworthiness, expertise and attractiveness) and found that source and message credibility have a significant influence on consumer purchase intentions, particularly with high-involvement products. Further, Eastin (2001) found that when evaluating online message content, factors such as content dynamics and knowledge of content become important in assessing credibility. As individuals increasingly rely on online sources for news, information, and e-commerce, online message credibility becomes of vital importance to marketers and advertisers.
Moreover, other researchers have established a positive relationship between source and message credibility and attitudes (e.g., Hovland, Janis, & Kelley, 1953), as well as between credibility and attitude within a commercial context (e.g., Ayeh, Au, & Law, 2013; Pornpitakpan, 2004; Lafferty, Goldsmith & Newell, 2002; Ohanian, 1991). For instance, Ayeh et al.’s (2013) examination of user-generated content found that the credibility dimensions of trust and perceived expertise significantly influence attitudes toward online content. More directly related, Lafferty et al. (2002) found a positive relationship between credibility and attitude toward advertising.
Privacy Paradox. Despite increasing attention drawn to the issue of online information privacy, many consumers seem to accept that some loss of privacy is a cost of doing business in the digital age. Numerous studies of consumer behavior on social networking sites (SNS) reflect users’ tendency to disclose detailed personal information on their profiles despite expressing generalized privacy concerns (e.g., Acquisti & Gross, 2006; Utz & Kramer, 2009; Wessels, 2012). Likewise, recent industry studies indicate a majority of respondents are willing to share their personal data with advertisers when presented with the opportunity to receive various forms of incentives (e.g., Pew, 2016; Rocket Fuel, 2014). These divergent attitudes about personal data sharing (referred to as the Privacy Paradox) reflect the complex nature of information privacy management in the digital age. Inducements such as online impression management, product giveaways, lower prices, convenience and better selection, combined with consumers’ reported feelings of powerlessness to protect their personal data, have been advanced as explanations for this phenomenon (John, 2015). Others contend that users’ openness to sharing personal data reflects inaccurate perceptions of vulnerability (Jensen, Potts & Jensen, 2005), or a lack of understanding about the real value of their personal information (Lerman, 2014). Indeed, one survey suggests that although people are aware that companies are using their personal data and have concerns about this practice, their understanding of exactly how the data are being used is severely lacking (Pew, 2014).
Theoretical Foundation
Advancing technology has generated new forms of communication that span the structural and functional characteristics of mass and interpersonal communication, and as such, require deeper examination. The Persuasion Knowledge Model (PKM) offers an interesting theoretical perspective on the cognitive processes that consumers may be experiencing in response to receiving personalized advertising messages.
Persuasion Knowledge Model. Friestad and Wright (1994) were among the first researchers to study how consumers perceive marketers’ efforts to persuade them. They posited that over time, consumers develop personal knowledge about the tactics marketers use in their persuasion attempts, and this “persuasion knowledge” enables them to identify how, when and why marketers are trying to influence them. They further argued that persuasion knowledge provides a type of filtering schema that allows consumers to adaptively respond to persuasion attempts in an effort to achieve their own goals. This persuasion process is purportedly influenced by a number of factors originating from the perspectives of both the agent (the person responsible for constructing a persuasion attempt) and the target (the person for whom a persuasion attempt is intended).
From the agent’s perspective, the first goal is to master knowledge of the topic as well as of the intended target in order to devise a message that will elicit the desired response. In the modern age of big data, the depth of knowledge available about potential targets has become increasingly specific, driven by demographic, geographic and online behavioral data that are often collected without the consumer’s knowledge or consent. Once the agent has aggregated sufficient knowledge about the intended target, the next step is identifying the optimal channel in which to deliver the persuasive message. Recent technological advances enable personalized messages to be delivered via a myriad of channels, including email, online and mobile banner ads, pop-up videos, sponsored social media posts, search engines, online gaming sites, e-commerce portals, quantified-self devices and a host of other platforms (IAB, 2015; Rocket Fuel, 2014).
On the other end of the spectrum, the intended targets of a persuasive message also develop their own knowledge about the topic and the agent, both of which contribute to their likelihood of being persuaded. Friestad and Wright (1994) refer to this filtering process as persuasion coping. A consumer’s persuasion knowledge can accumulate from many sources, including third-party observations of everyday persuasion attempts, other consumers’ comments about the agent and their marketing stimuli, an agent’s perceived trustworthiness, as well as the consumer’s personal privacy concerns. In the more than two decades since PKM was originally proposed, advancing technology has not only enhanced the agent’s ability to obtain increasingly personalized knowledge about target audiences, but also triggered new coping behaviors by consumers. These developments suggest critical extensions to PKM that warrant examination and consideration.
Applications to CMC. Recent studies have extended PKM to a number of computer-mediated settings, including online gaming (Nelson, Keum & Yaros, 2004) and online shopping (Kachersky & Kim, 2011). Nelson et al. (2004) used netnography and questionnaires within a PKM framework to gain insight into online gamers’ beliefs about the effectiveness and appropriateness of various forms of advertising in the context of online gaming. They found that online gamers were well aware of the sponsors’ in-game advertisements, and in some cases actively sought coping mechanisms to discount them, such as installing ad blocking software and ignoring overt product placement messages. However, despite these coping behaviors, positive relationships were noted between attitudes toward product placement advertising in online games and perceived impact on purchasing behavior. These findings indicate this subtle form of persuasive messaging did not stimulate undue privacy concerns among the study participants (who deemed the presence of ads a worthy trade-off for free access to the gaming platform).
Kachersky and Kim (2011) utilized PKM to examine consumers’ beliefs about the persuasive intent of different e-commerce pricing formats. While PKM posits that people generally find it unpleasant to be the target of a persuasion attempt, the authors hypothesized that offers of beneficial pricing would supersede consumers’ judgments based on their persuasion knowledge. Data from this study indicated that consumers use two knowledge bases when comparing economically equivalent offers online (persuasion knowledge and topic knowledge), and that they will typically choose the offer they perceive to be a better deal, regardless of the perceived intent of the agent. These findings are also supported by a similar study examining the Privacy Paradox, which noted that offers of better selection and preferred pricing overrode consumers’ normal levels of skepticism related to sharing their personal information online (John, 2015).
PKM and Personalized Advertising. Recent research (e.g., Lerman, 2014; Pew, 2014) indicates that consumers’ privacy concerns, triggered by the data mining technology that fuels advertising personalization, juxtaposed with the desire to access content and other benefits associated with these technologies, are introducing a new dynamic into the persuasion process that requires further examination. Utilizing the framework of PKM, this study proposes two hypotheses as well as a related research question:
H1: After controlling for demographic variables, perceived online trust, message credibility, and perceived risks and benefits, ads featuring personalized content will elicit more negative attitudes than ads that are not personalized.
H2: After controlling for demographic variables, perceived online trust, message credibility, and perceived risks and benefits, ads featuring the AdChoices Icon will elicit more negative attitudes than ads that do not feature the AdChoices Icon.
RQ1: When examining attitude toward the ad, how does knowledge of the AdChoices Icon interact with message personalization and the presence of the AdChoices Icon after controlling for demographic variables, perceived online trust, message credibility, and perceived risks and benefits?
Method
Sample
Data were collected from 382 subjects through an online panel of consumers, aged 19-87 (M = 52.41, SD = 15.47), of which 48% were female. The majority of respondents were Caucasian (70%), followed by African-American (10%), Hispanic (10%), Asian (7%) and other (3%). Fourteen percent earned less than $30,000 annually, followed by $30,000-$39,999 (9%), $40,000-$49,999 (10%), $50,000-$59,999 (8%), $60,000-$69,999 (10%), $70,000-$79,999 (6%), $80,000-$89,999 (5%), $90,000-$99,999 (18%) and $100,000+ (11%). Education levels included 2% who did not complete high school, high school graduate (9%), some college (29%), Bachelor’s degree (29%), some graduate school (7%), Master’s degree (18%), and professional or doctorate degree (7%).
Design
The study design was a 2 (personalized ad/non-personalized ad) x 2 (AdChoices Icon inclusion/exclusion) x 2 (knowledge of Icon/no knowledge) between-subjects experiment. Participants were randomly presented with a fictional Yahoo home page containing a standard leaderboard banner advertisement with one of the following four manipulations embedded: a personalized advertisement (1) with or (2) without the AdChoices Icon featured, or a non-personalized advertisement (3) with or (4) without the AdChoices Icon featured (see figures 2-5).
The third experimental factor, knowledge about the AdChoices Icon, was created by asking participants what details they recalled (if any) about the AdChoices Icon after exposure, and then assigning them to either the “knowledge” or “no knowledge” condition based on their responses (see knowledge construct development below).
Figure 2. Personalized/Icon Ad.
Figure 3. Non-Personalized/Icon Ad.
Figure 4. Personalized/No Icon Ad.
Figure 5. Non-Personalized/No Icon Ad.
Ad personalization was based on each participant’s location and gender, which were collected by asking respondents to report their demographic information prior to exposure; the personally appropriate ad was subsequently presented based on a randomized condition assignment. Moreover, a Weight Watchers scale was featured in the advertisement in order to maximize effects, as extant research (e.g., Pew, 2016; Roeber, Rehse, Knorrek, & Thomsen, 2015; Sunyaev, Dehling, Taylor & Mandl, 2015) suggests health-related topics elicit greater levels of perceived risk in personalized communication contexts.
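For illustration, the assignment logic described above can be sketched in a few lines of Python. This is a hypothetical reconstruction rather than the authors' actual instrument; the condition labels, function names, and creative file names are invented.

```python
import random

# Illustrative sketch of the stimulus assignment (all names hypothetical):
# each respondent is randomized to one of the four ad manipulations, and
# personalized variants are matched to the gender and location the
# respondent reported before exposure.
AD_CONDITIONS = [
    ("personalized", "icon"),
    ("personalized", "no_icon"),
    ("non_personalized", "icon"),
    ("non_personalized", "no_icon"),
]

def assign_stimulus(gender: str, city: str) -> dict:
    personalization, icon = random.choice(AD_CONDITIONS)
    if personalization == "personalized":
        # e.g., the Weight Watchers scale creative tailored to a woman in Austin
        creative = f"ww_scale_{gender}_{city}_{icon}.png"
    else:
        creative = f"ww_scale_generic_{icon}.png"
    return {"personalization": personalization, "icon": icon, "creative": creative}
```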
After exposure, participants completed a questionnaire to assess AdChoices Icon knowledge, perceived risk and benefits of sharing personal information online, message credibility, trust and reported attitudes about the ad presented to them.
Measures
Attitudes toward the Personalized Ad. Baek and Morimoto’s (2012) five-item scale measuring attitudes toward personalized advertising was used to measure participants’ attitudes about the advertisement presented to them based on their personal data in the experimental setting. Respondents were asked to rate their level of agreement with a series of statements based on the advertisement they were just exposed to on a 7-point scale, ranging from 1 (strongly disagree) to 7 (strongly agree). Sample items included “The information in the ad would enable me to order products that are tailor-made for me” and “The information in the ad makes me feel that I am a unique customer” (M = 4.22, SD = 1.12, α = .94).
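For reference, the reliability coefficients reported throughout this section are Cronbach's alpha, which for a k-item scale is computed as

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{i}}{\sigma^{2}_{X}}\right),$$

where $\sigma^{2}_{i}$ is the variance of item $i$ and $\sigma^{2}_{X}$ is the variance of the summed scale scores.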
Knowledge of Icon. To measure their knowledge of the Icon, participants were asked to recall any facts they knew related to the meaning of the AdChoices Icon. These data were then coded on a scale of 0-4 based on subjects’ recognition of four key pieces of information: (1) your data is being tracked, (2) to tailor personal advertisements to you, (3) by third parties that aggregate and share your information, and (4) the AdChoices Icon means you may choose to opt out of this form of data tracking (M = 0.58, SD = 0.68). In order to create a naturally occurring condition, respondents were assigned to the no knowledge (recall score = 0, N = 274) or knowledge condition (recall score ≥ 1, N = 108) based on the number of facts they recalled.
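A minimal sketch of this scoring rule, with hypothetical names:

```python
def knowledge_condition(facts_recalled: int) -> str:
    """Map a 0-4 recall score onto the binary knowledge factor described above."""
    if not 0 <= facts_recalled <= 4:
        raise ValueError("recall score must be between 0 and 4")
    # Any correctly recalled fact places the respondent in the knowledge condition.
    return "knowledge" if facts_recalled >= 1 else "no_knowledge"
```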
Perceived Risk. Participants’ perceptions of the risks of sharing their personal information with online partners were measured based on a composite of three established scales measuring perceived risks related to online and mobile data sharing. First, the 10-item Internet Users’ Information Privacy Concerns (IUIPC) scale (Malhotra, Kim & Agarwal, 2004) measures three dimensions found to contribute to information privacy concern in online settings: data collection, data control and awareness of privacy practices. Sample items included “It usually bothers me to give personal information to so many online companies” (collection); “Consumers have lost all control over how their personal information is used” (control); and “It is very important to me that I am knowledgeable about how my personal information will be used” (awareness). Also included were Culnan and Armstrong’s (1999) 11-item scale reflecting awareness of online privacy policies as well as Smith, Milberg and Burke’s (1996) 9-item scale related to information privacy concerns in mobile contexts. All 30 items were measured on a 7-point Likert scale, ranging from strongly disagree (1) to strongly agree (7) (M = 5.47, SD = .80, α = .78).
Perceived Benefits. Participants’ perceptions of the benefits of sharing their personal information with online partners were measured based on a composite of four established scales measuring outcome expectancies related to convenience, information, entertainment and personal status (LaRose & Eastin, 2004), as well as an advertising value scale adapted from Ducoffe (1996). Participants were asked to indicate how likely they were to share personal information online in order to “Conveniently stay up to date on things” (convenience), “Get immediate knowledge” (information), “Feel entertained” (entertainment) and “Feel important” (status). Respondents also reported how likely they were to share personal information with online partners in order to receive specific monetary benefits including coupons, cash incentives, special offers, prior knowledge of sales, shopping rewards, customized offers, enhanced customer service, time savings, purchase reminders and personalized product recommendations. All 28 items were measured on a 7-point Likert scale, ranging from strongly disagree (1) to strongly agree (7) (M = 4.89, SD = .79, α = .71).
Online Trust. Disposition to trust was measured using six items from the McKnight, Choudhury and Kacmar (2002) trust scale for e-commerce, which examines trust on three dimensions: perceived benevolence, integrity and ability. Items including “I am comfortable relying on Internet vendors to meet their obligations” (integrity), “I believe that online companies are interested in my well-being, not just their own” (benevolence), and “I feel confident that technological advances on the Internet make it safe for me to share my personal data online” (ability) were measured on a 7-point Likert scale ranging from strongly disagree (1) to strongly agree (7) (M = 3.80, SD = 1.00, α = .74).
Message Credibility. The measure for perceived message credibility was adapted from Ohanian (1990) and included 15 semantic differential items. Specifically, participants were asked to assess the credibility of the message based on a range of such items as unattractive (1) to attractive (7), unreliable (1) to reliable (7), dishonest (1) to honest (7), plain (1) to elegant (7), and so on (M = 3.93, SD = 1.15, α = .93).
Control Variables. In order to isolate the variables of interest, the demographic variables of age, gender and education were controlled in all analyses.
Data Analysis
All data were analyzed in SPSS using a univariate general linear model (GLM). Hypotheses 1 and 2 and Research Question 1 were analyzed within the 2x2x2 experimental design.
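Although the authors ran the analysis in SPSS, an equivalent univariate GLM can be approximated with open-source tooling. The sketch below is a hypothetical Python/statsmodels analogue, not the authors' code; the data file and column names are invented.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical data frame: one row per participant; column names invented.
df = pd.read_csv("experiment_data.csv")

# Attitude toward the ad modeled on the three experimental factors (and their
# interactions), with demographics, trust, credibility, and perceived risks
# and benefits entered as covariates. Sum-to-zero contrasts make the Type III
# F tests comparable to SPSS's univariate GLM output.
model = smf.ols(
    "attitude ~ C(personalized, Sum) * C(icon, Sum) * C(knowledge, Sum)"
    " + age + C(gender, Sum) + education"
    " + trust + credibility + risk + benefit",
    data=df,
).fit()

print(anova_lm(model, typ=3))  # F tests with Type III sums of squares
```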
Results
Data indicate that after controlling for age (F(1,382) = 1.30, p > .05), gender (F(1,382) = .613, p > .05), education (F(1,382) = .87, p > .05), online trust (F(1,382) = 12.37, p < .01), message credibility (F(1,382) = 176.86, p < .01), perceived risk (F(1,382) = 4.56, p < .01), and perceived benefits (F(1,382) = 18.96, p < .05), ad personalization (F(1,382) = .298, p > .05) did not influence respondents’ attitude toward the ad. Thus, H1 was not supported by the data. However, the data did support H2 in that Icon presence significantly influenced attitudes toward the ad (F(1,382) = 4.133, p < .05). Here, subjects exposed to the AdChoices Icon (M = 3.77, SE = .08) displayed more favorable attitudes toward the ad compared to those not exposed to the Icon (M = 3.53, SE = .08).
Turning to RQ1, although knowledge of the Icon did not interact with ad personalization (F(1,382) = .993, p > .05), knowledge of the Icon approached significance in predicting attitude toward the ad (F(1,382) = 3.127, p = .07). Here, data indicate that when the ad lacked the AdChoices Icon, those without knowledge (M = 3.56, SE = .08) and those knowledgeable (M = 3.51, SE = .13) of the Icon did not display differing attitudes toward the ad. However, as shown in figure 6, when the AdChoices Icon was presented within the ad, those indicating no knowledge (M = 3.59, SE = .09) reported less favorable attitudinal responses toward the ad compared to those who were knowledgeable of its meaning (M = 3.94, SE = .14).
Figure 6. Interaction of the Presence of the AdChoices Icon by AdChoices Icon Awareness on Attitude Toward the Ad.
Discussion
The purpose of this study was to investigate how awareness of personal data collection and aggregation practices (via ad personalization and AdChoices Icon inclusion) affects consumers’ attitudes toward personalized advertising messages. Through the theoretical lens of the PKM, it was hypothesized that personalization and inclusion of the AdChoices Icon would significantly influence respondents’ attitude toward the ad, since consumers’ knowledge of advertisers’ persuasive attempts is predicted to trigger ad skepticism. Although the empirical results of this study only supported the hypothesized relationship between the presence of the AdChoices Icon and attitude toward the ad, an examination of respondents’ knowledge about the AdChoices Icon provides additional insight. Here, data suggest that inclusion of the Icon in personalized advertising messages could improve attitudes toward the ad as long as the recipient is knowledgeable of the Icon’s meaning. These findings are consistent with PKM’s premise that target consumers use three separate knowledge bases in evaluating persuasive advertising messages: topic, agent and persuasion knowledge. That is, when they perceive the topic to be relevant (i.e., personally targeted to them) and the agent to be trustworthy (by declaring use of the target’s personal data via the AdChoices Icon), consumers are more likely to overlook their persuasion knowledge schema and be receptive to an advertiser’s personalized message.
Further, although not included in the experimental manipulation, findings from this study support the existence of the Privacy Paradox, which predicts that consumers will minimize perceived risks in situations where the perceived benefits are considered of greater value than the risks associated with disclosing personal information. As reported, study data indicated significant relationships between the perceived benefits of personalization and attitude toward the ad, as well as between the perceived risks of personalization and attitude toward the ad, although benefits were found to have the stronger relationship. This finding is also supported by previous research (Kachersky & Kim, 2011), which predicts that consumers will typically choose the offer they perceive to provide the greatest benefit, regardless of the perceived risk or intent of the agent. These findings suggest a natural extension to PKM: expanding the target’s coping behaviors to include not only trust in the agent and privacy concerns, but also the perceived benefits offered by the agent’s message. Here, in addition to knowledge, expectancy values become part of the “optimal channel” evaluation and selection. In this context, consumers are clearly balancing the perceived privacy risks associated with responding to an agent’s personalized message against its perceived benefits. This approach promises to clarify the relationship between consumers’ knowledge and attitudes by encouraging PKM researchers to understand the influence of perceived benefits as a variable considered jointly with persuasion knowledge.
This, in turn, suggests the need for a more robust policy approach by the FTC related to consumer education about current data collection practices. Policy makers dealing with big data and privacy issues are encouraged to use this study as a platform to push a literacy-based agenda that would not only protect consumer interests but also offer advertisers improved outcomes. Until recently, Fair Information Practices (FIPs) along with self-regulatory initiatives such as the AdChoices Icon were believed to be adequately informing U.S. consumers about the collection and use of their personal information online. However, the majority of respondents in this study recalled only one key fact about the AdChoices Icon, and not one reported knowledge of all four. Even when notified about first and third party data collection via a public service announcement explaining the meaning of the AdChoices Icon, findings indicate that study respondents still did not fully comprehend the potential risks associated with this practice.
Policy standards that more fairly address the collection and use of consumers’ personal data by first and third parties are warranted; such standards should utilize a balanced approach that considers the interests of all affected actors (consumers, publishers, advertisers and government entities) and be rooted in recommendations developed by experts representing multiple perspectives. These standards should incorporate not only increased consumer education and awareness, but also improved transparency and visibility related to data collection practices by publishers and advertisers, greater consumer choice and control over data sharing practices, and enhanced accountability by those who collect and use consumers’ personal data.
Limitations and Future Research
The current findings, while interesting, represent only a single exploration into a very complex issue. The Privacy Paradox, by definition, suggests that privacy awareness and knowledge have a confounding effect on perceptions: while some might find it comforting to know when their information is being collected and used, for others, just learning about big data aggregation and its connection to personalized advertising might make them feel their privacy has been violated. Future research should further investigate the moderating role of affect upon knowledge acquisition of the AdChoices Icon’s meaning. Additionally, experimental designs typically involve trade-offs in which the precision and control gained via contrived settings are balanced against a corresponding loss of generalizability and realism. In this study, the ad manipulations featured only one product (a Weight Watchers body weight scale) with a fairly simple design. It is conceivable that the ad materials presented may have elicited unusually high perceived risk levels due to their health orientation, or were not overly engaging for some of the study participants, thus impacting their reported attitude toward the ad. Therefore, future studies should test a variety of message designs, contexts, and population differences to control for possible differences related to these factors.
Further, while the interaction between Icon knowledge and Icon inclusion approached significance, perhaps a better conceptual understanding of knowledge would offer greater insights into this relationship. For example, respondents in this study were simply classified as knowledgeable about the AdChoices Icon or not. By expanding this construct to include additional levels and/or types of knowledge, future research could offer greater depth to this analysis.
In conclusion, the present study contributes to the existing literature by investigating the effects of personalization and the inclusion of the AdChoices Icon in advertising messages, as well as by examining the perceived risks and benefits of online information disclosure through the lens of the Persuasion Knowledge Model. Moreover, it confirms that the data collection and aggregation practices conveyed by the AdChoices Icon are not well understood among the general population, and thus reaction to the Icon, regardless of the advertising attribute manipulated, is varied. To this end, additional research is still needed into the Privacy Paradox and how these micro-situations of privacy and identity operate within the broader context of big data.
Acknowledgement
Funding for this study was provided by The Center for Identity, The University of Texas at Austin.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Copyright © 2016 Nancy Howell Brinson, Matthew S. Eastin