Frontiers: The Intended and Unintended Consequences of Privacy Regulation for Consumer Marketing

Abstract

As businesses increasingly rely on granular consumer data, the public has pushed for stronger regulation to protect consumers’ privacy. We provide a perspective based on the academic marketing literature that evaluates the various benefits and costs of existing and pending government regulations and corporate privacy policies. We make four key points. First, data-based personalized marketing is not automatically harmful. Second, consumers have heterogeneous privacy preferences, and privacy policies may unintentionally favor the preferences of the rich. Third, privacy regulations may stifle innovation by entrepreneurs, who are more likely to cater to underserved, niche consumer segments. Fourth, privacy measures may favor large companies that have less need for third-party data and can afford compliance costs. We also discuss technology platforms’ recent proposals for privacy solutions that mitigate some of these harms, but, again, in a way that might disadvantage small firms and entrepreneurs.

History: Olivier Toubia served as the senior editor.

Supplemental Material: The web appendix is available at https://doi.org/10.1287/mksc.2024.0901.

1. Introduction

Several recent initiatives have reinvigorated the debate about consumer digital privacy in the United States. As we write, 19 U.S. states have enacted comprehensive privacy laws that emulate the EU’s General Data Protection Regulation (GDPR 2020); 11 of these laws come into effect in 2025 or 2026. Privacy regulation has broad and bipartisan support, as with the recent draft of the American Privacy Rights Act in 2024; the June 2024 draft would preempt state laws. Congress is debating the American Innovation and Choice Online Act (AICOA), which includes restrictions to enhance consumer privacy for both private and publicly listed companies. President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence highlighted the federal government’s commitment to enforce consumer protection laws and enact appropriate safeguards “against fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI” (Biden 2023).

The Federal Trade Commission (2024, p. vii) recently recommended GDPR-like federal privacy legislation for social media and streaming platforms, including “minimizing data collection to only that data which is necessary for their services,” “data retention and data deletion policies,” “limiting data sharing with affiliates, other company-branded entities, and third parties,” and “clear, transparent, and consumer-friendly privacy policies.” The report cited no academic literature on consumer privacy in marketing or economics, and it discussed neither potential harms from the proposed measures nor empirical evidence of benefits of past regulation. Marketing scholars are well positioned to fill this gap by offering data-driven insights.

We synthesize emerging empirical findings from the academic literature in marketing, economics, and behavioral science about the intended versus unintended consequences of existing and pending privacy regulations for consumer markets (Bleier et al. 2020). Regulation invariably involves trade-offs, including the unintended costs that are often omitted from proposals for data privacy regulation. Our intended audience includes (a) policymakers, (b) managers who set corporate policies and who might collaborate in creating public policy, and (c) privacy scholars outside of marketing science. Given the likely reevaluation of pending initiatives by the Trump administration, such a synthesis is timely.

Two high-level conclusions emerge from our discussion. First, policy analysis has focused heavily on restricting data flows. But humans seek boundary regulation—sharing information when they wish and restricting it when they do not (Goldfarb and Que 2023, Acquisti 2024). Second, marketing privacy regulations often favor the powerful in ways not acknowledged by legal experts driving the privacy debate.

After outlining key intended benefits of privacy regulation in Section 2, we elaborate four key points leading to these conclusions.

  • Section 3. Some privacy advocates assume, incorrectly, that personalized marketing based on granular consumer data is automatically harmful—a zero-sum game in which value is transferred from consumers to firms. Personalization can be win-win. Moreover, in domains like pricing, lack of personalization may favor those most able to pay.

  • Section 4. Heterogeneous privacy preferences create the central problem in privacy regulations. Some consumers lack established privacy preferences, so surveys such as those cited by FTC (2024) may be unreliable for policy. Among consumers with established preferences, interests conflict. Current regulations tend to favor high-income consumers with stronger privacy preferences. Low-income consumers are already digitally invisible to the point that firms implicitly discriminate by excluding them from outreach. Blanket restrictions on data sharing exacerbate that inequality.

  • Section 5. Privacy measures stifle the wave of innovation and entry of direct-to-consumer online businesses, especially by small entrepreneurs with offerings targeted to niche and underserved segments.

  • Section 6. Privacy measures tilt competition in favor of large, incumbent firms that have less need for third-party data and can afford the large compliance costs. A growing literature shows differential harm to small businesses from government policies and tech platform policies such as Apple’s App Tracking Transparency (ATT).

  • Section 7 outlines the promise and problems of new privacy-enhancing technologies (PETs). Section 8 concludes and recommends future research.

2. Intended Benefits of Digital Marketing Privacy Regulation and Pertinent Regulations

There are several important reasons why consumers may benefit from oversight of the use of their personal data by marketers and why the current consent-based regime fails to protect consumers (Acquisti 2024).

  1. Consumers might be harmed if firms possess and act on incorrect information about them and if such data use lacks transparency and the ability to correct errors (Dorsey 2022). “Companies collect personal and transactional data to create consumer scores…to predict how consumers will behave in the future” (U.S. GAO 2022). These scores suffer from biases, because of social inequities and intrinsic bias in the data themselves, and from inaccuracies, because of out-of-date information. Their usage can lead to seemingly unfair differential treatment.

  2. Firms might use personal data to discriminate against disadvantaged consumers or protected classes (Consumer Financial Protection Bureau 2022). Consider the U.S. Justice Department suit “alleging that Meta’s housing advertising system discriminates against Facebook users based on their race, color, religion, sex, disability, familial status and national origin” (Civil Rights Litigation Clearinghouse 2022). Even without an intent to discriminate, market forces may cause online advertising algorithms to underserve certain groups of consumers and deny them full access to the digital economy (Lambrecht and Tucker 2019).

  3. Firms might price discriminate against consumers with higher valuations of a product or service. The Council of Economic Advisors (2014) explains: “Consumers have a legitimate expectation of knowing whether the prices they are offered for goods and services are systematically different than the prices offered to others.”

  4. “Notice and consent” regimes may protect consumers insufficiently. Sellers and buyers may have asymmetric information, so data sharing may have consequences unforeseen by the buyer (Acquisti et al. 2016, Clark 2020). Moreover, consumers suffer from “consent fatigue” because of the large number of consent requests, often to get access to information on a website (McDonald and Cranor 2008, Miller and Tucker 2018, Acquisti 2024). Arguably, it is unreasonable to expect any consent to be “informed” and meaningful (Utz et al. 2019).

  5. Behavioral biases and bounded rationality may impair consumers’ abilities to manage their privacy effectively (Acquisti et al. 2020).

Motivated by concerns like those above, the European Union proactively launched the GDPR in 2018, with far-reaching implications for over 20 million companies spanning dozens of countries. GDPR sets a high bar for a firm’s ability to collect and process personal data and requires transparency. For example, personal data such as sex and gender should only be collected and processed when necessary for the task and not used beyond the original purpose. Our Web Appendix provides further details.

The United States has adopted a more decentralized patchwork of federal and state laws, along with industry-specific regulations that are enacted at the federal level but apply to specific sectors, such as HIPAA, which governs health data. Most U.S. privacy measures have been implemented in a fragmented way across a variety of state laws, such as California’s Privacy Rights Act and Colorado’s Privacy Act.

Perhaps in anticipation of heightened regulations, many American firms proactively strengthened consumer protections. Apple’s ATT framework blocks apps from tracking an individual’s behavior on other companies’ apps and websites unless the user opts in via its “Ask App Not to Track” prompt (Kesler 2023). In 2024, Google ended its five-year effort to phase out third-party cookies in its Chrome browser, aiming instead to introduce tools for consumers to make informed choices and adjust privacy settings easily.

While issues 1–5 are indeed concerns, we discuss below how existing privacy regulations and policies may have unintended consequences on both the supply and demand sides that offset anticipated consumer welfare gains. For instance, Apple’s ATT appears to have reduced the number of fraud complaints (Bian et al. 2023) but increased product prices and market concentration and reduced ad spending (Deisenroth et al. 2024).

3. Access to Consumer Data Can Increase Consumer Value via Personalization

One of the most contentious aspects of the use of personal data for marketing purposes is the personalization of the marketing mix. Consider personalized pricing. Some have questioned its legality (Ramasastry 2005). The popular press has been rife with headlines such as “To Fight Surveillance Pricing, We Need Privacy First” (Noble 2024). Documented examples of such personalized pricing are scarce, yet public officials have expressed concerns: “[differential pricing] transfers value from consumers to shareholders, which generally leads to an increase in inequality and can therefore be inefficient from a utilitarian standpoint” (Council of Economic Advisors 2015, p. 6).

Theoretically, personalized marketing could harm consumers by transferring consumer value to firms, excluding segments of the population from valuable communications, or pricing regressively. However, economic theory also shows that price discrimination can increase consumer value when the total number of consumers served increases (e.g., Stole 2007). Moreover, oligopoly price discrimination can trigger price wars and a prisoner’s dilemma whereby firms’ prices and profits decline to the benefit of consumers (e.g., Stole 2007). Whether price discrimination increases both consumer value and firm value depends on the nature of the consumer segments defined by the data available (Bergemann et al. 2015, 2024).

Several empirical studies demonstrate how personalized pricing can benefit consumers less able to pay. Personalized pricing lowered prices for over 60% of the customers of a large digital human resources platform, primarily for the smallest-enterprise customers (Dubé and Misra 2023). Supermarket prices in poor neighborhoods are 8% higher than they would be if large chains implemented more granular geographic price differences across stores in each city (DellaVigna and Gentzkow 2019). Willingness to pay for healthy nutrients increases with a household’s income, so personalized pricing could help reduce nutritional inequality (Allcott et al. 2019). Personalized pricing could help low-income households afford municipal fines and fees to avoid defaulting and accumulating municipal debt while potentially increasing municipal revenues (Glenn et al. 2022). Switching from uniform to variable pricing of National Football League tickets benefited fans in cities with lower income and higher income diversity (Arslan et al. 2023).

Looking beyond pricing, the application of large language models to personalize the California SNAP program’s email campaign more than doubled enrollments for food stamps (Misra 2020). Disabling personalization for Alibaba customers led to less efficient search and lower purchase incidence, particularly for consumers with unusual tastes and niche merchants serving them (Sun et al. 2023). Personalized recommendations increased the diversity of digital music consumed (Datta et al. 2018). Regulators have acknowledged some of these nonprice personalization benefits (e.g., Council of Economic Advisors 2014, pp. 7–8). In sum, data-based personalized marketing can improve matching of customers with less common needs with appropriate sellers.

4. Which Consumers Care Most About Privacy, and Do Privacy Policies Unintentionally Favor the Privileged?

The heterogeneity in privacy preferences and regulatory consequences across consumers and situations presents one of the biggest challenges to privacy regulation. Any policy that speeds data flows imposes externalities on those who would not benefit from data sharing; any privacy policy that slows data flows imposes externalities on those who would benefit.

Policy analyses such as FTC (2024) rely on consumer surveys to infer broad (and context-insensitive) public support for more restrictive privacy regulation. In a Pew survey, “…81% of the public say that the potential risks they face because of data collection by companies outweigh the benefits” (Auxier et al. 2019). Pew finds 68% of Republicans and 78% of Democrats support more government regulation (Faverio 2023).

How probative are these surveys for policy? A well-documented “privacy paradox” shows that stated attitudes may diverge from actual privacy behaviors (e.g., Goldfarb and Que 2023) and are contaminated by “socially desirable” responses (Larson 2023). Is this low attitude-behavior correspondence a symptom of “nonattitudes” and low importance and knowledge for some consumers (cf. Schuman and Presser 1980, Howe and Krosnick 2017)? As we discuss in more detail in the Web Appendix, when survey respondents do not have a prior opinion on an issue, their responses are “constructed” on the spot.

The hallmark of constructed preferences is their sensitivity to normatively irrelevant context, such as minor changes in wording or question order (Feldman and Lynch 1988). When preferences are constructed, choices over data tracking policies will be sensitive to choice architecture (Thaler and Sunstein 2021). Opt-in data tracking policies nudge consumers not to share, and opt-out policies encourage data sharing (Johnson et al. 2002).

4.1. Instrumental, Contextually Determined Privacy Preferences

Privacy preferences include both instrumental and intrinsic components, complicating consumer welfare analysis (Lin 2022). Some consumers may have a strong intrinsic psychological distaste for sharing data, independent of how their data will be used. Consumers may also have context-specific instrumental preferences driven by strategic or economic motives, such as a high-income consumer deciding not to share data for fear of receiving higher prices.

Data transmission can sometimes benefit consumers such that regulators should focus on boundary regulation instead of banning data tracking. While industry solutions such as browser-level global privacy controls seem well suited for consumers with strong “intrinsic” privacy preferences to say “no” or “yes” to all data sharing, consumers with instrumental privacy preferences will instead need a way to provide meaningful “batched consent” to accept some requests and decline others without experiencing “consent fatigue.” For instance, iOS users’ opt-in rates to share data are much higher for gaming apps than travel apps (Perik 2024). Successful examples of facilitating data sharing between entities to improve service quality while remaining compliant with GDPR exist in child services and medical records (ICO 2024).

McDonald and Cranor (2008) note the time costs of exerting boundary regulation by carefully reading privacy policies. Perhaps AI agents could be trained for this purpose to automate context-dependent boundary control for consumers with heterogeneous preferences. The Web Appendix discusses other concrete policies that treat privacy as a matter of boundary regulation rather than concealment.

4.2. Weaker Privacy Preferences for Low-Income, Less Educated, and Younger Consumers

Richer, more educated, and older consumers tend to be willing to pay more for privacy on Facebook than poorer, less educated, and younger consumers (Lin and Strulov-Shlain 2025). These findings conceptually replicate Varian et al. (2005) and Savage and Waldman (2015).

Poorer, less educated, and younger consumers who value their privacy the least are also the most sensitive to choice architecture, again suggesting that their preferences are largely constructed on the spot (Lin and Strulov-Shlain 2025). See also Mrkva et al. (2021).

These findings raise thorny consumer protection issues. Policymakers should consider whether the gains from data sharing outweigh possible risks from privacy loss for lower-income, younger, and less educated consumers. If so, current privacy proposals such as FTC (2024) may favor the instrumental concerns of the privileged, especially when personalization has redistributive effects in domains such as pricing. Of course, if more disadvantaged consumers are wrong not to worry more about privacy—rather than having different priorities—one might argue for paternalistic solutions that toughen current consent-based frameworks (cf. Acquisti et al. 2020).

4.3. Privacy and Marketing Inclusiveness

The regulatory push to limit or ban access to third-party data suggests a prevailing sentiment that firms know too much about consumers. We now review research suggesting that, counterintuitively, firms may not know enough about disadvantaged consumers to market offers to them or to monitor for unintended discrimination.

4.3.1. Algorithmic Discrimination Against Disadvantaged Consumers.

The United States prohibits disparate treatment based on someone’s protected class status in select markets. Even when targeted marketing that discriminates on sensitive consumer attributes is permissible, firms may avoid it if stakeholders might perceive such practices as unfair or unethical.

Indeed, algorithms can sometimes learn to discriminate absent any deliberate intent by the marketer (Netzer et al. 2019). A cold start problem in algorithmic learning can cause uneven outcomes for minority relative to majority groups (Lambrecht and Tucker 2024). Facebook’s bidding algorithm served more ads for a STEM career campaign to men than women, even though the algorithm was blind to gender (Lambrecht and Tucker 2019).

Paradoxically, the policy objective of privacy protection and the policy objective of nondiscrimination can conflict (Ali et al. 2019). Many privacy policies discourage collecting and storing personal attributes such as race and gender. However, without knowing a customer’s race and gender, it is difficult to monitor digital marketing for unfair discrimination (King et al. 2023), and firms cannot readily correct for potentially unfair discrimination when algorithms use variables correlated with otherwise unobserved protected attributes (cf. Ascarza and Israeli 2022). The Web Appendix discusses limits of methods for monitoring using inferred sensitive variables.

4.3.2. Is Privacy a Problem of the Privileged? Too Little Data for Disadvantaged Consumers.

To the extent that some marketing offers are beneficial when targeted to the appropriate consumer audiences using individual consumer data, one concern is that lack of data may exclude some consumers. Marketers often prescreen consumers for offers based on predicted profitability. The necessary data are often more limited and fragmented for poorer than for richer consumers. Poorer consumers live in “data deserts” (Tucker 2023): missing or fragmented data exclude them from algorithmic processing. This exclusion thwarts marketing outreach and may deprive them of offers, exacerbating marginalization.

Privacy restrictions may inadvertently exacerbate the algorithmic exclusion of “invisible” poor consumers. For instance, Boston’s Street Bump app disproportionately increased pothole repair rates in wealthy neighborhoods more likely to use the app (Tucker et al. 2023). John Hancock’s Fitbit-based insurance discounts disproportionately benefited the wealthy who more commonly use fitness trackers (Goldfarb and Tucker 2017). In peer-to-peer lending, privileged consumers with more social connections were more likely to benefit (Freedman and Jin 2017).

Data fragmentation is another obstacle for poorer consumers (Tucker 2023, 2024). Contact information such as name, phone number, or email address is less consistently tracked for consumers facing economic instability. Large data brokers’ databases are more likely to have missing or biased records for individuals with lower wealth, education, and home ownership rates (Neumann et al. 2024). Experian’s records were 50% less likely to contain information about Hispanic and Asian individuals than about White individuals (Kaplan et al. 2017). Credit scores are statistically noisier indicators of default risk for historically underserved groups who lack credit histories (Blattner and Nelson 2021).

Naturally, data exclusion can benefit marginalized groups, for instance, by shielding them from corporate and government surveillance. However, marginalized consumers with limited data may be harmed by digital exclusion from valuable offers in markets such as credit, employment, or public services. Financial inclusion increased in India with the launch of the Aadhaar national ID scheme (Alonso et al. 2023) and in Peru, where expanding credit access using shared retail data reduced income disparities (Lee et al. 2024). Policymakers should consider how to level the playing field by incentivizing more equal understanding of poorer and richer consumers.

5. Privacy Measures May Stifle Entry and Innovation by Entrepreneurs and Small Businesses Who Are More Likely to Serve Niche Consumer Segments

Digital advertising now constitutes most ad spending (Cramer-Flood 2023). Data-driven targeting improves its efficiency over traditional media and often uses cross-site or cross-app user data. Major advertising platforms track such data with user identifiers, such as third-party cookies, to optimize ad delivery. In addition to “onsite data”—user data collected on the platform—some advertising platforms facilitate “offsite data”—collected off the advertising platform—including browsing history, purchase events, and other online user actions. For example, online retailers can use a pixel or conversion API to transmit purchase data to Meta to optimize ad campaigns, as sketched below.
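
To make the offsite data flow concrete, the sketch below shows the kind of server-side call a retailer might make to report a purchase event to an advertising platform. It follows the general shape of Meta’s Conversions API, but the pixel ID, access token, customer email, and API version are placeholders invented for illustration.

```python
import hashlib
import json
import time

import requests

# Placeholders: a real integration would use the retailer's own pixel ID
# and a valid access token.
PIXEL_ID = "1234567890"
ACCESS_TOKEN = "EAA-EXAMPLE-TOKEN"


def hash_identifier(value: str) -> str:
    """Identifiers such as emails are normalized (trimmed, lowercased)
    and SHA-256 hashed before being sent to the platform."""
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()


# One offsite purchase event: the retailer reports the conversion so the
# platform can attribute it to an ad exposure and optimize ad delivery.
event = {
    "event_name": "Purchase",
    "event_time": int(time.time()),
    "action_source": "website",
    "user_data": {"em": [hash_identifier("jane@example.com")]},
    "custom_data": {"currency": "USD", "value": 42.00},
}

response = requests.post(
    f"https://graph.facebook.com/v19.0/{PIXEL_ID}/events",
    params={"access_token": ACCESS_TOKEN},
    data={"data": json.dumps([event])},
    timeout=10,
)
print(response.status_code, response.text)
```

It is precisely this kind of offsite event stream that policies such as Apple’s ATT and the deprecation of third-party cookies restrict.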

Greater marketing data generated a surge in the launch of valuable, disruptive new products for consumers, particularly by small businesses and entrepreneurs. For instance, the craft segment’s share of U.S. beer sales surged from 4% to over 20% since 2005 (Bronnenberg et al. 2022). In 2018, 16,000 smaller CPG companies generated 19% of total U.S. CPG sales, a $2 billion (two-percentage-point) increase over 2017 (eMarketer Editors 2019).

By reducing the costs that used to be required to build a new brand on television and other mass media, digital advertising eroded massive barriers to entry for new consumer brands (cf. Sutton 1991, Bronnenberg et al. 2009). Digital advertising saves small U.S. entrepreneurs $163 billion annually, and over two-thirds of them lack a cost-effective alternative ad medium (Kerrigan and Keating 2019).

More broadly, without cross-site/app identity, consumers enjoy less free content (e.g., Kircher and Foerderer 2023, Johnson et al. 2024). GDPR likely hurt the European advertising-supported software industry and stifled innovation, with an associated decline in new firms, venture capital investment, and new apps (Jia et al. 2021, Janßen et al. 2022). Apple’s ATT substantially degraded digital advertising: firm revenue fell 37% more for more Meta-dependent firms (Aridor et al. 2024), and the number of U.S. establishments fell 1.1% (roughly 91,000 U.S. establishments) in more affected industries. Privacy protection in healthcare could discourage IT adoption and lead to worse health outcomes (Miller and Tucker 2009, 2011, Derksen et al. 2022; cf. Adjerid et al. 2016). Security and privacy measures for financial services can discourage take-up of innovations such as online banking (Lambrecht et al. 2011). Goldfarb and Tucker (2012) discuss the trade-off between privacy and innovation. However, firms may adapt to privacy restrictions by innovating in ways that do not involve data (cf. Agrawal et al. 2019).

6. Privacy Policy May Harm Small Companies with Greater Need for Third-Party Data and Less Ability to Afford Compliance Costs

Privacy regulations also impact competition among businesses that rely on digital marketing. Dozens of papers that consider the economic impact of GDPR largely document its harms to firm performance, competition, innovation, the web, and marketing (Johnson 2024).

GDPR increased the cost of collecting and storing data by requiring firms to enhance data protection, imposing penalties for data breaches, and requiring more transparency to consumers about tracking and data usage. Demirer et al. (2024) provide a case study of one of the largest global cloud-computing providers between 2015 and 2021, estimating that EU firms stored, on average, 26% less data than comparable U.S. firms two years after the GDPR. Interestingly, EU firms decreased their computation relative to comparable U.S. firms by only 15%, implying that firms became less data intensive after GDPR. Demirer et al. estimate that GDPR increased average total data storage costs by 20%, disproportionately so for smaller firms.

Restrictions limiting the effectiveness of digital advertising would likely disproportionately disadvantage small businesses because nine out of ten small businesses predominantly use digital advertising, especially on Meta (Kerrigan and Keating 2019). The GDPR and company-initiated policies, such as the deprecation of third-party cookies and Apple’s ATT, limit the collection of offsite data. For instance, banning offsite data for advertising on Meta disproportionately harms small advertisers, with the median small advertiser losing 4.6 times as many customers per $1,000 spent on advertising as the median large advertiser (Wernerfelt et al. 2024). Apple’s ATT policy also disproportionately harmed small businesses (Aridor et al. 2024, Deisenroth et al. 2024). Disabling access to targetable data in Safari and Chrome “…disproportionately hurt price responsive consumers and small/niche product sellers” on an e-commerce retail platform (Korganbekova and Zuber 2023). It also disproportionately harmed merchants serving niche consumers on Alibaba (Sun et al. 2023).

Data restrictions have also eroded digital advertising effectiveness. The EU’s e-Privacy Directive EC/2002/58 was associated with a 65% decrease in online advertising effectiveness (Goldfarb and Tucker 2011). Without cookies, the value created by advertising falls 52%, a loss shared roughly proportionately among advertisers, publishers, and ad tech intermediaries (Johnson et al. 2020).

Worse, privacy restrictions that limit or ban the use of data for advertising could inadvertently increase concentration in the advertising market. GDPR increased concentration in EU digital advertising markets (Peukert et al. 2022, Johnson et al. 2023) and increased the market shares of Google and Facebook. Apple’s ATT used different opt-in prompts for Apple versus third-party apps, potentially giving Apple higher opt-in rates and more access to targetable user data (Baviskar et al. 2024).

Regulators should consider this trade-off between the protection of consumer privacy and the potential harm from increased market concentration and the potential stifling of the recent wave of innovation among small, disruptive consumer brands. Indeed, the EU Data Protection Act now recognizes the differential compliance burden for small businesses, imposing looser requirements on them (Beveridge 2024).

7. A Path Toward Acceptable Data Processing

Privacy regulation steers both the data economy and firm compliance by defining acceptable data processing. For instance, HIPAA specifies health data storage and transfer requirements between covered parties, which can include encryption, deidentification, written agreements, and breach notification. The Children’s Online Privacy Protection Act (COPPA) restricts processing children’s data but establishes a safe harbor program for firms to coordinate self-regulation.

The GDPR prioritizes privacy while imposing substantial compliance costs on firms because the GDPR defines personal data broadly, imposes multiple data-related responsibilities on firms, and prescribes a high consent standard for many marketing purposes. Recently, GDPR has come into the crosshairs of the EU’s “crusade” against overregulation accused of throttling business (O’Regan 2025). Forward-looking privacy regulation should instead consider the role of privacy-enhancing technologies.

7.1. Privacy-Enhancing Technologies

PETs offer technological alternatives to privacy regulation such as adding noise to data (i.e., differential privacy), cohorting consumers (e.g., k-anonymity), decentralizing data processing (e.g., federated learning, on-device computation), limiting data flows (e.g., zero-knowledge proofs), and privacy-safe data combination (e.g., secure multiparty computation) (e.g., Bowen et al. 2024). The U.S. Census Bureau uses differential privacy to anonymize its public statistics. Google uses federated learning to implement keyboard next-word predictions.
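
As a concrete illustration of the first approach, the Python sketch below implements the Laplace mechanism, a canonical way of adding noise for differential privacy: each customer’s value is clipped to a known range so that the released statistic has bounded sensitivity, and noise scaled to sensitivity divided by the privacy parameter ε is added to the result. The spending data and bounds are hypothetical; note how smaller ε buys stronger privacy at the cost of noisier answers, foreshadowing the parameter choices discussed below.

```python
import numpy as np

rng = np.random.default_rng(0)


def laplace_mean(values: np.ndarray, lower: float, upper: float,
                 epsilon: float) -> float:
    """Release an epsilon-differentially private mean of bounded values.

    Clipping bounds each individual's influence on the mean
    (sensitivity = (upper - lower) / n); Laplace noise with scale
    sensitivity / epsilon then yields epsilon-differential privacy.
    """
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(clipped)
    return clipped.mean() + rng.laplace(loc=0.0, scale=sensitivity / epsilon)


# Hypothetical data: 1,000 customers' weekly spending in dollars.
spend = rng.gamma(shape=2.0, scale=30.0, size=1000)

for eps in (0.1, 1.0, 10.0):  # smaller epsilon = more privacy, more noise
    print(f"epsilon={eps:4.1f}  private mean = {laplace_mean(spend, 0, 500, eps):7.2f}")
print(f"true mean (no privacy) = {spend.mean():.2f}")
```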

The online advertising industry is on a path to replace cross-site/app identifiers with PETs (Johnson et al. 2022, Geng et al. 2023). For example, the Google (2022) “Privacy Sandbox” consists of multiple technologies for ad targeting (Topics API, Protected Audience API), ad measurement (Attribution Reporting API), and fraud detection (Private State Tokens). Similarly, Microsoft launched its Ad Selection API for ad targeting, Apple offers AdAttributionKit for ad measurement, and Facebook and Mozilla jointly proposed their Interoperable Private Attribution approach for ad measurement. These PET applications appear performant and see moderate adoption among websites and AdTech vendors (Kobayashi et al. 2024).
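
To convey the cohorting idea behind interest-based targeting APIs such as Topics, consider the stylized sketch below: the browser derives a handful of coarse interest labels from recent browsing and exposes only those labels to ad tech, never a cross-site identifier. The domain-to-topic mapping here is invented for illustration; the real Topics API draws on a standardized taxonomy of a few hundred interests and more elaborate selection and noising logic.

```python
import random

# Invented, tiny stand-in for the real taxonomy of coarse interests.
TOPIC_OF_DOMAIN = {
    "runnersworld.example": "Fitness",
    "seriouseats.example": "Food & Drink",
    "kayak.example": "Travel",
    "espn.example": "Sports",
}


def weekly_topics(browsing_history: list[str], k: int = 3) -> list[str]:
    """Derive at most k coarse topics from one week of browsing.

    Only these labels -- not the browsing history or any user ID --
    are observable to ad tech, so many users share each label.
    """
    topics = {TOPIC_OF_DOMAIN[d] for d in browsing_history if d in TOPIC_OF_DOMAIN}
    return random.sample(sorted(topics), min(k, len(topics)))


history = ["runnersworld.example", "espn.example", "kayak.example"]
print(weekly_topics(history))  # e.g., ['Fitness', 'Sports', 'Travel']
```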

PETs have limitations. First, computer science has treated “reidentifiability” as the focal problem (cf. Ponte et al. 2024). However, consumers do not perceive dramatically lessened privacy violations from PET solutions such as Google’s “Topics” and “Protected Audience APIs” (Jerath and Miller 2024).

Second, PETs may have competitive consequences: small firms’ smaller datasets require greater transformation to protect individual privacy, exacerbating challenges for inference (Komarova and Nekipelov 2020). Consequently, many real-world applications choose permissive privacy parameters that effectively sacrifice privacy for utility (Blanco-Justicia et al. 2022, Williams and Bowen 2023). Many small firms also lack the technical expertise to implement PETs.

Third, differential privacy may indirectly leak sensitive information, in particular, for disadvantaged consumers (Chang and Shokri 2021). For these reasons, scholars criticize the use of differential privacy in the U.S. census (Hotz et al. 2022).

Furthermore, PETs do not alleviate some of the other unintended consequences of privacy regulation. A privacy-preserving algorithm mitigates, but does not eliminate, the tendency for privacy restrictions to cause greater harm to small sellers and price-sensitive consumers (Korganbekova and Zuber 2023). Moreover, existing PETs do not resolve firms’ lack of knowledge about disadvantaged consumers, effectively excluding them from the marketplace.

7.2. Forward-Looking Regulation

Privacy regulation had the intended consequence of favoring innovation in PETs, but U.S. regulatory proposals to date (to our knowledge) omit PETs. For instance, the FTC’s request for public comment on “commercial surveillance and data security” only mentions PETs in passing.

Forward-looking regulation should consider how to incentivize the costly implementation of PET adoption and innovation. For instance, regulation could include appropriate PET use as a sufficient legal basis for data processing. Regulation could stipulate that firms can forgo costly consent collection if they employ PETs. In settings where consent plays an important role, regulation could incentivize PET adoption by permitting consent defaults that advantage data collection (e.g., opt-out rather than opt-in consent). In contrast, the French regulator CNIL has stated that the consent standard should be the same for Privacy Sandbox–enabled online advertising as third-party cookies.

8. Conclusion

Herein, we have summarized key themes in the relevant academic marketing literature regarding potential intended and unintended consequences from current privacy measures. Many policies reduce the usefulness of consumer data to both consumers and firms by eliminating the ability to track sources of heterogeneity. On the demand side, these policies weaken personalized marketing: this can reduce value creation for consumers with niche tastes and potentially exclude marginalized consumer segments. On the supply side, these regulations can stifle innovation and reduce the competitiveness of markets, especially hampering small businesses and entrepreneurs. While PETs offer potential to reduce some of these documented costs to consumers and firms, these technologies are likely to advantage larger firms.

Public policy needs to weigh the trade-offs between the costs and benefits of consumer data privacy restrictions. The unintended costs are frequently omitted from data privacy regulation proposals. A similar balanced approach has been recommended in the past in the discussion of the trade-offs between innovation and privacy (e.g., Goldfarb and Tucker 2012). Additionally, we offer several suggestions to better understand when surveys do or do not elicit reliable privacy preferences across contexts.

Three topics deserve further research by marketing scholars. First, few papers have quantified the tangible economic benefits of privacy regulations to consumers. Second, more work is needed on the redistributive effects of privacy regulation. Third, research and policy attention are needed on boundary regulation. Heterogeneous consumers need help to efficiently and accurately assess when data sharing is in their interests and to ease the tasks of data sharing when it is helpful. Technical solutions are needed that allow consumers to “own” their own data locally and to share with selected providers only for a limited purpose and time.