The Invisible Data Sharing Market: An Exploration
March 15, 2024

Introduction

Imagine discovering that your car insurance premium has surged by over 20%, not because of your driving, but because your vehicle has been quietly sharing data with your insurer. It's a shock many drivers have already faced.

The central figure in this example is LexisNexis, a global data broker known for collecting driving habits and providing this information to insurers. The data, sourced directly from automakers, details driving behaviours such as trip times, distances and instances of speeding or hard braking, albeit without the specific locations.

This practice raises pivotal questions about data privacy and consent. Automakers, leveraging the connectivity of modern vehicles, gather extensive driving data, sometimes without explicit approval from the car owners. 

This particular revelation about the depth of data collection by automakers and their partnerships with data brokers like LexisNexis brings to light significant concerns regarding privacy and the need for informed consent. 

Data brokers and data sharing are under tremendous scrutiny, particularly in the USA. This car insurance example, highlighted in a recent New York Times article [1], prompts bigger and more far-reaching questions: 

  • In what other industries is this occurring without consumer knowledge or informed consent, and how are individuals being affected? 
  • What can be done to make it more privacy-centric, transparent and ethical?

If we’re honest, these data brokerages won’t stop doing this - they make too much money - but what if we could find a way for this to continue in a privacy-first, transparent and ethical way?

In this article, we investigate this invisible data market of data brokers and data buyers, the role AI plays in analysing this information, the privacy and consent dilemma and what can be done to put data privacy first in this scenario.  

The Data Sharing Network

While the general public is increasingly aware of how websites and apps collect personal data, outside of credit reference agencies like Equifax or Experian, a less visible yet vast network operates beyond the confines of your browser. This network thrives on data collected offline or from an array of connected devices, including your car, smart appliances and even your fitness tracker. 

These sources provide a continuous stream of information about personal habits, preferences and behaviours that go beyond simple online activity.

The Data Brokers

At the heart of this network are data brokers. These entities play a crucial role in the data ecosystem, collecting, aggregating and enriching personal information from various sources and then selling it to the highest bidder. 

Their operations extend far beyond collecting driving habits. Data brokers compile detailed profiles that include shopping patterns, location history, online browsing behaviour and much more. These profiles are then enriched with additional data to provide a comprehensive view of an individual's lifestyle and preferences.

Data brokers operate largely in the shadows, with most people unaware of the extent of information collected about them or how it is used. The data amassed is powerful, offering deep insights into consumer behaviour, which is highly valuable to businesses across various industries.

While this isn’t a uniquely American problem, the Federal Trade Commission [2] is taking action against data brokers. Outlogic, formerly known as X-Mode Social, has been banned from “...selling or sharing sensitive information that can be used to track people’s locations…” 

Previously, the data they had sold included mobile advertising IDs capable of identifying the individual device and locations visited due to a lack of anonymisation. This allowed businesses to identify individuals, their travel patterns and where they shopped, worshipped and worked.

FTC Chair Lina M. Khan said, “The FTC’s action against X-Mode (Outlogic) makes it clear that businesses do not have free licenses to market and sell Americans’ sensitive location data.”

The Data Buyers

Data buyers span a wide range of industries, extending well beyond the insurance companies discussed in our original example. Marketing firms, for instance, use this personal data to tailor advertisements to the individual level, aiming to increase the effectiveness of their campaigns.

Earlier this year, Senator Ron Wyden [3] wrote to the FTC and SEC with information about a data broker, Near, which had sold the location data of individuals who had visited Planned Parenthood clinics to ad agencies working for anti-abortion groups. These agencies then targeted those individuals with anti-abortion messaging, using location data harvested without their consent.

But it’s not just businesses that buy this data - the Government buys it from data brokers too.  

In an article published on the Brennan Center website [4], they explain that data brokers exploit legal loopholes to sell information to Government agencies to bypass privacy safeguards and, in some cases, this includes the requirements of the Fourth Amendment.

In 2020, Muslim Pro, a prayer app with over 98 million downloads, was found to track users' locations and sell the data to third-party data brokers, including Outlogic (formerly X-Mode). A Vice investigation [5] found that the US military was one of the buyers and that the data was used to monitor Muslim communities - despite the law prohibiting such purchases.

This demonstrates a larger issue facing the USA: the lack of a federal Data Privacy and Protection law.  Without comprehensive legislation that protects the privacy of individuals, these practices will continue.  

With the proliferation of AI tools, the risks to data privacy, personal information and freedom of speech will grow. Because AI can analyse such huge volumes of data, businesses can now combine data from multiple sources and use algorithms to potentially identify individual users.
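To make that re-identification risk concrete, here is a minimal, hypothetical sketch of record linkage: joining two nominally anonymised datasets on a shared quasi-identifier (here, a trail of visited places) is enough to attach a real name to an advertising ID. Every record and name below is invented for illustration.

```python
# Hypothetical sketch of cross-dataset re-identification: two "anonymised"
# datasets are joined on a quasi-identifier (a location trail) to attach
# a real name to an advertising ID. All records are invented.

ad_data = [  # from a data broker: advertising ID plus places visited
    {"ad_id": "a-001", "places": {"gym_x", "clinic_y", "office_z"}},
    {"ad_id": "a-002", "places": {"mall_a", "school_b"}},
]

loyalty_data = [  # from a retailer: real identity plus places visited
    {"name": "Jane Doe", "places": {"gym_x", "clinic_y", "office_z"}},
    {"name": "John Roe", "places": {"mall_a", "cafe_c"}},
]

def link_records(broker, retailer, min_overlap=3):
    """Pair up records whose location sets overlap enough to be one person."""
    matches = []
    for b in broker:
        for r in retailer:
            if len(b["places"] & r["places"]) >= min_overlap:
                matches.append((b["ad_id"], r["name"]))
    return matches

print(link_records(ad_data, loyalty_data))  # [('a-001', 'Jane Doe')]
```

Neither dataset contains a name next to an advertising ID, yet the join reveals one; this is why "we don't sell names" offers little real protection.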

AI and Data Sharing

Profiling Power

Artificial Intelligence (AI) has revolutionised the way businesses understand and interact with their customers. By analysing vast amounts of sensitive data, AI can create detailed profiles of individuals, offering unprecedented insights into their habits, preferences and behaviours. 

These profiles are not just static snapshots; they are dynamic and continuously updated with new data to refine the understanding of an individual's lifestyle and choices.

However, this can often have unwanted effects on individuals' health and well-being. For example, Siobhan Smith [6] still receives targeted advertisements for pregnancy-related products months after suffering a miscarriage, which has dramatically affected her mental health.

The power of AI in profiling allows businesses to tailor their offerings with remarkable precision. For example, a retailer could use AI to determine which customers are most likely to be interested in a new product line based on their past purchasing history and online browsing behaviour. 

Similarly, financial institutions can use AI-derived profiles to adjust credit limits or offer personalised loan rates, while employers might leverage this technology to screen job applicants, predicting their future performance based on past behaviours and achievements.

This granular profiling capability raises important questions about privacy and consent. As businesses increasingly rely on AI to make decisions that directly affect individuals, the need for transparency and ethical data use becomes crucial.

Predictive Models

Beyond profiling, AI's ability to predict future behaviour is perhaps even more impactful. By analysing patterns in past data, AI can make educated guesses about an individual's future actions, preferences and risk factors. These predictive models are used across industries to make decisions that significantly affect individuals' lives and opportunities.

In the insurance sector, for example, AI can predict the likelihood of a claim being filed based on past behaviour, adjusting premiums accordingly. In healthcare, predictive models can forecast potential health issues, allowing for earlier interventions. 

Even in the realm of marketing, AI can anticipate future purchasing behaviour, enabling companies to target consumers with relevant offers before the need or desire is explicitly expressed.
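To illustrate the mechanics rather than any insurer's actual model, a claim-risk predictor can be sketched as a simple logistic score over driving telemetry. The feature names, weights and thresholds below are invented for this example; a real model would be trained on historical claims data.

```python
import math

# Hypothetical sketch: a claim-risk score from driving telemetry.
# Weights and feature names are invented for illustration only.

WEIGHTS = {"hard_brakes_per_100km": 0.8, "night_driving_ratio": 1.2, "speeding_events": 0.5}
BIAS = -3.0

def claim_probability(features):
    """Logistic model: squash a weighted sum of behaviours into a 0-1 risk."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

cautious = {"hard_brakes_per_100km": 0.5, "night_driving_ratio": 0.1, "speeding_events": 0}
risky    = {"hard_brakes_per_100km": 4.0, "night_driving_ratio": 0.6, "speeding_events": 3}

print(round(claim_probability(cautious), 3))  # low risk -> lower premium
print(round(claim_probability(risky), 3))     # high risk -> higher premium
```

The simplicity is the point: a handful of behavioural signals, harvested without the driver's meaningful awareness, is enough to move a premium.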

However, the predictive power of AI also comes with ethical considerations. The accuracy of predictions can vary and reliance on historical data and personal information may perpetuate existing biases or inaccuracies, leading to unfair assumptions or decisions about individuals. 

Using predictive models to make assumptions about future behaviour raises concerns about determinism and the reduction of human behaviour to data points.

As we rely more on AI to inform decisions, the challenge lies in balancing the benefits of predictive analytics with the need to respect individual autonomy, ensure fairness and prevent discrimination. Striking this balance requires not only technological innovation but also a commitment to ethical standards and practices that prioritise the dignity and rights of individuals.

The Privacy Cost & Ethical Dilemma

The Concept of Consent

The concept of consent, especially concerning data collection, demands a closer examination. The issue at its core is not just about the act of agreeing to terms of service; it's about understanding what one is consenting to. 

In many cases, consent is obtained through mechanisms that do not ensure the individual's informed understanding. This is particularly true when consent forms are lengthy, filled with jargon, or when the implications of data sharing are not made explicitly clear.

For car owners, the distinction between opting into safety features and unwittingly agreeing to surveillance can be murky. Programs that monitor driving habits are often presented as value-added services, offering benefits like discounts on insurance premiums. 

However, this obscures the reality that these programs collect highly personal data, which can be used for purposes beyond the consumer's expectations or understanding. 

The case highlighted in the New York Times article, involving General Motors and LexisNexis, illustrates how drivers may be unaware of the extent to which their driving data is shared, analysed and used by third parties.

The lack of clear, upfront communication about data collection practices and the purposes behind them undermines the principle of informed consent which is a core component of most privacy laws. It raises critical questions about autonomy and the right to privacy.

Should the onus be on individuals to comb through dense legal documents to understand what they're agreeing to? Or should companies be required to ensure that consent is informed and obtained through clear, concise, and direct communication?

What Is The Value of This Data?

The value derived from the collection and analysis of car data is disproportionately skewed in favour of companies and data brokers, with consumers often receiving a minimal return. This imbalance prompts a reevaluation of the data exchange paradigm. 

While businesses argue that data collection enables them to provide personalised services and improvements in safety and efficiency, the tangible benefits for consumers are frequently less clear or impactful.

For instance, while drivers may receive minor discounts on insurance premiums for consenting to monitoring, the deeper value of the data collected far exceeds these incentives. Insurance companies and data brokers can leverage this information to refine risk models, develop new products and even influence market strategies. This vast potential for profit and innovation contrasts sharply with the limited benefits offered to consumers, such as marginal discounts or slightly more personalised services.

The value exchange is not just about financial gains or discounts; it's also about control and privacy. Consumers often trade away significant aspects of their privacy in exchange for services or conveniences, without fully understanding the implications hidden within complicated privacy policies.

This trade-off can lead to scenarios where consumers are left vulnerable to data breaches, unwanted surveillance and misuse of their personal information for targeted advertising.

The question of who benefits most from this ecosystem is thus twofold: it concerns not only the economic value derived from data but also the broader implications for consumer autonomy and privacy. 

Redressing this imbalance requires more than just offering consumers better incentives; it necessitates a fundamental shift in how data value is perceived and distributed within the digital economy.

Who Controls the Data?

Once personal data enters the broker ecosystem, individuals effectively lose control over it. This data, which can include sensitive information, locations and even potentially health-related inferences, becomes a commodity traded between corporations without the individual's ongoing consent or oversight. This loss of control poses privacy concerns and opens the door to unfairness and discrimination.

It's crucial not to overlook the significant role played by credit reference agencies such as Equifax and Experian. These agencies amass vast amounts of personal financial data, influencing decisions on loans, insurance and even job applications. Their operations exemplify the broader challenges of data privacy, underscoring the complexity of managing personal information across different sectors.

The use of detailed profiles created from aggregated data can lead to decisions that affect individuals' access to services, pricing and opportunities without their knowledge or consent. For instance, a person could find themselves paying higher insurance premiums based on opaque risk assessments or even being targeted by predatory lending practices.

If biases in data collection or analysis exist, they can perpetuate discrimination, affecting marginalised groups disproportionately. The same can be said for systems leveraging AI or Machine Learning to make decisions or inferences.

The ethical dilemmas posed by this data ecosystem are profound. They challenge us to reconsider the frameworks governing data collection, consent and privacy. Ensuring that individuals retain control over their data and that the benefits of this data ecosystem are more equitably distributed requires concerted efforts from regulators, companies and society at large.

What Can (and Should) Be Done

Alternative Approaches

As we said at the beginning of the article, data brokers and this invisible data market aren’t going to go away - the profits to be made from buying and selling data are too great.

So, that leaves us with one real option: to explore an alternative, privacy-first way for this invisible data market to operate. Such approaches to data collection and use can give consumers more control over their personal information while still enabling businesses to benefit from data insights.

Google Privacy Sandbox

Google’s Privacy Sandbox initiative, aimed at phasing out third-party cookies, proposes new technologies designed to protect individual privacy while still providing advertisers with the data they need for effective campaigns. 

One early proposal was FLoC (Federated Learning of Cohorts), an alternative to third-party cookies. FLoC grouped together people with similar browsing habits, allowing AdTech companies to market to the “cohort” rather than the individual. Google has since retired FLoC in favour of the Topics API, which pursues the same cohort-style goal.
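The cohort idea can be sketched in a few lines, loosely in the spirit of FLoC's SimHash-based grouping: users whose browsing histories hash to the same short ID form a cohort, and advertisers only ever see the group label. The hashing scheme, bit width and domain names below are invented for illustration and differ from the real FLoC algorithm.

```python
import hashlib

# Rough, illustrative sketch of cohort assignment: similar browsing
# histories map to the same short cohort ID, so an advertiser sees a
# group label rather than an individual's history. Not the real FLoC.

def cohort_id(domains, bits=8):
    sums = [0] * bits
    for d in domains:
        h = int(hashlib.sha256(d.encode()).hexdigest(), 16)
        for i in range(bits):
            sums[i] += 1 if (h >> i) & 1 else -1  # per-bit vote across domains
    # Each output bit takes the majority vote for that position
    return sum((1 << i) for i, s in enumerate(sums) if s > 0)

alice = cohort_id({"news.example", "cooking.example", "travel.example"})
bob   = cohort_id({"news.example", "cooking.example", "travel.example"})
print(alice == bob)  # identical histories always land in the same cohort
```

With an 8-bit ID there are only 256 cohorts, so many users share each label; the privacy argument rests on keeping each cohort large enough that no individual stands out.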

However, it's crucial to address the ongoing debates and concerns surrounding the Privacy Sandbox. Critics cited by The Register [6] argue that, despite its privacy-centric goals, the initiative could consolidate Google's dominance in digital advertising, raising questions about market competition and the true extent of privacy protections for users.

The UK’s Competition and Markets Authority [7] shares these concerns and has required Google to pause cookie deprecation until they are addressed. 

These concerns highlight the complexity of balancing privacy enhancements with the economic dynamics of the advertising industry.

Privacy Enhancing Technologies

We’ve talked about specific applications for Privacy-Enhancing Technologies (PETs) before in some of our other articles. PETs, such as federated learning, homomorphic encryption and differential privacy, allow for the analysis of data while preserving the privacy of individual records.
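As a concrete illustration of one such PET, here is a minimal sketch of differential privacy's Laplace mechanism: calibrated noise is added to an aggregate count so that the published result barely changes whether or not any single record is included. The dataset and epsilon value are invented for illustration.

```python
import random

# Minimal differential-privacy sketch: the Laplace mechanism adds noise
# scaled to the query's sensitivity (1, for a count) over epsilon, so no
# single record can be confidently inferred from the released number.

def dp_count(records, predicate, epsilon=1.0):
    """Return a noisy count of records matching the predicate."""
    true_count = sum(1 for r in records if predicate(r))
    # Difference of two Exponential(epsilon) draws is Laplace(0, 1/epsilon)
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

drivers = [{"speeding": True}] * 40 + [{"speeding": False}] * 60
print(dp_count(drivers, lambda r: r["speeding"]))  # close to 40, plus noise
```

The analyst still learns roughly how many drivers speed, but can no longer determine whether any particular driver is in the speeding group; smaller epsilon means more noise and stronger privacy.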

In our opinion, every business should leverage some form of PET to protect the data they gather and process as this is the only way to truly protect user privacy.

Implementing such technologies can enable companies to gain valuable insights without compromising individual privacy.

Additionally, models that allow individuals to directly control and potentially monetise their data represent a significant shift in the data ecosystem. Anonymisation techniques can further ensure that data used for analysis and decision-making does not reveal personal identities. 
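One common anonymisation check, k-anonymity, can be sketched as follows: every combination of quasi-identifiers in a released dataset must appear at least k times, otherwise the rows carrying a rare combination could single someone out. The field names, rows and threshold below are invented for illustration.

```python
from collections import Counter

# Illustrative k-anonymity check: flag quasi-identifier combinations that
# appear fewer than k times, since those rows risk re-identification.

def violates_k_anonymity(rows, quasi_ids, k=3):
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    return [combo for combo, n in groups.items() if n < k]

rows = [
    {"age_band": "30-39", "postcode": "SW1", "condition": "A"},
    {"age_band": "30-39", "postcode": "SW1", "condition": "B"},
    {"age_band": "30-39", "postcode": "SW1", "condition": "A"},
    {"age_band": "40-49", "postcode": "N1",  "condition": "C"},  # unique combo
]
print(violates_k_anonymity(rows, ["age_band", "postcode"]))  # [('40-49', 'N1')]
```

A publisher would generalise or suppress the flagged rows (for example, widening the age band) before release; note that k-anonymity alone does not stop the kind of cross-dataset linkage described earlier, which is why it is usually combined with other PETs.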

Providing consumers with platforms to manage their data preferences, including options to opt in or out of data sharing, can empower individuals and give them tangible benefits for their participation. Companies like Gener8 [7], which let users earn money from the tracking and sharing of their own data, are challenging the status quo.

By adopting a combination of regulatory measures, industry standards and innovative approaches to data management and protection, it is possible to create a more equitable and respectful data ecosystem. 

These strategies, while challenging to implement, offer a path toward reconciling the need for data to drive business and innovation with the imperative to protect individual privacy and autonomy.

Regulation and Transparency

Clear regulations are essential to address the challenges and ethical dilemmas presented by the widespread collection and use of data. Regulations should mandate transparency about what data is being collected, how it is being used and with whom it is being shared. Such transparency is fundamental to ensuring that consumers can make informed choices about their data.

GDPR, CCPA and the other regulations coming into force globally aren’t the endgame. They are the starting point, not the pinnacle. 

Regulations should enforce the principle of minimal data collection — only collecting data that is necessary for the specified service or benefit. Limiting the scope of data collection can significantly reduce the potential for misuse and the risk of privacy breaches. In addition, there should be a push for greater accountability, with companies being required to demonstrate compliance with privacy laws and to report data breaches promptly.

Industry Standards

The establishment of industry-wide standards for the ethical collection, security and use of data is critical. These standards should be developed collaboratively by stakeholders across the spectrum, including regulators, businesses, privacy advocates and consumers. 

They should encompass best practices for data anonymisation, secure data storage and the ethical use of AI and predictive models to prevent discrimination and ensure fairness. But these standards should also go a step beyond that and mandate the use of Privacy-Enhancing Technologies across all operations.  

For example, Differential Privacy should be a non-negotiable minimum standard in data analysis for any firm, in any industry, to ensure that individual data can’t be isolated, reconstituted by AI or inferred from the combination of multiple data sets.

Standards could also include guidelines for obtaining genuine informed consent, with an emphasis on clear, concise and accessible information for consumers. By adhering to common standards, industries can foster trust with consumers, mitigate risks and create a more level playing field.

Conclusion

Navigating the complex data-sharing ecosystem reveals a reality where personal data is gathered, shared and leveraged against users, often without the smallest attempt to gain their consent.

We all recognise that data is crucial to modern life and we also recognise that changing the status quo is a monumental task.  But, when you consider the impact that these data sharing practices have on people, something has to change.

Why should prospective mothers like Siobhan, who have suffered the horror of a miscarriage, be subjected to targeted advertising months or years later, just because they can’t find a way to opt out? 

Why should people who work for Planned Parenthood, or need to use its services, become targets of hate campaigns from anti-abortion campaigners, simply because they once travelled to or through a location and that geolocation data was sold to a marketing agency?

While it will probably be difficult, or indeed impossible, to dismantle this invisible data sharing market, that isn’t to say we can’t put privacy first.

This transformation of data into insights brings to light the crucial balance between innovation and privacy, highlighting the ethical responsibilities of businesses in handling personal information.

This involves committing to transparency, prioritising privacy and implementing standards that ensure data practices are both responsible and respectful of individual rights.

The call for action extends to fostering an environment where ethical data use is integral to corporate culture, advocating for privacy-enhancing technologies and ensuring informed consent. As we advance, the collective effort to cultivate a fair and privacy-conscious data ecosystem is not just beneficial but essential for sustaining trust and integrity.


Should the onus be on individuals to comb through dense legal documents to understand what they're agreeing to? Or should companies be required to ensure that consent is informed and obtained through clear, concise, and direct communication?

What Is The Value of This Data?

The value derived from the collection and analysis of car data is disproportionately in favour of companies and data brokers, with consumers often receiving a minimal return. This imbalance prompts a reevaluation of the data exchange paradigm. 

While businesses argue that data collection enables them to provide personalised services and improvements in safety and efficiency, the tangible benefits for consumers are frequently less clear or impactful.

For instance, while drivers may receive minor discounts on insurance premiums for consenting to monitoring, the deeper value of the data collected far exceeds these incentives. Insurance companies and data brokers can leverage this information to refine risk models, develop new products and even influence market strategies. This vast potential for profit and innovation contrasts sharply with the limited benefits offered to consumers, such as marginal discounts or slightly more personalised services.

The value exchange is not just about financial gains or discounts, it's also about control and privacy. Consumers often trade away significant aspects of their privacy in exchange for services or conveniences without fully understanding the implications which are hidden within complicated privacy policies.

This trade-off can lead to scenarios where consumers are left vulnerable to data breaches, unwanted surveillance and misuse of their personal information for targeted advertising.

The question of who benefits most from this ecosystem is thus twofold: it concerns not only the economic value derived from data but also the broader implications for consumer autonomy and privacy. 

Redressing this imbalance requires more than just offering consumers better incentives; it necessitates a fundamental shift in how data value is perceived and distributed within the digital economy.

Who Controls the Data?

Once personal data enters the broker ecosystem, individuals effectively lose control over it. This data, which can include sensitive information, locations and even potentially health-related inferences, becomes a commodity traded between corporations without the individual's ongoing consent or oversight. This loss of control poses privacy concerns and opens the door to unfairness and discrimination.

It's crucial not to overlook the significant role played by credit reference agencies such as Equifax and Experian. These agencies amass vast amounts of personal financial data, influencing decisions on loans, insurance and even job applications. Their operations exemplify the broader challenges of data privacy, underscoring the complexity of managing personal information across different sectors.

The use of detailed profiles created from aggregated data can lead to decisions that affect individuals' access to services, pricing and opportunities without their knowledge or consent. For instance, a person could find themselves paying higher insurance premiums based on opaque risk assessments or even being targeted by predatory lending practices.

If biases in data collection or analysis exist, they can perpetuate discrimination, affecting marginalised groups disproportionately. The same can be said for systems leveraging AI or Machine Learning to make decisions or inferences.

The ethical dilemmas posed by this data ecosystem are profound. They challenge us to reconsider the frameworks governing data collection, consent and privacy. Ensuring that individuals retain control over their data and that the benefits of this data ecosystem are more equitably distributed requires concerted efforts from regulators, companies and society at large.

What Can (and Should) Be Done

Alternative Approaches

As we said at the beginning of the article, data brokers and this invisible data market aren’t going to go away - the profits to be made from buying and selling data are too great.

So, that leaves us with one real option: to explore an alternative, privacy-first way for this invisible data market to operate. Alternative approaches to data collection and use can give consumers more control over their personal information while still enabling businesses to benefit from data insights.

Google Privacy Sandbox

For example, Google’s Privacy Sandbox Initiative, aimed at phasing out third-party cookies, proposes new technologies designed to protect individual privacy while providing advertisers with the data they need for effective campaigns. 

One proposal under the Sandbox was FLoC (Federated Learning of Cohorts), an alternative to third-party cookies that grouped together people with similar browsing habits, allowing AdTech companies to market to the “cohort” rather than the individual. (Google has since abandoned FLoC in favour of the Topics API, which pursues the same interest-based goal with coarser signals.)
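FLoC's early trials derived cohort IDs from a locality-sensitive hash (SimHash) of the user's browsing history, so that users with overlapping histories tend to land in the same cohort. A toy sketch of that idea, heavily simplified (real FLoC hashed richer feature vectors and enforced minimum cohort sizes for anonymity):

```python
import hashlib

def simhash_cohort(domains, bits=8):
    """Toy SimHash: sum signed bit votes from each domain's hash.
    Users with similar browsing histories get the same or nearby
    cohort IDs; with 8 bits there are only 256 possible cohorts."""
    totals = [0] * bits
    for domain in domains:
        h = int(hashlib.sha256(domain.encode()).hexdigest(), 16)
        for i in range(bits):
            totals[i] += 1 if (h >> i) & 1 else -1
    return sum((1 << i) for i, t in enumerate(totals) if t > 0)

alice = simhash_cohort(["news.example", "cooking.example", "travel.example"])
bob = simhash_cohort(["news.example", "cooking.example", "travel.example"])
print(alice == bob)  # identical histories always map to the same cohort
```

The privacy argument is that an advertiser only ever sees the small cohort number, never the browsing history behind it; the criticism is that cohort membership can itself leak sensitive traits.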

However, it's crucial to address the ongoing debates surrounding the Privacy Sandbox. Critics quoted by The Register [6] argue that, despite its privacy-centric goals, the initiative could consolidate Google's dominance in the digital advertising space, raising questions about market competition and the true extent of privacy protections for users.

The UK’s Competition and Markets Authority [7] has similar concerns and has forced Google to halt cookie deprecation until its concerns are addressed. 

These concerns highlight the complexity of balancing privacy enhancements with the economic dynamics of the advertising industry.

Privacy Enhancing Technologies

We’ve talked about specific applications for Privacy-Enhancing Technologies (PETs) before in some of our other articles. PETs, such as federated learning, homomorphic encryption and differential privacy, allow for the analysis of data while preserving the privacy of individual records.

In our opinion, every business should leverage some form of PET to protect the data they gather and process as this is the only way to truly protect user privacy.

Implementing such technologies can enable companies to gain valuable insights without compromising individual privacy.

Additionally, models that allow individuals to directly control and potentially monetise their data represent a significant shift in the data ecosystem. Anonymisation techniques can further ensure that data used for analysis and decision-making does not reveal personal identities. 
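Anonymisation is often assessed in terms of k-anonymity: every combination of quasi-identifiers (age band, postcode prefix and so on) must be shared by at least k records, so no individual stands out. A minimal check over made-up, already-generalised records might look like this:

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True if every quasi-identifier combination appears at least k times."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in combos.values())

# Made-up records, generalised into age bands and postcode prefixes.
records = [
    {"age_band": "30-39", "postcode": "SW1", "condition": "asthma"},
    {"age_band": "30-39", "postcode": "SW1", "condition": "diabetes"},
    {"age_band": "40-49", "postcode": "N1", "condition": "asthma"},
    {"age_band": "40-49", "postcode": "N1", "condition": "flu"},
]

print(is_k_anonymous(records, ["age_band", "postcode"], k=2))  # True
```

k-anonymity is a floor, not a guarantee: it says nothing about the sensitive values within a group, which is why techniques like differential privacy (below in spirit, not as a cross-reference) are often layered on top.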

Providing consumers with platforms to manage their data preferences, including options to opt in or out of data sharing, can empower individuals and provide them with tangible benefits for their participation. Companies like Gener8 [7], which let users earn money from the tracking and sharing of their own data, are challenging the status quo.

By adopting a combination of regulatory measures, industry standards and innovative approaches to data management and protection, it is possible to create a more equitable and respectful data ecosystem. 

These strategies, while challenging to implement, offer a path toward reconciling the need for data to drive business and innovation with the imperative to protect individual privacy and autonomy.

Regulation and Transparency

Clear regulations are essential to address the challenges and ethical dilemmas presented by the widespread collection and use of data. Regulations should mandate transparency about what data is being collected, how it is being used and with whom it is being shared. Such transparency is fundamental to ensuring that consumers can make informed choices about their data.

GDPR, CCPA and the other regulations coming into force globally aren’t the endgame; they are the starting point, not the pinnacle. 

Regulations should enforce the principle of minimal data collection — only collecting data that is necessary for the specified service or benefit. Limiting the scope of data collection can significantly reduce the potential for misuse and the risk of privacy breaches. In addition, there should be a push for greater accountability, with companies being required to demonstrate compliance with privacy laws and to report data breaches promptly.

Industry Standards

The establishment of industry-wide standards for the ethical collection, security and use of data is critical. These standards should be developed collaboratively by stakeholders across the spectrum, including regulators, businesses, privacy advocates and consumers. 

They should encompass best practices for data anonymisation, secure data storage and the ethical use of AI and predictive models to prevent discrimination and ensure fairness. But these standards should also go a step beyond that and mandate the use of Privacy-Enhancing Technologies across all operations.  

For example, Differential Privacy should be a non-negotiable minimum standard in data analysis for any firm, in any industry, to ensure that individual data can’t be isolated, reconstituted by AI or inferred from the combination of multiple data sets.
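To make that concrete, the classic mechanism behind differential privacy adds calibrated Laplace noise to aggregate statistics, so that any one person's presence or absence barely changes the published result. A minimal sketch for a count query (sensitivity 1), using made-up telemetry readings:

```python
import random

def laplace_noise(scale: float) -> float:
    """Laplace(0, scale) noise, sampled as the difference of two
    independent exponential draws with mean `scale`."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def dp_count(values, predicate, epsilon: float) -> float:
    """Differentially private count. A count query has sensitivity 1:
    adding or removing one person changes the true count by at most 1,
    so Laplace noise with scale 1/epsilon gives epsilon-DP."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

speeds = [68, 72, 95, 110, 64, 88, 101]  # made-up speed telemetry readings
noisy = dp_count(speeds, lambda s: s > 90, epsilon=1.0)
print(f"noisy count of speeding readings: {noisy:.1f}")
```

A smaller epsilon means more noise and stronger privacy; the analyst still learns roughly how many readings exceeded the threshold, but cannot tell whether any particular driver's record is in the dataset.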

Standards could also include guidelines for obtaining genuine informed consent, with an emphasis on clear, concise and accessible information for consumers. By adhering to common standards, industries can foster trust with consumers, mitigate risks and create a more level playing field.

Conclusion

Navigating the complex data sharing ecosystem reveals a reality where personal data is gathered, shared and leveraged against users, often without any meaningful attempt to gain their consent.

We all recognise that data is crucial to modern life, and we also recognise that changing the status quo is a monumental task. But when you consider the impact these data sharing practices have on people, something has to change.

Why should prospective mothers, like Siobhan, who suffer the horror of a miscarriage be subjected to targeted advertising months or years later, just because they can’t find a way to opt out? 

Why should people who work for Planned Parenthood, or need to use those services, be victims of targeted hate campaigns from anti-abortion campaigners because they travelled to or through a location once and that geo-location data has been sold to a marketing agency?

While it will probably be difficult, or indeed impossible, to dismantle this invisible data sharing market, that isn’t to say we can’t put privacy first.

This transformation of data into insights brings to light the crucial balance between innovation and privacy, highlighting the ethical responsibilities of businesses in handling personal information.

This involves committing to transparency, prioritising privacy and implementing standards that ensure data practices are both responsible and respectful of individual rights.

The call for action extends to fostering an environment where ethical data use is integral to corporate culture, advocating for privacy-enhancing technologies and ensuring informed consent. As we advance, the collective effort to cultivate a fair and privacy-conscious data ecosystem is not just beneficial but essential for sustaining trust and integrity.