AI’s Data Problem Looks a Lot Like Finance’s Old One

The consumer data question is one that has long plagued finance. Who owns your data, and how much should institutions be allowed to use it?

Over the course of decades, the financial industry pieced together some hard-earned answers, beginning with the 1999 Gramm–Leach–Bliley Act (GLBA), which required financial firms to disclose their data-sharing practices and offer customers the right to opt out, and continuing through open banking mandates and restrictions on third-party and API access. These rules reshaped expectations around consent, transparency, and ownership for some of the most sensitive data there is: financial data.

Now, AI is echoing those same debates. After all, what could be more sensitive than our own thoughts? They are the essence of what we feed into AI systems today.

Just a few days ago, FTC Chairman Andrew N. Ferguson warned companies not to forfeit Americans' data security or weaken end-to-end encryption as UK and EU governments move to enforce content moderation and make data accessible to law enforcement. The warnings went not only to AI companies but to sectors adjacent to them: cloud computing, data security, social media, and messaging apps.

Many are wondering what happens when corporations have access to personal prompts, private musings, and even therapeutic confessions. Just last month, Sam Altman himself acknowledged that there is no legal confidentiality when using ChatGPT as a therapist.

On the one hand, there are real advantages to monitoring requirements, whether it is the reduction of violent or hateful content or the ability to get ahead of risks like the tragedies users have confided to ChatGPT. On this view, AI conversations are no more sacred than our search history.

On the other, there is an argument that AI conversations should enjoy the same privacy privileges as doctor-patient or attorney-client relationships, given that many consumers treat them as such. Adopting this mindset means admitting there is a fundamental difference in the way humans engage with AI chatbots compared to technologies of old.

So here is some food for thought: when value is generated from the ideas and writing of users, who should get to control the data feeding it? Finance answered its data privacy questions with public trust, regulatory scaffolding, and legal backstops. AI, the next sensitive frontier, is still figuring it out.

Today, we unpack how AI is reliving finance's most painful lessons about trust, and what it takes to innovate responsibly in this age of acceleration.

—The Editors