“I regularly evaluate the companies out there that are AI native and trying to solve these problems: teams founded by the people who used to run AI and machine learning at Stripe and all these other companies, and it’s not that hard to break their stuff. If it was, I would be buying them,” Reust told Fintech Nexus at Money20/20.
Wealthtech platform Betterment operates in a crowded marketplace. On one end sits Wealthfront, a major competitor that recently filed for an IPO, along with a hodgepodge of other tech companies some might call “robo-advisors,” although Betterment doesn’t like that moniker. On the other end is an even messier range of outfits: your Schwabs, your Fidelitys, your LPL-allied CFPs, and a seemingly infinite tail of others.
What to do in such a saturated space? In an interview with Fintech Nexus on the sidelines of Money20/20, Betterment President Mike Reust says the answer lies not in injecting generative AI everywhere, but in solving for trustworthiness, real-world user needs, compliance, and unit economics. Likening the growth of generative AI in finance to the high-stakes, drawn-out introduction of self-driving cars, and previewing where AI sits on Betterment’s product roadmap, Reust sees life savings as something most consumers don’t want to tinker with experimentally.
“It’s easy to generate an image five different ways, but am I going to let AI trade on my account with margin? Absolutely not. Maybe some people will, but I don’t think most will for a long time,” he said.
You chatted with FTA’s Penny Lee during the program. You mentioned that generative AI isn’t categorically useful for wealthtech. How much of this is a question of generative AI’s current capabilities, versus something more fundamental and static?
Did you follow the self-driving arc at all? It’s been the thing that was going to be solved within a year. Aurora and other major companies have been on the cusp of solving medium- or short-haul trucking in Texas — straight roads with perfect weather — for years, right? And only this year did they start unsupervised rides. I think this stuff will take longer than people think. I do not think it will be normal for someone to sign up and use a chatbot-style interface to truly plan their finances. Yeah, they’ll use ChatGPT and ask various questions. But I don’t think a Betterment or similar is going to put that in front of someone and replace a CFP for even modest use cases for years.
It’s worth breaking down the kinds of problems we’re all trying to solve, and getting really precise about the techniques used. If you want to manage a portfolio, you’re asking very precise kinds of questions: How far am I from the target? How much have I drifted? What are the tax implications of making a trade? These are very discrete, modelable problems. We don’t need magic: We have very good math to solve these problems.
If you zoom out a little bit and you’re like, “Okay, well, I want to have a conversation with the client, because I want them to be more engaged,” there are some people who want to be truly active traders, but they’re just not our customer. When you try to zoom out and you try to say, “Okay, Adam, you are 30 years old, you live in Seattle, you make X dollars per year. You have this much saved in your 401(k), that’s the context, now what should you do? Can we have a back and forth in conversation?” it can look pretty good until it looks really stupid, really fast.
And then the chatbot says “Oh, you’re so smart for catching that!”
I think that final 10%, 5%, 1%, whatever, is going to take a long time, just like autonomous driving. It was really easy to get cars on the highway to go 10 miles an hour over the speed limit, but many other components have taken a decade. I think the same is true here. In a lot of ways personal finance might be more of a bounded problem but — I don’t know — I regularly evaluate the companies out there that are AI native and trying to solve these problems: teams founded by the people who used to run AI and machine learning at Stripe and all these other companies, and it’s not that hard to break their stuff. If it was, I would be buying them.
The best you can kind of do is try to detect when it feels like it’s going off the rails and escalate. But it’s really hard to detect that because you kind of need to know what the correct answer is. And if you already have another system that can tell you the correct answer, you just use that.
Some of the AI companies are trying to go more direct: Anthropic, for example, announced some stuff they were going to try to do more directly in finance itself with Claude. But I don’t know: It feels like a harder problem than the industry thinks it is right now. Again, I’m talking about the CFP-like experience. Precise use cases, like, “Hey, what account type should I open next?” we’ll have that go live in, like, a month. But this CFO-in-your-pocket stuff is further off. It’s also scary.
Forget AI: People have been trying to do self-driving money for a long time, and we’ve all walked away from it because clients didn’t want it. Even when it worked — we had stuff in-market, like a two-way sweep for a while, and Wealthfront had a self-driving wallet for a while — the clients just don’t want it. They might want to set up a simple rule like, “Hey, I want my checking account to always have $10,000,” and then pull from savings or treasury or somewhere, but it seems like, at most, 4% or 5% of clients want that kind of stuff. The rest just want to move the money on their own, because they don’t trust the stuff. And even when we had it in place, and it worked for a while, it didn’t grow in adoption meaningfully.
Maybe AI will convince them it’s good enough or cool enough. But I think most people are learning right now that AI is cool and not trustworthy. I think we’re just a ways off from the trust in the product, or the products being good enough for clients to trust them.
OpenAI has been hiring former bankers and other finance professionals to write and train financial models for $150 an hour.
These are the same companies that think AGI is measurably near. If AGI happens, what are we wasting our time with? So sure, they have infinite money. Why wouldn’t they be attacking all these problems? Because they really know they’re further away, and of course, they’re going to come after the biggest markets in the universe, like, shocking: They’re going to hire some people in the most profitable industry in the history of our species. Of course. And so we’ll see what they do. I’m not trying to diminish them. Just, you know, it turns out it’s still a regulated space. Turns out customer trust is complicated.
And you need things to be interpretable and explainable and so forth for FINRA reasons.
You have to be able to tell them why it happened that way and why it didn’t. If you get sued at some point and you’re asked to explain why you did or didn’t do something, and your answer is “I don’t know, we just do what the AI tells us,” it’ll result in more class-action lawsuits. We’re hiring people, we’re working directly on it, we’re entrenched, we’re regularly evaluating companies and taking it seriously, but I don’t know. I just spent the last decade hearing that everything was going to be on-chain imminently. Some people want some bitcoin, and that still seems to be it, right?
There are definitely resonances.
It’s the same story. It’s different technology, obviously, but it’s had the same MBA-level pitch. “You won’t need all these departments and these divisions and blah, blah, blah, because the technology solves it.” I don’t know: AI is much more real than the on-chain crypto stuff. But I don’t know: I talk to my product and design teams, and when we discuss AI solving problems, it’s like, “Is that the interface? Do clients just want to come chat at us about managing their assets?” That’s not clear. We’re testing that. My intuition is that it’s not clear. Is this going to be a paradigm shift in UX preferences, where people want to use chatbots for everything? Or is it going to be more like VR or augmented reality, where it’s cool, but it’s still very, very niche? In practice, it can be big business — Quest sells a lot of units — but how many people do you know that use a Quest more than once a year?
One person.
I thought I was gonna be gaming on these things all the time, but it’s such a freaking hassle. I think if you took the essence of this conference, and then replayed it in five years, you’d be like, “That was overhyped.”
And people come to Betterment for a reason rather than ChatGPT. That reason probably isn’t, “Oh, I really want to open the Betterment app and then talk to ChatGPT there.”
There will be those little use cases. Like, sure, “What’s my transaction status? Actually cancel it.” That’s fine, but it’s not going to go radically further.
And then on the flip-side, for instance, Google throws Gemini on everything. For my reporting, every time I open a Google Doc, Gemini asks something along the lines of, “How do you want me to write your document for you?” and, fundamentally, I don’t want that. There has to be a meaningful opt-out option.
They’re trying to sprinkle Gemini goodness everywhere, and it just kind of sucks everywhere. It’s good for one thing: generating cool photos my daughter likes.
Another component of this is maybe more long-term, but it surrounds the costs of AI. I feel like we’re in the, like, “Ubers are $3 to go anywhere,” stage of things. Turned out SoftBank was subsidizing my trips to Williamsburg for six years. And now SoftBank is maybe pouring billions of dollars into generating pictures your daughter likes. When it comes to your product roadmap, how much do you think about the undergirding cost of some of these AI-driven technologies, and how much they might fluctuate in cost over time?
It’s a really good question. I’m going to start with the tools we let our employees use to do their jobs. That’s where we spend a lot more money right now. It costs a lot of money per month per engineer on Claude or whatever we’re using. I have been having fun debates with my team on negotiating these contracts to try to make the cost structure make sense. Because if a company wants to give me a fixed-rate contract per person, that doesn’t work. The usage base will be completely different, and either that company is going out of business or they’ll have to completely restructure the contract in a couple of years. So we help them organize their contracts more sanely so it doesn’t mess everything up in the future. It reminds me of New Relic, which screwed this up as an early cloud infrastructure player.
It is obviously true that there is VC-funded price suppression at the moment. If a tool costs 10% of an engineer’s salary a year, I need it to be making the engineer 50% better, because it’s going to cost more like that in the future. It doesn’t matter in the moment, but I’m keeping my eye on those sorts of efficiency gains. Tools that only make us 10% better while costing 10% don’t work, because I know that’s not sustainable.
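Reust’s rule of thumb can be sketched as a back-of-envelope calculation. This is purely illustrative: the 10% and 50% ratios are his, while the salary figure and function name below are hypothetical.

```python
# Back-of-envelope version of the rule of thumb: a tool whose long-run
# price approaches some fraction of an engineer's salary has to deliver
# productivity gains comfortably above that fraction to be worth keeping.

def tool_is_worth_it(salary: float, tool_cost: float, productivity_gain: float) -> bool:
    """Return True if the value created clearly exceeds the tool's annual cost."""
    value_created = salary * productivity_gain
    return value_created > tool_cost

salary = 200_000  # hypothetical fully loaded annual engineer cost

# Tool priced at 10% of salary that makes the engineer 50% better: keep it.
print(tool_is_worth_it(salary, 0.10 * salary, 0.50))  # True

# Tool priced at 10% of salary that only makes the engineer 10% better:
# break-even at best, i.e. "that's not sustainable."
print(tool_is_worth_it(salary, 0.10 * salary, 0.10))  # False
```

The asymmetry matters because today’s subsidized price is not the long-run price: a tool that only breaks even at the subsidized rate is underwater once the subsidy ends.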
Back to the product roadmap: We don’t have any AI use cases at scale such that the costs add up to meaningful dollars, because it’s still just cheap per call, per token, for whatever you need, and so I’m not that worried about it at the moment, but we’re going to have a bunch of use cases out there we’re testing, and I will be paying much closer attention to that. Go with the example that I think is the one we’ll launch first, which is — we’re not talking about this a ton publicly — “Which account should I open next?” We are actually using CFPs to train models, and that’s effectively a one-time cost. And then there’s some ongoing stuff occasionally, but like, the real cost of the inference at runtime to consume the model for Adam and give the answer for what his next account should be, that costs almost nothing right now, so I don’t really care. But if it was effectively the same funnel conversion rate or outcome as a non-AI-powered solution, it’d be a hard choice. Current use cases involve pretty simple inference and not a crazy model, but if we get to harder problems, I’ll certainly be thinking about it.
Last question relates to demand for these AI products. Is it “shareholder”-driven, or is it user-driven, or a mix?
Definitely more driven by the industry, investors, executives. That’s true. I have great signal that clients are interested. I have no signal they’re ready to embrace it. Obviously, you have to show them some product that’s worth considering embracing. But I see at least as much skepticism as I do optimism from clients. We’re literally conducting user research every week right now on this question. I think it will change over time, but I think it might regress before it gets better, because people are going to be using so many tools that will be laughably mediocre that they won’t consider it seriously. It’s easy to generate an image five different ways, but am I going to let AI trade on my account with margin? Absolutely not. Maybe some people will, but I don’t think most will for a long time.