Recommendations
Meta hit the headlines this week, saying it’ll likely begin using “tens of trillions” of parameters to power content recommendations. The scale, it said, would make it “orders of magnitude” bigger than the largest large language models.
Cool, we guess?
Content recommendations, no doubt, work. We don’t feel compelled to even defend such a statement. The question, though, is to what extent and to what end?
Setting aside arguments over whether we’re helpless against algorithms, it’s likely we can all agree algorithmic feeds have pushed us so deep into niches that it can be hard to dig ourselves out. The numbers back up how effective these feeds are: on Meta’s last earnings call, Mark Zuckerberg said AI recommendations in Reels increased time spent on Instagram by 24%.
It makes sense, then, that Meta would want to improve its recommendations. There aren’t many interesting questions to ask about why, but there are interesting questions to ask about what.
At the core, recommendations are meant to show you something you’re likely to enjoy. That’s true in social media as much as it’s true in commerce as much as it’s true in relationships.
But will a new recommendation model do the same? Or will it do something different?
Meta, in a blog post, said it’s focused on giving users insight into why certain content was recommended and on better surfacing ways to fine-tune the algorithm, so to speak.
This feels … dated?
Meta, sort of, has to play it safe right now. They spent a boatload on metaverse prep and are still climbing out of a massive hole from a stock-performance perspective.
They’re staying close to shore, so to speak.
Better, easier-to-access algorithm “controls” and more transparency are nice, but they feel like features that should have existed 10 years ago. Not to mention that “tens of trillions” of parameters likely means Meta can’t really tell you why you saw what you saw. The logic probably wouldn’t fit in the tooltip.
But if you do have “tens of trillions” of parameters for a new model, you probably have that many parameters for multiple models.
So, what if there were multiple models? What if users could select different models in different situations?
That’d be algorithmic control. It’d be like switching a channel.
This, really, is what TikTok got right—at least from an interface perspective. If you had multiple models to flip through as easily as flipping through videos powered by one model, you’d find an entirely new discovery mechanism.
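To make the channel-switching idea concrete, here’s a minimal sketch of what a model-selectable feed could look like. Everything here is hypothetical: the channel names, the scoring heuristics, and the post fields are invented stand-ins for illustration, not anything Meta or TikTok actually ships.

```python
# Hypothetical sketch of a "switch the channel" feed: the user picks a
# ranking model, and the same content pool gets re-ranked by it.
# All names and heuristics below are invented for illustration.

def score_chronological(post):
    # "latest" channel: rank purely by recency.
    return post["timestamp"]

def score_engagement(post):
    # "for_you" channel: rank by a predicted-engagement score
    # (a stand-in for a learned recommendation model).
    return post["predicted_engagement"]

def score_exploration(post):
    # "discover" channel: favor topics the user has seen least.
    return -post["times_topic_seen"]

CHANNELS = {
    "latest": score_chronological,
    "for_you": score_engagement,
    "discover": score_exploration,
}

def build_feed(posts, channel):
    # Switching the channel swaps the ranking model, not the content pool.
    scorer = CHANNELS[channel]
    return sorted(posts, key=scorer, reverse=True)

posts = [
    {"id": 1, "timestamp": 100, "predicted_engagement": 0.9, "times_topic_seen": 40},
    {"id": 2, "timestamp": 300, "predicted_engagement": 0.2, "times_topic_seen": 2},
    {"id": 3, "timestamp": 200, "predicted_engagement": 0.6, "times_topic_seen": 15},
]

print([p["id"] for p in build_feed(posts, "latest")])   # → [2, 3, 1]
print([p["id"] for p in build_feed(posts, "for_you")])  # → [1, 3, 2]
```

The point of the sketch is that the same posts produce three different feeds depending on which model the user has flipped to, which is what makes it feel like changing a channel rather than tweaking a setting.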