It's not that complicated: you agree to give up 30% of your royalties, and Spotify autoplays your track more than any other track (and includes it more in Release Radar / Discover Weekly / Daily Mix / Radio): https://artists.spotify.com/discovery-mode
No serious label does this, as there's no benefit from those drive-by listens other than making the number go up, but you can bet that nearly every label-less artist who somehow reaches over a million listens on their first release does.
Editorial playlists, on the other hand, actually require you to do well on some of the niche ones before you get "promoted" to the bigger ones.
There's one popular platform that requires disclosing whether and how AI was used (Steam), and if you search for anything about it, all you can find is a sea of articles opposing it.
Before I buy, can you confirm the bridge is GDPR-compliant, AI-Act-ready, has a digital product passport, and passed its environmental impact assessment? Otherwise the local compliance officer will fine us before it even collapses.
>Before I buy, can you confirm the bridge is GDPR-compliant, AI-Act-ready, has a digital product passport, and passed its environmental impact assessment?
Great comment! We've added double-plus-good to your Palantir-Trumport account and 2% off your next Amazon purchase!
It's because LLM tools have design guidelines as part of their system prompts, which makes everything look the same unless you explicitly tell them otherwise.
To give an example that annoys me to no end, Google's Antigravity insists on making everything "anthropomorphic", which gets interpreted as overly rounded corners, way too much padding everywhere, and sometimes even text gradients (a huge no-no in design). So, unless you instruct it otherwise, every webpage it creates looks like a lame attempt at this: https://m3.material.io/
Google Search has roughly a decade of history of doing its best not to get you to click on a search result, but to answer your question directly, or at the very least keep you on their platform while screwing over website owners.
The first two iterations of this were AMP and Instant Answers; the third is AI Overview. AI Overview should not be seen in isolation, but as part of the pattern. If it weren't for AI Overview, Google would double down on some other method of reaching the same goal.
This one will end up the same way the other two did: there's gonna be a vocal minority that considers it unfair and a web killer, the vast majority of users won't have an opinion, Google will not care, "the web" will play along, early adopters will temporarily have an advantage in this "new age" (and some will die in the process), but the vast majority will continue on as if nothing happened.
It's also not gonna be the final iteration of this process because shiny new things sound better to investors than marginal improvements, so X years from now AI Overview is gonna be seen as something "old-fashioned", Google Search will pivot once again, and the rest of the web will follow to keep Google happy.
> the vast majority of users won't have an opinion
They're here, they don't care how they get from point A to point B, and the tech used to achieve that result is completely irrelevant to them. AI? Great. Not-AI, as in the Instant Answers era? Also great. The average Joe does not spend his time thinking about the economics of the web.
But you shouldn't confuse them finding "AI" useful now with them being attached to it long term. It's a hip new tool now, but the novelty will fade and Google will have to reinvent itself all over again. If anything, they kinda screwed themselves over by calling this "AI". AI is supposed to be something within reach, but always some years away. Having spent that term on the current era, it's gonna suck so hard to come up with a new marketing term that reads as an improvement over "AI".
Which is saddening, as the first thing I think when I see one of these overviews is "How do I verify this statement is correct?", and paradoxically it sometimes just slows me down.
Yeah, I hate to say it because I am an AI hater, but I love the AI results in Google and Kagi. I barely click results anymore for basic questions, unless it's something important enough that I need verification to ensure the AI-generated answer wasn't a hallucination. It's been so nice not having to pick through the cesspool that is StackOverflow for answers to quick CLI questions, or wade through SEO-generated, Amazon-affiliate-link garbage for more general questions.
This is what the vast majority of users will do, of course. The issue people have with it is that it breaks the "social contract" of the web, which is that part of the advertising income goes to the site that provided the information the answer is based on. By destroying that income, "overviews" (now including AI overviews, but that's not where it started) are destroying publishers first, and I'm sure it'll go all the way until YouTube is entirely destroyed as well.
Of course, it does not destroy Google's income ... and it destroys the promise Google made long ago, which was to never try to keep users on Google's own platforms.
Oh, and to add insult to injury: want to bet Opal will force app developers into something you probably never even imagined would happen on the web? Pay-per-view. Not for a video. For a website/app.
Some years (decades?) ago, a sysadmin like me might half-jokingly say: "I could replace your job with a bash script." Given the complexity of some of the knowledge work out there, there would be some truth to that statement.
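For flavor, the kind of thing that joke is about might look like the sketch below. Everything here is hypothetical (the script name, the 80% threshold, the canned sample data); it just reads `df -P`-style output and flags filesystems over a usage threshold, the sort of rote check that once was somebody's morning routine.

```shell
#!/bin/sh
# check_disk: flag filesystems above a usage threshold (default 80%).
# Expects `df -P`-style lines on stdin; prints one warning per offender.
check_disk() {
    threshold="${1:-80}"
    awk -v t="$threshold" 'NR > 1 {
        use = $5                 # e.g. "95%"
        sub(/%/, "", use)        # strip the percent sign
        if (use + 0 > t) print "WARN: " $6 " at " use "%"
    }'
}

# Demo with canned data; real use would be: df -P | check_disk 90
printf '%s\n' \
    'Filesystem 1024-blocks Used Available Capacity Mounted' \
    '/dev/sda1  100 95  5 95% /' \
    '/dev/sdb1  100 10 90 10% /data' \
| check_disk 80
```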
The reason nobody did that is that you're not paying knowledge workers for their ability to crunch numbers; you're paying them to have a person to blame when things go wrong. You need them to react, identify why things went wrong, and apply whatever magic is needed to fix some edge case. Since you'll never be able to blame a failure on ChatGPT and get away with it, you're always gonna need a layer of knowledge workers between the business owner and your LLM of choice.
You can't get rid of knowledge workers with AI. You might get away with reducing their numbers, and their day-to-day work might change drastically, but the need for them is still there.
Let me put it another way: Can you sit in front of a chat window and get the LLM to do everything that's asked of you, including applying all the experience you already have to make some sort of business call? Given current context window limits (~100k tokens), can you fit all of the inputs you need to produce an output into a text file smaller than the capacity of a floppy disc (~400k tokens)? And even if the answer to that is yes, if it weren't for you, who else in your organization is gonna write that file for each decision you currently make? Those are the sorts of questions you should be asking before you start panicking.
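The floppy-disc arithmetic above can be sketched in a few lines. The bytes-per-token ratio is an assumption (roughly 3.5-4 bytes per token for English prose; at ~4 you get ~370k tokens, the same ballpark as the ~400k figure in the comment), and the 100k window is just the number the comment uses, not any particular model's spec:

```python
BYTES_PER_TOKEN = 4          # rough assumption for English text
CONTEXT_WINDOW = 100_000     # tokens, the figure used in the comment
FLOPPY_BYTES = 1_474_560     # a 1.44 "MB" floppy: 1440 * 1024 bytes

def approx_tokens(n_bytes: int, bytes_per_token: int = BYTES_PER_TOKEN) -> int:
    """Estimate how many tokens n_bytes of text turns into."""
    return n_bytes // bytes_per_token

floppy_tokens = approx_tokens(FLOPPY_BYTES)
print(f"~{floppy_tokens:,} tokens on a floppy")                       # ~368,640
print(f"fits in one context window: {floppy_tokens <= CONTEXT_WINDOW}")  # False
```

So a full floppy of text is several context windows' worth, which is exactly why "can your decision inputs fit in one file?" is the interesting question.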