Hacker News | sjs382's comments

How is it better?

I can believe that a conversation like that happened once. Maybe twice, if I want to be extra generous with the benefit of the doubt. It's missing context, but I can have my imagination fill in those gaps.

But he said that this was common.


And to expand: it's a gradient, not black-and-white.


I think it's generally understood among their users (paying customers who make an active choice to use the service) but I agree—they should be explicit re: the disclosure.


Kagi News does not require payment. The articles are indexed by search engines. Anyone can send a link to anyone else or post a link anywhere.

The speculation that most Kagi customers inferred the articles were AI-generated could be correct. Or not. We agree they should disclose in any case.


All AI use should have mandatory disclosure.


I generally side with those who think it's rude to regurgitate something that's AI generated.

I think I am comfortable with some level of AI-sharing rudeness though, as long as it's sourced/disclosed.

I think it would be less rude if the prompt was shared along with whatever was generated, though.


> AI slop eventually will get as good as your average blogger. Even now if you put an effort into prompting and context building, you can achieve 100% human like results.

In that case, I don't think I consider it "AI slop"—it's "AI something else". If you think everything generated by AI is slop (I won't argue that point), you don't really need the "slop" descriptor.


Then the fight Kagi is proposing is against bad AI content, not AI content per se? That's very subjective...


I don't pretend to speak for them, but I'm OK in principle dealing in non-absolutes.


Explicitly in the article, one of the headings is "AI slop is deceptive or low-value AI-generated content, created to manipulate ranking or attention rather than help the reader."

So yes, they are proposing marking bad AI content (from the user's perspective), not all AI-generated content.


Which troubles me a bit, as 'bad' does not have the same definition for everyone.


How is this any different from a search engine choosing how to rank any other content, including penalizing SEO spam? I may not agree with all of their priorities, but I would welcome the search engine filtering out low quality, low effort spam for me.


Yes, that's why we'll publish a blog post on this subject in the coming weeks. We've been working on this topic since the beginning of summer, and right now our focus is on exploring report patterns.

Matt also shared insights about the other signals we use for this evaluation here https://news.ycombinator.com/item?id=45920720

And we are still exploring other factors:

1/ is the reported content ai-generated?

2/ is most content in that domain ai-generated (+ other domain-level signals) ==> we are here

3/ is it unreviewed? (no human accountability, no sources, ...)

4/ is it mindlessly produced? (objective errors, wrong information, poor judgement, ...)
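The four factors above could be sketched as a weighted score that a ranker penalizes. Everything in this sketch — the signal names, the weights, the way they combine — is an illustrative assumption on my part, not Kagi's actual implementation:

```python
# Hypothetical sketch: combining the four slop signals into one penalty.
# Names and weights are made up for illustration; Kagi's real system
# almost certainly differs.

def slop_score(page: dict) -> float:
    """Sum the weights of whichever slop signals fire for a page."""
    weights = {
        "is_ai_generated": 1.0,      # 1/ reported content is AI-generated
        "domain_mostly_ai": 2.0,     # 2/ most content on the domain is AI-generated
        "unreviewed": 1.5,           # 3/ no human accountability, no sources
        "mindlessly_produced": 3.0,  # 4/ objective errors, poor judgement
    }
    return sum(w for name, w in weights.items() if page.get(name, False))

# A page flagged as AI-generated on a mostly-AI domain:
page = {"is_ai_generated": True, "domain_mostly_ai": True}
print(slop_score(page))  # 3.0
```

The point of a graded score rather than a boolean is exactly the "gradient, not black-and-white" framing earlier in the thread: a disclosed, reviewed AI article accumulates less penalty than an unreviewed, error-ridden one.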


There’s a whole genre of websites out there that are a ToC and a series of ChatGPT responses.

I take it to mean they’re targeting that shit specifically and anything else that becomes similarly prevalent and a plague upon search results.


A simple definition would be: it's bad if it isn't labeled as AI content, or if there is no mechanism that allows you to filter out AI content.


That's fine.


I think you're referencing https://kite.kagi.com/

In my view, it's different to ask AI to do something for me (summarizing the news) than it is to have someone serve me something that they generated with AI. Asking the service to summarize the news is exactly what the user is doing by using Kite—an AI tool for summarizing news.

(I'm a Kagi customer but I don't use Kite.)


I'm just realizing that while I understand (and think it's obvious) that this tool uses AI to summarize the news, they don't really mention it on-page anywhere. Unless I'm missing it? I think they used to, but maybe I'm misremembering.

They do mention "Summaries may contain errors. Please verify important information." on the loading screen but I don't think that's good enough.


"Kagi News reads public RSS feeds of thousands of (community-curated) world-wide news sources and utilizes AI to distill them into one perfect daily briefing."

https://news.kagi.com/about


On another page is not on-page. And saying the daily briefing is AI-generated does not communicate that every article is AI-generated.


https://news.kagi.com/world/latest

Where's the part where you ask them to do this? Is this not something they do automatically? Are they not contributing to the slop by republishing slopified versions of articles without as much as an acknowledgement of the journalists whose stories they've decided to slopify?

If they were big enough to matter they would 100% get sued over this (and rightfully so).


> Where's the part where you ask them to do this? Is this not something they do automatically?

It's a tool. Summarizing the news using AI is the only thing that tool does. Using a tool that does one thing is the same as asking the tool to do that thing.

> Are they not contributing to the slop by republishing slopified versions of articles without as much as an acknowledgement of the journalists whose stories they've decided to slopify?

They provide attribution to the sources. They're listed under the headline "Sources" right below the short summary/intro.


It's not the only thing the tool does, as they also publish that regurgitation publicly. You can see it, I can see it without even having a Kagi account. That makes it very much not an on-demand tool; it makes it something much worse than what ChatGPT is doing (and being sued for by NYT in the process).

> They provide attribution to the sources. They're listed under the headline "Sources" right below the short summary/intro.

No, they attribute it to publications, not journalists. Publications are not the ones writing the pieces. They could easily also display the name of the journalist, it's available in every RSS feed they regurgitate. It's something they specifically chose not to do. And then they have the balls to start their about page about the project like so:

> Why Kagi News? Because news is broken.

Downvote me all you want but fuck them. They're very much a part of the problem, as I've demonstrated.


> as I've demonstrated

You have not, you've thrown a temper tantrum


Sure thing bud. Thank you for your well thought out counter-argument.


> What does the US benefit from this new policy?

This really makes me feel like a conspiracy theorist, but it doesn't seem as far from reality as it should...

If there's no response: exhibiting total dominance of the region and being able to make up whatever unverifiable statistics they want re: domestic safety (drugs, gangs, etc).

If there is a response: potential for armed conflict which could become a pretense for interning more citizens with Hispanic heritage, similar to what was done to Japanese Americans in the 1940s.


Depends.

If you're a restaurateur, do you have the .ai files your agency created, an Adobe Illustrator license, and know-how to get in there and change the prices? And then know where/how to deliver the result to get it printed? If so, you probably still have something better to do...

You probably pay an agency an hourly rate plus markup to get them updated, prepped and sent off to be reprinted.

Next time: negotiate fixed prices/timelines for small updates, own the files, and own the relationship with the printers.


While Google has their own chips, they don't really have the market power there to buy up bleeding-edge manufacturing capacity.

Apple on the other hand... (though they're behind in other regards)


But they were just an example. There are many players emerging. You can look at this: https://news.ycombinator.com/item?id=45746246 and even <https://www.nextsilicon.com/> is not on that list.

I don't see a natural monopoly anymore.

