It's probably regular airplanes, stars, and Venus, and now, I assume, civilian drones. Once people started claiming drones, other people who had never bothered to study the night sky in even a basic way started seeing lights they didn't understand (planes, stars, planets), and they're suggestible. Then people probably started flying their own drones to investigate or to prank others.
Plus Starlink and other low-orbit satellites and space stations. In 2014, when the Russian invasion of Ukraine started, we had so many reports from folks about "drones" in the sky that were actually the ISS or Venus. I saw this myself when I was at Chongar, near Russian-occupied Crimea: my comrades pointed at bright Venus in the night sky and said it could be a Russian drone. I dismissed the claim by pointing out that an army doesn't turn its lights on when on a mission.
Given that the top Google results are now generated, I think we already have a massive recursion problem. I think we would benefit from training a model specifically to estimate the likelihood that content is generated, and then biasing other models against the high-likelihood generated content, so that we don't end up with LLM echo chambers.
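Roughly what I'm picturing, purely as a sketch (the detector here is a dummy stand-in, not a real classifier):

    # Down-weight likely-generated text when assembling a training corpus.
    # p_generated() is a hypothetical placeholder for a trained detector.

    def p_generated(text: str) -> float:
        """Dummy detector: assumed probability that text is machine-generated."""
        return 0.9 if "as an ai language model" in text.lower() else 0.1

    def training_weight(text: str, penalty: float = 4.0) -> float:
        """Shrink a document's sample weight as the detector score rises."""
        return 1.0 / (1.0 + penalty * p_generated(text))

    corpus = [
        "Grandma's handwritten pierogi recipe, stains and all.",
        "As an AI language model, I cannot provide that recipe.",
    ]
    for doc in corpus:
        print(f"{training_weight(doc):.2f}  {doc}")

The point isn't the detector's accuracy; even a noisy score used as a soft weight would bias the next generation of models away from the echo chamber.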
Right. Google already has a solution: https://deepmind.google/technologies/synthid/
Everyone insists on training theirs to look human-generated, so the horses have left the stable on this.
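To be fair, watermarking doesn't depend on the text looking non-human. SynthID's internals aren't public, but the general "green list" trick goes roughly like this (toy sketch with a fake vocabulary; all names are mine):

    import hashlib
    import random

    VOCAB = [f"tok{i}" for i in range(1000)]

    def green_list(prev_token: str) -> set:
        # Seed a PRNG from the previous token so the generator and the
        # detector derive the same "green" half of the vocabulary.
        seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
        return set(random.Random(seed).sample(VOCAB, len(VOCAB) // 2))

    def generate(length: int = 300) -> list:
        # Stand-in for a real LM: sample uniformly but prefer green tokens.
        out = ["<s>"]
        for _ in range(length):
            greens = green_list(out[-1])
            pool = list(greens) * 3 + VOCAB   # bias toward the green list
            out.append(random.choice(pool))
        return out[1:]

    def green_fraction(tokens: list) -> float:
        prevs = ["<s>"] + tokens[:-1]
        hits = sum(t in green_list(p) for p, t in zip(prevs, tokens))
        return hits / len(tokens)

    print(green_fraction(generate()))                    # ~0.8: watermark shows
    print(green_fraction(random.choices(VOCAB, k=300)))  # ~0.5: chance level

The catch is exactly what you say: detection only works if the vendor wants the output to be detectable.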
Isn't everybody always gushing about how LLMs are supposed to get better all the time? If that's true, then detecting generated fluff will be a moving target and an incessant arms race, just like SEO. There is no escape.
Yep, that's what I've been thinking since people started talking about it. I hear that AI plagiarism detectors can never work, since LLM output can never be detected with any accuracy. Yet I also hear that LLMs-in-training easily sift out any generated content from their input data, so that recursion is a non-issue. It doesn't make much sense to have it both ways.
I wonder if the truth about sifting out synthetic training data is that it relies on signals separate from the content itself: the source of the data, the reported author, links to/from it, etc.
These signals would be unavailable to a plagiarism/AI detector.
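Something like this, where every field and weight is invented purely for illustration:

    from dataclasses import dataclass

    @dataclass
    class Doc:
        text: str
        domain: str          # where the crawler found it
        first_seen: int      # year it first appeared in the crawl
        inbound_links: int   # links pointing at it from other pages

    def provenance_score(doc: Doc) -> float:
        """Higher = more likely human-written, without reading the text."""
        score = 0.0
        if doc.first_seen < 2022:
            score += 2.0                              # predates the LLM flood
        score += min(doc.inbound_links, 50) / 50.0    # organically cited
        if doc.domain.endswith(".contentfarm.example"):
            score -= 1.0                              # known junk source
        return score

    docs = [
        Doc("...", "blog.example.org", 2014, 120),
        Doc("...", "fresh.contentfarm.example", 2024, 0),
    ]
    for d in sorted(docs, key=provenance_score, reverse=True):
        print(f"{provenance_score(d):+.2f}  {d.domain}")

None of that is visible to a detector that only sees the text, which would explain why training pipelines can filter what plagiarism checkers can't.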
I think there is a fallacy here: "if I am more productive at home, then everybody must be more productive at home." Consider, though, that the most sophisticated companies in the world at analyzing human behavior are asking employees to return to the office. Do you really think they would do this if it meant a productivity decrease? I'm suggesting the harsh reality is that while you may be more productive at home, the majority of your colleagues are not, and they have unfortunately ruined it for everybody. I don't think this has anything to do with sunk office-space costs; it's about bottom-line productivity.
Work is not only about what the company wants but also about what the employee needs. I don't see much difference between the ability to work from home and the ability to avoid working on weekends; the latter is an acquired right in many places, and the former can become one over time.
I would think that, with all the additional media coverage he has received since the announcement, it would be very strange if he did not see additional engagement on his posts.
Musk is an ideological asshole; this is Matt recognizing that further growth probably requires pushing WP Engine out of their market. If you disagree with him on this, you probably disagree with where Automattic as a whole is going.