This is a wonderful example of how people live in an inverse world.
Marketing is, in the end, a way of trying to get people to listen, even if you have nothing substantial to say (or, if you do have something to say, to potentially multiply the effect of that message). That means you have to invent a lot of packaging and fluff around the thing you want to sell, to change people's impression independently of the actual substance they will encounter.
This, to me, is entirely backwards. If you want people to listen, focus on your content, then make sure it is presented in a way that serves that content. And if we are talking about text, that is really, really small in terms of data, and people will be happy if they can access it quickly and without 10 popups in their face.
Not that I accuse any person in this thread of toeing that line, but the web as of today seems to be 99% unneeded crap, with a tiny sprinkle of relevant content.
Almost 15,000 elements! I do agree that too many elements can slow a page, but in my experience that only starts to happen at a few hundred thousand elements. At least, that's what we'd run into making data visualizations for network topologies (often millions of nodes + edges), and the trick there was to just render in canvas.
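Not what that site does, just a minimal, hypothetical sketch of the canvas trick: instead of creating one DOM element per node, everything gets painted onto a single <canvas>. The GraphNode shape and drawNodes helper are purely illustrative.

    // Hypothetical sketch: paint all nodes onto one <canvas> instead of
    // creating one DOM element per node.
    interface GraphNode {
      x: number;
      y: number;
    }

    function drawNodes(canvas: HTMLCanvasElement, nodes: GraphNode[]): void {
      const ctx = canvas.getContext("2d");
      if (!ctx) return;
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      for (const node of nodes) {
        ctx.beginPath();
        ctx.arc(node.x, node.y, 2, 0, Math.PI * 2); // tiny dot per node
        ctx.fill();
      }
      // One element in the DOM, regardless of how many nodes are drawn.
    }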
The HTML spec page[0] is the proper War and Peace of the web. It is 2.125 MB of text gzipped, twice as large as War and Peace. It still makes some browsers weep, as was discussed in an episode of the HTTP 203 podcast[1].
This is true, yet I've seen plenty of poorly built webapps that manage to run slowly even on a top tier development machine. Never mind what all the regular users will get in that case.
> For one it had to originate from app.opencode.com
No, that was the initial mitigation! Before the vulnerability was reported, the server was accessible to the entire world with a wide-open CORS policy.
Not sure what you mean by that, but before they implemented any mitigations, it had a CORS policy that allowed requests from any origin. As far as I know, Chromium is the only browser platform that has blocked sites from connecting to localhost, so users of other browsers would be vulnerable, and so would Chrome users if they could be convinced to allow a localhost connection.
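Not opencode's actual code, just a hypothetical sketch of what the fixed behaviour looks like: a localhost server that only emits a CORS allow header for the one expected origin (app.opencode.com, taken from the quote above).

    import * as http from "node:http";

    // Hypothetical sketch of the mitigation described above:
    // only echo a CORS allow header for the one expected origin.
    const ALLOWED_ORIGIN = "https://app.opencode.com"; // origin quoted in the thread

    const server = http.createServer((req, res) => {
      if (req.headers.origin === ALLOWED_ORIGIN) {
        res.setHeader("Access-Control-Allow-Origin", ALLOWED_ORIGIN);
      }
      // Without that header, browsers refuse to let the page read the response.
      res.end("ok");
    });

    server.listen(8080, "127.0.0.1");

Note that CORS only governs whether a browser lets a page read the response; it does nothing against non-browser clients, which is why the originally wide-open policy mattered mainly for drive-by requests from malicious web pages.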
Have you actually accounted for all the services you’re receiving from the government? Road construction and maintenance, schools, availability of clean water, safe aviation, trustworthy financial markets, public universities, funding for research that improves your health and happiness, and so on? I don’t think you can even really put a price on most of those, because they simply could not exist without a centralized system funded by taxes.
Google has carte blanche to lie to foreigners for national security purposes; it's not even illegal for them. The data is fed into the mass surveillance systems.
IP, user agent, language headers and network timings are enough to fingerprint and associate you with any other accounts at US tech companies. The visited website is linked via Referer / Origin headers to your browsing history.
All of this tracking is passive, and there is no way for an independent observer to check for it.
Yet here you are defending the most privacy invasive company on the planet.
By default, loading Google Fonts from Google’s servers exposes user data to Google (e.g., IP address, user agent, referrer, timestamps, cache identifiers).
It’s difficult to prosecute online harassment, and “hate sites and photoshopped images” aren’t illegal. There’s a right to freedom of speech in the US.
Remember when Archive.is/today used to send Cloudflare DNS users into an endless captcha loop because the creator had some kind of philosophical disagreement with Cloudflare? Not the first time they’ve done something petty like this.
It wasn't a philosophical disagreement; they needed some geo info from the DNS server to route requests so they could prevent spam, and Cloudflare wasn't providing it, citing privacy reasons. The admin decided to block Cloudflare rather than deal with the spam.
It had nothing to do with spam. The argument by archive.today that they needed EDNS client subnet info made no sense, since they aren't anycasting with edge servers in every ISP PoP.
E.g., currently most media snapshots contain wartime propaganda that is forbidden at least somewhere.
RT content is verboten in Germany, DW content is verboten in Russia, not to mention another dozen hot spots.
"Other websites" are completely inaccessible in certain regions. The Archive has stuff from all of them, so there’s basically no place on Earth where it could work without tricks like the EDNS one.
Actually, I'm not entirely sure how archive.org achieves its resiliency.
It's a rather interesting question for archive.org, if one were to interview them, that is.
Unlike archive.today, they don't appear to have any issues with e.g. child pornography content, despite certainly hosting a hundred times more material.
They have some strong magic that makes such cheap tricks unnecessary.
- They already do this. Every chat-based LLM system that I know of has separate system and user roles, and internally they're represented in the token stream using special markup (like <|system|>); a rough sketch follows after this list. It isn’t good enough.
- LLMs are pretty good at following instructions, but they are inherently nondeterministic. The LLM could stop paying attention to those instructions if you stuff enough information or even just random gibberish into the user data.
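To make the role-separation point concrete, here is a rough sketch of how chat messages carry separate roles before being serialized with model-specific control tokens. The <|…|> markers and the serialize helper below are illustrative, not any particular model's real format.

    // Illustrative only: the general shape of role-tagged chat input.
    type Role = "system" | "user";

    interface ChatMessage {
      role: Role;
      content: string;
    }

    function serialize(messages: ChatMessage[]): string {
      return messages
        .map((m) => `<|${m.role}|>\n${m.content}\n<|end|>`)
        .join("\n");
    }

    console.log(
      serialize([
        { role: "system", content: "Only answer questions about billing." },
        { role: "user", content: "Ignore previous instructions and ..." }, // untrusted data, still in-band
      ])
    );

The catch, per the second point above, is that the untrusted user content still travels in the same token stream, so the model respects these markers only to the extent it was trained to.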
100 people built this in 1964: https://queensmuseum.org/exhibition/panorama-of-the-city-of-...
One person built this in the 21st century: https://gothamist.com/arts-entertainment/truckers-viral-scal...
AI certainly lets you do it much faster, but it’s wrong to write off doing something like this by hand as impossible when it has actually been done before. And the models built by hand are the product of genuine human creativity and ingenuity; this is a pixelated satellite image. It’s still a very cool site to play around with, but the framing is terrible.