valleyer's comments | Hacker News

The point is that Apple's own design team was.

But you cannot, in general, migrate your data backwards. Apple's system apps will upgrade their data stores forward only. This isn't a problem if you are willing to e.g. re-download all of your (Mail.app) mail.

> But you cannot, in general, migrate your data backwards. Apple's system apps will upgrade their data stores forward only.

One huge reason to use third-party programs where possible. I dislike Apple's tight coupling of utilities as it is.


Yep, that's a great workaround, as long as you have third-party apps you're happy with.

Yep, though you can mitigate it a little bit in various ways. For one weird example, I keep my main user Home folder on my NAS and mount it via iSCSI. Mostly that's for data integrity/size/backup purposes, but it does also make it free to snapshot before trying out a system upgrade. If I hate it I can roll back my entire set of user data along with the OS.

Though amongst the many other wonderful things lost in the mists of Mac history, I still desperately miss NetBoot/NetInstall and ultra-easy clone/boot with something like CCC and TDM. It's so fucking miserable now in comparison to do reinstalls/testing/restores.


@xoa may I ask what you use as an iSCSI initiator?

Sorry for missing this! I use Xtend SAN by ATTO [0], which has been around a long time but is still getting basic updates including native Apple Silicon support now, and seems to perform well. It uses a kext and I do worry the day may come that Apple kills support despite having nothing ready to go for equivalent functionality, but so far so good.

----

0: https://www.atto.com/products/xtend-san-iscsi-initiator/


I generally just “reload” everything.

You can view statements via the Web on <https://card.apple.com/> (though not for the associated savings account, if you choose to add that).

Nice. Reminds me of Rodent's Revenge.

Don't threaten me with a good time.

Lawyers, admins, and executives, sure. But what about the complexity on the engineers who now have to maintain an exploding matrix of modes? I can definitely see that becoming burdensome.

Much has been written about the deteriorating quality of iOS.

Bluntly, there's no strong external evidence that software quality has been a driving priority at Apple in recent years, so it most probably follows that concerns about maintainability aren't either.


You're not wrong; it is burdensome. But the sheer volume of money they secure, primarily because of their license to rent-seek mercilessly (especially in the US, the market they dominate most and where the regulators are weakest), makes even a hilarious amount of complexity supportable. Besides, it's mainly the users who suffer from the codebase falling apart, not Apple decision makers.

$500k+ TC makes many burdens worth shouldering

they make $1b a day in revenue and $300mm a day in profit

Engineers say they want to work on hard problems, then complain that they can't solve something because it's too complex.

The difference is this isn't an inherently hard problem. It's just stupidity. The difficulty is not inherently interesting, because it's all made up.

Seconded, compliance-induced complexity is the most asinine and tedious possible application of programming skills.

sounds like a problem for claude to worry about

It's also the meaning used in the title of this very Web site.


Is this increasing complexity in the Web layout world worth it? Anyone who wants to use this is going to drop support for older browsers (and, in so doing, older machines that can't run newer OSes and newer browsers).

Personally, I use an 11-year-old machine and have had to add userscript hacks to certain major Web sites to work around bugs in CSS grid (not the "lanes" described here).

At least new JavaScript features can be "polyfilled" or whatever. Maybe sites could check for CSS feature support too? But they seem not to.

For example, the demo page linked in the article fails pretty unusably for me. All the images take up nearly the full viewport width.


> I use an 11-year-old machine

What OS are you running that can't run modern versions of browsers, and on what hardware?

Current Chrome runs on Windows 10, which came out 9.5 years ago but was intended to run on older computers, and macOS Monterey, which runs on Macs from ~2014-2015 depending on the model. But even Big Sur before that, the most recent version of Chrome which runs on that is Chrome 138 from just 6 months ago, and that doesn't seem old enough that you need to build userscript hacks.

I'm really curious what you're actually running. Generally speaking, an 11-year-old desktop should be able to run the current browser, and if not, a very recent one.


I am using a machine older than eleven years old and can still run the newest version of Firefox and Chrome.

I don't think the world needs to cater to people that refuse even basic internet hygiene.


I routinely use an 11 year old computer too. I can not see why "userscript hacks" would be needed.


> Personally, I use an 11-year-old machine and have had to add userscript hacks to certain major Web sites to work around bugs in CSS grid (not the "lanes" described here).

The version of CSS Grid we're using today didn't ship until 2017; a browser from 11 years ago would be using one of the non-standard versions of Grid. For example, Internet Explorer 10 was the first browser to ship a grid implementation.

> At least new JavaScript features can be "polyfilled" or whatever. Maybe sites could check for CSS feature support too?

First, not every site needs to look exactly the same in every browser; that's why progressive enhancement is a thing.

Second, there are multiple ways to create masonry-style layouts that don't require masonry support in the browser, using multi-column layout or flexbox (a rough sketch follows below).

Third, masonry can be polyfilled using JavaScript [1].

[1]: https://masonry.desandro.com/
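As a rough sketch of the multi-column approach mentioned in the second point (the class names here are illustrative, not taken from any particular site or library):

    /* Approximate a masonry look with CSS multi-column layout.
       Works in browsers that long predate any grid-level masonry support. */
    .masonry {
      columns: 3 240px;      /* up to 3 columns, each at least ~240px wide */
      column-gap: 1rem;
    }

    .masonry .card {
      break-inside: avoid;   /* keep each card within a single column */
      margin-bottom: 1rem;
    }

The trade-off is that multi-column fills each column top to bottom in source order rather than row by row, which may or may not matter for a given gallery.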


When the web came out, it was itself new technology that excluded some older machines. Lynx kind of worked (I used it!), but it was a poor substitute, especially once `<img>` showed up.

You want the platform to be able to make progress and not be frozen in amber at whatever "magical" year things were in some Goldilocks state: powerful enough but not too complex. Especially since a lot of progress lately has been fixing long-standing inconsistencies and obvious gaps.

The cost of that is that, yes, neither my Apple IIe nor my Micro Pentium 90 runs the modern web... one day my MBP M1 won't either.


Not updating your browser will net you tons of exploitable vulnerabilities.

How do you expect things to ever change if no one ever updates? Certainly even if you decide to lean towards maximum support it’s still a positive these features are being introduced so you can use them in 10 years.


> How do you expect things to ever change if no one ever updates?

Maybe things should stop changing.

We don't really need ten new CSS attributes every year. Things work. The elegant solution is to announce the project is done. That would bring some much-needed stability. Then we can focus on keeping things working.


The issue with this is that the browser is the cross-platform operating system, the VM that runs webapps, but we treat the platform like an evolving document format. If we want to declare it complete, we need to make it extensible so we can have a stable core without freezing capabilities.

I foresee all of this CSS/HTML stuff as eventually being declared a sort of legacy format and adding a standard way to ship pluggable rendering engines/language runtimes. WASM is one step in that direction. There are custom rendering/layout engines now, but they basically have to render to canvas and lose a lot of performance and platform integration. Proper official support for such engines, with hooks into accessibility features and the like, could close that gap. Of course, then you have every website shipping a whole OS userland for every pageload, kinda like containers on servers, but that overhead could probably be mitigated with some caching of tagged dependencies. Then you have unscrupulous types who might use load timings to detect cache state for user profiling... I'm sure there's a better solution for that than just disabling cross-site caching...

I digress.


> I foresee all of this CSS/HTML stuff as eventually being declared a sort of legacy format and adding a standard way to ship pluggable rendering engines/language runtimes.

I doubt this is going to happen as long as backwards compatibility continues to be W3C's north star. That's why all current browsers can still render the first website created by TBL in 1989.

Sure, official support for certain extensions should happen but HTML/CSS will always be at the core.


11 years ago we had Python 2.7.8 and 3.4.0, so no type hints, no async/await, no match syntax, no formatted string literals, and large numbers couldn't be written like 13_370_000_000, etc.

Developers deserve nice things.


> Developers deserve nice things.

I agree they do. But Python is a bad counterexample. You can upgrade your Python on your server and no one has to know about it. But if you want to use new CSS features, then every browser has to implement that feature and every user has to upgrade their browser.

The intent of my comment was to express a desire to stabilize the web API in particular, not to freeze all software development in its tracks.


But people ship Python software, just like they ship CSS, and Python is bundled in many operating systems. When somebody ships, e.g., a CLI tool to manipulate subtitle files and it uses a language feature from Python 3.9, that somebody is excluding you from running it on your 11-year-old system.

People get new browser versions for free; there are more important things to think about than users who for some reason don't want to upgrade. I would rather have my layout done quickly with nice, elegant code (and no hacks) and spend my extra time developing an excellent UX for my users who rely on assistive technology.

Note that your wish for stabilization was delivered by the CSSWG with the @supports rule. Now developers can use new features without breaking things for users on older browsers. So if a developer wants to use `display: grid-lanes` they can put it in an @supports clause. However, if you are running Firefox 45 (released in 2016; used by 0.09% of global users), @supports will not work and my site will not work on your browser. I (and most developers) usually don't bother with an @supports clause for features that already pass "last 2 versions, not dead, > 0.2%".
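To make the @supports idea concrete, here is a minimal sketch, assuming `display: grid-lanes` ends up being the shipped syntax; the fallback is an ordinary grid that older browsers keep using:

    .gallery {
      /* Baseline layout every grid-capable browser understands */
      display: grid;
      grid-template-columns: repeat(auto-fill, minmax(240px, 1fr));
      gap: 1rem;
    }

    @supports (display: grid-lanes) {
      .gallery {
        /* Applied only where the new lanes value is recognized */
        display: grid-lanes;
      }
    }

Browsers that don't recognize the value skip the @supports block entirely and keep the plain grid, which is exactly the graceful degradation described above.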


> Maybe things should stop changing.

There are two kinds of technologies: those that change to meet user needs, and those that have decided to start dying and being replaced by technologies that change to meet user needs.


> Is this increasing complexity in the Web layout world worth it?

Yes. I held off learning about CSS Grid for a very long time and as soon as I did I was converted. Sometimes I think the web doesn’t get enough credit for its ambition: mobile viewports, desktop viewports, touch interaction, pointer interaction, complex documents, webapps… it’s a lot. But you get some complexity as a side effect. The complexity we do see these days isn’t invented out of whole cloth, it’s standardising and improving layouts people are implementing with JavaScript, often badly.


> Is this increasing complexity in the Web layout world worth it? Anyone who wants to use this is going to drop support for older browsers (and, in so doing, older machines that can't run newer OSes and newer browsers).

If you’ve been at this for a while, it’s important to remember that browsers update a lot faster than they used to. Anchor positioning came out last year, for example, and all of the major browsers support it by now. Very old devices are a problem but security is purging those out faster than used to be the case.

We also have better tools for progressive adoption since you can easily query for things like CSS feature support. In this demo, they didn’t implement fallbacks but in most real sites you’d have something like a basic grid layout which is perfectly serviceable for the fraction of users on old Firefox releases.


> Maybe sites could check for CSS feature support too? But they seem not to.

Certainly can: https://developer.mozilla.org/en-US/docs/Web/CSS/Reference/A...


What does the age of your machine have to do with browser compatibility issues? Are you running a stale OS and a stale browser on that OS?


Sooner or later, the age of your machine will affect browser compatibility.

It doesn't even take much to cause this: a bug in a driver that no one wants to fix, a package that you like that prevents you from upgrading your host OS, web browser developers abandoning something about your GUI (how long before they drop X?), etc.

In the Linux world, the age of your machine is a limit with a blurry edge, but it's still there.


Yes it is. Developers write bad code when they try to work around the lack of features with ill-thought-out hacks; this results in a bad website for everybody, even those of us who keep our software up to date and just so happen to have a different screen resolution and a different browser than what the developer tested on.


If enough consumers aren't able to use the website, then businesses wouldn't use it. The reality is that new computers aren't that expensive (I see used M1s for under 1k) and consumers are upgrading.


You mentioned a used model that is over 5 years old as an example of "a new computer", and "1k" as "not expensive for consumers". It is honestly impressive how well you undermined your own point.

> If enough consumers aren't able to use the website, then business wouldn't use it.

I sincerely doubt any business owner would approve of losing even 10% of their potential users/customers if they knew that was the trade-off for their web developer choosing to use this feature, but there are disconnects in communication about these kinds of things -- that is, if the web developer even knows about the compatibility issues themselves. You would expect that from any competent web developer, but there are a whole lot of incompetent web developers in the wild who won't even think about things like this.


Most web devs get screamed at (by their peer reviewers or [preferably] static analysis tools) if they use a feature which has less than like 98% support without gracefully degrading it, and rightfully so.

But your GP is in a massive minority; if every developer catered to 11-year-old browsers we would be wasting a lot of developer time on inferior designs, with more hacks which break the web for even more users.


I don't know about "most". For various reasons, I use a 2-year-old browser on a daily basis (alongside an up-to-date browser), and I routinely run into websites that are completely broken on the 2-year-old browser. Unrelated to outdatedness, I recently ran into a local government website that e-mailed me my password in plaintext upon account creation. I have no way of accurately quantifying whether "most" web developers fall into the competent or incompetent bucket, but regardless of which there are more of, there are a significant enough number of incompetent ones.


I think a very common browserslist target is "last 2 versions, not dead, > 0.2%". So if you have a 2-year-old browser you are probably dozens of versions behind and are very likely in that 2% of users which developers simply ignore.


Going back 2 versions, only ~50% of Chrome users are on v140 or newer. If you go back another 2 versions, that number increases to around ~66%. Going back another 2 versions only increases that to 68%, with no huge gains from each further 2 step jump. That you think your target gives you 98% coverage is concerning for the state of web developers, to say the least.

After checking further, almost 20% of Chrome users are on a 2+ year old version. If you handle that gracefully by polyfilling etc., fine. If you "simply ignore" and shut out 20% of users (or 50% of users per your own admission of support target), as I have encountered in the wild countless times, you are actively detrimental to your business and would probably be fired if the people in charge of your salary knew what you were doing, especially since these new browser features are very rarely mission-critical.


Note that the commas in browserslist queries are ORs. So if any given browser version still has > 0.2% usage, it is included. This would include Chrome 109, which is three years old. Meaning developers with this browserslist target would fail their static analysis / peer review (actually even a more reasonable > 0.5% still fails on Chrome 109) if they used a feature which Chrome 109 doesn't support without graceful degradation or a polyfill.

Furthermore the "baseline widely available" target (which IMO is a much better target and will probably become the recommendation pretty soon) includes versions of the popular browsers going back 30 months, meaning a competent team of web devs with a qualified QA process should not deliver software which won't work on your 2-year-old browser.

I can't speak for the developers of the websites which break on your 2-year-old browser... Maybe they don't have a good QA process. Or maybe you were visiting somebody's hobby project (personally I only target "baseline newly available" in my own hobby projects, as I am coding mostly for my own amusement). But I think it is a reasonable assumption that users tend to update their browsers every 30 months, and you won't lose too many customers if you occasionally break things for the users who don't.


A couple of examples of the kinds of hobby projects that break on my 2-year-old Chrome installation: ChatGPT.com, Claude.ai, Substack.com

Your position sounds reasonable upon elaboration, I only wish more web developers had the same consideration.


Can you link to the source for your stats?

I'm not finding anything to corroborate that -- I'm seeing stats suggesting things like 90% of Chrome users are on the newest version after two weeks:

https://timotijhof.net/posts/2023/browser-adoption/

And Stat Counter shows that the current version of Chrome utterly dominates in any given month:

https://gs.statcounter.com/browser-version-market-share/desk...

The glacial adoption you're describing doesn't make much sense when you consider how aggressively Chrome auto-updates, so I'm quite confused.


My go-to reference is this, which itself cites statcounter: https://caniuse.com/usage-table

I was specifically referencing desktop Chrome, not including Chrome for Android, but other than that, if there are discrepancies, I'm not sure what the cause is.


Very interesting.

The Timo Tijhof data is based on Wikipedia visits, and shouldn't be affected by adblockers.

Meanwhile, StatCounter is based on sites that use its analytics, and on users not using adblockers that might block it. The CanIUse table makes clear there's a long tail of outdated Chrome versions that each individually have tiny usage, but they seem to add up.

It's fascinating they're so wildly different. I'm inclined to think Wikipedia, being the #9 site on the web [1], is going to produce a more accurate distribution of users overall. I can't help but wonder if StatCounter is used by a ton of relatively low-traffic sites, and the long tail of outdated Chrome is actually headless Chrome crawlers, and so they make up a large proportion relative to actual user traffic? Since they're not pushed to update, the way consumers are. And especially with ad-blocking real users excluded too?

Anecdotally, in web development I just haven't seen users complain about sites not working in Chrome, where it turns out the culprit is outdated Chrome. In contrast to complaints about e.g. not working in Firefox, which happen all the time. Or where it breaks in Chrome but it turns out it's an extension interfering.

[1] https://en.wikipedia.org/wiki/List_of_most-visited_websites


> The first thing that blew my mind was how stupid the whole idea is. Think for one second. One full second. Why do you ever want to add a chatbot to a snack vending machine? The video states it clearly: the vending machine must be stocked by humans. Customers must order and take their snack by themselves. The AI has no value at all.

I fear the author has missed the point of the "Project Vend" experiments, the original write-ups of which are available here (and are, IMO, pretty level-headed about the whole thing):

https://www.anthropic.com/research/project-vend-1

https://www.anthropic.com/research/project-vend-2

The former contains a section titled "Why did you have an LLM run a small business?" that attempts to explain the motivation behind the experiment.


Yeah, I haven't read the WSJ article, but I did read the original Anthropic experiment and I feel like the author is catastrophizing a bit much here. This is effectively just something they did for fun. It's entertaining and a funny read. Not everything has to be the end of the world.


The point is that it's an ad. No company spends money on a joke just to make a joke. Not the end of the world, although it's interesting that all the end of the world stuff comes directly out of that joke and its universe as it were. Take the joke seriously and extend its logic as far as it will go, and you get the end of the world. It's a thought experiment, or that's how I read it anyway.


I think the phrase we're looking for is "publicity stunt". Seems a fairly harmless and self-effacing one at that.


> The point is that it's an ad.

Sure, but like the other guy said, that's the point of publicity stunts. It doesn't even have to be specific to a company/ad, any silly thing like this is going to sound crazy if you take it seriously and "extend its logic as far as it will go". Like seeing the Sony bouncy balls rolling down the street ad and going "holy shit, these TV companies are going to ruin the world by dropping bouncy balls on all of us". It's a valid thought experiment, but kind of a strange thing to focus on so sternly when it's clearly not taking itself seriously, especially compared to all the real-world concerning uses of AI.

(And it is pretty funny, too. If anything I think we'd all prefer more creative ads like this and the bouncy ball one, and less AI-generated doomer Coke ads or such.)


To be fair, they could have done an experiment on a transaction that typically requires a person in the loop, rather than choosing a vending machine which already does not require a person in the loop for the transaction.


You're thinking of replacing a vending machine with a chatbot, which indeed doesn't make much sense. The experiment was replacing the management of the machine. It's not crazy to think that, money being no object, it would be great to have a person who hangs around the machine, whose job it is to ask customers what kinds of things they might want to buy from it, conduct trials, tinker with pricing, do promotional deals, etc. But it's of course impractical to have an FTE per machine doing that. The idea of this experiment was to see if that could be done with Claude. And of course, as others have pointed out, it's a simple, cheap, and low-stakes version of "a business" suitable for experimenting with.


If I recall, the idea was the AI taking the role of the vending machine manager, choosing and restocking products and such. Anything on top of that was, I assume, just added for fun.


>I feel like the author is catastrophizing a bit much here.

I feel like he's catastrophizing the ordinary amount for an anti-AI screed. Probably well below what the market expects. At this point you basically have to sound like Ed Zitron or David Gerard to stand out from the crowd.

AI is boiling the oceans, and you're worried about a vending machine?


The partner for these projects has a benchmark that the top frontier LLM labs seem to be running on their new model releases - I think there's _some_ value to these numbers in helping people compare and contrast model performance.

https://andonlabs.com/evals/vending-bench


You should not be getting notifications while driving.

