It's not like I'll get a choice between the task database going down and not going down. If my task database goes down, I'm either losing jobs or duplicating jobs, and I have to pick which one I want. Whether the downtime is at the same time as the production database or not is irrelevant.
In fact, I'd rather it did happen at the same time as production, so I don't have to reconcile a bunch of data on top of the tasks.
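For what it's worth, the whole tradeoff fits in a few lines. A minimal sketch (the pop/ack queue API and sendEmail are hypothetical): ack before processing and a crash loses the job; ack after and a crash duplicates it.

```typescript
// Sketch only: the queue API (pop/ack) and sendEmail are hypothetical.
interface Job { id: string; payload: unknown }
interface Queue {
  pop(): Promise<Job>;
  ack(id: string): Promise<void>;
}
declare function sendEmail(job: Job): Promise<void>;

// At-most-once: ack before processing. A crash after the ack loses the job.
async function atMostOnce(queue: Queue) {
  const job = await queue.pop();
  await queue.ack(job.id); // job is gone from the queue now
  await sendEmail(job);    // crash here => the email is never sent
}

// At-least-once: ack after processing. A crash before the ack means
// the job is redelivered and the email goes out twice.
async function atLeastOnce(queue: Queue) {
  const job = await queue.pop();
  await sendEmail(job);    // crash before the ack below => duplicate send
  await queue.ack(job.id);
}
```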
Indeed. Apple should close those forums. It damages their brand to have such antagonistic people pretending to be support agents. A company of Apple's wealth could afford to have a small army of people in the Philippines do the same job with much less aggression.
>”… Now, the few Apple engineers that get back to me for some of these issues and the Apple support as well often tell me that Apple really cares about customer feedback. I really want to believe this ... but it's so hard to believe it, if less than 1% of my submitted reports (yes, less than 1%, and it's probably much less) ever gets a response. A response, if it ever comes, can come after 3 months, or after 1 year, or after 3 years; only rarely does it come within 1 month. To some of the feedbacks, after getting a response from the Apple engineers, I responded, among other things, by asking if I'm doing something wrong with the way I submit the feedback reports. Because if I do something wrong, then that could be the reason why only so few of them are considered by the Apple engineers. But I never got any answer to that. I told them that it's frustrating sending so much feedback without ever knowing if it's helpful or not, and never got an answer. …”
In my experience their _support_ is fantastic, which is another reason it's odd they will simply leave countless _feedback_ submissions open nearly indefinitely. They ignore their free laborers!
Wholeheartedly agree. The few times in my life that I've bothered to post there with a problem, it's been all the more upsetting that the patronizing generic advice and scolding of the frustrated users is coming from random volunteer fanbois on the Internet, not even paid Apple staff who are contractually obligated to be positive about Apple. A company with such rabidly loyal supporters shouldn't deploy them like this. And if it was wise back in 2010, when Apple software was for the most part quite good… it sure isn't wise now, when they're reaching what I hope is a temporary nadir in quality.
Tim Cook runs a well-oiled machine. At some point, leadership will change. And I don't think it is as simple as, "Just keep doing what Tim was doing." There are so many moving parts that it is nigh certain Apple will go through a period of brand damage when things begin to fall through the cracks. Will that fall be dramatic? Probably not. But I think you underestimate just how much a shift in leadership can tip the scales.
Yes, too much emphasis is put on Trump. He's just the President. This crisis has been 80 years in the making.
It's the Supreme Court that has expanded the powers of the President, and previously of the Federal government, far beyond what was ever intended.
By allowing the federal government to dominate the states, the Supreme Court created a position of unrivalled power.
Trump may be an evil narcissist by the standards of normal people, but there's plenty of those sorts of people in politics. That's why you have a constitution.
> It's the Supreme Court that has expanded the powers of the President
Sort of, but Congress also wrote a bunch of pretty broad, vague laws, delegating a significant amount of power to the executive via agency rulemaking, and it turns out the agencies are part of the executive branch and have to do what the head of the executive branch says they have to do (within the limits of those broad, vague laws). If Congress can't get back to smaller, simpler, more specific laws, and they continue to pass the burden of this complexity over to the executive branch to figure out, the executive branch will continue to wield outsize power.
Absolutely! The U.S. is de facto a Russia or China with a lower 'government expenditure/GDP ratio'.
But that is not much of a consolation if the government is allowed to pick winners and losers for kleptocracy, or if there is strong central planning and oversight of what should be independent institutions.
Laws can be changed. This is right now a trillion-dollar industry; perhaps later it could even become a billion-dollar industry. Either way, it's very important.
Strict copyright enforcement is a competitive disadvantage. Western countries lobbied for copyright enforcement in the 20th century because it was beneficial. Now that the tables have turned, don't hold your breath for copyright enforcement against the wishes of the markets. We are all China now.
OP's idea is about having a new GPL-like license with a "may not be used for LLM training" clause.
That the LLM itself is not allowed to produce copyrighted work (e.g. verbatim copies of works, or output that is too structurally similar) without a license for that work is probably already the law. They work around this via content filters. They probably also run checks during/after training that it does not reproduce work that is too similar.
There are lawsuits pending about this, if I remember correctly, e.g. with the New York Times.
The issue is that everyone is focusing on verbatim (or "too similar") reproduction.
LLMs themselves are compressed models of the training data. The trick is that the compression is highly lossy, achieved by detecting higher-order patterns instead of focusing on the first-order input tokens (or bytes). If you look at how, for example, any of the Lempel-Ziv algorithms work, they also contain patterns from the input and they also predict the next token (usually a byte in their case), except they do it with 100% probability because they are lossless.
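To make the analogy concrete, here's a minimal LZ78-style sketch (illustrative only, not production code). The "model" it builds is a dictionary of patterns seen so far in the input, and every output token effectively says "continue this known pattern, then emit this literal": a next-token prediction made with 100% certainty.

```typescript
// Minimal LZ78-style compression sketch, for illustration only.
// Output tokens are (dictionary index of longest known prefix, next literal).
function lz78Compress(input: string): Array<[number, string]> {
  const dict = new Map<string, number>(); // pattern -> dictionary index (1-based)
  const output: Array<[number, string]> = [];
  let phrase = "";
  for (const ch of input) {
    const candidate = phrase + ch;
    if (dict.has(candidate)) {
      phrase = candidate; // keep extending a previously seen pattern
    } else {
      // "predict" the known prefix (index 0 = empty), then the new literal
      output.push([phrase ? dict.get(phrase)! : 0, ch]);
      dict.set(candidate, dict.size + 1);
      phrase = "";
    }
  }
  if (phrase) output.push([dict.get(phrase)!, ""]); // trailing known pattern
  return output;
}
```

The decompressor replays those tokens to rebuild the patterns exactly; an LLM does the analogous thing with a probability distribution instead of a certainty.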
So copyright should absolutely apply to the models themselves: if trained on AGPL code, the models have to follow the AGPL license, and I have the right to see their "source" just by being their user.
And if you decompress a file from a copyrighted archive, the file is obviously copyrighted. Even if you decompress only a part. What LLMs do is another trick: by being lossy, they decompress probabilistically based on all the training inputs, so without seeing the internals, nobody can prove how much their particular work contributed to a particular output.
But it is all mechanical transformation of input data, just like synonym replacement, just more sophisticated, and the same rules regarding plagiarism and copyright infringement should apply.
---
Back to what you said: the LLM companies use fancy language like "artificial intelligence" to distract from this so they can then use more fancy language to claim copyright does not apply. And in that case, no license would help, because any such license fundamentally depends on copyright law, which they claim does not apply.
That's the issue with LLMs - if they get their way, there's no way to opt out. If there was, AGPL would already be sufficient.
I agree with your view. One just has to go to court and somehow get the judges to agree as well.
An open question would be if there is some degree of "loss" where copyright no longer applies. There is probably case law about this in different jurisdictions w.r.t. image previews or something.
I don't think copyright should be binary or should work the way it does now. It's just the only tool we have.
There should be a system which protects all work (intellectual and physical) and makes sure the people doing it get rewarded according to the amount of work and skill level. This is a radical idea and not fully compatible with capitalism as implemented today. I have a lot on my to-read list and I don't think I am the first to come up with this but I haven't found anyone else describing it, yet.
And maybe it's broken by some degenerate case and goes tits up like communism always did. But AFAICT, it's a third option somewhere in between, taking the good parts of each.
For now, I just wanna find ways to stop people already much richer than me from profiting from my work without any kind of compensation for me. I want inequality to stop worsening, but OTOH, in the past, large social change usually happened when things got so bad people rejected the status quo and went to the streets, whether with empty hands or not. And that feels like where we're headed, and I don't know whether I should be excited or worried.
The point of CSS is specifically to separate styling and semantics, so that they are not tightly coupled.
If you were writing a blog post you would want to be able to change the theme without going through every blog post you ever wrote, no?
If I'm writing a React component I don't want it tightly coupled to its cosmetic appearance for the same reason. Styling is imposed on elements; intrinsic styles are bad and work against reusability. That's why we all use resets, is it not?
I do agree that the class name system doesn't scale but the solution is not to double down on coupling, but rather to double down on abstraction and find better ways to identify and select elements.
Content should come from your database, Markdown, JSON, models etc.
Presentation is determined by the HTML and CSS together.
So your content and presentation are already separate enough to get the benefits. Breaking up the presentation layer further with premature abstractions spread over multiple files comes at a cost for little payback. I'm sure everyone has worked on sites where you're scared to make CSS file edits because the unpredictable ripple of changes might break unrelated pages.
Styling code near your semantic HTML tags doesn't get in the way, and the two are highly related, so you want to iterate on them and review them together.
I've never seen a complex website redesign that didn't involve almost gutting the HTML either. CSS isn't powerful enough alone and it's not worth the cost of jumping through hoops trying because it's rare sites need theme switchers. Even blog post themes for the same platform come with their own HTML instead of being CSS-only.
> If you were writing a blog post you would want to be able to change the theme without going through every blog post you ever wrote, no?
Tailwind sites often have a `prose` class specifically for styling post content in the traditional CSS way (especially if you're not in control of how the HTML was generated) and this is some of the simplest styling work. For complex UIs and branded elements though, the utility class approach scales much better.
> I'm sure everyone has worked on sites where you're scared to make CSS file edits because the unpredictable ripple of changes might break unrelated pages.
CSS gives you multiple tools to solve this problem, if you don't use any of them then it's not really CSS's fault.
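One hedged example of such a tool: CSS Modules, which most bundlers support (the file and class names below are hypothetical).

```tsx
// CSS Modules scope class names per file, so an edit here
// cannot ripple into unrelated pages.
import styles from "./SearchDropdown.module.css";

export function SearchDropdown() {
  // styles.container compiles to a unique generated class name,
  // so a ".container" defined in another module can never collide with it.
  return <div className={styles.container}>…</div>;
}
```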
> Styling code near your semantic HTML tags doesn't get in the way
It does. When I'm working on functionality I don't want to see styles and vice versa. It adds a layer of noise that is not relevant.
If I'm making e.g. a search dropdown, I don't need to see any information about its cosmetic appearance. I do want to see information about how it functions.
Especially the other way around: if I'm styling the search dropdown I don't want to have to track down every JSX element in every sub-component. That's super tedious. All I need to know when I'm styling is the overall structure of the final element tree, not the vdom tree, which could be considerably more complex.
> I've never seen a complex website redesign that didn't involve almost gutting the HTML either
Perhaps for a landing page. For a content-based website or web app you often want to adjust the design without touching your components.
> I've never seen a complex website redesign that didn't involve almost gutting the HTML either. CSS isn't powerful enough alone
I recognize your experience. But I would also like to argue that good semantic CSS class names require active development effort. If you inherit a code base where no one has done the work of properly assigning semantic CSS names to tags, then you can't update the external stylesheet without touching the HTML.
https://csszengarden.com/ shows how a clean separation between HTML and CSS can be achieved. This is obviously a simple web site and there is not much cruft that accumulated over the years. But the principles behind it are scalable when people take the separation of content and presentation seriously.
I'll add to my sibling commenters and say that there is a long history of critiquing the value of separation of concerns. One of my favorite early talks that sold me on React was "Pete Hunt: React: Rethinking best practices -- JSConf EU" from Oct 2013 [1] that critiqued the separation of concerns of HTML templates + JS popular in the 2000s and early 2010s and instead advocated for componentization as higher return on investment. I think people already saw styling separation of concerns as not particularly valuable at that point as well, just it wasn't clear what component-friendly styling abstraction was going to win.
I do want styles tightly coupled to my React components. The product I work on has tens of thousands of React components.
I don't want to have to update some random CSS file to change one component's appearance. I've had to do this before, and every time it's a huge pain to not affect dozens of random other components. Other engineers encounter the same challenge and write poor CSS to deal with it. This compounds over time and becomes a huge mess.
Having a robust design system that enables the composition of complicated UIs without the need for much customization is the way.
Front end development got taken over by the Enterprise Java camp at some point, so now there is no html and css. There’s 10,000 components, and thus nothing that can be styled in a cascading way.
All these arguments are just disconnects between that camp and the oldskool that still writes at least some html by hand.
When I get sucked into react land for a gig, it starts making sense to just tell this particular div tag to have 2px of padding because the piece of code I’m typing is the only thing that’s ever going to emit it.
Then I go back to my own stuff and lean on css to style my handful of reusable pieces.
You're kinda late to the party. 15 years ago that was the way to build UIs, but componentization changed that. Now we reason about UIs as blocks, not as pages, so colocation of logic, markup, and style makes the most sense.
Not to say that every component should be unique; generic components can be built in an extensible way, and users can extend those components while applying unique styling.
Theming is also a solved issue through contexts.
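To sketch what I mean by contexts (the component and color values are made up; the only real API here is React's createContext/useContext):

```tsx
import { createContext, useContext } from "react";

type Theme = { primary: string; text: string };

// Default theme; any subtree can override it via a Provider.
const ThemeContext = createContext<Theme>({ primary: "#0066cc", text: "#fff" });

// A component, however deeply nested, reads the theme without prop drilling.
function Button({ label }: { label: string }) {
  const theme = useContext(ThemeContext);
  return (
    <button style={{ background: theme.primary, color: theme.text }}>
      {label}
    </button>
  );
}

// Swapping the Provider value re-themes the whole subtree
// without touching any component.
function App() {
  return (
    <ThemeContext.Provider value={{ primary: "#cc0000", text: "#fff" }}>
      <Button label="Save" />
    </ThemeContext.Provider>
  );
}
```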
Reducing coupling was never a good idea. Markup and styling are intrinsically linked; any change to the markup will most likely require changes to the styling, and vice versa. Instead of pretending we can separate the two, modern UI tools embrace the coupling and make building as efficient as possible.
In the webdev world being late is the same as being early. Just wait for the pendulum to swing back.
Tailwind is like Gen Z discovering the bgcolor="" attribute.
> Markup and styling are intrinsically linked, making any change to the markup most likely will require changes to the styling, and vice versa.
No, not vice versa. It's only in one direction. Changing the component requires changing styles, but changing styles doesn't require changing the component if the change is merely cosmetic. If I have a button and I want to make it red, the button doesn't have to know what color it is.
There’s nothing “gen z” about Tailwind, and there’s no pendulum effect either, and dismissing the very real benefit thousands of people report from Tailwind based on that is very small minded.
That kind of lack of intellectual curiosity is not a great trait for an engineer.
You're talking about separation of concerns (SOC), as opposed to locality of behavior (LOB).
This is the insight that Tailwind and others like HTMX made clear: Separation of concerns is not a universal virtue. It comes with a cognitive cost. Most notably when you have a growing inheritance hierarchy, and you either need 12 files open or tooling that helps you understand which of the 482 classes are in play for the specific case you’re troubleshooting. Vanilla CSS can be like that, especially when it’s not one’s primary skillset. With Tailwind you say ”this button needs to be blue”, and consolidate stuff into CSS later once the right patterns of abstraction become clear. Tailwind makes exploratory building way faster when we’re not CSS experts.
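A quick sketch of the contrast (the utility classes are ordinary Tailwind, but treat the specific values as illustrative):

```tsx
// LOB: "this button needs to be blue" lives on the element itself.
function SubmitButton() {
  return (
    <button className="bg-blue-600 hover:bg-blue-700 text-white px-4 py-2 rounded">
      Submit
    </button>
  );
}

// SOC: the same button with a semantic class; its appearance lives in a
// stylesheet somewhere else, which you now have to find and reason about.
function SubmitButtonSemantic() {
  return <button className="submit-button">Submit</button>;
}
```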
SOC is usually a virtue when teams are split (frontend/backend, etc.), but LOB is a virtue when teams are small, full stack, or working on monoliths (this is basically Conway's law: the shape of the codebase mirrors the shape of the team).
I think the problem is simply that CSS is too restricted for you to style a fixed piece of HTML in any way you want. In practice, achieving some desired layout requires changing the HTML structure. The missing layer would be something that can change the structure of the HTML, like JS or XSLT. In modern frontend development you already have data defined in some JSON, and HTML + CSS combined together are the presentation layer that can't really be separated.
People who have tried both throughout their careers are generally sticking with Tailwind. I didn’t get it at first either, but after using it extensively I would never go back to the old way.
> The point of CSS is specifically to separate styling and semantics, so that they are not tightly coupled.
That was the original point, and it turned out that nobody cares about that 99% of the time. It's premature optimization and it violates "YAGNI". And in addition to not being something most people need, it's just a pain to pick, remember, and organize class names and files.
Remember CSS Zen Garden from the early 2000s? How many sites actually do anything like that? Almost none.
And the beauty of Tailwind is, when you actually do need themes, that's the only stuff you have to name and organize in separate CSS files. Instead of having to do that with literally all of your CSS.
Not only does no one care, but it's not even true. There are effects you simply cannot achieve without including additional elements. So separation of styling and semantics is dead on arrival.
I use and love Apple Pay, but it's not ideal for every situation. The biggest flaw is that it requires waving your expensive phone in the vicinity of the reader.
Apple Pay is more risky than contactless cards. There is a risk of dropping your phone or it being stolen out of your hand. I only use it in controlled indoor environments, like at a retail store, where I have enough personal space to feel comfortable getting out my phone. If I want to pay at e.g. a stall in a crowded market, I'm using my card.
You don't think that's in part because of economics, education, healthcare, or other factors? The framing of this site is that it is purely a "you're eating wrong" problem.
A large part of the world population is poor, and they do not have the same level of health problems, nor are they similarly obese. Not purely diet related, but a huge part of it for sure.
It would be great to see some comparisons. I'm not claiming that "poor" is the singular indicator so it's obviously going to come down to which countries you're referring to. I'm also not claiming that diet isn't a part of it, not even close.
How are you qualified to judge its performance on real code if you don't know how to write a hello world?
Yes, LLMs are very good at writing code, they are so good at writing code that they often generate reams of unmaintainable spaghetti.
When you submit to an informatics contest you don't have paying customers who depend on your code working every day. You can just throw away yesterday's code and start afresh.
Claude is very useful but it's not yet anywhere near as good as a human software developer. Like an excitable puppy it needs to be kept on a short leash.
I know what it's like running a business and building complex systems. That's not the point.
I used highload as an example because it seems like an objective rebuttal to the claim that "but it can't tackle those complex problems by itself."
And regarding this:
"Claude is very useful but it's not yet anywhere near as good as a human software developer. Like an excitable puppy it needs to be kept on a short leash"
Again, a combination of LLM/agents with some guidance (from someone with no prior experience in this type of high performing architecture) was able to beat all human software developers that have taken these challenges.
> Claude is very useful but it's not yet anywhere near as good as a human software developer. Like an excitable puppy it needs to be kept on a short leash.
The skill of "a human software developer" is in fact a very wide distribution, and your statement is true only for an ever-shrinking tail end of that distribution.
What I think people get wrong (especially non-coders) is that they believe the limitation of LLMs is to build a complex algorithm.
In reality, that issue was fixed a long time ago. The real issue is building a product. Think about microservices in different projects, using APIs that are not perfectly documented or whose documentation is massive, etc.
Honestly I don't know what commenters on hackernews are building, but a few months back I was hoping to use AI to build the interaction layer with Stripe to handle multiple products and delayed cancellations via subscription schedules. Everything is documented, the documentation is a bit scattered across pages, but the information is out there.
At the time there was Opus 4.1, so I used that. It wrote 1000 lines of non-functional code with zero reusability after several prompts. I then asked ChatGPT whether it was possible without using schedules; it told me yes (even though it isn't), and when I told Claude to recode it, it started coding random stuff that doesn't exist.
I built everything to be functional and reusable myself, in approximately 300 lines of code.
The above is a software engineering problem. Reimplementing a JSON parser using Opus is neither fun nor useful, so that should not be used as a metric.
> The above is a software engineering problem. Reimplementing a JSON parser using Opus is neither fun nor useful, so that should not be used as a metric.
I've also built a BitTorrent implementation from the specs in Rust, where I'm keeping the binary under 1MB. It supports all active and accepted BEPs: https://www.bittorrent.org/beps/bep_0000.html
Again, I literally don't know how to write a hello world in Rust.
I also vibe coded a trading system that is connected to 6 trading venues. This was a fun weekend project but it ended up making +20k of pure arbitrage with just 10k of working capital. I'm not sure this proves my point, because while I don't consider myself a programmer, I did use Python, a language that I'm somewhat familiar with.
So yeah, I get what you are saying, but I don't agree. I used highload as an example, because it is an objective way of showing that a combination of LLM/agents with some guidance (from someone with no prior experience in this type of high performing architecture) was able to beat all human software developers that have taken these challenges.
This hits the nail on the head. There's a marked difference between a JSON parser and a real world feature in a product. Real world features are complex because they have opaque dependencies, or ones that are unknown altogether. Creating a good solution requires building a mental model of the actual complex system you're working with, which an LLM can't do. A JSON parser is effectively a book problem with no dependencies.
You are looking at this wrong. Creating a json parser is trivial. The thing is that my one-shot attempt was 10x slower than my final solution.
Creating a parser for this challenge that is 10x more efficient than a simple approach does require deep understanding of what you are doing. It requires optimizing the hot loop (among other things) in ways that 90-95% of software developers wouldn't know how to do. It requires a deep understanding of AVX2.
You need to give it search and tool calls and the ability to test its own code and iterate. I too could not one-shot an interaction layer with Stripe without tools. It also helps to make it research a plan beforehand.
This is the reasoning deficit. Models are very good at generating large quantities of truthy outputs, but are still too stupid to know when they've made a serious mistake. Or, when they are informed about a mistake they sometimes don't "get it" and keep saying "you're absolutely right!" while doing nothing to fix the problem.
It's a matter of degree, not a qualitative difference. Humans have the exact same flaws, but amateur humans grow into expert humans with low error rates (or lose their job and go to work in KFC), whereas LLMs are yet to produce a true expert in anything because their error rates are unacceptably high.
> It should load quicker compared to traditional React apps where the browser loads the HTML, then loads the JS bundle, and only then renders a loading skeleton while likely triggering more requests for data.
Then your JS bundle is broken.
Promises exist. Modules exist. HTTP/2+ exists. You can load data while you load the small amount of JS required to render that data, while the rest of your JS loads in parallel.
If everything is sequential: load giant JS bundle -> fetch -> render, that's because someone architected it like that. Browsers give you all the tools you need to load in parallel, if you don't use them then it's not the browser's fault.
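A minimal sketch of what that looks like in an entry module (the URLs and module names are hypothetical):

```typescript
// Entry module: kick off the data fetch and the code download in parallel.
// Start fetching data immediately -- do NOT await it yet.
const dataPromise = fetch("/api/initial-data").then((r) => r.json());

// Meanwhile, dynamically import the (larger, hypothetical) rendering code.
const renderPromise = import("./render.js");

// Both requests were in flight at the same time; now join them.
const [data, { render }] = await Promise.all([dataPromise, renderPromise]);
render(data);
```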
You do not need SSR or rehydration. That's just Vercel propaganda. They saw that people are doing a stupid thing and decided to push a complex solution to it. Why? It makes them money.
You cannot load any data in a regular React application before you've loaded both React and the React components that trigger the fetch.
If you use code splitting, your initial bundle size can be smaller, yes. That's about it.
I guess in theory you can hack together static loading skeletons that you then remove once React and your initial bundle have loaded, but that's certainly far from a common approach. By that standard, the vast majority of JS bundles would be "broken".
> You cannot load any data in a regular React application before you loaded both React and your React components that trigger the fetch.
You totally can!
Don't call fetch directly from a component - it's brittle. Write a hook to abstract that into one place. In your hook you can support prefetching by awaiting the promise you fired before you loaded your JS bundle (if you don't want to modify the server), or else take advantage of the browser cache. In this way your data and code can load in parallel.
Is it common? Not really. But it's a technique that is in the toolbox of a conscientious webdev.
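A sketch of that hook (the `__preload` global, hook name, and endpoint are hypothetical; the inline script is a one-liner in the HTML served before the bundle):

```tsx
// In the HTML <head>, before the bundle:
//   <script>window.__preload = fetch("/api/search").then(r => r.json())</script>

import { useEffect, useState } from "react";

declare global {
  interface Window { __preload?: Promise<unknown> }
}

function useSearchData<T>(): T | null {
  const [data, setData] = useState<T | null>(null);
  useEffect(() => {
    // Prefer the promise fired before the bundle loaded; otherwise fetch now
    // (a plain fetch may still be served from the browser cache).
    const promise: Promise<unknown> =
      window.__preload ?? fetch("/api/search").then((r) => r.json());
    promise.then((d) => setData(d as T));
  }, []);
  return data;
}
```

This way the data request and the code download overlap instead of running back to back.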
If your task is to send an email, do you want to send it again? Probably not.