I'm confident that you didn't realize what you were saying, but I really chuckled at "I can't think of any large downsides [in institutionalizing a clearly very legally questionable practice]".
There's a thing called "copyright," and it's kind of like a union, but for people who write or create art. It gives them the right to decide who gets to make a copy. Many of the best sources of news put up a paywall because that's what allows them to pay their reporters. When you make an illicit copy without their permission, you undermine their ability to make a living. In other words, to eat.
I'm not interested in having a debate on the legality of it, which is why I said "legally questionable." And it doesn't strike me as implausible that you don't know what copyright is, if you don't accept the premise that linking to the Internet Archive for any and all paywalled contemporary content is at least legally questionable.
> if you don't accept the premise that ... is at least legally questionable.
The premise was that this is so obvious that my naivety is funny. But no, you don't want to debate that point. Why would you care to consider otherwise? It's not you who loses face if you're correct.
You'll also notice that the link in this post (https://archive.is/TajtJ) shows a 'log in' button, implying that log-in credentials were not used (or abused) to get/share this snapshot.
I don’t follow the first paragraph of this comment at all; it just seems vaguely antagonistic. You also seem to be suggesting I’m taking a position in a debate that I’m not.
That such a blog post exists at least suggests the legal “question” exists, which again is the only thing I said in the first place.
I agree with your original post that the need for hard skills will persist, but I see it in the other direction: software engineers are going to have to get better at thinking in larger abstractions, not deeper understanding of the stack. Those who can only solve problems locally and repeat the patterns they've seen before rather than create new patterns from building blocks are the ones who are going to struggle.
"software engineers are going to have to get better at thinking in larger abstractions"
........Math was first on my list.
I don't know how else to say that.
Computer science is indistinguishable from sufficiently advanced maths.
The AI can already do that part.
The abstraction that matters going forward is understanding why the abstraction chosen by the AI does or doesn't match the one needed by the customer's "big picture".
The AI is a bit too self-congratulatory in that regard, even if it can sometimes spot its own mistakes.
A lot of studying math is just learning jargon and applications for what are actually pretty straightforward concepts, which lets you communicate better with the computer. You get higher-bandwidth communication and a better ability to catch the nuances in things it might propose. You can propose things yourself and understand when it replies with nuances you missed.
Like intro differential geometry is basically a deep dive into what one actually does when reading a paper map, something everyone (over 30?) is familiar with. But it turns out there's plenty to fill a graduate-level tome on that topic.
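For what it's worth, here's a minimal sketch of the map problem in code (numpy assumed; the coordinates and numbers are purely illustrative): distance measured with a ruler on a flat lat/lon chart agrees with the true great-circle distance near the equator but drifts badly at high latitude, which is exactly the kind of metric distortion differential geometry formalizes.

    import numpy as np

    R = 6371.0  # mean Earth radius, km

    def great_circle_km(lat1, lon1, lat2, lon2):
        # True distance on the sphere (haversine formula).
        p1, p2 = np.radians(lat1), np.radians(lat2)
        dlat = p2 - p1
        dlon = np.radians(lon2 - lon1)
        a = np.sin(dlat / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlon / 2) ** 2
        return 2 * R * np.arcsin(np.sqrt(a))

    def ruler_on_chart_km(lat1, lon1, lat2, lon2):
        # Naive flat-map distance: treat (lat, lon) as plane coordinates.
        return R * np.radians(np.hypot(lat2 - lat1, lon2 - lon1))

    # 10 degrees of longitude at the equator vs. at 60N:
    print(ruler_on_chart_km(0, 0, 0, 10), great_circle_km(0, 0, 0, 10))     # ~1112 vs ~1112
    print(ruler_on_chart_km(60, 0, 60, 10), great_circle_km(60, 0, 60, 10)) # ~1112 vs ~556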
Linear algebra is basically studying easy problems: y=ax. Plenty to write about how to make your problem (or at least parts of it) fit that mould.
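A hedged sketch of that mould-fitting (numpy assumed; the cubic and the noise level are made up for illustration): even a polynomial fit, which looks nonlinear in the input t, is literally the easy problem y = Ax once you stack powers of t into the columns of A.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(-1, 1, 50)
    y = 2.0 - 3.0 * t + 0.5 * t**3 + rng.normal(0, 0.05, t.shape)  # noisy cubic samples

    # Columns of A are 1, t, t^2, t^3; the unknown x holds the coefficients.
    A = np.vander(t, 4, increasing=True)

    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    print(coeffs)  # close to [2.0, -3.0, 0.0, 0.5]

The same trick underlies Fourier fits, splines, and most of regression: choose the columns well and the "hard" problem becomes y = Ax.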
I suspect, and I think I've seen others say, that you get better outputs from LLMs when using jargon. Essentially, its pattern matching tells it to say what an expert would say when you use the terminology experts use.
Yep, exactly. The failure to realize that you mean different things when talking about "larger abstractions" is exactly the kind of miscommunication that software people will need to navigate better in the future.
Ah, I think “Math” as a single word on its own means many different things to many different people; I didn’t interpret it in quite the same way. But I see what you mean.
I’m not sure that my colleagues who I think of as “good at math” and “good at thinking in larger abstractions” are necessarily the same ones, but there’s definitely a lot of overlap.
Not sure why the /s here; it feels like documentation being read by LLMs is an important part of AI-assisted dev, and it's entirely valid for that documentation to be in part generated by the LLM too.
As I used LLMs more and more for fact-type queries, my realization was that while they sometimes give false information, so do individual humans, even purported subject matter experts. It just turns out that you don’t actually need perfectly true information most of the time to get through life.
It probably did, but they didn't feel the need to fully explain why they were confident it was AI generated, since that's not the point of the article.
I think it’s pretty mature of the author to recognize that this is the way they (and most humans) work, rather than acting like they can always treat others with their full capacity for respect.
It’s all a matter of perspective I suppose, and of course I understand why you say this, but no professional options trader I’ve ever met would speak in these terms.
I don't work in firmware at all, but I'm working next to a team that's migrating an application from VMs to K8s, and they refer to the VMs as "bare metal", which I find slightly cringeworthy. But hey, whatever language works to communicate an idea.
I'm not sure I've ever heard "bare metal" used to refer to virtualized instances. (There were debates around Type 1 vs. Type 2 (hosted) hypervisors at one point, but I haven't heard that come up in years.)