> It's a real shame because Elon's goals of allowing an unrestricted AI are somewhat noble
When I was young it was considered improper to scrape too much data from the web. We would even set delays between requests, so as not to be rude internet citizens.
Now, it's considered noble to scrape all the world's data as fast as possible, without permission, and without any thought as to the legality of the material, and then feed that data into your machine (without which the machine could not function) and use it to enrich yourself, while removing our ability to trust that an image was created by a human in some way (an ability that we have all possessed for hundreds of thousands of years -- from cave painting to creative coding -- and which has now been permanently and irrevocably destroyed).
Yeah I think it's a mistake to focus on writing "readable" or even "maintainable" code. We need to let go of these aging paradigms and be open to adopting a new one.
In my experience, LLMs perform significantly better on readable maintainable code.
It's what they were trained on, after all.
However what they produce is often highly readable but not very maintainable due to the verbosity and obvious comments. This seems to pollute codebases over time and you see AI coding efficiency slowly decline.
> Poe's law is an adage of Internet culture which says that any parodic or sarcastic expression of extreme views can be mistaken for a sincere expression of those views.
The things you mentioned are important but have been on their way out for years now regardless of LLMs. Have my ambivalent upvote regardless.
as depressing as it is to say, i think it's a bit like the year is 1906 and we're complaining that these new tyres they're making for cars are bad because they're no longer backwards compatible with the horse-drawn wagons we might want to attach them to in the future.
Isn't that still considered cooking? If I describe the dish I want, and someone else makes it for me, I was still the catalyst for that dish. It would not have existed without me. So yes, I did cook it.
> If I describe the dish I want, and someone else makes it for me, I was still the catalyst for that dish. It would not have existed without me. So yes, I did "cook" it.
The person who actually cooked it cooked it. Being the "catalyst" doesn't make you the creator, nor does it mean you get to claim that you did the work.
Otherwise you could say you "cooked a meal" every time you went to McDonald's.
I've found the biggest impediment to this strategy is social pressure. The small step methodology goes against the common sense knowledge that the greatest gains come from hard work, so it often receives a lot of push back from friends and family. In my experience, if someone witnesses you taking a small step, they're likely to tell you you're not trying hard enough, or give you some of their own advice on what you should be doing instead.
Sure it is. Forbid training models on images of humans, humanoids, or living creatures, and they won't be able to generate images of those things. It's not like AI is some uncontrollable magic force that hatched out of an egg. It can only output what you put in.
Years of experience working in Enterprise and complex systems.
And that is all on point with the criticism: while an AI can design a new language based on an existing language like Clojure, we need actual experienced people to design new, interesting languages that add new constraints and make Software Engineering as a whole better. And with AI we are also killing the possibility of new people getting up to speed and becoming a future Rich Hickey.
> And we are also killing with AI the possibility of new people getting up to speed and becoming a future Rich Hickey.
Not sure I am on board with this part... I find LLMs in particular to be great teachers, specifically for getting up to speed on the path to becoming a future Rich Hickey.
it is indeed a great teacher, but there are times when it hallucinates and sticks to the hallucinated content even after several iterations unless a human in the loop breaks it. i've wasted hours believing what an LLM hallucinated.
my learnings are a lot of microdoses of things that I don't usually work on day to day, so i don't want to spend time reading about them. but this sort of learning would be otherwise impossible, so gotta thank the LLM for that.
I always accept friend requests in case they want to send me a message, which they sometimes do. Is that bad? Maybe I should go back and delete all those people.
It is your choice, but for me, it is strictly professional. There is some overlap with friends, but those I never worked with are not on my LinkedIn contact list. Somewhat counterintuitively, no recruiters either. Many (if not all) recruiters have very "noisy" contact lists, which complicates navigation via degrees of proximity, like 2nd, 3rd, etc. When the list of contacts is carefully curated, it adds value.
People you don't know but who take the time to leave a considerate message explaining why they would like to connect are also probably fine.
But some "Jeffrey Epstein" rando wanting to connect without even explaining why should be an instant ignore. You are the company you keep, so you might as well know who your company is.