Can someone provide a reason for why anyone should be using Rails? I'm always curious why people love context switching between multiple programming languages.
As opposed to using JavaScript on the front end & back end? That benefit of JS has always seemed a bit overrated to me—the context for front-end & server-side JS is pretty darn different, too.
Anyway, Ruby & Rails are such a joy to use that, at least for me, the fact that it’s written in a different language than the one we need in the browser is a non-issue.
I used to love context switching between frameworks and languages: Rails was more fun, and I felt happier and more creative. Django felt more restrictive, but I felt more productive. NodeJS was more chaotic, but I felt more powerful.
Today I stick with Ruby and Rails. I'm trying to do my context switching in the front-end JS frameworks, but I just feel a bit dead doing it at the moment!
Personally, I don't think there is such a thing as "creatives". All humans are creative; that's what we do: we solve problems by coming up with solutions. Whether that's creating a painting, coming up with a joke/punchline, writing a novel, or creating a product with code is a matter of aptitude/interest and environmental exposure.
While I sort of agree with you, I'm a software engineer with a background and hobbies in the arts. My wife and I are part-time performing magicians, and I used to play various instruments in rock bands, produced an indie album a few years ago, etc. I'm saying this because I hang around people who are hyper-creative.
When I compare those hyper-creative people to other people I know and work with, it becomes very apparent that what we tend to think of as "creativity" is not something that even the majority of people possess.
I definitely agree that everyone is "creative" to an extent within their respective interests and productive pursuits. But I think we might be conflating creativity with productivity. Everyone produces, or at least is capable of producing things. But what we tend to think of as "creativity" involves abstract thinking and piecing together things that are non-obvious.
In that sense, hackers and engineers do often exhibit this form of creativity. I mean, using something in a way that it was not originally intended is an example of that "outside of the box" abstract thinking.
But my point, as anecdotal as it is: in my 40+ years on this planet, I've encountered far more people who are incapable of abstract thinking and of coming up with novel ways to combine things than people who do possess this ability.
Now, it might be a muscle; it might boil down to interests and personal ambitions. But, and I think you'd even agree with this, that's a hypothesis. I don't personally see evidence in support of it.

Some of the people who can't think abstractly are really decent programmers I've met. They can produce shippable code and solve problems, but they can't think in terms of design patterns and abstractions: they need concrete examples for everything. The second you start to abstract a solution and talk in terms of generics, you lose them. And I'm not putting them down; they're still great people to have on your team, with a great work ethic, and they love what they do. They just can't think abstractly and are therefore not "creative" in the way that I interpret that word.
Games used to be crisp as hell, and now they run like shit, crash, take 150 GB to download, and 150 years to launch. If we played games for the graphics, one of the most popular MMOs wouldn't be based on a browser game from 2002; in fact, we wouldn't be playing games at all, we'd be playing real life.
Look at what Epic Games did with Fortnite. They took a competitive game that ran smoothly and killed its scene for turbobloat graphics and skins.
Because a percentage of people who vote Trump tell everyone they will vote Dem so they won't be bullied or frozen out by their friends, relatives, colleagues, etc.
Dr Phil described it well I think.
Except that you'll hit an issue with the automated renewal at some point and it'll likely be when you don't have someone available to deal with it - cue several hours of downtime. A problem could occur with the cert issuer and then you've got all of their customers with hours of downtime - not really a good idea.
90 days is a good compromise between encouraging autorenewal and allowing services to be down for a couple of days without really impacting anyone. It's short enough so that the person who set up the automation is probably still employed and thus they have an incentive to fix any issues.
As long as it's not so short that there isn't a single working day during the alerting period, I think it's fine. 45 days means 15 days between "it didn't renew on schedule" and "anything breaks".
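To make that arithmetic concrete (assuming the common pattern where tooling first attempts renewal at two-thirds of the cert's lifetime, which is the default cadence for Let's Encrypt clients like certbot):

```typescript
// Days of retry slack between the first renewal attempt and actual expiry,
// assuming renewal is first attempted at 2/3 of the cert's lifetime.
// The final 1/3 of the lifetime is the buffer for fixing broken automation.
function renewalBufferDays(lifetimeDays: number): number {
  return lifetimeDays / 3;
}

renewalBufferDays(90); // 30 days of slack, as with today's 90-day certs
renewalBufferDays(45); // 15 days of slack, matching the figure above
```

So shortening the lifetime shrinks the window in which a human can notice and fix a failed renewal before anything actually breaks.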
To be honest, GPT-4o often helps me understand poorly written emails from colleagues. I often find myself asking it "Did he mean this when he wrote that?"... I'm a bit on the spectrum, so if someone asks me vague questions or hallucinates words for things that don't exist, I have to verify with ChatGPT to reaffirm that they are in fact just stupid.
Same here. Sometimes my understanding of a message doesn't make sense, but when GPT-4 gives me the same reading, it feels like we can't both have reached the same conclusion by mistake. Of course that's not true, but in most cases it's good enough.
Indeed it is, but that fundamental concept is for human understanding of how physics works, based on how we perceive/think about the universe; it's not the metaphysics of the universe itself.
I think it's your mindset and how you approach it. E.g. some people are genuinely bad at googling their way to a solution. While some people know exactly how to manipulate the google search due to years of experience debugging problems. Some people will be really good at squeezing out the right output from ChatGPT/Copilot and utilize it to maximum potential, while others simply won't make the connection.
Its output depends on your input.
E.g., say you have an API's Swagger documentation and you want to generate a TypeScript type definition from that data: you just copy-paste the docs into a comment above the type, and Copilot auto-fills your TypeScript type definition, even adding ? for properties which are not required.
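As a sketch of that flow (the schema comment and the `User` type below are made up for illustration), you paste the doc fragment above the type and the completion marks non-required properties optional:

```typescript
// Pasted from a hypothetical Swagger doc:
//   User:
//     id: integer (required)
//     name: string (required)
//     email: string
type User = {
  id: number;
  name: string;
  email?: string; // optional: not marked required in the schema
};

// An object omitting the optional field still type-checks.
const u: User = { id: 1, name: "Ada" };
```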
If you clearly define the goal of a function in a JSDoc comment, you can implement very complex functions. E.g., you define it in steps, and in the function body write out each step. This also helps your own thinking.
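For example (the function itself is made up to illustrate the pattern), the JSDoc states the goal as numbered steps and the body mirrors them:

```typescript
/**
 * Normalizes a list of email addresses:
 * 1. trim surrounding whitespace
 * 2. lowercase each address
 * 3. drop duplicates, preserving first-seen order
 */
function normalizeEmails(emails: string[]): string[] {
  // steps 1 & 2: trim and lowercase each entry
  const cleaned = emails.map((e) => e.trim().toLowerCase());
  // step 3: dedupe while preserving order (Set keeps insertion order)
  return [...new Set(cleaned)];
}
```

With the steps spelled out like this, a tool like Copilot has a much easier time filling in the body, and you've already done the design thinking yourself.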
With GPT-4o you can even draw diagrams in, e.g., excalidraw, or take screenshots of the issues in your UI to complement your question about that code.
> some people know exactly how to manipulate the google search due to years of experience debugging problems
this really rings true for me. especially as a junior, I always thought one of my best skills was that I was good at Googling. I was able to come up with good queries and find some page that would help. Sometimes, a search would be simple enough that you could just grab a line of code right off the page, but most of the time (especially with StackOverflow) the best approach was to read through a few different sources and pick and choose what was useful to the situation, synthesizing a solution. Depending on how complicated the problem was, that process might have occurred in a single step or in multiple iterations.
So I've found LLMs to be a handy tool for making that process quicker. It's rare that the LLM will write the exact code I need - though of course some queries are simple enough to make that possible. But I can sort of prime the conversation in the right direction and get into a state where I can get useful answers to questions. I don't have any particular knowledge on AI that helps me do that, just a kind of general intuition for how to phrase questions and follow-ups to get output that's helpful.
I still have to be the filter - the LLM is happy to bullshit you - but that's not really a sea change from trying to Google around to figure out a problem. LLMs seem like an overall upgrade to that specific process of engineering to me, and that's a pretty useful tool!
Keep in mind that Google's results are also much worse than they used to be.
I'm using both Kagi & LLM; depending on my need, I'll prefer one or the other.
Maybe I can get to the same result with an LLM, but all the conversation/guidance required is more time-consuming than just refining a search query and browsing through the first three results.
After all, the answer is rarely available verbatim anywhere. Reading people's questions/replies provides clues to find the actual answer I was looking for.
I have yet to achieve this result through an LLM.
> E.g. you define it in steps, and in the function line out each step. This also helps your own thinking
Yeah, but there are other ways to think through problems, like asking other people what they think, which you can evaluate based on who they are and what they know. GPT is like getting advice from a cross-section of everyone in the world (and you don't even know which one), which may be helpful depending on the question and the "people" answering it, but it may also be extraordinarily unhelpful, especially for very specialized tasks (and specialized tasks are where the profit is).
Like most people, I have some very specific knowledge of things that fewer than 100 people in the world know better than me, while thousands or even millions more have some poorly conceived general idea about them.
If you asked GPT a question about it, it would bias toward those millions, the statistically greater quantitative answer, over the qualitative one. But maybe GPT only has a few really good sources in its training data that it uses for its response, and then it's extremely helpful, because it's like accidentally landing on a Stack Overflow response by some crazy genius who reads all day, lives out of a van in the woods, and uses public library computers to answer queries in his spare time. But that's sheer luck, and no more likely than what a regular search will get you.
I think the problem you're describing is not due to distractions and social media.
I think the fault there is that school has changed: kids aren't taught that they're allowed to make mistakes. If you're not taught that failure is part of learning, then you're just teaching kids to build anxiety, because they're not allowed to fail.