That's a fair observation, and yet anecdotally I agree with the parent comment. In my career I fixed quite a few bugs about dates being passed around and unexpectedly modified, while I struggle to remember the same problem with objects in general (could be a case of selective memory).
If I had to guess, I'd say it's a combination of:
- the difference between the mental model and the implementation. Dates are objects but "feel" like values: dates are parsed from a single value, and when stored/printed they collapse back to a single value (as opposed to custom objects which are generally a bag of properties, and when printed/stored they still look like an object)
- most common date operations cause the original date object to be mutated, which leads developers to implicitly mutate the passed value even when that's not what they meant
So the default combination is calling code that expects the date to be treated as a value, and called code that accidentally mutates it because mutation is the convenient thing to do. If anything then causes the original value to be saved back to the DB, the data gets corrupted.
Most experienced developers will remember to make a copy of the date object both in the calling code and in the receiving code, but the default remains dangerously easy to get wrong.
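To make the footgun concrete, here's a minimal TypeScript sketch. It assumes a JavaScript-style mutable Date (which seems to be what the thread is about); the function and variable names are hypothetical:

```typescript
// Hypothetical helper: compute a due date "30 days after the given date".
// Date#setDate mutates the receiver in place, so the caller's object changes too.
function dueDate(createdAt: Date): Date {
  createdAt.setDate(createdAt.getDate() + 30); // mutates the argument!
  return createdAt;
}

// Safer variant: copy first, so the caller's value is left untouched.
function dueDateSafe(createdAt: Date): Date {
  const copy = new Date(createdAt.getTime());
  copy.setDate(copy.getDate() + 30);
  return copy;
}

const createdAt = new Date("2024-01-01T00:00:00Z");
const due = dueDate(createdAt);
// createdAt now also reads 2024-01-31: if it gets persisted back to the DB
// after this call, the original creation date is silently corrupted.
console.log(createdAt.toISOString() === due.toISOString()); // true
```

The mutating and copying versions look almost identical at the call site, which is exactly why the bug is easy to miss in review.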
I wonder if we are getting different versions based on geolocation (I'm in Europe), because my experience is the absolute opposite of this. I actually had the thought "maybe I should switch to Apple to stop having to deal with this" just this week (although reading this thread, Siri is just as bad).
My experience is only through android auto and it honestly makes me furious how bad it is. There is absolutely no other tech product in my life that gets even close to how bad voice commands are handled in Android.
In my experience, literally everything sucks:
- single-language voice recognition (me speaking in English with an accent)
- multi-language voice recognition (English commands that include localised names from the country I'm in)
- action in context (understand what I'm actually asking it to do)
- supported actions (what it can actually do)
Some practical examples from just this week:
- I had to repeat 3 times that "no I don't want to reply" because I made the mistake of getting Google to read a WhatsApp message while driving, and it got stuck in the "would you like to reply" loop (it almost always gets stuck - it's my go-to example to show people how bad it is)
- I asked it to queue a very specific playlist on Spotify, and it just couldn't get it right (no matter how specific my command was, I couldn't get it to play a playlist from MY account instead of playing an unrelated public playlist)
- I asked to add a song to a playlist, and it said it couldn't do that (at least it understood what I was asking? maybe)
And in general I gave up trying to use Google Maps through voice commands, because it's just not capable of understanding an English command if it contains a street/location name pronounced in the local language/accent.
Considering that "using AI" can mean anything from "AI wrote the whole article" to "the author used AI to check the grammar", I'd argue this disclaimer is unnecessary and it's safe to assume AI is involved in some way nowadays.
> it's safe to assume AI is involved in some way nowadays
I don't think it's safe to assume so at all. Granted, I only know one journalist, and they've told me they only use LLMs in their work to gather further sources/references to check, everything else they still do "manually" with their own hands.
The editorial team should know exactly the scope of their team's AI usage. The snark mostly comes from them not knowing whether AI was used or not, and being upfront about not knowing it. It feels like they're missing integrity if they don't know such things.
> I don't think it's safe to assume so at all. Granted, I only know one journalist, and they've told me they only use LLMs in their work to gather further sources/references to check, everything else they still do "manually" with their own hands.
I'd argue that your example falls under "which may have used AI in the preparation", which was exactly my point. (I actually had using AI for research as an example, but English is not my first language, I couldn't get the sentence to sound correct, and ChatGPT suggested I drop it)
> The editorial team should know exactly the scope of their team's AI usage. The snark mostly comes from them not knowing whether AI was used or not, and being upfront about not knowing it. It feels like they're missing integrity if they don't know such things.
I don't see this as a lack of integrity, but rather as a futile attempt at being transparent. Everyone else is in the same position, they are just not adding a disclaimer.
And that's nothing specific about journalists, this applies to all professions. At most you can say what your official policy states, but you have absolutely no way of knowing how your employees/coworkers are using AIs.
What's the connection between being an illegal immigrant and having a valid ID?
In my experience as a traveller, any ID from any country is good enough to get a mobile contract. Some countries might check visa status too, but any valid temporary visa is generally enough.
"Do it intentionally" is a funny way to spell "I'm forcing you to do it by law and if you don't you won't be allowed to communicate with other humans in a digital form or access digital content".
And even then it's still a leak when the provider inevitably gets hacked and all your data is out there and you have no legal recourse to get reasonable compensation for it.
> Chatgpt uses mdashes in basically every answer, while on average humans don't
I would not be shocked if an aspect of training is bucketing "this is an example of good writing style" into a specific category. Published books - far more likely to have had an editor sprinkle in fancy stuff - may be weightier for some aspects.
My iPhone converts -- to — automatically. So does Google Docs / Gmail (although I'm not certain if that's on their end or my Mac's auto-correct kicking in). Plenty of them out there.
> other AIs would show the same bias
Unless they've been trained not to use it, now that a bunch of non-technical people believe "emdash = AI, always".
I thought elements were created inside stars and dispersed by supernovas... Our sun has clearly not exploded yet (and I don't think it's big enough to ever go supernova), so why does it matter what elements it can create?