Imagine a subreddit full of people giving bad drug advice. It's at least partly populated by people who are intelligent and capable of doing real work - but they're mostly not professional drug advisors. I think at best you could hold OpenAI to the same standard as that subreddit. That's not a super high bar.
It'd be different if one were signing up for an OpenAI Drug Advice Product, which advertised itself as an authority on drug advice. I think in this case the expectation is set differently up front, with a "ChatGPT can make mistakes" footer on every chat.
> I think in this case the expectation is set differently up front, with a "ChatGPT can make mistakes" footer on every chat.
If I keep telling you I suck at math while getting smarter every few months, eventually you're just going to introduce me as the friend who's underconfident but super smart at math. For many people, LLMs are smarter than any friend they know, especially at the K-12 level.
You can make the warning more shrill but it'll only worsen this dynamic and be interpreted as routine corporate language. If you don't want people to listen to your math / medical / legal advice, then you've got to stop giving decent advice. You have to cut the incentive off at the roots.
This effect may force companies to simply ban chatbots from certain conversations.
The "at math" is the important part here - I've met more than a few people who are super smart about math but significantly less smart about drugs.
I don't think that it's a good policy to forcibly muzzle their drug opinions just because of their good arithmetic skills. Absent professional licensing standards, the burden is on the listener to decide where a resource is strong and where it is weak.
Alternatively, Google claimed Gmail was in public beta for years. People did not treat it like a public beta that could die without warning, despite being explicitly told to by a company that, in recent years, has developed a reputation for doing that exact thing.
It's possible (and in fact what the law requires) that the journalist against whom a search warrant is issued is suspected of aiding in the leak or committing a crime, though. I don't think we yet know that she's not in that category; only that she says she was told she wasn't the focus of the probe and was not currently formally accused of a crime.
The article you linked shows a 12-13% autism-positive rate over N~100 cases, in the UK - and at least in the free abstract, it doesn't distinguish between mild/moderate/severe presentations, or note comorbidities among that population.
I agree that we should be kind to individuals and that understanding an individual's problems can help with that. That said, this paper does not appear to provide convincing evidence that autism is a major contributor to homelessness.
It looks like a third-party UI - her Mastodon client - is rendering the description metadata in a way that makes it read as if that metadata were part of the post itself.
Auto-generating said description tag in the first person is a bit of a weird product decision - probably a bad one that upsets users more than it's useful - but the presentation layer isn't owned by Meta here.
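For anyone curious about the mechanics: in the Mastodon REST API, the post body and the image alt text are separate fields, roughly like the hand-written sketch below (the values are made up; `content`, `media_attachments`, and `description` are the actual field names).

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Rough shape of a status from Mastodon's REST API. "content" is the
# post body the author wrote; "description" is per-attachment alt text.
# A client that renders the description under the image makes it read
# as if it were part of the post itself.
my $status = {
    content           => "<p>What the author actually wrote</p>",
    media_attachments => [
        {
            type        => "image",
            url         => "https://files.example/photo.jpg",
            description => "Auto-generated alt text lands here",
        },
    ],
};

print $status->{media_attachments}[0]{description}, "\n";
```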
Thanks for the explanation, that makes a lot of sense. I'll bet that when it's not a sensitive topic, this totally goes unnoticed by a lot of users. Frustratingly, I would imagine that the response from most people would just be that the LLM summarizations / metadata tagging should be censored in "sensitive cases," but will otherwise be accepted by the user base.
If anything there's an interesting angle in the facts of this story about a new form of "mansplaining," but it's the algorithm doing "robosplaining" for the human race.
“There’s at least one spot within 100 miles where you can wait 20 minutes to get enough charge to get to the next charger” is not an argument that will convince someone to give up the convenience of the gas station.
The convenience argument works for a small segment of the population that road trips a few hundred miles at a time regularly. For the rest of us, EVs are far more convenient. I don't ever go to a gas station, and every day I start out with 320 miles of range. I stop at the EV equivalent of a gas station two or three times a year. I've saved a lot of time not having to get gas every week.
The changes are not trivial. Gas pumps are everywhere; EV chargers are much more limited, which means you have to stop where they are. You can make the trips, but sometimes it means stopping to charge in places you didn't want to be, which can be a significant change. Worse, the places you actually want to be often don't have a charger, so it can mean charging at some gas station you don't want to spend half an hour at, then driving 10 minutes to the museum you came for. (Even in the rare case there's transit at the charging stop, they don't want you parked at the charger for 3 more hours after you're fully charged.)
If you're up in Neah Bay, WA (I've been out there, so this isn't a fantasy scenario) and suddenly realize you need to charge, you need to drive an hour and ten minutes to Forks, WA. And they only have a 250 kW charging station, so you're going to need to wait 30-40 minutes. If you then need to get back to Neah Bay, you're going to spend a total of about 3 hours.
And, for my case, Neah Bay, WA is closer to the nearest charging station than where I most typically am for work.
If you _live_ in Neah Bay, you likely use your home charger. There are also slow chargers in nearby hotels for tourists.
If you are traveling through, then you just plan to have enough charge to reach the next charger (50 miles away in Forks).
I know that area well; I travel through it every few months. It also doesn't have a lot of gas stations, and the existing ones run about $1.50 over the regular price per gallon.
We already see "paid relays" and relays that filter certain content, even as small as nostr is today. I think the end state, if it manages to really catch on, is going to be as "oligarchical" as mastodon or other federated networks today - just via relays instead of via homeservers.
A step in the right direction for sure! But I don't feel like Nostr is the final target that nature is shooting for here.
The solution to bad relays is to just use different relays. Changing them is just a matter of publishing a new kind 10002 relay list (NIP-65), and optionally copying over your old notes (or reseeding them from local backups).
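For concreteness, a kind 10002 relay list is just an event whose `r` tags name your preferred relays - a minimal sketch below, with hypothetical relay URLs and the NIP-01 signing step (the `id`, `pubkey`, and `sig` fields) omitted.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use JSON::PP;

# Unsigned skeleton of a NIP-65 relay-list event (kind 10002).
# Each "r" tag names a relay; an optional third element marks it
# read-only or write-only. A real client adds id/pubkey/sig per
# NIP-01 and publishes this so others know where to find you.
my $relay_list = {
    kind       => 10002,
    created_at => time(),
    content    => "",
    tags       => [
        [ "r", "wss://relay.example.com" ],          # read + write
        [ "r", "wss://mirror.example.net", "read" ], # read only
    ],
};

print JSON::PP->new->pretty->canonical->encode($relay_list);
```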
+1, the user owning the ID is a step in the right direction compared to a "homeserver" owning the key, and it's what makes this possible.
That said - maybe (total hypothetical) the reason one relay becomes really big is that a lot of people think it provides really good service, and it becomes difficult to convince the majority of the network to route around it. That would create a problem similar to what we see in more established federated chat networks.
Honestly, $_ and "what does a function do when I don't supply any arguments?" are really nice in Perl, and not that difficult to understand. I think a lot of languages could use a 'default variable'.
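As a quick illustration (a toy sketch): most of Perl's builtins and operators fall back to $_ when you give them nothing, so a filter loop barely has to name a variable at all.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Everything below operates on $_ implicitly:
while (<DATA>) {       # while (defined($_ = <DATA>)) in disguise
    next unless /\S/;  # pattern match against $_ - skip blank lines
    tr/a-z/A-Z/;       # transliterate $_ in place
    print;             # print $_
}

__DATA__
hello

world
```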
$_ was one of the things that put me off perl, because the same syntax meant different things depending on context.
The Pragmatic Programmers had just started praising Ruby, so I opted for that over Perl, and I've stuck with it ever since. I hated PHP and didn't like Python's whitespace thing. I never Ruby on Rails'd either. That said, my first interactive website was effectively a hello-world button built with CGI/Perl.
But trying to learn to code from reading other people's Perl scripts was way harder than with the (then) newer language alternatives.
Now that I'm over 50, none of that is nearly as important. I remember being young and strongly opinionated about this vs. that - it's just part of the journey, and the culture. It also explains the current FizzBuzz-in-CSS-minimisation post. We do because we can, not necessarily because we should.