Hacker News | ang_cire's comments

> How is this any different than say, Democratic voters who want medicare for all (or whatever) and not getting that for decades?

To be fair, the progressive movement in the Democratic Party is much larger than any actual working-class movement in the GOP. MAGA is not exactly pro-union, pro-striking, or even pro-farmer, given the tariffs. The Progressive Caucus, otoh, now makes up 45% of House Democrats. Zohran Mamdani was just elected mayor of NYC, and is already making big moves against landlords.

And that's even aside from the voters who don't vote for corporate Dems, and then get blamed by the DNC for losses. Every time someone asserts that "Bernie Bros" sat out in 2016, they're talking about Democrats who refused to keep 'voting for the same party over and over again'.


I'm not interested in wading into the wider discussion, but I do want to bring up one particular point, which is where you said

> do you believe that we design our moral systems based on our laws of punishment? That is... quite a claim.

This is absolutely something we do: our purely technical, legal terms often feed back into our moral frameworks. Laws are even created specifically to be used to change people's perceptions of morality.

An example of this is "felon". There is no single, uniform legal definition of what a felony is or isn't across the US. A misdemeanor in one state can be a felony in another. It can be anything from mass murder to traffic infractions. Yet we attach a LOT of moral weight to 'felon'.

The word itself is even treated as a form of punishment: a label attached to someone permanently, which colors how (almost) every person who interacts with them (and is aware of it) will perceive them morally.

Another example is rhetoric along the lines of "If they had complied, they wouldn't have been hurt", which explicitly uses a punishment (being hurt) to create a judgment of immorality on the part of the person injured: they must have been non-compliant (immoral), otherwise they would not have been punished (hurt). The fact that they were being punished means they were immoral.

Immigration is an example where there's been a seismic shift in the moral frameworks of certain groups, based on the repeated emphasis of legal statutes. A law being broken is used to influence people to shift their moral framework to consider something immoral that they didn't care about before.

Point being, our laws and punishments absolutely create feedback loops into our moral frameworks, precisely because we assume laws and punishments to be just.


> An example of this is "felon". There is no actual legal definition of what a felony is or isn't in the US. A misdemeanor in one state can be a felony in another. It can be anything from mass murder to traffic infractions. Yet we attach a LOT of moral weight to 'felon'.

The US is an outlier here; the distinction between felonies and misdemeanours has been abolished in most other common law jurisdictions.

Often it is replaced by a similar distinction, such as indictable versus summary offences, but even where that is conceptually similar to the felony-misdemeanour distinction, it hasn't entered the popular consciousness.

As to your point about law influencing culture: is that really an example of this, or actually the reverse? Why does the US largely retain this historical legal distinction when most comparable international jurisdictions have abolished it? Maybe the US resists that reform because the distinction has acquired a cultural significance there which it never had elsewhere, or at least never to the same degree.

> Immigration is an example where there's been a seismic shift in the moral frameworks of certain groups, based on the repeated emphasis of legal statutes. A law being broken is used to influence people to shift their moral framework to consider something immoral that they didn't care about before.

On the immigration issue: Many Americans seem to view immigration enforcement as somehow morally problematic in itself, an attitude much less common in many other Western countries (including many popularly conceived as less "right wing"). Again, I think your point looks less clear if you approach it from a more global perspective.


That is not the same as this. If you're a multi-PhD holder from Iran who's a world-famous scientist, you can get into e.g. the UK. This would forbid them, purely based on country of origin.

The article says it is a temporary pause. Other sources seem to confirm this:

"Immigrant visa processing from these 75 countries will be paused while the State Department reassesses immigration processing procedures to prevent the entry of foreign nationals who would take welfare and public benefits,"

https://www.reuters.com/world/us/us-suspend-visa-processing-...


Oh, well that's reassuring

When you're walking around rural Illinois and you hear music start playing: https://www.youtube.com/watch?v=72aSGvXeOTs


Because they see what the insurance exec was doing through his job as itself being violence, as it resulted in many deaths.

They view Luigi's alleged actions as self-defense/defense of others, i.e. morally justified.

I wouldn't personally morally disagree with someone Luigi'ing Maduro or the other guy mentioned, according to that same standard. But in this situation and the knock-on hypotheticals of government intervention, this is not an individual using personal force according to their beliefs; these are governments (which have no moral rights, just the assertion and imposition of authority by violence) expropriating them for political purposes. So not defense of others.


> In general we trust people that we bring onto our team not to betray us and to respect general rules and policies and practices that benefit everyone.

And yet we give people the least privileges necessary to do their jobs for a reason, and it is in fact partially so that if they turn malicious, their potential damage is limited. We also have logging of the actions employees take, and so on.

So yes, in the general sense we do trust that employees are not outright and automatically malicious, but we do put *very broad* constraints on them to limit the risk they present.

Just as we 'sandbox' employees via e.g. RBAC restrictions, we sandbox AI.
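To make that concrete, here's a minimal sketch (the role and action names are made up for illustration) of the same least-privilege idea applied to both a human employee and an AI agent: each actor's role grants only the actions it needs, everything else is denied by default, and every check is logged.

```python
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("audit")

# Role -> allowed actions. Anything not listed is denied by default.
# (Hypothetical roles/actions, just to illustrate the principle.)
ROLE_PERMISSIONS = {
    "support_agent": {"read_ticket", "reply_ticket"},
    "ai_assistant": {"read_ticket", "draft_reply"},  # can draft, cannot send or delete
}

def authorize(actor: str, role: str, action: str) -> bool:
    """Least-privilege check with an audit trail: allow only what the role grants."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit.info("actor=%s role=%s action=%s allowed=%s", actor, role, action, allowed)
    return allowed

print(authorize("alice", "support_agent", "reply_ticket"))   # True
print(authorize("llm-1", "ai_assistant", "delete_ticket"))   # False
```

The AI agent isn't assumed malicious here any more than the human is; both simply operate inside a deny-by-default boundary.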


But if there is a policy in place to prevent some sort of modification, then performing an exploit or workaround to make the modification anyways is arguably understood and respected by most people.

That seems to be the difference here, we should really be building AI systems that can be taught or that learn to respect things like that.

If people are claiming that AI is so smart or smarter than the average person then it shouldn't be hard for it to handle this.

Otherwise it seems people are being too generous in talking about how smart and capable AI systems truly are.


First off, LLMs aren't "smart"; they're algorithmic text generators. That doesn't mean an LLM is less useful than a human who produces the same text, but it is not arriving at that text in the same way (it's not 'thinking' about it, or 'reasoning' it out).

This is analogous to math operations in a computer in general. The computer doesn't conceptualize numbers (it doesn't conceptualize anything); it just applies fixed mechanical operations to bits that happen to represent numbers. You can actually recreate computer logic gates with water and mechanical locks, but that doesn't make the water or the concrete locks "smart" or "thinking". Here are Stanford scientists actually miniaturizing this into chip form [1].

[1]: https://prakashlab.stanford.edu/press/project-one-ephnc-he4a...
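A toy illustration of the point (this is my own sketch, not the Stanford work): a half-adder built entirely from NAND gates. Each gate is a fixed mechanical rule with no notion of "number", yet the composition adds bits correctly — and the same NAND rule could be implemented in water valves or silicon.

```python
def nand(a: int, b: int) -> int:
    """A single fixed rule: output 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

def half_adder(a: int, b: int):
    """Sum and carry of two bits, expressed purely as compositions of NAND."""
    n1 = nand(a, b)
    s = nand(nand(a, n1), nand(b, n1))  # XOR built from four NANDs
    c = nand(n1, n1)                    # AND built from two NANDs
    return s, c

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} + {b} -> sum={half_adder(a, b)[0]} carry={half_adder(a, b)[1]}")
```

Nothing in `nand` "understands" arithmetic; addition emerges from wiring fixed rules together, which is the sense in which substrate (water, silicon, weights) is beside the point.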

> But if there is a policy in place to prevent some sort of modification, then performing an exploit or workaround to make the modification anyways is arguably understood and respected by most people.

I'm confused about what you're trying to say. My point is that companies don't actually trust their employees, so it's not unexpected for them not to trust LLMs.


This reads like a joke, but I've known two DBAs who don't use database management tools beyond exporting whole tables to Excel, making manual changes, and importing to update the tables. Scary stuff.


Inevitability just means that something WILL happen, and many of those items are absolutely inevitable:

AI exists -> vacation photos exist -> it's inevitable that someone was eventually going to use AI to enhance their vacation photos.

As one of those niche power users who runs servers at home to be beholden to fewer tech companies, I still understand that most people would choose Netflix over a free jellyfin server they have to administer.

> Not being in control of course makes people endlessy frustrated

I regret to inform you, OP, that this is not true. It's true for exactly the kind of tech people like us who are already doing this stuff, because it's why we do it. Your assumption that people who don't simply "gave up", as opposed to actively choosing not to spend their time managing their own tech environment, is, I think, biased by your predilection for technology.

I wholeheartedly share OP's dislike of techno-capitalism(derogatory), but OP's list is a mishmash of

1) technologies, which are almost never intrinsically bad, and 2) business choices, which usually are.

An Internet-connected bed isn't intrinsically bad; you could set one up yourself to track your sleep statistics that pushes the data to a server you control.
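As a rough sketch of what "a server you control" means in practice (the field names and endpoint here are entirely made up): the same sensor summary a vendor's bed would phone home with could just as easily be serialized and POSTed to your own box.

```python
import json
from datetime import datetime, timezone

def sleep_payload(heart_rate_avg: int, movement_events: int) -> str:
    """Package one night's (hypothetical) sensor summary for a self-hosted endpoint."""
    record = {
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "heart_rate_avg": heart_rate_avg,
        "movement_events": movement_events,
    }
    return json.dumps(record)

payload = sleep_payload(58, 14)
print(payload)
# Sending it is one line against your own server, e.g.:
# urllib.request.urlopen("https://home.example/api/sleep", data=payload.encode())
```

The technology is identical either way; what differs is who the data is sent to.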

It's the companies and their choices to foist that technology on people in harmful ways that makes it bad.

This is the gripe I have with anti-AI absolutists: you can train AI models on data you own, to benefit your and other communities. And people are!

But companies are misusing the technology in service of the profit motive, at the expense of others whose data they're (sometimes even illegally) ingesting.

Place the blame in the appropriate place. Something something, hammers don't kill people.


> You could make the argument that it's one of very few successful U.S. manufacturing company winning on purely technical/capitalist terms

Except it's not winning on that at all. It's "winning" because Chinese EV brands are barred from selling in the US. You can't buy an Avatr if you want. It's in fact protectionist regulations that allowed Tesla to retain EV dominance in the US, in the face of Chinese competition.


Tesla was very popular in the Chinese market and globally, including in markets where Chinese EVs aren't banned, until literally this year, which I'd argue is due in part to the trade war.


This isn't unique to China, it's just the nature of modern manufacturing. The only reason China stands out is because we offshored our manufacturing there, so it's where we see it happen.

I feel like people forget that the entire purpose of factories/automation/modern manufacturing was to divorce human skill from product worth (so that companies wouldn't have to pay workers based on skill). That also means that in the realm of physical goods, "moats" are not maintainable unless you have a manufacturing technique or technology that others don't. Since companies rarely create their own production line machinery, anyone else who can afford the same machines can produce the same products.

The actual "viable strategy for hardware companies" has to be about market penetration; make products that aren't on Amazon, for example, and Amazon can't be used to out-maneuver you. Firearms are a great example of where manufacturing capability does not equal competitiveness; China can absolutely produce any firearm that you can buy in the US, but they don't because other factors (mostly related to regulatory controls) created a moat for manufacturers. Vehicles are another good example. Good luck buying an Avatr car in the US.

But yes, if you plan to make a vacuum, which is just you iterating on what others have done as well, you should probably expect that people are going to trivially iterate on your variant too.

