
It's interesting how despite Google and Amazon both canceling products constantly, only one is infamous for the practice.

People demonize attestation. They should keep in mind that far from enslaving users, attestation actually enables some interesting, user-beneficial software shapes that wouldn't be possible otherwise. Hear me out.

Imagine you're using a program hosted on some cloud service S. You send packets over the network; gears churn; you get some results back. What are the problems with such a service? You have no idea what S is doing with your data. You incur latency, transmission time, and complexity costs using S remotely. You pay, one way or another, for the infrastructure running S. You can't use S offline.

Now imagine instead of S running on somebody else's computer over a network, you run S on your computer instead. Now, you can interact with S with zero latency, don't have to pay for S's infrastructure, and you can supervise S's interaction with the outside world.

But why would the author of S agree to let you run it? S might contain secrets. S might enforce business rules S's author is afraid you'll break. Ordinarily, S's authors wouldn't consider shipping you S instead of S's outputs.

However --- if S's author could run S on your computer in such a way that he could prove you haven't tampered with S or haven't observed its secrets, he can let you run S on your computer without giving up control over S. Attestation, secure enclaves, and other technologies create ways to distribute software that otherwise wouldn't exist. How many things are in the cloud solely to enforce access control? What if they didn't have to be?
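For concreteness, here's a toy sketch of that attestation handshake. Real systems (TPM quotes, SGX/SEV attestation) use hardware-fused asymmetric keys and a certificate chain; this stand-in uses a shared HMAC key as the trust root, and all the names are hypothetical:

```python
import hashlib
import hmac

# Toy sketch of the remote-attestation flow described above. A shared
# HMAC key stands in for the device's hardware attestation key; real
# deployments use asymmetric signatures and a vendor certificate chain.

VENDOR_KEY = b"hardware-fused-secret"  # stand-in for the device's attestation key
EXPECTED_S = hashlib.sha256(b"binary of S v1.0").hexdigest()  # measurement of untampered S

def device_quote(loaded_binary: bytes, nonce: bytes):
    """What the user's machine reports: a measurement of the loaded
    software plus a keyed signature over (measurement, nonce)."""
    measurement = hashlib.sha256(loaded_binary).hexdigest()
    sig = hmac.new(VENDOR_KEY, measurement.encode() + nonce, "sha256").hexdigest()
    return measurement, sig

def author_verifies(measurement: str, sig: str, nonce: bytes) -> bool:
    """S's author accepts only if the signature checks out AND the
    measurement matches the known-good build of S."""
    expected_sig = hmac.new(VENDOR_KEY, measurement.encode() + nonce, "sha256").hexdigest()
    return hmac.compare_digest(sig, expected_sig) and measurement == EXPECTED_S

nonce = b"fresh-random-challenge"
m, s = device_quote(b"binary of S v1.0", nonce)
print(author_verifies(m, s, nonce))      # genuine S -> True
m2, s2 = device_quote(b"patched binary of S", nonce)
print(author_verifies(m2, s2, nonce))    # tampered S -> False
```

The nonce is what makes the quote fresh: without it, a tampered machine could replay an old, honest measurement.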

Sure, in this deployment model, just like in the cloud world, you wouldn't be able to run a custom S: but so what? You don't get to run your custom S either way, and this way, relative to cloud deployment, you get better performance and even a little bit more control.

Also, the same thing works in reverse. You get to run your code remotely in such a way that you can trust its remote execution just as much as you can trust code executing on your own machine. There are tons of applications for this capability that we're not even imagining because, since the dawn of time, we've equated locality with trust, and we can now, in principle, decouple the two.

Yes, bad actors can use attestation technology to do all sorts of user-hostile things. You can wield any sufficiently useful tool in a harmful way: it's the utility itself that creates the potential for harm. This potential shouldn't prevent our inventing new kinds of tool.


> People demonize attestation. They should keep in mind that far from enslaving users, attestation actually enables some interesting, user-beneficial software shapes that wouldn't be possible otherwise. Hear me out.

But it won't be used like that. It will be used to take freedoms away from users.

> But why would the author of S agree to let you run it? S might contain secrets. S might enforce business rules S's author is afraid you'll break. Ordinarily, S's authors wouldn't consider shipping you S instead of S's outputs.

The use case you're describing already exists and is currently handled with DRM, either in the browser or in the app itself.

You're right that it will make this easier for the app's author to do, and in theory it is still a better option in video games than kernel anti-cheat. But it is still limiting user freedoms.

> Yes, bad actors can use attestation technology to do all sorts of user-hostile things. You can wield any sufficiently useful tool in a harmful way: it's the utility itself that creates the potential for harm. This potential shouldn't prevent our inventing new kinds of tool.

The majority of uses will be user-hostile, because those are the only cases anyone will decide to fund.


> Attestation, secure enclaves, and other technologies create ways to distribute software that otherwise wouldn't exist. How many things are in the cloud solely to enforce access control? What if they didn't have to be?

To be honest, mainly companies need that; personal users do not. And additionally, companies are NOT restrained by governments from exploiting customers as much as possible.

So... I also see it as enslaving users. And tell me: where does this actually give PRIVATE persons, NOT companies, a net benefit?


Additionally:

> This potential shouldn't prevent our inventing new kinds of tool.

Why can I picture someone who wants to build an atomic bomb for shits and giggles using this argument, too? Hyperbolic as my example is, the argument given here is not a good one either.

The immutable Linux people build tools without building good tools that actually make it easier for private people at home to adapt an immutable Linux to THEIR liking.


The atomic bomb is a good example of what I'm talking about. The reason we haven't had a world war in 80 years is the atomic bomb. Far from being an instrument of misery, it's given us an age of unprecedented peace and prosperity. Plus, all the anti-nuclear activism in the world hasn't come one step closer to banishing nuclear weapons from the earth.

In my personal philosophy, it is never bad to develop a new technology.


I will put some trust in these people if, at a minimum, they make this a pure nonprofit organization, building IN measures to ensure it will not be pushed toward the most obvious use case, which is fighting user freedom. This shouldn't be some afterthought.

"Trust us" is never a good idea with profit seeking founders. Especially ones who come from a culture that generally hates the hacker spirit and general computing.

You basically wrote a whole narrative of things that could be. But the team is not even willing to make promises as big as yours. Their answers were essentially just "trust us we're cool guys" and "don't worry, money will work out" wrapped in average PR speak.


> bad actors can use attestation technology to do all sorts of user-hostile things

Not just can. They will use it.


If they wanted to do that, they already would have. Do you think laptop makers need this technology to limit user freedom this way?

You're providing mechanism, not policy. It's amazing how many people think they can forestall policies they dislike by trying to reject mechanisms that enable them. It's never, ever worked. I'm glad there are going to be more mechanisms in the world.

4chan is old enough to drink. Something Awful is from the 20th century. Don't pretend that transgressive internet content is some novel challenge that today's youth must face for the first time.

You are mistaken if you think it is about transgression. Old forums had no algorithmic feeds to manipulate what content you saw and steer you. That is the issue, not so much the content itself.

No, our algorithmic manipulation was done through corporate media. Same monsters, different medium. It was the church, then newspapers, cable news, and now social media. https://en.wikipedia.org/wiki/Manufacturing_Consent

4chan is a peaceful place compared to modern social media.

Funny thing is, too, that you can do age verification with zero knowledge proofs. No ID needed -- in principle.

Yet in practice, yeah, it'll be the death of anonymity. To allow ZKPs to take off would be letting a good panic go to waste, right? /s
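To illustrate the first point, here's a toy data-flow sketch of ZKP-style age verification: the site learns a single bit ("over the threshold") and nothing else. Real schemes (zk-SNARK range proofs, BBS+ credentials) also make the token unlinkable and use asymmetric signatures; this stand-in shares an HMAC key between issuer and verifier purely for illustration, and every name in it is hypothetical:

```python
import hmac
import secrets

# Toy sketch of privacy-preserving age verification. The issuer checks
# the ID privately and signs ONLY the predicate; the site never sees a
# name or birthdate. A shared HMAC key stands in for the issuer's
# public-key signature in a real deployment.

ISSUER_KEY = secrets.token_bytes(32)  # held by the ID issuer

def issue_age_token(birth_year: int, cutoff_year: int):
    """Issuer verifies the ID in private and attests only 'old enough'."""
    if birth_year <= cutoff_year:  # born on or before the cutoff
        return hmac.new(ISSUER_KEY, b"over-threshold", "sha256").digest()
    return None  # not old enough: no token issued

def site_accepts(token: bytes) -> bool:
    """The site checks the one-bit predicate, nothing more."""
    expected = hmac.new(ISSUER_KEY, b"over-threshold", "sha256").digest()
    return hmac.compare_digest(token, expected)
```

The point of the sketch is the information flow: the identity check and the site visit never touch the same party.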

While I believe the genesis of this age limit push is a good old fashioned moral panic, it's also obvious that the usual enemies of free speech are salivating at using this panic as a pretext to ban anonymity on the internet.


This site is social media. Should under-15s be prevented from discussing developments in the tech industry?

No? On what grounds? HN uses opaque feed ranking algorithms. It's run by a for-profit US tech company. It uses dark patterns (e.g. shadowbans and unwired "flag" links) that prompt users to engage under false pretenses.

It even has advertisements. The horror!

Yet nobody serious says HN is harmful to the fledgling minor technologist.

I've yet to see a logical rule allowing minors to access HN but prohibiting them from scrolling Instagram. Every demarcation scheme I've seen is some variant of "big company bad", which is a ridiculous standard for a law intended to prevent the harms that the *structure* of a medium (as opposed to the identity of its owners) produces.

In a nation of laws, an act is allowed or prohibited based on the nature of the act itself. Actors don't get special privileges based on who they are.


> This site is social media.

Is it?

If so then I would say the term "social media" has more or less lost all meaning.

To me HN is more like an old-school forum: it has a focus, and it has a mod team to keep the rails on the discussion and keep the threads vaguely on topic.


My point is that it's hard to define social media in a way that excludes HN but includes the services that the activist sort thinks are disrespecting the gods of the city and corrupting the youth. Laws must be rooted in conduct, not identity.

Start with the classical conditioning that's implemented in every large social media platform and go from there.

Or go in reverse: look at the research into the correlation between mental health issues and social media use, extrapolate the contributing factors, and from those extract the features.

That should give a starting point for nailing down the definition.


I'm not convinced it's that hard. I've pointed out a few ways that it differs significantly.

There are other major differences, like the lack of infinite doomscrolling, or a personalised feed that optimises for engagement.

To the wider point that maybe we should be preventing kids from accessing classes of things rather than particular services - yeah probably, but it's much easier to manage a blocklist starting with the worst offenders, and that might be a good enough start down the path of harm reduction.


Let us assume that:

    1. On average, it is a net negative for under-15s to participate in Instagram, TikTok, Xitter, YouTube, Snapchat, etc.
    2. On average, it is a net positive for under-15s to participate in HN, old-school forums, and the like.
    3. It is not possible to legally differentiate the services referred to in points 1 and 2, so a ban must be all-or-nothing.

Under those assumptions, the question becomes whether the overall net positive of allowing under-15s to participate in HN and old-school forums outweighs the overall net negative of allowing them to participate in Instagram, TikTok, etc.

Given the relative number of under-15s participating in each category of services, do you think that is the case?


There are supposedly studies linking social media to various negative consequences. For example, according to the Mayo Clinic, social media can:

- Distract from homework, exercise and family activities.

- Disrupt sleep.

- Lead to information that is biased or not correct.

... Ah, just like that public health menace, the public library.

I don't believe "social media" is actually injurious to youths. The studies saying it is, ISTM, are all confounded, of poor quality, and ride on publication bias. And yeah, it's remarkable that so many people on this very thread, who grew up on the Internet and gained lifelong technical skills, want to pull the ladder up after them on the grounds of unproven and implausible harms.

In reality, the drive for social media age limits is the latest in a long line of moral panics. In the 80s, it was D&D corrupting innocent souls. Now, it's feed ranking? I don't believe any of it.

Looking for reason at the root of a moral panic usually leads only to despair. These things just have to be endured.


Gambling doesn’t cause physical harm either, but it’s also banned for children. It’s similar to social media in that both are made to be as addictive as possible and they exploit human psychology.

I think it’s telling that many people here who work in tech don’t want social media for their kids, but there are no comic book readers who want to ban comics for their kids.


That's a distinction without a difference. Microsoft should structure Windows such that they're unable to comply with such an order, however legal. There are practical cryptographic ways to do it: Microsoft just doesn't want to. Shame on them.

It is pretty uncontroversial that the owner, in the sense of having responsibility and ultimate control, should control the cryptographic keys. I think the disagreement here is about who owns the computer.

Exactly

What's wrong with making a Substack?

It's the same as the long Twitter posts people strung together from endless tweets a few years ago. People have a platform, so they use it. If you don't make a habit of posting long articles, why bother with a new platform when the one you have will suffice?

Making a Substack, or an account on Medium, is "yet another thing," and many simply cannot be bothered. I don't blame them.

