
The uniqueness of the situation is that OpenAI et al. pose as intelligent entities that serve you information as an authority would.

If you go digging on darkweb forums and you see user Hufflepuffed47___ talking about dosages on a website in black and neon green, it is very different from paying a monthly subscription to a company valued in the billions that serves you the same information through the same sleek channel that "helps" you with your homework and tells you about the weather. OpenAI et al. are completely uprooting the way we determine source credibility and establish trust on the web and they elected to be these "information portals".

With web search, it is very clear when we cross the boundary from the search engine to another source (or it used to be before Google and others muddied it with pre-canned answers), but in this case it is entirely erased and over time you come to trust the entity you are chatting with.

Cases like these were bound to happen and while I do not fault the technology itself, I certainly fault those that sell and profit from providing these "intelligent" entities to the general public.



Correct, the pirated music library was before they exited the closed Alpha.


No, that's what they ran on when the general public could join on a referral basis. They called that "beta".

The technology was already proven: The Pirate Bay and other torrent networks had already been a success for years. What Spotify likely aimed to show was that they could grow very fast and that their growth would make them too big to simply be shut down, the way the entertainment industry had tried to shut down TPB.

After they took in the entertainment oligarchs, they cut out the warez and substituted licensed material.


Not sure if it was called "beta" or "alpha" and "closed" is of course up to interpretation, but it was indeed by invitation. Swedish law at the time (still?) had a clause about permitting sharing copyrighted material within a limited circle, which I know Spotify engineers referred to as somewhat legitimising it. I also know for a fact that once the invite-only stage ended there was a major purge of content and I lost about half of my playlist content, which was the end of me having music "in the cloud". Still, this is nearly twenty years ago, so my memory could be foggy.


When I first started using Spotify, a lot of the tracks in my playlists had titles like "Pearl Jam - Even Flow_128_mp3_encoded_by_SHiLlaZZ".

It always made me chuckle; it looked like they had copied half of their catalogue from The Pirate Bay. It took them a few years to clean that up.


Yes, when the entertainment industry came onboard they immediately made the service much worse. I reacted the same way you did.

IIRC, 2008, a little less than twenty years.


> The technology was already proven, i.e. The Pirate Bay and other torrent networks had already been a success for years.

Spotify showed that you could have a local-like experience with something backed by the cloud. BitTorrent had never really done that. The client wasn't that good, and you couldn't double click and hear a song in two seconds.

The way you said that made me think you might be remembering when it was partially P2P. I don't remember the timeline, but it was only used to save bandwidth costs, and they eventually dropped it because network operators didn't like it and CDNs became a thing.


If you don't remember, why speculate?

Ek had been the CEO of µTorrent and they hired a person who had done research on torrent technology at KTH, the Royal Institute of Technology, to help with the implementation. It was a proven technology that required relatively small adaptations.

They moved away from this architecture after the entertainment industry got involved. Sure, it was a cost issue until this point, but it also turned into a telemetry issue afterwards.


I am somewhat cautious to comment as I know the author is way more experienced than I am and I fear that I may be missing something. However, let me try to accomplish the same with my elementary doas(1) knowledge.

Allowing mounting for a specific group is simple with doas.conf(5):

    permit :mountd cmd /sbin/mount
    permit :mountd cmd /sbin/umount
We can of course tighten it further as the author did:

    permit :mount-usb cmd /sbin/mount args /dev/sdb1
    permit :umount-usb cmd /sbin/umount args /media/usb
If we want to go more complex than specifying arguments, we could of course create a shell script and point doas at it instead of the binary.
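
For instance, something like this hypothetical wrapper (the script name, device, and mount point below are only placeholders) keeps the arguments out of the user's hands entirely:

    #!/bin/sh
    # /usr/local/sbin/mount-usb (hypothetical): wrap mount(8) so that doas(1)
    # never has to pass along user-controlled arguments
    exec /sbin/mount /dev/sdb1 /media/usb
and then point doas.conf(5) at the script instead:

    permit :mount-usb cmd /usr/local/sbin/mount-usb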

Likewise, we can do something similar for a service account:

    permit :www-deployment as www-deployment cmd /var/www/bin/build args /var/www/application
The key difference here would be that www-deployment can not delegate as easily to arbitrary users, as they would need to ask someone with root access to add additional users to the www-deployment group. But I am left wondering if this use case (if it is important enough) is not equally well served by specifying a location for non-root users to add permissions akin to what we see in doas.conf(5), but with the constraint that they of course can only allow other users to run commands with their privileges. Yes, it would "bloat" doas(1), but these code paths are not that long as long as you keep your scope constrained (doas(1) has a core of just over 500 lines and with environment handling and configuration format parsing we arrive at a final line count of just over 1,300).
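
To make that idea concrete, such a per-user delegation file could look something like this (the path and rule are purely hypothetical; this is not an existing doas(1) feature):

    # ~/.doas.conf (hypothetical), only ever consulted for commands run as its owner, alice
    permit bob as alice cmd /home/alice/bin/deploy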

At this point, the main advantage I see with capsudod is that you can more easily drop privileges and put in restrictions like pledge(2) before the binary is ever called upon by whatever user we have granted permissions, while with the doas(1) thinking above you have to run through plenty of code that could be exploited first. Still, this feels like a rather minor relative improvement to what we already have.

Am I missing something in my ignorance? Lastly, let me also say that I am sure that sudo(8) has the ability to do the same things I proposed to do with doas(1) above, but I know the latter far better.


The whole problem is mapping privilege to users and groups, so doas doesn't solve the issues explained in the article.

> The key difference here would be that www-deployment can not delegate as easily to arbitrary users, as they would need to ask someone with root access to add additional users to the www-deployment group. But I am left wondering if this use case (if it is important enough)...

Delegation is the killer feature of the object capability model. It's not just important enough, it's the most important. Keep in mind that the ACL model allows delegation, too, it's just unsafe. Users share credentials all the time. Capabilities allow delegation in a way that can be attenuated, revoked, and audited.


Firstly, thank you for engaging and trying to enlighten me.

I do understand why capability delegation is useful and I am familiar with using Unix sockets to delegate the control of daemons using socket permissions, which feels similar to what we see here with capsudod (I have not read the code sadly, too much other code to read today).

However, I am still puzzled what the advantage of having a herd of capsudod instances running is compared to, say, my proposal of allowing users to set up their own doas.conf(5)s to delegate capabilities. Yes, we still need SUID and we will need to be darn sure 1,000 or so lines are properly secured, but it is attenuable, revocable, auditable, and feels (perhaps wrongly, because I have a bias towards text files describing the state of a system?) more natural to me than putting it all into the running state of a daemon.

Is there some other strength/weakness of these approaches that I am failing to see? I am no systems programmer, but I find topics like this interesting and dream of a day when I could be one.


> However, I am still puzzled what the advantage of having a herd of capsudod instances running is compared to, say, my proposal of allowing users to set up their own doas.conf(5)s to delegate capabilities. Yes, we still need SUID and we will need to be darn sure 1,000 or so lines are properly secured, but it is attenuable, revocable, auditable, and feels (perhaps wrongly, because I have a bias towards text files describing the state of a system?) more natural to me than putting it all into the running state of a daemon.

I think two separate discussions are being mixed here. The above seems mostly concerned with the chosen interface of capsudo. Imperative vs. declarative is orthogonal to the discussion about object capabilities vs. ACLs.


Sure, but that doesn't change the fact that doas(1) is a SUID binary. Everything would be done as root, from parsing the config file to checking the rights to finally executing the command.

Here, capsudo would rely on individual Unix sockets with file access rights, so in essence it would indeed be similar to what you could do with doas, but the idea is to separate things. With doas, doas itself would check whether you have the correct group or user to run the command, while with capsudo the kernel would check it and reject you if you do not have the right.
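
To illustrate the mechanism in general (the socket path below is made up, not capsudo's actual layout), granting or revoking such a capability then comes down to ordinary file permissions on the socket, which the kernel enforces when a client connects:

    # hypothetical socket exposed by a capsudo-style daemon
    # grant the capability to group staff ...
    chown root:staff /var/run/capsudo/mount-usb.sock
    chmod 660 /var/run/capsudo/mount-usb.sock
    # ... and revoke it again later
    chmod 600 /var/run/capsudo/mount-usb.sock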


Having played the Famicom Disk System, I will also say that the load times are abysmal. I think it is Castlevania II that has an autosave function in its Japanese release, which was for the FDS, and it is so darn slow that I would recommend against playing it even if you can read Japanese.


Thank you for sharing the write up!

Not an OpenBSD expert by any means, but two small pieces of minor feedback as everything else you wrote mirrors my own setup and experience. Firstly, unless you are really strapped for space on say a 90s machine, it is generally recommended to install all the file sets as there can be interactions with ports even if one does not expect it (say ffmpeg needing X11 libraries). The general OpenBSD mindset is after all "Use the defaults" and the default is to install all the file sets. Secondly, the OpenBSD Handbook has a bit of a mixed reputation in the community from what I can tell. Unlike the FreeBSD Handbook, it is not an official document and I tend to rely on man pages, openbsd.org, misc@, and a few blogs I consider to be trustworthy instead.

As a final note, glad to see you have IPv6 up and running. I really should get around to it now that dhcp6leased has been in base for more than two releases.


I was fooled by the handbook! It sounded so official. I will add a footnote to the post.


Really? The variable name lengths? Not that the code is clearer as:

    const te = document.createElement('table');
    document.body.appendChild(te);
    [
        ['one',  'two',  'three'],
        ['four', 'five', 'six'  ],
    ].forEach((r, i) => {
        const re = te.insertRow(i);
        r.forEach((c, j) => {
            re.insertCell(j).innerText = c;
        })
    });
My personal stance on short variable names is that they are fine as long as their scope is very limited, which is the case here. Rather, the "crime" to me is the overuse of pointless variables, as the majority of them were only used once.

Disclaimer: I have not tested the code and I only write JavaScript once every few years and when I do I am unhappy about it.


This is not an improvement. Having named variables for things is good, actually. They will need to be declared again immediately once you want to modify the code. insertCell(i).innerText = c is a nonsense statement; it should be two lines for the two operations.


I disagree, but maybe it is a cultural thing for those of us that are more used to functional styles of programming? I was taught method chaining as a style by a seasoned JavaScript and Ruby programmer myself and I do not find the semantics confusing. "Create X with Y set to 17 and Z to 4711" can be either on one or three lines to me, as long as the method calls are clear and short enough.

As for variables, I (again personally) find it taxing to have many variables in scope, so I do not see their presence as a universal good. If we instead simply use expressions, then there is no need to concern ourselves with whether the variable will come into play later on. Thus, I think it increases clarity, and I favour that over the ease-of-future-modification argument you propose (heck, I would argue that you get better diffs even if you force the variable declaration into a future modification).

As for bikeshedding this piece of code further, if I steal some ideas from chrismorgan [1] and embedding-shape [2] who appear to be way more seasoned JavaScript programmers than me:

    const $t = document.createElement('table');
    for (const r of
            [
                ['one',  'two',  'three'],
                ['four', 'five', 'six'  ],
            ]) {
        const $r = $t.insertRow();
        for (const e of r)
            $r.insertCell().innerText = e;
    };
    document.body.append($t);
This is now rather minimal and the logic is easy (for me) to follow as the scopes are minimal and namespace uncluttered. It was a rather fun little exercise for a language I am not overly familiar with and I learned a few tricks and perspectives.

[1]: https://news.ycombinator.com/item?id=45782938

[2]: https://news.ycombinator.com/item?id=45781591


Looks like your code inserts a new row for every cell.


Cheers! Fixed.


It really is and not just that. WireGuard being natively supported makes configuring your peers as easy as dumping the last of these example lines into /etc/hostname.wg[0-9]:

https://man.openbsd.org/wg#EXAMPLES
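
Roughly, /etc/hostname.wg0 could then look something like this (the key, peer, addresses, and port below are just placeholders to adapt from the wg(4) examples):

    wgkey <base64 private key> wgport 51820
    wgpeer <peer public key> wgendpoint 192.0.2.1 51820 wgaip 10.9.9.0/24
    inet 10.9.9.2/24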

Simple, text-file based configuration for everything in the extensive base system and no drama between upgrades is really what makes you a happy OpenBSD user.


It feels like Alpine tries to imitate the OpenBSD installer somewhat as well, but it is just not the same as it forces you to make choices between SSH servers, NTP daemons, etc. So, it still very much feels like the Linux "pick and mix box". What makes OpenBSD so special is that there is one choice, it tends to be a good choice, and it is the only choice they will support and therefore they will put in the hours to make it solid.


Yes, and OpenBSD being a fork of NetBSD still carries some of that spirit.


And both of those have very minimal ports compared to Linux, notably on modern ARM/RISC-V. NetBSD has really fallen behind.

Still better than FreeBSD's none.


I mean, are we surprised? Linux has on the order of a million times more users and funds (probably not developers though, but who knows). Thus, if a port has any financial viability, I certainly expect Linux to "move" first. Rather, I am impressed that OpenBSD and NetBSD are keeping up as well as they do.


NetBSD and OpenBSD support “old” hardware notably longer than Linux does, though. OpenBSD dropping the VAX was not that long ago.


Yeah I suppose.

But OpenBSD forked from NetBSD like, what, 30 years ago?

