One of the primary justifications given for the takeover was to secure the gems service and offer trustworthy stewardship. Reading this, I don't get the sense that the new maintainers are really prepared to deliver on either.
That said, I really don't like the hand-waving about the HTTP log thing in this post. Yeah, sure, company names aren't as sensitive/radioactive as an SSN or an email, but selling usage data isn't exactly a noble endeavor.
I don't think anyone comes out of this looking good. Some are worse than others, sure, but this is just a mess from top to bottom.
Mh, one of our security admins recently said something that's very fitting to the discussion: If you are removing an employee from a company, and you have to rely on their personal integrity instead of technical controls to avoid problems, you are doing very basic access control wrong. And if you're doing absolute fundamentals like that wrong, how much is your entire information security worth then?
And reading this, and the other disclosure from Ruby Central, they seem to be handling this maintainer/employee offboarding woefully incompetently at really, really basic levels. Taking control of secret management and doing a general rotation of management secrets isn't some obscure step; it's the obvious first one.
My primary takeaway from all of this is that I do not want to be depending on infrastructure run by Ruby Central. Maybe it’ll turn out that the previous status quo was even worse and we just got incredibly lucky that it never exploded, but the people now running things have consistently failed to inspire confidence.
Plus, it's not a good look for RubyCentral to try to smear Andre for it when it is perfectly acceptable within their own Privacy Policy[1]:
> We may share aggregate or de-identified information with third parties for research, marketing, analytics, and other purposes, provided such information does not identify a particular individual.
No, but he was seeking it, per the email in the RubyCentral article and directly from TFA:
> I have no interest in any PII, commercially or otherwise. As my private email published by Ruby Central demonstrates, my entire proposal was based solely on company-level information, with no information about individuals included in any way.
Here Andre is downplaying his ask for the logs. Even if Andre didn't get them, the logs were desired. Had Ruby Central acquiesced, the logs would've been parsed and sold. That might not be an issue for you, but I am frankly not interested in having any data shared or sold like this.
I don't even understand why RubyCentral included the proposal to use the log data in the post about a security incident. Whatever we may think of the proposal, the only purpose of including it in this place is to smear Andre.
The incident is clear cut and makes RubyCentral staff look incompetent. They cut off access to 1password and did not even consider that someone may have a copy of the credentials somewhere? As in "maybe in their head"? Rotating shared credentials in such a situation is security 101 and they failed. And when Andre notifies them that they failed, instead of quietly saying "Thanks, we've fixed that", they make it a security incident and include - without any further context - a single email from something that must have been a longer conversation.
Without more details, it's hard for me to nail down the exact motivations at play here.
My current read is that RC majorly botched the takeover, demonstrated gaps in security know-how, and then retroactively framed everything as a problem with André. The details of the logs are mostly immaterial to the rest of the claims, but are still suspicious enough to spice up the announcement. I believe this because, at the moment, I don't see anything in the original RC post that wasn't satisfactorily explained by this post.
Yes, that’s what they do. But I still fail to grasp how that helps them - they still look pretty bad. Worse, actually - if you want to frame Andre as the bad actor, then my next question is “You knew that a bad actor had previous access, why in the name of $deity did you not double check that they have no access?”
> Had Ruby Central acquiesced, the logs would've been parsed and sold.
Which the privacy policy of RubyCentral allows, so I don't get why they suddenly have ethical problems with that, apart of course from throwing shade on Andre. Parsing logs for company access is what basically everyone does, and frankly, I don't see the problem with getting leads from data like this. That has nothing to do with "selling PII".
Yes. While I personally don’t like this practice, it is so widespread and there is so much demand for it that it’s not unusual, especially given that their privacy policy makes explicit mention of it.
The best argument you could make is that gem owners should be able to see “who” downloads their gems. If they were self-hosting the packages, they would have that data. Of course, charging for it is the ookier part.
Say you provide a service for free and are desperate for corporate sponsorship. Who wouldn't look at what companies are using your service and contact them with "Hey, I'm seeing you are using our service, can we have a chat"? You basically have no other means of contacting companies nowadays without getting into trouble for cold-calling/spamming.
Honestly, I can't really see what you are reading between the lines here.
Are you by any chance involved with RubyGems / RubyCentral? In my case, I'm just a bystander and not even a Ruby developer (but I worked in a Ruby company in the past so I know the ecosystem).
EDIT: oh, you might be referring to the RubyCentral statement. I didn't read the original security incident text, so my bad here. Sorry.
I am definitely not affiliated with either; if anything, my opinion of the new maintainers is considerably more negative (both for the method of takeover and their handling of this incident). Quite frankly, I don't know why you would even ask if I was.
I do not feel like I'm reading between any lines here-- Ruby Central directly showed that André Arko asked for the data to sell in order to cover the on-call fees. Yes, they have reason to smear him and shouldn't be trusted, but André confirms that he asked for the logs. None of that is up for debate, these are just the facts!
What we can argue about is 1) whether this is meaningfully different than what RC does already as noted by their ToS and 2) whether or not company names derived from the HTTP logs are sensitive or whatever. It is my position that neither André nor RC should be selling this sort of usage data, regardless of motivation. Personally I think the monetization of such data is bad in general, but I understand not everyone feels the same. It just gives me the ick.
EDIT: Immediately after submitting this, I saw that you issued a correction. Bad timing on my part I suppose!
They were all spitballing ideas about how to recover from the DHH-driven dropping of corporate sponsorship dollars, and how to keep the support lights on.
I think covering all the 2nd-level support costs in return for the right - one that Ruby Central's own T&Cs grant - to monetise company usage stats is a reasonable offer.
The "other side's" alternative was to steal ownership and control of a whole bunch of volunteer gem authors work at the behest of a different corporate sponsor who was clearly demonstrating they wanted to be able to not only throw their weight around and force policies and priorities on RubyGems/RubyCentral, but also to make it personal by explicitly calling for long term contributors to be removed entirely on a whim.
This is interesting, because I would have thought that, after all the information revealed, at least both sides could be blamed and that selling usage stats is a no-no.
This is such a strange take. Ruby Central, for better or worse, is the steward of Rubygems/Bundler. If Mike Perham wants to withdraw his funding because he thinks DHH is a white supremacist, then that's fine. But DHH didn't do that, Perham did.
Arko is not a completely innocent, non-self-interested character here. He announced a project to end-run the existing rubygems, bundler, etc. infrastructure before all this, in the name of "better tooling", but his tooling is solely owned by him and a handful of people who really, really don't like DHH. Controlling this aspect of the Ruby toolchain ecosystem is in their own self-interest and overlaps with their deep disdain for the politics and corporate nature of the existing stewards of the Ruby toolchain ecosystem.

Maybe their approach and stewardship of this fork of the toolchain is more just, secure and equitable, but make no mistake -- they are fighting the same war that DHH and Shopify are, which is who controls the keys to the toolchain. Do you think if Arko, Perham, et al. had control they would somehow be completely neutral, apolitical stewards of the ecosystem? No! They have made it clear with their money and machinations that they do not want to operate in the same ecosystem as DHH, and their politics and ethics are intertwined with their relationship to the Ruby community. They are no different than him.
Meanwhile those of us who just want stability are stuck between two factions who claim righteousness and ownership. I wish they all could be deposed and some more mature non-individual foundation could take over.
I blame DHH for all of this. He needs to step up, walk his words back and mend the damage to the Ruby community he has done. Including chipping in with the funding he cost Rubygems.
Everyone is responsible for their own actions and DHH hasn't made anybody do anything. The reactions to his statements, whether you agree with what he said or not, are entirely voluntary.
What it does reveal is the fragility of a community that can seemingly be disrupted because of a single controversial blog post from a guy known to be controversial. This has counter-intuitively elevated DHH's position to that of a lynchpin, accentuating his importance as opposed to pressing him into obscurity.
I personally found DHH's take reprehensible and whatever respect I had for the man has all but vanished, but the Ruby community really does like to throw the baby out with the bathwater sometimes.
It wasn't DHH's latest awful blog post that made Mike Perham pull Sidekiq's support. It was because Ruby Central invited him back to the last Railsconf, after having kicked him out of Railsconf 2 years prior for his awful blog posts.
So, let me get this straight: you blame Sidekiq (and others!) for pulling their sponsorship, thus throwing out the baby (rubygems.org) with the bathwater (the reputational damage they'd get from being associated with Ruby Central and DHH)?
Notably I didn't use the word 'blame' but correctly assigned accountability to the people who made the decisions they did, for whatever reason they had. The parenthetical examples are yours alone, not mine.
Beyond that, yes...the Ruby community is dramatic and this is not the first time a furore has been made over some inter-community conflict with a bunch of reactionary stuff kicking off.
Because Threads is Meta's attempt at bullshitting Mastodon users into welcoming a wolf among the herd. Search for "Fedipact": Meta is de facto cut off from many Mastodon instances.
Except the largest Mastodon instance, mastodon.social, does federate with Threads. And I'm not even sure the list you provided covers most of the top instances.
It really feels like an "eating your cake and having it too" kinda situation: you get the engagement and interaction with millions of Threads users but you don't have to count them in your decentralization metrics.
I had tried to run BG3 on my Steam Deck a couple months back. It ran... okay. Lots of hitches, and I had to tune things way way way down, but it was somewhat playable.
I'm very grateful that they took the time to build a native Steam Deck release for the game, not really something I had ever expected. Hopefully with this I can actually jump in and enjoy the game!
No offense, but some people's requirements are really, really low. I played God of War on Steam Deck and it was not a good experience; it was at the bottom of 'okay', and only because at that moment I wasn't at home to play on better hardware.
This is why I don't believe it when people say a game runs great until I've tried it myself.
I recently started it on Deck. At first I thought it was ok, perhaps a bit blurry and hard to read. Then I put it on the TV and oh my when those pixels came at me! I don't consider myself a hifi person, I really don't care much about such things. But that pixel mush was borderline unplayable! And I couldn't up the quality without making the game run unbearably slow. I don't understand why everyone is saying it works great or even fine on SD. Perhaps others don't really use an external screen for it? But now I can't get comfortable looking at it on the small screen either...
> No offense, but some people's requirements are really, really low.
I think you kinda hit the nail on the head, but I believe there is an extra dimension to this: desire.
For BG3, it looked fun and I had good memories of BG2 so I was interested in playing it. After tuning the settings a bunch and not being able to get a consistent framerate / not have micro-freezing, I just said "oh well, I'll play it on some other platform in the future." I cared about BG3, but not that much.
This is in contrast to Elden Ring Nightreign, which also had issues. I was able to get it to a somewhat stable 30FPS and celebrated that success before dumping 100+ hours into the game. Why? Well, because I love FromSoft games! I really really really wanted to play the game and was willing to put up with a somewhat subpar experience in order to get it. BG3, among other games, is just not that exciting for me personally so my tolerance of technical hitches is very different.
... which brings us right back to this native release. Hopefully the improvements we see are enough to get me over that "hill" so I can actually enjoy the game. I have the update queued on my Deck now so I can try it out after work.
> it has obvious limitations (and generally it can be called unsound, especially around thread locals)
Is this really better than what we have now? I don't think async is perfect, but I can see what tradeoffs they are currently making and how they plan to address most if not all of them. "General" unsoundness seems like a rather large downside.
> In future I plan to create a custom "green-thread" fork of `std` to ease limitations a bit
Can you go more in-depth into these limitations and which would be alleviated by having first class support for your approach in the compiler/std?
Depends on the metric you use. Memory-wise it's a bit less efficient (our tasks usually are quite big, so the relative overhead is small in our case); runtime-wise it should be on par or slightly ahead. From the source code perspective, in my opinion, it's much better. We don't have the async/await noise everywhere, and after development of the `std` fork we will get async in most dependencies as well for "free" (we still would need to inspect the code to see that they do not use blocking `libc` calls, for example). I always found it amusing that people use "sync" `log`-based logging in their async projects; we will not have that problem. The approach also allows migration of tasks across cores even if you keep an `Rc` across yield points. And of course we do not need to duplicate traits with their async counterparts, and `Drop` implementations with async operations work properly out of the box.
>Can you go more in-depth into these limitations and which would be alleviated by having first class support for your approach in the compiler/std?
The most obvious example is thread locals. Right now we have to ensure that code does not wait on completion while holding a thread-local reference (we allow migration of tasks across workers/cores by default). We ban use of thread locals in our code and assume that dependencies are unable to yield into our executor. With a forked `std` we can replace the `thread_local!` macro with a task-local implementation, which would resolve this issue. (A small sketch of this hazard follows below.)
Another source of potential unsoundness is reuse of parent task stack for sub-task stacks in our implementation of `select!`/`join!` (we have separate variants which allocate full stacks for sub-tasks which are used for "fat" sub-tasks). Right now we have to provide stack size for sub-tasks manually and check that the value is correct using external tools (we use raw syscalls for interacting with io-uring and forbid external shared library calls inside sub-tasks). This could be resolved with the aforementioned special async ABI and tracking of maximum stack usage bound.
Finally, our implementation may not work out of the box on Windows (I've read it has protections against the kind of stack-pointer manipulation we rely on), but that's not a problem for us since we target only modern Linux.
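To make the thread-local hazard from the first point concrete, here is a minimal sketch (mine, not their code) of the pattern being described: a thread-local borrow held across a would-be suspension point in a work-stealing, stackful executor. The `executor::yield_now()` call is a hypothetical placeholder and is commented out, so the snippet compiles and runs as plain single-threaded Rust:

    use std::cell::RefCell;

    thread_local! {
        // Per-thread scratch buffer, declared with plain `thread_local!`.
        static BUF: RefCell<Vec<u8>> = RefCell::new(Vec::new());
    }

    fn risky() {
        BUF.with(|buf| {
            let mut b = buf.borrow_mut();
            b.push(1);
            // If a stackful task suspended here (say, waiting on an io_uring
            // completion) and was later resumed on a different worker thread,
            // `b` would still point into the original thread's BUF while the
            // task runs elsewhere -- the unsoundness described above.
            // executor::yield_now(); // hypothetical suspension point, not a real API
            b.push(2);
        });
    }

    fn main() {
        risky();
        BUF.with(|buf| println!("{:?}", buf.borrow())); // prints [1, 2]
    }

A task-local replacement for `thread_local!` (storage keyed by task rather than by OS thread) sidesteps this, which is presumably what the forked `std` would provide.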
If you use a custom libc and dynamic linker, you can very easily customize thread locals to work the way you want without forking the standard library.
Part of me also wonders if people may agree that it's better simply because they don't actually have to do the summarization anymore. Even if it is worse by some %, that is an annoying task you are no longer responsible for; if anything goes wrong down the line, "ah, the AI must've screwed up" is your way out.
I’m inclined to believe that call center employees don’t have a lot of incentive to do a good job/care, so a lossy AI could quite plausibly be higher quality than a human
For many years now, every time I have to talk with someone on a call centre there has been a survey at the end with at least two questions:
1. Would you recommend us?
2. Was the agent helpful?
I have a friend who used to work at a call centre and would routinely get the lowest marks on the first item and the highest on the second. I do that when the company has been shitty but I understand the person on the line really made an effort to help.
Obviously, those ratings go back to the supervisor and matter for your performance reviews, which can make all the difference between getting a raise or being fired. If anything, call centre employees have a lot of incentive to do a good job if they have any intention of keeping it, because everything they do with a customer is recorded and scrutinised.
Fair point, though I think “did I accurately summarize a conversation” is much harder to check/get away with vs “did I piss off the person on the other end”
Also, it should be easy to correct obvious mistakes in less convoluted discussions. And a support call is probably less complex than, say, a group meeting in many respects, and likely has a larger margin of acceptable error.
Amusingly, selecting Bay Ridge in Brooklyn also seems to select Westerleigh in Staten Island; I know Bay Ridge shares a congressional district with Staten Island, but I assure you we're still a part of Brooklyn.