I have no personal experience with SRE agents, but I used Codex recently when trying to root-cause an incident after we'd put a stopgap in place. It did the last-mile debugging of looking through the code for me: once I had assembled a set of facts and log lines, it accurately pointed me to some code I had ignored in my mental model because it was so trivial I didn't think it could be an issue.
That experience made me think we're getting close to SRE agents being a thing.
And as the LLM makers like to reiterate, the underlying models will get better.
Which is to say, I think everyone should have some humility here because how useful the systems end up being is very uncertain. This of course applies just as much to execs who are ingesting the AI hype too.
So if you subtract Linux, LLVM, WebKit, and Java, what is left of Google? Absolutely nothing. Well, a mostly empty, dysfunctional monorepo lacking its main dependencies.
Claude is extremely verbose when it generates code, but this is something that should take a practicing software engineer an hour or so to write, with a lot less code than Claude produces.
I like all the LLM coding tools, they're constantly getting better, but I remain convinced that all the people claiming massive productivity improvements are just not good software engineers.
I think the tools are finally at the point where they are generally a help, rather than a net waste of time for good engineers, but it's still marginal atm.
They've changed the laws recently, which makes it far easier. I believe you'd still need to be accredited, but for most of HN that's a low bar. For OpenAI specifically, they've allowed employees to participate in the funding rounds, and they did a separate tender offer with SoftBank to provide liquidity to early employees as well.
You lose atomic deployment and have a distributed system the moment you ship Javascript to a browser.
Hell, you lose "atomic" assets the moment you serve HTML that has URLs in it.
Consider switching from <img src=kitty.jpg> to <img src=puppy.jpg>. If, for example, you delete kitty.jpg from the server, upload puppy.jpg, and then change the HTML, a client can be holding the URL to kitty.jpg while kitty.jpg is already gone. Generally, anything you published needs to stay alive long enough to "flush out the stragglers".
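A common way around this (just a sketch, not something from the thread) is content-hashed asset filenames: old HTML keeps referencing the old hash, which stays on the server until the stragglers are flushed, while new HTML references the new one.

```python
import hashlib

def hashed_name(filename: str, content: bytes) -> str:
    """Return a content-addressed filename, e.g. kitty.a1b2c3d4.jpg.

    Because the name is derived from the content, deploying puppy.jpg
    never overwrites or deletes the file old HTML still points at;
    you garbage-collect old hashes only once no live page uses them.
    """
    stem, _, ext = filename.rpartition(".")
    digest = hashlib.sha256(content).hexdigest()[:8]
    return f"{stem}.{digest}.{ext}"
```

Most asset pipelines (webpack, Rails sprockets, etc.) do a variant of this for you.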
They just refresh the page, it's not a big deal. It'll happen on form submission or any navigation anyway. Some people might be caught in a weird invalid state for, like, a couple minutes absolute maximum.
Right, there are levels of solutions. You can't sit here and say that a few seconds of invalid state on the front-end, only for mayyyyybe 0.01% of your users, is enough to justify a sprawling distributed system because "well deployments aren't atomic anyway!1!".
IMO, monorepos are much easier to handle. Monoliths are also easier to handle. A monorepo monolith is pretty much as good as it gets for a web application. Doing anything else will only make your life harder, for benefits that are so small and so rare that nobody cares.
Monorepo vs. not is not the relevant criterion. The difference is simply whether you plan your rollout to have no (or minimal) downtime, or not. Consider an SQL schema migration to add a non-NULL column on a system that does continuous inserts.
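The usual zero-downtime answer here is the expand/contract pattern; a rough sqlite3 sketch (the exact DDL varies by engine, and the table/column names are made up):

```python
import sqlite3

# Expand/contract: add a NOT NULL column while writers keep inserting.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("INSERT INTO orders (total) VALUES (9.99)")

# Step 1 (expand): add the column as nullable so old-version writers
# that don't know about it can keep inserting.
conn.execute("ALTER TABLE orders ADD COLUMN currency TEXT")
conn.execute("INSERT INTO orders (total) VALUES (5.00)")  # old writer

# Step 2: backfill existing rows (in batches, on a real system).
conn.execute("UPDATE orders SET currency = 'USD' WHERE currency IS NULL")

# Step 3 (contract): only after every writer populates the column do
# you enforce NOT NULL (engine-specific; SQLite needs a table rebuild).
rows = conn.execute("SELECT currency FROM orders ORDER BY id").fetchall()
```

The point is that the rollout spans multiple deploys by design; no amount of repo layout makes it a single atomic step.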
Again, that's trivial if you use active/passive (blue-green) servers. No downtime, and to your users, instant deployment across the entire application.
If you have a bajillion services and they're all doing their own thing with their own DB and you have to reconcile version across all of them and you don't have active/passive deployments, yes that will be a huge pain in the ass.
So just don't do that. There, problem solved. People need to stop doing micro services or even medium sized services. Make it one big ole monolith, maybe 2 monoliths for long running tasks.
Magical thinking about monorepos isn't going to make SQL migrations with backfill instantaneous and occur simultaneously with the downtime you have while you switch software versions. You're just not familiar with the topic, I guess. That's okay. Please just don't claim the problem doesn't exist.
And yes, it's often okay to ignore the problem for small sites that can tolerate the downtime.
I think grit and hard work will still be valuable attributes, even if AI starts producing perfect software tomorrow.
The world also just doesn't change that quickly.
Even with the most rosy projections, there is no way that software engineers are unnecessary in 2-3 years. Go have a look at METR's projections, even rosy projections aren't getting us to software that can replace engineers in a few years, let alone having that change ripple through the economy.
And nobody actually knows how far AI progress will go on the current trajectory. Moore's law was a steady march for a long time, until it wasn't.
Can you say more about why, mechanically, she didn't get anything?
If you exercise your options you have real stock in the company, so I don't see how you can get shafted here.
Did investors do some sort of dividend cash out before employees were able to exercise their options? (Obviously shady, but more about investors/leadership being unethical than the deal structure).
Would love to know more about how this played out.
Multiple share classes are the norm even before the new acquisition types we see here. It’s extremely common in an acquisition for employee shares to be worth nothing while investor and founder shares are paid out.
But these new “acquisitions” aren’t even that. They are not acquisitions at all. They just hire the talent directly with perhaps an ip rights agreement thrown in as a fig leaf.
I'm well aware of dual class shares, but preferences are typically 1x, and none of the deals were for less than the amount raised, so they're not relevant here.
The fact that these are not really acquisitions doesn't change the fact that Groq the entity now has $20b.
Money can't just "go" somewhere, it needs a reason first, at least for bookkeeping. I mean, VCs can get their invested capital back, but on top of that, how would that money be transferred? $20B is a lot, and the VCs are surely not just going to write an invoice for $18B of consulting services.
Hey, husband of that friend here,
The bought company had huge debts to the investors (it was a startup, not tiny but a small one, that ran for several years), and after those were paid out of the purchase deal, the employees were left with shares that were worth $0.
(might be that the founders also grabbed some money out of that purchase, no one knows tho)
The employees of that bought company were given an incentive by the buying company to stay for a while and help tear down and integrate their product into the buying company.
One could say shady, I'd say that it was just a bad deal.
It's definitely true that common stock gets $0 if the acquisition price is <= (sum raised + debt).
That sort of sounds like the startup wasn't doing well, and the acquisition wasn't for a lot of money (relative to amount raised), which seems very different from these Groq/Windsurf situations.
There have been at least a half dozen of these deals in the past 1-2 years including Google “licensing” CharacterAI to pull their founders back into Google as valued employees.
In the deal mentioned above: my guess is that preferred class shareholders and common shares got paid out but the common shareholders had such a low payout that it rounded down to zero for most employees.
This can happen even in a regular acquisition because of the equity capital stack of who gets paid first. Investors typically require a 1x liquidation preference (they get their investment back first no matter what).
Liquidation preferences are typically 1x these days, so they only matter when companies are sold at fire sale prices where basically nobody is making any money.
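Back-of-the-envelope illustration (all numbers hypothetical) of why a 1x non-participating preference only bites in a fire sale:

```python
def waterfall(sale_price: float, preference: float, pref_ownership: float):
    """Split proceeds under a 1x non-participating liquidation preference.

    Preferred holders take the larger of their 1x preference or their
    pro-rata share as converted common, capped at the sale price;
    common stock gets whatever is left.
    """
    as_common = sale_price * pref_ownership
    to_preferred = min(sale_price, max(preference, as_common))
    to_common = sale_price - to_preferred
    return to_preferred, to_common

# Fire sale: $120M raised, sold for $100M -> common gets $0.
# Healthy exit: sold for $500M, 40% preferred ownership -> preferred
# converts ($200M > $120M preference) and common splits the rest.
```

So if the headline number genuinely lands on the company at well above the amount raised, a plain 1x preference can't zero out common by itself.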
The deals are all weird so it's hard to really know what's happening, but if Groq gets $20b, I don't see how common stock holders don't get paid.
Special dividend to priority class and retain the rest to grow the remaining sham company?
I've seen some discussion that paying out normal employees might make it look more like an acquisition on paper, which they may want to avoid for FTC reasons. I've also seen some discussion that this is a quid pro quo with the Trump family to get Nvidia back into China (Trump Jr. bought in at the September financing round).
Lots of speculation in general, including about why Nvidia chose to spend $20B on this.
I think this is a not insane prediction, but much like truck driving and radiology the timeline is likely not that short.
Waymo has been about to replace the need for human drivers for more than a decade and is just starting to get there in some places, but has had basically no impact on demand yet, and that is a task with much less skill expression.
It's good that folks working on browsers are working on making this easier, but I don't think you can really rely on this for GET requests.
It's often easier to smuggle a same-origin request than to steal a CSRF token, so you're widening the set of things you're vulnerable to by hoping that this can protect state mutating GETs.
The bugs mentioned in the GitHub issue are some of the sorts of issues that will hit you, but also common things like open redirects turn into a real problem.
Not that state mutating GETs are a common pattern, but it is encoded as a test case in the blog post's web framework.
Hi, blog post author here. With regard to state-changing GET requests, I do not recommend their use and I agree that they create some problems for CSRF protection, but you are correct that I did include tests that verify that they can be enabled in my Microdot web framework.
Please correct me if I have missed anything, but I designed this feature in my framework so that the default action when evaluating CSRF-related headers is to block, and I then check all the conditions that warrant access. The idea is that for any unexpected condition I'm not currently considering, the request gets blocked, which ensures security isn't put at risk.
I expect there are some situations in which state-changing GET requests are not going to be allowed when they should be. I don't think the reverse situation is possible, though, which is what I intended with my security-first design. I can always revisit the logic and add more conditions around state-changing GET requests if I have to, but as you say, these are uncommon, so maybe this is fine as it is.
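For anyone following along, the default-deny shape described here looks roughly like this (my own sketch of the general pattern, not Microdot's actual code):

```python
def allow_request(method: str, headers: dict) -> bool:
    """Default-deny sketch of Fetch-Metadata-based CSRF filtering.

    Start from 'block', then allow only conditions known to be safe;
    anything unexpected falls through to the final block.
    """
    site = headers.get("Sec-Fetch-Site")
    if site is None:
        # Old browser or non-browser client: no Fetch Metadata at all.
        # Conservatively allow only methods meant to be safe; rely on
        # other defenses (tokens) for state changes.
        return method in ("GET", "HEAD", "OPTIONS")
    if site in ("same-origin", "none"):
        # Same origin, or user-initiated (address bar, bookmark).
        return True
    if site == "same-site":
        return method in ("GET", "HEAD", "OPTIONS")
    if site == "cross-site":
        # Permit ordinary cross-site top-level navigations only.
        return headers.get("Sec-Fetch-Mode") == "navigate" and method == "GET"
    # Unknown header value: block.
    return False
```

As the parent comment notes, the `same-site`/`cross-site` GET allowances are exactly where state-changing GETs would become dangerous.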
I was involved in the effort to add/upgrade Fetch Metadata in the OWASP cheat sheet. We had discussed GET requests, so if you find the guidance lacking about it, please let us know how.
Likewise, if you could elaborate on the open redirects issue, that would be great.
I haven't actually dug into it, but I would assume that open redirects would strip a Sec-Fetch-Site: cross-site header and replace it with none or same-site or something. So would things like allowing users to specify image URLs, etc. And if you rely on Sec-Fetch-Site for security on GETs, these turn into actual vulnerabilities.
I think these sorts of minor web app issues are common enough that state changing GETs should be explicitly discouraged if you are relying on Sec-Fetch-Site.
I generally recommend the book Founding Sales (available for free online), but it's targeted at SaaS founders.
But you're actually doing something even more common: running a consulting business, and there's plenty of content on that for just that reason, so I would go find content on how to scale a consulting business, e.g. this seems like the start of a thread to pull on https://training.kalzumeus.com/newsletters/archive/consultin...
I wanted to say your insight about a consulting business is spot on.
I also recommend Founding Sales and think it would be worth the OP skimming.
Also, search for Steli Efti (founder of a CRM called Close), who has some great content on outbound sales. I thought he did a session for Y Combinator's Startup School but couldn't find it just now. He has lots of great content and a bit of a hustle mentality.