In high-scale stateless app services this approach (hedged requests) is typically used to lower tail latency: two identical service instances are sent the same request, and whichever returns faster “wins,” which protects you from a bad instance, or even one that just happens to be heavily loaded.
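For concreteness, here's a minimal sketch of the pattern in Go. The replica URLs and the 2-second cap are made up for illustration, not anyone's actual service: fan the same request out to each replica, take the first success, and cancel the stragglers.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

// hedgedGet sends the same GET to every replica and returns the first
// successful response body, cancelling the stragglers via the context.
func hedgedGet(ctx context.Context, replicas []string) (string, error) {
	ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
	defer cancel()

	type result struct {
		body string
		err  error
	}
	// Buffered so losing goroutines can always send and exit.
	ch := make(chan result, len(replicas))

	for _, url := range replicas {
		go func(url string) {
			req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
			if err != nil {
				ch <- result{err: err}
				return
			}
			resp, err := http.DefaultClient.Do(req)
			if err != nil {
				ch <- result{err: err}
				return
			}
			defer resp.Body.Close()
			b, err := io.ReadAll(resp.Body)
			ch <- result{body: string(b), err: err}
		}(url)
	}

	var lastErr error
	for range replicas {
		r := <-ch
		if r.err == nil {
			return r.body, nil // first winner; deferred cancel() aborts the rest
		}
		lastErr = r.err
	}
	return "", lastErr // every replica failed
}

func main() {
	// Hypothetical replica endpoints, purely for illustration.
	body, err := hedgedGet(context.Background(), []string{
		"http://replica-a.internal/quote",
		"http://replica-b.internal/quote",
	})
	if err != nil {
		fmt.Println("hedged request failed:", err)
		return
	}
	fmt.Println(body)
}
```

The important bit is cancelling the loser: without it, hedging roughly doubles your backend load for no benefit once a winner has answered.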
I'm not sure I follow. In this instance we're talking about multiple backend matching engines... correct? By definition they must be kept in sync, or at least have complete, omniscient knowledge of every other backend's book state.
I wasn’t in C++ style land, but my recollection is that distilled experience would be backed up by extensive mailing list discussions. In cases of contention, the discussion might extend into case studies or other quantitative techniques atop google3. It’s difficult for me personally to describe the outsized impact of a super-resourced monorepo for this kind of thing. Also, as GP mentioned, it was sometimes possible to automate changes to comply with updated guidelines.
My understanding is that the hardware is always installed, but the dealer will not fill the liquid reservoirs unless the customer specifically requests (and pays for) it.
I think six dev teams is small in terms of kube. I wouldn’t be surprised if that’s close to the perfect size to move onto kube and create and adopt a standard set of platform idioms.
At orgs significantly larger than that, the kube team has to aggressively spin out platform functions that enable further layering, or it risks getting overwhelmed trying to support and configure kube features to cover diverse team needs (storage software doesn’t have the same needs or concerns as middleware or the frontend, say). This incubator model isn’t easy in practice. Adopting kube at that scale is very challenging because it requires the kube team to spin up and spin out sub-teams at a very high rate; otherwise the migration slows to a crawl or fails outright, and teams that need to offboard their previous platform end up purchasing something off the shelf, e.g. from AWS.
> I think six dev teams is small in terms of kube.
I don't doubt it. By "larger" I just meant larger than something like "running my servers on FreeBSD/OpenBSD and jails or VMM respectively" above, which sounds like a one-person operation.
> I wouldn’t be surprised if that’s close to the perfect size to move onto kube and create and adopt a standard set of platform idioms.
My previous position was at a company about 5x the size, with many loosely related enterprise and government products sold into markets in at least 20 countries. They also used k8s quite effectively.
But I think the key is that you mentioned "the kube team". Having a single team responsible for everything k8s-related at a large org is likely to make it difficult to be effective.
For supporting individual dev teams I think you need people on those teams who have at least some of the necessary knowledge, so they're not entirely dependent on a central team. Even a watered-down version of what devops was supposed to be about is better than nothing.
Let’s start with cars. I’ve accepted that I’ll never buy a new car again, which is a real shame, as EVs have a ton of potential. I’d be so happy to be wrong, but the trend seems inescapable right now.