And the muscles. You can’t fight or flight if you have to ask the liver to deliver glycogen. That’s how anaerobic exercise works: you have the fuel but not enough oxygen to burn it, so you burn it fuel-rich and oxidizer-poor.
The grandparent means something a bit different. Muscles can use glucose without _oxygen_ to get short bursts of energy quickly by rearranging glucose molecules (indirectly) into lactic acid.
As a customer or a vendor, being able to see any company's health like this must be wonderful if you're evaluating whether you want to enter a relationship with them. More of the world should do this.
Limited liability companies have to submit their accounts annually. Most small business accounts aren't audited, so it's self-reported at that level, but still useful to check scale, cash crunch, etc.
16 bit programming kinda sucked. I caught the tail end of it but my first project was using Win32s so I just had to cherry-pick what I wanted to work on to avoid having to learn it at all. I was fortunate that a Hype Train with a particularly long track was about to leave the station and it was 32 bit. But everyone I worked with or around would wax poetic about what a pain in the ass 16 bit was.
Meanwhile though, the PC memory model really did sort of want memory to be divided into at least a couple of classes and we had to jump through a lot of hoops to deal with that era. Even if I wasn't coding in 16 bit I was still consuming 16 bit games with boot disks.
I was recently noodling around with a retrocoding setup. I have to admit that I did grin a silly grin when I found a set of compile flags for a DOS compiler that caused sizeof(void far *) to return 6 - the first time I'd ever seen it return a non-power-of-two in my life.
I tried to explain this to a team that eventually lost their customers to competitors who could generate less interesting pages far cheaper per request. Instead they went off on a two year jag trying to cache page sections.
You know a team has lost the architectural plot when their answer for all performance problems is more caching. And once you add caching it’s hard to sell any other sort of improvements because the caching poisons the perf analysis.
Their solution took forever because the system was less deterministic than we even knew. They were starting to wrap it up when I went on a tear cleaning up low-level code that was nickel-and-diming us. By the time they launched they were looking at half of the response time improvement they had aimed for, in twice the time they estimated. And they cheated: they made two requests about 10% of the time, which made the p50 time into a lie, because two smaller requests pull down the average request time but not the cost per page load. But I scooped them and made the slow path faster, undercutting another 25% of their perf improvements.
I ended up doing more to improve the Little’s Law situation in three months of working on it half time than they did in two man-years. And still nothing changed. They are now owned by a competitor that, I believe, shut down almost all of their services.
Microservices solve a logistical problem. Rob wants to push code every two days. Steve wants to push every three. Thom deals with the business side, who want to release at whim and preferably within a few hours. Their commissions and bonuses are not reduced by how much chaos they cause the engineering team. It’s an open feedback loop.
As you add more employees they start tripping over each other on the differences between trunk and deployed. That’s when splitting into multiple services starts to look attractive. Unfortunately the new services create their own weather, so if you can use process to delay this point you’re gonna be better off.
Everyone eventually merges code they aren’t 100% sure about. Some people do it all the time. However microservices magnify this because it’s difficult to test changes that cross service boundaries. You think you have it right but unless you can fit the entire system onto one machine, you can’t know. And distributed systems usually don’t concern themselves with whether the whole thing will fit onto a dev laptop.
So then you have code in preprod you are pretty sure will work but aren’t completely sure about. Stack enough “pretty sure”s over time, and as team sizes grow you’re gonna have incidents on the regular. Separate deployment reduces the blast radius, but doesn’t eliminate it. Feature toggles reduce it by more than an order of magnitude, but that still only takes you from problems every week to a couple a year. Which in high-SLA environments still makes people cranky.