> EE to debug react state management ... easily pick up most of it after a week long crash course while training a performance engineer ... would take months
Isn't that mostly because as you go up the abstraction layers, tools and docs to teach yourself the tricks of the trade fast are in abundance (especially for a popular layer like React)? Which in turn is likely a function of incentives and opportunities.
It's because the higher up the stack you go, the more declarative and literate the tools become. Calling sort is far easier than understanding the algorithm, for example.
> Calling sort is far easier than understanding the algorithm for example.
This was one of my gripes in college: why am I implementing something if I just need to understand what it does? I'm going to use the built-in version anyway.
Because that's the entire point of college. It's supposed to teach you the fundamentals - how to think, how to problem solve, how to form mental models and adapt them, how things you use actually work. Knowing how different sorting functions work and what the tradeoffs are allows you to pick the best sorting function for your data and hardware. If the tools you have aren't doing the job, you can mend them or build new tools.
So you know which sort to call because there isn't a right answer for all cases.
And so you can write your own because you're probably going to want to sort data in a specific way. Sort doesn't mean in numerical increasing or decreasing order, it means whatever order you want. You're sorting far more often than you're calling the sort function.
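A quick sketch of that point, in Python with made-up ticket fields purely for illustration: one key function can express whatever ordering the task actually needs, not just ascending numbers.

    # "Sort" here means the order the task needs, not plain ascending order:
    # priority rank first, then newest ticket first. Fields and ranking are
    # invented for this example.
    from datetime import datetime

    tickets = [
        {"id": 1, "priority": "low",  "opened": datetime(2024, 1, 3)},
        {"id": 2, "priority": "high", "opened": datetime(2024, 1, 1)},
        {"id": 3, "priority": "high", "opened": datetime(2024, 1, 2)},
    ]

    rank = {"high": 0, "medium": 1, "low": 2}

    # One key function encodes the whole ordering: priority ascending by rank,
    # then opened date descending (hence the negated timestamp).
    tickets.sort(key=lambda t: (rank[t["priority"]], -t["opened"].timestamp()))

    print([t["id"] for t in tickets])  # [3, 2, 1]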
My degree was not specifically CS, it was a related degree focused on landing jobs, but they still covered some CS concepts because some students were in fact doing a CS degree. I was more focused on "show me what I need to build things." I have never had to hand-craft an algorithm in my 15 years of coding; it just makes no sense to me. Someone else figured it out, and I'm content understanding the algorithms.
In my twenty years, I've rerolled famous algorithms "every now and then".
It's almost wild to me that you never have.
Sometimes you need a better sort for just one task. Sometimes you need a parser because the data was never 100% standards compliant. Sometimes you need to reread Knuth for his line-breaking algorithm.
My high school computer science teacher (best one I ever had) once told us this anecdote when we were learning sorting algorithms:
He was brought in by the state to do some coaching for existing software devs back in the 90s. When he was going over the various different basic algorithms (insertion sort, selection sort, etc.) one of the devs in the back of the class piped up with, "why are you wasting our time? C++ has qsort built in."
When you're processing millions of records, many of which are probably already sorted, using an insertion sort to put a few new records into a sorted list, or using selection sort to grab the few records you need to the front of the queue, is going to be an order of magnitude faster than just calling qsort every time.
Turned out he worked for the department of revenue. So my teacher roasted him with "oh, so you're the reason it takes us so long to get our tax returns back."
Thinking that you can just scoot by using the built-in version is how we get to the horrible state of optimization that we're in. Software has gotten slow because devs have gotten lazy and don't bother to understand the basics of programming anymore. We should be running a machine shop, not trying to build a jet engine out of Lego.
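To put the teacher's point in rough code (Python here rather than the original C++, and the numbers are made up): when the data is already sorted, slotting a few new records into place, or pulling out just the few you need, avoids re-sorting everything.

    import bisect
    import heapq
    import random

    # 100k records that are already sorted, plus a small batch of new ones.
    records = sorted(random.sample(range(1_000_000), 100_000))
    new_records = random.sample(range(1_000_000), 10)

    # Insertion-sort style: slot each new record into place instead of
    # appending the batch and re-sorting all 100k records.
    for r in new_records:
        bisect.insort(records, r)

    # Selection-sort style: pull only the few smallest records instead of
    # sorting everything just to look at the top 5.
    print(heapq.nsmallest(5, records))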
I mean, the lesson I got from my 10X class was pretty much that: "never write your own math library, unless you're working on maintaining one yourself".
Funnily enough, this wasn't limited to contributing to some popular open-source initiative. You can call it YAGNI, but many companies do in fact have their own libraries to maintain internally, so it comes up more often than you'd expect.
On a higher level, the time I took to implement a bunch of sorts helped me read the docs for sort(), realize it's a quicksort implementation, and make judgements like:
1. yeah, that works
2. this is overkill for my small dataset, I'll just whip up basic bubblesort
3. oh, there are multiple sort APIs and some sorts are in-place. I'll use this one (sketched below)
4. This is an important operation and I need a more robust sorting library. I'll explain it to the team with XYZ
The reasoning was the important lesson, not the ability to know what sorting is.
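For point 3, a tiny Python illustration of "multiple sort APIs, some in-place" (the same distinction shows up in most languages):

    data = [3, 1, 2]

    copy = sorted(data)   # returns a new sorted list; the original is untouched
    print(data, copy)     # [3, 1, 2] [1, 2, 3]

    data.sort()           # sorts in place, returns None, allocates no second list
    print(data)           # [1, 2, 3]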
Unironically, wet/dry cycles aren't good news for California either.
Research published in the aftermath of the fire examines how this extremely wet to extremely dry weather sequence is especially dangerous for wildfires in Southern California because heavy rainfall leads to high growth of grass and brush, which then becomes abundant fuel during periods of extreme dryness.
I wonder how much of an effect human activity has on these cycles. Obviously, there are cycles within nature that don't include human activity, but is this particular "equilibrium" (if we could call it that) the result of human settlements and all that entails? Or have these cycles always happened this way, just without a huge chunk of the population being in the midst of them to witness and be affected by them?
This might be a good time to recommend you all read the first 5 pages of East of Eden by John Steinbeck. It's about how the Salinas Valley goes through flood and drought cycles, and how every time they're in one cycle they forget the other one ever happened.
Huge amount, but maybe not in the way you intended.
Many of California's ecosystems have evolved to expect fires. Humans can't stand fires and aggressively put them out. So fuel that would be regularly burned off in mild wildfires instead builds up into megafires that exceed the limits of what the ecosystem can handle (a lot of California trees are fire-tolerant, but there's a point where the flames get too high and too intense).
So yeah, the human activity that affects these cycles is driven by our cognitive dissonance and our fear of phrases like "mild wildfire".
Depends on how you quantify human impact. Lodgepole Pine (for example) is fire adapted. That's not something that evolved overnight. So it's safe to say that broad swaths of California have been experiencing a feast-famine cycle since before humanity developed agriculture.
Most of the actual wildfires just get put out. The big ones are happening because the buildup is too big, since all the smaller ones have been put out. It's all in service of the forestry industry.
What difference does the cause make if the end result is exactly the same as a natural event?
Secondly, you could just as easily make this a case against CA environmental restrictions on logging. How many houses could have been built with those trees that went up in smoke? How many people could have been employed by the lumber industry? Now all those "green" trees are CO2 warming the atmosphere. It's almost as if CA wants crises (housing, employment, environment) because it gives its politicians more money and power.
Well, for starters, the Dixie Fire burned nearly a million acres and huge swaths of the Plumas and Lassen National Forests - the largest and most expensive fire in California history. It burned 70% of Lassen National Park.
I agree that forests are an economic resource and would argue that a fire, caused by humans and exacerbated by human forest management, is a devastating outcome economically. These aren't wildfires that are merely periodically clearing the forest floor allowing for better forest propagation, they're burning hot enough to kill everything - trees, soil, and anything in between. Along many parts of the Pacific Crest Trail in Northern California, you can see aspects of slopes that have burned at various times over decades and see that those forests are struggling to come back. I hiked the entirety of the Pacific Crest Trail this past summer and would argue that I have a decent sense of the scope and character of the devastation in the forests affected by the fires I've cited.
What difference does it make?
1. These aren’t forest regenerating/undergrowth clearing events - they’re apocalyptic in their devastation. A million acres unnecessarily burned in the Dixie Fire.
2. Forests are limited, threatened resources. Muir wrote a passage calling the sheep herd he was tending in his first summer in his beloved Sierra “hooved locusts” but managed to rationalize the devastation wrought by those sheep immediately after by reasoning that there still remain thousands of untouched high Sierra meadows. Just as there aren’t a thousand Tuolumne Meadows, there aren’t a thousand Lassen or Plumas National Forests. Every single one is irreplaceable on a timeframe that takes into account forest regeneration and the scale of these fires.
3. Paradise, CA was completely devastated by the Camp Fire - the deadliest and most destructive fire in California history - which was started by poorly maintained PG&E power infrastructure. Lahaina, Hawaii was utterly devastated in a similar fashion by a fire with similar causes. Even ignoring that our forests are being irrevocably destroyed, human-caused forest fires are engulfing and destroying entire communities, killing people unable to evacuate ahead of wind-driven firestorms.
4. Besides forest health, threat to life and property, here’s one that I’d actually expect to land: the threat posed by increasingly powerful fires started by humans and exacerbated by human activity (including the forest management you cited) driven by increasingly extreme weather conditions and events is going to make home insurance untenable. People are already being widely priced out of insurance markets whose actuaries are now pricing in risks that include potential for outcomes like every single home in Lahaina/Paradise/Malibu/Santa Monica is devastated.
What does it matter? Well, even handwaving away the devastation wrought on our forests by man made fire, those fires that affect you and your home insurance bill are essentially that complete set of fires that aren’t naturally occurring events. You don’t have to take my word for it - an actuary will have you understanding it sooner or later.
> Can't tell if it's an OS level safeguard or an app-level one.
App version rollbacks are not allowed on Android. Even if they were, apps would have to implement support for rollbacks themselves (think database schema changes that must be undone, etc.).
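A minimal sketch of the schema problem, using plain sqlite3 in Python rather than Android APIs and an invented table/column, just to show why a rolled-back app build has to handle a "too new" database explicitly:

    import sqlite3

    EXPECTED_SCHEMA_VERSION = 1  # what this (older) app build understands

    conn = sqlite3.connect("app.db")
    (db_version,) = conn.execute("PRAGMA user_version").fetchone()

    if db_version > EXPECTED_SCHEMA_VERSION:
        # The database was written by a newer build. Without an explicit
        # downgrade step like this, the old code would hit a schema it
        # doesn't understand. (Hypothetical table/column; DROP COLUMN
        # requires SQLite 3.35+.)
        conn.execute("ALTER TABLE notes DROP COLUMN pinned")
        conn.execute(f"PRAGMA user_version = {EXPECTED_SCHEMA_VERSION}")
        conn.commit()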
> CF could have kept coasting on what Astro was building, but instead they are paying for it. But in return they get a lot of control.
Supabase pioneered the modern implementation of this model. Probably Red Hat before it? Google also tends to "acquihire" maintainers of popular FOSS projects, like Ben Goodger (Firefox), Scott Remnant (Upstart), Junio Hamano (Git), and Guido van Rossum (Python).
TFA is missing a host of popular isolation techniques, like Isolates, Code Interpreters / Binary Translators [0], Enclaves, Exclaves, Domains/Worlds, (RISC-V) SEEs, TEEs, SEs, HSMs, pKVMs ...
> as the CLOUD Act "gives the US government authority to obtain digital data
AWS maintains a similar stance, too [0]?
The CLOUD Act clarified that if a service provider is compelled to produce data under one of the limited exceptions, such as a search warrant for content data, the data to be produced can include data stored in the U.S. or outside the U.S.
> Microsoft admitted that it 'cannot guarantee' data sovereignty
Hm. As for AWS, they say that if the customer sets up proper security boundaries [0], they'll keep their end of the bargain [2][3]:
As part of the technical design, access to the AWS European Sovereign Cloud physical infrastructure and logical system is managed by Qualified AWS European Sovereign Cloud Staff and can only be granted to Qualified AWS European Sovereign Cloud Staff located in the EU. AWS European Sovereign Cloud-restricted data will not be accessible, including to AWS employees, from outside the EU.
All computing on Amazon Elastic Compute Cloud (Amazon EC2) in the AWS European Sovereign Cloud will run on the Nitro System, which eliminates any mechanisms for AWS employees to access customer data on EC2. An independent third party (the UK-based NCC Group) completed a design review confirming the security controls of the Nitro System (“As a matter of design, NCC Group found no gaps in the Nitro System that would compromise these security claims”), and AWS updated its service terms to assure customers “there are no technical means or APIs available to AWS personnel to read, copy, extract, modify, or otherwise access” customer content on the EC2 Nitro System.
Customers also have additional mechanisms to prevent access to their data using cryptography. AWS provides advanced encryption, key management services, and hardware security modules that customers can use to protect their content further. Customers have a range of options to encrypt data in transit and at rest, including options to bring their own keys and use external key stores. Encrypted content is rendered useless without the applicable decryption keys.
The AWS European Sovereign Cloud will also benefit from AWS transparency protections over data movement. We commit in the AWS Service Terms that access to the EC2 Nitro System APIs is "always logged, and always requires authentication and authorization." The AWS European Sovereign Cloud also offers immutable, validated logs that make it impossible to modify, delete, or forge AWS CloudTrail log files without detection.
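The "bring your own keys" part of that quote is easy to illustrate outside AWS entirely; here's a rough sketch with the Python cryptography package (not AWS's implementation) of why ciphertext alone is useless to the provider:

    from cryptography.fernet import Fernet

    customer_key = Fernet.generate_key()   # held by the customer / external KMS
    f = Fernet(customer_key)

    ciphertext = f.encrypt(b"customer record")   # all the provider ever stores
    print(f.decrypt(ciphertext))                 # only the key holder can do this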