Might be most devices by count, but certainly not by power consumption. EVs are the only major appliance that’s DC, and most people don’t even have them.
No, it’s mentioned because “Asian and Pacific Islander” is a specific, separate category for the government’s data-collection purposes. It was created as a result of lobbying by Pacific Islanders. You can look up all of this.
I don’t know anything about the “Pacific Islander lobby” but it’s not relevant here. The sentence here is clearly just omitting the Oxford comma. Only the summary for journalists uses this construction; the actual abstract does not.
This paper breaks out Asian and Pacific Islander as separate ethnic groups in their data. You can see it in this summary and in the slightly more detailed data if you click through to the paper on JAMA.
(Actually they break out “Non-Hispanic Asian” and “Non-Hispanic Native Hawaiian or Other Pacific Islander”. The need to call out “non-Hispanic” on every ethnicity seems weird.)
Why need production if you don't have consumption? I jest, only partially.
I suppose we do things how we do because taxing income is a lot easier to do progressively than taxing consumption.
You can't meter how many times someone has been out to eat or how many gallons of gas they have put into their car, but you can more easily track what their employer puts in their bank account.
You can progressively tax consumption by combining a high, flat consumption tax with negative income tax rates. Something like: everyone gets a small UBI, plus extra income for every dollar earned up to some threshold.
For example, let's introduce a 35% consumption tax, but also a $1k/year UBI, an extra 30% on income between $0 and $30k, an additional 20% on income between $30k and $60k, 10% on income between $60k and $100k, and 0% on any income above that.
Then, if you make $30k, your gross take home pay is actually $30k + $1k + 30% * $30k = $40k, and if you make $200k, your gross take home pay is $200k + $1k + 30% * $30k + 20% * ($60k-$30k) + 10% * ($100k-$60k) = $220k.
At the same time, if you make $30k and spend all of it on consumption, you pay 35% * $40k = $14k in taxes, so your net take-home pay after taxes is $40k - $14k = $26k. On the other hand, if you make $200k and consume all of it, you pay $77k in consumption tax, and your net take-home pay is $220k - $77k = $143k. All very progressive.
Now, the person making $200k is highly incentivised to avoid some of this tax, and instead of consuming all of it, he might only want to consume half of it, and invest the other half. This is great, because then the other half will (hopefully) get invested in a productive activity, so that in future there's even more production.
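The scheme above is easy to check mechanically. Here's a small sketch of it (the brackets and rates come from the example; the function names are mine):

```python
def gross_take_home(income):
    """Income plus the $1k UBI plus the wage subsidy from the example."""
    ubi = 1_000
    subsidy = (0.30 * min(income, 30_000)
               + 0.20 * max(0, min(income, 60_000) - 30_000)
               + 0.10 * max(0, min(income, 100_000) - 60_000))
    return income + ubi + subsidy

def net_after_consumption(income, consumed_fraction=1.0):
    """Net pay if a given fraction of gross take-home is spent,
    with a flat 35% consumption tax applied to all spending."""
    gross = gross_take_home(income)
    return gross - 0.35 * consumed_fraction * gross

print(gross_take_home(30_000))        # 40000.0
print(gross_take_home(200_000))       # 220000.0
print(net_after_consumption(30_000))  # 26000.0
print(net_after_consumption(200_000)) # 143000.0
```

Note the effective tax rate: the $30k earner nets $26k on $30k earned (about 13%), while the $200k earner nets $143k (about 28.5%), assuming both consume everything.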
There is little point in inventing new protocols, given how low the overhead of UDP is. That's just 8 bytes per packet, and it enables going through NAT. Why come up with a new transport layer protocol, when you can just use UDP framing?
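To illustrate the point: a custom protocol riding on UDP just needs to define its own small header inside the datagram. A toy sketch (this header layout is hypothetical, not any real wire format; UDP itself contributes only the 8 bytes of src port, dst port, length, and checksum):

```python
import struct

# Hypothetical application header: 4-byte message id, 2-byte message type.
HEADER = struct.Struct("!IH")

def encode(msg_id: int, msg_type: int, payload: bytes) -> bytes:
    """Pack one application message into a single UDP datagram body."""
    return HEADER.pack(msg_id, msg_type) + payload

def decode(datagram: bytes):
    """Split a datagram body back into (id, type, payload)."""
    msg_id, msg_type = HEADER.unpack_from(datagram)
    return msg_id, msg_type, datagram[HEADER.size:]

dgram = encode(42, 1, b"hello")
assert decode(dgram) == (42, 1, b"hello")
```

The bytes then go out via an ordinary `socket.sendto`, and NATs treat it like any other UDP traffic.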
Agreed. Building a custom protocol seems “hard” to many of the same folks who fearlessly build on top of HTTP. The wild shenanigans I’ve seen with headers, query params, and JSON make me laugh a little. Everything-as-text is _actually_ hard.
Part of the problem with UDP is the lack of good platforms and tooling. Good examples, too. I’m trying to help with that, but it’s an uphill battle for sure.
I think the "problem" of sending data is a lot harder without some concept of payloads and signaling. HTTP just happens to be the way most people do that, but plenty of RPC and messaging systems like ZeroMQ/nng, gRPC, Avro, Thrift, etc. work just fine. Plenty of tech companies use those internally.
Some of this is hurt by the fact that v8, Node's JavaScript engine, has had first-class JSON parsing support built in, but no comparable support for binary protocol parsing. So writing JavaScript to parse binary protocols is a lot slower than parsing JSON.
Sure, you can reimplement multiplexing on the application level, but it just makes more sense to do it on the transport level, so that people don't have to do it in JavaScript.
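For concreteness, here's roughly what reimplementing multiplexing at the application level looks like: each datagram carries a stream id and sequence number, and the receiver demultiplexes into per-stream, in-order buffers. A toy sketch, all names mine, not any real protocol:

```python
import struct
from collections import defaultdict

STREAM_HDR = struct.Struct("!IQ")  # stream id, sequence number

def frame(stream_id: int, seq: int, data: bytes) -> bytes:
    return STREAM_HDR.pack(stream_id, seq) + data

class Demux:
    """Reassemble datagrams into per-stream, in-order byte buffers."""
    def __init__(self):
        self.pending = defaultdict(dict)   # stream id -> {seq: data}
        self.next_seq = defaultdict(int)   # next expected seq per stream
        self.streams = defaultdict(bytes)  # delivered bytes per stream

    def on_datagram(self, dgram: bytes):
        sid, seq = STREAM_HDR.unpack_from(dgram)
        self.pending[sid][seq] = dgram[STREAM_HDR.size:]
        # Deliver any contiguous run starting at the expected seq.
        while self.next_seq[sid] in self.pending[sid]:
            self.streams[sid] += self.pending[sid].pop(self.next_seq[sid])
            self.next_seq[sid] += 1

d = Demux()
d.on_datagram(frame(1, 1, b"world"))   # out of order: held back
d.on_datagram(frame(1, 0, b"hello "))  # fills the gap; both delivered
d.on_datagram(frame(2, 0, b"other stream"))
assert d.streams[1] == b"hello world"
assert d.streams[2] == b"other stream"
```

This is the easy part; the hard parts QUIC handles at the transport level are retransmission, flow control, and avoiding head-of-line blocking across streams.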
It only takes a few thousand lines (easily less than 10k even with zero dependencies and no standard library) to implement QUIC.
Kernel management of transport protocols has zero actual benefit for latency or throughput given proper network stack design. Neither does hardware offload except for crypto offload. Claimed differences are just due to poor network stack design and poor protocol implementation.
Not fully standards compliant since I skipped some irrelevant details like bidirectional streams when I can just make a pair of unidirectional streams, but handles all of the core connection setup and transport logic. It is not actually that complicated. And just to get ahead of it, performance is perfectly comparable.
FWIW, quic-go, a fully-featured implementation in Go used by the Caddy web server, is 36k lines in total (28k SLoC), excluding tests. Not quite 10k, but closer to that than to your figure.
This is not the case with Starlink (and presumably Starlink) satellites. The ground stations use directional phased arrays. They can do it, because they keep good track of where each satellite is at any given moment, and do trajectory adjustments as needed.
Yes, ground stations are virtually always highly directional, except for, like, radio hams sometimes (and even hams usually use Yagis). Possibly you didn't notice this, but I'm talking about the antennas on the satellites, which are the ones that could suffer interference (since they're the ones receiving the uplink frequencies we're discussing), not the ground station antennas.
You always have to keep track of where each satellite is at any given moment.
What do you mean by "Starlink (and presumably Starlink)"?
To add to this, we know what objects interfere with our satellite contacts. We keep their orbital positions (as best as possible) in mind when scheduling satellite operations to avoid communication failures (partial or total) caused by their interference.
This is often learned after the fact. A contact will fail or go badly and then you can examine what was around it at the time. Over a series of failures the offending satellite will be identified.
Yeah, if you don't know the name of the thing you're looking for, you can spend weeks looking for it. If you just search for something generic like "eigenvalue bound estimate", you'll find thousands of papers and hundreds of textbooks, and it will take a substantial amount of time to decide whether each is actually relevant to what you're looking for.
There is no reason to expect that the test results would be the same across all demographic groups, and in fact, everything we know about psychometrics (i.e. the science of mental testing) suggests that we should expect exactly the opposite. See e.g. "Intelligence: Knowns and Unknowns", which describes the consensus position of the American Psychological Association as of 1995:
> The cause of [test achievement] differential is not known; it is apparently not due to any simple form of bias in the content or administration of the tests themselves.
Not sure what your point is; the "test achievement" mentioned in the document refers to a totally different kind of "test" than the ones we were talking about.
Also, on pure logic, I don't think the document shows what you think it shows. The document you provided (which is 30 years old, so from this one alone we should not assume it still reflects today's consensus) explains that the difference is not understood, and that there is no _obvious_ explanation, whether from biology, from group culture, or from bias in the tests. In other words: the difference is due to something _not obvious_, for example (but not limited to, of course, it's just an example) a _not obvious_ form of bias.
What you describe using so many completely unnecessary mathematical terms is not only not found in “every real-world protocol”; it is in fact virtually absent from the overwhelming majority of actually used protocols, with the notable exception of the kind of protocol that gets described by a four-digit-numbered RFC. Believe it or not, in the software industry nobody defines a new “version number” with a “strictly defined algebra” when they want to add a new field to a communication protocol between two internal backend services.
> What you describe using many completely unnecessary mathematical terms
Unnecessary for you, surely.
> Believe it or not, in the software industry nobody defines a new “version number” with “strictly defined algebra” when they want to add a new field to a communication protocol between two internal backend services.
Name a protocol that doesn't have a version number, or one whose new versions aren't accompanied by spec clarifications defining the changes. The word "strictly" in "strictly defined algebra" refers to the fact that you cannot evolve a protocol without publishing the changed spec: you're strictly obliged to publish a spec, even a loosely defined one with lots of omissions and zero-values. That's the inferior algebra protobuf gives you; you can think it's unnecessary, but it still exists.
Instead of just handwaving about whether it's necessary or not, why not point to any protocol that relies on that attribute, and we can then evaluate how important that protocol is?
Yeah. And for anyone curious about the actual content hidden under the jargon-kludge-FP-nerd parent comment, here's my attempt at deciphering it.
They seem to be saying that you have to publish code that can change a type from schema A to schema B... And back, whenever you make a schema B. This is the "algebra". The "and back" part makes it bijective. You do this at the level of your core primitive types so that it's reused everywhere. This is what they meant by "pervasive" and it ties into the whole symmetric groups thing.
Finally, it seems like when you're making a lossy change, where a bijection isn't possible, they want you to make it explicitly incompatible. I.e., if you replaced address with city, then you cannot decode the message in code that expects address.
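If I'm reading the parent right, the idea sketches out like this (all type and function names are hypothetical): every compatible schema change from A to B ships with a pair of converters whose round trip is the identity, and a lossy change deliberately gets no reverse converter, so it's incompatible by construction:

```python
from dataclasses import dataclass

@dataclass
class PersonV1:
    name: str
    address: str

@dataclass
class PersonV2:
    name: str
    address: str
    nickname: str  # new field with a zero-value default

# The "algebra": a pair of total functions between adjacent versions.
def v1_to_v2(p: PersonV1) -> PersonV2:
    return PersonV2(name=p.name, address=p.address, nickname="")

def v2_to_v1(p: PersonV2) -> PersonV1:
    # Dropping nickname is acceptable only because "" is its zero-value;
    # the round trip v1 -> v2 -> v1 is the identity.
    return PersonV1(name=p.name, address=p.address)

p1 = PersonV1("Ada", "12 Main St")
assert v2_to_v1(v1_to_v2(p1)) == p1

@dataclass
class PersonV3:
    name: str
    city: str  # replaced address: lossy change

def v2_to_v3(p: PersonV2) -> PersonV3:
    # Hypothetical lossy projection; no v3_to_v2 is published,
    # so v2 readers must treat v3 messages as incompatible.
    return PersonV3(name=p.name, city=p.address.split(",")[-1].strip())
```

Doing this at the level of core primitive types, so the converters compose automatically for any record built from them, would be the "pervasive" part.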