To quantify this: India's per capita CO2 emissions were 2.07 tonnes per year, while Sweden's were 3.43 (2023). Sweden used this to achieve a GDP per capita of 58,100 USD (2025) compared to India's 2,878 USD, all while spending a not-insignificant part of those emissions on heating in the winter. It would be great for all of us if India could do better on a per capita basis, since the resulting effect would be huge.
You're forgetting the fact that Sweden (like other European countries) has had >100 years of much higher emissions than India, and has built this wealth through that. Wealth compounds - so if you want to make these sorts of arguments, you should look at total historical emissions per capita.
So because others made an unknown mistake, now India should be allowed to perpetrate known, deliberate, and intentional harm? It makes India that much worse, it makes India evil!
This is just unsophisticated and uncivilized excuse making and primitive rationalization.
No, the point is that the fair way to look at this is that every country has a total carbon budget, based on population. Since atmospheric CO2 is a cumulative stock that doesn't really decay on human time scales, looking only at current emissions is misleading. It takes an arbitrary moment in time as a zero basis and says "it doesn't matter how we got where we are now; from now on you shouldn't emit more than we do".
The reality is that European countries (including Russia) and the USA are disproportionately responsible for the massive amounts of CO2 in the atmosphere today. So they should bear more responsibility for fixing this - either by investing some of the wealth they accumulated through that massive energy use (which produced the CO2 emissions) into carbon capture technologies, or by subsidizing other countries' build-out of energy without so much pollution.
These arguments are frustratingly stupid. It's as if 100 royals were eating a quarter of the food, 10,000 peasants were eating the other three-quarters, and the royals were telling the peasants that their greed was causing the stores to run dangerously low.
I gave you the numbers, if you want an honest argument then use the numbers. It's as if 10.5M "royals needing heat" used 3.6 MT (0.12%) while 1450M "peasants" used 3000 MT (99.88%).
At university we implemented a DCT+quantization encoder/decoder for audio, and had a buggy version produce these super alien, beautiful sounds. I've often wished I had saved that version.
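For anyone curious what such a codec looks like, here is a minimal TypeScript sketch of the same idea (a naive O(n²) DCT-II round trip; the block size and quantization step are illustrative, not the original assignment):

```typescript
// Naive block-based DCT codec sketch. All names and parameters here
// are illustrative, not the original university implementation.
const BLOCK = 8;    // samples per block (real codecs use far larger blocks)
const Q_STEP = 0.1; // quantization step; larger => coarser, more artifacts

// DCT-II of one block (O(n^2); real codecs use an FFT-based fast DCT).
function dct(block: number[]): number[] {
  const n = block.length;
  return Array.from({ length: n }, (_, k) => {
    let sum = 0;
    for (let i = 0; i < n; i++) {
      sum += block[i] * Math.cos((Math.PI / n) * (i + 0.5) * k);
    }
    return sum;
  });
}

// Inverse transform (DCT-III), scaled so idct(dct(x)) ~= x.
function idct(coeffs: number[]): number[] {
  const n = coeffs.length;
  return Array.from({ length: n }, (_, i) => {
    let sum = coeffs[0] / 2;
    for (let k = 1; k < n; k++) {
      sum += coeffs[k] * Math.cos((Math.PI / n) * (i + 0.5) * k);
    }
    return (2 / n) * sum;
  });
}

const quantize = (c: number[]) => c.map(v => Math.round(v / Q_STEP));
const dequantize = (q: number[]) => q.map(v => v * Q_STEP);

// Round-trip one block: encode (DCT + quantize), decode (dequantize + IDCT).
const input = Array.from({ length: BLOCK }, (_, i) => Math.sin(i / 2));
const decoded = idct(dequantize(quantize(dct(input))));
```

The "alien" sounds presumably came from a bug somewhere in exactly this round trip - e.g. a wrong normalization factor in the inverse transform, or quantizing far too aggressively.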
So what would you say about the PRISM and Upstream programs where metadata about millions of Americans was collected? Doesn't it seem as if they could target any US citizen by just pretending to target any foreigner they communicate with?
This is also why app backends don't really need statically typed languages, no matter how big the company is. You have a well-defined API on the front, and you have a well-defined DB schema on the back, that's good enough.
The static typing makes even less sense at finer code scopes, like I don't need to keep asserting that a for-loop counter is an int.
Statically typed languages, when used correctly, save engineering time both as you extend your service and when things go wrong, because the compiler helps you check that the code you've written meets, to some degree, your specification of the problem domain. With a weak type system you can't specify much of the problem domain without extra labour, but with a more expressive type system (and a team that understands how to use it) you can embed enough of the domain specification that implementing part of the business logic incorrectly, or violating a protocol, turns into a compile error instantly rather than possibly leaking into production.
As for your comment on `any`: the reason you don't want to fall back on it is that you throw out most of the gains of static typing, and your function almost certainly doesn't work with literally any type (I've never seen a function that works on absolutely anything other than `id :: a -> a`, and I'd argue there isn't one even with RTTI).
Instead you want to declare the subset of types valid for your function using some kind of discriminated union (in rust this is `enum`, zig `union(enum)`, haskell ADTs/GADTs, etc etc) where you set a static bound on the number of things it can be. You use the type system to model your actual problem instead of fighting against it or "lying" (by saying `any`) to the compiler.
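In TypeScript terms, the same idea is a tagged union (all names below are hypothetical): the compiler forces every branch to handle exactly the declared variants.

```typescript
// A discriminated union: the "kind" tag statically bounds what a value can be.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rect"; width: number; height: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle": return Math.PI * s.radius ** 2;
    case "rect":   return s.width * s.height;
    // No default needed: adding a new variant to Shape turns this
    // switch into a compile error until the new case is handled.
  }
}

const a = area({ kind: "rect", width: 3, height: 4 });
```

Passing `{ kind: "triangle", ... }` is rejected at compile time, which is the "static bound on the number of things it can be" from above.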
The same applies to services, APIs, protocols, and similar. The more the compiler can help you stay on spec, the less work you have to do later when you've shipped a P1-Critical bug by mistake and none of your tests caught it.
The type system almost never catches a bug that proper testing would miss. And if the code has such nasty untested edge cases that you don't even notice a wrong type going somewhere, it'd probably behave wrongly even with the right types.
Indeed "any" breaks type checking all around it, but it can be contained more easily in a helper func with a simple return type. Most common case is your helper does a SQL query, and it's tedious and redundant to specify the type of rows returned when the SQL is already doing that.
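One way to contain it, sketched in TypeScript (the helper and row shape are hypothetical, standing in for a real driver):

```typescript
// Everything past this boundary is fully typed; the `any` lives only
// inside the helper, so it can't leak into the rest of the codebase.
type UserRow = { id: number; name: string };

// Stand-in for a real database driver that returns untyped rows.
function rawQuery(sql: string): any {
  return [{ id: 1, name: "ada" }];
}

function getUsers(): UserRow[] {
  const rows = rawQuery("SELECT id, name FROM users");
  // Narrow the untyped data once, at the boundary.
  return rows.map((r: any): UserRow => ({ id: Number(r.id), name: String(r.name) }));
}
```

Callers of `getUsers` get full type checking even though the SQL layer itself is untyped.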
> The type system almost never catches a bug that proper testing would miss.
This is true, but the difference is you don't have to write a compiler, it's already written for you. The testing, you have to write, and do so correctly.
A lot of the woes of statically typed languages can be mitigated with tooling. Don't want to repetitively create types from an OpenAPI spec? Generate the code. Don't want to create types from SQL records? Generate the code. Don't want to write types everywhere? Deduce them.
You get all the benefits of static typing, but none of the work. It's so advanced these days that lots of statically-typed languages look dynamically-typed when you read the code. But they're not: everything has a type if you hover over it. The type deduction is just that good.
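For instance, in a TypeScript snippet like the following nothing is annotated, yet every binding has a precise static type through inference (the data is made up for illustration):

```typescript
// No annotations anywhere, but every binding is statically typed.
const users = [{ name: "ada", age: 36 }, { name: "alan", age: 41 }];
const names = users.map(u => u.name);                            // inferred: string[]
const oldest = users.reduce((a, b) => (a.age >= b.age ? a : b)); // inferred: { name: string; age: number }
// A typo like `u.nmae` would be a compile error despite zero annotations.
```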
You need tests either way. It's hard to write a test that checks behavior but misses a wrong type. Simply running the problematic code will most likely throw an exception.
The type deduction is not so automatic in most languages, TS included. Rust has the most automatic one I've seen, and of course that kind of language needs static typing. But still, it's more explicit than needed for a typical web backend.
SQL type autogen is limited to full rows, so any query returning an aggregate or something isn't going to work with that. Even for full rows, it's eh. Usually I just see that encouraging people to do local computations that should be in SQL.
In my experience with loosey-goosey languages (JS, PHP, Perl), 80%+ of the errors you'll see are type errors.
In PHP, if you look at your logs you'll see that almost all the errors are accesses of an array index that doesn't exist. This is because people use arrays as objects, with strings as member names. If you just use an object, this class of error is impossible.
So, we have to write `??` everywhere because anything can always be null and then it can break stuff.
And then you have errors with passing empty string vs null vs empty array to functions and getting unpredictable behavior. So you need to constantly check everything.
If you actually open up a function in a dynamically typed language, take your pick, you'll see something like this:
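Something like this generic sketch (loosely typed on purpose; the shape of `order` is hypothetical):

```typescript
// Typical defensive preamble in a dynamically typed codebase: the function
// can't trust anything about its input, so it re-checks everything at runtime.
function getTotal(order: any): number {
  if (order == null) return 0;               // could be null/undefined
  if (!Array.isArray(order.items)) return 0; // could be missing or the wrong shape
  let total = 0;
  for (const item of order.items) {
    if (item == null || typeof item.price !== "number") continue;
    total += item.price;
  }
  return total;
}
```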
If you don't include checks like that everywhere, your code will break. You just don't know it's broken yet.
And, btw, something like PHP's `empty()` is not a silver bullet either, because it considers a dozen different values to be empty, which comes with its own set of problems.
The argument isn't that with static types you don't need tests, but rather that with static types you can focus on testing what actually matters: the function itself, along with its integration into the program. Tests won't cover 100% of the surface area of a function taking `any`, as that surface is by definition infinite, with the constraints on the shape spread through the body of the function and dependent on the path taken at runtime (which might never be hit). I urge you to look at languages other than TS for this: TS is in the strange place of bolting a static type system onto a rather messy language, so what are issues in TS may not be issues for the rest of the space of statically type checked languages (e.g. Haskell effectively requires no type annotations at all in theory, though in practice it's often better to use them, and with some extensions they're required).
To add to that, try looking at Go, Elm, Zig, C#, Mercury, and Idris to see different type systems at play rather than assuming TS and Rust are all that's available in this design space.
> Simply running the problematic code will most likely throw an exception.
The issue here is "most likely", which during a large refactor (and due to widespread use of duck typing) can hide many faults in the application that go unnoticed until a user hits that path. Static types turn this into a compile error, so it never reaches production. Even if you have 100% path coverage, test every possible type that can be passed, and ensure truthy values don't change behaviour (e.g. `if ("")`, `if ([])`, ... - you need to test the space where all of them evaluate to `false` in addition to `true`), you're still short of the number of variations that can be passed to your function, and may end up in a situation where the wrong behaviour causes a failure at runtime, post deployment. This is not to say you should use types only and not tests (still test, even with static types), but rather that the domain of possible inputs is reduced to a testable set of values instead of always being the largest path-dependent set possible.
> It's hard to write a test that checks behavior but misses a wrong-type.
It's incredibly easy to write such tests: most people don't test the negative space of a function, and it isn't easy to know what the minimal shape of an object must be for your function, as that depends on which logic it hits inside. If a truthy check is used, or a branch is made on a field or method call of the object, then anything unused in the taken branch is not checked by your test. You'd effectively have to resort to fuzzing to find the minimal shape for each possible path just to get an idea of what your function's tests don't catch - and even that is still far from static type checking, since in some cases the set is so large you'd never finish enumerating all possible shapes.
> SQL type autogen is limited to full rows, so any query returning an aggregate or something isn't going to work with that.
Ask _why_ tooling doesn't work with that and you may notice it's due to a lack of specification or just insufficient tooling. Take F# [1] as the example here which can do far more [2] and doesn't suffer from the mentioned problem. F# is a language with static type checking.
It saves development time because if I change an API my language server can immediately notify me about all the now-broken call sites, and I don't have to wait for tests to run to find out about all of them.
The type system doesn't replace unit/snapshot/property/simulation tests, as its only job is specification. The type system is meant to be used in addition to testing, to reduce the set of possible inputs to a smaller domain such that it's easier to reason about what is possible and what isn't. The same would be true even if you went as far as formal verification of programs: you always need to test, even when you have powerful static types!
For example, given `foo :: (Semigroup a, Traversable t) => t a -> a`, I already know that whatever is passed to this function can be traversed and have its sum computed. It's impossible to pass something which doesn't satisfy both of those constraints, as this is a specification given at the type level which is checked at compile time. The things that cannot be captured as part of a type (bounded by the effort of specification) are then left to be captured by tests, which only need to handle the subset of things the above doesn't capture (e.g. `Semigroup` specifies that you can compute the sum, but it doesn't prevent `n + n = n` from being the implementation of `+`; that must be captured by property tests).
Another example, suppose you're working with time:
tick :: Monad m => m Clock.Present
zero :: Clock.Duration
seconds :: Uint -> Clock.Duration
minutes :: Uint -> Clock.Duration
hours :: Uint -> Clock.Duration
add :: Clock.Duration -> Clock.Present -> Clock.Future
sub :: Clock.Duration -> Clock.Present -> Clock.Past
is :: Clock.Duration -> Clock.Duration -> Bool
until :: Clock.Future -> Clock.Present -> Clock.Duration
since :: Clock.Past -> Clock.Present -> Clock.Duration
timestamp :: Clock.Present -> Clock.Past
compare :: Clock.Present -> Clock.Foreign.Present -> Order
data Order = Ahead Clock.Duration | Equal | Behind Clock.Duration
From the above you can tell what each function should do without looking at the implementation, and you can probably write tests for each. Here the interface guides you to handle time in a safer way and tells a story, `event = add (hours 5) present`, where you cannot mix the wrong type of data: ``until event `is` zero``. This is actual code that I've used in a production environment, as it saves the team from shooting themselves in the foot by passing a `Clock.Duration` where a `Clock.Present` or `Clock.Future` should have been. Without a static type system you'd likely end up mixing those integers up, without enough test coverage to catch it, since the space you must test is much larger than when you've constrained it to a smaller set within the bounds of the backing integer.
In short: types are specifications, programs are proofs that the specification has a possible implementation, and tests ensure it behaves correctly for what the specification cannot constrain (or for what it would be too much effort to constrain with types).
As for SQL, I'd rather say the issue is that the SQL schema is not encoded within your type system, and thus when you perform a query the compiler cannot help you infer the type from the query. It's possible (in zig [1] at least) to derive the type of a prepared SQL query at compile-time, so you write SQL as normal and zig checks that all the types line up. It's not that types cannot do this; your tool just isn't expressive enough. F# [2] is capable of this through type providers, where the database schema is imported, making the type system aware of your SQL table layouts and solving the "redundant specification" problem completely.
So with all of that, I assume (and do correct me if I'm wrong) that your view on what types can do is heavily influenced by typescript itself and you've yet to explore more expressive type systems (if so I do recommend trying Elm to see how you can work in an environment where `any` doesn't even exist). What you describe of types is not the way I experience them and it feels as if you're trying to fight against a tool that's there to help you.
> `foo :: (Semigroup a, Traversable t) => t a -> a` I already know that whatever is passed to this function can be traversed and have its sum computed. It's impossible to pass something which doesn't satisfy both of those constraints as this is a specification given at the type level which is checked at compile-time.
To add to your point, I don't think foo can even be implemented (more accurately: is not total), because neither `Semigroup a` nor `Traversable t` guarantees a way to get an `a`.
I think you'd need either `Monoid a` which has `mempty`, or `(Foldable1 t, Traversable t)` which guarantees that there's at least one `a` available.
Yep, I missed that, as I don't often work in Haskell anymore, but with the correction the rest of the above still stands (Haskell syntax is still the most efficient I'm aware of for talking about this). It being unimplementable is also a nice property of having such a type system: you can prove that your specification is impossible to implement (more applicable in Lean, ATS2, Idris, Agda) or make invalid states impossible to even write (consider state transition rules at the type level).
"Need"? Probably not. But unlike microservices they don't really have downsides (at least not with modern IDEs and the automatic refactorings they support) and they do offer some benefits.
Statically-typed languages are a form of automatically-verified documentation, and an opportunity to name semantic properties different modules have in common. Both of those are great, but it is awkward that it is usually treated as an all-or-nothing matter.
Almost no language offers what I actually want: duck typing plus the ability to specify named interfaces for function inputs. Probably the closest I've found is Ruby with a linter to enforce RDoc comments on any public methods.
I'm fine with types in shared libs, just not in the app layer code, where the cost outweighs the benefit. I think you can do the in-between you describe with Typescript, but every time I've been on a team that says "oh you can use `any`," one day they disallow it. Especially in a big corp where someone turns it into a metric and a promo target.
I forgot to add that "like I don't need to keep asserting that a for-loop counter is an int" is exactly what happens in a dynamically typed language: that assertion is exactly what the runtime ends up doing, unless it has a built-in range type to avoid the overhead and the loop variable cannot change while looping. With a static type checker that can be eliminated up front, as the compiler knows it's an int and not suddenly a string; it's impossible to change the variable's type once it's defined, so all of the RTTI overhead can be erased.
JavaScript has to check on each iteration that the counter is an int, and for arrays that the length hasn't changed underneath, along with a bunch of other things, as the language doesn't guarantee that they can't change. Even the JIT compiler has to check "is this path still what I expect", as at any point the type of the variable used in the loop can change to something else, which invalidates the specialization the JIT emitted for the int case. When you don't use languages with static types, you push all of this work to runtime, which makes the program slower for every check it needs to do while offering none of the advantages of static types.
Thus with a for loop in, say, C you don't assert that it is an int each time; you statically constrain it to be an int (the loop condition can well be based on something else, even if an int is the most common choice). For example, in zig `for` only takes slices and integer ranges, with any other type being a compile error:
var example: [1 << 8]usize = @splat(0);
for (&example, 0..) |*num, int| num.* = int;
This is not more work than what you'd do in a dynamically typed language.
12 collapses seems incredibly generous, I don't think I ever had fewer than five blackouts in a day for the few months that I stayed there. Maybe it's better for data centers than hotels but I really don't believe that there's a big difference.