ojkelly's comments | Hacker News

Until such a point where we have agents not trained on human language or programming languages, I think the answer is something that’s really good for people as well:

- one obvious way to do things

- memory safe, thread safe, concurrency safe, cancellation safe

- locality of reasoning, no spooky action at a distance

- tests as a first class feature of the language

- a quality standard library / reliable dependency ecosystem

- can compile, type check, lint, and run tests in a single command

- can reliably mock everything, either with effects or something else, such that again we maintain locality of reasoning

The old saying applies: a complex system that works is made up of simple systems that work. You want a language where you can get the small systems working and tested, and then build upon them. All of these things work towards minimising the context needed to iterate, and shortening the feedback loop of iteration.


I thought the same, but when Berkeley Mono got ligatures I gave them a go and never turned them off.

I think the truth is that any good monospace font is designed with an awareness of the grid those characters are laid out in. The rhythm and stability of that grid is a feature of monospace fonts. It lets us line up text, draw shapes and so on.

You would think not having the underlying characters visible would be an issue, but ligatures are just symbols like any other. In a short time you learn to read them, like you would any contracted word.


Decided to check those ligatures out, but this is pretty much entirely unreadable to me.

https://usgraphics.com/static/products/TX-02/images/TX-02-li...


It is probably a bit easier to start from a language you are familiar with. That image is intentionally a mishmash of random arrows and operators that don't necessarily align with the semantics of real code.

I think that's one of the things Fira Code's Readme [1] does a better job at than Berkeley Mono's page. The big image at the top breaks the ligatures down into high-level categories, or the programming language they are most associated with, showing each ligature side by side with its plain form. Further down the Readme you can see several real examples from programming languages with the ligatures called out, giving you context clues for what they look like in a language you may already be familiar with.

[1] https://github.com/tonsky/FiraCode/tree/6.2


There is a ligatures explorer to see exactly how glyphs relevant to a particular programming language of interest would look, with or without ligatures: https://usgraphics.com/products/berkeley-mono/ligatures


Spatial Audio for music is interesting and when properly mixed for it a song can be great. But I’m not going out of my way to find those songs.

Where it absolutely excels is movies and TV; the immersion is spectacular.

For music, I’m still holding out for an example where the immersion Spatial Audio/Dolby Atmos can provide genuinely adds to the experience. Orchestral music is probably the best place to find it.


Would it make more sense to consider the response from the DB, like a response from any other system or user input, and take the parse don’t validate approach?

After all, the DB is another system, and its state can be different to what you expected.

At compile time we have a best guess. Unless there were a way to tell the DB what version of the schema we think it has, that guess could always be wrong.
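
A minimal sketch of that approach in TypeScript (the User shape and the query are made up for illustration):

    // Treat the DB row as unknown input and parse it into a typed value,
    // failing loudly if the schema we compiled against no longer matches.
    type User = { id: number; email: string };

    function parseUser(row: unknown): User {
      if (typeof row !== "object" || row === null) {
        throw new Error("expected a row object");
      }
      const r = row as Record<string, unknown>;
      if (typeof r.id !== "number" || typeof r.email !== "string") {
        throw new Error("row does not match the expected User shape");
      }
      return { id: r.id, email: r.email };
    }

    // const user = parseUser(await db.query("SELECT id, email FROM users LIMIT 1"));

Once parsed, the rest of the code handles a User, not a guess about one.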


The name should represent what the function does, it should indicate its purpose.

The distinction is useful even when it’s structurally identical to another function.

Two identical functions in different contexts or domains often diverge. When they’re not arbitrarily bound by the fact they have the same contents it’s easy to extend one and not the other.

During compilation, they could both end up using the same actual implementation (though I’m not sure if any compiler does this).
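
For illustration (both domains are hypothetical), two functions can be structurally identical and still deserve separate names, because the reasons they change are different:

    // Structurally identical today, but owned by different domains.
    const applyDiscount = (price: number, rate: number) => price * (1 - rate);
    const applyTaxRebate = (price: number, rate: number) => price * (1 - rate);

    // Later, one diverges without touching the other:
    // const applyDiscount = (price: number, rate: number) =>
    //   Math.max(0, price * (1 - rate));

(Some toolchains do merge identical bodies at link time; it's called identical code folding, e.g. the --icf option in the gold and lld linkers.)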


Test names are one of those things that are painful to write: what the test does is obvious to you as you write it, but there’s an extra hassle in switching gears to describe what its contents are actually doing.

It is really valuable when they are named well.

I’ve found this is where LLMs can be quite useful; they’re pretty good at summarising.

Someday soon I think we’ll see a language server that checks if comments still match what they’re documenting. The same for tests being named accurately.


I've never seen this as a problem. If you're doing TDD and you have a scenario in mind, you describe that scenario in the name of the test.

If you're writing the test after, then yeah, maybe it's hard. But that's one of the many reasons why it's probably better to write the test before, aligned with the actual feature or bugfix you're intending to implement.


Maybe that's also why TDD is hard for me: I only truly start to think or visualize when I'm writing the actual code. I don't know if it's ADHD or what it is, but writing requirements and tests beforehand is just not my forte. It's like I only get dopamine when I build something, and everything else feels frustrating.


I used to be like that sometimes. Then I started realizing I'd get the function 90% complete and discover an edge case and have to start over in a way that could handle the edge case. Sometimes this could happen twice.

Documenting your requirements by writing the tests in advance is of course painful, because it forces you to think about the edge cases upfront. But that's precisely why it saves time in the long run: it makes it a lot more likely you can write the function correctly from the start.
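
As a sketch of what that looks like (parsePort and its edge cases are hypothetical), the tests written first become the record of the requirements:

    import { strict as assert } from "node:assert";
    import { test } from "node:test";

    // The out-of-range and non-numeric cases were surfaced by writing
    // the tests first, before the implementation hardened around the
    // happy path.
    function parsePort(input: string): number {
      const n = Number(input);
      if (!Number.isInteger(n) || n < 1 || n > 65535) {
        throw new Error(`invalid port: ${input}`);
      }
      return n;
    }

    test("parses a plain port number", () => assert.equal(parsePort("8080"), 8080));
    test("rejects out-of-range ports", () => assert.throws(() => parsePort("70000")));
    test("rejects non-numeric input", () => assert.throws(() => parsePort("http")));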


> Perhaps others have made this connection before…it does feel like some sort of universal law of physics.

This has been on my mind too, and I can’t help but think the fundamental concept underpinning it all is causality.

After diving into Spanner and then reading about “consistency as logical monotonicity” (CALM) [0], I’m convinced there’s definitely something to databases beyond CAP.

I’m yet to find a simple and clear law or theorem that captures what you’re hinting at, but it feels like we must be getting close. This has been bouncing around my head for a few years, since I first wrote a toy CRDT DB.

It seems to show up anywhere more than one system with independent memory (a place where state can be held) needs to maintain a shared representation of, or fact about, something.

Within one memory (akin to a reference frame in quantum physics), we can do anything we want. We can mutate state without any interaction or interference from the outside world. This also sounds like a pure function. By itself, this is academic, theoretical; it might as well not exist. If a tree falls in the woods...

So if we want to interact with any other systems, we then need to coordinate, and the question is how.

The same issue and pattern seems to rhyme everywhere: CPUs, HDDs, SSDs, file systems, networks, UIs, people, teams, etc. The best possible collaboration seems to be that which requires the least coordination. Money is a good example here: someone can string together a series of products from companies that know nothing about each other, by coordinating with money as a means of exchange. Not to mention being able to buy complex technology with no idea of the supply chain behind it. I don’t have to coordinate with a mine to use a computer, even though the computer contains something from that mine.

It sort of looks like distributed databases build a virtual memory on top of many smaller memories, which need to share some parts with each other to protect the system as a whole.

New information may need a few writes before it can be considered a new fact. I think this is an implementation detail, in that it’s irrelevant to the observer (who has no control over it).

This isn’t eventual consistency, which is perhaps the cruder form of this, where the implementation detail above is wide open for all to see. Instead, new information is available immediately; it’s just that your definition of immediately is different from the database’s.

It follows, then, that as an observer of the system you cannot violate causality by learning information from the future while you are still in the past.

My understanding from Spanner is that when you ask for a read, you provide or are issued a timestamp which provides the causal information to determine what you are allowed to know.
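
A toy sketch of that idea (nothing like Spanner’s actual API): reads carry a timestamp, and the store only reveals versions at or below it:

    // Multi-versioned cell: a read at time t only sees writes committed
    // at or before t, so an observer can never learn information "from
    // the future".
    type Version = { ts: number; value: string };

    function readAt(versions: Version[], ts: number): string | undefined {
      const visible = versions.filter((v) => v.ts <= ts);
      return visible.sort((a, b) => b.ts - a.ts)[0]?.value;
    }

    const history: Version[] = [
      { ts: 10, value: "a" },
      { ts: 20, value: "b" },
    ];
    console.log(readAt(history, 15)); // "a": the ts=20 write is still in this reader's future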

The system can then maintain both a true and correct representation of what it knows, and an incomplete working memory of what it is trying to know next (the writes which need to be committed into multiple memories).

Memory being anything from RAM, SSDs, carrier pigeons, lines in the sand, etc.

I think where this breaks most of our brains is that it’s a universe simulation.

And both time and causality are fundamental invariants of the stability of the system, but are leaky abstractions that we need to deal with.

In CALM this is abstracted into what is effectively entropy. If your program never moves backward in time / never loses entropy, it’s CALM (I think). In earlier work I think Lamport clocks and vector clocks were used; in Spanner it’s based on very precise measurements of our own time, where the maximum speed of information (i.e. the speed of light) is the greater of the smallest unit of time we can measure (the precision of the GPS clock) and the time it takes for new data to become available in the system.
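
A grow-only set is the toy version of that monotonicity (my reading of CALM, sketched in TypeScript): knowledge only ever grows and merge order doesn’t matter, so replicas never need to coordinate:

    // A G-Set CRDT: the only operation is adding, so every replica's
    // knowledge is monotone and merging is just set union.
    type GSet<T> = Set<T>;

    const add = <T>(s: GSet<T>, v: T): GSet<T> => new Set([...s, v]);
    const merge = <T>(a: GSet<T>, b: GSet<T>): GSet<T> => new Set([...a, ...b]);

    // merge is commutative, associative, and idempotent, so replicas can
    // exchange state in any order, any number of times, and still converge.
    const replicaA = add(add(new Set<string>(), "x"), "y");
    const replicaB = add(new Set<string>(), "z");
    console.log(merge(replicaA, replicaB)); // Set { 'x', 'y', 'z' }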

The other part where this differs from the real world is that the speed of information, the latency of a request, is different for reads and writes. That’s not true in the quantum world, where everything is a write (I think). Then, consider that in our own reference frame we can do a huge amount of work while waiting for a DB read/write, something that would violate the speed of light if it weren’t happening in our virtualised world.

We cannot break causality in the world we live and breathe in, but we do it all the time in our simulated ones.

[0] https://arxiv.org/abs/1901.01930


It “feels” to me like the uncertainty principle. Think of availability as an interval of time by which all nodes have to be connected. If you set A high enough, sure, you can have both C and P to your heart’s delight. As A shrinks, you lose the ability to have both and have to pick one. It’s something like C × P / A > n, where n is a constant within a system.


It’s a consequence of thinking light/dark modes are personal preference when they’re accessibility features.

And when viewed through that lens you can see how off-putting it is when it’s a paid feature, or only adjustable when logged in, or a user setting that seemingly always defaults to light.


One practice I really like with feature flags is having a flag budget. It builds on the idea of an error budget: some percentage of errors is okay, but past a certain threshold feature work needs to slow or stop until things are back under control.

Having a limit on in-flight feature flags means you’re incentivised to clean them up when they’re done (or to decide they’re long-running features, etc.), and it can help keep a handle on in-progress work.

But mostly, feature flags are a powerful tool—they can hurt as much as they help if you don’t use them right.
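
One way to make a budget like that concrete (a hypothetical check, with a made-up flags.json, that could run in CI):

    // Fail the build when the number of in-flight flags exceeds the budget.
    import { readFileSync } from "node:fs";

    const BUDGET = 10; // the agreed flag budget for this service
    const flags: { name: string; permanent?: boolean }[] =
      JSON.parse(readFileSync("flags.json", "utf8"));

    const inFlight = flags.filter((f) => !f.permanent);
    if (inFlight.length > BUDGET) {
      console.error(`flag budget exceeded: ${inFlight.length}/${BUDGET}`);
      process.exit(1);
    }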


That's a great take! Feature flags are just like braces, they help you but you will need to remove them ;) I am currently hacking on a feature that will automatically create a PR for you with the flag removed. One click!


Nice! Do you enforce these 'budgets' in CI or some such? If so, how?


There are two main ways to do feature flags.

Coupled to a code deploy or not.

Changing flags on deploy is simple, great when you have a fast pipeline and only a few flags.

But at some point it becomes useful to decouple feature release from code deploy.

And the only way to do that is to be able to change the value of a flag out-of-band of a pipeline.

Then you have the capability to test new code in environments before prod and in small parts of prod—canary releases and so on.
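
A sketch of the decoupled shape (the FlagStore interface and the flag name are hypothetical): the deployed code only asks a question at runtime, and the answer can change out-of-band:

    // Deployed once; who sees the feature is decided out-of-band.
    type FlagStore = { isEnabled(flag: string, userId: string): Promise<boolean> };

    const newCheckoutFlow = async (userId: string) => `new flow for ${userId}`;
    const legacyCheckoutFlow = async (userId: string) => `legacy flow for ${userId}`;

    async function checkout(store: FlagStore, userId: string) {
      // Flipping "new-checkout" in the flag service rolls the feature
      // out (or back) for this user without another deploy.
      if (await store.isEnabled("new-checkout", userId)) {
        return newCheckoutFlow(userId);
      }
      return legacyCheckoutFlow(userId);
    }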

Configuration and settings management overlaps with feature flags, but note that often the value comes more from the ability to test and safely deploy new code into production environments (more of a release flag) than from enabling a feature for a specific user. It just so happens that the use cases and technical implementations overlap so frequently that it’s sometimes less work to use the same system.

