
I agree that actually persisting data reliably is table stakes for a database, and I would assume Ed takes for granted that this needs to work. Obviously there's lots of non-trivial stuff there, but this post seems to be more about database product direction than the nitty-gritty technical details of fsync, filesystems, etc.
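For anyone who does want the nitty-gritty, the core of durable persistence is roughly this. A minimal sketch assuming POSIX, not any particular engine's code; real databases also handle torn pages, fsync of the parent directory on file creation, and write barriers. durable_append() is a hypothetical helper name.

    /* Minimal sketch, assuming POSIX: append a record and make sure it
       actually reaches stable storage before reporting success. */
    #include <fcntl.h>
    #include <unistd.h>

    int durable_append(const char *path, const char *buf, size_t len) {
        int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0) return -1;

        size_t off = 0;
        while (off < len) {                 /* write() may return short counts */
            ssize_t n = write(fd, buf + off, len - off);
            if (n < 0) { close(fd); return -1; }
            off += (size_t)n;
        }

        /* The part people skip: without fsync() the data may only be
           sitting in the page cache when the power goes out. */
        if (fsync(fd) < 0) { close(fd); return -1; }

        return close(fd);
    }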


Also, most of the "action" in this space is for the "super-rich" customer: assume they have more than one machine, lots of RAM, fast I/O, fast networks, etc. And that means: it runs on AWS or some other "super-rich" environment.

There, you can $$$ your way out of data corruption. You can even lose all the data, if you have enough replicas and backups.

Not many are in the game SQLite is in.

This is the space I wish to work in more. I think you can not only do better than the high end, it's also more practical all around: if you commit to a DB that depends on running in the cloud (to mask that it's not that fast, to mask that it's not that reliable, and mostly to extract more $$$ from customers), then when you NEED a portion of that data locally, you're screwed, and then you use SQLite!


    There, you can $$$ your way out of data corruption. You can even lose all the data, if you have enough replicas and backups.
That's absolutely not true. All the money and all the backups and redundancy in the world won't save you if the data doesn't make it to persistent storage. Even in a totally closed AWS environment, the fallacies of distributed computing [1] still hold. Was there a network connectivity glitch? A latency spike? What happens when two connections attempt to write to the common data store at the same time?

You can't buy your way out of having to deal with the fundamental problem of, "How do I provide the illusion of a single unified system for a highly distributed swarm of microservices?"

[1]: https://en.wikipedia.org/wiki/Fallacies_of_distributed_compu...
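To make the "two connections write at the same time" point concrete, one classic answer is optimistic concurrency with a version column. A hedged sketch only: the accounts table, the SQLite usage, and update_balance() are illustrative assumptions, not anything from the post.

    /* Sketch of a compare-and-swap style update: the write only lands if
       nobody else bumped the version since we last read the row. */
    #include <sqlite3.h>

    /* Returns 1 if our write won, 0 if another writer got there first,
       -1 on error. */
    int update_balance(sqlite3 *db, int id, long new_balance,
                       long expected_version) {
        sqlite3_stmt *stmt;
        const char *sql =
            "UPDATE accounts SET balance = ?, version = version + 1 "
            "WHERE id = ? AND version = ?";
        if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK)
            return -1;

        sqlite3_bind_int64(stmt, 1, new_balance);
        sqlite3_bind_int(stmt, 2, id);
        sqlite3_bind_int64(stmt, 3, expected_version);

        int rc = sqlite3_step(stmt);
        sqlite3_finalize(stmt);
        if (rc != SQLITE_DONE) return -1;

        /* Zero rows changed means the version moved under us: the caller
           must re-read and retry instead of silently clobbering the
           concurrent write. */
        return sqlite3_changes(db) == 1;
    }

If the conflict check fails, you re-read and retry; that's the "illusion of a single unified system" being built one conflict check at a time, and no amount of $$$ removes the need for it.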


> That's absolutely not true.

In the absolute case, yes. But large companies have lost a lot of data, and thanks to $$$ they survived it.

In those scenarios, you pay a lot to stay out of trouble, but you can also pay your way out of it ("pay bigly for lawyers to follow the law and stay out of trouble, or to break the law and get away with it").

It's not ideal, and there's a point where it could break badly, but past a certain size, the fatal software failures that would destroy a small company are just "Thursday" for somebody big.

BTW: I don't like this; I prefer to make software solid. But everybody runs on C, JS, MongoDB, etc., and that shows you can survive a massive crash...



