An introduction to temporal logic and how it can be used to analyze concurrency (github.com/dicklesworthstone)
92 points by eigenvalue on Jan 24, 2024 | 26 comments


> First, let’s prove that a certain protocol ensures that no philosopher will starve. Starvation occurs when a philosopher is perpetually denied access to resources (in this case, forks), and thus can never eat. We'll use a protocol where a philosopher must pick up both forks at once or put down any fork they've picked up and wait before trying again.

This doesn't actually guarantee starvation freedom. Consider a scenario with three philosophers: Plato, Hume, and Arendt. Plato needs forks HA, Hume needs forks PA, etc.

1. Plato picks up forks HA and starts eating.

2. Hume picks up P, fails to pick up A. Hume immediately puts down P and enters the waiting phase.

3. Plato puts down HA.

4. Arendt picks up HP and starts eating.

5. Hume exits waiting phase.

6. Hume picks up A, fails to pick up P. Hume immediately puts down A and enters the waiting phase.

7. GOTO 1

The problem is that there's no guarantee of "strong fairness": that if the system keeps returning to a state where Hume can pick up both forks, he is guaranteed to eventually pick up both forks.
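
For concreteness, here is the distinction in the usual LTL notation (G = "globally", F = "eventually"), with enabled/taken as informal placeholder predicates:

  weak fairness:    FG enabled(a)  ->  GF taken(a)
  strong fairness:  GF enabled(a)  ->  GF taken(a)

The schedule above violates strong fairness for Hume's "grab both forks" action: the state in which both of Hume's forks are free recurs infinitely often, but the action of picking them up is never taken.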

Unrelated, but this formalism is more precisely known as "linear temporal logic". That's because there are many different temporal logics! Temporal logics are "modal" logics, meaning logics equipped with the "necessarily" and "possibly" modifiers. The "necessarily" maps easily onto temporal logic: it just means "true at all times". But what does "possibly" mean?

If I flip a coin, is it "possibly heads?"

In linear temporal logic, the answer is "no", because there are sequences of events where the coin is never heads. In computation tree logic, the answer is "yes", because the timeline has a branch where it lands heads.
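
In symbols, with F = "eventually" and E = "there exists a branch":

  LTL:  F heads     fails: the behavior that lands tails forever is a counterexample
  CTL:  EF heads    holds: from the initial state some branch reaches a heads state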

Lamport wrote an article arguing that LTL is more useful in the context of programming (https://dl.acm.org/doi/pdf/10.1145/567446.567463). He'd later go on to make TLA, which is a restricted variant of LTL.


Wow, thanks for catching that! I've attempted to fix the proof based on your comment:

https://github.com/Dicklesworthstone/introduction_to_tempora...


> A philosopher can only leave the waiting phase and attempt to pick up the forks again after each of their adjacent philosophers has either started eating or entered and left the waiting phase once.

Consider the case where Plato starts eating and both Hume and Arendt try to eat before he finishes. What happens to the system after that?

Exercise: can you express the Dining Philosophers system entirely symbolically in terms of FOL+GFX? From there you can run model checkers on it, which is great for building an intuition for problematic edge cases.
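
If you want a starting point for the exercise, here is a toy explicit-state exploration in Python (my own state encoding, not a real model checker; it only checks deadlock freedom, since starvation freedom additionally needs fairness assumptions and cycle detection, which tools like TLC or Spin handle):

  from collections import deque

  N = 3                         # philosophers 0..N-1; fork i sits to the left of philosopher i

  def forks(p):
      return (p, (p + 1) % N)   # the two forks philosopher p needs

  # State: a tuple of N flags; flag[p] is True iff philosopher p currently holds
  # both forks (is eating). The "grab both at once" protocol has no intermediate
  # one-fork states.

  def enabled_moves(state):
      held = {f for q in range(N) if state[q] for f in forks(q)}
      moves = []
      for p in range(N):
          if state[p]:
              moves.append(("release", p))      # an eating philosopher may finish
          elif not (set(forks(p)) & held):
              moves.append(("acquire", p))      # both forks free -> may grab them
      return moves

  def apply(state, move):
      kind, p = move
      s = list(state)
      s[p] = (kind == "acquire")
      return tuple(s)

  initial = (False,) * N
  seen, frontier = {initial}, deque([initial])
  while frontier:
      s = frontier.popleft()
      moves = enabled_moves(s)
      assert moves, f"deadlock in state {s}"    # safety check: someone can always move
      for m in moves:
          t = apply(s, m)
          if t not in seen:
              seen.add(t)
              frontier.append(t)

  print(f"explored {len(seen)} states, no deadlock")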


I recently started learning about this subject, which is pretty interesting. I had heard about it a while ago, but didn't know much about it besides the basics, and certainly didn't know how it could be applied to real problems since it always seemed pretty abstract. While I did learn about classical logic in college many years ago, I also recently gave myself a refresher on FOL and its axioms and inference rules so I could better understand how temporal logic departs from FOL and builds on it.

Anyway, in an attempt to help me learn the material better, I decided to write up the basic ideas along with several examples of how it could be applied. Perhaps it will be of interest to you as well, although there is certainly nothing new or groundbreaking in my exposition. I did try very hard to make things as simple and concrete as I could while not getting too "hand wavy" as they say.

Caveat: I'm certainly no expert in this field, so if you spot any mistakes, please let me know and I'll revise it (or just submit a PR on GitHub!).


I am also starting to learn, and I don't know if it can be applied to my real problems.

I chose the Lamport learning path. I love that guy and how he thinks and writes. I am reading the book (I bought it in paper, but there is a free version online), watching the video courses, and reading the PlusCal tutorial online.

I enjoy the path and the enlightenment that comes from describing a system with math. Human language has too many ambiguities, too little precision, and is verbose when expressing some ideas.


Which book? I haven't read anything of his. But whenever I see his stuff, I've been happy to see that it's not just academic, but understandable and useful for solving real problems.


The book is probably "Specifying Systems" by Leslie Lamport[1].

I recently started reading this book and was amazed at the way it progresses. It introduces propositional logic, predicate logic, and then temporal logic. I've enjoyed it so far and am looking to apply it in the real world.

[1] https://lamport.azurewebsites.net/tla/book.html


You are right, that one.


Strictly speaking, temporal logic does not so much 'depart' from FOL. It's a modality, and hence, like other modal logics, it can be viewed as a restriction of full FOL (where the domain of discourse can be seen as some notion of 'possible worlds') with some desirable properties, such as better computational tractability.
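
Concretely, the standard translation reads the modal operators as quantifiers over worlds reachable via an accessibility relation R (for temporal logic, "time points later than now"):

  []P at w   :=   forall w'. R(w, w') -> P(w')
  <>P at w   :=   exists w'. R(w, w') /\ P(w')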


For a more automated way to model and verify concurrency using temporal logic as one tool, see TLA+:

https://en.wikipedia.org/wiki/TLA%2B


That makes me think about all the programmers who think, intuitively, that circular buffers with atomic read and write pointers have an easy 'concurrency-safe' proof...
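
For reference, the usual informal argument is about something like this single-producer/single-consumer ring buffer (a Python sketch of my own; real implementations also care about memory ordering, which Python's GIL papers over):

  class RingBuffer:
      def __init__(self, capacity):
          # one slot is kept empty so that head == tail unambiguously means "empty"
          self.buf = [None] * (capacity + 1)
          self.head = 0   # next slot to read; advanced only by the consumer
          self.tail = 0   # next slot to write; advanced only by the producer

      def push(self, item):            # called from the single producer thread
          nxt = (self.tail + 1) % len(self.buf)
          if nxt == self.head:
              return False             # full
          self.buf[self.tail] = item   # write the data before publishing the index
          self.tail = nxt
          return True

      def pop(self):                   # called from the single consumer thread
          if self.head == self.tail:
              return None              # empty
          item = self.buf[self.head]
          self.head = (self.head + 1) % len(self.buf)
          return item

The informal 'proof' quietly assumes exactly one producer, exactly one consumer, atomic index updates, and that the data write becomes visible before the index update; a temporal-logic model forces every one of those assumptions into the open.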


It feels great knowing someone is thinking of me.


trap


Are there similar systems where the notion of time is not global?


That could have very interesting implications for high performance computing. Think clock drift, fast interchip communications and clusters.


>English: "It is necessary and sufficient for the ground to be wet that it is raining."

That’s not true.


When you're using FOL, there is an understanding that it's not trying to literally describe the world accurately; it's an idealized version to demonstrate logical thinking in a way that's easier and more intuitive than only using symbols like P or Q. So when there is a syllogism involving "All birds can fly," no one is seriously claiming that there aren't any birds that have damaged wings or that are too weak to fly.
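
Spelling out what the quoted sentence claims (my own rendering):

  raining <-> ground_wet
  raining  -> ground_wet    the plausible direction
  ground_wet -> raining     the direction that fails (sprinklers, melting snow)

The objection upthread is to the ground_wet -> raining direction; the point here is that these textbook examples aren't meant as claims about actual meteorology.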


Concurrent algorithms are notoriously hard. There's been tons of academic work to build models/language for concurrency, tools to check properties and so on.

Yet, it seems nothing has emerged that practitioners actually use. I know that there are occasional experiments (such as an industrial team specifying its protocols using TLA+/Alloy), but it's still extremely limited. I wonder why that is.


I think most development occurs on problems that can't be formally modeled anyway. Most developers work on things like, "can you add this feature to the e-commerce site? And can the pop-up be blue?" which isn't really model-able.

But that's not to say that formal methods are useless! We can still prove some interesting aspects of programs: for example, that every lock that gets acquired is later released. I think tools like Infer[0] could become common in the coming years.

[0]: https://fbinfer.com/
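
In temporal-logic terms that lock property is a classic liveness formula, roughly (with acquired/released as placeholder predicates, nothing Infer-specific):

  G (acquired -> F released)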


We need automated ways of converting Python and other popular languages to TLA+, along with ways of verifying that the translation is correct. Otherwise it seems like too much work for most purposes.


TLA+ is not intended for end-to-end verification of actually running code; that's still the domain of type systems and interactive proof checkers. You're "specifying" a toy model of how your system will behave and then verifying that.


Right. But even the automatic translation/conversion from the "real" Python code (or whatever it is) to the toy model of that code would make TLA+ a lot more accessible and useful to millions of developers instead of the couple hundred at most (just a guess, maybe it's more?) that currently make serious use of it.

I understand that TLA+ is not really intended for that and it's currently used more for super important, mission critical applications like flight computer code, but I would love to see that level of care and attention brought to more typical software, which even in relatively "boring" applications can involve a lot of concurrency nowadays.


A toy model of existing Python code is... a type system, which Python has. TLA+ is appropriate for cases where you don't even have that.


Converting from Python to TLA+ could be considered a form of denotational semantics. It's a ton of work to model the denotational semantics of even simple computer programs. I imagine an automatic translation of a nontrivial program would be infeasible to work with, but I'm curious whether there is active research or progress in this field.
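
To make the gap concrete, here is the kind of toy model you would write by hand rather than derive automatically (my own illustration, not a TLA+ translation): two threads each doing "tmp = x; x = tmp + 1", with every interleaving enumerated to expose the lost update.

  from itertools import permutations

  # Two threads, A and B, each doing the non-atomic increment "tmp = x; x = tmp + 1".
  steps = [("read", "A"), ("write", "A"), ("read", "B"), ("write", "B")]

  def run(schedule):
      x, tmp = 0, {}
      for op, tid in schedule:
          if op == "read":
              tmp[tid] = x
          else:
              x = tmp[tid] + 1
      return x

  outcomes = set()
  for order in permutations(range(len(steps))):
      schedule = [steps[i] for i in order]
      # keep only schedules that respect each thread's own program order
      if all(schedule.index(("read", t)) < schedule.index(("write", t)) for t in "AB"):
          outcomes.add(run(schedule))

  print(outcomes)   # {1, 2}: the read/read/write/write interleavings lose an update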


Even a small Python program has an enormous state space, beyond the reach of automatic reduction to a TLA+ model.

BTW, TLA+ is not too hard for basic usage. I'd argue it's much simpler than PlusCal because it doesn't have an additional semantic layer.


Modeling an entire system is almost always harder than modeling individual pieces of it. As such, I expect you almost always start by modeling individual parts and then try to combine multiple models together.

Stated differently, no model can really account for interactions that are invisible to it. So unless you are aiming at a giant model that encompasses everything, it seems unlikely that you can apply modeling tools built for this sort of thing directly.



