
Yes. CP-SAT crunches through it in no time, but of course larger grids would quickly make it take much longer.

See

https://gist.github.com/Macuyiko/86299dc120478fdff529cab386f...


I don't believe this works in general. If you have a set of tiles that connect to neither the horse nor an exit, they can still keep each other reachable in this formulation.

Yes, this is the major challenge with solving these with SAT. You can make your solver check and reject these horseless pockets (incrementally rejecting solutions with new clauses), which might be the easiest method, since you might need iteration for maximizing anyway (bare SAT doesn't do "maximize"). To correctly track the flood fill from the horse, you generally need a time-unrolled constraint like reachable(x,y,t) = (reachable(x,y,t-1) OR any neighbor's reachable(nx,ny,t-1)) ^ walkable(x,y), with reachable(x,y,0) = is_horse_cell, which adds on the order of N^2 time-indexed variables per cell for an N x N grid.

You can more precisely track flows and do maximization with ILP, but that often loses conflict-driven clause learning advantages.
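
For illustration, a minimal sketch of that incremental rejection loop with OR-Tools' CP-SAT Python API (the walls dict and the flood_fill_ok connectivity check are hypothetical placeholders for the actual puzzle model):

    from ortools.sat.python import cp_model

    def solve_with_pocket_rejection(model, walls, flood_fill_ok, max_iters=1000):
        """model: a CpModel; walls: dict (x, y) -> BoolVar (True = wall);
        flood_fill_ok: checks every open cell reaches the horse in a layout."""
        solver = cp_model.CpSolver()
        for _ in range(max_iters):
            status = solver.Solve(model)
            if status not in (cp_model.OPTIMAL, cp_model.FEASIBLE):
                return None  # no (further) layout satisfies the constraints
            layout = {c: solver.BooleanValue(v) for c, v in walls.items()}
            if flood_fill_ok(layout):
                return layout  # no horseless pockets left
            # Forbid exactly this wall assignment and re-solve.
            model.AddBoolOr(
                [v.Not() if layout[c] else v for c, v in walls.items()])
        return None
Blocking the full assignment is crude (you could instead forbid only the walls enclosing the detected pocket), but it matches the "easiest method" idea above and still works when the model carries a maximization objective.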


Good point. I don't think the puzzles do this, and if they did, I would run a pre-solve pass over the puzzle first to flood-fill such horseless pockets with water, no?

It's not quite that easy. For the simplest example, look at https://enclose.horse/play/dlctud, where the naive solution will waste two walls to fence in the large area. Obviously, you can construct puzzles that have lots of these "bait" areas.

Like the other comment suggested, running a loop where you keep adding constraints that eliminate invalid solutions will probably work for any puzzle that a human would want to solve.


Oh I see what you mean now, indeed:

    Score: 7
    ~~~~~~
    ~····~
    ~·~~·~
    .#..#.
    ......
    ..#...
    .#H#..
    ..#...
However, I think that you do not need 'time'-based variables in the form of

    reachable(x,y,t) = reachable(nx,ny,t-1)
Enforcing connectivity through a single-commodity flow is IMO a better way to encode the flood fill (it also introduces additional variables, but is typically easier to solve with CP heuristics; see the sketch after the example below):

    Score: 2
    ~~~~~~
    ~....~
    ~.~~.~
    ......
    ......
    ..##..
    .#H·#.
    ..##..
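For completeness, a rough sketch of how such a single-commodity flow could be encoded with CP-SAT (the open_cell dict and the helper's signature are made up for illustration; the model is assumed to force the horse cell open elsewhere):

    from ortools.sat.python import cp_model

    def add_connectivity_flow(model, open_cell, horse):
        """open_cell: dict (x, y) -> BoolVar, True if the cell is walkable.
        Forces every open cell to be reachable from the horse cell."""
        n = len(open_cell)

        def neighbors(x, y):
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in open_cell:
                    yield nb

        # One integer flow variable per directed edge between adjacent cells.
        flow = {}
        for (x, y) in open_cell:
            for (nx, ny) in neighbors(x, y):
                f = model.NewIntVar(0, n, f"flow_{x}_{y}_{nx}_{ny}")
                # Flow may only cross edges whose endpoints are both open.
                model.Add(f == 0).OnlyEnforceIf(open_cell[(x, y)].Not())
                model.Add(f == 0).OnlyEnforceIf(open_cell[(nx, ny)].Not())
                flow[(x, y, nx, ny)] = f

        total_open = sum(open_cell.values())
        for (x, y) in open_cell:
            inflow = sum(flow[(nx, ny, x, y)] for (nx, ny) in neighbors(x, y))
            outflow = sum(flow[(x, y, nx, ny)] for (nx, ny) in neighbors(x, y))
            if (x, y) == horse:
                # The horse is the source: one unit per other open cell.
                model.Add(outflow - inflow == total_open - 1)
            else:
                # Open cells consume exactly one unit, closed cells nothing.
                model.Add(inflow - outflow == 1).OnlyEnforceIf(open_cell[(x, y)])
                model.Add(inflow - outflow == 0).OnlyEnforceIf(
                    open_cell[(x, y)].Not())
The horse acts as the single source, every other open cell drains one unit, and flow can only pass between open cells, so any walkable pocket cut off from the horse makes the model infeasible.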
Cool puzzle!

Late, but reading all of the replies, and speaking from my own observation using Claude, Codex, as well as (non-CLI) Gemini, Kimi, Qwen, and Deepseek...

It's fun how we are so quick to assign meaning to the way these models act. This is of course due to training, RLHF, available tool calls, system prompt (all mostly invisible) and the way we prompt them.

I've been wondering about a new kind of benchmark: how would one extract these more intangible tendencies from models, rather than measuring them in well-controlled "how good at coding is it" style environments? This is the main reason I pay less and less attention to benchmark scores.

For what it's worth: I still converse best with Claude when doing code. Its reasoning sounds like me, and it finds a good middle ground between conservative and crazy, being explorative and daring (even though it too often exclaims "I see the issue now!"). If Anthropic lifted the usage limits I would use it as my primary. The CLI tool is also better. E.g. Codex with 5.1 gets stuck in PowerShell scripts, whilst Claude realizes it can use Python to do the heavy lifting, but I think that might be largely due to me being mainly on Windows (still, Claude does work best, quickly realizing what environment it lives in rather than trying Unix commands or PowerShell invocations that don't work because my PowerShell is outdated).

Qwen is great in an IDE for quick auto-complete tasks, especially given that you can run it locally, but even the VS Code Copilot is good enough for that. Kimi is promising for long-running agentic tasks, but that is something I've barely explored and just started playing with. Gemini is fantastic as a research assistant. Gemini 3 Pro in particular uses clear, to-the-point jargon without fear of the user being stupid, which the other commercial models are too often hesitant to do.

Again, it would be fun to have some unbiased method to uncover some of those underlying personas.


We have trained this model on Windows (our first model to do so). Give it a try!


On the homepage it says "Sinmple" above "Export SQL", fyi


A coin measurer is still my go-to explanation. Especially with most models having an inset for the coin to rest on / fit in. The hole itself is then just to quickly/easily get the coin out again with your finger.

With so many different coin sizes and types in the empire, I think this makes most sense.

Wikipedia also mentions this:

> Several dodecahedra were found in coin hoards, suggesting either that their owners considered them valuable objects, or that their use was connected with coins — as, for example, for easily checking coins fit a certain diameter and were not clipped.


If you look at ancient coins, you'll see that they didn't have identical sizes. They were minted from a standard weight of metal, but the manual minting tools of the time couldn't guarantee precise thickness and shape like we have today with machine-made coins. So a dodecahedron with precisely cut circular holes is not a good way to check your coins.


Also if they did have identical sizes and there was a need to measure those sizes, we would expect a lot of much simpler devices to measure them - say a flat piece of metal with differently sized holes. Fancy versions like the dodecahedron might exist, but they would be outnumbered by the utilitarian devices.


I've noticed that puzzles which CP-SAT's presolver can solve on its own, so that the SAT search does not even need to be invoked, basically adhere to this (no backtracking, known rules), e.g.:

    #Variables: 121 (91 primary variables)
      - 121 Booleans in [0,1]
    #kLinear1: 200 (#enforced: 200)
    #kLinear2: 1
    #kLinear3: 2
    #kLinearN: 30 (#terms: 355)

    Presolve summary:
      - 1 affine relations were detected.
      - rule 'affine: new relation' was applied 1 time.
      - rule 'at_most_one: empty or all false' was applied 148 times.
      - rule 'at_most_one: removed literals' was applied 148 times.
      - rule 'at_most_one: satisfied' was applied 36 times.
      - rule 'deductions: 200 stored' was applied 1 time.
      - rule 'exactly_one: removed literals' was applied 2 times.
      - rule 'exactly_one: satisfied' was applied 31 times.
      - rule 'linear: empty' was applied 1 time.
      - rule 'linear: fixed or dup variables' was applied 12 times.
      - rule 'linear: positive equal one' was applied 31 times.
      - rule 'linear: reduced variable domains' was applied 1 time.
      - rule 'linear: remapped using affine relations' was applied 4 times.
      - rule 'presolve: 120 unused variables removed.' was applied 1 time.
      - rule 'presolve: iteration' was applied 2 times.

    Presolved satisfaction model '': (model_fingerprint: 0xa5b85c5e198ed849)
    #Variables: 0 (0 primary variables)

    The solution hint is complete and is feasible.

    #1       0.00s main
      a    a    a    a    a    a    a    a    a    a   *A* 
      a    a    a    b    b    b    b   *B*   a    a    a  
      a    a   *C*   b    d    d    d    b    b    a    a  
      a    c    c    d    d   *E*   d    d    b    b    a  
      a    c    d   *D*   d    e    d    d    d    b    a  
      a    f    d    d    d    e    e    e    d   *G*   a  
      a   *F*   d    d    d    d    d    d    d    g    a  
      a    f    f    d    d    d    d    d   *H*   g    a  
     *I*   i    f    f    d    d    d    h    h    a    a  
      i    i    i    f   *J*   j    j    j    a    a    a  
      i    i    i    i    i    k   *K*   j    a    a    a
Together with validating that there is only one solution, you could probably make the search for good boards more guided than random creation.
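
As a rough sketch of that uniqueness check with CP-SAT (this assumes the pure satisfaction model, since enumerate_all_solutions does not apply to models with an objective):

    from ortools.sat.python import cp_model

    class StopAfterSecondSolution(cp_model.CpSolverSolutionCallback):
        """Counts solutions and stops the search once a second one is found."""
        def __init__(self):
            super().__init__()
            self.count = 0

        def on_solution_callback(self):
            self.count += 1
            if self.count >= 2:
                self.StopSearch()

    def has_unique_solution(model):
        """model should be the satisfaction model (no objective)."""
        solver = cp_model.CpSolver()
        solver.parameters.enumerate_all_solutions = True
        callback = StopAfterSecondSolution()
        solver.Solve(model, callback)
        return callback.count == 1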


All of the above is true, but between solving quicker, and admitting we gave context:

I do agree with you that an LLM should not always start from scratch.

In a way it is like an animal which we have given the ultimate human instinct.

What has nature given us? Homo Erectus is 2 million years ago.

A weird world we live in.

What is context.


Weirdly it has gotten so far that I have embedded this into my workflow and will often prompt:

> "Good work so far, now I want to take it to another step (somewhat related but feeling it too hard): <short description>. Do you think we can do it in this conversation or is it better to start fresh? If so, prepare an initial prompt for your next fresh instantiation."

Sometimes the model says that it might be better to start fresh, and prepares a good summary prompt (including a final 'see you later'), whereas in other cases it assures me it can continue.

I have a lot of notebooks with "initial prompts to explore forward". But given the sycophancy going on as well as one-step RL (sigh) post-training [1], it indeed seems AI platforms would like to keep the conversation going.

[1] RL in post-training has little to do with real RL and just uses one-shot preference mechanisms with an RL-inspired training loop. There is very little work in terms of long-term preferences slash conversations, as that would increase requirements exponentially.


Is there any reason to think that LLMs have the introspection ability to be able to answer your question effectively? I just default to having them provide a summary that I can use to start the next conversation, because I’m unclear on how an LLM would know it’s losing the plot due to long context window.


A bit of a rant, but this is the kind of fact-checking I wish the media and all our EU "trusted sources" would have jumped on, instead of going for the most trivial and idiotic cases only a toddler (or a journalist) would get stumped by. (Example: recent posts on TikTok 'claiming to be images from Pakistan but taken from Battlefield 3...' again. Who is impressed or even surprised by this kind of investigation?)

Much more interesting, but also with more effort required, so of course it never happens.

It would have a more beneficial societal effect, because it is this kind of article (neutrally written, a deep investigation) that would truly make people capable of self-discovering "maybe I should question things a bit more".


That, and there is a big incentive to just sell content. Sensational, eye-catching, controversial content will grab more readers.



From an age perspective (but the crowd here will not like that): before, I trusted myself to always be able to find it back, so I didn't need to save it. Now I can't anymore, but I don't care so much.

