As an avid Prolog fan, I would have to agree with a lot of Mr. Wayne's comments! There are some things about the language that are now part of the ISO standard that are a bit unergonomic.
On the other hand, you don't have to write Prolog like that! The only shame is that there are 10x more examples (at least) of bad Prolog on the internet than good Prolog.
If you want to see some really beautiful stuff, check out Power of Prolog[1] (which Mr. Wayne courteously links to in his article!)
If you are really wondering "why Prolog?", the thing that makes it special among all languages is metainterpretation. No, seriously, I'd strongly recommend you check it out [2]
This is all that it takes to write a metainterpreter in Prolog:
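(A minimal sketch of the classic "vanilla" meta-interpreter; the predicate name mi/1 is just a convention, and clause/2 has to be able to see the interpreted predicates, which on some systems means declaring them dynamic.)

    % mi(Goal) proves Goal by walking the program's own clauses.
    mi(true).
    mi((A, B)) :-
        mi(A),
        mi(B).
    mi(Goal) :-
        Goal \= true,
        Goal \= (_, _),
        clause(Goal, Body),   % fetch a clause Head :- Body matching Goal
        mi(Body).

Built-ins like is/2 need one extra clause that simply calls them, but the three clauses above are the whole trick.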
I also have a strange obsession with Prolog and Markus Triska's article on meta-interpreters heavily inspired me to write a Prolog-based agent framework with a meta-interpreter at its core [0].
I have to admit that writing Prolog sometimes makes me want to bash my head against the wall, but sometimes the resulting code has a particular kind of beauty that's hard to explain. Anyways, Opus 4.5 is really good at Prolog, so my head feels much better now :-)
Anything you'd like to share? I did some research within the realm of classic robotics-style planning ([1]), and the results were impressive with local LLMs already a year ago, to the point that obtaining textual descriptions for complex enough problems became the bottleneck. That suggests prompting is of limited use when you could already describe the problem concisely and directly in Prolog, given Prolog's NLP roots and its near one-to-one mapping of simple English sentences. Hence that report isn't updated for GLM 4.7, Claude whatever, or other "frontier" models yet.
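(To make the "one-to-one mapping" point concrete, here is a rough sketch of how simple English planning statements tend to transliterate into Prolog; the predicates on/2, clear/1 and can_move/2 are made up for illustration and aren't taken from the linked report.)

    % "Block A is on the table."              ->
    on(block_a, table).
    % "Block B is on block A."                ->
    on(block_b, block_a).
    % "A block is clear if nothing is on it." ->
    clear(Block) :- \+ on(_, Block).
    % "You may move X onto Y if both are clear and X is not Y." ->
    can_move(X, Y) :- clear(X), clear(Y), X \= Y.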
Opus 4.5 helped me implement a basic coding agent in a DSL built on top of Prolog: https://deepclause.substack.com/p/implementing-a-vibed-llm-c.... It worked surprisingly well. With a bit of context it was able to (almost) one-shot about 500 lines of code. With older models, I felt that they "never really got it".
>> I have to admit that writing Prolog sometimes makes me want to bash my head against the wall
I think much of the frustration with older tech like this comes from the fact that these things were mostly written (and rewritten to perfection) on paper first, and only the near-final program was typed into a computer with a keyboard.
Modern ways of carving out a program by 'Successive Approximations' with a keyboard and monitor, until you get something to work, are mostly a recent phenomenon. Most of us are used to working like this, which quite honestly is mostly trial and error. The frustration is understandable because you are basically throwing darts, most of the time in the dark.
I knew a programmer from the 1980s (he built medical electronics equipment) who would tell me how even writing C worked back then. It was mostly writing a lot, on paper. You had to prove things on paper first.
...these things were mostly written (and rewritten to perfection) on paper first, and only the near-final program was typed into a computer with a keyboard.
Not if you were working in a high-level language with an interpreter, REPL, etc. where you could write small units of code that were easily testable and then integrated into the larger whole.
The following is from David H.D. Warren's manual for DEC-10 Prolog, from 1979 [0]. It describes how Prolog development is done interactively, by loading code dynamically into an interpreter and using the REPL -- note that the only mention of paper is if the developer wants to print out a log of what they did during their session:
Interactive Environment

Performance is all very well. What the programmer really needs is a good interactive environment for developing his programs. To address this need, DEC-10 Prolog provides an interpreter in addition to the compiler.

The interpreter allows a program to be read in quickly, and to be modified on-line, by adding and deleting single clauses, or by updating whole procedures. Goals to be executed can be entered directly from the terminal. An execution can be traced, interrupted, or suspended while other actions are performed. At any time, the state of the system can be saved, and resumed later if required.

The system maintains, on a disk file, a complete log of all interactions with the user's terminal. After a session, the user can examine this file, and print it out on hard copy if required.
>> I think much of the frustration with older tech like this comes from the fact that these things were mostly written (and rewritten to perfection) on paper first, and only the near-final program was typed into a computer with a keyboard.
I very much agree with this, especially since Prolog's execution model doesn't seem to go that well with the "successive approximations" method.
Before the personal computer revolution, compute time, and even development/test time, on large computers was rationed.
One can imagine how development would work in an ecosystem like that. You have to understand both the problem and your solution, and you need to be sure it will work before you start typing it out at a terminal.
This is the classic Donald Knuth workflow. He stays away, disconnected from a computer for long periods of time, focused on the problems and solutions, working them out with pen and paper, until he has arrived at solutions that just work, correctly, and well enough to be explained in a textbook.
When you take this away, you also take away the need to put in the hard work required to make things work correctly. Take a look at how many Java devs out there try to use the wrong data structure for the problem, and then try to shoehorn their solution to roughly fit it. Eventually the solution does work for some acceptable inputs, and the remainder is left to be discovered by an eventual production bug. Stack Overflow is full of such questions.
Languages like Prolog just don't offer that sort of freedom. You have to be serious about what you are doing, in the sense of truly understanding both the problem and the solution well enough to make them work.
Languages like Prolog just don't offer that sort of freedom.
Yes, they do -- that's why people have enjoyed using such languages.
It might help to think of them as very-high-level scripting languages with more rigorous semantics (e.g. homoiconicity) and some nifty built-ins, like Prolog's relational database. (Not to mention REPLs, tooling, etc.)
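(A tiny, hedged illustration of that relational-database flavor: facts behave like rows, rules like views, and you query them from the REPL. The parent/2 and grandparent/2 predicates are just the textbook example, not anything from the linked essays.)

    % Facts: rows of a parent/2 relation.
    parent(tom, bob).
    parent(bob, ann).

    % Rule: essentially a join over that relation.
    grandparent(G, C) :- parent(G, P), parent(P, C).

    % At the REPL:
    % ?- grandparent(tom, Who).
    % Who = ann.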
Read, for example, what Paul Graham wrote about using Lisp for Viaweb (which became Yahoo Store) [0] and understand that much of what he says applies to languages like Prolog and Smalltalk too.
I'm assuming they were written on paper because they were commonly punched into paper at some stage after that. We tend to be more careful with non-erasable media.
But I wonder if that characterization is actually flattering for Prolog? I can't think of any situation, skill, technology, paradigm, or production process for which "doing it right the first time" beats iterative refinement.
Like Lisp and Smalltalk, Prolog was used primarily in the 1980s, so it was run on Unix workstations and also, to some extent, on PCs. (There were even efforts to create hardware designed to run Prolog a la Lisp machines.)
And, like Lisp and Smalltalk, Prolog can be very nice for iterative development/rapid prototyping (where the prototypes might be good enough to put into production).
The people who dealt with Prolog on punchcards were the academics who created and/or refined it in its early days. [0]
I mean there are nearly two full decades between the appearance of Prolog (1972) and the PC revolution of the late 1980s and early 1990s.
>>The people who dealt with Prolog on punchcards were the academics who created and/or refined it in its early days. [0]
That's like a decade of work. That's hardly early 'days'.
Also, the programming culture in the PC days and before that is totally different. Heck, even the editors from that era (e.g. vi) are designed for an entirely different workflow. That is, lots of planning and correctness before you decide to input the code into the computer.
By 1979 at the latest -- probably closer to 1975 -- the primary Prolog implementation of the day (Warren's DEC-10 version) had an interpreter, where you could load files of code in and modify the code and you had a REPL with the ability to do all kinds of things.
I posted an excerpt of the manual, with a link to a PDF of it, in a reply to another comment [0]
(And, since even the earliest versions of Prolog were interpreted, they may've had features like this too).
And, as far as editors are concerned, versions of vi (and, of course, emacs) are still used to this day by people who don't necessarily do lots of planning and correctness work before deciding to input the code into the computer.
And one other thing: just because early Prolog interpreters were implemented on punchcards doesn't mean that Prolog programs run by those interpreters needed to be. It's quite possible that basically nobody ever wrote Prolog programs using punchcards, given that Prolog has the ability to read in files of code and data.
>>"doing it right the first time" beats iterative refinement.
It's not iterative refinement that is bad. It's just that when you use a keyboard as a thinking device, there is a tendency to assume the first trivially working solution is completely correct.
This doesn't happen with pen and paper, as it slows you down. You get the mental space to think through a lot of things, exceptions, etc., until even with iterative refinement you are likely to build something correct, compared to just committing the first typed function to the repo.
Ok what I would really love is something like this but for the damn terminal. No, I don't store credentials in plaintext, but when they get pulled into memory after being decrypted you really gotta watch $TERMINAL_AGENT or it WILL read your creds eventually and it's ever so much fun explaining why you need to rotate a key.
Sure, go ahead and roast me, but please include the foolproof method you use to make sure that never happens while still allowing you to use credentials for developing applications in the normal way.
A really simple one is traversing a linked list (or any naturally recursive data structure, such as a dictionary or tree). It is very natural to traverse a recursive data structure recursively.
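(Sticking with the thread's main language for the sketch: a Prolog list is literally a linked list, and the natural traversal is a two-clause recursion. This is just a generic illustration, not code from the tool under discussion.)

    % list_length(List, N): N is the number of elements in List.
    list_length([], 0).
    list_length([_|Tail], N) :-
        list_length(Tail, N0),
        N is N0 + 1.

    % ?- list_length([a, b, c], N).
    % N = 3.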
Really impressive! For anyone who's not a Pythonista: trying to implement TCO in Python is something akin to solving the Collatz conjecture. It's often just an exercise in madness, so seeing an elegant solution to this is really cool. I myself was a victim of this madness and was unable to do it, so it's very cool to see someone nail it! This will be a go-to tool for sure.
Ok hear me out. It's not particularly obvious to me why plants being easy to replicate suddenly destroyed the rare plant market. Surely they can't be easier to replicate than software. That hasn't seemed to put much of a dent in the software market.
Basic economics. If value is based on scarcity of a resource, and you lift the bottleneck that makes the resource scarce, the value is reduced.
In the case of software, the resource is time (you could build/host/operate that software yourself, but it takes a heck of lot more time than you're willing to spend so you trade money for the product instead), and you can't reduce the scarcity of time.
This happens in software too. When open source software like GCC came out, it suddenly became much cheaper to write C code compared to when you needed a Borland Turbo C license for $150 (1990 dollars).
So I don't disagree with any of the criticisms of MCPs but no one here has mentioned why they are useful, and I'm not sure that everyone is aware that MCP is actually just a wrapper over existing cli/API:
1. Claude Code is aware of what MCPs it has access to at all times.
2. Adding an MCP is like adding to the agent's actuators/vocabulary/tools, because unlike cli tools or APIs you don't have to constantly remind it what MCPs it has available, and "hey you have access to X" and "hey make an MCP for X" take the same level of effort on the part of the user.
3. This effect is _significantly_ stronger than putting info about available API/cli into CLAUDE.md.
4. You can almost trivially create an MCP that does X by asking the agent to create an MCP that does X. This saves you from having to constantly remind an agent it can do X.
NOTE: I cannot stress enough that this property of MCPs is COMPLETELY ORTHOGONAL to the nutty way they are implemented, and I am IN NO WAY defending the implementation. But currently we are talking past the primary value prop.
I would personally prefer some other method but having a way to make agents extensible is extremely useful.
>This effect is _significantly_ stronger than putting info about available API/cli into CLAUDE.md.
No it's not.
Honestly this conversation is extremely weird to me because somehow people are gravely misunderstanding what MCP even purports to do, let alone what it actually CAN do in the most ideal situation.
It is a protocol, and while the merits of that protocol are certainly under active discussion, that's irrelevant here because you keep attributing qualities to the protocol that it cannot deliver on.
Just some facts to help steer this conversation correctly, and maybe help your understanding of what is actually going on:
* All LLMs/major models have function & tool calling built in.
* Your LLMs/models do not have any knowledge of MCP, nor have they been trained on it.
* MCP exists, or at least the claim is, to help standardize the LIFECYCLE of the tool call.
* MCP does not augment or enhance the ability of LLMs in any form.
* MCP does not allow you to extend agents. That's an implicit feature.
* If you have access to "X" (using your example), you don't need anything that obeys the MCP standard.
MCP at best is for developers and tool developers. Your model does not need an MCP server or client or anything else MCP related to do what it has already been trained to do.
>I would personally prefer some other method but having a way to make agents extensible is extremely useful.
This response is spot on. People seem very confused about what MCP actually is. It's just a standard way to provide an LLM with tools, and even how that happens is up to the agent implementation. There are some other less common features, but the core is just about providing tool definitions and handling the tool_call. Useful, but basically just OpenAPI for LLMs.
I think people are really underappreciating the "OpenAPI for LLM" part. The hype forced a lot of different SaaS products and vendors of all stripes to actually follow a standard and think somewhat critically about the usability of what they expose.
2. The Claude Code system prompt almost certainly gives directions about how to deal with MCP tools, and may also include the list of tools
3. Instruction adherence is higher when the instructions are placed in the system prompt
If you put these three facts together then it’s quite likely that Claude Code usage of a particular tool (in the generic sense) is higher as an MCP server than as a CLI command.
But why let this be a limitation? Make an MCP server that calls your bash commands. Claude Code will happily vibe code this for you, if you don’t switch to a coding tool that gives better direct control of your system prompt.
1.) Awareness doesn’t mean they will use them, and in practice they often don’t.
2.) “ unlike cli tools or APIs you don't have to constantly remind it what MCPs it has available”
- this doesn’t match my experience. In fact, bash commands are substantially more discoverable.
3.) Again, this doesn’t match my experience and the major providers recommend including available MCP tools in system prompts/CLAUDE.md/whatever.
4.) Can’t speak to this as it’s not part of my workflow for the previous reasons.
The only useful MCP for me is Playwright for front end work.
Chrome Devtools is similarly an extremely high value MCP for me.
I would agree that if you don't find they add discoverability then MCPs would have no value for you and be worse than cli tools. It sounds like we have had very opposite experiences here.
Interesting. Perhaps it comes down to which platforms we're working on. I don't want to be outright dismissive of it. My primary platform is Claude Code. Are you working with another driver e.g. OpenAI Codex?
No, and it's ok to be dismissive of it. I'm just giving an experience report.
Actually, the primary value for me is emacs integration with claude.
I have an MCP with one function (as_user_eval) which allows it to execute arbitrary s-expressions against the portal package for emacs.
I use this often with custom slash commands, i.e., `/read-emacs`, which instructs claude to use that MCP to pull the context from multiple pseudoregions into the context window (along with filenames and line numbers). This saves me from having to copy-paste all of that.
I understand what the others are saying, but using the portal to talk to a running emacs client isn't something I find particularly "discoverable" from the cli.
I can say things like, "show me in emacs the test that failed", or, "highlight the lines you are talking about", or, "interactively remap my keybindings to do X", or, "take me to the info page that covers this topic".
This, paired with chrome devtools and playwright, had been a real productivity booster for me, and is quite fun.
I use voice dictation for this so it feels like I'm in Star Trek :)
I'm sure in 10 minutes we will be onto the next version of MCP... skills, ACP, Tools, whatever.
But this extensibility/discoverability has been nice for me. I make no stronger claims than that about "what it is for" or "what it should be", as I am a simple hacker with simple needs.
How so? The protocol doesn't obfuscate things. Your agent can easily expose the entire MCP conversation, but generally just exposes the call and response. This is no different than any other method of providing a tool for the LLM to call.
You have some weird bone to pick with MCP which is making you irrationally unreceptive to any good-faith attempt to help you understand.
If you want to expose tools to the LLM you have to provide a tool definition to the LLM for each tool and you have to map the LLM's tool calls into the agent executing the tools and returning the results. That's universal for all agent-side tools.
The whole purpose behind MCP was to provide a low-impedance standard where some set of tools could be plugged into an existing agent with no pre-knowledge and all the needed metadata was provided to facilitate linking the tools to the agent. The initial version was clearly focused on local agents running local tools over stdio. The idea of remote tools was clearly an afterthought if you read the specification.
If you want your agent to speak OpenAPI, you are *more* than welcome to make it do so. It'll probably be fine if it's a well-specified API. The context issues won't go away, I guarantee you. OpenAPI specs for APIs with lots of endpoints will result in large tool definitions for the LLM, just like they do with MCP.
A core issue I see with MCP, as someone using it every day, is that most MCP Server developers clearly are missing the point and simply using MCP as a thin translation layer over some existing APIs. The biggest value with MCP is when you realize that an MCP Server should be a *curated* experience for the LLM to interact with and the output should be purposefully designed for the LLM, not just a raw data dump from an API endpoint. Sure, some calls are more like raw data dumps and should have minimal curation, but many other MCP tools should be more like what the OP of this post is doing. The OP is defining a local multi-step workflow where steps feed into other steps and *don't* need LLM mediation. That should be a *single* MCP Server Tool. They could wrap the local bash scripts up into a simple single tool stdio MCP Server and now that tool is easily portable across any agent that speaks MCP, even if the agent doesn't have the ability to directly run local CLI commands.
Anyway, maybe take a breath and be objective about what MCP is and is not meant to do and disconnect what MCP is from how people are *currently* using (and frequently misusing) MCP.
There are tons of articles detailing the problems if you are genuinely interested.
Notice you couldn't technically point to anything to support your statements, but instead had to resort to religious zealotry and apologetics -- which has no place on this forum.
>be objective about what MCP is and is not meant to do and disconnect what MCP is from how people are currently using (and frequently misusing) MCP.
Please re-read what you wrote.
You wrote all of that just to counter your own stated position, because I think at some fundamental level you realize how nonsensical it is.
To get this out of the way, you are an unpleasant person, but that doesn't mean you should be ignored, so I'll reply.
> you couldn't technically point to anything to support your statements, but instead had to resort to religious zealotry and apologetics
> You wrote all of that just to counter your own stated position, because I think at some fundamental level you realize how nonsensical it is.
You need to be specific and not make blanket assertions like that if you want an honest dialog.
I take particular offense at you claiming "religious zealotry". Nothing in my post is even remotely definable as such. Yes, I use MCP, I also recognize when it's the right tool and when it's not. I don't think MCP is the solution to all problems. I also willingly acknowledge that other tools can fill the same gap. If anyone is being a religious zealot here, it's you and your crusade against MCP.
With your lack of specificity, it's hard to formulate a proper response to whatever you see as lacking in references. I would point out that I haven't seen one link in all of your railing against MCP until this very response.
So, let's look at your link.
- I agree that websockets would have been a better choice than SSE+HTTP and StreamableHTTP. Auth for WS is a little bit of a pain from the browser, but it's feasible with some common conventions.
- I agree with their characterization of "web seems to be a thing we probably should support" (pretty sure I called that out in my post already...)
- Their "kind of breaks the Unix/Linux piping paradigm" is laughable though. MCP is hardly the first or only thing to wire a 'server' to an application via stdin/stdout chaining, and it's *very* much in the spirit of UNIX (IMHO, as someone working with UNIX systems for the last 30+ years).
- Again, I fully agree that the current HTTP transports are... lacking and could use a better solution.
- Rant about python aside (I agree BTW), well, they are just ranting actually. Yes, the documentation could use some help. Yes, there wasn't an official Go SDK until recently.
- Given this was written a while ago, it's not worth addressing the call-outs on SSE+HTTP beyond saying: 100%, it was a bad design that appears to have been tacked on at the last minute.
- The observations about StreamableHTTP are mostly valid. They get a few points wrong, but the essence is right.
- Their security concerns are the same ones you'd have with any API, so I'm not sure how this is unique to MCP.
- Auth is a bit of a sore subject for me as well. MCP doesn't have an ergonomic workflow for multi-tenant setups and in-band OAuth credential management. Again though, I don't disagree with the essence of their point.
After meandering they land on "just use stdio and websockets". So the whole rant is around the protocol transport. I agree the transport protocols need some TLC, but you *can* work with them now, and new transports are something that's being worked on, even a WS transport.
None of that post talks about the actual protocol behind MCP, how it's succeeding/failing at filling the needs it's meant to address, or any real viable alternative for a standard for linking tools to agents.
If you feel like calling out specific points you feel I should back up with references, I can likely provide them. As with any post, much of the information is synthesized from a lot of places, so things like the assertion that remote servers were clearly an afterthought are purely from my reading of the spec and the remote transports code.
>To get this out of the way, you are an unpleasant person
You are clearly very emotional about this, for whatever reason. But again it has no place on this forum.
>I would point out that I haven't seen one link in all of your railing against MCP until this very response.
Because everything I've stated is a fundamental fact about the technology. If you need sources for that, you are missing elementary concepts.
>After meandering
They literally point out several issues with the protocol that hamper observability.
You're being very verbose but not saying much and ignoring when things are directly answered for you. That's being generous.
Your position is like someone claiming lemongrass supplements cure COVID. Everyone is rightly pointing out that it's a placebo at best. Then your position is "well, point out all the ways it DOESN'T help, everyone is doing it!"
Which is a really not-smart position to hold, to say the least.
The absurdity of this response is astounding. As it's clear you have no actual interest in an honest discussion, I'll just drop off here and leave you to your echo chamber.
I am actively involved in multiple threads across this post. Claiming I have no interest in "honest discussion" when it's clear people are engaging constructively elsewhere is a delusional take.
You came in, made things quite personal for no reason, and have said nothing that anyone can work with.
Also not disagreeing with your argument. Just want to point out that you can achieve the same thing by putting minimal info about your CLI tools in your global or project-specific CLAUDE.md.
The only downside here is that it's more work than `claude mcp add x -- npx x@latest`. But you get composability in return, as well as the intermediate tool outputs not having to pass through the model's context.
> 3. This effect is _significantly_ stronger than putting info about available API/cli into CLAUDE.md.
What? Why?
> unlike cli tools or APIs you don't have to constantly remind it what MCPs it has available
I think I'm missing something, because I thought this is what MCP does, literally. It just injects the instructions about what tools it has and how to use them into the context window. With MCP it just does it for you rather than you having to add a bit to your CLAUDE.md. What am I misunderstanding?
I think many here have no idea what exactly MCP is, and think it's some sort of magic sauce that transcends how LLMs usually work.
“But Brawndo has what plants crave! It's got electrolytes!” “Okay... what are electrolytes? Do you know?” “Yeah. It's what they use to make Brawndo.” “But why do they use them in Brawndo? What do they do?” “They're part of what plants crave.” “But why do plants crave them?” “Because plants crave Brawndo, and Brawndo has electrolytes.”
― Idiocracy Movie
Taking the article's analogy of "collaboration while driving": the sport of F1 is insanely collaborative. For example, the drivers literally do have someone in their ear by radio, acting as their coach and spotter for the entire race. I've never heard of the equivalent in software. Does anyone know of anything like this?
We literally followed his "bad" analogy for driving by doing software teaming, aka mob/pair programming. You switch drivers every 10 minutes or so. It can be great. Everyone learns a lot about the feature and the codebase fast. But it can feel slow. And it tires some more than others. Most people liked it.
Except they have one person in their ear. Not 4-5 people, not people giving opposite opinions, not drive-by takes.
By the time a race engineer is communicating with a driver all of that has been shaken out. Specific concrete options are given to the driver, and usually only one.
Comments here are a bit rough. Marcin does plenty of work for OSE and the community is still active but most of it is in-person/offline -- hard for a lot of us to believe, I know, but there is human activity that takes place outside the internet.
Prolog: "Mistakes were made"
Writing your own Prolog-like language in Prolog is nearly as fundamental as for-loops are in other languages.

[1] The Power of Prolog: https://www.youtube.com/@ThePowerOfProlog and https://www.metalevel.at/prolog

[2] https://www.youtube.com/watch?v=nmBkU-l1zyc and https://www.metalevel.at/acomip/