I'd nursed a foot callus for years that hurt badly when I walked barefoot. Weeks ago, sitting on the locker room bench, I hit my limit. In desperation I pulled out my pocket knife to do some field surgery. A few minutes into it I glanced up to see two guys sitting across the room staring at me wide-eyed as I dug into my foot with the tip of that pointy knife (8.5" overall with a 3.5" blade)! I just smiled and dug that sucker out.
Should have gone after that callus a year ago! Amazing how such a tiny thing can aggravate.
But you're right about a knife alarming people. Years ago, in another life, I opened a similar knife to cut a cable and my boss literally jumped backward and exclaimed in fear. But he came from a place where, when someone pulls out a knife, someone else usually gets stabbed.
They were probably just envious you were rocking a Kershaw Iridium Desert Warrior, which also comes in at under $100. The Iridium family are pretty nice knives.
Tangentially, if that callus was a plantar callus (circular with a painful point in the center), you can get sticky pads with salicylic acid from the drugstore that will gradually destroy it. Much safer than digging into your foot with a knife, but I'm glad to hear it worked for you!
Yes, I didn't know WTF was there, but over the years it had grown beyond annoying, becoming so painful I couldn't tolerate it. I thought perhaps something (a splinter, piece of glass or steel, etc.) had become embedded in my foot. I was determined to dig it out. I'm tall and not flexible, so I cannot easily see all of the bottom of my foot. But I can reach it.
The callus was surprisingly small (~1/2") and came out in one piece after about 10 minutes of work. Nothing embedded. No bleeding, just a lot of knife-wiggling. The bottom of the foot is really tough!
I really don't care about most new phone features and for my laptop the M1 Max is still a really decent chip.
I do want to run local LLM agents though and I think a Mac Studio with an M5 Ultra (when it comes out) is probably how I'm going to do that. I need more RAM.
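For a rough sense of what "more RAM" means, here's the back-of-envelope I use. The model sizes and the 4-bit quantization are illustrative assumptions, not benchmarks, and weights are only a lower bound (KV cache and runtime overhead come on top):

    # Back-of-envelope memory for local LLM weights (illustrative numbers only).
    GIB = 1024**3

    def weight_memory_gib(n_params: float, bytes_per_param: float = 0.5) -> float:
        """Memory for 4-bit quantized weights alone; KV cache etc. add more."""
        return n_params * bytes_per_param / GIB

    for name, params in [("8B", 8e9), ("70B", 70e9), ("405B", 405e9)]:
        print(f"{name}: ~{weight_memory_gib(params):.0f} GiB")
    # 8B: ~4 GiB, 70B: ~33 GiB, 405B: ~189 GiB -- hence the appeal of a
    # big unified-memory Mac Studio.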
I bet I'm not the only one looking at that kind of setup who was previously happy with what they had.
Apple has made some good progress on memory sharing over Thunderbolt. If they could get that ironed out, you could maybe run a good LLM on a cluster of Mac minis.
Again, you cannot do this today, but people are working on it. One guy might have gotten it to work, but it's not ready for prime time yet.
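To make the clustering idea concrete, here's a minimal sketch of the pipeline-parallel pattern those projects use: each machine owns a contiguous slice of the layers and forwards activations to the next hop. Everything here is hypothetical and simplified (threads and a queue stand in for the two machines and the Thunderbolt/IP link); real projects like exo also handle discovery, batching, and the KV cache:

    # Pipeline-parallel inference sketch: two "machines" modeled as threads,
    # with a queue standing in for the network link between them.
    import threading, queue
    import numpy as np

    HIDDEN, N_LAYERS = 64, 8
    rng = np.random.default_rng(0)
    # Fake "layers": each is just a small random weight matrix.
    layers = [rng.standard_normal((HIDDEN, HIDDEN)) * 0.1 for _ in range(N_LAYERS)]

    def stage(my_layers, inbox, outbox):
        x = inbox.get()            # receive activations from the previous hop
        for w in my_layers:
            x = np.tanh(x @ w)     # apply this machine's slice of the model
        outbox.put(x)              # forward to the next hop (or back to the user)

    link = queue.Queue()           # the "Thunderbolt" link between the machines
    inbox, result = queue.Queue(), queue.Queue()
    a = threading.Thread(target=stage, args=(layers[:4], inbox, link))   # machine A: layers 0-3
    b = threading.Thread(target=stage, args=(layers[4:], link, result))  # machine B: layers 4-7
    a.start(); b.start()

    inbox.put(rng.standard_normal(HIDDEN))  # the embedded input
    print(result.get()[:4])                 # activations out of the last stage
    a.join(); b.join()

The catch is that every token now pays a network round trip per hop, which is why low-latency memory sharing over Thunderbolt (rather than plain Ethernet) matters so much here.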
> Apple has made some good progress on memory sharing over Thunderbolt
The only reason Thunderbolt exists is to expose DMA over a tunneled PCIe link. I'd hope they've made progress on it; Thunderbolt has only been around for fourteen years, after all.
That ignores the possibility that local inference gets good enough to run without a subscription on reasonably priced hardware.
I don't think that's too far away. Anthropic, OpenAI, etc. are pushing the idea that you need a subscription, but if open-source tools get good enough, they could easily become an expensive irrelevance.
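This already works in rough form today. As a minimal sketch, assuming you've downloaded some GGUF-quantized open-weights model (the path below is a placeholder), llama-cpp-python will run it entirely offline, no subscription involved:

    # pip install llama-cpp-python
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/model.Q4_K_M.gguf",  # placeholder: any local GGUF file
        n_ctx=4096,       # context window
        n_gpu_layers=-1,  # offload everything to the GPU / Apple Silicon if possible
    )

    out = llm("Q: Why is the sky blue? A:", max_tokens=128)
    print(out["choices"][0]["text"])

The open question is the quality gap versus the hosted frontier models, not whether the plumbing exists.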
My concern is that inference hardware is becoming more and more specialized and datacenter-only. It will no longer be possible to just throw in a beefy GPU (in fact, we're already past that point).
Yep, good point. If they don't make the hardware available for personal use, then we wouldn't be able to buy it even if it could be used in a personal system.
There is that, but the way this usually works is that there's always a better closed service you have to pay for, and we see that with LLMs as well.

There's also the fact that you currently need a very powerful machine to run these models at anywhere near the speed of the PaaS systems, and I'm not convinced we'll get the Moore's-law-style jumps required to reach that level of performance locally, never mind the massive energy requirements. You can only go so small, and we're getting pretty close to the limit. Perhaps I'm wrong, but we don't see the jumps in processing power we used to see in the 80s and 90s from rising clock speeds; the clock speed of most CPUs has stayed pretty much the same for a long time.

That said, LLMs are essentially probabilistic in nature, which opens up options not available to current deterministic CPU designs, so that might be an avenue that gets exploited to bring this to local development.
> there is always a better closed service you have to pay for
Always? I think that only holds for a certain amount of time (different for each sector), after which the open stuff is better.
I thought it was only true for dev tools, but I had to rethink it when I met a guy (not especially technical) who runs open-source firmware on his insulin pump because the closed-source stuff doesn't give him as much control.
From some comments I read in this thread, costs could be around 100k-500k USD to get anywhere near current frontier models. My concern is that the constant reductions in cost per transistor (whether for storage or logic) that we saw over the last ~three decades are over, and that the cost per transistor will only go up!
Nicely made and always useful.