asadm's comments


OpenAI had it; they had a foot in the door with their Siri integration last year. But they dropped that ball, and many others.

Yeah, I was really expecting them to just continue the partnership that Apple announced when the iPhone 16/iOS 18 came out, but it's been pretty much radio silence on both fronts since then. The established stability and good-enough-ness that Google offers with Gemini are probably more than enough reason for Apple to pivot to them as a model supplier instead.

I'm sure hiring Jony Ive to design hardware for them didn't help.

Yeah. Super disappointing. I may end up switching to Gemini entirely at this rate.

I am working on a camera module that has SLAM built-in: https://x.com/_asadmemon/status/1989417143398797424

Running on a single-core armv7. It includes VIO (visual-inertial odometry) and nice loop closure. I'm now optimizing it further to see if I can fit some basic mapping too.


for good reasons.

I heavily use this one actually.

On Linux I miss it, so I created a hybrid super-tilde action that cycles through apps of the same kind as the focused app.

Me too. In fact, after watching my hands for a moment, I realized it's the only way I switch applications now.

While the communicator is nice, I just pre-ordered the power keyboard: https://www.clicks.tech/powerkeyboard


Good riddance MCP.


That's not what this is. MCP is still around and useful; skills are tailored prompt frameworks for specific tasks or contexts. They're useful for specialization and, in conjunction with post-training after some good data is acquired, will allow the next generation of models to be a lot better at whatever jobs produce good data for training.


Local tools/skills/function definitions can already invoke any API.

There's no real benefit to the MCP protocol over a regular API with a published "client" that a local LLM can invoke. The only downside is that you'd have to pull this client first.

I'm using local "skill" as a reference to an executable function, not specifically Claude Skills.

If the LLM/agent executes tools via code in a sandbox (which is where things are moving), all LLM tools can simply be defined as regular functions that have the flexibility to do anything.
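A minimal sketch of that pattern (all names are illustrative, not from any specific framework): each tool is just a plain Python function, and the "registry" the sandboxed agent code draws from is an ordinary dict of callables, so agent-written code can compose tools like any other functions.

```python
import json
import urllib.request

def fetch_json(url: str) -> dict:
    """Fetch a URL and parse the body as JSON (any plain HTTP API works)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def word_count(text: str) -> int:
    """Count whitespace-separated words in a string."""
    return len(text.split())

# The "tool registry" is just a dict of callables; agent-generated code
# in the sandbox calls them directly, with no protocol layer in between.
TOOLS = {fn.__name__: fn for fn in (fetch_json, word_count)}

result = TOOLS["word_count"]("regular functions are flexible tools")
```

The docstrings double as tool descriptions, and adding a new tool is just defining another function.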

I seriously doubt MCP will exist in any form a few years from now.


I have seen a ~10 IQ point drop with each MCP server I added. I have replaced them all with either skill-like instructions or curl calls in AGENTS.md, with a much better "tool-calling" success rate.
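For illustration, a hypothetical AGENTS.md entry doing this (the endpoint, parameters, and response fields are made up, not a real API):

```markdown
## Tools

### get_weather
To fetch current weather, run:

    curl -s "https://api.example.com/v1/weather?city=<CITY>"

The response is JSON with `temp_c` and `conditions` fields.
```

The agent reads this once as plain instructions and shells out with curl, instead of carrying a standing MCP tool definition in context.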


That's a context pollution problem, not an MCP problem.

https://www.anthropic.com/engineering/advanced-tool-use


Building a RAG layer for finding the correct MCP tool is a band-aid.


1. It's not just about MCP; if you have hundreds of skills, you are going to have the same context issues.

2. It was delegating to a subagent to select the tools that should be made available, which sounded like it got the whole list and did "RAG" on the fly, like any model would.

You're going to want to provide your agent with search, RAG, and subagent context gathering (plus pruning/compaction/management) that can work across the internet, code bases, large tool/skill sets, and past interaction history. All of this can be presented as a single tool (or a few) to your main agent, and that's the broader meta-pattern/trend emerging.


It isn't particularly useful. It uses a lot of context without adding a lot of value; Anthropic has written a blog post saying as much. Skills keep that context out unless it's needed.

It's a much better system in my experience.


Anthropic did not say "don't use MCP because it pollutes the context."

What they said was: don't pollute your context with lots of tool defs, from MCP or otherwise. You'll see the same problem if you have hundreds of skills, with their names and descriptions chewing up tokens.

Their solution is to let the agent search and discover tools as needed; it's a general concept that applies across tool types (MCP, functions, code use, skills).

https://www.anthropic.com/engineering/advanced-tool-use
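A rough sketch of that search-and-discover idea (registry contents and names are hypothetical): rather than putting every definition in the prompt, expose one search tool up front that keyword-matches a large registry and returns only the few definitions worth loading into context.

```python
# A large registry: name -> one-line description. In a real agent this
# could hold hundreds of tools, skills, or MCP endpoints.
REGISTRY = {
    "create_invoice": "Create a billing invoice for a customer",
    "send_email": "Send an email to a recipient",
    "resize_image": "Resize an image to given dimensions",
    "query_orders": "Search the orders database by customer or date",
}

def search_tools(query: str, limit: int = 3) -> list[dict]:
    """The only tool exposed up front: score registry entries by how many
    query words they contain and return just the top few definitions."""
    words = query.lower().split()
    scored = [
        (sum(w in f"{name} {desc}".lower() for w in words), name, desc)
        for name, desc in REGISTRY.items()
    ]
    hits = [{"name": n, "description": d}
            for score, n, d in sorted(scored, reverse=True) if score > 0]
    return hits[:limit]

print(search_tools("email customer"))
```

The main agent pays for one tool definition regardless of registry size; everything else is fetched on demand.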


I hear many Rivian customers really love Comma.ai, so much that they are #1 on Comma dash.


Probably a lot of overlap in the Venn diagram of people who would like the two things, mostly the "Early Adopter" circle.

Also, a lot of cars have significant limitations with comma.ai. Yes, you can install it on all sorts of cars, but with caveats like: only engages above 32 mph, cannot resume from a stop, cannot take tight corners, no stop-light detection, requires additional car upgrades/features, only known to support model year 2021, etc.

Rivian supports everything, it has a customer base who LOVE technology, are willing to try new things, and ... have disposable income for a $1k extra gadget.


I've seen videos of massive touchscreen stuttering; is that still a thing on Rivian?


A lot of the Gen 1 users will likely swap over to it, though. They've basically dropped autonomy improvements for Gen 1, which is rug-pullish :(


Do Evil, Yes!


Was this by chance a "No, money down!" Simpsons reference?

https://www.youtube.com/watch?v=5yuL6PcgSgM


it is now lol


Which side is the terrorist here?

