Code completion is a great use, some writing tools are nice, I love ChatGPT for information lookup (while knowing that it hallucinates), ChatGPT/whatever is great for generating icons and some images, and it's much better at writing in corpo-speak than I am.
But…Nvidia as the most valuable company in the world? $1B valuations for freshly minted companies? Radical changes to the economy? AGI?
It certainly doesn't seem that way. Yes, these are incremental (in this case, a large increment!) improvements in the way we interact with machines.
> It certainly feels like the AI bubble is popping.
Objectively, no it doesn't. Even if you think it's a bubble, then the bubble is just growing. There are zero signs of anything popping yet. Company valuations are only increasing.
> Nvidia as the most valuable company in the world?
It makes sense when you look at the financials. It's like back in the 1800s when railroads were the new thing: imagine there were only one company that knew how to make locomotives, while everything else had multiple suppliers. It would have been the most valuable company as well.
Nvidia is in an extremely enviable situation where they're simply the only company that makes a product that is in extreme demand. And for various reasons, no other companies appear to be able to make and deliver a substitute -- not yet and seemingly not in the near future.
So they're able to produce an extreme level of profit, which shows no signs of stopping anytime soon, and their market valuation reflects that accurately.
Remember, high market valuation isn't a reflection of expected future revenue or sales -- it's a reflection of expected future profit. And Nvidia is raking in crazy profits because nobody else is effectively competing with them.
> Objectively, no it doesn't. Even if you think it's a bubble, then the bubble is just growing. There are zero signs of anything popping yet.
Ever seen a bubble pop? It doesn't contract and then pop, usually. A hole develops, and almost immediately no more bubble. If you watch very closely, you might see signs of the end a fraction of a second before it vanishes.
Like, this is why these phenomena are called bubbles; they grow and grow and grow, and then bang.
> It's like back in 1800's when railroads were the new thing, and imagine there were only one company that knew how to make locomotives, while everything else had multiple suppliers.
I, er, feel obliged to mention that that was a particularly notorious bubble. And, like, yes, if you're selling shovels in a gold rush, that's a good place to be, no question of that. But it doesn't say anything in particular about the long-term viability of the gold rush.
>> Objectively, no it doesn't. Even if you think it's a bubble, then the bubble is just growing. There are zero signs of anything popping yet.
> Ever seen a bubble pop? It doesn't contract and then pop, usually. A hole develops, and almost immediately no more bubble. If you watch very closely, you might see signs of the end a fraction of a second before it vanishes.
Well, if that’s the case then why is the OP arguing that the bubble is popping based on feelings and vibes instead of objective data caused by the rapid popping of the bubble that you postulate?
You're literally saying the bubble is going to pop so fast that it will be noticeable to everyone and objectively undeniable -- the numbers will show it -- and that's why… grandparent's vibes are evidence that it's happening, quickly and indisputably, despite the failure of the numbers to show it?
Right, you seem like you're mostly agreeing with me.
There's no "it feels like it's popping".
The bubble is just continuing to grow. When it pops, you'll know it.
(Although to be fair, it's not instant. It's often a jagged decline that takes weeks or even months. You never know when you've hit the bottom, when it's "fully popped". So the metaphor with actual physical bubbles is not perfect. But in hindsight you can often look back to a day of massive selling that "started" it.)
I think it's more like the California gold rush. The gold prospectors didn't make a lot of money. The people who made bank were the ones selling goods and services to the prospectors. Selling the shovels is more profitable, and has the bonus of not being dependent on whether or not your customers actually find gold.
It feels like at this point in software hype bubbles, everyone is using this metaphor and trying to sell wholesale shovels to shovel salesmen. Kind of like an MLM.
NVIDIA, specifically, doesn’t really make a compelling product, but is trying to make it seem like their tools can eventually make a compelling product, so build your platform on NVIDIA so that you can sell the opportunity that allows compelling products…
AMD ""just"" needs to fix their software API ecosystem. Developers, developers, developers. Unfortunately I don't think they can do that even if promised a trillion-dollar stock valuation. Not because it's technically difficult, either.
The locomotive analogy doesn't quite fit. The customers of the locomotive company would buy the locomotives and then use them to profit. Are Nvidia's customers making a profit from the billions they are spending on GPUs? I don't think so, at least not yet.
> Are Nvidia's customers making a profit from the billions they are spending on GPUs? I don't think so, at least not yet.
Yes. NVIDIA's customers are mostly the cloud providers buying tens of thousands of machines each. The latest 8xH100 pods cost almost $400,000 so no one but Facebook and a few other tech giants can afford to buy tons of them without having customers lined up and ready to rent them.
If you look at AWS's reserve pricing, a one year contract costs more than the hardware would if you bought it up front. The on-demand pricing is even crazier, if you're lucky enough to find some capacity. NVIDIA's customers are absolutely printing money, it's their customers' customers that are paying for it.
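To put rough numbers on that (the hourly rate below is my own assumption of what an 8xH100 on-demand instance goes for, in the ballpark of AWS list pricing; it is not a figure from this thread):

    # Back-of-envelope: one year of on-demand 8xH100 rental vs. buying the pod outright.
    hardware_cost = 400_000        # 8xH100 pod price cited above
    on_demand_per_hour = 98.0      # assumption: rough 8xH100 on-demand rate, not a quote
    hours_per_year = 24 * 365

    print(f"${on_demand_per_hour * hours_per_year:,.0f} rented vs ${hardware_cost:,} bought")
    # -> $858,480 rented vs $400,000 bought

Even with generous error bars on the hourly rate, renting for a year costs roughly double the hardware.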
Plenty of railroads went bust -- it was a time of extreme speculation.
All that matters is that people think the industry will turn out to be profitable and sustainable overall, once we figure out which products and features work and which don't. There's no reason whatsoever to think AI doesn't fall in that category.
I'm not a railroad expert but I think AI involves much more speculation. They knew what a locomotive could do and what problem it solved. We can't really say that with AI.
A locomotive was an _input_. Like a GPU, arguably. They _didn't_ really know what a railway could do, at the time; a lot of speculative railway projects were terribly ill-conceived and could never have made economic sense (or in extreme cases, worked at all, at least with the tech available; the first attempt at a Channel Tunnel was in the late 19th century, for instance).
I see things at a slightly different angle than you.
Yes, there's a bubble.
Those things you list -- I find it very easy to believe they add up to at least a 3% productivity boost globally. A single tech doing this in one year (or even 18 months), before we really understand how to best apply it, is a radical change all by itself, even if we don't use it to speed up future growth.
The world economy is $109 trillion; 3% of that is thus just over $3 trillion per year, which is enough to justify NVIDIA's valuation… assuming they get to collect all the profits and nobody else manages any, which is unlikely given their hardware burns a lot of expensive (in aggregate) electricity.
I can also easily believe that this triggers a painful demand increase for electricity.
I hope it isn't economically dominant, because nobody is ready for that transition.
> Code completion is a great use, some writing tools are nice, I love ChatGPT for information lookup (while knowing that it hallucinates), ChatGPT/whatever is great for generating icons and some images, and it's much better at writing in corpo-speak than I am.
I find it incredibly hard to believe that these things provide value anywhere close to 3% of GDP, that estimate is orders of magnitude too high.
3% is the entire economic output of the UK or India. The global agriculture industry is only like 4.5%.
Depending on which source I use, there are around 27 million software developers in the world. I'm not claiming it doubled our productivity, but I will say that if we earned an average of $100k, we'd be 2.5% of the global economy just by ourselves.
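A quick sanity check of that 2.5% figure, reusing the $109 trillion world-economy number from upthread:

    # Sanity check: 27M developers at an average of $100k vs. $109T world GDP.
    developers = 27e6
    average_comp = 100e3
    world_gdp = 109e12

    print(f"{developers * average_comp / world_gdp:.1%}")  # -> 2.5%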
I can't find out how many copy editors and copy writers there are in the world (I keep finding numbers for the USA alone).
I don't know how many graphical artists there are, nor how many are willing to use the various image generator AI, but I am seeing that stuff pop up IRL.
We know at least some lawyers have used it because they didn't check the output for hallucinations and it bit them hard; we don't know how many use it carefully.
The easiest prediction you can make is that it's going to have an impact in unexpected ways.
It will be some time until we get to grips with all the implications of computers grokking language. Then again sci-fi has had computers understanding speech for ages as if it's the most natural thing in the world.
If it is a pop how can Big Tech walk it back? Am I overreacting here or have companies like Google and Microsoft pimped out their reputations in an irreversible fashion?
Everyone will walk it back like they walked back 'blockchain.' And then they'll go and find another technology to conspicuously chase. The hype cycle will repeat.
None of the big players have staked their reputations on AI yet at all, which is telling in its own way. They've invested in it and explored products and demonstrated their R&D commitment, but they haven't made it their new brand identity or anything.
The most vulnerable Big Tech organizations are the ones like OpenAI and Anthropic that are swollen with investment specifically for AI but need to find a business or growth plan that can sustain that investment, plus the smaller and startup companies that have bet the farm on AI before it matured. But these same companies are the ones that have the potential to win big if there's another key breakthrough on either capabilities or product insight.
They really don't need to walk back anything. If the features are too expensive to support and ultimately unused, then they'll just quietly kill them off. Otherwise, it'll just be another entry in the feature lists. Not something that really buys them anything, but also not something making them a lot of money.
Big tech wasn't invested in crypto in the way it is in AI. For a few days Meta pretended they were going to launch a cryptocurrency, but that's kinda it. Whereas in this case all the biggest tech players are making a lot of noise about how AI is The Future and investing a tremendous amount of resources into trying to make their AI be The Future.
Big tech investment into machine learning didn't need walking back because machine learning morphed smoothly into AI.
Big tech investment into "Web 2.0" didn't need walking back because "Web 2.0", to whatever extent it was a definite specific thing, was a huge success and now we just call it the web. (If you meant "Web 3.0", that didn't need walking back for the same reason as crypto didn't: it was never a very big thing for very big tech.)
So I'm not seeing how any of those things was like AI today. If it turns out that present-day AI is a bubble and it bursts, it will not be like crypto or "Web 3.0" for the big tech companies because those were only ever fads that the big tech companies didn't particularly tie themselves to. It won't be like machine learning because it will be the same bubble, just later on. And it won't be like "Web 2.0" because that wasn't a bubble and didn't burst.
With blockchain technology there were many people prepared to die on that hill. It was going to solve world hunger albeit not the obesity epidemic. It also had all of its own lingo for not selling or buying the dip.
I remember people saying how things would be replaced with blockchain technology such as ecommerce, and apps 'on the chain' would find many thousands of uses. People were quite insistent.
On HN you risked getting modded down to hell and shadowbanned for life if you said anything wrongful about crypto. It was bizarre.
With AI we don't have this level of zealotry. You can say it is a load of rubbish and nobody wants to kill you. Weird, but indicative that AI has found some use, even if it is for 'art' and just marketing.
Nonetheless, if someone does get a bit zealous about it, I can tell them that I have all problems solved already with blockchain technology apps, just for trolling value.
Nah, Web 2.0 was a super hype back in the mid-00s. Everyone was all about advertising their adoption and development of Web 2.0. Web 3.0 was never really a thing; beyond some crypto enthusiasts, nobody cared.
As for crypto, I probably should have said blockchain instead. There was one point recently where everything was blockchained.
"""Web 2.0 (also known as participative (or participatory)[1] web and social web)[2] refers to websites that emphasize user-generated content, ease of use, participatory culture and interoperability (i.e., compatibility with other products, systems, and devices) for end users."""
I'm struggling to think of any mainstream big tech products that incorporated crypto at all. Besides some financial apps adding an additional tab for it.
This kind of thing is not irreversible. Remember when Google tried to get everyone on Google+? A far more annoying product push than shoving AI into Google search.
Yeah, I think you are probably overreacting. Microsoft and Google already have sort of bad reputations anyway, but they haven’t actually done anything that widely hurt customers yet with this AI stuff. If it doesn’t work out, it will just be a silly R&D project from the public’s point of view.
Or they’ll use it to make stuff like Cortana (or whatever they are calling it now) slightly less useless.
One thing big tech is very good at is walking it back. Microsoft was one of the earlier big adopters of cryptocurrency, and now it’s nowhere to be found. Nobody cares because Microsoft didn’t make a big deal out of it.
Because current AI is effectively being pitched as assistants 2.0, I don’t see them wiping AI from their lineup, but letting it rot is a possibility as they all have done for their v1 assistants.
Due to the attention economy, most aren’t going to care within a year. Google is still a dominant company despite killing off beloved services so if they can do that, the others certainly can weather failed AI products.
There might be some exceptions like Adobe; it depends on what they do. That's my take on it anyway: in general, there was just way too much pumped into AI without much thorough thought. The fact that Rabbit was even a thing people got hyped over speaks volumes about how out of touch with reality current AI pushes are.
I mean, see, say, Google+, or that Facebook metaverse thing, or the various Windows Phones, or IBM's adventures in the blockchain. Companies get over-excited about the latest trend and do silly stuff all the time. People generally more or less forget.
>Nvidia as the most valuable company in the world?
This could just as easily be written as:
Apple as the most valuable company in the world?
Microsoft as the most valuable company in the world?
Tesla as the most valuable company in the world?
I won't be the one to defend Nvidia, but I don't find any solid arguments to consider its business model worse or better than those of the 'other' IT companies.
Perhaps improvements might fizzle out, but a lot of smart people seem to think they won't. The AI models we are using today are the worst they will ever be. We can't base our valuations on our current experiences with AI today.
Five years ago, just getting a computer to form vaguely relevant, grammatically correct sentences felt magical.
Sure, hard to tell either way. That's kinda how bubbles happen in the first place, because if we really knew how valuable something was, it would already have the price to reflect that.
But it's also why Lord Kelvin claimed in 1895 "heavier-than-air flying machines are impossible"* and in 1897 that "radio has no future".
* don't ask me why birds were not a proof by example; we have the same problem today with regard to human brains being proof by example that human-level intelligence is possible.
It seems like one of the defining characteristics of a paradigm-shifting technology is that a lot of smart people dismiss it at first. The skepticism and cynicism I see about AI from the HN crowd (of all places) is such a great example. I use AI every day as both a user and a developer, and it's an incredible gamechanger. It's pretty surprising to me how incurious and conservative many HN commenters appear to be, but maybe it shouldn't be. I'm guessing the median age of active commenters on this site is now in the mid-30s or early 40s.
Depends on your definition of "fizzle out". If you mean "not dominate the future, but be relegated to merely a niche tech", then:
Ruby on Rails. Modula-2. Every JavaScript framework since ever. Supersonic passenger flight. Space tourism. Emitter-coupled logic. Gallium arsenide (still has my vote for a name for a speed-metal band). Musk's Boring Company. I'm sure I could go on with some more thought.
I didn't understand the comment to be about specific products, but more overall categories. I think space tourism is likely going to be a huge category, it's just decades away still. I don't know about supersonic passenger flight or building tunnels everywhere. It may never be worth it for those categories.
But regardless, I don't think that many people really thought Musk was going to build much with Boring. I certainly didn't.
And has anyone in the last three decades (other than Boom's hapless investors) thought that supersonic passenger flight was going to be huge?
Not really...crypto is the biggest one that people cite, but the crypto ecosystem seems to be humming along. It hasn't taken over the world, but it also hasn't really fizzled imo. I haven't really paid attention to whether VCs are still investing there.
I guess maybe VR is another one? But that's another example of a slowly growing ecosystem.
I think mostly the people who think some piece of hot tech is going to take over the world aren't usually wrong, they're just early.
Hmm...I didn't understand the comment to be about specific products, so I'd take Glass and Betamax off the list.
I really don't think crypto or VR or AR are going to fizzle, I think they're just early in their overall lifecycle. I absolutely think that in ten or twenty years those categories will be much larger than today.
We're certainly a long way from last summer's AI hype, when CEOs of AI companies were going in front of Congress to explain how their AI was going to take over the world and needed to be tightly regulated.
It absolutely can't be trusted, but it can focus searches for relevant information. I started researching network monitoring solutions, and ChatGPT provided suggestions for me to look at, plus terms I could use to find relevant pages in documentation. I saw real solutions that would work that my initial web searches missed. I got to relevant documentation much faster, with a fair expectation of what I would find.
It is great at summarizing but poor at understanding. It is great at concepts but crap at precision. And every actionable "fact" coming from it needs to be vetted.
You should trust it as much as you'd trust another human.
The quality of the LLM matters. The better LLMs don't hallucinate as much as the rest. GPT-4 appears to be trained to consult the web when it receives questions for which it's likely to hallucinate, such as questions about hard figures.
> You should trust it as much as you'd trust another human.
Eh? Only a sociopath, when confronted with a question they don't know the answer to, confidently makes something up. The human may not know the answer, but they generally know whether they know the answer or not. The machine does not (of course, it doesn't _know_ anything). You should absolutely trust something like this far less than the average human.
> Only a sociopath, when confronted with a question they don't know the answer to, confidently makes something up. The human may not know the answer, but they generally know whether they know the answer or not.
What? No. Fake memories are such a well-known phenomenon that cops will abuse them to generate false solutions to crimes, etc.
Also, there are people for whom compulsive lying is a pathological problem, who are definitely not sociopaths, and there are many people who demonstrate the same processes at non-pathological/non-clinical levels.
People will, indeed, "just make it up", perhaps especially when they don't really know the answer, because not knowing is socially embarrassing -- that's literally the opposite of sociopathy.
We need a different word than "hallucinate" or "bullshit", because the LLM executes the same process whether it gets the answer right or wrong. It doesn't _know_ the correct answer in either case.
> It certainly feels like the AI bubble is popping.
I wonder how much of this has to do with 15+ years of the narrative of AGI/"The Singularity" being near. Deep learning seemed to really reinforce the narrative, as it blew away classical machine learning and succeeded in so many applications. It started to look like all of Kurzweil et al.'s predictions were dead on, particularly as LLMs took off.
Of course, we're now seeing the apparent limitations in very difficult applications: LLMs have mostly been revealed to be a dead-end path towards reasoning (with apologies to Blake Lemoine ;) ). And self-driving is being revealed to be extremely, extremely difficult to solve with pattern matching alone.
Even the traditionally conservative Apple seems to have bought into the hype (though some of the applications will be pretty useful, IMO, after a few generations of release).
I saw somewhere (X, I think) someone make the case that the latest generation of AI buzz has only created between $5 and $50 billion in value, despite investments measured as a multiple of that. I found it convincing.
Personally, I'm still a believer that AGI will happen, but I am convinced by Yann LeCun's take: we're not even on the on-ramp yet, and all the bluster and hand-wringing over safety threaten to kill or tragically stall the most important technology in all of human history. [1][2][3][4]
"AI" (or rather language models) will change the world. But in a very different way. They aren't (and won't become) AGI. They aren't (and won't) replace software engineers (but maybe help them to write better software, or more realistically churn out even more bad software). And they aren't and won't become ultra-wise oracles that can solve any task you throw at them.
But they are an interface between the real world and computing, because everything in the real world revolves around human language. Here are a few things that are actually solved by now, even if we did not yet fully recognize it:
1. automatic human-human translations
2. collection and annotation of all kinds of real-world data (with the help of image recognition, of course)
3. human-machine communication (not decision-making, mind you.)
The most crucial aspect is that other human-machine interfaces will become less relevant. In particular web pages and mobile apps. They might stay as an attempt to capture attention, but they will become less and less relevant for actual usage. Here are a few services that might get by without a web browser or app soon:
1. online banking
2. delivery services (and others like Uber)
3. small businesses that basically only need to manage appointments online (doctors, lawyers, car shops)
I remember how phone banking relied on the honesty of individual, accountable employees. How do you hold AI accountable? You give it low limits, write off the mistakes and keep scaling and ignoring failure even at the million-dollar-mistake level?
Banks indeed look at things in terms of risk mitigation, but this is silly. You will not see a credible bank incorporate LLM-based features because it is a genuinely enormous attack surface with marginal ROI. Any logical developer would create an ELIZA-style on-rails system instead, and it would never misbehave like ChatGPT.
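For what it's worth, "on-rails" here means something like the sketch below: a fixed set of intents mapped to fixed responses, nothing generated, so there is nothing that can go off script. The intents and wording are made up for illustration, not from any real bank:

    # Minimal sketch of an ELIZA-style on-rails assistant: fixed intents,
    # fixed responses, deterministic and auditable. (Illustrative only.)
    INTENTS = {
        "balance":  "Your current balance is shown on the Accounts page.",
        "transfer": "To transfer money, choose an account and tap Transfer.",
        "card":     "To freeze a lost card, go to Cards > Freeze.",
    }

    def reply(message: str) -> str:
        msg = message.lower()
        for keyword, answer in INTENTS.items():
            if keyword in msg:
                return answer
        return "Sorry, I can only help with balances, transfers, and cards."

    print(reply("I lost my card!"))  # always the same, vetted answer

The trade-off is obvious: it can only do what it was explicitly built to do, which is exactly the property a bank wants.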
"Solved"—so long as you don't mind that unclear and difficult-to-measure percentage of nearly-guaranteed errors—errors which the LLM can never detect or acknowledge, because that would require a level of introspection of which they are fundamentally incapable.
Yada-yada-yada, we don't need a lot of things, depending on how you define 'need'. It is an interesting philosophical question, but when analyzing what products a company decides to bring to market it is completely irrelevant.
You can't make money fulfilling 'needs' unless there is also demand, but you can certainly make money fulfilling demand regardless of what some newspaper decides you 'need'.
(I would imagine the next FT headline will be 'Starbucks can't explain why people need coffee'; after all, why spend $4 and wait in line when you can just take a $0.01 caffeine pill?)
I have been using a Samsung S24 series phone for a few months, with "circle to search" and Gemini-powered Google Assistant. I have used circle to search about three times, and disabled Google Assistant.
I know that a "power user" can get much more out of these tools than me, but I still can't imagine what people would use them for, especially on a phone. People use their phones for taking calls, watching TikTok, and sending messages, not for getting answers. The "turn on the light" thing works well enough in the old assistant. Which makes me wonder whether all those things Apple advertised make much difference in that ecosystem.
(btw it seems that the Samsung/Google AI stuff never got as much coverage as Apple's thing?)
"Relief that Apple is finally bringing generative AI to its devices added more than $260bn to its stock market value this week. Over the next few years, Apple AI could fuel a massive hardware upgrade cycle. It will only work on a very small number of existing devices, so most customers will have to buy a new and more powerful iPhone, iPad or Mac to see Apple Intelligence."
Any tactic the so-called "tech" company can use to get people to buy new computers, when the old ones are still working fine.
We the HN readers do not need it, but for 90% of the rest of the world's population the interaction model with computers today for tasks more complex than Instagram is akin to casting magical spells, and it's been like that for decades. Imagine they could just tell the computer what to do in a completely natural, possibly even broken English, and it'd do it. That's the longer term promise of AI, and that's where it'll inevitably go.
AI is already so woven into my life. I use it all the time and wouldn't want to go back.
For example I made a bookmarklet that summarizes articles linked from HN. So the way I have skimmed this article is by reading this:
    Apple's new AI offering, Apple Intelligence, seems unoriginal and derivative compared to competitors.
    Apple is partnering with OpenAI to integrate ChatGPT, but the integration appears halfhearted.
    The article questions if Apple's approach can truly unlock the potential of generative AI.
The prompt I use is:
Make up to three short bullet points about what
the following page is about. Under no circumstances
make any of the bullet points longer than 40 words.
The page:
And then I append the text content of the page.
It is a very crude approach. Not sure how well it really covers the most important points. But I already like using these summaries to decide whether to dive deeper. And over time the summaries will only get better.
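In case anyone wants to try something similar, here is a minimal sketch of the same idea as a command-line script rather than a bookmarklet. It assumes the official OpenAI Python client (`pip install openai requests`) and an OPENAI_API_KEY in the environment; the model name and the crude text-length cap are illustrative choices of mine, not the setup described above:

    # Minimal sketch: summarize a page the same way the bookmarklet above does.
    import sys
    import requests
    from openai import OpenAI

    PROMPT = (
        "Make up to three short bullet points about what "
        "the following page is about. Under no circumstances "
        "make any of the bullet points longer than 40 words. "
        "The page: "
    )

    def summarize(url: str) -> str:
        page_text = requests.get(url, timeout=10).text  # crude: raw HTML, not extracted text
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": PROMPT + page_text[:20000]}],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(summarize(sys.argv[1]))

A real bookmarklet would additionally need to grab the page text in the browser and POST it somewhere, but the prompt-plus-page-text pattern is the whole trick.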
I think AI will be part of most UIs in a few years.
Hmm. That would be like asking them to explain why you need fast code -- you could easily argue you don't, but it _can_ make your experience a lot better. There's a weird rush to market right now, and the many examples of these LLMs being wrong don't help, but don't forget: AI has been around for a long time (ever googled something?).
All people are doing, and all they can think to do, is dump their data and company secrets into a third-party LLM, because that's all you can do with an LLM if you want to discuss that data without involving anyone else. At best an LLM should be a reasoning engine, but everyone wants a "do my work for me" engine.
There are some small ways it can enhance your life, but consider this: we already had type-aheads, document templates, and generators. Adding an air of randomness to these things might make them flashier but not necessarily more useful.
When OpenAI starts announcing products from all the data people dumped into their models, no one will be laughing. Enjoy the sherlocking while it happens to your company.
The jagged edge of LLMs' capabilities makes it really hard to explain what it's good at, and therefore what it's good for. It can be genius level at one thing and then extremely foolish at a seemingly related task in ways that are hard to intuit. It is still early days and AI is improving at a startling pace. So, eventually even the floor of competency will rise above the threshold of "average human". That's my guess anyways. Asking what AI is good for is kind of like asking what a toddler is good for. The question itself is premature.
I'm absolutely sick of AI being shoved into everything, AI generated images, AIs falling in love with someone from the New York Times and so on, but as a glorified search/shortcut finder in my phone it actually makes sense, to me at least.
The commentary focuses on the image generation and image editing features, but it doesn't mention the planned features for text manipulation and text summary. These will arguably be the most useful features for day-to-day device usage.
The ability to easily (and privately) double-check emails for typos and grammar mistakes before sending them out will genuinely be helpful (who hasn't sent a late-night message with a typo?). And if the quality of the text summaries is high, easy summary creation will also be useful for many people.
Not in its current state, when it doesn't do anything on its own and you have to ask it a question to interact with it.
Once it becomes capable enough to act as an agent and starts asking you questions (unprompted by you) to confirm its actions, it'll slowly become a requirement to participate in society and remain productive enough to be taken seriously, like smartphones are today.
I know Apple is relevant because they have a lot of money. But when was the last time their products were relevant? 2007 with the iPhone? 2010 with the iPad? 2001 with the iPod? The iMac in 1998?
Most of my family uses iPhones, and it's painful to watch them struggle with basic tasks. My mom tries to find out how to access Gmail instead of me.com. My brother downloads yet another app to add metadata to pictures. No one at work seems to know how to upload pictures as JPEGs (I think they default to HEIC now -- not a problem, and easy to support on the server, but it was a super-secret change to the defaults, and we didn't hear about it until our users started complaining).
I know there are people who positively love their iPhone and the status it apparently confers in their friend group. But I've never understood why a device from a secretive, developer-hostile company is supposed to be some amazing tool.
It's a miniature computer with a built-in wireless network and a poorly integrated phone app. Its innovation seemed to me to be how well they were able to convince the carriers to finance the phone so users wouldn't have sticker shock.
So "Apple Intelligence" is just repackaging other people's stuff in a way that their users can digest? That's not a surprise. I'm just waiting for my family to start yammering on about how Steve Jobs just added AI to their phones and no one else has such features, even though Steve Jobs is dead and LLMs have been a thing for YEARS.
>But when was the last time their products were relevant?
You can't neglect the fact that, despite their lack of innovation while sitting on a huge pile of money, Apple has delivered a few good things in the last decade. For example, their use of ARM chips in Macs has influenced other companies to adopt the architecture, which in some ways is better than x86. Another product market they opened up with their "innovation" is AirPods, which are, in my opinion, better for daily tasks than bulky headphones, as they are easier to carry around and more discreet.
>But I've never understood why a device from a secretive, developer-hostile company is supposed to be some amazing tool.
An iPhone may lack some basic features, but I find the overall experience more polished than Android.
>I know there are people who positively love their iPhone and the status it apparently confers in their friends group.
My 2021 iPhone SE doesn't confer any status on me, but I agree that some people just buy iPhones because they are expensive.
Anyway, in the end, it's about everyone's opinion. But Apple’s AI seems like overkill. I definitely don't want AI spilled into every app that is on my phone. I'm happy using the phone app, taking photos whenever I feel like it, and keeping in touch with my friends over WhatsApp/Discord/Messages.
> So "apple intelligence" is just repackaging other people's stuff in a way that their users can digest?
HN really underestimates how important this is to billions of consumers.
Having said that, last year someone I knew went from following every Apple event to buying a Nothing phone (Android), because Apple were offering nothing new for unreasonable prices. The iPhone diminishing returns era is here. Even the US might fix the "wrong colour in chat" problem.
I have to point out that the new M-series MacBooks and the Apple Watch are immensely popular (just to name two), to the point that they changed the established market -- Windows market share dropped below 90% after sitting around 95% for many years, and people who never wore a watch are wearing an Apple Watch every day.
Why don't we have more robots now? We've supposedly been waiting for decades for computer vision and navigation to be solved, and now they are. I can think of quite a few things that I want robots to do right now. I don't want more touchscreens.
There is currently a proliferation of robotics companies of all shapes and sizes. Lots of capital is being deployed into robotics. My best guess is to give it 3-5 years before we start to see and accept that there will be many more robot friends among us, both in industry and in our personal lives.
Yup, I don't think there's a mass market for humanoid robots that cost as much as a high-end car - at least not yet. If you have that kind of money, you can just hire a person and it's a lot easier to get a person to do things how you want.
If AGI isn't created in the next 5 years, then the smart money is going to lose interest in this like it did with crypto and self-driving cars.
There are serious discussions about multi-100B USD compute clusters and building multi-GW power generation. If we could replace all artists plus half of mathematicians and scientists tomorrow, it still wouldn't be worth the investment. Nothing short of summoning the Computer God will justify that.
This is facetious; the magic of the internet was realized instantly, even if it was not immediately embraced full on. The magic of LLMs is still being felt out from the tech it's based on.
Sending signals between machines is a lot bigger than, and different from, dumping everything into a statistical model.
Most of the AI stuff is flashy, sure, but the summarization features they showed on iOS, especially for email and notifications, happening solely on device, are the killer feature.
Lots of anti-AI narrative in the news lately. Not to be cynical, but I have to suspect it partially has to do with AI threatening their business model.
We've had Sama doing a world tour saying it's going to displace so many jobs that people will try to kill him.
We have a hardware company pretending that just adding more TOPS is going to get us to AGI.
We have AGI benchmarks that don't even test for AGI (how can you be 50% AGI?).
At some point it has to deliver insane value, and that point is soon, because the numbers aren't adding up. People are realizing it's good at like 30 of the 3,000 things it's proposed to do.
Alternate take: companies are starting to productize the technology breakthrough of 2023 and the products that are coming out range from underwhelming to worrisome, which doesn't deliver on the breakthrough's hype.
While part of the 2023 hype cycle was built on doomer fantasies of a massive and proximate labor disruption, I don't see a lot of people in the news industry feeling like their "business model is threatened" by Recall and Apple Intelligence, or even by ChatGPT and Copilot. I don't think that's what's driving the coverage.
It's hard to know if this take is meant to be taken seriously... it's so detached from at least how people on HN use LLMs.
"Cheating at homework." Is a calculator cheating? Was using encyclopedias cheating? What about wikipedia? What about Wolfram Alpha? This mentality that one must do all work originally or it's cheating is not how learning works / should work. We should teach kids how to leverage all the tools they can to find the right answer and use critical thinking to be confident the answer is right. If we discourage learning to use these tools we'll actually make kids more likely to just trust the tools at their word when they do go sneakily use them.
"... and their jobs." I'm a software engineer and AI has made the process of debugging issues and implementing common patterns so much faster and less error prone. It's because I'm a highly skilled engineer who knows what to look for (same as when we browse stack overflow) and how to identify good and bad patterns.
It's not laziness, and the work must be high quality, and now I have a tool which helps me get more done in less time without sacrificing quality.
> Is a calculator cheating?

Yes, if you are supposed to be learning arithmetic facts.
> Was using encyclopedias cheating?
Yes, if you are supposed to be learning how to find, evaluate, and cite original sources.
> What about wikipedia?
Often not accepted as a source in school classwork.
> What about Wolfram Alpha?
Same. All these things are great assistants for quickly performing tasks you already know how to do, but they are a hindrance to learning if you don't. And if you are using them blindly, you won't know when they are wrong.
Do you think a student learns anything by copy/pasting a school report from Wikipedia? If you were starting out today, would you become a highly skilled engineer by relying on Copilot? Or did you become a highly skilled engineer by devoting a lot of time, thought, research, and experimenting to your craft?
Yes, if the point of the exercise is to improve your mental arithmetic skills. It's a bit arbitrary, but it's not complicated. This is the same way that steroids are cheating: the competition is defined such that PEDs are considered cheating, so using them is cheating.
I found it particularly strange to invoke the original Macintosh tagline "For the rest of us" to describe what feels like a hastily shoved-together selection of OpenAI API calls and some aesthetically tasteless diffusion models.
Not to mention the reveal itself was extremely strange: for the first 15 minutes, before they showed it in action, it almost felt like they were not releasing anything at all. It came across as very self-conscious.
I feel that level of tagline should only have been invoked when they truly managed to ship something at even the baseline of their competitors today, in a (wishfully) local and secure way.
Apple Intelligence doesn't use OpenAI at all. Siri and the writing tools can tap into ChatGPT to compose text, e.g. to create a story, but that requires approval from the user. Claiming that all Apple did was hastily shove together a selection of OpenAI API calls is a misrepresentation of what Apple showed.
I think you disagree with yourself? How is Apple AI not using ChatGPT if Siri uses ChatGPT? You said it yourself. Heck, Apple said it themselves.
The fact that there is a prompt asking for permission doesn't change that. Maybe some parts of the AI in iPhone don't use ChatGPT but others do, as advertised.
> Apple Intelligence doesn't use OpenAI at all. Siri and writing tools can tap into ChatGPT to compose text, e.g. create a story but that requires approval from the user.
Apple Intelligence runs on-device and on Apple's Private Cloud Compute; it does not use OpenAI. Apple also allows ChatGPT (and later other models) to be called directly from Siri and the writing tools, in cases where Apple Intelligence can't solve the task -- for example, when some text should be composed, like a story or a recipe. Nothing about this contradicts what I have written previously.
Yeah so it seems we agree, Apple resorts to third party AI for some tasks.
"Apple Intelligence" is a strange marketing driven name because it implies to the tech naive that all AI in an iPhone is offline.
When in fact "Apple Intelligence" is just the name for some of the AI functionalities in Apple devices. Some functionalities will still send data to ChatGPT and perhaps other third party providers in the future.
But Apple gets to advertise "Apple Intelligence" as privacy conscious, offline, personal AI and still be technically correct. Classic.
It's very clear when ChatGPT gets involved - you get a prompt asking you if you want to send the query to ChatGPT ("ChatGPT" being referred to by name). There's no feasible way to have your data sent to ChatGPT unintentionally.
Anyway, ChatGPT was a very small portion of the Apple Intelligence demo. The significant majority of the functionality they showed (all the non-ChatGPT stuff) was on-device or in Apple's Private Cloud Compute, the privacy story for which is quite good.
It's as if Apple had a pair of visionary co-founders who are no longer with the company, and the people left are excellent at getting things done but don't have an intuitive understanding of what people want.
Because we don't need AI in our lives as everyday consumers. For business uses, it makes some sense (code completion, proofreading, rewriting, etc.). This isn't a product for consumers; this is a product for shareholders.
Apple's stock would have dipped dramatically if they had held WWDC this year with zero mention of any AI products. Their duty isn't to us as consumers; it's to shareholders, to protect and grow their investment. They just have to play the same game as the rest of the industry. They don't have to explain why consumers need it; that's not the point. The point is that they can't look like they're falling behind.
But humans stay winning.