npalli's comments

Just like San Francisco and Dallas/Texas (from his article) are very different within the US, we should expect a lot of differences within Europe (as others mentioned, he lumps the UK in with the EU). Housing is a general problem for all major cities though; not sure why you think it is unique to London on the whole continent. Stockholm, Paris, Dublin, Lisbon, to name a few, are pretty bad for housing in their own unique ways. Certainly shouldn't be "breaking your brain".

Great summary of the year in LLMs. Is there a predictions (for 2026) blogpost as well?

Given how badly my 2025 predictions aged I'm probably going to sit that one out! https://simonwillison.net/2025/Jan/10/ai-predictions/

Making predictions is useful even when they turn out very wrong. Consider also giving confidence levels, so that you can calibrate going forward.
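
If it helps, here's a minimal sketch of the calibration idea (the predictions and numbers below are made up purely for illustration): score each stated confidence against the outcome with a Brier score, where 0 is perfect and always answering "50%" scores 0.25.

  # Brier score = mean squared error between stated confidence and outcome.
  predictions = [
      # (claim, confidence it happens, did it happen?) -- hypothetical entries
      ("Prediction A", 0.80, True),
      ("Prediction B", 0.60, False),
      ("Prediction C", 0.90, True),
  ]

  brier = sum((p - o) ** 2 for _, p, o in predictions) / len(predictions)
  print(f"Brier score: {brier:.3f}")  # ~0.137 here; lower is better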

I use predictions to prepare rather than to plan.

Planning depends on a deterministic view of the future. I used to plan (esp. annual plans) until about 5 years ago. Now I scan for trends and prepare myself for the different scenarios that could come to pass. Even if you get it approximately right, you stand apart.

For tech trends, I read Simon, Benedict Evans, Mary Meeker etc. Simon is in a better position to make these predictions than anyone else, having closely analyzed these trends over the last few years.

Here I wrote about my approach: https://www.jjude.com/shape-the-future/


Don’t be a bad sport, now!!

Seems very detailed and comprehensive. Did I miss it, or was there a performance comparison to the PyTorch version at the top?

Hi, thanks for the feedback! That's a good point. I did compare to torch, but at a high enough sequence length (~1024) the torch version starts to OOM because it has to materialize the S^2 score matrix in global memory. At small sequence lengths, torch does win, solely on optimised cuBLAS matmuls.
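
(For readers who haven't run into this: a rough sketch of why the naive torch path blows up; the shapes and sizes below are illustrative, not taken from this project.)

  import torch

  def naive_attention(q, k, v):
      # q, k, v: (batch, heads, S, d)
      # `scores` is (batch, heads, S, S): the full S^2 matrix has to be
      # materialized in global GPU memory, and autograd keeps it around
      # for the backward pass.
      scores = (q @ k.transpose(-2, -1)) / (q.shape[-1] ** 0.5)
      return torch.softmax(scores, dim=-1) @ v

  # Memory for `scores` grows quadratically with sequence length S:
  # batch=8, heads=16, S=4096, fp32 -> 8*16*4096*4096*4 bytes ~ 8.6 GB,
  # which is why fused kernels avoid materializing it at all.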

Three things:

1. The Rise: 2005-2010. Google hired Guido van Rossum in 2005 (he stayed on for seven years) and gave the corporate blessing that made everyone comfortable with moving from Perl to Python. It was seen as the language of scientists and smart people, so a lot of people working in misc. languages like Fortran, MATLAB and Perl moved over. To remove the speed issue, the official Google mantra was "Python where we can, C++ where we must". AI heavyweights like Peter Norvig (I think he was the chief AI scientist at one point, and co-author of the famous AIMA book) promoted Python as an acceptable Lisp.

2. Near Death: 2010-2015. Python almost died from the self-inflicted wound of the 2 -> 3 transition, and there was a good chance it would have gone nowhere, like many languages before it. Guido also moved away from Google, and Google seemed to have shifted its attention to Golang (apart from the standard C++ and Java). BTW, Python's dominance was not seen positively within Google, hence they stopped actively promoting it. For example, a leaked transcript from Eric Schmidt had him saying this:

   So another definition would be language to Python, a programming language I never wanted to see survive and everything in AI is being done in Python.
https://gist.github.com/sleaze/bf74291b4072abadb0b4109da3da2...

3. Resurrection: 2015-Now. Data science and ML took off and Python was right there, thanks to the initial sponsorship from Google and an ecosystem of scientists and engineers who were already familiar with it (including working in the two-language mode). No language could rival it at this point.

Most of the syntax and power considerations etc. are sideshows, as most scripting languages just tap into very powerful libraries written in C/C++/Fortran, or wrappers around the shell. I doubt that distinguishes Python to the point where it has become so dominant.


Confluent was trading at less than 50% of its IPO price when IBM made the offer. The stock and the company have been going sideways for several years now: revenues keep growing, but losses grow even faster, most of them in Sales and Marketing. In which world is this seen as some sort of extraordinary company that will get sabotaged by IBM? Seems Confluent management has seen the writing on the wall; IBM will clean up (fire a bunch of sales and management guys) and make this a workable business. It will feel brutal for some Confluent folks, but that's because their business is broken, and only someone from outside can come in and fix it, as the current senior management cannot.

IBM has been around for over a hundred years, maybe they know a thing or two about running a software business :-)


I joined IBM over 40 years ago, like my pappy before me.

My main takeaway from IBM's longevity is just how astonishingly long big companies' death rattles can be, not how great IBM are at running software businesses.


Are they dying? IBM’s stock is up 160% over the past 5 years.


No. They are a multi-generational institution at this point and they are constantly evolving. If you work there it definitely FEELS like they are dying because the thing you spent the last 10 years of your career on is going away and was once heralded as the "next big thing." That said, IBM fascinated me when I was acquired by them because it is like a living organism. Hard to kill, fully enmeshed in both the business and political fabric of things and so ultimately able to sustain market shifts.


That's an interesting and enlightening way to look at it.

For me it was the death of IBM's preeminence in IT. When I started there, a job at IBM was prestigious, a job for life. More than once I was told that we had a lengthy backlog of inventions and technological wonders that could be wheeled out of the plant if competitors ever nipped too closely at IBM's heels.

At that time IBM had never made a single person redundant - anywhere in the world. The company had an incredibly sophisticated internal HR platform that did elaborate succession planning and treated training and promotion as major workforce factors - there was little need to think much about recruitment, because jobs were for life. IBM could win any deal, maybe needing only to discount a little if things were very competitive.

It's impossible to imagine now what a lofty position the company held. It's not unfair to say that, if not dead, the IBM of old is no longer with us.


It is absolutely a different company. I had the opportunity to intern there twice in the late 70's and then was acquired by them in 2015; the IBM of 1978 and the IBM of 2015 were very different businesses. Having "grown up", so to speak, in the Bay Area tech company ecosystem, where companies usually died when their founding team stopped caring, IBM was a company that had decided, as an institution, to encapsulate what it took to survive in the company's DNA. I had a lot of great discussions (still do!) with our integration executive (that is the person responsible for integrating the acquisition into the larger company) about IBM's practices in terms of change.


To be fair, the whole job market has changed. Layoffs and the death of "a job for life" are not unique to IBM.

I think the pace of progress and innovation has, for better or worse, meant that companies can no longer count on successfully evolving only from the inside through re-training and promotions over the average employee's entire career arc (let's say 30 years).

The reality is that too many people who seek out jobs in huge companies like IBM are not looking to constantly re-invent themselves and learn new things and keep pushing themselves into new areas every 5-10 years (or less), which is table stakes now for tech companies that want to stay relevant.


Honestly, I think that's people reacting to the market more than it's the market reacting to people.

If your average zoomer had the ability to get a job for life that paid comparably well by a company that would look after them, I don't think loyalty would be an issue.

The problem is today, sticking with a company typically means below market reward, which is particularly acute given the ongoing cost of living crises affecting the west.


I interned there one summer, about 9 years ago, and even then it felt like the company had died 30 years prior. It's a super weird place to work.


I don't think they're dying at all, they've just become yet another consultancy/outsourcing shop.


turning into a rent-seeking-behavior engine.

the final end-state of the company, like a glorious star turning into a black hole


exactly


IBM is a consulting business, not a software business. Their software sucks, and every actual software engineer knows it. IBM has a business selling to big, old, backwards enterprise businesses who wouldn't know good software from literal pieces of faeces.


That's not entirely true. Db2, for example, is a well-respected database.


To me it makes sense when it comes to the stock. It's not like someone goes to Robinhood or whatever and goes... Hey you know what's underrated? Kafka! Calls on Confluent!


Are they really a software business?

Investopedia says[0] they make 60% of their profit from "Software" but how much of that is "providing cloud solutions" and similar software-adjacent consulting exercises?

[0]: https://www.investopedia.com/how-ibm-makes-money-4798528


Not a Hacker News take I would have expected 10 years ago. Today, though. I agree.


Kind of a strange take, as though this were unique to software. Every sector that is large has issues, since ambitious projects stretch what can be done with current management and organizational practices. All software articles like these hark back to some mythical world that was smaller in scope/ambition/requirements. Humanity moves forward:

* Construction and Engineering -- Massive cost overruns and schedule delays on large infrastructure projects (e.g., public transit systems, bridges)

* Military and Government -- Defense acquisition programs notorious for massive cost increases and years-long delays, where complex requirements and bureaucratic processes create an environment ripe for failure.

* Healthcare -- Hospital system implementations or large research projects that exceed budgets and fail to deliver intended efficiencies, often due to resistance to change and poor executive oversight.


Python is nothing without its batteries.


The design and success of e.g. Golang is pretty strong support for the idea that you can't and shouldn't separate a language from its broader ecosystem of tooling and packages.


The success of python is due to not needing a broader ecosystem for A LOT of things.

They are of course now abandoning this idea.


> The success of python is due to not needing a broader ecosystem for A LOT of things.

I honestly think that was a coincidence. Perl and Ruby had other disadvantages, Python won despite having bad package management and a bloated standard library, not because of it.


The bloated standard library is the only reason I kept using python in spite of the packaging nightmare. I can do most things with no dependencies, or with one dependency I need over and over like matplotlib

If python had been lean and needed packages to do anything useful, while still having a packaging nightmare, it would have been unusable


Well, sure, but equally I think there would have been a lot more effort to fix the packaging nightmare if it had been more urgent.


There was a massive effort though, the proliferation of several different package managers is evidence of that.


Maybe. A lot of them felt like one-person projects that not many people cared about. I think that on the contrary, part of the reason so many different package managers could coexist with no clear winner emerging was that the problem wasn't very serious for a lot of the community.


The bloated standard library is the reason why you can send around a single .py file to others and they can execute it instantly.

Most Python users are neither aware of venv, uv, pip and all of that, nor able to use them.
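
As a hedged illustration of that point, a single file like the sketch below needs nothing beyond a stock Python install, so the recipient can just run `python3 report.py` without ever hearing about venvs (the URL and field names are hypothetical):

  #!/usr/bin/env python3
  """Single-file report script: no pip, no venv, no uv -- just the stdlib."""
  import csv
  import json
  import statistics
  import urllib.request

  URL = "https://example.com/metrics.json"  # hypothetical endpoint

  with urllib.request.urlopen(URL) as resp:
      rows = json.load(resp)                # expects a JSON list of objects

  values = [row["latency_ms"] for row in rows]
  print("median latency:", statistics.median(values))

  with open("report.csv", "w", newline="") as f:
      writer = csv.writer(f)
      writer.writerow(["latency_ms"])
      writer.writerows([v] for v in values)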


It's because Ruby captured the web market and Python everything else, and I get that "everything else" is more timeless than a single segment.


Ruby was competing in the web market and lost to many others, including Python, in part because Python had a much broader ecosystem, PHP had wide adoption through WordPress and others, and JavaScript was expanding beyond browsers.


Python is its batteries.


But why whenever I try to use it, it tries to hurt me like it's kicking me right in my batteries?


What language is used to write the batteries?


C/C++, in large part


These days it's a whole lot of Rust.


These days it’s still a whole lot of Fortran, with some Rust sprinkled on top. (:


Which since Fortran 2003, or even Fortran 95, has gotten rather nice to use.


IDK it's become too verbose IMHO, looks almost like COBOL now. (I think it was Fortran 66 that was the last Fortran true to its nature as a "Formula Translator"...)


We are way beyond comparing languages to COBOL, now that plenty of folks type whole book-sized descriptions into tiny chat windows for their AI overlords.


And below that, FORTRAN :)


I hear this so much from Python people -- almost like they are paid by the word to say it. Is it different from Perl, Ruby, Java, or C# (DotNet)? Not in my experience, except people from those communities don't repeat that phrase so much.

The irony here: we are talking about data science. 98% of "data science" Python projects start by creating a virtual env and adding Pandas and NumPy, which have numerous (really: squillions of) dependencies outside the standard library.


Someone correct me if I'm completely wrong, but by default (i.e. precompiled wheels) numpy has 0 dependencies and pandas has 5, one of which is numpy. So not really "squillions" of dependencies.

  pandas==2.3.3
  ├── numpy [required: >=1.22.4, installed: 2.2.6]
  ├── python-dateutil [required: >=2.8.2, installed: 2.9.0.post0]
  │   └── six [required: >=1.5, installed: 1.17.0]
  ├── pytz [required: >=2020.1, installed: 2025.2]
  └── tzdata [required: >=2022.7, installed: 2025.2]


Read https://numpy.org/devdocs/building/blas_lapack.html.

NumPy will fall back to internal and very slow BLAS and LAPACK implementations if your system does not have a better one, but assuming you're using NumPy for its performance and not just the convenience of adding array programming features to Python, you're really gonna want better ones, and what that is heavily depends on the computer you're using.

This isn't really a Python thing, though. It's a hard problem to solve with any kind of scientific computing. If you insist on using a dynamic interpreted language, which you probably have to do for exploratory interactive analysis, and you still need speed over large datasets, you're gonna need to have a native FFI and link against native libraries. Thanks to standardization, you'll have many choices and which is fastest depends heavily on your hardware setup.
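
If you want to check what your install actually links against, a quick sketch (np.show_config() is standard NumPy; the matmul timing is only a rough sanity check, not a benchmark):

  import time
  import numpy as np

  # Prints build info, including which BLAS/LAPACK this NumPy was linked
  # against (OpenBLAS, MKL, Accelerate, or the slow reference fallback).
  np.show_config()

  # Rough sanity check: a large matmul dispatches to BLAS gemm, so a decent
  # backend should finish this in well under a second on modern hardware.
  a = np.random.rand(2000, 2000)
  t0 = time.perf_counter()
  _ = a @ a
  print(f"2000x2000 matmul: {time.perf_counter() - t0:.2f}s")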


The wheels will most likely come with OpenBLAS, so while you can end up with the reference BLAS (which is really only slow by comparison; for small tasks users likely won't notice), this is generally not an issue.


I don't know about _squillions_, but numpy definitely has _requirements_, even if they're not represented as such in the python graph.

e.g.

  https://github.com/numpy/numpy/blob/main/.gitmodules (some source code requirements)
  https://github.com/numpy/numpy/tree/main/requirements (mostly build/ci/... requirements)
  ...


They're not represented because those are build-time dependencies. Most users, when they do pip install numpy or equivalent, just get the precompiled binaries, and none of those get installed. And even if you compile it yourself, you still don't need them to run numpy.


Seems this is basically a DGX Spark with 1TB of disk, so about $1,000 cheaper. The DGX Spark has not been received well (at least online: Carmack saying it runs at half the spec, low memory bandwidth, etc.), so perhaps this is a way to reduce buyer's regret; you are out only $3,000 and not $4,000 (as with the DGX Spark).



He is very enthusiastic about new things, but even he struggled (for example, the first link is about his out-of-the-box experience with the Spark, and it wasn't a smashing success).

  Should you get one?

  It’s a bit too early for me to provide a confident recommendation concerning this machine. As indicated above, I’ve had a tough time figuring out how best to put it to use, largely through my own inexperience with CUDA, ARM64 and Ubuntu GPU machines in general.

  The ecosystem improvements in just the past 24 hours have been very reassuring though. I expect it will be clear within a few weeks how well supported this machine is going to be.


Performance-wise, it was able to spit out about half of a buggy version of Space Invaders as a single HTML file in roughly a minute.


I’m pretty sure I could spit out something that doesn’t work in half a minute.


Don't undersell it. The game is playable in a browser. The graphics are just blocks, the aliens don't return fire. There are no bunkers. The aliens change colors when they descend to a new level (whoops). But for less than 60 seconds of effort it does include the aliens (who do properly go all the way to the edges, so the strategy of shooting the sides off of the formation still works--not every implementation gets that part right), and it does detect when you have won the game. The tank and the bullets work, and it even maintains the limit on the number of bullets you can have in the air at once. However, the bullets are not destroyed by the aliens so a single shot can wipe out half of a column. It also doesn't have the formation speed up as you destroy the aliens.

So it is severely underbaked but the base gameplay is there. Roughly what you would expect out of a LLM given only the high level objective. I would expect an hour or so of vibe coding would probably result in something reasonably complete before you started bumping up into the context window. I'm honestly kind of impressed that it worked at all given the minuscule amount of human input that went into that prompt.


I do think that people typically undersell the ability of LLMs as coding assistants!

I'm not quite sure how impressed to be by the LLM's output here. Surely there are quite a few simple Space Invaders implementations that made it into the training corpus. So the amount of work the LLM did here may have been relatively small; more of a simple regurgitation?

What do you think?


>The aliens change colors when they descend to a new level (whoops).

That is how Space Invaders originally worked; it used strips of colored cellophane to give the B&W graphics color, and the aliens moved behind a different colored strip on each level down. So, maybe not a whoops?

Edit: After some reading, I guess it was the second release of Space Invaders which had the aliens change color as they dropped, first version only used the cellophane for a couple parts of the screen.


I think this is the key: it can do impressive stuff, but it won't be fast. For that, you have to put in an NVIDIA data center / AI Factory.


He likes everything.


"I don't think I'll use this heavily"


Some of the stuff in the Carmack thread made it sound like it could be due to thermals, so maybe it could reach the spec, or come a lot closer to it, but not sustain it; and if this has better cooling, maybe it does better? I might be off on that.


I'd love to see how far shucking it and using aftermarket cooling will go. Or perhaps it's hard-throttled for market segmentation purposes?


I don't understand the DGX Spark hate. It's clearly not about performance (it's a small, low-TDP device) but about the ability to experiment with bigger models, i.e. a niche between the 5090 and the 6000 Pro, and specifically for people who want CUDA.


Wasn't it shown that Carmack just had incorrect expectations, based upon misunderstanding the details of the GPU hardware?

From rough memory, something along the lines of "it's an RTX, not RTX Pro class of GPU" so the core layout is different from what he was basing his initial expectations upon.


Except Carmack, as much as I hate to say it, was simply wrong. If you run the GPU at full throttle then you get the power draw that he reported. However, if you run the CPU AND the GPU at full throttle, then you can draw all the power that’s available.


Rust is a worse C++ with modern tooling.


How will Fivetran and dbt, who are detested for being overpriced and underfeatured in the segment they are supposed to be good at (ETL/ELT), take on being a data lake? That is orders of magnitude more complex to engineer and operate, and they have no experience there. This is really a play to consolidate, get rid of duplicate functions, and provide a better experience to customers.


This seems to underestimate the engineering muscle these companies have. Fivetran is well capable of building a query engine, and with this merger they also get access to SDF's query engine. They have the engineering capabilities, as well as the capital to attract the talent where needed.

I would not underestimate any of these players in the space.

