Hacker News | jonathaneunice's comments

That's Dan Frye's article, and it is, um, a little Dan Frye-centric. He was a legitimately important contributor to IBM's technology management team around Linux and open source, especially as and after IBM made the turn.

But it reads as if he called the shot and piloted the turn. That is not my recollection or understanding. Other folks contributed as much or more to driving the Linux/open source pivot. Irving Wladawsky-Berger, the late Scott Handy, _et al_. It's IBM, so there were a ton of folks involved and contributing.

My source: I was an industry analyst and consultant in the server / system software space at the time, and I was in at least a few of the rooms where it happened.


We remember the 'Linux is 10' ads from later: https://youtu.be/x7ozaFbqg00#linuxistenyearsold

Love a good rant or an artfully scathing review!

This seems very consistent with the refactoring techniques taught by Sandi Metz (https://sandimetz.com/99bottles). After taking her course, I successfully applied those techniques to good ends and outcomes.

Not sure "refactor in context" is the tool for every single last refactoring job in the universe. Some plumbing changes may be large or systematic enough that they need to be separately planned and applied, especially as explaining "oh I changed the fundamental way we do things, just in passing" can be a hard PR to present. OTOH, since adopting the "in context" approach I have had many fewer refactoring attempts abandoned, and refactoring seems much more logical and purposeful. So it works IME.


The complexity and frustration are in no way accidental. It's a carefully designed, obfuscated, Byzantine process built for exactly this effect.

> You're supposed to be so beaten down, so utterly depleted of will, that you just cave. [...] You disable a bunch of parental controls you don't really understand. You let your kid play his damn game. You become the ideal customer.

Exactly so. Parental controls, privacy settings, permission to show ads and collect infinite tracking data… The machine is working exactly as intended. Maybe there are sentiments that "the parents should have some control" and maybe there are some laws about protecting children or protecting consumer privacy. But hey, what if actually using any of those mechanisms was mind-bendingly difficult and annoying? What if your control were only available downstairs, in the unlighted cellar, at the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying "Beware of the Leopard." We'd still be in compliance, right? Heh heh. Yeah. That's the ticket!


Reminds me of when Facebook added “privacy” controls that were virtually impossible to find, difficult to understand, and confusing enough to give a false sense of security.


It paved the way for a lot more than that. At a time open source in general, and Linux in particular, did not have much corporate buy-in, IBM signaled "we back this" and "we're investing in this" in substantial ways that corporate IT executives could hear and act upon. That was a pre-cloud, pre-hyperscaler era when "enterprise IT" was the generally understood "high end" of the market, and IBM ruled that arena. IBM backing Linux and open source paved the way for a large swath of the industry—customers, software vendors, channel/distribution partners, yadda yadda—to do likewise.


agree - and the big industry consortium building `gcc` was already proving itself


Cosigned!

Em dash forever! Along with en dash for numerical ranges, true ellipsis not that three-period crap, true typographic quotes, and all the trimmings! Good typography whenever and wherever possible!


I agree we all ought to use available punctuation marks correctly. That said, I am compelled to lodge a formal complaint against quoted text arbitrarily assimilating punctuation from its surrounding context.

Quoted text is a sacred verbatim reproduction of its original source. Good authors are very careful to insert [brackets] around words inserted to clarify or add context, and they never miss an oppurtunity (sic) to preserve the source's spelling or grammatical mistakes. And yet quoted text can just suck in a period, comma, or question mark from its quoted context, simply handing the quoting author the key to completely overturn the meaning of a sentence?! Nonsense! Whatever is between the quotes had better be an exact reproduction, save aforementioned exceptions and their explicit annotations. And dash that pathetic “bUt mUH aEstHeTIcS!” argument on the rocks!

“But it's ugly!”, says you.

“Your shallow subjective opinion of the visual appearance of so-called ugly punctuation sequences is irrelevant in the face of the immense opportunity for misbehavior this piffling preference provides perfidious publications.”, says I.


I completely agree; this is perhaps the least sensible part of common English syntax.

   "Hello," he said.  
   "Hello", he said.
Only one of these makes actual sense as a hierarchical grammar, and it's not the commonly accepted one! If enough of us do it correctly perhaps we can change it.


I’ve always wondered about this. I guess typographically they should just occupy the same horizontal space, or at least be kerned closer in such a way as to prevent the ugly holes without cramming.

It’s true, though, that the hierarchically wrong option looks better, IMHO. The whitespace before the comma is intolerable.

This is an interesting case where I am of two autistic hearts, the logical one slowly losing vehemence as I get older and become more accepting of traditions.


It's especially obvious as a programmer.


I am all for using proper typographic symbols, but it is unclear what place the precomposed ellipsis U+2026—what I assume you mean by “true ellipsis”—has in that canon, especially with the compressed form it takes in most fonts.


En dash for ranges is too easily confused for a minus sign. I would rather use a different symbol altogether.


And two spaces after a period! Who's with me?


Not Matthew Butterick (nor all major English-language style guides): https://practicaltypography.com/one-space-between-sentences....

I only discovered two spaces after a full stop/period was a thing after moving to the U.S., and apparently only in people over 40.


I learned of it only by learning Emacs! There are movement keys to move to the next/previous sentence, and I couldn't understand why they never worked for me.


It's how Millennials and our predecessors were taught to type in school, and it's muscle memory. Very hard to unlearn.


It's not that I have any trouble doing one or two spaces. I just think it's a bit arrogant of any group to decide something is "wrong".

Also, Pluto is still a planet because the new planet definition is absolutely stupid, and it wasn't really their word to work with anyway.


And text figures! And proper small caps!!


Agreed. Good typography is good writing.


Debugging is a completely different and better animal when collections have a predictable ordering. Else, every dict needs ordering before printing, studying, or comparing. Needlessly onerous, even if philosophically justifiable.
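
For concreteness, here's roughly what that chore looks like in Python (a minimal sketch; the function and data names are invented for illustration):

    def debug_dump(d):
        # Print dict items in a stable order so two runs can be diffed.
        for key in sorted(d):                # impose an order explicitly
            print(f"{key!r}: {d[key]!r}")

    config = {"region": "us-east", "retries": 3, "user": "alice"}
    debug_dump(config)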


That's a great link and recommended reading.

It explains a lot about the design of Python container classes, and the boundaries of polymorphism / duck typing with them, and mutation between them.

I don't always agree with the choices made in Python's container APIs...but I always want to understand them as well as possible.

Also worth noting that understanding changes over time. Remember when GvR and the rest of the core developers argued adamantly against ordered dictionaries? Haha! Good times! Thank goodness their first wave of understanding wasn't their last. Concurrency and parallelism in Python was a TINY issue in 2006, but at the forefront of Python evolution these days. And immutability has come a long way as a design theme, even for languages that fully embrace stateful change.


> Also worth noting that understanding changes over time. Remember when GvR and the rest of the core developers argued adamantly against ordered dictionaries? Haha! Good times!

The new implementation has saved space, but there are opportunities to save more space (specifically after deleting keys) that they've now denied themselves by offering the ordering guarantee.


Ordering, like stability in sorting, is an incredibly useful property. If it costs a little, then so be it.

This is optimizing for the common case, where memory is generally plentiful and dicts grow more than they shrink. Python has so many memory inefficiencies that occasional tombstones in the dict's internal structure are unlikely to be a major effect. If you're really concerned, do `d = dict(d)` after aggressive deletion.
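
Roughly what that rebuild looks like (a sketch only; whether the table shrinks on deletion and the exact sizes are CPython implementation details and vary by version):

    import sys

    d = {i: str(i) for i in range(10_000)}
    for i in range(9_990):            # delete most keys; the big table sticks around
        del d[i]

    before = sys.getsizeof(d)
    d = dict(d)                       # rebuild: copies the survivors into a compact table
    after = sys.getsizeof(d)
    print(before, "->", after)        # the rebuilt dict is much smaller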


> Ordering, like stability in sorting, is an incredibly useful property.

I can't say I've noticed any good reasons to rely on it. Didn't reach for `OrderedDict` often back in the day either. I've had more use for actual sorting than for preserving the insertion order.


Ordering is very useful for testing.

This morning, for example, I was testing an object serialized through a JSON API. My test data never seemed to match from one run to the next.

After a while, I realized one of the objects was using a set of objects, which the API turned into a JSON array, but the order of that array would change depending on the initial Python VM state.

Three days ago, I used itertools.groupby to group a bunch of things. But itertools.groupby only works on iterables that are sorted by the grouping key.

Now granted, none of those recent examples are related to dicts, but dict is not a special case. And it's iterated over regularly.
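
Two tiny sketches of the fixes, with invented data, for anyone following along: sort the set before serializing, and sort before grouping.

    import itertools
    import json

    tags = {"alpha", "beta", "gamma"}             # set iteration order isn't guaranteed
    payload = json.dumps({"tags": sorted(tags)})  # stable fixture: impose an order first

    records = [("fruit", "apple"), ("veg", "kale"), ("fruit", "pear")]
    records.sort(key=lambda r: r[0])              # groupby only merges adjacent items
    groups = {k: [v for _, v in g]
              for k, g in itertools.groupby(records, key=lambda r: r[0])}
    print(groups)                                 # {'fruit': ['apple', 'pear'], 'veg': ['kale']}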


Personally, I find lots of reasons to prefer an ordered dict to an unordered one. Even small effects like "the debugging output will appear in a consistent order, making it easier to compare" can be motivation enough in many use cases.


It's sometimes nice to be deterministic.

I don't often care about a specific order, only that I get the same order every time.


Thinking about this off the top of my head, I'm actually wondering why this is useful outside of equality comparisons.

Granted, I live and work in TypeScript, where I can't `===` two objects, but I could see this deterministic behavior making it easier for a language to compare two objects, especially if equality comparison depends on a generated hash.

The other is guaranteed iteration order, if you rely on the index-contents relationship of an iterable. We're talking about dicts, which are keyed, but extending this idea to lists, I can see the usefulness in some scenarios.

Beyond that, I'm not sure it matters, but I also realize I could simply not have enough imagination at the moment to think of other benefits.


I work on a build system (Bazel), so perhaps I care more than most.

But maybe it does all just come down to equality comparisons. Just not always within your own code.


Being able to parse something into a dict and then serialise it back to the same thing is a bit easier. Not a huge advantage, though.
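
A small sketch of what that buys you in Python 3.7+ (hypothetical document; json's default separators assumed):

    import json

    original = '{"name": "example", "version": 2, "tags": ["a", "b"]}'

    parsed = json.loads(original)        # keys land in the dict in document order
    round_tripped = json.dumps(parsed)   # and come back out in that same order

    assert round_tripped == original     # byte-for-byte identical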


Same. Recently I saw interview feedback where someone complained that the candidate used OrderedDict instead of the built-in dict that is now ordered, but they'll let it slide... As if writing code that will silently do different things depending on the minor Python version is a good idea.


Well, it's been guaranteed since 3.7, which came out in 2018, and 3.6 reached end-of-life in 2021, so it's been a while. I could see the advantage if you're writing code for the public (libraries, applications), but for example I know at my job my code is never going to be run with Python 3.6 or older.


Yeah, if you have that guarantee then I wouldn't fault anyone for using dict, but also wouldn't complain about OrderedDict.


Honestly, if I were writing code that depended on dicts being ordered, I think I'd still use OrderedDict in modern Python. It gives the reader more information that I'm doing something slightly unusual.
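
For what it's worth, OrderedDict isn't only a readability signal; it still behaves differently from a plain dict in a couple of ways. A quick sketch (my own example, not from the thread):

    from collections import OrderedDict

    assert {"a": 1, "b": 2} == {"b": 2, "a": 1}   # dict equality ignores order

    oa = OrderedDict(a=1, b=2)
    ob = OrderedDict(b=2, a=1)
    assert oa != ob                               # OrderedDict equality is order-sensitive

    oa.move_to_end("a")                           # reordering helper plain dict lacks
    assert list(oa) == ["b", "a"]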


Same. Usually if a language has an ordered map, it's in the name.


Indeed! I don't understand why it isn't more common for stdlibs to include key-ordered maps and sets. Way more useful than insertion ordering.


Presumably because it involves different performance characteristics.


It seems like opinions really differ on this item, then. I love insertion-order preservation in mappings, and Python adopting it was a big revelation. The main reason is that keys need some order, and insertion order -> iteration order is a lot better than pseudorandom (hash-based) order.

For me, it creates more reproducible programs and scripts, even simple ones.


Ordering is specifically a property (useful or not) that a set doesn't have. You need a poset for it to be ordered.

I would expect to use a different data structure if I needed an ordered set.


Does your code actually rely on that? I've never once needed it.


Perl's decline was cultural in the same way VMS's decline was. A fantastic approach and ecosystem—just one overtaken by a world that was moving on to a different set of desires and values.

PHP emerged as a separate language and community and "ate Perl's lunch" when it came to the dominant growing app style of the Aughties... web pages and apps. Had PHP instead been a Rails-like extension of Perl for the web, sigils would have reigned for many years more. But there was never a WordPress, Drupal, or similar in Perl, and no reason for anyone who wasn't already highly connected to the Unix / sysadmin community to really give Perl another look.

By 2005 if you weren't already deep into Perl, you were likely being pulled away to other communities. If you were into Perl, you were constantly lamenting why newbies and web devs weren't using your language, just like the DECUS and VMS crowd did a decade earlier as Unix and Windows consumed all the oxygen and growth opportunities in the room.


VMS's decline was bound to a failing hardware business and company. That's a very different thing. Unix was around in the 80s and VMS did fine; it's when DEC's hardware business went down the tubes that VMS lost out big time.


VAX hardware did eventually run out of steam by the early 1990s. But VMS had already failed to capture the next generation of growth. Even while DEC was still the number 2 computer company, 5–10x the size of its competitors, and growing $1B/yr in revenues.

Unix (workstations first, then servers), PCs (DOS and NetWare, later Windows), packaged enterprise apps (e.g. IBM AS/400), "data warehousing," fault tolerant apps—a lot of those were not things happening on VAX or VMS or any other DEC product. The fight was already structurally lost and VMS aficionados were already bemoaning DEC's failure to pick up new workloads and user communities even when VAX was still going strong.

VMS declined like almost all proprietary minicomputer environments. AOS, ClearPath, GCOS, MPE, NonStop, PRIMOS, VOS, and VMS...all fine products just ones no longer serving the high-growth opportunities.

Declining investment streams hamstrung proprietary CPU/system development efforts compared to higher-volume alternatives (PC or Unix), so they got successively slower and relatively more expensive each generation. Proprietary environments weren't designed to ever escape their home fields, nor were their companies set up to profit from opening up / spreading out. A few tried, very belatedly, but... by that point, dead man walking.

So from this vantage, same pattern, not a different thing. Perl and VMS were awesome in their original home fields, but were not quick to, or even able to, capitalize on the new directions and workloads customers grew to want.


> lot of those were not things happening on VAX or VMS or any other DEC product

I would argue that VMSCluster was best in class for clusters at the time, and they quickly rolled it out over commodity Ethernet. It had many features that many others didn't have.

RDB was one of the better database systems on the market, and Oracle says it's a great acquisition for them.

DEC's storage teams were very innovative and were still growing strongly in the early 90s. And still a profit maker for HP years later. They sadly stuck to VAX only for far too long.

Even their tape division was innovative and made hundreds of millions in profit under Quantum.

And they arguably had the best chip design teams in the world. Those teams just worked on VAX, and that was of course much more difficult than if they had worked on RISC. They were competitive but had to build larger, more expensive chips. And when those teams with their internal tools finally did RISC, they blew the doors off everybody with Alpha (Alpha of course opposed by Olsen).

And in terms of other chip innovation, they did StrongARM, which had major potential to capture a new market.

They did have a good understanding of the internet and networking, the internet having been developed largely on PDP-10s, and of course their research division was doing great stuff like AltaVista or developing MP3 players.

DEC also developed pretty advanced media servers.

Lack of innovation was not the issue. And VMS didn't miss much from a server OS perspective. DEC was pretty consistently doing pretty amazing stuff, technically speaking. And being proprietary really wasn't an issue, as Windows NT and UNIX were too. IBM was. EMC was. Sun was, even while talking about open systems. Solaris wasn't really open. Sun machines sold because they built good SMP machines that ran Oracle well.

And sure, DEC didn't 'win' every new growth market; nobody can and nobody did. But they were very good in a lot of places.

> Declining investment streams hamstrung proprietary CPU/system development efforts compared to higher-volume alternatives (PC or Unix),

Multiple things are wrong with this statement. Unix machines were not really high volume; Sun, IBM, SGI, HP, and many others split the market, all with their own CPU and their own fab or fab partner. And DEC invested as much or more in chip development as those others did. And their chip design teams and fabs were among the best in the world in the 90s.

Sure, literally nobody could match Intel and the PC and eventually Intel would 'win' for sure.

> Proprietary environments weren't designed to ever escape their home fields, nor were their companies set up to profit from opening up / spreading out.

Massive SMP servers weren't Sun's 'home field', and not that of Unix, and yet Sun made that its business through most of the second half of the 90s.

HP's home field was not PCs and printers, yet they made it their business. You as a company can sell PCs, Unix workstations, and proprietary minicomputer-derived systems. Your OS doesn't have to conquer all.

> Perl and VMS were awesome in their original home fields, but were not quick to, or even able to, capitalize on the new directions and workloads customers grew to want.

I would point out that VMSCluster, backed by DEC's storage systems, was exactly a growth field that SHOULD have continued to be a great seller in the age of massive growth for internet/networking.

Had VMS run on MIPS, and had DEC by 1990 sold MIPS servers that could run single- or multi-processor Unix or VMS (with support for VMSCluster over commodity Ethernet), then I think VMS could have done very well. But VMS was trapped on under-performing, insanely expensive VAX systems.

> Proprietary environments weren't designed to ever escape their home fields, nor were their companies set up to profit from opening up / spreading out.

The market for VAX/VMS was large server systems. And you don't need to 'escape' that to make money, just as IBM made money without escaping their mid-range and high-end mainframe business. What you need to do is execute and have a best-in-class product in that segment. Continue to serve the massive customer base you have and compete for those you don't have.

That segment was almost continuously growing, so even if you just kept market share, you were going to do really well.

The issue was that from about 1986 the competition, namely Sun and HP, started to compete much better, and later IBM joined, while DEC continued to execute worse and worse, fluttering around without a clear strategy.

And when their core systems didn't do well, their other systems, like storage and others, suffered too, because they were VAX only. DEC's storage division was actually successful once they started to become more commodity. Their workstation products and PCs did well when they were closer to commodity.

So I think it's wrong to suggest these companies couldn't profit from more openness and spreading out. DEC Storage, DEC Printers, DEC Networking, DEC Tape should all have embraced spreading out. But in many cases Olsen refused ideas in that direction.

Here is my list of the biggest issues in execution:

1. Complete mishandling of microcomputers. Olsen's idea of 3 competing internal products launched on the same day.

2. Complete mishandling of workstations. Refusing to authorize a VMS-based workstation despite many in the company pushing for something like it. Of course Apollo were former DEC people, and many DEC people ended up at Sun for that exact reason (including Bernard Lacroute, essentially Sun's CTO).

3. Failure to develop RISC despite MANY internal teams all convinced that it was the right decision. Then trying to unify on one thing, PRISM. Then changing what PRISM was many times over, never making a decision. Then eventually deciding on 32-bit PRISM specifically only for workstations, narrowly focused on beating Sun, rather than revamping the VAX line around this new architecture.

4. Then canceling PRISM in favor of MIPS, but then not making 32-bit MIPS their new 32-bit standard and porting VMS to it. They even had the license to develop their own MIPS chip, or could have even just acquired MIPS (like SGI later did). MIPS had a customer base and, with DEC's engineering teams and fab power behind it, could have done very well. MIPS was Ultrix only, leading to a situation where their Ultrix product was better price/performance than their VMS product.

5. Believing high-end computers would continue not to be built on microprocessors. Despite their own VAX CMOS teams putting out very high quality chips, they had like 3-4 different 'high-end' VAX teams all producing machines that were noncompetitive already by 1988. Literally billions of dollars wasted on machines that had no market. VAX 9000 being the worst offender, but not the only offender. Ironically they had the exact VMSCluster software you needed to sell clusters of mid-range RISC servers, rather than individual mainframes.

6. After the initial microcomputer failure, they also didn't handle the PC ecosystem well. Olsen didn't like the PC, and despite DEC having lots of knowledge and the infrastructure to become a clone builder, they didn't do great with it. No reason that DEC couldn't have done what HP did; HP was also a minicomputer company that started getting into clones. DEC had more experience with mass manufacturing thanks to their massive terminal business.

7. Refusing to see a reality that was clear. Not downsizing or adjusting to reality in any way. And then eventually downsizing in a way that cost literally billions, giving people deals that were actually insane in how generous they were (at least in the first few waves). This essentially burned their 80s VAX war chest to the ground.

8. While Alpha was amazing and a success story, it was also 'too early' on the market. Most people simply didn't need 64-bit yet. HP for example did an analysis of the market and came to the conclusion that 32-bit would be fine for another 1-2 generations. DEC continuing not to do RISC-based VMS on 32-bit killed their market in that range. And Alpha wasn't optimized for Unix because, again, this VMS-Unix split thinking. They bet on Alpha becoming the standard, but there was zero chance of Sun, HP, IBM, SGI adopting it. And even when they had the opportunity to move it toward being widely adopted, by selling to Apple, they didn't really even want to do that. Instead of AIM we could have had ADM. Gordon Moore also tried to get Intel to adopt Alpha for their 64-bit, but again, no deal. Intel went with HP and Itanium.

9. This is basically what led to Olsen finally being removed. But then they gave the job to Robert Palmer. And he was just the wrong person for the job; he got the job because he really wanted the job. He was a semiconductor guy who had no clue whatsoever how to turn around a systems company. He invested too much in semiconductors and not enough in figuring out what the key issue with their core product line was, or how to come up with new products. And he quickly pivoted not to saving the company, but to restructuring it to sell it. And the board was complicit in this 'strategy', selling off long-term profitable units for short-term cash.

10. They had amazing legal leverage on Microsoft and Intel, literally had them dead to rights on major, major violations, and literally fumbled the bag on both of them. Two of the most successful companies in the 90s, and DEC was absolutely vital to their success, and DEC failed to do much with either. HP got a deal with Intel that was 10,000x better with no legal leverage.


Asking about Y (or Z, or some other problem a few layers down) is common when yak shaving. Aka doing the thing that's needed to do the thing that's needed to do X. Not to be confused with the also-present problem of ADHD sequential distraction by some other unrelated problem (possibly one sighted along the way to eventually get X done).

It's a gross idealization that every problem can be directly solved, or is "shovel ready." In my world there are often oodles of blockers, dependencies, and preparations that have to be put in place to even start to solve X. Asking about Y and Z along the way? Par for the course.

