twsted's comments | Hacker News

Same for me. Tried just a few days ago and, frustrated, gave up.


Useful insight: any sources?



Browser performance tips from 2014 mean very little twelve years on. Not only have machines and networks gotten faster, rendering engines have too. And I'm doubtful that nested flexboxes would have been all that much of a problem in most cases even then.

The most important thing is to use the right tool for the job. If grid lets you express what you want in the most straightforward way, use it; if flexbox does - even if it needs nesting - then use it instead. Don't shoehorn one into a situation where the other makes more sense. And sometimes either will work for a particular situation and that's fine too; use whatever you find most ergonomic. They're both very good in their own way.


The article is largely about layout shifts caused by flexbox during loading, and while networks have indeed gotten faster, they haven’t gotten faster uniformly across situations and people. Being able to show things properly while they are still downloading remains useful.


Try resizing a browser window with a nested flex layout.


Should you optimize for resize performance? I guess that depends on the app. Use the tool that fits the requirements.


Resizing is not the optimization target, it just makes reflow performance visually apparent.


"A strange game. The only winning move is not to play."


I thought I was the only one on Earth to do that (but surely I'm the only boomer doing it).


I am watching this carefully. The red line is here, for me and I think for many Apple customers. I choose Apple for being different from other companies, for valuing customer experiences and for rejecting ads and other "insults" for users. I think that if they cross the line, many other customers and I will leave.


> I choose Apple for being different from other companies, for valuing customer experiences and for rejecting ads and other "insults" for users

Yes. The point of willingly putting yourself in the walled garden was that the experience was definitively better than the other options.

When the walled garden ceases to be better and starts adopting all the same dark patterns and user hostile experience as everyone else, what point is there in staying inside?


The hardware is still marginally better, but the experience is no longer better. In fact, with Android at least, you can sideload and install full-powered ad blockers. At some point, once the iOS experience degrades beyond a certain threshold, Android will be a more attractive option.


From the perspective of a casual user, on Android you get mobile Chrome, which doesn't do extensions at all, while on iOS mobile Safari has extensions, including ad blockers.


On Android you can get Firefox with its own rendering engine, which can run full uBlock Origin. On iOS, sure, you can get some ad blockers for Safari, but not the full-powered uBlock Origin. Or you can get Firefox, but it's just a reskinned Safari and can't run the powerful ad blockers.


Casual users don't care about rendering engines. They care about things working or not working, and in practice Safari with AdGuard is "good enough".


I agree; I should not have conflated "rendering engine" with "ability to install fully powered ad blockers". What I meant was that Safari's renderer on iOS doesn't allow the full uBlock Origin, whether you use a reskin of it or not (Firefox, Brave, etc.), but if Firefox and its rendering engine were allowed on iOS (as they are on Android), we would have the full-powered ad blocker.

It makes a difference. I have uBlock Origin Lite on my iPhone and it misses ads on Facebook that uBlock Origin on my PC blocks. Facebook has the most advanced anti-ad-blocker tech, so it's a good benchmark for how effective an ad blocker is.


Where will you go? The alternatives seem worse in almost every way.

> and I think for many Apple customers

Unfortunately, I think people who care about this enough to leave are a rounding error. It’s why the entire consumer product market looks the way it does.


https://fightchatcontrol.eu/

(And I need to understand why the hell my country, Italy, supports the motion)


Well, if you will allow some steelmanning, I can think of a couple of reasons why the authorities of _Italy_, of all countries, would want to follow organized groups conducting illegal activities.

I mean, "organized crime" and "Italy" probably appears in a couple of n-grams in LLMs index, right ? Maybe even if you narrow it down to reviews of movie trilogies from the 70s ?

That being said, I'm sure you will disagree. The whole discussion on those topics is about mistrust:

- law enforcement claims to need tools to prosecute organized crime (which does exist), and claims any opponent is just mafias masquerading as concerned citizens.

- opponents claim the new tool is only meant for surveillance, and claim any proponent is just an autocrat masquerading as a concerned parent.

- fun fact 1: both autocrats and mafias exist

- fun fact 2: reading some messages means reading all messages

Which is why we have the debate every few years.

Meanwhile law enforcement uses other tools (and has for years), mafias are still out there, organized crime is still harming lots of people, and encrypted messages are relatively safe, but people use FB's unencrypted messaging because it's easier.


The Mafia has been around in Italy for more than 200 years, since before messengers were invented. It's likely this won't change that.

Meanwhile Meloni has deployed NSO against Italian journalists.


Do you mean in this context [1]? It would not be NSO, and it was not clearly targeted _by_ the Italian government (though that would be a plausible explanation).

I can't find allegations of Pegasus usage in Italy per se (yet; I might not have looked enough), as there are for other countries [2].

Not an expert, so I might have missed some instances. And by definition, those things are hard to track.

[1] https://www.bbc.com/news/articles/cvgmzdjw24yo

[2] https://www.washingtonpost.com/investigations/interactive/20...


Contact them, and ask!


It's strange to call a device with access to Claude and ChatGPT a "dumbphone"


Strangely enough, I found Claude and ChatGPT crucial to making all this work.

In the essay:

> Whenever I need some information, I can just ask my LLM, and it can give me a distraction free summary. It helps the long-tail of weird situations too: for example if someone asks me to take a look at a website, I can ask my LLM to scrape it and summarize the details for me. It’s pretty hard to get distracted this way.


I find Google searches distracting; I can only imagine what an AI chat must be like. I use a flip phone.


Or a twisted way of calling Claude and ChatGPT dumb, which I wouldn't disagree with myself.


Can someone explain the various acronyms?


IC -- individual contributor, EM -- engineering manager, TLM -- technical lead manager


EM = Engineering Manager, IC = Individual Contributor


Look at the report (https://www.plasticlist.org/report); it is very informative.


I know that Anthropic is one of the most serious companies working on the alignment problem, but the current approaches seem extremely naive.

We should do better than giving the models a portion of good training data or a new mitigating system prompt.


I am aware that, in relative terms, you are correct about Anthropic.

But I'm having a hard time calling an AI company "serious" when they're shipping a product that can email real people on its own and perform other real actions, while they are aware it's still vulnerable to the most obvious and silly form of attack: the "pre-fill", where you just edit the AI's response and send it back in, so it appears to have already agreed to your unethical or prohibited request, and tell it to keep going.


The solution here is ultimately going to be a mix of training and, equally importantly, hard sandboxing. The AI companies need to do what Google did when they started Chrome and buy up a company or some people who have deep expertise in sandbox design.


I'm confused: can you explain how the sandbox helps?

I mean, if the plan is not to let the AI write any code that actually gets allocated computing resources, not to let the AI interact with any people, and not to give the AI write access to the internet, then I can see how having a good sandbox around it would help. But how many AIs are there (or will there be) where that is the plan and the AI is powerful enough that we care about its alignment?


The problems here aren't different from restricting malicious or hacked employees, or malicious or hacked third-party libraries.

You start with the low-hanging fruit: run tool commands inside a kernel sandbox that switches off internet access, then re-provide access only via an HTTP proxy that implements some security policies. For example, instead of providing direct access to API keys you can give the AI a fake one that's then substituted by the proxy, and the proxy can obviously restrict access by domain and verb, e.g. allow GET on everything but restrict POST to just the one or two domains you know it needs for its work. You restrict file access to only the project directory, and so on.
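
To make that concrete, here is a minimal sketch of the kind of policy check such a proxy could run on each outbound request. This is my own illustration, not any particular product's API; the domain names, key values, and the apply_policy helper are all made-up placeholders.

    # Sketch of an egress policy for an agent sandbox's HTTP proxy.
    # Assumed policy: GET allowed everywhere, POST only to an allowlist,
    # and the real API key is substituted only at the proxy boundary.
    from urllib.parse import urlparse

    ALLOWED_POST_DOMAINS = {"api.github.com", "registry.npmjs.org"}  # hypothetical

    # The agent only ever sees the fake key; the proxy swaps in the real one.
    FAKE_KEY = "sk-fake-agent-key"
    REAL_KEY = "sk-real-key-from-secret-store"  # placeholder, not a real secret

    def apply_policy(method: str, url: str, headers: dict) -> dict:
        """Return possibly rewritten headers, or raise if the request is denied."""
        host = urlparse(url).hostname or ""
        if method == "GET":
            pass  # reads are allowed everywhere in this toy policy
        elif method == "POST" and host in ALLOWED_POST_DOMAINS:
            pass  # writes only to the known-needed domains
        else:
            raise PermissionError(f"policy denies {method} {host}")
        # Substitute the real credential only here, outside the sandbox.
        if headers.get("Authorization") == f"Bearer {FAKE_KEY}":
            headers = {**headers, "Authorization": f"Bearer {REAL_KEY}"}
        return headers

    if __name__ == "__main__":
        print(apply_policy("GET", "https://example.com/data", {}))  # allowed
        try:
            apply_policy("POST", "https://evil.example/exfil", {})
        except PermissionError as e:
            print("blocked:", e)  # denied by the verb/domain policy

The point of the key substitution is that even a fully subverted agent never holds a credential worth exfiltrating; the proxy is the only component that does.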

Then you can move upwards and start to sandbox the sub-components the AI is working on using the same sort of tech.


This conversation began as a conversation about Claude, which is in the hands of hundreds of thousands of people with no training in, and no interest in learning about, how to prevent Claude from doing damage to society. That makes it materially different from a library, because even if an intruder can subvert a library running on servers serving hundreds of thousands of users (e.g., a library for compressing files), it is very unlikely to be able to start having conversations with a large fraction of those users without someone noticing that something is very wrong.

Although I concede that there are some applications of AI that can be made significantly safer using the measures you describe, you have to admit that those applications are fairly rare and emphatically do not include Claude and its competitors. For example, Claude has plentiful access to computing resources because people routinely ask it to write code, most of which will go on to be run (and Claude knows that). Surely you will concede that Anthropic is not about to start insisting on the use of a sandbox around any code that Claude writes for any paying customer.

When Claude and its competitors were introduced, a model would reply to a prompt, then about a second later it lost all memory of that prompt and its reply. Such an LLM of course is no great threat to society because it cannot pursue an agenda over time, but of course the labs are working hard to create models that are "more agentic". I worry about what happens when the labs succeed at this (publicly stated) goal.


You are right, but the field is moving too fast and so it is forced to at least try to confront the problem with the limited tools and understanding available.

We can only turn the knobs we see in front of us. And this will continue until theory catches up with practice.

It's the classic tension that arises from our inability to correctly assign risk to long-tail events (a high likelihood of positive return on investment vs. an extremely unlikely but bad outcome of misalignment): there is money to be made now and the bad thing is unlikely, so just do it and take the risk as we go.

It does work out most of the time. Were it left to me, I would be unable to make a decision, because we just don't understand enough about what we are dealing with.

