
My default uninformed assumption would be that Google is paying Mozilla for making Google the default search engine for Firefox. Does anyone know if this is the case, and if so, what the likely magnitudes are? Because it seems like Google can throw quantities of money at Mozilla that would easily overwhelm whatever pressure this petition might put on them.


Yes, this is correct. Google pays Mozilla hundreds of millions of dollars annually to be the default search engine. This makes up the vast majority of Mozilla Corporation's revenue. It's somewhere in the ballpark of 85% of all their annual revenue last I heard.

They've tried hard in recent years to get out from under Google by diversifying into other areas. For example, they have a VPN service that is a wrapper around Mullvad, and they've made some privacy tools that you can pay to use, also largely wrappers around other companies' tools.

I was an employee of Mozilla Corporation and saw first-hand the effort they were making. In my opinion, it's been a pretty abysmal failure so far. Pulling Google funding would effectively hamstring Mozilla Corp.


If you've made any kind of DNS entries involving this subdomain, then congratulations, you've notified the world of its existence. There are tools out there that leverage this information and let you get all the subdomains for a domain. Here's the first one I found in a quick search:

https://pentest-tools.com/information-gathering/find-subdoma...
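
For illustration, here's a minimal sketch of the crudest version of what these tools do: probe a wordlist of candidate labels and report which names resolve. (Haskell, using the network package; the domain and wordlist are placeholders. Real tools also mine sources like certificate transparency logs and passive DNS databases rather than just brute-forcing labels.)

    -- Minimal subdomain-probing sketch; requires the 'network' package.
    import Control.Exception (SomeException, try)
    import Network.Socket (AddrInfo, getAddrInfo)

    -- True if the hostname resolves, i.e. someone published it in DNS.
    resolves :: String -> IO Bool
    resolves host = do
      r <- try (getAddrInfo Nothing (Just host) Nothing)
             :: IO (Either SomeException [AddrInfo])
      pure (either (const False) (not . null) r)

    main :: IO ()
    main = do
      let domain = "example.com"                     -- placeholder domain
          labels = ["www", "mail", "dev", "staging"] -- tiny example wordlist
      mapM_ (\l -> do
               let host = l ++ "." ++ domain
               found <- resolves host
               putStrLn (host ++ (if found then "  [exists]" else "")))
            labels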


There's a spectrum of efficiency/redundancy choices an organization can make. On one (theoretical) end of the spectrum the organization maximally leverages each individual's unique skills/knowledge. This is the most efficient part of the spectrum. On the other end every person is an interchangeable cog. This is the most resilient but also least efficient part of the spectrum. Small organizations usually skew much more to the efficiency end of the spectrum because they typically have significant resource constraints. As an organization grows, more people depend on its existence, and my observation has been that resilience and self-preservation typically become more important than efficiency. This phenomenon is not unique to governments; it happens in all kinds of organizations.


I have a question for all the LLM and LLM-detection researchers out there. Wikipedia says that the Turing test "is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human."

Three things seem to be in conflict here:

1. This definition of intelligence, i.e. "behavior indistinguishable from a human"

2. The idea that LLMs are artificial intelligence

3. The idea that we can detect if something is generated by an LLM

This feels to me like one of those trilemmas, where only two of the three can be true. Or, if we take #1 as an axiom, then it seems like the extent to which we can detect when things are generated by an LLM would imply that the LLM is not a "true" artificial intelligence. Can anyone deeply familiar with the space comment on my reasoning here? I'm particularly interested in thoughts from people actually working on LLM detection. Do you think that LLM-detection is technically feasible? If so, do you think that implies that they're not "true" AI (for whatever definition of "true" you think makes sense)?


> 3. The idea that we can detect if something is generated by an LLM

The idea behind watermarking (the topic of the paper) is that the output of the LLM is specially marked in some way at the time of generation, by the LLM service. Afterwards, any text can be checked for the presence of the watermark. In this case, "detect if something is generated by an LLM" means checking for the presence of the watermark. This all works only if the watermark is robust.
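
To make that concrete, here's a toy sketch of one well-known flavor of scheme (a green-list watermark; this is illustrative only, not the paper's actual algorithm): at generation time the sampler is biased toward a pseudo-random "green" half of the vocabulary seeded by the preceding token, and the detector just measures how green a text is.

    -- Toy green-list watermark detector (illustrative; real schemes hash
    -- over the model's token vocabulary with a secret key).
    import Data.Char (ord)

    -- Hypothetical stand-in for a keyed hash of the preceding token.
    tokenHash :: String -> Int
    tokenHash = foldl (\h c -> h * 31 + ord c) 7

    -- A token is "green" relative to its predecessor; the green list
    -- covers roughly half of all tokens.
    isGreen :: String -> String -> Bool
    isGreen prev tok = even (tokenHash (prev ++ "/" ++ tok))

    -- Unwatermarked text should score near 0.5; text whose generation
    -- was biased toward green tokens scores well above that.
    greenFraction :: [String] -> Double
    greenFraction toks =
      let pairs = zip toks (drop 1 toks)
          hits  = length (filter (uncurry isGreen) pairs)
      in  fromIntegral hits / fromIntegral (max 1 (length pairs))

    main :: IO ()
    main = print (greenFraction (words "the quick brown fox jumps over the lazy dog"))

The robustness question is then whether that statistical signal survives paraphrasing and editing.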


The original Turing test started by imagining you're trying to work out which of two people is a man or woman based on their responses to questions alone.

But suppose you ran that test where one of the hidden people is a confederate who steganographically embeds a gender marker that's obvious to no one but you. You would be able to break the game, even if your confederate was perfectly mimicking the other gender.

That is to say, embedding a secret recognition code into a stream of responses works on humans, too, so it doesn't say anything about computer intelligence.

And for that matter, passing the Turing test is supposed to be sufficient for proving that something is intelligent, not necessary. You could imagine all sorts of deeply inhuman but intelligent systems that completely fail the Turing test. In Blade Runner, we aren't supposed to conclude that failing the Voight-Kampff test makes the androids mindless automatons, even if that's what humans in the movie think.


I think measuring intelligence in isolation is misguided; it should always be measured in context, both the social context and the problem context. This removes a lot of mystique and, unfortunately, doesn't make for heated debates.

In its essentialist form it's impossible to define, but in context it is nothing but skilled search for solutions. And because most problems are more than one person can handle alone, it's a social process.

Can you measure the value of a word in isolation from language? In the same way, you can't meaningfully measure intelligence in a vacuum. You get a very narrow representation of it.


The idea that an LLM can pass itself off as a human author to all reviewers is demonstrably false:

https://www.youtube.com/watch?v=zB_OApdxcno


Google search is completely broken IMO. I stopped using it years ago, and every time I go back on the off chance that its bigger index has something DuckDuckGo couldn't find for me, I come away disappointed.

Image search isn't great either, but it still often gives me something close, and that usually satisfies my image-searching needs.

I still find YouTube recommendations quite good for me, but there are occasional ones I've watched already. I still go down its fun (and educational!) rabbit holes all the time.


Exact same experience: YouTube recommendations (in incognito, without being logged in!) usually give me stuff related to the video I'm watching.

However, when they don't, it's invariably to push some alt-right slop down my throat. Video is about a comedian? Suggestion: "feminist woke takedown compilation". Video about news? Suggestion: "$european_far_right_party's channel says gypsies are subhuman". Video about economics? Suggestion: Jordan Peterson ranting about something. And so on and so on. It's pretty tiring.


I think the fundamental approach being taken by this project is immensely valuable to the world. This kind of education about open standards might be the most powerful tool we have for moving away from giant opaque corporations and back toward the open-standards-based systems the internet originated from. I really hope this project continues to be updated and gets more and more eyes and contributors. If you feel the same way, I'd say at least throw it a GitHub star. https://github.com/blakewatson/htmlforpeople

(Note: I have nothing to do with this project thus far and have nothing to gain from saying this.)


Mozilla has amazing documentation that's been around for years.

Here's their basic html tutorial section: https://developer.mozilla.org/en-US/docs/Learn/HTML

No one is or has been stopping people from learning HTML.


HTML for People is waaaay more approachable than this. My wife could follow the HTML for People tutorial. It shows you how to create a real web page in a real browser without first bogging you down in coding details.

The MDN tutorial is talking about img alt attributes before you even create a single .html file! That's how to put people off.


As a technical person who recently taught myself frontend from scratch, I found https://web.dev/learn way more structured and thorough. The CSS lesson covers all the essentials and actually made me enjoy working with CSS.

web.dev doesn't get as much love as MDN, but it totally should!


This is how I've been learning html + css. It's been fantastic and I treat it as THE docs for the web.

I'm very proud of my single-file HTML document for reporting results.

Of course no JS!


Super approachable. (sure Jan meme.gif)


That’s the website my high school used in engineering sciences classes to give students an introduction to HTML. I don’t see the point of your comment (I think it’s sarcasm, but I’m not even sure), can you be a little bit more constructive?


The point may be that OP's guide is not meant for high school/engineering students, it is meant for everyone. MDN's "introductory" sections have too many big words to be of use to laypeople.


I really hope so too. I wonder what would happen if, instead of spending X dozen hours learning how to use WordPress (or MS Word, for that matter), people in the general population felt that spending those X dozen hours learning HTML was a viable and useful way to achieve their goals!


OP here. I appreciate the kind words. Yeah, I hope it finds its way into the hands of non-professionals.


Will you add on to it to include custom CSS, or maybe a section for using different CSS templates (and where to find them), to make a slightly larger website like your own (blakewatson.com)?


No, I think I will probably keep it focused on HTML. I think my "CSS basics" chapter is as far as I want to go with styling. But I would love to see other folks publish easy-to-understand CSS tutorials.


::backdrop was useful to me. Right now I am learning the last two years of stuff, refreshing my frontend skills. Things like scoping are a dream come true.

I haven't got all the way through it, but seeing the contents drop-down made me feel at home.

I put document structure first so the content looks good with no styling and no class attributes. I use no divs, just the more sensible elements. Sections, Articles, Asides and Navs work for me. There should be headings at the start of these elements, optionally in a Header and optionally ending with a Footer. The main structure is Header - Main - Footer.

Really, the aim should be to keep it simple, and that begins with the document structure. With scoping it is then possible to style the elements within a block without having to use any classes, except at the top of a block.

It infuriates me that we have gone the other way to make everything more and more complex. We have turned something everyone should be able to work with into an outsourced cottage industry. Nowadays the tool chain needed for frontend development is stupid and a true barrier to entry. Whenever you look under the hood there is nothing but bloat.

My approach requires strict adherence to a document structure; however, my HTML content is fully human-readable and looks great without a stylesheet, albeit HTML 1.0, pre-Netscape looking.

Tim Berners-Lee did not have class attributes in HTML 1.0, but he did want content sectioning. Now that there is CSS grid, it is easy to style up a structured document. However, 'sea of divs' HTML requires 'display: contents' to massage even the simplest of forms to fit into a grid.

I feel that a guide is needed for experienced frontend developers who are still churning out 'sea of divs' content. The Mozilla guide for 'div' says it is the element of last resort. I never need a 'div' because there is always something better.

CSS preprocessors are also redundant when working with scoping and structured content. Sadly my IDE is out of date and does not recognise @scope, so I have to put the scoping in at the end. Time to upgrade...

Anyway, brilliant guide, in the right direction and of great interest to me and my peculiar way of writing super neat content and styling.


The Evolution of Cooperation by Robert Axelrod https://www.goodreads.com/book/show/366821.The_Evolution_of_...

See also https://ncase.me/trust/ for a really nice 30-min interactive summary of the ideas presented in the book.

My rough definition of "best" here is "most potentially impactful to humanity" (see also Andrew Breslin's Goodreads review).


> I have the sense that Youtube is net bad for the world and the monetization of Youtube has incentivized and amplified mediocrity, stupidity, and social decay.

Interesting that you say this regarding YouTube. I've been saying this regarding Twitter for a while, even though I consume quite a bit of YouTube content. However, I've curated my YouTube feed to be almost entirely stuff that is interesting, educational, and that I think I'm getting value from. I've learned tons of useful stuff from YouTube, such as how to dress better and tailor my own clothes, how to fix things that break around my house, and more effective training methods to accomplish specific fitness goals...I could go on and on. When I go to YouTube in incognito mode, I definitely see the bottom-of-the-barrel content that you're talking about. But it doesn't have to be that way.


> However, I've curated my YouTube feed to be almost entirely stuff that is interesting, educational, and that I think I'm getting value from.

Those creators are still making orders of magnitude less money than people who make zero-content, attention-grabbing controversy/meme slop videos.


> Those creators are still making orders of magnitude less money than people who make zero-content, attention-grabbing controversy/meme slop videos.

Off the top of my head, Gamers Nexus is a counterpoint. Obviously not Mr Beast-scale, but we're also looking at a huge difference in target demographic breadth.

Besides, is YouTube any worse in this regard than what came before it? Substance-free reality TV predates YouTube. For as long as cheap printing and mail services have been around, artists have had strong incentive to go design ads rather than pursue their art independently.

YouTube definitely has a race to the bottom going on, but it's not all-consuming, and well-researched, high-quality material is still profitable for creators, as long as you know how to play the thumbnail game.


> Besides, is YouTube any worse in this regard than what came before it? Substance-free reality TV predates YouTube.

I extend the same criticisms towards traditional television as well.

They're both just symptoms of the advertising problem. Advertisers are the enablers of this stuff. They'll back any content that draws attention, and what draws the most attention is memes, controversy, and generally negative-value slop. People endlessly scroll apps with infinite content, fed instant gratification with product offerings in between, while algorithms actively push them towards controversy and hate because it maximizes "engagement".


> Substance-free reality TV predates YouTube

And I would say 99% of it is worse than the goofy YouTube stuff. Reality TV is mostly people hooking up and pretending to fall in love.


But are they enjoying what they are doing? If so, then what difference does it make how much cash YT hands to Mr. Beast?

While many try to make a living off YouTube (and some do) there are no guarantees offered nor should any be expected.


> If so, then what difference does it make how much cash YT hands to Mr. Beast?

I think it matters a lot. It creates massive distortions in society's perception of value.

Because of YouTube's advertising, you have people becoming multimillionaires by making total nonsense videos where they do things like react to other videos. Literally a YouTube video of a guy watching other YouTube videos, pausing and saying whatever pops into his head. Like this comment section. And he gets millions of dollars for it.

There's something deeply wrong with a society where you are rewarded for nothing. The people who actually do something tend to feel cheated when they see it happen. Imagine being a professional or a tradesperson and seeing some random dude get 1000x richer than you because he said stupid shit on the internet. And if you point it out, some startup founder accuses you of sour grapes.

Society should think deeply about the incentives it offers to people. Because people will respond to them.


IF it were a net good, they'd let me disable Shorts. But they don't.


> That's why, if you like the Haskell philosophy, why would you restrict yourself to Haskell? It's not bleeding edge any more.

Because it has a robust and mature ecosystem that is more viable for mainstream commercial use than any of the other "bleeding edge" languages.


I think the general concept here is putting in place restrictions on what code can do, in service of making software more reliable and maintainable. The analogy I like to use is construction. If buildings were built like software, you'd see things like a light switch in the penthouse accidentally flushing a toilet in the basement. Bugs like that don't typically happen in construction because the laws of physics impose serious limitations on how physical objects can interact with each other.

The best tools I have found to create meaningful limitations on code are a modern strong static type system with type inference, and pure functions, i.e. being able to delineate which functions have side effects and which don't. These two features combine nicely, giving the type system fine-grained control over which side effects you allow. It's really powerful and allows the enforcement of all kinds of useful code invariants.
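
A hedged sketch of what that looks like in Haskell (all names here are invented for illustration): wrap IO in a newtype that exports only the effects you permit, so a function's type proves it can't do anything else.

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}
    -- Illustrative only; 'LogOnly' and friends are made-up names.

    newtype LogOnly a = LogOnly (IO a)
      deriving (Functor, Applicative, Monad)

    -- The only capability this restricted context exports.
    logLine :: String -> LogOnly ()
    logLine = LogOnly . putStrLn

    -- Pure logic: the type says "no side effects at all".
    total :: [Int] -> Int
    total = sum

    -- Restricted effects: this can log and do nothing else. No network,
    -- no file writes, no flushing toilets in the basement.
    report :: [Int] -> LogOnly ()
    report xs = logLine ("total = " ++ show (total xs))

    -- Only trusted top-level code unwraps LogOnly back into IO.
    runLogOnly :: LogOnly a -> IO a
    runLogOnly (LogOnly io) = io

    main :: IO ()
    main = runLogOnly (report [1, 2, 3])

In a real codebase you'd hide the LogOnly constructor and runLogOnly behind a module boundary so the guarantee can't be circumvented.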


> I think the general concept here is putting in place restrictions on what code can do in service of making software more reliable and maintainable. The analogy I like to use is construction.

The concept is quite old, and it's called software architecture.

All established software architecture patterns implicitly or explicitly address the problem of managing dependencies between modules. For example, the core principle of layered/onion architecture, or even Bob Martin's Clean Architecture, is managing which module can be called by which module.

In compiled languages this is a hard constraint due to linking requirements and symbol resolution, but interpreted languages also benefit from these design principles.
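
A single-file Haskell sketch of that dependency rule (names invented): the inner layer defines the interface it needs, the outer layer supplies the implementation, so source dependencies point inward even though control flows outward.

    -- Invented example of the layered/Clean Architecture dependency rule.

    -- Inner layer: pure domain logic plus the "port" it needs.
    data Order = Order { itemCents :: [Int] }

    newtype OrderStore = OrderStore { persist :: Order -> IO () }

    totalCents :: Order -> Int
    totalCents = sum . itemCents

    checkout :: OrderStore -> Order -> IO ()
    checkout store order = persist store order

    -- Outer layer: a concrete adapter. It depends on the domain types;
    -- the domain never mentions files, databases, or this module.
    fileStore :: OrderStore
    fileStore = OrderStore $ \o ->
      appendFile "orders.log" (show (totalCents o) ++ "\n")

    main :: IO ()
    main = checkout fileStore (Order [199, 250])

In a multi-module project the same rule shows up as import direction: Persistence imports Domain, never the reverse.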


The goofy thing is that “software architecture” was killed by YAGNI dogma and yet the need for properly layered code hasn’t disappeared, so people are inventing tooling to enforce it.


Off topic, but this reminds me of a plausible tech-support-gore story.

An office was experiencing random internet outages and struggling to figure out why. They traced the problem back to their router rebooting randomly. Tracing it further, they found the outlet was experiencing big voltage drops. Then they realized it was on the same circuit as a pump used to flush a porta-potty for a construction team onsite. Every time they'd flush the toilet, the router would lose power and reboot.


I agree, and in fact that's the basis of my Haskell library Bluefin[1]. If you look at it from one angle it's a Haskell "effect system" resembling other Haskell approaches for freely composing effects. If you look at it from another angle it's a capability-based security model (as also mentioned by quectophoton in this discussion[2]). There's actually quite a lot of similarity between the two areas! On the other hand it's not really a "firewall" as described by this article, because it doesn't do dynamic permission checks. Rather, permission checks are determined at compile time. (Although, I guess you could implement dynamic permission checks as one of the "backends".)

[1] https://hackage.haskell.org/package/bluefin-0.0.6.1/docs/Blu...

[2] https://news.ycombinator.com/item?id=41366856


ah, you say that, but with wifi light switches and wifi toilets it's easy to connect those together nowadays!

