This is what I do, and I'm a little confused by the issue. If you have a device that outputs HDMI, just never connect the TV to your wifi. It's not like you need or want firmware updates if there's no internet connection.
A much more fair retort is that an extra device to output video costs more, though I might argue that if you don't use the TV's built in system the manufacturer is losing ad revenue. So if you only use it as a normal TV you kinda are buying it subsidized by everyone else watching ads on theirs.
Imagine the process of solving a problem as a sequence of hundreds of little decisions that branch between just two options. There is some probability that your human brain would choose one versus the other.
If you insert AI into your thinking process, it has a bias, for sure. It will helpfully reinforce whatever you tell it you think makes sense, or at least, even when it hedges, it will on average be interpreted that way because of a wide variety of human cognitive biases. At the least it will respond with ideas that are very... median.
So at each one of these tiny branches you introduce a bias towards the "typical" instead of discovering where your own mind would go. It's fine and conversational but it clearly influences your thought process to, well, mitigate your edges. Maybe it's more "correct", it's certainly less unique.
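To make that concrete with a toy model (every number here is invented for illustration, not measured from anything), here's roughly what a small per-branch nudge toward the "typical" option does over a few hundred decisions:

    // Toy Monte Carlo: how a small per-branch nudge toward the "typical"
    // choice compounds over many decisions. All numbers are made up.
    #include <iostream>
    #include <random>

    int main() {
        const int branches = 300;      // assumed number of small decisions
        const double p_own = 0.50;     // chance you'd pick the typical option on your own
        const double p_nudged = 0.60;  // same chance with a median-seeking assistant
        const int trials = 100000;

        std::mt19937 rng(42);
        std::bernoulli_distribution own(p_own), nudged(p_nudged);

        long long own_total = 0, nudged_total = 0;
        for (int t = 0; t < trials; ++t) {
            for (int b = 0; b < branches; ++b) {
                own_total += own(rng);
                nudged_total += nudged(rng);
            }
        }

        std::cout << "avg typical choices alone:  " << double(own_total) / trials << "\n";
        std::cout << "avg typical choices nudged: " << double(nudged_total) / trials << "\n";
        // A 10-point nudge per branch moves roughly 30 of 300 decisions
        // toward the median path, which is the "mitigate your edges" effect.
        return 0;
    }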
And then at some point they start charging for the service. That's the part I'm concerned about. If it's on-device and free to use, I still think it makes your thought process less interesting and less likely to produce original ideas, but having to subscribe to a service to trust your own decision making is deeply concerning.
> And then at some point they start charging for the service. That's the part I'm concerned about. If it's on-device and free to use, I still think it makes your thought process less interesting and less likely to produce original ideas, but having to subscribe to a service to trust your own decision making is deeply concerning.
This, especially regarding environmental impacts. I wish more models focused on parameter density / compactness so that they can run locally, but that isn't something big tech really wants, so compact models will probably keep coming from the likes of the recent MiniMax model, the GLM Air models, or Qwen and Mistral.
These AI services only work as long as they are free and burn money. As an example, my brother and I were discussing something LLM-related yesterday, and my mother tried to follow along and talk about it too. She wanted a Ghibli-style photo, since someone had a Ghibli-generated photo as their pfp and she wanted to try it herself.
She then generated the pictures, and my brother did a quick calculation: it came to around 4 cents per image, which with PPP in my country and currency is about 3 rupees.
When my brother asked if she would pay for it, she said no, she's only using it because it's free, but she also said that if she were forced to, she might even pay 50 rupees.
I jumped into the conversation and said nobody's gonna force her to make Ghibli images.
Don't most tech startups lose money for years before they maybe make a profit?
I mean, I agree that such companies are over-represented in how people think about small businesses, if that's what you mean. Normal companies have to be profitable quickly, for sure.
It feels like tons of companies get valued based on userbase or revenue or theoretical breakthrough rather than ever having to really think about breaking even, but I know that's just because those folks get all the press.
I think uMatrix is the better extension. I use it in tandem with uBO.
But yeah, Raymond didn't have the resources to develop both at once, and he chose uBO, which offered a more digestible, install-and-forget experience palatable to a wider audience.
Raymond basically said uMatrix was feature complete. But there could be bugs.
I think what's important here is to reduce harm even if it's still a little annoying. Because if you try to completely ban anything disclosed as LLM-written, you'll just have people posting it without a disclaimer...
Yes, comments of this nature are bad, annoying, and should be downvoted as they have minimal original thought, take minimal effort, and are often directly inaccurate. I'd still rather they have a disclaimer to make it easier to identify them!
Further, entire articles submitted to HN are clearly written by an LLM yet get over a hundred upvotes before people notice, whether there's a disclaimer or not. These do not get caught quickly, and someone clicking on the link will likely generate ad revenue that incentivizes people to keep doing it.
LLM comments without a disclaimer should be avoided, and submitted articles written by an LLM should be flagged ASAP to avoid abuse, since by the time someone clicks the link it's too late.
It is an interesting point about handwriting as distinct from reading or writing alone. I appreciate it, thank you.
I would not concede that speed is less important than doing it correctly in the context of evaluating learning. There are homework assignments, projects, and papers where there is plenty of time to probe whether students can think a problem through and do it correctly with no time limit. It's ideal if everyone can finish an exam, but there needs to be some kind of pressure for people to learn to quickly identify a kind of problem, identify the correct solution approach, and actually carry out the solution.
But they shouldn't be getting penalized for not doing a page of handwritten linear algebra correctly; I totally agree that you need to make sure you're testing what you think you're testing.
I recall AVR-GCC not only working just fine in 2005 but being the official method for compiling code for those chips. I used it to target the same chips before Arduino came out.
Arduino was certainly a nice, beginner-friendly IDE that eliminated the need for makefiles or reading GCC documentation, but the existing ecosystem was definitely not closed source.
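For reference, the pre-Arduino workflow looked roughly like this. A minimal sketch assuming an ATmega168-class chip with an LED on PB5 and a USBtinyISP-style programmer; the exact part numbers and flags varied by project:

    // blink.c -- a minimal pre-Arduino "blink" (assumed ATmega168-class chip, LED on PB5)
    #define F_CPU 16000000UL   // assumed clock; adjust for your board

    #include <avr/io.h>
    #include <util/delay.h>

    int main(void) {
        DDRB |= (1 << PB5);            // PB5 as output
        for (;;) {
            PORTB ^= (1 << PB5);       // toggle the LED
            _delay_ms(500);
        }
    }

    // Typical build/flash steps of that era (mcu and programmer vary):
    //   avr-gcc -mmcu=atmega168 -Os -o blink.elf blink.c
    //   avr-objcopy -O ihex blink.elf blink.hex
    //   avrdude -p m168 -c usbtiny -U flash:w:blink.hex

Usually you'd wrap those three commands in a small makefile, which is exactly the bit of friction the Arduino IDE later hid from beginners.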
Maybe it was different in 2005, but now Arduino is just an IDE that uses GCC under the hood. So it is still "the official method for compiling code for those chips".
Definitely. It's always used GCC under the hood, and it's also always just been the IDE.
Arduino (the original AVR boards, anyway) has always relied on GCC, and not just GCC but the entire open source toolchain that already existed around AVR-GCC. I'm sure they contribute back (I guess "sure" is an exaggeration), but it all worked pretty darn well already.
Arduino, for me, replaced emacs as an IDE. The main reasons I use it are that I don't need to write a makefile, plus the integrated serial monitor. Those are good enough features that I still use the IDE even though I haven't touched a real Arduino in a decade or more. But I work alone and don't usually have more than a few thousand lines of code, so it's not too complex to manage.
That tracks with my experience. I don't prefer the IDE because I need a build system that generates unique IDs, conditionally compiles things, and handles debug flags. It also needs to be reproducible and able to run unattended. On top of that, a proper build system is way faster: Arduino spends almost half of the build time re-analyzing the project files to figure out what it actually needs to do, and then it creates build artifacts in a random location. Those are all problems you don't have with a proper build system. I think Arduino also doesn't support parallel compilation.
A third of the country rents. Renters pay the utility bills. Landlords pay for appliance upgrades.
Why would the landlord put any effort into upgrading appliances when the cost of not upgrading them is borne by the renters?
I've never rented at a place where they didn't want to fix broken equipment with the cheapest possible replacement. And no renter would ever consider purchasing a major appliance like this since they'll end up priced out before they recover the cost in utility bills.
They're a nice technology, but our incentives are all wrong for a lot of housing stock.
In some locations you can't rent out places without minimum energy efficiency ratings, which then leads to insulation and heat pumps getting installed.
This is referred to as "Minimum Energy Efficiency Standards (MEES)" and seems to have been pioneered in the UK and adopted by the Netherlands and France, and then the EU generally.
They are efficient but do not have as high an energy output as a smaller and cheaper gas furnace. Apart from that, the water temperature is lower, so you need much larger radiators. Due to the lower energy output, you also need better insulation or a relatively massive heat pump. And the tech was not around 20 years ago (for reasons unknown to me).
The water temperature you deliver to radiators is not defined by the capacity of the heat pump, but by how hot the radiators can be for safety/comfort reasons. If the radiators are too hot, people could get burned by touching them, or things like plastic chairs could melt. The piping in the walls and floors also can't support water that is too hot.
The water temperature used in radiators, 60-70°C, is easily achievable by an air-to-water heat pump. It does not depend on the energy source: gas, oil, or electricity.
Condensing gas boilers similarly run more efficiently at lower temps.
If the water returning to the boiler isn't below 54°C then there will be no condensing at all, and the advertised 90%+ efficiency won't happen until the return temperature is more like 46°C.
That translates roughly to a maximum winter flow temperature of 65°C leaving the boiler, and lower when less heating is required.
This can be tweaked by the end user and save 10-20% on heating bills.
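As a rough sketch of that relationship, using only the thresholds mentioned above; the ~85% non-condensing baseline and the linear interpolation are my own assumptions, not manufacturer data:

    // Crude piecewise estimate of condensing-boiler efficiency vs. return
    // temperature, built from the thresholds above. The 85% non-condensing
    // baseline is an assumed round number, not a datasheet value.
    #include <iostream>

    double roughEfficiency(double returnC) {
        const double noCondenseAbove = 54.0;  // no condensing at or above this return temp
        const double fullCondenseAt  = 46.0;  // advertised 90%+ needs roughly this
        const double baseline = 0.85;         // assumed non-condensing efficiency
        const double best     = 0.92;         // "90%+" taken as ~92%

        if (returnC >= noCondenseAbove) return baseline;
        if (returnC <= fullCondenseAt)  return best;
        // Linear interpolation between the two thresholds (an assumption).
        double frac = (noCondenseAbove - returnC) / (noCondenseAbove - fullCondenseAt);
        return baseline + frac * (best - baseline);
    }

    int main() {
        double temps[] = {60.0, 54.0, 50.0, 46.0, 40.0};
        for (double t : temps)
            std::cout << "return " << t << " C -> ~" << roughEfficiency(t) * 100 << "%\n";
    }

Dropping the flow temperature so the return stays in the condensing range is where that 10-20% comes from.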
From context I can't tell if they mean the heated coils in a heat pump head, or somehow connecting to a traditional radiator.
In older homes there isn't necessarily any forced-air HVAC at all; instead there are actual radiators. I've lived in two places like that, where there is just no forced air to the rooms.
I listed a reason that impacts a third of houses. I didn't write an essay because the article lists plenty of others. It was just weird that they never mentioned the misaligned incentives.
Right, and you roughly break even there, so there's not much upside in terms of variable costs unless your electricity is somehow cheaper than mainstream California prices.
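A back-of-the-envelope version of that break-even point; every price and the COP below are assumed placeholders, not real tariffs:

    // Rough variable cost per kWh of delivered heat. All numbers are
    // assumed placeholders; plug in your own rates.
    #include <iostream>

    int main() {
        const double cop           = 3.0;   // assumed seasonal heat pump COP
        const double gas_per_therm = 1.80;  // assumed $/therm
        const double kwh_per_therm = 29.3;  // energy content of one therm
        const double furnace_eff   = 0.90;  // assumed condensing furnace efficiency

        const double gas_cost = gas_per_therm / (kwh_per_therm * furnace_eff);
        std::cout << "gas heat:            $" << gas_cost << " per kWh of heat\n";

        const double elec_rates[] = {0.15, 0.35};  // cheap rate vs. California-ish rate
        for (double rate : elec_rates)
            std::cout << "heat pump @ $" << rate << "/kWh: $"
                      << rate / cop << " per kWh of heat\n";
        // With the cheap rate the heat pump wins on variable cost; at the
        // California-like rate it doesn't, which is the break-even point above.
    }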
That doesn't square with the fact that new rentals are built with granite countertops and stainless-steel appliances. Tenants do shop around on the basis of amenities.
Sure, but those amenities are highly visible. Lots of units have a stainless dishwasher exterior, but most will still be the landlord-special plastic tub inside. Who is shopping around based on whether or not there’s a heat pump? I would consider myself relatively well-educated on this and still the heat/cooling source is an afterthought.
Honestly, the ductless mini-split system in my new apartment was a big factor for me. But it was the first time I'd seen one over here in the Mid-Atlantic.
The combination with air conditioning and dehumidifying is genuinely compelling for its simplicity, especially in new construction.
But these things trickle down to renters last. And if the landlord installs it, you bet your ass the rent is going up more than your savings on electricity.
Lose-lose-lose: if it gets installed, the current residents probably get priced out anyway. It eventually trickles down, but we could do so much better.
Could you clarify what you mean about getting serial over the USB port in the context of debug pins?
I've been using Teensy devices for over a decade and have always had the computer just recognize the device as if it were a USB-to-serial adapter, so I can talk to it as what I'd call "serial over the USB port". But that obviously doesn't involve what I think software people usually mean when they're talking about firmware debugging -- which usually entails stepping through execution, right?
I'm used to just printing debug statements with Serial.println(). I learned on the 8051, where the best bet was to toggle different pins as particular lines of code were reached, so even Serial.println() was a huge step up.
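For what it's worth, both styles look roughly like this in an Arduino-flavored sketch (the pin number and baud rate are arbitrary choices):

    // Two "poor man's debugger" styles: print statements and pin toggling.
    const int DEBUG_PIN = 13;   // arbitrary pin for a scope/logic-analyzer marker

    void setup() {
      Serial.begin(115200);     // on a native-USB board like a Teensy the number is mostly ignored
      pinMode(DEBUG_PIN, OUTPUT);
    }

    void loop() {
      Serial.println("entered loop()");   // print-statement debugging
      digitalWrite(DEBUG_PIN, HIGH);      // mark the start of the interesting section
      // ... the code you actually care about ...
      digitalWrite(DEBUG_PIN, LOW);       // mark the end
      delay(1000);
    }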
It wasn't specifically in the context of debug pins.
On a "normal" arduino, an FTDI chip on the board handles the job of exposing a serial adapter to your computer over USB. The atmel chip on the other side of the FTDI chip runs your code and getting serial out from your firmware is a short codepath which directly uses the UART peripheral.
On a Teensy, there is still a secondary chip, but it's just a small microcontroller running PJRC code. This microcontroller talks over the debug pins of the main chip, and those pins aren't broken out (at least back when I last used a Teensy). Despite occupying the debug pins, this chip only handles flashing and offers no other functionality. Since there is no USB serial adapter, for hobbyists trying to run code with an Arduino HAL, the HAL has to ship an entire USB driver just for you to get serial over USB. And that in turn means you can't use the USB port for other purposes.
For advanced users, this makes debugging much harder, and god forbid you need to debug your USB driver.
It's kind of just a bunch of weird tradeoffs which maybe don't matter too much if you are just trying to run Arduino sketches on it, but it was annoying for me when I was trying to develop bare metal firmware for it in C.
Moderation (the intent and success) varies to such a huge extent that it's practically silly to talk about moderation on Mastodon unless you mean moderation on a specific Mastodon server (like mastodon.social). But moderation (the process) is intense, and servers are usually community-run on the change found in a spare couch (i.e., by volunteers).
I think they do quite well considering the disparate resource levels, but some servers are effectively unmoderated while others are very comfortable; plenty are friendly to racists or other kinds of bigots, but the infrastructure for server-level blocks is ad hoc. Yet it still seems to work better than you'd guess.
Decentralization means whoever runs the server could be great, could just not be good at running a server, could be a religious fundamentalist, a literal cop, a literal communist, a literal nazi, etc. And they all have different ideas of what needs moderating. There is no mechanism to enforce that fediverse-wide other than ad-hoc efforts on top of the system.
Thank you for the clarification; that makes sense.
It is perhaps also worth noting that the Fediverse architecture does nothing to prevent racists or bigots from being found in the "fediverse" (here referring to the collection of all servers using the protocol, not the protocol itself), and... that's pretty much as intended. Truth Social uses Mastodon as its backend; there is nothing the creators / maintainers of Mastodon could, or by design would, do to shut it off. The same architecture that makes it fundamentally impossible for Nazis to shut down a gay-friendly node makes it impossible for other people to shut down a Nazi node; there is merely the ability of each node to shield its users from the other.
That's a feature of the experiment, not a bug, and reasonable people have various opinions on that aspect of it.