Hacker News | tablespoon's comments

> chicken and egg problem, there won't be a need for increased charging infrastructure until there are more EVs on the road.

Which we don't have the electricity infrastructure for.

The solution is to get rid of cars, period. Ownership should require a permit like gun ownership requires in some cities (i.e. you should only be able to buy one if the DMV agrees you have a good reason to need a car).


Can you please stop posting ideological flamebait to HN? You've unfortunately been doing it a lot lately. It's not what this site is for, and destroys what it is for.

https://news.ycombinator.com/newsguidelines.html


> If they really wanted to cut emissions, bicycles and trains are the way to do it.

Exactly, what we really need to do is have a restrictive permitting system for cars like some cities have for guns. If you can't show good cause for needing a car, you shouldn't be able to get a permit to buy one. Just use public transit or bike.


> My brother is a contractor in the Bay Area and he told me that PG&E will not allow the installation of 220 volt EV charging infrastructure in new construction or a home remodel unless the homeowner can prove they already own an EV.

How can they do that? Could you just say you want to install an electric dryer in your garage (or even buy a used one off of Craigslist and literally install it for a week)?


Or a welder, a compressor, a medically needed air conditioner, etc.


> Many people, including many people on this site (and, yes, including myself) wouldn't think twice about plugging into an available port if they need a charge. Maybe I don't plug into an unlabeled port in some random location where it doesn't look like it belongs, but honestly I wouldn't think twice about charging at a designated area at a conference.

This is the solution to that problem: a USB data blocker, which passes power through but leaves the data pins disconnected:

https://www.amazon.com/PortaPow-3rd-Data-Blocker-Pack/dp/B00...

https://www.amazon.com/PortaPow-NA-USB-C-Data-Blocker/dp/B08...

https://www.amazon.com/PortaPow-Data-Blocker-USB-C-Converter...


If you're already committed to carrying Yet Another Accessory, then why not just carry a small portable charging battery? Some models are not much larger than that USB connector, and could charge the phone more than an hour spent babysitting it at a public port would.


Yeah, I normally carry bigger portable batteries, but I've also got a bunch of small ones, typically given to me by vendors, which are probably good for at least getting a phone off life support.


> Personally I predict that generative AI is going to be the next Metaverse and crypto.

A common thread tying those three things together is that, in large part, they're all impressive technologies in search of problems to solve.

Technologies like that are pretty much always overhyped and oversold.


> in large part, they're all impressive technologies in search of problems to solve.

I'm no AI fanboy, but let's be fair: machine learning has been solving problems for decades now, from OCR to translation to facial recognition. While GPT-4 or DALL-E may be "toys," large models (be they language, vision, or otherwise) definitely have a future in business automation, data collation, military applications, etc.


I feel like this is some of where my own cynicism comes from: the machine learning techniques that have been solving problems for decades now were almost all predicted and prototyped in the '60s/'70s "AI boom". The generative models were all "toys" then, too, and none of the '60s/'70s "predictions" of when/where/how they might become more than "toys" ever really came to pass, even though they sounded so much like what people on HN are saying today.

We're certainly doing more of (almost) everything explored/predicted by "the ancients", and we're doing it all much, much faster with much more massive data sets of input and output. For me, though, there isn't a sense that we are doing anything substantially new beyond Moore's Law meets mega-scale GIGO. To me this hype cycle has a pervasive feel of recreating the past's boom-and-(inevitable-)bust mistakes without having learned enough from them.

But I've become too cynical, perhaps.


Coming up with a problem statement is only one part of the puzzle. Making the solution is the next one. In Heinlein's "The Roads Must Roll", published in the '40s, there's a scene where the protagonist wakes up and reads the paper on a "newspaper facsimile receiver." We can recognize this today as a smartphone or an ebook reader. But the ebook reader and the smartphone didn't really exist until the mid-2000s, a full 60 years after Heinlein's story. A late-'90s anime called Serial Experiments Lain predicted a lot of the effects of the internet on socialization; the events it predicted only actually happened 20 or so years later.

Humans have been predicting things that we invent for a really, really long time. That's only the first part. I read "The Roads Must Roll" in the late '90s before ebook readers and smartphones became available and it motivated me to try and recreate that experience myself. I played around with ebooks on a Palm Pilot I found in the trash because of that.


The phrase "newspaper facsimile receiver" is telling, and its own reminder that even sci-fi rarely predicts things from whole cloth; it extrapolates from the world already around it. There were fax machines in the 1940s. It would be a few more decades before fax machines were common enough to be everywhere people wanted to (pretend to) do business, but guessing that they might get lighter, more wireless, and more common in the home if they found the right killer app (easier newspaper delivery, perhaps) isn't a tough prediction. If anything, where the prediction fell short of what our reality eventually pieced together with smartphones/ebook readers is that those are much more general, multi-purpose devices. The idea of a standalone ereader dedicated only to the daily newspaper is a quaint, fun DIY hack project you see on HN sometimes, but it's neither the norm of how we use smartphones or ereaders today nor how you'd expect to find one commercially sold.

The human tendency to prediction is still grounded in the human perspective and the point of view of its time. Heinlein in the 1940s wasn't perfectly predicting the smartphone or ereader; Heinlein was predicting a better fax machine. It certainly could be used by someone in the 1990s as motivation toward better smartphones/ereaders, but that's already from a shifted perspective. Meanwhile, there certainly were sci-fi writers in the 1990s extrapolating from what they saw and predicting the smartphone/ereader; it seemed far more inevitable by then.

My concern is not that there are unfulfilled predictions from the 1960s, nor that 1960s predictions aren't useful to modern ears (I suggested the opposite: we probably aren't listening to them enough). It's that the "point of view" seems so much the same as in the 1960s. For the same types of generative models, different, newer people are still generally making what sound like the same old types of predictions, and it feels a lot like we are stuck at "the 1960s' idea of a better fax machine."

Discounting AGI hyper-speculation, we don't seem to have a better perspective today about what lies beyond "better 1960s fax machines," and that leaves some sense that maybe there isn't anything beyond that to predict. It is easy to cynically wonder if we don't have good ideas or new predictions because we don't actually have any concept of good uses for these generative models beyond "fax machine" (or, even more cynically and pessimistically, "toy", in these specific examples).

That doesn't say anything about whether we are able to build solutions for existing predictions; I don't know enough about current trends to have an opinion on that. But it does suggest that the last time people made these sorts of predictions, they failed, so the historic precedent is failure, and if you are worried about the glass being only half full, you should plan for that disappointment (and the consequent industry-wide job shuffle to follow) even if you really want to expect things will be better this time.


The tech may not be substantially new, but the emergent phenomena definitely are.


The quantity has certainly changed. It took researchers months to build an "AI generated novel" way back in the day, and some form or relative of ChatGPT can spit out nearly that much every minute now.

I still haven't been impressed that the quality has truly changed yet. LLMs seem more "fluent" in the language than ever before, but they're still hallucinating nearly as much, and now the fluency just makes people more likely to see "meaning" or intentional action (lies, defamation) where the hallucinations are. The underlying structures of LLMs are still complicated casinos that invoke the Gambler's Fallacy much more than any signs of true "learning". We've put millions of monkeys in front of billions and trillions of slot machines, told them to produce Shakespeare, and many of them believe they are doing just that. (Not just metaphorically: by monkeys I mean, as much as anything, humans susceptible to casino payout mechanics, excitedly spinning slot machines.)

Again, yes, I'm a terrible cynic right now, and I hate to be so down on the technology, but I'm still waiting for something to be excited about that isn't just casinos masquerading as "learning". But people love casinos; they deliver addictive fun. I'm not going to stop people from being excited about all these casinos. I just think that, professionally, as a software developer, if I wanted to be a bad-faith casino manager I'd rather just get into mobile games and gacha/loot-box mechanics. That's more fun, more profitable, and maybe, weirdly, more ethical right now than the current "generative AI" hype.
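
To put the "casino" point concretely: at the bottom, every token one of these models emits is a weighted dice roll over a probability distribution, something like this toy sketch (the logits are made up for illustration):

    # Toy sketch of token sampling: the "slot machine" at the bottom
    # of every LLM. The logits here are made up for illustration.
    import numpy as np

    logits = np.array([2.0, 1.0, 0.5, -1.0])  # scores for 4 candidate tokens
    temperature = 0.8

    probs = np.exp(logits / temperature)
    probs /= probs.sum()

    # Each emitted token is literally a weighted draw from this distribution.
    token = np.random.choice(len(logits), p=probs)
    print(token, probs.round(3))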


The emphasis on "hallucinations" is misplaced from this perspective, IMO. Thing is, when models do hallucinate, they still reason about what they hallucinated. Larger ones (e.g. GPT-4) can even spot their own hallucinations. That is nothing like what we had in the 60s, or even 10 years ago.


I dislike the term "hallucinations" because I feel it also anthropomorphizes the process too much. Unfortunately, "random garbage output" is too many words, but that's closer to what I meant everywhere I used that word.

> Larger ones (e.g. GPT-4) can even spot their own hallucinations.

I've not yet been convinced that this is actually what is happening, based on the examples I've seen. It all looks to me like more "random garbage output" that "feels correct" but isn't provably correct. Most examples I've seen so far look too much like "Stochastic Crow Mode" [1]. The prompts and questions are doing much more work on the humans reading them (and on our interest in anthropomorphizing or mythologizing the models) than the LLMs answering them are.
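
To make that concrete: the "spot their own hallucinations" setups I've seen boil down to a second pass over the first pass's output, roughly like the sketch below. (llm() here is a hypothetical placeholder for whatever text-in/text-out model call you use, not any real vendor API.)

    # Hypothetical two-pass "self-check" loop. llm() is a placeholder
    # for any text-in/text-out model call, not a real vendor API.
    def llm(prompt):
        raise NotImplementedError("plug in an actual model call here")

    def answer_with_self_check(question):
        draft = llm(question)
        critique = llm("List any claims in the following answer that "
                       "may be incorrect or unsupported:\n\n" + draft)
        # The same stochastic process is now grading its own output,
        # so "no issues found" is itself just another sample.
        return llm("Question: " + question +
                   "\nDraft answer: " + draft +
                   "\nPossible issues: " + critique +
                   "\nWrite a corrected final answer.")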

[1] https://fediscience.org/@ct_bergstrom/110182336553459017


There’s absolutely been value delivered, but at every turn it’s been vastly less than what was promised. I would be very surprised if generative AI doesn’t turn out to be the same: legitimately and seriously useful in certain use cases, but not as revolutionary as it’s being sold. I’m excited to see what comes of all this, but the hype is at an absolute fever pitch and I’ve yet to see much past niche use cases and help along the periphery for more general tasks.

I mean, even crypto has some actual, meaningful use cases; it’s just not replacing all currencies and fundamentally reshaping financial systems like a lot of folks thought it would.


> What you're experiencing is confirmation bias. You read about one crime, then started noticing all of the stories about crime, formed a theory based on this hyper focusing, and now you believe it's worse than ever despite the stats clearly showing otherwise.

Exactly. The whole "crime is terrible in SF" thing is just a Republican propaganda narrative that people who should know better are buying into.


> So much of the problem in SF comes down to the progressive politician types who only want to do things that "impact the root causes of crime," and it's extremely frustrating and frequently just plain wrong. Yes, you do not solve the "root problem of why people choose to commit crime" by putting repeat offenders in jail, but you do make the world way better for everyone else who is not a criminal.

AND by doing that you do harm, by perpetuating racist systems of injustice and oppression. The only way to solve that more important problem is by addressing the root causes and allowing longer-term healing to happen.

Yes, that means some people will be inconvenienced, but that's acceptable and a necessary part of the solution. The only way to speed that phase up is to implement comprehensive reparations quickly.


>Yes, that means some people will be inconvenienced

It's okay to say "killed" here. We all know that's the true cost of these policies.


> I think the architecture of Notes/Domino was technically very interesting - a rapid application development environment incorporating a replicated document-oriented database, cross-platform GUI forms designer, and scripting language.

And it was so ahead of its time that I understand it's been used to kill patents, as a demonstration of prior art.

I think I read an article once about a patent case that featured someone tracking down a still-shrink-wrapped copy of Lotus Notes, then having a developer use it to demonstrate that it already had the features that had erroneously been patented by someone later.


> Notes was pretty horrible for creating all sorts of legacy technical debt. Some handy Joe would create some database that would worm itself into critical business processes but be completely unmaintained.

And the "better" alternative is to avoid that "legacy technical debt" by forcing that "handy Joe" to keep doing things by hand, by denying him the tools to solve his problem? Because if you don't have the connections to get the budget to pay a professional developer, you shouldn't be able to solve your problem with software?

IMHO, it's better to think of those kinds of "handy Joe" apps as prototypes.


The problem is they often don't get beyond the prototype stage: the 'developer' leaves the company, and whole business processes end up depending on something that is no longer maintained.

In our place it took a huge effort to move away from Notes. Literally thousands of 'important' databases had accumulated in the system over the years. Some were converted to the web using low-code tech; some were simply archived or exported. But it was a huge mess.

I'm not against prototyping or efficiency at all. But the reality is that Notes had become a really stale platform, and even a prototype should have a continuous maintainer.

In the end we just had too many users using things that nobody knew anything about. This was really a huge risk.

> Because if you don't have the connections to get budget to pay a professional developer, you shouldn't be able to solve your problem with software?

This is a good point, though, and we now keep a whole team of low-code devs who take on exactly this kind of thing for new projects that could offer efficiency gains, but they do it properly, with documentation and maintenance.


> That's not title case - you don't capitalize words such as a, the, of, etc. unless they are at the beginning of the title.

Exactly. Capitalizing every word is the lazy, half-assed pseudo title case that I always have to correct. Unfortunately it's becoming normalized because many major companies that should know better don't even bother to do it right.
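
If anyone wants to get it right programmatically, here's a minimal sketch; the small-word list is my own rough approximation, not any particular style guide's official list:

    # Minimal title-case sketch; the small-word set is my own rough
    # approximation, not any particular style guide's official list.
    SMALL_WORDS = {"a", "an", "the", "and", "but", "or", "nor",
                   "of", "in", "on", "at", "to", "for", "as", "by"}

    def title_case(text):
        words = text.lower().split()
        out = []
        for i, word in enumerate(words):
            # First and last words are always capitalized.
            if i == 0 or i == len(words) - 1 or word not in SMALL_WORDS:
                word = word[0].upper() + word[1:]
            out.append(word)
        return " ".join(out)

    print(title_case("the rise of the machines"))  # The Rise of the Machines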

