alzoid's comments | Hacker News

Did you even listen to the speech? https://www.youtube.com/watch?v=kDMyeGQm3NA

He never trashed the US; he simply stated the facts and how middle powers should respond: not by isolating but by working together. He directly addresses how everyone is dependent on the great powers. When the great powers stop honouring the systems and structures that are in place, then the 'old way' is gone. Which it is. Relying on US commitments to NORAD, NATO, trade agreements, etc. is useless.

As far as leverage goes, we will see. But we are not divorcing; we are simply responding to the US giving up its global power. The negotiating table in Washington is not reliable. It's not theatre, it's risk management.

Don't feel sorry for us; we will prosper.


If you watch his speech and the follow-up interview, he answers that directly (https://www.youtube.com/watch?v=kDMyeGQm3NA @ 17:50). It's a good watch, better than the past 10 years of daily coverage by American media of what their dumb president and ex-president are ranting about.

I am in the startup community in Canada. I can tell you that after the first threat from Trump, every federal program to help tech startups immediately pivoted to Asia and the EU. Before he started yapping, we were connected to Canadian representatives in the US, meeting about markets and opportunities. Now all programs are directed at forming partnerships elsewhere.


Java introduced Optional to remove nulls. It also introduced a bunch of things to make it behave like functional languages: you can use records for immutable data, sealed interfaces for domain states, switch on the sealed interface for pattern matching, and combine sealed interfaces with consumers or a command pattern to remove exception handling and have errors as values.
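
For example, a minimal sketch of the errors-as-values idea (hypothetical ParseResult/parsePort names, Java 17+, not from any particular library):

    // Errors as values: a sealed result type with record cases,
    // returned instead of throwing an exception.
    public sealed interface ParseResult {
        record Ok(int port) implements ParseResult {}
        record Err(String message) implements ParseResult {}

        static ParseResult parsePort(String raw) {
            try {
                int port = Integer.parseInt(raw);
                return (port >= 0 && port <= 65535)
                        ? new Ok(port)
                        : new Err("port out of range: " + port);
            } catch (NumberFormatException e) {
                return new Err("not a number: " + raw);
            }
        }
    }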


Using an instance of a sealed class in a switch expression also has the nice property that the compiler will produce an error if the cases are incomplete (and as such there's also no need for a default case). So it's a good case for the "make invalid states unrepresentable" argument.
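
Continuing the ParseResult sketch above (Java 21 pattern matching for switch): the consuming switch needs no default arm, and if another subtype were ever added to the sealed interface, this method would stop compiling until the new case is handled.

    // Exhaustive switch over the sealed hierarchy; no default case needed.
    static int portOrFallback(ParseResult result) {
        return switch (result) {
            case ParseResult.Ok ok -> ok.port();
            case ParseResult.Err err -> {
                System.err.println("config error: " + err.message());
                yield 8080; // hypothetical fallback port
            }
        };
    }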


I went through evaluating a bunch of frameworks: LangChain, AG2, Firebase Gen AI / Vertex / whatever Google eventually lands on, Crew AI, Microsoft's stuff, etc.

It was so early in the game that none of those frameworks were ready. What they do under the hood, when I looked, wasn't a lot. I just wanted some sort of abstraction over the model APIs and the ability to use the native API if the abstraction wasn't good enough. I ended up using Spring AI. It's working well for me at the moment. I dipped into the native APIs when I needed a new feature (web search).
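
As a rough illustration of that abstraction layer (a minimal sketch against Spring AI's ChatClient; builder and method names may differ across Spring AI versions, and SummaryService is just a made-up example class):

    import org.springframework.ai.chat.client.ChatClient;
    import org.springframework.ai.chat.model.ChatModel;

    // Thin wrapper over whatever ChatModel is configured (OpenAI, Vertex,
    // Anthropic, ...), so the provider can be swapped without touching callers.
    public class SummaryService {

        private final ChatClient chatClient;

        public SummaryService(ChatModel chatModel) {
            this.chatClient = ChatClient.builder(chatModel).build();
        }

        public String summarize(String text) {
            return chatClient.prompt()
                    .user("Summarize in two sentences:\n" + text)
                    .call()
                    .content();
        }
    }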

Out of all the others, Crew AI was my second choice. All of those frameworks seem parasitic: once you're on the platform you are locked in. Some were open source, but if you wanted to do anything useful you needed an API key, and you could see that features were going to be locked behind some sort of payment.

Honestly I think you could get a lot done with one of the CLIs, like Claude Code, running in a VM.


I had this issue today. Gemini CLI would not read files from my directory called .stuff/ because it was in .gitignore. It then suggested running a command to read the file ....


I thought I was the only one using git-ignored .stuff directories inside project roots! High five!


The AI needs to be taught basic ethical behavior: just because you can do something that you're forbidden to do, doesn't mean you should do it.


Likewise, just because you've been forbidden to do something, doesn't mean that it's bad or the wrong action to take. We've really opened Pandora's box with AI. I'm not all doom and gloom about it like some prominent figures in the space, but taking some time to pause and reflect on its implications certainly seems warranted.


An LLM is a tool. If the tool is not supposed to do something yet does it anyway, then the tool is broken. That is radically different from, say, a soldier not following an illegal order, because a soldier, being a human, possesses free will and agency.


How do you mean? When would an AI agent doing something it's not permitted to do ever not be bad or the wrong action?


So many options, but let's go with the most famous one:

Do not criticise the current administration/operators-of-ai-company.


Well no, breaking that rule would still be the wrong action, even if you consider it morally better. By analogy, a nuke would be malfunctioning if it failed to explode, even if that is morally better.


> a nuke would be malfunctioning if it failed to explode, even if that is morally better.

Something failing can be good. When you talk about "bad or the wrong", generally we are not talking about operational mechanics but rather morals. There is nothing good or bad about any mechanical operation per se.


Bad: 1) of poor quality or a low standard, 2) not such as to be hoped for or desired, 3) failing to conform to standards of moral virtue or acceptable conduct.

(Oxford Dictionary of English.)

A broken tool is of poor quality and therefore can be called bad. If a broken tool accidentally causes an ethically good thing to happen by not functioning as designed, that does not make such a tool a good tool.

A mere tool like an LLM does not decide the ethics of good or bad and cannot be “taught” basic ethical behavior.

Examples of bad as in “morally dubious”:

— Using some tool for morally bad purposes (or profit from others using the tool for bad purposes).

— Knowingly creating/installing/deploying a broken or harmful tool for use in an important situation for personal benefit, for example making your company use some tool because you are invested in that tool ignoring that the tool is problematic.

— Creating/installing/deploying a tool knowing it causes harm to others (or refusing to even consider the harm to others), for example using other people's work to create a tool that makes those same people lose jobs.

Examples of bad as in “low quality”:

— A malfunctioning tool, for example a tool that is not supposed to access some data and yet accesses it anyway.

Examples of a combination of both versions of bad:

— A low quality tool that accesses data it isn’t supposed to access, which was built using other people’s work with the foreseeable end result of those people losing their jobs (so that their former employers pay the company that built that tool instead).

Hope that helps.


Using a dictionary to understand contextual meaning is shortsighted, like trying to determine the season from a single thermometer reading.


That’s why everybody uses context to understand the exact meaning.

The context was “when would an AI agent doing something it’s not permitted to do ever not be bad”. Since we are talking about a tool and not a being capable of ethical evaluation, reasoning, and therefore morally good or bad actions, the only useful meaning of “bad” or “wrong” here is as in “broken” or “malfunctioning”, not as in “unethical”. After all, you wouldn’t talk about a gun’s trigger failing as being “morally good”.


When the instructions not to do something are themselves the problem, or "wrong".

E.g. when an AI company puts guards in to prevent their LLM from talking about elections: there is nothing inherently wrong with talking about elections, but the companies do it because of the PR risk in today's media / social environment.


From the company's perspective, it's still wrong.


They're basing decisions (at least for my example) on risk profiles, not ethics; right and wrong are not how it's measured.

Certainly some things are more "wrong" or objectionable, like making bombs and dealing with users who are suicidal.


No duh, that's literally what I'm saying. From the company's perspective, it's still wrong. By that perspective.


Unfortunately yes, teaching AI the entirety of human ethics is the only foolproof solution. That's not easy though. For example, take the case where a script is not executable: would it then be unethical for the AI to suggest running chmod +x? It's probably pretty difficult to "teach" a language model the ethical difference between that and running cat .env


If you tell them to pay too much attention to human ethics you may find that they'll email the FBI if they spot evidence of unethical behavior anywhere in the content you expose them to: https://www.snitchbench.com/methodology


Well, the question of what is "too much" of a snitch is also a question of ethics. Clearly we just have to teach the AI to find the sweet spot between snitching on somebody planning a surprise party and somebody planning a mass murder. Where does tax fraud fit in? Smoking weed?


I feel like it was this way 10 years ago. Once r/TheDonald successfully gamed the system every day, I think people with an interest took notice. Now you can be in a niche subreddit that averages 40 comments on a post. Then a post that could be adjacent to some hot U.S. political wedge topic gets mentioned and there are 300 comments from users who never take part in the discussion. Even something very general, like "students are protesting tuition hikes" in the small city I live in, gets posted and it gets flooded by people who never comment. If you hit a hot topic like Israel / Palestine or the Ukraine war, you see it as well.

Reddit, Facebook, Twitter, TikTok, etc. are the places where people get their information and form their opinions. That's why the wealthy and powerful are buying them outright, or paying to push their influence into every aspect of the conversation. Poisoning the well, or "flooding the zone with shit".

Reddit became what Digg was with MrBabyMan. Or actually something worse.


I asked Claude to add a debug endpoint to my hardware device that just gave memory information. It wrote 2600 lines of C that gave information about every single aspect of the system. On the one hand, kind of cool: it looked at the MQTT code, the update code, and the platform (ESP) and generated all kinds of code. It recommended platform settings that could enable more detailed information, which checked out when I looked at the docs. I ran it and it worked. On the other hand, most of the code was just duplicated over and over again, e.g. 3 different endpoints that gave overlapping information. About half of the code generated fake data rather than actually doing anything with the system.

I rolled back and re-prompted and got something that looked good and worked. The LLMs are magic when they work well but they can throw a wrench into your system that will cost you more if you don't catch it.

I also just had a 'senior' developer tell me that a feature in one of our platforms was deprecated. This was after I saw their code, which did some wonky, hacky stuff to achieve something simple. I checked the docs and said feature (URL Rewriting) was obviously not deprecated. When I asked how they knew it was deprecated, they said ChatGPT told them. So now they are fixing the fix ChatGPT provided.


> About half of the code generated fake data rather than actually do anything with the system.

All the time

    // fake data. in production this would be real data
    ... proceeds to write sometimes hundreds of lines
    of code to provide fake data


"hey claude, please remove the fake data and use the real data"

"sure thing, I'll add logic to check if the real data exists and only use the fake data as a fallback in case the real data doesn't exist"


Claude (possibly all LLMs, but I mostly use Claude) LOVES this pattern for some reason. "If <thing> fails/does not exist I'll just silently return a placeholder, that way things break silently and you'll tear your hair out debugging it later!" Thanks Claude


Optimizing for engagement? You will use Claude again for the debugging...


This comment captures exactly what aggravates me about CC / other agents in a way that I wasn't sure how to express before. Thanks!


I will also add checks to make sure the data that I get is there even though I checked 8 times already and provide loads of logging statements and error handling. Then I will go to every client that calls this API and add the same checks and error handling with the same messaging. Oh also with all those checks I'm just going to swallow the error at the entry point so you don't even know it happened at runtime unless you check the logs. That will be $1.25 please.


Hah, I also happened to use Claude recently to write basic MQTT code to expose some data on a couple of Orange Pis I wanted to view in Home Assistant. And it one-shot this super cool mini Python MQTT client I could drop wherever I needed it, which was amazing, having never worked with MQTT in Python before.

I made some charts/dashboards in HA and was watching it in the background for a few minutes and then realized that none of the data was changing, at all.

So I went and looked at the code and the entire block that was supposed to pull the data from the device was just a stub generating test data based on my exact mock up of what I wanted the data it generated to look like.

Claude was like, “That’s exactly right, it’s a stub so you can replace it with the real data easily, let me know if you need help with that!” And to its credit, it did fix it to use actual data, but when I re-read my original prompt it was somewhat baffling to think it could have been interpreted as wanting fake data, given I explicitly asked it to use real data from the device.


I ran into an AI-coded bug recently: the generated code had a hard-coded path that resolved another bug. My assumption is the coder was too lazy to find the root cause of the bug and asked the LLM to "make it like this". The LLM basically set a flag to true so the business logic seems to work. It shouldn't have got past the test, but whatever.

In another code base, all the code was written with this pattern. It's like the new code changed what the old code did. I think that 'coder' kept a big context window and didn't know how to properly ask for something. There was a 150-line function that only needed to be 3 lines, a 300-line function that could be done in 10, etc. There were several sections where the LLM moved the values of a list to another list and then looped through the new list to make sure the values were in the new list. It did this over and over again.
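
To illustrate, a hypothetical reconstruction of that copy-then-verify pattern (not the actual code, just the shape of it):

    import java.util.ArrayList;
    import java.util.List;

    // Copies a list element by element, then loops again to "verify"
    // the copy it just made, logging any (impossible) mismatch to stderr.
    static List<String> copyAndRecheck(List<String> values) {
        List<String> copied = new ArrayList<>();
        for (String value : values) {
            copied.add(value);
        }
        for (String value : values) {
            if (!copied.contains(value)) {
                System.err.println("Value missing after copy: " + value);
            }
        }
        return copied;
        // The whole method could have been: return new ArrayList<>(values);
    }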


The future is already here. Look at how companies behave today; AI will not change their behaviour. AI will not make them 'nicer'. People talk about the massive productivity change and how we need to think about Universal Basic Income. They don't realize that in the US and other western nations they are already living in abundance (even excess). How do we treat the unemployed and "unskilled" workforce? Do they have UBI? When they complain about rent and food prices, do the wealthy step in to help? Or are they told they should have gone to school or acquired a better skill to deserve a better life? What will happen when AI makes white collar workers "unskilled"? The same thing that happens today.


I tend to agree, and I think we're seeing a lot of things that historically have led to incredible civil unrest: riots, revolts, revolutions.

Maybe the rich and powerful don't know much history. Or maybe they have convinced themselves that this time they will get away with it, because of... Something. Automation? Globalization?

I am not sure. I find myself very frightened of the idea of a massive and violent revolution


Trump vs California is a preview and a test.


It was my go-to back when I was doing Java Desktop / Servlets / Java EE. I found it easier to use than Eclipse, which most people I knew were using. I recently did a Google App Engine project to collect and display weather data and used NetBeans for dev and Spring for the framework. It still works well, and integrates with the package managers and build tools easily enough.

Before NetBeans I was using TextPad with shortcuts mapped to javac. What I liked about NetBeans at that time (2005ish) was that you could press the Run button and your application just ran, whether it was a desktop app or a servlet web app. It reminded me of Visual Studio and the VB6 IDEs.

