Hacker News

One of the things about fakes is that they evolve over time.

Believe it or not, an incident beginning in 1917, which drew in Arthur Conan Doyle, creator of Sherlock Holmes, is instructive.

The "Cottingley Fairies" were imagined when a couple of teenage girls took photographs of themselves with pictures of fairies. [1]

The important thing is that, to my eyes and I think to a typical person of this era, these photos of girls with cutouts of fairies look like ... exactly that. When I first saw these pictures, I couldn't believe anyone could be fooled by them. But circa 1917, photography had only recently appeared, and so had photo fakes. The skill to spot the difference had only recently appeared too.

Which is to say, I'm pretty sure the author is correct that a good deal of the text OpenAI generated isn't intelligent text generation but stuff with enough of the markers of "real" text that people might not notice it if they weren't paying attention.

Moreover, I strongly suspect that if this sort of sham were to become more common, the average person's ability to notice it would increase significantly.

[1] https://en.wikipedia.org/wiki/Cottingley_Fairies



>The important thing is that, to my eyes and I think to a typical person of this era, these photos of girls with cutouts of fairies look like ... exactly that. When I first saw these pictures, I couldn't believe anyone could be fooled by them. But circa 1917, photography had only recently appeared, and so had photo fakes. The skill to spot the difference had only recently appeared too.

Tons of people believe in crude "UFO" and "Bigfoot" and "chupacabra" and "lock ness" and such photos, well into today, though.


People believe in the Falkirk Wheel, even though it's clearly bad CGI!


Loch Ness.


Nitpick, but isn’t Loch Ness just the name of the lake?


How do you know the lake is real?


I've seen it. It could have been a very convincing fake, but.. if that doesn't count, then nothing does.


How do I know you're not in on the conspiracy?


Maybe it was just an oversized puddle, and not actually a lake.


The fake water there is very cold.


Amazingly, no: the lake is called Ness; "loch" just means "lake" in Scottish Gaelic.


And Frankenstein was the name of the doctor.


You can think of Frankenstein's monster as the doctor's son, so they would also be a Frankenstein.


Ohh nice one :)


That's probably right, but as with the fairy pictures, the technology to make the fakes is also advancing, and no one can tell whether a recently made picture is fake just by eye.

As fake text becomes more common the tools to make it will become more advanced to the point where we can't tell it apart from the real thing.


Surely the only worry about 'fake text' is scale? People have been able to write down lies for thousands of years.


Personalization of the fake text to precisely match what the reader is most susceptible to is probably a bigger problem, especially if the bad actor is able to target small groups (say, politicians?). An AI writer that could write what someone will believe, in the style they're most open to, using their personal information and a history of exactly what they've already seen, would be very hard to resist.

Couple that with scale and it'd be game over for distributing written information across the internet. No one would be able to believe anything they see online any more.

Although, weirdly, that actually sounds like a decent use case for a blockchain.


Well, most "high-value" groups like politicians, journalists, billionaires and such are targets right now in the sense that intelligence agencies and private opportunists have their information and trying to use text to influence them. The AI we're talking about isn't as good as human and so it's not going produce things that even as well tuned as a people currently do - since the method involves just emulating normal text, the AI is, at best, going to become nearly as "good as average".

But it's reasonable to say this could do a bit of damage to "moderate value targets". Some portion of retirees today are already "infected" with fake-news obsessions. Not only could you have personalized spam/social engineering, but you could train the AI further on what worked, even starting from a lowish success rate.

All that said, it seems like the OpenAI text generator would not be such a customized social-engineering constructor. Rather, such a thing would have to be trained by the malicious actors themselves, who have their own data about what works. So the now-always-in-the-background question, whether OpenAI's shyness about releasing the code is justified, still seems like a no.


Of course, any AI that is sufficiently advanced to sway both public and personal opinion will probably also be able to mount a 51% attack against whatever blockchain we expect to refute its lies.


Not so: 1) because the abilities of AI don't scale in that manner, and 2) because, unlike in a true decentralized blockchain, there are centralized trust sources you can use to verify the content. This is really a better use case for Keybase than for a blockchain.


Scale is a big problem, though.

If it's cheaper to automatically create noise than it is to automatically remove it, public debate on the internet becomes impossible.


Charge one cent per post.


That's a startup idea.

Reddit, but where each account has a bitcoin wallet connected. Every comment/post/upvote costs something like 0.1 cents, and every upvote on your comments/posts gives you 0.09 cents.

The rest is used for running the website (so no ads).
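The economics of that scheme can be sketched in a few lines. This is just an illustration of the numbers above (0.1 cents per action, 0.09 cents back per upvote received, the 0.01-cent spread funding the site); the function name and scenario are made up.

```python
# Sketch of the proposed micropayment economics. Amounts are in
# thousandths of a cent (integers) to avoid float rounding issues.

ACTION_COST = 100   # 0.1 cents, paid for each post, comment, or upvote
AUTHOR_SHARE = 90   # 0.09 cents, credited to the author per upvote received
SITE_SHARE = ACTION_COST - AUTHOR_SHARE  # 0.01 cents kept by the site

def settle(posts: int, upvotes_received: int, upvotes_given: int) -> int:
    """Net balance change for one account, in thousandths of a cent."""
    spent = (posts + upvotes_given) * ACTION_COST
    earned = upvotes_received * AUTHOR_SHARE
    return earned - spent

# A user who writes 10 posts, gives 5 upvotes, and receives 20 upvotes
# spends 15 * 100 = 1500 and earns 20 * 90 = 1800, netting +300
# (i.e. 0.3 cents), while the site keeps 10 per action taken.
print(settle(10, 20, 5))  # → 300
```

Note that an account only comes out ahead if upvotes received exceed actions taken by a factor of about 10:9, which is what makes pure spamming expensive.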


That seems like it's going to put a HEAVY incentive on the echo chamber effect, though.

If it costs me real money to have an opinion that runs contrary to the herd, I'm not going to spout my opinion regardless of whether that opinion is factual and accurate.

That whole thing seems dangerous to me for some reason that I can't pin down.


It would cost a tiny amount to state your opinion as an individual, but spamming an opinion would be more expensive. A problem is that deep pockets would allow you to fake wide support for a minority position, and that most of the money would just circle within an opinion group.

Ultimately I think we will come around to the idea of verified digital identities almost everywhere. You could still have an AI agent spam in your name (or pseudonym), but you could not pretend to be multiple people.


Then remove the "get money for upvotes" aspect.

I can see politicians using the service as a propaganda channel, but they already do the same with free services, and this way at least it would cost them something.


It's not going to cost you real money to have a contrary opinion, just 0.1 cents; you simply won't get that 0.1 cents back if it's contrary. I didn't see any -0.09 for a downvote, just +0.09 for an upvote. Or you could make it net out with a minimum of 0.


It's one of the directions explored by status.im (full disclosure: I work there), with tools such as Visibility Stake and Tribute to Talk.


Could you say more? Is it already available?


The app itself is already available in beta on Android. It is an Ethereum client for mobile, and it includes a messenger that uses Whisper, a gossip protocol, to transmit data (at least for now), which provides darkness and encryption.

The first iteration of Tribute to Talk will be pretty basic with a simple transactional model. A pays B to start talking to B, B can block A at any time. But the smart contract developers are working on more sophisticated schemes for the future.
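The simple transactional model described above can be sketched roughly like this. This is a hypothetical illustration only, not Status's actual smart contract; the class and method names are invented.

```python
# Hypothetical sketch of the basic Tribute to Talk model: A pays B a
# tribute to start talking to B, and B can block A at any time. A real
# version would live in a smart contract; this just models the rules.

class TributeToTalk:
    def __init__(self):
        self.tributes = {}    # recipient -> required tribute amount
        self.allowed = set()  # (sender, recipient) pairs that may talk
        self.blocked = set()  # pairs the recipient has blocked

    def set_tribute(self, recipient, amount):
        """Recipient declares the price of opening a conversation."""
        self.tributes[recipient] = amount

    def pay_tribute(self, sender, recipient, amount):
        """A pays B; if the tribute is met and A isn't blocked, A may talk."""
        if (sender, recipient) in self.blocked:
            return False
        if amount < self.tributes.get(recipient, 0):
            return False
        self.allowed.add((sender, recipient))
        return True

    def block(self, recipient, sender):
        """B can block A at any time, revoking A's access."""
        self.blocked.add((sender, recipient))
        self.allowed.discard((sender, recipient))

    def can_message(self, sender, recipient):
        return (sender, recipient) in self.allowed
```

The key property is that blocking is unilateral and final from the recipient's side: once blocked, further tribute payments from that sender are refused.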

Here are two related discussions on our discuss forums: Visibilty Stake for Public Chat Room Governance https://discuss.status.im/t/visibilty-stake-for-public-chat-...

PRBS protocol proposal - An incentivized Whisper like protocol for status https://discuss.status.im/t/prbs-protocol-proposal-an-incent...

If you want more precise answers, don't hesitate to post there; Ricardo loves to discuss these topics.


In-browser crypto mining would allow this to work well too.


Yes, and as with junk food (junk = fake here), the solution is provenance. In the same way that we shouldn't consume food without some idea of its provenance, we need to verify the provenance of the information we consume.


Very relevant article [0]. The second picture in the article is a real eye-opener on the obviousness of fakes.

[0] https://www.theregister.co.uk/2019/02/25/ai_roundup/


It reminds me of this: https://en.wikipedia.org/wiki/Turing_test#Na%C3%AFvet%C3%A9_...

I remember reading once that a machine had finally passed the Turing test, but when I looked in detail at what some of the judges on the panel had thought was a human talking, I realized how subjective the test was.


Right now people talk to bots every day, in the form of customer support.

People's modern perceptions of bots are much more evolved than when the test was first theorized, so now is the time to do an actual Turing test.

It will probably fail, but we are surely close to the point where an AI will actually pass it.

My guess is we are 10 years away from that moment. It will be like the movie "Her".



I don't think that's true at all. People can rarely tell the difference between an article without facts or sources and a well sourced article. It's only going to get worse when the bots can generate convincing 'fake news'.


Of course this is more or less what the NPC meme is about, insinuating that others are on autopilot and swayed by fake news.


That’s a really good point. Thanks for the food for thought.


It'll be interesting to see which evolves faster.


I would bet my entire net worth on it being machines.


Does that include saleable organs?


Those won't be worth much when the machines take over.



