What kind of answer to your question would you accept? What kind of answer would you be very satisfied with, versus merely be placated by? I'm curious because I have my own answer to your question for my own things that I suspect you would be unhappy with, and I'm wondering how absolute-good you require your interrogatees to be.
I don't have any expectations. In fact, the idea that there would be an answer I would be 'very satisfied with' would make such a question meaningless; it would presume that I am about to judge the author, which I'm not. It's an AMA, so I did just that: asked.
> There are probably going to be a lot of people negatively affected by this for quite some time to come.

One thing to point out is that there are grades of things. There is "public", and then there is "top hit on Google". Similarly, there is "insecure", and then there is "simple double-click tool to facilitate identity theft".
> How many millions of dollars and man hours is it going to take to lock down every access point? How many new servers are going to be needed now that https is used for everything and requests can't be cached?
Indeed it is, but I'm interested in this particular author's stance on this, prompted by their disclaimer which clearly indicates that they realize that there is the risk of abuse, and an AMA seems to be an excellent opportunity to gain some insight.
I'm not OP but I would answer like this: abuses of this technology are inevitable and can only be mitigated by counter-software (which leads to an arms race).
The release of this source code could kickstart the development of deepfake detection software.
Or maybe in general people need to put less weight on video evidence.
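To make the detection idea a bit more concrete: modern deepfake detectors are trained classifiers, but a toy illustration of one weak artifact signal is possible without any model at all. Blended face regions in many deepfakes are slightly smoother than the surrounding frame, so comparing sharpness inside the face box against the whole frame can hint at tampering. This is a naive sketch, not how production detectors work; the function names and the `(top, left, height, width)` box convention are my own assumptions.

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of the 4-neighbour discrete Laplacian: a crude sharpness score."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def sharpness_ratio(frame: np.ndarray, face_box: tuple) -> float:
    """Sharpness inside a face box relative to the whole frame.

    face_box is (top, left, height, width) in pixels. Values well below
    1.0 mean the face region is smoother than the rest of the frame,
    which *can* (weakly) indicate a blended-in face.
    """
    t, l, h, w = face_box
    face = frame[t:t + h, l:l + w]
    return laplacian_variance(face) / (laplacian_variance(frame) + 1e-9)
```

A heuristic like this is trivially defeated by post-processing, which is exactly the arms-race dynamic described above; real detection efforts use trained networks and still struggle to generalize.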
We have warned many vendors about the vulnerability of their commercial biometrics software. The threat is currently downplayed by the whole industry. We hope this release will be a wake-up call, and that other experts will join our team in raising the alarm.
Deepfakes are already used for spoofing KYC around the world. This is already happening, and not by using `dot`.
Tools like these will end up in the hands of those trying to harden an installation, and in the hands of those who will use them to try to break into such installations.
I've often wondered whether the overall effect of the security research community is a net positive or a net negative, and I honestly do not know the answer. So yes, it is a loaded question. But asking yourself whether what you can do is actually the right thing to do is always good, especially if you - as these authors imply - know up front that there is a large chance of abuse.
I used to work on secure communications software, and I've often wondered if many more criminals than oppressed people were using it.
In the end, I decided that one oppressed person using it to improve their situation is morally "worth" many criminals using it. It's kind of like "better ten guilty men go free than put one innocent man behind bars".
Because the one innocent is usually a defender, whereas the guilty men are usually the attackers, and giving the attackers an effective advantage in an arms race is a risky thing with unknown and potentially devastating outcomes.
In my own life (the live video example mentioned elsewhere) it would have meant that we probably would have had everything we have today anyway, only a little bit later (not even that much later: I'm aware of one other individual who was working on a similar concept and who contacted me after our release).
Video conferencing, live streams in the browser without plug-ins and so on all would have happened, for sure. But at least the massive mountain of abuse cases would not rest partially on my shoulders. And because I've been confronted with the direct evidence of the results of my creation, for me that link is easy to make. But if you work on secure communications software you are probably not aware of the consequences.
I've been in that position, which is one of the reasons why I'm asking. When I came up with 'live streaming video on the www' I never for one second sat down to think about the abuse potential. Color me hopelessly naive. And when confronted with the various abuses over the years I've always had a problem with that: this was the direct consequence of me just 'scratching my itch', and it caused a huge amount of misery. Oh, say the defenders, but if you had not done it then somebody else would have. This is true, but then that moral weight would be on their shoulders and not on mine.
Hence my question. Because I do feel that weight and it has caused me to carefully consider the abuse potential of the stuff that I've released since then and I've only released those things that I feel have none that I can (easily) discern.
One thing I learned early in my career, and numerous times during it: For every ethical stand you take against writing a bit of software you consider questionable, there's a line of other software engineers out the door willing to do it. I remember when as a junior engineer, I worked up the courage to tell my boss I had a moral problem with writing some code that would help the product cheat at a benchmark. He totally understood, and I didn't get fired or anything--just moved on to a different project. Bob, two cubicles down, was more than happy to write the benchmark-cheating code.
Software engineers and other technology creators don't take a "Do No Harm" oath like doctors. Many of them have never even taken a single Ethics In Technology course at university (it was an optional class when I was in undergrad decades ago). And, even in the alternate universe where ethics was baked into engineering training, all it takes is a single rogue willing to ignore them, and now the world has to deal with it.
Which is one of the reasons I'm so completely against software patents and a large number of patents in general. Quite a few of them are simply things that the time is right for.
Here is one of my idea dump lists; you can check for yourself which ones are not yet done (probably a really small number by now) and which ones have turned out to be home runs (and in some cases billion-dollar-plus companies).
One that wasn't on there eventually led to https://pianojacq.com/, which I'm happy to report to date has not led to any kind of abuse. And no, it did not put any piano teachers out of business either.
I just wonder: how well will this work with less realistic textures? Imagine applying it to the textures of a video game 3D model, for instance.
And yeah, please don't be discouraged from continuing to publish your work and models because some people think it can be abused. Bad guys always have access to the tech they want anyway.
Thanks for the questions. The main faceswap algorithm that we use is SimSwap. You can take a look at their repo to understand its limitations and the chances of applying it to a 3D game model. To some extent it will work, but I suspect it may not blend very photorealistically.
1. How would you feel if this toolkit were used to create embarrassing and convincing deepfake videos of you and/or your family members (perhaps your parents)?
2. Why do you think you have to enable people to fake videos very easily?
Are you not worried about the conflict of interest inherent in providing offensive and defensive tools simultaneously?
If someone charged money for minesweeping and simultaneously gave out mines I think that would be a fairly clear problem. I think it's a good metaphor because it captures both the conflict of interest and large potential for collateral damage.
Proud to share this preview of my team's work on AI Red Teaming of face biometrics systems. We are the first to show how easy it is to spoof industrial-grade identity verification systems with real-time deepfakes and other tricks.