
The Far Reaching Effects of AI - Mimicking the Human Voice

  1. Jun 8, 2018 #1

    jedishrfu

    Staff: Mentor

  3. Jun 8, 2018 #2
    The implications are frightening, right?
     
  4. Jun 8, 2018 #3

    jedishrfu

    Staff: Mentor

    Yes, although folks have done social engineering like this before with a not-so-perfect voice and gotten away with it.

    In one case, years ago, a crafty lawyer created a fake company and forged a letter to steal a domain name from another man, stating that he was an employee of the man's company and that the company was transferring ownership to a new one. The internet registrar complied, no questions asked, and it took several years and a long court fight to get the domain back, and many more years to get paid for the loss. It was done with official-looking letters rather than a fake voice, but you get the idea of how this could be used. (See the case of Kremen v. Cohen and the fight over an **redacted** domain name.)
     
  5. Jun 8, 2018 #4
    Presumably you can create a video of anybody saying anything you like, and it would be difficult to determine that it was fake. Imagine David Muir (ABC) breaking in and announcing "live" on site an alien invasion (H. G. Wells, "The War of the Worlds"). What will we be able to believe?
     
  6. Jun 8, 2018 #5

    jedishrfu

    Staff: Mentor

    Videos can be analyzed and debunked by the artifacts they contain. Scientific American once published an article about photo debunking that looked at how shadows were cast; in many fake photos there was a clear discrepancy, not obvious to the casual observer. I figure a similar scheme is used to debunk fake videos.

    https://www.scientificamerican.com/article/5-ways-to-spot-a-fake/
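
    As a toy illustration of the shadow check (all coordinates here are invented): in a photo lit by a single source, the line through each object point and its matching shadow point should pass through a common point, the image of the light source. Widely scattered intersections hint at a composite. A quick Python sketch:

    Code:
    import itertools

    def line_through(p, q):
        # line a*x + b*y = c through points p and q
        (x1, y1), (x2, y2) = p, q
        a, b = y2 - y1, x1 - x2
        return a, b, a * x1 + b * y1

    def intersection(l1, l2):
        # Cramer's rule; None if the lines are (nearly) parallel
        a1, b1, c1 = l1
        a2, b2, c2 = l2
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-9:
            return None
        return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

    # (object point, matching shadow point) pairs read off the image
    pairs = [((10, 5), (14, 13)), ((30, 8), (32, 12)), ((50, 2), (55, 12))]
    lines = [line_through(p, s) for p, s in pairs]
    for l1, l2 in itertools.combinations(lines, 2):
        print(intersection(l1, l2))  # genuine photos cluster; fakes scatter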

    Low resolution, though, causes big problems that are hard to debunk easily. There was a video of cars mysteriously jumping around on a roadway as if selective anti-gravity were at work. The resolution was too low to show the downed power-line cable, dragged by a street sweeper, that flipped the cars as it became taut.

    https://www.cnn.com/videos/world/2015/11/30/china-levitating-cars-mystery-solved-orig-sdg.cnn
     
  7. Jun 8, 2018 #6
    That was 10 years ago. Maybe things have gotten a little more sophisticated.

    Check this out:
     
  8. Jun 8, 2018 #7

    jedishrfu

    Staff: Mentor

    This brings up the dilemma of group specialization, where the folks who build a technology pass the moral question of its use down the line. It's similar to gun makers who don't feel morally responsible for how their guns are used, or gun shops who sell the guns... each group refuses to take responsibility, so no one does, and the technology is used for bad things.

    One inventor I knew loved to invent things he hated. Why? Because then he could patent them and prevent them from being made, at least for a while.

    Perhaps we need something like that for technology.
     
  9. Jun 10, 2018 #8

    anorlunda

    Staff: Mentor

    I heard that discussed on NPR. The expert being interviewed said that the problem is asymmetric warfare. One can create a fake video in an hour but it takes 40 hours of skilled labor to debunk it. In addition, who funds the debunker and how are the debunked conclusions disseminated?

    But I see nothing new here. New technology has always been used for good and bad, and it always will. What else would you expect?
     
  10. Jun 10, 2018 #9

    CWatters

    Science Advisor
    Homework Helper
    Gold Member

  11. Jul 17, 2018 #10
    What technology is exempt from bad use? Should we vilify farmers and grocers for feeding bad guys? Granted, some technologies are more readily adapted to harmful and wrongful use than others; however, the responsibility for wrongdoing lies primarily with the doer of the wrong. I think that the more potentially harmful a technology is, the more its purveyors should be called upon to be diligent that they do not knowingly provide it in aid of a harmful purpose; but it is no easy task to determine, and to put into practice, exactly the right measures by which that duty should be carried out.
     
  12. Jul 17, 2018 #11

    anorlunda

    Staff: Mentor

    Sure. In the end, all such questions reduce to judgements of good and evil, which are based on values, which are not universal, and on the degree to which the majority may impose its values on the minority. Blah blah. We loosely call it politics, or maybe religion. We discuss such things in the GD forum on PF, but not in the technical forums.
     
  13. Jul 17, 2018 #12
    In this instance, a Staff member introduced the terms "moral issue", "morally responsible", "responsibility" and "bad things" into the topic; I responded accordingly.
     
  14. Jul 17, 2018 #13

    anorlunda

    Staff: Mentor

    No problem. You did nothing wrong. But if this thread continues to go in that direction, I'll move it to General Discussion.
     
  15. Jul 17, 2018 #14
    Fair enough, Sir; the following, I hope, is back on topic:

    This problem of fake human phone callers being used fraudulently seems to me similar, in some ways, to the problem of one-way authentication/validation/verification where two-way would be appropriate. Websites can use CAPTCHAs to ensure the user is human and not a bot; humans should be able to apply something similar to a caller.

    An example of the one-way-only problem is the fake ATM that collects the mag-stripe data from a would-be user's card, prompts for the PIN, displays something like EID6049I LINK ERROR 02A3 / EID6051I LOCAL SYSTEM RESET 012B, and then re-displays the welcome screen. The fake ATM collects card data and PINs; the operator then removes the machine and uses the data to make counterfeit cards, which he can use, along with the PINs, to steal money.

    A remedy for this would be a protocol in which your name is not encoded on the card; instead, the welcome screen displays your name by consulting the bank's records. If it doesn't display your name, you call the hotline number on the card and report the machine instead of entering your PIN.
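
    A minimal sketch of that handshake in Python (the token and name are made up, and a real deployment would involve rather more); the point is that only a machine with a live link to the bank can greet you correctly:

    Code:
    # The card carries only an account token, never the holder's name.
    # A genuine ATM proves itself by looking the name up at the bank and
    # displaying it before it ever asks for a PIN; a fake ATM cannot.
    BANK_RECORDS = {"card-token-4929": "J. Q. CARDHOLDER"}  # bank side only

    def atm_welcome(card_token: str) -> str | None:
        """Return the holder's name, or None if the bank can't be reached."""
        return BANK_RECORDS.get(card_token)

    name = atm_welcome("card-token-4929")
    if name is None:
        print("No name shown: do not enter your PIN; call the hotline on the card.")
    else:
        print(f"Welcome, {name}. Please enter your PIN.")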

    Similarly, to prevent machines from fraudulently pretending to be human, we could use ringback protocols in the reverse direction. The original use of ringback protocols was for a computer user connecting via modem from an offsite location: the user would call the switch's number and complete an authentication dialog, and the switch would then ring the user back at a number on file, whereupon the user would repeat the authentication dialog, this time with the switch having placed the outgoing call.
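
    Roughly, in Python, with a made-up HMAC challenge-response standing in for whatever authentication dialog the switch actually used:

    Code:
    import hashlib
    import hmac
    import os

    # Switch-side records: shared secret and the number on file to ring back.
    # Both values are invented for illustration.
    REGISTERED = {"alice": (b"shared-secret", "+1-555-0100")}

    def dialog(username: str, answer_fn) -> bool:
        """One authentication dialog: challenge with a nonce, verify the HMAC reply."""
        secret, _ = REGISTERED[username]
        nonce = os.urandom(16)
        expected = hmac.new(secret, nonce, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, answer_fn(nonce))

    def ringback_session(username: str, answer_fn) -> bool:
        # Leg 1: the user dials in and must pass the dialog.
        if not dialog(username, answer_fn):
            return False
        # Leg 2: the switch hangs up, dials the number on file,
        # and the user must pass the same dialog on the switch's call.
        _, number = REGISTERED[username]
        print(f"switch rings back {number} ...")
        return dialog(username, answer_fn)

    # The legitimate user knows the shared secret, so both legs succeed:
    secret = REGISTERED["alice"][0]
    user_answer = lambda nonce: hmac.new(secret, nonce, hashlib.sha256).hexdigest()
    print(ringback_session("alice", user_answer))  # True

    The ringback to a number on file is what defeats a mere impersonator: passing the dialog once isn't enough if the switch's return call never reaches you.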

    A reverse example: if I get a call, ostensibly from a person who says he's an FBI field agent in Chicago, I can ask him which field office's published number I can call him back at, from which the switchboard operator there can route the callback to him. That's two-way authentication: the FBI knows it's me because the agent called my listed number, and I know it's the FBI because I called back and reached the same agent, who acknowledged having just called me.

    That might seem a bit much, but before you give out your credit card number over the phone, you should at least be able to ensure that the caller is an authorized representative of the entity you're trying to do business with. With bots able to pass convincingly as human, and the attendant ramp-up in the possible number of phishing calls, we'll have to do something about it. Establishing two-way protocols is a reasonable stop-gap measure; devising human-presentable Turing tests that are very hard for machines to pass and easy for humans may be something we soon have to get used to.
     
  16. Jul 17, 2018 #15
    I see no reason that this will pose any particular new risk. Nobody uses voice recognition for security anymore, and the threat of an AI being used to swindle someone is no different from someone doing an impression. How many of you have gotten calls from "Microsoft" or "the IRS" where the guy on the phone was named "Dave" but had a heavy New Delhi accent? We'll adjust our behavior as these things progress, and legitimate companies will likely go out of their way to make sure you know who you're talking to: "Hello, I am Cortana. To speak to a live person, please press 0; otherwise, tell me what you're calling about."
     
  17. Jul 18, 2018 #16

    CWatters

    Science Advisor
    Homework Helper
    Gold Member

    See #9.
     
  18. Jul 20, 2018 #17
    Dave?
    Dave's not here.
     
  19. Jul 26, 2018 #18

    russ_watters


    Staff: Mentor

    I know I'm late to this party, and it may not matter much to the discussion in the thread, but this issue really has very little to do with AI; that seems like just a way to scare people into reading the article. It's essentially high-quality, seamless audio editing and/or synthesis: a big step up from Ferris Bueller's implementation and a slightly smaller step up from Ethan Hunt's. Yes, it opens up new avenues for fraud by forgery, but that's not an AI issue (you just don't have to make a fool of yourself trying to get your mark to say "passport" anymore). On the upside, maybe it will make my GPS audio directions less irritating to listen to.

    Just a pet peeve of mine, this constant use of "AI" as a slur.
     
    Last edited: Jul 26, 2018