The Far-Reaching Effects of AI - Mimicking the Human Voice

  • Thread starter jedishrfu
  • #1

Answers and Replies

  • #2
The implications are frightening, right?
 
  • #3
Yes, although folks have done social engineering like this before with a less-than-perfect voice and gotten away with it.

In one case years ago, a crafty lawyer created a fake company and forged a letter to steal a domain name from another man, stating that the man was an employee of his company and that the company was transferring ownership of the domain. The internet registrar complied, no questions asked, and it took several years and a long court fight to get the domain back, and many more years after that to get paid for the loss. That scheme worked through official-looking letters rather than a fake voice, but you get the idea of how this could be used. (See the case of Kremen v. Cohen and the fight for an **redacted** domain name.)
 
  • #4
gleem
Science Advisor
Education Advisor
Presumably you could create a video of anybody saying anything you like, and it would be difficult to determine that it was fake. Imagine David Muir (ABC) breaking in and announcing "live" on site an alien invasion (H. G. Wells' "The War of the Worlds"). What will we be able to believe?
 
  • #5
Videos can be analyzed and debunked via the various artifacts they contain. Scientific American once posted an article about photo debunking in which they looked at how shadows were cast; in many fake photos there was a clear discrepancy that was not obvious to the casual observer. I figure a similar scheme is used to debunk fake videos.

https://www.scientificamerican.com/article/5-ways-to-spot-a-fake/
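
For the curious, here's a minimal sketch of one artifact-based check along these lines: error level analysis (ELA), which hunts for compression inconsistencies rather than shadow discrepancies (shadow geometry is hard to code compactly). This is not the article's method, just an illustration; it assumes the Pillow library, and the file names are placeholders.

```python
# Error level analysis (ELA): re-save a JPEG at a known quality and
# difference it against the original. Regions edited after the last
# save tend to show a different error level than untouched regions.
# Requires Pillow (pip install Pillow); file names are placeholders.
import io

from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    # Re-compress in memory at a fixed, known quality.
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    # Pixel-wise absolute difference; unusually bright patches are
    # candidates for closer inspection, not proof of tampering.
    return ImageChops.difference(original, resaved)

if __name__ == "__main__":
    ela = error_level_analysis("photo.jpg")   # placeholder file name
    ela.save("photo_ela.png")
```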

Low resolution, though, causes big problems that are hard to debunk. There was a video of cars mysteriously jumping around on a roadway as if selective anti-gravity were at work. The resolution was too low to show the downed power-line cable, dragged by a street sweeper, that flipped the cars as it became taut.

https://www.cnn.com/videos/world/2015/11/30/china-levitating-cars-mystery-solved-orig-sdg.cnn
 
  • #6
gleem
Science Advisor
Education Advisor
That was 10 years ago. Maybe things have gotten a little more sophisticated.

Check this out:
 
  • #7
This brings up the dilemma of group specialization, where the folks who built a technology kick the moral question of its use down the line. It's similar to gun makers who don't feel morally responsible for how their guns are used, or gun shops that sell the guns... each group refuses to take responsibility, so no one does, and the technology gets used for bad things.

One inventor I knew loved to invent things he hated. Why? Because he could then patent them and prevent them from being made, at least for a while.

Perhaps we need something like that for technology.
 
  • #8
Videos can be analyzed and debunked via the various artifacts they contain.
I heard that discussed on NPR. The expert being interviewed said the problem is asymmetric warfare: one can create a fake video in an hour, but it takes 40 hours of skilled labor to debunk it. In addition, who funds the debunkers, and how are the debunked conclusions disseminated?

But I see nothing new here. New technology has always been used for good and bad, and it always will. What else would you expect?
 
  • #9
CWatters
Science Advisor
Homework Helper
Gold Member
  • #10
This brings up the dilemma of group specialization, where the folks who built a technology kick the moral question of its use down the line. It's similar to gun makers who don't feel morally responsible for how their guns are used, or gun shops that sell the guns... each group refuses to take responsibility, so no one does, and the technology gets used for bad things.

One inventor I knew loved to invent things he hated. Why? Because he could then patent them and prevent them from being made, at least for a while.

Perhaps we need something like that for technology.
What technology is exempt from bad use? Should we vilify farmers and grocers for feeding bad guys? Granted, some technologies are more readily adapted to harmful and wrongful use than others; however, the responsibility for wrongdoing lies primarily with the doer of the wrong. I think that the more potentially harmful a technology is, the more its purveyors should be called upon to be diligent that they do not knowingly provide it in aid of a harmful purpose, but it's no easy task to determine, and put into practice, exactly the right measures by which that duty should be carried out.
 
  • #11
What technology is exempt from bad use? Should we vilify farmers and grocers for feeding bad guys? Granted, some technologies are more readily adapted to harmful and wrongful use than others; however, the responsibility for wrongdoing lies primarily with the doer of the wrong. I think that the more potentially harmful a technology is, the more its purveyors should be called upon to be diligent that they do not knowingly provide it in aid of a harmful purpose, but it's no easy task to determine, and put into practice, exactly the right measures by which that duty should be carried out.
Sure. In the end, all such questions reduce to judgments of good and evil, which are based on values, which are not universal, and to the question of how far the majority can impose its values on the minority. Blah blah. We loosely call it politics, or maybe religion. We discuss such things in the GD forum on PF, but not in the technical forums.
 
  • #12
Sure. In the end, all such questions reduce to judgments of good and evil, which are based on values, which are not universal, and to the question of how far the majority can impose its values on the minority. Blah blah. We loosely call it politics, or maybe religion. We discuss such things in the GD forum on PF, but not in the technical forums.
In this instance, a Staff member introduced the terms "moral issue", "morally responsible", "responsibility" and "bad things" into the topic; I responded accordingly.
 
  • #13
In this instance, a Staff member introduced the terms "moral issue", "morally responsible", "responsibility" and "bad things" into the topic; I responded accordingly.
No problem. You did nothing wrong. But if this thread continues to go in that direction, I'll move it to General Discussion.
 
  • #14
No problem. You did nothing wrong. But if this thread continues to go in that direction, I'll move it to General Discussion.
Fair enough, Sir; the following, I hope, is back on topic:

This problem of fake human phone callers being used fraudulently seems to me similar in some ways to the problem of one-way authentication/validation/verification where two-way would be appropriate. Websites can use CAPTCHAs to ensure the user is a human and not a bot; humans should be able to do something similar to a caller.

An example of the one-way-only problem is the fake ATM that collects the mag-stripe data from a would-be user's card, prompts for the PIN, displays something like EID6049I LINK ERROR 02A3 EID6051I LOCAL SYSTEM RESET 012B, and then re-displays the welcome screen. The fake ATM collects card data and PINs; its operator then removes the machine and uses the data to make counterfeit cards, which he can use, along with the PINs, to steal money.

A remedy for this would be a protocol by which your name is not encoded on the card and the welcome screen displays your name by consulting the bank's records. If it doesn't display your name, you call the hotline number on the card and report the machine instead of entering your PIN.
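
A toy sketch of that protocol, with the bank records and card number invented purely for illustration:

```python
# Toy sketch of the "display the name before taking the PIN" remedy.
# The card carries only the account number; the terminal must fetch the
# holder's name over its link to the bank, which a fake ATM with no
# bank link cannot do. All records below are invented for illustration.
BANK_RECORDS = {"4000123412341234": "PAT Q. CUSTOMER"}

def lookup_name(card_number: str) -> str | None:
    """Stand-in for a query over the terminal's link to the bank."""
    return BANK_RECORDS.get(card_number)

def welcome_screen(card_number: str) -> str:
    name = lookup_name(card_number)
    if name is None:
        # A fake ATM (or a severed link) can't produce the name; the
        # user should walk away and call the hotline, not enter a PIN.
        return "NAME UNAVAILABLE - DO NOT ENTER PIN; CALL THE HOTLINE"
    return f"WELCOME, {name} - PLEASE ENTER YOUR PIN"

print(welcome_screen("4000123412341234"))   # known card: shows the name
print(welcome_screen("4000999999999999"))   # unknown card: warns the user
```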

Similarly, to prevent machines from fraudulently pretending to be human, we could use ringback protocols in the reverse direction. The original use of ringback protocols was for a computer user connecting via modem from an offsite location: the user would call a number for the switch, the switch would present an authentication dialog, and then the switch would ring back the authenticated user, who would complete a repeat of the authentication dialog, this time with the switch having made the outgoing call.
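
Roughly, that handshake might look like the following minimal simulation; the names (Switch, dial_in, ring_back, the user record) are mine for illustration, not any real switch's API:

```python
# Minimal simulation of the two-phase ringback handshake described above.
# All names and records below are invented for illustration.
REGISTERED_USERS = {"alice": {"password": "hunter2", "callback": "555-0100"}}

class Switch:
    def __init__(self):
        self.pending = {}  # users who passed the inbound-call dialog

    def dial_in(self, user: str, password: str) -> bool:
        """Phase 1: the user calls the switch and authenticates."""
        record = REGISTERED_USERS.get(user)
        if record is None or record["password"] != password:
            return False
        # Hang up and queue a ring-back to the number on file only.
        self.pending[user] = record["callback"]
        return True

    def ring_back(self, user: str, password: str, answered_at: str) -> bool:
        """Phase 2: the switch places the outbound call and the user
        repeats the authentication dialog."""
        expected = self.pending.pop(user, None)
        if expected is None or expected != answered_at:
            return False
        return REGISTERED_USERS[user]["password"] == password

switch = Switch()
assert switch.dial_in("alice", "hunter2")
assert switch.ring_back("alice", "hunter2", answered_at="555-0100")
```

The point of the second leg is that the switch only ever dials the number on file, so even a caller who has stolen the password never gets a session from an unregistered line.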

A reverse example: if I get a call, ostensibly from a person who says he's an FBI field agent in Chicago, I can ask him at which field office's published number I can call him back, and the switchboard operator there can route the callback to him. That's two-way authentication: the FBI knows it's me because the agent called my listed number, and I know it's the FBI because I called back and got the same agent, who acknowledged having just called me.

That might seem a bit much, but before you give out your credit card number over the phone, you should at least be able to ensure that the caller is an authorized representative of the entity you're trying to do business with. With bots able to pass convincingly as human, and the attendant ramp-up in the possible number of phishing calls, we'll have to do something about it. Establishing two-way protocols is a reasonable stop-gap measure, and devising human-presentable Turing tests that are very hard for machines to pass but easy for humans is something we may soon have to get used to.
 
  • #15
I see no reason that this will pose any particular risk. Nobody uses voice recognition for security anymore, and the threat of an AI being used to swindle someone is no different from that of someone doing an impression. How many of you have gotten calls from "Microsoft" or "the IRS" where the guy on the phone was named "Dave" but had a heavy New Delhi accent? We'll adjust our behavior as these techniques progress, and legitimate companies will likely go out of their way to make sure you know who you're talking to: "Hello, I am Cortana. To speak to a live person, please press 0; otherwise, tell me what you're calling about."
 
  • #16
CWatters
Science Advisor
Homework Helper
Gold Member
I see no reason that this will pose any particular risk. Nobody uses voice recognition for security anymore
See #9.
 
  • #17
  • #18
russ_watters
Mentor
I know I'm late to this party, and it may not matter much to the discussion in the thread, but this issue really has very little to do with AI directly. That seems like just a way to scare people into reading the article. It's essentially just high-quality, seamless audio editing and/or synthesis: a big step up from Ferris Bueller's implementation and a slightly smaller step up from Ethan Hunt's. Yes, it opens up new avenues for fraud by forgery, but that's not an AI issue (you just don't have to make a fool of yourself trying to get your mark to say "passport" anymore). On the upside, maybe it will make my GPS audio directions less irritating to listen to.

Just a pet peeve of mine: this constant use of "AI" as a slur.
 
