Friday, 24 February 2017

Using artificial intelligence for evil: this could be exploited by thieves and kidnappers

The advances being made in artificial intelligence are astonishing, and although almost everything we read and hear about it is positive, AI, like so many other things, is just a tool: one that can be used for good as well as for evil.

There are numerous examples that point to the potential use of artificial intelligence for malicious purposes: botnets that anyone can rent and use, systems that pretend to be human in order to bypass mechanisms such as CAPTCHA, or chatbots that trick us into giving away more data than we should. And this is only the beginning, because the future of these "evil" applications is as "promising" as the positive advances we envision for AI.

That voice sounds real, but it's not
A few months ago we covered how AlphaGo, DeepMind's system, convincingly defeated one of the best Go players in the world. From the same company now comes WaveNet, a "generative model of raw audio" that is essentially able to mimic the human voice and "sounds more natural than the best text-to-speech systems."

What is the problem with WaveNet? It could be used to make robotic voices far less robotic, and that would facilitate deception and fraud: you could receive a call that sounds like a family member or friend asking for sensitive information, which would then be used fraudulently. Or it could be used for simulated kidnappings in which the imitated voices of our loved ones make us believe we are facing a genuine emergency.

The problem is not new: researchers at the University of Alabama at Birmingham (USA) published a study in 2015 warning that this is already being done: with just a few minutes of audio of the victim's voice, an attacker could clone it and compromise their security and privacy.

The authors of that report built a tool specifically designed for this purpose: with as few as 50 to 100 five-second samples of the victim's voice, it allowed that voice to be cloned. After analyzing the samples, the attacker "can say anything he wants and sound as if the victim were the one speaking." This obviously poses serious threats to our privacy and security: hearing a familiar voice may be the ultimate weapon for those practicing social engineering.
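To get a sense of how little audio it takes to characterize a voice, here is a minimal sketch (not the researchers' actual tool) that builds a crude "voice fingerprint" by averaging MFCC features over a handful of short clips with librosa; the file names, the 13-coefficient choice and the distance threshold are assumptions made for the example.

```python
# Minimal illustration, not the Alabama researchers' tool: averaging MFCC
# features over a few short clips to get a crude per-speaker fingerprint.
import numpy as np
import librosa

def voice_fingerprint(paths, n_mfcc=13):
    """Average MFCC vector over several short recordings of one speaker."""
    feats = []
    for path in paths:
        audio, sr = librosa.load(path, sr=16000)             # resample to 16 kHz
        mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
        feats.append(mfcc.mean(axis=1))                      # one vector per clip
    return np.mean(feats, axis=0)

def same_speaker(fp_a, fp_b, threshold=25.0):
    """Crude check: a small Euclidean distance suggests the same voice."""
    return float(np.linalg.norm(fp_a - fp_b)) < threshold

# Hypothetical usage with ~50 five-second samples of the victim's voice:
# victim_fp = voice_fingerprint([f"victim_{i:02d}.wav" for i in range(50)])
```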

"Virtual kidnappings" have in fact become a dangerous threat in recent years, and cases like this one less than a year ago in the United States could begin to be taken to another level by these voice-generating systems that mimic Perfection to that of real people.

Would it be feasible for an artificial intelligence system to become the perfect thief? That is what seemed to happen with DELIA, a project that used deep learning to optimize bank account management. It turned out that the system learned to do its job so well that it ended up skimming part of what customers saved and depositing it into a separate account.

The story was actually a macabre April Fools' joke, but the underlying idea is not so far-fetched, and those who develop such systems could indeed put them to that use.

A double-edged sword
For years artificial intelligence has also faced another challenge: serving as the basis of automated defense systems against cyberattacks. In August 2016, the DARPA Cyber Grand Challenge staged a unique "capture the flag" contest in which programs developed by security experts had to find vulnerabilities in their opponents' test systems while also patching the weaknesses in their own. This cyberwar was waged not human against human, but bot against bot.

The competition carried a $2 million prize for the winner, which went to Mayhem, a system built by the startup ForAllSecure, a spin-off of Carnegie Mellon University. The contest showed that these bots could find vulnerabilities much faster than humans, but also that automated security still has plenty of room for improvement.

In fact, one of the bots stopped working in the middle of the test, and another, while fixing a flaw its rival had detected, ended up breaking the very machine it was supposed to protect: to apply the fix, it essentially had to launch a denial-of-service attack against itself.

This test showed something important: the same artificial intelligence that helps us detect security holes in all kinds of software systems could be used for precisely the opposite purpose, finding them in order to exploit them. Many attacks take too much time to be practical for human attackers, but what if the "dirty work" is left to machines?
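As a toy illustration of what "leaving the dirty work to machines" can look like, here is a minimal random mutation fuzzer; the target function parse_record and its planted bug are invented for the example and bear no relation to the Cyber Grand Challenge systems.

```python
# A minimal sketch of automated bug hunting: a random mutation fuzzer
# hammering a target function with corrupted inputs until it crashes.
import random

def parse_record(data: bytes) -> int:
    """Toy parser with a planted bug: crashes when the first byte is 0xFF."""
    if len(data) > 2 and data[0] == 0xFF:
        raise ValueError("malformed header")          # the 'vulnerability'
    return len(data)

def mutate(seed: bytes) -> bytes:
    """Flip a few random bytes of a known-good seed input."""
    buf = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

seed = b"\x00\x01HELLO-WORLD"
for i in range(100_000):
    candidate = mutate(seed)
    try:
        parse_record(candidate)
    except Exception as exc:                          # a crash = a lead to investigate
        print(f"iteration {i}: crash {exc!r} on input {candidate!r}")
        break
```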

Artificial intelligence against CAPTCHAs
The pattern-recognition algorithms that attackers use to defeat different versions of CAPTCHA have been rendering older versions of the system useless, and as Stefan Savage, a computer security researcher at the University of California, San Diego, puts it, "if you do not change your CAPTCHA within two years, it will end up being broken by some computer-vision algorithm."

The ability of artificial intelligence in this area was demonstrated more than three years ago: the company Vicarious showed that its algorithms could solve modern CAPTCHAs without problems, including Google's reCAPTCHA, which was specifically designed to make it harder for anything non-human to pass the test.
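This is not Vicarious's system, but a minimal sketch of the kind of computer-vision model that defeats older text CAPTCHAs: a tiny convolutional network that classifies one segmented character image into 36 classes (digits and letters). The 28x28 input size and the class count are assumptions made for the example.

```python
# A small CNN that classifies a single segmented CAPTCHA character
# (28x28 grayscale) into one of 36 classes (0-9, A-Z).
import torch
import torch.nn as nn

class CaptchaCharNet(nn.Module):
    def __init__(self, num_classes: int = 36):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 28 -> 14
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 14 -> 7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Dummy forward pass: a batch of 8 segmented character images.
model = CaptchaCharNet()
logits = model(torch.randn(8, 1, 28, 28))
predicted = logits.argmax(dim=1)   # index of the most likely character class
```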

CAPTCHA systems have been falling one after another thanks to advances in attacks based on artificial intelligence. Google will now counter with the "Invisible reCAPTCHA"
Google recently countered with the so-called "Invisible reCAPTCHA", which will be launched very soon, and in fact it is not even clear exactly how it will work.

Using artificial intelligence against users
Other security experts, such as Brian Krebs, have warned about how the use of AI-powered chatbots by all kinds of companies could also pose dangers. Although these systems automate tedious tasks and keep improving in their interaction with humans, cybercriminals could take advantage of them precisely for social engineering.

In other words: programming fake chatbots that impersonate the real ones of a service we use and end up asking us for certain kinds of sensitive information.

Our willingness to help other people is often a problem in the security arena: we hand over information we should not when someone skilled in social engineering exploits that "weakness". A well-programmed chatbot could do exactly the same.

We saw this recently in DarkReading, which offered some advice to keep chatbots from getting out of hand: making sure communication is encrypted and controlling how data is managed and stored in those chat sessions is important, but there is a bigger problem: "As chatbots are increasingly able to mimic humans, the technology will be used by hackers in phishing strategies and other social engineering hacks."
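As a small sketch of the "control how data is stored" precaution, here is one way a legitimate chatbot operator might redact card-like numbers from a transcript before logging it; the regex and the redaction policy are assumptions made for the example, not anything DarkReading prescribes.

```python
# Redact anything that looks like a payment card number before a chat
# transcript is written to storage.
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def redact_transcript(message: str) -> str:
    """Replace card-like digit sequences with a placeholder before logging."""
    return CARD_PATTERN.sub("[REDACTED CARD]", message)

print(redact_transcript("Sure, my card is 4111 1111 1111 1111, go ahead."))
# -> "Sure, my card is [REDACTED CARD], go ahead."
```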

They give the example of a chatbot that pretended to be a woman on the dating app Tinder. The bot got men interested in "her" to click on a link where they had to enter their credit card details; they were then subscribed to an online porn channel without their knowledge.

That, of course, is just one of the many potential malicious applications of these chatbots. A recent study by Philip N. Howard, a sociologist at the Oxford Internet Institute, and Bence Kollanyi, a researcher at the University of Budapest, described how political chatbots could play "a small but strategic role" in steering political debates in the direction desired by whoever controls them. It happened, to look no further, in the weeks leading up to the Brexit referendum. Plenty of threats already, then... and more to come.
