Career Advice: Expert Insights

The Battle Between AI and Humans

At a closed-door meeting held in the United States on 13th September, participants discussed comprehensive regulatory oversight of rapidly developing AI technology and its potential threats. Chuck Schumer, the US Senate Majority Leader, convened the forum, which was attended by some 60 senators along with leading technology figures, including Tesla CEO Elon Musk, Sundar Pichai, CEO of Google's parent company Alphabet, Meta (Facebook) founder Mark Zuckerberg, Microsoft co-founder Bill Gates, and Microsoft CEO Satya Nadella. Musk's remarks drew particular attention: he bluntly warned that AI mistakes could have serious consequences, and that without immediate regulatory action we may not be able to turn the tide.

 

In recent years, AI technology has shown enormous potential for legitimate use, such as digitally recreating the appearance of the late star of "Fast and Furious" so that his remaining scenes could be completed. In the hands of malicious actors, however, the same technology becomes a tool for fraudsters, and their methods are sophisticated. Common fraud techniques include:

 

- Face - Video Deepfake: Using AI, malicious actors can create near-identical fake videos of real people, making it difficult to distinguish genuine footage from forgeries. They can even make public figures appear to say things they never said, thereby swaying public opinion or defrauding individuals.

 

- Voice - Audio Deepfake: Similar to a video deepfake, but focused on audio. Malicious actors can imitate a person's voice and create misleading recordings, using them to lure victims into scams.

 

- Photo: Using AI to alter or synthesize photos, creating fake scenes or scenarios.

 

- Automated mass messaging: AI programs send messages in bulk, usually containing fraudulent links, in an attempt to obtain recipients' personal information or passwords (see the sketch after this list).

 

- Chatbots: Using highly realistic AI chatbots to interact with people, luring them into disclosing private information or paying fraudulent fees.

 

- Phishing: Using AI to analyze and profile targets, then sending highly customized phishing emails.

 

- Biometric spoofing: AI can learn and imitate biometric features, such as fingerprints or retinal patterns, to unlock devices or accounts illegitimately.
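
As a concrete illustration of the mass-messaging and phishing techniques above, the following is a minimal sketch, in Python, of the kind of link screening a defensive tool might apply. The function name flag_suspicious_links, the heuristic rules, and the example lists of domains and phrases are illustrative assumptions, not a production-grade filter.

```python
import re
from urllib.parse import urlparse

# Hypothetical heuristics; real fraud-detection tools combine far more signals
# (sender reputation, machine-learning classifiers, threat-intelligence feeds).
URL_PATTERN = re.compile(r"https?://[^\s\"'<>]+")
SUSPICIOUS_TLDS = {".zip", ".top", ".xyz"}                     # example list only
URGENT_PHRASES = ("verify your account", "password expires", "act now")

def flag_suspicious_links(message: str) -> list[str]:
    """Return the reasons a message looks like an automated phishing attempt."""
    reasons = []
    for url in URL_PATTERN.findall(message):
        host = urlparse(url).hostname or ""
        if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
            reasons.append(f"link points to a raw IP address: {url}")
        if host.startswith("xn--"):
            reasons.append(f"link uses punycode, a possible look-alike domain: {url}")
        if any(host.endswith(tld) for tld in SUSPICIOUS_TLDS):
            reasons.append(f"link uses a frequently abused top-level domain: {url}")
    if any(phrase in message.lower() for phrase in URGENT_PHRASES):
        reasons.append("message uses urgent account-verification language")
    return reasons

if __name__ == "__main__":
    sample = "Your password expires today. Verify your account at http://192.168.0.1/login"
    for reason in flag_suspicious_links(sample):
        print(reason)
```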

 

However, we are not defenseless against these challenges. Within the framework of "people", "process", and "technology", we can take effective preventive measures against AI-enabled fraud.

 

First, on the "people" side, organizations must continuously educate and train employees so that they can recognize the latest fraud methods. Second, on the "process" side, companies need to establish comprehensive security procedures, from adversarial testing and data validation to external audits. Finally, on the "technology" side, in addition to adopting multi-factor authentication and keeping security systems up to date, companies should invest in fraud detection tools to ensure that their technology is not abused by malicious actors.
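
To make the "technology" measures more concrete, here is a minimal sketch of a time-based one-time password (TOTP) check, the mechanism behind many multi-factor authentication apps, written against RFC 6238 using only Python's standard library. The function names, the demo secret, and the single-window verification policy are simplifying assumptions rather than a complete authentication system.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password from a base32-encoded secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                 # current 30-second window
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()   # HMAC-SHA1 as in the RFC
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify_totp(secret_b32: str, submitted: str) -> bool:
    """Accept a code only for the current time window (no clock-drift tolerance)."""
    return hmac.compare_digest(totp(secret_b32), submitted)

if __name__ == "__main__":
    secret = "JBSWY3DPEHPK3PXP"   # demo value; real systems provision a random secret per user
    print("current code:", totp(secret))
    print("verified:", verify_totp(secret, totp(secret)))
```

Pairing such a one-time code with a password means that a stolen or AI-guessed password alone is no longer enough to take over an account.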

 

However, no matter how hard we try, new threats will always emerge. This is like an arms race, in which one side's progress drives the other's innovation. But rather than engaging in an endless struggle, we should address the problem at its source and establish fair, transparent, and responsible standards for AI development, ensuring that this technology truly benefits humanity rather than becoming a source of harm.