Recently, the artificial intelligence (AI) chatbot ChatGPT has taken the Internet by storm; the tool is reported to have already reached 100 million users. Many users say that, compared with traditional search engines, which only take input queries and return a list of highly relevant websites, ChatGPT lets them ask questions in the form of a person-to-person dialogue and responds directly. In addition, the generated answers are accurate and come with detailed explanations, saving the time otherwise spent sifting through search engine results.
While its developer, OpenAI, says the current free ChatGPT is a research preview, its popularity marks a major success in bringing AI and machine learning technology to the masses. Major information technology companies have recently announced plans to integrate AI into their online services: Microsoft will integrate AI technology more powerful than ChatGPT into its Bing search engine and Edge browser, while search engine giant Google will incorporate its conversational AI service Bard into its products in the coming weeks. Clearly, in the near future more AI technologies will be integrated into different online services and become more involved in our daily lives.
The wide applicability of AI is a double-edged sword. On the one hand, it makes our work and lives more efficient and convenient: some people have used ChatGPT to write programs and articles faster and with fewer errors than humans can, producing content that is rich and well-organised. On the other hand, criminals have used ChatGPT to create phishing email content and even write malware. Even though OpenAI has added safety mechanisms to prohibit the generation of malicious content, cyber criminals have developed evasion methods and sold them as Crime-as-a-Service (CaaS). The potential security issues therefore cannot be ignored.
HKCERT has identified attacks utilising AI and CaaS as one of the five key security risks for 2023, and has listed a variety of possible scenarios in which criminals use AI to attack. Beyond the examples above, these include AI-assisted fraud and the poisoning of AI models. In summary, the risks involving AI are as follows:
- Data privacy and confidentiality: AI requires vast amounts of data for training, which can include sensitive information such as personal details, financial information and medical records. This raises privacy concerns, as models may be able to retain and reproduce such sensitive information. One basic safeguard is to redact sensitive fields before any text is submitted to an AI service (see the first sketch after this list).
- Misinformation: AI may insert false or misleading information in order to produce coherent and fluent results. Users who rely on such information without verification risk being misled. In addition, an answer's accuracy is limited by the training data the model receives. For example, ChatGPT's training data only goes up to 2021, so when asked about the most recent World Cup champion, it answers France (the 2018 champion), not Argentina (the 2022 champion). In another case, a Google ad promoting its chatbot Bard was found to contain misinformation in an answer to a question about the “James Webb Space Telescope”.
- Bias issues: AI training data may come from the Internet, which can contain bias and discrimination, so models may produce responses that reproduce that bias and discrimination. In addition, criminals can deliberately train AI models on manipulated data to make them generate malicious responses. This training-time attack is known as data poisoning; the related term adversarial perturbation refers to crafting inputs that fool an already-trained model. A toy demonstration of poisoning appears in the second sketch after this list.
- Copyright issues: It is important to consider the rights of third parties, for example the owners of any copyrighted material that may appear in responses output by ChatGPT. Violating the rights of others, including the unauthorised use of their copyrighted material, may result in legal liability. Therefore, when using ChatGPT, consider and respect the intellectual property rights of its developers and others, and ensure that any use of ChatGPT responses complies with applicable laws and regulations.
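To make the data-privacy point concrete, below is a minimal sketch of client-side redaction before a prompt leaves the user's device. The two patterns and the `redact` helper are hypothetical illustrations, not part of any real product; production-grade PII detection needs far more than a pair of regular expressions.

```python
import re

# Hypothetical patterns for two common kinds of sensitive data;
# real PII detection requires much more robust tooling than this.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"),
    "HKID": re.compile(r"\b[A-Z]{1,2}\d{6}\(?[0-9A]\)?"),  # Hong Kong ID card number
}

def redact(text: str) -> str:
    """Replace sensitive substrings with placeholder tags before the text is sent anywhere."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Draft a letter for Chan Tai Man, HKID A123456(7), email ctm@example.com."
print(redact(prompt))
# Draft a letter for Chan Tai Man, HKID [HKID], email [EMAIL].
```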
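And to illustrate the poisoning risk mentioned under bias issues, here is a toy sketch (assuming Python with NumPy and scikit-learn, neither of which the article itself prescribes) showing how flipping a fraction of training labels degrades a simple classifier. It demonstrates the general principle only, not any real-world attack.

```python
# Toy demonstration of training-data poisoning via label flipping.
# Illustrative only: a synthetic dataset stands in for real training data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

def test_accuracy(train_labels):
    """Train on the given labels and report accuracy on clean held-out data."""
    model = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    return accuracy_score(y_test, model.predict(X_test))

print(f"clean labels:    accuracy = {test_accuracy(y_train):.3f}")

# The "attacker" silently flips 30% of the training labels before (re)training.
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]
print(f"poisoned labels: accuracy = {test_accuracy(poisoned):.3f}")
```

On a run like this, accuracy typically drops noticeably once a sizeable share of the labels is corrupted, which is exactly why the provenance and integrity of training data matter.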
In fact, AI is a neutral tool: there is no right or wrong in the tool itself. Tellingly, when ChatGPT was asked whether it poses security risks, its response ended with:
“However, it is important for users and developers to be aware of these security concerns and take appropriate measures to mitigate them.”
Source: https://www.hkcert.org/blog/verify-from-various-sources-to-ensure-security-when-searching-for-answers-with-ai