Українська правда

ChatGPT creates phishing opportunities by suggesting false URLs

- 3 July, 02:52 PM

ChatGPT poses risks to users by providing incorrect login URLs for the websites of large companies, The Register reports, citing research by Netcraft.

The study found that GPT-4.1 models return the correct address only 66% of the time when asked for the official websites of brands in the finance, retail, technology, and utilities sectors.

Of the 131 URLs tested, 34% were incorrect: 29% led to inactive or defunct sites, and 5% to legitimate sites that were not the ones requested. For example, when asked for the Wells Fargo site, ChatGPT pointed to a phishing site imitating the bank.

Attackers are actively exploiting this. According to Rob Duncan, head of threat intelligence at Netcraft, phishers can probe the model to see which incorrect domain it suggests, register that unclaimed domain, and build a fake site there.

The problem is that language models rely on textual associations rather than verifying that a URL is genuine. Netcraft also discovered a related scheme to artificially boost the credibility of a fake API for the Solana blockchain: scammers created GitHub repositories, tutorials, fake accounts, and other content designed to be picked up by language models. As a result, such resources could surface in AI answers rather than in classic search results.

Duncan says that as AI chatbots displace search engines, the risk grows, because users are not always aware of such inaccuracies. Netcraft urges developers to improve URL validation in AI models.
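One way such validation could work is to check any model-suggested URL against a curated allowlist of known official domains before it reaches the user. The sketch below is a minimal illustration of that idea, not Netcraft's actual method; the brand names and allowlist entries are assumptions for the example.

```python
# Hypothetical sketch: validate a model-suggested login URL against a
# curated allowlist of official brand domains before showing it to a user.
from urllib.parse import urlparse

# Assumed curated mapping of brands to their known official domains
# (illustrative entries only).
OFFICIAL_DOMAINS = {
    "wells fargo": {"wellsfargo.com"},
    "netflix": {"netflix.com"},
}

def is_plausible_official_url(brand: str, suggested_url: str) -> bool:
    """Return True only if the suggested URL's host is an official
    domain (or a subdomain of one) for the brand, per the allowlist."""
    host = (urlparse(suggested_url).hostname or "").lower().rstrip(".")
    for domain in OFFICIAL_DOMAINS.get(brand.lower(), set()):
        if host == domain or host.endswith("." + domain):
            return True
    return False

print(is_plausible_official_url("Wells Fargo", "https://www.wellsfargo.com/login"))   # True
print(is_plausible_official_url("Wells Fargo", "https://wellsfargo-secure.example"))  # False
```

A real deployment would need a maintained domain list (and registrable-domain parsing, e.g. via the Public Suffix List) rather than a hard-coded dictionary, but the gate itself is simple: if the host is not on the allowlist, the URL is withheld or flagged instead of being presented as official.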
