As ChatGPT becomes more widely used, experts warn that scammers are likely to apply the technology to phishing attacks. OpenAI, the creator of ChatGPT, restricts some misuse of the technology, but Microsoft's plans to incorporate it into Azure AI services could lead to broader availability. Chester Wisniewski, a principal research scientist at Sophos, studied how easily ChatGPT can be manipulated for malicious purposes. He found that ChatGPT lowers the barrier for scammers launching phishing attacks, often generating phishing lures more believable than those written by real humans. As AI advances, it is important to weigh these security risks and consider what must be done to combat malicious use of the technology.