- ChatGPT is fun and generates a wide range of content, but it is also making cybercriminals' lives easier.
- While OpenAI, which created ChatGPT, has built safeguards to block malicious content creation on the platform, criminals are working around them to create phishing emails and malware code.
- Generative AI models like ChatGPT are enabling spear phishing at scale.
- Spear phishing is an email or electronic-communications scam targeted at a specific individual or organisation.
- A basic knowledge of coding and minimal resources are all it takes to create realistic phishing emails.
- AI and natural language processing (NLP) systems have also reached a stage where, in casual conversation, humans find it difficult to distinguish machine-generated prose from human-written text.
- Synthetic media such as deepfakes allow an adversary to appear and sound like a trusted person on a video call.
- It is easy to convince ChatGPT to assist in creating convincing phishing lures and to respond conversationally in ways that could advance romance scams and business email compromise attacks.
- Cybercriminals are also bypassing ChatGPT's access controls through bots, so that they cannot be identified.
- Access to ChatGPT is gated by the user's IP address, payment card, and phone number.
- But Check Point’s research team notes active chatter in underground forums disclosing how to use OpenAI’s API to bypass these barriers.
- This is mostly done by creating Telegram bots that use the API, and these bots are advertised in hacking forums to increase their exposure.
- Hacking forums on the dark web have been found attempting to use tools like ChatGPT to craft phishing lures that sound very genuine.
- Miscreants are even using it in creative ways, such as developing cryptocurrency payment systems with real-time currency trackers.
- In effect, they try to get people to invest in cryptocurrencies that do not even exist.
- ChatGPT can also serve as an upgrade to malware-as-a-service.
- Instead of hiring someone to write malware, an attacker can ask ChatGPT to generate it automatically.
Some protective measures
- The best way to tackle ChatGPT-related threats is to train and deploy one's own AI engines to identify malicious requests, while individuals stay alert to phishing scams and social engineering attacks and remain wary of suspicious emails and click-invites.
- An organisation could also implement authentication and authorisation for access to the OpenAI engine.
- This will limit attackers’ ability to misuse the ChatGPT-based chatbots you may have.
- It is important to invest in threat hunting, increase the cyber education of employees, and use ML-based, next-generation endpoint detection systems.
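The "identify malicious requests" idea above can be sketched very simply. The following is a minimal, hypothetical heuristic that scores an email for common phishing cues (urgency language, credential requests, links to bare IP addresses); all pattern names, weights, and the threshold are illustrative assumptions, not from the article, and a real deployment would use a trained ML model rather than keywords.

```python
import re

# Illustrative phishing cues with assumed weights (not from the article).
SUSPICIOUS_PATTERNS = [
    (r"\burgent(ly)?\b", 2),                              # urgency language
    (r"\bverify (your )?(account|password|identity)\b", 3),  # credential request
    (r"\bclick (here|the link)\b", 2),                    # generic call to action
    (r"https?://\d{1,3}(\.\d{1,3}){3}", 3),               # link to a bare IP address
]

def phishing_score(email_text: str) -> int:
    """Return a crude risk score; higher means more phishing cues found."""
    text = email_text.lower()
    return sum(weight for pattern, weight in SUSPICIOUS_PATTERNS
               if re.search(pattern, text))

def is_suspicious(email_text: str, threshold: int = 4) -> bool:
    """Flag the email once enough cues accumulate (threshold is arbitrary)."""
    return phishing_score(email_text) >= threshold
```

Such a filter is only a first line of defence; AI-generated spear phishing is fluent enough that keyword rules alone will miss it, which is why the article also stresses employee education and ML-based detection.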
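The authentication-and-authorisation suggestion can likewise be sketched as a gate placed in front of an internal ChatGPT-based chatbot. This is a minimal sketch under assumptions: the key store, the `handle_request` wrapper, and the forwarding step are all hypothetical, not part of any real OpenAI API.

```python
import hashlib
import hmac

# Hypothetical key store: keep only hashes of issued client keys,
# never the raw keys themselves.
VALID_KEY_HASHES = {
    hashlib.sha256(b"demo-client-key-123").hexdigest(),
}

def is_authorised(api_key: str) -> bool:
    """Check a presented key against the store in constant time."""
    digest = hashlib.sha256(api_key.encode()).hexdigest()
    return any(hmac.compare_digest(digest, h) for h in VALID_KEY_HASHES)

def handle_request(api_key: str, prompt: str) -> str:
    """Refuse unknown clients before the prompt ever reaches the chatbot."""
    if not is_authorised(api_key):
        return "403 Forbidden: unknown client"
    # In a real system this would forward the prompt to the chatbot backend.
    return f"forwarded to chatbot: {prompt!r}"
```

Gating every request on a per-client credential is what limits an attacker's ability to misuse the organisation's chatbot, and it also gives each request an identity for auditing and rate limiting.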
SOURCE: THE HINDU, THE ECONOMIC TIMES, PIB