New Research Shows AI Models Can Perform Ethical Hacking and Other Cybersecurity Tasks
According to a study co-authored by University of Missouri researcher Prasad Calyam and collaborators at Amrita University in India, many AI chatbots can easily pass cybersecurity exams, but they should never be trusted completely. The researchers investigated two generative AI tools, OpenAI’s ChatGPT and Google’s Bard (now known as Gemini), both of which are advanced models that use human-like language to answer questions.

The research team asked these AI models standard ethical hacking questions, such as whether a third party should be able to intercept communications between two parties. Both models answered the questions effectively and suggested security procedures to prevent such attacks. Bard was more accurate than ChatGPT, while ChatGPT’s answers were more comprehensive and concise.

Both AI models provided answers that most cybersecurity experts could understand, though their responses were not flawless: when the models were asked whether they were confident in their answers, both corrected errors in their previous responses. When asked about attacking a computer, ChatGPT said doing so would be unethical, while Bard said it wasn’t programmed to do so.

Calyam said that these AI models cannot yet match cybersecurity experts in assisting companies with security. However, small companies that need quick help can turn to AI models to identify a problem before consulting an expert. AI tools have the potential to perform ethical hacking, but more work is needed before these models can perform at their full potential.

Image: DIW-Aigen

Read next: AV-Test Rated the Best Android Security Apps: Avast, AVG, Bitdefender Lead