I’ve seen an article like this before, but I don’t remember if it was this exact one.
It’s an interesting one because it claims that ChatGPT won’t allow users to create malicious code, but that other models built on the GPT framework can.
I’ll let others decide what they think of this one. Do you think it’s going to be possible for GPT models to do this long term?
The article is “ChatGPT creates mutating malware that evades detection by EDR.”
I’m interested in your thoughts.
Discover more from Jared's Technology podcast network