I work in the cybersecurity industry; after my EOD days in the mil I did coolguy signals & EW stuff, and still do it as a hobby, but I work more on the data science, data engineering & cybersecurity side now. More private sector nowadays, but I think it "qualifies" me a bit to talk to this stupidity.
Firstly, the Executive Order that diaperboy signed is about "Ethics in AI"; it was the outcome of that meeting the White House had with NVIDIA, Meta (Facebook), Alphabet (Google), Microsoft, OpenAI, and a bunch of other nerds. Elon wasn't invited lol. So it's a feel-good thing about equity and access, but it also lays out certain stipulations. Not too dissimilar to putting cryptography, directed energy, EW (well, EP/ESM), and other non-kinetic stuff on ITAR, except this comes with a big helping of socialist feel-good talking points. Notably, their major complaint is that the training data that various AI (and subsequent ML & DNN) products are trained on, for whatever their purpose is, is biased or racist and whatnot.
There is some truth to that, but none of these systems are perfect, and none of them do the same thing. Natural Language Processing (NLP) is a family of machine learning techniques that help machines understand speech, translate, cross-map dialects, and recognize writing; it's used for everything from "Translate This Page" on Google or LinkedIn to helping doctors annotate patient files, recognizing handwriting, and whatnot. Another part of NLP is the conversational element behind these "chatbots," which have been around in various forms for several decades; that is also where we get into Large Language Models (LLMs) and the other broad-domain corpora these NLP systems can be trained on.
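To make the NLP bit concrete, here's a minimal sketch using Hugging Face's transformers pipelines; the model checkpoints and example strings are my picks for illustration, not anything from the EO or any particular product:

```python
# Minimal NLP sketch using Hugging Face pipelines.
# Assumes: pip install transformers torch
from transformers import pipeline

# Translation, i.e. the "Translate This Page" use case in miniature.
translator = pipeline("translation_en_to_fr", model="t5-small")
print(translator("The executive order covers ethics in AI.")[0]["translation_text"])

# Named-entity recognition, i.e. the "annotate patient files" use case.
ner = pipeline("ner", aggregation_strategy="simple")
print(ner("Dr. Smith prescribed 20mg of lisinopril at Walter Reed."))
```

Same family of techniques, wildly different jobs, which is why "AI is biased" as a blanket complaint doesn't mean much.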
When it comes to "cyberwarfare" or Cyber Electromagnetic Activities (CEMA) within the broader Electromagnetic Spectrum Operations (ESO, once called Information Operations [IO]), the application of AI is not new either. There have been plenty of AI- and ML-backed cybersecurity tools that do everything from anomaly detection and automated recon and discovery of digital targets to crafting and obfuscating payloads, and more (a toy anomaly-detection sketch is below, after the list). The chatbots and LLMs, by virtue of:
1) Being easy to use - thank you NLP!
2) Being trained on a ton of "interesting" data like cybersecurity!
make it easy to craft your own hacking tools all along the "cyber kill chain" -- be it "hey ChatGPT, write me a tool like masscan that finds Industrial Control Systems" or "hey ChatGPT, write me an obfuscated encryption program in Python or Golang that uses XSalsa20 and runs on macOS Ventura" (a sketch of that second one is right below). So, going back to the ethics piece: it's also the Government, which has no real clue what it's doing, trying to make sure plebs cannot weaponize these things.
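That XSalsa20 ask isn't exotic, either. Minus the obfuscation part, it's a few lines with PyNaCl (the libsodium bindings), whose SecretBox is XSalsa20-Poly1305. A rough sketch, not the exact thing a chatbot would spit out:

```python
# XSalsa20-Poly1305 encryption via PyNaCl (pip install pynacl).
# SecretBox wraps libsodium's secretbox: XSalsa20 stream cipher
# plus a Poly1305 MAC. No obfuscation here, just the crypto core.
import nacl.secret
import nacl.utils

key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)  # 32 random bytes
box = nacl.secret.SecretBox(key)

ciphertext = box.encrypt(b"attack at dawn")  # random nonce is prepended
plaintext = box.decrypt(ciphertext)
assert plaintext == b"attack at dawn"
```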
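And on the anomaly-detection claim from earlier, here's roughly the shape of what those ML-backed tools do, as a toy scikit-learn IsolationForest over made-up netflow-style features (the features and numbers are all my own illustrative assumptions, not any specific product):

```python
# Toy ML-based network anomaly detection with an Isolation Forest.
# All data here is synthetic; real tools ingest netflow, DNS logs,
# EDR telemetry, etc.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Fake "normal" traffic: [bytes_out, packet_count, unique_dst_ports]
normal = rng.normal(loc=[5_000, 40, 3], scale=[1_000, 10, 1], size=(1_000, 3))

# A couple of beacon-y outliers: big exfil volume, lots of ports touched
weird = np.array([[90_000, 900, 60], [70_000, 500, 45]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

print(model.predict(weird))       # -1 means anomaly; expect [-1 -1]
print(model.predict(normal[:5]))  # mostly 1 (normal)
```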
That said, they already have. There are plenty of LLMs besides the ChatGPT-* series: Amazon has Titan, Meta has LLaMA, and there are dozens more. Since gathering the training data and the compute power are what make these things hard to operationalize, the tech has already proliferated; you can find versions and wrappers all over that get around the protections the AI companies are forced to roll out to stop fucking morons from asking how to synthesize high explosives or SuperAIDS.
That also leads to the other flaws they want to get ahead of: the so-called "hallucinations" caused by inaccurate training data, or by maliciously inaccurate training data, which is called "model poisoning." The fact is the Gov & Mil are big consumers of this stuff; check out the DIU's solicitations or anything coming from the IC asking about AI-driven systems. DARPA has its own AI models, notably in the C-UAS space, around taking over or meaconing the DJI drones used in Ukraine. All that to say: now that adversaries know these companies are scraping huge amounts of data, it's not too hard to set up a "trap" and get a model jacked up. It takes A LOT OF BAD DATA to poison an LLM, but it can happen, at least academically.
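On the "A LOT OF BAD DATA" point, here's a toy label-flipping demo you can run to see the dose-response of poisoning on a simple classifier. The numbers are illustrative only, not from any paper; poisoning a web-scale LLM corpus is the same idea at a vastly larger scale:

```python
# Toy data-poisoning demo: flip an increasing fraction of training
# labels and watch a simple classifier's accuracy degrade.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for frac in (0.0, 0.1, 0.3, 0.5):
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(frac * len(y_tr)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # attacker flips these labels
    acc = LogisticRegression(max_iter=1_000).fit(X_tr, y_poisoned).score(X_te, y_te)
    print(f"poisoned {frac:.0%} of labels -> test accuracy {acc:.2f}")
```

Small fractions barely move the needle; it takes a big chunk of flipped labels to wreck it, which tracks with the academic results.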
So, that brings me back to the whole "muh AI cyberwarfare" thing. It's not new, and it can be done, but it's not some Skynet fully-automated thing, yet. AutoGPT, some botnets using LangChain, and the internet plugins for ChatGPT have shown limited autonomy (the core loop is sketched below), but cyber isn't as simple as "lol chatgpt plz hack my gf facebook" -- there is still a lot of ISR-T that goes into it. You cannot exactly hack a system if it's not internet-facing, or if you cannot get a human in the loop to do something stupid like click a link.
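For what it's worth, the "limited autonomy" in those tools is less magic than it sounds: the core of an AutoGPT-style agent is a plan-act-observe loop like the skeleton below. llm_complete() and run_tool() are hypothetical placeholders I made up, not real AutoGPT or LangChain APIs:

```python
# Skeleton of an AutoGPT-style autonomous loop. llm_complete() and
# run_tool() are hypothetical stand-ins for an LLM API call and a
# tool dispatcher (shell, browser, scanner, etc.).
import json

def llm_complete(prompt: str) -> str:
    """Placeholder for a real LLM API call."""
    raise NotImplementedError

def run_tool(name: str, args: dict) -> str:
    """Placeholder for actually executing a tool and returning output."""
    raise NotImplementedError

def agent_loop(goal: str, max_steps: int = 10) -> None:
    history = []
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\nHistory: {json.dumps(history)}\n"
            'Reply as JSON: {"tool": ..., "args": ..., "done": bool}'
        )
        action = json.loads(llm_complete(prompt))
        if action.get("done"):
            break  # the model decided the goal is met
        observation = run_tool(action["tool"], action["args"])
        history.append({"action": action, "observation": observation})
```

The model only decides; the harness does. All the hard parts of an intrusion (access, recon, the ISR-T I mentioned) live in run_tool, which is why "fully automated cyberwarfare" is still mostly hype.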
And to top this all off: IT DOESN'T APPLY TO THE GOV. They can use it to build SuperAIDS and SuperC4 and SuperMetasploit... not you, dirty fucking pleb, and also "AI is racist" reeeeee wahhhhh.
We are in dangerous times, though. Now that the normies know about AI, all sorts of stupid shit will continue to happen.