A new study has uncovered frightening details about AI, suggesting that a prediction made by Microsoft executives and Elon Musk may now be coming true.
As it turns out, AI can outsmart cybersecurity experts with proxy data reports and other forms of misinformation. Yes, you read that right: AI is now capable of deception — not just in a chess game, but in real-life scenarios with frightening ramifications. The latest research reveals that AI has devised a clever method of hoodwinking cybersecurity experts.
The misinformation and proxy reports generated by AI can massively hamper the work of cybersecurity experts, according to the new research. The implications of this data manipulation could be far-reaching; it all depends on intent. If AI starts serving the intentions of the wrong people, it could pose serious threats to the entire field of cybersecurity.
Cybersecurity experts do not only come into the picture when there is a major hack. Their day job is detecting, solving, and predicting anomalies that can impact the entire computer ecosystem. They are always hunting for leads that can expose flaws in ubiquitous computer networks.
It is safe to say that there has been no shortage of cyberattacks in recent years. Furthermore, malicious software has become smarter and more effective at infiltrating networks and exfiltrating data. In most cases, the motive behind breaching cybersecurity networks has been nothing more than money: the hackers demand a ransom in exchange for the data they have illegally acquired. Two of the most prominent recent cases are JBS Meats and Colonial Pipeline. JBS Meats had to pay a ransom of $11 million, which was never recovered, and Colonial Pipeline is still seeking the FBI's help in getting its money and assets back.
People in the United States are also no strangers to the term "misinformation," especially after the recent revelations about social media. The role of misinformation in elections is also well established, as the Cambridge Analytica scandal demonstrated. We live in a cyberworld where it is hard to tell truth from falsehood, and misinformation is spreading rampantly as vested interests work to shape the public narrative.
Now, if the AI in question falls into such hands, the consequences could be catastrophic.
For the full details of the new research, read the latest report published by Wired. The report explains that researchers examined the use of AI in spreading misinformation and made a frightening discovery: AI could even be used to mislead researchers and professionals in the cybersecurity industry.
The Wired report reveals that researchers at Georgetown University used the language model GPT-3 in their tests. GPT-3 generated fake reports and proxy data that look almost legitimate.
Are we heading toward an AI singularity? That can't be said for certain, but the evidence suggests we are in dire need of AI regulation.