Legitimate concerns that give the layperson pause about AI adoption keep piling up. Among them are high-profile court cases, including The New York Times suing OpenAI for copyright infringement.
Now Elon Musk is suing OpenAI as well, alleging that the organization abandoned its founding mission: it was supposed to be a non-profit, but has since behaved as a for-profit entity at the behest of Microsoft.
Oddly, Bill Gates even interviewed OpenAI's CEO, Sam Altman, on what looks like a sort of podcast.
That interview came across as more of a damage-control piece following the recent upheaval of the team over at OpenAI.
The Musk vs. OpenAI lawsuit is something of a blockbuster, because it may undermine the assumptions that brought ChatGPT into existence. The primary assumption in question concerns the training data for the ChatGPT LLM (large language model). OpenAI asserts that information freely available online should be fair game for training its model to speak more like a human.
This training data, in turn, is distilled into the massive statistical model that ChatGPT runs on. Being a language model, it essentially uses statistical analysis to reproduce sentences and phrases that closely match the patterns in the training data. Most people who have tried ChatGPT agree that it is pretty good at some tasks, like writing essays or blog posts (not this one!).
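To make that "statistical analysis" idea concrete, here is a deliberately tiny sketch in Python. It is a bigram model, orders of magnitude simpler than the neural network behind ChatGPT (and not how OpenAI actually trains anything), but it shows the same core pattern: count which words follow which in the training data, then generate new text by sampling from those counts.

```python
import random
from collections import defaultdict

# Toy stand-in for a web-scale corpus; a real model trains on billions of words.
training_text = "the cat sat on the mat and the cat slept on the rug"

# Count which words follow which. Duplicates in each list make frequent
# successors proportionally more likely to be sampled later.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# Generate text by repeatedly sampling a plausible next word.
word = "the"
output = [word]
for _ in range(6):
    candidates = follows.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))  # e.g. "the cat slept on the mat and"
```

Notice that the output only ever recombines patterns seen in the training text. That is exactly why the provenance of that text matters: swap the lookup table for billions of learned parameters, and you have, in spirit, the issue at the heart of the lawsuit.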
If Elon wins this lawsuit, it could, so to speak, un-bake the cake for ChatGPT. The court may conclude that our social media profiles, photos, and blog posts, for example, are not fair game for absorption into an artificial intelligence training data set.
The court could order OpenAI to destroy that training data and begin again. That might be the just outcome here, because models such as ChatGPT were built with no filter on data generated by the average Joe; nor were they built with security in mind.
Speaking of security, the flaws in current AI models across the board require urgent attention.
Cybersecurity engineers will be in high demand in the coming years to urgently fix and patch issues, including viruses propagated by AI. That cybersecurity landscape will resemble the 2000s in terms of how easily destructive virus activity could spread.