AI Risk Management takes the stage at Davos


It comes as no surprise that the Big 4 are casting lots with the globalists. The World Economic Forum in Davos is lit up with buzzwords: "governance", "generative AI", and "AI risk".

If the World Economic Forum wants to maintain or gain control over AI, I'm afraid they have already lost that battle. Why? Open source AI models have already proliferated widely.

The best that Klaus Schwab and his WEF cronies can hope for is influence over regulatory bodies and mechanisms, which will always be slowly closing a very big barn door well after the horses are long gone. Those regulatory bodies only apply to corporations that declare themselves to exist within the laws and expectations of polite society.

Klaus Schwab is a Muppet

Aside from Klaus Schwab being a total muppet (yes, as in marionette), what we have at Davos are scores of CEOs, elites, and bureaucrats congratulating themselves while, ironically, living within the confines of a delusion. What delusion?

The delusion that their plans will continue on their present course. AI will not go their way, and it will not be managed in the manner they are contemplating. If the AI landscape were the Wild West, Klaus and company would be a sheriff with a six-shooter up against drones armed with plasma torches. Good luck with that, WEF. Skimmed from a Davos invite letter:

"This week, over 2,800 leaders are convening under the theme of 'Rebuilding Trust' to contribute to progress on four themes: 1) Achieving Security and Cooperation in a Fractured World. 2) Creating Growth and Jobs for a New Era. 3) Artificial Intelligence as a Driving Force for the Economy and Society. 4) A Long-Term Strategy for Climate, Nature, and Energy."

This year's Davos meeting is themed around "Rebuilding Trust". It comes at a time when trust in AI is on shaky ground, depending on the model and the use case. Some AI use cases work extraordinarily well; others fail badly.

It's rich that the world's most imposing bloc of globalists, arguably dishonest and possibly worse, declare not only that they are capable of rebuilding trust, but that they know how. Remember that the WEF was started by Schwab with a $6,000 loan as an excuse to splurge and hobnob. Now they have talked themselves in so many circles that they have concluded that transhumanism is ethical, to the exclusion of the unworthy whom they view as part of the "problem" of overpopulation. Bill Gates, anyone?

When a Luddite wields the power of generative AI, some interesting results arise, and when trolls or hackers wield it, the results are downright alarming. Remember when a New Zealand supermarket's recipe bot recommended bleach rice, and ways to make chlorine gas? No doubt the engineers were quick to bolt on Retrieval Augmented Generation (RAG) to correct those deadly AI recommendations.
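To make the RAG fix concrete, here is a minimal sketch of how such a guardrail might work: refuse queries that match a hazard blocklist, and otherwise ground the model's answer in a vetted corpus. Everything here (the terms, the snippets, the function names) is illustrative, not the supermarket's actual implementation.

```python
# Hypothetical sketch of a retrieval-augmented guardrail for a recipe bot.
# The blocklist, corpus, and retrieval scheme are all made up for illustration.

HAZARDOUS_TERMS = {"bleach", "ammonia", "chlorine gas", "drain cleaner"}

VETTED_SNIPPETS = [
    "Fried rice: cook rice, stir-fry with egg, soy sauce, and vegetables.",
    "Lemonade: mix lemon juice, water, and sugar; chill before serving.",
]

def retrieve(query: str) -> list[str]:
    """Naive keyword-overlap retrieval over the vetted corpus."""
    terms = set(query.lower().split())
    return [s for s in VETTED_SNIPPETS if terms & set(s.lower().split())]

def guarded_answer(query: str) -> str:
    """Refuse hazardous queries; otherwise ground the answer in vetted text."""
    q = query.lower()
    if any(term in q for term in HAZARDOUS_TERMS):
        return "REFUSED: ingredients flagged as unsafe for consumption."
    context = retrieve(query)
    if not context:
        return "NO ANSWER: nothing in the vetted corpus covers this."
    # A real system would pass `context` to the LLM as grounding material
    # instead of returning the snippet directly.
    return "Grounded suggestion: " + context[0]
```

The point of the sketch: the model never answers from its own imagination alone; it either refuses or answers from curated material. That is the whole trick the engineers were presumably scrambling to add after the fact.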

It is ludicrous that WEF leaders postulate that trust is theirs to rebuild in the first place. Because AI systems are typically low-trust, our initial training for AI Risk Engineers is titled "Trusted Systems", where we examine the correct posture required of teams charged with oversight of AI. Unlike the WEF, we do not assume that trust existed in the first place. The presumption that the WEF could have common people eating bugs and offsetting their carbon footprints demonstrates hubris that is off the charts.

No, Davos, the biggest problem is not that the average citizen doesn't trust the WEF, or its attendees, or its partners, or its guests. The trust was never the WEF's in the first place. Nor is the biggest threat of AI the warnings and complaints about bias.

The biggest threat from AI is to human safety. Period. For now, it is not the threat from autonomous weapons or killer robots. Right now the biggest AI threat is incompetent AI that behaves like a high school dropout who flunked math on the re-test, is constantly stoned, is highly susceptible to suggestion, and cheerfully provides B.S. answers.

In a series of LinkedIn posts last week, PwC cited WEF material as a primer on the AI risk talking points at Davos this year. PwC is all in on pleasing its globalist masters at the WEF, so naturally it appointed Seth Bergeson, an AI and Machine Learning Fellow at the WEF, as Senior Manager of AI and "Responsible AI".

First, a note about "Responsible AI". Legacy Big Tech firms, Microsoft included, have been evangelizing this search-engine term, which is backed by hollow platitudes.

There are some red flags to watch out for in the world of “AI Governance” (another hollow buzzword).

First, these organizations largely exist in a corporate echo chamber, using and re-using the same AI buzzwords, thin on analysis until one analyst or another does some deep digging to prop up or structure one such buzzword. That digging is often done through effective concern trolling. For example: "AI models present evidence of bias as the highest risk." lol. Bias is another term out of the AI echo chamber typhoon.

Accenture is all in on the AI Risk buzzword without offering meaningful solutions beyond "Hire Accenture; we will solve your AI Risks."

“Huh? How so?”

“You have to hire us to find out.”

If you are interested in hearing Accenture's CEO Julie Sweet provide a non-answer on how to solve AI risks, I offer this video for your enjoyment:

Unlike Julie, we provide real answers on how to conduct an AI Risk Assessment and how to square its results with straightforward metrics.

More than metrics, we provide takeaways, actions, leadership and training to improve your AI Adoption.
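As a flavor of what "straightforward metrics" can look like, here is a minimal sketch of a likelihood-times-impact risk register, ranked so review starts with the worst exposure. The risks, scales, and scores below are invented for the example; they are not our assessment methodology or anyone's actual ratings.

```python
# Illustrative AI risk register: score each risk as likelihood x impact,
# then rank highest first. Scales and entries are hypothetical.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (near certain)
    impact: int      # 1 (negligible) .. 5 (severe harm to human safety)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def rank_risks(risks: list[Risk]) -> list[Risk]:
    """Highest score first, so oversight attention goes to the worst exposure."""
    return sorted(risks, key=lambda r: r.score, reverse=True)

register = [
    Risk("Confident hallucination in user-facing answers", 5, 4),
    Risk("Prompt injection via untrusted input", 4, 4),
    Risk("Training-data bias in screening decisions", 3, 3),
]

for r in rank_risks(register):
    print(f"{r.score:>2}  {r.name}")
```

Even a toy table like this gives a team something concrete to review, re-score, and show auditors, which is more than a buzzword does.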

Importantly, this will show regulators, and parties to potential AI negligence lawsuits, that your organization has done its due diligence on security and risk management, and that you have been a good steward of the very powerful technology AI offers.