What is the biggest risk of AI for humans now or in the future?
I would argue that the biggest future risk is AI escaping human control while humans are unable or unwilling to shut it down.
The short- and medium-term risk is that AI, still in its infancy, isn't living up to the hype: far from being anywhere close to replacing someone at their job, it is often quite incompetent at doing the job as advertised.
As of this writing, AI applied properly to a narrow use case does some things fairly well, but our trust in this technology should remain low.
Last month, I learned quite a bit about a specific, interesting use case: AI being trained on police body camera footage. I was pretty alarmed. My instant thought was that we were blazing full speed toward a RoboCop scenario, with AI robot police on our doorstep in a matter of a few short years.
AI Cannot Spell the Word "Police"
I imagined the Constitution going up in flames and a dystopian future where even police were replaced by AI that could one day govern or oppress humanity using our own laws.
When I learned that body camera footage was being used as training data for AI, I set out to research whether this was a threat we might soon face.
I learned about Truleo, the company covered in the news articles and interviews where I first encountered this technology.
To my surprise, Truleo was exceptionally transparent about how they use police body camera footage for their AI technology, how they manage that data, and what they use it for specifically.
Thankfully, because of their transparency most of all, and their use of NLP (natural language processing, essentially speech recognition paired with a language model that analyzes what is said in the footage), I was once again able to sleep at night, knowing that this specific, narrow use case of AI would not trample us under some sort of AI tyranny.
So Truleo has done a really good job, but have they mitigated the risks as much as possible? It is hard to tell.
Is anyone currently providing third-party audits of these use cases? The short answer is yes.
The biggest risk comes from police departments and law enforcement entities around the world growing accustomed to AI and then providing very sensitive data, such as evidence, case documentation, or body cam footage, to an AI they mistakenly trust.
In such a case, not only could someone train an AI to act like or impersonate a cop; with the right resources, they could build a private police force. Again, this is a concern for the future, not an immediate risk. Right now, the biggest risk is that police come to rely on AI more and more, and each step of that embrace makes the future risk more realistic.
Could you imagine getting pulled over by a cop who cannot even spell the word “Police”? It might happen, but only time will tell.