It is crucial to consider the role of AI actors in the adoption and implementation of AI systems.
So, who exactly are these AI actors? The NIST AI Risk Management Framework (AI RMF) describes AI actors as the people who play an active role in the AI lifecycle and have a stake in its successful, responsible adoption. This encompasses a wide range of professionals, including attorneys, AI engineers, data scientists, quality assurance experts, project managers, user acceptance testers, AI behavior testers, systems administrators, IT security specialists, and AI security personnel.
These AI actors play a pivotal role in establishing an AI governance team, which oversees the responsible implementation of AI throughout an organization's adoption journey. It is this governance team that ensures AI adoption begins and ends on the right note.
In the initial stages of the AI adoption process, it is essential for the legal team to conduct thorough research and analysis. This research helps them understand potential risks associated with third-party involvement, licensing constraints, and any legal pitfalls that may arise from adopting a particular AI model. By involving the legal team from the outset, organizations can proactively address legal concerns and ensure compliance throughout the AI adoption process.
However, the responsibility of AI actors does not end with the legal team. Throughout the entire AI adoption process, from engineering to the end-user experience, all AI actors must work together to create a culture that fosters fast innovation while maintaining high standards of quality and security.
This culture of responsible innovation requires constant collaboration and communication among the AI governance team and the various AI actors involved. It is crucial to establish clear guidelines and protocols for development, testing, and deployment to ensure that AI systems are reliable, safe, and ethical.
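One lightweight way to make such guidelines actionable is to record them as explicit release gates that the governance team reviews before deployment. The sketch below is purely illustrative; the gate names, owners, and structure are assumptions for the example, not part of any framework.

```python
# Illustrative sketch only: encoding a governance team's deployment checklist
# as data so it can be reviewed and versioned alongside the AI system.
# The gate names and sign-off roles here are hypothetical.
RELEASE_GATES = {
    "legal_review":        {"owner": "legal team",        "complete": True},
    "security_assessment": {"owner": "AI security",       "complete": True},
    "bias_evaluation":     {"owner": "AI behavior tester", "complete": False},
    "user_acceptance":     {"owner": "UAT lead",           "complete": False},
}

def ready_to_deploy(gates):
    """Deployment proceeds only when every gate has been signed off."""
    return all(gate["complete"] for gate in gates.values())

blocked = [name for name, gate in RELEASE_GATES.items() if not gate["complete"]]
print("Ready to deploy." if ready_to_deploy(RELEASE_GATES)
      else f"Blocked on: {', '.join(blocked)}")
```

The point is not the specific gates but the practice: when the protocol is written down in a reviewable form, every AI actor can see what "reliable, safe, and ethical" means for this particular release.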
Moreover, AI actors must be cognizant of the potential biases and ethical implications associated with AI systems. By actively considering the societal impact and ensuring fairness, transparency, and accountability in AI algorithms, AI actors can mitigate the risks of AI adoption and promote responsible use of this remarkable technology.
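To make "ensuring fairness" concrete, a governance team might require a simple quantitative check before sign-off. The following is a minimal sketch assuming binary predictions and a single protected attribute; real reviews would use richer metrics and dedicated tooling, and the threshold shown is a made-up example.

```python
# Illustrative sketch only: a demographic-parity gap check an AI governance
# team might require before approving a model. Threshold and data are
# hypothetical examples.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rates across groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical usage: escalate if the gap exceeds an agreed-upon threshold.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
if demographic_parity_gap(preds, groups) > 0.2:
    print("Fairness gap exceeds threshold; escalate to the AI governance team.")
```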
In conclusion, AI actors are the unsung heroes behind the scenes, working diligently to ensure the responsible implementation of AI systems. From legal experts to engineers and security specialists, every AI actor plays a critical role in establishing an AI governance team that oversees the entire AI adoption journey. By fostering a culture of responsible innovation, organizations can harness the power of AI while mitigating potential risks and ensuring a positive impact on society.
As of this writing, only Forest Global Training partners have developed best practices for assembling and managing an AI governance team.
At the IAPP PSR Conference in San Diego in October 2023, there was broad agreement that AI risk management is a board-level responsibility within an organization. Consequently, there is no such thing as thinking too big on this topic.