As with all of my blog posts, the views and opinions expressed within do not necessarily represent the views or opinions of my employer.
This post will be part of an ongoing series about what I’m learning on my AI Governance implementation journey. I will be focusing on some of the challenges associated with safely implementing AI and accommodating AI use cases within organizations, including topics I feel are underrepresented in public dialogue.
AI and the CIA Triad
First, let’s establish that the traditional components of the triad absolutely do apply to AI and emerging technology. I continue to encounter in-person and online discussions with cybersecurity professionals in which the narrative around emerging technology drifts away from where we should still be focused: the foundational principles of cybersecurity. Considering the current levels of fear, uncertainty, and doubt now associated with generative AI and where we find ourselves on the hype cycle (the trough of disillusionment), I think a conversation about how we govern and secure generative AI at a foundational level is definitely warranted. How the traditional CIA triad applies to AI use cases is a much longer conversation that I will explore in subsequent posts; for now, I will focus on a principle that I think is supplementary to the triad and applies to both generative AI and machine learning use cases.
AI = CIA + H(arm)
Beyond the application of our traditional understanding of cybersecurity risk, we have to consider the harm that AI could cause. Defining and treating the potential for harm caused by AI use cases can be an uncomfortable activity for cybersecurity professionals to find ourselves performing within the context of our daily responsibilities. Throughout most of our careers, our understanding of harm from attackers and the actions of malicious individuals has been prescriptive and reasonably well defined. For example, we are well aware that poor encryption approaches and bad key management practices can lead to loss of data confidentiality and/or integrity. It has been drilled into us that poorly implemented physical controls can result in reduced availability. We know that failure to protect our exposed endpoints could impact all three principles of the triad. The discomfort we now face is the extensible nature of the harm that generative AI can cause. “Harm” is a principle that can be interpreted differently depending on how the AI system is being used, for what purpose, and how we define and apply our organizational context. Additionally, harm may be interpreted differently by different individuals based upon their life experiences, their unique perspectives, and even their demographic or socioeconomic status. What one of us believes is harmful may not be considered harmful by someone else. As an example, AI can produce outputs which test the boundaries of taste, ethics, and bias; these ideals are defined by society, are not static, and can vary between cultures. Crafting your organization’s definition of harm is an important step and shouldn’t be overlooked as you approach your governance strategy.
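To make the “CIA + H” idea concrete, here is a minimal sketch in Python of how an AI use-case risk record might carry a harm dimension alongside the triad. The class name, field names, and the 0–3 severity scale are my own illustrative assumptions, not an established standard or anything prescribed elsewhere in this post.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: each dimension rated 0 (negligible) to 3 (severe).
@dataclass
class AIUseCaseRisk:
    """Risk profile for one AI use case: the CIA triad plus a contextual harm rating."""
    name: str
    confidentiality: int
    integrity: int
    availability: int
    harm: int  # contextual: depends on mission, users, and societal exposure
    harm_notes: list[str] = field(default_factory=list)

    def overall(self) -> int:
        # Simple "worst dimension wins" roll-up; a real program may weight dimensions differently.
        return max(self.confidentiality, self.integrity, self.availability, self.harm)

# Hypothetical example: a public-facing banking chatbot where harm dominates the profile.
chatbot = AIUseCaseRisk(
    name="customer-service-chatbot",
    confidentiality=2,
    integrity=1,
    availability=1,
    harm=3,
    harm_notes=["discriminatory guidance", "regulated financial advice", "offensive output"],
)
print(chatbot.overall())  # -> 3
```

The point is not the scoring math; it is that harm gets a seat in the same record the triad already occupies, so it is weighed in every assessment rather than bolted on afterward.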
Define What “Harm” Means to Your Organization
As I explored this topic, I ran into a few very good sources of guidance. One of these is the CSET blog post “Understanding AI Harms: An Overview,” posted on August 11, 2023 and written by Heather Frase and Owen Daniels. The authors provide a simple but effective approach to categorizing harm into its primary components, vectors, and exposures. As I note above, the organization’s mission, vision, and goals should also be considered when evaluating harm. For example, if our theoretical organizational mission is to provide the best banking services on the planet, then it would be significantly harmful if our customers’ financial standing were impacted by any AI use case we seek to implement. We should also consider our organization’s primary revenue model and the regulations it must comply with; it would be harmful to our organization if our AI use case were making improper decisions using regulated data or information. Finally, and as I note above, we must consider what impact our AI use case has on society as a whole. Public AI chatbots are rapidly replacing traditional agent-based customer service models, and as such are in a unique position to do harm to society. Examples could include inadvertent discrimination, unethical guidance issued by chatbots, or even overtly racist or violent responses. Remember when a chatbot influenced a teenager to take their own life? Or when Microsoft’s chatbot produced highly offensive content in response to user prompts? These are clearly harmful outcomes, and though they represent extreme examples, they are excellent case studies in understanding this technology’s power to harm.
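The components/vectors/exposures framing lends itself to a simple worksheet. Below is a minimal sketch of how harm findings for the banking scenario above could be recorded and triaged; the field names and examples are my own interpretation for illustration, not CSET’s published taxonomy.

```python
from dataclasses import dataclass

@dataclass
class HarmFinding:
    component: str   # what is harmed, e.g. a customer's financial standing
    vector: str      # how the AI use case causes the harm
    exposure: str    # who bears the harm
    regulated: bool  # does the finding touch regulated data or decisions?

findings = [
    HarmFinding("customer financial standing", "improper decision using regulated data",
                "retail banking customers", regulated=True),
    HarmFinding("public trust and societal impact", "discriminatory or offensive chatbot output",
                "general public", regulated=False),
]

# Anything regulated or broadly exposed gets escalated for governance review.
escalate = [f for f in findings if f.regulated or f.exposure == "general public"]
print(len(escalate))  # -> 2
```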
There are some excellent online resources available to help further define and evaluate the harm posed by the use of AI. UNESCO’s ten core principles for AI adoption and use are a very good place to start.
Conclusion
Harm is a foundational principle associated with generative AI and should be considered alongside the traditional CIA triad to establish defense in depth for organizational AI use cases.
Generative Artificial Intelligence Disclosure
Generative AI was not used to directly facilitate the writing of this post.
Generative AI was used only to verify the factual components of this blog post. The ChatGPT prompt used was:
Can you verify the factual components of this blog post and highlight inaccuracies based upon the citations?
As a result, non-material corrections were made to the cited examples of harmful AI.