Latest Posts

  • AI Governance: Consider the Harm

    As with all of my blog posts, the views and opinions expressed within do not explicitly represent the views or opinions of my employer.

    This post is part of an ongoing series about what I’m learning on my AI Governance implementation journey. I will focus on some of the challenges associated with safely implementing AI and accommodating AI use cases within organizations, including any topics I feel are underrepresented in public dialogue.

    AI and the CIA Triad

    First, let’s establish that the traditional components of the triad absolutely do apply to AI and emerging technology. I continue to encounter in-person and online discussions with cybersecurity professionals in which the narrative regarding emerging technology tends to drift away from where we should still be focused: the foundational principles of cybersecurity. Considering the current levels of fear, uncertainty, and doubt now associated with generative AI, and where we find ourselves on the hype cycle (the trough of disillusionment), I think a conversation about how we approach governing and securing generative AI at a foundational level is definitely warranted. Examples of how the traditional CIA triad applies to AI use cases are a much longer conversation that I will explore in subsequent posts; for now, I will focus on a principle which I think is supplementary to the triad and applies to both generative AI and machine learning use cases.

    AI = CIA + H(arm)

    Beyond the application of our traditional understanding of cybersecurity risk, we have to consider the harm that AI could cause. Defining and treating the potential for harm caused by AI use cases can be a somewhat uncomfortable activity for cybersecurity professionals to find ourselves performing within the context of our daily responsibilities. Throughout most of our careers, our understanding of harm from attackers and actions performed by malicious individuals has been prescriptive and pretty well defined. For example, we are well aware that poor encryption approaches and bad key management practices can lead to loss of data confidentiality and/or integrity. It has been drilled into us constantly that poorly implemented physical controls can result in reduced availability. We know that failure to protect our exposed endpoints could impact all three of the triad principles.

    The discomfort we now must face is the extensible nature of the harm that generative AI can cause. This is because “harm” is a principle that can be interpreted differently depending on how the AI system is being used, for what purpose, and how we define and apply our organizational context. Additionally, harm may be interpreted differently by different individuals based upon their life experiences, their unique perspectives, and even their demographic or socioeconomic status. What one of us believes is harmful may not be considered harmful by someone else. As an example, AI can produce outputs which test the boundaries of taste, ethics, and bias. These ideals are often defined by society; they are not always static in nature and can even vary between cultures. Crafting your organization’s definition of harm is an important step and shouldn’t be overlooked as you approach your governing strategy.

    Define What “Harm” Means to Your Organization

    As I explored this topic, I ran into a few very good sources of guidance. One of these is the CSET blog post “Understanding AI Harms: An Overview,” posted on August 11, 2023 and written by Heather Frase and Owen Daniels. The authors provide a very simple but effective approach to categorizing harm into its primary components, vectors, and exposures. As I note above, the organization’s mission, vision, and goals should also be considered when evaluating harm. As an example, if our theoretical organizational mission is to provide the best banking services on the planet, then it would be significantly harmful if our customers’ financial standing were impacted by any AI use case we seek to implement. We should also consider our organization’s primary revenue model and what regulations must be complied with; it would be harmful to our organization if our AI use case were making improper decisions using regulated data or information. Finally, and as I note above, we must consider what impact our AI use case has on society as a whole. Public AI chatbots are rapidly replacing traditional agent-based customer service models, and as such are in a unique position to do harm to society. Examples could include inadvertent discrimination, unethical guidance issued by chatbots, or even overtly racist or violent responses. Remember when a chatbot influenced a teenager to take their own life? How about when Microsoft’s chatbot hallucinated badly and produced highly offensive content in response to user prompts? Clearly these are harmful outcomes, and though they represent extreme examples, they are excellent case studies in understanding the power of this technology to harm.

    There are some excellent online resources available to help further define and evaluate the harm posed by the use of AI. The UNESCO ten core principles for AI adoption and use are a very good place to start.

    Conclusion

    Harm is a foundational principle associated with generative AI and should be considered alongside the traditional CIA triad to establish defense in-depth for organizational AI use cases.

    Generative Artificial Intelligence Disclosure

    Generative AI was not used to directly facilitate the writing of this post. Validate this statement.

    Generative AI was used only to verify the factual components of this blog post. The ChatGPT prompt used was:

    Can you verify the factual components of this blog post and highlight inaccuracies based upon the citations?

    As a result, non-material corrections were made to the examples cited supporting harmful AI.

  • Script and HTTP Header Integrity (Part 2 of 2)

    As with all of my blog posts, the views and opinions expressed within do not explicitly represent the views or opinions of my employer.

    This is the second part of a two-part post about script and HTTP header integrity and associated tamper detection/prevention controls, focused primarily upon content security policy (CSP), sub-resource integrity (SRI), and their equivalent controls. In this second part, we will discuss the relevant compliance frameworks and regulations that might be satisfied by implementing CSP and SRI controls. If you’re an assessor looking for guidance, a new cloud/software engineer looking for clarification, or a CISO seeking to better understand this technology, these two posts will hopefully be useful for you.

    If you require a recap on what CSP and SRI are, then head on over to part 1 for the overview discussion.

    Probably More Than You Thought

    When initially authoring these two blog posts, I was intimately aware that PCI DSS and the NIST CSF both had prescribed and/or suggested references associated with the use of CSP and SRI. As I began my research, I was surprised to find that implementing CSP and SRI could satisfy a great number of requirements, and not only within compliance frameworks. As a further surprise, even some of the more stringent and complex data privacy and security rules might be satisfied by this control (depending on your scope). I will highlight a few below, though I am sure there are many others I have not considered. If there’s one you’re aware of, let me know in the comments.

    PCI DSS Requirements 6.4.3 and 11.6.1

    As I note above, as a former QSA I was already very familiar with the PCI DSS requirements for implementation of script and page integrity controls under these two requirements. Let’s briefly review each requirement.

    • 6.4.3 – Payment Page Scripts

    This requirement expects all payment page scripts loaded in the consumer’s browser to be authorized and protected with integrity controls, and requires an inventory of all scripts on the payment page to be maintained with a business justification for each. The authorization and the inventory are important, but our focus is on the integrity controls. The “Good Practice” and “Examples” language of this requirement makes reference to both CSP and SRI as potential controls to implement.

    • 11.6.1 – Change and Tamper Mechanisms

    Similar to, but distinctly different from 6.4.3, this requirement sets the expectation that any changes or attempted changes to the security-impacting HTTP headers *and* the script contents of payment pages are alerted upon at least weekly or *periodically* as determined by the entity’s targeted risk analysis. I think the most important takeaway from this requirement is that the intent and focus are upon the detection capability, not a requirement to prevent. As a former QSA, many of my clients became quite hung up on a perceived prevention aspect of this requirement, when in fact there is none.
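    The detection-focused intent of 11.6.1 can be illustrated with a short sketch. Everything here is hypothetical (the monitored header list, the function names, the baseline capture); the point is simply that a known-good baseline is captured once, then compared periodically, with an alert on any difference:

```python
import hashlib

# Security-impacting response headers to monitor (illustrative, not exhaustive).
MONITORED_HEADERS = ("content-security-policy", "strict-transport-security")

def fingerprint(headers: dict, scripts: list) -> str:
    """Hash the monitored response headers plus every payment page script body."""
    h = hashlib.sha256()
    for name in MONITORED_HEADERS:
        h.update(name.encode())
        h.update(headers.get(name, "").encode())
    for body in scripts:
        h.update(hashlib.sha256(body).digest())
    return h.hexdigest()

def changed_since_baseline(baseline: str, headers: dict, scripts: list) -> bool:
    """True if the current page state differs from the approved baseline."""
    return fingerprint(headers, scripts) != baseline

# Capture a known-good baseline once, out of band.
baseline = fingerprint({"content-security-policy": "default-src 'self'"},
                       [b"console.log('checkout');"])
# Later, a periodic job re-fetches the page and alerts on any drift.
tampered = changed_since_baseline(baseline,
                                  {"content-security-policy": "default-src 'self'"},
                                  [b"console.log('evil');"])
print(tampered)  # True: the script body changed
```

    A real deployment would fetch the live page and its scripts on the cadence set by the targeted risk analysis, but the detect-and-alert shape is the same.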

    Here are some excellent resources that go into detail on this topic from the perspective of PCI DSS:

    NIST CSF: Category – Protect (PR.DS) Data Security

    NIST CSF has had a data security category since its inception, which included the PR.DS-6 subcategory. In CSF v1.1, that subcategory read as follows:

    PR.DS-6: Integrity checking mechanisms are used to verify software, firmware, and information integrity

    The leading three words of this subcategory would certainly lead any GRC professional, IT risk professional, or auditor to assert that the implementation of SRI and CSP would satisfy this requirement for web applications.

    In NIST CSF v2.0 this verbiage was changed, and the equivalent subcategory now reads as follows:

    PR.DS-02: The confidentiality, integrity, and availability of data-in-transit are protected

    There’s that word again: “integrity”. If the scope under evaluation includes web applications, I think a case could easily be made that CSP and SRI (and their equivalent controls) would at least meet the intent of this subcategory to protect the integrity of data rendered in an endpoint browser. I feel like many entities point to strong encryption as the only mechanism that would satisfy integrity requirements, but as I have (hopefully) demonstrated in part 1 of this blog post, that may not go quite far enough to satisfy an eager auditor.

    23 NYCRR 500 (NY DFS)

    This particular regulation does not specifically reference script and HTTP header integrity, but there certainly seems to be an opportunity to leverage its implementation to comply with some of the key sections. Here are a few of interest:

    • 500.5: Penetration Testing and Vulnerability Assessments (page 6)
      • Regular testing of cybersecurity defenses through penetration testing should almost certainly include publicly accessible web assets.
      • Should transactional capabilities be available on publicly facing web assets, these must be included in scope and consideration must be made for how the end user is protected from known threats.
    • 500.7: Access Controls and Identity Management (page 6)
      • Restricting the logical location from where scripts are able to load via CSP (or similar control) reduces the risk of unauthorized content execution.
      • If transactional capabilities are in scope, reducing unauthorized content execution almost certainly supports compliance with this section.

    Conclusion

    Consideration should be given to controls which ensure the integrity and authorization of scripts and mitigation against common attack vectors upon web pages capable of transactional use cases. I think you’ll find that your investment reduces more compliance burden than you expected.

    Generative Artificial Intelligence Disclosure

    Generative AI was used to directly facilitate the writing of this post to verify the applicability of script integrity and http header integrity to the risk frameworks that the author was already aware of. Validate this statement. Prompts used were:

    "What are the requirements for content security protocol and sub-resource integrity within 23 NY CRR 500?"
    "Where would script and http header integrity apply to the HIPAA Security Rule?"

    Generative AI was used to verify the factual components of this blog post. The prompt used was:

    Can you verify the factual components of this blog post and highlight inaccuracies based upon the citations?

  • Script and HTTP Header Integrity (Part 1 of 2)

    As with all of my blog posts, the views and opinions expressed within do not explicitly represent the views or opinions of my employer.

    This will be a two-part post about script and HTTP header integrity and associated tamper detection/prevention controls. I will focus the discussion on content security policy (CSP) and sub-resource integrity (SRI) and equivalent controls; what, who, why, and a little bit of how. We will discuss these two important security configuration settings first at a high level and then examine in just a bit more detail. If you’re an assessor looking for guidance, a new cloud/software engineer looking for clarification, or a CISO seeking to better understand this technology, these two posts will hopefully be useful for you.

    If you already understand what CSP and SRI are, then head on over to part 2 for the compliance discussion.

    What are the threats and risks?

    You’ve heard of them over and over by now. Injection of malicious scripts, malicious JSON injection, click-jacking, and good old-fashioned cross-site scripting (XSS) are only a few of the threat vectors applicable to public pages rendered by web applications with transactional capabilities. (In this post, when I speak of “transactional capabilities” I mean cases where the client browser would GET and then POST back information to the web service as an expected use case.) These vectors are most easily described as an attacker manipulating web pages with transactional capabilities within the user’s browser at the time the page is presented, injecting their own script or object in place of the original delivered by the web service. The risks associated with successfully executing such an attack are exactly what we’d expect: compromised user credentials, download or activation of malware, unauthorized access to session information, and ultimately, unauthorized access to sensitive information.

    What is Content Security Policy?

    In this post and its follow-up, I intend to focus on CSP and SRI, as these methods are basic and fundamental. There are certainly many other, more elaborate methods to achieve the same level of mitigation, and I welcome that discussion in the comments.

    CSP is a security feature that is part of the HTTP response header information sent by the server and interpreted by the client browser. Put very simply, CSP is a set of instructions (called “directives”) which set prescriptive parameters specifying where the content in the pages served by the HTTP service may originate, what kinds of content may be served, and what other pages can be directed into the session. The uses for such policy directives should be very obvious; correct implementation of CSP will help mitigate cross-site scripting (XSS), injection, and click-jacking attacks. In fact, this was the original intent of CSP, which was first implemented in 2010 by Firefox v4 and then quickly adopted by other major browsers.

    What Does CSP Actually Look Like?

    Actually seeing what CSP looks like and how it is rendered is important for assessors, software engineers, site administrators, and even general users. There are a few ways we can examine HTTP response headers. Let’s walk through a couple of them.

    How should I configure CSP?

    This depends on the services offered by your web application and the depth to which you intend to monitor/protect content on your pages. An excellent minimal example can be found at CSP Evaluator (click “Example safe policy”). This will typically achieve most compliance goals and satisfy your auditors/examiners that you’ve taken steps to ensure the integrity of the information your organization presents in its web applications. Additional reading on Foundeo’s CSP page should be considered mandatory if you’re looking to go deeper. It’s important to remember that as you turn CSP directives on and off, the behavior of your web application will almost certainly change, possibly in unpredictable ways. As always, test all changes in lower environments, within the boundaries established by your organizational change management program, before proceeding. If you don’t believe me, take it from the experts.
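    As a minimal illustration of the directive syntax (this is a made-up policy with a hypothetical script host, not a recommendation for any particular site), a CSP response header might look like this:

```http
Content-Security-Policy: default-src 'self'; script-src 'self' https://scripts.example.com; object-src 'none'; frame-ancestors 'none'
```

    Here `default-src 'self'` restricts content to the page’s own origin, `script-src` adds one explicitly trusted script host, `object-src 'none'` blocks plugin content, and `frame-ancestors 'none'` prevents the page from being framed by other sites (the click-jacking mitigation).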

    Viewing HTTP Headers in Postman

    Postman is a free-to-use API viewer and builder. It’s super useful for many things, and this case is no exception. I use the downloadable/compiled version, but there is also a web-browser-only version. Once you launch Postman, it’s simple to check the HTTP headers: just type (or copy/paste) the URL you’re testing into the ‘GET’ bar. In this case, you’re quite literally sending a ‘GET’ request to the HTTP server. The server should then return all of the response headers for us to explore. Using the tabs in the response section (the bottom part), we should see a tab that says “Headers”. Look for content-security-policy and select the text in that field. Here’s a GET request I just ran against this site:

    (Screenshot from Postman: the highlighted sections show that this site has CSP enabled.)

    Most modern browsers will also let you look at the headers through the developer tools included in the browser’s advanced settings. Depending on which browser you are using, your mileage may vary. Personally, I recommend the Chrome console as a good place to start.
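    If you’d rather script this check than click through a GUI, the header is easy to pull apart once you have it. The helper below is a quick sketch of my own (not a standard library feature), and the sample policy string is made up; in practice you would read the value from the response’s `Content-Security-Policy` header:

```python
def parse_csp(header_value: str) -> dict:
    """Split a Content-Security-Policy header value into {directive: [sources]}."""
    directives = {}
    for part in header_value.split(";"):
        part = part.strip()
        if not part:
            continue  # skip empty segments, e.g. a trailing semicolon
        name, *sources = part.split()
        directives[name] = sources
    return directives

# A sample policy as it might appear in a response header.
sample = "default-src 'self'; script-src 'self' https://scripts.example.com; object-src 'none'"
policy = parse_csp(sample)
print(sorted(policy))        # ['default-src', 'object-src', 'script-src']
print(policy["script-src"])  # ["'self'", 'https://scripts.example.com']
```

    A loop over the parsed directives makes it easy to flag risky values (say, a `script-src` containing `'unsafe-inline'`) across many sites at once.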

    Using CSP-Evaluator

    A far easier (and quicker) way to check up on a site’s CSP is to simply use the CSP-Evaluator site. As I noted above, I really love this tool because not only is it easy to use, but it has samples of what a “good” CSP might look like. For now, check out the results of running this site against the CSP-Evaluator.

    Using Burp Suite Extension

    If you’re a Burp Suite user, it’s likely you’re already using the CSP Auditor extension. It’s super useful for taking a deeper look into CSP. I won’t cover the details here, but head on over to the linked GitHub repository to check it out.

    What is Sub-Resource Integrity?

    SRI has been identified by PCI DSS and other frameworks as a potentially effective way to prevent the most common vulnerabilities associated with loading external resources in an endpoint browser. Just a few of these vulnerabilities include supply chain attacks, man-in-the-middle (MITM) attacks, code tampering, and XSS/XSRF. SRI is pretty simple in its most basic implementation: a developer includes a cryptographic hash of a resource in the page, and the browser compares the hash it computes of the fetched resource to the one the developer provided. If there is a mismatch, most browsers simply fail to present the content associated with that particular object. Pretty nifty, and it’s been around since 2016, when it was first introduced as a W3C spec.

    What Does SRI Actually Look Like?

    Again, seeing SRI in action makes it easier to understand how it works and how simple it is to use.

    <script
      src="https://example.com/example-framework.js"
      integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC"
      crossorigin="anonymous"></script>
    

    In this example from Mozilla, we can see a script element invoked with an “integrity” attribute. We also see the “sha384” algorithm identifier, followed by the actual hash. The browser compares the hash in the code to a hash it computes of the script at the time the page is read; mismatches result in rejected content. Easy to implement in a simple web architecture.
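    Generating the integrity value yourself is straightforward: hash the exact bytes of the resource and base64-encode the digest. This sketch mirrors what the commonly documented `openssl dgst -sha384 -binary` pipeline produces; the script content below is a stand-in, not a real resource:

```python
import base64
import hashlib

def sri_hash(resource: bytes, alg: str = "sha384") -> str:
    """Compute an SRI integrity value of the form '<alg>-<base64 digest>'."""
    digest = hashlib.new(alg, resource).digest()
    return f"{alg}-{base64.b64encode(digest).decode()}"

# A stand-in for the bytes of example-framework.js.
content = b"alert('Hello, world.');"
print(sri_hash(content))  # paste the result into the script tag's integrity attribute
```

    Remember that the hash must be computed over the exact bytes served; any re-minification or CDN transformation of the file will change the digest and cause the browser to reject it.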

    What if We Can’t Implement SRI and CSP Within Rendered Pages?

    This question has become quite common among entities that have a compliance burden but, due to technical constraints, cannot enable CSP and/or SRI without impacting the functionality of their solution. So what are the alternatives? Thankfully, there are options. I will highlight a few that I am aware of below, though I am sure there are many others.

    I have seen Akamai CSP demonstrated a few times, and it’s an interesting solution. At its core, it works by injecting client-side JavaScript into all page code; the script then monitors the client-side activity rendered in the browser. It is not necessarily without flaws, as some known malware can block and redirect browser scripting, but it is certainly a very good mitigation.

    While I am not as familiar with the Jscrambler Client-side Protection and Compliance platform and products, their website contains a great deal of documentation, which I have had the chance to browse. This solution also seems well positioned to meet the compliance and technical burden of script and page integrity, and could be a good fit depending on what your organization needs.

    As I note earlier, there are most likely many other solutions available, including open source options. If you’re aware of one or more, tell me about them in the comments; I will check them out and happily add them here. As always with any solution, your requirements should be the foundation for selection.

    Summary and Call to Action

    CSP and SRI are emerging not only as a compliance necessity, but as an important mitigation for client-side attacks. While the cost of re-architecting your web application to accommodate these controls may be steep, it should be weighed against the risk of compromise, as in any basic cost-benefit analysis. Things to consider include your compliance burden, the transactional profile of your web application (e.g., credit cards, PII, PHI), the volume of transactional data, and the potential reputational cost of leaking that information through client-side attacks. In part 2, I will summarize some of the compliance burden that should be examined to determine applicability.

    Generative Artificial Intelligence Disclosure

    Generative AI was not used to directly facilitate the writing of this post. Validate this statement.

    Generative AI was used only to verify the factual components of this blog post. The ChatGPT prompt used was:

    Can you verify the factual components of this blog post and highlight inaccuracies based upon the citations?

    As a result, slight corrections were made to dates regarding CSP implementation (2010 instead of 2011) and guidance from PCI DSS for SRI adoption. No other changes resulted from generative AI analysis.

  • About This Place

    As I have progressed through my career, I find more and more that the knowledge and wisdom I have accrued are worth sharing. As many of my peers and colleagues know, explicit knowledge documented publicly makes us all better at our jobs and helps us improve the service we provide to our principals and clients. I’ve absorbed so much explicit knowledge that I felt it was time to give back.

    My hope is that this blog is consulted by those seeking the same kind of useful and insightful information that I have sought and found throughout my career. The topics I cover will likely focus a great deal on risk and compliance, including PCI DSS, financial services trends, SOC 2, and cybersecurity trends. I may also share some tips and tricks I have learned during my years as a technology and cybersecurity professional, along with some leadership anecdotes and best practices for service delivery. Finally, I may even highlight some landscaping, gardening, and grill master tips for those interested, and to break up the monotony.

    Of course, I am obliged to include the following ubiquitous legalese: all of the opinions expressed on these pages are my own and do not represent the firm I work for, any of my client information, or any principals I serve.

    Thanks for being here.

  • About Me

    I’m Jon.

    Since 2000, I have worked in the information technology field, spending the first 13 years of my career in a number of diverse roles across several industry sectors. I began in tech support, working at a call center and later at a data center. I spent five years working my way up to systems and network administration for a technology startup, and during that time I even spent some time as a .NET developer. During those first 13 years, I was exposed to a great deal of what are now legacy technologies and platforms, including Oracle Unix, DB2, z/OS, JCL, COBOL, .NET, Sun Systems, and a whole plethora of old and dated acronyms we hardly even use anymore. I developed multiple compiled and web applications for manufacturing firms and electric utility providers, and to my knowledge they continued to be used well beyond my tenure at those employers.

    Starting in 2011, I began my transition to cybersecurity by serving as an information security analyst at a regional bank, where I performed technology risk assessments on third parties and internally used applications. I served in this role until 2017, when I decided to pursue my Qualified Security Assessor (QSA) certification with a small consulting firm which had recently been accepted as a QSA firm. From there, I moved to a Certified Public Accounting firm, where I performed cybersecurity risk, PCI DSS, and PCI 3DS assessments. As a manager at that firm, I served a high-performing team of talented individuals who in turn helped me serve my clients.

    In December 2024, I was thrilled to return to industry where I am now able to leverage my experience as a practitioner and auditor to help my colleagues, customers, leadership, and stockholders to protect my organization’s assets and information.

    In addition to PCI QSA and PCI 3DS, I hold multiple industry certifications including Certified Information Systems Security Professional (CISSP), Certified Information Systems Auditor (CISA), ISO 27001 Lead Auditor, and Payment Card Industry Professional (PCIP). My current focus areas are cybersecurity assessments, artificial intelligence governance, and PCI DSS compliance. I also have experience leading and conducting independent assessments for financial services clients within the SWIFT Customer Security Programme (CSP), FFIEC Cybersecurity Maturity Assessment Framework/Tool, and the FedLine Solutions Security and Resiliency Assurance Program.

    Finally, my passions are gardening, cooking meals on my smoker, and spending time with my family. I am a proud father and dedicated husband. I drink bourbon.