📆 Join Us Every Wednesday at 9:00PM ET (see our latest events on LinkedIn)
What is GRC?
GRC is an umbrella term for 3 domains: Governance, Risk, and Compliance.
It encompasses the Who, What, When, Where, Why, and How for much of an organization’s cybersecurity posture. GRC work helps provide the foundations for how an organization maintains security, gives justification for those foundations, and protects organizations from potential catastrophes.
Why do these 3 domains get lumped together?
GRC is a rather interdisciplinary section within the cybersecurity realm, and as such, the 3 domains have an interconnected working relationship with each other.
Since we’re dealing with umbrella terms, how about an explanation in umbrella format?
- GOVERNANCE — You decide to keep an umbrella in your car, and for your own peace of mind, infrequently check to make sure you didn’t leave the umbrella at your front door.
- RISK — You’re headed to an important meeting wearing your nicest clothes and it looks like there are rain clouds overhead.
- COMPLIANCE — As a human being, you like being warm and dry, but also have a personal standard to not potentially ruin your nice clothes in front of the boss.
Your Governance (the umbrella policy) dictates how you manage the Risk (rain) in accordance with your Compliance standards (not walking into your meeting looking like you came back from the water park). The beauty of the umbrella example (to inflate my own ego) is that it also demonstrates the flexibility of approach within GRC. Your governance policy could instead be keeping a spare set of clothes in the car and changing when you arrive at work. Your management of the rain risk could be requesting that the meeting be held online so you can attend from home. You could even change the Compliance standard you adhere to by getting a different job entirely that has no dress code (not as advisable, but I am not a career counselor).
The umbrella example can also be looked at from different perspectives to see the interconnections of the domains’ relationships.
GRC in Cybersecurity
If we look at the nature of GRC, we can see different “levels” at play in terms of who’s in charge, what’s at stake, and what the timelines look like. Depending on your position in the field, you may place different importance on where to focus.
Various resources exist for GRC materials, but a significant entity providing excellent guidance is OCEG (Open Compliance and Ethics Group).
Governance
- Governance provides organizational guidance to operations and management.
- In cybersecurity, Governance usually translates to key components of:
- Policies and Procedures
- Management and Oversight
- Planning of Initiatives
- Policies dictate approaches to various risk items facing an organization, and procedures tend to provide more direct action for accomplishing these goals.
- Often these policies and procedures are born from Compliance standards.
- Management and oversight provides leadership for these initiatives. Managing these policies and procedures helps ensure that things are done properly, and also allows for continuous improvement of the organizational security posture.
- Often management in this case takes the form of a designated security team made up of different members of organizational pillars (e.g., IT, product, HR, etc.).
- Security teams are often also responsible for handling Incident Response for security incidents.
- Planning of Initiatives goes hand-in-hand with the management side, as planning determines what actions need to be undertaken and on what timeframe. Anything from organizational product changes to specific implementations of required cybersecurity actions requires some level of planning to avoid completely wrecking operations.
Of the 3 domains of GRC, Governance is fairly simple to understand. It is where the “rubber meets the road” in terms of cybersecurity usage, and it essentially boils down to 3 components itself:
- Policies and Procedures
- Management and Oversight
- Planning of Initiatives
Policies and Procedures
Again, this notion of Governance is straightforward and fairly self-explanatory. For a well-managed and executed Cybersecurity program, Policies and Procedures must be developed. This documentation serves as a means of providing valuable information both internally to team members, and externally to any customers as assurance of good security practices.
Policies and Procedures vary in complexity; they can encompass a few dozen specific documents for important business operations, or take a more monolithic form as a single document covering all necessary topics. There are functional benefits to each method, so it's pretty much dealer's choice on which way to go.
That said, more effective documentation in this realm contains specific references to the systems being used for the procedures, and references to the regulatory/framework standards serving as the basis for an organization's decisions. Policies and procedures can also vary in terms of elements to be considered: a smaller organization may not have mature documentation of its Secure Software Development Lifecycle procedures, for example. It will all generally be scoped by the Security Team that is managing the organizational posture. A perfect segue into:
Management and Oversight
Policies and procedures may lay down the law, but security leads and management are the sheriff and deputies.
A significant aspect of Governance is determining the appropriate team members (or contractors!) to provide the human touch in maintaining standards. Having policies in place means very little if nobody is actually holding a team accountable for adherence.
Generally speaking, dedicating a select few people to manage the security posture of an organization is all that is required. Typical roles include:
- Security Officer — Usually provides high level oversight of all security initiatives
- Compliance Manager — Provides insight into aligning security initiatives with necessary Compliance standards
- Product Security Officer — Oversees security practices being done by product development teams
- IT Security Officer/Manager — Manages the security components of corporate infrastructure
- Facility Security Manager — Manages the security components of corporate facilities (e.g., server rooms, actual office buildings)
- HR Manager — Generally handles security measures involving internal risks (i.e., staff members)
- CIO/CISO - Chief Information (Security) Officer — C-level executive providing a wide overview of security initiatives for an organization
More specific tasks and roles can be broken out across an organization, but these roles generally suffice to cover the bulk of an organization's needs. Tasks such as Incident Response often get delegated to/from these roles, so there is effort at a practical level instead of simply passing down dictates from on high.
Further, there is an important relationship between these management roles and policy development. The security team should be regularly performing maintenance of the policies and procedures as organizational structures are frequently changing — not much use in having security policies apply to a product that is no longer being developed. Which leads into…
Planning of Initiatives
Determining the security initiatives to undertake is obviously an important part of the Governance side of the house. Much like how leadership of an organization must provide roadmaps for business growth, the growth and refinement of the security posture as well as execution of certain elements must be considered (e.g., developing plans to test business continuity procedures and orchestrating penetration testing). This planning is an important element in and of itself as organizational changes frequently warrant consideration for ongoing security standards.
Planning will also involve some level of Risk management, but that will be covered in the Risk section.
Putting it together
Governance is an almost self-explanatory measure. It is the steering element of an organization's cybersecurity program, developing and executing the policies that dictate the security posture. As with any element of business, an organization's security team will have nuances and overlaps of duties, but that gets into the nitty-gritty of things.
Risk
- A simple view of risk is the analysis of threats an organization faces weighed against the vulnerabilities the organization owns.
- Vulnerability = a weakness in an organizational structure
- Threat = whatever exploits vulnerabilities
- Multiple types of risk exist that an organization may face. From a cybersecurity perspective, the risks evaluated relate to the Confidentiality, Integrity and Availability of an organization’s systems, products and data.
- Risk factors can be evaluated in different ways, and the evaluation is used to determine the best course of action for managing a risk.
- Not all risks require remediation, and some can’t be fully remediated.
- Governance helps provide guidance on how risks may be evaluated, monitored, and remediated.
- Compliance helps determine some of the risks an organization faces and also can dictate how the risks should be managed.
What, were you expecting some OTHER board game?
What is Risk?
Quite a loaded question that seems obvious while at the same time being quite the rabbit hole. For a proper definition, I will defer to NIST:
“An effect of uncertainty on or within information and technology. Cybersecurity risks relate to the loss of confidentiality, integrity, or availability of information, data, or information (or control) systems and reflect the potential adverse impacts…”
From a practical standpoint, Risk can be considered as the analysis of threats an organization faces weighed against any vulnerabilities the organization has.
Clear as mud? Good! Now you know why there’s money in this type of work. Read on!
Vulnerability and Threat
Chances are if you’re reading this page or are involved in cybersecurity from even a studious standpoint, you know what a vulnerability is. But for a good little refresher, a vulnerability is a weakness in an organization that exposes it to potential damage. In the same vein, a threat is whatever exploits the vulnerability to make that potential damage a reality.
Being cybersecurity fans, we tend to think of hackers and corporate espionage when we think of threats. We think of malicious actors inside and outside an organization seeking to steal data or bring the organization down by finding poorly written code that exposes everything.
But there are certainly less theatrical (and sometimes more theatrical) versions of vulnerabilities and threats. While we often think of malicious actors exploiting employees who don't know to report phishing emails, we sometimes overlook other significant vulnerabilities, such as the simple lack of a fire extinguisher near a server room! The vulnerability (servers not doing well in fire) is still there, and the threat (a building fire) still exists, so we still have a Risk to consider!
Types of Risk
Risks take all shapes and sizes — the fire-threatened server room is an example of a non-technical risk. As such, it is prudent for organizations to take stock of the potential Risks facing them and decide on what to do. Some examples to consider include:
- External risk — Malicious actors, supply chain problems, geopolitical disputes
- Internal risk — Also malicious actors, but also accidents happen!
- Environmental risk — Earthquakes, fire, floods, hurricanes
- Operational risk — Working with a third party with a history of significant data breaches
- Technical risk — Lack of geographical redundancy in the event of regional downtime
- And many more…
This list is of course not exhaustive of all types of examples, but it demonstrates that Cybersecurity Risks can be well beyond the more obvious scopes.
To properly evaluate risk, an organization must consider the 2 primary factors mentioned earlier: the vulnerability and the threat. Each will have a differing magnitude based on an organization's scoping.
A common method on the pentesting side of cybersecurity is to rank risks on a matrix of Magnitude of Impact (MoI) vs. Ease of Exploitation (EoE).
Risk matrix: Magnitude of Impact (MoI) vs. Ease of Exploitation (EoE)
This is a conceptual model of how one can define risk, but it can grow or take on other metrics depending on who is looking at what aspect. If we apply a similar model to more of an Internal Risk (such as a disgruntled employee), then both the Magnitude of Impact and the Ease of Exploitation would be directly correlated with any access privileges that employee has. The same logic extends to anybody in the organization: while a system admin might be considered High Risk, a building custodian could be considered Low Risk depending on the scope.
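As a hedged sketch of the MoI/EoE model, the matrix can be expressed as a simple lookup. The level names and rating thresholds here are my own illustrative assumptions, not a prescribed standard:

```python
# Illustrative Magnitude-of-Impact vs Ease-of-Exploitation risk matrix.
# Levels and thresholds are assumptions for demonstration only.

LEVELS = ("low", "medium", "high")

def risk_rating(impact: str, ease: str) -> str:
    """Combine MoI and EoE into a coarse risk rating."""
    score = LEVELS.index(impact) + LEVELS.index(ease)  # ranges 0..4
    if score >= 3:
        return "high"
    if score >= 2:
        return "medium"
    return "low"

# A sysadmin with broad access privileges: high impact, high ease
print(risk_rating("high", "high"))   # high
# A custodian with no system access: limited impact and ease
print(risk_rating("low", "medium"))  # low
```

The same skeleton extends naturally to finer-grained scales or weighted scores; the point is only that two independent factors combine into one actionable rating.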
In some cases, what also appears to be a Risk may not actually be so. If you have a database that contains very important customer data like their birth dates, SSNs, and payment card info, exposure from that database would be massively risky. But if the database is behind a private intranet and accessible to only 2 people under an NDA and strict activity monitoring, then the Risk level of having the database itself can be considerably lowered.
The key point here is weighing the metrics of the vulnerability against the threat. There are multiple ways to calculate the risk factors facing an organization; the model above is a simple one that is easy to digest. The idea is that an organization can classify what actions to take in response to the risk.
Which leads us to…
Risk Management
This is the crux of the Risk domain. With an understanding of Risk and how to evaluate its impact on an organization, those in charge of managing Risks can make important decisions. Essentially there are 3 decisions when it comes to managing risk.
There are a few more actions an organization can decide on, but these 3 are mainly what it boils down to. And they're fairly straightforward in theory.
- Remediate = Fix the vulnerability to negate it
- Monitor = Carefully watch for signs the vulnerability is being exploited or increases in severity
- Accept = Keep track of the Risk factors and escalate as needed, but otherwise less active monitoring
Management in this regard usually boils down to the severity of the Risk itself; specific actions are usually based on factors such as cost of remediation, manpower/technical expertise required, practicality of exploiting the vulnerability, and considerations around business operations.
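A hypothetical sketch of how a risk register might tie a severity rating to the three decisions above. The risk names, ratings, and rating-to-decision mapping are illustrative assumptions; a real register would weigh cost, expertise, and business impact as described:

```python
# Hypothetical risk-register entries mapped to the three management
# decisions. Names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    rating: str  # "low" | "medium" | "high"

def decide(risk: Risk) -> str:
    if risk.rating == "high":
        return "remediate"  # fix the vulnerability to negate it
    if risk.rating == "medium":
        return "monitor"    # watch for exploitation or escalation
    return "accept"         # track and escalate only as needed

register = [
    Risk("unpatched VPN appliance", "high"),
    Risk("legacy printer firmware", "medium"),
    Risk("shared guest Wi-Fi password", "low"),
]
for r in register:
    print(f"{r.name}: {decide(r)}")
```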
Risk management tends to warrant its own levels of policy within organizational Governance, and is a significant motivating factor for Governance practices. Risks also play into Compliance considerations as well since Compliance seeks to provide regulatory oversight into how organizations should best manage the Risks that face them.
In essence, Risk ties the 3 domains together into what actually must be done to protect organizations.
Compliance
- Compliance standards set the rules by which organizations determine their cybersecurity postures.
- Some regulatory standards are required based on industry, type of business conducted, and where business is conducted.
- Other standards are wholly voluntary and are intended to improve organizational security posture and/or strengthen business-to-business (B2B) and business-to-consumer (B2C) relationships.
- Some standards are strict and specific, while others are flexible and up for some interpretation.
- Organizational compliance can be maintained internally, but is also commonly audited by third-party entities.
- Automated tooling exists to aid in auditing, but is still in its relative infancy.
- Compliance standards are frequently used to determine Risk factors facing an organization as well as developing organizational Governance practices.
What is Compliance?
Compliance is arguably the towering entity of the GRC triad. These are the rules, and the entity or entities that oversee adherence to them. In the tightest of all nutshells, being compliant means you play by the rules, literally answering the question “Do you and your organization COMPLY with xyz rules?”
With that in mind, however, not all rules apply to every organization. Some regulations are required, others can be “addressed,” and some are entirely voluntary and/or serve as guidance.
Common Compliance standards
Note: * indicates a voluntary standard and compliance is based on internal decision-making (aka Governance) or legal obligations such as customer contracts
The average compliance standard documentation. Note not only the massive size, but the archaic language within.
Don’t worry, we will not be covering every single one of these standards in the call. These are references if you decide to peruse this page on your own.
HIPAA
- Established and overseen by the US Department of Health and Human Services (HHS)
- Required by any organizations involved in transactions with Protected Health Information (PHI) — basically any of the following:
- Healthcare provider (Hospital, doctor’s office, pharmacy)
- Health insurance group
- Healthcare clearinghouse
- Or any organization conducting business with these health institutions (usually referred to as “Business Associates”)
- ~180 criteria for auditing the cybersecurity posture of the above organizations
- Criteria relate to data privacy controls (The Privacy Rule), security controls (The Security Rule) and breach notification standards (The Breach Rule)
- Criteria can be dependent on the type of organization being audited (e.g., Business Associates don’t need to adhere to the majority of privacy controls while Covered Entities like hospitals do)
- Criteria are not prescriptive and compliance can be up to interpretation based on size and scope of organization, and are thus designated as either “Required” or “Addressable”
- Wall of Shame
- Combined “Simplified” Language of key HIPAA Rules
SOC2
- Established and overseen by the American Institute of Certified Public Accountants (AICPA)
- Broken into 2 Types
- Type 1 — A “snapshot” type audit, it reviews the configurations and designs of security controls and scoped environments
- Usually the stepping stone to Type 2
- Type 2 — An audit that covers a length of time (often 6 months) in which not only are controls evaluated, but processes and policy compliance as well
- Usually performed retroactively on an annual basis (e.g., if the audit is performed January 2023, the audited timeframe could be June-December 2022).
- Voluntary standard, but it is robust enough that it looks VERY good for an organization to be compliant.
- SOC2 reports are generally intended to serve internal initiatives as well as any customer requests.
- Can be obtained at any point in an organization’s life, provided the security posture is mature enough and the organization has enough money to cover the audit process.
- SOC1 and SOC3 reports are also available, but have different scoping and relevance
- SOC1 focuses on control design and Transaction Processing
- SOC3 is essentially a redacted SOC2 that can be used for general/public consumption
PCI DSS
- Overseen by the Payment Card Industry Security Standards Council (PCI SSC)
- A mandatory standard for any organization that processes card payments from the major card carriers (MasterCard, VISA, etc.).
- However, it is not actually a legal standard like HIPAA or GDPR.
- Failure to comply with PCI DSS can cause multiple operational issues such as revocation of the ability to process card payments, monetary penalties paid to PCI SSC (up to $500k per incident), and potential investigations and audits from the FTC.
- Compliance is based on 12 requirements that cover technical controls, policies and procedures, and testing of these items regularly.
CMMC
- Created and overseen by the US Department of Defense.
- Mandatory for any organization that is contracting with the DoD
- Based around protecting Federal Contract Information (FCI) and Controlled Unclassified Information (CUI)
- FCI is information not intended for public release but generated by or for the US Government for products/services.
- CUI is basically any other information used in accordance with Government contracts and includes information such as Infrastructure, Intelligence and Law Enforcement data.
- Aligned to NIST SP 800-171/172 standards.
- 3 Levels of compliance (1 being lowest, 3 being highest) and contracting will dictate the level of compliance required by an organization.
- Established in 2020, but technically under “interim rules” until March 2023.
- Mandatory compliance is slated for October 2025.
GDPR
- Mandatory standard for any organization that collects data from or targets citizens of the European Union.
- Very robust standards and harsh punishments for non-compliance (up to 20 million euros!!!)
- Mandates how organizations collect and process data as well as forcing provisions to be made for the transparency of data use and option to delete user data. (If you see that banner that asks you to accept all cookies on a website, you can thank the EU!)
- This includes not only corporate infrastructure, but any applications/digital products must also be maintaining good documentation and transparency.
- Dense legalese in the defined articles. But they provide a good overview.
CAN-SPAM Act
- Passed in 2003 and overseen by the Federal Trade Commission (FTC).
- Defines “unsolicited marketing emails” and creates rules to be abided by for these messages.
- Organizations sending marketing emails must allow a recipient to unsubscribe from future emails (even if they never requested subscription in the first place).
- Dictates that certain information must be plainly visible to the recipient within the messages (e.g., accurate sender information, the organization's address, and labeling indicating any Adult content).
- Also requires any and all marketing messages to clearly indicate accurate information for the product or service being advertised.
- Directed the FTC to develop a plan for a national “do-not-email” registry (the FTC ultimately declined to implement one).
- Penalties for violations include fines and jail time.
- In case you find yourself alone and bored on a Saturday night.
SOX (Sarbanes-Oxley Act)
- US federal law passed in 2002 and overseen by the SEC
- Created to protect investors and the general public from financial errors and fraud due to “cooked books”
- Mandatory standard for all publicly traded companies in the US
- Mandates the following standards for companies:
- Executive-level assurance to the accuracy and legitimacy of financial data
- Regular financial reporting controls
- Formalized Data Security Policies and Procedures (Access controls, data backups, change management, general security protections)
- Failure to comply can lead to shutting down of businesses, hefty fines and jail time
CCPA
- Enacted in 2018
- Impacts any organization targeting or interacting with the data of California citizens that meets one or more of the following criteria:
- Generates gross revenues of $25 million or higher
- Buys, sells, or receives data for 50,000+ consumers or households
- Earns more than half its annual revenue from selling consumers’ personal information
- Essentially like GDPR with more specific provisions
- Compliance requires proof of data protection and evidence of proper data deletion protocols and tracking of sold data
- HIPAA actually overrides CCPA in cases where consumer data is classified as PHI
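The CCPA applicability criteria above can be sketched as a simple check. The function and parameter names are my own invention for illustration; the dollar, record, and revenue-share thresholds come from the list above:

```python
# Sketch of a CCPA applicability check based on the criteria listed
# above. Function and field names are illustrative assumptions.
def ccpa_applies(gross_revenue_usd: float,
                 consumer_records_traded: int,
                 pct_revenue_from_selling_data: float) -> bool:
    """An organization handling California consumers' data is in
    scope if it meets ANY ONE of the three criteria."""
    return (gross_revenue_usd >= 25_000_000
            or consumer_records_traded >= 50_000
            or pct_revenue_from_selling_data > 50.0)

print(ccpa_applies(5_000_000, 60_000, 10.0))  # True (records threshold)
print(ccpa_applies(1_000_000, 1_000, 5.0))    # False
```

Note the OR logic: meeting any single criterion is sufficient, which is why even a modest-revenue business trading lots of consumer records can fall in scope.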
ISO 27K Series
- Established roughly in 1995 by the ISO and IEC (International Electrotechnical Commission)
- Meets on a semi-annual basis for any potential updating of standards
- Actually a family of 63 different standards to cover any type of applicability — Overarching Information Security Management, Auditing Guidelines, Cloud Service controls, etc.
- Voluntary standard to provide solid guidance on how to operate cybersecurity systems
NIST CSF
- Originally published in 2014
- Framework based on 3 components:
- Core — Standards and Guidelines for cybersecurity practices
- Tiers — Contexts on how organizations can measure cybersecurity postures
- Profile — Cybersecurity posture outcomes based on business needs (e.g., scoping)
- 5 Critical Functions — Identify, Protect, Detect, Respond, Recover
- Voluntary standard like ISO 27K series, but doesn’t actually have a certification tied to it
- Reference document
CIS Controls
- Originally developed by the SANS Institute and now maintained by the Center for Internet Security (CIS)
- Based on 18 different control criteria with 3 Implementation Groups (IGs) of increasing maturity
- IG1 - Basic cybersecurity hygiene
- IG2 - Robust cybersecurity hygiene
- IG3 - Fully mature and ongoing cybersecurity posture
- No organizational certification for compliance, however… If you want to add another cert to your personal list
Justification
Different standards are used for different purposes. Some standards are mandatory by industry or business activities (e.g., PCI DSS for any organization that processes customer credit card data), while others are regarded on a more voluntary basis (e.g., SOC2). What justifies adherence to compliance standards will be based on multiple factors, such as:
- Industry of the organization
- Contractual obligations
- Region(s) of business operations
- Financial burden (both to implement compliance-based controls and dealing with potential fines/lawsuits for non-compliance)
- Organization size
- Operational maturity
- Manpower needed to maintain adherence
- Customer relationships
This list has common factors in determining the pursuit of compliance but is not exhaustive. These considerations will ultimately contribute to the overall cybersecurity posture of an organization and the Governance and Risk discussions as well.
Even though most standards will be based on industry best practices, the prescriptive nature and strictness will vary greatly between standards. For example, HIPAA has several criteria that are considered “Addressable” in that an organization can have some level of compensating controls that offset a more prescriptive need (e.g., hosting confidential data on servers physically hosted by Microsoft Azure rather than a server found in the closet of an employee’s home) or can have exceptions made to not meet a specific criterion.
The notion of strictness helps define a standard and can thus make it more amenable to certain organizations based on scope. If an organization is just getting its feet wet in the cybersecurity space and does not have a high level of revenue, it probably does not need to pursue a SOC2 attestation (this goes back to the Justification section above).
Further, the notion of strictness also applies to penalties in the event of non-compliance. If an organization is not compliant with ISO 27k, there is no direct penalty facing it beyond client relations as ISO is not a regulatory body. However, if that same organization violates GDPR standards, then it can face significant damages in terms of fines to the European Union.
Just as you should call in a house inspector before buying a home, an organization needs a set of eyes watching compliance. Depending on the standards, compliance can be determined internally: an organization can assign a Compliance Manager type role to an individual or group of employees, and they can provide oversight of NIST CSF compliance, for example.
However, not all compliance standards can be audited internally, and some should not be. A SOC2 audit is performed by a third-party entity granted such power by the AICPA (the body governing SOC2 standards). Beyond that, an internal audit may be subject to unintended bias that overlooks critical elements of standards.
In some cases, non-compliance may not even be readily discussed until AFTER a significant instance of non-compliance appears. The US Department of Health and Human Services does not conduct regular audits of organizations for HIPAA, but it will audit an organization in the event of a breach. In such a case, it would be a potentially cheaper option to engage a third-party to perform an audit as part of a “Gap Assessment” rather than shelling out money for fines. (The Office of Civil Rights has collected over $130 million worth of fines in the past 19 years!)
Working in tech, you can safely bet there are software tools for determining compliance posture. With as much information as goes into determining compliance, it's often more than one person can hold in mind effectively at any given moment. Tooling ranges from a simple spreadsheet to a whole automated testing solution utilizing APIs for various systems.
Spreadsheets have the convenience of being available basically for free, but the cost lies in a) effective development of the sheet’s contents, and b) proper analysis of the content. It can be quite time consuming to develop and alter a functional system this way, especially if you are performing assessments for multiple clients.
The increasing complexity of compliance standards has yielded automated compliance tooling. Tools such as Drata, myVCM, and Microsoft Compliance Center have begun stepping in to make the auditing process easier. However, while the automation helps generate good evidentiary material, it is still relatively in its infancy. Automated ratings tend not to pick up on nuances of scoping and can generate false positives or negatives in their analysis. Further, these tools may not fully integrate with an organization's systems and must be used judiciously.
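To illustrate the spectrum from spreadsheet to automation, here is a toy sketch of an automated control check. The control names, checks, and configuration keys are invented for the example, not drawn from any real standard or commercial tool:

```python
# Toy compliance checker: evaluates invented control checks against a
# mock system configuration. All IDs and settings are illustrative
# assumptions, not a real standard's criteria.
config = {
    "mfa_enabled": True,
    "backup_frequency_days": 1,
    "password_min_length": 8,
}

controls = {
    "AC-1: MFA required":          lambda c: c["mfa_enabled"],
    "CP-1: daily backups":         lambda c: c["backup_frequency_days"] <= 1,
    "IA-1: passwords >= 12 chars": lambda c: c["password_min_length"] >= 12,
}

# Run every check against the current configuration
results = {name: check(config) for name, check in controls.items()}
for name, passed in results.items():
    print(f"{'PASS' if passed else 'FAIL'}  {name}")
```

Even this crude version shows the appeal: checks are repeatable and produce evidence on demand. It also shows the limitation the paragraph above warns about, since the script only knows what its hard-coded checks and scoping assumptions tell it.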
Putting it all together
This page scratches the surface of GRC as the topics of each domain can be discussed on their own for hours at a time. Since I lack the interest in developing that content (and you would likely lack the interest in consuming it unless you are a masochist for spreadsheets and document templates), this last section will attempt to coalesce these items into relevant connections to the cybersecurity landscape as a whole.
The image above provides a very rough idea of where the GRC domains fit into commonly known cybersecurity paradigms (Red, Blue, and Purple teams). Compliance, however, is a bit different: since it can provide a context for performing any item on the Red-Blue spectrum, it can in essence be the “encapsulating” factor. One can especially see this when running through a regulatory “Gap Assessment” or “Risk Assessment.”
Having GRC knowledge is a boon for any cybersecurity professional, or organization seeking to improve its cybersecurity posture. Red teamers can utilize the institutional knowledge of the Risk Management systems and any potential Governance/Compliance documentation to provide insight into potential means of attack and how to develop better reporting for engagements. Blue Teamers can cater their services to better meet the Compliance and Governance needs of an organization, while also serving as a critical component in Risk Management.
A Compliance Analyst’s perspective
I came into this field relatively light on technical skills compared to my colleagues. Although I am Security+ certified, I lacked robust experience on both the Red Teaming and Blue Teaming fronts. But getting involved with GRC has allowed me to grow my knowledge significantly in a short time, not only in the nuances of IT setups, but also in the realm of penetration testing as I review, and sometimes contribute to, pentesting reports.
There is also variance in my day-to-day activities. Between Gap Assessment work and vCISO service work, one day I can go from tracking risks for a client to performing Security Incident Response, to analyzing policy documents, and even to developing tools to make my job easier.
The variability in duties is actually enjoyable to me as it allows for more creativity and can prevent a sense of mundanity that might face more straightforward roles. I also have a strong sense of customer service, so this provides a means of getting that career desire met.
Suggested traits for prospective GRC team members
This list is by no means comprehensive, but can be helpful in determining who might be better served for GRC work.
- Organized — Absolutely necessary for juggling multiple documents at once
- Communication skills — Both verbal and written since there will likely be a lot of customer/internal communications occurring as well as obvious document creation
- Flexible thinking — Scoping is huge in GRC and can be tricky to consider
- Knowledge of common technical infrastructure — AWS, Google Cloud, and Microsoft Azure are common entities you can be auditing or dealing with, at least indirectly
- Knowledge of common cybersecurity frameworks and regulations — Seems obvious, but a lot of regulations are based on well-known frameworks such as NIST CSF
- Resourcefulness — Sometimes the answers aren’t always right in front of you, so you’ll need to do some searching