Controls management is an integral part of risk management but can often end up in disarray. Controls can proliferate, leading to unnecessary duplication. The purpose of the controls themselves may not be fully understood, and testing methods can be inconsistently applied.
Protecht’s From controls chaos to controls assurance webinar brought together our Chief Research & Content Officer, David Tattam, and Research & Content Lead, Michael Howell, to explore how you can build a robust control assurance program and bring it to life. If you aren’t sure whether your organisation is in control, this webinar is for you.
We had great feedback from our attendees, including the questions answered below. If you missed the webinar live, then you can view it on demand here:
Question topics
- Control types
- Control taxonomies and characteristics
- Shared controls
- Controls testing and assurance process
- Control ownership & accountability
- Other questions
Control types
Is your position regarding 'directive' controls based on the fact that they don't ensure a particular outcome?
Interested in the role of internal policy in the controls framework - you mentioned that you saw it as a directive control.
What is the significance and future outlook of directive controls keeping in view rising costs of doing business?
Meeting compliance obligations sometimes requires directive controls like policies or procedures. In those cases, how can we demonstrate whether we are meeting the compliance requirements or not?
I enjoyed getting familiar again with the bow tie concept. I may have missed it in the webinar, but what was your reasoning for not using directive controls?
In healthcare, directive controls (following a specified process) are important. The WHO Surgical Checklist, as an example, has reduced wrong-site surgery.
It looks like this position struck a chord! While I’m a little more open to directive controls than David, and have used them in the past, I understand the concerns that David has with them, and I now use them much less. Let’s look at both sides.
A directive control involves directing people on how to act or behave, typically in the form of policies or procedures. Especially if you think about risks in terms of bow ties and the cause-event-impact lifecycle, directive controls don’t typically attach anywhere specific in the bow tie. What you might find is that within the policy or procedure are one or more controls that can be extracted. Considered this way, the policy or procedure supports the more specific control. We typically consider this as part of its design effectiveness.
As an example, you might have an ‘Anti-Money Laundering procedure’ listed as a control. Within that, there are likely to be more specific actions that must be taken that will address specific causes or interim events (probably multiple at different points) throughout the lifecycle of risk, and these are more likely to be the actual preventive, detective or reactive controls. If there isn’t a targeted control objective (explaining how it is modifying the related risk), it can be more difficult to understand and assess whether a directive control is designed and operating effectively.
Depending on exactly how they are applied, there can be a side benefit to directive controls. They can inform people on expected outcomes, including how more specific controls are meant to work. Rather than being considered a control themselves, they can enhance awareness of the other controls, so people can identify and call out control weaknesses early. This is one way that directive controls can go hand in hand with the other types.
Of course, if regulation requires you to document a policy or procedure as a control, then you should do so. If all you’ve got is a directive control, ask the key question: how does it modify the risk? Try to be specific. This might help you identify additional controls that are more specific – or make you realise you are just hoping everyone reads the document and does the right thing.
Sometimes we struggle with differentiating between a control and a process. What would you define as the key differences between the two? What actually makes something a control vs a process?
In my experience, a lot of controls are really primary/core processes, rather than controls. What advice do you have on how to explain the difference?
Is an exception process a control?
A process is something that is required to produce the outcome you are looking for and to achieve an objective. For example, you may need to process payments for your customers, which is absolutely essential to meeting your objectives. An exception process might be a control in this instance, such as a fraud check. The fraud check is not required to complete the process, but you want to remove or reduce the uncertainty of the payment going to the wrong person. We typically say ‘detect and act’, depending on what the analysis finds. The check might find nothing, and the payment then gets processed; or the funds might be held with appropriate follow-up or additional verification.
We usually include exception reporting as a category in our controls taxonomies.
Is there a good generic example of a real-world corrective control? I'm a bit grey on whether they are future actions like closing Internal Audit findings (which are also RTPs), or doing something to reduce the impact of a risk once it has materialised.
I assume ‘RTP’ in this context means Risk Treatment Plans – we would refer to these as actions that address a control weakness or a risk that is outside of appetite/tolerance. I can see how these might be viewed as ‘corrective’, but they are different from a corrective control.
Corrective controls are sometimes also called reactive controls. They sit late in the lifecycle of risk, usually coming into effect once there has been an impact on objectives, and they aim to reduce that impact. The typical example is insurance – you experience a loss, and then some of it is recovered from the insurer. Data backups are another example – you may have experienced disruption or a data breach, and the more recent the backup, the less the impact on your organisation.
I've pictured detective controls as being controls that let you know that a risk event is occurring (e.g. system monitoring has identified someone transferring sensitive files onto a thumb drive). With early and late detective controls, is the idea that early detective controls allow the organisation to know the event is beginning, and late detective controls are for knowing how it is unravelling?
The shorthand version is that early detective controls reduce likelihood, and late detective controls reduce impact. Detective controls really mean ‘detect and act’ based on analysed information. Let’s take the risk of processing payments to the incorrect account. If you have an exception report that identifies data entry errors before payments are actually sent, this would be early detective – you can fix the error before the payment is processed. If the exception report isn’t reviewed until the end of the day, by design this would be late detective. You might be able to recall or reverse the transaction before damage is done, thus reducing the impact.
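To make the distinction concrete, here is a minimal Python sketch of that payments example. The field names and logic are entirely hypothetical: the early check runs before payments are released and can block them (reducing likelihood), while the late check runs after release and can only trigger a recall or reversal (reducing impact).

```python
from dataclasses import dataclass

@dataclass
class Payment:
    payment_id: str
    entered_account: str  # account keyed in during data entry
    source_account: str   # account from the customer's instruction
    released: bool = False

def early_detective_check(payments):
    """Early detective: runs BEFORE release, reducing likelihood.
    Mismatched payments are held back and never leave the organisation."""
    exceptions = [p for p in payments if p.entered_account != p.source_account]
    for p in payments:
        if p not in exceptions:
            p.released = True  # only clean payments are processed
    return exceptions

def late_detective_check(payments):
    """Late detective: runs at end of day, AFTER release, reducing impact.
    Flagged payments must be recalled or reversed ('detect and act')."""
    return [p for p in payments if p.released
            and p.entered_account != p.source_account]

batch = [Payment("P1", "123-456", "123-456"),
         Payment("P2", "999-999", "123-457")]
held = early_detective_check(batch)
print([p.payment_id for p in held])  # ['P2'] -- caught before processing
```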
Control taxonomies and characteristics
I love the concept of control taxonomies with organisations defining standard level 1 or level 2 controls. However, to drive the concise development of key controls while highlighting the objectives of the controls, wouldn't it be better for granular level 3 controls to be developed by the respective business units, such that the control descriptions are tailored to fit the risks embedded in the processes carried out by these business units?
We agree. Our recommendation is to have a taxonomy (classification) that includes a library of controls mapped to that taxonomy. In this example we refer to the ‘level 3’ as the library rather than taxonomy, as this is where all the actual detail of the control needs to be captured. As you’ve said, controls applied in different areas of the organisation may be similar, but not the same.
We talk a lot about control taxonomy at work, but how best should we structure it? And why is this important?
Are there any resources available that detail taxonomies available for controls?
Do you have any favoured methods or approaches for building a taxonomy?
We favour two levels in your taxonomy, supported by a control library, where each control is linked to that taxonomy. That’s a rule of thumb – some organisations have a simple one-level taxonomy, and we’ve seen a few that have more. The structure of a taxonomy for controls can be challenging. The basic approach Protecht took to adopt its current controls taxonomy was to group controls by common ways that they were applied or functioned, while trying to be as ‘mutually exclusive, collectively exhaustive’ as possible. We will confess that it was not an easy exercise!
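As an illustration of that structure (the categories and fields below are hypothetical, not a recommended taxonomy), a two-level taxonomy with a linked control library might be modelled like this in Python:

```python
from dataclasses import dataclass

# Level 1 -> level 2 taxonomy: classification only, no control detail.
TAXONOMY = {
    "Verification": ["Reconciliation", "Authorisation"],
    "Monitoring": ["Exception reporting", "Key control indicators"],
    "Physical": ["Access restriction", "Asset protection"],
}

@dataclass
class Control:
    """A control library entry: the detail lives here, not in the taxonomy."""
    name: str
    level1: str
    level2: str
    objective: str  # how the control modifies the related risk
    owner: str

LIBRARY = [
    Control("Daily payments reconciliation", "Verification", "Reconciliation",
            "Detect payment data entry errors before funds are released",
            "Payments team"),
    Control("Server room swipe access", "Physical", "Access restriction",
            "Prevent unauthorised physical access to infrastructure",
            "Facilities"),
]

# Sanity check: every library control must map into the taxonomy.
for c in LIBRARY:
    assert c.level2 in TAXONOMY.get(c.level1, []), f"{c.name} is unmapped"
```

Keeping the taxonomy and the library separate means similar controls in different areas can share a classification while each keeps its own tailored description, objective and owner.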
There are some specific control frameworks that include taxonomies and lists of controls for specific use cases. Examples include ISO 27001 and NIST CSF controls for information security, among others. We recommend linking these to a broader Enterprise Risk Management control taxonomy.
I also love a good taxonomy... do you see 'Tags' as the primary feature to implement the taxonomy? If yes, do you have any mechanism to build the taxonomy in Protecht, so that the application of tags is conditional?
In our Protecht ERM system, Tags is the most common way we see taxonomies applied. We have some exciting new developments coming for controls frameworks and libraries later this year that will help align control taxonomies and frameworks – watch this space.
What is the difference between a key control and a non-key control? Does each risk need to have at least one key control?
What are your thoughts on assigning a level of 'criticality' to controls, and using that as a key consideration in determining which controls require the greatest investment to test/assure?
Individual frameworks may have specific criteria as to what they consider key, but generally we consider three types of controls:
- Key controls – A non-negotiable control. You would not perform the activity without it
- Non-key (medium) controls – An important but negotiable control
- Minor controls – Controls that modify risk but are not essential to achieving objectives
The first two would typically be captured as part of formal risk management activities and may require assurance. More focus is given to the key controls, particularly assurance. Minor controls may not be formally captured at all. There may be awareness that they exist – they may form part of procedures, or be identified in risk workshops, but don’t warrant assurance and therefore aren’t formally documented.
We have worked with some customers who included a requirement for every risk to have a key control as defined in their framework, but this is not a universal requirement. While it is typical that one or two controls will have the most influence on a risk, in some instances no single control may be considered key, but collectively several controls bring the risk to within appetite.
Is there a rule of thumb around the number of controls per critical process? If you set up too many controls, the business will consider them a hindrance and they will become ineffective.
As a rule of thumb, perhaps 4-6 per risk, but that comes with the usual caveats. A risk may warrant more depending on its size and nature. You may find risks related to cyber and information security have a great deal more, and some smaller risks might only have one or two controls that really matter.
This needs to align with the assurance/testing process to avoid the burden you highlight. A few controls may warrant more frequent, in-depth assurance and validation by someone independent; lesser controls may still be worth a regular attestation that the control still exists and is working as intended, even if no formal testing is conducted.
Shared controls
Are you seeing an increase in the practice of centralised controls assessment? It wouldn't remove the obligation of the business units to operate the control, but the centralised business units (e.g. HR, IT, other) are probably best placed to give an overall assessment, and it is more efficient than having all business units opining on controls which are centrally designed. The HR function, for example, can give an informed view on overall performance.
We are seeing a bigger call to try and rationalise control libraries while streamlining testing. We are currently updating our controls functionality to make management of shared controls even easier. There are a few ways this can work in practice, depending on the specific control:
- HR conduct a design effectiveness review (or an independent team if required), and then assess whether it is being operated across each department for an overall rating
- HR conduct a design effectiveness review, and each manager assesses operating effectiveness in their department, which can then be aggregated
- Similar to the above, HR conduct a design review, with Key Control Indicators implemented as a proxy for operating effectiveness, e.g. the % of employees who have a completed performance review. That can be reported as an aggregate number for overall operating effectiveness, while highlighting departments that require further investigation to address deficiencies (sketched below)
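Here is a minimal sketch of that third option, using made-up figures: the Key Control Indicator is aggregated for an overall view of operating effectiveness, while per-department rates surface where to investigate.

```python
# (completed reviews, headcount) per department -- hypothetical figures.
reviews = {
    "Sales":      (38, 40),
    "Operations": (55, 80),
    "Finance":    (18, 20),
}

THRESHOLD = 0.90  # illustrative tolerance for 'operating effectively'

completed = sum(done for done, _ in reviews.values())
headcount = sum(staff for _, staff in reviews.values())
print(f"Overall KCI: {completed / headcount:.0%} of reviews completed")

# Departments below the threshold warrant further investigation.
for dept, (done, staff) in reviews.items():
    rate = done / staff
    flag = "  <-- investigate" if rate < THRESHOLD else ""
    print(f"{dept}: {rate:.0%}{flag}")
```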
In the employee performance review example, our HR dept think that this is purely a management task to monitor – how do you overcome this?
My initial response is, they might be right! Management should be responsible for operating the control, and should be monitoring that they (or leaders within their team) are managing this review process. I would suggest, though, that HR should be responsible for its design. One centralised place needs to design it so that it can be applied consistently across the organisation. If it isn’t applied consistently, is it because the design is not effective, or because the management team aren’t operating it effectively?
How to overcome it? Sounds like Line 2 or 3 might need to come to the rescue (assuming that’s you). You may need to find the ear of the right person, and highlight how a potential breakdown in this process may be undermining the organisation’s ability to achieve objectives. Perhaps related data isn’t getting enough airtime – I once performed an assurance review on the performance review process. After I reported to management that some employees had not had a performance review in more than four years, they sat up and noticed, and that statistic became part of management reporting.
Controls testing and assurance process
To ensure that a risk is mitigated to an acceptable level, is it sufficient to assess only the key controls, or should the full set of controls, including secondary controls, be evaluated?
Short answer – fit for purpose. If you have all the information available and handy, then you should use it. Often, those secondary controls are there as the ‘backup’ if the key controls fail, so you would typically assess them all. Depending on the level of risk, you might perform lower levels of assurance on those secondary controls. If key controls are found to be ineffective, this is often when those secondary or compensating controls need additional assurance while the key control weakness is being addressed.
I'd really love to hear more stories about challenges with this work and how to overcome them. Conceptually it is easy to get buy in for this, easy to sell the value. Much harder to actually implement.
We assume ‘this work’ means controls assurance testing. Here are a few tips that might help with implementation:
| Challenge | Suggested solution |
| --- | --- |
| Controls testing is considered a tick-the-box exercise, not outcome focused | Ensure that each tested control has a clear, outcome-focused objective, and that all testing is aligned to those objectives |
| Controls being tested aren’t controls | Ensure there is clear guidance on what is and is not a control. If in doubt, leave it out! |
| Insufficient or inconsistent skills in the front line for testing their own controls | Train your people in both responsibility and capability, or develop a specialised controls testing team. We sometimes see a ‘Controls Office’ or similar in larger organisations. |
| Inconsistency in assessing control effectiveness | Develop a repeatable and formal methodology, and set clear guidelines on how to rate controls based on test results |
| Volume of effort required or expected in conducting controls assurance | Develop a risk-based approach to testing controls, such as only testing key controls, or testing non-key controls less frequently. This should be supported by a strict definition of what is a key control. |
What is the difference between a risk assessment and an RCSA? Should all risk assessments be conducted using the RCSA approach? What are regulators looking for when it comes to risk assessment methodology? I don't see the need for all risk assessments to be conducted in one RCSA Excel template.
A risk assessment can be applied to any set of objectives, and therefore can be applied to any level of the organisation. A risk and control self-assessment is typically applied at a department or business unit level, where they assess the risks that their department faces, and assess the controls they have in place. This is the ‘self’ part of the description.
An RCSA is one specific application of a risk assessment, but there are a large number of risk assessment methodologies you could use. While they might be applied to the ‘self’, we would still recommend that common templates and taxonomies be used, which allows others, such as Line 2 risk teams, to report on common trends across the organisation (shameless plug – we enable this type of aggregation by using common templates in Protecht ERM). To your point, other risk assessment methods can be applied to different situations – project risk assessments might have different criteria and processes, for example.
What is the purpose of having an assessment of a partially effective control? Is a simple working/not working assessment enough to prioritise action on controls?
Working and not working can be sufficient. Partially effective gives some indication that a control is working, but not fully up to expectations. This can enable prioritisation, so those with major deficiencies are addressed before those with minor deficiencies. We don’t commonly see them, but some methods also use a percentage of effectiveness, or other scales.
What controls should you prioritise for testing?
The shorthand answer is the ones that will have the most effect on the level of risk if they fail – typically those classified as key. This classification is likely based on a subjective (but hopefully well-informed) risk assessment.
At the high end of maturity, where risk is quantified, you might calculate the value of the control in dollar terms, which can then be used to prioritise testing (not an approach we see often).
That is based on the value of the control, but you also need to consider:
- The last time the control was tested
- Whether other controls over the risk (and the specific causal pathway) have been tested recently
- Indicators which might imply the risk or control environment is changing
- The assurance needs of stakeholders such as the board, regulators or critical third parties
- Compliance requirements
These take in the broader context of the risk and control environment.
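As a rough sketch of combining these factors into a testing queue – the fields and weights below are entirely hypothetical, and any real weighting should reflect your own framework:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ControlRecord:
    name: str
    is_key: bool
    last_tested: date
    environment_changing: bool   # indicators imply risk/control change
    stakeholder_interest: bool   # board, regulator or third-party needs

def priority_score(c: ControlRecord, today: date = date(2024, 6, 30)) -> float:
    """Higher score = test sooner. Weights are illustrative only."""
    months_untested = (today - c.last_tested).days / 30
    score = months_untested          # staleness of existing assurance
    score += 12 if c.is_key else 0   # key controls dominate the queue
    score += 6 if c.environment_changing else 0
    score += 6 if c.stakeholder_interest else 0
    return score

controls = [
    ControlRecord("Payments reconciliation", True, date(2024, 2, 1), False, True),
    ControlRecord("Visitor sign-in register", False, date(2023, 3, 1), False, False),
]
for c in sorted(controls, key=priority_score, reverse=True):
    print(f"{priority_score(c):5.1f}  {c.name}")
```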
What is the best way to get comfort around an automated control? Should automated controls be a priority for testing?
To get comfort over an automated control, you need to design control testing in the same way you would for any control. What steps would you need to take to assess that the control is designed effectively, and is operating up to that design? Automated controls may include algorithms and logic that can be assessed. An automated exception report that highlights zero exceptions might actually be a red flag – it might be finding nothing because it hasn’t been designed correctly.
Automated controls that have been developed based on machine learning / artificial intelligence approaches may have some challenges in this regard if they are not explainable, making it difficult to gain assurance.
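One practical way to gain that comfort is to seed test data with known exceptions and confirm the control flags them. Here is a sketch against a hypothetical rule-based control (the logic and record fields are invented for illustration) – if the report 'passes' seeded bad records, that is the red flag described above.

```python
def fraud_exception_report(transactions):
    """The automated control under test: flags large transfers to new
    payees. Hypothetical logic, standing in for a real rules engine."""
    return [t for t in transactions if t["amount"] > 10_000 and t["new_payee"]]

def test_control_detects_seeded_exceptions():
    # Seed records the control MUST flag, alongside clean ones.
    seeded = [
        {"id": 1, "amount": 50_000, "new_payee": True},   # must be flagged
        {"id": 2, "amount": 100,    "new_payee": False},  # must pass
    ]
    flagged_ids = {t["id"] for t in fraud_exception_report(seeded)}
    assert 1 in flagged_ids, "control missed a known exception"
    assert 2 not in flagged_ids, "control over-flagged clean data"

test_control_detects_seeded_exceptions()
print("Control flagged all seeded exceptions and no clean records")
```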
Whether a control is automated or not may not be enough, at least on its own, to determine its priority. The problem with some automated controls is that they can be applied at scale and therefore might have big ramifications in a short period of time if they fail. Consider how quickly control deficiencies might result in losses or impact, and whether there is a lag between that impact accumulating and being identified.
In your experience, what are some of the most common scenarios where a control is effectively designed but fails when operated? What are the most common root causes of that, in your opinion?
At the highest level, the most common failures here are where a control is designed effectively and needs to be operated manually by a person. It might look good on paper (or in an ERM system) but people either don’t know how to operate it, have forgotten about it, or it simply isn’t applied as intended. This can also relate to shared controls that apply across multiple departments – it might be well designed, and operating effectively in some areas, but not in others.
Imagine an exception report that highlights potential fraudulent transactions that requires human review. There might be a standard documented procedure on how to triage those entries, but you might have:
- someone who ignores the procedure and just waves them all through because they rarely find anything worth their attention
- someone who applies the wrong thresholds because they haven’t adopted the procedure
- someone who reviews the reports weekly when it is meant to be daily
Anecdotally, one of the root causes for this type of failure is change management. People change roles, and may not be properly trained in the control, or misunderstand its application.
From a controls perspective, there should be a stated tolerance for control performance/testing to assist in determining whether the control is effective. This should consider the cost/benefit of control deployment and maintenance.
We assume the reference to tolerance is a defined tolerance for a test, e.g. 30 records reviewed with 2 failures might be acceptable for that particular test, with the control still considered effective. This can provide more objectivity as to whether a control is effective, and ensure consistency between testers.
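A worked version of that logic, assuming a hypothetical rating scale built around the stated tolerance (your framework's own definitions should take precedence):

```python
def rate_control(sample_size: int, failures: int, tolerance: float = 0.10) -> str:
    """Rate a control from sample test results against a stated tolerance.
    The scale below is illustrative only."""
    failure_rate = failures / sample_size
    if failure_rate <= tolerance:
        return "effective"
    if failure_rate <= 2 * tolerance:
        return "partially effective"
    return "ineffective"

print(rate_control(30, 2))  # 6.7% failure rate -> 'effective'
print(rate_control(30, 4))  # 13.3% -> 'partially effective'
print(rate_control(30, 8))  # 26.7% -> 'ineffective'
```

A defined scale like this also supports the earlier point on partially effective ratings: two testers looking at the same results should reach the same conclusion.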
We also agree that value should be one of the key considerations when assessing the design effectiveness. If it costs way more than the risk it manages, it is overengineered!
A control can have an effect on managing multiple risks even when it was initially built/created with a focus on a single risk. The effectiveness of this control can then be applied to multiple risks as part of the risk assessment.
One control may be applicable to a number of risks in our portfolios. What is the best way to apply a weighting? For example, a control may be 100% applicable to one risk and 20% applicable to another.
What about where the same control is applied slightly differently to other risks and has a different level of effectiveness across the risks it is applied to? How is that managed/reflected in the control library to avoid duplicated controls?
Absolutely, one control can be applied to multiple risks. The question here is: are you testing the control so that its results can then be ‘trusted’ across all associated risks, or are you testing the control only in relation to a specific risk or subset of risks?
This brings us back to control objectives – sometimes a control can have multiple objectives. These need to be aligned with the risks the control is applied to. Controls may require multiple tests, which can test design and operating effectiveness separately, but also different objectives or components of the control. Different frameworks have different approaches, but you might have multiple tests that influence an overall rating for the control, to avoid duplicating the control itself in the library.
On weighting, you need to consider why you need weighting in the first place. If the control is effective, it might have a different effect on the risk in two business units, but I’m not sure there is benefit to weighting here. Each business unit would consider the control’s effectiveness when assessing the residual level of risk, but each assessment would be conducted independently.
Control ownership & accountability
Who is typically responsible for control assurance? Line 2 (Risk Management) or Line 3 (Internal Audit)?
How is a control review for the purpose of second line of defence distinct from an internal audit review?
There is a difference between who should be, and what is typical. Line 1 – the frontline business – should be responsible for ensuring the controls they own are operating effectively, which should include assurance. In less mature organisations it can fall to a Line 2 risk team, though this should be to support Line 1, not replace it, and certainly not to be responsible for ensuring it is operating. Line 2 should be providing constructive challenge to Line 1 on the controls themselves and their assurance processes, with specific controls assurance activities undertaken if there is need for more independent assurance.
If Line 2 are providing assurance, what should not be happening is Line 2 being responsible for implementing and improving controls. Line 1 should own any responses and actions that might arise from Line 2’s involvement in assurance.
Internal audit should be reviewing the effectiveness of the risk framework and its application across Line 1 and Line 2. Similar to Line 2, they can provide independent assurance over specific controls, but they do not own any corrective actions that may need to be taken.
Going back to the typical – in some organisations Line 1 do own the operation of their controls, but Line 2 conduct assurance reviews. Over time, this Line 2 work should move into Line 1.
If I don't own the controls (owned by tech), should I include these controls in my RCSA?
We would suggest yes. The reason is that those controls support the ongoing achievement of your department’s objectives. As part of that assessment, you might be able to include any results of testing or assurance from that internal provider. This highlights a trend that we are seeing, which is to consider RCSAs not just on a department basis but from an end-to-end process view.
Great session diving into controls. Despite 'the business' owning the risks and controls, I can't see the business taking on reviewing them; it will be up to me.
What if the umbrella handler is unskilled in its use and lets it go and it pokes you in the eye?
Love it! This extends the analogy we provided in the webinar where:
- Line 1 own risks and controls and hold up an umbrella to prevent rain (risk) from impacting on objectives
- Line 2 and Line 3 are there to check Line 1’s umbrella, NOT to hold a second and third umbrella
If they don’t know how to hold the umbrella – time for training and uplift. That might involve you holding the umbrella for a while as you teach them to use it; as a responsible employee, you want to make sure the organisation doesn’t get wet while they learn. Make sure you are clear that you are teaching them to hold the umbrella and that it isn’t your responsibility in the long term. This might be a culture change, so you will need to communicate it appropriately to other stakeholders.
There is always an issue when Line 2 is doing the risk and control assessment and Line 1 is expecting Line 2 to carry out the control testing. What is your view if the organisation is small?
Smaller organisations may not have the breadth of skills in the first line to perform formal controls testing and assurance. If Line 2’s involvement is limited to providing assurance, and does not extend to operating the controls, this can be a reasonable compromise.
Reading between the lines, if the Line 2 risk team is also doing risk assessments, this could contribute to the perception that they own risks and controls. If this is the case, responsibilities need to be clearly articulated: Line 2 is facilitating risk assessments, not owning them.
Any advice on how to manage the conflict between control owner and risk owner, especially when system controls are designed by the business, e.g. by providing business requirements to IT to build a system control that IT then owns, but IT will not have visibility of the business need unless told?
At first glance, it sounds like a service level agreement (whether formal or informal) might assist. There may be some additional layers to consider here, such as cost of the control / budget considerations and how this is allocated, which may be addressed by some form of business case.
If there is conflict, I’d go back to basics – what is the risk that we are trying to manage, and what are the objectives of the control? If they are understood and agreed, this should go a long way to addressing conflict.
Other questions
I'd be interested to hear your thoughts regarding how 'Resilience' is 'quantified' in order for it to be tracked & reported on.
If we look at what regulators are looking for related to operational resilience (particularly in the financial services sector), they are looking for an assessment of whether the organisation is able to maintain operations within defined tolerance if it were disrupted. This is typically assessed using scenarios, which can be ‘quantified’ as a percentage of operations that are considered resilient to disruption. There are other measures – something that may be a topic to explore in and of itself!
Please consider talking about controls assurance within the context of BCBS 239 compliance (Basel Committee on Banking Supervision – Principles for effective risk data aggregation and risk reporting) – there are strong cross-overs here. Also, given that 239 compliance was only mandatory for large FS firms, what is your view on firms voluntarily adopting it to enhance their controls and risk universe?
Some of the content we discussed in the webinar aligns with the principles of BCBS 239, such as taxonomies that enable aggregation, and reporting on assurance and characteristics of controls. Timeliness, accuracy and integrity, completeness, comprehensiveness, clarity and usefulness – it’s not hard to see how these can be supported by controls assurance (you could say it is required). Aggregate reports that show the percentage of controls tested, their effectiveness, and their breakdown across taxonomies or departments can give executives insight into effectiveness.
On voluntarily adopting it – given BCBS 239 is principles-based, it seems like good practice for other entities to adopt it on a proportional basis.
What resources would you recommend to read to learn more about controls assurance?
To continue your learning on controls assurance, we have a risk control frameworks eBook, and a Controls design and assurance course as part of Protecht Academy. You can purchase the course individually online. If you are looking to upskill a team, contact us at academy@protechtgroup.com to discuss our subscription packages.
Do you have anything on Board Assurance Frameworks?
We have two courses in Protecht Academy which may support your needs. We have a Risk management for boards course, which looks at the internal control framework from the perspective of the board. This can be supported by our Controls design and assurance course, which covers the practical implementation in the organisation.
Conclusions and next steps for your organisation
If you missed this webinar live, Protecht’s Chief Research & Content Officer, David Tattam, and Research & Content Lead, Michael Howell, explored how you can build a robust control assurance program and bring it to life. You can view it on demand here: