
São Paulo/SP – June 29, 2023 – TRUST AI and Adaptation to the Artificial Intelligence Risk Management Framework (AI RMF) – Points of Attention and Challenges

* Rodrigo Santiago

AI RMF Overview 

The AI RMF (Artificial Intelligence Risk Management Framework), from NIST (the National Institute of Standards and Technology), is a set of guidelines and best practices for managing risks related to the use of artificial intelligence (AI).

The purpose of AI RMF is to help organizations identify, assess, and mitigate AI-related risks in their processes, systems, and applications. It provides a framework for managing and minimizing the privacy, security, ethics, and reliability risks associated with implementing AI systems.

Although most risk management processes direct their efforts toward negative impacts, NIST's AI RMF offers approaches both to minimize the anticipated negative impacts of AI systems and to identify opportunities to maximize positive impacts. Effectively managing the risk of potential harm can lead to more trustworthy AI systems and deliver benefits to people (individuals, communities, and society), organizations, and ecosystems.

Risk management can allow AI creators and users to understand impacts and gain visibility into the limitations and uncertainties inherent in their models and systems, which in turn can improve overall performance and reliability and the likelihood that AI technologies will be used in a beneficial way.

The AI RMF methodology is designed to address new risks as they emerge. This dynamism is particularly important when impacts are not easily predictable and applications are still evolving.

While some risks and benefits of AI are well known, negative impacts and the degree of harm can be difficult to assess. Below are examples of potential harms that may be related to the use of AI systems.

Impacts on People

  • Damage to image, rights, physical or psychological integrity, and financial harm;
  • Harm to groups/communities, such as targeted acts of discrimination;
  • Impact on democratic participation or access to basic services (health and education).

Impacts on Organizations 

  • Impact on business operations;
  • Information leaks and financial impacts;
  • Damage to the company's reputation and image in the market.

Impacts on the Ecosystem 

  • Impact on supply chains, financial systems and globally interconnected systems;
  • Impacts on the environment, on the planet's natural resources.

AI risk management efforts must take into account that humans may assume AI systems work well in all contexts.

For example, whether they are correct or not, AI systems are often seen as more objective than humans or as offering greater capabilities than general-purpose software.

Challenges for AI risk management 

Risk Measurement 

AI risks or failures that are not well identified or not adequately understood are difficult to measure quantitatively or qualitatively. Some risk measurement challenges include:

  • Tracking emergent risks;
  • Availability of reliable metrics;
  • Risk in real-world contexts;
  • Human baselines for comparison.
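To make the measurement discussion concrete, the sketch below scores a risk as the product of ordinal likelihood and impact levels, a common fallback when reliable quantitative metrics are unavailable. This is an illustrative convention only: the AI RMF does not prescribe a scoring scale, and the class and level names here are assumptions.

```python
from dataclasses import dataclass

# Qualitative levels mapped to ordinal scores (an assumption for this
# sketch; the AI RMF does not prescribe a specific scale).
LEVELS = {"low": 1, "medium": 2, "high": 3}

@dataclass
class AIRisk:
    name: str
    likelihood: str  # "low" | "medium" | "high"
    impact: str      # "low" | "medium" | "high"

    def score(self) -> int:
        # Classic likelihood x impact product: a crude proxy for risk
        # magnitude when precise measurement is not feasible.
        return LEVELS[self.likelihood] * LEVELS[self.impact]

risk = AIRisk("training-data leakage", likelihood="medium", impact="high")
print(risk.name, risk.score())  # -> training-data leakage 6
```

Such ordinal scores are deliberately coarse; their value is in forcing explicit, comparable judgments rather than in the numbers themselves.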

Risk Tolerance

Risk tolerance is the level of risk acceptable to organizations or society and is specific to each application and use case.

Although the AI RMF can be used as a guide for risk prioritization, it does not prescribe risk tolerance. Risk tolerance refers to the willingness of an organization or AI actor to bear risk in order to maintain the availability of its activities and achieve its objectives. Risk tolerance can be influenced by legal or regulatory requirements.

Risk Prioritization

Attempting to eliminate risk entirely can become unproductive in practice, because not all incidents and failures can be eliminated.

Unrealistic expectations about risk can lead organizations to allocate resources in ways that make risk triage inefficient or impractical, or that simply waste resources. On this point, the AI RMF indicates that adopting a risk management culture can help organizations distinguish between risks and prioritize them more effectively.
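As an illustration of such prioritization (the risk names, scores, and tolerance threshold below are assumptions for this sketch, not part of the framework), risks can be ranked so that those exceeding the organization's tolerance are treated first:

```python
# Illustrative prioritization: rank risks by score, descending, so that
# limited resources go to the highest risks first. Scores and the
# tolerance threshold share an arbitrary ordinal scale (an assumption
# for this sketch; the AI RMF does not prescribe one).
risks = [
    {"name": "model drift", "score": 4},
    {"name": "training-data privacy leak", "score": 9},
    {"name": "prompt injection", "score": 6},
]

TOLERANCE = 4  # scores above this threshold require active treatment

ranked = sorted(risks, key=lambda r: r["score"], reverse=True)
to_treat = [r["name"] for r in ranked if r["score"] > TOLERANCE]
print(to_treat)  # -> ['training-data privacy leak', 'prompt injection']
```

Risks at or below the threshold are not ignored; they are accepted and monitored rather than actively treated, which is exactly the distinction a risk management culture helps make explicit.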

Integration of organizational governance and risk management

AI risk management must be integrated and incorporated into broader enterprise risk management strategies and processes. Addressing AI risks alongside other critical risks such as cybersecurity and privacy will yield a more integrated outcome and organizational efficiencies.

The AI RMF can be used together with good practices and other frameworks to manage AI system risk or broader enterprise risk. Examples of such risks include:

  • Privacy concerns related to the use of data to train AI systems;
  • Security concerns related to the confidentiality, integrity and availability of the system and its input and output data;
  • General security of software and hardware fundamental to AI systems;
  • Energy and environmental implications associated with resource-intensive computing demands.

AI RMF CORE 

The core of the AI RMF (AI RMF Core) provides outcomes and actions that enable dialogue, understanding of the activities needed to manage AI risks, and responsible, trustworthy development. The Core consists of four functions: GOVERN, MAP, MEASURE, and MANAGE.

Each of these functions is divided into categories and subcategories, which are in turn subdivided into specific actions and outcomes. It is important to emphasize that the actions do not constitute a checklist to be audited, nor are they necessarily an ordered set of steps; they are guidelines for achieving an acceptable level of risk management.
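The function/category hierarchy can be pictured as a simple nested structure. The four function names are those of the framework; the category descriptions below are paraphrased for illustration, not quoted from the framework text.

```python
# Sketch of the AI RMF Core hierarchy: functions contain categories,
# which in turn contain subcategories with specific outcomes. The
# category strings here are paraphrased summaries (an assumption of
# this sketch), not the framework's official wording.
AI_RMF_CORE = {
    "GOVERN": ["Policies, processes, and accountability structures",
               "A culture of risk management is cultivated"],
    "MAP":    ["Context and intended uses are established",
               "Risks and benefits are identified"],
    "MEASURE": ["Identified risks are assessed, analyzed, and tracked"],
    "MANAGE":  ["Risks are prioritized, treated, and monitored over time"],
}

for function, categories in AI_RMF_CORE.items():
    print(f"{function}: {len(categories)} example categories")
```

Modeling the Core this way also suggests how an organization might track its own coverage, attaching status or evidence to each category rather than treating the list as an audit checklist.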

Trust AI Solution

The Trust AI solution is a set of activities whose scope covers the entire AI lifecycle, from model design and development to deployment and continuous monitoring.

Trust AI's main objective is to establish a safe and reliable environment for the implementation and use of AI systems following the guidelines of the AI RMF (Artificial Intelligence Risk Management Framework) of the NIST. 

Considering the guidelines reinforced by the framework, SAFEWAY developed an approach divided into four activities, covering the main governance controls that must be achieved to use an AI system with a higher level of security.

The foci are:

Risk assessment:

Identification and assessment of risks associated with the implementation and use of AI systems. It involves analysis of potential threats, vulnerabilities and adverse impacts.

Governance and policies:

Establishment of clear policies, guidelines, and processes for the use and development of AI systems, ensuring accountability, transparency, and compliance with the applicable legal and ethical requirements within those processes.

Ethical development:

Implementation of good ethical practices in the development and use of AI systems, considering privacy and fairness to ensure user trust.

Monitoring and auditing:

Adoption of continuous monitoring, auditing and anomaly detection mechanisms to ensure compliance, early detection of problems and adaptation to changing operating conditions.
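As a minimal sketch of what continuous monitoring and anomaly detection can look like in practice (the metric, threshold, and function name are assumptions for illustration), a z-score check can flag observations of a model quality metric that deviate sharply from their history:

```python
import statistics

def flag_anomalies(values: list[float], threshold: float = 3.0) -> list[int]:
    """Return the indices of points more than `threshold` standard
    deviations from the mean -- a crude stand-in for production-grade
    model monitoring and drift detection."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # a perfectly flat series has no outliers by this rule
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Twenty stable accuracy readings followed by a sudden drop:
accuracy = [0.90] * 20 + [0.20]
print(flag_anomalies(accuracy))  # -> [20]
```

In a real deployment, such a check would feed an alerting pipeline and be complemented by audit logs, so that early detection translates into timely adaptation.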

Main benefits

  •  Trust and Acceptance: Trust AI aims to increase trust in AI technologies. By implementing practices that ensure security, transparency and ethics in the use of AI, organizations can earn the trust of users and the general public. This is critical to the adoption and acceptance of these technologies.
  • Responsibility and Accountability: Trust AI encourages organizations to be accountable for the development, deployment, and use of AI systems. This involves setting clear guidelines and adhering to ethical principles to ensure that AI systems are used responsibly.
  • Transparency: Transparency is a critical aspect of Trust AI. AI systems must be able to provide clear and understandable explanations of how they arrive at their decisions. This allows users to understand the decision-making process and identify potentially unwanted biases.
  • Privacy and security: Trust AI seeks to ensure the privacy and security of data used by AI systems. This involves implementing robust data protection measures and complying with relevant regulations such as the European Union's General Data Protection Regulation (GDPR).
  • Improved quality and performance: By following the Trust AI principles, organizations can improve the quality and performance of their AI systems. Transparency and accountability allow for a more accurate assessment of systems, identifying areas for improvement and adjusting the development process for better results.

Conclusion 

Combining the AI RMF with risk management and governance strategies plays a key role in the trustworthy use of artificial intelligence. As AI plays an increasingly significant role across various areas of society, it is essential to address its associated risks proactively and strategically.

Trust in AI depends on the ability to identify, understand and mitigate these risks, ensuring that AI systems are safe, reliable and ethical.

*Rodrigo Santiago is GRC manager at Safeway.

 

How can we help? 

SAFEWAY is an Information Security consulting company recognized by its clients for offering high-value solutions through projects that fully meet the needs of the business. In 15 years of experience, we have accumulated successful projects that have earned us credibility and prominence among our clients, many of which are among the 100 largest companies in Brazil.

Today, through 25 strategic partnerships with global manufacturers and our SOC, SAFEWAY is considered a one-stop shop for the best technology, process, and people solutions. SAFEWAY can help your organization implement an AI solution securely. For more information, contact our experts!