Responsible AI

https://scaledagileframework.com/responsible-ai/

Responsible AI practices aim to address common risks of AI systems, including:

  • Bias
  • Hallucination
  • Data leaks

Aspects of Responsible AI

  • Trustworthy
    • Privacy
    • Security
    • Resilience
    • Reliability
    • Accuracy
  • Explainable
    • Transparency
    • Interpretability
    • Accountability
  • Human-centric
    • Safety
    • Fairness
    • Ethics
    • Inclusiveness
    • Sustainability
    • Compliance

Trustworthy means AI systems work as designed, are secure, and protect private information. When AI is trustworthy, it has the following attributes:

  • Privacy – making sure customer and company confidential information doesn’t leak out, especially when using third-party AI tools (see the redaction sketch after this list)

  • Security – preventing hackers from compromising systems by taking advantage of AI’s weaknesses

  • Resilience – handling attacks and fixing itself quickly if a part of the AI stops working

  • Reliability – working as it should, being available when needed, performing quickly, and doing what it was created to do

  • Accuracy – giving correct information and results

When an AI system has all these qualities, companies and their customers are more likely to trust and rely on it.
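
As a concrete illustration of the privacy attribute, the sketch below shows one way to redact obvious confidential details from a prompt before it is sent to a third-party AI tool. It is a minimal Python example with hypothetical regex patterns and a placeholder send_to_third_party_ai function, not a complete data-loss-prevention solution.

    import re

    # Hypothetical patterns for confidential data; a real deployment would rely on
    # a vetted data-loss-prevention tool and organization-specific rules.
    REDACTION_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def redact(prompt: str) -> str:
        """Replace likely confidential values with labeled placeholders."""
        for label, pattern in REDACTION_PATTERNS.items():
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
        return prompt

    def ask_third_party_ai(prompt: str) -> str:
        # send_to_third_party_ai is a placeholder for whatever client the team uses.
        return send_to_third_party_ai(redact(prompt))

    print(redact("Customer jane.doe@example.com paid with card 4111 1111 1111 1111"))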


Explainable AI makes AI systems open and clear so people can understand how they arrive at the results they produce. This involves:

  • Transparency – Making it easy to understand how AI systems work and how they produce results comparable to a human’s output, for example by providing clear documentation

  • Interpretability – Ensuring that the way AI makes decisions is easy for humans to understand (see the decision-tree sketch below)

  • Accountability – Holding organizations responsible for AI behavior and outcomes

When companies make their AI explainable, mistakes can be found and fixed faster, and customers are more likely to trust the AI.
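
To make interpretability concrete, the sketch below (Python, assuming scikit-learn is installed, with made-up loan-screening data) trains a deliberately shallow decision tree and prints its learned rules so a human can read exactly how each prediction is reached. It illustrates an interpretable-by-design model, not a recommendation of any particular algorithm.

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Made-up applicants: [income_in_thousands, years_at_job, open_debts]
    X = [[30, 1, 2], [80, 5, 0], [45, 3, 1], [120, 10, 0], [25, 0, 3], [60, 4, 1]]
    y = [0, 1, 0, 1, 0, 1]  # 0 = declined, 1 = approved

    # A shallow tree is interpretable by design: its full logic fits on one screen.
    model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # export_text renders the learned rules as readable if/else conditions,
    # which supports both transparency (documentation) and interpretability.
    print(export_text(model, feature_names=["income_k", "years_at_job", "open_debts"]))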


Human-centric AI systems should always be safe and avoid harming people, property, or the environment. They should also respect the rules and values of society. Here are the key attributes of human-centric AI:

  • Safety – Ensuring AI doesn’t pose any dangers to humans

  • Fairness – Ensuring AI systems treat everyone equally, without bias (see the parity-check sketch after this list)

  • Ethics – Ensuring AI systems follow the moral principles and values that society holds

  • Inclusiveness – Ensuring AI considers the wide range of people who might use it

  • Sustainability – Ensuring AI does not harm the environment

  • Compliance – Ensuring AI follows existing laws, regulations, and standards
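
To give the fairness attribute a concrete shape, the sketch below (plain Python, with hypothetical group labels and model decisions) runs a simple check: it compares approval rates across groups and flags the model when the gap exceeds a chosen threshold. Real fairness audits use richer metrics and domain knowledge; this only shows the basic structure.

    from collections import defaultdict

    def approval_rates(decisions, groups):
        """Share of positive (1 = approved) decisions per group."""
        totals, positives = defaultdict(int), defaultdict(int)
        for decision, group in zip(decisions, groups):
            totals[group] += 1
            positives[group] += decision
        return {g: positives[g] / totals[g] for g in totals}

    def fails_parity_check(decisions, groups, max_gap=0.10):
        """Flag the model if approval rates differ by more than max_gap."""
        rates = approval_rates(decisions, groups)
        return max(rates.values()) - min(rates.values()) > max_gap

    # Hypothetical model outputs and the demographic group of each applicant.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    print(approval_rates(decisions, groups))      # {'A': 0.6, 'B': 0.4}
    print(fails_parity_check(decisions, groups))  # True: the 0.2 gap exceeds 0.10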

Resources