Quick Notes - DOMAIN 3: AWS Certified AI Practitioner
- Aman Bansal
- Nov 9
- 5 min read
Updated: Nov 12
If you are prepping for the AWS Certified AI Practitioner exam (https://aws.amazon.com/certification/certified-ai-practitioner/), these notes should cover the fundamentals you need for the exam.
Domain 3: Responsible Artificial Intelligence Practices
As you develop your AI system, whether it is a traditional or generative AI application, it is important to incorporate responsible AI.
Responsible AI
Responsible AI is the standard of upholding responsible practices and mitigating potential risks and negative outcomes of an AI application.
Bias and Variance: The number one problem that developers face in AI applications is accuracy. Both traditional and generative AI applications are powered by models that are trained on datasets. Bias is error from overly simplistic assumptions, which causes a model to underfit its training data; variance is a model's oversensitivity to the training data, which causes it to overfit and generalize poorly.

Note: Regularization is an optimization technique that you can use to reduce overfitting. Increasing the regularization parameter decreases model complexity.
The bias-variance tradeoff is the balance you strike when optimizing a model: reducing bias tends to increase variance and vice versa, so you tune the model until both sources of error are acceptably low.
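As a minimal sketch of the regularization idea above (plain NumPy, not any AWS API): closed-form ridge regression shows how increasing the regularization parameter shrinks the learned weights, reducing model complexity.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam*I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([3.0, -2.0, 1.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=100)

w_small = ridge_fit(X, y, lam=0.01)   # weak regularization: larger weights
w_large = ridge_fit(X, y, lam=100.0)  # strong regularization: shrunken weights

print(np.linalg.norm(w_small) > np.linalg.norm(w_large))  # True
```

The shrunken weights of the strongly regularized model make it less able to chase noise in the training set, which is exactly how regularization trades a little bias for lower variance.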
Core dimensions of responsible AI

Fairness means AI systems promote inclusion, prevent discrimination, uphold responsible values and legal norms, and build trust with society.
Explainability refers to the ability of an AI model to clearly explain or provide justification for its internal mechanisms and decisions so that it is understandable to humans.
Privacy and security in responsible AI refers to protecting data from theft and exposure.
Transparency communicates information about an AI system so stakeholders can make informed choices about their use of the system. Some of this information includes development processes, system capabilities, and limitations.
Veracity and robustness in AI refers to the mechanisms that ensure an AI system operates reliably, even with unexpected situations, uncertainty, and errors.
Governance is a set of processes that are used to define, implement, and enforce responsible AI practices within an organization.
Safety in responsible AI refers to the development of algorithms, models, and systems in such a way that they are responsible, safe, and beneficial for individuals and society as a whole.
Controllability in responsible AI refers to the ability to monitor and guide an AI system's behavior to align with human values and intent. It involves developing architectures that are controllable, so that any unintended issues can be managed and addressed.
AWS Services for Responsible AI
Amazon SageMaker AI is a fully managed ML service. With SageMaker AI, data scientists and developers can quickly and confidently build, train, and deploy ML models into a production-ready hosted environment.
Amazon Bedrock is a fully managed service that makes available high-performing FMs from leading AI startups and Amazon for your use through a unified API. You can choose from a wide range of FMs to find the model that is best suited for your use case.
Foundation model evaluation: Amazon offers model evaluation on Amazon Bedrock and Amazon SageMaker AI Clarify.
Amazon Bedrock offers a choice of automatic evaluation and human evaluation.
SageMaker AI Clarify supports FM evaluation. You can automatically evaluate FMs for your generative AI use case with metrics such as accuracy, robustness, and toxicity to support your responsible AI initiative.
Safeguards for generative AI
With Guardrails for Amazon Bedrock, you can implement safeguards for your generative AI applications based on your use cases and responsible AI policies. Guardrails helps control the interaction between users and FMs by filtering undesirable and harmful content, redacting personally identifiable information (PII), and enhancing content safety and privacy in generative AI applications.
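As a sketch of how a guardrail is attached at inference time: the snippet below only builds the request parameters for the Bedrock Converse API; the model ID, guardrail ID, and version are placeholders, and the parameter names follow the boto3 Converse API as I understand it, so check the SDK docs for your version.

```python
def build_converse_request(model_id, user_text, guardrail_id, guardrail_version):
    """Build parameters for a bedrock-runtime Converse call with a guardrail.

    All identifiers passed in are placeholders, not real resources.
    """
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": user_text}]}],
        # The guardrail filters both the user prompt and the model's response.
        "guardrailConfig": {
            "guardrailIdentifier": guardrail_id,
            "guardrailVersion": guardrail_version,
        },
    }

params = build_converse_request(
    "anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    "Tell me about your refund policy.",
    "gr-example123",  # placeholder guardrail ID
    "1",
)
# In a real application you would pass these to:
# boto3.client("bedrock-runtime").converse(**params)
```

Keeping guardrail configuration out of the prompt itself means the same responsible AI policy can be applied uniformly across every model the application calls.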
AWS services that help:
SageMaker AI Clarify helps identify potential bias in machine learning models and datasets without the need for extensive coding. You specify input features, such as gender or age, and SageMaker AI Clarify runs an analysis job to detect potential bias in those features.
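To make this concrete: one of the pre-training bias metrics Clarify reports is the difference in proportions of labels (DPL) between a favored group and everyone else. A plain-Python sketch on toy data (this illustrates the metric only, not the Clarify API):

```python
def dpl(labels, facet, favored):
    """Difference in positive label proportions between the favored
    facet value and all other values (a simple pre-training bias metric)."""
    fav = [l for l, f in zip(labels, facet) if f == favored]
    dis = [l for l, f in zip(labels, facet) if f != favored]
    return sum(fav) / len(fav) - sum(dis) / len(dis)

# Toy dataset: 1 = loan approved, facet = applicant group.
labels = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
facet  = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
print(dpl(labels, facet, favored="a"))  # ~0.6: approvals skew toward group a
```

A DPL near zero suggests the labels are balanced across groups; a large positive value, as here, flags that the dataset itself may teach a model to discriminate.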
SageMaker AI Clarify is integrated with Amazon SageMaker AI Experiments to provide scores detailing which features contributed the most to your model prediction on a particular input for tabular, natural language processing (NLP), and computer vision models.
You can use Amazon SageMaker Data Wrangler to balance your data in cases of any imbalances.
Amazon SageMaker Model Monitor monitors the quality of SageMaker AI machine learning models deployed in production.
Amazon Augmented AI (Amazon A2I) is a service that helps build the workflows required for human review of ML predictions.
Amazon SageMaker Role Manager: With SageMaker Role Manager, administrators can define minimum permissions in minutes.
Amazon SageMaker Model Cards: With SageMaker Model Cards, you can capture, retrieve, and share essential model information, such as intended uses, risk ratings, and training details, from conception to deployment.
Amazon SageMaker Model Dashboard: With SageMaker Model Dashboard, you can keep your team informed on model behavior in production, all in one place.
AI Service Cards are a form of responsible AI documentation that provides a single place to find information on the intended use cases and limitations, responsible AI design choices, and deployment and performance optimization best practices for AWS AI services.
Transparency and explainability:
Transparency helps to understand HOW a model makes decisions. This helps to provide accountability and builds trust in the AI system. Transparency also makes auditing a system easier.
Explainability helps to understand WHY the model made the decision that it made. It gives insight into the limitations of a model. This helps developers with debugging and troubleshooting the model. It also allows users to make informed decisions on how to use the model.
Models that lack transparency and explainability are often referred to as black box models.
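One widely used model-agnostic way to peek inside a black box is permutation feature importance: shuffle one feature and measure how much the model's error grows. A minimal NumPy sketch on a toy predictor (not any specific AWS tool):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "black box": y depends strongly on feature 0 and not at all on feature 1.
X = rng.normal(size=(200, 2))
y = 5.0 * X[:, 0] + rng.normal(scale=0.1, size=200)
model = lambda X: 5.0 * X[:, 0]  # stand-in for any trained predictor

def permutation_importance(model, X, y, feature, n_repeats=10):
    """Increase in mean squared error when one feature column is shuffled."""
    base = np.mean((model(X) - y) ** 2)
    scores = []
    for _ in range(n_repeats):
        Xp = X.copy()
        Xp[:, feature] = rng.permutation(Xp[:, feature])
        scores.append(np.mean((model(Xp) - y) ** 2) - base)
    return np.mean(scores)

imp0 = permutation_importance(model, X, y, feature=0)
imp1 = permutation_importance(model, X, y, feature=1)
print(imp0 > imp1)  # True: feature 0 matters far more than feature 1
```

Because the technique only needs predictions, not model internals, it works on any black box; this is the same spirit in which SageMaker AI Clarify attributes predictions to input features.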
AWS tools for transparency and explainability:
To help with transparency, Amazon offers AWS AI Service Cards and Amazon SageMaker Model Cards. The difference between them is that with AI Service Cards, Amazon provides transparent documentation on Amazon services that help you build your AI services. With SageMaker Model Cards, you can catalog and provide documentation on models that you create or develop yourself.
For explainability, use SageMaker AI Clarify.
Amazon SageMaker Autopilot uses tools provided by SageMaker AI Clarify to help provide insights into how ML models make predictions.
Interpretability is a feature of model transparency. Interpretability is the degree to which a human can understand the cause of a decision.
Human-centered design for explainable AI:
Design for amplified decision-making: The principle of design for amplified decision-making supports decision-makers in high-stakes situations.
Design for unbiased decision-making: The design for unbiased decision-making principle and practices aim to ensure that the design of decision-making processes, systems, and tools is free from biases that can influence the outcomes.
Design for human and AI learning: Design for human and AI learning is a process that aims to create learning environments and tools that are beneficial and effective for both humans and AI.
Reinforcement learning from human feedback:
RLHF is an ML technique that uses human feedback to optimize ML models so they self-learn more efficiently. RLHF incorporates human feedback into the reward function, so the ML model can perform tasks aligned with human goals, wants, and needs.
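The core idea of folding human feedback into a reward function can be sketched with a tiny Bradley-Terry-style reward model: humans pick the preferred of two responses, and we fit a linear reward so preferred responses score higher. This is toy NumPy on synthetic features, not a production RLHF pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Each response is a feature vector; humans labeled (winner, loser) pairs.
winners = rng.normal(loc=1.0, size=(50, 4))   # responses humans preferred
losers  = rng.normal(loc=-1.0, size=(50, 4))  # responses humans rejected

w = np.zeros(4)  # linear reward model: reward(x) = w @ x
for _ in range(200):
    # Bradley-Terry: P(winner beats loser) = sigmoid(reward difference).
    diff = (winners - losers) @ w
    p = 1.0 / (1.0 + np.exp(-diff))
    # Gradient ascent on the log-likelihood of the human preferences.
    w += 0.1 * ((1.0 - p)[:, None] * (winners - losers)).mean(axis=0)

# The learned reward now ranks preferred responses higher.
acc = np.mean(winners @ w > losers @ w)
print(acc)  # close to 1.0 on this cleanly separable toy data
```

In a real RLHF setup this learned reward would then drive a reinforcement learning step that fine-tunes the model itself; the point here is only how pairwise human judgments become a trainable reward signal.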
Example: Amazon SageMaker Ground Truth offers the most comprehensive set of human-in-the-loop capabilities for incorporating human feedback across the ML lifecycle to improve model accuracy and relevancy.
Reference: AWS SkillBuilder https://skillbuilder.aws/learn

