Quick Notes - DOMAIN 4/5: AWS Certified AI Practitioner
- Aman Bansal
- Nov 10
Updated: Nov 12
If you are prepping for the AWS Certified AI Practitioner exam (https://aws.amazon.com/certification/certified-ai-practitioner/), these notes should cover the fundamentals you need for the exam.
Domain 4: Developing Machine Learning Solutions
The machine learning (ML) lifecycle is the end-to-end process of developing, deploying, and maintaining ML models: framing the business problem, collecting and preparing data, training and evaluating a model, deploying it, and monitoring it in production.
With SageMaker JumpStart, you can deploy, fine-tune, and evaluate pre-trained models from popular model hubs.
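For example, deploying and querying a JumpStart model with the SageMaker Python SDK looks roughly like this (a minimal sketch; the model ID and instance type are illustrative, and an AWS account with SageMaker permissions is assumed):

```python
# Minimal sketch: deploy a pre-trained JumpStart model with the SageMaker Python SDK.
# The model_id and instance_type below are illustrative placeholders.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="huggingface-llm-falcon-7b-instruct-bf16")
predictor = model.deploy(initial_instance_count=1, instance_type="ml.g5.2xlarge")

response = predictor.predict({"inputs": "What is the ML lifecycle?"})
print(response)

predictor.delete_endpoint()  # clean up the endpoint to avoid ongoing charges
```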
SageMaker Canvas lets you build ML models and generate predictions without writing any code.
Bias and variance: bias is error from overly simplistic assumptions (underfitting), while variance is error from over-sensitivity to the training data (overfitting); a good model balances the two.
Confusion matrix: a confusion matrix tabulates a classifier's predictions against the actual labels (true/false positives and negatives), showing exactly where and how a model gets things wrong.
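As a quick illustration with scikit-learn (the labels here are made up):

```python
# Build a confusion matrix from actual vs. predicted labels with scikit-learn.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # actual labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # model predictions

cm = confusion_matrix(y_true, y_pred)
print(cm)
# Rows are actual classes, columns are predicted classes:
# [[3 1]   <- 3 true negatives, 1 false positive
#  [1 3]]  <- 1 false negative, 3 true positives
```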
MLOps: Machine learning operations (MLOps) is the practice of operationalizing and streamlining the end-to-end ML lifecycle, from model development and deployment through monitoring and maintenance, so that models are not just developed but also deployed, monitored, and retrained systematically and repeatably.
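The monitor-and-retrain part of that loop, sketched without any particular MLOps platform (the synthetic data and the accuracy threshold are illustrative stand-ins):

```python
# Framework-agnostic sketch of an MLOps monitor-and-retrain loop.
# The synthetic data and the accuracy threshold are illustrative stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

ACCURACY_THRESHOLD = 0.85  # assumed service-level target for the deployed model

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 4)), rng.integers(0, 2, 200)
X_fresh, y_fresh = rng.normal(size=(50, 4)), rng.integers(0, 2, 50)

model = LogisticRegression().fit(X_train, y_train)

# Monitoring step: score the deployed model on freshly labeled production data.
acc = accuracy_score(y_fresh, model.predict(X_fresh))
print(f"accuracy on fresh data: {acc:.2f}")

# Maintenance step: retrain on the combined data if performance has degraded.
if acc < ACCURACY_THRESHOLD:
    model = LogisticRegression().fit(
        np.vstack([X_train, X_fresh]), np.concatenate([y_train, y_fresh])
    )
    print("model retrained on combined data")
```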
Domain 5: Developing Generative AI Solutions
GenAI lifecycle: define a business use case, select a foundation model, improve performance (prompt engineering, RAG, or fine-tuning), evaluate results, and deploy the application.
Prompt engineering techniques (the first three are illustrated in the sketch after this list):
Zero-shot prompting
Few-shot prompting
Chain-of-thought (CoT) prompting
Self-consistency
Tree of thoughts (ToT)
Retrieval Augmented Generation (RAG)
Automatic Reasoning and Tool-use (ART)
ReAct prompting
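To make the first three techniques concrete, here is roughly what such prompts look like (the tasks and wording are invented for illustration):

```python
# Illustrative prompt strings for three of the techniques above.

# Zero-shot: ask directly, with no examples.
zero_shot = (
    "Classify the sentiment of this review as positive or negative: "
    "'The battery dies in an hour.'"
)

# Few-shot: prepend a handful of worked examples before the real question.
few_shot = """Review: 'Loved it, works perfectly.' -> positive
Review: 'Broke after two days.' -> negative
Review: 'The battery dies in an hour.' ->"""

# Chain-of-thought: ask the model to reason step by step before answering.
chain_of_thought = (
    "A store sells pens in packs of 12. If I need 30 pens, how many packs "
    "should I buy? Think step by step, then give the final answer."
)
```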
RAG is a natural language processing (NLP) technique that combines retrieval systems with generative language models: relevant documents are retrieved and supplied to the model as context, producing grounded, high-quality, informative outputs.
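A bare-bones sketch of the idea (the embed() helper is a stand-in; a real system would use an embedding model and a vector store):

```python
# Bare-bones RAG sketch: retrieve the most relevant document, then prompt with it.
# embed() is a placeholder; real systems call an embedding model and a vector store.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding; a real system would call an embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=16)

documents = [
    "Our refund window is 30 days from purchase.",
    "Support is available 24/7 via chat.",
]
doc_vectors = np.array([embed(d) for d in documents])

question = "How long do I have to return an item?"
q = embed(question)

# Cosine similarity between the question and each document; keep the best match.
sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
context = documents[int(np.argmax(sims))]

# The retrieved context is injected into the prompt sent to the generative model.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```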
Fine-tuning is the process of taking a pre-trained language model and further training it on a task- or domain-specific dataset, adapting its knowledge and capabilities to the requirements of the business use case.
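A minimal sketch of task-specific fine-tuning using the Hugging Face Transformers Trainer (the base model, the two-example sentiment dataset, and the hyperparameters are all illustrative):

```python
# Minimal fine-tuning sketch with Hugging Face Transformers.
# The base model, two-example dataset, and hyperparameters are illustrative.
import torch
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

texts = ["Great product, works perfectly.", "Terrible support, broke in a day."]
labels = [1, 0]  # 1 = positive, 0 = negative
enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")

class TinyDataset(torch.utils.data.Dataset):
    """Wraps the tokenized examples so the Trainer can iterate over them."""
    def __len__(self):
        return len(labels)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in enc.items()}
        item["labels"] = torch.tensor(labels[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetune-out", num_train_epochs=1),
    train_dataset=TinyDataset(),
)
trainer.train()  # further trains the pre-trained weights on the task data
```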
Instruction fine-tuning uses examples of how the model should respond to a specific instruction. Prompt tuning is a type of instruction fine-tuning.
Reinforcement learning from human feedback (RLHF) further trains a model on human preference data, producing a model that is better aligned with human preferences.
Creating a foundation model from scratch requires massive datasets, long training runs, and a significant compute budget, which is why most teams adapt an existing pre-trained model instead.
ROUGE (Recall-Oriented Understudy for Gisting Evaluation) is a widely used metric for evaluating text summarization systems. It measures the overlap between generated summaries and reference summaries, capturing how relevant and comprehensive the generated summaries are.
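For instance, using the rouge-score package (the reference and generated summaries are made up):

```python
# Score a generated summary against a reference with the rouge-score package.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)

reference = "SageMaker lets developers build, train, and deploy ML models."
generated = "SageMaker helps developers build and deploy machine learning models."

scores = scorer.score(reference, generated)
for name, s in scores.items():
    print(f"{name}: precision={s.precision:.2f} recall={s.recall:.2f} f1={s.fmeasure:.2f}")
```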
Reference: AWS Skill Builder (https://skillbuilder.aws/learn)