top of page

Quick Notes - DOMAIN 8: AWS Certified AI Practitioner

  • Writer: Aman Bansal
    Aman Bansal
  • Nov 10
  • 3 min read

Updated: Nov 12

If you are prepping for the AWS Certified AI Practitioner https://aws.amazon.com/certification/certified-ai-practitioner/, these notes should be enough to get the fundamentals for the exam.


Domain 8: Essentials of Prompt Engineering


Understand Prompts: By interacting with a model through a series of questions, statements, or instructions, you can adjust model output behavior based on the specific context of the output that you want to achieve. Using effective prompt strategies can offer you the following benefits:


  • Enhance the model's capabilities and bolster its safety measures.

  • Equip the model with domain-specific knowledge and external tools without modifying its parameters or undergoing fine-tuning.

  • Interact with language models to fully comprehend their potential.

  • Obtain higher-quality outputs by providing higher-quality inputs.


Negative prompting: is used to guide the model away from producing certain types of content or exhibiting specific behaviors. For instance, in a text generation model, negative prompts could be used to prevent the model from producing hate speech, explicit content, or biased language. By specifying what the model should avoid, negative prompting helps steer the output towards more appropriate content.


Modifying and Refining Prompts to get better results:

Inference parameters: When interacting with FMs, you can often configure inference parameters to limit or influence the model response. The parameters available to you will vary based on the model that you are using. Inference parameters fit into a range of categories, with the most common being randomness and diversity and length.


  • Randomness and diversity:

    • Temperature: This parameter controls the randomness or creativity of the model's output. A higher temperature makes the output more diverse and unpredictable, and a lower temperature makes it more focused and predictable. Temperature is set between 0 and 1.

    • Top P: Top p is a setting that controls the diversity of the text by limiting the number of words that the model can choose from based on their probabilities. Top p is also set on a scale from 0 to 1.

    • Top k limits the number of words to the top k most probable words, regardless of their percent probabilities. For instance, if top k is set to 50, the model will only consider the 50 most likely words for the next word in the sequence, even if those 50 words only make up a small portion of the total probability distribution.


Prompt Engineering Techniques: Using these prompt engineering techniques can help you use generative models most effectively for your unique objectives.


  • Zero-shot prompting is a technique where a user presents a task to a generative model without providing any examples or explicit training for that specific task.

  • Few-shot prompting is a technique that involves providing a language model with contextual examples to guide its understanding and expected output for a specific task.

  • Chain-of-thought (CoT) prompting is a technique that divides intricate reasoning tasks into smaller, intermediary steps. This approach can be employed using either zero-shot or few-shot prompting techniques.


Prompt Misuses and Risks


  • Poisoning refers to the intentional introduction of malicious or biased data into the training dataset of a model. This can lead to the model producing biased, offensive, or harmful outputs, either intentionally or unintentionally.

  • Hijacking and prompt injection refer to the technique of influencing the outputs of generative models by embedding specific instructions within the prompts themselves.

  • Exposure refers to the risk of exposing sensitive or confidential information to a generative model during training or inference.

  • Prompt leaking refers to the unintentional disclosure or leakage of the prompts or inputs (regardless of whether these are protected data or not) used within a model. But it can expose other data used by the model, which can reveal information of how the model works and this can be used against it.

  • Jailbreaking refers to the practice of modifying or circumventing the constraints and safety measures implemented in a generative model or AI assistant to gain unauthorized access or functionality.


References:

1 Comment

Rated 0 out of 5 stars.
No ratings yet

Add a rating
Guest
Nov 12
Rated 5 out of 5 stars.

Great Work!

Like

Stay informed with the latest trends and insights in cybersecurity. 

    © 2023 by BansalOnSecurity. All rights reserved.

    bottom of page