Prompting ChatGPT

We always write prompts for ChatGPT without understanding the underlying layer that governs the model's intended behaviour. What a majority of people ...

7 min read
Sponsored
Prompting ChatGPT
Table of Contents
We always write prompts for ChatGPT without understanding the underlying layer that governs the model's intended behaviour. What a majority of people do not know is that engineered prompting can help us to have optimal results from the AI model. Such techniques help us to get intended useful responses that are aligned with the user's intent.

Intended behaviour
Understanding intent-based prompting is not only going to allow users to get the most out of AI Models but can also promote the responsible and secure use of AI, guiding users of AI systems into using the systems without malicious intent within organisations, although some of the responsible and secure features are backend developed by the model providers are an extra layer of controls is need to protect users from harm that arises from an irresponsible use of AI models. Currently, there exists a vast majority of behavioural concerns in connection with the use of AI models, which puts its users at risk. The concerns are Misaligned objectives, Execution risk, and Harmful instructions. A misaligned objective can arise from a misinterpretation of a prompt due to a vague instruction that led the AI model to hallucinate. Another layer is the execution risk, in which the AI model didn't have a misaligned objective; however, its task execution process is not procedurally correct. Currently, there are a couple of solutions that assist users in limiting the execution risk, which involve the usage of domain-specific AI Agents with an intentionally governed behaviour through Agent protocols and guidelines such as Agent Skills. Harmful instructions are self-inflicted, and they may sometimes go beyond the control of the AI models' design, and can only be guaranteed by users' ethical and responsible use of the tools. The guardrails for organizations that adopt AI systems will need to develop clear policies that govern the responsible usage.

Understanding Prompt Instructions Authority

Prompting an AI model is basically giving an AI model a set of instructions to execute. Computers have been historically effective in receiving and executing instructions; how they execute those instructions is a consequence of systems engineering. For optimal results, a prompt, which is an instruction to the AI model, needs to be precise and clear without ambiguity.
It is important to understand that a model has different levels of instructions that are used to guide and regulate a model's intended behavior, which can be classified as follows: root (built-in instructions from the model provider that can not be overridden), system (rules that are set by the model provider and can be overriden), developer (rules set by the developer to govern their interaction with the AI model, this is normally defined within an instruction method when building AI Agents using the Model Context Protocol), user (Instructions that have been provided by a user to an AI model, most of us are familiar with this level of instruction which we generally call a prompt), and guidelines.

User Instructions are basically what we will be placing our focus on since this is the target instruction authority level we aim at optimizing. Instruction precedence is governed by the authority level to govern conflicting instructions.

Rules Confliction

When a particular instruction conflicts with another instruction, an AI model will use authority to identify which rules have to be followed based on which rule level can override which. For instance, if there is a conflict between guidelines and user preferences instructions, the user instructions will be upheld, and the guidelines will be overridden. It won't do that when user instructions conflict with developers' instructions and so on, with root rules having higher authority than the rest.

Instructions and Contextual Information

Have you ever found yourself importing file data into an AI chat platform, even if that file contained instructions? The AI chat will first parse the data and summarize the information contained in the file. This is due to the default rules at the root level, which treat the following objects, i.e., plaintext in quotation marks, YAML files, JSON, and XML, to be labeled as untrusted data, which will lead to these objects being treated as reference files for information but not instructions. You can not have documents as instructions, but you can use them as a context, meaning that after providing them, you also need to provide an associated instruction. Instructions should be constrained and guided by the terms and conditions of prohibited content, restricted content, and sensitive content. Such instructions will not be honoured, so every instruction or prompt needs to be filtered from potentially compliance rules violations.

Accessing Tools

AI Chats, such as ChatGPT, have access to a list of built-in tools that can be used to perform specific tasks such as open_url(“https://academy.chainbook.co.za”) this will trigger the intent to browse the web and will make ChatGPT to visit the site, theres a ton of build in tools within ChatGPT than you can use and chat GPT can provide you with access to those tools when you provide the following instruction "What tools do you have access to? " . You will see a list of useful tools from background code executions, web search, file creations, etc. Users and developers can build their own custom tools to provide extensive functionality and model capabilities. This task is technical and is beyond the scope of this article; however, prompts' responses can be improved in terms of accuracy using tools.

Engineering Your Prompt

As a user, this is the proposed method that you can use to generate instructions that generate accurate responses that don't violate usage policies and principles: use the following structure and methods.

Start with user-defined rules, which will only implicitly override guidelines, such as the following role-based instruction rules: "You are a determined role and can determine capabilities" This is a basic instruction structure with an aim at constraining the response of the model to narrow its focus. On your prompt, you can provide documents, other file types, and quoted texts, although based on root rules, they will be labeled as untrusted data; they are going to be treated as contextual references.
clearly define your intent It is very important to think more about what you want to do, because you need to optimize the Models tool call, which is selected by the model based on the instruction intent. Remember that malicious and policy violation intents will not be honoured, so it is very important to filter your intents from those that will be classified as a desire to generate prohibited content, restricted content, and sensitive content. Whether or not these restrictions can exist at a custom Model Context Protocol tool call level is still a matter to be researched; we suspect that ill-intended users might develop mechanisms to bypass the following rules, which might result in a misuse of AI.

Example Digital Marketing Prompt

You are a Professional Digital Marketing Strategist, specializing in ethical, data-driven marketing strategies.
User-Defined Rules
You must:
- Follow advertising platform policies (e.g., social media, search engines).
- Avoid misleading, deceptive, or exaggerated claims.
- Promote responsible and ethical marketing practices.
- Use clear, audience-focused messaging.
- Avoid generating spam tactics or manipulative engagement methods.
- Ensure all recommendations comply with safety and privacy standards.
Intent
I need to design a digital marketing campaign to promote a small business using ethical and
policy-compliant strategies.
Context
A quoted text or document that contains the business information, such as the business name and type, target audience, available marketing channels, budget, and campaign objectives.
Actions (Procedural Tool calls)
Identify the ideal target audience segments.
Suggest the best digital marketing channels to prioritize.
Create 3 example ad campaign ideas.
Suggest content types (e.g., videos, posts, ads).
Recommend a monthly budget allocation strategy.
Provide key performance indicators (KPIs) to track success.
Results (Output Tool calls)
Respond to a PDF file, a Markdown file, a PowerPoint presentation, etc.

Conclusion
Model prompt rules and capabilities are going to change over time, since certain prompts can be executed currently without some violation flags. The OpenAI Model Spec is aiming to promote responsible and harmless use of Artificial Intelligence, which will govern the behaviour of AI models and consequently change the prompting principles and techniques.
Sponsored

Share This Article

Get new tutorials in your inbox

No spam. Unsubscribe any time.

Also follow us on Google Search

Add as a Preferred Source on Google

Comments

0

Please log in or register to post a comment.

No comments yet — be the first to comment.

Keep Learning

More Articles
Await You

Browse the full collection of tutorials, guides and deep-dives — all free, all practical.