What is a Prompt?
Definition.

A prompt is a message you give someone to help them perform a specific task. Over the years, we humans have learned to understand the language of computers and give them commands accordingly. This was the case with various commands in the DOS days, and it's the case with all sorts of code that developers write.
In the computer world, a prompt is a command, such as one typed into a command-line interface (CLI), used to produce a desired result. Since computers cannot understand human language, commands are defined in advance and the computer outputs results accordingly. However, because there are many commands and options, the computer can only present predetermined results. Humans had to memorize and learn the computer's commands in order to give the right one, and the computer could produce only a limited set of results.
With recent advances in generative AI, computers are finally able to understand natural language. Because computers can now understand human language, we have much more freedom and variety in the output we can request: you can speak in plain language and the computer will understand and carry out your commands. Today, a prompt is the natural-language instruction we give a generative model, telling it what to do so that it produces the desired result.
What is Prompt Engineering?
In recent years, advances in natural language processing (NLP) have led to powerful language models, such as ChatGPT and GPT-4, that generate human-like text. While generative AI has become capable of understanding human speech and natural language, it is still limited by the quality of the input it receives. The AI will answer differently depending on the quality of the prompt, because you may mean the same thing but say it differently.
This is where prompt engineering comes in. By carefully crafting and optimizing prompts, you can use an LLM to generate more accurate, relevant, and engaging text. Whether you're a data scientist, a content creator, or simply someone interested in exploring the capabilities of natural language processing, learning prompt engineering can help you achieve your goals.
For example, if you have two different ways to ask a question, as shown below, the second question will give you a more understandable answer.
First question type
Describe a black hole.
Second question type
The following is a conversation with an AI research assistant.
The assistant should answer at a level that an elementary school student can understand.
Question: Can you explain how black holes are created?
AI:
Understanding intent and crafting prompts to elicit better answers in this way is called prompt engineering. Prompt engineering is a concept in the field of artificial intelligence that involves constructing appropriate prompts to get high-quality output from AI.
Even the most sophisticated generative AI cannot always perfectly understand human intent and purpose. It sometimes produces inappropriate or erroneous output, and it can never fully replace human creativity or sensitivity. That's why generative AI needs human oversight and moderation to work well. Recently, a new job title has emerged, the prompt engineer, and its value is skyrocketing. It's a sign of the times that the process of exploring, designing, and developing prompts for AI has become so important.
Prompt example
Generative AI has fostered the expectation that AI will just make what I want out of thin air. But when I actually use it, I'm often disappointed. This is where knowledge of Prompt comes in. In an age where we can command things in natural language, we still have to learn how to talk to computers.

The prompt templates needed to use the various generative AIs are already widely available on the internet, including on prompt marketplaces. For language models like ChatGPT, the Prompt Engineering Guide can help you learn how to use them, as can the ChatGPT cheat sheet, which lays out how to instruct ChatGPT to get the answers you want.

Prompt engineering is more than just designing and developing prompts. Prompting is an important skill for interacting with, building, and understanding LLMs. Prompt engineering also improves the relevance of LLM output and enables new capabilities, such as extending LLMs with domain knowledge and external tools.
Recently, as models have improved with well-crafted prompts, many techniques have evolved: zero-shot and few-shot prompting, chain of thought (where a series of reasoning steps leads to a better answer), and more. If you're interested in specific prompting techniques, see the Prompt Engineering Guide.
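As a minimal sketch of the few-shot technique mentioned above, the snippet below assembles a prompt that carries a couple of worked examples before the actual query. The task (sentiment labeling) and the example texts are illustrative assumptions, not from any particular dataset.

```python
# Few-shot prompting sketch: the prompt itself contains labeled examples,
# so the model can infer the task from them without any fine-tuning.

examples = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I want those two hours of my life back.", "negative"),
]

def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt: labeled examples first, then the query."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the model completes from here
    return "\n".join(lines)

prompt = build_few_shot_prompt(examples, "An instant classic.")
print(prompt)
```

A zero-shot prompt would simply omit the examples and keep only the instruction and the query.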
How to create a good prompt?
Components of the prompt

A well-crafted prompt should include the following components:
- Context: Provides the background information needed to set the stage for the prompt and help the LLM understand what to do. For example, you can provide a short description of the problem or a summary of relevant information.
- Task: Defines the specific goal or objective of the prompt. It should be clear and concise, and focus on the specific information or action you need.
- Constraints: Provides boundaries or limits for the prompt. You can place limits on specific requirements or on the length or complexity of responses.
- Extra Information: Specifies additional context or the output format. Output format options include Markdown, JSON, or a programming language.
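The four components above can be assembled into a single prompt string. The sketch below is illustrative only; the section labels and wording are assumptions, not a required format.

```python
# Illustrative sketch: assemble a prompt from the four components described
# above (context, task, constraints, extra information).

def build_prompt(context, task, constraints, extra):
    """Join the four prompt components into one labeled prompt string."""
    return "\n\n".join([
        f"Context: {context}",
        f"Task: {task}",
        f"Constraints: {constraints}",
        f"Output format: {extra}",
    ])

prompt = build_prompt(
    context="You are a customer-support assistant for an online bookstore.",
    task="Draft a reply to a customer whose order arrived damaged.",
    constraints="Keep the reply under 100 words and maintain an apologetic tone.",
    extra="Return the reply as plain text with a greeting and a sign-off.",
)
print(prompt)
```

Labeling each section explicitly makes it easy to swap one component (say, the constraints) without rewriting the whole prompt.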
How to create a good prompt
Let's dive in and talk specifically about how to create good prompts.
Write clear and specific instructions
Using a language model like ChatGPT is a bit like onboarding a smart new employee: it will do a better job with clear and explicit instructions. You may find the prompt guide for users of ChatGPT, published by OpenAI, helpful.
Briefly describe the problem or task at hand. [context]
Clearly define the specific goal or objective of the prompt. [task]
Provide any relevant constraints or limitations for the prompt. [constraints]
Include any additional information or prompts that may be useful for the LLM to generate a relevant response. [extra]
- Provide context
For example, if you're asking ChatGPT to draft an email, you'll get much better results if you tell the model about the recipient's relationship to you, the author's role, the intended outcome, and other relevant details.
- Customize
Assigning a specific role to the model based on the task at hand allows it to produce more sophisticated output for that scenario.
- Use delimiters
Delimiters are an important tool in prompt engineering, helping to distinguish specific segments of text within a larger prompt. Delimiters take many forms, such as triple quotes ("""), quotation marks (""), dashes (-), angle brackets (<>), XML tags (<tag></tag>), or section headings.
- Request structured output
Tailoring the output format to your specific needs not only greatly improves the user experience but also simplifies the work of application developers. Depending on your needs, you can request output as a bulleted list, a table, HTML, or JSON.
- Validate user input
Before the model generates a response, check the input for errors. For example, if the user enters text that is too short or too long, the model should not generate a response.
- Provide examples
Provide successful examples, which help the model learn what kind of response is expected.
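Several of the tips above can be combined in one place: validating the user input, isolating it with a delimiter, and requesting structured output. The sketch below does exactly that; the JSON field names and length limits are illustrative assumptions.

```python
# Sketch combining three tips: a delimiter (triple quotes) isolates the
# user-supplied text, a simple length check validates the input, and the
# prompt requests structured (JSON) output.

def make_summary_prompt(user_text, min_len=20, max_len=4000):
    # Validate user input before building the prompt.
    if not (min_len <= len(user_text) <= max_len):
        raise ValueError("input text is too short or too long")
    return (
        "Summarize the text delimited by triple quotes.\n"
        'Return JSON with the keys "summary" and "keywords".\n\n'
        f'"""{user_text}"""'
    )

prompt = make_summary_prompt(
    "Prompt engineering is the practice of crafting inputs to a language "
    "model so that it produces accurate and relevant output."
)
print(prompt)
```

The delimiter also reduces the risk that instructions hidden inside the user's text get interpreted as part of your prompt.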
Prompts are instructions that tell the generative model what to do. If you write clear and specific instructions, the AI can understand better and produce better results. For example, a prompt that says "Please write an email" is too vague - the AI won't know what to write, whereas a prompt that says "Please write an email, it should be to my friend, and it should be about wishing my friend a happy birthday" is clearer and more specific.
Chain of thought
Models are more prone to inference errors when they jump to conclusions. Reformulating the query to require sequential reasoning before the final answer can improve the effectiveness of the prompt; there are ways to encourage LLMs to think step by step as they solve problems.

With a standard prompt, popularized by GPT-3, the model is given an example input-output pair and asked to give an answer. A chain-of-thought prompt instead asks the model to output its intermediate reasoning steps before the final answer.
Previously, this kind of reasoning was achievable only through fine-tuning with large training datasets. Chain-of-thought prompting drives the same thought process with prompts alone, without modifying the weights of the language model.
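As a minimal sketch, here is the same question phrased as a standard prompt and as a chain-of-thought prompt. The wording of the instruction is an illustrative assumption; any phrasing that elicits intermediate steps works similarly.

```python
# Chain-of-thought sketch: the second prompt explicitly asks the model to
# show step-by-step reasoning before stating the final answer.

question = (
    "A cafeteria had 23 apples. It used 20 to make lunch and bought 6 more. "
    "How many apples does it have?"
)

# Standard prompt: asks directly for the answer.
standard_prompt = f"Q: {question}\nA:"

# Chain-of-thought prompt: asks for intermediate reasoning first.
cot_prompt = (
    f"Q: {question}\n"
    "A: Let's think step by step, showing each intermediate calculation, "
    "and then state the final answer on its own line."
)

print(cot_prompt)
```

With the standard prompt the model tends to jump straight to a number; with the chain-of-thought prompt it is nudged to compute 23 - 20 = 3 and 3 + 6 = 9 before answering.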
Overcoming hallucinations with prompts
Even if you follow the aforementioned tips, there may be times when your model produces unexpected results. This could be due to "hallucinations," a chronic problem with generative AI. Many organizations struggle to overcome this, and there are ways to overcome some of it with prompts.
One form of hallucination is the tendency of a model to become overly fixated on a single viewpoint or to rely on inaccurate information. When this happens, you can use prompts to request information from different perspectives.
For example,
- Instead of "Analyze customer reviews and evaluate the pros and cons of that product,"
- try "Please analyze the customer reviews from multiple perspectives and provide an assessment of customer satisfaction and product quality."
The idea is to ask for different perspectives, as shown above, to prevent the model from showing a strong bias toward a limited perspective.
To overcome hallucinations, it is also important to provide the model with clear context. For example, you can establish context by including information in the prompt that was mentioned in a previous conversation or sentence. This will allow the model to understand the context and generate consistent responses. However, we have also found that models will often unintentionally "lie" in order to generate overly perfect results, so you can also add noise to the prompt to encourage them to generate more realistic results.
ChatGPT Prompt Templates

Well-crafted prompts can be useful in a variety of situations. For example, this article describes how to create an English picture book with ChatGPT. In this article, we first give ChatGPT the role of an English storybook author. Then we ask it to write an English children's book with the desired topic and content.
If you can understand the prompts and use them properly, like in the example above, you can get 200% out of generative AI. We've seen paid templates sold and purchased for this effect.
Prompt engineering vs. fine tuning

Fine tuning and prompt engineering are both methods of taking a pre-trained language model and improving the model's performance for a specific task. However, there are differences in their approach and the context in which they are used.
- Fine-tuning:
A method of adjusting the weights of a pre-trained language model to improve its performance on a specific task or domain.
Use additional datasets and labels for task specialization.
The additional data improves performance by updating what the existing model has learned.
With fine-tuning, you get specialized performance for each task.
Use it when you need to improve performance on a typically intractable domain or on complex tasks.
- Prompt Engineering:
Focuses on making careful changes to a pre-trained language model input interface to produce desired results.
Optimize by playing with factors such as input sentence (prompt) structure and format.
No additional training data is required, and is primarily useful when interacting with consumers.
Generally performs well in simple or routine question-and-answer situations.
In conclusion, fine-tuning mainly yields good performance improvements on complex and specialized tasks, while prompt engineering can produce the desired results in simpler, more routine situations with minimal resources and time.
Parameter-efficient fine-tuning (PEFT)

Parameter-efficient fine-tuning (PEFT) is a fine-tuning method that updates only a subset of the weights when applying a pre-trained language model to a specific task or situation. The goal is to reduce the number of parameters while maintaining the performance of the model, thereby reducing training time and the memory cost of learning.
PEFT typically fine-tunes only a small set of added or selected components that are shared at a high level across tasks, while keeping most of the pre-trained weights frozen. This minimizes the parameter updates required to adapt the trained model.
Since most parameters are not updated, this approach has advantages in efficiency and in making use of small training datasets. By fine-tuning parts of the model instead of the whole, it speeds up training, reduces resource usage, and allows effective learning and improvement with a smaller amount of data.
PEFT can be useful for many deep learning projects, and is particularly effective in environments with small data sets and limited computing resources. However, it is important to note that depending on your use case, you may experience performance differences from typical fine tuning.
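As a rough illustration of why PEFT saves resources, the back-of-the-envelope sketch below compares the trainable-parameter count of full fine-tuning with that of a LoRA-style low-rank update for a single weight matrix. The dimensions are made up for the example.

```python
# Back-of-the-envelope sketch (not a training script): LoRA, a common PEFT
# method, freezes a d x d weight matrix W and learns a low-rank update
# W + A @ B, where A is d x r and B is r x d with r << d. Only A and B
# are trained, so the trainable-parameter count drops sharply.

d = 1024   # hidden size of one weight matrix (illustrative)
r = 8      # LoRA rank (illustrative)

full_finetune_params = d * d      # train every entry of W
lora_params = d * r + r * d       # train only A and B

print(f"full fine-tuning: {full_finetune_params:,} trainable parameters")
print(f"LoRA (r={r}): {lora_params:,} trainable parameters")
print(f"reduction: {full_finetune_params // lora_params}x")
```

In a real model this saving applies per adapted matrix, which is why PEFT methods can adapt billion-parameter models on modest hardware.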
Structure and type of data set needed for fine tuning

Fine-tuning is the process of taking a pre-trained language model and training it further on additional datasets so that it performs well for a target domain or application. If the structure of those datasets is set up correctly, the model can achieve higher performance and maintain correct training results.
To help LLMs perform well on domain-specific tasks, you need a sufficient amount of diverse data. You should also make sure that the format and structure of your input and output examples are consistent; this makes it easier for the model to learn the mapping you want. Unwanted elements within the dataset, such as errors or noise, should be removed, and examples should be accurately labeled. This helps the model reach more accurate results during training.
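As a sketch of the consistent structure described above, fine-tuning datasets are often stored as JSON Lines, one example per line. The "prompt"/"completion" field names below follow a common convention, but the exact schema required by your training framework or provider may differ.

```python
# Sketch of a consistently structured fine-tuning dataset in JSON Lines
# format (one JSON object per line). The field names are a common
# convention, not a universal requirement.
import json

examples = [
    {"prompt": "Translate to French: Good morning.", "completion": "Bonjour."},
    {"prompt": "Translate to French: Thank you.", "completion": "Merci."},
]

# Write one JSON object per line, validating structural consistency as we go.
lines = []
for ex in examples:
    assert set(ex) == {"prompt", "completion"}, "inconsistent example structure"
    lines.append(json.dumps(ex, ensure_ascii=False))

jsonl = "\n".join(lines)
print(jsonl)
```

Checking every example against the same key set before writing is a cheap way to catch the formatting noise the paragraph above warns about.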
Bottom line: When building your model, consider your characteristics, infrastructure, and data when choosing your fine-tuning approach.
Today, in the field of deep learning and natural language processing (NLP), language models are a key component of artificial intelligence (AI)-based applications. Some of these models, particularly Large Language Models (LLMs), are pre-trained on massive amounts of data to improve general natural language understanding (NLU) performance. However, these base models come without performance optimization for individual tasks or consideration for specific domains.
- LLMs are typically pre-trained on huge amounts of data, but they can perform poorly on details specific to a particular task or domain. Fine tuning and prompt tuning play an important role in addressing these issues and improving the performance of the model.
- By utilizing these techniques, you can inject knowledge of a specific domain and achieve better understanding and outcomes for that domain. This is a key factor in creating effective solutions.
- With proper fine-tuning or prompt tuning, you can reduce the size and computational steps of your model while still achieving good results. This saves resources and simplifies the deployment of the model.
However, before you train LLM, you need to fully understand the properties of the model and decide which method to choose. Here are a few things to consider before training and tuning your LLM.
- Developers should choose the LLM that best fits their task and domain. Consider task performance, model size, and training/inference speed to determine which model is right for you. You may also need to consider whether the model's code and documentation are open source.
- Consider whether your organization has the machine learning engineering expertise and the personnel to manage the required infrastructure.
- If you don't have the expertise or budget for model tuning, you may be better off with prompt engineering, which is far less expensive.
The use of fine-tuning and prompt tuning has varying results in some cases, so you should choose the appropriate approach based on your purpose and domain. These techniques will help you tune your underlying model to create a successful AI solution.
References
- AI Prompts and Prompt Engineering | Definition and Examples - Codestates Official Blog
- Prompt Engineer | What it is, what it does, the skills required and the outlook
- Why is it important for using LLMs effectively? | MLExpert - Crush Your Machine Learning interview
- Prompt Engineering Guide
- Prompt Structure | MLExpert - Crush Your Machine Learning interview
- 10 Essential Prompt Engineering Methods For Successful ChatGPT & LLM Applications
- Quickstart tutorial - OpenAI API
- Fine Tuning vs. Prompt Engineering Large Language Models