What is data labeling?
Definition.
Data labeling refers to purposefully annotating data, such as images, videos, text, and audio, so that it can be used to train a machine learning algorithm.
To get AI to do what we need it to do, whether that's driving a car or identifying people in a photo, we need large datasets annotated with information specific to that purpose. This need has driven advances in computer vision, natural language processing (NLP), and other technologies that are still being refined.
People who perform data labeling are called labelers. In general, data labeling involves putting purposeful labels (annotations) on a variety of data, such as images, audio, and video, to match the data format required by the AI. After labeling, the resulting data is delivered in a format customized to the customer's request, such as JSON, CSV, COCO, or TFRecord.
Data Annotation vs. Data Labeling

Data annotation and data labeling both refer to adding metadata that describes the data, such as classes and tags an AI can learn from. The two terms are used interchangeably in Korea and around the world, but they carry slightly different nuances when examined closely.
- Data Annotation: Annotating a specific object (BBox, Polygon, etc.) within an image.
- Data Labeling: A collective term for multiple pieces of annotated data.
Data labeling types
Image Annotation
Image annotation is the task of attaching labels that describe the components of an image so that a computer, which cannot see on its own, can interpret them; it can be done by hand or with computer assistance. Image annotation includes tagging or categorizing an entire image according to labels predetermined by the engineers developing the AI model, marking individual objects in the image, or handling unrecognizable parts of the image.
Image annotation is an essential task for training computer vision models such as object detection, classification, and segmentation. In particular, today's autonomous vehicles rely on sophisticated image annotation. Labeling all the objects a car encounters in its environment, such as vehicles, people, signals, and lanes, helps the vehicle's computer recognize them and determine its behavior based on traffic conditions.
When it comes to real-world applications of image annotation, real-time performance is increasingly the most important criterion. This is why the YOLO (You Only Look Once) model is gaining more traction than other models: it analyzes and labels the objects in an image in a single pass, which speeds up the process.
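To make this concrete, here is a minimal sketch of what a single bounding-box annotation can look like in a COCO-style JSON structure, one of the delivery formats mentioned earlier. The file name, categories, and pixel coordinates are invented values for illustration, not data from any real project.

```python
import json

# A minimal, hypothetical COCO-style annotation for one image.
# Field names follow the COCO convention; the values are illustrative only.
coco_sample = {
    "images": [
        {"id": 1, "file_name": "street_001.jpg", "width": 1920, "height": 1080}
    ],
    "categories": [
        {"id": 1, "name": "car"},
        {"id": 2, "name": "pedestrian"},
    ],
    "annotations": [
        {
            "id": 101,
            "image_id": 1,
            "category_id": 1,
            "bbox": [420.0, 310.0, 260.0, 180.0],  # [x, y, width, height] in pixels
            "area": 260.0 * 180.0,
            "iscrowd": 0,
        }
    ],
}

# Serialize the annotation the way a labeling vendor might deliver it.
print(json.dumps(coco_sample, indent=2))
```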
There are several different image annotation tasks, depending on what the AI is trying to accomplish.
- Image classification
- Multi-label classification
- Object detection
- Instance segmentation
- Semantic segmentation
- Polygon
- Polyline
- Keypoint
- OCR

Text Annotation
In machine learning, text annotation is the act of converting information that exists as text into something a computer can understand. In simpler terms, it means specifying where things are, what they mean, what their intent is, and so on, so the result can be used to build a training dataset for an AI model. It is the preliminary work behind deep learning and machine learning models that use various natural language processing (NLP) and computer vision techniques.
A training dataset with text annotations can be used to learn sentiment, for example from sentences such as "I love this movie!" or "I don't like this movie." Sentiment information that a computer cannot recognize on its own is captured in annotated snippets of text, which lets the machine learning algorithm learn the patterns associated with positive and negative sentiment. Machine learning models can do much more with annotated text, including topic modeling, Named Entity Recognition (NER), and part-of-speech tagging.
Once an AI model is trained on annotated text, it can annotate text on its own, identify sentiment, and predict preferences. Text annotation is mainly used for sentiment and linguistic context, entity analytics, text categorization, and language annotation, and it is also used to categorize user reviews or social media reactions to understand customer responses. Chatbots and virtual assistants also actively use it to understand the intent behind everyday conversations.
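As a rough illustration of how labeled sentiment text feeds a model, the sketch below trains a tiny bag-of-words classifier with scikit-learn. The review strings and labels are invented placeholders; a real project would use thousands of annotated examples.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, invented training set: text snippets with human-assigned sentiment labels.
texts = [
    "I love this movie!",
    "What a wonderful performance.",
    "I don't like this movie.",
    "This was a waste of time.",
]
labels = ["positive", "positive", "negative", "negative"]

# Bag-of-words features + logistic regression: the model learns which word
# patterns the annotators associated with each sentiment.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["I really love the acting in this one"]))
```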
Text annotation types are
- NER
- Q&A
- Text analysis

Audio Annotation
Today, automatic speech recognition (ASR) models have streamlined the process of transcribing audio, enabling a generation of AI that understands spoken language. Audio annotation is already at work in many places, including Alexa, Google Assistant, and AI technologies that use speech recognition.
Speech annotation covers both Speech-to-Text (STT) and Text-to-Speech (TTS). STT refers to converting speech into text, while TTS refers to converting text back into speech. In STT, the focus is on making sure the model hears your speech, converts it to text accurately, and understands it, while TTS is about expressing what the model understands in natural language, like a human voice. STT powers automatic subtitle generation on YouTube, while TTS can generate audio content for podcasts and radio programs.
Building an ASR system requires a large amount of labeled audio data. For example, Whisper is a model trained on 680,000 hours - about 77 years - of audio. Models also need sample data collected under a variety of conditions, including varied intonation and background noise, to perform effectively.
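A minimal sketch of how such a pretrained ASR model can be used for Audio-to-Text style pre-labeling, assuming the open-source openai-whisper package and a local file named sample.wav (both of which are assumptions, not details from the text above):

```python
import whisper  # pip install openai-whisper

# Load a small pretrained checkpoint; larger checkpoints trade speed for accuracy.
model = whisper.load_model("base")

# Transcribe a local audio file into text that labelers can then review and correct.
result = model.transcribe("sample.wav")
print(result["text"])
```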
Audio annotation types are
- Audio classification
- Tagging
- Audio to Text

Video Annotation
Video annotation is trickier than working with images, because you need to place tags on frames of video, not a single photo. As technology advances, continuous-frame methods, called multi-frame or stream annotation, have been studied. Video annotation is inevitably a multimodal task: not only does it have to recognize the images and audio that make up each frame, it also has to understand their context and finally put them together with the preceding and following scenes to understand the flow.
Computer vision technology is essentially based on a machine's ability to extract important information from visual data such as video. In other words, computer vision works on the same principle as human visual perception. To achieve this, it is important for computers to learn from the results of video annotation. A neural network sees the world as a kind of "black box"; it is the accuracy of the video annotation that gives meaning to that footage.
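To show what "placing tags on frames rather than a single photo" can look like in practice, here is a hedged sketch of per-frame annotation records with track IDs, so the same object keeps its identity across consecutive frames. The structure and values are illustrative, not a standard format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FrameBox:
    frame_index: int    # which frame of the video this box belongs to
    track_id: int       # same id across frames = same physical object
    label: str          # class name, e.g. "car"
    bbox: List[float]   # [x, y, width, height] in pixels

# Hypothetical annotations for one car tracked across three consecutive frames.
track = [
    FrameBox(0, 7, "car", [400.0, 300.0, 120.0, 80.0]),
    FrameBox(1, 7, "car", [408.0, 301.0, 121.0, 80.0]),
    FrameBox(2, 7, "car", [417.0, 303.0, 122.0, 81.0]),
]

# Boxes sharing a track_id let downstream training code learn motion over time.
for box in track:
    print(f"frame {box.frame_index}: track {box.track_id} ({box.label}) at {box.bbox}")
```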
Video annotation types include
- Object detection
- Segmentation
- Object tracking
- Classification
- Tagging
- Keypoint Skeleton pose

What's important about labeling data?

In a fast-paced business environment, it is the 1% of AI effort that goes into data that is making the difference, rather than the 99% that goes into models. Despite this, only 15% of all AI is applied in real-world industries, largely because of low-quality data.
According to data analytics company Cognilytica, 80% of AI project time is spent collecting, organizing, and labeling data. Before a model can even be trained and deployed, a great deal of time goes into making sure the data is properly structured. Traditional data labeling processes are often inefficient or carry costly structural issues, making it difficult for companies to devote significant resources to data quality before adopting AI.
Bringing AI into your business comes down to the quality of your training data. Before building a machine learning model, it is increasingly important to label your data properly, because quality issues in labeled data adversely affect model training and model decision-making. That is why it is important to actively improve quality through a quality assurance (QA) process.
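One common QA check, offered here only as a generic sketch rather than any specific company's process, is to measure how often independent labelers agree on the same items, for example with Cohen's kappa. The labels below are invented.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels from two annotators on the same ten items.
annotator_a = ["cat", "dog", "dog", "cat", "cat", "dog", "cat", "dog", "dog", "cat"]
annotator_b = ["cat", "dog", "cat", "cat", "cat", "dog", "cat", "dog", "dog", "dog"]

# Cohen's kappa corrects raw agreement for agreement expected by chance;
# low values are a signal to clarify guidelines or retrain labelers.
kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"inter-annotator agreement (kappa): {kappa:.2f}")
```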
Data Labeling Trends - South Korean Government's Data Quality Policy
In 2023, the 'Data Construction Project for Artificial Intelligence Learning', organized by the Ministry of Science and ICT and hosted by the Korea Intelligent Information Society Promotion Agency, published the 'Data Quality Management Guidelines and Construction Guide for AI Learning'. The document consolidates the quality control activities and outputs accumulated over many years of the data construction project for AI learning. The main quality control standards for data labeling, described on page 76 of the guideline, are as follows.
[Quality Control Metrics - Data Accuracy]
- Syntax correctness: A metric that measures whether the attribute values that make up the annotation data conform to the originally defined data format and input value range.
- Semantic correctness: Metrics that verify the semantic ground truth of the data, such as accuracy, precision, recall, IoU, ROUGE, BLEU, Fβ score, and EM (exact match).
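As an example of one of the semantic-correctness metrics listed above, the sketch below computes IoU (intersection over union) between a labeled bounding box and a ground-truth box; the coordinates are made-up.

```python
def iou(box_a, box_b):
    """IoU of two boxes given as [x1, y1, x2, y2]."""
    # Intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Hypothetical labeled box vs. ground-truth box.
print(iou([100, 100, 300, 250], [110, 105, 310, 260]))  # roughly 0.82
```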
The reality of data labeling - the need for expertise and quality control in data labelers
These days, you can save money by using global companies or small and medium-sized enterprises that hire low-cost data labelers based in Southeast Asia or Africa. However, because those labelers understand the data in a different context than you do, the inevitable loss of accuracy in the metadata often results in poor-quality training datasets. In the worst case, you may have to repeat the data preprocessing and processing stage, which can take up to 80% of the project time, all over again.
Getting the training dataset right the first time is critical, because once a model is built, it is not easy to retire it and rebuild or scale it. This is where the expertise of data labelers becomes even more important. Key players in the AI industry are already focusing on hiring and managing data labelers as professionals, rather than treating the work as part-time jobs.
This has been especially true for recent DataHunt customers. At DataHunt, we're convinced that even if labor costs are somewhat higher, it is still the better option to work with skilled and proven in-country labelers for your project.
As we begin to utilize generative AI in more and more ways, we increasingly need to collect and process data for model fine-tuning. In such cases, including few-shot learning, the quality of the data labeling output becomes even more important. Companies looking to bring generative AI into their business should work with a partner who has a deep understanding of the technology and can build training data alongside raw data collection and processing.
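For generative-model fine-tuning, labeled data is often delivered as prompt/response pairs. The sketch below writes such pairs to a JSONL file; the field names, file name, and examples are illustrative assumptions, not tied to any specific fine-tuning API.

```python
import json

# Hypothetical labeled examples: each pairs an input prompt with the response
# a human labeler wrote or approved.
examples = [
    {"prompt": "Classify the sentiment: 'I love this movie!'", "response": "positive"},
    {"prompt": "Classify the sentiment: 'I don't like this movie.'", "response": "negative"},
]

# One JSON object per line (JSONL), a common interchange format for fine-tuning data.
with open("finetune_data.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```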
Comparing South Korean data labeling companies
It is estimated that more than 700 data labeling companies are registered in South Korea as of 2023. With so many options, making a choice can be difficult, so here is a brief look at the characteristics of the major players from the perspective of a company selecting a partner.
Crowdworks

- Provide training curriculum and staffing services to foster professional labelers
- Smart Labeling Platform for working with more data in less time
AIMMO

- Build a web browser working environment for reliable performance
- Improve project efficiency with quality and performance management tools
Datahunt

- Human in the loop structure with up to 99% accuracy and 50% reduction in labor time and operating costs
- Auto-labeling and real-time quality control and log data extraction with a SaaS platform
- Gain a higher level of data quality competency than overseas data labeling companies.
Testworks

- Customer satisfaction through a quality management system and follow-up support focused on increasing efficiency and productivity
- Run a virtuous cycle of data creation, refinement, processing, and management
Selectstar

- Crowdsourcing platform with 250,000 workers allows you to get small jobs done
- Visualization-based dataset analysis tools to view the distribution and characteristics of training data
DataHunt's data labeling edge
Auto-labeling
DataHunt actively leveraged its SaaS platform to build an AI-assisted auto-labeling structure, which improved data accuracy to 99% and saved over 50% in cost and time.

Prior to a project, DataHunt selects the best annotation model for your AI model's training data. This involves a customized project planning process between your PM and our expert AI engineers, which includes selecting and refining models such as Meta's Segment Anything, YOLO, and BERT for your project. At DataHunt, our AI developers and PMs actively test and research different models and work to ensure the outcome is right for your business.
We have also built our own algorithms to improve the quality of AI-driven results, and we are constantly working to improve performance. With DataHunt's labeling know-how, carefully selected models are used for pre-labeling, auto-labeling, and validation. With models pre-trained on 300M+ data points, we have been able to reduce our clients' work time by more than 50%.
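As a generic illustration of AI-assisted pre-labeling (a sketch, not DataHunt's actual pipeline), the code below runs a pretrained torchvision detector over an image and keeps only confident boxes as draft labels for human review; the file name and threshold are assumptions.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained COCO detector used purely to produce draft (pre-) labels.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("street_001.jpg").convert("RGB")  # hypothetical input image
with torch.no_grad():
    prediction = model([to_tensor(image)])[0]

# Keep only confident detections; humans then correct or reject these drafts.
CONFIDENCE = 0.7
pre_labels = [
    {"bbox": box.tolist(), "label_id": int(label), "score": float(score)}
    for box, label, score in zip(
        prediction["boxes"], prediction["labels"], prediction["scores"]
    )
    if score >= CONFIDENCE
]
print(f"{len(pre_labels)} draft boxes proposed for review")
```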
Manage accuracy with human-in-the-loop (HITL) verification
DataHunt is not only meticulous in its data processing, but also goes through secondary and tertiary checks to improve accuracy. This process is conducted in a human-in-the-loop (HITL) manner, where humans and AI complement each other.

DataHunt's automated AI validation runs on the data processed in the first pass, identifying work products that are likely to be incorrect and suggesting corrections. Once the AI assesses the reliability of the work, a human reviewer accepts or rejects the AI's suggestions. On DataHunt's platform, we also train our models on the data we have processed, at the request of our customers.
AI plays a huge role in finding the human errors that will always occur. This not only saves review time, it also saves money. Most importantly, it can significantly improve the quality of the data you end up with.
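A simplified sketch of the confidence-based routing idea behind HITL review (generic, not the platform's actual logic): items the model is unsure about go to a human reviewer, while high-confidence items receive a lighter spot check. The predictions and threshold below are invented.

```python
# Hypothetical model outputs: each item has a predicted label and a confidence score.
predictions = [
    {"item_id": 1, "label": "car", "confidence": 0.97},
    {"item_id": 2, "label": "pedestrian", "confidence": 0.55},
    {"item_id": 3, "label": "car", "confidence": 0.82},
]

REVIEW_THRESHOLD = 0.8  # assumed cut-off; tuned per project in practice

needs_human_review = [p for p in predictions if p["confidence"] < REVIEW_THRESHOLD]
auto_accepted = [p for p in predictions if p["confidence"] >= REVIEW_THRESHOLD]

# Low-confidence items are escalated; the human decision can be fed back for retraining.
print(f"escalate to reviewer: {[p['item_id'] for p in needs_human_review]}")
print(f"accept with spot check: {[p['item_id'] for p in auto_accepted]}")
```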
Managing data labelers
DataHunt has built a SaaS platform for efficient and accurate projects. Through the Admin feature, you can manage multiple projects. You can see overall project progress at a glance, including average processing time, work efficiency, and rejection rate for time management. For quality management, the platform predicts work accuracy and flags each worker's likely good or bad work through AI integration.
We also use the labelers' work log data to assess their proficiency and assign them to projects, and we provide our customers with real-time quality-related information through our SaaS platform.
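As an illustrative sketch (not the Admin feature itself), aggregating per-labeler work logs into average handling time and rejection rate might look like the following; the log records are invented.

```python
from collections import defaultdict

# Hypothetical work-log records: one entry per completed task.
work_logs = [
    {"labeler": "A", "seconds": 42, "rejected": False},
    {"labeler": "A", "seconds": 55, "rejected": True},
    {"labeler": "B", "seconds": 38, "rejected": False},
    {"labeler": "B", "seconds": 40, "rejected": False},
]

stats = defaultdict(lambda: {"tasks": 0, "seconds": 0, "rejections": 0})
for log in work_logs:
    s = stats[log["labeler"]]
    s["tasks"] += 1
    s["seconds"] += log["seconds"]
    s["rejections"] += int(log["rejected"])

# Per-labeler summaries like these can feed proficiency-based project assignment.
for labeler, s in stats.items():
    avg_time = s["seconds"] / s["tasks"]
    rejection_rate = s["rejections"] / s["tasks"]
    print(f"labeler {labeler}: avg {avg_time:.1f}s/task, rejection rate {rejection_rate:.0%}")
```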
Takeaway: Data labeling plays a crucial role in the quality of training datasets, and the core competencies of the labeler workforce and the enterprise are becoming increasingly important.
- Data labeling is a technique that allows computers to identify source data and provides context for AI models to learn from. It is used interchangeably with data annotation and refers to the task of assigning information to data points to build a dataset for training a machine learning model.
- With 80% of AI project time spent collecting, organizing, and labeling data, the quality of labeled data is becoming increasingly important, and government data quality policies are emphasizing more specific standards.
- Today, companies are adopting crowdsourcing and tapping into the general workforce for data labeling. However, labelers should be viewed as professionals rather than part-timers, which requires an understanding of the project and the flexibility to meet business needs.