AI in security: Use cases and techniques

Security AI for a future you can trust

Sangsun Moon

In the modern world, cyber threats pose a serious danger to the safety of businesses and individuals. As a result, security AI is playing an increasingly important role and is being adopted across a variety of industries. In this article, we'll cover why security AI is needed, the technologies it uses, real-world applications, and its advantages and limitations.

Why security AI

Offline risk response

Security AI can address a wide range of risks that occur offline. For example, it can automate many of the tasks currently performed by security personnel, such as monitoring CCTV footage or responding to alarms. It can also predict and alert you to potential risks, such as where and when a crime is likely to occur. You can also customize security alerts to meet the specific needs of your business, or get recommendations for security measures that match your personal risk profile.

Responding to cybersecurity threats

In recent years, not only offline risks, but also threats in cyberspace have become a major concern. A lot of manpower is required to deal with a large number of cybersecurity risk cases, but if there is a shortage of specialized personnel, security gaps are bound to emerge in each organization. This can be overcome by applying AI to security tools to build scalable solutions.

Automating repetitive tasks, such as automatically detecting and responding to risks, or patching or updating software, can also free up human time. Because AI specializes in analyzing large amounts of data in real time, it can identify potential risks faster and more accurately than human analysts.

👉 What is MLOps, which provides data management, machine learning system development, and service operations as a single service?

Leveraging AI for Security

So what technologies are being applied to security AI?

  • Statistical analysis
    AI analyzes patterns of user activity in a system or network and compares them to normal behavior. This requires large amounts of data and typically uses statistical metrics such as mean, standard deviation, and variance to establish normal ranges. It then compares them to incoming data in real time to detect anomalies or unusual patterns.
  • Machine learning
    AI models are used to detect anomalies based on pre-trained data. For example, models are trained on data such as login patterns, file access patterns, network traffic patterns, etc. The trained model makes predictions on new input data and identifies anomalous behavior from those predictions.
  • Deep Learning
    Deep learning uses multi-layer neural networks to perform complex pattern recognition and classification tasks. It automatically learns various features and patterns in data and uses them to detect anomalies. For example, a neural network can take network traffic data as input, learn normal patterns, and detect patterns in new data that differ from those patterns and classify them as anomalies.
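The statistical approach above can be sketched in a few lines. The following is a minimal, hypothetical example (the baseline values and the 3-sigma threshold are illustrative) that fits a mean and standard deviation from normal activity and flags incoming values whose z-score exceeds a threshold:

```python
import math

def fit_baseline(values):
    """Estimate mean and standard deviation from normal-activity data."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, math.sqrt(var)

def is_anomalous(value, mean, std, threshold=3.0):
    """Flag a value whose z-score exceeds the threshold."""
    if std == 0:
        return value != mean
    return abs(value - mean) / std > threshold

# Baseline: e.g. requests per minute observed during normal operation
baseline = [98, 102, 100, 97, 103, 99, 101, 100]
mean, std = fit_baseline(baseline)

print(is_anomalous(101, mean, std))  # typical traffic -> False
print(is_anomalous(450, mean, std))  # sudden spike -> True
```

Real systems would recompute the baseline over sliding windows and per user or per host, but the core idea — compare live data against statistical metrics such as mean and standard deviation — is the same.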

Techniques for leveraging offline security

Modern security can be broadly categorized into online and offline situations. For example, the technology used in offline security enables us to detect and respond to physical threats faster and more accurately than humans.

Nvidia's Metropolis
Nvidia's Metropolis is an intelligent video analytics platform that can process more than a billion security camera images looking for patterns and usable data. (Source: NVIDIA)
  • Behavior-based surveillance
    AI can detect anomalies by monitoring the behavior of users or applications on a system in real time.
    For example, if a process deviates from an expected pattern of behavior, or if unauthorized system changes are detected, the AI system can determine that this is anomalous behavior and take action to alert or block.
  • Facial, Object, and Event Recognition
    No matter how well you record video, it can be very difficult to identify a specific event, object, or person during or after an incident. That's where AI security and surveillance solutions come in, which can not only easily recognize faces, objects, and events, but also facilitate real-time delivery and proactive security.

In addition, AI-enabled security solutions can recognize people without their faces. For example, they can identify people based on their height, gender, or other physical characteristics such as clothing or posture. They can also recognize patterns of activity to detect crimes or unusual behavior while promoting a safe environment.
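The behavior-based surveillance idea described above — alerting when a process deviates from its expected pattern of behavior — can be sketched with a simple baseline lookup. Everything here (process names, action names) is hypothetical and for illustration only:

```python
# Hypothetical baseline of normal behavior per process.
NORMAL_BEHAVIOR = {
    "backup.exe": {"read_file", "write_file", "open_network"},
    "editor.exe": {"read_file", "write_file"},
}

def check_action(process, action, baseline=NORMAL_BEHAVIOR):
    """Return an alert string if the action deviates from the baseline,
    or None if the behavior is within the expected pattern."""
    allowed = baseline.get(process)
    if allowed is None:
        return f"ALERT: unknown process {process!r}"
    if action not in allowed:
        return f"ALERT: {process!r} performed unexpected action {action!r}"
    return None

print(check_action("editor.exe", "read_file"))     # None (expected)
print(check_action("editor.exe", "open_network"))  # alert (deviation)
```

In practice the baseline would be learned from observed behavior rather than hard-coded, but the decision logic — compare each action against a learned profile and alert or block on deviation — mirrors the description above.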

👉 What is YOLO, a representative model for object detection?

Techniques for leveraging online cybersecurity

How AI works within cybersecurity
How AI works within cybersecurity (Source: Artificial Intelligence (AI) in Cybersecurity: Future and Real Examples)

Pattern and signature-based detection
AI detects anomalies by comparing them to a database of patterns and signatures for known malware, cyberattacks, and other security threats. These databases are constantly updated, adding information about new threats.
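A minimal sketch of signature-based detection, assuming a hash-based signature database (the sample payload and database contents are illustrative): each file's SHA-256 digest is looked up against known-malicious hashes, and real products continuously sync such databases as new threats are added.

```python
import hashlib

# Hypothetical signature database: SHA-256 hashes of known-malicious files.
KNOWN_MALWARE_HASHES = {
    hashlib.sha256(b"malicious payload example").hexdigest(),
}

def scan_bytes(data: bytes) -> bool:
    """Return True if the content matches a known malware signature."""
    return hashlib.sha256(data).hexdigest() in KNOWN_MALWARE_HASHES

print(scan_bytes(b"malicious payload example"))  # True (known signature)
print(scan_bytes(b"harmless document"))          # False (no match)
```

Exact-hash matching is the simplest form of signature detection; real engines also use byte patterns and fuzzy hashes, which is why AI-based anomaly detection complements signatures for previously unseen threats.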

Security AI Use Cases

Cybersecurity in the cloud - Google

Google Security Command Center
A Deep Dive into Cloud Security Alerts - SafeBase Blog

Google has built its own security intelligence system, leveraging AI and machine learning to recognize and block malicious software and behavioral patterns.

Google Security Command Center is a TensorFlow-based security solution. It continuously monitors your cloud environment to detect anomalies and respond to security threats. It also uses services like Event Threat Detection and Security Health Analytics to detect security issues in your environment. These services can scan logs and resources in Google Cloud to look for threat indicators, software vulnerabilities, and misconfigurations.

CCTV enabled by visual AI technology - UST researchers

'Smart CCTV' developed by UST PhD
"Predict and warn in advance": 'Smart CCTV' developed by UST PhD students | TJB Daejeon-Sejong-Chungnam News

South Korea has suffered from the problem of inadequate security personnel, making it difficult to respond immediately to dangerous situations even with CCTV. However, the development of intelligent CCTV that can detect abnormal behavior even with a small staff is expected to overcome this problem.

The intelligent CCTV developed by the Electronics and Telecommunications Research Institute (ETRI) of the Korea Advanced Institute of Science and Technology (KAIST) applies an integrated framework that can detect and judge multiple abnormal situations in combination. It combines visual artificial intelligence with linguistic artificial intelligence, and applies a "zero-shot learning technique" in which the computer combines information based on system experience to infer results. The AI technology that sets up and analyzes detection situations during inference is said to have reduced the time and cost of building big data and improved the efficiency of applying the technology.

👉 About few-shot learning to train with less data based on a Foundation Model

There have been many intelligent CCTV products on the market, but this model is special for a reason: it is the only technology in Korea that has passed all seven intelligent CCTV certification areas of the Korea Internet & Security Agency (KISA): loitering, intrusion, falling, fighting, abandonment, arson, and marketing. In addition, in the 'RGB+D' standard performance evaluation on the world's largest three-dimensional visual dataset (DB), from Nanyang Technological University (NTU) in Singapore, it achieved a behavior recognition rate of 94.66%, which proves that it has the world's highest abnormal behavior detection accuracy. In particular, it is said to accurately detect human movement and abnormal behavior even in challenging external environments such as fog, snow, and night.

Limitations of security AI

Security AI is an important tool that many companies and organizations are adopting to combat the growing cyber threats in the modern world. However, these technologies also have their own limitations and challenges. As security AI advances, we must recognize that it requires continuous research and development to understand and overcome its limitations.

Limitations 1. Technical limitations

Security AI offers many benefits and innovative opportunities. However, it still has technical limitations.

  • Data poisoning
    AI models rely heavily on large datasets for training. If an attacker can inject manipulated data into the training set, it can affect the performance and behavior of the AI system. This can lead to biased or inaccurate results, making the system vulnerable or unreliable.
  • Unintended consequences
    AI systems are trained on historical data that may contain biases or reflect societal prejudices. If these biases are not adequately addressed, AI can perpetuate discrimination or unfair practices, creating social and ethical problems.
  • Lack of explainability
    Some AI algorithms, such as deep learning neural networks, are so complex and difficult to interpret that they are often referred to as "black boxes." A lack of explainability can make it difficult to understand how an AI system makes decisions. This can hinder your ability to effectively detect and respond to potential security threats.
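The data-poisoning risk listed above can be illustrated with a toy example. This is a deliberately simplified sketch (the coordinates and labels are made up) using a 1-nearest-neighbor classifier: injecting a single mislabeled sample near normal traffic flips the prediction for a nearby point.

```python
def predict_1nn(samples, point):
    """1-nearest-neighbor classifier over ((x, y), label) samples."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    nearest = min(samples, key=lambda s: dist2(s[0], point))
    return nearest[1]

clean = [((0, 0), "benign"), ((1, 0), "benign"),
         ((10, 10), "malicious"), ((11, 10), "malicious")]
print(predict_1nn(clean, (1, 1)))  # 'benign'

# An attacker injects one mislabeled sample near the benign cluster.
poisoned = clean + [((1, 1), "malicious")]
print(predict_1nn(poisoned, (1, 1)))  # now 'malicious'
```

Production models are less brittle than 1-NN, but the principle holds: if the training pipeline accepts untrusted data, an attacker can steer the model's decisions, which is why data provenance checks matter.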

Limitations 2. AI Security Vulnerabilities as Generative AI Advances

Risks Associated with Generative AI
Risks Associated with Generative AI (Source: Cybersecurity Snapshot: What, Me Worry? Businesses Adopt Generative AI, Security Risks Be Damned)

Advances in AI technology are a double-edged sword for the security industry, especially in the area of generative AI like ChatGPT. ChatGPT has the potential to be exploited to find security vulnerabilities or generate source code that can be utilized for hacking.

Generative AI models like ChatGPT tend to refuse to answer questions about direct malicious code creation or malware development. However, hackers frequently attempt to use prompt engineering techniques to get the answers they want or to generate code that helps them hack. This is because these models refuse only explicitly malicious questions, yet can still generate information that can be exploited to work around those refusals.

Therefore, the security AI vulnerabilities that have emerged as a result of advances in generative AI include:

  • Potential to generate information that can be used to create malware and hacking tools
  • Concerns about leaking information outside of legitimate questions through techniques like prompt engineering
  • Conversely, generative AI models themselves can be used as workarounds and analytical tools.

Of course, the flip side is that generative AI models can also be used as hack analysis tools. Many domestic security companies are now also working to utilize AI technology in a variety of ways.

Conclusion: only organizations with a deep understanding of AI can overcome the vulnerabilities of security AI.

One way to overcome vulnerabilities in security AI is through "AI red teams". Launched in 2018, the Microsoft AI Red Team researches ways to identify and respond to AI system vulnerabilities. In 2020, it succeeded in building a software tool that can detect, respond to, and mitigate AI system threats through industry-academia collaboration. In the same year, it released a tool to automatically test AI system security as open source. Earlier this year, it released its 'AI Security Scanner' on GitHub and has been actively working on it ever since.

The Google AI Red Team is working on a project to prevent AI algorithms from being abused for political purposes. Similarly, OpenAI recognized that the GPT-3.5 model used in ChatGPT was vulnerable to exploits such as hacking, and is modifying its datasets and algorithms to overcome this. The presence of "AI red teams" at global big tech companies highlights the fact that even AI capable of dramatic things needs constant human research and development.

Overcoming the weaknesses and limitations of security AI requires an understanding of training data. Security AI is particularly dependent on the integrity of the training data. Therefore, when training an AI model, you need to build training data that takes into account the diversity and representativeness of the data. You should also focus on maintaining the accuracy and consistency of the labeled data.
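One concrete way to maintain the consistency of labeled data, as described above, is to flag samples that appear in the dataset with conflicting labels. A minimal sketch (the event strings and labels are hypothetical):

```python
from collections import defaultdict

def find_label_conflicts(dataset):
    """Return samples that appear with more than one label.

    dataset: iterable of (sample, label) pairs.
    """
    labels = defaultdict(set)
    for sample, label in dataset:
        labels[sample].add(label)
    return {s: lbls for s, lbls in labels.items() if len(lbls) > 1}

data = [("login from new device", "benign"),
        ("login from new device", "malicious"),
        ("password reset", "benign")]
print(find_label_conflicts(data))
# {'login from new device': {'benign', 'malicious'}}
```

Conflicting labels on identical samples are a common source of noisy training signal; surfacing them for human review is a cheap first step before retraining a security model.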

In addition, to build excellent security AI, it is necessary to understand and collaborate with AI-based business models. Like an "AI red team,"it is necessary to recognize that AI needs a human touch and continue to manage and develop specialized personnel. Therefore, in order to fully utilize security AI, it is necessary to further respond to the introduction of new algorithms and technologies to reflect the latest trends and continuously improve the performance and accuracy of AI models to respond to security threats. At DataHunt, we look forward to the many advances in security AI and are preparing for them.


  1. Redefine Security with AI-Enabled Surveillance Systems | SunTec AI
  2. Security Command Center Overview | Google Cloud
  3. Visual AI enables smarter CCTV than humans - 정보통신신문
  4. Assessing the pros and cons of AI for cybersecurity | Security Magazine
  5. How far have domestic security companies come in utilizing AI...still focusing on detection and analysis - Byline Network
  6. What Does a Big Tech 'AI Red Team' Do - 지디넷코리아
