
What is AI? - Definition, history, types, applications

A primer on AI concepts, types, and uses

2023.07.12 by Sangsun Moon

What is AI?

Definition

Artificial intelligence (AI) is the science of building machines that can think like humans. In practice, it means using computers to perform tasks that have traditionally required human intelligence.

AI technology can process larger amounts of data than humans in a variety of ways. Artificial intelligence is the development of computer systems that can learn, think, and act in ways similar to human intelligence. AI is being developed based on research from a variety of disciplines, including computer science, data analytics, statistics, neuroscience, philosophy, and psychology.

Based on this, AI can be used for data analysis, prediction, classification, natural language processing (NLP), recommendations, intelligent data import, and more. It is also used in a variety of fields, including self-driving cars, robotics, medical diagnostics, financial analysis, and customer service.

Advantages

There are many benefits to leveraging AI technology.

  • Workflow and process automation
AI can automate workflows and processes and work independently and autonomously. It can also take over repetitive, simple tasks such as document verification, text transcription, or answering simple customer questions, so your human capital can focus on more important things.
  • Reduce human error
AI processes more information faster than humans and is good at finding patterns and relationships in data that humans might miss. AI also stays on track with its assigned tasks: automation and algorithms that follow the same process every time can reduce the manual errors humans make in data processing, analytics, manufacturing, and other work.
  • Accelerate research and development
The ability to quickly analyze vast amounts of data can accelerate research and development.

Global interest in AI on Google Trends

Since the end of 2022, search queries for AI have surged both in Korea and worldwide. In 2022, the global AI market was valued at $69.25 billion, and 35% of companies worldwide were using AI. Interest keeps growing because AI offers in-demand skills, helps us understand new technologies, and can improve our problem-solving and decision-making.

Strong vs. weak AI

Strong AI vs. Weak AI

The difference between strong and weak AI depends on the purpose of the AI deployment.

What is Weak AI (WAI)?

  • AI applications designed to automate tasks that require specific cognitive skills.
  • 'Weak' means that the application is focused on specific cognitive functions.
  • Machine learning models tuned for specific tasks, such as object recognition, chatbots, personal voice assistants, autocorrect systems, or Google search algorithms.
  • ChatGPT, Midjourney, Stable Diffusion, DALL-E, Bard, etc.

What is Strong AI (SAI)?

  • The idea that an AI's computational abilities, such as analytical thinking and other intellectual capacities, can fully mimic the human brain.
  • Does not rely on models tuned to narrow tasks.
  • Has the potential to simulate human brain function and handle general tasks.
  • Can adapt to evolving technology systems and environmental changes over time.

Here's a quick summary of the differences between Weak AI and Strong AI.

  1. Unlike weak AI, which automates specific processes to increase efficiency in various fields, strong AI is designed to mimic the functioning of the human brain, allowing it to perform a wide range of tasks "like a human."
  2. Weak AI relies on specific datasets to learn patterns and perform repetitive tasks. Strong AI, on the other hand, learns from vast and diverse amounts of data.
  3. Predictions and results from weak AI systems are reliable because they are consistent. Strong AI, on the other hand, takes a problem-solving approach to more complex and creative tasks, which can produce less predictable and less reliable results.

AI Technology Categories

Artificial intelligence systems mimic behaviors and characteristics of human intelligence, learning, reasoning, and understanding in order to perform tasks. This has led to remarkable advances across a variety of industries. In this section, we introduce four of the most prominent categories of artificial intelligence technology.

Machine learning

  • An AI application that automatically learns and improves from experience instead of being programmed to perform a specific task.
  • Uses algorithms such as unsupervised, supervised, and reinforcement learning.
  • Output accuracy varies with data quality (see the sketch below).
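
To make this concrete, here is a minimal sketch of supervised machine learning using scikit-learn. The iris dataset and the k-nearest-neighbors classifier are illustrative assumptions, not something the article prescribes; any labeled dataset and model would follow the same learn-from-data workflow.

```python
# Requires: pip install scikit-learn
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Hold out a test set so accuracy reflects unseen data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# The model "learns from experience" (the training data) rather than
# being explicitly programmed with classification rules.
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

# Output accuracy depends on data quality, as noted above.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```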

NLP

  • Teaches computers to process human language to facilitate interaction between humans and computers.
  • Used for speech-to-text, text-to-speech, interactive voice response (IVR), translation, and more (a minimal example follows).
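
As a hedged illustration, the snippet below uses the Hugging Face transformers pipeline API for a common NLP task, sentiment analysis. The library choice and the example sentence are assumptions for demonstration only.

```python
# Requires: pip install transformers (plus a backend such as PyTorch).
from transformers import pipeline

# The pipeline downloads the library's default pretrained sentiment
# model on first use.
classifier = pipeline("sentiment-analysis")

result = classifier("I love how naturally this assistant understands me!")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```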

Automation and robotics

  • Aims to enable machines to perform monotonous and repetitive tasks to achieve cost-effective results and improve productivity and efficiency.
  • Applies machine learning and neural networks to automation.
  • CAPTCHAs, used to block automated fraud in online financial transactions, are a familiar example.
  • RPA (Robotic Process Automation) builds large numbers of repetitive tasks that can adapt to changing situations.

Computer Vision

  • Captures visual information with a camera, converts analog images into data, and processes them.
  • Used for signature identification and medical image analysis through pattern recognition, object detection, and more (a minimal sketch follows).
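
Here is a minimal computer-vision sketch with OpenCV, turning an image into data and extracting structure from it. The file path is a placeholder, and Canny edge detection stands in for the heavier detection models used in practice.

```python
# Requires: pip install opencv-python
import cv2

image = cv2.imread("input.jpg")                  # scene captured as a pixel array
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # simplify the data for processing
edges = cv2.Canny(gray, 100, 200)                # detect object boundaries (edges)
cv2.imwrite("edges.jpg", edges)                  # save the processed result
```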

History of AI

History of Artificial Intelligence (source: Javatpoint)

The First AI Boom - GOFAI

Interest in AI really took off in the mid-1950s. In 1958, the first programming language for AI research, LISP (an acronym for List Processing), was created. The robotic processes that sprang up at the time were the precursors to modern-day autonomous driving, robotics, and chatbots.

The Turing Test, which first appeared in Alan Turing's 1950 paper "Computing Machinery and Intelligence," has since become a metric used by experts to measure computer intelligence.

What is the Turing Test? (source: TechTarget)

The Turing Test checks whether a machine exhibits intelligent behavior equal to, or indistinguishable from, that of a human. A human judge holds a natural-language conversation with two other parties, one human and one machine. If the judge cannot reliably distinguish the machine from the human, the machine is deemed to have passed the test.

Of course, the Turing Test isn't perfect - it doesn't measure a machine's ability to understand or reason, only its ability to communicate like a human - but it was an important milestone in the field of artificial intelligence, and it's still widely used today.

Second AI Boom - Expert systems

However, the first "AI winter" arrived when research into solving specific problems with computerized reasoning and search reached its limits. Computers at the time had far less power than today, and applications like computer vision or NLP require huge amounts of knowledge about the world. Back then it was not only impossible to build such databases, it was also unclear how a system could learn from that much information.

By the 1980s, AI research was again experiencing rapid growth and interest. Deep learning techniques and the use of expert systems became more popular, enabling computers to learn from their mistakes and make independent decisions. After the first meeting of the Association for the Advancement of Artificial Intelligence was held at Stanford, the first expert systems appeared on the commercial market and more sophisticated models emerged.

Architecture of an expert system (source: TechTarget)

Alongside expert systems, neural network research was revitalized. In 1982, physicist John Hopfield introduced the Hopfield net, a form of neural network that learns and processes information in a completely new way. Around the same time, Geoffrey Hinton and David Rumelhart popularized a learning method called backpropagation, which made it practically possible to train multi-layer perceptrons and solve nonlinear problems.
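
To show what backpropagation does in practice, here is a tiny two-layer network learning XOR, the classic nonlinear problem a single perceptron cannot solve. The architecture and hyperparameters are illustrative assumptions, not historical details.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR labels

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # converges toward [[0], [1], [1], [0]]
```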

The Third AI Boom (now) - Machine learning

However, even as the hardware power of desktop computers grew by leaps and bounds, it was pointed out that expert systems requiring expensive dedicated hardware were difficult to maintain. And while expert systems are useful, they help only in a few specialized situations, which leaves general-purpose problems unaddressed.

However, advances in hardware capabilities have made it possible to harness computing power for AI research like never before. This is the era of deep learning.

In 1997, Deep Blue, developed by IBM, became the first computer to defeat a reigning world chess champion. Since then, the AI industry has produced deep-learning-based successes such as AlphaGo and Libratus, along with the rise of self-driving cars. The societal excitement and interest this generated has been dubbed the third AI boom, which continues to this day.

Approaches

The two most popular methodologies for AI research are bottom-up and top-down. The bottom-up view holds that by interpreting the neural networks and analyzing the chemistry of the human brain, an electronic model of the brain, and with it artificial intelligence, can be created. Conversely, the top-down approach treats tasks that require intelligence as problems to be solved with algorithms. The bottom-up approach aims for data-centric AI, while the top-down approach aims for model-centric AI.

What is Data-Centric AI All About and Why Is It Important? (source: Datafloq)

Data quality is a challenge that plagues nearly every industry. As datasets grow larger, it becomes difficult to ensure quality without algorithmic help. To overcome the drawbacks of low-quality training data, machine learning systems trained on large datasets have relied on a great deal of manual human work. These efforts have not fully solved the data quality problem, however, which is why automated workflows and disciplined engineering principles are needed to ensure that models are trained on clean data, and why recent research emphasizes data-centric AI across a variety of applications.
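
As a sketch of what disciplined data-quality engineering can look like, the snippet below runs simple checks on a labeled dataset before training and drops the problem rows. The file name and the "text" and "label" columns are hypothetical.

```python
import pandas as pd

df = pd.read_csv("training_data.csv")  # hypothetical labeled dataset

report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_labels": int(df["label"].isna().sum()),
    "empty_texts": int((df["text"].str.strip() == "").sum()),
}
print(report)

# Keep only clean rows so the model is trained on quality data.
clean = (
    df.drop_duplicates()
      .dropna(subset=["label"])
      .loc[lambda d: d["text"].str.strip() != ""]
)
clean.to_csv("training_data_clean.csv", index=False)
```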

Getting Started with Generative AI

Eventually, more sophisticated AIs gained the creativity to go beyond task execution and generate original output that inspires us. This is called generative AI. Generative AI as we know it today grew out of neural network research. The Generative Adversarial Network (GAN), developed in 2014 by Ian Goodfellow's team, along with Variational Autoencoders (VAEs) and Recurrent Neural Networks (RNNs), began to demonstrate the ability to generate new content.
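
The GAN idea is concrete enough to sketch in a few lines: a generator learns to produce samples while a discriminator learns to tell them apart from real data. This PyTorch toy fits a simple 1-D Gaussian; every architectural choice here is an assumption for illustration, not Goodfellow's original setup.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # "real" data: N(3, 0.5)
    fake = G(torch.randn(64, 8))

    # Discriminator: tell real (label 1) from generated (label 0).
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: try to fool the discriminator into outputting 1 on fakes.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(5, 8)).detach().flatten())  # samples should drift toward ~3.0
```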

GPT-4 grades LLM outputs

In February 2023, Meta announced LLaMA, its latest AI language model. According to Meta's report, the 13B version of LLaMA already outperformed GPT-3. In fact, LLaMA performed worse than GPT when it went live, but the open-source community began freely improving it and brought it to 92% of GPT's performance in just two weeks.

Types of AI

There are four main types of artificial intelligence. Some types are not feasible with current technology, but they are areas of ongoing research and development in the AI community.

Reactive AI

Reactive machines always react to the same situation in the same way, every time. While they have the limitation of being unable to learn new behaviors or reason about the past or future, they produce predictable output, which has the advantage that you can trust the results.

ReAct: Synergizing Reasoning and Acting in Language Models

Reactive AI was pioneered by IBM's supercomputer Deep Blue, which defeated world chess champion Garry Kasparov. It has since evolved into spam filters that detect and block spam or promotions in your inbox, recommendation engines that personalize suggestions to your tastes, and more.
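
A reactive system can be as simple as a deterministic rule set: identical input always yields identical output, and nothing is remembered between calls. The keyword list below is a toy assumption, far simpler than a real spam filter.

```python
# Purely reactive: the decision depends only on the current input,
# with no memory of past messages.
SPAM_KEYWORDS = {"free", "winner", "prize", "click here"}

def is_spam(message: str) -> bool:
    text = message.lower()
    return any(keyword in text for keyword in SPAM_KEYWORDS)

print(is_spam("Congratulations, you are a WINNER! Click here."))  # True
print(is_spam("Lunch at noon tomorrow?"))                         # False
```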

Limited Memory AI - AI Today

While reactive AI is a huge step forward in the history of artificial intelligence development, it cannot do more than what it was originally designed to do. Therefore, developers have created the next type of AI, "limited memory AI," which can learn from the past and build empirical knowledge by observing behavior or data. It is characterized by using past observations along with pre-programmed information.

For example, self-driving cars use limited memory AI to read the road, observing the speed or direction of other cars and adjusting their direction or speed as needed. The process of understanding and interpreting incoming data makes it possible to drive reliably on roads with many variables. Of course, as the name implies, it's still limited, meaning that the information it gains is fleeting and cannot be stored in long-term memory.

In addition, limited memory AI is the most widely used today because it can make predictions from historical observations and perform many complex classification tasks.

Theory of Mind AI

The theory of mind AI refers to emotionally intelligent robots that look and sound like real humans. With this type of AI, machines will acquire the ability to make decisions that are truly human-like. They will also be able to understand and remember emotions and then act on them when interacting with people.

Self-aware AI

When machines are able to recognize their own emotions, as well as the emotions of those around them, they will have a level of consciousness and intelligence similar to that of humans. They will then be able to feel their own wants, needs, and emotions. Self-aware AI aims to be the most advanced type of AI based on this theory.

AI and deep learning

What is deep learning?

Deep learning is a form of artificial intelligence that teaches computers to process data in ways inspired by the human brain. It can recognize the complex ways in which pictures, text, sounds, and other data are organized to make accurate predictions and decisions. Deep learning methods can be used to automate tasks that typically require human intelligence.

Deep learning technology has become an integral part of our lives, whether it's digital assistants, voice recognition, or facial recognition. It plays a key role in powering AI applications in everyday products.

Deep Learning vs. Machine Learning

To summarize in one sentence, deep learning is a specialized subset of machine learning, which is in turn a subset of artificial intelligence. In other words, deep learning is a concept within machine learning. If you look at the definitions of the two technologies, you can see the differences.

Differences in structure

  • Machine learning is the term for when a computer learns from data.
  • Deep learning is the analysis of data in a logical structure similar to how humans draw conclusions.

Differences in forecasting methods

  • Machine learning algorithms use methods to make predictions based on patterns and inferences.
  • Deep learning uses a layered algorithmic structure called an artificial neural network (ANN).

While machine learning algorithms are a technology at the intersection of computer science and statistics, deep learning is inspired by the biological neural networks of the human brain, which is why it has a much better learning process than standard machine learning models.
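
To show the "layered algorithmic structure" in code, here is a minimal Keras network. The 784-feature input (a flattened 28x28 image) and the layer sizes are illustrative assumptions.

```python
# Requires: pip install tensorflow
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),             # raw features in
    tf.keras.layers.Dense(128, activation="relu"),   # hidden layer 1
    tf.keras.layers.Dense(64, activation="relu"),    # hidden layer 2
    tf.keras.layers.Dense(10, activation="softmax")  # class probabilities out
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # each Dense layer stacks on the last: "deep" = many layers
```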

How to learn deep learning

How deep learning learns

Deep learning can be categorized into three learning methods, each with representative algorithms, described below.

Supervised Learning: A method in which a computer is trained on data whose correct answers are known (labeled data) so that it can predict outcomes for new, incoming data.

  • Regression: Linear regression, ridge regression, lasso regression
  • Classification: KNN classification, Naive Bayes classification, SVM classification

Unsupervised Learning: A method that learns without being told the correct answers, finding patterns or structure in unlabeled data and clustering similar data together.

  • Clustering and pattern discovery: K-means, PCA, density estimation, association analysis

Reinforcement Learning: Given an agent and an environment for the agent to operate in, the agent's behavior is observed and rewarded based on its actions.

  • The agent learns to change its behavior for the better based on the rewards it receives (see the toy sketch below).
  • Representative algorithms: DQN, A3C
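
The reinforcement-learning loop above can be demonstrated with tabular Q-learning, the table-based ancestor of DQN. The 5-cell corridor environment below is a made-up minimal example.

```python
import random

N_STATES = 5          # cells 0..4; reward only at cell 4
ACTIONS = (-1, +1)    # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0

        # Q-learning update: adjust toward reward plus discounted future value.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)  # learned policy: move +1 (right) from every state
```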

AI applications

Artificial intelligence has become an integral part of our lives and is being utilized in a variety of fields. Here are a few key areas that continue to receive a lot of attention.

  • Image/video recognition: Face recognition, object detection, self-driving cars, etc.
  • Translation: Website/document, conversation translation, etc.
  • Predictive modelling: Predicting customer behavior, predicting product demand, predicting fraud, etc.
  • Data analytics: Optimizing business processes, identifying trends, etc.
  • Cybersecurity: Using AI to detect and prevent cyberattacks

What is Generative AI?

Market Map: Generative AI for Virtual Worlds (source: Jon Radoff, Building the Metaverse, Medium)

Generative AI is a type of artificial intelligence that can generate new content, such as text, images, or music. Generative AI models are trained on large datasets of existing content and use this data to learn the patterns and rules by which that content is created. As a result, generative AI can produce new content similar to the content it was trained on.
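
That core loop (learn patterns from existing content, then sample new content that resembles it) can be shown with a toy word-level Markov chain. The corpus is a placeholder; real generative models replace this simple counting with deep neural networks.

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": record which word tends to follow which.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

# "Generation": sample new content from the learned patterns.
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(transitions.get(word, corpus))
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the rug ..."
```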

ChatGPT reached 10 million users within 40 days of launch. As of February 2023, its estimated user count exceeded 200 million, and paying subscribers reached 1 million. According to industry figures, ChatGPT has about 2.2 million users in Korea, and it is widely used by IT giants and startups alike. Many companies are now jumping into developing their own generative AI services, citing security vulnerabilities and the first-mover advantage. We can expect new applications comparable to ChatGPT to emerge in the near future.

Leveraging AI by industry

AI applications, which incorporate expert systems, natural language processing, speech recognition, and more, are all around us in the form of "virtual assistants". Recently, AI has been actively used not only in our daily lives but also in the business world.

Marketing and finance

  • Quant, AI stock trading, commodity/stock price prediction
  • Analyze large amounts of financial data and make trades based on patterns and trends.
  • Improve customer experience, analyze customer reviews, analyze customer needs

Education and Healthcare

  • Automation of simple repetitive tasks, personalized learning, and smart content
  • Medical literature analysis, physician assistant roles

Autonomous driving and navigation

Challenges of AI adoption

According to a 2021 survey on artificial intelligence adoption by Korean companies, 53% of companies said they struggle to adopt AI because of a lack of specialized manpower. The average number of people needed for data work is 24.5, which is not a small number, and the shortage of hands-on talent makes it difficult for organizations to hire AI engineers on their own.

There is also the problem of not having enough data. Even when raw data is available, it may be degraded or corrupted during processing, or it may contain personal information that must be handled separately. Models trained on low-quality data inevitably perform poorly, so time to market can stretch out, or the project may be abandoned altogether. This is one of the reasons AI adoption in Korea is still low.

Controversies and misconceptions about artificial intelligence

Copyright controversies over training data

Recently, two American novelists filed a lawsuit against OpenAI, claiming that their books were used without permission to train ChatGPT. This is the second such lawsuit, after a US law firm sued over copyright infringement and violations of internet users' privacy.

The European Union is considering a new law that would require generative AI developers to disclose all copyrighted material used to train their systems. The legislation would categorize generative AI tools according to their level of risk, with higher transparency requirements applied specifically to high-risk AI tools. The EU says it is enacting the legislation with the aim of protecting citizens from the potential harms of AI, without hindering AI innovation.

The EU's AI Act is being touted as an important benchmark for AI regulation. The EU hopes that if passed, it will help ensure that AI is used safely and responsibly.

Controversy over job losses due to AI

The World Economic Forum (WEF) has projected that a whopping 14 million jobs will be lost due to AI, which is 2% of the world's jobs. Rapidly advancing AI technologies are disrupting the labor market, raising concerns about mass job losses in certain roles, such as bank tellers and data entry clerks.

In fact, many jobs are being replaced by AI. This is because AI can automate mundane tasks without the need for humans. On the other hand, some argue that AI could lead to lower wages rather than fewer jobs. With the introduction of GPS and platforms like Uber, which provide automated directions, some experts have suggested that drivers will experience a wage decline of around 10% rather than fewer jobs.

Conclusion: AI continues to evolve, it's what you do with it that counts

From detecting and categorizing objects in images to autonomous driving, artificial intelligence has many applications. In any of these areas, the keyword "AI" conjures up images of efficiency and accuracy, as well as the perception of being powered by cutting-edge technology to achieve the user's goals. It also makes users feel like they're using the best technology available, and gets them excited about the possibilities that AI offers.

But in the long run, more important than generating buzz with AI is earning your users' trust. For your service or product to become a successful business model, you need to think carefully about how you will use AI. This is a process that requires detailed research and design, down to how you build your models and training data.

At Datahunt, we actively seek to understand the structure of our clients' industries in order to leverage AI for their business. We engage with them to understand what data they need and what the key features of their models are, and then we implement them. Because the first step to a high-performing model is high-quality data.
