The background, basic principles
and the main applications of
Artificial intelligence, Machine
Learning and related concepts
Editorial support was provided by Katarina van der Linden
This document benefited from substantive input from Andrea Loreggia, Francois Guichard, Noah Miller, Charles Alexandre Frei and Salvatore Sapienza
This document was produced under the supervision of Stefano Quintarelli, Chair of the Advisory Group; Nena Dokuzov, Vice-Chair of the Advisory Group; and Tomas Malik, Secretary of the Advisory Group
And the overall guidance of Elisabeth Tuerk, Director, Economic Cooperation and Trade Division, UNECE
* * * * * * * *
HISTORY OF AI
USE CASES OF AI
NEXT STEPS FOR AI
Artificial Intelligence did not emerge through a breakthrough or a relatively rapid revolution like the coming of the internet or blockchain, which reshaped our economies and societies in just a few years, the kind of revolution so typical of the technology world. Instead, Artificial Intelligence endured a relatively long process of evolution spanning decades and generations of slow improvements, discoveries, and practical applications.

Rest assured, Artificial Intelligence is an emerging and advanced technology, whose origins trace back as far as the first theoretical and practical applications of modern informatics and computers in the first half of the 20th century, but which lay dormant, waiting for its moment to rise and shine. With an abundance of data about our world now available and computing power more potent than ever before, that moment of reawakening is right now.

The resulting paper is an eloquent tribute to the wisdom and maturity of all contributors to Artificial Intelligence research and applications over the past decades, as well as to the current practitioners who have combined the accumulated knowledge and technological developments into the state-of-the-art solutions we can see and use nowadays, with potentially more to come in the upcoming years.

It is my hope that the work begun with this paper will move forward with vigour and imagination, sparking new project developments and, through joint understanding and work on standardization, bringing the true potential of Artificial Intelligence to day-to-day scenarios for practical applications in international trade processes and, through flourishing trade and economic development, bettering our world and bringing more prosperity to the generations to come.
As with every emerging technology, discussion about AI tends to produce both enthusiasm and scepticism. Today, in this relatively early stage of its practical application, AI is the source of many controversies and grand proclamations. Even now, AI is still perceived as one of the big existential threats to humanity, as famously expressed by Stephen Hawking: “the development of full artificial intelligence could spell the end of the human race.”1 Yet, AI can also be perceived as one of humankind’s biggest opportunities. If handled well, an AI application can excel in various fields of activity historically reserved for humans, even outperforming them.

When human activities (or parts of them) are digitized, their performance and precision are greatly improved.

Traditional algorithmic computer programming has dramatically empowered humans in repetitive, procedural activities. An important branch of artificial intelligence, machine learning, empowers humans to address different types of activities such as perception, classification and prediction. In this new application space, computers can operate at a much higher speed and on a larger scale than humans can. Humans performing repetitive perception, classification and prediction tasks are empowered by machines, augmenting our capabilities at unprecedented levels.

When AI is mentioned in general discussions, many people still imagine the AI archetype presented in popular media over the last decades, a malevolent out-of-control computer such as the voice of HAL 9000 from 2001: A Space Odyssey, Skynet from Terminator, or Agent Smith from The Matrix. But the banal reality is that artificial intelligence, as the computer science and developer communities understand it, is already all around us in some way or another. It is a part of our everyday lives without most of us even realizing or noticing it.

In order to shed some light on this often misunderstood technology, we’ll first provide a brief history, outlining the basic principles behind AI and presenting some types of problems it can solve. Next, we’ll describe some existing use cases, and conclude by presenting some potential near-future applications relating to international business and other advanced technologies such as the Internet of Things (IoT), blockchain and distributed ledger technologies (DLT), and concepts such as the smart city and Industrial Revolution 4.0.

1 Rory Cellan-Jones, “Stephen Hawking warns artificial intelligence could end mankind”, BBC News, 2 December 2014. Available at: bbc/news/technology-
ARTIFICIAL INTELLIGENCE (AI) IS ONE OF THE MOST DISCUSSED BUT ALSO THE MOST MISUNDERSTOOD AND UNDERESTIMATED ADVANCED TECHNOLOGIES OF OUR TIME.
This test was later disputed by philosopher John Searle and his also-famous Chinese room thought experiment5. He argued that if a computer simply makes the correct connection between an input (a question in the Chinese language, represented by a series of characters) and the correct output (an answer, again represented by a series of characters), it does not necessarily imply that the machine understands the Chinese language. It would appear to have appropriate answers and pass the Turing test as an intelligent machine, but it would merely simulate understanding; and without understanding, we cannot consider the machine to be thinking or to have a mind. This argument led to the use of concepts such as strong AI and weak AI, distinctions still in use today (to be explained further in this text).
The term “artificial intelligence” was first coined by John McCarthy as “the science and engineering of making intelligent machines”6 in a document created for a conference on the campus of Dartmouth College in 1956. This conference kickstarted serious AI research in the following decades.
One of the first successful applications of AI algorithms dates back to 1962 and is attributed to another AI pioneer, Arthur Samuel, who coined the term “machine learning” and who created a computer program that could learn and play a game of checkers (draughts) against itself, evolving by finding new strategies. Eventually, it learned the game to such a degree that it was consistently able to beat its creator. This led to a series of famous man-versus-machine competitions, in which machines have slowly conquered one field after another, with possibly more to come.
The reason why classic board games are so popular in AI experiments is that their framework and rules are relatively easy to model and transfer to the virtual world. All possible player behaviours are predefined and can therefore be translated into a programming language as a set of conditions and instructions. Moreover, all context is perfectly visible to all players. The game of checkers is relatively simple due to its rules and the limited number of possible combinations of moves, but more complex games introduce new challenges and require different approaches to transfer their models. Strategic, real-world behaviours (such as driver decisions) introduce many additional sets of variables (e.g. the laws of physics, driving rules and human values), and translating them into algorithms is one of the ultimate challenges of AI.
####### Figure 1
####### The principle of Turing’s test
In a standard algorithmic approach to solving problems, we define, analyze and understand the problem, create a model of the scenario and construct an algorithm that is able to try different combinations of approaches to solving the problem. One of the fundamental building blocks of any algorithm is a conditional decision: IF some condition occurs, THEN do some action. The complexity depends on the variety of possible inputs and outputs, the number of states that can occur, and the possible outcomes. AI, however, is not primarily based on the deterministic logic of conditional decisions; AI is an umbrella term for different kinds of technologies, from the classical rule-based logic approach (e.g. expert systems) to machine learning, arguably the most relevant approach today, which uses probabilistic reasoning algorithms based on observation and statistics.

Where some relatively trivial problems can be algorithmically solved, the game of chess is a completely different problem. The complexity (possible variations) of a single game of chess is explained by its lower bound, also known as the Shannon Number (10^120), which is famously higher than the number of atoms in the observable universe (10^80). This means that there isn’t enough available matter for the memory demands of a brute-force algorithm trying all possible moves and choosing the best one. This rapid growth in a problem’s complexity is known as “combinatorial explosion”; in such scenarios, different approaches must be used. Applications of artificial intelligence use algorithms that autonomously extract correlations from the observed data (e.g. checker moves), building a statistical model that can later be used to predict the best move to perform in a given situation.
Artificial general intelligence (AGI), also known as strong, deep, true or full AI, is the hypothetical state of artificial intelligence in which a machine can learn and understand tasks in the same manner as a human being, with cognitive abilities, reasoning and problem-solving.

Artificial general intelligence is the type of AI that could theoretically pass the Turing test and the Chinese room test, and is the type of AI that research is trying to achieve. It is the mainstream understanding of AI reflected in sci-fi works.

Artificial superintelligence (ASI) is the next hypothetical step in AI. ASI would not only resemble human intelligence and behaviour, but would be able to exceed and surpass it.

Narrow AI (or weak AI) is designed to handle very narrow and specific tasks, while still resembling some aspects of human intelligence.
TYPES OF AI
IN SIMPLE TERMS, AI IS A SYSTEM THAT SIMULATES SOME ASPECTS OF HUMAN/BIOLOGICAL INTELLIGENCE IN AN ARTIFICIAL WAY
AI PRINCIPLES
Going back to board games, task T is to play a game, experience E is all matches of the game, and performance measure P can be the win/loss ratio. In other words, the win/loss ratio grows as the algorithm plays more rounds of the game.
Machine learning (ML) is a subfield of AI that aims to create a learning algorithm that gradually improves itself with more experience: more algorithm runs, more processed data, and (in some cases) feedback provided from external sources, either by the environment or by a human.
Tom Mitchell provides a more modern definition:
Similar to the human approach to learning, where some skills are obtained in school by learning from teachers and subject matter experts, and other skills are discovered in the environment (through our own observations), a machine can also approach learning in different ways, each suited to different problem-solving environments and goals.
In principle, AI algorithms work by processing a large amount of data and trying to recognize patterns in it. Based on these patterns, algorithms are able to interpret the data or take some predefined action, such as creating predictions, classifying data based on their features, or suggesting or performing some automated action.
In its inner workings, the AI algorithm consumes data and uses this data to retrain itself. It does this by creating a model for understanding the data and its attributes, to be used in future decisions. These models are then continuously reviewed, improved, and adapted with more input data and production runs. This simulates learning behaviour as we understand it from our human learning perspective.
When AI is asked to perform a task, it uses its previous experience-based model to make a best guess in understanding the input data and to suggest or perform an action as an output.
ARTIFICIAL INTELLIGENCE ALGORITHM PROCESS EXPLANATION
TYPES OF MACHINE LEARNING: SUPERVISED, UNSUPERVISED, SEMI-SUPERVISED AND REINFORCEMENT LEARNING
“A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.”11
11 Tom Mitchell, Machine Learning (New York, NY, McGraw Hill, 1997). Page 2.
####### Figure 3
####### Basic principle of machine learning
####### Source: Andrew Ng
####### Figure 4
####### Supervised machine learning
Supervised learning is like having a trainer. The algorithm is provided with labelled training data that describes the parameters of the input data. The algorithm is then trained to process such data, recognizing patterns in it and associating the data with the provided labels. This process, which we call training, is performed by skilled professionals to obtain a statistical model of the scenario under study. In the next step, the algorithm is provided with new, unlabelled data. Based on previous observations, the algorithm will attempt to apply the learned mechanism to make a best guess and assign the unlabelled data additional parameters, typically sorting it into categories or performing predictions.
Supervised machine learning algorithms are the most common type in current commercial use. Classification and regression problems (defined below) are typical cases of supervised learning.
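The train-then-predict loop described above can be sketched with a deliberately tiny "nearest centroid" classifier. This is an illustrative toy, not a production technique: the two-dimensional samples and the "small"/"large" labels are invented for the example.

```python
# A minimal sketch of supervised learning: build a model (one centroid
# per class) from labelled data, then assign new, unlabelled points to
# the closest learned centroid. All data below is invented.

def train(samples, labels):
    """Build the model: the average (centroid) of each labelled class."""
    sums, counts = {}, {}
    for (x, y), label in zip(samples, labels):
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def predict(model, point):
    """Best guess for unlabelled data: the label of the nearest centroid."""
    px, py = point
    return min(model, key=lambda lbl: (model[lbl][0] - px) ** 2
                                      + (model[lbl][1] - py) ** 2)

# Training step: labelled examples of two hypothetical categories.
samples = [(1.0, 1.2), (0.8, 1.0), (5.0, 4.8), (5.2, 5.1)]
labels = ["small", "small", "large", "large"]
model = train(samples, labels)

# Inference step: classify a new, unlabelled point.
print(predict(model, (0.9, 1.1)))  # → small
```

The two functions mirror the two phases in Figure 4: `train` corresponds to fitting the statistical model on labelled data, and `predict` to applying it to new, unlabelled data.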
####### Figure 6
####### Supervised machine learning
A regression task is one that analyzes continuous data in an attempt to find relationships between variables (typically between one dependent and several independent variables) to predict a theoretical outcome for which there are no available data measures, such as future predictions.

Regression is mostly used in prediction and forecasting models; a prediction algorithm learns and creates its models from the features of current or historical states of variables to create and predict continuous value output.

This can lead to a simple linear regression relationship or to a more complex logarithmic, exponential or polynomial relationship of different degrees. By extending values into unknown regions while following discovered variable relationships, we can predict their positions outside of the scope available to us in the training data sets.
- A 1st-degree polynomial (linear function) shows “underfitting” of the prediction model to the training data, missing most of the data points.
- A 3rd-degree polynomial fits “just right”, following the pattern of the training data closely enough to predict new data points outside the training set.
- A 15th-degree polynomial shows “overfitting”: the prediction model aligns too closely to the training data and would perform poorly on data outside the training set.
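The underfitting behaviour above can be reproduced in a few lines. The sketch below assumes NumPy is available and uses invented sample points drawn from a noisy cubic curve: a 1st-degree (linear) fit cannot follow the cubic pattern, while a 3rd-degree fit matches it far more closely.

```python
# A sketch of polynomial regression and underfitting, using invented
# data generated from a cubic relationship plus a little noise.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 20)
y = x**3 - x + rng.normal(scale=0.05, size=x.size)  # noisy cubic data

def fit_error(degree):
    """Mean squared error of a least-squares polynomial fit of the
    given degree, measured on the training points themselves."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# The linear model underfits; the cubic model tracks the true pattern.
print(fit_error(1), fit_error(3))
```

Note that a very high-degree fit (e.g. 15) would push the training error even lower while generalizing worse, which is exactly the overfitting case in the figure; training error alone cannot detect it, which is why held-out test data is used in practice.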
Unsupervised machine learning, on the other hand, is an approach where a learning algorithm is trained by supplying it with unlabelled input data and without any specific guidance, desired outcome, or correct answer. The algorithm tries to analyze the data on its own to identify data features and to recognize any underlying patterns and structures. In this type of machine learning there is no feedback based on predicted results.
Output results and data interpretation can be used in modelling methods such as clustering, association, anomaly detection, or dimensionality reduction, as explained below.
Clustering, in unsupervised learning, involves looking for patterns and common connections in unlabelled data sets and creating groups of data based on common attributes. Clustering can be seen in the marketing sector, where it is used for customer segmentation, that is, for creating “buckets” of customers that share some common attributes.
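The segmentation idea can be sketched with a minimal one-dimensional k-means, the classic clustering algorithm. The "customer spend" values below are invented for illustration, and the crude seeding is chosen for brevity, not robustness.

```python
# A minimal sketch of clustering: 1-D k-means alternating between
# assigning each value to its nearest centroid and recomputing the
# centroids. The spend values are invented for illustration.

def kmeans_1d(values, k=2, iterations=10):
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]  # crude seeds
    groups = [[] for _ in range(k)]
    for _ in range(iterations):
        groups = [[] for _ in range(k)]
        for v in values:                     # assignment step
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            groups[nearest].append(v)
        centroids = [sum(g) / len(g) if g else centroids[i]  # update step
                     for i, g in enumerate(groups)]
    return centroids, groups

spend = [12, 14, 11, 13, 95, 102, 98, 99]   # two obvious customer segments
centroids, groups = kmeans_1d(spend)
print(sorted(centroids))
```

No labels were supplied: the algorithm discovers the two spending "buckets" purely from the structure of the data, which is the defining trait of unsupervised learning.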
####### Figure 8
####### Example of clustering
####### Figure 7
####### Unsupervised machine learning
3D “Swiss Roll“ manifold “unfolded“ into 2D space
Anomaly in a time series; anomaly in a Gaussian distribution
When we analyze data, we may find that it has many dimensions. In healthcare data we usually study all of a patient’s history and physiometric parameters. The insurance industry is trying to create more precise models by taking data from a variety of sources for more accurate risk assessments. Web-streaming data may include hundreds or thousands of different dimensions with strongly correlated data.

In very large data sets, typically produced through big data, multiple dimensions may contain redundancies (e.g. height in feet, metres, and centimetres) or data that is irrelevant for a specific need. Dimensionality reduction simplifies data analysis by creating a subset of data features or extracting specific sets of data features to create a new data set.

Dimensionality reduction may be used in image recognition. An AI dealing with form recognition might, for example, convert a coloured high-resolution image into a black-and-white, lower-resolution image with a single colour intensity value per pixel, which is sufficient for subsequent recognition tasks.
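The image-reduction idea can be sketched in two steps, assuming NumPy: collapse the three colour channels to one intensity value per pixel, then halve the resolution by averaging pixel blocks. The 4x4 "image" is invented for illustration.

```python
# A sketch of dimensionality reduction on an image: RGB -> grayscale,
# then downsampling. The tiny random image stands in for a real photo.
import numpy as np

rgb = np.random.default_rng(1).integers(0, 256, size=(4, 4, 3))  # H x W x 3

# Step 1: colour -> single intensity channel (simple average of R, G, B).
gray = rgb.mean(axis=2)                              # 4 x 4

# Step 2: lower the resolution by averaging each 2x2 block of pixels.
small = gray.reshape(2, 2, 2, 2).mean(axis=(1, 3))   # 2 x 2

print(rgb.size, gray.size, small.size)  # → 48 16 4
```

The data shrinks from 48 values to 4 while keeping the coarse structure, exactly the trade-off dimensionality reduction makes: less data to process, at the cost of fine detail the downstream task hopefully does not need.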
####### Figure 10
####### Examples of anomaly detection
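Anomaly detection in roughly Gaussian data, as in Figure 10, can be sketched with the simplest possible rule: flag any reading more than three standard deviations from the mean of normal data. The sensor readings below are invented, and the fixed 3-sigma threshold is an illustrative convention, not a universal choice.

```python
# A minimal sketch of anomaly detection: model "normal" behaviour as a
# Gaussian (mean and standard deviation of known-good readings), then
# flag new readings that fall far outside it. Data is invented.

normal = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.3, 10.0]

mean = sum(normal) / len(normal)
std = (sum((r - mean) ** 2 for r in normal) / len(normal)) ** 0.5

def is_anomaly(reading, threshold=3.0):
    """True if the reading lies more than `threshold` standard
    deviations away from the mean of normal data."""
    return abs(reading - mean) > threshold * std

print(is_anomaly(25.0), is_anomaly(10.2))  # → True False
```

Fitting the mean and deviation on known-normal data first, rather than on the contaminated stream itself, matters: a large outlier included in the fit would inflate the standard deviation and hide itself.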
####### Figure 11
####### Examples of dimensionality reduction
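The kind of reduction shown in Figure 11 is often done with principal component analysis (PCA). The sketch below assumes NumPy and uses invented, strongly correlated 2-D points, so a single direction captures almost all of the variance and the data can be collapsed from two dimensions to one.

```python
# A minimal sketch of PCA: centre the data, take the covariance
# matrix's leading eigenvector, and project onto it (2-D -> 1-D).
import numpy as np

rng = np.random.default_rng(2)
t = rng.normal(size=100)
X = np.column_stack([t, 2 * t + rng.normal(scale=0.1, size=100)])  # 100 x 2

Xc = X - X.mean(axis=0)               # centre the data
cov = (Xc.T @ Xc) / (len(Xc) - 1)     # 2 x 2 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order

# Project onto the leading principal component: one value per point.
reduced = Xc @ eigvecs[:, -1]

explained = eigvals[-1] / eigvals.sum()   # share of variance kept
print(reduced.shape, float(explained))
```

Because the second coordinate here is nearly a multiple of the first, the leading component retains well over 95% of the variance; real data sets are rarely this clean, but the mechanism is the same.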
Dimensionality reduction is usually intended to improve computational efficiency for subsequent processes. It can help in pre-processing for further training algorithms, or it can find hidden intrinsic patterns in data. Dimensionality reduction can also be used in data compression and in data visualization, where high-dimensional data sets are reduced and displayed as 3D or 2D visualizations.

Semi-supervised learning, as the name suggests, is a combination of both supervised and unsupervised approaches. A smaller set of labelled data is provided, together with a bigger set of unlabelled data. This helps the training algorithm produce a better model and can, potentially, greatly improve the model’s performance in terms of output precision and accuracy.
In reinforcement learning (RL), an artificial agent maximizes a reward function which is designed to represent success in the domain. The idea at the core of this learning paradigm is that the agent (or system) explores the environment and receives a reward or a penalty depending on the actions it takes. In an example where an agent learns to play a video game, the agent will learn that certain moves lead to a better score while others will make it lose the game.

One of the most popular algorithms for reinforcement learning is Q-learning, where “Q” stands for “quality”, which represents the output value that the algorithm is trying to maximize.

Examples of Q-learning applications might include improving a score in a computer game, minimizing a cost, creating a logistics route, or maximizing a profit score. In 2015, the team behind Google’s DeepMind project published an article describing the process of teaching AI algorithms, using deep reinforcement learning methods, to learn in situations approaching real-world complexity when playing classic Atari 2600 games. They stated that:
####### Figure 12
####### Example of reinforced learning - agent training and feedback loop
####### Source: DeepMind
“... deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester...”12
12 Mnih, V., Kavukcuoglu, K., Silver, D. et al., “Human-level control through deep reinforcement learning”, Nature, 25 February 2015. Available at: doi/10.1038/nature
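The agent-environment feedback loop of Figure 12 can be sketched with tabular Q-learning on an invented five-state "corridor": the agent starts at state 0, can step left or right, and earns a reward only on reaching state 4. Every constant here (learning rate, discount, exploration rate, episode count) is an illustrative choice.

```python
# A minimal sketch of Q-learning: the table Q[state][action] stores the
# "quality" of each action, updated from reward plus discounted best
# future value. The corridor environment is invented for illustration.
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                      # step left / step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

random.seed(0)
for _ in range(200):                    # training episodes
    s = 0
    while s != GOAL:
        if random.random() < epsilon:   # explore: random action
            a = random.randrange(2)
        else:                           # exploit: current best action
            a = 0 if Q[s][0] >= Q[s][1] else 1
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0  # reward only at the goal
        # Q-update: nudge Q(s, a) toward reward + discounted future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned policy should now prefer "right" (action 1) in every state.
policy = [0 if Q[s][0] >= Q[s][1] else 1 for s in range(N_STATES - 1)]
print(policy)
```

Note how the reward signal propagates backwards through the table: states near the goal learn high Q-values first, and earlier states then learn from them via the `gamma * max(Q[s2])` term.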
Artificial neural networks (ANN) are virtual network structures that are designed, in their topology and behaviour, in ways that resemble an extremely simplified model of neuron cells and their connections in the biological brain. Thanks to the Internet and big data, today we have access to larger amounts of data to train these neural networks than we had in the past. Combined with extremely powerful processors, we can achieve much higher accuracy in solving problems that traditional methods could not. While some problems can be described as having discrete sets of rules for data features (that is, having no data points, or rules, in common with neighbouring sets), defining rules for image recognition is quite different. For example, to define or create rules to distinguish a cat from a dog in an image is extremely complicated, since they share many common features. That is why we use huge amounts of training to generate these rules; it is thanks to this extensive training, and the huge amount of data, that ANN can become a viable tool for providing a possible solution.

Artificial neural networks also utilize more advanced techniques to improve their performance, such as the backpropagation algorithm, which retroactively adjusts the weights between neurons by comparing the network’s output for labelled data with the expected labels, in an attempt to minimize errors and improve the performance of the neural network.
These networks consist of nodes: artificial neurons, organized into layers, with weighted connections between neurons in the layers immediately preceding and following them. The neurons in input layers receive the raw data, and the output layers produce the resulting outcome. In between there are typically a series of hidden layers that process the data. They are activated to different degrees based on the data features that the previous layer observed and propagated further. As a result, the output layer can provide some estimated output (with varying degrees of confidence) indicating that the input data, based on their observed features, have some specific quality. For example, it could identify a specific character (in OCR), an image as containing a specific object, or an email whose text is evaluated as spam.
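The forward pass through layers and the backpropagation of errors can be sketched with a tiny 2-4-1 network trained on XOR, a classic task no network without a hidden layer can solve. This assumes NumPy; the hidden-layer size, learning rate and iteration count are illustrative choices, not tuned values.

```python
# A minimal sketch of a feed-forward network with backpropagation:
# forward pass through a hidden layer, then weight updates driven by
# the error between the output and the labels (here, XOR truth table).
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR labels

W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))  # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))  # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)            # forward pass: hidden layer
    out = sigmoid(h @ W2 + b2)          # forward pass: output layer
    losses.append(float(np.mean((out - y) ** 2)))

    d_out = (out - y) * out * (1 - out)      # error at the output
    d_h = (d_out @ W2.T) * h * (1 - h)       # error pushed back a layer
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The two `d_*` lines are backpropagation in miniature: the output error is multiplied back through the weights so each layer receives its share of the blame, and the weight updates move the whole network toward lower error.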
####### Figure 14
####### Topology of artificial neural network
####### Source: playground.tensorflow
Some use cases utilizing neural networks that do not necessarily need complex network topology can perform well with just a simple structure and one hidden layer. More complex tasks use deep learning methods, where the ANNs have multiple hidden layers between the input and output layers. This layered approach allows different parts of the network to process different data features and can solve more complex problems like autonomous driving, where observing the environment and processing all the signals from vehicle sensors require a much more complex network topology.
The human brain has 100 billion biological neurons, and each neuron can be connected to 10,000 other neurons. This creates over 100 trillion synapses. Analogously, in deep learning each layer responds to different features (in image recognition these features may be light, outlines, corners, shapes, forms, movement) and progressive layers build from simple information, such as light, to more complex outcomes such as movement.
Two types of ANNs used in deep learning are convolutional neural networks and recurrent neural networks:

- Convolutional neural network (CNN): Usually, these neural networks are composed of many layers, each layer breaking down the input data into simple information, such as points. Then, through the different intermediate convolutional levels, information is aggregated to identify structured information such as edges or borders. Gradually the information is composed and recognized as structured objects. These neural networks are used to analyze images and extract information such as the presence or absence of specific objects (for example, the identification of a particular individual’s face); and

- Recurrent neural network (RNN): These neural networks can store certain pieces of information and consider the time dimension during the learning phase. They are employed to keep track of the intrinsic knowledge contained within a sequence or time series. For instance, they are employed in dialogue or voice recognition tasks and are useful in formulating meaningful answers.
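The basic operation a CNN layer performs (sliding a small filter over an image to detect a local feature such as an edge) can be shown directly. The sketch below assumes NumPy and uses an invented 6x6 image that is dark on the left and bright on the right; the Sobel-style kernel is hand-written, whereas a real CNN would learn its kernels during training. (As in deep-learning libraries, the sliding operation here is technically cross-correlation.)

```python
# A minimal sketch of one convolutional step: slide a 3x3 kernel over
# a 2-D image; large responses mark vertical edges. Image is invented.
import numpy as np

image = np.zeros((6, 6))
image[:, 3:] = 1.0              # dark left half, bright right half

# Hand-written kernel that responds strongly to vertical edges.
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)

out = np.zeros((4, 4))          # "valid" output: no padding at borders
for i in range(4):
    for j in range(4):
        out[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)

print(out[0])  # → [0. 4. 4. 0.]  strongest response at the edge
```

A CNN stacks many such filters and layers: the first layers pick up edges like this one, and deeper layers combine those edge maps into corners, shapes and, eventually, whole objects.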
####### Figure 15
####### Relationship between AI, ML and DL