AiLab Artificial Intelligence Glossary
The AiLab Artificial Intelligence Glossary lists common terms used throughout the field of AI.
We're constantly listing new terms, so please check back for updates.
The content in the AiLab Glossary is copyright to koolth pty ltd & AiLab © 2024 and may not be reproduced in whole or part without the express permission of AiLab.
A
Activation Function
A mathematical formula that calculates the output of a node (artificial neuron) in a Neural Network, based on the input to the node and some threshold level.
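As an illustration only, two commonly used activation functions (a hard threshold step and a smooth sigmoid) might be sketched in Python like this; the threshold value is illustrative, not a standard:

```python
import math

def step(x, threshold=0.5):
    # Fires (outputs 1) only when the node's input exceeds the threshold level.
    return 1 if x > threshold else 0

def sigmoid(x):
    # Smooth alternative: squashes any input into the range (0, 1).
    return 1 / (1 + math.exp(-x))
```

The step function gives a crisp on/off output; the sigmoid is preferred in practice because its smoothness allows gradient-based learning.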
Agent
Something that can perceive the world around it (using sensors) and that acts autonomously upon objects in an environment to achieve a goal. Agents can be physical (e.g., Humans are examples of Intelligent Agents) or fully embedded in software. Multiagent systems arise from agents acting together to solve more complex tasks (goals) than can be achieved individually.
Algorithm
An unambiguous and detailed set of steps to perform a task or reach a goal. Some algorithms are fairly easy to write (e.g., a recipe for making pancakes), whereas others are much more difficult (e.g., how to ride a bike). Algorithms are used to program computers (sets of instructions that expect certain data and have defined output).
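As a concrete example, Euclid's algorithm for the greatest common divisor is a classic unambiguous, step-by-step procedure with defined input and output:

```python
def gcd(a: int, b: int) -> int:
    # Euclid's algorithm: repeat one simple, unambiguous step
    # (replace the pair with the smaller number and the remainder)
    # until the remainder is zero.
    while b != 0:
        a, b = b, a % b
    return a
```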
Annotation
Data used in Supervised Learning needs to be ‘tagged’, or Annotated. An example is associating the labels (tags) of “water”, “rod”, “shore”, “sea”, “fishing”, “person” to a picture of someone trying to catch a fish at the beach.
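In code, an annotated example is simply the raw data paired with its human-assigned tags; the filename below is hypothetical:

```python
# One annotated training example: the raw data plus its labels (tags).
annotated_image = {
    "file": "beach_fishing.jpg",  # hypothetical filename
    "labels": ["water", "rod", "shore", "sea", "fishing", "person"],
}
```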
Artificial General Intelligence (AGI)
A common term used to describe machines that match human-level intelligence and that can perform functions and tasks to the same level as humans, e.g., reasoning, planning and decision making across multiple situations. Although there is currently no clear path to AGI, it is thought that once achieved (if possible), Artificial Super Intelligence will quickly follow, because machines with AGI capabilities will be able to build even smarter computer systems.
[related: Artificial Intelligence (AI)]
Artificial Intelligence (AI)
The goal of creating software to act and compute in a way that simulates human intelligence. In other words, getting computers to do things that we humans do. As you would expect from AiLab, we have a whole page about 'What is AI?'
[related: Artificial General Intelligence (AGI)]
Artificial Neural Network (ANN)
[see Neural Networks (NN)]
Artificial Super Intelligence (ASI)
A common term used to describe machines that far exceed the intelligence of the greatest human minds. ASI raises philosophical questions about whether machines that are more intelligent than humans will take over the world. In recent years (since the success of Deep Learning), debate around if and when ASI will occur has grown. Many prominent scientists and technology figures have expressed concern over the rise of ASI. However, in the short term, many AI researchers remain less concerned about ASI because there is still no clear path to how AGI, and therefore ASI, may be achieved.
Associative Learning
A learning algorithm that encodes relationships in data by associating input patterns with required output patterns. Pavlov’s Dog is a good example of Associative Learning, where over a period of time (training) a dog is presented with food at the same time a bell is rung. The food makes the dog salivate. Eventually, the dog can be made to salivate just by ringing the bell (without presenting food) because the dog has learnt to associate the bell with food, which in turn triggers salivation.
Auto-associative Network
The mapping between the input and output data of a neural network, where the network aims to produce on its output layer the same pattern as was presented on the input layer. A key feature of Auto-associative Networks is that they can reconstruct (output) the original data when presented with only a small part of the pattern, e.g., reconstruct a complete sentence from only the first few words.
B
Black Box
A system where the internal workings are hidden from view and/or investigation. Neural Networks are known as black boxes, because the knowledge (rules) used to make decisions is stored in numerical form across hundreds, thousands or even millions of Weights. The distributed nature of the knowledge, along with the sheer number of values, makes it almost impossible to understand how the network arrived at any given decision.
Note: There is current research being undertaken to explain how NNs make decisions, because this is a major barrier to the use of NNs in many mission critical scenarios (e.g., medical diagnosis, government systems, utilities such as energy and water).
Breadth-first Search
A way of searching a graph for a goal (answer) by assessing each node at the same level (sibling nodes), before searching the nodes at the next level.
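A minimal Python sketch of the idea (the graph is a hypothetical adjacency dictionary; a queue ensures all siblings are visited before their children):

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    # Visit every node at one level (siblings) before moving deeper.
    queue = deque([start])
    visited = {start}
    while queue:
        node = queue.popleft()        # oldest node first => level by level
        if node == goal:
            return True
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return False

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
```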
Brittleness
A feature of AI systems that break down quickly when presented with unknown data. In symbolic, rule-based systems, this happens when there is no rule that matches the presented data (and therefore the system does not ‘know’ what to do, resulting in failure to produce any output). Within Neural Networks, failure to Generalise to unseen data causes the system to ‘guess’ – usually poorly (although because a Neural Network is a Black Box it’s difficult to know if and when the network is brittle).
D
Deep Learning
The application of Neural Networks to complex problems. The term ‘Deep’ refers to the multiple layers of hidden nodes that are used to form very large neural networks.
[related: Neural Networks (NN)]
Depth-first Search
A way of searching a graph for a goal (answer) by assessing each node through the same branch (parent-child nodes), before searching the nodes at the next branch.
[related: Breadth-first Search]
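The branch-by-branch idea can be sketched as a simple recursive Python function (the graph is a hypothetical adjacency dictionary):

```python
def depth_first_search(graph, start, goal, visited=None):
    # Follow one branch (parent -> child) to its end before backtracking.
    if visited is None:
        visited = set()
    if start == goal:
        return True
    visited.add(start)
    return any(
        depth_first_search(graph, child, goal, visited)
        for child in graph.get(start, [])
        if child not in visited
    )

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
```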
E
Edge Case
Data that falls just over the outer limits of the data used to train a machine learning system, thereby making it difficult for the system to identify and/or act upon. As an example, autonomous vehicles can be trained to recognise animals they may encounter, e.g., dogs, cats, horses – an ‘edge case’ could be a kangaroo, because this animal is unlikely to feature in the training data. Within AI, edge cases can be one of the main delays in moving a system from the lab to the real-world.
G
Generalisation
The ability of a learning system to process data that it has not been trained on. For a Neural Network to generalise well (produce correct output for unseen data different from the training data), the data used for training must be representative of the problem being modelled and a correct network architecture needs to be employed. The power of a connectionist network is based upon its ability to provide useful generalisations.
H
Hetero-associative Network
The mapping between the input and output data of a neural network, where the desired output of the network is different to the presented input. For instance, a Neural Network could be trained to output a picture of a cow, when presented with the processed soundwave of ‘mooing’.
Hypothesis
An idea about how something might function or perform without having any real evidence. A hypothesis is a prediction that can be investigated and tested via experimentation (and generally said to be true or false under most conditions). An example Hypothesis might be “Drinking coffee after 2pm makes it more difficult to sleep”.
M
Machine Intelligence
[see Artificial Intelligence (AI)]
Machine Learning
A subfield of AI that is concerned with teaching computers to learn. Traditionally this area of AI was dominated by probability and statistical techniques, leading into connectionism and genetic algorithms (rather than symbolic systems). More recently, learning algorithms have been created and applied to a wider range of techniques (especially hybrid systems that combine both symbolic and connectionist approaches).
N
Narrow AI
The ability of machines to perform a single task, or a narrow range of tasks, at least as well as a human. This is where the field of AI is at the moment. For instance, there are currently machine learning algorithms that are able to identify certain medical conditions in X-Ray images as well as, or even better than, human doctors.
[related: Artificial General Intelligence (AGI)]
Natural Language Processing (NLP)
The use of computational techniques to process natural language end-to-end. You can think of NLP as a complete system that allows humans to interact with computers using natural language (whether by speech or text).
Natural Language Understanding (NLU)
A smaller, but very important part (subset) of NLP, where computational techniques are used to try and understand the actual meaning of a text. NLU is required to achieve NLP, because an NLP system would be unable to produce correct or suitable output (e.g., answer) without understanding the natural language input (e.g., question).
Neural Architecture Search
Coined by Google and implemented in AutoML, neural architecture search is “neural networks that design other neural networks”.
Neural Networks (NN)
A computer implementation of interconnected nodes (processing units) and weights (connections) based loosely on the human brain (this latter point being very important – neural networks are generally NOT simulations of the brain, rather, they 'borrow' basic ideas from the way neurons function and how they are connected). The computational power of these networks to learn about the data provided to them comes from their parallel nature, coupled with powerful learning algorithms.
[related: Supervised Learning; Unsupervised Learning]
Neuroscience
The science and study of the nervous system throughout the body (including brain, spinal cord and nerves). Neuroscientists are interested in the development and function of the nervous system and may focus on the biological aspects (e.g., neurotransmitters) or the physiological aspects (e.g. behaviour).
O
Overfitting
Occurs within machine learning algorithms when the system fails to capture the generalities of the data and instead captures the exact features, i.e., the algorithm represents the training data too perfectly. Overfitting of the data leads to poor Generalisation (and the system exhibits Brittleness).
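An extreme caricature of overfitting is a ‘model’ that simply memorises its training data verbatim; this illustrative Python sketch is perfect on the training set but useless on anything unseen:

```python
def train(examples):
    # Extreme overfitting: memorise every training example exactly.
    return dict(examples)

def predict(model, x, default="unknown"):
    # Perfect recall on the training data, but no ability to generalise:
    # anything not seen during training falls back to a blind default.
    return model.get(x, default)

train_data = [((1.0, 1.1), "cat"), ((4.0, 3.9), "dog")]
model = train(train_data)
```

A good learning algorithm sits between this extreme and underfitting: it captures the general pattern rather than the exact training examples.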
P
Philosophy
The study of knowledge and of thinking about behaviours, society, the world in which we live and the universe. It is often defined as "thinking about thinking", because the ideas are often abstract and hard to define. Philosophy can be thought of (no pun intended) as the search for knowledge and truth (hence why a PhD is a Doctor of Philosophy and requires original research that adds to the world's knowledge).
Philosophy of AI (Study of AI)
Artificial Intelligence is concerned with using computing machinery to duplicate the cognitive mental states that lead to intelligent thought and actions (a philosophical theory of mind). Computers can be used as an aid to study and simulate the inputs/outputs of the mind (Weak AI) or as an attempt to fully replicate the mind including conscious thought (Strong AI).
Physical Symbol System Hypothesis
A Physical Symbol System manipulates symbols (that denote physical objects), by combining the symbols into formal structures. These structures can then be processed (manipulated) to produce new structures (with different meanings). The 'Physical Symbol System Hypothesis' is a philosophical approach that suggests the manipulation of symbols and symbol structures in this way can lead to General AI.
Principal Component Analysis (PCA)
A well known and frequently used statistical technique that removes redundant information. It has numerous applications (especially in signal processing) and is useful in reducing the amount of data in the system.
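For two-dimensional data the technique can be written out in closed form; this illustrative sketch centres the points, builds the 2x2 covariance matrix, and projects onto its largest-eigenvalue direction (it assumes the two dimensions are actually correlated, i.e., the off-diagonal covariance is non-zero):

```python
import math

def pca_1d(points):
    # Project 2-D points onto their single direction of greatest variance.
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    centred = [(x - mx, y - my) for x, y in points]
    # Entries of the 2x2 covariance matrix [[a, b], [b, c]]
    a = sum(x * x for x, _ in centred) / n
    b = sum(x * y for x, y in centred) / n
    c = sum(y * y for _, y in centred) / n
    # Largest eigenvalue and its eigenvector, in closed form
    lam = (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    vx, vy = b, lam - a                      # assumes b != 0
    norm = math.hypot(vx, vy)
    vx, vy = vx / norm, vy / norm
    # One number per point: the redundant second dimension is discarded.
    return [x * vx + y * vy for x, y in centred]

points = [(2.5, 2.4), (0.5, 0.7), (2.2, 2.9),
          (1.9, 2.2), (3.1, 3.0), (2.3, 2.7)]
reduced = pca_1d(points)
```

In practice a linear-algebra library handles arbitrary dimensions; the idea is the same: keep the directions that carry the most variance and drop the rest.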
R
Reinforcement Learning
Given a goal (such as “stack the blocks”), the machine learning algorithm learns by having its efforts towards solving the goal evaluated. The closer it gets to the goal, the better the evaluation (this can be thought of as learning by trial and error, in much the same way humans learn a lot of things).
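The trial-and-error idea can be sketched with a very simple illustrative example (a ‘multi-armed bandit’, not the author's own method): the agent repeatedly tries actions, receives a reward from the environment, and keeps a running estimate of how good each action is; the reward values below are made up:

```python
import random

def train_bandit(rewards, episodes=500, epsilon=0.1, seed=0):
    # Learn which action pays best purely by trial and error.
    rng = random.Random(seed)
    values = [0.0] * len(rewards)   # current estimate of each action's worth
    counts = [0] * len(rewards)
    for _ in range(episodes):
        if rng.random() < epsilon:                       # explore: try anything
            action = rng.randrange(len(rewards))
        else:                                            # exploit: best so far
            action = max(range(len(rewards)), key=lambda a: values[a])
        reward = rewards[action]                         # environment's evaluation
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]
    return values

values = train_bandit([0.1, 0.9, 0.4])   # hypothetical rewards; action 1 pays best
```

After enough trials the estimate for the best-paying action dominates, so the agent exploits it: reward alone, not labelled examples, drives the learning.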
S
Singularity
The [hypothetical] point in time when computing machinery surpasses human-level intelligence and AGI is achieved. Debate rages as to (a) whether AGI is possible, and/or (b) how long it will take. The recent successes of Deep Learning within AI have prompted many researchers to re-evaluate the timescales involved (for instance, a few years ago it was not uncommon to see estimates in excess of 100 years, whereas now, many experts suggest the AI singularity could be just decades away).
Small Data
Not really a term, but it stands to reason it’s the opposite of Big Data.
Strong AI
Within the Philosophy of Artificial Intelligence, Strong AI claims that computing machinery will not only be able to simulate human intelligence, but will also exhibit consciousness and be a Thinking Machine. Within Strong AI, AI scientists use computers as a tool to replicate mind and to mirror the cognitive functions found in humans.
[related: Weak AI]
Supervised Learning
The machine learning algorithm learns by being given input data and told what data to output – for example, a picture of an animal as input, and the animal name as output. Once many different pictures have been processed (different animals), the algorithm is tested by presenting images that were not used during training (same animals but different versions) to make sure the system has correctly learnt the patterns in the data that identify animals in the pictures. In other words, supervised learning is the task of reproducing a particular given output for a particular given input.
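A tiny illustrative example of the input -> output idea is a one-nearest-neighbour classifier: it is given labelled examples (here, made-up feature pairs tagged "cat" or "dog") and predicts the label of whichever training example an unseen input most resembles:

```python
def nearest_neighbour(train_set, query):
    # Predict the label of the closest training example (1-NN).
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train_set, key=lambda ex: distance(ex[0], query))
    return label

# Labelled training data: (features, required output)
train_set = [((0.9, 1.0), "cat"), ((1.1, 0.8), "cat"),
             ((4.0, 4.2), "dog"), ((3.8, 4.1), "dog")]
```

Testing on inputs near, but not identical to, the training examples checks whether the learnt mapping generalises rather than just memorises.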
Symbol
A token that wholly represents something else. Symbols are often used to represent objects in the physical world (circular lines on a map denote hills or mountains). They can also represent beliefs and concepts (a red triangle on a road sign means beware!). A key feature of Symbols is they are easy to understand (human readable). However, Symbols are also Brittle, because all knowledge about an object or concept is stored (represented) in a single place – destroying the Symbol destroys all knowledge. A great example of something in the real-world being represented by Symbols is the map of the London underground (the map is not an exact representation of the real rail network, but the symbols on the map provide enough information to be useful).
Symbol System Hypothesis
[see Physical Symbol System Hypothesis]
Symbolism
The field of study relating to knowledge representation and manipulation using ‘symbols’ and ‘symbolic structures’.
Syntactic Parsing
The act of interpreting and placing free-form text (sentences) into corresponding structures that denote the linguistic organisation of the language (syntactic roles). Within AI, syntactic parsing was traditionally in the form of hand-crafted rules.
T
Threshold Level
Used in the Activation Function of a node in a Neural Network, the threshold is a value that decides if the output from a node is on or off.
U
Unsupervised Learning
The machine learning algorithm discovers hidden patterns in the data without being provided with samples of required input->output. The hidden patterns found by the algorithm can automatically group related data. For instance, grouping customers based on their previous buying habits. In other words, unsupervised learning forms its own output for particular given inputs.
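The customer-grouping example can be sketched with a minimal k-means-style clustering (illustrative only; the spend figures are made up, and the crude initialisation assumes exactly two groups):

```python
def kmeans_1d(values, k=2, iterations=10):
    # Group numbers into k clusters without any labels being provided.
    centres = [min(values), max(values)]  # crude initialisation, k=2 only
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centres[i]))
            clusters[nearest].append(v)
        # Move each centre to the mean of the values assigned to it
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return clusters

spend = [12, 15, 11, 14, 95, 102, 99]   # hypothetical customer spend
groups = kmeans_1d(spend)
```

No required outputs were given anywhere: the algorithm discovers the low-spend and high-spend groups on its own.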
W
Weak AI
Within the Philosophy of Artificial Intelligence, Weak AI claims that computing machinery will be able to simulate human intelligence but without the need for the machine to exhibit consciousness or be a Thinking Machine. Within Weak AI, AI scientists use computers as tools to study the mind rather than build the mind (with associated cognitive states).
Note: This term is often incorrectly used as an equivalent to Narrow AI.
[related: Strong AI]
Weight
Some value that, when combined (usually multiplied) with another value either increases the number (positive) or decreases it (negative). Within NNs, each Node is connected to other nodes via ‘weights’, so the output from a node within a NN is multiplied with the value of the weight to either help excite the receiving node (positive value) or inhibit the receiving node (negative value).
The value of the weight can be seen as a measure of the strength of the connection between two nodes. For a NN, the learned ‘knowledge’ of the system is said to be held in the network weights (because these are the values that change during learning). The term Synaptic Weight is used in both AI (NN) and in biology (brains).
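Putting the weight and threshold ideas together, a single node can be sketched in a few lines of Python (weight values are illustrative):

```python
def node_output(inputs, weights, threshold=0.0):
    # Multiply each input by its connection weight, sum, then threshold.
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0
```

With weights `[0.6, -0.8]`, the first connection excites the node (positive weight) while the second inhibits it (negative weight), so whether the node fires depends on which inputs are active.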