Use our Applied Intelligence Glossary to improve your fluency around terms and concepts associated with AI, automation and analytics — from image recognition to the Turing test, dark data to quantum computing.


Abductive Reasoning – is a type of logical inference that starts with an observation and sets out to find the most likely, least complex explanation. Unlike deductive reasoning, it delivers a plausible conclusion but does not verify it absolutely. Abductive reasoning is widely used in the development of machine learning systems.

Accuracy – is a quality benchmark used in classification tasks, which measures the proportion of true results (true positives + true negatives) against the whole population of test cases. Cf. Precision, Coverage, Recall.
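As an illustration, the accuracy calculation can be sketched in a few lines of Python (the classifier counts here are hypothetical):

```python
def accuracy(tp, tn, fp, fn):
    """Accuracy = (true positives + true negatives) / all test cases."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical results: 90 true positives, 50 true negatives,
# 5 false positives, 5 false negatives -> 140 correct out of 150.
acc = accuracy(tp=90, tn=50, fp=5, fn=5)
```

Note that accuracy alone can mislead on imbalanced data, which is why it is usually read alongside precision and recall.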

Action Model learning – sometimes also called “action learning,” is an area of machine learning focused on creating and modifying a software agent's knowledge of the effects and preconditions of actions in its environment.

Activation Function – is the formula or calculation carried out by an individual node in a neural network to define its output. The choice of activation function is critical to making the whole system differentiable, which is a key requirement for using the backpropagation algorithm. Popular activation functions include the sigmoid function and the rectified linear unit (ReLU).
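The two activation functions named above can be written as one-liners, a minimal sketch in Python:

```python
import math

def sigmoid(x):
    """Squashes any real input into the (0, 1) range."""
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    """Rectified linear unit: passes positive inputs through, zeroes out the rest."""
    return max(0.0, x)
```

For example, `sigmoid(0)` gives 0.5, while `relu` leaves positive inputs unchanged and maps negative inputs to zero.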

Adagrad - is a procedure that uses knowledge of the data geometry from previous iterations to carry out gradient-based learning. In contrast, standard stochastic sub-gradient methods generally align to a preset procedure that takes no account of data characteristics observed in prior iterations of training a model.
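A minimal sketch of the idea behind Adagrad, assuming the standard per-parameter update (accumulate squared gradients, then scale each step by the inverse square root of that history); the numbers are illustrative only:

```python
import math

def adagrad_step(w, grad, accum, lr=0.1, eps=1e-8):
    """One Adagrad update: parameters with a history of large gradients
    receive proportionally smaller steps."""
    new_accum = [a + g * g for a, g in zip(accum, grad)]
    new_w = [wi - lr * g / (math.sqrt(a) + eps)
             for wi, g, a in zip(w, grad, new_accum)]
    return new_w, new_accum

w, accum = [1.0, 1.0], [0.0, 0.0]
w, accum = adagrad_step(w, grad=[0.5, 2.0], accum=accum)
```

Note how the two parameters take near-identical step sizes despite very different gradient magnitudes, which is exactly the geometry-aware behavior the entry describes.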

Ad Targeting – is a sophisticated method of advertising where online advertisers target ads at the audiences that will be most receptive to their offerings. Using AI and machine learning, ad targeting can quickly select optimal target audiences, such as in-market buyers or custom audiences, and serve them ads that are most likely to trigger a positive response.

Affine Layer – is a layer in a neural network in which all nodes connect to the nodes contained in the next layer. Affine layers are a common feature of both convolutional neural networks and recurrent neural networks.

AI Adoption – is the adoption of artificial intelligence (AI) by various industries and sectors. The ability of an organization to successfully adopt AI in its business process will greatly depend on identifying the correct use case, presence of a data driven culture and ability to scale the solution.

AI Bias vs. Human Bias – highlights how artificial intelligence (AI), just like humans, is subject to bias. This is not because AI determines something to be true or false for any illogical reasons. It’s because latent human bias may exist in machine learning, starting with the creation of an algorithm to the interpretation of data and subsequent interactions.

AIP+ – Our collection of modular, pre-integrated AI services and capabilities, augmented by Accenture IP, that helps accelerate and scale new outcomes.

AI Winter – describes a period of reduced funding and interest in artificial intelligence (AI) research. The term, intended to be analogous to nuclear winter, has been deployed to describe periods of disappointment and criticism that follow a surge in AI activity and research. The AI winter timeline reflects funding cuts and a loss of interest, typically followed by renewed interest years or decades later.

Alan Turing Institute – was founded in 2015 as the United Kingdom's national institute for data science and artificial intelligence (AI). The Alan Turing Institute is named after the British mathematician, cryptanalyst and modern computing pioneer, considered by many to be the originator of AI.

AlexNet – was the first widely published example of a computer vision deep learning model that used GPUs to train the model. It was a milestone in 2012, dramatically reducing the error rate on the ImageNet benchmark against the status quo.

Algorithmic Learning Theory – is a branch of computational learning theory which, unlike statistical learning theory, distinguishes itself by taking a non-probabilistic approach to learning limits. This framework is highly suitable in scenarios where data is not considered a random sample, for example learning a language.

Algorithms – are sets of rules, initially set by humans, for computer programs to follow. Artificial intelligence can tweak these algorithms using machine learning, so programs begin to adapt rules for themselves and continuously self-optimize based on what they learn. For example, predictive analytics algorithms become smarter and faster the more they are used and the more data they analyze.

AlphaGo® – an artificial intelligence developed by Google's DeepMind® Technologies. In 2015, AlphaGo® became the first computer program to defeat a professional human player at the highly complex board game "Go". In 2017, it went on to defeat the world number one-ranked Go player.

AlphaGo Zero® – a second, even more powerful, version of AlphaGo®, developed by DeepMind® Technologies. Unlike its predecessor, which trained on human amateur and professional games, AlphaGo Zero® learned by playing games against itself. In this way, it rapidly outstripped human levels of play and went on to beat AlphaGo® by 100 games to 0.

Analogical reasoning – compares the similarities between two concepts, systems or objects to draw conclusions about what is likely to be true about the new concept, system or object. Examples of analogical reasoning include James Clerk Maxwell’s discovery of electromagnetism: by comparing similarities between earlier work on magnetic and electric fields and equations used to calculate flows of liquids, he was able to explain how magnetic lines of force behave.

Analytics – encompasses the discovery, interpretation, and communication of meaningful patterns in data. It relies on the simultaneous application of statistics, computer programming and operations research to quantify performance and is particularly valuable in areas with large amounts of recorded information. The goal of this exercise is to guide decision-making based on the business context. The analytics flow comprises descriptive, diagnostic, predictive analytics and eventually prescriptive steps.

Anything-as-a-Service / Everything-as-a-Service (XaaS) – is a cloud computing term covering the extensive array of services and applications users can access on-demand over the internet as opposed to having to buy and install hardware and licensed software along with associated security on-premise. Everything-as-a-Service started as Software-as-a-Service (SaaS) and has since expanded to include services such as Infrastructure-as-a-Service, Platform-as-a-Service, Storage-as-a-Service, Desktop-as-a-Service, Disaster-Recovery-as-a-Service, and even nascent operations such as Marketing-as-a-Service and Healthcare-as-a-Service.

API – stands for Application Programming Interface. APIs provide the building blocks for software development, making it easier to develop a computer program by enabling applications to communicate with each other and share data; the developer then puts these building blocks together.

Applied Intelligence – is Accenture’s way of combining artificial intelligence technologies with data analytics, automation and human ingenuity and applying them to solve our clients’ most complex business problems. This enables Accenture to deliver intelligent insights at speed and scale that drive innovation and new outcomes for enterprises worldwide.


Apriori Algorithm – is a data mining procedure used to extract association rules from data. These rules are useful because they can find patterns of correlation between events in data. For example, in the retail industry this could enable market basket analysis to identify instances of people buying milk and cookies together.
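The support and confidence measures that Apriori builds on can be sketched directly in Python; the transactions below are hypothetical, and a real Apriori implementation would additionally prune candidate itemsets level by level:

```python
# Hypothetical shopping baskets for an association-rule example.
baskets = [
    {"milk", "cookies"},
    {"milk", "cookies", "bread"},
    {"milk", "bread"},
    {"cookies"},
]

def support(itemset, baskets):
    """Fraction of baskets containing every item in the itemset."""
    hits = sum(1 for b in baskets if itemset <= b)
    return hits / len(baskets)

# Rule {milk} -> {cookies}: confidence = support(both) / support(milk)
conf = support({"milk", "cookies"}, baskets) / support({"milk"}, baskets)
```

Here the rule "milk implies cookies" holds in two of the three milk-containing baskets, giving a confidence of two thirds.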

ARIMA Models – are a class of statistical methods for time series forecasting. ARIMA stands for Autoregressive Integrated Moving Average.

Artificial Intelligence (AI) – is the broad term used to describe the set of technologies that enable machines to sense, comprehend, act and learn. The history of artificial intelligence is often traced back to the 1940s and 1950s, when scientists and mathematicians began discussing how a mechanical device could be used to imitate the human process of mathematical deduction.

Attention Mechanisms – refer to the ability of a model to focus on specific parts of an input, such as regions of an image. Attention mechanisms in neural networks have been used for some time in image recognition and are now increasingly used in natural language processing as well, helping networks to learn from inputs and make predictions.

Augmented Reality (AR) – superimposes computer-generated images over views of physical environments. Its purpose is to enrich how viewers see and interact with these environments. Augmented reality apps, from social media filters, to games like Pokémon Go® to surgical procedures, are expanding rapidly thanks to their ability to bring elements of the virtual world into the real world and enhance what people see, hear and feel. Augmented reality lies in the middle of the reality spectrum: between the real and virtual worlds.

Autoencoder – is a type of artificial neural network that’s used for unsupervised learning of efficient coding. An autoencoder neural network is trained to compress data into code, and then decompress that code to closely match the original data.

Automation – refers to the technology where procedures or processes are performed with minimal human intervention. Machines can be configured based on an explicit set of rules or algorithms. Robotic Process Automation (RPA), which is widely used to handle routine transactional activities, is one example.

Autonomic Computing – describes a system's capacity for adaptive self-management of its own resources (without user input) for high-level distributed computing functions.

Autonomous Electric Vehicles – are electric vehicles that use artificial intelligence, sensors and global positioning system coordinates to drive themselves without the need for a human driver to take control.


Backpropagation – is short for “backward propagation of errors.” Backpropagation in neural networks is a way of training these networks based on a known, desired output for a specific sample case.

Backpropagation Algorithm - was first formulated by Paul Werbos in 1974, and subsequently rediscovered by others including Geoff Hinton, in the training of neural network models. Backpropagation algorithms work by transmitting the error gradient in a backwards direction, from the output layer to the input layer. The backpropagation algorithm works with optimization algorithms, like Stochastic Gradient Descent, to solve the 'credit assignment problem,' adjusting the weights of each neuron according to the impact they have on the error.
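The credit assignment idea can be shown with a deliberately tiny example: a single weight trained by gradient descent on a squared error. This is an illustrative sketch, not a full multi-layer implementation:

```python
def train(x, target, w=0.0, lr=0.1, steps=50):
    """Fit y = w * x to a single (x, target) pair by repeatedly
    propagating the error gradient back to the weight."""
    for _ in range(steps):
        y = w * x            # forward pass
        error = y - target   # output error
        grad = error * x     # backward pass: dE/dw for E = 0.5 * error**2
        w -= lr * grad       # weight update via gradient descent
    return w

w = train(x=1.0, target=2.0)  # the weight converges toward 2.0
```

In a real network the same chain-rule logic runs layer by layer from the output back to the input, assigning each weight its share of the error.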

Backward Chaining Rules – is a method in which machines work backward from a desired goal, or output, to determine whether any data or evidence exists to support those goals or outputs.

Bayesian Network – is a model that represents and calculates the probabilistic relationships between a set of random variables and an uncertain domain via a directed acyclic graph.

Behavioral Analytics – uses data about people’s behavior to understand their intent and predict future actions. The upsurge in consumer data from e-commerce platforms, gaming, web and mobile applications, and the Internet of Things feeds predictive behavioral analytics algorithms that can enable marketing teams to target the right offerings to the right micro-segment at the right time.

Bias Variance Tradeoff – describes a fundamental tension in statistical modeling: models with a lower bias in parameter estimation tend to have a higher variance of the parameter estimates across data samples, and vice versa.

Big Data – is a term for very large and complex datasets that exceed the ability of traditional data processing applications to deal with them. Big data technologies include data virtualization, data integration tools (like EMR® and Hadoop®), and search and knowledge discovery tools.

Big Data Analytics – examines large and diverse datasets (i.e. big data) to identify patterns, trends, correlations, and other information that lead to insights organizations can harness in support of better decision-making. Big Data Analytics is the science and engineering of problem solving where the nature, size, and shape of the data renders traditional analytics tools difficult or even impossible to use.

What is big data analytics?

View Transcript

Big Data Visualization – refers to techniques that enable users to rapidly understand patterns in big data by presenting it through pictures or graphics. Big data visualization tools can include traditional graphs and pie charts, as well as heat maps, 3D computer models and dendrograms.

Biometric Recognition – is an information system that identifies a unique individual according to one or more of their physiological and behavioral characteristics. These can range from fingerprints to irises or voice and gesture. Biometric recognition technology uses algorithms to analyze data inputs from a subject to reconstruct and recognize a person’s identity.

Black Box – is a metaphor describing how people are unable to see or understand how technologies work and is particularly used to characterize the lack of understanding of how an algorithm works. While we can understand the outputs of artificial intelligence (AI) – in terms of recommendations, decisions and so on – the processes to achieve them are too complicated for us to understand. Concerns about the black box nature of AI center on its apparent lack of accountability, potential unseen biases and the inability to have clear visibility into what is driving an AI’s potentially life-changing decisions.

BLEU – stands for bilingual evaluation understudy. It is a standard benchmark to test machine translation capabilities from one language to another.

Boltzmann Machines – are a type of neural network model inspired by the physical processes of thermodynamics and statistical mechanics. They were proposed independently by Terrence Sejnowski and Geoffrey Hinton. Full Boltzmann machines are impractical to train, which is one of the reasons why a limited form, called the restricted Boltzmann machine, is used.

Boosting – refers to an approach for reducing bias and variance in supervised learning, as well as converting machine learning algorithms from weak learners to strong learners.

Bots – are programs or scripts that perform automated tasks.

Business Analytics – describes the skills, technologies, statistical methods and data driven approaches used to explore and investigate past business performance to gain new insights that can support business planning. Examples of business analytics tools include data visualization, business intelligence reporting and big data platforms.

Business Intelligence (BI) – encompasses the technologies, applications and practices used in the collection, integration, analysis, and presentation of business information to support better business decision-making. The use of artificial intelligence (AI) in big data and business intelligence is already widespread, with companies now using machine-learning algorithms to identify trends and insights in vast quantities of data and make faster real-time decisions. Some examples of these solutions in practice include dashboards, reporting tools, and data discovery tools tied either to a data warehousing tool or to the cloud.


Capsule Network – is a form of machine learning system designed to imitate biological neural organization. In a capsule network, artificial neural networks (ANNs) are created by nesting neural layers within other neural layers. These are often used to model hierarchical spatial relationships, which convolutional neural networks (CNNs) struggle to handle.

CAP Theorem – initially a conjecture by Eric Brewer that large distributed systems handling data need to make a trade-off between availability and consistency in the event that the systems suffer a partition. This is an important constraint that engineers must be aware of as they design many modern big data systems.

Case-based Reasoning – describes the process of solving new problems by applying the solutions to similar past problems. Case-based reasoning algorithms can be used for both regression and classification analysis.

Categorical Variable – a variable or feature in data that can take on a fixed list of labels (categories).

Causation - in statistics can be inferred to occur when the value of one variable is significantly affected by the event or value of another variable.

Central Processing Units (CPUs) – perform most of the processing inside a computer, carrying out the instructions of a computer program. The CPU controls instructions and data flow to and from other parts of the computer, for which it relies heavily on a chipset located on the main printed circuit board, also known as the motherboard.

Channel – refers to any of the various platforms offering a chatbot channel to communicate with users; these channels include Facebook Messenger®, Viber™, WhatsApp™, Skype™, etc.

Chatbot – is an artificial intelligence program that simulates human interactions through pre-calculated user phrases and/or auditory or text-based signals. More than simple speech-to-text programs, chatbots use either text (text chatbots) or voice (voicebots) to communicate with a person. Text chatbots have been in use for longer than voicebots. Voicebots can conduct a conversation orally, understanding and responding to speech. Chatbot examples include personal assistants as well as applications such as online banking or interactive pop-up text windows on websites.

Chinese Room Argument – is a famous philosophical argument by John Searle, which states that no artificial system can have any real understanding or intelligence; therefore strong AI can never be achieved. Searle demonstrates the argument with a thought experiment: a man enclosed in a room may be able to process Chinese text passed through a window using a Chinese language rule book. However, that does not mean the man truly understands Chinese.

Chomsky Hierarchy – is an organization of computational models or grammars, given by Noam Chomsky in 1956. This hierarchy starts with the simplest, Type 3 regular grammars, which are the engines behind many pattern matching algorithms, and extends to the most powerful, Type 0 recursively enumerable grammars, which can be recognized by a Turing machine.

Classifier – is a type of algorithm (like KNN and SVM) that is used for data classification problems. Machine learning algorithms, especially deep learning, work well in many kinds of classification tasks, especially the cognitive classification tasks like recognizing an object in an image.

Cloud – uses a network of remote servers hosted on the internet to store, manage, and process data, rather than requiring a local server or a personal computer. Cloud computing technology has advanced rapidly. For example, Amazon™, Google™, and Microsoft™ have been incorporating features such as face recognition in online photos and language translation for speech into their respective cloud services—AWS®, Google Cloud®, and Azure®. The next stage of evolution is to create AI-based platforms that can be used by almost any type of company, regardless of its size and technical sophistication. Also, many Platforms-as-a-Service (PaaS) solutions have started incorporating AI capabilities. The current landscape of cloud AI technologies provides two major groups: Cloud Machine Learning (ML) Platforms and AI Cloud Services.

Clustering – describes an unsupervised machine learning technique for identifying structures among unstructured data. Clustering algorithms group sets of similar objects into clusters, and are widely used in areas including image analysis, information retrieval, and bioinformatics.

Clustering Analysis – is a type of unsupervised machine learning used for exploratory data analysis to find hidden patterns or groupings in datasets. Using metrics like probabilistic or Euclidean distance, clusters group together similar data points.
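One of the best-known clustering algorithms, k-means, fits in a few lines; this is a bare-bones 1-D sketch with hand-picked starting centers, not a production implementation:

```python
def kmeans(points, centers, iters=10):
    """Assign each point to its nearest center (squared distance),
    then move each center to the mean of its cluster; repeat."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda i: (p - centers[i]) ** 2)
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Two obvious one-dimensional groups, around 1 and around 10.
centers = kmeans([0.9, 1.0, 1.1, 9.9, 10.0, 10.1], centers=[0.0, 5.0])
```

The centers settle on the two natural group means, with no labels ever being provided, which is what makes the method unsupervised.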

CNTK – is Microsoft's™ deep learning framework and toolkit, now known as the Microsoft Cognitive Toolkit®.

Cognitive Automation – is the use of software to introduce intelligence when processing large amounts of information. The use of artificial intelligence differentiates cognitive automation when compared with traditional robotic process automation (RPA).

Cognitive Computing – describes platforms that combine artificial intelligence with other aspects of the cognitive cycle – like perception, reasoning and optimizing decisions – that collectively approach human-level cognitive dynamics. They encompass disciplines such as machine learning, natural language processing (NLP) and computer vision among others.

Cognitive Robotic Process Automation (RPA) – such as Accenture cognitive RPA, leverages artificial intelligence technologies like machine learning and natural language processing (NLP) to enhance the capabilities of RPA and improve customer experience by incorporating unstructured tasks as part of the process automation.

Cognitive Science – is the interdisciplinary study of the mind and its processes, drawing on the foundations of philosophy, psychology, linguistics, anthropology, and neuroscience. Techniques from cognitive science and artificial intelligence look at how the systems of the mind can be simulated in machines.

Collaborative Filtering – is a very popular technique used in product or content recommendation systems that takes advantage of similarities in user behavior, such as shared purchasing or consumption characteristics. This contrasts with, for example, content filtering, which uses the profile of the content itself as the primary driver of recommendations.
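A common starting point for user-based collaborative filtering is to score how alike two users' rating vectors are, often with cosine similarity. A minimal sketch with hypothetical ratings:

```python
import math

# Hypothetical user ratings for four items (0 = not rated).
ratings = {
    "alice": [5, 3, 0, 1],
    "bob":   [4, 3, 0, 1],
    "carol": [1, 0, 5, 4],
}

def cosine(u, v):
    """Cosine similarity of two rating vectors: 1.0 means identical taste."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

sim_ab = cosine(ratings["alice"], ratings["bob"])    # very similar users
sim_ac = cosine(ratings["alice"], ratings["carol"])  # dissimilar users
```

A recommender would then suggest to alice the items her most similar neighbors rated highly but she has not yet tried.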

Computational Learning Theory – is a field of artificial intelligence that focuses on all aspects of designing, building and analyzing the computational nature of machine learning. Various foundational theories have advanced the field, from Vapnik–Chervonenkis (VC) theory, with its strong statistical basis, to Leslie Valiant's Probably Approximately Correct (PAC) model, with its strong computational complexity basis.

Computer Vision – is a field of artificial intelligence that uses computer vision algorithms to mimic the way human vision acquires, processes, analyzes and understands visual information. It can use this real-world visual data to produce numerical or symbolic information and support decisions or take other actions.


Conditional Probability - is the measure of probability of an event taking place on the condition that another event has also occurred.
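The definition P(A|B) = P(A and B) / P(B) can be estimated directly from observed event counts; the observations below are hypothetical:

```python
# Hypothetical (rain, traffic_jam) observations.
outcomes = [
    (True, True), (True, True), (True, False),
    (False, True), (False, False), (False, False),
]

p_rain = sum(1 for rain, _ in outcomes if rain) / len(outcomes)
p_rain_and_jam = sum(1 for rain, jam in outcomes if rain and jam) / len(outcomes)

# P(jam | rain) = P(rain and jam) / P(rain)
p_jam_given_rain = p_rain_and_jam / p_rain
```

Here a jam occurs in two of the three rainy observations, so the conditional probability works out to two thirds.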

Confusion Matrix - is a tabulation of the correct and false responses of a classification model and is often used to describe a model's performance.
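Building the tabulation is straightforward; a minimal sketch using the common convention of rows for actual classes and columns for predicted classes (the labels are hypothetical):

```python
def confusion_matrix(actual, predicted, labels):
    """Rows = actual class, columns = predicted class."""
    index = {label: i for i, label in enumerate(labels)}
    matrix = [[0] * len(labels) for _ in labels]
    for a, p in zip(actual, predicted):
        matrix[index[a]][index[p]] += 1
    return matrix

actual    = ["cat", "cat", "dog", "dog", "dog"]
predicted = ["cat", "dog", "dog", "dog", "cat"]
m = confusion_matrix(actual, predicted, labels=["cat", "dog"])
```

The diagonal holds the correct responses; everything off the diagonal is a misclassification, which makes error patterns easy to spot at a glance.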

Continuous Learning – describes how a machine learning algorithm can improve over time with the data it processes and/or the additional inputs it receives.

Continuous Variable – is a variable that can take any value between the minimum and maximum in a given sample.

Conventional Software – describes software or applications that perform specific tasks. Examples include the Microsoft Office® suite of software.

Conversational AI – describes a branch of artificial intelligence (AI) that focuses on interpreting colloquial human language and communicating back with users in kind. Powered by advanced features such as natural language processing (NLP), conversational AI is the logic that creates virtual conversations. Commercially available voice-assisted devices are familiar examples.

Conversational UI – is the platform that allows a user to communicate with a computer that mimics a ‘human like’ conversation. Powered by natural language processing (NLP), a conversational UI framework allows for an interaction that accounts for the user’s sentiments as well as the context carried during the conversation.

Convolutional networks – refer to a type of artificial neural network commonly used in image analysis. Examples of these networks in daily use include Amazon™ product recommendations and the Instagram™ search engine.

Convolutional neural network (CNN) – is a type of neural network that identifies and makes sense of images. Uses for convolutional neural networks include image classification (e.g. training machines to distinguish between images of cats and dogs).

Correlation – is a statistical measure of the strength and direction of the relationship between two variables.

Covariance – is a measure of how two variables vary together (their joint variability).
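The two statistics above are closely related: Pearson correlation is just covariance rescaled into the range [-1, 1]. A minimal sketch using population covariance:

```python
import math

def covariance(xs, ys):
    """Average product of each variable's deviations from its mean."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def correlation(xs, ys):
    """Pearson correlation: covariance scaled by both standard deviations."""
    return covariance(xs, ys) / (
        math.sqrt(covariance(xs, xs)) * math.sqrt(covariance(ys, ys)))

r = correlation([1, 2, 3, 4], [2, 4, 6, 8])  # a perfectly linear relationship
```

Because the second series is exactly twice the first, the correlation is 1.0 while the covariance depends on the units of measurement, which is why correlation is the easier number to interpret.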

Credit Assignment Problem - generally refers to the problem of assigning proportional credit to individual components in a multi-component system. For example, in neural networks, the learning algorithm has to work out how to attribute the impact of each neuron on the results of the whole system during a prediction.

CUDA® – Compute Unified Device Architecture® is an interface library used to program and work with Nvidia™ GPUs. Many deep learning frameworks are integrated with CUDA® libraries.

Cybernetics – is the study of regulation or control systems. It was defined by Norbert Wiener and has been greatly influenced by the work of systems theorists like Ross Ashby. Cybernetics has had significant influence on the development of intelligent systems, especially in robotics.


Dark data – also known as “dusty data,” is any data that typically is not used – and in many cases not even stored – and therefore does not interact with other regular forms of data. Typically it is the digital data amassed throughout the enterprise from intermediate layers like network devices, system logs, and even emails and other unstructured sources. Many times, companies don’t even realize the powerful insights inherent in that data. Dark analytics identifies opportunities in this hidden data – unstructured, untapped, and/or untagged information including that found in server log files and mobile geolocation data – to provide valuable new customer and/or business insights.


Data – is a set of quantitative or qualitative variables. A novel, a movie or a chart of accounts are all examples of data. Artificial intelligence (AI) requires data to train, learn, and act. The more information that’s readily available, accessible, and accurate the more likely that AI will be successful.

Data aggregation – is the act of collecting data from multiple sources for reporting or analysis. Data aggregation methods include average, minimum, median, maximum, variance and standard deviation.
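All of the aggregation methods listed are available in Python's standard library; a minimal sketch over hypothetical daily sales figures:

```python
import statistics

# Hypothetical daily sales figures collected from several sources.
sales = [120, 150, 90, 200, 150]

summary = {
    "average":  statistics.mean(sales),
    "minimum":  min(sales),
    "median":   statistics.median(sales),
    "maximum":  max(sales),
    "variance": statistics.pvariance(sales),  # population variance
    "std_dev":  statistics.pstdev(sales),     # population standard deviation
}
```

A report built on these aggregates can then summarize many records in a handful of numbers.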

Data Analysis – is the process of inspecting, cleansing, transforming, and modeling data to discover useful information, and support decision-making. The many different types of data analysis include data mining, a predictive technique used for modeling and knowledge discovery, and business intelligence, which relies on aggregation and focuses on business information.

Data Analytics – describes the end-to-end process by which data is cleaned, inspected and modeled. The objective is to discover useful and actionable information that supports decision-making. In a B2C context, the data analytics process encompasses qualitative and quantitative techniques used to discover insights into purchasing behaviors, trends and patterns.

Data Efficient Learning - is the ability to learn complex task domains without requiring massive amounts of data. Although supervised deep learning can address the problem of learning from larger datasets, for many real-world examples the amount of available training data is not sufficient to utilize such systems.

Data Mining – is the process of collecting data, aggregating it according to type and sorting through it to identify patterns and predict future trends. An ecommerce company, for example, would use data mining to analyze customer data and give product suggestions through the “customers who bought this item also bought” window. Data mining tools include Tanagra, R and Weka (a suite of machine learning algorithms for data mining).

Data Science – unites statistical systems and processes with computer and information science to mine insights with structured and/or unstructured data analytics. Common applications of data science (together with machine learning) include developing tools to predict future events.

Dataset – refers to a grouping of individual, but related, data points that a computer can process as a single unit. Datasets are referred to as "big data" when they’re too large to be handled by traditional data processing applications.

Decision Model – is a set of rules used to understand and manage the logic behind business decisions. Typically involving the application of sophisticated algorithms to large quantities of data, decision modeling can be used to recommend a course of action and predict its outcomes.

Decision Tree – a tree and branch-based model, like a flow chart, used to map decisions and their possible consequences. The decision tree is widely used in machine learning for classification and regression algorithms.

Deductive Reasoning – is a formal process of deriving a conclusion from an initial set of axioms and premises by applying preset, agreed-upon logical steps and calculations.

Deep learning – is a machine learning methodology where a system learns the patterns in data by automatically learning a hierarchical layer of features. In contrast, shallow machine learning approaches rely on substantial feature engineering processes, carried out by humans, before a model can learn the relationship between features; in deep learning, the system acquires the features and their relationships simultaneously. Data is typically pushed through a "deep" stack of activation layers. Each layer builds a representation of the data, with subsequent layers using the features of the previous layer to build more complex representations. The output of the final layer is mapped to a category to which the data should belong; getting this final mapping correct is the objective of a deep learning algorithm. To understand deep learning vs machine learning, imagine a child learning to recognize an object such as a ball. The child's parent will either confirm or deny whether a specific object is a ball. The longer this goes on, the more precise the child's awareness of what constitutes a ball becomes. Without knowing it, they're clarifying and constantly refining a complex abstraction (the concept of ball). Deep learning algorithms do much the same: at first, the program is provided with training data, for example images that have been labeled with meta tags, and the algorithm uses this information to build a progressively more accurate predictive capability.


Deep Neural Networks – refers to any neural network architecture that has multiple hidden layers of artificial neurons (nodes). Such architectures enable the models to learn multiple hierarchical, hence deep, interrelationships of the features present in the data. One example is the deep neural networks for YouTube™ recommendations, which is among the most advanced recommendation engines developed to date.

Defense Advanced Research Projects Agency (DARPA™) – is dedicated to the development of new technologies that can be used by the military. As a US Department of Defense (DoD) agency, DARPA™ is independent, reporting directly to senior DoD personnel. DARPA™ has, over the years, been at the forefront of the development of emerging technologies that have become commonplace in our lives – including, for example, the internet and artificial intelligence.

Descriptive Analytics – refers to the analysis of historical data to quantify what happened. The difference between descriptive analytics vs predictive analytics can be illustrated through examples of each: the former could include company reports providing a historic perspective on performance, while the latter considers patterns and trends in past data and applies these to understand what might happen next.

Design Thinking – a methodology developed to enable practical, creative resolution of problems using approaches like those deployed in product and/or service design. Design thinking strategies have been shown to dramatically improve innovation by enabling organizations to think like creative designers.

Digital Assistant – is an artificial intelligence system that understands natural language voice commands to perform tasks, like customer service. For example, the digital assistant Amazon Alexa® can respond to a wide range of spoken requests, from checking the weather to ordering a taxi.

Digital Transformation – describes the process through which businesses and enterprises become increasingly digital and dependent on IT for successful outcomes, as well as enabling their people to solve traditional challenges with the support of digital technologies.

Dimension Reduction - is the process in machine learning where the number of predictor variables is reduced to a few significant ones.

Dropout Regularization - in a deep learning model generation process, refers to temporarily and randomly removing a fraction of connected neurons so that their contribution is excluded during a calibration step. This process reduces overfitting by desensitizing the network to the weights of individual neurons.
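
As an illustration, the "inverted dropout" variant can be sketched in a few lines of Python; the function name and scaling scheme here are illustrative rather than taken from any particular framework:

```python
import random

def apply_dropout(activations, drop_prob=0.5, rng=None):
    """Inverted dropout: randomly zero activations during training and
    scale the survivors so the expected activation is unchanged."""
    rng = rng or random.Random(0)
    keep_prob = 1.0 - drop_prob
    return [a / keep_prob if rng.random() < keep_prob else 0.0
            for a in activations]
```

At inference time no neurons are dropped; the scaling applied during training means no further adjustment is needed.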

Dynamic Programming – is a technique for breaking down an optimization problem into simpler sub-problems and storing the solution to each sub-problem so that each sub-problem is only solved once. Examples of a dynamic programming solution include backward induction, lattice models for protein-DNA binding and many string algorithms.
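
A minimal Python sketch of the store-each-sub-problem-once idea, using the well-known Fibonacci recurrence as the sub-problem structure:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Each sub-problem (a smaller n) is solved once and cached,
    turning an exponential recursion into a linear one."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

Here fib(30) completes almost instantly because each of the smaller sub-problems is computed only once and then reused from the cache.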


Ecosystem – a technology ecosystem is a product platform defined by core components made by the platform owner and complemented by peripheral applications, services and data from other organizations. A key benefit of ecosystems is that they provide solutions which are greater than those provided by the platform developer. As such, they can solve an industry’s technical problems as well as opening new opportunities for growth.

Enhanced Interaction – is how hyper-personalization and curation of real-time information deliver superior experiences to customers and users, increasing customer acquisition, retention, and overall satisfaction. An enhanced interaction framework sets out how an organization can harness technology to achieve those results.

Enhanced Judgment – uses artificial intelligence to augment human intelligence. In doing so, it improves the quality and effectiveness of human decision-making to support better performance. Examples range from diagnosing cancer to automobile design.

Ensemble Methods - is a process in machine learning where multiple models are combined to obtain better predictive capability than individual algorithms can provide.

Expert Systems (Inference) – A computer system that emulates the decision-making ability of a human expert. Inference in expert systems applies logical rules to a knowledge base and deduces new knowledge from it.


Facial Recognition – is a technology capable of identifying or verifying a person from a digital image or a video frame. The software extracts facial features and subsequently classifies them by comparing the given image with faces within a database.

Feature or Predictor - refers to the measurable variable that is used to predict an outcome for a machine learning model. For example, an individual's biometrics, such as height and gender, could be used to predict his or her weight.

First Order Logic - refers to predicate calculus. It is called first order because logic statements allow variables over entities (simpler logic systems like Boolean or propositional logic do not allow variables). For more complex logical expressions, higher-order logic systems additionally allow quantification over predicates and functions themselves.

Fisherian Statistics - is a foundational basis in inferential statistics, in contrast to other modern inference schools like Frequentist or Bayesian statistics. It was developed by Sir Ronald Fisher and is also referred to as fiducial inference. In Fisherian statistics, statisticians use both probability and likelihood for inference, whereas both Frequentist and Bayesian techniques restrict inference to probability.

Formal Concept Analysis - is a formal mathematical way for extracting hierarchical conceptual ontologies from data. It has applications in data mining, knowledge management, and machine learning.

Forward Chaining Rules – describes a method where an Expert system must work “forward” from a problem to find a solution. Using a rule-based system, forward chaining forces the artificial intelligence to determine which “if” rules it should apply, until the goal is achieved.

FPGA – a Field Programmable Gate Array is a type of specialized computing module that can be programmed to perform certain tasks quickly.

Frequentist Statistics – is defined by an inferential emphasis on the frequency or proportion of outcomes observed in samples of data. In a frequentist approach, probabilities are therefore discussed only in the context of a well-defined random experiment.

Fully Connected Networks – are multi-level networks in which all nodes are interconnected. A general Boltzmann machine is a type of fully connected network model.

Fuzzy Logic – is a form of logic system, where the distinction between truth and false values is not binary but multi valued, therefore allowing for a richer expression of logical statements. It was invented by Lotfi Zadeh and is used in expert systems design.


Gain and Lift Charts – are used by data scientists to measure the performance of models, by comparing results against those of a random process, i.e. without the predictive model present.

Game Theory – is a field of mathematical modeling applied across many areas of study – like economics, biology, and internet and network design – to derive outcomes when participants interact strategically, whether in zero-sum games (a win for one results in a loss for another) or non-zero-sum games. One of the grandfathers of modern computing, John von Neumann, was an early pioneer in setting the mathematical foundations of game theory, which was later expanded by John Nash and others to the more general setting of non-cooperative game theory.

Gated Recurrent Unit – is a type of recurrent neural network (RNN), and more specifically a simplified variation on the LSTM type of RNN. GRUs are used extensively in the modeling of language, sequential or time-series data. Like the LSTM, the GRU controls the flow of information through the individual cells (units) of the neural network architecture, which makes training the models much more tractable.

Gaussian Distribution – also known as the normal distribution or the bell curve, is a type of continuous probability distribution which is defined by two parameters: the mean µ and the standard deviation σ.
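
The density function can be written directly from the two parameters; a small Python sketch for illustration:

```python
import math

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    """Density of the normal distribution N(mu, sigma^2)."""
    coeff = 1.0 / (sigma * math.sqrt(2 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
```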

Generative Adversarial Networks – describe pairs of models trained alternately with competing objectives: a generator learns to produce synthetic data, while a discriminator learns to distinguish the generator's output from actual data. This ability to capture and mimic variations within a dataset can be applied for uses such as understanding risk and recovery in healthcare and pharmacology.

General AI – is a form of artificial intelligence that can complete the broad range of tasks, across the wide range of environments, that a human can perform.

General Purpose Technology – The significance of general purpose technologies (GPTs) lies in the overall impact that they have on society, as well as the vast range of complementary innovations they support. To date electricity, the internet and information technology are probably the most significant. Artificial intelligence is another landmark general purpose technology development.

Genetic Algorithm – Inspired by natural evolution, a genetic algorithm is a class of optimization techniques where the best models go through a process of "population control" via a methodical cycle of fitness evaluation, selection, mutation and crossover. Genetic algorithms are one example of the broader class of evolutionary algorithms.

Genetic Programming – refers to a subset of artificial intelligence in which computer programs are encoded as sets of genes that are adjusted using evolutionary algorithms. In this way, genetic programming follows Darwin’s principles of natural selection: the computer program works out which solutions are strongest and progresses those, discarding the weaker options.

Genomic Analysis – Genomic analysis technologies are used to identify, measure or compare genomic features such as DNA sequence, structural variation, gene expression, or regulatory and functional element annotation at a genomic scale.

Gesture recognition – is how a computing device interprets a specific human gesture or motion. Gesture recognition technology can recognize movements or characteristics from any bodily motion or state.

Gradient Descent - is the process of finding the minimum of a function by iteratively taking steps that are proportional to the negative of the gradient.
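
A minimal sketch of the idea in Python, minimizing a toy quadratic whose gradient is known in closed form (the function, learning rate and step count are illustrative):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient to approach a minimum."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# f(x) = (x - 3)^2 has gradient 2(x - 3) and its minimum at x = 3
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

Each step moves x a fraction of the way toward the minimum, so the error shrinks geometrically with the number of iterations.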

Graphics Processing Units (GPUs) – refers to a specialized electronic circuit (chip) that’s designed to carry out calculations at very high speed. Mainly used for image display, graphics processing unit architectures can be found in a huge range of products, from mobile phones and personal computers, to workstations and game consoles.

Ground Truth – refers to information, usually gathered on-site or by direct observation, used to measure the accuracy of a training dataset in proving or disproving a research hypothesis. For example, self-driving cars use ground truth data to train artificial intelligence to correctly interpret road and street scenes.


Heuristic Search Techniques – are practical approaches to problem-solving that narrow down searches for optimal solutions by eliminating incorrect options. In the field of artificial intelligence, heuristic search techniques rank alternatives in search algorithms at each decision branch, using available information to decide which branch to follow.

Human-in-the-loop – refers to the process of inserting humans into machine learning processes to optimize outputs and boost accuracy. HITL is widely recognized as a best practice technique in machine learning: examples include Facebook’s photo recognition algorithm which invites users to confirm the identity of a photo’s subject when its confidence falls below a certain level.

Hyper Parameters - are configuration values assigned before calibration, rather than learned from the data, that are used to tune artificial intelligence algorithms.

Hyper Parameters Tuning - is the process of assigning optimized values to hyperparameters by methods such as a grid search evaluated against a predefined loss function.
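
A grid search can be sketched as an exhaustive loop over every combination of candidate values; the toy loss function and parameter names below are purely illustrative:

```python
from itertools import product

def grid_search(loss_fn, grid):
    """Evaluate every hyperparameter combination and keep the
    one that minimizes the loss function."""
    best_params, best_loss = None, float("inf")
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        loss = loss_fn(params)
        if loss < best_loss:
            best_params, best_loss = params, loss
    return best_params, best_loss

# toy loss that happens to be minimized at lr=0.1, depth=3
toy_loss = lambda p: abs(p["lr"] - 0.1) + abs(p["depth"] - 3)
best, _ = grid_search(toy_loss, {"lr": [0.01, 0.1, 1.0],
                                 "depth": [2, 3, 4]})
```

In practice the loss function would be a model's validation error, which makes each evaluation expensive; the cost of the grid grows multiplicatively with each added hyperparameter.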


Image Analytics – is how information from image data is extracted and analyzed using digital image processing. As well as identifying faces to establish age, gender, and sentiment, image analytics algorithms can also recognize numerous features simultaneously (including logos, objects, scenes, etc.). The use of bar codes and QR codes are simple examples, but more complex uses include facial recognition and position and movement analysis.

Image Recognition – is technology that can identify objects, places, people, writing and actions in images, using machine vision in combination with a camera, statistical approaches and artificial intelligence.

Image Search – refers to the use of specialized data search tools to find images. Google™ Image Search and Microsoft Photos® are examples of image search engines: both provide users with images similar to their queries (keywords, links or even other images).

Inductive Reasoning – is a process where specific evidence and datasets are used to derive general rules or conclusions. Unlike deductive reasoning, the conclusions are probable rather than certain: the more supporting evidence is observed in the data, the stronger the conclusion becomes.

Information Distance - is a metric for measuring the similarity between two objects. It is used in many algorithms, for example in unsupervised clustering to find groupings of similar objects. (See also Levenshtein Distance as an example.)

Information Retrieval - is a field of computer science that focuses on tools, processes and capabilities to extract, sort, and organize actionable information from disparate sources.

Information Theory - is a field that studies mathematical quantification of information, its transmission, and encoding. It is a fundamental basis for the modern design of information communication in, for example, noisy channels and/or compression of stored data.

Innovation Diffusion – is artificial intelligence’s (AI) ability to create a multiplier effect for economic growth. Innovation diffusion examples include the driverless car. Driverless cars’ AI systems will create huge amounts of data that will generate opportunities for others to develop new services and products. For example, insurers will generate new revenue streams from their ability to cover new risks and offer customers new, data-enabled services. Urban planners and others will also be able to take advantage and create new services – for example how they charge for road use. Other AI-enabled technologies could have a similar impact and create growth across a broad spectrum of adjacent commercial activities.

Intelligent Automation – refers to an automation solution that is enhanced with cognitive capabilities that enable programs and machines to learn, interpret and respond. Early examples include solutions such as automobile cruise control, while a current state-of-the-art case is the self-driving car.

Intelligent Enterprise Strategy – is the management approach that applies technology and new service paradigms to improve business performance.

Intelligent Products – communicate with humans and each other over a global network, like the internet. Intelligent products in artificial intelligence settings include Amazon Alexa®, autonomous vehicles and smart energy meters.

Intelligent System – is a machine embedded with an internet-connected computer that can gather and analyze data and communicate with other systems. It is an artificially intelligent system that can think and learn for itself, with the capability to adapt to current data.

Internet of Things (IoT) – describes the network where sensing elements such as sensors, cameras, and devices are increasingly linked together via the internet to connect, communicate and exchange information.

Interpretable AI – is a framework that provides a rigorous approach for non-data scientists to interpret machine learning models, going beyond quality metrics or statistical measurements.


Julia Programming Language - is a multi-paradigm programming language designed for numerical analysis. Increasingly adopted as a favored language in machine learning, it brings together many key features of functional programming, object-oriented programming and dynamic typing.


K-fold Cross Validation - is a process during machine learning model calibration where the training data is split into several subsamples; each subsample is held out in turn during calibration and used to validate the generated model. The performance estimates from the individual folds are then averaged to provide a statistically robust assessment of the model.
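
The splitting step can be sketched as follows; in practice a library routine would also shuffle the data first, which this illustration omits:

```python
def k_fold_indices(n_samples, k):
    """Yield (train, validation) index lists; each fold serves once
    as the held-out validation set while the rest is used to train."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for i in range(k):
        start = i * fold_size
        stop = start + fold_size if i < k - 1 else n_samples
        validation = indices[start:stop]
        train = indices[:start] + indices[stop:]
        yield train, validation
```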

K-Means Clustering - in machine learning refers to the process of finding groups within unlabeled data by assigning data points to clusters depending on their proximity to a centroid.

K-Nearest Neighbors - is a machine learning algorithm where all the training cases are stored and new cases are grouped according to their proximity/similarity to previously-classified cases based on a similarity measure.
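
A minimal sketch of the voting step in Python, using Euclidean distance on small feature tuples (the data points and labels are invented for illustration):

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    labeled training points."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

points = [((0, 0), "a"), ((0, 1), "a"), ((5, 5), "b"), ((6, 5), "b")]
```

Note that all training cases must be kept in memory and scanned at prediction time, which is the characteristic cost of this "lazy" learning method.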

Knowledge Representation – is the field of artificial intelligence dedicated to representing real-world information so that a computer system can use it to solve complex tasks such as making medical diagnoses or conversing in a natural language. Knowledge representation in artificial intelligence describes how “things” are modelled in a solution. Several ways to model knowledge include logical language, like first order logic, or vectorizing the data to enable features to be correlated with each other.

Knowledge-Based Systems – are computer programs that draw on a body of past knowledge and make inferences from it to solve complex problems. A current example is the classification of objects on the internet: here, the knowledge base is very wide and includes unstructured data, making the task of fitting a specific data model extremely difficult.


Levenshtein Distance - expresses the similarity of two words or strings by indicating the lowest number of character edits that would be required to change one word into another.
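
The definition translates directly into a short dynamic-programming routine; this sketch keeps only one previous row of the edit-distance table:

```python
def levenshtein(a, b):
    """Minimum number of single-character insertions, deletions and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]
```

For example, turning "kitten" into "sitting" requires three edits.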

Linear Algebra - is a mathematical field that enables the manipulation of linear equations. As the language for expressing and manipulating data, it is a fundamental prerequisite for understanding machine learning.

Linear Regression – is an area of supervised machine learning that uses linear predictor functions to model the relationship between a scalar dependent variable and one or more explanatory variables.

Logic Programming – is a style of programming in which execution is based on formal logic statements, such as first-order logic or predicate logic. Prolog is the de facto standard programming language for logic programming.

Longitudinal Study - is an observational experiment design where subjects are under examination repeatedly for a long period of time, often for many years or even decades.


Machine Intelligence – describes a computer’s self-learning capabilities using artificial intelligence, machine learning and deep learning technologies.

Machine Learning – is a type of artificial intelligence that enables systems to learn patterns from data and subsequently improve from experience. It is an interdisciplinary field that includes information theory, control theory, statistics, and computer science. As it gathers and sorts more information, a machine learning system constantly gets better at identifying types and forms of data with little or no hard-coded rules. For example, through pattern recognition, machine learning will increase the accuracy of identifying specific objects or images.

Machine Perception – describes a system’s ability to receive and interpret data from the outside world in a way that’s like how humans use their senses. This is typically achieved through attached hardware.

Machine Translation – is when computers translate text or speech from one language to another. There are two main kinds of machine translation: knowledge-based systems that draw on dictionaries, grammars and so on; and statistical machine translation enabled by deep learning analysis of bilingual texts to derive meanings that enable translation from one language to another.

Machine Vision – is the method used to provide image-based automated inspection and analysis using optical sensors. In industry, uses for image processing analysis and machine vision include robot guidance, automatic inspection, security monitoring and damage assessment.

Mainframe – is a system used by large organizations for a range of data-processing tasks. This could include statistical analysis, ERP functions and financial transactions. During the 1960s and 1970s, mainframe computers were largely associated with IBM™ owing to its dominant market share. Mainframes remain a key computing resource for many large organizations and are likely to do so for many years.

Map and Reduce - is a process used for big data sets. It maps chunks of data to perform filtering/sorting operations before reducing the data by applying a summary operation to the results.

Max-Pooling - in convolution neural networks (CNN) refers to the combination of clusters of neurons in a layer into a single neuron in the subsequent layer by choosing the maximum value from the prior cluster.
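
On a plain 2D grid the operation reduces to taking block-wise maxima; a sketch assuming even dimensions and no padding:

```python
def max_pool_2x2(matrix):
    """Downsample a 2D grid by keeping the maximum of each
    non-overlapping 2x2 block."""
    return [[max(matrix[r][c], matrix[r][c + 1],
                 matrix[r + 1][c], matrix[r + 1][c + 1])
             for c in range(0, len(matrix[0]), 2)]
            for r in range(0, len(matrix), 2)]
```

The pooled output is a quarter of the original size, which both cuts computation in later layers and makes the representation less sensitive to small shifts in the input.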

Memetics - is the anthropological study of how ideas (memes) can be viewed through a Darwinian evolutionary lens. Memetics helps to explain how ideas diffuse in cultures much like genes with a lifecycle that mimics genetic evolution as it combines, mutates, and copies itself into other memes.

Minimum Description Length (MDL) - is a formalization of the Occam's Razor principle used to find the best model (description) for capturing key aspects of data. MDL assumes that the simplest, most compact representation of data is the best and most probable explanation of the data.

Minimum Viable Product – a development technique widely used by startups to launch new products or websites rapidly, assess their viability, and fast-track modifications. A final set of features is only developed once feedback from early adopters has been considered. Groupon's™ early platform, built on third-party technologies, provides a minimum viable product example.

MNIST – is a very well-known dataset of handwritten numerical digits, often used in image recognition benchmarks. It serves as a non-trivial example of a classification task.

Mobile – describes how the internet, online services, voice calls, applications, information and content are accessed via smartphones or other mobile devices. Mobile is often seen as a distinct market sector.

Model – is simply a representation or simulation of some real-world phenomenon. There are many kinds of models that can be produced, e.g. iconic models, analogic models, and analytic models. In machine learning, analytic models are produced through the process of executing a learning algorithm against some data.

Model Training – is how artificial intelligence (AI) is taught to perform its tasks, and in many ways follows the same process that new human recruits must also undergo. AI training data needs to be unbiased and comprehensive to ensure that the AI’s actions and decisions do not unintentionally disadvantage a set of people. A key feature of responsible AI is the ability to demonstrate how an AI has been trained.

Model Workflow – describes how tasks within a workflow can be mapped and analyzed before making positive changes to that workflow through business process modeling (BPM) techniques.

Monte Carlo Method - is a statistical technique where repeated random numbers are used to derive a numeric outcome. They are useful in practice to solve problems in areas such as optimization and probabilistic distributions.
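
The classic illustration estimates π by sampling random points in the unit square and counting how many fall inside the quarter circle:

```python
import random

def estimate_pi(n_samples, seed=0):
    """Monte Carlo estimate of pi: the fraction of random points in
    the unit square that land inside the quarter circle is pi/4."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n_samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n_samples
```

Accuracy improves with the square root of the number of samples, so tighter estimates demand many more draws.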

MxNet - is an open-source deep learning library supported by cloud vendors such as AWS® and Microsoft Azure®.


Named Entity Recognition (NER) – is a kind of natural language processing (NLP) task where the objective is to take sentences in human language, and automatically identify which words refer to proper nouns (individual persons, places, company or brand names etc.). Deep learning approaches have achieved major successes in named entity recognition.

Narrow AI – describes an application of artificial intelligence where the model has been optimized to perform a limited number of tasks. In practice, there are many tasks, such as information retrieval from a knowledge base, that need to be performed routinely, accurately and quickly and that could be assigned to a bot.

Natural Language Processing (NLP) – is a field of computer science with the goal of understanding or generating human language, in either text or speech form. There are two primary subfields of NLP: Natural Language Understanding (NLU) and Natural Language Generation (NLG).

Natural Language Understanding (NLU) – a sub-field of natural language processing (NLP), describes the quest to build machines with true reading comprehension, so that humans can communicate with them in natural human language and the machine can respond appropriately. Commercial applications of interest include applications ranging from text categorization, where emails are routed to the proper department based on their content, to full comprehension of newspaper articles.

Neural Networks – are a very advanced and elegant form of computing system. Machine learning neural networks consist of an interconnected set of "nodes" which mimic the network of neurons in a biological brain. Common applications include optical character recognition and facial recognition.

Neural Turing Machine - is a model that combines the ideas of pattern-matching neural network architectures with the programming and storage of Turing machines.

Neuromorphic chip – is a type of computer chip, designed to act as a neural network, that mimics in a simplified way how neurons and synapses function in the human brain. Neuromorphic chip technology is expected to greatly accelerate artificial intelligence (AI) applications in the future and is ideal for analysis-based tasks such as data-sensing, associative memory and cognitive computing. In 2017, Intel announced a neuromorphic artificial intelligence (AI) test chip, "Loihi", that is designed to mimic brain functions by learning from data it gains from its environment.


Object detection – is a type of image segmentation task where the goal is to identify one or more real-world objects in a digital image or video and localize them within the image. With advances in deep learning, object detection has been widely adopted in multiple areas. Applications include navigation in autonomous vehicles, crowd monitoring through video analytics and quality control in manufacturing processes.

Open source platform – refers to any program whose source code is made available for use or modification by other users or developers. An open source platform is usually developed as a public collaboration and made freely available. An example of an artificial intelligence open source platform is Google’s TensorFlow®. There are various licensing models for open source, which encourage adopting organizations to proliferate open standards. Linux is one of the great successes of the open source movement.

Optical Character Recognition – converts images of typed, handwritten or printed text into machine-encoded text. Sources can include a scanned document, a photo of a document, a scene-photo or even text superimposed on an image. Google™ optical character recognition, for example, can detect and convert more than 248 languages.

Oversegmentation – is the process through which objects being segmented from images are themselves broken down into separate fragments. The goal in oversegmentation is not to segment an image into distinguishable high-level objects, but into smaller “super pixel” regions that are approximately uniform in color and texture. Python is widely used as a programming language in oversegmentation.


P-Value - in statistics, is a way to measure the significance of a hypothesis test. Typically, a p-value of less than 0.05 is taken to indicate statistically significant evidence against the null hypothesis, whereas a p-value greater than 0.05 indicates only weak evidence.

Part of Speech Tagging - in natural language processing (NLP) is the process of marking a word in text to correspond to a specific part of speech based on context.

Perplexity - is an indicator of how effectively a probability model can predict a sample, grounded in the statistical measurement of information. For example, the perplexity of a language model measures how well the model predicts the next word in a sequence; lower perplexity indicates a better model.

Personality Insights – are obtained through the analysis of unstructured digital text (email, social posts, etc.) to predict individual personality traits, demands and principles.

Predicate Calculus - is a formal system of logic in which quantified variables can be used in logic statements. For example, 'all unicorns are beautiful' can be expressed in first-order, or predicate, logic as 'for all X such that X is a unicorn, X is beautiful'. Another quantifier is the existential one, as in 'there exists an X such that X is a company'. Such quantified statements cannot be expressed in propositional (Boolean) logic.
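
The two quantifiers map naturally onto Python's all() and any() over a finite domain; the entities below are invented purely to mirror the unicorn example:

```python
def for_all(domain, predicate):
    """Universal quantifier: 'for all X in domain, predicate(X)'."""
    return all(predicate(x) for x in domain)

def there_exists(domain, predicate):
    """Existential quantifier: 'there exists X with predicate(X)'."""
    return any(predicate(x) for x in domain)

# toy domain of entities with two predicates
entities = [
    {"name": "Stardust", "unicorn": True,  "beautiful": True},
    {"name": "Rex",      "unicorn": False, "beautiful": False},
]

# 'for all X, if X is a unicorn then X is beautiful'
all_unicorns_beautiful = for_all(
    entities, lambda e: (not e["unicorn"]) or e["beautiful"])
```

Real first-order logic quantifies over possibly infinite domains, so this finite enumeration is only a model of the idea, not a general theorem prover.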

Predictive Analytics – describes the practice of using historical data to predict future outcomes. It combines mathematical models (or “predictive algorithms”) with historical data to calculate the likelihood (or degree to which) something will happen. Machine learning based predictive analytics has been around for a while. But until recently it has lacked three key features that are important to drive true marketing value: scale, speed, and application.

Predictive Search – is a feature based on popular searches to predict a user's search query as it is typed, providing a drop-down menu of suggestions that changes as the user adds more words to the search engine. Predictive search in Google™ search engine was introduced with auto-complete, which helps users to search and get results quicker. The arrival of Google Now® took this further, proactively providing users with information that they were likely to need (based on their search habits etc.)

Predictive Systems – a widely adopted application of machine learning, predictive systems use intelligent technology to forecast what’s likely to happen. It’s how consumer websites display products that customers may like on their return visits to the online store. Using machine learning to track every interaction, learn preferences and purchase behavior, the store customizes subsequent visits with recommendations.

Predictor - describes an observation that correlates closely to another variable and can therefore be used to predict its value through an AI model.

Propositional Logic - also referred to as Boolean logic, is the logic of declarative truth-value statements. In propositional logic, it is not possible to use variables to define predicates or to quantify over entities.

Principal Component Analysis - is a technique used in machine learning where the variance explained by a large number of predictors is captured by a small number of derived components, while retaining most of the information in the larger set.

Pruning – describes the process of eliminating unwanted or unpromising branches and solutions, for example from a search tree or decision tree. Although pruning makes a solution easier to maintain and faster to compute, cutting down the search space restricts the number of decisions that can be made by machines.

Python™ – is a high-level programming language widely used in machine learning. With Python™, you can build a working AI prototype in a handful of lines of code, thanks to powerful libraries such as NumPy® and Pandas®. Python™ is often used for clustering and “predictive modeling”, as well as concept and function learning.

PyTorch - is an open-source library, based on Python, that is used to perform tasks such as natural language processing.


Quantum computing – is the emerging field of computer science that aims to harness a radically different framework of information processing, called quantum mechanics. Much like classical computing, the field includes both the quantum version of information science and the physical development of systems that can take advantage of it.


Recommendation Systems - A subclass of information filtering system that seeks to predict the "rating" or "preference" a user would give to an item. Recommendation systems in machine learning monitor and learn from users’ choices and behavior to make relevant suggestions. One example is how Netflix™ can suggest new content to individual viewers.

Recurrent Neural Network (RNN) – refers to a type of artificial neural network used to understand sequential information and predict follow-on probabilities. RNNs are widely used in natural language processing, with applications including language modeling and speech recognition.

Regression – describes algorithms used to predict values for new data based on the training data fed into the system. Areas where regression in machine learning is used to predict future values include drug response modeling, marketing, real estate and financial forecasting.
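For a single predictor, the ordinary least squares fit behind simple linear regression can be written in a few lines. A sketch with made-up house-price figures (the `fit_line` helper and the numbers are invented for illustration):

```python
def fit_line(xs, ys):
    """Ordinary least squares for one predictor: y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Toy training data: price grows linearly with floor area.
areas  = [50, 60, 70, 80, 90]
prices = [150, 180, 210, 240, 270]   # exactly price = 3 * area
a, b = fit_line(areas, prices)
print(a, round(b, 6))                # slope 3.0, intercept ~0.0
print(a * 100 + b)                   # predicted price for a new 100 m2 home: 300.0
```

The last line is the "predict values for new data" step: the fitted coefficients are applied to an area that was not in the training set.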

Regression SVM – although mainly used for classification analysis, support vector machines (SVMs) can also be used to sort the data used in regression analysis (modeling the relationships between response variables and predictor variables). Models produced by support vector regression (SVR) only draw on a subset of training data because their cost function ignores any training data close to the model prediction.

Reinforcement Learning – uses a kind of algorithm that works by trial and error, where the learning is enabled using a feedback loop of "rewards" and "punishments". When the algorithm is fed a dataset, it treats the environment like a game, and is told whether it has won or lost each time it performs an action. In this way, reinforcement learning algorithms build up a picture of the "moves" that result in success, and those that don't. DeepMind's AlphaGo® and AlphaZero® are good examples of the power of reinforcement learning in action.
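The reward-and-punishment loop can be sketched with tabular Q-learning on a hypothetical five-state corridor "game", where the agent wins by reaching the rightmost state (all states, rewards and hyperparameters here are invented for illustration):

```python
import random

# Corridor of states 0..4; the only reward is at state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                       # step left, step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.2
random.seed(0)

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = random.randrange(2) if random.random() < eps \
            else max((0, 1), key=lambda i: Q[s][i])
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0   # the "reward" feedback signal
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned picture of which "moves" result in success.
policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(GOAL)]
print(policy)  # step right (action 1) in every state -> [1, 1, 1, 1]
```

After enough episodes of trial and error, the greedy policy steps right everywhere, because the Q-table has propagated the reward at the goal back through the earlier states.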

ReLU - stands for Rectified Linear Unit, a non-linear activation function used in neural networks, much like the Sigmoid and Tanh functions. ReLU has become preferable in recent times as it helps to address the vanishing/exploding gradient problem.
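Both activation functions mentioned here are one-liners; a quick sketch for comparison:

```python
import math

def relu(x):
    """Rectified Linear Unit: pass positives through, zero out negatives."""
    return max(0.0, x)

def sigmoid(x):
    """Classic squashing activation, shown for comparison."""
    return 1.0 / (1.0 + math.exp(-x))

print(relu(-2.0), relu(3.0))      # 0.0 3.0
print(round(sigmoid(0.0), 2))     # 0.5
```

Because ReLU's gradient is exactly 1 for positive inputs (rather than shrinking toward zero as sigmoid's does at the extremes), gradients propagate through deep stacks of layers more reliably.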

Representation Learning - represents the class of machine learning methods where features are learned automatically by an algorithm. Deep learning is a form of representation learning.

Residual Network (ResNet) - is a deep learning neural network architecture, developed by Microsoft Research in 2015,3 that has become the state-of-the-art architecture for many image recognition models. Very deep (more than 50 layers) ResNet models remain trainable and can accomplish many complex tasks; prior to ResNet, adding more layers did not typically improve a model.

Responsible AI – is an important emerging capability for organizations that addresses the gap in making artificial intelligence systems more transparent, reliable and interpretable, so that organizational and societal issues of fairness, bias, legal compliance and ethics can be addressed methodically. Accenture Responsible AI has been at the forefront of thought leadership, building the necessary infrastructure, processes and skills to help our clients build their maturity.


Restricted Boltzmann Machines (RBM) - are simple two-layer stochastic neural networks that are typically stacked together to form larger networks. RBMs gained recognition in the Netflix Prize competition, where they were a key component of the winning entries for developing a state-of-the-art recommendation engine for movies.4

Robotic process automation (RPA) – is the application of technology that configures computer software or a “robot” to capture and interpret existing applications for processing a transaction, manipulating data, triggering responses and communicating with other digital systems. For example, Robotic Process Automation in finance executes many repetitive tasks and processes—such as loan application processing.

Root Mean Square Error - is the square root of the mean of the squared differences between modeled values and the actual observed values; it is frequently used as a figure of merit to judge the quality of a machine learning model.
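The definition translates directly into code; a small sketch with made-up observed and predicted values:

```python
import math

def rmse(actual, predicted):
    """Square root of the mean of the squared prediction errors."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

y_true = [3.0, 5.0, 2.0, 7.0]
y_pred = [2.5, 5.0, 3.0, 6.5]
print(round(rmse(y_true, y_pred), 4))  # sqrt(0.375) ~ 0.6124
```

Squaring before averaging means large errors dominate the score, and taking the root puts the result back in the same units as the predicted quantity.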

Rules-based Algorithms – leverage a series of ‘if-then’ statements that utilize a set of assertions, from which rules are created dictating how to act upon those assertions. Rules-based algorithms enable intelligent and repeatable decision making. They are also used to store and manipulate knowledge.
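A minimal sketch of the 'if-then' pattern, using hypothetical loan-approval assertions (the rules, field names and thresholds are invented for illustration):

```python
# Each rule pairs an 'if' condition over the assertions with an action.
RULES = [
    (lambda a: a["age"] < 18,            "reject: applicant is a minor"),
    (lambda a: a["income"] < 20_000,     "reject: income too low"),
    (lambda a: a["credit_score"] >= 700, "approve"),
    (lambda a: True,                     "refer to human underwriter"),
]

def decide(applicant):
    """Fire the first rule whose condition matches the assertions."""
    for condition, action in RULES:
        if condition(applicant):
            return action

print(decide({"age": 30, "income": 55_000, "credit_score": 720}))  # approve
print(decide({"age": 17, "income": 0, "credit_score": 0}))
```

Because the rules are explicit data rather than learned weights, the same decision is reached every time for the same input, which is what makes rules-based decision making repeatable and auditable.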


Scale - using artificial intelligence, it is possible to create models that can handle enormous amounts of data and achieve unparalleled predictive capabilities in real time. This makes the insights they provide much more accurate and valuable in decision-making.

Semantic segmentation – refers to a complex form of image segmentation that involves clustering parts of an image that belong to the same object together and labeling them with one of potentially hundreds of category labels. This ability opens the door to numerous applications.

Signal Processing – is the sampling and intentional alteration of signals to enhance them, using processes and algorithms such as the Fourier transform and Kalman filtering.

Smart Apps – are software programs that are designed to perform a specific function directly for the user or, in some cases, for another application. Applications of artificial intelligence systems range from Apple’s Siri® and Amazon’s Alexa® to autonomous vehicles and unmanned drones.

Smart Machines – smart machine technologies enable devices via machine-to-machine and/or cognitive computing technologies such as artificial intelligence, machine learning or deep learning. Smart machines can reason, problem-solve, make decisions and even take actions.

Smart Personalization – is enabled through intelligent algorithms that can personalize information, offers and experiences based on known information (e.g. location, demographics, device and past behaviors). One example of a smart personalization engine is Google News®, which helps users find news stories that are of interest to them.

Soft Robotics (robotic process automation/RPA) – refers to the use of software with artificial intelligence and machine learning capabilities to automate repetitive, high-volume tasks and common processes (like customer servicing, calculations and maintaining records) without having to transform existing IT systems.

Software – describes programs and other operating information used by a computer.

Speech Analytics – describes the process of analyzing recorded conversations to obtain insights that can improve communication and future interactions. These methods are widely used in domains such as call centers where speech analytics is applied to gather information on areas including customer satisfaction, customer churn, agent performance and campaign effectiveness.

Speed & Application – Artificial intelligence (AI) can accomplish years’ worth of “human” work in moments, which means it can enable predictive models that offer near real-time decision-making support. For example, search for a specific product online and a marketing AI can process the user’s information (past and present) and serve an ad for a related event, even before the homepage has loaded. Without AI, a predictive model could not handle so much data so rapidly, nor act on its own. Combined with AI, predictive analytics becomes faster, smarter and more actionable.

Statistical Machine Translation – is the process of translating documents by statistically analyzing a corpus of bilingual texts and deriving translation models from it. The most significant advantage of this method over a rules-based approach is the efficient use of both human and data resources.

STEM – stands for science, technology, engineering, and mathematics and is the applied, integrated approach to those subjects. STEM subjects have risen to prominence owing to the perception that these areas are essential to support the development of technology-driven businesses. Politicians and other leaders in the US and elsewhere have expressed concerns that students would be ill-prepared to work in the fastest-growing career sectors and have put in place policies and programs to encourage greater take-up of STEM subjects.

Strong AI – is a philosophy wherein a computer program is thought to be capable of performing similarly to a human mind in terms of perception, beliefs and other cognitive capabilities associated with humans.

Structured Data – refers to information with a high degree of organization, meaning that it can be seamlessly included in a relational database and quickly searched by straightforward search engine algorithms and/or other search operations. Structured data examples include dates, numbers, and groups of words and number “strings”. Machine-generated structured data is on the increase and includes sensor data and financial data.

Supervised Learning – is a type of machine learning in which an algorithm takes a labelled data set (data that’s been organized and described), deduces key features characterizing each label, and learns to recognize them in new unseen data. One example of supervised machine learning: having been shown multiple labelled images of cats, an algorithm will learn how to recognize a cat and identify one in other previously unseen pictures.
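One of the simplest supervised learners, nearest-neighbour classification, illustrates the labelled-data idea with no explicit training loop at all. A toy sketch with invented cat/dog measurements (the feature names and values are made up for illustration):

```python
def nearest_neighbour(train, point):
    """Predict the label of the closest labelled training example."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    features, label = min(train, key=lambda ex: dist2(ex[0], point))
    return label

# Labelled examples: (whisker_length, ear_size) -> species.
train = [((4.0, 3.0), "cat"), ((4.5, 3.2), "cat"),
         ((1.0, 8.0), "dog"), ((1.2, 7.5), "dog")]
print(nearest_neighbour(train, (4.2, 3.1)))  # "cat"
print(nearest_neighbour(train, (1.1, 7.8)))  # "dog"
```

The labels in the training set do all the work: a previously unseen measurement is simply assigned the label of its most similar labelled example.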

Support Vector Machines (SVM) – are supervised machine learning algorithms used for classification and regression analysis. Employed in classification analysis, support vector machines can carry out text categorization, image classification, and handwriting recognition.


Temporal Difference Learning Method - is a form of reinforcement learning designed to predict future values in a partially unknown system. Invented by Rich Sutton, it has been successfully used in learning game strategies. A famous example is TD-Gammon, which became the first artificial system to play backgammon at master level.

TensorFlow® – is a collection of open-source software tools developed by Google™ for deep learning applications. Similar projects include Torch and Theano.

Tensor Processing Units (TPUs) – like GPUs, TPUs are a proprietary type of processor designed by Google™ for AI. Unlike GPUs, TPUs are designed to perform high volumes of low-precision computation, enabling AI models to be deployed at significantly lower power consumption.

Thompson Sampling - also known as posterior sampling, is a heuristic developed by William Thompson in the 1930s. In 2012, Thompson Sampling gained significant interest as it was shown to perform well at converging on an optimal strategy.

Training Data – is the data used to train a machine learning algorithm. Generally, data in machine learning is divided into three datasets: training, validation and testing data. In general, the more accurate and comprehensive training data is, the better the algorithm or classifier will perform.
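The three-way split can be sketched in a few lines of standard-library Python (the 70/15/15 proportions and the `split` helper are a common convention chosen for illustration, not a rule):

```python
import random

def split(data, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle once, then slice into training / validation / test sets."""
    rows = data[:]
    random.Random(seed).shuffle(rows)   # fixed seed keeps the split reproducible
    n_train = int(len(rows) * train_frac)
    n_val = int(len(rows) * val_frac)
    return (rows[:n_train],
            rows[n_train:n_train + n_val],
            rows[n_train + n_val:])

data = list(range(100))
train, val, test = split(data)
print(len(train), len(val), len(test))  # 70 15 15
```

Shuffling before slicing matters: it prevents any ordering in the source data (by date, by class, and so on) from leaking into one subset and biasing the evaluation.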

Transfer Learning – is a technique in machine learning in which an algorithm learns to perform one task, such as recognizing cars, and builds on that knowledge when learning a different but related task, such as distinguishing dogs. Developed by Google™, TensorFlow® is an example of a technology that supports and enables transfer learning.

Transparency – this describes how artificial intelligence (AI) explains both what it does and how it reaches decisions. To achieve that requires a means to document how the AI arrived at decisions. Transparency also includes the right governance structures to monitor the AI and where necessary optimize how it reaches decisions.

Tuning – of artificial intelligence models is required before they can be deployed at scale. This process is often non-intuitive and time-consuming and the tunable model parameters (hyperparameters) can greatly affect the model accuracy.

Turing Test – developed by Alan Turing, this tests the ability of a machine to exhibit human intelligence. In this test, also called the “imitation game”, a human evaluator judges natural language conversation between a machine designed to provide human-like responses and a human. The machine passes the test if the evaluator is unable to differentiate between the two. Since its development, the test has become an important feature of the philosophy of AI.


Unsupervised Learning – is a type of machine learning in which the algorithm does not need data with pre-defined labels. Unsupervised machine learning instead categorizes entries within datasets by examining similarities or anomalies and then grouping different entries accordingly. For example, an unsupervised learning algorithm might look at many unlabeled images of cats and dogs and would sort images with similar characteristics into different groups without knowing that one contained "cats" and the other "dogs."
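The grouping-by-similarity idea can be sketched with a tiny k-means over unlabelled numbers (k is fixed at 2 and the starting centroids are chosen deterministically to keep the illustration reproducible; `kmeans_1d` is an invented helper):

```python
def kmeans_1d(values, iterations=10):
    """Tiny k-means with k=2: group unlabelled numbers around 2 centroids."""
    centroids = [min(values), max(values)]       # deterministic starting guess
    for _ in range(iterations):
        clusters = [[], []]
        for v in values:                         # assign each value to its nearest centroid
            i = 0 if abs(v - centroids[0]) <= abs(v - centroids[1]) else 1
            clusters[i].append(v)
        centroids = [sum(c) / len(c) for c in clusters]  # move centroids to cluster means
    return centroids, clusters

# Two obvious groups, but never labelled as such.
values = [1.0, 1.2, 0.8, 9.0, 9.3, 8.7]
centroids, clusters = kmeans_1d(values)
print([round(c, 2) for c in centroids])  # [1.0, 9.0]
```

Nothing told the algorithm what the groups "mean"; it discovered the two clusters purely from the similarity of the values, which is the essence of unsupervised learning.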

Unstructured Data – describes information that does not have a pre-defined data model or is not organized in a pre-defined way. Often text-heavy, unstructured data can also be non-textual (e.g. JPEG images and MP3 files). Examples include e-mail, videos, photos, audio files, word processing documents, presentations and webpages. Unstructured data analytics tools like natural language processing, speech analytics and video analytics make it possible for machines to understand the huge volumes of unstructured data now available to organizations.

Utterance – is any input made by a human user to a chatbot or other conversational AI. An utterance can be either written or spoken. To work out what to do next, a chatbot must understand the utterance’s intent. For example, the utterance “I want to apply for a loan” will prompt a banking chatbot to respond with relevant information and processes.


Verification – refers to a process used in programming to validate whether an algorithm is delivering the correct output (according to defined parameters).

Video Analytics – uses deep learning algorithms to automatically analyze images in videos to detect and determine temporal and spatial events. Video analytics is used in a wide range of domains, including entertainment, security, healthcare, retail and transport.

Virtual Agent – describes an artificial intelligence-powered representative that provides a human-like customer interface. Virtual agents can hold conversations with humans, answering queries intelligently and making limited gestures such as nodding in agreement.

Virtual Reality – is created by interactive software and hardware to provide 3-D immersive environments. Virtual reality applications create experiences that can make users suspend their disbelief about the reality of an environment and how they interact with it. This means the VR must not only look real but feel real too. Looking real can be achieved by wearing a head-mounted display (HMD) that displays a recreated life-size, 3D virtual environment without the boundaries imposed by a TV or computer screen. Feeling real can be achieved through handheld input devices such as motion trackers that base interactivity on the user’s movements.

Visual Search – uses an image, not text, as the online search query. For example, the eBay™ visual search engine uses computer vision and deep learning technologies to compare images uploaded by shoppers to find products for sale on its site. Advanced visual search can identify what’s in a photo—a restaurant, for example—and supply additional information, like phone number and opening hours.


Weak AI – also known as narrow AI, describes an application of artificial intelligence (AI) where the model has been optimized to perform a limited number of tasks. In practice, there are many tasks, such as information retrieval from a knowledge base, that need to be performed routinely, accurately and quickly and that could be assigned to a bot.

Wearable Technology – describes smart electronic devices, worn on the body, and designed to enhance everyday activities. By gathering data and transmitting it to a user’s smartphone app, they provide real-time performance feedback in areas including sport and health, along with personalized recommendations for improvement. Wearable technology for athletes includes Fitbits™, Apple Watch® and Samsung Gear Fit 2 Pro®.

Web 2.0 – was a phrase first coined in 2004 to describe the second stage of the World Wide Web’s development. These developments include the evolution from static web pages to dynamic, interactive and user-generated content and the growth of social media. Examples of web 2.0 range from online banking to remote email access.

Word2Vec - is a model invented by Tomas Mikolov that learns to represent (embed) words in a vector space. For example, this might mean locating the words 'King' and 'Queen' in a similar way to locating 'Man' and 'Woman', allowing simple algebra to be computed like 'King - man + woman = Queen'. These language-embedding models can then be used to improve many types of natural language processing tasks, such as sentiment analysis.
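The famous analogy can be demonstrated with hand-made toy vectors; note these embeddings are invented for the example, whereas a real Word2Vec model learns them from a large text corpus:

```python
import math

# Hand-crafted 3-D toy vectors, not learned embeddings.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity, the standard closeness measure for embeddings."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# 'king' - 'man' + 'woman' should land closest to 'queen'.
target = [k - m + w for k, m, w in
          zip(vectors["king"], vectors["man"], vectors["woman"])]
best = max((w for w in vectors if w != "king"),
           key=lambda w: cosine(vectors[w], target))
print(best)  # queen
```

The same vector-arithmetic-plus-nearest-neighbour lookup is how the 'King - man + woman = Queen' result is computed with real learned embeddings.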


XGBoost - is a popular implementation of gradient-boosting approaches for building decision-tree models. It has performed well in many machine learning competitions, such as those hosted on Kaggle, and is now an almost automatic choice for classification-type problems.
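The gradient-boosting idea underneath can be sketched with decision stumps and squared loss, where each round fits a stump to the current residuals. This is a bare sketch of the principle, not XGBoost's actual implementation, which adds regularization and many optimizations:

```python
def fit_stump(xs, residuals):
    """Best single-split regression tree (a 'stump') on one feature."""
    best = None
    for t in sorted(set(xs)):
        left  = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lv, rv = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lv) ** 2 for r in left) +
               sum((r - rv) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lv, rv)
    _, t, lv, rv = best
    return lambda x, t=t, lv=lv, rv=rv: lv if x <= t else rv

def gradient_boost(xs, ys, rounds=50, lr=0.3):
    """Each round fits a stump to the residuals left by the ensemble so far."""
    pred = [0.0] * len(xs)
    stumps = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

xs = [1, 2, 3, 4, 5, 6]
ys = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]   # a simple step function
model = gradient_boost(xs, ys)
print(round(model(2), 3), round(model(5), 3))  # ~1.0 and ~5.0
```

With squared loss, the negative gradient is exactly the residual, so "fit a tree to the residuals" is the whole trick; each small step (scaled by the learning rate) shrinks the remaining error geometrically.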


Zero-Shot Learning - refers to a model's ability to handle unseen data or classes of examples that it has not been trained on.


[1] Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. Curran Associates Inc., 2012.

[2] Intel Newsroom, “Intel’s new self-learning chip promises to accelerate artificial intelligence,” Sept. 25, 2017.

[3] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. Deep Residual Learning for Image Recognition. Microsoft Research, 2015.

[4] Andreas Toscher, Michael Jahrer. The BigChaos Solution to the Netflix Grand Prize. AT&T Labs – Research, 2009.

This document is produced by consultants at Accenture as general guidance. It is not intended to provide specific advice on your circumstances. If you require advice or further details on any matters referred to, please contact your Accenture representative.

This document makes descriptive reference to trademarks that may be owned by others. The use of such trademarks herein is not an assertion of ownership of such trademarks by Accenture and is not intended to represent or imply the existence of an association between Accenture and the lawful owners of such trademarks.
