Deep Neural Networks – refers to any neural network architecture that has multiple hidden layers of artificial neurons (nodes). Such architectures enable the models to learn multiple hierarchical (hence "deep") interrelationships among the features present in the data. One example is the deep neural network for YouTube™ recommendations, which is among the most advanced recommendation engines developed to date.
Defense Advanced Research Projects Agency (DARPA) – is dedicated to the development of new technologies that can be used by the military. As a US Department of Defense (DoD) agency, DARPA is independent, reporting directly to senior DoD personnel. DARPA has, over the years, been at the forefront of the development of emerging technologies that have become commonplace in our lives – including, for example, the internet and artificial intelligence.
Descriptive Analytics – refers to the analysis of historical data to quantify what happened. The difference between descriptive analytics vs predictive analytics can be illustrated through examples of each: the former could include company reports providing a historic perspective on performance, while the latter considers patterns and trends in past data and applies these to understand what might happen next.
Design Thinking – a methodology developed to enable practical, creative resolution of problems using approaches like those deployed in product and/or service design. Design thinking strategies have been shown to dramatically improve innovation by enabling organizations to think like creative designers.
Digital Assistant – is an artificial intelligence system that understands natural language voice commands to perform tasks, such as customer service. For example, the digital assistant Amazon Alexa® can respond to a wide range of spoken requests, from checking the weather to ordering a taxi.
Digital Transformation – describes the process through which businesses and enterprises become increasingly digital and dependent on IT for successful outcomes, as well as enabling their people to solve traditional challenges with the support of digital technologies.
Dimension Reduction - is the process in machine learning where the number of predictor variables is reduced to a few significant ones.
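The idea can be sketched with principal component analysis (PCA), one common dimension reduction technique, implemented here via NumPy's singular value decomposition. The data matrix and component count are illustrative assumptions, not drawn from a specific application.

```python
# A minimal PCA sketch: project data onto its top principal axes,
# reducing many predictor variables to a few significant ones.
import numpy as np

def pca_reduce(X, n_components):
    """Project the rows of X onto the top n_components principal axes."""
    X_centered = X - X.mean(axis=0)               # PCA requires centered data
    # SVD of the centered data: rows of Vt are the principal axes
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:n_components].T       # reduced representation

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                     # 100 samples, 5 features
X_reduced = pca_reduce(X, 2)                      # keep 2 components
```

By construction the first retained component captures at least as much variance as the second, which is what makes the kept components "significant".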
Dropout Regularization - in the training of a deep learning model, refers to temporarily and randomly removing neurons so that their contribution is excluded during a calibration step. This process reduces overfitting by desensitizing the network to the weights of individual neurons.
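A minimal sketch of the mechanism, using "inverted dropout": each activation is zeroed with probability p during training, and the survivors are rescaled by 1/(1-p) so the expected activation is unchanged. The array size and drop probability are illustrative.

```python
# Inverted dropout: randomly zero a fraction p of activations and
# rescale the survivors so the expected output stays the same.
import numpy as np

def dropout(activations, p, rng):
    """Apply dropout with drop probability p (training time only)."""
    mask = rng.random(activations.shape) >= p     # True = neuron kept
    return activations * mask / (1.0 - p)         # rescale survivors

rng = np.random.default_rng(42)
a = np.ones(1000)                                 # toy activations
out = dropout(a, p=0.5, rng=rng)                  # roughly half are zeroed
```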
Dynamic Programming – is a technique for breaking down an optimization problem into simpler sub-problems and storing the solution to each sub-problem so that each sub-problem is only solved once. Examples of dynamic programming solutions include backward induction, lattice models for protein-DNA binding, and many string algorithms.
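The core idea — solve each sub-problem once and store the result — can be shown with the classic Fibonacci recurrence, where memoization turns an exponential-time recursion into a linear-time one.

```python
# Dynamic programming via memoization: each Fibonacci sub-problem
# fib(k) is computed once and cached, never recomputed.
from functools import lru_cache

@lru_cache(maxsize=None)      # stores each sub-problem's solution
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

result = fib(50)              # fast: 51 sub-problems, each solved once
```

Without the cache, the naive recursion would revisit the same sub-problems an exponential number of times.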
Ecosystem – a technology ecosystem is a product platform defined by core components made by the platform owner and complemented by peripheral applications, services and data from other organizations. A key benefit of ecosystems is that they provide solutions which are greater than those provided by the platform developer. As such, they can solve an industry’s technical problems as well as opening new opportunities for growth.
Enhanced Interaction – is how hyper-personalization and curation of real-time information deliver superior experiences to customers and users, increasing customer acquisition, retention, and overall satisfaction. An enhanced interaction framework sets out how an organization can harness technology to achieve those results.
Enhanced Judgment – uses artificial intelligence to augment human intelligence. In doing so, it improves the quality and effectiveness of human decision-making to support better performance. Examples range from diagnosing cancer to automobile design.
Ensemble Methods - are techniques in machine learning that combine multiple models to obtain better predictive capability than any individual algorithm can provide.
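One simple ensemble technique is majority voting over classifier outputs, sketched below. The three "models" and their predictions are hypothetical stand-ins for real trained classifiers.

```python
# Majority-vote ensemble: combine several models' predictions by
# taking the most common label for each sample.
from collections import Counter

def majority_vote(predictions_per_model):
    """predictions_per_model: list of per-model prediction lists."""
    ensemble = []
    for votes in zip(*predictions_per_model):           # one sample at a time
        ensemble.append(Counter(votes).most_common(1)[0][0])
    return ensemble

# Three hypothetical classifiers, each wrong on different samples:
model_a = ["cat", "dog", "dog", "cat"]
model_b = ["cat", "cat", "dog", "dog"]
model_c = ["dog", "dog", "dog", "cat"]
combined = majority_vote([model_a, model_b, model_c])
```

The combined vote can be correct even where any single model errs, which is the intuition behind ensembles such as bagging and random forests.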
Expert Systems (Inference) – A computer system that emulates the decision-making ability of a human expert. Inference in expert systems applies logical rules to a knowledge base and deduces new knowledge from it.
Facial Recognition – is a technology capable of identifying or verifying a person from a digital image or a video frame. The software extracts facial features and subsequently classifies them by comparing the given image with faces within a database.
Feature or Predictor - refers to the measurable variable that is used to predict an outcome for a machine learning model. For example, an individual's biometrics, such as height and gender, could be used to predict his or her weight.
First Order Logic - refers to predicate calculus. It is called first order because logic statements allow variables over entities (simpler logic systems like Boolean or propositional logic do not allow variables). For more complex logical expressions, higher-order logic systems additionally allow variables over predicates and functions.
Fisherian Statistics - is a foundational school of inferential statistics, in contrast to other modern inference schools like Frequentist or Bayesian statistics. It was developed by Sir Ronald Fisher and is also referred to as fiducial inference. In Fisherian statistics, statisticians use both probability and likelihood for inference, whereas both Frequentist and Bayesian techniques restrict inference to probability.
Formal Concept Analysis - is a formal mathematical way of extracting hierarchical conceptual ontologies from data. It has applications in data mining, knowledge management, and machine learning.
Forward Chaining Rules – describes a method where an expert system works "forward" from a problem to find a solution. Using a rule-based system, forward chaining requires the artificial intelligence to determine which "if" rules it should apply, repeatedly, until the goal is achieved.
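A minimal sketch of the loop: starting from known facts, "if" rules fire and add their conclusions to the knowledge base until no more rules apply. The facts and rules here are illustrative inventions.

```python
# Forward chaining: repeatedly fire any rule whose conditions are
# satisfied by the current facts, adding its conclusion, until
# nothing new can be deduced.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)       # rule fires: new fact deduced
                changed = True
    return facts

rules = [({"has_feathers"}, "is_bird"),
         ({"is_bird", "can_fly"}, "can_migrate")]
derived = forward_chain({"has_feathers", "can_fly"}, rules)
```

Note that the second rule only becomes applicable after the first one fires, which is exactly the "forward" propagation the entry describes.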
FPGA – a Field Programmable Gate Array is a type of specialized computing module that can be programmed to perform certain tasks quickly.
Frequentist Statistics – is a school of inference in which probability is interpreted as the long-run frequency of outcomes, so inferences rest on the proportion of samples in which an event occurs. In a frequentist approach, probabilities are therefore only discussed in the context of a well-defined random experiment.
Fully Connected Networks – are multi-level networks in which all nodes are interconnected. A general Boltzmann machine is a type of fully connected network model.
Fuzzy Logic – is a form of logic system, where the distinction between truth and false values is not binary but multi valued, therefore allowing for a richer expression of logical statements. It was invented by Lotfi Zadeh and is used in expert systems design.
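Zadeh's basic fuzzy operators can be sketched in a few lines: truth is a degree in [0, 1], with AND, OR, and NOT given by min, max, and complement. The "warm" and "humid" membership degrees below are illustrative.

```python
# Fuzzy logic: truth values are degrees in [0, 1], not binary.
# Zadeh operators: AND = min, OR = max, NOT = 1 - x.
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

def fuzzy_not(a):
    return 1.0 - a

warm, humid = 0.7, 0.4                          # partial truths
muggy = fuzzy_and(warm, humid)                  # min(0.7, 0.4)
pleasant = fuzzy_and(warm, fuzzy_not(humid))    # min(0.7, 0.6)
```

This multi-valued treatment of truth is what lets fuzzy expert systems express statements like "the room is fairly warm" directly.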
Gain and Lift Charts – are used by data scientists to measure the performance of models against a baseline random process, i.e., the outcome achieved without the predictive model present.
Game Theory – is a field of mathematical modeling applied across many areas of study – like economics, biology, and internet and network design – to derive outcomes when participants interact strategically, whether in zero-sum games (a win for one results in a loss for another) or non-zero-sum games. One of the grandfathers of modern computing, John von Neumann, was an early pioneer in setting the mathematical foundations of game theory, which was later expanded by John Nash and others to the more general setting of non-cooperative game theory.
Gated Recurrent Unit – is a type of recurrent neural network (RNN), and more specifically a simplified variant of the LSTM type of RNN. GRUs are used extensively in modeling language, sequential, or time series data. Like the LSTM, the GRU controls the flow of information through the individual cells (units) of the neural network architecture, which makes the training of the models much more tractable.
Gaussian Distribution – also known as the normal distribution or the bell curve, is a type of continuous probability distribution which is defined by two parameters, the mean µ and the standard deviation σ.
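The density itself is short enough to write out directly; the sketch below evaluates the standard normal (µ = 0, σ = 1) at its mean.

```python
# Gaussian probability density function, defined by mean mu and
# standard deviation sigma:
#   f(x) = 1 / (sigma * sqrt(2*pi)) * exp(-(x - mu)**2 / (2*sigma**2))
import math

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

peak = gaussian_pdf(0.0)      # density at the mean of a standard normal
```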
Generative Adversarial Networks – describe pairs of alternately trained, competing deep learning models: a generator produces synthetic data, while a discriminator is trained to distinguish actual data from the generator's output. This ability to capture and copy variations within a dataset can be applied for uses such as understanding risk and recovery in healthcare and pharmacology.
General AI – is a form of artificial intelligence that can complete the broad range of tasks, across the wide range of environments, that a human can handle.
General Purpose Technology – The significance of general purpose technologies (GPTs) lies in the overall impact that they have on society, as well as the vast range of complementary innovations they support. To date electricity, the internet and information technology are probably the most significant. Artificial intelligence is another landmark general purpose technology development.
Genetic Algorithm – Inspired by natural evolution, a genetic algorithm is a class of optimization technique where the best models go through a process of "population control" via a methodical cycle of fitness evaluation, selection, mutation, and crossover. Genetic algorithms are one example of the broader class of evolutionary algorithms.
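The fitness/selection/crossover/mutation cycle can be sketched on a toy problem: maximizing f(x) = -(x - 3)², whose optimum is at x = 3. The population size, rates, and generation count are illustrative choices.

```python
# A tiny genetic algorithm: rank by fitness, keep the fittest half
# (selection), blend random parent pairs (crossover), and add small
# random tweaks (mutation), repeating for many generations.
import random

def fitness(x):
    return -(x - 3.0) ** 2                       # maximum at x = 3

def evolve(generations=100, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # selection: keep fittest
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2.0                # crossover: blend parents
            child += rng.gauss(0.0, 0.1)         # mutation: random tweak
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()                                  # converges near x = 3
```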
Genetic Programming – refers to a subset of artificial intelligence in which computer programs are encoded as sets of genes that are adjusted using evolutionary algorithms. In this way, genetic programming follows Darwin’s principles of natural selection: the computer program works out which solutions are strongest and progresses those, discarding the weaker options.
Genomic Analysis – Genomic analysis technologies are used to identify, measure or compare genomic features such as DNA sequence, structural variation, gene expression, or regulatory and functional element annotation at a genomic scale.
Gesture recognition – is how a computing device interprets a specific human gesture or motion. Gesture recognition technology can recognize movements or characteristics from any bodily motion or state.
Gradient Descent - is the process of finding the minimum of a function by iteratively taking steps that are proportional to the negative of the gradient.
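A minimal sketch on a one-dimensional function, f(x) = (x - 2)², whose gradient is 2(x - 2): each iteration steps against the gradient, scaled by a learning rate. The starting point and step size are illustrative.

```python
# Gradient descent on f(x) = (x - 2)**2: each step is proportional
# to the negative of the gradient f'(x) = 2 * (x - 2).
def gradient_descent(start, learning_rate=0.1, steps=100):
    x = start
    for _ in range(steps):
        grad = 2.0 * (x - 2.0)        # derivative of (x - 2)**2
        x -= learning_rate * grad     # step against the gradient
    return x

minimum = gradient_descent(start=10.0)   # converges toward x = 2
```

In machine learning the same loop runs over a model's loss function, with the gradient taken with respect to the model's parameters.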
Graphics Processing Units (GPUs) – refers to a specialized electronic circuit (chip) that’s designed to carry out calculations at very high speed. Mainly used for image display, graphics processing unit architectures can be found in a huge range of products, from mobile phones and personal computers, to workstations and game consoles.
Ground Truth – refers to information, usually gathered by direct on-site observation, that serves as the accurate real-world reference for checking a training dataset and for proving or disproving a research hypothesis. For example, self-driving cars use ground truth data to train artificial intelligence to properly interpret road and street scenes.
Heuristic Search Techniques – are practical approaches to problem-solving that narrow down searches for optimal solutions by eliminating incorrect options. In the field of artificial intelligence, heuristic search techniques rank alternatives in search algorithms at each decision branch, using available information to decide which branch to follow.
Human-in-the-loop – refers to the process of inserting humans into machine learning processes to optimize outputs and boost accuracy. HITL is widely recognized as a best practice technique in machine learning: examples include Facebook’s photo recognition algorithm which invites users to confirm the identity of a photo’s subject when its confidence falls below a certain level.
Hyper Parameters - are configuration values assigned before calibration rather than learned from the data; they are used to tune artificial intelligence algorithms.
Hyper Parameters Tuning - is the process of assigning optimized values to hyperparameters, using methods such as a grid search evaluated against a predefined loss function.
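Grid search can be sketched in a few lines: every combination in a predefined grid is scored against a loss function, and the lowest-loss setting wins. The loss function here is a hypothetical stand-in for a real validation loss, built so its minimum falls at (0.1, 32).

```python
# Grid search over hyperparameters: evaluate every combination in
# the grid against a loss function and keep the best.
from itertools import product

def toy_loss(learning_rate, batch_size):
    # Hypothetical loss surface with its minimum at (0.1, 32)
    return (learning_rate - 0.1) ** 2 + (batch_size - 32) ** 2 / 1000.0

grid = {"learning_rate": [0.01, 0.1, 1.0], "batch_size": [16, 32, 64]}

best_params, best_loss = None, float("inf")
for lr, bs in product(grid["learning_rate"], grid["batch_size"]):
    loss = toy_loss(lr, bs)
    if loss < best_loss:
        best_params, best_loss = {"learning_rate": lr, "batch_size": bs}, loss
```

In practice the loss is typically a model's validation error, and smarter strategies (random search, Bayesian optimization) are used when the grid grows large.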
Image Analytics – is how information from image data is extracted and analyzed using digital image processing. As well as identifying faces to establish age, gender, and sentiment, image analytics algorithms can also recognize numerous features simultaneously (including logos, objects, scenes, etc.). The use of bar codes and QR codes are simple examples, but more complex uses include facial recognition and position and movement analysis.
Image Recognition – is technology that can identify objects, places, people, writing and actions in images, using machine vision in combination with a camera, statistical approaches and artificial intelligence.
Image Search – refers to the use of specialized data search tools to find images. Google™ Image Search and Microsoft Photos® are examples of image search engines: both provide users with images similar to their queries (keywords, links, or even other images).
Inductive Reasoning – is a process where evidence and datasets are used to reach conclusions and specific goals. In practice, this is not much different from normal programming, because it works on datasets already present instead of constructing them.
Information Distance - is a metric for measuring the similarity between two objects. It is used in many algorithms, for example in unsupervised clustering, to find groupings of similar objects. (See also Levenshtein Distance for an example.)
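One concrete distance of this kind is the Levenshtein (edit) distance between strings: the minimum number of single-character insertions, deletions, and substitutions needed to turn one string into another. A minimal dynamic programming implementation:

```python
# Levenshtein distance between strings a and b, computed row by row.
def levenshtein(a, b):
    # prev[j] holds the distance between the current prefix of a and b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

distance = levenshtein("kitten", "sitting")   # three edits apart
```

A clustering algorithm could use such a metric directly to group similar strings.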
Information Retrieval - is a field of computer science that focuses on tools, processes and capabilities to extract, sort, and organize actionable information from disparate sources.
Information Theory - is a field that studies mathematical quantification of information, its transmission, and encoding. It is a fundamental basis for the modern design of information communication in, for example, noisy channels and/or compression of stored data.
Innovation Diffusion – is artificial intelligence’s (AI) ability to create a multiplier effect for economic growth. Innovation diffusion examples include the driverless car. Driverless cars’ AI systems will create huge amounts of data that will generate opportunities for others to develop new services and products. For example, insurers will generate new revenue streams from their ability to cover new risks and offer customers new, data-enabled services. Urban planners and others will also be able to take advantage and create new services – for example how they charge for road use. Other AI-enabled technologies could have a similar impact and create growth across a broad spectrum of adjacent commercial activities.
Intelligent Automation – refers to an automation solution that is enhanced with cognitive capabilities that enable programs and machines to learn, interpret and respond. Early examples include solutions such as automobile cruise control, while a current state-of-the-art case is the self-driving car.