In their book Prediction Machines: The Simple Economics of Artificial Intelligence, University of Toronto economists Ajay Agrawal, Joshua Gans and Avi Goldfarb posit that the principal benefit of artificial intelligence (AI) is that it dramatically lowers the cost of making accurate predictions. They argue that when the cost of something falls, we use more of it and apply it in ways not previously imagined. Basic economics would therefore suggest that the use of AI is set for exponential growth. The results of Accenture’s 2018 Technology Vision survey bear this out: 68 percent of respondents indicated that their organizations plan to invest in AI over the next year.1
So how might the chemical industry use AI to automate and augment prediction-making capabilities?
In the course of running a chemical company, workers, managers and executives make scores of predictions every day. For example:
A formulation chemist predicts how a change in the recipe might affect the functionality of the formulation before making the change.
A product manager predicts how customers and competitors will receive a five-cent-per-pound price increase before deciding on the pricing action.
A talent manager predicts how a candidate will perform and fit into the culture of the organization when making a hiring decision.
A maintenance worker predicts the likelihood of the continued functioning of a piece of equipment before deciding whether to let it run or take it down for maintenance.
A treasurer predicts future cash needs when deciding how to fund or where to invest excess cash.
These predictions are not pulled out of thin air. They are made on the basis of data and of models that analyze this data. The challenge, however, is that the data sets are incomplete, and models built on such limited data sets are not necessarily reliable. For example:
Forecasting prices: A product manager for polyester makes her price prediction based on several data sets: the price of ethylene glycol and terephthalic acid, industry cost curves, polyester operating rates, etc. Price forecasts made with this limited data set tend to be directionally correct, at best, over the long run. This is because other variables like the price of cotton, short-term fashion trends (e.g., rise of athleisure wear), disposable income, shipping rates or inventory positions also affect price. But this data is either hard to obtain or difficult to factor into predictions.
Creating formulations: A formulation chemist who develops an additive package for a new engine oil specification relies on historical recipes to come up with initial formulations. He must deal with perhaps 20 to 25 complex components that react with each other and collectively produce the desired characteristics in the additive package. Once the potentially viable samples are developed, he runs expensive engine trials that generate a vast amount of quantitative and qualitative data (e.g., visual inspection of piston head). The process of developing a new package thus involves a lot of trial and error, and informed guesswork.
To understand why the use of AI is set to explode, it is important to first understand what has happened in recent years to the costs of data and computing power, two key inputs for making predictions.
Data: The biggest change is that it is now easy and inexpensive to gather and store vast amounts of data. The cost of Internet of Things (IoT) sensors has plummeted, lowering the cost of data gathering, and the cost of storage has dropped because of the cloud.
Computing power: Over the last 25 years, computing power per dollar has increased dramatically; some estimates suggest by as much as a factor of 10 every four to five years.2
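To get a feel for what that compounding implies, a back-of-envelope calculation (assuming the tenfold-every-four-to-five-years estimate holds across the full 25 years) looks like this:

```python
# Back-of-envelope check: a tenfold gain in computing power per dollar
# every four to five years, compounded over 25 years.
years = 25
for period_years in (4, 5):
    overall_gain = 10 ** (years / period_years)
    print(f"10x every {period_years} years over {years} years "
          f"-> roughly {overall_gain:,.0f}x total")
```

Even on the conservative five-year cadence, that is a hundred-thousand-fold improvement over 25 years.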
Let us revisit the two examples discussed earlier in this new world of cheap, virtually limitless data sets and very low cost of computing power.
Forecasting prices: The product manager for polyester now uses an AI assistant that continuously accesses—in addition to the company’s proprietary supply curves and demand projections—several additional information sources, such as Chicago Board of Trade cotton futures, news reports on fashion trends, the ICIS3 feed on plant outages across the polyester value chain and shipping prices. Machine learning algorithms continuously detect patterns in this vastly expanded data set to make increasingly accurate predictions. With this information, the product manager can continuously improve her ability to price product to optimize profitability.
The AI agent does what it does well—processes vast amounts of data, detects patterns and makes predictions. The product manager does what humans do best—exercises judgment on how to translate those predictions into pricing actions. In other words, a crucial function in running the polyester business has become “digital.”
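The core idea—that a model fed a wider set of market signals predicts price more accurately than one limited to feedstock cost—can be sketched in a few lines. This is an illustrative toy, not the product manager’s actual system: the feature names, numbers and the linear relationship are all invented for demonstration.

```python
import numpy as np

# Illustrative sketch only: all feature names, ranges and coefficients
# are synthetic. The point is that a model trained on an expanded set
# of signals fits price far better than one using feedstock cost alone.
rng = np.random.default_rng(42)

n = 500
feedstock = rng.uniform(800, 1200, n)   # hypothetical feedstock cost, $/tonne
cotton = rng.uniform(60, 100, n)        # hypothetical cotton futures, cents/lb
shipping = rng.uniform(1000, 3000, n)   # hypothetical container rate, $/FEU

# Synthetic "true" polyester price driven by all three signals, plus noise
price = 0.9 * feedstock + 4.0 * cotton + 0.05 * shipping + rng.normal(0, 15, n)

def fit_rmse(X, y):
    """Fit ordinary least squares and return the in-sample RMSE."""
    X1 = np.column_stack([X, np.ones(len(y))])   # add an intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    return float(np.sqrt(np.mean(resid ** 2)))

rmse_limited = fit_rmse(np.column_stack([feedstock]), price)
rmse_expanded = fit_rmse(np.column_stack([feedstock, cotton, shipping]), price)
print(f"RMSE, feedstock only:    {rmse_limited:.1f}")
print(f"RMSE, expanded features: {rmse_expanded:.1f}")
```

A real deployment would of course use richer models and live data feeds, but the direction of the result is the same: the expanded feature set cuts the prediction error dramatically.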
Creating formulations: The formulation chemist feeds the new engine oil specifications into an AI aide—that is, a “digital chemist.” The AI-based assistant accesses all the recipes ever created in the company’s history. The “digital chemist” is connected to the company’s laboratory information system and taps into patent and CAS4 databases. Years of quantitative and qualitative engine test data have also been ingested.
In a matter of minutes, the machine learning algorithm driving the digital chemist recommends three recipes that have the highest likelihood of meeting the new specifications. Now the formulation chemist only needs to run three engine trials instead of dozens, thereby saving the cost of developing a new recipe and significantly accelerating the speed to market. Again, a crucial function in R&D has become “digital.”
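The shortlisting step can be illustrated with a simplified stand-in: score each historical recipe by how closely its measured properties match the target specification, and return the top three. A production “digital chemist” would use a trained model over lab and engine-test data; the recipes, property names and target values below are entirely hypothetical.

```python
# Illustrative sketch: shortlisting candidate recipes against a target spec.
# All recipe IDs, properties and values below are hypothetical.

TARGET_SPEC = {"viscosity_index": 160, "tbn": 10.0, "wear_scar_mm": 0.35}

# Hypothetical historical recipes with their measured properties
RECIPES = [
    ("R-001", {"viscosity_index": 150, "tbn": 9.5, "wear_scar_mm": 0.40}),
    ("R-002", {"viscosity_index": 162, "tbn": 10.2, "wear_scar_mm": 0.34}),
    ("R-003", {"viscosity_index": 130, "tbn": 7.0, "wear_scar_mm": 0.55}),
    ("R-004", {"viscosity_index": 158, "tbn": 9.9, "wear_scar_mm": 0.36}),
    ("R-005", {"viscosity_index": 170, "tbn": 12.0, "wear_scar_mm": 0.30}),
]

def spec_distance(props, target):
    """Normalized squared distance between a recipe's properties and the spec."""
    return sum(((props[k] - v) / v) ** 2 for k, v in target.items())

def top_candidates(recipes, target, k=3):
    """Return the k recipes whose properties lie closest to the target spec."""
    return sorted(recipes, key=lambda r: spec_distance(r[1], target))[:k]

best = top_candidates(RECIPES, TARGET_SPEC)
print([name for name, _ in best])
```

Only the three shortlisted recipes then go to engine trials, which is where the cost and time savings described above come from.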
As these two examples illustrate, chemical companies can apply AI throughout the organization and across every function to significantly improve the quality of critical decisions that are made every day. And when the use of AI to enhance decision-making becomes pervasive across a company, a chemical business becomes a digital business!
There is another way to think about this. The chemical industry has long thought in terms of value chains, and chemical companies have mastered the art of creating value by managing the physical flows in a value chain.
As they go digital and apply AI, however, chemical companies must master the art of creating value by tapping into the vast data flow that accompanies the physical flow. This data flow has always existed; what has changed, as highlighted above, is the cost of gathering and processing the data. Deploying AI across the business presents numerous opportunities to monetize this data flow every bit as well as, if not better than, mastering the physical flows. And when a chemical business attains mastery over the data flow, it becomes a digital business!
Circling back to the title of this blog, why do I propose that “data is the new Permian”? As Figure 1 illustrates, the chemical industry is no stranger to disruption. From Ziegler-Natta catalysts5 ushering in the age of plastics in the 1950s to the oil shocks of the 1970s and 1980s, the industry has been disrupted roughly every 20 years. The transformation story of the past decade has been shale in the Permian Basin, coupled with new horizontal drilling and hydraulic fracturing technologies. Over the next twenty years, data (like shale) and digital technologies (like fracking and horizontal drilling) look set to be the key forces shaping the industry. So, data is indeed the new Permian Basin!