
Finger on the AI Pulse: Revionics Scientists Explore Google AutoML Tables

In the tech and business news coming from the world’s major AI players and influencers in 2019, a theme has emerged: democratization of machine learning.

New tools are becoming available, designed for non-statisticians to process and predict based on their own datasets. With these tools, providers claim, those formerly essential prerequisites of mathematical and coding knowledge have been rendered optional.

As an organization centered on accurate and actionable retail forecasting, Revionics has a science team that takes a keen interest in what these new tools can provide, and in how their offerings compare to our Price, Markdown, and Promotion Optimization products.

One of the tools that promises an intuitive interface while requiring minimal depth of statistical knowledge is AutoML Tables, a Google Cloud offering currently in beta. Users can import tabular data, select a target column for predicting future values, set some objectives and limitations, and then allow the software some time to determine a model based on data in the other columns.

We decided to try out AutoML Tables to see how its results compared to our internal modeling strategies and algorithms. We knew it wouldn’t be an apples-to-apples comparison: AutoML Tables is a general-purpose tool, still in its beta testing phase, with some known limitations.

“AutoML was meant for problems with very large quantities of training data,” said Alex Braylan, Senior Optimization Scientist at Revionics. The generic nature of AutoML Tables can be an asset for organizations looking to identify which variables have the most impact on a target variable, whereas Revionics’ modeling is specifically designed for analyzing retailers’ time-series data.

To compare the results from AutoML Tables with our existing modeling strategies, we compiled an identical list of ten products from a sample retailer dataset. AutoML Tables requires that a dataset contain at least 1,000 rows; our sample had 1,511. We set a budget of one hour of training time, which, if fully used, would be billed to us at $19.32 in computing costs.

In both cases, the objective was to predict future sales from historical time-series data. While the Revionics model produced forecasts that closely approximated actual sales over the testing period (the last 20% of the two-year dataset), the AutoML Tables results showed evidence of overfitting: excess model flexibility yields low training error but poor generalization for out-of-sample prediction.

“In this case AutoML overfits the data; it finds patterns in the noise which extrapolate very poorly into the future,” explained Braylan. “On the other hand, we could choose not to give it dates as an input, in which case it cannot detect any time-dependent effects and underfits.”
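The failure mode Braylan describes can be seen in a minimal sketch: fit a toy weekly sales series (synthetic data, not the retailer sample or either company’s actual models) with both a modest model and an overly flexible one, holding out the final 20% chronologically. The flexible fit hugs the training noise and then extrapolates badly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weekly sales series: linear trend plus noise (synthetic, illustration only)
t = np.arange(104, dtype=float)              # two years of weeks
sales = 100.0 + 0.5 * t + rng.normal(0.0, 5.0, size=t.size)

# Chronological holdout: the last 20% of the series is the test period
split = int(0.8 * t.size)
x = t / t.max()                              # rescale time for numerical stability
x_train, x_test = x[:split], x[split:]
y_train, y_test = sales[:split], sales[split:]

def poly_rmse(degree):
    """Fit a least-squares polynomial; return (train RMSE, test RMSE)."""
    coefs = np.polyfit(x_train, y_train, degree)
    train_err = np.polyval(coefs, x_train) - y_train
    test_err = np.polyval(coefs, x_test) - y_test
    return np.sqrt(np.mean(train_err ** 2)), np.sqrt(np.mean(test_err ** 2))

simple_train, simple_test = poly_rmse(1)     # modest model: straight trend line
flex_train, flex_test = poly_rmse(12)        # overly flexible model

# The flexible fit achieves lower training error but extrapolates poorly
print(f"degree 1:  train RMSE {simple_train:.1f}, test RMSE {simple_test:.1f}")
print(f"degree 12: train RMSE {flex_train:.1f}, test RMSE {flex_test:.1f}")
```

The higher-degree fit reports a smaller training error yet a much larger test error, which is exactly the pattern of a model that has learned the noise rather than the trend.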

Visibility into the model was another concern. When forecasting produces unexpected results, Revionics has teams of data scientists ready to investigate how and why those results occurred, allowing for tuning if needed to increase accuracy. “With AutoML being a black box, we cannot control how much freedom we give the model to fit time-dependent patterns the way we do with our own models,” Braylan continued. “We also cannot impose common-sense constraints around sensitivity to inputs such as price and promotional exposure.”

Despite the limits on AutoML Tables’ configurability and its applicability to specific retail forecasting scenarios, Google’s tool was undeniably easy to use and detailed in its final results. Integration with Google Cloud Platform storage made data import trivially simple, and the “Analyze” phase of the process provided clearly visible metrics for data integrity and distribution.

Ultimately, the Revionics data science team’s impression was that while AutoML Tables could serve many general applications well, the processes Revionics has developed specifically to analyze retailers’ sales history and produce actionable forecasts remain the better option. “These kinds of tools are very powerful and are useful for many kinds of problems. But this kind of data-fitting power comes at the cost of being overly sensitive to noise, opaque to users, and difficult to integrate domain knowledge,” Braylan concluded. “For forecasting economic time series, even simple methods that have existed for a long time can be stronger baselines.”
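Braylan’s point about long-standing simple methods can be illustrated with a seasonal-naive forecast, a classic baseline that simply repeats last year’s value for each week. The series below is synthetic and hypothetical, not the retailer sample discussed above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy weekly series with yearly seasonality (synthetic, illustration only)
weeks = np.arange(156)                       # three years of weeks
sales = (200.0 + 10.0 * np.sin(2 * np.pi * weeks / 52)
         + rng.normal(0.0, 3.0, size=weeks.size))

train, test = sales[:104], sales[104:]       # hold out the final year

def rmse(pred, actual):
    return float(np.sqrt(np.mean((pred - actual) ** 2)))

# Seasonal-naive: forecast each week with the value observed 52 weeks earlier
seasonal_naive = train[-52:]

# Last-value naive: repeat the most recent observation for the whole year
last_value = np.full(52, train[-1])

print(f"seasonal-naive RMSE: {rmse(seasonal_naive, test):.2f}")
print(f"last-value RMSE:     {rmse(last_value, test):.2f}")
```

On seasonal data, the decades-old seasonal-naive rule handily beats the flat last-value forecast, which is why such methods remain the benchmark that any sophisticated model must clear.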

Our science team continues to work hard every day to drive innovation and enhancements in our suite of AI and ML capabilities, and we also continue to watch developments in the data science arena to remain up to the minute on emerging tools and capabilities.

About the Author

Maisie is a content marketer and copywriter specializing in B2B SaaS, ecommerce and retail. She's constantly in pursuit of the perfect combination of words, and a good donut.