
Harnessing the Power of Random Convolutional Kernels for Accurate Time Series Classification
Existing methods for attaining state-of-the-art accuracy in time series classification are computationally expensive and often focus on a single type of feature. In this blog post, we discuss the recent success of random convolutional kernels in making time series classification exceptionally fast and accurate. We explain how simple classifiers can be used alongside random convolutional kernels to capture discriminative patterns with low computational requirements that scale to millions of time series, achieving high accuracy at a fraction of the cost of existing methods. This research is invaluable for CTOs and AI engineers looking to improve their accuracy in time series classification tasks – read on to find out more!
Introducing convolutional kernels for time series classification
Why convolutional neural networks (CNNs), you might ask? Well, we all know about the success of CNNs in image classification. Time series data have essentially the same topology as images, just with one fewer dimension.
Time series classification is a fundamental task in many areas of machine learning, and convolutional kernels have been used to attain state-of-the-art accuracy. But existing methods that use convolutional kernels require significant training time even for smaller datasets and are intractable for larger datasets.
Challenges of existing methods for time series classification
Existing classification methods usually focus on a single representation such as frequency, shape or variance. Convolutional kernels can capture many of these features, each of which has previously required its own specialized technique. However, some of the existing convolutional kernel-based methods for time series classification require a large number of kernels to attain state-of-the-art accuracy. This results in significant training time and computational expense. Additionally, these kernels are typically large and can be difficult to interpret.
How random convolutional kernels can provide state-of-the-art accuracy with a fraction of the computational expense
The recent success of random convolutional kernels for time series classification is a major breakthrough in the field of machine learning. This novel approach to classifying time series data enables models to quickly and accurately capture subtle patterns with far less complexity than previous convolutional kernel-based methods. By randomly sampling convolutional kernels and measuring the proportion of positive values (or ppv) of each kernel's output, this technique summarizes each feature map as a small set of numbers that a classifier can then use to assess a given time series.
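To make the ppv idea concrete, here is a minimal sketch in NumPy. The series, kernel and bias below are made-up illustrative values, not ROCKET's actual sampling scheme:

```python
import numpy as np

# A toy univariate time series and a single convolutional kernel
# (hypothetical values, purely for illustration).
x = np.array([0.2, 1.3, -0.4, 0.8, 2.1, -1.0, 0.5, 1.7])
kernel = np.array([-1.0, 2.0, -1.0])
bias = -0.3

# Slide the kernel over the series (cross-correlation, as in CNNs;
# np.convolve flips the kernel, so we pre-reverse it) to get a feature map.
feature_map = np.convolve(x, kernel[::-1], mode="valid") + bias

# ppv: the proportion of positive values in the feature map, i.e. how
# often the input "matches" the pattern encoded by the kernel.
ppv = np.mean(feature_map > 0)
print(ppv)
```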
What makes this research especially noteworthy is that it could be applied not only to time series data, but potentially also to other data types such as images. Additionally, ppv has been found to be substantially more effective than the traditional max pooling operation, providing state-of-the-art accuracy with dramatically reduced computational requirements. The ability to scale up to larger time series datasets further cements the value of random convolutional kernels in achieving fast and accurate classification results.
Outlining ROCKET, a proposed method for increased efficiency
ROCKET is a method that uses random convolutional kernels to transform time series: each kernel is applied to each input time series, producing a feature map.
ROCKET uses a massive variety of kernels, where each kernel has a random length, dilation, padding, weights and bias. By contrast, in typical convolutional networks it is common for a group of kernels to share the same size, dilation and padding. From each feature map, ROCKET computes two summary values: the maximum (a global max pooling) and the proportion of positive values (ppv). The ppv in essence directly captures the proportion of the input which matches a given pattern. Therefore, for K kernels ROCKET produces 2K features per time series.
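As an illustration, here is a minimal NumPy sketch of this 2K-feature transform. The sampling ranges below are simplified assumptions (and padding is omitted), so this shows the idea rather than the exact distributions used in the ROCKET paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_kernel(rng):
    # Each kernel gets a random length, weights, bias and dilation
    # (ranges here are illustrative simplifications).
    length = rng.choice([7, 9, 11])
    weights = rng.normal(0.0, 1.0, length)
    weights -= weights.mean()              # centre the weights
    bias = rng.uniform(-1.0, 1.0)
    dilation = int(2 ** rng.uniform(0, 4))
    return weights, bias, dilation

def apply_kernel(x, weights, bias, dilation):
    # Dilated sliding dot product over the series, then the two
    # pooled features: global max and ppv.
    span = (len(weights) - 1) * dilation
    out = np.array([
        np.dot(weights, x[i : i + span + 1 : dilation]) + bias
        for i in range(len(x) - span)
    ])
    return out.max(), np.mean(out > 0)

def rocket_features(x, num_kernels=100):
    # K kernels -> 2K features per time series (max + ppv for each).
    feats = []
    for _ in range(num_kernels):
        w, b, d = random_kernel(rng)
        feats.extend(apply_kernel(x, w, b, d))
    return np.array(feats)

x = np.sin(np.linspace(0, 10, 200))   # toy input series
print(rocket_features(x).shape)       # (200,) = 2 * 100 features
```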
By running multiple trials of this process with varying window sizes and pooling strategies, users can further refine and optimize their results. Additionally, ROCKET has proven to be substantially faster than existing methods thanks to its lower computational requirements and simpler implementation. For example, while traditional convolutional kernel methods may take hours or even days to train on large datasets, ROCKET can produce accurate results in just minutes!
MiniROCKET, a nearly deterministic reformulation of ROCKET
MiniROCKET is a version of ROCKET that uses a small, fixed set of convolutional kernels with far fewer random hyperparameters. Essentially, the aim is to reduce the number of random options for each kernel; the key differences lie in how kernel length, weights, bias, dilation and padding are chosen. As mentioned before, ROCKET pools each feature map with both a global max and ppv. MiniROCKET, on the other hand, drops the global max pooling, since it was found not to improve performance, and keeps only ppv. As a result, MiniROCKET generates half as many features as ROCKET with comparable accuracy. Finally, thanks to these adjustments, MiniROCKET can be even faster than ROCKET.
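A rough sketch of the MiniROCKET idea is shown below, assuming the paper's fixed kernel family: length 9 with weights from {-1, 2}, exactly three of which are 2 (giving 84 kernels), pooled with ppv only. The bias selection here is a deliberate simplification of the quantile-based scheme in the actual method:

```python
import numpy as np
from itertools import combinations

# MiniROCKET-style kernels: length 9, weights from {-1, 2}, with
# exactly three positions set to 2, giving C(9, 3) = 84 fixed kernels.
KERNELS = []
for positions in combinations(range(9), 3):
    w = -np.ones(9)
    w[list(positions)] = 2.0
    KERNELS.append(w)

def minirocket_like_features(x, dilation=1):
    # ppv-only pooling: one feature per kernel here. The real method
    # draws several biases per kernel from quantiles of the output;
    # we use a single median bias for brevity.
    feats = []
    span = 8 * dilation
    for w in KERNELS:
        out = np.array([
            np.dot(w, x[i : i + span + 1 : dilation])
            for i in range(len(x) - span)
        ])
        bias = np.quantile(out, 0.5)      # simplified bias choice
        feats.append(np.mean(out > bias))
    return np.array(feats)

x = np.sin(np.linspace(0, 10, 200))
print(minirocket_like_features(x).shape)  # (84,)
```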
Wrlds is maximizing AI training efficiency using ROCKET
At Wrlds, we use ROCKET as a transformer for time series motion sensor data, followed by a ridge regression classifier, as one of our solutions. We have seen a considerable decrease in training times while maintaining high accuracy. Given the simple implementation and architecture of this solution, it can be a very valuable asset for any company that deals with time series data.
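For illustration, a pipeline in this spirit can be assembled from off-the-shelf libraries. The sketch below assumes the sktime package (whose Rocket transformer and import path may differ across versions) together with scikit-learn's RidgeClassifierCV, and uses placeholder random data:

```python
import numpy as np
from sklearn.linear_model import RidgeClassifierCV
from sktime.transformations.panel.rocket import Rocket

# Placeholder dataset: 100 univariate series of length 150 with binary
# labels (in practice these would be motion sensor recordings).
X = np.random.randn(100, 1, 150)
y = np.random.randint(0, 2, 100)

# Transform each series into ROCKET features...
rocket = Rocket(num_kernels=10_000, random_state=0)
X_features = rocket.fit_transform(X)

# ...then fit a fast linear classifier on top. RidgeClassifierCV tunes
# its regularization strength with efficient built-in cross-validation.
clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10))
clf.fit(X_features, y)
print(clf.score(X_features, y))
```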
Conclusion
The research into convolutional kernels and their applications to time series classification is a crucial development for CTOs and AI engineers looking to improve accuracy in multisensor motion analytics. By utilizing random convolutional kernels, we can achieve state-of-the-art levels of performance while significantly reducing computational requirements. This breakthrough provides us with an exciting new tool for tackling complex data analysis problems and has the potential to revolutionize how we process sensor data from connected devices at the edge. With convolutional kernel methods such as ROCKET, businesses are now able to quickly assess datasets on demand without having to spend hours training models, which opens up huge opportunities for increased productivity across multiple industries.
And don’t hesitate to get in touch with me or any of my colleagues at Wrlds Technologies if you want to know how we can help you take your data analysis to the next level, using our system for machine learning training and algorithm creation.

Author: Amir Namazi, AI Engineer at Wrlds Technologies