Thursday, September 29, 2022

Machine Learning Algorithms and Training Methods: A Decision-Making Flowchart

Machine learning is set to transform investment management. Yet many investment professionals are still building their understanding of how machine learning works and how to apply it. With that in mind, what follows is a primer on machine learning training methods and a machine learning decision-making flowchart with explanatory footnotes that can help determine what sort of approach to apply based on the end goal.


Machine Learning Training Methods

1. Ensemble Learning

No matter how carefully selected, each machine learning algorithm will have a certain error rate and be prone to noisy predictions. Ensemble learning addresses these flaws by combining predictions from various algorithms and averaging out the results. This reduces the noise and thus produces more accurate and stable predictions than the best single model. Indeed, ensemble learning solutions have won many prestigious machine learning competitions over the years.

Ensemble learning aggregates either heterogeneous or homogeneous learners. Heterogeneous learners are different types of algorithms combined through a voting classifier. Homogeneous learners, by contrast, are combinations of the same algorithm trained on different subsets of the data using the bootstrap aggregating, or bagging, technique.
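Why averaging out predictions helps can be seen with a toy calculation: if ensemble members err independently, a majority vote is wrong far less often than any single member. A minimal sketch (the independence assumption is illustrative; real learners' errors are correlated, which shrinks the gain):

```python
# Error rate of a majority vote over n independent classifiers, each wrong with
# probability p: the vote errs only when more than half the members are wrong.
from math import comb

def majority_vote_error(p, n):
    k0 = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(k0, n + 1))

single = 0.30                                # each individual model is wrong 30% of the time
ensemble = majority_vote_error(single, 11)   # an 11-member majority vote
```

With 11 members, the voted error rate falls below 10%, which is the intuition behind both voting classifiers and bagging.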

2. Reinforcement Learning

As virtual environments come to resemble real-world ones, trial-and-error machine learning approaches may be applied to financial markets. Reinforcement learning algorithms learn both by interacting with their environment and from the data those interactions generate. They may also employ supervised or unsupervised deep neural networks (DNNs), as in deep learning (DL).

Reinforcement learning made headlines when DeepMind’s AlphaGo program beat the reigning world champion at the ancient game of Go in 2017. The AlphaGo algorithm features an agent designed to execute actions that maximize rewards over time while also taking the constraints of its environment into consideration.


Reinforcement learning with unsupervised learning has neither labeled data for each observation nor instantaneous feedback. Rather, the algorithm must observe its environment, learn by testing new actions, some of which may not be immediately optimal, and reapply its previous experiences. Learning occurs through trial and error.

Academics and practitioners are applying reinforcement learning in investment strategies: The agent could be a virtual trader that follows certain trading rules (actions) in a specific market (environment) to maximize its profits (rewards). Nevertheless, whether reinforcement learning can navigate the complexities of financial markets is still an open question.
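The agent-environment-reward loop can be sketched with tabular Q-learning on a toy five-state "market" in which one state pays a reward. The environment and reward structure here are invented stand-ins for illustration, not a trading model:

```python
import random

random.seed(0)
N_STATES = 5                        # toy "market": state 4 pays the only reward
ACTIONS = (0, 1)                    # 0 = step down, 1 = step up
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount factor, exploration rate

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for _ in range(500):                        # episodes of trial and error
    s = random.randrange(N_STATES - 1)      # exploring starts: any non-terminal state
    for _ in range(20):
        if random.random() < eps:
            a = random.choice(ACTIONS)                    # explore a new action
        else:
            a = max(ACTIONS, key=lambda act: Q[s][act])   # exploit past experience
        s2, r = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        if r == 1.0:
            break

policy = [max(ACTIONS, key=lambda act: Q[s][act]) for s in range(N_STATES)]
```

After training, the greedy policy steps toward the rewarding state from everywhere, learned purely from interaction rather than labeled examples.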


Machine Learning Decision-Making Flowchart

Graphic of Machine Learning Decision-Making Flowchart

Footnotes

1. Principal component analysis (PCA) serves as a proxy for the complexity of the prediction model and helps reduce the number of features, or dimensions. If the data has many highly correlated features, or inputs, Xi, then PCA can perform a change of basis on the data so that only the principal components with the greatest explanatory power with respect to the variance of the features are retained. A set of n linearly independent, mutually orthogonal vectors, where n is a natural number, or non-negative integer, is called a basis. Inputs are called features in machine learning, whereas in linear regression and other traditional statistical methods they are called explanatory or independent variables. Similarly, a target Y (output) in machine learning is an explained, or dependent, variable in statistical methods.
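A minimal PCA sketch on synthetic data, using only NumPy: two highly correlated inputs collapse onto a single principal component that carries most of the explanatory power, while the change of basis keeps only the top components:

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
# Two highly correlated inputs plus one independent noise input.
X = np.column_stack([x1, x1 + 0.05 * rng.normal(size=200), rng.normal(size=200)])

Xc = X - X.mean(axis=0)                   # center the features
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)    # eigh returns eigenvalues in ascending order
order = eigvals.argsort()[::-1]           # sort descending by explained variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()       # fraction of variance per component
Z = Xc @ eigvecs[:, :2]                   # change of basis: keep the top two components
```

Here the first component alone explains well over half the variance because two of the three inputs are nearly duplicates.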

2. Natural language processing (NLP) includes but is not limited to sentiment analysis of textual data. It usually has several supervised and unsupervised learning steps and is often considered self-supervised since it has both supervised and unsupervised properties.
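As one small slice of NLP, sentiment can be sketched with a toy lexicon-based scorer. The word lists are hypothetical, and real sentiment models are learned from data rather than hand-coded:

```python
# Toy lexicon-based sentiment scorer; the word lists are invented for illustration.
POSITIVE = {"gain", "growth", "beat", "strong"}
NEGATIVE = {"loss", "decline", "miss", "weak"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

label = sentiment("Strong revenue growth beat expectations")
```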

3. Simple or multiple linear regression without regularization (penalization) is usually categorized as a traditional statistical technique but not a machine learning method.

4. Lasso regression, or L1 regularization, and ridge regression, or L2 regularization, are regularization techniques that prevent overfitting through penalization. Simply put, lasso reduces the number of features, performing feature selection, while ridge maintains the number of features. Lasso tends to simplify the target prediction model, while ridge can be more complex and handle multicollinearity in features. Both regularization techniques can be applied not only with statistical methods, including linear regression, but also in machine learning, such as deep learning, to deal with non-linear relationships between targets and features.
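The selection-versus-shrinkage contrast has a clean closed form in the special case of an orthonormal design matrix: ridge scales every OLS coefficient down, while lasso soft-thresholds small coefficients to exactly zero. A sketch under that assumption, with hypothetical coefficients:

```python
import numpy as np

# Hypothetical OLS coefficients under an orthonormal design matrix.
beta_ols = np.array([3.0, 0.4, -0.1])
lam = 0.5                                  # penalty strength

ridge = beta_ols / (1 + lam)               # L2: shrinks every coefficient toward zero
lasso = np.sign(beta_ols) * np.maximum(np.abs(beta_ols) - lam, 0.0)   # L1: soft-threshold
```

Ridge keeps all three features but shrinks them; lasso zeroes out the two small ones, which is exactly the feature-selection behavior described above.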

5. Machine learning applications that employ a deep neural network (DNN) are often called deep learning. Here, the target values are continuous numerical data. Deep learning has hyperparameters (e.g., the number of epochs, the learning rate, and the regularization strength) that are set and tuned by humans, not learned by the deep learning algorithm itself.
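The division of labor between human-set hyperparameters and learned parameters can be sketched with a minimal gradient-descent loop, using a single linear unit rather than a full DNN for brevity:

```python
# A single linear unit trained by stochastic gradient descent on y = 2x + 1.
# The hyperparameters (epochs, learning rate) are chosen by a human before
# training; only the parameters w and b are learned from the data.
epochs, lr = 200, 0.1
w, b = 0.0, 0.0

data = [(x / 10, 2 * (x / 10) + 1) for x in range(11)]

for _ in range(epochs):
    for x, y in data:
        err = (w * x + b) - y
        w -= lr * err * x    # gradient of 0.5 * err**2 with respect to w
        b -= lr * err        # gradient of 0.5 * err**2 with respect to b
```

Changing `epochs` or `lr` changes how well training converges, but those knobs are inputs to the algorithm, not outputs of it.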

6. In this branch, classification and regression trees (CARTs) and random forests predict target values that are discrete, or categorical, data; regression trees can also predict continuous targets.
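The core of a classification tree is choosing the split that most reduces impurity. A toy one-split sketch using Gini impurity on invented data:

```python
# Toy decision stump: pick the threshold that best separates two classes,
# scored by weighted Gini impurity of the resulting left/right groups.
def gini(labels):
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)      # fraction of class 1
    return 2 * p * (1 - p)

def best_split(xs, ys):
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

xs = [1, 2, 3, 10, 11, 12]
ys = [0, 0, 0, 1, 1, 1]
threshold, impurity = best_split(xs, ys)
```

A full CART applies this greedily and recursively; a random forest bags many such trees over resampled data and random feature subsets.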

7. The number of clusters, k, is a hyperparameter provided as an input by a human.
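A minimal k-means sketch (Lloyd's algorithm) on invented two-dimensional points, where k is supplied by the human rather than learned:

```python
import random

random.seed(1)
points = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.3), (5.0, 5.0), (5.2, 4.9), (4.9, 5.1)]
k = 2                                  # number of clusters: chosen by a human

def dist2(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

centers = random.sample(points, k)
for _ in range(10):                    # Lloyd's algorithm: assign, then update
    groups = [[] for _ in range(k)]
    for p in points:
        nearest = min(range(k), key=lambda j: dist2(p, centers[j]))
        groups[nearest].append(p)
    centers = [
        (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g)) if g else centers[j]
        for j, g in enumerate(groups)
    ]
```

The two centers settle onto the two obvious point groups; a different k would force a different, possibly unnatural, partition.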

8. Hierarchical clustering is an algorithm that groups similar input data into clusters. The number of clusters is determined by the algorithm, not by direct human input.
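A toy single-linkage sketch on one-dimensional data: with a distance threshold in place of a preset k, the number of clusters emerges from the data itself:

```python
# Single-linkage agglomerative clustering: repeatedly merge the two closest
# clusters while their gap is within the threshold; the final cluster count
# is determined by the data, not fixed in advance.
def single_linkage(points, threshold):
    clusters = [[p] for p in points]
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        if d > threshold:
            break
        clusters[i] += clusters.pop(j)
    return clusters

clusters = single_linkage([1.0, 1.2, 1.1, 8.0, 8.3, 15.0], threshold=1.0)
```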

9. The K-nearest neighbors (KNN) algorithm requires the number of neighbors, k, as a hyperparameter provided by a human. KNN can also be used for regression, but that application is omitted here for simplicity.
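A minimal KNN classification sketch on invented points, with k supplied as a hyperparameter:

```python
from collections import Counter

def knn_predict(train, query, k):
    # Majority vote among the k training points closest to the query
    # (squared Euclidean distance; square root is not needed for ranking).
    nearest = sorted(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], query)))[:k]
    return Counter(lbl for _, lbl in nearest).most_common(1)[0][0]

train = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"),
         ((5, 5), "B"), ((5, 6), "B"), ((6, 5), "B")]
predicted = knn_predict(train, (0.5, 0.5), k=3)
```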

10. Support vector machines (SVMs) are sets of supervised learning methods used primarily for linear classification, but they can also perform non-linear classification and regression.
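The route from linear to non-linear classification in SVMs is the kernel trick: map the data into a space where a linear separator exists. A sketch of the underlying feature-map idea on invented data (this illustrates the mapping only, not an SVM solver):

```python
# One-dimensional points where class "A" sits between two "B" regions, so no
# single threshold on x separates them in either direction...
xs = [-2.0, -1.5, -0.2, 0.0, 0.3, 1.8, 2.1]
ys = ["B", "B", "A", "A", "A", "B", "B"]

sep_x = any(
    all((x <= t) == (y == "A") for x, y in zip(xs, ys))
    or all((x > t) == (y == "A") for x, y in zip(xs, ys))
    for t in xs
)

# ...but after the feature map phi(x) = (x, x**2), a linear rule on the new
# coordinate works: classify "A" exactly when x**2 < 1.
sep_x2 = all((x * x < 1.0) == (y == "A") for x, y in zip(xs, ys))
```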

11. Naïve Bayes classifiers are probabilistic and apply Bayes’s theorem with strong (naïve) independence assumptions between the features.
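A toy Naïve Bayes text classifier with add-one smoothing on invented documents, showing the conditional-independence assumption at work:

```python
from math import log
from collections import Counter, defaultdict

# Invented training documents (hypothetical toy corpus).
docs = [("buy now cheap gain", "spam"), ("limited offer buy", "spam"),
        ("meeting agenda attached", "ham"), ("quarterly report attached", "ham")]

word_counts = defaultdict(Counter)   # per-class word frequencies
class_counts = Counter()             # class frequencies (priors)
for text, label in docs:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def predict(text):
    scores = {}
    for label in class_counts:
        total = sum(word_counts[label].values())
        # log P(class) + sum of log P(word | class) with add-one (Laplace)
        # smoothing; the sum treats words as conditionally independent
        # given the class -- the "naive" assumption.
        score = log(class_counts[label] / sum(class_counts.values()))
        for w in text.split():
            score += log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

pred = predict("cheap offer buy now")
```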





All posts are the opinion of the author. As such, they should not be construed as investment advice, nor do the opinions expressed necessarily reflect the views of CFA Institute or the author’s employer.

Image credit: ©Getty Images/Jorg Greuel


Professional Learning for CFA Institute Members

CFA Institute members are empowered to self-determine and self-report professional learning (PL) credits earned, including content on Enterprising Investor. Members can record credits easily using their online PL tracker.

Yoshimasa Satoh, CFA

Yoshimasa Satoh, CFA, is a director at Nasdaq. He also sits on the board of CFA Society Japan and is a regular member of CFA Society Sydney. He has been in charge of multi-asset portfolio management, trading, technology, and data science research and development throughout his career. Previously, he served as a portfolio manager of quantitative investment strategies at Goldman Sachs Asset Management and other companies. He started his career at Nomura Research Institute, where he led Nomura Securities’ equity trading technology team. He earned the CFA Institute Certificate in ESG Investing and holds bachelor’s and master’s degrees in engineering from the University of Tsukuba.

