Machine learning explained
Arguably one of the most important areas of computer science and technology, machine learning (ML) can be described as “a type of artificial intelligence (AI) that allows software applications to become more accurate at predicting outcomes without being explicitly programmed to do so”.1 Machine learning algorithms take historical data as input and produce new values or predictions as output. A machine learning model is generally defined as the output of an ML algorithm that has been run on data.
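As a minimal sketch of that relationship between algorithm, data, and model, the following Python snippet uses scikit-learn’s LinearRegression (an illustrative choice of algorithm, not one named above) on hypothetical historical data; the fitted object it returns plays the role of the “model”, which can then output predictions for values it has never seen.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical historical data: monthly ad spend (input feature) and
# the sales observed at that spend level (outcome to predict).
ad_spend = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])  # feature matrix
sales = np.array([12.0, 19.5, 31.0, 41.2, 50.8])          # observed outcomes

algorithm = LinearRegression()           # the ML algorithm
model = algorithm.fit(ad_spend, sales)   # running it on data yields the model

# The model can now output a prediction for a new, unseen input value.
print(model.predict(np.array([[6.0]])))  # forecast sales at a new spend level
```

The example is deliberately simple: the “learning” here is just fitting a line to past observations, but the pattern of feeding historical data to an algorithm and using the resulting model to predict new outcomes is the same one the definitions above describe.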
ML can also be described as “one way to use AI. It was defined in the 1950s by AI pioneer Arthur Samuel as ‘the field of study that gives computers the ability to learn without explicitly being programmed’”.2 IBM adds that ML “focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving its accuracy”.3
Margaret Rouse expands on the definition of ML, noting that “in this context, the word machine is a synonym for computer program and the word learning describes how ML algorithms become more accurate as they receive additional data”.4 Rouse continues, stating that while machine learning is not a new concept, its widespread practical application is recent. The practical application of ML “in business was not financially feasible until the advent of the internet and recent advances in big data analytics and cloud computing [because] training an ML algorithm to find patterns in data requires a lot of compute resources and access to big data”.4