The History of AI

The term Artificial Intelligence (AI) was first coined in the proposal for the Dartmouth Summer Research Project on Artificial Intelligence, a 1956 workshop held at Dartmouth College in the United States.

Many of the first generation of AI researchers predicted that a machine as intelligent as, if not more intelligent than, human beings would soon become a reality, and many algorithms were developed. This period is known as the 1st AI boom. The core techniques of AI in this era were logical reasoning and search, the technology used, for example, to solve a puzzle or to find a way through a maze. Because of the limited capacity of computers, however, the algorithms developed in this period worked only when the rules and goals were clearly defined. In other words, AI was not practical in real-life situations where no such clarity exists, and so the 1st AI boom came to an end.
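The kind of state-space search that drove this boom can be illustrated with a minimal sketch (not part of the original text): a breadth-first search over a toy maze. It also shows why such methods depend on clearly defined rules (the legal moves) and goals (the exit cell).

```python
from collections import deque

def solve_maze(grid, start, goal):
    """Breadth-first search over a grid maze.

    grid: list of strings, '#' marks a wall, anything else is open.
    start, goal: (row, col) tuples.
    Returns a shortest path as a list of cells, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}          # also serves as the visited set

    while queue:
        cell = queue.popleft()
        if cell == goal:               # goal reached: walk back along parents
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            neighbour = (nr, nc)
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#'
                    and neighbour not in came_from):
                came_from[neighbour] = cell
                queue.append(neighbour)
    return None                        # the goal cannot be reached

maze = ["S..#",
        ".#.#",
        "...G"]
print(solve_maze(maze, (0, 0), (2, 3)))
```

With a well-defined grid the search is trivial; remove that clarity (ambiguous states, no explicit goal test) and the approach breaks down, which is exactly the limitation that ended the 1st boom.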

The 2nd AI boom began in the 1980s, when computers became available for household use. The key characteristic of AI at that time was the “expert system”, which emulates the decision-making ability of a human expert. Expert systems were designed to solve complex problems and were successful to some extent, thanks to improvements in computer capacity. However, designing the knowledge base and maintaining the rules defined by experts proved extremely laborious, and the systems could not handle exceptional cases or contradictory rules. Research in machine learning and deep learning was already underway to address these challenges, but computers were not yet powerful enough to implement them. This was the end of the 2nd AI boom.
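As a small illustration of the expert-system idea (the rules below are hypothetical, not from the article), the sketch encodes a few if-then rules and applies simple forward chaining: known facts are matched against expert-defined conditions until no new conclusions can be drawn. Even at this toy scale, the maintenance burden is visible, since every exception or conflicting rule has to be added and reconciled by hand.

```python
# Minimal forward-chaining rule engine (illustrative only; rules are hypothetical).
RULES = [
    # (conditions that must all hold, conclusion to add)
    ({"engine_cranks", "no_spark"}, "check_ignition_coil"),
    ({"engine_cranks", "no_fuel_smell"}, "check_fuel_pump"),
    ({"check_ignition_coil", "coil_ok"}, "check_spark_plugs"),
]

def forward_chain(facts):
    """Repeatedly fire any rule whose conditions are satisfied by the known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"engine_cranks", "no_spark", "coil_ok"}))
# The result includes 'check_ignition_coil' and, in turn, 'check_spark_plugs'.
```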

The further miniaturization and improvement of computers, together with the Internet and cloud-based management of huge amounts of data, brought about the 3rd AI boom in the 2000s. Deep learning was the key driver of the boom. A breakthrough in implementing deep learning was published in 2006, the technique was then applied to big data, and progress in machine learning accelerated from around 2010. In 2016, AI made a leap and shocked the world when AlphaGo, an AI-based computer program, defeated a top human professional at Go. Google Home, Amazon Echo and other practical products and systems became increasingly popular in 2017, a year some called the first year of the AI era. The 3rd AI boom has yet to show any sign of waning as the market for AI continues its rapid expansion.

Challenges of AI Applications in Manufacturing

The use of AI in industrial robots can significantly reduce learning time and help improve efficiency in manufacturing. The range of AI applications is expanding: they are now used for the predictive maintenance of factory machinery, the visual inspection of products, and productivity improvement through the analysis of workers’ movements, among other areas.

However, AI does not by itself achieve full automation of production, nor does it automatically result in efficiency improvements, and some manufacturers may find it difficult to introduce. To use AI in manufacturing, a large amount of precise data is needed to serve as the basis for judgment criteria. An environment for collecting that data must therefore be built before AI is applied, and the accuracy of the data is just as important as its volume.

In visual inspection in particular, a very small difference can change the result. At present, most such inspections are done by experienced inspectors, and AI-based automation of the process is still in its infancy. A large amount of data that supports decision making is needed to start the learning process and eventually automate it. If the quality of the resulting judgments is poor, automation will be counterproductive.
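As a hedged sketch of what starting that learning process might look like (the data here is synthetic; a real system would use a large set of inspector-labelled product images), a simple classifier can be trained and then measured on held-out examples before any automation decision is made:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in data: in practice these would be inspector-labelled product images;
# here random arrays with a small planted brightness defect are used instead.
rng = np.random.default_rng(0)
n_samples, img_pixels = 2000, 32 * 32
images = rng.normal(0.5, 0.1, size=(n_samples, img_pixels))
labels = rng.integers(0, 2, size=n_samples)     # 1 = defective, 0 = good
images[labels == 1, :50] += 0.3                 # a subtle, localised defect

# Hold out a test set so the quality of the learned judgment can be checked.
X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

If the held-out accuracy is poor, automating inspection on that basis would be counterproductive, which is precisely the point made above: the volume and accuracy of the labelled data determine whether the learned judgment is good enough to trust.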