Researchers and programmers have been studying AI for decades, yet it remains one of the most elusive areas of computer science, mainly because the field is nebulous, very large, and connected to many other disciplines. The concept of intelligent artificial beings and robots dates back to the myths of ancient Greece[i], and philosophers have mulled over the idea of artificial beings and mechanical men for centuries.
Although the roots of AI are long and deep, the modern history of AI as we know it today spans less than 100 years[ii]. The idea really took off after the invention of the Atanasoff-Berry Computer (ABC) in the 1940s, which inspired scientists and researchers to move forward with creating intelligent machines. The ABC was the first digital computer; it could solve up to 29 linear equations at once and provided researchers with the processing power they needed to really get started.
Thinkers, philosophers, theologians, and mathematicians had pondered intelligent beings for a long time, but the topic entered popular literature in the early 1700s. Jonathan Swift's novel "Gulliver's Travels" contains one of the earliest references to a machine able to improve knowledge, and later Samuel Butler, the author of Erewhon, raised the idea that machines might one day possess consciousness.
Czech playwright Karel Čapek introduced the idea of artificial people, or robots, made in a factory in his sci-fi play R.U.R. (translated into English as "Rossum's Universal Robots"). After this first use of the word "robot," others began adopting it in their art, research, and work.
The sci-fi movie Metropolis featured a robotic girl, the first depiction of a robot on the big screen, which inspired future non-human characters.
The first mechanical robot built in Japan, Gakutensoku, was created by Makoto Nishimura, a biologist and professor. It could only move its hands and head and change its facial expressions.
Walter Pitts and Warren McCulloch published a paper proposing the first mathematical model of a neural network.
In his book The Organization of Behavior, Donald Hebb proposed that experiences create neural pathways and that these connections become stronger with frequent use. His theories are still used as a model in artificial intelligence today.
Alan Turing, mathematician and pioneer of machine learning, proposed the Turing Test in his paper "Computing Machinery and Intelligence," which set a baseline for determining whether a machine is intelligent. Marvin Minsky and Dean Edmonds (then Harvard undergraduates) built the first neural network computer. Around the same time, Isaac Asimov published I, Robot, which collected his Three Laws of Robotics, and Claude Shannon published his paper "Programming a Computer for Playing Chess."
Arthur Samuel developed the first self-learning program, which could play checkers.
The Georgetown-IBM experiment automatically translated 60 Russian sentences into English.
1956 (The Birth of AI)
The term "artificial intelligence" was coined at the Dartmouth Summer Research Project on Artificial Intelligence, where participants set the goals and scope of AI. The Logic Theorist (LT), the first reasoning program, was developed by Herbert Simon and Allen Newell; it proved 38 of 52 mathematical theorems and even found more elegant proofs for some of them.
1958 to Mid-70s (Early Enthusiasm)
The AI programming language LISP was developed by John McCarthy and became the language of choice for AI researchers and developers.
The General Problem Solver (GPS) was developed to imitate how humans solve problems, and the Geometry Theorem Prover was created. Arthur Samuel coined the term "machine learning" at IBM, and John McCarthy founded the MIT AI Project.
Unimate, a robot invented in the 50s by George Devol, became the first to work on GM's assembly line in New Jersey.
John McCarthy founded the AI Lab at Stanford.
Daniel Bobrow created STUDENT, an early AI program written in LISP that could solve algebra word problems. It is considered an early milestone and breakthrough in natural language processing (NLP).
The US government canceled all machine translation projects after the ALPAC report showed a lack of progress in the field. Joseph Weizenbaum created ELIZA, the first chatbot. Charles Rosen, in collaboration with 11 others, developed Shakey, the first general-purpose mobile robot.
MYCIN, one of the first expert systems, was developed at Stanford to diagnose blood infections.
The logic programming language PROLOG was developed, and Japan built WABOT-1, the first "intelligent robot."
The British government released a report (the Lighthill report) expressing deep disappointment in AI research, which led to massive cuts in funding for ongoing and future AI projects.
1974 to 1980 (The First AI Winter)
DARPA (Defense Advanced Research Projects Agency) cut academic grants for AI research after the ALPAC report and the British government's report. Research in the field stalled during a period known as the "First AI Winter." AI projects in the 70s required a lot of computing power and began exceeding the limits of research computers. LISP exacerbated the situation further, as it ran poorly on regular commercial computers, which were optimized for FORTRAN and assembly languages.
1980 to 1987 (AI Booms)
R1, also known as XCON, was the first commercial expert system. Developed by Digital Equipment Corporation to configure orders for new computer systems, it kicked off an investment boom that lasted a decade and ended the "First AI Winter." WABOT-2, another robot built at Waseda University, could play music, read musical scores, and communicate with people.
In 1981 Japan allocated $850mn to its Fifth Generation Computer Systems (FGCS) project, which launched the following year with the aim of building a platform for AI development with supercomputer-like performance: computers able to translate languages, converse, reason, and interpret pictures.
The US launched its own initiative in response to Japan: the Strategic Computing Initiative (SCI), a DARPA-funded research program in AI and advanced computing.
Businesses and organizations began spending over a billion dollars per year on expert systems. A LISP machine market emerged to support them, with companies such as LISP Machines Inc. and Symbolics building specialized hardware to run LISP.
Mercedes introduced a driverless van with built-in sensors and cameras. The van could drive at up to 55 mph on a road without a human driver, provided there were no obstacles.
1987 to 1993 (The Second AI Winter)
The market for LISP machines collapsed in 1987 in the face of cheaper alternatives and advances in general-purpose computing. The "Second AI Winter" began, a period in which expert systems became too expensive to update and maintain. Japan killed its Fifth Generation Computer Systems project in 1992, and DARPA ended SCI in 1993 after spending almost $1bn; both cited failure to meet the ambitious goals that had been set.
1991 to 2011 (Intelligent Agents Emerge)
DART, an automated tool for logistics planning and scheduling, was developed by the US during the Gulf War.
Inspired by ELIZA, the first chatbot created in 1966, ALICE was developed with the added ability to collect natural-language samples.
IBM developed Deep Blue, which beat the world chess champion Garry Kasparov. LSTM (Long Short-Term Memory), a recurrent neural network (RNN) architecture used for speech and handwriting recognition, was also developed.
Caleb Chung and Dave Hampton developed Furby, the first pet robot for children.
SONY introduced its own robotic pet dog, AIBO (Artificial Intelligence RoBOt), which learned by interacting with its owners, its environment, and other AIBOs. In addition to communicating with its owner, it could respond to over a hundred voice commands.
AI saw an upward trend once fears of the Y2K bug subsided. Professor Cynthia Breazeal developed Kismet, a robot able to recognize emotions and simulate them with its face, which was structured like a human's with artificial eyes, eyelids, eyebrows, and lips. HONDA launched its humanoid robot ASIMO.
AI entered homes for the first time in the shape of the Roomba vacuum cleaner.
The self-driving car Stanley won DARPA's Grand Challenge, a funded prize competition for the development of autonomous vehicles. The US military began investing in autonomous robots, and tech companies such as Facebook, Netflix, and Twitter started using AI.
Google achieved breakthroughs in speech recognition and introduced the feature in its iPhone app.
Google secretly started developing a driverless car (Project Chauffeur).
AI started becoming part of daily life, including through voice assistants. Microsoft launched Kinect for the Xbox 360, the first gaming device that could track human movement using an infrared sensor and a 3D camera.
2011 Onwards (Deep Learning, AGI and Big Data)
IBM's Watson competed against human champions on Jeopardy! and won. Google launched Google Now, an app that provides predictive information to users, and Apple released Siri, which used a natural-language interface to listen to and answer questions in a human voice.
Google fed 10mn YouTube videos to its Google Brain deep learning project as a training set. Trained on this massive amount of data, its neural network learned to recognize cats without having been explicitly programmed to do so, a breakthrough that ushered in a new era for deep learning and neural networks.
Google's self-driving car passed the state driving test, the first to achieve this milestone. Microsoft released its own virtual assistant, Cortana, while Amazon released its home assistant, Alexa.
Stephen Hawking, Elon Musk, Steve Wozniak, and around 3,000 others signed an open letter calling for a ban on the development of autonomous weapons.
Google DeepMind's AlphaGo beat Lee Sedol, the world champion Go player. This is considered a major breakthrough because AI researchers had seen the ancient game as a big hurdle to clear due to its complexity. In the same year, Google launched its smart speaker Google Home, which serves as a personal assistant.
IBM's Project Debater debated complex subjects with two expert debaters and performed admirably. Hanson Robotics unveiled Sophia, a humanoid robot that looked more human than previous humanoids and could recognize images, communicate, and make facial expressions.
2017 till Present
Samsung introduced Bixby, its own virtual assistant, and Google developed Duplex, while Facebook, Amazon, IBM, Microsoft, and other tech giants developed their own programs and smart devices.
Advancements in AI are happening faster than ever before, and we can expect this trend to continue for the foreseeable future. Some of the most important areas of current AI research and development include virtual assistants and chatbots, natural language processing, machine learning, deep learning, and autonomous vehicles.
Machine learning has become a key area of AI research and one of its most powerful technologies. Many recent breakthroughs in this subfield have enabled faster and more efficient business intelligence (BI), with abilities ranging from NLP to facial recognition. Machine learning programs act like individual modules of AI that can operate on their own.
Although the ultimate goal of creating intelligent machines that can think on their own has not yet materialized, machine learning programs can now perform specific tasks without human intervention. That is one reason terms like machine learning and AI are often used interchangeably. The key areas of significant advancement in machine learning include[iii]:
Virtual Assistants

Many tech giants have come up with their own personal or home assistant, including Google, Apple, Microsoft, and Amazon. These virtual assistants (VAs) are among the most advanced types of AI that exist today, but they are still limited in functionality: they have a large vocabulary yet can only understand and respond to basic commands.
VAs are designed to help users with basic tasks such as managing to-do lists, setting reminders for important tasks, and taking notes. Progress is being made in this area, and advanced VAs such as Flamingo AI and Interactions can augment human research and carry out transactions without human intervention.
Natural Language Processing
Recent advances in NLP have allowed us to trade jokes with virtual assistants like Siri and Alexa, but that’s not the only area where progress is being made. It can enable people with disabilities to interact with computers or allow people to communicate in different languages via real-time translation. Another area of research in NLP is reading words and sifting through large amounts of unstructured data in order to extract useful business information.
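As a minimal sketch of that last idea, the snippet below mines a bit of unstructured text by ranking word frequencies after dropping common stopwords; production NLP systems use far richer models, and the stopword list and sample text here are invented for illustration.

```python
from collections import Counter
import re

# A tiny, hand-picked stopword list (real systems use much larger ones).
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in",
             "is", "are", "for", "on", "with", "that", "this", "show"}

def extract_keywords(text, top_n=3):
    """Rank words by frequency after dropping stopwords: the simplest
    form of keyword extraction from unstructured text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

# A made-up snippet of 'business' text to mine.
report = ("The new engine improves fuel efficiency. Engine tests show "
          "efficiency gains of ten percent, and the engine runs cooler.")
print(extract_keywords(report))  # 'engine' ranks first
```

Even this crude frequency count surfaces the dominant topic of the passage, which is the basic intuition behind mining unstructured data for business information.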
Chatbots

Chatbots are like a simpler version of VAs. They have comparable language-understanding abilities and are mostly used as informational kiosks. Currently, chatbots can respond only within a limited scope, drawing on a small selection of pre-programmed replies. They are usually the first AI and machine learning technology that small and medium businesses come across and use.
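The pre-programmed-reply behaviour described above can be sketched in a few lines; the rules and answers below are hypothetical and stand in for the kind of keyword table a simple informational chatbot might use, not any specific product.

```python
# Keyword sets mapped to canned replies; anything outside this
# scope falls through to a default answer.
RULES = [
    ({"hours", "open"}, "We are open 9am-5pm, Monday to Friday."),
    ({"price", "cost"}, "Our basic plan starts at $10 per month."),
    ({"refund", "return"}, "Refunds are processed within 5 business days."),
]
FALLBACK = "Sorry, I can only answer questions about hours, prices, and refunds."

def reply(message):
    words = set(message.lower().replace("?", "").split())
    for keywords, answer in RULES:
        if words & keywords:  # any keyword in the message triggers the rule
            return answer
    return FALLBACK

print(reply("What are your hours?"))   # matches the 'hours' rule
print(reply("Tell me a joke"))         # outside scope, gets the fallback
```

The limited scope is visible immediately: anything not covered by a rule gets the fallback, which is exactly the behaviour users encounter with simple kiosk-style bots.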
Using ML to Develop Better ML
The number of machine learning experts and data scientists is limited, so researchers are developing programs that use advanced machine learning algorithms to build better machine learning models themselves. This approach, called automated machine learning (AutoML), is another key area where breakthroughs are expected in the coming years.
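A toy version of that idea, assuming a made-up 1-D dataset and a k-nearest-neighbour model, might look like the loop below: instead of a human hand-tuning the model, the program tries candidate settings and keeps whichever scores best on held-out data. Real AutoML systems search over whole architectures and pipelines, but the select-by-validation-score loop has the same shape.

```python
def knn_predict(train, k, x):
    """1-D k-nearest-neighbour vote: label x by the majority label
    among the k closest training points (labels are 0 or 1)."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes * 2 > k else 0

# Invented (value, label) pairs for training and validation.
train = [(0.1, 0), (0.4, 0), (0.35, 0), (2.0, 1), (2.2, 1), (1.9, 1)]
valid = [(0.2, 0), (0.3, 0), (2.1, 1), (1.8, 1)]

best_k, best_acc = None, -1.0
for k in (1, 3, 5):  # the 'search space' an AutoML loop would explore
    acc = sum(knn_predict(train, k, x) == y for x, y in valid) / len(valid)
    if acc > best_acc:
        best_k, best_acc = k, acc

print(best_k, best_acc)  # → 1 1.0
```

The human only defines the search space and the scoring rule; the program does the tuning, which is the essence of automated machine learning.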
Variational Autoencoders

Variational Autoencoders (VAEs) are neural networks that learn smooth latent representations of data and are widely used as an unsupervised method for modeling complicated distributions. They show great promise in a variety of generative tasks, including producing face models, physical models, and handwritten digits.
Graph Neural Networks
Graph Neural Networks (GNNs) are a class of machine learning models that apply neural networks directly to graph-structured data of the kind stored in graph databases. GNNs emphasize relationships (the edges connecting nodes) and are used for tasks including classification, clustering, and regression. GNNs are not easy to work with, but recent advances in parallel computing, network architectures, and algorithms are changing the landscape. They are mostly used to represent real-life structures such as maps, web graphs, human relationships, chemistry, and knowledge graphs.
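At the core of most GNNs is a message-passing step in which each node aggregates its neighbours' features. A bare-bones sketch, with a made-up four-node graph and scalar features (real GNNs interleave this aggregation with learned weight matrices and nonlinearities), is:

```python
graph = {            # adjacency list: node -> neighbours
    "A": ["B", "C"],
    "B": ["A"],
    "C": ["A", "D"],
    "D": ["C"],
}
features = {"A": 1.0, "B": 2.0, "C": 3.0, "D": 4.0}

def message_pass(graph, features):
    """One round of aggregation: replace each node's feature with the
    mean of its neighbours' features (the 'messages' it receives)."""
    updated = {}
    for node, neighbours in graph.items():
        messages = [features[n] for n in neighbours]
        updated[node] = sum(messages) / len(messages)
    return updated

print(message_pass(graph, features))
# → {'A': 2.5, 'B': 1.0, 'C': 2.5, 'D': 3.0}
```

Stacking several such rounds lets information flow across the graph, which is how GNNs capture the relationships between nodes that the paragraph above emphasizes.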
Generative Adversarial Networks
Generative Adversarial Networks (GANs) are a relatively new concept built on game-theory dynamics: two neural networks compete, one generating content and the other judging whether that content is real. The end result is content that sounds and looks like a copy of reality but is actually synthetic. The technique has potential in areas such as political humor and art, but it also raises serious ethical concerns, since it can be used to generate fake information and advertisements.
Machine learning relies on statistical analysis and procedures to find patterns and classify data in order to ‘learn’. Software such as SAS and SPSS accelerated the use of machine learning and other advanced statistical analysis techniques. SAS is a global provider of AI, analytics and data management solutions for a variety of industries and roles. IBM’s SPSS Statistics is an advanced statistical platform that helps businesses discover data insights to research and solve business problems.
Statistical analysis was used to overcome the initial limitations of AI and simple neural networks, which learn in a way similar to statistical algorithms.
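As a minimal illustration of this kind of statistical "learning," the sketch below summarises two invented groups of measurements by their means (centroids) and classifies new points by the nearest centroid. It is not how SAS or SPSS work internally, just about the simplest possible instance of finding patterns and classifying data through statistics.

```python
from statistics import mean

# Made-up measurements from two labelled groups.
samples = {
    "small": [1.1, 0.9, 1.3, 1.0],
    "large": [4.8, 5.2, 5.0, 4.9],
}

# 'Training' is just statistical summarisation: one centroid per class.
centroids = {label: mean(values) for label, values in samples.items()}

def classify(x):
    """Assign x to whichever class centroid it lies closest to."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

print(classify(1.2))  # → small
print(classify(4.5))  # → large
```

Everything the "model" knows is a pair of summary statistics, yet it generalizes to unseen points, which is the basic sense in which statistical procedures let a program "learn."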
Although AI may not yet be capable of making decisions on key business assignments, it can still provide businesses and organizations with accurate, actionable insights. This allows the workforce to focus on high-level activities while machines take care of administrative and repetitive tasks.
AI grew exponentially in the mid-20th century, but thinkers pondered it long before there was a word to describe it. The pace of innovation increased significantly as more computing power and resources became available, and many advances came to fruition in the 1950s and 60s thanks to new programming languages, research studies, and, to some extent, inspiration from movies.
During the 70s, scientists and researchers accelerated advancements, especially in robotics and automation, but reduced government funding slowed the pace and led to the first AI Winter. Much the same happened in the 80s, but things took off again in the 90s, and the current decade has been one of the most important in the history and development of AI.
[i] “HISTORY OF AI”. Retrieved from https://builtin.com/artificial-intelligence
[ii] “A Complete History of Artificial Intelligence”. Retrieved from https://www.g2.com/articles/history-of-artificial-intelligence
[iii] “One Hundred Year Study on Artificial Intelligence”. Retrieved from https://ai100.stanford.edu/2016-report/section-i-what-artificial-intelligence/ai-research-trends