Turing Test Definition
Proposed in 1950 by Alan Turing, the Turing test is a method for determining whether a machine can exhibit behavior in conversation that is indistinguishable from that of a human. This test is a foundational concept in Artificial Intelligence.
A Little More on What a Turing Test Is
Technology advances every day in ways that could scarcely have been imagined centuries ago. From the abacus to highly sophisticated machines, technological progress is visible in our everyday lives. Technology helps in every aspect of life, from finance to personal growth, language processing, transportation, and even simple household chores. Experts continually look for ways to create new inventions, make discoveries, and build on what already exists, and this constant search has changed how businesses and industries operate. To better harness technologies that can improve business performance, many scientists and researchers strongly support Artificial Intelligence (AI) and seek to build machines that can match human intelligence and perform human actions.
Alan Turing, a British mathematician, designed the Turing test to evaluate whether a machine can be considered to have human, or near-human, intelligence. His starting question, "Can machines think?", became one of the greatest challenges in AI. To explore it, Turing proposed a test in which a machine's conversational ability is matched against that of a human. To avoid bias, he specified that an individual (known as a judge) would decide whether the machine sounds like a human. The judge interrogates both a real human and a machine; communication is by text only, and the judge is not told in advance which participant is the human and which is the machine. Turing stated that if the judge mistakes the machine for the human, the machine can be said to display human intelligence.
The Turing test has changed somewhat in modern times, as the interrogation is usually carried out by a panel of individuals, otherwise known as judges. A machine is said to have near-human intelligence if more than 30% of these judges mistake it for a human. Hugh Loebner created the Loebner Prize in 1991, an annual Turing test competition. Loebner modified the rules to include a twenty-five-minute conversation with each judge, with the winning bot being the one that receives the most votes from the judges. No machine passed the Turing test before 2014, falling short of Turing's prediction that by around the year 2000, machines would be able to fool a significant share of human interrogators.
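The modern pass criterion described above reduces to a simple decision rule over judge verdicts. The sketch below is a hypothetical illustration of that rule only (the function name and inputs are assumptions, not part of any official competition software):

```python
def passes_turing_test(judge_verdicts, threshold=0.30):
    """Return True if the machine 'passes' under the popular criterion:
    more than `threshold` (30%) of judges mistook it for a human.

    judge_verdicts: list of booleans, True where a judge
    believed the machine was the human participant.
    """
    if not judge_verdicts:
        return False
    fooled = sum(judge_verdicts)  # count of judges who were fooled
    return fooled / len(judge_verdicts) > threshold

# Illustration: if 10 of 30 judges are fooled (about 33%, roughly the
# figure reported for Eugene Goostman in 2014), the machine passes.
print(passes_turing_test([True] * 10 + [False] * 20))  # True
```

Note that the rule requires strictly more than 30%, so 3 fooled judges out of 10 (exactly 30%) would not count as a pass.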
Kevin Warwick of the University of Reading organized a Turing test competition in 2014 to mark the 60th anniversary of Alan Turing's death. In this event, a bot named Eugene Goostman was declared the winner after being mistaken for a human by more than 30% of the judges. This conclusion, however, remains the subject of considerable controversy.
As in every other field, there are opposing sides, and this is true of Artificial Intelligence as well. Critics argue that no matter how sophisticated a machine may be, it will never have a mind of its own, as it cannot perform activities beyond its original programming.