Last Updated on February 10, 2021 by Sean B
Competitions take place all over the world every year in different fields from sporting events to spelling bees, and winners are announced after careful evaluation by the judges. The history of competitions is very rich as well and has been a part of various cultures globally.
As you might expect, the same is true for the field of Artificial Intelligence; the domain is making leaps and bounds everywhere and has therefore taken its steps into the world of competition as well.
Each year, the Loebner Prize competition is held, and the AI chatbot that displays the most human-like conversational behavior is declared the winner by the judges. Hugh Loebner initiated the competition in 1990 in collaboration with the Cambridge Center for Behavioral Studies. So, let’s take a look at the competition and the prize in detail.
What You Need to Know About the Loebner Prize
The Loebner Prize is awarded each year to the AI bot with the most human-like characteristics, such as the ability to communicate with the judges interactively and intuitively. The annual competition has grown popular since its conception and has been well received by those in the field of IT. Let’s now discuss the factors associated with the prize in detail.
The Format of the Loebner Prize Competition
The format of the competition follows a standard Turing Test. In each round, the judges engage in a conversation simultaneously with an AI bot and a human agent through a computer but are not told who is who. It’s the judge’s job to decide, based on the responses, which one is the bot and which is the human.
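The round format described above can be sketched in a few lines of code. This is a toy illustration, not the actual contest software: the canned replies, the labels "A" and "B", and the helper names are all assumptions made for the example.

```python
import random

# A minimal sketch of one Loebner-style round: the judge converses with
# two anonymous participants (one scripted "bot", one "human" stand-in),
# then must guess which is which. All responses are canned placeholders.

def bot_reply(message: str) -> str:
    # A toy chatbot: keyword lookup with a generic fallback.
    canned = {
        "hello": "Hi there! How are you today?",
        "weather": "I haven't been outside, to be honest.",
    }
    for keyword, reply in canned.items():
        if keyword in message.lower():
            return reply
    return "That's interesting. Tell me more."

def human_reply(message: str) -> str:
    # Stand-in for the human confederate's typed answers.
    return "Hmm, let me think about that one."

def run_round(questions, rng):
    # Participants are shuffled so the judge cannot rely on position;
    # the judge sees only the anonymous labels "A" and "B".
    participants = [("bot", bot_reply), ("human", human_reply)]
    rng.shuffle(participants)
    transcript = {}
    for label, (identity, respond) in zip("AB", participants):
        transcript[label] = [(q, respond(q)) for q in questions]
    # Ground truth, revealed only after the judge's verdict.
    truth = {label: identity for label, (identity, _) in zip("AB", participants)}
    return transcript, truth

transcript, truth = run_round(["Hello!", "How's the weather?"], random.Random(0))
```

The key design point mirrored here is anonymity: the judge receives only labeled transcripts, and the mapping from label to identity stays hidden until after the verdict.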
The Turing Test also has a very rich history and was proposed as a method to evaluate the machines’ intelligence. The principle behind this test is that verbal communication based on responses is a good method to determine the AI’s intelligence.
Turing Tests have broken boundaries in the field of IT and have been very influential. From time to time, some experts debate the validity of the Turing Test, but just like anything else in science, it receives rigorous scrutiny so that it can be improved.
Harnad’s Total Turing Test
Turing Tests have also been refined over the years. One example is Harnad’s Total Turing Test, proposed in 1991. Harnad called this new method the Total Turing Test because, in this format, verbal communication is not the sole criterion for determining the intelligence of the AI machine.
In the Total Turing Test, the bot’s other behaviors are also taken into account to get a better picture of its intelligence. Unfortunately, the Total Turing Test is only applicable to AI systems that possess a physical body, such as robots. Since Harnad requires that these robots be able to mimic the behavior of other people in the open world, the criteria cannot be applied to an AI that is confined to a screen.
Alan Turing’s Imitation Game
The Turing Test owes its name and conception to one of the most renowned thinkers in computer science, Alan Turing. In 1950, Alan Turing put forth a remarkable question: can machines think? Turing then proposed a test, which he initially called the Imitation Game; we now know it as the Turing Test.
Since then, machines have proven time and again that the answer to Turing’s question was most probably yes.
As discussed above, Turing’s test requires interrogators to converse with an AI bot and see whether or not the AI can trick the judges into thinking that they are talking to a real human and not a bot.
The strategy of the Turing test has also evolved over time. New methods have been introduced and incorporated, and its criteria have been made more effective at testing the intelligence of the bots.
One example that shook the judges is Eugene Goostman, described as the first AI bot to trick the judges into thinking that they were talking to a human agent. Eugene is a chatbot with the persona of a 13-year-old boy and can convince people that they are talking to a 13-year-old boy rather than a program.
The Turing test is carried out in a completely open-ended way, and the judges do not have any pre-defined topic or goal in mind. The conversation with the bot is spontaneous and unpredictable; it can go in any direction, and the results can vary.
What’s interesting about the Turing test is that Turing later pondered his question and rephrased it. The requirement is not that the AI bot exhibit superior intelligence; rather, to truly trick a judge into thinking that they are talking to a human, the chatbot must make mistakes just as a normal human would.
This notion of making chatbots fallible, just like a human, adds to their ability to exhibit human behavior. This is why Eugene Goostman was so effective at tricking the judges into thinking that they were talking to a 13-year-old boy: Eugene was able to mimic the fallibility of a 13-year-old boy.
The Loebner Prize Award
Originally, when the competition was conceptualized by Hugh Loebner, an award of $2,000 was to be given to the owner of the most human-like program in the competition. The prize was raised to $3,000 in 2005 but was then changed to $2,500 in 2006.
Then once again, in 2008, $3,000 was awarded to the winner of the competition. Along with these prizes, there are two one-time awards that were announced but have never been won.
A prize of $25,000 was announced for the first computer program that convinces the judges it is the real human and tricks them into thinking that the human confederate is the computer program.
A prize of $100,000 was announced for the first computer program that can convince the judges it is a real human while also deciphering and understanding visual images, auditory inputs, and text.
If these prizes are ever won, the competition’s goal will have been reached, bringing the annual competition to an end. As Turing envisioned, the goal is to find the most human-like chatbot, one that can communicate with humans just as a normal human would.
The Rules and Regulations of the Competition
Before 1995, the restrictions were strict, but after 1995 many of them were lifted. Anyhow, let’s discuss the rules that have been part of the Loebner Prize.
In 2007, for the three entries of Robert Medeksza, Noah Duncan, and Rollo Carpenter, screening questions were introduced to determine the state of the technology used in the AI. These included basic questions about the current time, which round of the tournament was underway, general knowledge (such as the name of the president), basic comparisons (whether a giraffe or an elephant is taller), memory questions, and so on.
The contestants didn’t need to give highly intelligent answers to these questions.
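A screening pass of this kind could be sketched as a simple topical check. The actual 2007 question set and scoring were not published, so the questions, the expected keywords, and the `passes_screening` helper below are all illustrative assumptions.

```python
# Illustrative sketch of 2007-style screening questions (the real set
# and scoring rules are assumptions here, not the contest's own).
screening_questions = [
    ("What time is it?", "time"),                               # current state
    ("Which is taller, a giraffe or an elephant?", "giraffe"),  # comparison
    ("What is the name of the president?", "president"),        # general knowledge
]

def passes_screening(answer_fn) -> bool:
    # An entry passes if each answer at least stays on topic; as the
    # article notes, highly intelligent answers were not required.
    for question, expected in screening_questions:
        if expected not in answer_fn(question).lower():
            return False
    return True

# A trivial entry that echoes the question back stays on topic
# for every question, so it clears this (very low) bar.
echo_bot = lambda q: q
```

The low bar is deliberate: the screening was meant to weed out non-functional entries, not to judge intelligence.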
In 2008, web-based entries were allowed to take part in the competition for the first time. This opened up new possibilities for the competition.
The interrogators who converse with the AI chatbots are invited for the task; the rules do not define how they are selected. Interrogators have limited time to converse with the bots. In 2003, for example, they had 5 minutes.
Between 2004 and 2007, the time was 20-plus minutes per pair. In 2008-2009, interrogators had 5 minutes for a simultaneous conversation with an AI chatbot and a human agent to find out which was which. Since 2010, the time limit for the simultaneous conversation has been 25 minutes.
The History of the Loebner Prize and Some Remarkable Loebner Prize Winners
One notable edition of the contest was held in 2006, organized by Tim Child and Huma Shah. By August 30, four finalists had been announced: Rollo Carpenter, Noah Duncan, Robert Medeksza, and the team of Richard Churchill and Marie-Claire Jenkins.
The contest was held in the VR theatre at the Torrington Place campus of University College London on September 17. The judges were the University of Reading’s cybernetics professor Kevin Warwick, the University of Birmingham’s artificial intelligence professor and metaphor research specialist John Barnden, barrister Victoria Butler-Cole, and journalist Graham Duncan-Rowe.
The winner of the competition was “Joan,” created by Rollo Carpenter as an avatar of Jabberwacky. Since 2006, annual competitions have continued, and many winners have been announced. One of the most remarkable is perhaps Mitsuku, created by Steve Worswick on the Pandorabots platform.
AI specialists describe Mitsuku as one of the best conversational chatbots in the world. It has broken barriers in our understanding of what AI can do. Mitsuku is a record five-time Loebner Prize winner and continues to be a leading icon in the field of AI.
Mitsuku is available on the Pandorabots website and is intended for entertainment; however, Pandorabots customers are free to develop their own AI for uses such as education, customer service, finance, and health.
So, what is the Loebner Prize? Simply put, it is a test of brains, albeit artificial ones, that takes place every year to determine which chatbot can communicate with a group of judges in the most natural way. In some ways, the competition is little more than a personality contest, but the object isn’t to be the smartest in the room, just the one that is closest to interacting with the judges like a human instead of a computer.
We hope that the article was helpful. We would love to hear from you.
Please feel free to share your remarks in the comments section below.