Amazon’s next CAPTCHA could test your knowledge of physics

First it was simple; all you had to do was click a box. Then, proving you are not a robot became slightly more complicated: deciphering barely legible scrambled letters, or picking out all the images containing cars.

It could get even more exciting, according to a new patent. The next CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart, in case you ever wondered) method from Amazon could test your understanding of Newtonian physics.

Amazon has filed a patent for a CAPTCHA test that shows the user a set of scenarios, and asks them what will happen next. Passing the test relies on a fundamental grasp of the laws of physics; in particular, gravity.

For example, in one of the scenarios a ball is placed on top of a slope, and in another a heavy weight is hanging in mid-air.

The person is then presented with four possible outcomes and must choose the correct one. The three other answers make no sense in terms of physics, but they provide amusing alternatives.
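The flow described in the patent can be sketched roughly as follows. This is a minimal illustration only: the scenario text, function names, and data layout are all invented here, not taken from Amazon's patent.

```python
import random

# Hypothetical scenario bank: each entry pairs a prompt with one
# physically correct outcome and three physically nonsensical ones.
SCENARIOS = [
    {
        "prompt": "A ball is placed at the top of a slope. What happens next?",
        "correct": "The ball rolls down the slope.",
        "distractors": [
            "The ball floats upward.",
            "The ball splits in two.",
            "The slope flattens itself out.",
        ],
    },
    {
        "prompt": "A heavy weight hangs in mid-air with nothing holding it. "
                  "What happens next?",
        "correct": "The weight falls to the ground.",
        "distractors": [
            "The weight hovers in place.",
            "The weight drifts sideways.",
            "The weight rises like a balloon.",
        ],
    },
]


def make_challenge(rng=random):
    """Pick a scenario, shuffle its four options, and return the
    prompt, the option list, and the index of the correct answer."""
    scenario = rng.choice(SCENARIOS)
    options = [scenario["correct"]] + list(scenario["distractors"])
    rng.shuffle(options)
    return scenario["prompt"], options, options.index(scenario["correct"])


def check_answer(chosen_index, correct_index):
    """The user passes the CAPTCHA only by picking the physical outcome."""
    return chosen_index == correct_index


if __name__ == "__main__":
    prompt, options, correct = make_challenge()
    print(prompt)
    for i, option in enumerate(options, start=1):
        print(f"  {i}. {option}")
```

A real deployment would render the scenarios as images or animations rather than text, which is precisely what makes the task hard for image-recognition bots while remaining trivial for people.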

“As computers become more powerful and as artificial intelligence becomes more advanced, there is a continuing need to evolve the tests employed by CAPTCHAs,” the patent says, according to Business Insider. “Computers programmed with artificial intelligence or other text and image recognition algorithms can defeat certain weak CAPTCHA algorithms.”

While such a test may seem intuitive to humans, the patent application says computers need much more information and “might be unable to solve the test”.


This comes just over a month after another idea for an Amazon CAPTCHA came to light, a test that humans are expected to fail.

The patent, granted to Amazon in August, described a type of inverse Turing test that humans are more likely to fail than robots. Its success relies on the unusual trait that humans are consistently inconsistent.

“Computers can be programmed to correctly solve complex logic problems, or at least provide reliably and/or predictably incorrect answers to such problems,” the patent said.

“It is unlikely that a bot could be programmed to intentionally answer the question or challenge in the same manner as a human user, because predictably reliable inconsistent performance is a uniquely human characteristic.” 
