Google’s AI builds its own AI child and it’s better than anything humans have made

Google’s AI-building AI actually went ahead and built a fully functional AI child that, as it turns out, is more capable at its task than any equivalent AI built by human hands. Historians will look back at this moment, from their ruined cities and hideouts from their robot masters, as the time when the downfall of humanity began.

Of course, it’s not actually all that doom and gloom: the child AI is really only capable of a specific task – image recognition. Google’s AI-building system, AutoML, created its child AI using a technique called reinforcement learning. In essence, AutoML acts as a controller neural network that proposes candidate architectures for its task-driven AI child, then uses feedback on the child’s performance to propose better ones – and the whole process runs without human intervention.

Known as NASNet, the child AI was tasked with recognising objects in video in real time. AutoML would then evaluate how well NASNet performed at its task and use that feedback to refine the child’s architecture, producing a superior version of NASNet.
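To make that evaluate-and-improve loop concrete, here is a minimal, self-contained sketch of reinforcement-learning-style architecture search. Everything in it is an illustrative assumption rather than Google’s actual method: the layer names, NUM_SLOTS and update rule are made up, and mock_accuracy stands in for the expensive step of actually training and validating a child network.

```python
import math
import random

# Hypothetical search space: each slot in the child network picks one layer type.
CANDIDATE_LAYERS = ["conv3x3", "conv5x5", "maxpool", "identity"]
NUM_SLOTS = 4

# The "controller" keeps a preference score per (slot, layer type) pair.
prefs = [{layer: 0.0 for layer in CANDIDATE_LAYERS} for _ in range(NUM_SLOTS)]

def sample_architecture():
    """Sample a child architecture, favouring layer types with higher preference."""
    arch = []
    for slot in prefs:
        weights = [math.exp(slot[layer]) for layer in CANDIDATE_LAYERS]
        arch.append(random.choices(CANDIDATE_LAYERS, weights=weights)[0])
    return arch

def mock_accuracy(arch):
    """Stand-in for actually training the child network and scoring it on a
    validation set; conv layers are arbitrarily assumed to help here."""
    score = sum(0.2 if layer.startswith("conv") else 0.05 for layer in arch)
    return score + random.uniform(0.0, 0.1)  # noisy, like real training runs

best_arch, best_score = None, 0.0
for step in range(200):
    arch = sample_architecture()          # controller proposes a child network
    reward = mock_accuracy(arch)          # the "evaluation" step
    for slot, layer in zip(prefs, arch):  # nudge the controller toward good picks
        slot[layer] += 0.1 * (reward - best_score)
    if reward > best_score:
        best_arch, best_score = arch, reward

print("best architecture found:", best_arch, f"(mock accuracy {best_score:.2f})")
```

In the real system the controller is itself a trained neural network and the reward is the child’s measured accuracy after full training, which is why each iteration of the search is so computationally expensive.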

This endless automated tweaking paid off, though. When tested on the ImageNet image classification and COCO object detection datasets – described as “two of the most respected large-scale academic data sets in computer vision” – NASNet outperformed all other systems.

NASNet was 82.7% accurate at classifying images on ImageNet’s validation set – 1.2% higher than any previously published result. On the COCO object detection task it achieved a mean Average Precision (mAP) of 43.1%, 4% better than the best human-designed systems. Interestingly, a smaller, less computationally demanding version of NASNet also outperformed the best comparably sized models built for mobile platforms by 3.1%.
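For context on what that 82.7% figure measures: accuracy on a validation set is simply the fraction of held-out images whose top-scoring predicted label matches the true label. A quick illustrative sketch – the toy scores below are invented, not NASNet output:

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of examples where the highest-scoring prediction matches the label."""
    return float((logits.argmax(axis=1) == labels).mean())

# Toy example: 5 validation images, 3 classes.
logits = np.array([[0.1, 0.7, 0.2],
                   [0.9, 0.05, 0.05],
                   [0.3, 0.3, 0.4],
                   [0.2, 0.5, 0.3],
                   [0.6, 0.2, 0.2]])
labels = np.array([1, 0, 2, 0, 0])
print(f"top-1 accuracy: {top1_accuracy(logits, labels):.1%}")  # 80.0%
```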

Obviously, in its current guise, NASNet isn’t going to be the downfall of humanity. It does, however, point to how we could build better AI systems in the future. With self-learning AI, and AIs that can moderate and alter other AIs, we could create systems better suited to tasks such as autonomous vehicles or automated factories.

“We hope that the larger machine learning community will be able to build on these models to address multitudes of computer vision problems we have not yet imagined,” Google’s researchers wrote in their blog post announcing the results.

The trouble is, such advances in AI could have dangerous implications. Beyond simply building AI in ways that are hard to regulate or intervene in, it’s possible that a system which picks up biases – as Microsoft’s shuttered Tay chatbot infamously did – could pass them on, hard-coding them into its next-generation AI.

Thankfully, there are groups out there trying to ensure this future doesn’t come to pass. Elon Musk and Stephen Hawking have both warned loudly about unchecked AI development, and the world’s biggest tech companies have formed the Partnership on AI, a joint initiative intended to bring these giants of tech together to ensure the future of AI doesn’t cause the breakdown of society.
