Interest in artificial intelligence continues to climb: Google searches for the term over the last 12 months sit at 92% of their all-time peak. But recent research suggests that AI's success may also be its undoing. Amid the flood of AI-generated content online, a team of researchers at the universities of Cambridge and Oxford set out to see what happens when generative AI tools are trained on content that was itself made by AI. What they found was terrifying.
Dr. Ilia Shumailov of the University of Oxford and a team of researchers found that when AI models are trained on content produced by generative AI, their answers tend to degrade, according to a study published in Nature last month.
After the first couple of generations, the answers gradually drift from accuracy, followed by a steep decline in quality by the fifth attempt and a full descent into nonsensical pablum by the ninth consecutive round. The researchers call this cycle "model collapse": the steady decay of learned responses as AI-generated output repeatedly pollutes successive training sets, until what the model produces is a useless distortion of reality.
"It's surprising how fast model collapse kicks in and how elusive it can be. At first, it affects minority data: data that is poorly represented. It then affects the diversity of the outputs, and variance reduces. Sometimes, you observe small improvements on the majority data, which hides away the degradation of performance on minority data. Model collapse can have serious consequences," Shumailov explained in an email exchange.
This matters because approximately 57% of all text on the web has been AI-generated or run through an AI translation algorithm, according to a separate study by an Amazon Web Services research team published in June. If human-created content online is rapidly being drowned out by AI-made material, and if the findings of Shumailov's study hold, it is possible that AI is killing itself, and the internet along with it.
Researchers Find AI Fooling Itself
Here is how the team demonstrated that model collapse occurs. They started with a pre-trained language model fine-tuned on wiki text, then trained each successive generation of the model on the previous generation's own output. As AI-generated data polluted each round of training, the output steadily degraded into incoherence.
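The dynamic is easy to reproduce in miniature. The toy Python sketch below is only an illustration of the recursive-training idea, not the researchers' actual code or models: it "trains" a simple word-frequency model on a short human-written passage, has the model generate a synthetic corpus of the same size, retrains on that output, and repeats. The vocabulary the model can produce shrinks generation by generation.

```python
import random
from collections import Counter

random.seed(0)
human_corpus = ("the old church tower was built before thirteen sixty "
                "while other towers rose in later centuries").split()

corpus = human_corpus
for gen in range(1, 10):
    counts = Counter(corpus)          # "training": count word frequencies
    words, weights = zip(*counts.items())
    # "generation": sample a same-sized synthetic corpus from the fitted
    # model, then use it as the next generation's training data
    corpus = random.choices(words, weights=weights, k=len(human_corpus))
    print(f"generation {gen}: {len(set(corpus))} distinct words "
          f"(human corpus had {len(set(human_corpus))})")
```

Words that happen to be missed in one round of sampling can never reappear in later rounds, which is the same one-way loss of diversity the study observed at a far larger scale.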
For example, by the ninth generation, a passage of the wiki text about 14th-century English church towers had mutated into a rambling discussion of the many varieties of jackrabbit tails.
Another example cited in the Nature report to illustrate the point posits an AI model trained on images of dog breeds. Based on the findings, less common breeds would be dropped from the replication pool in favor of more popular breeds such as golden retrievers. The AI effectively runs its own "use it or lose it" routine that purges underrepresented breeds from its memory. With enough cycles of AI-only inputs, the model eventually produces meaningless results, as the figures in the study illustrate.
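A small simulation makes the "use it or lose it" dynamic concrete. The sketch below is a hypothetical illustration, not the study's experiment, and the breed counts are invented: each generation's "training data" is drawn only from samples of the previous generation, and the rare breeds are the first to disappear.

```python
import random
from collections import Counter

random.seed(1)
# Hypothetical starting population, skewed toward popular breeds
population = (["golden retriever"] * 60 + ["labrador"] * 30
              + ["otterhound"] * 7 + ["skye terrier"] * 3)

for gen in range(1, 11):
    # Each generation is trained only on samples drawn from the last one
    population = random.choices(population, k=len(population))
    print(f"generation {gen:2d}: {dict(Counter(population))}")
```

Once the last otterhound fails to be sampled, no later generation can ever produce one, while the golden retrievers' share tends to grow.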
"In essence, imagine you want to build an AI model that generates images of animals. Before machine-learning models existed, you could simply find pictures of animals on the internet and build a model from them; today, it is far more complicated. Many of the images online are not real and include misconceptions introduced by other models," explains Shumailov.
How Does Model Collapse Happen?
For some reason, and the researchers don't know exactly why, when an AI feeds on a steady diet of its own synthetic data, it loses touch with the original thread of reality and begins constructing its best answers from its own recycled data points.
But something gets lost in each round of this machine-to-machine regurgitation.
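One way to see why the tails vanish: statistical estimation from finite samples is noisy, and when each generation re-estimates from the previous generation's samples, those errors compound rather than wash out. The minimal numerical sketch below is an assumption-level illustration, not the study's method: it repeatedly fits a normal distribution to data sampled from the previous fit, and the estimated spread drifts away from the original.

```python
import random
import statistics

random.seed(2)
mu, sigma = 0.0, 1.0   # the "real" underlying data distribution
for gen in range(1, 11):
    # model-generated data: samples from the previous generation's fit
    sample = [random.gauss(mu, sigma) for _ in range(50)]
    # refit on that synthetic data alone, discarding the real distribution
    mu, sigma = statistics.fmean(sample), statistics.stdev(sample)
    print(f"generation {gen:2d}: mean={mu:+.3f}, stdev={sigma:.3f}")
```

With no fresh real data to anchor the estimates, the chain performs a random walk in which the extremes of the original distribution are progressively forgotten.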
The study concludes that the only way for artificial intelligence to achieve long-term sustainability is to ensure its access to existing non-AI, human-made content and a continuing supply of fresh human-generated material going forward.
The Internet’s AI-Generated Waves Are Rising Fast
Yet these days, you can hardly scroll the internet without running into AI-generated content, and there may be more of it than you think.
In fact, one AI expert and policy advisor has predicted that, given the accelerating use of generative AI, 90% of all content on the internet could be AI-generated by sometime in 2025.
Even if AI-generated content does not reach 90% by next year, a great deal of it will still be available for training future AI models. That is not a comforting thought in light of Shumailov's findings, and the lack of a solution to the problem only grows more pressing as generative AI's popularity rises.
Houston, We Have a Problem. Make That Problems
No one knows what laws or regulations may be passed in the coming months and years to restrict the bulk scraping of web content or the use of copyrighted material.
Moreover, with so much of the information on the internet already AI-generated, and no way to stem that flood, developers of next-generation AI algorithms will find it increasingly difficult to avoid such material as pristine, human-made training sources dwindle.
Adding to the problem, Shumailov says it is becoming increasingly difficult for developers to filter out the output of large AI language models, and there is no ready solution in sight.
"Not at the moment. There are academic discussions, and hopefully we will make progress on how to address model collapse and minimize the costs associated with it," says Shumailov.
"One option is community-wide coordination to ensure that the different parties involved in LLM creation and deployment share the information needed to resolve questions of provenance," says Shumailov. "Otherwise, it may become increasingly difficult to train newer versions of LLMs without access to data that was crawled from the internet before the technology's mass adoption, or without direct access to data generated by humans at scale."
Shumailov says the most significant casualty of model collapse is the corruption of previously unbiased training sets, which come to reflect errors, biases, and unfairness. It could also multiply AI hallucinations, the confident best guesses an AI makes that are not real, which have already surfaced on several genAI platforms.
Given the steady march toward AI model collapse, content on the internet may eventually need to be verified through an immutable method such as a blockchain record, or carry the equivalent of a "Good Housekeeping" seal of approval, to establish its trustworthiness.
Otherwise, the death of AI and the Internet may mean the death of truth.
This article was originally published on forbes.com.