
Prepare for Artificial Intelligence to Produce Less Wizardry

Early last year, a large European supermarket chain deployed artificial intelligence to predict what customers would buy each day at different stores, to help keep shelves stocked while reducing costly spoilage of goods.

The company already used purchasing data and a simple statistical method to predict sales. With deep learning, a technique that has helped produce spectacular AI advances in recent years—as well as additional data, including local weather, traffic conditions, and competitors’ actions—the company cut the number of errors by three-quarters.
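
The article does not describe the company's model, but the general recipe is easy to sketch. Below is a minimal, hypothetical illustration in PyTorch: a small feed-forward network that maps recent sales history plus weather and traffic features to a next-day demand estimate. Every feature name, layer size, and data value here is an assumption for illustration, not a detail of the (unnamed) company's system.

```python
# Hypothetical sketch of a deep-learning demand forecaster.
# Features and shapes are illustrative assumptions only.
import torch
import torch.nn as nn

# Each row: [sales for the past 7 days, temperature, rainfall,
#            traffic index, competitor-promotion flag]
n_features = 7 + 4

model = nn.Sequential(
    nn.Linear(n_features, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 1),  # predicted units sold tomorrow
)

loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(features, targets):
    """One gradient step on a batch of (store, day) rows."""
    optimizer.zero_grad()
    loss = loss_fn(model(features).squeeze(-1), targets)
    loss.backward()
    optimizer.step()
    return loss.item()

# Stand-in random data: 32 (store, day) rows.
x = torch.randn(32, n_features)
y = torch.randn(32)
print(train_step(x, y))
```

Even a modest network like this, multiplied across thousands of products and stores and retrained regularly, can run up the kind of cloud-compute bill the story goes on to describe.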

It was precisely the kind of high-impact, cost-saving effect that people expect from AI. But there was a huge catch: The new algorithm required so much computation that the company chose not to use it.

“They were like, ‘Well, it’s not worth it to us to roll it out in a big way, unless cloud computing costs come down or the algorithms become more efficient,’” says Neil Thompson, a research scientist at MIT, who is assembling a case study on the project. (He declined to name the company involved.)

The story highlights a looming problem for AI and its users, Thompson says. Progress has been both rapid and dazzling in recent years, giving us clever game-playing programs, attentive personal assistants, and cars that navigate busy roads for themselves. But such advances have hinged on throwing ever-more computing resources at the problems.


In a new research paper, Thompson and colleagues argue that continuing to scale up computing power at this rate is, or will soon become, infeasible, which could jeopardize further progress in areas like computer vision, translation, and language understanding.

AI’s appetite for computation has risen remarkably over the past decade. In 2012, at the beginning of the deep-learning boom, a team at the University of Toronto created a breakthrough image-recognition algorithm using two GPUs (a specialized kind of computer chip) over five days. Fast-forward to 2019, and it took six days and roughly 1,000 special chips (each many times more powerful than the earlier GPUs) for researchers at Google and Carnegie Mellon to develop a more modern image-recognition algorithm. A translation algorithm, developed last year by a team at Google, required the rough equivalent of 12,000 specialized chips running for a week. By some estimates, it would cost up to $3 million to rent this much computer power through the cloud.
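
The growth is easier to see as back-of-envelope arithmetic on the figures above. The chip-day totals below deliberately ignore per-chip speedups, so they understate the true increase in raw computation; the implied cloud price is derived from the $3 million estimate and is only a rough consistency check.

```python
# Back-of-envelope arithmetic on the numbers quoted above.
toronto_2012 = 2 * 5          # 2 GPUs for 5 days = 10 chip-days
google_cmu_2019 = 1_000 * 6   # ~1,000 chips for 6 days = 6,000 chip-days
translation = 12_000 * 7      # ~12,000 chips for a week = 84,000 chip-days

print(google_cmu_2019 / toronto_2012)  # 600x more chip-days, 2012 -> 2019
print(translation / toronto_2012)      # 8,400x more chip-days

# Implied cloud price if the translation run cost ~$3 million:
chip_hours = 12_000 * 7 * 24           # ~2 million chip-hours
print(3_000_000 / chip_hours)          # ~$1.50 per chip-hour
```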

“Deep neural networks are very computationally expensive,” says Song Han, an assistant professor at MIT who specializes in developing more efficient forms of deep learning and is not an author on Thompson’s paper. “This is a critical issue.”

Han’s group has created more efficient versions of popular AI algorithms using novel neural network architectures and specialized chip architectures, among other things. But he says there is “still a long way to go” to make deep learning less compute-hungry.
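
One widely used strand of this efficiency work is pruning: zeroing out the many near-zero weights in a trained network. Here is a minimal sketch using PyTorch's built-in pruning utility; the layer size and the 80 percent sparsity level are arbitrary choices for illustration, not figures from Han's papers.

```python
# Minimal sketch of magnitude-based weight pruning, one common way to
# shrink a network's memory and compute footprint.
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(512, 512)

# Zero out the 80% of weights with the smallest absolute values.
prune.l1_unstructured(layer, name="weight", amount=0.8)

# Fold the pruning mask into the weight tensor permanently.
prune.remove(layer, "weight")

sparsity = (layer.weight == 0).float().mean().item()
print(f"sparsity: {sparsity:.0%}")  # ~80% of weights are now zero
```

Unstructured sparsity like this only saves time on hardware or kernels that can skip the zeros, which is part of why this line of research pairs algorithmic tricks with specialized chip architectures.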

Other researchers have noted the soaring computational demands. The head of Facebook’s AI research lab, Jerome Pesenti, told WIRED last year that AI researchers were starting to feel the effects of this computation crunch.

Thompson believes that, without clever new algorithms, the limits of deep learning could slow advances in multiple fields, affecting the rate at which computers replace human tasks. “The automation of jobs will probably happen more gradually than expected, since getting to human-level performance will be much more expensive than anticipated,” he says. Slower automation might sound good from a jobs perspective, he adds, but it will also slow gains in productivity, which are key to raising living standards.

In their study, Thompson and his coauthors looked at more than 1,000 AI research papers outlining new algorithms. Not all of the papers detailed the computational requirements, but enough did to map out the cost of progress. The history suggested that making further advances in the same way will be all but impossible.

Improving an English-to-French machine-translation algorithm so that it makes mistakes only 10 percent of the time, down from the current rate of 50 percent, would require a billion billion times as much computing power if it relied on added computation alone. The paper was posted to arXiv, a preprint server, and has yet to be peer-reviewed or published in a journal.
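
To see where a number like “a billion billion” can come from, consider a simple power-law model of the kind the paper extrapolates from historical data: if error falls only polynomially as compute grows, small accuracy gains demand enormous compute. The exponent below is a hypothetical value chosen to reproduce the quoted figure, not the paper's fitted estimate.

```python
# Hypothetical power law: error E ~ C**(-alpha) for compute C, so
# cutting error from 50% to 10% (a 5x reduction) needs 5**(1/alpha)
# times more compute. alpha is chosen only to illustrate how a small
# exponent yields the "billion billion" (1e18) scale quoted above.
alpha = 0.039
factor = (0.50 / 0.10) ** (1 / alpha)
print(f"{factor:.1e}")  # ~8.4e17, on the order of a billion billion
```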