Sunday, May 15, 2016

NVIDIA: A $2 Billion Chip to Accelerate Artificial Intelligence (NVDA)

First, a heads-up: the technical trading gurus at Investor's Business Daily are saying to take profits after Friday's big move, with the stock now 20% above their buy point. I don't think so, but it all depends on your time frame.

And on whether you have the discipline to buy back in should the stock not pull back.
(see links below for some of the things driving the stock)

On the other hand, it's not as if the stock is unknown and hordes of naïfs have yet to discover it. NVDA was the fourth-best performer on NASDAQ in 2015 and through Friday is up 95% in 52 weeks and 60% since the overall market bottomed on Feb. 11.

[Chart: NVDA, NVIDIA Corporation, daily]
Here's some background from MIT's Technology Review, April 5:
The field of artificial intelligence has experienced a striking spurt of progress in recent years, with software becoming much better at understanding images, speech, and new tasks such as how to play games. Now the company whose hardware has underpinned much of that progress has created a chip to keep it going.

On Tuesday Nvidia announced a new chip called the Tesla P100 that’s designed to put more power behind a technique called deep learning. This technique has produced recent major advances such as the Google software AlphaGo that defeated the world’s top Go player last month (see “Five Lessons from AlphaGo’s Historic Victory”).

Deep learning involves passing data through large collections of crudely simulated neurons. The P100 could help deliver more breakthroughs by making it possible for computer scientists to feed more data to their artificial neural networks or to create larger collections of virtual neurons.

Artificial neural networks have been around for decades, but deep learning only became relevant in the last five years, after researchers figured out that chips originally designed to handle video-game graphics made the technique much more powerful. Graphics processors remain crucial for deep learning, but Nvidia CEO Jen-Hsun Huang says that it is now time to make chips customized for this use case.

At a company event in San Jose, he said, “For the first time we designed a [graphics-processing] architecture dedicated to accelerating AI and to accelerating deep learning.” Nvidia spent more than $2 billion on R&D to produce the new chip, said Huang. It has a total of 15 billion transistors, roughly three times as many as Nvidia’s previous chips. Huang said an artificial neural network powered by the new chip could learn from incoming data 12 times as fast as was possible using Nvidia's previous best chip.

Deep-learning researchers from Facebook, Microsoft, and other companies that Nvidia granted early access to the new chip said they expect it to accelerate their progress by allowing them to work with larger collections of neurons....MORE
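The "crudely simulated neurons" in that description are, in practice, mostly big matrix multiplications. Purely as illustration (this sketch is ours, not the article's, and the layer sizes and random weights are arbitrary stand-ins), here is roughly what passing data through two layers of simulated neurons looks like in NumPy:

import numpy as np

def relu(z):
    # A "neuron" fires only if its weighted input sum is positive
    return np.maximum(0.0, z)

# Toy two-layer network: 784 inputs (e.g. a 28x28 image),
# 128 hidden neurons, 10 outputs. Weights start random;
# training would adjust them from data.
rng = np.random.RandomState(0)
W1 = rng.randn(784, 128) * 0.01
W2 = rng.randn(128, 10) * 0.01

def forward(batch):
    hidden = relu(batch @ W1)  # each hidden neuron: weighted sum of all 784 inputs
    return hidden @ W2         # output layer: weighted sum of the 128 hidden neurons

batch = rng.randn(64, 784)     # 64 examples pushed through at once
print(forward(batch).shape)    # -> (64, 10)

Training adjusts those weights, and every pass is dominated by dense matrix arithmetic, which is exactly the workload chips like the P100 are built to accelerate.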
We've been touting NVIDIA for a while:
May 12
NVIDIA Sets New All Time High On Pretty Good Numbers, "Sweeping Artificial Intelligence Adoption" (NVDA)
April 13 
CERN Will Be Using NVIDIA Graphics Processors to Accelerate Their Supercomputer (NVDA)
January 5
Class Act: Nvidia Will Be The Brains Of Your Autonomous Car (NVDA)
November 2015
Stanford and Nvidia Team Up For Next Generation Virtual Reality Headsets (NVDA)
November 2015
"NVIDIA: “Expensive and Worth It,” Says MKM Partners" (NVDA)
October 2015
Quants: "Two Glenmede Funds Rely on Models to Pick Winners, Avoid Losers" (NVDA)
May 2015
Nvidia Wants to Be the Brains Of Your Autonomous Car (NVDA)
We've mentioned, usually in the context of the Top 500* fastest supercomputers, that:
Long time readers know we have a serious interest in screaming fast computers and try to get to the Top500 list a couple times a year. Here is a computer that was at the top of that list, the fastest computer in the world just four years ago. And it's being shut down.
Technology changes pretty fast. 
That was from a 2013 post.

Among the fastest processors in the business are the ones originally developed for video games, known as Graphics Processing Units, or GPUs. Since Nvidia released its Tesla hardware in 2008, hobbyists (and others) have used GPUs to build personal supercomputers.
Here's Nvidia's Build Your Own page.
Or have your tech guy build one for you.
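For a sense of what using one of those GPUs looks like from the programmer's side, here is a hedged sketch (not Nvidia's own example; it assumes Python with the Numba library and a CUDA-capable card):

import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    # Each GPU thread handles one element of the arrays
    i = cuda.grid(1)
    if i < x.size:
        out[i] = a * x[i] + y[i]

n = 1000000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
# Launch one tiny job per array element, run in parallel on the card
saxpy[blocks, threads_per_block](np.float32(2.0), x, y, out)

The arithmetic itself is trivial; the point is that thousands of threads run it simultaneously, which is why a box full of GPUs behaves like a small supercomputer.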

In addition, Nvidia has a very fast interconnect it calls NVLink.
Using a hybrid combination of IBM Central Processing Units (CPUs) and Nvidia GPUs, all hooked together with NVLink, Oak Ridge National Laboratory is building what is expected to be the world's fastest supercomputer when it debuts in 2018.

As your kid plays Grand Theft Auto....