Artificial Intelligence ("AI") is seeing a lot of fervent enthusiasm. Claims are being made that entire industries will change, self-driving vehicles and robots will take over, and millions of jobs will be displaced. Since 2010, venture funding into AI has increased roughly 20-fold, from around $500 million to $10.8 billion in 2017. Big corporations spent even more on AI, possibly $40 billion or more in 2017. The initial push has come from large technology companies, yet 70 percent or more of companies have no actual AI efforts underway.

If you are an executive at one of the many companies not currently using AI, is all the talk about computers taking over hype, or is it time to act?

AI Has a History of False Starts and Exaggeration

It is natural for business leaders to be skeptical, especially given that AI has suffered from a history of false promises and irrational exuberance.

The 1950s were considered the "Golden Age" of artificial intelligence. In 1956, after IBM scientist Arthur Samuel demonstrated a checkers-playing program on television on one of IBM's first mainframe computers, IBM's stock went up by 15 points in one day. Claude Shannon, the brilliant Bell Labs researcher who invented the field of information theory, demonstrated a mechanical mouse named "Theseus" that could find its way through a maze. Researchers at major universities developed AI theory and made a lot of great breakthroughs. Simultaneously, government and private companies invested heavily in AI.

By the late 1960s, many well-known scientists were convinced that machines would outpace human performance within a very short time, and they were not shy about declaring their exuberance. In the November 1970 issue of Life Magazine, AI researcher Marvin Minsky asserted that in "three to eight years we will have a machine with the general intelligence of an average human being."

Sadly, the wild claims of superhuman robots proved overzealous and the revolution did not happen. AI funding was cut, and most of the 1970s became known as the First AI Winter.

Elation over AI returned in the 1980s when major companies looked to so-called "Expert Systems" to enhance business operations. By the mid-1980s, investment in AI had increased from a trickle to many billions of dollars per year. But despite a lot of initial enthusiasm, Expert Systems were expensive and inefficient, and they were displaced by workstations and PCs. By the late 1980s, AI funding was again cut and researchers declared that the Second AI Winter had begun.

Having survived two AI winters, AI practitioners took a more sober approach from the early 1990s through 2012. Attendance at major AI research conferences dropped precipitously, and the advances that did come were no longer overhyped. Although IBM's Deep Blue computer defeated world chess champion Garry Kasparov in 1997, the IBM Research website cautions that the breakthrough was "not really AI" but instead massive computing power that was able to look many moves ahead.

By 2010, machine learning algorithms, most of which were originally developed in the 1950s, were able to shine as a result of exponentially faster computers and plenty of online data from the internet revolution. But the systems were still time- and knowledge-intensive to program, difficult to implement, and generally unable to do better than human experts.

Time to Pay Attention

This time really is different. Wild excitement about AI is back, and some are projecting AI to grow exponentially from here.
Enthusiasm is justified because recent advances are achieving