Virtual reality, robotics, artificial intelligence and machine learning, big data, and the Internet of Things are the tools of digital transformation in any company. The ultimate goal of that transformation is a radical increase in business efficiency with the help of modern technologies. J'son & Partners Consulting analyzed the development of artificial intelligence and its prospects across all areas of economic activity. The study examined the technological components of artificial intelligence, its use cases, and its transformational potential. When implementing AI projects or related initiatives, J'son & Partners Consulting recommends taking foreign experience into account and focusing business plans primarily on piloting the technology in order to accumulate expertise. In the near future, advances in machine learning will bring a noticeable increase in labor productivity and performance indicators, while at the same time the technology may cause entire lines of business to disappear. For this reason, J'son & Partners Consulting recommends creating a strategy that accounts for the impact of AI on current operations, and preparing initiatives that use AI as a new point of growth and business transformation.

Definitions and Approaches in the Study of Artificial Intelligence (AI)

Electricity and the Internet radically changed human life in the 20th century. In the 21st century, artificial intelligence (AI) could bring about a revolution on the same scale. AI is transforming how humans perceive machines and how they interact with them. Machines, performing an ever wider range of tasks, will be able to cope with some types of work better than people. AI will lead to deeper relationships with consumers, better personnel management, the optimization of all processes, the transformation of products into services, and even a change in the business models of many companies. AI has a history spanning more than half a century. The current revival of interest is considered the third in a row, but it rests on a completely different foundation.

In the past, AI research was hampered by a lack of computing power. Today's infrastructure and ecosystem have allowed artificial intelligence to start "thinking". Memory and processing capacity, cloud computing, high-speed fiber optics, and the ubiquity of Wi-Fi and the Internet of Things together create ideal conditions for the development of AI. Twenty years ago, only large companies worked on AI; now every developer has access to fast connections, powerful devices, and the technological infrastructure created by large corporations. Never before has there been such wide access to colossal arrays of data about people, much of it in the public domain. With all these developments, almost anyone can get into AI research. Yet despite the long history of the field, there is still no single definition and understanding of artificial intelligence.

In the early 1980s, computer scientists Barr and Feigenbaum proposed the following definition: artificial intelligence is a field of computer science concerned with the development of intelligent computer systems, that is, systems that have the capabilities we traditionally associate with the human mind: language understanding, learning, the ability to reason, solve problems, and so on. Today, AI covers a range of algorithms and software systems whose distinctive feature is that they can solve certain problems the way a person thinking about their solution would. The main properties of AI are language understanding, learning, the ability to reason and, importantly, to act. In connection with the evolution of the concept, it is also necessary to mention the so-called AI effect. The AI effect occurs when observers discount a demonstration of AI capability as soon as it actually achieves a previously unthinkable result.

Thus, the author Pamela McCorduck writes that part of the history of the field is that every time someone figures out how to make a computer do something well, whether playing checkers or solving simple but relatively informal problems, a chorus of critics declares that this is not proof of thinking and not AI. Computer scientist Larry Tesler captured the effect even more succinctly in what became known as Tesler's Theorem: "AI is whatever hasn't been done yet." Since the end of the 1940s, research into modeling the thinking process has been divided into two independent approaches: neurocybernetic and logical. The neurocybernetic approach is of the bottom-up type (Bottom-Up AI) and studies the biological side of the problem through neural networks and evolutionary computing. The logical approach is of the top-down type (Top-Down AI) and involves the development of expert systems, knowledge bases, and inference systems that imitate high-level mental processes: thinking, reasoning, speech, emotions, creativity, and so on.

AI Technology

In the past few years we have seen an explosion of interest in neural networks, which are successfully applied in various fields: business, medicine, engineering, geology, physics. Neural networks have entered practice wherever problems of forecasting, classification, or control need to be solved. This impressive success has several causes. Neural networks are intuitively attractive because they are based on a primitive biological model of the nervous system. As already noted, neural networks emerged from research in artificial intelligence, namely from attempts to reproduce the ability of biological nervous systems to learn and correct errors by modeling the low-level structure of the brain. In a biological neural network, the intensity of the signal a neuron receives (and, consequently, whether it fires) depends heavily on the activity of its synapses. In Pavlov's classic experiment, for example, a bell rang every time the dog was fed, and the dog quickly learned to associate the ringing with food.

The synaptic connections between the areas of the cerebral cortex responsible for hearing and those controlling the salivary glands strengthened, so that when the cortex was excited by the sound of the bell, the dog began to salivate. Thus, built from a very large number of very simple elements, each of which takes a weighted sum of its input signals and, if the total exceeds a certain level, passes on a binary signal, the brain is able to solve extremely complex problems. Machine learning is, at its core, the application of statistics to find patterns in data and to make predictions based on them. It uses algorithms that allow a computer to draw conclusions from the data available to it. Instead of writing a program by hand as a set of instructions for a specific task, the machine is trained on large amounts of data with algorithms that let it learn to perform the task on its own or with the help of a so-called "teacher" (examples, or training data). Until recently, AI scientists avoided neural networks, although they had long been known: even the most basic neural networks required very powerful computation.
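
To make the mechanism concrete, here is a minimal sketch, not taken from the study, of such a simple element and of learning "with a teacher": a neuron that fires when the weighted sum of its inputs exceeds a fixed threshold, trained on labeled examples with the classic perceptron rule. The toy AND task, the threshold, and the learning rate are all illustrative assumptions.

```python
# Minimal artificial neuron: weighted sum of inputs -> binary output.
def neuron(inputs, weights, threshold):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

# Toy supervised task (the "teacher" is the labeled examples): learn logical AND.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
threshold = 0.5        # kept fixed for simplicity
learning_rate = 0.1

for epoch in range(20):
    for inputs, target in examples:
        error = target - neuron(inputs, weights, threshold)
        # Perceptron rule: nudge each weight in the direction that reduces the error.
        weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]

print(weights)                                                # ~[0.3, 0.3]
print([neuron(x, weights, threshold) for x, _ in examples])   # [0, 0, 0, 1]
```

Even this toy example shows the point made above: the rule for AND is never written into the program; the weights are inferred from the examples.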

However, in the mid-2000s it became practical, given the computing resources available, to demonstrate the principles of multilayer "deep learning". The term itself gained popularity after a publication by Geoffrey Hinton and Ruslan Salakhutdinov, in which they showed that a multilayer neural network can be effectively pretrained if each layer is trained separately, with the whole network then fine-tuned using the backpropagation method. The breakthrough came when it became possible to make neural networks gigantic by increasing the number of layers and neurons, which allowed enormous amounts of training data to be passed through them; it is this depth that gave the approach its name. Today, deep learning systems such as deep neural networks, convolutional neural networks, deep belief networks, and recurrent neural networks underlie the services of many tech giants.
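
As an illustration of that layer-by-layer idea, here is a minimal sketch, assuming PyTorch and random toy data rather than anything from the study: each layer is first trained separately as an autoencoder to reconstruct its input, and the pretrained layers are then stacked and fine-tuned end to end with backpropagation. All layer sizes, learning rates, and iteration counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 64)            # toy unlabeled data
y = torch.randint(0, 2, (256,))     # toy labels for the fine-tuning stage

sizes = [64, 32, 16]                # widths of the encoder layers (illustrative)
layers, inputs = [], X
for d_in, d_out in zip(sizes, sizes[1:]):
    enc, dec = nn.Linear(d_in, d_out), nn.Linear(d_out, d_in)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)
    for _ in range(100):            # pretrain this layer alone to reconstruct its input
        recon = dec(torch.relu(enc(inputs)))
        loss = nn.functional.mse_loss(recon, inputs)
        opt.zero_grad(); loss.backward(); opt.step()
    layers.append(enc)
    inputs = torch.relu(enc(inputs)).detach()   # next layer trains on this layer's codes

# Stack the pretrained encoders, add a classifier head, fine-tune with backpropagation.
stack = []
for enc in layers:
    stack += [enc, nn.ReLU()]
model = nn.Sequential(*stack, nn.Linear(sizes[-1], 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    loss = nn.functional.cross_entropy(model(X), y)
    opt.zero_grad(); loss.backward(); opt.step()
```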

Artificial Intelligence Market Ecosystem

The AI market consists of many companies and institutions, each performing its own specific tasks and functions. Although the ecosystem of this market as a whole is still taking shape, it is already possible to see what form it will take in the near future.

One way to categorize players in the market was proposed by SafeGraph CEO Auren Hoffman, who divides machine learning and AI companies into three types, each with characteristics that are important for understanding the emerging AI market ecosystem. The division strongly resembles the structure of other, more classical markets. The first type is the Superrich: companies that work on AI technologies and own their own data, such as Google, Facebook, Baidu, Tencent, Amazon, and Microsoft. There are few such companies in the world, but they have a significant advantage: with access to huge reservoirs of cleaned and structured data, their engineers can develop AI technologies on top of existing resources and refine their algorithms and approaches. The second type is the Servicers, or service companies.

They help other companies process large amounts of data, including unstructured data, and extract the necessary insights. These are service companies because they do not own data of their own but work with the data of their clients. One successful example is Palantir Technologies, whose solutions are in high demand among US government agencies, helping them make sense of their data at minimal cost. Other examples are IBM, HP, and Oracle, as well as various consulting firms and companies whose solutions help large companies improve particular aspects of their business: pricing, logistics, customer service. The third type is the Innovators.

They are focused on solving a specific problem, but they neither own data nor provide services to other companies. Examples include Two Sigma Investments and Point72 Asset Management, which spend millions of dollars on data because they do not generate it themselves. Other examples are Cruise Automation, a developer of self-driving car technology recently acquired by GM, and Flatiron Health, which works on cancer research. After acquiring data, such companies still have to clean and combine it, that is, carry out preliminary ETL (extract, transform, load) procedures before they can work with it. The Superrich have powerful advantages over the others. However, as access to data becomes more democratized, companies in the other two groups can be expected to develop at a rapid pace nonetheless. Examples of this democratization include Yahoo, which released 13.5 TB of data on how users behaved on the Yahoo home page and on the pages of the company's individual services, and Criteo, a developer of advertising technology solutions, which published 1 TB of data.

According to IDC experts, companies such as Amazon, Alphabet, IBM, and Microsoft will own 60% of AI platforms; these companies also dominate the cloud computing business today. At the same time, each of them is building up its own ecosystem. IBM, for example, has been involved in AI development for a very long time, but the victory of its hardware and software system IBM Watson on the quiz show Jeopardy! in 2011 became the symbolic start of its ecosystem's growth. The IBM Watson ecosystem now includes tens of thousands of developers, entrepreneurs, and other enthusiasts who have created thousands of applications using the Watson Zone on Bluemix, IBM's PaaS (Platform as a Service) offering. Bluemix lets anyone use some 100 tools, including Watson services, to efficiently create, run, and manage applications in any cloud environment.

Venture Capital and Startups in AI

Artificial intelligence is becoming a reality, and it is apparently startups that will play the leading role in this ecosystem. For example, the newly formed company ROSS Intelligence has developed an AI-based "lawyer" that can do the work of an entire office of professional attorneys. Built on the IBM Watson supercomputer, the system has every chance of becoming a full-fledged tool in legal practice: ROSS automates tasks and processes that used to take days or weeks of work. Another startup, the developer of the Slack business messenger, is working on an intelligent assistant that will automatically answer routine questions and thereby save employees' time. Prisma is an application that transfers the styles of famous artists onto photos using neural networks.

At first glance, the program looks no different from competitors' solutions that turn pictures into "masterpieces of art" by applying filters. However, thanks to the use of neural networks, its results are of higher quality: the photo is not merely run through a filter but is actually redrawn in the given style. The development team also achieved the highest processing speed among competitors, which include Dreamscope, the deepart.io web service, and Mlvch.
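
For intuition about that distinction, here is a minimal sketch of the Gatys-style technique such apps build on, assuming PyTorch and torchvision and using placeholder file names; it is a generic illustration, not Prisma's actual implementation. Rather than filtering the photo, a new image is optimized so that its content features match the photo while its style statistics (Gram matrices of feature maps) match the artwork.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)          # only the image is optimized, not the network

def load(path, size=256):            # "photo.jpg" / "painting.jpg" are placeholders
    img = TF.resize(Image.open(path).convert("RGB"), [size, size])
    return TF.to_tensor(img).unsqueeze(0).to(device)

def features(x, picks=(1, 6, 11, 20, 29)):   # a few VGG-19 ReLU outputs
    out = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in picks:
            out.append(x)
    return out

def gram(f):                         # style = correlations between feature channels
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

content, style = load("photo.jpg"), load("painting.jpg")
target = content.clone().requires_grad_(True)   # start the "redrawing" from the photo

content_feats = [f.detach() for f in features(content)]
style_grams = [gram(f).detach() for f in features(style)]

# ImageNet normalization is omitted for brevity; results improve with it.
opt = torch.optim.Adam([target], lr=0.02)
for step in range(300):
    t = features(target)
    content_loss = F.mse_loss(t[2], content_feats[2])
    style_loss = sum(F.mse_loss(gram(a), g) for a, g in zip(t, style_grams))
    loss = content_loss + 1e4 * style_loss
    opt.zero_grad(); loss.backward(); opt.step()
```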

Large companies are actively acquiring talented projects. For example, Microsoft bought SwiftKey, a mobile keyboard maker that uses machine learning to better predict the words and phrases a user will type; Magic Pony Technology, with its neural network-based image modeling technology, was bought by Twitter for $150 million; and microprocessor developer ARM paid $350 million for Apical, a maker of machine learning solutions for computer vision.

In the second quarter of 2016, investments in artificial intelligence reached record levels, with most deals (60%) taking place at the early stages of startup growth. These results were achieved in part through several large transactions: $154 million invested in iCarbonX, a Chinese startup specializing in medical research; $100 million in the American company Fractal Analytics; and another $100 million in the cybersecurity company Cylance. About 70% of the quarter's deals were recorded in the United States. Nearly 60% of transactions were at the earliest stage of startup financing (seed or Series A), while Series B and C rounds accounted for only 12%.

Between 2011 and 2016, a total of 140 private companies working on AI technologies were acquired, 40 of them in 2016. Both smaller companies and previously inactive players are joining the race. For example, Samsung entered the M&A market in October 2016 by acquiring Viv Labs, a startup developing a Siri-like AI assistant, and GE closed two deals in November 2016.