Moore’s law postulates that the number of transistors on a chip doubles roughly every two years. As such, it is a textbook example of exponential growth. We grasp this principle intuitively when we observe, for instance, the advances in the performance of smartphones and other devices within the past few years alone.
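The doubling can be sketched in a few lines of Python – a minimal illustration, where the starting count, time span, and billion-transistor figure are assumptions for the example, not numbers from this article:

```python
def transistors(initial: float, years: float, doubling_period: float = 2.0) -> float:
    """Project a transistor count forward, assuming it doubles every `doubling_period` years."""
    return initial * 2 ** (years / doubling_period)

# Illustrative assumption: a chip with 1 billion transistors today.
# Ten years is five doubling periods, so the count grows 2**5 = 32-fold.
print(transistors(1_000_000_000, 10))  # 32 billion
```

The takeaway is the shape of the curve, not the exact numbers: each fixed period multiplies the count, so growth compounds rather than adds.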
Moreover, this law covers only a small portion of overall technological development, which is growing just as exponentially: unbearably slow in its early days, it now progresses at lightning speed. In the past, paradigm shifts took millennia to unfold (as with stone tools and the wheel), whereas today they take only a few years (as with the Internet or, once again, the smartphone). We can safely assume that this acceleration will continue – and not just in IT, but also in other scientific fields, such as physics and biology. While we’re on this topic, it’s worth noting the parallel between technological growth and global population growth: both follow exponential curves.
AI will play a key role in this development, and here, too, experts such as Ray Kurzweil and Jay Wheeler expect exponential growth. That is why there isn’t just one ‘kind’ of AI: today we still distinguish ‘weak’ AI, which is all that exists so far, from the ‘strong’ AI expected several years from now. Some even speak of artificial superintelligence when describing the next technological revolution. Technological singularity will come at the end of this development: at that point, humans will no longer be able to follow said development, even approximately (the Harvard Science Review has contemplated the possibility of an artificial mega-brain with an IQ of 34,597 – a far cry from the average human IQ of 100). In the latter half of this century, there will likely no longer be any clear dividing line between artificial and human intelligence.
We haven’t reached that point yet, though – not by a long shot; strictly speaking, AI is still in its infancy. Laws governing the technology’s potential and its risks have yet to be put in place, related ethical issues remain unresolved, and standardization is insufficient – even though work is already under way to this end, for example in China. AI expertise is scarce on the job market, and millions of AI developers and companies worldwide are struggling with an IT infrastructure that falls far short of the requirements of AI development: network bandwidth is too low, storage is insufficient, and specialized AI solutions are equally rare. AI applications consume vast volumes of data and usually need to learn from them in real time; otherwise, one of AI’s great advantages is lost. The existing solutions tend to fall short of these demands.
Specialized hardware and software are required, as is an adapted, AI-oriented infrastructure – not only to process data quickly, but also to develop and implement new AI applications quickly. In the global competitive arena, time-to-market is essential for artificial intelligence. Eighty percent of the companies surveyed by market researcher ESG on behalf of Dell EMC expect their new AI and machine-learning developments to yield significant business advantages within two years.
Building an AI-specific infrastructure requires AI expertise as well as general IT expertise. IT departments need to work hand in hand with data scientists to select the right servers, graphics processors, storage solutions, and networks with sufficient, scalable bandwidth. Next come the construction and testing phases, plus time-consuming fine-tuning of the AI frameworks and libraries and of how the software components work together. Finally, the data scientists must validate and approve the entire system. Only then can they start developing the first models.
The cloud does not necessarily offer a faster route for this kind of project – on the contrary. Although many public cloud providers offer AI computing power and libraries, they supply neither reference configurations nor solution centers for customers, to say nothing of adequate consulting; in this respect, they leave data scientists out in the cold. Moreover, there are performance problems inherent in the concept, such as those caused by data transfer, which typically make an in-house solution preferable.
Dell EMC knows this, too. We recently introduced the Dell EMC Ready Solutions for AI, designed together with NVIDIA specifically for machine learning with Hadoop and for deep learning. These solutions make AI rollouts easier and faster and deliver comprehensive insights from data more quickly. Companies no longer have to procure their AI solutions as individual components, combine them, and spend precious time fine-tuning them. Instead, they can rely on a package of best-of-breed software that Dell EMC has designed, validated, and fully integrated – including AI frameworks and libraries as well as the required computing, network, and storage capacity.
Compared with a do-it-yourself approach, our solutions increase data scientists’ overall productivity by 30 percent and shorten the time to a productive AI application by up to 12 months. Moreover, new services from Dell EMC Consulting fully support companies with AI, from implementing and commissioning Ready Solutions technologies and AI libraries through to providing architecture recommendations and industry-specific consulting.
Even the most modest AI development will hardly get off the ground without the right infrastructure, and an ambitious AI roadmap that puts the technology to work across the entire company requires a highly specialized AI infrastructure. Companies can no longer afford to do without one if they wish to remain competitive over the long term.