Plan Inferencing Locations to Accelerate Your GenAI Strategies

Dell on-premises solutions can be more cost-effective than the public cloud and APIs for Generative AI LLM inferencing with RAG.

Businesses expect Generative AI (GenAI) to improve productivity, reduce costs and accelerate innovation. However, implementing GenAI solutions is no trivial task. It requires a lot of data, computational resources and expertise.

One of the most critical stages of GenAI model operation is inferencing, in which outputs are generated from a trained model based on user requests. Inferencing can have significant implications for the performance, scalability, longevity and cost-effectiveness of GenAI solutions. Therefore, it’s important for businesses to consider how they can optimize their inferencing strategy and choose the best deployment option for their needs.

Leveraging RAG to Optimize LLMs

Large language models (LLMs), such as GPT-4, Llama 2 and Mistral, hold a lot of potential. They're used for applications ranging from chatbots to content creation to code generation. However, an LLM's accuracy depends on the data it was trained on.

Depending on the need for customization, some organizations may choose to implement pretrained LLMs, while others may build their own AI solutions from scratch. A third option is to pair an LLM with retrieval-augmented generation (RAG), a technique that improves the accuracy of LLM outputs by grounding them in facts from external data sources, such as corporate datasets.
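To make the pattern concrete, here is a minimal sketch of the RAG flow: retrieve the most relevant passages from a corporate dataset, then prepend them to the prompt before generation. The toy keyword retriever and the generate() stub are illustrative placeholders, not a specific Dell or vendor implementation.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# The retriever and generate() stub below are illustrative placeholders.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def generate(prompt: str) -> str:
    """Placeholder for a call to an LLM (e.g., a locally hosted model)."""
    return f"[LLM response grounded in prompt: {prompt[:80]}...]"

def rag_answer(query: str, documents: list[str]) -> str:
    """Augment the prompt with retrieved context before generating."""
    context = "\n".join(retrieve(query, documents))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)

if __name__ == "__main__":
    corpus = [
        "Our 2024 warranty covers hardware defects for three years.",
        "Employees accrue 20 vacation days per year.",
        "Support tickets are triaged within four business hours.",
    ]
    print(rag_answer("How long is the hardware warranty?", corpus))
```

In production, the keyword retriever would typically be replaced by a vector search over embedded documents, but the augment-then-generate flow stays the same.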

Considerations for Where to Run Inferencing

To help determine where to place an inferencing solution, consider important qualifiers such as the number of requests that will be sent to the model, the number of hours a day the model will be active and how usage will scale over time. Additional considerations include the quality and speed of output and the amount of proprietary data that will be used.
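A back-of-the-envelope model shows why request volume and active hours lead that qualifier list. Every figure in the sketch below is an illustrative assumption for discussion, not Dell, ESG or any vendor's actual pricing.

```python
# Back-of-the-envelope sizing sketch. All figures are illustrative
# assumptions, not actual Dell, ESG or cloud-provider pricing.

def monthly_api_cost(requests_per_day: int, tokens_per_request: int,
                     price_per_1k_tokens: float) -> float:
    """Estimate monthly spend on a pay-per-token API."""
    tokens_per_month = requests_per_day * tokens_per_request * 30
    return tokens_per_month / 1_000 * price_per_1k_tokens

# Hypothetical workload and rates.
REQUESTS_PER_DAY = 50_000    # user requests hitting the model
TOKENS_PER_REQUEST = 1_500   # prompt + retrieved context + response
API_PRICE_PER_1K = 0.03      # assumed token rate for a large model
ON_PREM_MONTHLY = 60_000.0   # assumed amortized hardware + ops cost

api = monthly_api_cost(REQUESTS_PER_DAY, TOKENS_PER_REQUEST, API_PRICE_PER_1K)
per_request = TOKENS_PER_REQUEST / 1_000 * API_PRICE_PER_1K
break_even = ON_PREM_MONTHLY / (per_request * 30)  # requests/day at parity

print(f"Token API:  ${api:,.0f}/month (scales with usage)")
print(f"On-prem:    ${ON_PREM_MONTHLY:,.0f}/month (flat)")
print(f"Break-even: ~{break_even:,.0f} requests/day")
```

Under these assumptions, token-based costs overtake the flat on-premises cost once daily request volume passes the break-even point; below it, pay-per-token pricing may be cheaper. That is why sustained, high-volume workloads tend to favor on-premises placement.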

Inferencing On-premises Can Save Costs and Accelerate Innovation

For GenAI solutions that pair LLMs with RAG, inferencing on-premises can be a better option than inferencing through the public cloud.

Inferencing LLMs with RAG in the public cloud can be expensive, as it can incur high data transfer, storage and compute fees. In a recent study commissioned by Dell Technologies, Enterprise Strategy Group (ESG) found that inferencing LLMs and RAG on-premises with Dell solutions can be 38% to 75%¹ more cost-effective than in the public cloud.

ESG also found that Dell solutions were up to 88%¹ more cost-effective than token-based APIs. As model size and the number of users increased, the cost advantage of inferencing on-premises with Dell grew.

LLMs paired with RAG can generate sensitive and confidential output that may contain personal or business information. Inferencing in the public cloud can be risky, as it can expose the data and outputs to other parties. Inferencing on-premises can be more secure, since data and outputs remain within a company’s network and firewall.

LLMs and RAG can benefit from continuous learning and improvement based on user feedback and domain knowledge. When inferencing runs on-premises, teams can iterate freely, without being bound by a cloud provider's update and deployment cycles.

Leverage a Broad Ecosystem to Accelerate Your GenAI Journey

At Dell, we empower you to bring AI to your data, no matter where it resides: on-premises, at the edge, in colocation facilities, or in private and public clouds. We simplify and accelerate your GenAI journey, creating better outcomes tailored to your needs, while safeguarding your proprietary data, with sustainability top of mind.

We offer a robust ecosystem of partners and Dell services to assist you, whether you're just starting out or scaling up your GenAI journey, and we provide comprehensive solutions that deliver flexibility now and into the future. In addition, with Dell APEX, organizations can subscribe to GenAI solutions and optimize them for multicloud use cases.

Learn more at Dell for Generative AI.

1 Based on Enterprise Strategy Group research commissioned by Dell, “Maximizing AI ROI: Inferencing On-premises With Dell Technologies Can Be 75% More Cost-effective Than Public Cloud,” April 2024, comparing on-premises Dell infrastructure with native public cloud infrastructure as a service and token-based APIs. Expected costs were modeled using RAG for small (5K users), medium (10K users) and large (50K users) deployments and two LLMs (7B and 70B parameters) over three years. Actual results may vary.

About the Author: Fuzz Hussain

Fuzz Hussain leads business value discussions for Dell Technologies AI and APEX portfolio marketing. With over 15 years of experience in the technology, manufacturing and healthcare industries, Fuzz has had the privilege of helping multiple organizations launch and accelerate customer-centric digital platforms. He holds a Master of Business Administration from Indiana University and a Bachelor of Science from Purdue University.