Core, Cloud, and Edge Computing: Good Things Come in Threes

By Anne Brazao, Contributor

Enterprises everywhere are exploring how emerging technologies and hybrid computing will reshape the corporate data center as it enters the next data decade. Today’s infrastructure strategies embrace both the core corporate data center and private and public clouds; tomorrow’s question is how infrastructure must evolve to meet the anticipated growth in edge computing requirements in the coming years.

Compute functions have moved back and forth between centralized and distributed models for decades. The “cloud-first” approach that is still widely promoted has already begun to shift to a more distributed, edge-centric model. Why? Because data is now being generated at the network edge, where you find things like mobile phones, laptops, connected cars, and Internet of Things (IoT) devices.


Since the idea of cloud computing first appeared in the 1990s, pundits have pondered whether the traditional, on-premises data center would survive. And just a few years later, experts proclaimed cloud “king.” Today, there is more data and compute being pushed to the edge. Does this mean there’s no longer a place for core and cloud?

“The majority of enterprise data will be generated at the edge over the next decade, so organizations will need to evolve their infrastructure strategy from ‘cloud first’ to ‘data first’—that is, move infrastructure closer to the data, rather than vice versa,” says Faiz Parkar, global messaging director of IT Transformation at Dell Technologies.

It’s simply not feasible to move all that data to the cloud or another centralized location for storage, processing, and analysis. Clearly, there’s a right place for everything. Edge will not supplant cloud; it augments both cloud and core by addressing risk, data-transfer, and dependability concerns. None of these three infrastructure approaches (core, cloud, or edge) can, by itself, provide the complete solution for all use cases.

The Right Data. At the Right Place. At the Right Time.

How do enterprises decide on the right place for their workloads in this hybrid model? Managing workload placement in the coming years will present many challenges and necessitate rethinking the corporate data center. Enterprises must confront the conundrum of placing, securing, and accessing their data in this new hybrid data center that exists at the core, cloud, and edge—and then decide which workloads (databases, applications, containers, etc.) belong where.

The right workload placement decisions in a multi-cloud or hybrid cloud infrastructure can dramatically increase application performance and responsiveness; the wrong ones can just as easily degrade them.

For example, Seagate recently noted that a single autonomous vehicle can, under certain circumstances, generate up to 32 terabytes per day. This data is initially gathered and stored at the edge—but are both data and compute best left at the edge in this scenario? Or is it better to split the application across different elements in the hybrid infrastructure? As Parkar points out, in making the right decision, organizations must take several factors into account:

Latency tolerance: safety-critical use cases demand low latency, even millisecond response times

Network reliability: sufficient network bandwidth and connectivity may not be available when dependability of mission-critical/time-critical applications is a key consideration

Cost of data transfer: transferring massive amounts of noncritical data at high speed may not be practical or cost-effective—even when high bandwidth is available

Bandwidth: the volume of data produced at the edge creates bandwidth concerns—massive amounts of critical data cannot be transferred quickly enough and can disrupt communication channels because of sheer volume

Risk tolerance: the greater the distribution of data and compute, the larger the attack surface

Organizations need to decide: What is the latency tolerance of any given application? Do you have—and can you afford—enough bandwidth to transfer massive amounts of data? When do certain workloads belong at the edge or in the cloud, and not in your corporate data center? How should an increasing amount of data—generated at the edge—be processed? Distributing different aspects of an application across the core, cloud, and edge may be the answer.
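Taken together, these factors amount to a decision procedure. The sketch below encodes them as a simple ordered check. It is a minimal illustration, not a prescribed policy: the field names, the thresholds, and the three-way edge/core/cloud outcome are all assumptions for the sake of the example.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    """Characteristics that drive placement (names are illustrative)."""
    latency_tolerance_ms: int   # maximum acceptable response time
    daily_data_gb: float        # data volume generated per day
    network_reliable: bool      # is connectivity dependable enough?
    sensitive: bool             # does wider distribution raise risk?

def place_workload(w: Workload) -> str:
    """Return 'edge', 'core', or 'cloud' by applying the factors above in order."""
    # Latency tolerance: millisecond-scale response times rule out a round trip.
    if w.latency_tolerance_ms < 100:
        return "edge"
    # Network reliability: mission-critical work cannot depend on a flaky link.
    if not w.network_reliable:
        return "edge"
    # Cost of transfer and bandwidth: moving massive volumes to a central
    # site may not be practical or cost-effective.
    if w.daily_data_gb > 1000:
        return "edge"
    # Risk tolerance: keep sensitive data in the core data center.
    if w.sensitive:
        return "core"
    # Otherwise the elasticity of public cloud is a reasonable default.
    return "cloud"
```

A real placement decision weighs these factors jointly rather than in strict priority order, but the ordered check keeps the trade-offs visible.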

Solving the Workload Placement Puzzle Requires Core, Edge, and Cloud

Parkar sees infrastructure strategies evolving to support the distributed nature of modern, cloud-native applications, citing the example of a connected-car use case. Application splitting is one way to address this particular use case. “Compute, in this case, would need to be done at the network edge where the data is generated, because a sub-second response time is needed for things like automated braking to avoid an accident. For safety-critical applications like this, there’s no question of where to place both data and compute.

“However, the connected car also collects other valuable data that’s neither safety-critical nor latency-critical. That data can be aggregated and sent back to a centralized data center or cloud for storage, processing, and analysis that enables developers to then gain insight from that data.”
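The application splitting Parkar describes can be sketched in a few lines: the edge reacts to safety-critical events locally, while non-critical telemetry is buffered and summarized before being sent upstream. This is a minimal illustration, not a real telemetry stack; the field names (`obstacle_distance_m`, `speed_kmh`), the 5-meter braking threshold, and the helper functions are hypothetical.

```python
import statistics

def handle_reading(reading, buffer, brake):
    """Edge-side handler: act locally on safety-critical events and
    buffer everything else for a later, batched upload."""
    if reading["obstacle_distance_m"] < 5.0:  # threshold is illustrative
        brake()                               # sub-second, local response
    else:
        buffer.append(reading)                # non-critical: defer

def aggregate_for_cloud(buffer):
    """Summarize buffered telemetry before shipping it to the central
    data center or cloud, shrinking the transfer volume."""
    speeds = [r["speed_kmh"] for r in buffer]
    return {"samples": len(speeds), "mean_speed_kmh": statistics.mean(speeds)}
```

The design choice is the point: the latency-critical path never leaves the vehicle, while the aggregate sent upstream is a fraction of the raw data volume.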

The Evolving Role of IT

Given the growing predominance of the edge, and the fact that IT is typically associated with the core data center, one might assume that the role of IT has diminished. But in Parkar’s view, that’s not the case at all: He sees the role of IT as even more important in this distributed model.

Data policies are still defined centrally, even though they are enforced locally. Policies must be consistent across the entire infrastructure, so IT must establish them from a central location. But the central data center need not be responsible for enforcement. In fact, enterprise policies are enforced most efficiently at the source: wherever a given policy applies, be it at the core, at the edge, or in the cloud.
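The define-centrally, enforce-locally pattern can be illustrated with a small sketch. The policy names and location labels here are hypothetical examples, not a real policy catalog.

```python
# Central IT defines one consistent policy set; each location enforces only
# the rules that apply to it. Names below are purely illustrative.
POLICIES = [
    {"name": "encrypt-at-rest",      "applies_to": {"core", "cloud", "edge"}},
    {"name": "anonymize-telemetry",  "applies_to": {"edge"}},
    {"name": "geo-restrict-storage", "applies_to": {"cloud"}},
]

def policies_for(location: str) -> list:
    """Return the centrally defined rules this location must enforce locally."""
    return sorted(p["name"] for p in POLICIES if location in p["applies_to"])
```

Because every location derives its rules from the same central list, the policies stay consistent across core, cloud, and edge even though enforcement happens at each site.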

“With IoT and edge computing, it’s not as simple as just signing up to a public cloud,” Parkar says. “Infrastructure requirements, latency, performance, data sovereignty, compliance, policies, and security all mandate a carefully developed, hybrid architecture that supports the needs of the business. These are exactly the skill sets which IT has cultivated over the last couple of decades.”

Thriving in a Data-driven World

It is a data-driven world, and never will that be more apparent than in the coming decade. The years ahead will be characterized by a distributed approach to infrastructure and compute, in addition to increasingly distributed data.

As Parkar points out, “Realizing the full potential of cloud investments requires a comprehensive strategy from core to cloud to edge. And it’s important for organizations to stop thinking ‘cloud first’; they need to think ‘data first.’ Then the workload placement puzzle solves itself.”