
AI-Driven Platforms Take the Data Center to the Edge


Data has long driven business, but heavy-duty analytics used to happen only in the cloud. Take the example of a manufacturing company: information from machines was aggregated and processed in the cloud, and next steps were planned in response. The edge was a mere data aggregator, routing data to the cloud to do all the heavy lifting.

But today, the data center’s move out to the network edge has come into its own, flexing its muscle thanks to increased computing power and the ubiquity of IoT implementations. Edge solutions enable near-real-time data processing and greater control over essential information, a boon for enterprises. In our manufacturing example, split-second decision-making at the edge can enable real-time predictive maintenance of assets.

Edge Computing Orchestration

Even better, moving to the edge does not mean saying goodbye to the orchestration and management capabilities the cloud offers. Managing and scaling compute in the cloud is a known quantity. Sure, you may have tens of thousands of servers running applications, but they are all in one central location, managed by one team of IT professionals.

“When you’re talking about the edge, you’re still talking about thousands of servers, but unlike the traditional data center, now they’re distributed across hundreds or thousands of physical locations,” points out Jeff Ready, CEO of Scale Computing, a provider of edge computing, virtualization, and hyperconverged solutions.

Maintaining IT staff at each edge location is impractical and expensive, a problem solved by orchestration management software. Scale Computing delivers a hassle-free, cloud-like experience at the edge. The Intel® NUC Enterprise Edge Compute Built with Scale Computing eliminates the need for distributed, on-premises IT personnel; a small, centralized team can be just as agile.

Edge Computing for Every IT Scenario

A small, centralized team is exactly what Jerry’s Foods, a Minneapolis-based grocer with 40 locations across the country, works with. The retailer has layered many applications, including point-of-sale software and video analytics, on the platform’s operating system.

Jerry’s AI-enabled edge solutions facilitate impressive personalization and revenue-boosting strategies, adjusting in-store ad delivery based on the contents of a shopper’s cart. This kind of real-time analytics requires compute that is reliable and always available, which is what Scale Computing ensures.

When one of Jerry’s locations was damaged and no longer accessible to the community, its IT team was able to reach the SC//Platform appliances and restore all applications and basic store functionality, standing up a temporary store in a tent in the parking lot. That gave the local community continued access to life-sustaining food and beverages. “This is a small team of IT folks managing locations around the country, and they were able to pull it off with the SC//Platform products they had in place,” Ready explains.

Scale Computing works with system integrators and reseller partners to reach enterprises looking for edge orchestration solutions. These partners can also work with Scale Computing to deliver additional services such as migration and disaster-recovery planning.


Self-Healing Technology for Turnkey Applications

The need for a self-healing platform became apparent when Ready and his co-founders saw the problems IT routinely faces: infrastructure works fine on day one but becomes progressively more difficult to troubleshoot as additional components get bolted on over time.

Ready and his team understood that error detection and mitigation needed to be baked into the foundational architecture. And for IT teams, especially those remote from the physical sites they manage, it helps to have self-healing technology that fixes problems automatically.

The HyperCore operating system is installed on the Intel® NUC to create a small-footprint edge computing orchestration and management solution. The OS provides active error detection and mitigation through a technology called the Autonomous Infrastructure Management Engine (AIME), an AIOps system that uses pattern recognition to look for signatures indicating something is broken.

When it locates a problem, SC//HyperCore looks through its Rolodex of known problems and associated solutions and, if it finds a match, executes the corresponding fix automatically. When the system detects a problem that does not exist in its vocabulary, it alerts IT so the issue can be resolved manually. Once the same problem has recurred a few times, its fix gets baked into the Scale Computing platform, which grows smarter over time.
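As a rough illustration of that detect-match-escalate pattern (not Scale Computing’s actual implementation; the event signatures, KNOWN_FIXES table, and alert_it_team function are all invented for this sketch):

```python
# Hypothetical sketch of a self-healing loop: match a detected problem
# against a catalog of known fixes, remediate automatically on a hit,
# and escalate to IT on a miss. Invented names throughout; this is not
# AIME's real API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    signature: str  # pattern extracted from logs or telemetry
    detail: str

# The "Rolodex": known problem signatures mapped to automated fixes
KNOWN_FIXES: dict[str, Callable[[Event], None]] = {
    "disk.smart.failing": lambda e: print(f"Migrating workloads off {e.detail}"),
    "service.heartbeat.lost": lambda e: print(f"Restarting service on {e.detail}"),
}

def alert_it_team(event: Event) -> None:
    # Unknown problem: hand off to a human operator
    print(f"ALERT for IT: {event.signature} at {event.detail}")

def handle(event: Event) -> None:
    fix = KNOWN_FIXES.get(event.signature)
    if fix:
        fix(event)            # known problem: fixes itself automatically
    else:
        alert_it_team(event)  # new problem: a candidate for a future baked-in fix

handle(Event("disk.smart.failing", "node-03"))  # auto-remediated
handle(Event("fan.speed.anomaly", "node-07"))   # escalated to IT
```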

SC//HyperCore at each site connects to Scale Computing Fleet Manager, which monitors the health of entire deployments. SC//Fleet Manager also facilitates zero-touch deployments on-site. This means that everything a location needs for edge computing, including vertical-specific apps, gets dispatched automatically from the central portal when the NUC is first plugged in.
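To make the zero-touch idea concrete, here is a hypothetical sketch of that first-boot handshake; the site IDs, profile format, and function names are invented and do not reflect SC//Fleet Manager’s actual interface:

```python
# Hypothetical zero-touch provisioning sketch: a newly plugged-in node
# identifies its site, and the central portal answers with every app and
# setting that location needs. All names and formats are invented.

# Central portal's per-site profiles, configured once via one portal
SITE_PROFILES = {
    "store-017": {
        "apps": ["point-of-sale", "video-analytics"],
        "settings": {"timezone": "America/Chicago"},
    },
}

def first_boot(node_id: str, site_id: str) -> None:
    profile = SITE_PROFILES[site_id]  # portal looks up what this site needs
    for app in profile["apps"]:
        print(f"{node_id}: deploying {app}")  # stand-in for real app rollout
    print(f"{node_id}: applying settings {profile['settings']}")

# Plugging in a NUC at a new store triggers the whole stack automatically
first_boot("nuc-a1b2", "store-017")
```

In a scheme like this, growing from 10 to 1,000 locations amounts to adding entries to the profile table, which mirrors the “copy and paste” experience Ready describes next.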

“The beauty here is that the selection and configuration of all applications can be done centrally and via one portal,” Ready says. “The solution is scalable, so when enterprises want to expand from 10 to 100 or 1,000 locations, it’s just copy and paste, it’s turnkey, and there’s no change in how you manage it. Automating the fixes and deployments is like having an extra IT person on-site to cover all locations.”

The Future of Distributed Computing

The need for that extra edge will only increase in the future.

Ready reminds us that IT periodically cycles between centralized and distributed computing. “This isn’t the first time we’ve been here in IT. We’ve gone from centralized computing originally in the mainframe era to distributed computing, client-server-type architectures. That evolved back to centralized data centers and the cloud; now we’re going back to distributed.”

“Edge computing effectively completes the cloud vision,” Ready says. “The cloud was never meant to imply a large data center in Seattle; it meant computing resources, ubiquitously available.” And ensuring that those resources are available when needed and to the extent needed without a heavy lift on IT? That’s where Scale Computing comes in. “It makes the edge behave like the cloud,” he says, allowing for always-available compute power at scale, managed seamlessly.

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

About the Author

Poornima Apte is a trained engineer turned technology writer. Her specialties run the gamut of technical topics, from engineering, AI, and IoT to automation, robotics, 5G, and cybersecurity. Poornima’s original reporting on Indian Americans moving to India in the wake of the country’s economic boom won her an award from the South Asian Journalists’ Association. Follow her on LinkedIn.
