Edge compute is a hot topic right now.
Here, I’ll focus on the lifecycle of the local edge: the on-premises compute and storage that’s now typically being placed where only networking infrastructure used to be.
The local edge is essential to business today. It’s a bridge to the cloud and provides the compute speed and capacity necessary to better facilitate digital processes and experiences.
These micro data centers create competitive differentiation and include physical and network security, power and cooling, and remote management.
Retailers, hospitals, and financial institutions, in particular, are beginning to upgrade their edge compute capabilities. Remote tellers are one use case in banking.
Branches are transforming ATMs into more robust kiosks with a video display that can connect customers to a remote teller after hours.
Distributed IT alone is not enough to support the digital age, so the edge is becoming business critical for delivering a high level of customer touch.
Organizations that upgrade and evolve to local edge will create new differentiators and get ahead of the competition.
Most of us use apps every day. Have you ever considered where the information for that app comes from?
My favorite airline app, for example, tells me in the morning whether my flight is on time, calculates how long it will take me to get to the airport, and lets me know the length of the lines at check-in and security.
All of this information and more is fed by IoT sensors and gathered by local edge applications around the country or the world, ensuring that I have an excellent flying experience before I even step on the aircraft. Conversely, since my expectations are set by the last best experience I’ve had, any hiccup creates an unpleasant flying experience.
Local edge is, therefore, a must to fill the gap as connectivity and processing demands grow and move farther away from the corporate data center.
Assessing and planning the local edge lifecycle should start with a feasibility study of ROI.
Determine the capex and opex costs, and document the return, whether it is purely monetary, a differentiating capability, or both. Then quantify a total value to win senior leadership approval to invest in an edge solution strategy.
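The total-value calculation above can be sketched in a few lines. This is a minimal illustration, not the article's method; the dollar figures, site count, and three-year horizon are all hypothetical assumptions.

```python
# Rough total-value sketch for an edge ROI feasibility study.
# All figures below are hypothetical placeholders.

def total_value(capex, annual_opex, annual_return, years):
    """Net value of an edge deployment over a planning horizon."""
    total_cost = capex + annual_opex * years
    total_return = annual_return * years
    return total_return - total_cost

# Example: 100 sites, $25k capex and $4k/yr opex per site,
# returning $15k/yr per site (monetary plus valued capability).
sites = 100
value = total_value(capex=25_000 * sites,
                    annual_opex=4_000 * sites,
                    annual_return=15_000 * sites,
                    years=3)
print(value)  # 800000
```

A positive result over the chosen horizon is the kind of quantified case that supports the leadership conversation; a capability-based return still has to be assigned a defensible dollar value first.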
Unlike a traditional data center assessment, local edge will usually involve multiple sites, so be sure to account for the different power characteristics and available spaces.
Knowing what you want to roll out, and where, is one of the first challenges unique to edge deployments, especially if the rollout spans hundreds or even thousands of sites.
Don’t forget to consider the varying costs of getting the work done in the separate locations. Local edge deployments will require local contractors.
Physical infrastructure design is the next phase of the lifecycle and where the equipment to support the IT stack comes in. How are you going to house it? Cabinets or racks? Two post, four post or enclosure?
Look at the power and cooling requirements. Converged IT and hyper-converged solutions can throw off a lot of heat, which may not be ideal in a former networking closet. Could the building cooling pick up the heat load? Or, does there need to be dedicated cooling?
In a traditional data center, standardizing on a white-space design is very common, but with a multi-site local edge, there’s more complexity. To help simplify, we’ve developed a local edge configuration tool for our partners.
The tool simulates placing the IT gear into a 2D image of the rack, then calculates heat load, power draw and the total weight going into this cabinet.
This immediate feedback simplifies the iterative design process: changes can be made on the fly, and templates can be saved and reapplied.
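The aggregation the configuration tool performs can be sketched as below. This is a simplified illustration, not the vendor tool itself; the device list, per-device figures, and rack limits are invented for the example. One useful rule of thumb it encodes: nearly all IT power draw is dissipated as heat, so heat load in watts is approximately equal to total power draw.

```python
# Simplified sketch of a rack-configuration calculation:
# aggregate heat load, power draw, and weight for the devices
# placed in a cabinet, then check them against rack limits.
# All device figures and limits are hypothetical.

RACK_LIMITS = {"u_space": 42, "power_w": 5000, "weight_kg": 900}

devices = [
    # (name, rack units, power draw in W, weight in kg)
    ("hyper-converged node", 2, 800, 28),
    ("hyper-converged node", 2, 800, 28),
    ("network switch",       1, 150,  6),
    ("UPS",                  2, 300, 55),
]

totals = {
    "u_space":   sum(d[1] for d in devices),
    "power_w":   sum(d[2] for d in devices),
    "weight_kg": sum(d[3] for d in devices),
}
# IT power draw converts almost entirely to heat, so the heat
# load the cooling must remove roughly equals the power draw.
totals["heat_w"] = totals["power_w"]

for key, limit in RACK_LIMITS.items():
    assert totals[key] <= limit, f"{key} exceeds rack limit"
print(totals)
```

Knowing the heat figure up front answers the earlier question about whether the building cooling can absorb the load or dedicated cooling is needed for that closet.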
When it comes to the build, populating the IT rack and putting in place the physical infrastructure (UPS, rack-mount PDUs, cooling, cable management, physical and local security), the components can be ordered through us, already integrated in shock packaging, or the parts can be ordered individually from your preferred partner.
First, it’s important to know who will be at the other end to implement the local edge solution.
Will the customer take it on? Or will the build be subbed out to an integrator or IT vendor? And so on.
Multiple locations create a conundrum at this step, too. The cost of a pre-integrated solution compares very favorably with contracting on site, at hundreds of locations, to rack, stack and configure the equipment, hook it up to the network, verify that the software and applications work correctly, and even dispose of all the cardboard and boxes everything arrives in.
Standardized, integrated solutions can significantly reduce the cost of logistics and implementation.
Piece-by-piece installation, by contrast, becomes an exercise in managing cost, quality and consistency many times over, and in having the right people on site so the solution can be installed, tested and validated to ensure it performs correctly.
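The trade-off described above is ultimately arithmetic: pre-integration typically costs more per unit of hardware but far less in on-site labor, and the labor delta compounds across sites. A minimal sketch, with every dollar figure and the site count being illustrative assumptions:

```python
# Hypothetical rollout-cost comparison: pre-integrated solution
# vs. piece-by-piece on-site integration. Figures are invented.

def rollout_cost(per_site_hardware, per_site_labor, sites):
    """Total cost of deploying one standardized site design."""
    return (per_site_hardware + per_site_labor) * sites

sites = 500
integrated = rollout_cost(per_site_hardware=22_000,
                          per_site_labor=1_500, sites=sites)
piecewise = rollout_cost(per_site_hardware=20_000,
                         per_site_labor=6_000, sites=sites)
print(piecewise - integrated)  # 1250000
```

With these assumed numbers the integrated approach saves $1.25M across the rollout, even though each integrated unit costs more; the crossover point depends entirely on local labor rates, which is why the per-location contractor costs flagged earlier matter.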
Article by Thomas Humphrey, Schneider Electric Data Center Blog