More IT managers than ever believe their networks are vulnerable to disaster, according to a survey published by Quantum Corp. Eleven per cent of respondents said they were “extremely vulnerable” from a disaster recovery standpoint, according to a story on Biztech2.com, up eight per cent from two years ago. OS failure was the second-highest vulnerability cited, after viruses.
On NoJitter, network engineer Terry Slattery suggests organizations that want a “five nines” network need to standardize their OS and hardware versions. “Some organizations can’t afford to perform hardware refreshes that often,” he admits. “Then consider dividing the network into zones and upgrading a zone at a time. A zone could be based on function, such as core and distribution, or it could be based on geography.”
No one wants to deal with a power outage, but Herve Tardy suggests IT departments can prepare for one by downloading open source management code from networkupstools.org, home of the Network UPS Tools project. “Data center managers can equip their infrastructure to shut down servers in the proper order when utility/server power becomes unavailable,” he writes on ITBusinessEdge.com.
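As a rough illustration of what that looks like in practice, here is a minimal sketch of a Network UPS Tools `upsmon.conf`, assuming a single USB-attached UPS; the names `myups` and `upsmon`/`mypass` are placeholders, not values from the article.

```
# /etc/nut/upsmon.conf -- hypothetical minimal example
# Watch one locally attached UPS; this host is the "master" that
# controls the shutdown when the battery runs low.
MONITOR myups@localhost 1 upsmon mypass master

# At least one power supply must be fed by a monitored UPS.
MINSUPPLIES 1

# Command upsmon runs when the UPS reports it is on battery and low.
SHUTDOWNCMD "/sbin/shutdown -h +0"
```

Ordering the shutdown across many servers is typically done by staggering which hosts monitor the UPS as “master” versus “slave,” so dependent machines go down before the one driving the UPS.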
More organizations are offering “bring your own device” (BYOD) programs, but in most cases that will require more money from IT departments. Among the 10 hidden network costs in BYOD highlighted by eWeek was the notion that snapshots don’t show the whole picture. “Historical data has bigger implications than you think, especially when it comes to compliance and security,” Chris Preimesberger writes. “It’s important to capture real-time data but also to log user behavior over time.”
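To make the “snapshot versus history” point concrete, here is a small hypothetical sketch in Python of logging BYOD usage events over time rather than only sampling current state; the event fields and function names are illustrative, not from any product mentioned above.

```python
import time
from collections import Counter

def log_event(log, user, device, action, ts=None):
    """Append one time-stamped usage event to an in-memory audit log."""
    log.append({"ts": ts if ts is not None else time.time(),
                "user": user, "device": device, "action": action})

def actions_per_user(log):
    """Summarize historical behavior: events generated by each user."""
    return Counter(e["user"] for e in log)

# A snapshot would only show who is connected right now; the log below
# preserves the sequence of actions for later compliance review.
log = []
log_event(log, "alice", "ipad-7f3", "vpn_connect", ts=1)
log_event(log, "alice", "ipad-7f3", "file_download", ts=2)
log_event(log, "bob", "pixel-9a1", "vpn_connect", ts=3)
print(actions_per_user(log))  # Counter({'alice': 2, 'bob': 1})
```

In a real deployment the list would be replaced with durable storage, but the shape of the data (who, what device, what action, when) is what makes after-the-fact security and compliance questions answerable.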
And finally, despite several years of talk, there is still some confusion over how “new” cloud computing is, which is why CloudTweaks offers an answer. “The main difference in cloud computing and traditional networking or hosting is the execution, and in one word that is ‘virtualization,’” writes Abdul Salam. “It is apparent that cloud computing gains the upper hand in this comparison especially when price is involved.”