
Building Management Has Come a Long Way from Its Humble Beginnings

The humble thermostat marked our society’s foray into environmental control and management. And my, what a difference 90 years makes.


Why Cloud Architecture Matters

Choosing an enterprise cloud platform is a lot like choosing between living in an apartment building or a single-family house. Apartment living can offer conveniences and cost-savings on a month-by-month basis. Your rent pays the landlord to handle all ongoing maintenance and renovation projects — everything from fixing a leaky faucet to installing a new central A/C system. But there are restrictions that prevent you from making customizations. And a fire that breaks out in a single apartment may threaten the safety of the entire building. You have more control and autonomy with a house. You have very similar choices to consider when evaluating cloud computing services.

The first public cloud computing services that went live in the late 1990s were built on a legacy construct called a multi-tenant architecture. Their database systems were originally designed for making airline reservations, tracking customer service requests, and running financial systems, and they featured centralized compute, storage, and networking that served all customers. As user numbers grew, the multi-tenant architecture made it easy for these services to accommodate that rapid growth.

All customers are forced to share the same software and infrastructure. That presents three major drawbacks:

- Data co-mingling: Your data sits in the same database as everyone else's, so you rely on software alone for separation and isolation. This has major implications under government, healthcare, and financial regulations. Further, a security breach at the cloud provider could expose your data along with that of everyone else co-mingled in the same multi-tenant environment.
- Excessive maintenance leads to excessive downtime: Multi-tenant architectures rely on large, complex databases that require regular hardware and software maintenance, resulting in availability issues for customers. Departmental applications used by a single group, such as the sales or marketing teams, can tolerate weekly downtime after normal business hours or on the weekend. But that is becoming unacceptable for users who need enterprise applications to be operational as close to 24/7/365 as possible.
- One customer's issue is everyone's issue: Any action that affects the multi-tenant database affects every customer who shares it. A software or hardware issue found on a multi-tenant database can cause an outage for all customers, and an upgrade of the database upgrades everyone at once. Your availability and upgrade schedule are tied to all of the other customers in your tenancy. Organizations do not want to tolerate this shared approach for applications that are critical to their success. They need software and hardware issues isolated and resolved quickly, and upgrades that meet their own schedules.

With its inherent data-isolation and availability issues, multi-tenancy is a legacy cloud computing architecture that cannot stand the test of time.

The multi-instance cloud architecture is not built on large, centralized database software and infrastructure. Instead, it allocates a unique database to each customer. This prevents data co-mingling, simplifies maintenance, and makes delivering upgrades and resolving issues much easier because it can all be done customer by customer. It also provides safeguards against hardware failures and other unexpected outages that a multi-tenant system cannot match.
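
To make the contrast concrete, here is a minimal sketch of the two data-access models. All names, schemas, and data are invented for illustration, and in-memory SQLite stands in for a real database server; this is not any vendor's implementation.

```python
# A minimal sketch of multi-tenant vs. multi-instance data access,
# using in-memory SQLite purely for illustration.
import sqlite3

def make_db(rows):
    # Helper: build a tiny in-memory orders table.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, tenant TEXT, total REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    return conn

# Multi-tenant: ONE shared database; isolation is only a WHERE clause.
shared = make_db([(1, "acme", 9.5), (2, "globex", 12.0)])

def orders_multi_tenant(tenant):
    # A bug that drops this WHERE clause would leak other tenants' data.
    return shared.execute(
        "SELECT id, total FROM orders WHERE tenant = ?", (tenant,)
    ).fetchall()

# Multi-instance: each customer gets its OWN database, so maintenance,
# upgrades, and outages can be handled one customer at a time.
instances = {
    "acme": make_db([(1, "acme", 9.5)]),
    "globex": make_db([(2, "globex", 12.0)]),
}

def orders_multi_instance(customer):
    # No tenant filter needed: the whole database belongs to one customer.
    return instances[customer].execute("SELECT id, total FROM orders").fetchall()

print(orders_multi_tenant("acme"))    # [(1, 9.5)]
print(orders_multi_instance("acme"))  # [(1, 9.5)]
```

The point of the second model is that an upgrade, outage, or maintenance window touches one customer's database at a time, which is exactly the isolation described above.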


The Key To Performance Isn’t CPU Or Memory

Applications are driving the enterprise, whether it is a relatively simple application used by millions of customers or a complex, scalable database that drives an organization's back end. These applications, and the users that count on them, expect rapid response times. In a world that demands “instant gratification,” forcing a customer, prospect, or employee to wait for a response is the kiss of death.

–George Crump, lead analyst, IT consulting firm Storage Switzerland, LLC
 

For most data centers, Crump suggests, “the number one cause of these ‘waits’ is the data storage infrastructure, and improving storage performance is a top priority for many CIOs.”

Sound familiar?

It may be challenging for executives who live well outside the IT glass house to think in milliseconds, or to recognize how much speed — which translates directly to application performance — matters. It’s tough enough to wrap our heads around the fractional advantages that accrue to Olympians like Usain Bolt and Michael Phelps, much less grasp the arcane benefits of sub-millisecond flash storage to everyday business applications.
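
Some rough, purely illustrative arithmetic shows why those milliseconds add up; the latencies and I/O counts below are assumptions, not benchmarks:

```python
# Purely illustrative arithmetic: how storage latency compounds.
# Latencies and I/O counts are assumptions, not measured benchmarks.
DISK_LATENCY_MS = 5.0    # assumed average spinning-disk access time
FLASH_LATENCY_MS = 0.2   # assumed average flash/SSD access time
IOS_PER_REQUEST = 50     # assumed storage operations behind one user request

for name, latency_ms in (("disk", DISK_LATENCY_MS), ("flash", FLASH_LATENCY_MS)):
    wait_ms = latency_ms * IOS_PER_REQUEST
    print(f"{name:5s}: {wait_ms:6.1f} ms of storage wait per user request")

# disk :  250.0 ms of storage wait per user request
# flash:   10.0 ms of storage wait per user request
```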


The CIO Challenge

I think it’s fair to say that the CIO’s job has to be one of the toughest in the world, but also one of the most rewarding.

For starters, CIOs are responsible for ensuring that each member of an organization has the resources and tools to be productive. To do so, CIOs must also provide adequate infrastructure so their organizations can extract relevant information and analysis from computer networks in real time.

As if all of this isn’t hard enough, CIOs must constantly stay updated on the latest emerging technologies that evolve at a dizzying pace. Otherwise, they may get blindsided by the next big IT trends such as virtualization, containers, or the emergence of DevOps as a practice. Finally, CIOs must pull off this technology magic under tight constraints due to budgets and limited human resources. It’s easy to see how these CIO challenges often make or break careers, and yet they keep drawing people back to this profession.

The Shifting IT Landscape Over Time

As we all know, mobility and cloud computing have made the biggest impact on traditional IT since mainframes gave way to client-server architectures in the 1980s. However, many CIOs and data center managers still struggle with how to navigate the blurry edges between the enterprise and public clouds.

In this fast-changing landscape, it’s worth recalling the pioneer days of the world wide web back in the early 1990s. I remember installing and launching this strange thing called a browser from Netscape, now known as Mozilla. It was a kind of revelation — you could type in online addresses, and content would suddenly appear from far-off places.


From People-Managed Infrastructure To Software-Managed Infrastructure

The commoditization of infrastructure is one of the most significant developments of the last couple of decades. The growth of web-scale companies like Google, Facebook, and Twitter, which collect, analyze, and extract information from enormous volumes of data, has driven this commoditization.

Looking back at the last couple of decades, enterprises have realized that the problems faced and solved by web-scale companies end up being enterprise problems after a short gestation time. Enterprises start seeing similar issues in the scaling, management, and analysis of infrastructure, processes, and data. At multiple layers of the application stack, enterprises adopt web-scale strategies to solve similar problems (Figure 1).

At the infrastructure layer, web-scale companies have focused on scale-out systems where compute, storage, and networking components blend into units of infrastructure that can be quickly replicated and grown without worrying about the complex organization of each of the components. Hyperconverged systems are an example of this trend.

At the data management layer, concepts like BigTable and MapReduce morphed from internal tools of web-scale companies into public concepts and then into open source software and ecosystems. Nearly every enterprise now has a big data project using ideas and tools similar to those practiced at web-scale companies.
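
For readers who have not worked with these tools, the MapReduce idea itself fits in a few lines. This is the classic word-count shape of the concept, not Hadoop's or any other framework's actual API:

```python
# Minimal MapReduce-shaped word count; a sketch of the concept only.
from collections import defaultdict

def map_phase(document):
    # Emit (key, value) pairs: one count per word occurrence.
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(pairs):
    # Group by key, then combine values; in a real cluster this
    # grouping (the "shuffle") happens across many machines.
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

docs = ["big data big ideas", "big clusters"]
pairs = (pair for doc in docs for pair in map_phase(doc))
print(reduce_phase(pairs))  # {'big': 3, 'data': 1, 'ideas': 1, 'clusters': 1}
```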

However, at the infrastructure management layer, the adoption of web-scale tools and processes has been the slowest. Interestingly, even at the web-scale companies themselves, infrastructure management tools have been the last to be released and discussed. MapReduce, BigTable, Spanner, Cassandra, DynamoDB, and the like were all discussed by Google, Facebook, and Amazon first. Google and Facebook also discussed how they build servers out of commodity hardware and promoted open compute. Infrastructure management tools such as Omega, Borg, Andromeda, and Tupperware were usually the last ones talked about publicly.


Dealing With The Everywhere Data Center

Where does your data center end?

It might seem like an odd question, as you can probably point out the physical building or buildings that house your data center(s). But does that physical installation line up with the logical concept of the “data center” held by your business?

Most businesses today have started to extend their IT footprint into the public cloud. Cloud service providers (CSPs) like AWS, Microsoft Azure, and the Google Cloud Platform have brought public cloud computing to the mainstream.

The benefits are considerable: shifting capital expenses to operating expenses (OPEX), accurately tying expenses to projects, immediate fulfillment, complete automation, and more. However, the public cloud also brings significant challenges to your overall IT strategy.

The biggest IT question facing today’s business is, “How can you secure and manage assets effectively in a hybrid environment?”


Hyper-Scale Data Center Eliminates IT Risk And Uncertainty

In June 2016, CyrusOne completed the Sterling II data center at its Northern Virginia campus. A custom facility featuring 220,000 sq ft of space and 30 MW of power, Sterling II was built from the ground up and completed in only six months, shattering all previous data center construction records.

The Sterling II facility represents a new standard in the building of enterprise-level data centers, and confirms that CyrusOne can use the streamlined engineering elements and methods used to build Sterling II to build customized, quality data centers anywhere in the continental United States, with a similarly rapid time to completion.

CyrusOne’s quick-delivery data center product provides a solution for cloud technology, social media, and enterprise companies that have trouble building or obtaining data center capacity fast enough to support their information technology (IT) infrastructure. In trying to keep pace with overwhelming business growth, these companies often find it hard to predict their future capacity needs. A delay in obtaining data center space can also delay or stop a company’s revenue-generating initiatives, and have significant negative impact on the bottom line.

The record completion time of the Sterling II facility was the result of numerous data center construction principles developed by CyrusOne. These include:

- Standardized data center design techniques that enable CyrusOne and its build partners to customize the facility to optimize space, power, and cooling according to customer needs
- Effective project management in all phases of design and construction, thanks to CyrusOne’s established partnerships with data center architects, engineers, and contractors
- Advanced supply-chain techniques that enable CyrusOne to manufacture or pre-fabricate data center components and equipment without disrupting work at the construction site
- The use of Massively Modular® electrical units and chillers to enable rapid deployment of power and cooling at the facility according to customers’ IT capacity needs

Introduction

In late December 2015, CyrusOne broke ground on the Sterling II data center, the second facility at its Northern Virginia campus. Built for specific customers, the Sterling II facility is a 220,000-sq-ft data center with 30 MW of critical power capacity. The facility was completed and commissioned in mid-June 2016. Its under-six-month construction time frame is the shortest known time to completion CyrusOne has ever achieved for an enterprise-scale data center of this size, and the 180-day build shattered all known industry construction records.


Summertime And Living In The Cloud Is Easy

Welcome to Cloud Strategy’s 2016 Summer issue! We really outdid ourselves this time.

To begin, Allan Leinwand of ServiceNow is here with an in-depth look at cloud architecture for our cover story. But there is more! Kiran Bondalapati from ZeroStack writes about the commoditization of infrastructure; Sumeet Sabharwal of NaviSite writes on the opportunities available to independent software vendors in the cloud; Mark Nunnikhoven of Trend Micro talks about the trend of the everywhere data center and the danger of dismissing the hybrid cloud; Alan Grantham of Forsythe writes about the cloud conversations companies should be having; Peter Matthews of CA Technologies, Anthony Shimmin of AIMES Grid Services, and Balazs Somoskoi of Lufthansa Systems share their tips for selecting the right cloud services provider; Adam Stern, founder and CEO of Infinitely Virtual, writes about the importance of cloud storage speed; Shea Long of TierPoint tackles the hot topic of DRaaS; and Steve Hebert, CEO of Nimbix, writes on the challenges CIOs face in balancing public, private, and hybrid clouds.

In addition, we have a case study from Masergy on its successful implementation of a high-speed network to support Big Data analytics.

Another great issue, if we say so ourselves.


Transitioning To An Agile IT Organization

If you have even a passing interest in software development, you’re likely familiar with the premise of agile methods and processes: keep the code simple, test often, and deliver functional components as soon as they’re ready. It’s more efficient to tackle projects using small changes, rapid iterations, and continuous validation, and to allow both solutions and requirements to evolve through collaboration between self-organizing, cross-functional teams. All in all, agile development carves a path to software creation with faster reaction times, fewer problems, and better resilience.

The agile model has been closely associated with startups that are able to eschew the traditional approach of “setting up walls” between groups and departments in favor of smaller, more focused teams. In a faster-paced and higher-risk environment, younger companies must reassess priorities more frequently than larger, more established ones; they must recalibrate in order to improve their odds of survival. It is for this reason that startups have also successfully managed to extend agile methods throughout the entire service lifecycle — e.g., DevOps — and streamline the process from development all the way through to operations.

Many enterprises have been able to carve out agile practices for the build portion of IT, or even adopt DevOps on a small scale. However, most larger companies have struggled to replicate agility through the entire lifecycle for continuous build, continuous deployment, and continuous delivery. Scaling agility across a bimodal IT organization presents some serious challenges, with significant implications for communication, culture, resources, and distributed teams — but without doing so, enterprises risk being outrun by smaller, nimbler companies.

If large enterprises were able to start from scratch, they would surely build their IT systems in an entirely different way — that’s how much the market has changed. Unfortunately, starting over isn’t an option when you have a business operating at a global, billion-dollar scale. There needs to be a solution that allows these big companies to adapt and transform into agile organizations.

So what’s the solution for these more mature businesses? Ideally, to create space within their infrastructure for software to be continuously built, tested, released, deployed, and delivered. The traditional structure of IT has been mired in ITIL dogma, siloed teams, poor communication, and ineffective collaboration. Enterprises can tackle these problems by constructing modern toolchains that shake things up and introduce the cultural changes needed to bring a DevOps mindset in house.
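
Stripped of the tooling brand names, what such a toolchain automates is a simple control flow; here is a sketch under invented assumptions (the stage commands are placeholders, not a real project's configuration):

```python
# Sketch of a continuous build/test/deploy pipeline's control flow.
# The stage commands are placeholders, not a real project's toolchain.
import subprocess
import sys

PIPELINE = [
    ("build",  ["make", "build"]),           # hypothetical build step
    ("test",   ["make", "test"]),            # fast feedback: fail early
    ("deploy", ["make", "deploy-staging"]),  # every green build is deployable
]

def run_pipeline():
    for stage, command in PIPELINE:
        if subprocess.run(command).returncode != 0:
            # Stop the line and fix it now, rather than batching
            # failures into one big, risky release.
            sys.exit(f"pipeline failed at stage: {stage}")
    print("pipeline green: release candidate ready for delivery")

if __name__ == "__main__":
    run_pipeline()
```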


The Critical Cloud Conversations You Are Not Having, But Should

Someone, somewhere at your organization is discussing cloud. In strategy meetings someone has said, “Let’s move it to the cloud.” At the watercooler or in the breakroom you’ve probably overheard snippets of conversation about microservices, DevOps, cloud strategy, and a host of other cloud-speak. The cloud conversation is everywhere because more companies are moving to the cloud. But they are running into trouble by not defining the expected business outcomes.


Don't Put the Cart Before the Horse

The first question you should ask at your company is “What are we trying to accomplish with the cloud?” Too many organizations start investigating cloud options before identifying desired business outcomes.

Without knowing the answer to this first question, it is difficult to define the ecosystem for your cloud. And the second question is “What is the ecosystem you need to enable in your cloud?”

The third question is, “How will you get there, and what is your strategy?” An all-too-often unasked part of that question is, “Do you have the skills, knowledge, and resources to take this on?”


Cloud App Monitoring from Riverbed Technology

Riverbed Technology has released enhancements to Riverbed SteelCentral that bring major advances in troubleshooting and improved monitoring across the cloud, while simultaneously improving ease of use and scalability. These enhancements continue a common theme of improved SteelCentral platform integration while enhancing several critical capabilities, including:

- Extending powerful monitoring capabilities into the cloud with Microsoft Azure and AWS Platform-as-a-Service (PaaS) and containerized environments
- Large-scale virtualized network performance monitoring
- Expanded unified communications (UC) monitoring with new support for Skype for Business
- Next-generation diagnostics and troubleshooting

Further, SteelCentral vastly improves the ability to monitor applications deployed in PaaS and containerized environments. As these environments dynamically scale during peak and off-peak periods, conventional performance monitoring tools that trace interactions between servers cannot coherently represent application behavior. This release introduces the Application Performance Graph, which visually maps interactions between application modules in real time, regardless of the underlying infrastructure. This reveals dependencies and hotspots obscured by the elasticity of the environment, so that IT can observe and fix the issues with the greatest business impact.
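
The underlying idea, aggregating observed calls by module rather than by ephemeral server, can be sketched roughly as follows; this illustrates the concept only and is not SteelCentral's actual mechanism:

```python
# Rough sketch: derive a module-level dependency map from call traces,
# keyed by module rather than by whichever instance served the call.
# Illustrative only; not SteelCentral's actual mechanism.
from collections import Counter

# Observed calls as (calling module, called module) pairs.
traces = [
    ("web", "cart"), ("web", "cart"), ("cart", "payments"),
    ("web", "search"), ("cart", "payments"),
]

edges = Counter(traces)
for (src, dst), calls in edges.most_common():
    print(f"{src} -> {dst}: {calls} calls")
# Hotspots stay visible even as containers scale up and down, because
# the aggregation key is the module, not the ephemeral server.
```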

www.riverbed.com


Hybrid Backup and Disaster Recovery from ioSafe

ioSafe’s BDR 515 is a unique fire- and waterproof hybrid backup and disaster recovery (BDR) appliance designed to eliminate downtime, protect data, and provide near-zero recovery time objectives (RTOs) and recovery point objectives (RPOs), even during an internet outage. The BDR 515 enables data to be protected onsite as well as securely replicated to the cloud.

Powered by Windows Server 2012 R2 and StorageCraft® ShadowProtect® SPX, the ioSafe BDR 515 has a flexible architecture that can be administered by partners and is available in capacities between 5 and 30TB. Other features include:

- 100% private cloud backup with a dedicated target BDR appliance located at ioSafe Cloud
- Ability to replicate, virtualize, and protect a primary server using the ioSafe BDR
- Versatile Windows/Intel-based hardware suitable for virtually any task
- Onsite protection from fire up to 1550°F for 30 minutes per ASTM E-119
- Onsite protection of data from floods up to 10-foot depth for three days, complete submersion
- Locking floor-mount and rack-mount kits for physical theft protection and onsite security
- Replicate, test, back up, and manage via the award-winning StorageCraft software suite
- Recover a file or folder, or restore a whole system fast, to the same or different hardware
- VMware®, Microsoft® Hyper-V® ready
- Simple month-to-month agreement with no vendor lock-in

Like other BDR solutions, the 515 can spin up either a local or a remote replicated version of a primary server in the ioSafe Cloud. The ioSafe BDR has a tremendous advantage in that, during an actual disaster scenario, it allows up to 30TB of data to be recovered onsite immediately after the disaster, giving a business the best chance of getting back to full strength quickly, regardless of bandwidth.

https://iosafe.com/


IoT Software Platform from Advantech

Advantech’s WISE-PaaS/RMM 3.1 is an open, standardized IoT software platform built on MQTT, a standard and popular IoT machine-to-machine (M2M) protocol for device and server communication.

WISE-PaaS/RMM 3.1 comes with more than 100 RESTful APIs covering account management, device management, device control, event management, system management, and database management. These RESTful APIs let customers create new web services and integrate WISE-PaaS functions and data with their own management tools. Furthermore, Advantech will release the WISE-Agent source code as open source. WISE-Agent software runs on the device side, helping customers develop their own applications. WISE-PaaS/RMM greatly enhances connectivity for hardware, software, devices, and sensors, and helps customers transform their business to include IoT cloud services.
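
For a flavor of what MQTT-based device-to-server reporting looks like in general, here is a generic example using the open source paho-mqtt client (1.x-style API); the broker address and topic are hypothetical and unrelated to WISE-Agent's own conventions:

```python
# Generic MQTT telemetry publish using the open source paho-mqtt
# client (1.x-style API). Broker host and topic are hypothetical;
# WISE-Agent's own topics and payloads differ.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("broker.example.com", 1883)  # 1883 is the standard MQTT port

reading = {"device": "sensor-42", "temp_c": 21.7}
client.publish("site1/devices/sensor-42/telemetry", json.dumps(reading))
client.disconnect()
```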

Based on WISE-PaaS/RMM 3.0 for remote device monitoring and management, version 3.1 offers centralized management and a dashboard builder for data visualization. Customers can develop dashboards to monitor and manage all their connected devices. WISE-PaaS/RMM 3.1 is also integrated with Node-Red, which is a “drag and drop” logic editor tool for users to access data and features in WISE-PaaS/RMM 3.1 for device flow and action control management.

To provide a stable and reliable centralized management platform, WISE-PaaS/RMM 3.1 includes server redundancy: devices connect directly to a secure backup server if the main server loses its connection, and the platform is designed to auto-sync data and services between the main server and the backup server. WISE-PaaS/RMM 3.1 also provides a hierarchical server option, which supports a main-server-and-sub-server structure. Users can use sub-servers for local device management and use the main server to collect data from the local servers, dispersing the load on the main server.
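
Client-side failover of this general shape can be sketched in a few lines; the addresses, port, and logic below are invented for illustration and are not Advantech's actual protocol:

```python
# Generic client-side failover sketch. Addresses and port are invented;
# this is not Advantech's actual redundancy protocol.
import socket

SERVERS = [("main.example.com", 5000), ("backup.example.com", 5000)]

def connect_with_failover(timeout=3.0):
    for host, port in SERVERS:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError:
            continue  # server unreachable: fall through to the next one
    raise ConnectionError("no management server reachable")
```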

www.advantech.com


A Primer On Cloud Management

By now, the benefits and simplicity of cloud computing are well understood, and the promise of cost savings, greater efficiency, and increased application agility has inspired companies of all sizes to kick-start their journey to the cloud. In fact, the cloud infrastructure and platform market is expected to grow by 19% annually from 2015 to 2018, reaching $43 billion.

Ultimately, the question these days is not “if” businesses will move to the cloud, but rather “when.” At the same time, as cloud functionality becomes more complex and IT professionals are increasingly relied upon to manage and deploy cloud services, many organizations struggle to manage their cloud deployment of choice — public vs. private — in a manner that produces the most efficiency and ROI.

To that end, it’s helpful to understand the benefits and challenges — and accompanying management best practices — of each approach to better inform an organization’s cloud integration strategy.

The Private Cloud 

A private cloud, hosted on an organization’s own architecture, provides a valuable benefit in today’s cloud landscape: control, primarily greater governance over the services catalog. Businesses that stand up their own private clouds can offer a self-service portal for end users in their organization that allows access only to IT-approved services, which in turn helps meet data compliance and security requirements.

There is also an element of cost control inherent to private cloud deployments. IT professionals familiar with the public cloud know that, all too often, an organization can be stuck contending with a cloud provider’s on-demand or spot pricing, making it difficult to accurately predict what a service will cost per month or per quarter. A private cloud, by comparison, lets administrators control that spend by monitoring chargeback and showback records to quantify how efficiently the infrastructure as a service (IaaS) is serving the needs of its end users.
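
Showback itself is straightforward arithmetic over usage records; here is a toy sketch, with invented records and an invented internal rate:

```python
# Toy showback calculation; usage records and the rate are invented.
from collections import defaultdict

RATE_PER_VM_HOUR = 0.12  # assumed internal rate, in dollars

usage = [
    {"team": "sales",     "vm_hours": 1200},
    {"team": "marketing", "vm_hours": 300},
    {"team": "sales",     "vm_hours": 450},
]

costs = defaultdict(float)
for record in usage:
    costs[record["team"]] += record["vm_hours"] * RATE_PER_VM_HOUR

for team, cost in sorted(costs.items()):
    print(f"{team}: ${cost:,.2f}")
# marketing: $36.00
# sales: $198.00
```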


Moving Beyond The Cloud

As we move from the data center and transition into the cloud, what comes next? That’s the question on many CIOs’ minds as they contemplate how to improve customer experience and deliver an environment of true digital enablement, where users can do their jobs in an always-on, anywhere, anytime world. What they run into at the edge of the cloud is a hazy area — The Fog — where the Internet of Things (IoT) exists. This fog is going to become increasingly critical to end users and CIOs alike as smart, connected devices, meant to bring data from every point imaginable, become part of the enterprise.

We define the digitally enabled enterprise as an organization that embraces technology and services to improve the customer experience (CX) it delivers to both internal and external customers and, in doing so, often changes the nature of the organization itself. The alignment of and investment in technology and business models is critical to engaging digital customers more effectively at every touchpoint in the CX lifecycle. No matter the type of business, where it’s located, or, in many cases, how large the organization is, the focus on customer experience is pervasive and all-consuming.

Achieving an improved level of customer experience — including a superior level of customer engagement and satisfaction — requires continually testing and deploying new service models and technologies. These include four key technologies and solution areas that comprise the next generation platform of IT: cloud, analytics, mobile, and social.

When we look across a broad spectrum of industries, we see the myriad ways that CXOs of every stripe are embracing the digital world and digital technologies to build the digitally enabled enterprise. And IoT is shaping up to become a part of that ecosystem and customer experience.

Let’s take a step back and look at what the IoT is shaping up to be. Some of the hype surrounding the IoT makes the hype around the cloud look small in comparison. It is often talked about as the next step on an internet evolutionary ladder. The IoT comprises those devices in the world that we use and take for granted every day — from building systems to cars and trucks to vending machines — all using sensors and internet connectivity to capture and exchange information.


Community Cloud: The Fourth Cloud Infrastructure Option

Cloud is the IT structure of the future due to its scalability, flexibility, and cost-efficiency. But does the option of a public cloud, private cloud, or even a mix of these environments work for every organization? The simple answer is no. Companies in highly regulated industries such as financial services, health care, education, and government are often stymied by governance, risk, and compliance concerns related to data stored and accessed in the cloud. But all hope is not lost — there is the fourth option of the community cloud.

A community cloud is the lesser-known version of cloud infrastructure, although it is relatively popular among particular industries. As defined by the National Institute of Standards and Technology (NIST), a community cloud is an “infrastructure [that] is provisioned for exclusive use by a specific community of consumers from organizations that have shared concerns. It can be owned, managed, and operated by one or more of the organizations in the community, a third party, or some combination of them, and it may exist on or off premises.” In essence, a community cloud is a subset of the public cloud tailored for a particular industry, and it may be offered as Infrastructure-as-a-Service (IaaS) or Platform-as-a-Service (PaaS). Since the community cloud is personalized, it is adapted to fit the exact performance, security, and compliance requirements of the industry or community it serves.

This form of cloud has been around for several years, as groups of companies or legal entities that share a similar risk profile have banded together to secure computing resources in a multi-tenant environment. Whether it is a group of hospitals or a consortium of financial services firms working together, the community can be comprised of organizations related to each other in some way, such as the same industry, parent company, holding company, or association.

Is There a Need for This Fourth Cloud Option?

Until now, the majority of cloud discussions have focused on private, public, or hybrid platforms. Very little has been written about the community cloud and its role within particular industries. Companies that must adhere to strict security and data governance policies that cannot be met in the public cloud should not be excluded from using cloud infrastructure. Rather, the community cloud answers their needs and can be found in these industries:

- Health care. Institutions are under strict guidance from the Health Insurance Portability and Accountability Act (HIPAA), which rules out public cloud use due to security concerns around personally identifiable information. Security breaches and data leaks are far too common in the news, raising concerns over public or hybrid cloud use. In a community cloud, by contrast, hospitals and health care agencies can more easily share patient information and research results.
- Government agencies. Sensitive information requires strict data governance and access. Several countries, especially in Europe, need information to reside within the country under specialized security requirements, including at-rest encryption, encryption keys maintained outside the cloud provider’s team, or physical destruction of hard drives. A single government agency could administer a community cloud for several offices, sub-agencies, and departments.
- Financial services firms. Trading firms and other companies in this industry rely heavily on analytics and high data volumes that require high-performance clustering and low latency not found in public cloud environments. Community clouds address these needs much better than other cloud infrastructures.
- Education. Colleges and universities require an environment that scales with continuous growth in student numbers; however, budgetary concerns around capital expenditures are always an issue. Community clouds help them leverage group buying power, reduce uncontrolled cloud purchases and shadow IT deployments, and re-use innovation throughout their organizations.

As with each cloud infrastructure platform, there are benefits and drawbacks to the community cloud. Some firms may worry that their competitors are using and sharing the same resources, and there may be a small likelihood of data leakage or loss of a technological competitive advantage. Additionally, internal IT departments may not want a community organization or third-party provider administering the cloud; the loss of control can be a roadblock. Still, the benefits outlined above, including reduced operating costs, efficient management, lessened risk, and strong security and compliance, far outweigh these drawbacks.


How To Make Mobile Part Of Your Digital Workplace Strategy

As everyday consumers themselves, today’s workforce has explicit expectations for mobility that mirror the demands of the customers they serve. In fact, more than 90% of IT decision makers (ITDMs) see enterprise mobility as the critical function for customer engagement, competitiveness, and operational productivity in 2016.1

Enterprise mobility plays a leading role in the digital workplace, which is transforming how IT services are delivered to end users. Employees want to access network resources from any device, at any time, and from any location. Not only is this good for employee morale, general satisfaction, and productivity, it also gives the business a competitive advantage by enabling it to respond to market changes and customer needs more quickly and efficiently.

One of the IT suite’s major challenges, then, is how to ensure the enterprise keeps up with the rapid-fire pace these demands present. It takes more than redefining business processes to take advantage of mobile, rolling out apps on the enterprise store, and providing employees with flexible options for choosing devices and apps. It requires ensuring that your mobile apps actually deliver a quality experience while your workforce uses them. After all, what drives the business benefit of a digital workplace is not whether the workforce simply has access to apps, content, and data. What counts is usage and performance as experienced by the workforce in the field.

Therefore, with these demands ever present, the dialogue surrounding how to leverage Mobile Application Performance Management (Mobile APM) is heating up. The current technology landscape holds more questions than answers about how enterprises can ensure that business-critical “apps that matter” deliver an excellent experience to mobile workforce users. While the IT suite’s ultimate goal may be to create a solid digital workplace strategy, one has to know the critical components that make up this mobile toolkit before a strategy can be methodically put together — and be alert to the forces holding back the execution of that strategy.

Mobile APM vendors have responded with products that give mobile app developers capabilities like crash analytics, app error reporting, service performance metrics, and data consumption tracking. As important as these capabilities are, they primarily solve problems for consumer-facing apps, because these vendors lack visibility into important aspects of the enterprise end user: their identity, role, and business function; the full range of apps and devices they use; and the business activities for which they are responsible.
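
Capabilities like crash analytics start from simple aggregation of device-side events; here is a toy sketch with invented event fields. Note that nothing in it knows the user's role or business function, which is exactly the enterprise gap described above:

```python
# Toy crash-analytics aggregation; event fields are invented.
from collections import Counter

crash_events = [
    {"app": "field-sales", "version": "2.1", "os": "iOS 9.3"},
    {"app": "field-sales", "version": "2.1", "os": "Android 6"},
    {"app": "field-sales", "version": "2.0", "os": "iOS 9.3"},
]

crashes_by_version = Counter(e["version"] for e in crash_events)
print(crashes_by_version.most_common())  # [('2.1', 2), ('2.0', 1)]
```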


Software-Defined Infrastructure from Intel

Intel® has released the Xeon® processor E5-2600 v4 product family, which delivers the foundation for modern, software-defined clouds. New Intel® SSDs, including Intel’s first 3D NAND drives optimized for cloud and enterprise workloads, deliver fast, dependable data access.

Collaborations with leading cloud software and solution providers, along with new industry programs, help accelerate businesses’ access to enterprise-ready, easy-to-deploy cloud solutions.

Intel Corporation today announced a range of new technologies, investments and industry collaborations aimed at making it easier to deploy agile and scalable clouds so businesses can deliver new services faster and drive revenue growth.

Businesses want flexibility and choice in cloud deployment models to support innovation while maintaining control of their most strategic assets. Despite a willingness to invest in modern software-defined infrastructure (SDI)1, businesses find the prospect of doing so themselves to be complex and time-consuming.

Intel is easing the path with new processors, solid state drives and a range of industry collaborations to help businesses deliver new services at the scale and speed previously found only in the most advanced public clouds.


Software-Defined Data Centers: Hype, Reality, And What's Next

The software-defined data center is either an overused buzzword for a sector filled with tire kickers, or the breakout trend of 2016, depending on which industry experts are quoted. Yet IT managers know there’s a profound transition taking place as control of data center gear shifts from hardware to software.

The latest data show they’re allocating budgets accordingly. According to an April 2016 survey, 66% of CIOs plan to expand their use of software-defined data center technologies this year.1 Spending on software-defined data centers is forecast to increase 14% in 2016, although present deployments represent just 21% of data centers surveyed in early 2016.2 For many enterprises, it’s not optional: Gartner estimated that by 2020, 75% of organizations will need to implement a software-defined data center3 in order to support the DevOps approach and hybrid clouds they need as part of agile digital business initiatives.

First Servers, Then Networks 

Thanks to virtualization technologies debuting over a decade ago, server and networking domains are already well on their way to software-defined control. By 2013, 51% of servers were virtualized,4 and today in 2016 server virtualization rates exceed 75% in many organizations.5 Before long, software-defined network virtualization from Cisco, Juniper Networks, Barracuda, and more also took hold, such that a Gartner report cited in a May 2016 trade publication forecasts that 10% of customer appliances will be virtualized by 2017, up from 1% this year.6

For the data center, virtualization meant that gear-filled glass houses where one could practically get a tan from the heat of servers, switches, and spinning disks didn’t need so much hardware. Less hardware meant less costly square footage, electricity for operation and cooling, and capital outlays that were depreciated across five years. While staff expertise required for operating the data center didn’t go away, it shifted to more valuable activities because their prior tasks became easier as management interfaces improved.

Next Up: Storage

The next logical evolution of the software-defined approach is data storage — traditionally a sector dominated by big iron and big price tags, and one that’s poised to deliver tremendous improvements. Research & Markets estimated the software-defined storage market at $1.4 billion in 2014,7 growing at about 34% annually through 2019 — though just a fraction of the overall $36 billion storage market that year.8


Cloud Suite from Red Hat

Red Hat, Inc. has announced the general availability of Red Hat Cloud Suite and Red Hat OpenStack Platform 8, helping to bridge the gap between development and operations teams at the scale of cloud computing. With today’s newly-available products, Red Hat now offers a complete, integrated hybrid cloud stack with a container application platform (OpenShift by Red Hat), massively scalable infrastructure (Red Hat OpenStack Platform 8) and unified management tools (Red Hat CloudForms), all available individually or via a single, easy-to-deploy solution with Red Hat Cloud Suite.

A growing number of organizations are building private clouds1 to gain massively scalable, modern infrastructure while maintaining increased security and control. According to the Red Hat Global Customer Tech Outlook 2016, a global survey of Red Hat customers, private cloud deployments are expected to outpace public cloud by 6x. In addition, development teams are looking to streamline the creation and deployment of new cloud-native applications, while IT leadership is hoping to meet growing business demands with cloud-based automation.

Red Hat’s latest cloud solutions help answer these needs through the availability of:

- Red Hat OpenStack Platform 8, the newest version of Red Hat’s leading OpenStack offering, which adds optimized storage and management capabilities through the native inclusion of Red Hat Ceph Storage and Red Hat CloudForms, respectively.
- Red Hat Cloud Suite, a pre-integrated set of Red Hat’s cloud technologies that, for the first time, allows for cloud-native application development and deployment with the inclusion of OpenShift by Red Hat, in addition to massively scalable infrastructure and unified management.

Red Hat’s OpenStack-based private cloud solutions provide an open foundation to meet production-level needs of today’s modern businesses, combining highly-scalable infrastructure and management with developer productivity, backed by a broad ecosystem of certified hardware and software providers. Available as a single platform with Red Hat OpenStack Platform or as an integrated offering through Red Hat Cloud Suite, Red Hat’s cloud solutions help customers build scalable, fault-tolerant, IaaS environments.

Red Hat OpenStack Platform 8

Forming the backbone of Red Hat’s hybrid cloud offerings is Red Hat OpenStack Platform 8, the latest version of the company’s highly scalable IaaS platform, based on the OpenStack community “Liberty” release. A co-engineered solution that integrates the proven foundation of Red Hat Enterprise Linux with Red Hat’s OpenStack technology to form a production-ready cloud platform, Red Hat OpenStack Platform is becoming a gold standard for large production OpenStack deployments. Hundreds of global production deployments and even more proofs-of-concept are underway, and in the information and telecommunications industry, a strong ecosystem of industry leaders is rallying around Red Hat OpenStack Platform for transformative network functions virtualization (NFV) and software-defined networking (SDN) deployments.
