Wednesday, May 27, 2015

Dell and VMware help customers future-proof their businesses. The newly updated Dell Engineered Solutions for VMware EVO:RAIL, Infrastructure Edition 1.2, includes greater scalability, automated serviceability and reduced licensing costs.

Dell has announced the general availability of Dell Engineered Solutions for VMware EVO:RAIL Horizon Edition, and updates to the Dell Engineered Solutions for VMware EVO:RAIL appliance. Both of these workload-optimised technology solutions provide customers with faster time-to-value, elasticity and greater ease of use.

With Dell Engineered Solutions for VMware EVO:RAIL Horizon Edition (HE), Dell is the first vendor to offer a hyper-converged end-to-end virtual desktop infrastructure (VDI) appliance for VMware EVO:RAIL. Using Dell servers and storage with VMware EVO:RAIL software, this new offering enables customers of all sizes to quickly and easily deploy and scale infrastructure for virtual desktops to end users based on business demand, and is optimised for consistent performance across all virtual desktops.

The updated Dell Engineered Solutions for VMware EVO:RAIL, infrastructure edition 1.2, introduces enhanced features for new and existing customers, including serviceability automation and increased scalability. Serviceability automation enables one button replacement of hard drives and network interface cards, allowing customers to spend less time on IT management tasks and more on strategic, forward-looking projects. Increased scalability allows customers to purchase the capacity they need today, with the ability to scale in the future as their business requires.
 
Dell Engineered Solutions for VMware EVO:RAIL are designed to power general-purpose virtual infrastructure workloads and virtual desktop infrastructure. The solutions offer customers:

· Reduced TCO: With a reduction in power consumption and operating costs, the Dell Engineered Solutions for VMware EVO:RAIL reduce total cost of ownership (TCO) by up to 63.9 percent over three years compared to a do-it-yourself solution.

· Greater ease of use: With a single-pane-of-glass management experience, Dell Engineered Solutions for VMware EVO:RAIL provide zero downtime, easier upgrades, patch management and out-of-the-box integrations with existing VMware management tooling. Updating Dell Engineered Solutions for VMware EVO:RAIL takes 88 percent fewer steps compared to a do-it-yourself solution.

· Rapid deployment, reduced risk: Deploying Dell Engineered Solutions for VMware EVO:RAIL is dramatically simplified compared with a do-it-yourself solution. EVO:RAIL reduces the steps required to stand up a new private cloud from 683 to 50. This not only speeds up the deployment process but also greatly reduces the possibility of human error, lowering the risk of delays and downtime.
 
Dell and VMware have a long-standing alliance and strong history of working together to offer efficient virtualization and cloud infrastructure solutions that are fast to deploy and easy to manage. The two companies are continuing to collaborate to provide customers with the flexibility to adapt their cloud and virtual infrastructures based on their needs today and in the future.
 
Dell Engineered Solutions for VMware EVO:RAIL Horizon Edition: easy to deploy and scalable VDI

Dell marks another significant milestone in making desktop virtualization easier than ever to plan, deploy and run with the general availability of Dell Engineered Solutions for VMware EVO:RAIL Horizon Edition. With Dell Wyse PCoIP zero clients and all-in-one thin clients for VMware, a full suite of management tools, as well as Dell software and services, Dell offers a truly holistic VMware-verified VDI solution based on EVO:RAIL HE.
 
Dell Engineered Solutions for VMware EVO:RAIL HE is a high-value EVO:RAIL appliance for customers deploying desktop virtualization today, and is designed to help organizations deploy applications faster, scale more easily, and better manage infrastructure and workload delivery. It is the first EVO-based, hyper-converged end-to-end VDI solution that reduces the traditional ordering cycle by one third, shrinking implementation from multiple weeks to hours and pilot time from months to weeks. Dell Engineered Solutions for VMware EVO:RAIL Horizon Edition eases complexity by reducing the number of steps to deploy by up to 92 percent compared to a do-it-yourself solution. Each appliance scales to approximately 250 virtual desktop VMs, and the current maximum of eight appliances allows for approximately 2,000 persistent virtual desktop VMs.
 
Dell Engineered Solutions for VMware EVO:RAIL 1.2 enhances scalability and serviceability

Today’s announcement of the Dell Engineered Solutions for VMware EVO:RAIL Infrastructure Edition 1.2 offers customers an even simpler path to a future-ready IT environment. As the first infrastructure update since the Dell EVO:RAIL solution was announced, it offers new and existing customers a simplified user experience with greater capacity for future growth.

With a focus on customers’ business needs, the update offers:

· Improved linear scale-out capabilities: The Dell Engineered Solutions for VMware EVO:RAIL allow customers to easily scale performance, bandwidth and capacity by adding appliances as their business needs increase, reducing the number of steps to bring on more servers from 329 to 17. The update increases the maximum scalability of a single cluster from four appliances to eight, with each appliance designed for approximately 100 general-purpose virtual machines (VMs). With each appliance containing four server nodes, the new maximum cluster size is increased from 16 to 32 nodes, supporting approximately 800 general-purpose VMs (see the sizing sketch after this list).

· Automated serviceability: VMware EVO:RAIL 1.2 reduces time-consuming management tasks by automating serviceability. In the event of a failed drive, a customer can now replace it with a simple click of a button, which automates the backend tasks needed to add the host back to the cluster. This frees up time spent on maintenance and allows more time to be spent on strategic projects.

· Reduced licensing costs: The adoption of the VMware EVO:RAIL vSphere Loyalty Program allows eligible customers to apply existing VMware licenses to the purchase of Dell Engineered Solutions for VMware EVO:RAIL, preserving their existing investment in VMware software while reducing the overall cost of the appliance purchase.
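
For readers who want to sanity-check the scale-out arithmetic quoted above, the sketch below (hypothetical Python, using only the appliance, node and VM figures stated in this announcement) reproduces the cluster maximums:

```python
# Minimal sketch of the scale-out arithmetic quoted in this announcement.
# Per-appliance figures come from the text; everything else is illustration.

NODES_PER_APPLIANCE = 4
GENERAL_PURPOSE_VMS_PER_APPLIANCE = 100   # Infrastructure Edition sizing
VDI_DESKTOPS_PER_APPLIANCE = 250          # Horizon Edition sizing
MAX_APPLIANCES = 8                        # raised from 4 in release 1.2

print(f"Max nodes per cluster: {MAX_APPLIANCES * NODES_PER_APPLIANCE}")                       # 32
print(f"Approx. general-purpose VMs: {MAX_APPLIANCES * GENERAL_PURPOSE_VMS_PER_APPLIANCE}")   # ~800
print(f"Approx. persistent VDI desktops: {MAX_APPLIANCES * VDI_DESKTOPS_PER_APPLIANCE}")      # ~2,000
```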

http://dcseurope.info/news_full.php?id=38254&title=Dell-and-VMware-help-customers-future-proof-their-businesses

Posted by
Data Center Solutions

Friday, May 15, 2015

Data Center Cooling System



One of the most important infrastructure prerequisites in an IT establishment is cooling. Cooling does not just mean air conditioning for offices and adequate ventilation; it also means data center cooling and computer room cooling. To ensure server longevity and organizational vitality, it is important that the organization has a well-designed arrangement of chillers and compressors. If you are establishing an IT organization, one of the most Herculean tasks is setting up a proper data center cooling system. Before you decide on the right cooling system, you must analyze the efficiency level of your facility and the rack power density of your equipment.

While managing the data center cooling system, there are three major issues you may encounter. 

Power Consumption
The IT equipment itself often consumes far less power than the cooling systems that support it; cooling is typically the biggest power consumer in the facility. Hence, it is important to choose the right solution so as to achieve an improved, optimal level of energy efficiency. To do this, first define the power consumption and power density needs of all the IT equipment in the organization, and then design a solution that suits that equipment. This approach gives ample room for flexibility, and following a systematic pattern leaves room for future scalability. Picking the correct solution not only greatly influences energy efficiency but also has an immense effect on the operational cost of the establishment.
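
As a rough illustration of the "define power consumption and power density first" step, the sketch below (hypothetical Python; the equipment inventory and nameplate wattages are invented examples, not figures from this article) estimates the IT load and rack power density from a simple inventory:

```python
# Minimal sketch: estimating rack power density from an equipment inventory.
# The device counts and wattages below are hypothetical examples.

inventory = [
    {"name": "1U server",        "count": 16, "watts": 350},
    {"name": "2U storage node",  "count": 4,  "watts": 600},
    {"name": "ToR switch",       "count": 2,  "watts": 150},
]

total_w = sum(item["count"] * item["watts"] for item in inventory)
racks = 1                                     # assume all of the above fits in one rack
density_kw_per_rack = total_w / 1000 / racks

print(f"Estimated IT load: {total_w} W")                       # 8300 W
print(f"Rack power density: {density_kw_per_rack:.1f} kW/rack")  # 8.3 kW/rack
```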
   
Room Temperature
One of the key components in managing room temperature is understanding data center cooling technology. This is very important, as you do not want a soaring electricity bill owing to excessive air conditioner and compressor consumption. To this end, you must understand concepts such as Data Center Infrastructure Efficiency (DCiE) assessment, Computational Fluid Dynamics (CFD) and Power Usage Effectiveness (PUE). You must also understand the air circulation and ventilation direction of the room. Together, these will help you set up an optimal cooling system.
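
For reference, PUE and DCiE are simple ratios of measured power draws: PUE is total facility power divided by IT equipment power, and DCiE is the inverse expressed as a percentage. The sketch below (hypothetical Python; the kilowatt figures are illustrative assumptions, not measurements from any particular facility) shows the calculation:

```python
# Minimal sketch: computing PUE and DCiE from measured power draws.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data Center Infrastructure Efficiency: inverse of PUE, as a percentage."""
    return it_equipment_kw / total_facility_kw * 100

if __name__ == "__main__":
    total_kw = 180.0   # assumed: IT load + cooling + lighting + distribution losses
    it_kw = 100.0      # assumed: servers, storage and network gear only
    print(f"PUE  = {pue(total_kw, it_kw):.2f}")     # 1.80
    print(f"DCiE = {dcie(total_kw, it_kw):.1f}%")   # 55.6%
```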

Environmental Condition
To achieve maximum efficiency, you must understand and control the airflow in your technical space. Cooling is also integral to achieving efficiency, sustaining the right environment, and managing hotspots. For this, you must treat server heat load, server heat rejection and cool air supply levels as inter-related, not as separate, isolated systems.
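
One way to treat heat load and cool air supply as inter-related quantities is the common sensible-heat rule of thumb for standard air, airflow (CFM) ≈ 3.16 × heat load (W) / ΔT (°F). The sketch below (hypothetical Python; the rack heat load and temperature rise are assumed values) applies it:

```python
# Minimal sketch: relating server heat load to the cool-air supply needed to
# remove it, using the sensible-heat rule of thumb for standard air:
#   airflow (CFM) ~= 3.16 * heat_load_W / delta_T_F

def required_airflow_cfm(heat_load_w: float, delta_t_f: float) -> float:
    """Approximate cool-air supply needed to carry away a given heat load."""
    return 3.16 * heat_load_w / delta_t_f

if __name__ == "__main__":
    rack_heat_w = 8300   # assumed rack heat load; electrical load is rejected as heat
    delta_t_f = 20       # assumed inlet-to-outlet temperature rise in degrees F
    print(f"{required_airflow_cfm(rack_heat_w, delta_t_f):.0f} CFM per rack")  # ~1311 CFM
```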

Rahi Systems

Friday, May 8, 2015

Managing Software Defined Data Center Through Single Pane of Glass


Just a few years ago, “Software-Defined Data Center” was thought by many to be the next buzz term in the IT industry. Since then, the “software-defined-everything” movement has taken off with the realization that there is value not just in the hardware or the product, but in the software itself. In fact, industry research firm Enterprise Management Associates named 2014 the year of the Software Defined Data Center (SDDC). How will the industry evolve this year? I predict organizations will seek new solutions to manage this type of data center, ultimately leading to increased operational efficiencies.

The Evolution of the Data Center

Let’s take a look back at how the adoption of software defined data centers began. Over the last several years, there has been a significant shift toward converged infrastructure. Traditional servers, storage, and networks were distinct products managed separately by multiple management platforms. We then saw the emergence of converged systems, such as Cisco’s Unified Computing System (UCS). However, these systems were still cabled together from multiple components and systems supplied by various product vendors.

The next evolution of the converged system was a move toward the software-defined model. In this model, the components (servers, storage, and networks) are not just a compilation of products from various vendors; they are managed as a single unified framework, enabling the organization to tap into compute and storage resources more gracefully. Organizations could now add more compute and storage power through software, rather than by adding systems. Ultimately, this provides increased performance, resiliency, and ease of management.

In the software defined data center model, the management of the converged data center is handled by software. As a result, organizations are benefiting from the optimization and pooling of resources to improve efficiencies, ensuring that servers are never over- or under-utilized, and taking advantage of the full expanse of the data center’s physical assets.

Organizations can also reap the benefits of the cloud while maintaining their legacy applications. They can more easily complete the migration to, and management of, hybrid cloud environments. This enables them to lower costs by reducing the infrastructure resources needed, providing scalable infrastructure, and enabling the efficient roll-out of software upgrades. In addition, Open Source technology prevents the organization from being locked into a particular vendor or protocol by providing a more malleable platform. Open Source also allows for access to multiple technologies, which can all be managed under one umbrella, resulting in increased productivity and decreased costs.

Managing the Software Defined Data Center
In order to take advantage of the software defined data center, it is necessary to take a holistic view of the various layers that make up the data center stack: the virtualization, software, middleware, database, and application layers, as well as the hardware and cloud environments in which everything is connected. All of these components should be operated and centrally managed on a common software-based management platform.

However, it is not easy to manage a system this complex while maintaining the integrity of every layer within the SDDC. For example, if a glitch occurs in one layer, it can impact other areas of the data center environment. It is important to understand the interdependencies between the various pieces of the data center in order to seamlessly manage the environment and solve any problems that may occur.

Working with a Managed Service Provider

Many organizations turn to infrastructure-as-a-service and software-as-a-service vendors to help manage their data center growth. However, these vendors are typically not equipped to deliver the services associated with managing the software environments they host. Managed service providers, by contrast, are not tied to a specific technology and can help organizations manage their data center environments, removing the burden of managing IT from the business. Managed service providers can also help the organization significantly accelerate the provisioning of virtual machines and related core services.

When choosing a managed service provider, it’s important to look for one that can manage both hybrid IT environments and the associated hardware, OS, applications, and network layers.

It can be difficult to find a single managed service provider that does not rely heavily on subcontractors. Organizations should choose one that offers a single point of contact and maintains continuity throughout the entire process, from planning to building to operating the data center. The provider should also ensure that a wrapper of security is built around the entire SDDC environment. In this context, security should be viewed as a vertical layer that spans across every horizontal layer of the data center stack.

A Single Pane of Glass

Organizations can benefit from working with a qualified managed service provider that offers a unified framework and a unified view into their data center operations. These offerings should be combined with proactive 24×7 monitoring of the complete environment, even across multiple data centers or geographies. This constant monitoring allows the managed service provider to remedy issues in real time and anticipate problems before they occur. It prevents bottlenecks and other IT holdups from impacting the end-user experience and helps keep the bottom line intact. When data centers are managed this way, organizations are able to move data much more quickly. For example, provisioning that would previously have taken days can now be completed in just hours or minutes.

By choosing a managed service provider that can manage the data center holistically across multiple locations and environments, organizations can take full advantage of the benefits this data center trend offers, including consolidation efforts and increased efficiencies that ultimately improve the bottom line.

Rahi Systems
Data Center Solutions

Reference
http://www.datacenterknowledge.com/archives/2015/04/13/managing-software-defined-data-center-single-pane-glass/