Understanding the Impacts of Mixed-Version vCenter Server Deployments

There are a few questions that come up often regarding vCenter Server upgrades and mixed-version deployments that we would like to address. In this blog post we will discuss and attempt to clarify the guidance in the vSphere Documentation for Upgrade or Migration Order and Mixed-Version Transitional Behavior for Multiple vCenter Server Instance Deployments. That documentation breaks down what happens during the vCenter Server upgrade process and describes the impacts of having components (vCenter Server and Platform Services Controller, or PSC) running at different versions during the upgrade window. For example, once some vCenter Server instances have been upgraded, say to 6.5 Update 1, you won't be able to manage those upgraded instances from any 5.5 instances. While most of the functionality limitations manifest when upgrading from 5.5 to 6.x, there can also be quirks in environments running a mix of 6.0 and 6.5. A couple of additional questions tend to arise from this documentation, so let's see if we can address them.

The Upgrade Process

I’m not going to go through the entire process here, but it is important to understand the basics of how a vCenter Server upgrade works. Remember that there are two components to vCenter Server: the Platform Services Controller (PSC), which runs the vSphere Single Sign-On (SSO) Domain, and vCenter Server itself. For a vCenter Server upgrade, the vSphere Domain, and therefore all PSCs within it, must be upgraded first. Once that is complete, the vCenter Servers can be upgraded. Obviously, if you have a standalone vCenter Server with an embedded PSC, this is a much simpler proposition. But for those requiring external PSCs because of other requirements such as Enhanced Linked Mode, just remember that the PSCs need to be upgraded first.

Mixed-Version Upgrade Phases

The other important point to make here is that upgrading by site is not supported. Consider an example deployment with two sites, each with an external PSC and a vCenter Server. It is common for a customer to want to upgrade an entire site, test, and then move on to the next site. Unfortunately, this is not supported; all PSCs within the vSphere Domain, across all sites, must be upgraded first.

Mixed-Version Support

Now, on to the questions mentioned earlier. The first question is, “Can I run vCenter Servers and Platform Services Controllers (PSCs) of different versions in my vSphere Domain?” The answer is yes, but only for the duration of an upgrade. VMware does not support running different versions of these components under normal operations within a vSphere Domain. The exact verbiage from the documentation is, “Mixed-version environments are not supported for production. Use these environments only during the period when an environment is in transition between vCenter Server versions.” So, do not plan on running different versions of vCenter Server and PSC in production on an ongoing basis.

The second question is then, “How long can I run in this mixed-version mode?” This question is a bit tougher to answer. There is no magic date or time bomb when things will just stop working. This is really more of a question of understanding the risks and knowing how problems may affect the environment should something go wrong while in this mixed-version state.

The Risks

An example of one such risk: suppose you are upgrading from vSphere 5.5 to 6.5. Say your vSphere Domain (i.e., the PSCs) and one vCenter Server are already upgraded, leaving you with one or more vCenter Server 5.5 instances. Now imagine that something happens that completely wipes out one of those 5.5 instances. As long as you have a good, current backup, you could restore that vCenter Server 5.5 instance and be back in production. However, if the only backup available was taken prior to the start of the vSphere Domain upgrade, you would not be able to use it to restore. The reason is that the restored vCenter Server instance would expect a 5.5 vSphere Domain, and communication between that restored instance and the 6.5 PSCs would not work. The alternative would be to roll back the entire vSphere Domain and any other vCenter Servers that were upgraded.

Another risk is being unable to restore that instance because the backups were bad (it does happen), or because you couldn't accept losing the data written since that backup was taken. In that case you would be forced to rebuild the vCenter Server instance and re-attach all the hosts. This may not be desirable because the new vCenter Server instance would have a new UUID, and all of the hosts, VMs, and other objects would get new MoRef IDs. Any backup or monitoring tools would see these as net-new objects, and you would lose continuity of backups and monitoring. You would also have to rebuild the instance as 6.5, which may not be desirable either: an application or other constraint may require a specific version of vCenter Server, and rebuilding the instance as 6.5 could break that application.

Finally, let’s consider a PSC failure instead of losing a vCenter Server. What happens? Normally, you could easily repoint a vCenter Server instance to another external PSC within the same SSO Site. However, this is not possible if the vCenter Server is not running the same version as the PSC you are attempting to repoint to. For example, if a vCenter Server 5.5 or 6.0 instance is pointing to a 6.5 PSC (because the PSCs have already been upgraded) and that PSC fails, you cannot repoint the vCenter Server to another PSC; remember, all PSCs must be upgraded first, so every remaining PSC is already running 6.5. The only way to recover from this scenario is to restore or redeploy the failed PSC, which may take longer than repointing.


So, given the above scenarios, what do we tell a customer who asks, “My upgrade plan spans multiple sites over multiple months. How should I plan my upgrade?” Here are our recommendations:

  1. Minimize the upgrade window
  2. Follow the upgrade documentation
  3. Take full backups before, during, and after the upgrade
  4. Check the interop matrices and test the upgrade first

The first recommendation is to minimize the upgrade window as much as possible. We understand that there’s only so much you can do here, but it is important to reduce the amount of time you’ll be running different versions of vCenter Server (and PSC) in the same vSphere Domain. The second recommendation is, no matter how tempting it is to do otherwise, to upgrade the entire vSphere Domain (SSO Instances and PSCs) first, as called out in the vSphere Documentation. It is not supported to upgrade everything in one site and then move on to the next. You must upgrade all SSO Instances and PSCs in the vSphere Domain, across ALL sites and locations, first. Third, make sure you have good backups every step of the way. While snapshots can offer a quick rollback path, they don’t always work when dealing with SSO, PSCs, and vCenter Server. Taking a full backup ensures the ability to restore to a known clean state. Last, and certainly not least, do your interoperability testing and test the upgrade in a lab environment that represents your production environment as closely as possible.

Emad has a great 3-part series on upgrades (Part 1, Part 2, Part 3) so be sure to check it out prior to testing and beginning your upgrade. Also know and understand the risks and impacts of problems during the upgrade process. Finally, know how the upgrade process is going to affect all of the yet-to-be-upgraded parts of your environment and have good rollback and mitigation plans if any issues come up.

The post Understanding the Impacts of Mixed-Version vCenter Server Deployments appeared first on VMware vSphere Blog.

Source: VMware vSphere Blog

A Maintainer’s Perspective: What Is Software Made Of?

By Darren Hart, director/open source architect, VMware Open Source Technology Center

What is a software project made of? What are the components? Ask software engineers of various disciplines and language expertise, and you’ll likely receive a fairly predictable set of responses: software consists of source files, a build system and, if you’re lucky, documentation.


Digging a little deeper, one might ask: what are source files made of? Source files are made up of classes, methods, functions, variables, statements and so on. These are the components that make software work, but they are (just barely) sufficient only for describing software at a single point in time.

Let’s expand the requirements of a software project to include the development process, release management, support and maintenance. Would that change the answer? As a maintainer of a Linux kernel subsystem, I work on one of the largest and most collaborative software projects in the world. In order to manage the rate of change, resolve code conflicts from independent contributors and identify where bugs are introduced, I rely heavily on strong change control management. If you were to ask me what software is made of, my answer would be, “commits.”

When I consider a software package, I am interested in its history, as well as its current functionality. I want to know how it became what it is. The commit log provides the origin and history of every line of code. It tells me much more about a function than just a snapshot of the statements making up that function in the latest release. A review of a snapshot of code may create a false perception that the section of code was developed intentionally all at once. In reality, it’s most likely the result of an initial implementation followed by a series of changes, possibly by multiple contributors.

Strong change control management is just as important for small, single developer projects as it is for large projects with thousands of contributors. If you’ve ever read a piece of code and thought, “Why did they do it that way?” it is very possible that they didn’t. Bugs are often introduced by changes that miss the broader context of the code developers are changing. In these cases, having the history available to compare against the current snapshot can prove enlightening for why a piece of code was written the way it was. It’s worth pointing out that this applies equally to the question, “Why did I write it that way?” After all, every developer has asked themselves that question at some point.
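
This idea can be sketched with a toy model (our own illustration, not how git works internally): treat a file's history as a series of whole-file snapshots, one per commit, and compute a tiny "blame" that attributes each line of the final snapshot to the commit that introduced it. The commit messages and file contents below are hypothetical.

```python
def blame(commits):
    """Attribute each line of the final snapshot to the commit that
    introduced it. `commits` is a list of (commit_id, snapshot) pairs,
    where a snapshot is the full list of lines after that commit."""
    annotated = []  # (commit_id, line) pairs for the current snapshot
    for commit_id, snapshot in commits:
        origin = {}
        for prev_id, line in annotated:
            origin.setdefault(line, prev_id)  # keep the earliest attribution
        # Lines already present keep their origin; new lines get this commit
        annotated = [(origin.get(line, commit_id), line) for line in snapshot]
    return annotated

# Hypothetical two-commit history: an initial implementation followed by
# a later fix. The final snapshot alone hides this structure.
history = [
    ("c1: add greet()", ["def greet():", "    return 'hello'"]),
    ("c2: fix punctuation", ["def greet():", "    return 'hello!'"]),
]
for commit_id, line in blame(history):
    print(f"{commit_id:24} {line}")
```

The final file looks like one uniform function, but the per-line attribution shows it was assembled by two different changes, which is exactly the information a snapshot alone cannot give you.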

Consider the changes we create to be integral components of the software we are developing, not transient objects to be discarded once the change is incorporated. The change must justify its value; its description must match its implementation. If the change is the deliverable, it becomes more tangible, and we begin to think and talk about software with emphasis on commits as the fundamental building block. This, in turn, leads to considerably more maintainable software.

The post A Maintainer’s Perspective: What Is Software Made Of? appeared first on Open Source @VMware.

Source: Open Source @VMware

Monthly Security Patch Program for the vCenter Server Appliance

The vCenter Server Appliance (VCSA) has become the recommended deployment type starting with vSphere 6.5. The three main components of the VCSA (operating system, database, and application) now all fall under VMware’s umbrella. The VCSA now uses Photon OS, a custom operating system built from the ground up for virtualization, which removes the dependency on third-party OS support. This not only provides one central place for support, but also allows for quicker releases of security patches.

VMware is now introducing a new Monthly Security Patch Program for the VCSA. The program will deliver important OS vulnerability patches on a monthly release cycle. VMware will monitor and fix any newly discovered OS vulnerabilities. As detailed in the VMware Security Response Policy, the response time to a vulnerability depends on its severity. For a Critical vulnerability, VMware will immediately start working on a fix or corrective action and provide it to customers in the shortest commercially reasonable time. For vulnerabilities categorized Important through Low, VMware will deliver a fix with the next planned maintenance or update release of the product, where relevant. There’s no change to the existing policy; to better serve customers, we are adding this new Monthly Security Patch Program designed for the VCSA.

Each monthly patch will be cumulative, so customers can apply the latest patch without having to apply every patch released before it. If there’s no security patch content in a given month, that month’s release will be skipped. If an update or a scheduled patch is already planned, the monthly patch content will be rolled into it. The monthly patches can be found on the My VMware patch portal (My VMware login required). Customers can sign up to receive security alerts on the VMware Security page and see a list of all VMware security advisories.

To learn more about VCSA patching and to provide feedback or ask questions, please see this article on the VMware Security Blog.

The post Monthly Security Patch Program for the vCenter Server Appliance appeared first on VMware vSphere Blog.

Source: VMware vSphere Blog

Growing a Community for a New Open Source Project

Justin Pettit presenting on the Open vSwitch and OVN projects at LinuxCon China earlier this year.

by Ben Pfaff,  principal engineer, and Justin Pettit, senior staff engineer, VMware Networking & Security Business Unit (NSBU) R&D

We traveled to Beijing this summer to present two talks at LinuxCon China June 19–20. Our first talk, “The Open vSwitch and OVN Projects,” was a technical talk about the work that we do on Open vSwitch (OVS), which is a familiar matter to us.

Our second talk, “The Business Reality of Building Open Source: What We Learned from OVS and OVN,” was not technical. Instead, it was primarily about our experience working on an open source project at VMware. We explained what we learned about how to create and advocate for open source within a company that has not been oriented around open source.

We aimed to dispel some of the misconceptions that we have seen managers and developers bring to open source, especially to new open source projects. A favorite example is the notion that an open source project will acquire a vibrant community of users and developers immediately upon its initial release. This is usually wrong, but the myth persists.

A specific case that often comes to mind is from a networking industry event that we attended in May 2012, where the speaker announced an open source project release for June, saying that it would grow a community by September. That project never grew beyond 20 contributors, and its last commit was in August 2015.

We used our talk at LinuxCon China to give our own tips on how to grow and promote an open source project, rooted in a few principles:

  • The leading principle is transparency, primarily by documenting the processes used to advance the project. Users and potential contributors need transparency and consistency to enable them to rely on a project to reach their own goals.
  • Another important part of growing an open source community is focusing on long-term goals. A public roadmap shows potential users that the project will continue to be maintained, and it helps contributors understand which kinds of features would be useful and welcomed by the project.
  • We also discussed the importance of building a supportive and positive development culture. Some projects foster a more confrontational community that we feel can be off-putting and dissuade potential contributors from participating.

We found that by applying these principles to Open vSwitch and OVN projects, we fostered a vibrant community of both users and contributors.

What principles work for you when trying to promote a new project within your company and the open source community? And how do you encourage new and ongoing contributions? Share with us in the comments below and on Twitter @vmwopensource.


The post Growing a Community for a New Open Source Project appeared first on Open Source @VMware.

Source: Open Source @VMware

A Look Inside the vRealize Operations Export Tool, VMware’s Latest Open Source Fling

We sat down with Principal System Engineer Pontus Rydin to discuss his open source project: the VMware vRealize Operations Export Tool. Pontus focuses on VMware’s global accounts and specializes in vRealize and other management products. Read on for his expert insights.

Q: What was your inspiration for the vRealize Operations Export Tool?

The inspiration was that, over and over again, we would get requests to export data from vRealize Operations (vROps) and we could only come up with one-off, clunky solutions. Sometimes, we simply said we just couldn’t do it. I knew there was an API for getting data out of vROps, so I figured, why don’t I write a tool that acts like a kind of Swiss Army knife for exporting data from vROps?

Q: Can you explain how the tool works?

The vROps Export Tool is a command line tool that takes some basic parameters and a definition file that describes what you want to output. You then point it toward your vROps, and it gathers big chunks of data in multiple threads. Then, it is exported into the format you selected. The data is mostly metrics and time series data, but the tool can also capture properties like AlwaysOn versions, CPU versions, hardware information and other types of similar data.
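
To make this concrete, here is a purely illustrative sketch of what a definition file might look like. Every field name and value below is an assumption made for illustration; consult the tool's documentation on GitHub for the actual syntax.

```yaml
# Hypothetical vROps Export Tool definition file (field names illustrative)
resourceType: VirtualMachine      # which kind of vROps resource to export
rollupType: AVG                   # how samples are aggregated
rollupMinutes: 5                  # sample interval for the time series
fields:
  - alias: cpuDemand              # column name in the output
    metric: cpu|demandmillicores  # vROps metric key to pull
  - alias: hostName
    prop: summary|parentHost      # a property rather than a metric
```

A definition like this would then be passed to the command line tool along with connection parameters for the vROps instance and the desired output format.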

Q: What formats can the tool export your data into?

Your data can be exported into one of a few formats. The initial format was comma-separated values (CSV), which is very useful because you can pull it into Excel easily. We then decided to add support for putting data straight into a SQL database, and we just incorporated native support for Wavefront by VMware.

Q: What types of users did you have in mind when designing this tool?

The vROps Export Tool is for everyone who has valuable monitoring data inside of vROps and who wants to export it to further process their data.

Q: Can you discuss a couple of use cases for this tool?

I can give you a few real-world examples of where we’re using this.

We have one financial customer who exports their data into a sophisticated proprietary analytic system. In this environment, they perform tasks like correlating the performance of their virtual machines (VMs) with the performance of their mutual funds, which depend on high-frequency trading. What they learned was that there is a correlation between the response times of their VMs and how the funds perform. This is a perfect example of taking data from vROps and exporting it to an environment where it can be analyzed further.

Another use case is archiving. Users might not want data to sit in vROps forever, but they need to have it somewhere for audit purposes. They can use this tool to export it to some external auditing database in order to keep accurate records.

Q: Why did you decide to open source the vROps Export Tool?

The first reason was personal. I enjoy working with open source! I like the way the model works in that you can take on contributors very easily, because there’s basically no onboarding process. Anybody who wants to contribute can do so.

The other reason is that many customers will have specific requirements I might not be able to accommodate, because I have been working on this in my spare time. The open source model allows me to say, “Hey, I can’t do that right now, but here’s the core idea and codebase.” Customers can then adapt the tool to their specifications and share their contributions to the project.

The open source method gives customers more agility and allows for upstream contribution back to the original project, creating a win-win scenario.

Q: Is the vROps Export Tool available for use now?

Yes! It’s out in the wild and up as a fling on our official GitHub page. Anyone can download it now for free.

Be sure to follow Pontus on Twitter and at his blog virtualviking.net.
For more news on VMware’s open source contributions, stay tuned to the Open Source blog and follow us on Twitter @vmwopensource.


The post A Look Inside the vRealize Operations Export Tool, VMware’s Latest Open Source Fling appeared first on Open Source @VMware.

Source: Open Source @VMware

Get your vSphere questions answered at VMworld Europe 2017

After an action-packed week at VMworld US in Las Vegas, the Cloud Platform Business Unit has assembled in Barcelona for VMworld Europe 2017. We’re excited to talk to our European friends about the products our group supports: vSphere (and especially vSphere 6.5), and now VMware Cloud on AWS.

This post is an attempt to pull all of the activities our group has planned for the Barcelona show into one place. We’ll see you soon!


  • The master list of our sessions, broken up by category, has its own page on the vSphere blog.
  • Here are our last-minute session suggestions for you:
    • Wednesday, 12:30 PM: What’s Next for vSphere? [SER3154SE]. Hear from the VP of Product about the future of vSphere. She’s bringing a customer along for the discussion, so it should be good.
    • Thursday, 10:30 AM: Real World IT Gurus Share Their Perspective on Operations Management for the Multicloud Era [MGT1218PE]. Come hear customers talk about their experiences creating an operational model where they act more like service providers.

Come visit with us in the booth!

We’ll have experts in our booth to talk about all things vSphere. Have a question? Here are the times we’ll be in the booth:

  • Tuesday: 10:30 – 7:30
  • Wednesday: 10:30 – 7:00
  • Thursday: 10:30 – 2:30

Here are some of the things we’ll be talking about:

  • Simplified vSphere Administration
    • vCenter Server Appliance and HA
    • vSphere Lifecycle Management
    • REST-based APIs
    • HTML 5 vSphere Client
    • vSphere Upgrades
  • Comprehensive vSphere Security at Scale
    • At-Rest and In-Motion Encryption
    • Secure Chain of Trust
    • Audit-Quality Logging
  • Proactive Data Center Management
    • Proactive HA
    • DRS / Intelligent Workload Placement
    • Predictive DRS
  • New Workloads
    • vSphere Scale-Out
    • Tech Preview: Persistent Memory
    • Tech Preview: vGPU
    • Tech Preview: RDMA

Our experts want to help you resolve your hardest questions

Our experts will be in the Meet the Expert area. If you have a particularly hard question, make sure to plan to visit them.

Day    Pod  Time           Expert              Session
Tues   1    11:15 – 12:00  Emad Younis         MTE4729-SER
Tues   4    11:15 – 12:00  Frank Denneman      MTE4720-SER
Tues   4    12:15 – 1:00   Mike Foley          MTE4716-SER
Tues   4    3:15 – 4:00    Dennis Lu           MTE4724-SER
Tues   4    4:15 – 5:00    William Lam         MTE4724-SER
Wed    4    11:15 – 12:00  Alan Renouf         MTE4723-SER
Wed    4    12:15 – 1:00   Kyle Ruddy          MTE4722-SER
Wed    4    1:15 – 2:00    Eric Grey           MTE4718-SER
Wed    4    4:15 – 5:00    Adam Eckerle        MTE4719-SER
Thurs  4    9:15 – 10:00   Mike Foley          MTE4716-SER
Thurs  1    10:15 – 11:00  Emad Younis         MTE5729-SER
Thurs  4    10:15 – 11:00  Frank Denneman      MTE4720-SER
Thurs  4    11:15 – 12:00  Ravi Soundararajan  MTE4725-SER
Thurs  4    12:15 – 1:00   Kyle Ruddy          MTE4721-SER
Get your hands dirty in a Hands-On Lab

Try out the new features of vSphere 6.5 in a hands-on lab. Here are the HOLs that are specific to vSphere:

  • HOL-1804-01-SDC: [vSphere 6.5] Performance Diagnostics and Benchmarking
  • HOL-1804-02-CHG: [vSphere 6.5] Challenge Lab
  • HOL-1811-01-SDC: [vSphere v6.5] What’s New
  • HOL-1811-02-SDC: [vSphere with Operations Management] Getting Started
  • HOL-1811-03-SDC: [vSphere with Operations Management] Advanced Topics
  • HOL-1811-04-SDC: [vSphere Security] Getting Started
  • HOL-1811-05-SDC: [vSphere Automation] PowerCLI
  • HOL-1811-06-SDC: [vSphere Automation and Development] API and SDK
  • HOL-1811-07-SDC: [vSphere HTML Client SDK] Build a Plugin
  • HOL-1830-01-CAN: [PhotonOS and Container Basics] Getting Started
  • HOL-1830-02-CAN: [vSphere Integrated Containers] Getting Started
  • HOL-1831-01-CAN: [vSphere Integrated Containers Kubernetes] Advanced Topics
  • HOL-1810-01-SDC: Virtualization 101





The post Get your vSphere questions answered at VMworld Europe 2017 appeared first on VMware vSphere Blog.

Source: VMware vSphere Blog

Learn All About ‘SCHED_DEADLINE’ from Steven Rostedt at Open Source Summit 2017

The Open Source Summit North America touches down in the City of Angels this year, taking place September 11-14 at the JW Marriott LA Live in Los Angeles, Calif. Hosted by the Linux Foundation, Open Source Summit 2017 boasts the combined might of LinuxCon, ContainerCon, CloudOpen and Open Community Conference under one umbrella event. The summit will offer attendees the chance to collaborate, share insights and learn from the best and the brightest in the open source technology realm.

Open Source Summit 2017

This year, VMware’s Steven Rostedt, an open source developer, will join 2,000 other technologists and open source community members at Open Source Summit 2017. Steven will deliver his talk entitled “Understanding SCHED_DEADLINE” on Monday, Sept. 11 at 11:50 a.m. Steven has spent most of his professional career dealing with real-time operating systems (RTOS) and has worked to turn the Linux kernel into an RTOS.

Starting in Linux version 3.14, a new scheduling class called “SCHED_DEADLINE” was introduced. This scheduling class implements Earliest Deadline First (EDF) along with a Constant Bandwidth Scheduler (CBS) that is used to give applications a guaranteed amount of CPU for a periodic time frame.

This type of scheduling is extremely advantageous and has numerous use cases for robotics, media players and recorders, as well as virtual machine guest management. Steven’s talk will explain the history behind “SCHED_DEADLINE” and compare it with other various methods to deal with periodic deadlines. He will also discuss some of the issues with the current Linux implementation and some of the improvements that are currently in development.

Most Linux applications use the standard real-time priorities. Linux supplies 100 real-time priorities above the normal task priorities used by “SCHED_OTHER.” The highest-priority task that is ready to run will be scheduled, and it won’t stop running until it voluntarily schedules out or a higher-priority task becomes runnable.

While priority-based scheduling is well-known, it doesn’t handle periodic scenarios properly. That’s where “SCHED_DEADLINE” comes into play. A system utilizing priority-based scheduling can only take advantage of at most 69 percent of a CPU. During his presentation, Steven will dive into the reasoning behind this and discuss how the “SCHED_DEADLINE” approach for periodic scheduling allows a periodic real-time designed system to fully utilize 100 percent of a CPU.
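
The roughly 69 percent figure comes from the classic Liu and Layland schedulability bound for rate-monotonic (fixed-priority) scheduling of n periodic tasks, n(2^(1/n) - 1), which tends toward ln 2 ≈ 0.693 as n grows. A quick sketch (our own illustration, not taken from the talk):

```python
import math

def rm_utilization_bound(n):
    """Liu & Layland least upper bound on total CPU utilization that
    guarantees schedulability of n periodic tasks under rate-monotonic
    (fixed-priority) scheduling."""
    return n * (2 ** (1 / n) - 1)

for n in (1, 2, 3, 10, 1000):
    print(f"{n:5d} tasks: bound = {rm_utilization_bound(n):.4f}")

# The bound falls toward ln(2) ~= 0.6931: the "69 percent" ceiling.
# EDF, the basis of SCHED_DEADLINE, can guarantee schedulability on a
# single CPU right up to 100 percent utilization.
print(f"limit: {math.log(2):.4f}")
```

Task sets above the bound may still be schedulable under fixed priorities, but only below it is schedulability guaranteed for any task set, which is why EDF's 100 percent guarantee matters for periodic real-time designs.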

Understanding “SCHED_DEADLINE” holds crucial advantages for many using open source technologies. If you’re attending Open Source Summit 2017 in Los Angeles this year, be sure to catch Steven’s presentation for his expert insight on the matter. Also, as a gold sponsor of the summit, VMware will have its own booth where attendees can stop by and chat with open source developers and learn about some of our investments in open source. We hope to see you there!

Stay tuned to the VMware Open Source blog and our Twitter handle (@VMWOpenSource) for all the latest news and updates around Open Source Summit 2017.

The post Learn All About ‘SCHED_DEADLINE’ from Steven Rostedt at Open Source Summit 2017 appeared first on Open Source @VMware.

Source: Open Source @VMware

Open Source Puppies: Love at First Sight Meets Deployment Reality

Adopting a new puppy is not without surprises. The soft fur, big eyes and playful joy capture your heart in a split second. It’s love at first sight. Day 1 is all fun and games—nothing could be better. Day 2? Well, reality sets in. The teeth marks on the chair leg, your favorite sneakers are destroyed and the remnants of your fluffy pillow are scattered around the living room floor. Puppies need supervision, training, attention and time—lots of time.

And open source?

Well, in many ways, bringing open source into your development environment or data center can feel the same. Adopting—er, deploying—open source projects in a production environment can feel a lot like raising a puppy.

According to Ed Blackwell, VMware CTO Ambassador and Principal Cloud Architect, open source can pose some interesting and unexpected challenges. Even before Day 2 arrives, there’s testing, integration and patching to do. Once in production, those Day 2 challenges arrive: reliability, scaling, support and upgrades, to name a few. But, you still want that puppy (technology). So, now what?

An important first step when considering adopting open source is to understand the scope of the project or technology being considered and the capacity of your team. For a smaller project with a limited footprint in your environment, the challenges may be quite manageable. For a larger project, such as OpenStack, the implications are more widespread, and may be more akin to a “litter of puppies,” according to Blackwell. For some, the extra investment and effort is well worth it. For others, the payoff is too small or uncertain to justify the choice.

How can you get the benefits of open source while minimizing surprises? Do your homework. Evaluate your choices. In many cases, vendors may offer a supported version of the open source project you are considering. These vendor-supported distributions offer a tested, stable and supported “productized” version of the open source project. This gives you and your staff more confidence that the technology will work as expected.

Want to learn more about open source? Attend the Open Source Summit in Los Angeles September 11-14. VMware will be there to showcase our open source projects, present on some key technologies and open source contributions and meet with the open source community.

And you can visit puppies, too. The Linux Foundation will host L.A. Animal Rescue to bring Puppy Pawlooza to Open Source Summit! Attendees are welcome to enjoy playtime with these shelter dogs Wednesday inside the sponsor showcase. If a couple of hours with these furry friends is not enough, all dogs will be available for adoption.

The post Open Source Puppies: Love at First Sight Meets Deployment Reality appeared first on Open Source @VMware.

Source: Open Source @VMware