RHELAH support on vSphere

Hi everyone, I’m happy to announce support for Red Hat Enterprise Linux Atomic Host (RHELAH) running on ESXi. This enables vSphere admins to offer RHELAH as a supported container host OS to developers either experimenting with or creating container-based applications.

Similar to our existing support for RHEL 7.x, open-vm-tools is included as a standard part of RHELAH. Over the past couple of months, the teams from Red Hat and VMware have also refactored open-vm-tools to run in its own system container while maintaining traditional Tools functionality. Updates to open-vm-tools will be made available as new host OS images are refreshed and published.

Support for RHELAH begins with the recently released v7.4. ESXi support is available from v6.0 to v6.5. Installation instructions and additional information on running the VM can be found here:

http://partnerweb.vmware.com/GOSIG/RHEL_Atomic_Host.html

Source: VMware vSphere Blog

7 Questions with Alan Renouf on Open Source, SDKs & Community at VMware

We sat down with Alan Renouf, VMware Senior Product Line Manager, to discuss his involvement in the evolving open source community at VMware. As part of the VMware vSphere and, more recently, VMware Cloud on Amazon Web Services (AWS) teams, Alan focuses on:

  • Application programmable interfaces (APIs)
  • Software development kits (SDKs)
  • Command line interfaces (CLIs)

Alan shares his perspective on the past, present and future of open source within VMware and VMware projects.

1. From your perspective, what is open source, and what are its benefits?

In open source, everyone contributes to make code better, because we all work on the same things trying to solve the same problems, but have different experiences. People come together to commit to projects that others are working on and to make software better as a whole, without money necessarily being the objective.

At VMware, this kind of culture is enabled and encouraged as part of our EPIC2 values. These core values align nicely with how we run our open source repositories, making it possible to empower our customers to work better with VMware products.

[Related: What It Means to Be a Good Open Source Citizen]

2. What does open source mean for VMware customers?

First of all, it’s important for the visibility of our product code. From the security side of things, our customers can look through the code and scan for vulnerabilities, adding that extra layer of required compliance. Secondly, it is important for our customers to understand how our software works and why our software makes certain decisions. This is absolutely key to troubleshooting in a production environment. Thirdly, everyone contributing to open source makes VMware’s software and customer implementations better. By working directly with our teams, customers give us their direct feedback and contribute back to the project.

3. What are your accomplishments with open source?

We have always had SDKs that enable developers and automation engineers to work with our APIs and the programmatic interfaces of our products. Those SDKs, however, have always been behind a gated system, where one must sign in to download and use them. Giving feedback was difficult. The samples were written by VMware and only VMware, so there were no outside contributions, and releases happened only on the cadence of major vSphere releases. Even if the feedback did make it back to us, the customer had to wait a long time to gain the enhancements.

My role in open sourcing the SDKs was all about making the kits available to VMware’s customers and partners. Customers should be able to go to the GitHub site, clone the project or download it and be up and running with our APIs and samples within five minutes. That was a big change from what we had before, where customers would tell me it would take two months to get to that point. Providing feedback and seeing the changes go straight back into the samples—often within days—is a huge benefit for our customers and us.

4. What was so significant about open sourcing the SDKs?

There is the aspect of being able to download the VMware SDKs and get running faster, but now customers are also able to contribute. They are the ones coming up with samples. We already have some contributors from our customer base who have said, “I like what you’ve done here and I’ve written something similar. I’m going to add it to your repository, so now everybody else has access to that code as well.” It is great for us to enable everybody and work on the same problems together.

Internally, open sourcing our SDKs was important because it gives our teams a better understanding of open source and how to work with modern-day development and contribution tools. It is completely different from the old model, which was mostly closed and made it difficult to gather feedback.

[Related: Integration with vSphere Using the New Open Sourced SDKs]

 

5. Since open sourcing VMware’s SDKs, what are some things that surprised you?

Personally, I have been surprised by the effort contributors make to share their work with the rest of the community. Take this pull request as an example. It added a number of Postman samples for the new VCSA appliance API, which was released in 6.5, and now enables people to easily view and understand samples for working with this API over REST. Awesome work!
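(If you’d like to try those same calls outside of Postman, here is a minimal Python sketch against the vSphere 6.5 REST API. It is only a sketch: the vCenter hostname and credentials are placeholders, and certificate verification is disabled purely to keep the example short.)

```python
import requests

VCENTER = "vcsa.example.com"  # placeholder hostname
SESSION_URL = f"https://{VCENTER}/rest/com/vmware/cis/session"
VERSION_URL = f"https://{VCENTER}/rest/appliance/system/version"

# Create an API session using basic auth, then reuse the returned session ID.
resp = requests.post(SESSION_URL,
                     auth=("administrator@vsphere.local", "changeme"),
                     verify=False)
session_id = resp.json()["value"]

# Query the VCSA appliance API for version information.
headers = {"vmware-api-session-id": session_id}
version = requests.get(VERSION_URL, headers=headers, verify=False).json()["value"]
print(version["version"], version["build"])
```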

6. How does VMware’s commitment to open source affect VMware’s products?

Open source dramatically increases the relevance of VMware’s products. I believe open source products get much closer to a product that people want to consume, as the contributors have a hand in making the software. If they don’t like the way things work or find that it doesn’t quite fulfill their use case, customers can easily adjust the open source project to enhance it and add features or code that enable the project to meet that extra use case. VMware products can only get better for the users and more relevant to their day-to-day jobs because of open source.

7. Any pointers for someone trying to get involved with VMware’s open source projects?

I would say look at the VMware GitHub repositories. Become a community member. Start committing, adding and giving back to these repositories. VMware employees and customers often have great knowledge about how our products work. If you see some code or documentation on a site that looks incorrect or could be done more efficiently, submit a change and educate the community on why you are making that change. Dip your toe in the water. Believe me, it’s warm and leads to a beautiful ocean!

Make the code better for you and the entire VMware community of people like you.

 

Follow Alan Renouf on Twitter @AlanRenouf

To stay up to date on the latest in the open source community and VMware, follow us on Twitter @VMWOpenSource.

Source: Open Source @VMware

Automating Benchmarks for Cloud Infrastructure with Open Source Project Weathervane

Open source Project Weathervane may not tell you the direction of the wind, but it is a clear indication of where the wind is blowing when it comes to open source technology. Mandy Botsko-Wilson, a consulting architect at VMware, delivered an insightful vBrownBag Tech Talk at VMworld 2017 entitled “Automating Benchmarks for Cloud Infrastructure with VMware Weathervane & vRealize Automation.”

Throughout her vBrownBag talk, Mandy presented Weathervane as a benchmarking tool for common questions: the performance implications of moving an application to the cloud, the performance trade-offs between running application services in containers versus virtual machines, and the performance of hybrid approaches.

Developed over five years by Hal Rosenberg, a performance engineer at VMware, Project Weathervane is VMware’s open source benchmarking tool that allows you to model an enterprise application and its workload, and then automate the repeatable execution of runs to collect performance data.

Weathervane provides application-level cloud infrastructure performance benchmarking and can help you evaluate solutions without having to try out a bunch of different container services in your cloud infrastructure. It is also well poised for benchmarking hybrid apps in your cloud infrastructure and allows for the discovery and management of containers.

Weathervane contains three defining components:

  1. Model Application. With a model application, you don’t have to use your production application and re-architect it in different ways. The first model application available as part of Weathervane’s open source code is Auction, a modern web application.
  2. Workload Driver. This means Weathervane can drive a realistic and repeatable load against your application. You can build in the workload driver using Auction so you don’t have to build a point-and-click system for the enterprise app you’re trying to test out.
  3. Run-harness. This automates the process of executing runs and collecting the results.

Once you’ve launched Weathervane, you can deploy the software in a flexible configuration based on the application. You do not have to use all of the Weathervane services, but you do need:

  • A workload driver;
  • A messaging server;
  • An application server;
  • A relational database; and
  • A NoSQL database.

Underneath all of those components is the run-harness automating the runs.
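Purely as an illustration of that minimal footprint (this is not Weathervane’s actual configuration format; the service names and counts below are hypothetical), the smallest useful deployment can be pictured like this:

```python
# Hypothetical sketch of a minimal Weathervane deployment; the real run-harness
# is driven by its own configuration file, not by this structure.
minimal_deployment = {
    "workload_driver": 1,
    "messaging_server": 1,
    "application_server": 1,
    "relational_database": 1,
    "nosql_database": 1,
}

# The run-harness sits underneath all of these services, executing runs and
# collecting the results.
for service, count in minimal_deployment.items():
    print(f"{service}: {count} instance(s)")
```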

Weathervane is not a typical, industry-standard benchmark. It offers flexibility in configuration without strict governing rules. As a performance benchmark, Weathervane provides:

  • Model Application, which is representative of a common class of production applications.
  • Workload, in which simulated users perform only specific operations.
  • Application-level Performance Metrics, which let you compare application performance across multiple runs and tweak configurations.
  • Quality of Service (QoS), a measure to determine whether a given run is acceptable.

As an open source project, Weathervane will continue to grow and create more model applications that you can benchmark against in your own environments. It currently supports Docker containers and virtual machines, including vSphere Integrated Containers, the latest version of which supports native Docker Container Hosts. Support for other containerized options will be added in the future. Weathervane also offers simple setup and quick deployment, support for variable loads and application elasticity.

Taking it a step further, Weathervane can be blueprinted in VMware vRealize Automation to allow your DevOps teams to deploy, scale and configure Weathervane on varying endpoints to compare the effects on application-level performance before the enterprise application goes live for that architecture. Automating this process means you don’t have to move everything around in distribution, which can get a little tedious.

To watch a complete demo of the installation and initial run of Weathervane, Auction running a test on Weathervane, a run with Docker and blueprinting of Weathervane in vRealize Automation, check out the full recording of Mandy’s vBrownBag talk here. And remember, open source project Weathervane is available as a free download on GitHub.

Stay tuned to the VMware Open Source blog for more deep dives into vBrownBag talks from VMworld 2017, and follow us on Twitter (@VMWOpenSource).

Source: Open Source @VMware

What It Means to Be a Good Open Source Citizen

By Tim Pepper

Our team recently discussed the word choice in our VMware Open Source Technology Center (OSTC) mission statement. Our goals are to:

  • Establish VMware as a good open source citizen.
  • Build VMware’s presence and influence in relevant projects through meaningful contributions and participation.
  • Develop and promote VMware standards for best practices in open source development and engagement with external communities.
  • Mentor internal teams to increase VMware’s open source competency and expertise.

It’s easy to suppose that I, as an open source contributor, share some common, implicit understanding of what it means to be a “good citizen” with the rest of the open source community. Or, as a definition by negation, that I can think in shared terms of not doing some set of obviously bad behaviors. The reality is that these concepts can vary across cultures and contexts, and the meaning of the “good citizen” phrase has been debated for years and years.

What are some of the specific behaviors I strive for when I talk of good citizenship in open source?

One might start with license compliance and the free availability of code. However, open source citizenry is about so much more than just the code. While undeniably necessary, sharing code is not in itself sufficient for one to be considered a “good citizen” in open source communities. I believe that I must do more. The spirit of open source includes an active, participatory aspect rooted in collaboration.

Going Beyond Open Source Code

To me, here’s what it means to be a good open source citizen:

  • Collaborate and take action.
  • Share ideas through formulation and implementation.
  • File bug reports.
  • Test.
  • Develop.
  • Maintain and support code over the long haul.
  • Constructively review others’ code.
  • Submit your code for review by others.
  • Change based on the community’s feedback.

Open source collaboration involves being a mentor, but it also means being a mentee. Collaboration and good citizenry are a two-way street, with each party perhaps changing themselves as much or more than the code and projects they seek to change.

Conflict will occur. Nevertheless, a good citizen will not simply fork when ideas diverge. A true open source community member will seek opportunities to unfork, commonize, refactor and reconverge as technology evolves.

[Related: Spork! An Open Source Fork Utensil]

A good open source citizen will not simply follow project norms or codes of conduct, but engage in their communities’ governance as an ally of peaceful, fair and inclusive norms.

All of these aspects of collaboration are about adding and sustaining value at a scale impossible individually.

What It Means to Be a Good Open Source Citizen—Plus One

While VMware aims to offer great value through its participation in open source, I am also mindful of our OSTC organization being a relative newcomer. Astronaut Chris Hadfield has a useful bit of guidance in this situation:

“Over the years, I’ve realized that in any new situation, whether it involves an elevator or a rocket ship, you will almost certainly be viewed in one of three ways. As a minus one: actively harmful, someone who creates problems. Or as a zero: your impact is neutral and doesn’t tip the balance one way or the other. Or you’ll be seen as a plus one: someone who actively adds value. Everyone wants to be a plus one, of course. But proclaiming your plus one-ness at the outset almost guarantees that you’ll be perceived as a minus one, regardless of the skills you bring to the table or how you actually perform.”

—Chris Hadfield, An Astronaut’s Guide to Life on Earth

The VMware OSTC team continues to grow, and my goal is to share my knowledge and expertise beyond our group into all parts of VMware. It is a deliberate, thoughtful process. Over time, I hope our leadership will convince open source communities that VMware is an open source plus one.

About the Author

Tim Pepper is interested in development roles involving dynamic, sophisticated and deeply-skilled teams driving forward the state of the art in open source and Linux-based systems.

Born in California to a U.S. service member, he has had at least two dozen addresses and attended over a dozen schools, eventually settling in Oregon, where he has now lived for almost 15 years, three times the length of any prior location. He holds a B.S. in Computer Engineering from Cal Poly San Luis Obispo and an M.S. in Computer Science from Portland State University. He specializes in Linux and open source systems development and is a staff engineer with VMware’s Open Source Technology Center.

Source: Open Source @VMware

7 of the Best Open Source Quotes from VMworld 2017

VMworld 2017 was VMware’s biggest showing of open source at VMworld to date! Here is a collection of our seven favorite quotes from VMworld’s open source sessions.

1. Open Source at VMware: A Key Ingredient to Our Success and Yours [LDT1844BU]

“Open source is a powerful methodology for innovation and the development of APIs.”

—Dirk Hohndel, VMware Chief Open Source Officer

[Related blog: Open Source at VMworld—From Keynotes to Hackathons]

2. VMware and Open Source: Compliance, Quality, and Viability [FUT1226BU]

“Open source is mainstream. Open source is a strategic part of every company’s software portfolio.”

— Meng Chow, VMware Open Source Program Manager

[Related blog: 4 Vital Steps to Open Source Success in Your Company]

 

3. Open Source at VMware: A Key Ingredient to Our Success and Yours [LDT1844BU]

“[With open source], you need to think about scale. It’s easy when it’s small, but at 30 million users, things get hard.”

—Dirk Hohndel, VMware Chief Open Source Officer

[Related blog: Project, Process & Production]

 

4. VMware Open-Source SDKs: From Getting Started to Web App in One Hour [SER1912BU]

“VMware is finally aligned and listening to their customers and consumers of the VMware API. The improvements in their open source SDKs allow us to provide feedback and contributions, as well as a much simpler mechanism to adopt the information quicker. I wish this could have been done years ago!”

—Alan Renouf, VMware Senior Product Line Manager

[Related blog: vSphere Automation SDK for .Net Now Available in Github]

 

5.  Simplifying Your Open-Source Cloud with VMware [FUT3076BU]

“VMware embraces open source innovation by making it enterprise ready.”

—Edward Blackwell, VMware Global Accounts Principal Systems Engineer

“This was a tipping point in the presentation leading the audience to inquiry on previously discussed challenges with open source; thus, the audience was pleasantly intrigued and wanted to learn more,” Edward said about his experience as a presenter.

[Related blog: The Inspiration Behind Open Source Project Harbor]

 

6. VMware and Open Source: Compliance, Quality, and Viability [FUT1226BU]

“Security + viability = quality”

—Norman Scroggins, Senior Open Source Program Manager

[Related blog: VMware & Open Source: A Commitment to Innovation & Collaboration]

 

7. Women and Diversity in Tech: Disruption and Inclusion [VMinclusion]

“Inclusion, to me, means that we are innovative. Diversity requires intention. It does not happen by accident.”

— Nithya Ruff, Senior Director of Open Source Practice at Comcast.

[Related blog: Meng Chow Speaking at Women Who Code Connect 2017 on 4/29]

 

Looking for more VMware Open Source Content?

Watch VMworld session replays at our YouTube channel for more.

To stay up to date on the latest in the open source community and VMware, follow us on Twitter @VMWOpenSource.

Source: Open Source @VMware

Key Manager Concepts and Topology Basics for VM and vSAN Encryption

At VMworld 2017, VM and vSAN Encryption, and the security of vSphere in general, became VERY popular topics. In those discussions the topic of key managers came up often, and “How many key managers should I have?” was a recurring question.

This blog article will give you two examples of key manager topologies and will introduce you to some management concepts. Because every environment comes with unique configurations and requirements, the intent is not to “boil the ocean” but to just get you thinking and help you understand the underlying pieces so you can make more informed decisions.

Availability

The biggest requirement of key management is availability. The analogy I use when talking about this is DNS. Nobody runs with a single DNS server in their environment. (I hope!) You have multiple replicating DNS servers “just in case” something goes wrong. Maybe in your single site you’ll have at least two and maybe three or four. Maybe you even have a DNS server or two running in a cloud or have servers at another site, again, “just in case”. Why? Because if DNS is down then everything is essentially dead in the water. All roads lead to DNS, and all roads can end if there’s no DNS.

The same holds true for key management. If the key management infrastructure is down, you can’t encrypt new VMs or re-key existing VMs! Even more importantly, you DON’T want that single point of failure. If you have just one KMS, something bad happens to it and you can’t recover the keys, then you have some serious issues to attend to! There are no back doors to decrypt a VM. If you lose the keys, you’ve lost the data unless you’ve backed it up. See the “What if?” section below, where James Doyle from our GSS organization did a talk on this at VMworld. A must watch!

Business Critical Infrastructure

This is why key management will become the next (critical) datacenter infrastructure requirement, just like DNS and NTP have become. This isn’t an “I need a KMS for vCenter” discussion as much as it is an “I need a KMS for the business” discussion.

After all, you don’t install DNS to make it easier to run just the datacenter. You run DNS because without it the business won’t run. Today you may only need a KMS for vSphere, but going forward the business may need it for a whole host of things. Encrypted VMs might be your first need for a KMS, but they won’t be your last.

Note that I didn’t list other requirements like HSMs (Hardware Security Modules). vSphere is just a KMIP client; those functions are handled by the key manager you choose. If you want HSMs, then the key manager will talk to them and vSphere will talk to the KMS. Honestly, this lessens complexity while giving you the best choice to meet your needs.

Key Manager Basics

There are a few things to set up in vCenter that are covered in the documentation, but first let’s go over some of the concepts and terms.

  • KMIP = Key Management Interoperability Protocol. This is the standard by which any KMIP client can speak to a KMIP KMS and it should just work. Extensive interoperability testing is done each year at the RSA Conference. You can learn more at Wikipedia, which has a great writeup on the whole process.
  • DEK = Data Encryption Key. This is the key that the ESXi host generates when you encrypt a VM. It is used to encrypt the data and is stored, encrypted, in the VMX/VM Advanced settings.
  • KEK = Key Encryption Key. This is the key from the KMS that encrypts the DEK. A pointer to the KMS cluster and the KEK key ID are stored in the VMX/VM Advanced settings. (A sketch of this DEK/KEK pattern appears at the end of this section.)
  • Key Manager Cluster/Alias. In vSphere this is a collection of the FQDNs/IP addresses of replicating key managers. The name of the cluster is stored, along with the key ID of the KEK, with each VM. When a VM is powered on, this is what vCenter uses to retrieve the correct KEK to unlock the VM. Don’t get too wrapped up in the term “cluster”; it’s really just a collection of FQDNs and/or IP addresses that represent a group of replicating key managers. That’s it.
  • VM and vSAN encryption use the same libraries. Whether you are using VM Encryption or vSAN Encryption, the configuration (and libraries and crypto) are the same. Both use the same interfaces to connect to the KMS and the same crypto libraries to encrypt and decrypt.

More great information is located in our vSphere 6.5 documentation. I encourage you to read it and watch the videos!
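To make that DEK/KEK relationship concrete, here is a minimal, purely illustrative Python sketch of the wrap/unwrap pattern described above. This is not VMware’s implementation; the Python cryptography library, the key sizes and the sample data are simply stand-ins.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# KEK: in vSphere this key comes from the external KMS; here it is a local stand-in.
kek = AESGCM.generate_key(bit_length=256)

# DEK: generated by the ESXi host when the VM is encrypted; used to encrypt the VM data.
dek = AESGCM.generate_key(bit_length=256)
vm_data = b"contents of a VMDK block"
data_nonce = os.urandom(12)
encrypted_data = AESGCM(dek).encrypt(data_nonce, vm_data, None)

# The DEK itself is wrapped (encrypted) with the KEK and stored with the VM,
# alongside the KMS cluster name and the KEK key ID.
wrap_nonce = os.urandom(12)
wrapped_dek = AESGCM(kek).encrypt(wrap_nonce, dek, None)

# At power-on, vCenter retrieves the KEK from the KMS, the DEK is unwrapped,
# and the host can decrypt the data again.
recovered_dek = AESGCM(kek).decrypt(wrap_nonce, wrapped_dek, None)
assert AESGCM(recovered_dek).decrypt(data_nonce, encrypted_data, None) == vm_data
```

The takeaway is that the VM only ever stores a wrapped DEK plus a pointer to the KEK; lose access to the KMS (and therefore the KEK) and the data cannot be unwrapped.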

Key Manager Cluster/Alias

Key managers today are usually set up so that they replicate keys to one another. If I have three instances of a key manager, KMS-A, KMS-B and KMS-C, they replicate the keys among themselves. If I create a key on KMS-A, it will show up in KMS-B and KMS-C at some point. You’ll have to check with your key manager product to see how quickly the replication happens.

Using the example above, in vCenter I would create a key manager cluster/alias. In my example I’ll call it “KMSCluster” and add KMS-A, KMS-B and KMS-C into that KMS cluster. I would then establish a trust with each of the key managers. There are multiple ways to establish trust, and I would suggest you consult the documentation for your key manager on how best to do this.

Because vSphere currently uses a Key Management Interoperability Protocol 1.1 compliant library, any KMIP 1.1 key manager should work. For more information on which solutions have been certified, go to VMware’s Hardware Compatibility List and search for the category of “kms”.
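If you would rather script that setup than click through the vCenter UI, here is a minimal pyVmomi sketch that registers the three key managers under a single “KMSCluster” alias. Treat it as a sketch only: it assumes vCenter 6.5+ and the pyVmomi bindings for the vSphere encryption API, the hostnames, addresses and credentials are placeholders, and establishing trust with each KMS (certificate exchange) still has to be completed afterwards per your key manager’s documentation.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience only; use verified certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
crypto_mgr = si.RetrieveContent().cryptoManager  # CryptoManagerKmip on vCenter

cluster_id = vim.encryption.KeyProviderId(id="KMSCluster")
kms_servers = [("KMS-A", "10.0.0.11"), ("KMS-B", "10.0.0.12"), ("KMS-C", "10.0.0.13")]

for name, address in kms_servers:
    spec = vim.encryption.KmipServerSpec(
        clusterId=cluster_id,
        info=vim.encryption.KmipServerInfo(name=name, address=address, port=5696))
    crypto_mgr.RegisterKmipServer(server=spec)  # registration order matters; see "KMS Order" below

Disconnect(si)
```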

Key Manager connection retry

Using the example above, if KMS-A is unavailable then vCenter will try KMS-B (the next KMS in order). If KMS-A is up but the KMS service, for some reason, doesn’t respond, then vCenter will wait 60 seconds for a response. If after that 60 seconds there is no response, vCenter will try KMS-B. If KMS-A doesn’t respond at all (e.g., no IP connection), then vCenter won’t wait the 60 seconds and will try KMS-B immediately.

The maximum vCenter will wait for a KMS to respond is 60 seconds. This is not something that can be configured.
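Put another way, vCenter simply walks the configured list in order with a bounded wait per entry. Here is a rough illustration of that logic in Python; it is not vCenter’s actual code, and the host names, port and the simple connection probe are stand-ins:

```python
import socket

KMS_ORDER = ["kms-a.example.com", "kms-b.example.com", "kms-c.example.com"]  # placeholder names
MAX_WAIT_SECONDS = 60  # vCenter's fixed, non-configurable per-KMS wait

def first_responding_kms(order, port=5696):
    """Return the first KMS in the configured order that answers, or None."""
    for host in order:
        try:
            # An unreachable host fails fast; a reachable host that never answers
            # is given up on after MAX_WAIT_SECONDS, then we move to the next entry.
            with socket.create_connection((host, port), timeout=MAX_WAIT_SECONDS):
                return host
        except OSError:
            continue
    return None
```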

Separation of duties (when you can)

Ideally, you would want to have a separation of duties when it comes to key management. Large organizations that have a security team may want to control the keys, with the IT admin just consuming the key manager as a service.

Understandably, not every organization has that ability. For those IT admins who are also the network/storage/security/chief cook/bottle washer types, what you want to consider is ensuring that only your most trusted admins can administer the key managers and that you have sufficient controls in place to isolate them from other network traffic if possible. If you have a management cluster, then this might be a good place to run your KMS VMs, if that’s what you are using. The goal is that only a select few administrators should be able to effect any change on these systems. Off the top of my head, I can’t think of a good business reason for a junior administrator to have any access.

A note about running some of your KMSs on a management cluster: one thing to keep in mind is to avoid the catch-22/chicken-and-egg scenario. What I mean is that, especially with single-site configurations, you really should have a separate infrastructure for your key management. For example, running all your key manager VMs solely on an encrypted vSAN isn’t ideal. With vSAN Encryption you configure vCenter and then the hosts communicate with the KMS. If you think about it, do you really want the box that holds your key locked inside a box that can only be opened by the key inside the inner box? Nope. Will it work? Yup. Is it a good practice? Probably not. A possible solution would be to have at least one of your key managers running in a cloud. Remember: availability, availability, availability.

The same is true for vCenter and PSCs in a VM Encryption scenario. You shouldn’t encrypt them using VM Encryption, because they would then need their encryption key in order to boot up. It’s best to run them in a separate management cluster. THOSE VMs you COULD run on vSAN Encryption, and that is supported, because the hosts themselves communicate with the KMS in a vSAN Encryption model.

Topologies

Single Site

As mentioned above, if you have a single site then you probably want to have, at minimum, two replicating key managers. Ideally, you probably want at least three so that at no time are you relying on a single key manager instance. In some cases, as I pointed out above, you may want to run one of your replicating key managers in a cloud service. Some of the certified key managers we work with allow for that type of capability.

What we’re looking for here is the avoidance of a single point of failure within your site. For example, don’t allow all your KMS VMs (if that’s what you’re using) to run on the same host. Use host affinity rules to keep them separate, ensuring that someone tripping over a power cord doesn’t cause an outage. Think through the scenarios for your environment. Come up with disasters that “could” happen and plan accordingly.

Multi Site

For multi-site, things get more interesting. You want all the KMS servers running and replicating across the sites. Within vCenter, you’ve created your KMS cluster/alias and you start adding your FQDNs or IPs.

KMS Order

But there’s something to take into consideration, and that’s the order in which you add those key managers. vCenter will always try the first KMS on the list first.
In the scenario below I have two sites, Site-A and Site-B. In Site-A I have KMS-A, KMS-B and KMS-C. In Site-B I have KMS-D, KMS-E and KMS-F.
In Site-A I want the order to go A,B,C,D,E,F
In Site-B I want the order to go D,E,F,A,B,C
Why? As mentioned, if vCenter doesn’t get a response from a KMS within 60 seconds, it will try the next KMS on the list. What you don’t want to set up is a situation whereby vCenter goes to a remote KMS before exhausting all of the local KMSs.
I don’t want either site to have to get a key from the remote site unless absolutely necessary. In my example I haven’t added in any additional latencies or other network configurations that may affect the transit of the key from one site to the other.

What if?

At VMworld, one of our GSS guys, James Doyle, who has really jumped on the encryption bandwagon, did a fantastic presentation on encryption from a Day 2 / disaster-strikes standpoint.
As you can imagine, GSS gets a lot of calls from customers who might be in the middle of an IT disaster. With those scenarios in mind, James added a “What if they were using VM Encryption?” angle to the mix.
Imagine the scenario where you DO have just one KMS and a bunch of encrypted VMs, Murphy strikes, and you’re left with a bunch of running encrypted VMs but no KMS and no way to ever get those keys back.
Don’t panic. Whatever you do, do NOT shut things down!
James goes into detail on how you can recover from many of these situations. I can’t recommend this highly enough.

Wrap-Up

I hope this has helped give you a better understanding of KMS topologies and the needs and caveats when configuring them with vCenter. This is not meant to be the be-all/end-all for this topic. You should absolutely discuss your design with your KMS vendor, ask about any specific recommendations they have, and incorporate them into your design.
mike

Source: VMware vSphere Blog

Three Key Reasons for Joining Modernize Data Centers Track at vForum Online

As digital transformation increases across the business world, the era of costly, complex legacy infrastructures is coming to an end. But what will it take to modernize infrastructures in such a way that IT gets the agility and flexibility it needs to operate, innovate, and scale to meet the demands of today’s competitive business climate?

To get answers from experts in the IT community, join us on October 18th for the Modernize Data Centers track at vForum Online Fall 2017. During this free half-day event — which also happens to be our largest virtual IT conference — you’ll receive the guidance and information you need to successfully evolve to a modernized infrastructure. Best of all, you get to attend from the comfort and convenience of your own desk.

To give you a better idea of what makes this a must-attend event for any IT professional, we’ve put together a list of three key reasons to join. Have a look.

  1. Attend Breakout Sessions
    Want to find out how you can transform your infrastructure to make delivery simple for any application on any platform? Perhaps you’re interested in something more specific, like upgrading your virtualization platform with VMware vSphere 6.5, getting to know storage virtualization with VMware vSAN better, accelerating the hybrid cloud with VMware Cloud on AWS, or taking a deep dive into VMware Cloud Foundation™ architecture?
    You’ll find all of this and more during our expert-led Modernize Infrastructure breakout sessions in the Modernize Data Centers track. Pick the ones you want to attend, investigate VMware solutions more deeply, and get the direction you need for building out a modern infrastructure that supports your business needs. 
  2. Chat with Experts and Get Your Questions Answered
    Have questions about the Software-Defined Data Center (SDDC)? Or maybe you’ve heard the buzz around VMware Cloud on AWS and just want to know more. At vForum Online, you get to talk these things through with the people who know them best: the experts.
    During our “Chat with the Software-Defined Data Center Experts” session, VMware pros will be live on video, answering your questions about everything from hyper-converged infrastructure and the SDDC to virtualization, containers, OpenStack, and more.
    In the  “What’s New with VMware Cloud” chat session, the experts will give the scoop on VMware Cloud on AWS. Our cloud pros will be ready to discuss the VMware cloud strategy, requirements for a modern infrastructure, and how to create successful planning roadmaps in preparation for digital transformation to the hybrid cloud.
  3. Practice With the Products in Hands-On Labs
    Theory is great, but nothing beats practical experience. And that’s what you’ll get with our hands-on labs. During these sessions, you’ll learn how to work with a modernized infrastructure by building your own SDDC and discovering for yourself what’s new with vSphere 6.5. Discover, make mistakes, and explore in real time what it takes to create a modern infrastructure.

vForum Online Fall 2017 is fast approaching. Don’t miss this chance to learn about the modern infrastructure with your peers and experts in the IT community. Register for the event today, and mark your calendar. We hope to see you there.

Source: VMware vSphere Blog

4 Vital Steps to Open Source Success in Your Company

By Meng Chow, Staff Open Source Program Manager, VMware

Open source software is gaining serious traction throughout many industries. A 2016 survey conducted by Black Duck Software found that virtually all companies rely on open source software in their product development. This indicates that open source is widely adopted and is becoming a strategic part of every company’s software portfolio. As we reap the benefits of open source software and make it an essential part of our development process, what does it take to achieve production-quality, well-supported open source software?

1.    Open Source Licensing

First and foremost, open source offers users the freedom to access technology tools. Because of this ease of access, there is a general misconception that open source is free. On the contrary, open source software comes with license obligations. For example, the GPL gives you the freedom to use, copy and modify the software. However, if you sell or otherwise distribute that software, the corresponding source code must be made available to its recipients under the same GPL license.

As such, choosing the right open source license that aligns with business goals is critical. At VMware, we take a proactive approach toward open source. We start out by learning and understanding the philosophy behind open source. Then, we focus on ensuring the open source license is compatible with the needs of our customers.

2.    Use Open Source Responsibly

With the freedom to access great technology tools, what obligations do we have as users of open source software? Eleanor Roosevelt has this great quote: “With freedom comes responsibility.”

By harnessing the collaborative prowess of the open source community, we don’t have to build everything from scratch. Shared development allows the community to build great solutions with shared benefits. This enables us to accelerate product time-to-market and simultaneously reduce development costs.

The responsibility hinges on having a long-term strategic plan to support and maintain the software. Besides complying with the license terms and mitigating security risks, it is essential to have a delivery infrastructure with well-documented steps to ensure software updates are deployed to customers in a reliable, repeatable fashion.

3.    Open Source Community Collaboration

Since there is no formal technical support process from the open source community, VMware participates in active, vibrant open source projects that depend on a collaborative community to ensure the best quality and support. We have ongoing community review for projects so multiple eyeballs can look at a project’s code and provide critical peer review. This accelerates the discovery of defects, and oftentimes issues are discovered and fixed before we even encounter them.

Contrast this to dormant open source projects where nobody is improving and maintaining the code. Under these circumstances, you have to find and fix your own issues. This can be super challenging, especially if you were not involved in the development of the code from the beginning.

For successful open source production, think about the leap one has to take going from a developer sandbox to a production environment. At VMware, we carefully select and use the highest quality open source components at every stage of the development cycle. Augment that with the in-house expertise of a dedicated team of quality and security engineers, and this allows us to deliver the most robust software to our customers, complete with assurance of quality and reliability.

4.    Open Source Compliance

From a risk mitigation standpoint, open source compliance is not just a legal exercise, nor is it just security risk management. We address open source compliance via the collaborative partnership of cross-functional teams. All facets of the company contribute to the management of open source usage to ensure proper compliance. Here’s how each team contributes to open source compliance:

  • Technical Education Team: Education is a key part of our compliance process. The Technical Education team provides online resources and classroom training to educate new employees about proper usage of open source software. In addition, ongoing training is carried out to ensure employees have a good understanding of the policies governing the use of open source software, including process updates and tool improvements.
  • Tools Engineering Team: To facilitate the open source review process, tooling is a key part of our compliance infrastructure as well, offering streamlined opportunities via integration with the software development process. The Tools Engineering team automates as much as possible to maximize developer productivity. They monitor and measure against set targets, and provide visibility and transparency of the end results, including clarity around how decisions are made. These self-checking, self-correcting mechanisms allow us to build the right mindset and culture around using open source.
  • Product Security Team: Addresses security needs in the initial product planning stages so that security is built into the product to begin with. Considering security up front is a fundamental aspect of secure development. Within each Product Team, functional groups such as Product Development, Quality Engineering, Release Engineering and Technical Support work closely with Product Management to choose the product release cadence that meets customer needs.
  • Legal: Elucidates the license obligations of open source to ensure we choose the right open source component during product development.
  • IT: Provides online resources to support the use of open source software—repositories, wikis, application frameworks and bug/issue tracking tools, to name a few.


Conclusion

At first glance, an open source solution has the potential to reduce license costs and lower capital expense. However, the operational aspects of deploying open source software in production can be complex, with structural and strategic implications that must be factored into consideration. Once you implement a strategic, long-term support plan that mitigates risks proactively, open source software has the potential to lower your total cost of ownership and simultaneously improve business agility.

Which of these viewpoints resonates with you? What best practices come to mind for ensuring the production quality of open source software, and how would you implement these four steps in your organization? Let us know in the comments!

Source: Open Source @VMware

Watch On-Demand: Tiejun Chen Talks “Unikernels and Explorations” at Open Source Summit North America

Earlier this month, The Linux Foundation hosted Open Source Summit North America in Los Angeles. The technical conference hosted more than 2,000 technologists and open source community members who collaborated and shared insights across a wide variety of topics.

Tiejun Chen, an open source expert and VMware China staff engineer, tackled unikernels in his presentation, “Unikernels and Explorations.” Unikernels are specialized, single-address-space machine images constructed using library operating systems (OS).

Tiejun delved into the major challenges facing unikernels and explored how developers can construct the best platform for running unikernel use cases, such as converting Linux into a unikernel, for greater performance and convenience from these specialized machine images.

Compared to traditional virtual machines (VMs) or containers, unikernels are ideal for cloud environments because they:

  • Offer more efficiency and security
  • Leave smaller footprints
  • Enable greater optimization and faster boot times

With all these benefits, why haven’t unikernels gained wider popularity?

Watch Tiejun’s full Open Source Summit North America presentation on “Unikernels and Explorations” below, courtesy of The Linux Foundation.

Want to watch more presentations from Open Source Summit North America 2017? Click here to access the playlist.

To stay up to date on the latest around the open source community and VMware, follow us on Twitter @VMWOpenSource.

Source: Open Source @VMware