vRealize Automation – VMware Cloud Management

vRealize Automation Can Manage IBM PowerVC


vRealize Automation simplifies the management of Amazon AWS, KVM, and vSphere; this is nothing new, and it gives vRealize Automation broad coverage of the heterogeneous cloud. Until now, however, the PowerVC part of this space has been unmanageable by vRealize Automation. With PowerVC support, vRealize Automation can manage these heterogeneous clouds and give users the right solution, in the right cloud, through a single experience.

Fast forward to vRealize Automation 7.1: IBM PowerVC 1.2.3 and later ships with a fully compliant OpenStack implementation, starting at the Juno release. Using the OpenStack endpoint type, vRealize Automation can manage Power Systems the same way it manages x86 hypervisors. This unifies x86 and IBM PowerVC workloads under a single management plane and gives users a single experience.

vRealize Automation Configuration

Configure the DEM workers for TLS 1.2:

  • Every DEM-worker node must be configured to support proper TLS 1.2 communication between the DEM-worker service and the IBM PowerVC instance.
  • Disable RC4 in the Microsoft .NET Framework.
  • On each DEM-worker node, follow Microsoft Security Advisory 2960358 and install the necessary security update for Microsoft .NET Framework 4.5.2. This update disables RC4 in the TLS protocol. It also changes the default SSL/TLS protocols from TLS 1.0 | SSL 3.0 to TLS 1.2 | TLS 1.1 | TLS 1.0 when the node runs a .NET application on the .NET 4.5 runtime or higher.

Check the IBM PowerVC SSL Certificate

If DNS is not configured on the PowerVC system, PowerVC will only work properly when accessed by IP address. Check this by examining the X.509 certificate presented by PowerVC to see whether the hostname is included in the certificate. If name resolution is the problem, correct it on the system, then run the following command to reconfigure the PowerVC instance with the hostname:

powervc-config general ifconfig --set

Install IBM PowerVC SSL Certificate

  • If the PowerVC instance presents a self-signed or otherwise untrusted certificate, it must be installed into the Trusted Root Certification Authorities store for the Computer account on each DEM-worker node.
  • Obtain the certificate and use the Certificates snap-in from the Microsoft Management Console to install it.

Endpoint Configuration

Configuring the PowerVC endpoint works just like configuring any other OpenStack endpoint.

  1. Enter your endpoint address; for PowerVC this is typically:
     https://openstack.mycompany.com/powervc/openstack/admin
     (a quick reachability check for this address is sketched below)
  2. Choose a credential or create a new one using the credential dialog.
  3. Set the OpenStack tenant name as the OpenStack Project (note: if you add multiple tenants, make sure to tie your blueprints and reservations together using reservation policies).
  4. To use the OpenStack Keystone v3 identity provider when connecting to IBM PowerVC, add the VMware.Endpoint.Openstack.IdentityProvider.Version custom property to the vRA OpenStack endpoint and set its value to 3.
vrealize-automation-2016-10-13-17-44-42
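
Before relying on the endpoint, it can be worth confirming that the PowerVC OpenStack services are reachable from the vRealize side. The following vRealize Orchestrator scriptable-task sketch does a simple GET against the identity service; the host URL and identity path are placeholders to adapt to your environment, and the PowerVC certificate must already be trusted by vRealize Orchestrator for the call to succeed:

    // Minimal reachability check from vRO (sketch only; adjust the URL and path for your environment)
    var host = RESTHostManager.createHost("powervc-check");
    host.url = "https://openstack.mycompany.com";  // the PowerVC address used in the endpoint
    host = RESTHostManager.addHost(host);          // note: this registers the host in the vRO inventory
    var request = host.createRequest("GET", "/powervc/openstack/identity/v3", null);
    var response = request.execute();
    System.log("PowerVC identity service answered with HTTP " + response.statusCode);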

Now you can build blueprints for x86 and Power Systems from the same portal. Stay tuned for a follow-up on using vRealize Orchestrator to customize AIX, Linux, and IBM i LPARs deployed from vRealize Automation.

That’s it: make vRealize Automation manage your whole cloud. Any cloud, anywhere!

vRealize Automation 7.2 – What’s New

 

Enhanced Ease of Use with Out-of-the-Box Support for Azure and Containers

To better address the needs of developers and IT teams, VMware vRealize Automation 7.2 brings new out-of-the-box support for Microsoft Azure as well as new container management capabilities. With this new release, the ability of IT and DevOps practitioners to use unified service blueprints to simplify the delivery of integrated multi-tier applications with application-centric networking and security has been extended to Microsoft Azure, in addition to the clouds already supported out of the box, including Amazon Web Services (AWS), VMware vCloud Air and the vCloud Air Network.

REGISTER HERE to be notified as soon as the vRealize Automation 7.2 download becomes available.

The Azure endpoint will include:

  • Subscription and Active Directory user information
  • Reservations and integration with the governance model
  • Blueprint creation with Azure VMs, storage disks and NICs
  • Azure networking support

Additionally, deployment life cycle operations, including start, stop, restart and delete, are possible via the endpoint.

container-catalog

Support for containers in vRealize Automation 7.2 will allow developers and application teams to accelerate application delivery. vRealize Automation 7.2 leverages Admiral, a highly scalable and very lightweight container management platform, to deploy and manage containers on Docker hosts and, through a private beta, on virtual container hosts on VMware vSphere Integrated Containers. Developers will be able to provision container hosts from the vRealize Automation service catalog as well as model containerized applications using unified service blueprints or Docker Compose. Application teams will have the ability to build hybrid deployments consisting of VMs and containers. Cloud administrators will be able to manage container hosts and apply governance to their usage, including capacity quotas or approval workflows. vRealize Automation 7.2 is well suited for organizations running existing apps while also modernizing them through the adoption of microservices and a cloud-native architecture.

container-hosts

Learn More

 

Need help deploying your private cloud infrastructure, expanding to the public cloud, or developing your business justification? Contact us and our experts can help your team build the business case and the solution that will maximize your IT productivity.

For exclusive content and updates, follow us on Twitter @vRealizeAuto and subscribe to our VMware IT Management blog.

One More Step Forward: vRealize Automation + Puppet

PuppetConf 2016 is kicking off this week in San Diego. For VMware customers who plan to be at the conference, and especially those using or looking to deploy vRealize Automation, here are two sessions you don’t want to miss (Click on the link below for detailed schedule).

 

Puppet and vRealize Automation: The Next Generation, with Ganesh Subramaniam from VMware (http://sched.co/6fk4)

Can’t-miss session! Join to find out what’s new for VMware-Puppet integrations. Come and see our live demo and learn how companies like yours have achieved greater IT agility and better business results with our latest solution.

Sponsor Theater: VMware – Better Together: Deploy, Manage and Orchestrate Applications with Puppet and VMware vRealize Automation (vRA), with Daniel Jonathan Valik from VMware (http://sched.co/7sor)

Want to learn more about how Puppet is integrating with various VMware products in the automation space, especially the vRealize Suite? Join this session to hear all the details about features and capabilities, as well as a step-by-step guide on how to connect all these parts together in practice.

 

As a partner of our ecosystem, Puppet has been working closely with VMware to provide integration solutions that benefit our joint customers. By extending vRealize Automation self-service XaaS to embed Puppet’s configuration management, the current integration enables customers to centralize life-cycle management and standardize configuration from “a single pane of glass” using the vRealize Automation console.

The vRealize Automation platform is designed with a flexible architecture that enables customization and extensibility at multiple levels. Collaborating with partners, VMware has extended its cloud management platform to all areas of the infrastructure and cloud environment to meet unique customer needs. The product currently provides various integration solutions with leading vendors of IPAM, load balancing, service desk, physical endpoint, configuration management, and many more.

This year, to better serve our joint customers who want to reduce the provisioning delay of cloud resources with a fully automated, self-service workflow, Puppet and VMware have committed to further improving the solution with a next-generation integration. Looking forward to hearing more? Please stay tuned for Oct 20th at PuppetConf!

Top 10 Sessions on DevOps, Continuous Delivery, Code Stream and Code Stream Management Pack for IT DevOPS at VMworld 2016!

A couple months ago we blogged about the top sessions on DevOps and vRealize Code Stream, as well as the vRealize Code Stream management pack for IT DevOPS that we presented at VMworld 2016 in Las Vegas. As many of you (our customers and partners) are also traveling to VMworld 2016 in Barcelona, I would like to give you an overview on the top 10 sessions on those topics for that event as well.

For many organizations today, success hinges on the time-to-market and quality of the applications they are producing. To speed up the software delivery process and bridge the gap that is often prevalent between development and operations teams, organizations are turning to DevOps.

What products and services from VMware can support your DevOps strategy?

First of all, VMware has several products and services that help customers and partners on the journey to realize their DevOps strategy. Let’s talk about products: if you are not aware of the vRealize Suite, then you should have a look here. The vRealize Suite targets specific customer use cases such as “Intelligent Operations”, “IT Automating IT” and “DevOps-Ready IT” and provides multiple tools and applications to make the overall software development and delivery process more efficient. The vRealize Suite includes our developer- and IT-focused tools such as vRealize Automation, vRealize Operations, vRealize Log Insight and vRealize Business for Cloud.

Another important add-on is vRealize Code Stream (vRCS) and the latest vRCS Management Pack for IT DevOps (also known as “Project Houdini”), which provides a variety of features and capabilities to allow a more efficient release automation process for IT artifacts such as blueprints, workflows, templates and infrastructure as code in general. Both of those areas are extremely important in order to achieve your DevOps strategy, and the choice of the right tools is a mandatory planning step. Let’s go into some more detail about these two solutions:

  • Software Release Automation: Code Stream automates the software release process at each stage in the software delivery pipeline to assure speed and consistency through the entire process. Customers who are looking to automate their release process typically want to achieve continuous delivery for the purpose of shortening software delivery cycles and improving quality. Code Stream integrates with existing software development, testing, artifact management and build systems to orchestrate the tasks that need to be performed at each stage in the delivery process.
  • IT Artifact Lifecycle Management: Code Stream, when combined with the free “Management Pack for IT DevOps”, helps IT administrators manage the artifacts of their software-defined data center. This includes artifacts like vSphere templates, vRealize Orchestrator workflows and vRA blueprints, to name just a few. Code Stream captures, stores, version controls (roll forward/backward) and distributes these artifacts between tenants and independent instances of vRealize Automation, vRealize Orchestrator, vRealize Operations or vCenter Server.

Need help? VMware provides you with a great DevOps practice through our consulting services…

If you need help on the way to your DevOps ready IT strategy, we certainly are happy to support you. Our VMware DevOps consultants enable and equip our customers with the technologies and tools to empower project teams, and provide the necessary organizational change management programs to ensure successful DevOps adoption. If you would like to know more about the services that we provide, please have a look here. 

If you would like to contact our professional services team, please have a look here.

Want to learn more? Please visit the upcoming VMworld 2016 conferences!

Our team is looking forward to meeting you at the VMworld 2016 conference and to telling you more about DevOps, the mentioned use cases like “Intelligent Operations”, “IT Automating IT” and “DevOps-Ready IT”, and what products and services can support you in this space. Here is a list of upcoming breakout sessions at the conference – we look forward to meeting you there!

Session Number and Session Title: 

Please use the Schedule Builder to get date, time and logistics for the following sessions, click here:

SPL-1706-SDC-2, DevOps-Ready IT with vRealize Code Stream

SPL-1721-USE-4 vRealize Automation for DevOps

DEVOP9093 Panel Discussion: How I Survived the DevOps Transition

DEVOP767, vRA, API, CI, Oh My!

MGT8807, What’s New in vRealize Code Stream

CNA7806 – Pivotal & VMware – making the impossible, possible

CNA7739-GD, Cloud Native Apps: State of the Union

MGT8499, Moving to Infrastructure as Code: How Fannie Mae Is Managing vRealize Suite Artifacts with Code Stream

MGT8831, Digital Transformation Through VMware DevOps Code Stream

DEVOP7788, Industry Perspective: Enterprise Reality of Doing DevOps

MGT8763, How SKY Got Their Cloud DevOps Ready Using SDDC, NSX, AWS and Azure!

 

Any questions? Just contact VMware or send me an email to: dvalik@vmware.com

 

Building Solutions for IBM Power Systems Infrastructure with vRealize Automation 7.1

Last week we showed how to configure vRealize Automation to provision AIX, Linux, and IBM i LPARs for IBM PowerVM using PowerVC and its OpenStack support, and the questions came rolling in. What can I do with that infrastructure? How do I customize it? How do I apply IBM’s 250+ Cloud Builder patterns? vRealize Automation makes this all very easy using the built-in Event Broker combined with vRealize Orchestrator extensibility.

vRealize Automation Blueprinting and Guest Customization

Creating blueprints in vRA 7 for OpenStack works just like it does for other endpoints using the converged blueprint canvas.  In today’s example we’ll use vRealize Automation to provision an LPAR and use an Event Broker callout to run a vRealize Orchestrator workflow to drop a file on the machine, followed by running an SSH command on the machine.

  • Start by opening the blueprint Designer.
  • Drag a new OpenStack machine onto the canvas.
  • Select an image that has been data-collected from your PowerVC endpoint
    screenshot-2016-10-14-12-19-54-1
  • Set any custom properties that need to be passed to the Event Broker call-out. To do this, you’ll need to set a custom property on the blueprint to define the custom properties that are sent with the payload to vRealize Orchestrator from the Event Broker.
    • Create a custom property named: Extensibility.Lifecycle.Properties.VMPSMasterWorkflow32.MachineProvisioned
      Note: the name of this property is specific to the Event Broker event selected. In this case, we will be calling vRealize Orchestrator workflows when a machine is provisioned.
    • Add a comma-separated list containing the names of each custom property you would like to have exposed to the vRealize Orchestrator workflow.
      Note: As you can see in the screenshot below, we have created a custom property for the SSH command we will run within the vRealize Orchestrator workflow, and we are passing that custom property in via the payload by adding it to the value of the Extensibility.Lifecycle.Properties.VMPSMasterWorkflow32.MachineProvisioned property. (An illustrative property listing follows this list.)
      screenshot-2016-10-14-12-03-25-1
  • Save your blueprint and publish it!
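
For illustration, the relevant blueprint properties might end up looking like this (the SSHCommand property name and its value are only examples; any property listed in the value of the extensibility property is forwarded to vRealize Orchestrator in the payload):

    Extensibility.Lifecycle.Properties.VMPSMasterWorkflow32.MachineProvisioned = SSHCommand,VirtualMachine.Network0.Address
    SSHCommand = uname -a
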
Next we’ll create an Event Broker subscription that fires once the machine is provisioned. This will invoke a vRealize Orchestrator workflow which connects to the provisioned LPAR, drops an XML file on it, and then runs an SSH command against it. This mechanism can be used for any guest customization, using any file and any SSH command.
Set the Event Topic to Machine provisioning.
screenshot-2016-10-14-12-44-29-1

Set the conditions so that the workflow will only run:

  • Post machine-provisioning
  • For this blueprint only

To do so, you will need to configure the following:

  • Run based on conditions
  • All of the following:
    • Data>Lifecycle state>Lifecycle state name Equals VMPSMasterWorkflow32.MachineProvisioned
    • Data>Lifecycle state>State phase Equals POST
    • Data>Blueprint name Equals AIX 7.1
screenshot-2016-10-14-12-47-23-1

Finally, pick the workflow to run and finish the event subscription:

screenshot-2016-10-14-12-48-32-1

 

The vRealize Orchestrator workflow can be quite simple, although additional steps and error handling may be needed in a production environment. In this case, the workflow will do the following:

  • Write the payload from vRealize Automation into an XML file. The XML file will be saved to the vRealize Orchestrator server. Note: the payload will only include the custom properties that were added to the extensibility lifecycle property on the blueprint (e.g. Extensibility.Lifecycle.Properties.VMPSMasterWorkflow32.MachineProvisioned). In our case, we included the out-of-the-box property for the assigned IP address (VirtualMachine.Network0.Address) so that we could access the newly provisioned VM for the SCP and SSH commands.
  • Use an SCP put command to copy the file from the vRealize Orchestrator server to the newly provisioned VM. This is an out-of-the-box workflow; the custom workflow calls it and passes in the appropriate parameters (e.g. source file name, remote file name, user name, password).
  • Run an SSH command against the newly provisioned VM. This is also an out-of-the-box workflow; the custom workflow calls it and passes in the appropriate parameters (e.g. command, user name, password). Note: in this case the vRealize Orchestrator workflow uses the command defined in a custom property on the blueprint.
screenshot-2016-10-13-15-30-49

For the purposes of this workflow, the entire payload from the vRealize Automation deployment is written to the XML file and sent through to the newly provisioned machine. The code to write the XML file looks like this:

screenshot-2016-10-13-15-35-23-1
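
The original scriptable task is shown only in the screenshot above, so here is a rough, minimal sketch of what such a task might look like (the payload input name, the target path, and the flat XML structure are assumptions, and writing to the path must be permitted by the vRealize Orchestrator file-system access settings; a production workflow would add error handling):

    // Assumed workflow input: payload [Properties] - the Event Broker payload handed to the workflow
    // Serialize the top-level payload properties into a simple XML document
    // (nested Properties values are simply stringified in this sketch)
    var xml = "<payload>\n";
    for each (var key in payload.keys) {
        xml += "  <property name=\"" + key + "\">" + payload.get(key) + "</property>\n";
    }
    xml += "</payload>";

    // Write the file on the vRealize Orchestrator appliance (path is illustrative)
    var filePath = "/tmp/vra-payload-" + new Date().getTime() + ".xml";
    var writer = new FileWriter(filePath);
    writer.open();
    writer.writeLine(xml);
    writer.close();

    System.log("Wrote vRA payload to " + filePath);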

Pulling this all together we can now manage infrastructure and applications on our x86 virtualization stack and IBM Power Systems stacks with the same user experience, governance and reporting capabilities.  This abstracts the infrastructure from the application and gives users the applications that they need in the right cloud without being concerned by the complexity of managing multiple clouds.

A big thanks to Mandy Botsko-Wilson for contributing this content.

Management vs. Monitoring

I had the pleasure this week of hosting an expert panel on Intelligent Operations Management at the Boston VMUG USERCON https://www.vmug.com/bosvmug.

The panelists included:

The recurring theme from panelists and attendees was that cloud computing is forcing IT organizations to drastically rethink how they manage their data centers. The conversation centered on a few key areas, including management vs. monitoring, automation and convergence.

Management vs. Monitoring

Michael Sheehan kicked off a spirited conversation, stating that customers need to leverage management solutions like vRealize Operations http://www.vmware.com/products/vrealize-operations.html more strategically, as “management solutions vs. monitoring tools.” Thinking further about Michael’s comments, I believe he is spot on. Historically, management has been a very reactive game of chasing events, with operations teams often playing defense vs. offense. Operations teams often stare at screens, waiting for an alert when something goes wrong, and then spend valuable time and resources firefighting the issue.

But cloud computing requires a much more proactive approach to operations management that expands the conversation to the needs and requirements of the business. When considering cloud solutions, costing, performance and capacity become critical parts of the puzzle, and can shift dramatically based on the particular application or business use case. Michael’s point is important, because it requires a subtle but important shift in the way we approach managing complex, hybrid cloud environments. Michael and his team have elevated operations beyond monitoring to true management by aligning their operations strategy with the unique needs of a fast-growing biomedical business and research organization. By making this shift, they have also transformed how they are perceived within the four walls of their organization: as innovators and enablers of the research team, who can now spend their time creating breakthrough treatments and pharmaceuticals.

Automation and Convergence

The second half of the panel focused on an equally important topic, automation and convergence, which was led by Marcus Puckett, Thomas Bryant and Mike Eisenberg. When many people think of automation, they think of products like vRealize Automation http://www.vmware.com/products/vrealize-automation.html, which have revolutionized the way IT groups provision and deliver new services. But the panel was quick to point out that automation is a continuum, which starts with a solid operations management foundation. The panel emphasized that vRealize Operations should be working hand-in-hand with vRealize Automation, ensuring that new services are being provisioned onto bulletproof infrastructure. Michael Sheehan made another very important point, stating that the industry has to align its operations management strategies with more of a DevOps mentality, to help increase agility and respond more quickly to the requirements of the business.

Another theme highlighted by the panel was convergence, meaning that operations teams can no longer manage compute, network and storage domains as individual stovepipes. The panel made the point that operations teams that continue to cling to this legacy approach, whether because of internal politics or a desire to maintain the status quo, will quickly be left behind because of the increasing pressures and economies of cloud computing. Panelists emphasized that managing cloud means having visibility across all domains, and that virtualization has effectively broken down the walls between the compute, network and storage domains.

Thank you to the panelists for sharing their insights during this week’s Boston VMUG USERCON. If you are interested in learning more about best practices for managing cloud and virtual environments, please register for our upcoming webcast on November 15th. https://vts.inxpo.com/scripts/Server.nxp?LASCmd=AI:4;F:QS!10100&ShowKey=35482&AffiliateData=UWSocial_vSphereBlog.

Cross-cloud Container Management with vRealize Automation

By Jim Bugwadia, Nirmata

Software is redefining our world, and the businesses who deliver software the fastest will win. Enterprise development teams are increasingly adopting application container technologies (like Docker) to accelerate business agility and enable DevOps best practices.  However, operationalizing application containers remains a daunting challenge. Let’s take a look at how a vRealize Automation based cross-cloud container management solution can enable business agility and provide a clear separation of concerns across IT Ops and development teams.

Cross-cloud Container Management

According to a recent IDC survey, almost half (47%) of organizations that recognize DevOps as their primary development strategy expect to rely on five or more clouds by 2020.

vRealize container Automation

Cloud Management Platforms (CMPs), like vRealize Automation (vRA), provide a common layer of management and governance across clouds. vRA is also designed for extensibility, which makes it possible to leverage extensions and best-of-breed tools from the ecosystem. While CMPs address key challenges for virtualization and infrastructure management, additional tools are needed for cloud-native application lifecycle management. This is exactly what Nirmata is designed to do.

Nirmata

Nirmata provides adaptive application management without lock-in to any cloud provider or container technology. Nirmata is delivered as a cloud service and fully automates the deployment and operations of cloud applications. Nirmata offers policy-based scheduling, integrated microservices tooling, monitoring, analytics, alarms, access controls, audit trails, and more – basically everything you need to easily deploy and operate containerized applications across any public or private cloud. Nirmata is designed to solve a complex set of problems in a simple manner. As we looked at integrations with CMPs and vRealize Automation, we came across another team who adopted the same principles and the same simple user experience methodology – SovLabs.

SovLabs

SovLabs’ vRA modules eliminate the pain of building and managing custom extensions and workflows by simplifying complex integrations across a multitude of technology stacks using a platform-native, software-driven approach. Using this approach, SovLabs can provide the best possible end-user experience and value available in conjunction with vRA. Given the obvious match to Nirmata we decided to partner with SovLabs to integrate our solution and vRealize Automation. The solution is rich and full-featured, but let’s take a look at how major use cases we heard from our customers are handled:

vRealize automation container

Self-service Catalog of Containerized Applications

Using the solution, IT operations teams can use vRA to deploy and manage container hosts. This allows using existing best practices, and complete control and visibility into how container hosts are provisioned and used.

vRA plugin container

There are several SovLabs modules that aid in provisioning container hosts, which are typically VMs themselves. Some popular SovLabs modules include Custom Host Naming, Infoblox, BlueCat, SolarWinds, Active Directory Registration, Puppet, F5 and ServiceNow CMDB.

With the SovLabs Multi-Cloud Docker Container Management with Nirmata, as container hosts are spun-up, the Nirmata Host Agent dynamically registers the host with Nirmata. Each container host is assigned to a host group in Nirmata, to enable policy-based host segregation across workloads. For example, you can now easily segregate production hosts from dev-test hosts and even select the most suitable cloud resources for each, all using a common management plane.

Nirmata provides a built-in catalog of containerized applications. The SovLabs module dynamically discovers all available applications, and also discovers other policy settings that are used to manage the scheduling and deployment of containers. Users can now request containerized applications from the vRA catalog and select an environment to deploy them to. The SovLabs module will check capacity and dispatch the request to Nirmata, which allocates resources on the designated hosts, deploys the application containers, and initiates management of the application containers. The user can then be provided access information for the application.

Nirmata seamlessly supports traditional, microservices-style, and clustered applications. The containerized applications supported can range from traditional web applications to complex distributed software like Apache Mesos.

Operations of Container Hosts

Beyond deploying and managing applications, the solution also manages the lifecycle of container hosts.

nirmata3

For example, it is possible to grow or shrink the cloud-based container host clusters based on usage. For container hosts (VMs) provisioned via vRealize Automation, simply destroying the VM will automatically deregister the host in Nirmata and shift the containers appropriately within the cluster based on policy settings. This allows a lot of flexibility in managing applications and hosts, while providing service continuity.

Enable Multi-mode Applications

While containers are great for packaging and deploying most application components, enterprise applications may leverage services which cannot be easily containerized, or for which containerization may not provide the same level of benefit. A typical example of services which may need to be deployed directly on VMs is backing services such as databases and messaging software.

To address these cases, the solution enables easy provisioning and management of applications whose components span VMs and containers. When requesting a containerized application, users can easily inject properties, including the addresses of external services, which essentially creates a runtime binding between the containerized application components and the components running on VMs.

Summary

vRealize Automation is a proven solution that enables cross-cloud management. By leveraging its extensibility and flexible integration capabilities, SovLabs and Nirmata are able to deliver an innovative and highly efficient solution to address the pain points of managing application containers across clouds. Using the vRA, Nirmata and SovLabs solution, you no longer need to choose between agility and control. To learn more about the Nirmata and SovLabs cross-cloud container management solution available on the VMware Solution Exchange, click here or request a demo.

Integrating NSX with vRealize Automation – Part II

Purpose

This is a blog series on how to integrate an NSX environment with vRealize Automation so it can be consumed there. Individually these two products are great, each with its own use cases, but together they form a formidable combination for building a true software-defined data center.

In the first part of the series I covered how to install the NSX plug-in in vRealize Orchestrator and integrate it with vRealize Automation. I also covered using NSX entities in vRealize Automation reservations.

In this post I am going to cover what comes after the integration: how to consume NSX entities in vRealize Automation blueprints, and the end result.

Prerequisites:

As mentioned in the first part, before you can use the NSX entities, you need to configure them and use them in the reservation. So first configure a network profile, and then in the reservation map those profiles to the port groups created and exposed by NSX. Note: a logical switch you create in NSX will be exposed and listed as a port group here in vRA (at the end of the day, those logical switches are port groups created on the distributed vSwitch).

Also note that the security policies created in NSX will be available on the Reservation tab. The security policies you want to use in a business group therefore need to be selected on the Reservation tab; once selected, they will be available to be consumed in blueprints.

Once the above is done, we can go ahead and use those entities in a blueprint.

 

Configuring the vRA Blueprint with NSX

For this example the following scenario is taken:

  • This is a multi-tier blueprint with Web, App and DB components in it
  • The Web and App tiers will have multiple machines in a cluster so that they can scale when needed
  • The Web tier will use a dynamic load balancer, so whenever an instance is deployed from this blueprint, a load balancer will be deployed for it
  • All the VMs created from this blueprint will be placed under an activity-monitoring security policy
  • The SSH port will be automatically blocked on all the VMs through a Block-SSH security policy

The video below shows the steps to create the blueprint and use the NSX entities. It also covers publishing, entitlement and requesting the catalog item.

 

Steps Used for Configuration:

  • Create a new blueprint and use the NSX settings
  • In the designer form, use three vSphere machines (one for each tier)
  • Make the required modifications to each of them (we used the clone and linked-clone methods of creating a VM)
  • Add networks and configure them
  • Add security policies and configure them
  • Add a load balancer and make the required configuration
  • Publish the blueprint, entitle it and then consume the entity from the Catalog

 

Result of NSX Integration:

Provided below are the screenshots from the deployment stage.

As an example I have taken a multi-machine blueprint named Multimachine-Web. It has the same configuration as the example above. To request the item, go to the Catalog and click Request.

 

NSX - 1

 

Click Submit to request the item.

NSX - 2

 

The request is successfully submitted.

NSX - 3

 

Go to the Requests tab to check on the request. Click on the request number to get more details.

NSX - 4

 

Click on “Execution Information” to get detailed step-by-step execution information.

NSX - 5

 

In the detailed information you can check the completed, pending and failed (if any) steps.

NSX - 7

 

On the back end, in vCenter, we can see the new entities being created.

result-8-1

 

The job successfully completed.

 

result-9

 

Let’s check the details of a VM

 

result-14

 

This is the Web component; it automatically got two IPs.

 

result-16

 

Details of On-demand load balancer.

 

result-17

 

Here we can see that the created VM is connected to the respective Logical Switch.

result-18

 

A separate logical switch has been created for On-Demand routed network.

 

result-19

 

A new edge gateway has been created for On-Demand Load Balancer.

 

result-20

 

The parameters for monitoring on Edge.

 

result-21

 

The load balancer is automatically configured.

result-22

 

Pools are configured in Load Balancer.

result-23

 

Virtual servers are configured with IPs.

 

result-24

The created VMs are automatically added to the security policies.

 

result-26

 

Conclusion:

 

This concludes the series on NSX integration with vRealize Automation and the consumption of NSX-provided entities in vRA blueprints. In this series I covered the integration of NSX with vRealize Automation through vRealize Orchestrator, and then how to use NSX entities to build a multi-machine blueprint in vRealize Automation.

Better Together: Cloud Management Platform + Network Virtualization

Download Brochure: Cloud Management Platform + Network Virtualization

cloud management platform

Are you looking to deliver secure applications with blazing speed for your enterprise? Applications are the lifeblood of the business in today’s digital economy. Slow time to market can result in missed business opportunities, impacting revenue potential. Download the new brochure to learn more about how VMware vRealize (cloud management platform) and NSX (network virtualization) can help your business stand out from the competition.

 

Need for Automation and Intelligent Operations End-to-End

Our customers are increasingly deploying modern multi-tier applications into highly virtualized or cloud environments.  To do this, many IT organizations have tried to improve day-to-day operations for their development and QA teams by automating the delivery of infrastructure and application services. However, since most have not addressed network or security operations, faster provisioning of these services has only partially solved their businesses’ agility challenges.  Furthermore, with business success and brand reputation at stake, IT also must be prepared to continuously monitor ongoing operations to ensure quality of service and compliance.

VMware vRealize with NSX

Modern applications require a software-defined approach to their underlying infrastructure that gives businesses the speed, efficiency, and productivity needed to support ever-changing requirements.  Read the new brochure, which explores the integration between two VMware technologies: VMware vRealize (cloud management platform) and VMware NSX (network virtualization).  Together, these solutions help enterprises drive true business agility, by overcoming obstacles to speed application delivery and ensure quality of service.

Learn More:

 

The iSchool’s Cloud Management Journey

When people think about Syracuse University, their first thoughts are often about the basketball team, the Carrier Dome, and Otto — SU’s famous mascot. But there are some equally exciting Cloud Management and Architecture developments at SU’s iSchool (School of Information Studies).

cuse1studentsottoischool-logo

Joshua Lory and I recently took advantage of VMware’s Service Learning Program, to connect with iSchool students, faculty and leadership and found a team of passionate people working on innovative new cloud applications, studying the latest trends in cloud management and preparing for highly successful technology careers.

Harnessing Cloud Disruption

It is no secret that the cloud has shattered traditional approaches to IT and is quickly breaking down long-standing barriers between the application, compute, network and storage domains. The cloud is also presenting enormous challenges to schools preparing the next generation of technology professionals. The cloud has accelerated the pace of change exponentially, making it extremely challenging for faculty to stay on top of the latest developments. As soon as courses are developed, new developments in cloud technologies threaten to make the classes irrelevant.

Instead of fighting the disruptive forces of the cloud and clinging to traditional processes for developing class content, the iSchool is in the process of embracing the challenge and extending an already successful paradigm of industry partnership that it pioneered with visionaries like Jeffrey Rubin at SIDEARM Sports and several other companies, including a growing relationship with VMware. These partnerships have already provided enormous benefits to iSchool students, as well as the companies that support them and are allowing the iSchool to lay the groundwork for new classes and programs that will better position students to advise new employers on the latest cloud strategies for advancing their businesses.

As cloud adoption grows, it is no longer just a trend that matters to IT professionals; it is a discussion among C-level executives and boards of directors about how they will leverage its power to transform their businesses. Students graduating into this new world require a different skillset. Instead of building private datacenters and understanding the technical intricacies of network, storage and computing devices, they will be consulting with business leaders on which cloud solutions best align with the needs of the business.

Rapid Progress – The iSchool’s Cloud Management Journey

The iSchool has already made rapid progress with its new cloud initiatives and programs. Examples include:

class2 class3

Finally, thank you to Elizabeth Liddy, Dean of the School of Information Studies (iSchool), Art Thomas, Kim Pietro, David Molta and Sarah Weber for inviting us to the iSchool and creating an environment for me and Josh to engage and learn alongside the iSchool’s very talented students. We both appreciate the opportunity to learn and grow with you and the iSchool students as you travel your cloud journey.

 

vRealize Automation 7.2 Has Arrived

Download the latest version here

 

Enhanced Ease of Use with Out-of-The-Box Support for ServiceNow®, Azure®, and Containers.

 

As we announced last month, VMware vRealize Automation 7.2 brings new out-of-the-box support for ServiceNow® and Microsoft Azure, as well as new container management capabilities.

 

ServiceNow Support

Our new out-of-the-box integration with ServiceNow will enable users to easily propagate vRealize Automation services to the service desk catalog as well as enable the use of ServiceNow to deploy services via vRealize Automation.

 

Azure Endpoint

Azure Endpoint

With this new release, the ability of IT and DevOps practitioners to use unified service blueprints to simplify the delivery of integrated multi-tier applications with application-centric networking and security has been extended to Microsoft Azure, as well as currently out-of-the-box supported clouds including Amazon Web Services (AWS), VMware vCloud Air and the vCloud Air Network.

With the Azure endpoint, users can:

  • Configure Azure connections (endpoints) per tenant
  • Assign reservations and integrate with their governance model
  • Design blueprints with Azure resources and specify network and storage options
  • Deploy converged blueprints with Azure resources in them
  • Let vRealize Automation automatically select the most appropriate subscriptions to deploy to
  • Perform state-aware resource actions on their Azure resources

Additionally, deployment lifecycle operations, including start, stop, restart and delete, are all possible via the endpoint.

 

Container Management

 Support for containers in vRealize Automation 7.2 will allow developers and application teams to accelerate application delivery. vRealize Automation 7.2 leverages Admiral, a highly scalable and very lightweight container management platform to deploy and manage containers. Developers will be able to:

  • Provision and manage Docker hosts in VMW SDDC
  • Provision and manage multi-container apps via API or UI
  • Create and consume container networks
  • Manage private or public container image registry
  • Use policies for container placements
  • Manage resources for containers and container hosts
  • Have visibility into the operations and logs of containerized apps
  • Enable lifecycle actions for containers

Download the latest version here

 

Learn More

Need help deploying your private cloud infrastructure, expanding to the public cloud, or developing your business justification? Contact us and our experts can help your team build the business case and the solution that will maximize your IT productivity.

For exclusive content and updates, follow us on Twitter @vRealizeAuto and subscribe to our VMware IT Management blog.

Author Interview with vExpert Guido Soeldner on “Mastering vRealize Automation 7.1”

Guido Soeldner and his brothers Constantin and Jens-Henrik published their book “Mastering vRealize Automation 7.1: Implementing Cloud Management in the Enterprise Environment” in late October 2016. The three are principals of Soeldner Consult GmbH in Nuremberg, Germany.

 

We caught up with Guido to discuss their book and to get his thoughts on vRealize Automation. A vExpert, Guido has four years of experience using, training and consulting on vRealize Automation.

 

VMware: Your new book recently published. What can readers expect in “Mastering vRealize Automation 7.1: Implementing Cloud Management in the Enterprise Environment”?

 

GS: Our book covers all aspects of building a private cloud with VMware vRealize Automation. First, readers will gain an understanding of private cloud computing and learn how to design a private cloud environment. In addition to extensive design discussions, the book provides detailed hands-on instruction on how to implement vRealize Automation at the customer’s site. Large sections of the book also describe how to extend vRealize Automation by using vRealize Orchestrator and integrate it with vRealize Operations, Log Insight, vRealize Business and even third-party tools like Infoblox. We draw on extensive consulting experience, having implemented what is probably the largest German vRA installation.

 

VMware: In your role as a consultant and trainer, why are customers adopting vRealize Automation?

 

GS: With cloud computing becoming relevant to nearly everybody, many companies struggle to find the right cloud computing strategy. vRealize Automation helps those customers leverage their existing IT assets, making them ready for the cloud while keeping full control of how quickly they want to move to the cloud. While many customers are only interested in automatically deploying virtual machines and providing a self-service catalog, other customers have built impressive and comprehensive private clouds with service offerings similar to what public cloud vendors like Amazon Web Services provide. In addition, by having a fully customizable service catalog and an orchestration engine, they can always decide whether they want to implement offerings by themselves or use existing public cloud offerings. In any case, vRealize Automation greatly enhances enterprise agility while retaining full control over the service offerings and the ability to enforce governance rules within the company.

VMware: What features of vRealize Automation 7.x have captured your attention?

 

GS: vRealize Automation 7 has lots of improvements to boast. Most notable are – of course – the new blueprint designer including the application authoring capabilities, the installation wizard, the new Event Broker, and support for containers. However, it is the small enhancements in vRealize Automation 7, and also in 7.1 and 7.2, that make the private cloud admin’s life so much easier. vRealize Automation now ships with so many built-in features that the time to set up vRealize Automation and implement basic use cases is greatly reduced.

 

VMware: What components of vRealize Automation 7.x have users shown the most interest in? What is driving interest in vRealize Automation?

 

GS: Besides the many built-in features of vRealize Automation, it is the extensibility options that users are interested in. Each company focuses on different features and use cases. However, using vRealize Orchestrator together with vRealize Automation allows them to implement nearly all of them. Once vRealize Automation is fully running, customers really love the XaaS designer to publish new service offerings to their internal customers.

  

VMware: What words of advice do you have for new users of vRealize Automation?

 

GS: vRealize Automation is certainly a product with a steep learning curve. While most VMware products only need to be installed and configured to be ready for use, people should not see vRealize Automation only as a product, but as a platform on which to build their own cloud solution. Hence, having automation skills is essential. So my advice is: learn scripting and programming and use Orchestrator as much as possible. It is really a great product.

  

VMware: Where can interested readers find your book?

 

GS: You can buy the most recent version of our book at amazon.com.

A previous community edition of our book can also be found on our blog at cloudadvisors.net.

Product News: VMware vRealize Code Stream 2.2 available!

As part of the VMware product team, I’m excited to announce the availability of vRealize Code Stream 2.2! This is a very important release for vRealize users, and we hope that many of our customers will welcome the great benefits of the new features and improvements that we implemented in this version of vRealize Code Stream.

Let’s get to the facts: what is new in vRealize Code Stream 2.2? Please have a look through this blog and don’t miss the opportunity to download the trial version here.

What’s New in vRealize Code Stream 2.2:

This release of vRealize Code Stream enhances the platform’s capabilities. It extends Role-Based Access Control (RBAC) to pipeline templates, offers the ability to resume a failed pipeline execution from the point of failure, and provides support for integrating with remote JFrog Artifactory instances. It also introduces a Plug-in SDK to build custom plug-ins for vRealize Code Stream.

New Features

  • Resume from Failure
    This feature adds the ability to resume failed pipeline executions from the point of failure onward. Failures can occur because of transient issues such as a network outage or an issue with an external system. When you resume a pipeline after correcting the underlying issue for a failed task, the pipeline keeps the context of all previously executed tasks intact and allows the same pipeline execution to resume. An audit trail is also maintained as part of the execution metadata that highlights the resumed task(s), timestamp, and user information. You can identify the resumed pipeline executions on the Dashboard and Executions details page. You can resume a pipeline execution multiple times as long as you have permissions to trigger the resume action on that specific pipeline.
  • Role-Based Access Control on Pipelines
    The roles of Release Manager and Release Engineer are now extended to individual pipeline templates. You can optionally assign permissions on pipeline templates to restrict specific sets of users or groups to modify and trigger the pipeline templates. Similar to how users who have the Release Manager role can create and modify any pipeline template, when you assign a pipeline template to a set of Release Managers, the permission to modify or delete the pipeline template is only restricted to that set of Release Managers. Similarly, when you assign a pipeline template to a set of Release Engineers, only that set of Release Engineers can trigger an execution for the pipeline template. Any user that has either the Release Manager or Release Engineer role can continue to view all of the pipeline templates in the tenant. In addition to the explicit list of users who have permissions on the pipeline template, tenant administrators have the Release Manager role assigned implicitly, which allows them to rescue or clean up any pipeline template in the tenant. By default, all Release Managers can create a pipeline. When you do not add any users or groups, the following permissions are available by default:
    • All vRealize Code Stream Release Managers can modify and trigger the pipeline.
    • All vRealize Code Stream Release Engineers can trigger the pipeline.
    • A user who has the Release Manager or Release Engineer role and Tenant Administrator role has implicit access to modify and trigger pipeline templates even when not explicitly added to the permissions list.
  • Remote Artifactory Integration
    This version of Code Stream supports integration with multiple remote Artifactory instances v4.7.7 or later, which allows teams to directly integrate vRealize Code Stream with their existing Artifactory instances. Teams can now independently upgrade vRealize Code Stream and Artifactory instances, and leverage advanced deployment configurations offered by Artifactory, such as for High Availability.
  • Plug-in SDK
    Building plug-ins is now possible through the Plug-in SDK that is available from VMware. The Plug-in SDK provides all the necessary components that you need, including examples, documentation, and the build and packaging tools to allow you to build native plug-ins for Code Stream. The Plug-in SDK leverages the widely popular open source framework from Alpaca (http://www.alpacajs.org) to build interactive user interfaces from the JSON Schema. The unique plug-in architecture of Code Stream is built on the open source framework of Xenon (https://github.com/vmware/xenon), which is built to handle operations at cloud scale.

Dashboard Improvements

The dashboard now displays the overall status of pipeline executions, including those that were resumed.

  • Release Engineers can trigger pipelines.
  • Release Managers can modify and trigger pipelines.

Other Enhancements

Other enhancements for this release include:

  • Jenkins Plug-in: Added support for Jenkins Folder plug-in, which includes the ability to list and trigger Jenkins jobs organized within Jenkins sub folders.
  • vRealize Automation 7.x Plug-in: Added support for configuring the blueprint deployment count as a pipeline property variable.

Information on Licensing

To use the latest version of vRealize Code Stream, you can continue to use your 2.x license key.

For more information about licenses, see the Licensing Help Center.

Additional Documentation and Information

This release of vRealize Code Stream includes the following product documentation:

Interested in more info? Please have a look to our product page. We also have a blog for our IT and developer audience.

Additionally, here are some really good videos and content to help you get started with vRealize Code Stream:

…and of course if you have any additional questions, please contact your VMware sales or technical representative – we are happy to provide you with any additional information on vRealize Code Stream or any other VMware product.

We will also provide a webcast on all the news and updates of vRealize Code Stream very soon – stay tuned, we will provide the details and invitation link as an update to this blog.

thanks!

Daniel Jonathan Valik, dvalik@vmware.com, Linkedin

New vRealize Code Stream Management Pack for IT DevOps 2.2.0

This is a big week for vRealize users as we just launched the new version of vRealize Code Stream 2.2! In addition to all the great updates and changes included in vRealize Code Stream 2.2, we have some more news for our customers and partners, the launch of vRealize Code Stream Management Pack for IT DevOps 2.2!

But first, before we walk through all the details, what is the vRealize Code Stream Management Pack for IT DevOps?

What is the VMware vRealize Code Stream Management Pack for IT DevOps? 

The VMware vRealize Code Stream Management Pack for IT DevOps is an extensible, customizable framework that provides a set of release management processes for software-defined content with the ability to capture, version control, test, release and rollback. This management pack makes it possible to dispense with the time-consuming and error-prone manual processes currently required to manage software-defined content. Supported content includes entities from vRealize Automation, Orchestration, Operations and vSphere.

Where can I install the new version? 

This new version can be installed on a vRealize Automation 7.2 appliance/vRealize Code Stream 2.2. For more information about the installation process, see the Installation Guide documentation included in the release package of the management pack.

How can you get the vRealize Code Stream Management Pack for IT DevOps? 

The vRealize Code Stream Management Pack for IT DevOps is available at no charge to customers of vRealize Code Stream and vRealize Automation Advanced or Enterprise editions. For further information about these products or to obtain evaluation licenses, contact your VMware account representative.

What is new in this version? 

The version 2.2.0 release provides enhanced functionality in these areas:

  • Support for running the management pack on the vRealize Automation 7.2/vRealize Code Stream 2.2 appliance.
  • Support for managing content or objects from new endpoints:
    • vRealize Automation 7.2
    • vRealize Orchestrator 7.2
    • vRealize Code Stream 2.2
    • vRealize Operations 6.4
  • A new VMware Xenon-based repository on the Primary Content Server (please see the Installation Guide for important instructions when upgrading from an earlier version of Code Stream and the Management Pack).
  • Package Type Improvements:
    • Added support for Code Stream Pipelines
    • Orchestrator Packages are both deployed from and exported to source XML.
    • Zip files for Orchestrator Workflows and Actions are extracted to source XML files during capture. Note: Content is deployed from the original zip files.
    • Dependent Orchestrator Workflows and Actions for vRealize Automation package types and Code Stream pipelines are extracted to source XML files during capture. Note: Content is deployed from the original zip files.
  • Added version selector for vRealize Automation package types on Add Content Endpoint to assist with selecting the supported package types.
  • vRealize Automation 7 package types with secure string properties can be captured and deployed.
  • Other Enhancements:
    • Significant improvements in performance and scalability of Group Package Requests.
    • All content is stored at /storage/external on the Primary Content Server with MD5 checksums.
    • Existing artifacts in JFrog Artifactory can optionally be migrated to the new Xenon repository during upgrade.
    • Added day 2 action for Packages to show differences between two versions.
    • Added Repository CLI scripts.

Would you like to download vRealize Code Stream Management Pack for IT DevOps 2.2? Click here to download.

Interested in more info? Please have a look at our web site, where you can find additional resources. We also have a blog for our IT and developer audience. And of course, if you are interested in what the new version of vRealize Code Stream 2.2 offers, here is the link to our blog.

Additionally, there are some really good videos available on this topic.

…and of course, if you have any additional questions, please contact your VMware sales or technical representative – we are happy to provide you with any additional information on vRealize Code Stream Management Pack for IT DevOps, vRealize Code Stream, vRealize Orchestrator or any other VMware product.

We will also provide a webcast on all the news and updates on vRealize Code Stream Management Pack for IT DevOps in the next couple of days – stay tuned; we will provide the details and invitation link as an update to this blog very soon.

Thanks!

Daniel Jonathan Valik, dvalik@vmware.com, LinkedIn

The post New vRealize Code Stream Management Pack for IT DevOps 2.2.0 appeared first on VMware Cloud Management.

How to use API in vRO to build XaaS services in vRA


Purpose:

The purpose of this post is to show how to build XaaS services in vRealize Automation (vRA) by using the available APIs in vRealize Orchestrator (vRO). The target audience for this post is System Admins, Cloud Admins, etc. who are not full-fledged developers but have some experience in scripting and building Blueprints in vRA. Along the way, this post will also clarify how to use vRO and how to explore the API structure and use it when building custom workflows.

Introduction:

First, a bit of background for this post. Recently, I got a request from a customer to build a custom workflow for the following use case:

Build a catalog item in vRA which when requested will do the following:

  1. It should have a Form which does the following
    • Shows a list of the available business groups in the tenant. The user should be able to select one of the Business Groups
    • Shows a list of all available entitlements. The user should be able to select one or more Entitlements
    • Lets the user input a user name
  2. Once the above information is provided and the user finishes the request, the provided username should be added to the selected Business Group and Entitlements.

Obviously, all of the above tasks are typically done through normal admin tasks in vRA, but providing this workflow lets users streamline the requests and delegates the tasks. By default, users will be part of a group where they can run this workflow. So, they can run this workflow and make themselves part of the business groups and entitlements. For security reasons there will be approval policies associated with the workflow so that all the requests are moderated.

There are no out-of-the-box workflows available that do this. To build the above, we will need to build a custom workflow in vRO and expose it as an XaaS item in vRA.

It is very easy to build workflows in vRO using other pre-defined actions or workflows. But it becomes a bit more complicated when none of the pre-built workflows or actions work for you and you need to write your own JavaScript using the APIs. For the non-developer community there does not seem to be enough documentation explaining how this works. There are a few blogs that explain things a bit, but none of them clears up the finer points.

So, this post tries to explain those points while building the above solution step by step.

Few Concepts before we start:

Before I go ahead and dive into building the solution let’s cover some basic concepts:

General programming perspective:

API: Typically, a program has two interfaces: a GUI, which is mostly used by human users, and an API (Application Programming Interface), through which another program or piece of software interacts with it. Using an API, we can communicate with another piece of software programmatically.

Class: In Object-Oriented Programming, a Class is what a Blueprint is in vRealize Automation. Classes define the structure but are not the actual implementation. We use this structure to declare objects.

Object: An Object is the actual representation of a running entity, similar to a running workload in vRealize Automation. The workload is the implementation and the Blueprint is the structure. So, a Class is like a Blueprint, and an Object is the workload deployed from that Blueprint.

Method: These are the pre-defined tasks that can be performed on Objects. Taking the same analogy, methods are tasks that can be performed on the workloads. For example, “Power On”, “Power Off” and “Suspend” are tasks that can be performed on a deployed VM in vRA; similarly, length() is a defined method on a string object.

Attribute: These are the properties of an Object. For example, the name, amount of RAM, etc. of a VM are properties of that VM.

vRealize Orchestrator – (vRO) Perspective:

Action: Actions are small units of code which can take multiple inputs but always return a single value. They can be thought of as a function which, when called, performs certain tasks and returns a single value (the value may be of void type).

Module: A module is a collection of Actions.
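
To make these ideas concrete, here is a tiny, illustrative vRO action body. The action name, module and inputs are hypothetical, invented only for this example; in vRO you declare the inputs and the return type in the action editor, and the script body simply uses them and ends with a return statement:

// Hypothetical action "buildVmSummary" in a module such as "com.mycompany.examples"
// Inputs declared in the action editor: vmName (string), memoryGB (number)
// Return type declared in the action editor: string

// toUpperCase() is a method on the string object; vmName and memoryGB act like attributes of the request
var summary = "VM " + vmName.toUpperCase() + " is configured with " + memoryGB + " GB of RAM";

// An action always returns a single value
return summary;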

Implementation in vRO:

Decision Making process:

  1. Requirement: We must show a form where the user will provide input.

Process: We will use the Presentation option in vRO to ask for the input from users. We will build a Form in vRA to show the required information.

  2. Requirement: We need to pre-populate the form with the Business Group and Entitlement lists from the current environment.

Process: We will use Actions to achieve this.

There is no out-of-the-box workflow or Action which gives us a list of all the Business Groups or Entitlements. So, we will create two custom Actions which, when run, return the lists of Business Groups and Entitlements that we will use to pre-populate the form.

Actions:

The first custom action is “listBusinessGroups”. This script will return a list of Business Groups as an Array of Strings. The code for this action is provided below:

//Defining the variable of array of strings type which will hold the business group names
var bgNames = [];

// Finding the CAFÉ host and then the Business Groups in that Tenant
var cafeHost = Server.findAllForType("vCACCAFE:VCACHost")[0];
var businessGroups = vCACCAFEEntitiesFinder.getBusinessGroups(cafeHost);

// Storing the names of all the Business Groups in the variable bgNames
for (var i = 0; i < businessGroups.length; i++) {
	bgNames.push(businessGroups[i].name);
}

// returning the variable bgNames
return bgNames;

The next action is “listAllEntitlements”. When run, this script returns a list of available Entitlements. The code for this action is provided below:

// Defining the variable type of array of strings to hold the entitlement names
var entitlements = [];

// Finding the CAFÉ host and the entitlements
var cafeHost = Server.findAllForType("vCACCAFE:VCACHost")[0];
var entitlementlist = vCACCAFEEntitiesFinder.findEntitlements(cafeHost);

// Storing the entitlement names in the variable entitlements
for (var i = 0; i < entitlementlist.length; i++){
	entitlements.push(entitlementlist[i].name);
}
// returning the variable with entitlements name
return entitlements;
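
As a quick sanity check, both actions can be invoked from any scriptable task (or a throwaway test workflow) using System.getModule(). This is purely illustrative: the module path "com.mycompany.xaas" is a placeholder for whatever module you created the actions in.

// Call the custom actions from a scriptable task to verify their output
var bgNames = System.getModule("com.mycompany.xaas").listBusinessGroups();
var entitlementNames = System.getModule("com.mycompany.xaas").listAllEntitlements();

// Log the counts so the results show up in the vRO workflow run logs
System.log("Business Groups found: " + bgNames.length);
System.log("Entitlements found: " + entitlementNames.length);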

Main Workflows:

Next we will write the main Workflow to add the user to the selected business group and entitlements. The workflow name is “addUserToBG-Entitlements”. The main workflow will have two scriptable tasks, one for adding the user to the Business Group and another for adding the user to the entitlements. Details of this workflow and the scriptable tasks are provided below.

The Inputs:

Name              Type          Description
bgName            String        Business Group where user needs to be added
userName          String        User name to be added
entitlementnames  Array/String  List of entitlement names

One important point to note here: both the actions we defined earlier return an “Array of Strings” as output, but here bgName is a String whereas entitlementnames is an Array of Strings. This is because we want the user to select a single Business Group, whereas they can select multiple Entitlements.

Attributes:

Name              Type                          Description
businessGroups    Array/vCACCAFE:BusinessGroup  List of Business Groups
finalBG           vCACCAFE:BusinessGroup        Business Group where the user needs to be added
vCACACAFEHost     vCACCAFE:VCACHost             CAFE Host
entitlement       vCACCAFE:Entitlement          Entitlement where the user is to be added
entitlementslist  Array/vCACCAFE:Entitlement    A list of all the entitlements

In vRO, attributes are global variables: we can use them from any workflow element and they persist throughout the lifetime of the main workflow. For more information on the different variable types, please read the wonderful blog at http://www.vvork.info/2015/06/variables-in-vrealize-orchestrator.html.

Attributes for the workflow in vRO

Schema:

Provided below is the schema presentation of the workflow:

Schema for the vRO workflow

The first and major step is to define the form which will be presented to the user at runtime. We can do that using the Presentation tab. The input variables will automatically be shown in the Presentation tab. We need to select the variables and then add a parameter for them. For the parameter type we will select “Predefined Answer” and for the Value we will select OGNL. In the OGNL type we will select the action items we defined earlier. Provided below are the screenshots for them.

First Input Parameter Presentation

Second Input Parameter Presentation

Note that for the values we selected the respective action items.

Next we will focus on the individual Scriptable tasks.

Also provided below is the Visual binding of the parameters for the first scriptable task (Add user to Business Group):

Visual Binding for first vRO workflow

Code for the first scriptable task is provided below:

// Encapsulating the entire script in try - catch statement
try {

// Getting a list of all business groups
	businessGroups = vCACCAFEEntitiesFinder.getBusinessGroups(vCACACAFEHost);
// Running a for loop to match business group name with the user 
// provided business group name. Once the match is found get the Business
// group object
	for(i=0; i< businessGroups.length;i++){
		if (bgName == businessGroups[i].name){
			var finalBG = businessGroups[i];
			break;
		}
	}

// Getting a list of all existing users in the Business Group
	var userNames = finalBG.getUsers();
	
	var size = 0;

	if (userNames){
		size = userNames.length;
	}
	
	var alreadyExists;
	alreadyExists = false;
	
	if (size){
// Checking to see if the user already exists in the Business group
		for (var j = 0 ; j < size ; j++ ) {
			if (userName == userNames[j]) {
				System.warn("User " + userName + " is already added to the Business Group "+finalBG.getName()+". The user will be skipped.");
				alreadyExists = true;
				break;
			}
		}
	}
	if (!alreadyExists) {
// If this is true that means user is not already present in the BG
		userNames[size] = userName;
		finalBG.setUsers(userNames);
		if (!finalBG.activeDirectoryContainer) {
			finalBG.setActiveDirectoryContainer("");
		}
		System.log("Adding user to business Group " + finalBG.getName() + "...");
// Adding the user to Business Group
		vCACACAFEHost.createInfrastructureClient().getInfrastructureBusinessGroupsService().update(finalBG);
		System.log("User added to Business Group " + finalBG.getName());
	}
}
catch(errorCode){
// In case of any error throw the error
	System.error(errorCode);
	throw errorCode;
}

Visual Binding for the second scriptable task (Add user to Entitlements) is provided below:

Visual Binding of second vRO workflow

Code for the second scriptable task is provided below:

// Encapsulating the entire script block in try - catch statement
try {
	// getting a list of all the entitlements
	entitlementslist = vCACCAFEEntitiesFinder.findEntitlements(vCACACAFEHost);

	// Running the loop to get the entitlement which user specified
	// entitlementnames holds all the entitlement names which user specified
	// entitlementlist holds the list of all the entitlements.
	// note entitlementnames is an array of strings while entitlementslist is an array of entitlement object
	for( i =0; i<entitlementnames.length; i++){
		for( j=0; j<entitlementslist.length; j++){
			if ( entitlementnames[i] == entitlementslist[j].name){
				entitlement = entitlementslist[j];
				break;
			}
		}
	// at this stage we found our entitlement object
	// validating the object	
	System.getModule("com.vmware.library.vcaccafe.util").validateObject(entitlement, "Entitlement");

// Getting the CAFE host object for the entitlement
	var host = vCACCAFEEntitiesFinder.getHostForEntity(entitlement);
// Getting the entitlement service object
	var client = host.createCatalogClient().getCatalogEntitlementService();

	var alreadyExists;

	alreadyExists = false;
	
	// Checking to see if the user already exists in the entitlement
	for (var j = 0 ; j < entitlement.getPrincipals().length ; j++ ) {
		if (userName == entitlement.getPrincipals()[j].getRef()) {
			System.warn("User " + userName + " is already assigned to the entitlement "+entitlement.getName()+". The user will be skipped.");
			alreadyExists = true;
			break;
		}
	}
	if (!alreadyExists) {
// This means user is not already in Entitlement
		var principal = new vCACCAFECatalogPrincipal();
		principal.setType(vCACCAFEPrincipalType.USER);
		principal.setRef(userName);
		principal.setTenantName(host.tenant);
		System.getModule("com.vmware.library.vcaccafe.util").addElementToList(entitlement, "getPrincipals", principal);
		System.log("Assigning user to entitlement " + entitlement.getName() + "...");
		client.update(entitlement);
		System.log("User assigned to entitlement " + entitlement.getName());
	}


	}
}
catch(errorCode){
	System.error(errorCode);
	throw errorCode;
}

The explanation of the code and other detailed discussion points is provided in the video. Mainly, the following points are covered in the video:

  1. General Modes of vRO
  2. How to create custom actions and workflows
  3. How to explore and use APIs
  4. Details of the above provided codes
  5. How to build a XaaS item in vRA
  6. How to build a Form in vRA
  7. Final result

Provided below is another short video to explain how to explore the APIs in vRO and use them in scripts.

Conclusion:

This post explains the API explorer in vRO and how we can use it to write custom scripts and workflows to build XaaS services in vRealize Automation. Do let me know your feedback on this topic or any other topic that you want me to cover.

The post How to use API in vRO to build XaaS services in vRA appeared first on VMware Cloud Management.


Enable Automated Self-Service with vRealize Automation and Puppet


Editor’s Note: You may have heard about the new Puppet Plugin for vRealize Automation announced at PuppetConf this October from our previous blog. Today, we are pleased to hear from Puppet about the general availability of this new plugin. Read on for more details about how the latest integration can benefit our joint customers of vRealize Automation and Puppet.

By Lindsey Smith, Senior Product Manager at Puppet

At Puppet, most of our customers are on a path toward adopting DevOps practices and automation as a way of improving collaboration and deploying better software, faster. As part of their efforts to move faster and reduce hand-off times between development and operations teams, they are creating self-service solutions for their developers to request and immediately be provisioned with a fully configured set of infrastructure on demand.

That’s why we’ve worked with VMware to launch a new Puppet plugin for vRealize Automation (vRA) v2.0.0, available today on the VMware Solution Exchange. This plugin accelerates the delivery and operation of infrastructure by giving you a fully automated self-service provisioning workflow between vRealize Automation and Puppet. By leveraging Puppet’s massive base of existing management content, you can rapidly deliver fully-configured machines to your consumers. In some organizations, getting a new database server that is ready for production use can take up to 6 weeks. By combining vRealize Automation with Puppet, that 6 weeks can be shrunk down to 6 minutes.

With the integration, you can create blueprints for your virtual machines, using the graphical user interface in vRealize Automation. Building on your existing provisioning templates, the plugin adds automation for Puppet to configure your virtual machines, continually enforces your desired state and provides visibility into machines throughout their lifetime. We want to enable our mutual customers to be able to instantly deliver fully configured VMs to developers who request virtual infrastructure, and this new integration makes that process automated and repeatable.

puppet plugin for vrealize automation

Check out the new Puppet plugin for vRealize Automation today and let us know what you think.

 

Next Steps:

The post Enable Automated Self-Service with vRealize Automation and Puppet appeared first on VMware Cloud Management.

Demystifying vRealize Automation – Getting it Right!


Purpose:

My colleague Raminder Singh arranged a session for customers on vRealize Automation (vRA) and I was asked to deliver it. VMware has been offering vRealize Automation, in its previous vCAC and current vRA avatar, for the last few years. Yet it seems there are still some challenges faced by customers while adopting it, especially if they are trying it out for the first time. The session aimed at providing clarity and removing confusion around vRealize Automation. In particular, this blog makes clear which steps need to be performed while implementing vRA. If you want to clarify your concepts around this technology, then this post is for you.

Introduction:

Building a Cloud environment using vRealize Automation has two major parts. The first part is successfully deploying all the components of vRealize Automation and getting it running. The second part is configuring it to create a Cloud environment. In my opinion the first part is easy, but the second part requires a broader and more in-depth understanding of the concepts related to a vRA environment. Before you can configure it, the different flows and the sequence to be followed need to be understood. Keeping that in mind, in this post I will try to clarify the following:

  • Start by explaining the different components of vRealize Automation and what each of them does
  • Next, give a general description of roles and responsibilities
  • After this, explain the different logically sequential tasks which need to be performed
  • End the session with a demo showing everything that was covered in theory.

Provided below is the video of the Webex session which covers the above. Go through it and let me know if it helped you.

N.B.: For more details on vRA, refer to the official VMware documentation. The best information is always provided there.

 

 

The post Demystifying vRealize Automation – Getting it Right! appeared first on VMware Cloud Management.

How to use dynamic property definitions in vRA 7.2


Since vRA 7.2 was released we are seeing more and more customers with earlier vRA versions wanting to move to the new version. I was recently involved in a migration of a customer’s production environment from vRA 6.2 to 7.2 (if you haven’t heard of migration, check it out – it’s a totally awesome feature). When the migration ended successfully, the customer started browsing the catalog items and in a patient and polite tone said “I don’t see my property values”, while I could feel he was screaming “Aaaaaaaghhh!”, panicking on the inside.

no-values

 

You could safely presume that the idea of a major issue right before an approaching migration deadline increased my heart rate in no time. So, while my first reaction was “Ugh…”, I asked the customer if I could take a look at their 6.2 environment – it seemed they were using relationships as XML files.

category-profile-relation-62

The idea of the current example is simple – a user selects a VM Category from a list during a request, then the drop down list of network profiles gets filtered based on this category, and so does the list of networks. The network profiles bear the same name as the networks so it is easier to choose. In vRA 6 this is achieved by using long, incomprehensible and rather static (I would say boring) XML statements that only specify relationships, and those relationships can only be formed in a 1-to-1 manner. This means that in 6.2 when we choose the VM Category, only the Profile Name drop down list can be filtered, but not the Network Name. The “problem” with 7.2 after migration is that it doesn’t migrate XML code as the property value. Instead, we have to write vRO actions, which, as you may know can do anything, except maybe brewing coffee, last time I checked.

So, after overcoming the first rush of blood to my head and estimating the situation I said to the customer: “Alright, we can fix this”. Seeing a slight smile on the face of the person next to me I started working on a resolution to this problem. Firstly, I had to develop some kind of a database for my vRO actions to play with. I opened the VM Category property definition and manually entered the categories in a static list as the customer wanted them.

vm-category-72

The next problem I had to solve was the mapping between the network profiles and their respective VM Category. I could easily enter that as a static dictionary in my Javascript code in vRO, but I wanted to stick to a basic principle when coding vRO actions – keep as much of the data as possible in vRA and leave the logic to vRO. One solution could be creating another property definition as a static list featuring the relationships, but this meant the customer would have yet another set of properties to manage whenever something in their infrastructure changed, for example a network got removed. I knew that in the customer environment the relationship between VM Category and Network profiles was 1-to-many, so I implemented another approach – I filled the Description of the already migrated Network Profiles with the VM Category names. This way when the customer removes a profile, they would also remove the relationship, and vice-versa.

network-profiles-72

Next, I logged onto the vRO client, went to Design and created a new module from the Actions pane.

vro-newmodule

Then, I created a new Action and named it GetNetProfiles, which, you guessed it, could fetch the profiles based on an input called VM Category. On the Scripting tab of the Edit window I set the return type to be “Array of Strings”, because this is what a drop down list actually represents.

vro-returntype-netpr

Then I added the input parameter and called it VMCategory of type String.

vro-input-netpr

Now, the reader might already be shouting “Show me the code!”, so I will not delay this anymore.

function getNetProfiles(VMCategoryName)
{
  var host = Server.findAllForType("vCAC:vCACHost")[0];
  var netProfilesNamesList = [""];
  var model = "ManagementModelEntities.svc";
  var entitySetName = "StaticIPv4NetworkProfiles";
  var property = new Properties();
  property.put("ProfileDescription", VMCategoryName);
  var netProfiles = vCACEntityManager.readModelEntitiesByCustomFilter(host.id, model, entitySetName, property, null);
  for each (var netProfile in netProfiles)
  {
    netProfilesNamesList.push(netProfile.getProperty("StaticIPv4NetworkProfileName"));
  }
    return netProfilesNamesList;
}
if (VMCategory)
{
  return getNetProfiles(VMCategory);
}
else
{
  return ["Please, select a VM Category"];
}

So, let’s analyse the code. First, it gets a host variable that represents the default IaaS Server, i.e. your Web IaaS Service of a vRA environment. Then, we craft a property variable that will be used to filter only those Network Profiles, that have VM Category as their description. After that, we just call the vCACEntityManager class and read all Entities of Set StaticIPv4NetworkProfiles by using the aforementioned property as a filter. Finally, the names of all Network Profiles which are returned by this query get extracted and poured into an array (remember the return type we set). So, basically we wrap this into a function and call this function in the action execution, while checking if we have a selected VMCategory.
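
As an aside, the same filtering pattern can be reused for other lookups against the IaaS model. The sketch below is a hypothetical variation that filters on the profile name instead of the description; the entity set and property names are the ones already used in the action above, and the profile name value is just an example:

// Illustrative variation: look up a profile by its name instead of its description
var host = Server.findAllForType("vCAC:vCACHost")[0];
var filter = new Properties();
filter.put("StaticIPv4NetworkProfileName", "Prod-Profile");
var profiles = vCACEntityManager.readModelEntitiesByCustomFilter(host.id, "ManagementModelEntities.svc", "StaticIPv4NetworkProfiles", filter, null);
// Log how many profiles matched the filter
System.log("Matching profiles found: " + profiles.length);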

After wrapping up the script code, I created another action, this time calling it GetNetworkName, which again returns an Array of Strings and accepts an input string called NetProfile.

vro-returntype-netname

This action returns the name of the Network Profile as the name of the network in vCenter. Remember in my case I had the Network Profile name be the same as the Network name.

if(NetProfile)
{
 return [NetProfile];
}
else
{
 return ["Please, select a network profile"];
}

Now, you might be wondering how do we attach these actions to a drop down and how do we specify the input properties. Stay with me.

When done with scripting, I navigated back to the vRA 7.2 Automation Console and went straight ahead to editing a Property Definition. I selected the property with a name of VirtualMachine.Network0.ProfileName. I set the type to String and the display as a Dropdown. The values were set to “External values” and the script action was selected from the list of actions. This list is filtered based on the data type of the property, so don’t freak out if you don’t see all the actions here.

As the Input parameter I entered VMCategory and made sure the value is passed from the VM Category Property Definition I had created earlier.

prop72-profilename

After that, I edited the VirtualMachine.Network0.Name property and selected the GetNetworkName action while setting the ProfileName property as an input parameter.

prop72-netname

I made sure the display order is set correctly, so the request form will show all the properties one under the other.

Next, I headed to the blueprint that needs these properties and made sure that the VM Category is set as a Custom Property and Show in Request is set to Yes, i.e. checked.

blueprint72-vmcatprop

Since the properties for the profiles and networks are network properties, we need to check that they exist in the network configuration of the blueprint.

blueprint72-netprop

Finally, I was able to request a new VM:

req72-vmcat req72-netpr

The number of ways this approach can be modified or optimized is infinite. For example, we can use the now recommended NetworkProfileName property, which combines the Network Profile and Network Name properties into one; if the profile name differs from the network name, we could create another dictionary that maps the two (see the sketch below); or we could connect to the company’s service catalog and extract the VM Categories dynamically, and so on. I leave it to the reader’s imagination…
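
For the dictionary idea specifically, here is a minimal sketch of how the GetNetworkName action could be adapted when profile names and vCenter network names differ. The mapping values are hypothetical and would need to be maintained to match your environment:

// Hypothetical lookup table: network profile name -> vCenter network name
var profileToNetwork = {
  "Prod-Profile" : "Prod-PortGroup",
  "Dev-Profile"  : "Dev-PortGroup"
};

if (NetProfile && profileToNetwork[NetProfile])
{
  // Return the mapped network name as a single-element array (the drop down expects an Array of Strings)
  return [profileToNetwork[NetProfile]];
}
else
{
  return ["Please, select a network profile"];
}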

More on vRA 7.2.

The post How to use dynamic property definitions in vRA 7.2 appeared first on VMware Cloud Management.

Automating Infrastructure with vRealize Code Stream (vRCS) and Artifactory


Purpose:

The purpose of this post is to show how to automate infrastructure in a virtualized datacenter through vRealize Code Stream (vRCS) together with other tools like Artifactory. The target audience for this post is System Admins, Cloud Admins, etc. who are not full-fledged developers but have some experience in scripting and automation or building blueprints in vRA. Along the way, this post also clarifies how to use vRCS for any other automation purpose that you want.

Introduction:

One point of clarification before I start discussing this topic. vRealize Code Stream (vRCS) is an amazing tool which probably gives the best Continuous Integration/Continuous Delivery (CI/CD) experience for a DevOps environment. But in this post I am using the same set of tools to automate my infrastructure. There are other, traditional ways you can achieve the same result, and perhaps you have been doing it that way for a long time. But I want to showcase how easily and elegantly you can do the same tasks with vRCS. Sound interesting? Read on…

vRealize Code Stream (vRCS)-What is it?

In VMware’s words, “VMware vRealize Code Stream provides release automation and continuous delivery to enable frequent, reliable software releases while reducing operational risks”. So, vRealize Code Stream is an automation tool which gives you the following:

  • Application Delivery Automation
  • Pipeline Modeling
  • Artifact Management
  • Release Dashboard and Reports

…among many other points. The strength of the solution is its simplicity and integration capability. You typically integrate the solution with tools like Artifactory (artifact management), Jenkins (build automation) and vRealize Automation (infrastructure deployment) to automate your entire product development, build, test and deployment life cycle.

Example Setup:

A typical example of the setup is provided below:

  • Have an Artifactory server deployed in your environment. Say Java is the development tool for your organization. Integrate Maven with Artifactory. Artifactory will provide the artifact management capability.
  • Next deploy a Jenkins server. Integrate Jenkins with Git repository for source control management. Also integrate Artifactory with Jenkins. Configure Maven for Jenkins as well.
  • For testing purpose for example, use Selenium and integrate it again with Jenkins.
  • Integrate vRCS with vRealize Automation. Where you have deployed and configured vRealize Automation.
  • You can build a pipeline in vRCS where, for example, two stages are defined: Dev and Prod. Both stages start by deploying a VM in the respective area in vRA, then deploy packages from Artifactory, run some tests in Selenium and run a few Jenkins jobs.

Sample Use Case:

In the above setup, whenever a developer commits code to Git, it fires a job in Jenkins which builds the code and pushes it to the Artifactory server. You can then start the pre-defined pipeline in vRCS, which will do the following for you:

  • Deploys a VM in vRealize Automation in the Dev environment
  • Deploys the application from the Artifactory server to the newly deployed VM
  • Runs a few pre-defined tests
  • Runs a few Jenkins jobs

Depending on the output of the above tests (you can apply Gating Rules), the next stage, i.e. Prod, is executed, which runs similar pre-defined tasks.

All of this is done automatically without any manual intervention. So essentially, from the point where the developer commits the code to the production deployment, no manual task is needed. Thus, with vRCS you can essentially build a no-touch environment.

You may be thinking this sounds so developer-ish, so complex. It is actually not. It is a simple drag-drop-select environment, as you can see in the video.

What is covered?

As mentioned above, in this post I am not going to cover a DevOps life cycle. I am actually covering something completely different. For all the System Admins and Cloud Admins out there, we know we have a lot of scripts running in our environments to automate the entire infrastructure (at least the areas we can cover). For example, consider the following:

  • Today NSX has automated the network and security areas completely. But in most datacenters, after a VM running some server is deployed and all the patches and security measures are implemented, it goes to the security team for their audit. Once it passes that audit, it goes to production. What if I could automate that audit itself?
  • In most cases the servers are not connected directly to the internet and you have some kind of repository for patch management (a YUM repository, RHEL Satellite Server, WSUS server, etc.). All of these provide patch management in your environment. But what if I write my own RPMs and want to merge them into the repository? What if I write my own Python scripts which I want to run on all the servers (new or old)? What if I have a bunch of Shell, PowerShell and PowerCLI scripts and want to manage and run them from a central repository? How do I do that?

Demonstrated use case:

This is exactly what I am trying to showcase. In the given video I am demonstrating an environment where the VM deployment is done through vRealize Automation. I have an Artifactory server working as a central repository for RPM packages, Python packages, my shell scripts and other documents. I want to achieve the following simple use case:

I want to configure a pipeline which, once executed, will do the following:

  • Deploy a VM in the Dev area through vRA
  • Modify the environment in the deployed VM so that the Artifactory server is used as a central YUM repository and Python package repository
  • Download and install RPM packages and Python packages from Artifactory
  • Install and set up a web server inside the VM

I could easily do some testing and, based on the testing result, set a gating rule (I could set an approval policy as well) and run the same things in the production environment.

But since through this post I wanted to showcase other use cases of vRCS, by this point you should have a general understanding of the product (and how to use it) and can further utilize it as per your requirements.

Interested enough? Watch this 28-minute video.

 

Suggested Reads:

Greg Kullberg has an amazing series of sessions on vRealize Code Stream. I strongly suggest you watch them, especially the following ones:

Also, as I always say, your best friend is the VMware documentation for vRCS.

Conclusion:

The idea behind this post was to showcase a slightly different use case of vRCS and, in the process, explain what it is. Through the implementation of the use case you should be able to get a clear idea of how you can utilize vRealize Code Stream not only for a DevOps environment but also for automating your daily tasks.

I hope this post helps you reduce a few of the daily challenges faced in any datacenter.

As always, do give me your feedback. It is through your feedback that I get to know whether I am writing about useful topics or not. So happy reading and watching until the next post.

The post Automating Infrastructure with vRealize Code Stream (vRCS) and Artifactory appeared first on VMware Cloud Management.

vRealize Automation 7.2 Detailed Implementation Video Guide


Welcome to the vRealize Automation 7.2 Detailed Implementation VIDEO Guide. This is a collection of all the videos making up the full vRealize Automation 7.2 Detailed Implementation Guide.

The guide (and these videos) was put together to help you deploy and configure a highly-available, production-worthy vRealize Automation 7.2 distributed environment, complete with SDDC integration (e.g. VSAN, NSX), extensibility examples and ecosystem integrations. The design assumes VMware NSX will provide the load balancing capabilities and includes details on deploying and configuring NSX from scratch to deliver these capabilities.

 

01, Introduction

High-Level Overview

  • Production deployments of vRealize Automation (vRA) should be configured for high availability (HA)
  • The vRA Deployment Wizard supports Minimal (staging / POC) and Enterprise (distributed / HA) for production-ready deployments, per the Reference Architecture
  • Enterprise deployments require external load balancing services to support high availability and load distribution for several vRA services
  • VMware validates (and documents) distributed deployments with F5 and NSX load balancers
  • This document provides a sample configuration of a vRealize Automation 7.2 Distributed HA Deployment Architecture using VMware NSX for load balancing

Implementation Overview

To set the stage, here’s a high-level view of the vRA nodes that will be deployed in this exercise. While a vRA POC can typically be done with 2 nodes (vRA VA + IaaS node on Windows), a distributed deployment can scale to anywhere from 4 (min) to a dozen or more components. This will depend on the expected scale, primarily driven by user access and concurrent operations. We will be deploying six (6) nodes in total – two (2) vRA appliances and four (4) Windows machines to support vRA’s IaaS services. This is equivalent to somewhere between a “small” and “medium” enterprise deployment. It’s a good starting point that balances scale and supportability.

02, Deploy and Configure NSX

 

We will be leveraging VMware NSX in this implementation to provide the load balancing services for the vRA deployment as well as integrating with vRA for application-centric networking and security. Before any of this is possible, we must deploy NSX to the vSphere cluster, prepare the hosts, and configure logical network services. The guide assumes the use of NSX for these services, but this is NOT a requirement. A distributed installation of vRA can be accomplished with most load balancers. VMware certifies NSX, F5, and NetScaler.

(You can skip this section if you do not plan on using NSX in your environment)


 

03, Deploy vRA Virtual Appliances

 

The vRA virtual appliance (OVA) is downloaded from vmware.com and deployed to a vSphere environment. In a distributed deployment, you will deploy both primary and secondary nodes ahead of kicking off the deployment wizard.

The VA also includes the latest IaaS installers, including the required management agent (that will be covered in the next section).


 

04, Prepare IaaS Hosts

 

vRA’s IaaS engine is a .NET-based application that is installed on a number of dedicated Windows machines. In the old days, the IaaS components were manually installed, configured and registered with the vRA appliance(s). This included manual installation of many prerequisites. The effort was quite tedious and error-prone, especially in a large distributed environment.

In vRA 7.0 and higher, the installation and configuration of system prerequisites and IaaS components has been fully automated by the Deployment Wizard. But prior to kicking off the wizard, the vRA Management Agent needs to be installed on each IaaS host. Once installed, the host is registered with the primary virtual appliance and made available for IaaS installation during the deployment. While the Deployment Wizard will automatically push most of the prerequisites (after a prerequisite check), you have the option to install any or all of the prereqs ahead of time. However, the wizard’s success rate has improved greatly and is the preferred method for most environments.


 

05, Deployment Wizard

 

The Deployment Wizard is invoked by logging into the primary VA’s Virtual Appliance Management Interface (VAMI) using the configured root account. Once logged in, the admin is immediately presented with the new Deployment Wizard UI. The wizard will provide a choice of a minimal (POC, small) or enterprise (HA, distributed) deployment then, based on the desired deployment type, will walk you through a series of configuration details needed for the various working parts of vRA, including all the windows-based IaaS components and dependencies. For HA deployments, all the core components are automatically clustered and made highly-available based on these inputs.

In both Minimal and Enterprise deployments, the IaaS components (Manager Service, Web Service, DEMs, and Agents) are automatically pushed to available windows IaaS servers made available to the installer thanks to the management agent.


 

06.1, NSX Load Balancer Configuration

 

Next we’ll be configuring load balancing and high availability policies for the distributed components. An NSX Edge Service Gateway (ESG) will be providing the load balancing and availability services to vRA as an infrastructure service. vRA supports In-Line and One-Arm load balancing policies. This implementation will be based on an In-Line configuration, where the vRA nodes and the load balancer VIPs are on the same subnet.

(If you do not plan on using NSX for HA services, you can skip this configuration)


 

07, Initial Tenant Configuration

 

vIDM is policy-driven and adds a significant amount of capability over the IDVA. vRA 7 customers will gain many of the OOTB capabilities of the stand-alone vIDM product and be able to configure and manage these features directly within the vRA UI. For anyone who has used vIDM as a stand-alone solution or as part of another product (e.g. Horizon Workspace), configuring vIDM will be just as straightforward. But even if you’ve never configured it before, it is intuitive and walks you through the logical steps of setting up auth sources and advanced policies…

For Active Directory integration, vIDM Directories are configured to sync with one or more domains. AD can be configured as the exclusive provider, a backup (e.g. when 2FA fails), or as part of a more complex authentication policy. Several AD-specific policies are available to fit most use cases. vRA itself does not query AD directly. Instead, only the vIDM Connector communicates with the configured AD providers and performs a database sync (AD -> Local vPostgresDB) based on the configured sync policy. In addition to AD, vRA 7.1 added support for LDAP auth stores.


 

08, IaaS Fabric Configuration

 

The IaaS Fabric is made up of all the infrastructure components that are configured to provide aggregate resources to provisioned machines and applications. vRA’s IaaS Fabric is made up of several logical constructs that are configured to identify and collect private and public cloud resources (Endpoints), aggregate those resources into manageable segments (Fabric Groups), and sub-allocate hybrid resources (Reservations) to the consumers (Business Groups).


 

09, Creating IaaS Blueprints

 

A Blueprint is a logical definition of a given application or service and must be created prior to publishing that service in the service catalog. That includes all traditional IaaS (Windows / Linux / Multi-Tier Apps), containerized applications, and XaaS (anything as a service). An IaaS blueprint also defines the resource configuration logic for the included service(s), including CPU, memory, storage, and network resource allocations for a given machine component and defines the workflow that will be used to provision the machine(s) at request time, depending on the desired outcome.

The Converged Blueprint (CBP) Designer is a single, converged designer for all blueprint authoring. Blueprints are now built on a dynamic drag-n-drop design canvas, allowing admins to choose any supported components, drag them on to the canvas, build dependencies, and publish the finished product to the catalog. Components include machine shells for all the supported platforms, software components, endpoint networks, NSX-provided networks, XaaS components, and even other blueprints that have already been published (yes, nested blueprints). Once dragged over, the admin can build the necessary logic and any needed integration for each component of that particular service.

In this module, we will be creating a couple of example vSphere Blueprints — 1 x Windows 2012 R2, 1 x CentOS 6.7 — and preparing them to be published in the catalog (next section). Later, we’ll be adding additional configurations to each of the blueprints for more advanced use cases.


 

10, Catalog Management

 

Once the blueprints have been created and published, you make them available for consumption in the unified Catalog. The Catalog is the self-service component of vRA, which provides any number of services to consumers. But before that can happen, you must determine which users or groups (e.g. Business Group users) will have access to each catalog item. vRA uses a rich set of policies to provide granularity that ensures services are only available to users that are specifically entitled to that particular service (or action).

Catalog Management consists of creating Services (e.g. categories), assigning published catalog items to a Service, and entitling one or more Business Groups users to the item(s).


 

11, Approval Policies

 

Approval policies are optionally created to add governance and additional controls to any and all services. vRA provides a significant amount of granularity for triggering approval policies based on the catalog item, service type, component configuration, lifecycle state, or even based on the existence of a particular item. Once created, active approval policies are applied to Services, individual Catalog Items, and/or Actions in the Entitlements section.

Approval Policies can be triggered at request time (PRE) or just prior to delivering the service to the consumer (POST)…or a combination of the two. For example, a manager’s approval can be required at request time (before provisioning begins) and another approval can be required for final inspection prior to making the service available to the requesting consumer. For traditional IaaS machines, a policy can also include options that allow the approver to modify the request prior to approving (e.g. memory, CPU configuration). At provisioning time, the approver is notified of the pending request. Once approved, the request moves forward. If it is rejected, the request is canceled and the user is notified of the rejection.

In this section, we will create three approval policies — one that is triggered based on configurations (CPU count), one that requires a Business Group manager’s approval and one that is triggered when a particular day-2 action is invoked.


 

12.1, Extensibility Basics

 

It’s really difficult to summarize vRA’s extensibility in one or two paragraphs, but I’ll give it a shot. Extensibility refers to any configuration or customization that modifies vRA’s default behavior. This can include customizing the user experience at request time (e.g. adding enhanced configuration options, requiring specific inputs, etc.), incorporating ecosystem tools and binding them to a machine’s lifecycle (e.g. load balancers, CMDB/ITSM tools, IPAM, Active Directory, configuration management, and so on).

vRA’s vast extensibility capabilities can be as basic or as complex as required. But ultimately they are designed to ensure vRA is plugged in to the broader ecosystem of tools and services based on the business needs. Many lifecycle extensibility services are configured and managed within vRA’s UI (e.g. Property Dictionary, Custom Properties, AD Integration, Event Broker, and XaaS). But one of the most important components of Extensibility is vRealize Orchestrator (vRO), which can be consumed within vRA but managed in its own UI (vRO control center, vRO client).

In this module we’ll be getting our feet wet with Extensibility. I’ll provide an overview of vRA’s extensibility tools and usage and an introduction to the Property Dictionary, Custom Properties, and vRealize Orchestrator. I’ll introduce and put to use the Event Broker and XaaS — two critical pieces to vRA’s extensibility — in later modules.
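
As a small preview of what the vRO side of an Event Broker subscription can look like, here is a minimal sketch of a scriptable task inside a subscription workflow. It assumes the workflow exposes the standard "payload" input of type Properties and that the subscription is bound to a machine lifecycle topic; the exact keys available depend on the event topic you subscribe to:

// Read data handed over by the Event Broker (keys depend on the subscribed topic)
var machine = payload.get("machine");                // Properties describing the machine, if present
if (machine) {
	var machineName = machine.get("name");
	var machineProps = machine.get("properties");    // the machine's custom properties
	System.log("Event received for machine: " + machineName);
	if (machineProps) {
		System.log("Requested CPU count: " + machineProps.get("VirtualMachine.CPU.Count"));
	}
}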


 

12.2, Simple Extensibility Use Cases

 

Now that you have a general understanding of vRA’s extensibility capabilities, let’s put some of that knowledge to use. In this module we’ll be leveraging extensibility for some basic extensibility use cases. We’ll use Custom Properties to control vCenter folder placement of provisioned machines, create a Property Definition to provide resource placement options (via a drop-down) at request time, and create Active Directory policies for each Business Group to define where we want machine objects placed in Active Directory.

These are just the basics to get your feet wet. Extensibility will play a big part in many more modules later.

 


That’s it for now! Now that we’ve got the basics out of the way, the next set of videos will dive into more advanced topics, such as software authoring, container management, SDDC integration (VSAN, NSX), and several advanced extensibility use cases.

Be sure to refer back to the full guide for detailed configuration steps or more info on any given topic.

 

+++++
@virtualjad

The post vRealize Automation 7.2 Detailed Implementation Video Guide appeared first on VMware Cloud Management.
