
How to : From the Trenches – Use SharePoint to Implement an ALM in Your Organisation

After my successful creation and implementation of an ALM for Business Connexion using the SharePoint Platform, I thought I’d share the lessons I have learned and show you, step by step, how you can implement your own ALM by leveraging the power of the SharePoint Platform.


In this article:

  • An overview of SharePoint Application Lifecycle Management
  • How to plan and manage Application Lifecycle Management (ALM) in Microsoft SharePoint 2010 projects by using Microsoft Visual Studio 2010 and Microsoft SharePoint Designer 2010
  • What to consider when setting up team development environments
  • How to establish upgrade management processes
  • How to create a standard SharePoint development model
  • How to extend your SharePoint ALM to include other departments, such as Java, mobile, .NET, and even SAP development
Introduction to Application Lifecycle Management in SharePoint 2010

The Microsoft SharePoint 2010 development platform, which includes Microsoft SharePoint Foundation 2010 and Microsoft SharePoint Server 2010, contains many capabilities to help you develop, deploy, and update customizations and custom functionalities for your SharePoint sites. The activities that take advantage of these capabilities all fall under the category of Application Lifecycle Management (ALM).

Key considerations when establishing ALM processes include not only the development and testing practices that you use before the initial deployment of a single customization, but also the processes that you must implement to manage updates and integrate customizations and custom functionality on an existing farm.

This article discusses the capabilities and tools that you can use when implementing an ALM process on a SharePoint farm, and also specific concerns and things to consider when you create and hone your ALM process for SharePoint development.

This article assumes that each development team will develop a unique ALM process that fits its specific size and needs, so its guidance is necessarily broad. However, it also assumes that regardless of the size of your team and the specific nature of your custom solutions, you will need to address similar sets of concerns and use capabilities and tools that are common to all SharePoint developers.

The guidance in this article will help you create a development model that exploits all the advantages of the SharePoint 2010 platform and addresses the needs of your organization.

SharePoint Application Lifecycle Management: An Overview

Although the specific details of your SharePoint 2010 ALM process will differ according to the requirements of your organization, most development teams will follow the same general set of steps. Figure 1 depicts an example ALM process for a midsize or large SharePoint 2010 deployment. Obviously, the process and required tasks depend on the project size.

Figure 1. Example ALM process

The following are the specific steps in the process illustrated in Figure 1 (see corresponding callouts 1 through 10):

  1. Someone (for example, a project manager or lead developer) collects initial requirements and turns them into tasks.
  2. Developers use Microsoft Visual Studio Team Foundation Server 2010 or other tools to track the development progress and store custom source code.
  3. Because source code is stored in a centralized location, you can create automated builds for integration and unit testing purposes. You can also automate testing activities to increase the overall quality of the customizations.
  4. After custom solutions have successfully gone through acceptance testing, your development team can continue to the pre-production or quality assurance environment.
  5. The pre-production environment should resemble the production environment as much as possible. This often means that the pre-production environment has the same patch level and configurations as the production environment. The purpose of this environment is to ensure that your custom solutions will work in production.
  6. Occasionally, copy the production database to the pre-production environment, so that you can imitate the upgrade actions that will be performed in the production environment.
  7. After the customizations are verified in the pre-production environment, they are deployed either directly to production or to a production staging environment and then to production.
  8. After the customizations are deployed to production, they run against the production database.
  9. End users work in the production environment, and give feedback and ideas concerning the different functionalities. Issues and bugs are reported and tracked through established reporting and tracking processes.
  10. Feedback, bugs, and other issues in the production environment are turned into requirements, which are prioritized and turned into developer tasks. Figure 2 shows how multiple developer teams can work with and process bug reports and change requests that are received from end users of the production environment. The model in Figure 2 also shows how development teams might coordinate their solution packages. For example, the framework team and the functionality development team might follow separate versioning models that must be coordinated as they track bugs and changes.
    Figure 2. Change management involving multiple developer teams

Integrating Testing and Build Verification Environments into a SharePoint 2010 ALM Process

In larger projects, quality assurance (QA) personnel might use an additional build verification or user acceptance testing (UAT) farm to test and verify the builds in an environment that more closely resembles the production environment.

Typically, a build verification farm has multiple servers to ensure that custom solutions are deployed correctly. Figure 3 shows a potential model for relating development integration and testing environments, build verification farms, and production environments. In this particular model, the pre-production or QA farm and the production farm switch places after each release. This model minimizes any downtime that is related to maintaining the environments.

Figure 3. Model for relating development integration and testing environments

Integrating SharePoint Designer 2010 into a SharePoint 2010 ALM Process

Another significant consideration in your ALM model is Microsoft SharePoint Designer 2010. SharePoint 2010 is an excellent platform for no-code solutions, which can be created and then deployed directly to the production environment by using SharePoint Designer 2010. These customizations are stored in the content database and are not stored in your source code repository.

General designer activities and how they interact with development activities are another consideration. Will you be creating page layouts directly within your production environment, or will you deploy them as part of your packaged solutions? There are advantages and disadvantages to both options.

Your specific ALM model depends completely on the custom solutions and the customizations that you plan to make, and on your own policies. Your ALM process does not have to be as complex as the one described in this section. However, you must establish a firm ALM model early in the process as you plan and create your development environment and before you start creating your custom solutions.

Next, we discuss specific tools and capabilities that are related to SharePoint 2010 development that you can use when considering how to create a model for SharePoint ALM that will work best for your development team.

Solution Packages and SharePoint Development Tools

One major advantage of the SharePoint 2010 development platform is that it provides the ability to save sites as solution packages. A solution package is a deployable, reusable package stored in a CAB file with a .wsp extension. You can create a solution package by using the SharePoint 2010 user interface (UI) in the browser, by using SharePoint Designer 2010, or by using Microsoft Visual Studio 2010. In the browser and SharePoint Designer 2010 UIs, solution packages are also called templates. This flexibility enables you to create and design site structures in a browser or in SharePoint Designer 2010, and then import these customizations into Visual Studio 2010 for further development. Figure 4 shows this process.

Figure 4. Flow through the SharePoint development tools

When the customizations are completed, you can deploy your solution package to SharePoint for use. After modifying the existing site structure by using a browser, you can start the cycle again by saving the updated site as a solution package.

This interaction among the tools also enables you to use other tools. For example, you can design a workflow process in Microsoft Visio 2010 and then import it to SharePoint Designer 2010 and from there to Visual Studio 2010. For instructions on how to design and import a workflow process, see Create, Import, and Export SharePoint Workflows in Visio.

For more information about creating solution packages in SharePoint Designer 2010, see Save a SharePoint Site as a Template. For more information about creating solution packages in Visual Studio 2010, see Creating SharePoint Solution Packages.

Using SharePoint Designer 2010 as a Development Tool

SharePoint Designer 2010 differs from Microsoft Office SharePoint Designer 2007 in that its orientation has shifted from the page to features and functionality. The improved UI provides greater flexibility for creating and designing different functionalities. It provides rich tooling for building complete, reusable, and process-centric applications. For more information about the new capabilities and features of SharePoint Designer 2010, see Getting Started with SharePoint Designer.

You can also use SharePoint Designer 2010 to modify modular components developed with Visual Studio 2010. For example, you can create Web Parts and other controls in Visual Studio 2010, deploy them to a SharePoint farm, and then edit them in SharePoint Designer 2010.

The primary target users for SharePoint Designer 2010 are IT personnel and information workers who can use this application to create customizations in a production environment. For this reason, you must decide on an ALM model for your particular environment that defines which kinds of customizations will follow the complete ALM development process and which customizations can be done by using SharePoint Designer 2010. Developers are secondary target users. They can use SharePoint Designer 2010 as a part of their development activities, especially during initial creation of customization packages and also for rapid development and prototyping. Your ALM process must also define where and how to fit SharePoint Designer 2010 into the broader development model.

A key challenge of using SharePoint Designer 2010 is that when you use it to modify files, all of your changes are stored in the content database instead of in the file system. For example, if you customize a master page for a specific site by using SharePoint Designer 2010 and then design and deploy new branding elements inside a solution package, the changes are not available for the site that has the customized master page, because that site is using the version of the master page that is stored in the content database.

To minimize challenges such as these, SharePoint Designer 2010 contains new features that enable you to control usage of SharePoint Designer 2010 in a specific environment. You can apply these control settings at the web application level or site collection level. If you disable some action at the web application level, that setting cannot be changed at the site collection level.

SharePoint Designer 2010 makes the following settings available:

  • Allow site to be opened in SharePoint Designer 2010.
  • Allow customization of files.
  • Allow customization of master pages and layout pages.
  • Allow site collection administrators to see the site URL structure.
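These settings map to properties of the SPWebApplication class in the server object model, so they can also be applied in code. The following is a minimal sketch under stated assumptions: the web application URL is hypothetical, and the property names (AllowDesigner, AllowRevertFromTemplate, AllowMasterPageEditing, ShowURLStructure) are taken from the SharePoint 2010 object model and should be verified against your environment before you rely on them.

using System;
using Microsoft.SharePoint.Administration;

class DesignerSettingsExample
{
    static void Main()
    {
        // The URL http://intranet is an assumption; substitute your own web application.
        SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://intranet"));

        webApp.AllowDesigner = true;            // Allow sites to be opened in SharePoint Designer 2010.
        webApp.AllowRevertFromTemplate = false; // Disallow customization (detaching) of files.
        webApp.AllowMasterPageEditing = false;  // Disallow customization of master pages and page layouts.
        webApp.ShowURLStructure = false;        // Hide the site URL structure.

        webApp.Update();                        // Persist the changes to the configuration database.
    }
}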

Because the primary purpose of SharePoint Designer 2010 is to customize content on an existing site, it does not support source code control. By default, pages that you customize by using SharePoint Designer 2010 are stored inside a versioned SharePoint library. This provides you with simple support for versioning, but not for full-featured source code control.

Importing Solution Packages into Visual Studio 2010

When you save a site as a solution package in the browser (from the Save as Template page in Site Settings), SharePoint 2010 stores the site as a solution package (.wsp) file and places it in the Solution Gallery of that site collection. You can then download the solution package from the Solution Gallery and import it into Visual Studio 2010 by using the Import SharePoint Solution Package template, as shown in Figure 5.

Figure 5. Import SharePoint Solution Package template

SharePoint 2010 solution packages contain many improvements that take advantage of new capabilities that are available in its feature framework. The following list contains some of the new feature elements that can help you manage your development projects and upgrades.

  • SourceVersion for WebFeature and SiteFeature
  • WebTemplate feature element
  • PropertyBag feature element
  • $ListId:Lists
  • WorkflowAssociation feature element
  • CustomSchema attribute on ListInstance
  • Solution dependencies

After you import your project, you can start customizing it any way you like.

Note
Because this capability is based on the WebTemplate feature element, which is based on a corresponding site definition, the resulting solution package will contain definitions for everything within the site. For more information about creating and using web templates, see Web Templates.

Visual Studio 2010 supports source code control (as shown in Figure 6), so that you can store the source code for your customizations in a safer and more secure central location, and enable easy sharing of customizations among developers.

Figure 6. Visual Studio 2010 source code control

The specific way in which your developers access this source code and interact with each other depends on the structure of your team development environment. The next section of this article discusses key concerns to address when you build a team development environment for SharePoint 2010.

Team Development Environment for SharePoint 2010: An Overview

As in any ALM planning process, your SharePoint 2010 planning should include the following steps:

  1. Identify and create a process for initiating projects.
  2. Identify and implement a versioning system for your source code and other deployed resources.
  3. Plan and implement version control policies.
  4. Identify and create a process for work item and defect tracking and reporting.
  5. Write documentation for your requirements and plans.
  6. Identify and create a process for automated builds and continuous integration.
  7. Standardize your development model for repeatability.

Microsoft Visual Studio Team Foundation Server 2010 (shown in Figure 7) provides a good potential platform for many of these elements of your ALM model.

Figure 7. Visual Studio 2010 Team Foundation Server

When you have established your model for team development, you must choose either a collection of tools or Microsoft Visual Studio Team Foundation Server 2010 to manage your development. Microsoft Visual Studio Team Foundation Server 2010 provides direct integration into Visual Studio 2010, and it can be used to manage your development process efficiently. It provides many capabilities, but how you use it will depend on your projects.

You can use the Microsoft Visual Studio Team Foundation Server 2010 for the following activities:

  • Tracking work items and reporting the progress of your development. Microsoft Visual Studio Team Foundation Server 2010 provides tools to create and modify work items that are delivered not only from Visual Studio 2010, but also from the Visual Studio 2010 web client.
  • Storing all source code for your custom solutions.
  • Logging bugs and defects.
  • Creating, executing, and managing your testing with comprehensive testing capabilities.
  • Enabling continuous integration of your code by using the automated build capabilities.

Microsoft Visual Studio Team Foundation Server 2010 also provides a basic installation option that installs all required functionalities for source control and automated builds. These are typically the most used capabilities of Microsoft Visual Studio Team Foundation Server 2010, and this option helps you set up your development environment more easily.

Setting Up a Team Development Environment for SharePoint 2010

SharePoint 2010 must be installed on a development computer to take full advantage of its development capabilities. If you are developing only remote applications, such as solutions that use SharePoint web services, the client object model, or REST, you could potentially develop solutions on a computer where SharePoint 2010 is not installed. However, even in this case, your developers’ productivity would suffer, because they would not be able to take advantage of the full debugging experience that comes with having SharePoint 2010 installed directly on the development computer.

The design of your development environment depends on the size and needs of your development team. Your choice of operating system also has a significant impact on the overall design of your team development process. You have three main options for creating your development environments, as follows:

  1. You can run SharePoint 2010 directly on your computer’s client operating system. This option is available only when you use the 64-bit version of Windows 7, Windows Vista Service Pack 1, or Windows Vista Service Pack 2.
  2. You can use the boot to virtual hard disk (VHD) option, which means that you start your laptop by using the operating system in a VHD. This option is available only when you use Windows 7 as your primary operating system.
  3. You can use virtualization capabilities. If you choose this option, there are many variations available. But from an operational viewpoint, the option that is most likely the easiest to implement is a centralized virtualized environment that hosts each developer’s individual development environment.

The following sections take a closer look at these three options.

SharePoint 2010 on a Client Operating System

If you are using the 64-bit version of Windows 7, Windows Vista Service Pack 1, or Windows Vista Service Pack 2, you can install SharePoint Foundation 2010 or SharePoint Server 2010. For more information about installing SharePoint 2010 on supported operating systems, see Setting Up the Development Environment for SharePoint 2010 on Windows Vista, Windows 7, and Windows Server 2008.

Figure 8 shows how a computer that is running a client operating system would operate within a team development environment.

Figure 8. Computer running a client operating system in a team development environment

A benefit of this approach is that you can take full advantage of any of your existing hardware that is running one of the targeted client operating systems. You can also take advantage of pre-existing configurations, domains, and enterprise resources that your enterprise supports. This could mean that you would require little or no additional IT support. Your developers would also face no delays (such as booting up a virtual machine or connecting to an environment remotely) in accessing their development environments.

However, if you take this approach, you must ensure that your developers have access to sufficient hardware resources. In any development environment, you should use a computer that has an x64-capable CPU, and at least 2 gigabytes (GB) of RAM to install and run SharePoint Foundation 2010; 4 GB of RAM is preferable for good performance. You should use a computer that has 6 GB to 8 GB of RAM to install and run SharePoint Server 2010.

A disadvantage of this approach is that your environments will not be centrally managed, and it will be difficult to keep all of your project-dependent environmental requirements in sync. It might also be advisable to write batch files that start and stop some of the SharePoint-related services so that when your developers are not working with SharePoint 2010, these services will not consume resources and degrade the performance of their computers.
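For example, the following minimal batch file sketch stops the core services; the service names assume a default SharePoint 2010 installation and should be verified (for example, with "sc query") before use. A matching start script would run the same commands with "net start" and "iisreset /start".

rem StopSharePoint.cmd - a sketch for freeing resources on a development computer.

rem SharePoint 2010 Timer
net stop SPTimerV4

rem SharePoint 2010 Administration
net stop SPAdminV4

rem SharePoint 2010 Tracing
net stop SPTraceV4

rem SharePoint Server Search 14 (SharePoint Server 2010 only)
net stop OSearch14

rem Stop IIS so that the application pools release their resources.
iisreset /stop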

The lack of centralized maintenance could hurt developer productivity in other ways. For example, this might be an unwieldy approach if your team is working on a large Microsoft SharePoint Online project that is developing custom solutions for multiple services (for example, the equivalents of http://intranet, http://mysite, http://teams, http://secure, http://search, http://partners, and http://www.internet.com) and deploying these solutions in multiple countries or regions.

If you are developing on a computer that is running a client operating system in a corporate domain, each development computer would have its own name (and each local domain name would be different, such as http://dev1 or http://dev2). If each developer is implementing custom functionalities for multiple services, you must use different port numbers to differentiate each service (for example, http://dev1 for http://intranet and http://dev1:81 for http://mysite). If all of your developers are using the same Visual Studio 2010 projects, the project debugging URL must be changed manually whenever a developer takes the latest version of a project from your source code repository.

This would create a manual step that could hurt developer productivity, and it would also diminish the efficiency of any scripts that you have written for setting up development environments, because the individual environments are not standardized. Some form of centralization with virtualization is preferable for large enterprise development projects.

SharePoint 2010 on Windows 7 and Booting to Virtual Hard Drive

If you are using Windows 7, you can also create a VHD out of an existing Windows Server 2008 image on which SharePoint 2010 is installed in Windows Hyper-V, and then configure Windows 7 with BCDEdit.exe so that it boots directly to the operating system on the VHD. To learn more about this kind of configuration, see Deploy Windows on a Virtual Hard Disk with Native Boot and Boot from VHD in Win 7.

Figure 9 shows how a computer that is running Windows 7 and booting to VHD would operate within a team development environment.

Figure 9. Windows 7 and booting to VHD in a team environment

An advantage of this approach is the flexibility of having multiple dedicated environments for an individual project, enabling you to isolate each development environment. Your developers will not accidentally cross-reference any artifacts within their projects, and they can create project-dependent environments.

However, this option has considerable hardware requirements, because you are using the available hardware and resources directly on your computers.

SharePoint 2010 in Centralized Virtualized Environments

In a centralized virtualized environment, you host your development environments in one centralized location, and developers access these environments through remote connections. This means that you use Windows Hyper-V in the centralized location and copy a VHD for every developer as needed. Each VHD is configured to be available from the corporate network, so that when it starts, it can be accessed by using remote connections.

Figure 10 shows how a centralized virtualized team development environment would operate.

Figure 10. Centralized virtualized team development environment

An advantage of this approach is that the hardware requirements for individual developer computers are relatively modest because the actual work happens in a centralized environment. Developers could even use computers with 1 GB of RAM as their clients and then connect remotely to the centralized location. You can also manage environments easily from one centralized location, making adjustments to them whenever necessary.

Your centralized host will have significant hardware requirements, but developers can easily start and stop these environments. This enables you to use the hardware that you have allocated for your development environments more efficiently. Additionally, this approach provides a ready platform for more extensive testing environments for your custom code (such as multi-server farms).

After you set up your team development environment, you can start taking advantage of the deployment and upgrade capabilities that are included with the new solution packaging model in SharePoint 2010. The following sections describe how to take advantage of these new capabilities in your ALM model.

Models for Solution Lifecycle Management in SharePoint 2010

The SharePoint 2010 solution packaging model provides many useful features that will help you plan for deploying custom solutions and managing the upgrade process. You can implement assembly versioning by applying binding redirects in your web application configuration file. You can also apply versioning to your feature upgrades, and feature upgrade actions enable you to manage changes that will be necessary on your existing sites to accommodate feature upgrades. These upgrade actions can be handled declaratively or programmatically.

The feature upgrade query object model enables you to create queries in your code that look for features on your existing sites that can be upgraded. You can use this object model to obtain relevant information about all of the features and feature versions that are deployed on your SharePoint 2010 sites. In your solution manifest file, you can also configure the type of Internet Information Services (IIS) recycling to perform during a solution upgrade.

The following sections go into greater detail about these capabilities and how you can use them.

Using Assembly BindingRedirect with SharePoint 2010 Assemblies

The BindingRedirect feature element can be added to your web application’s configuration file. It enables you to redirect from earlier versions of installed assemblies to newer versions. Figure 11 shows how the XML configuration from the solution manifest file instructs SharePoint to add binding redirection rules to the web application configuration file. These rules forward any reference to version 1.0 of the assembly to version 2.0. This is required in your solution manifest file if you are upgrading a custom solution that uses assembly versioning and if there are existing instances of the solution and the assembly on your sites.

Figure 11. Binding redirection rules in a solution manifest file
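For illustration, the following is a minimal sketch of such rules in a solution manifest (manifest.xml). The assembly name Contoso.Example.dll and the version number are hypothetical; SharePoint writes the corresponding bindingRedirect rules into the web application’s web.config file when the solution is deployed.

<Assemblies>
  <Assembly Location="Contoso.Example.dll" DeploymentTarget="GlobalAssemblyCache">
    <BindingRedirects>
      <!-- Forward references to version 1.0.0.0 to the newly deployed version. -->
      <BindingRedirect OldVersion="1.0.0.0" />
    </BindingRedirects>
  </Assembly>
</Assemblies>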

It is a best practice to use assembly versioning, because it gives you an easy way to track the versions of a solution that are deployed to your production environments.

SharePoint 2010 Feature Versioning

The support for feature versioning in SharePoint 2010 provides many capabilities that you can use when you are upgrading features. For example, you can use the SPFeature.Version property to determine which versions of a feature are deployed on your farm, and therefore which features must be upgraded. For a code sample that demonstrates how to do this, see the SPFeature.Version property documentation.

Feature versioning in SharePoint 2010 also enables you to define a value for the SPFeatureDependency.MinimumVersion property to handle feature dependencies. For example, you can use the MinimumVersion property to ensure that a particular version of a dependent feature is activated. Feature dependencies can be added or removed in each new version of a feature.

The SharePoint 2010 feature framework has also been enhanced at the object model level to support feature versioning more easily. You can use the QueryFeatures method to retrieve a list of features, and you can specify both the feature version and whether a feature requires an upgrade. The QueryFeatures method returns an instance of SPFeatureQueryResultCollection, which you can use to access all of the features that must be updated. This method is available from multiple scopes, because it is exposed by the SPWebService, SPWebApplication, SPContentDatabase, and SPSite classes. For more information, see the QueryFeatures overloads on each of these classes. For an overview of the feature upgrade object model, see Feature Upgrade Object Model.
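As an illustration, the following minimal sketch queries a web application for all site collection-scoped feature instances that need an upgrade. The URL is an assumption; substitute one of your own web applications.

using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

class FeatureUpgradeReport
{
    static void Main()
    {
        SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://intranet"));

        // Find all site-scoped feature instances for which a newer
        // version has been deployed to the farm.
        SPFeatureQueryResultCollection features =
            webApp.QueryFeatures(SPFeatureScope.Site, true);

        foreach (SPFeature feature in features)
        {
            Console.WriteLine("Feature {0} is at version {1}",
                              feature.DefinitionId, feature.Version);
        }
    }
}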

The following section summarizes many of the new upgrade actions that you can apply when you are upgrading from one version of a feature to another.

SharePoint 2010 Feature Upgrade Actions

Upgrade actions are defined in the Feature.xml file. The SPFeatureReceiver class contains a FeatureUpgrading method, which you can use to define actions to perform during an upgrade. This method is called during feature upgrade when the feature’s Feature.xml file contains one or more <CustomUpgradeAction> tags, as shown in the following example.

<UpgradeActions>
  <CustomUpgradeAction Name="text">
    ...
  </CustomUpgradeAction>
</UpgradeActions>

Each custom upgrade action has a name, which can be used to differentiate the code that must be executed in the feature receiver. As shown in the following example, you can parameterize custom action instances.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange EndVersion ="2.0.0.0">
      <!-- First action-->
      <CustomUpgradeAction Name="example">
        <Parameters>
          <Parameter Name="parameter1">Whatever</Parameter>
          <Parameter Name="anotherparameter">Something meaningful</Parameter>
          <Parameter Name="thirdparameter">additional configurations</Parameter>
        </Parameters>
      </CustomUpgradeAction>
      <!-- Second action-->
      <CustomUpgradeAction Name="SecondAction">
        <Parameters>
          <Parameter Name="SomeParameter1">Value</Parameter>
          <Parameter Name="SomeParameter2">Value2</Parameter>
          <Parameter Name="SomeParameter3">Value3</Parameter>
        </Parameters>
      </CustomUpgradeAction>
    </VersionRange>
  </UpgradeActions>
</Feature>

This example contains two CustomUpgradeAction elements, one named example and the other named SecondAction. The two elements take different parameters, which depend on the code that you write for the FeatureUpgrading event receiver. The following example shows how you can use these upgrade actions and their parameters in your code.

/// <summary>
/// Called when a feature instance is upgraded, once for each custom upgrade
/// action in the Feature.xml file.
/// </summary>
/// <param name="properties">Feature receiver properties</param>
/// <param name="upgradeActionName">Upgrade action name</param>
/// <param name="parameters">Custom upgrade action parameters</param>
public override void FeatureUpgrading(SPFeatureReceiverProperties properties,
                                      string upgradeActionName,
                                      System.Collections.Generic.IDictionary<string, string> parameters)
{

    // Do not do anything if the feature scope is not correct.
    if (!(properties.Feature.Parent is SPWeb))
    {

        // Log that the feature scope is incorrect.
        return;
    }

    switch (upgradeActionName)
    {
        case "example":
            // Parameter keys must match the names used in the Feature.xml file.
            FeatureUpgradeManager.UpgradeAction1(parameters["parameter1"],
                                                 parameters["anotherparameter"],
                                                 parameters["thirdparameter"]);
            break;
        case "SecondAction":
            FeatureUpgradeManager.UpgradeAction2(parameters["SomeParameter1"],
                                                 parameters["SomeParameter2"],
                                                 parameters["SomeParameter3"]);
            break;
        default:

            // Log that no code exists for this upgrade action.
            break;
    }
}

You can have as many upgrade actions as you want, and you can apply them to version ranges. The following example shows how you can apply upgrade actions to version ranges of a feature.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange BeginVersion="1.0.0.0" EndVersion ="2.0.0.0">
      ...
    </VersionRange>
    <VersionRange BeginVersion="2.0.0.1" EndVersion="3.0.0.0">
      ...
    </VersionRange>
    <VersionRange BeginVersion="3.0.0.1" EndVersion="4.0.0.0">
      ...
    </VersionRange>
  </UpgradeActions>
</Feature>

The AddContentTypeField upgrade action can be used to define additional fields for an existing content type. It also provides the option of pushing these changes down to child instances, which is often the preferred behavior. When you initially deploy a content type to a site collection, a definition for it is created at the site collection level. If that content type is used in any subsite or list, a child instance of the content type is created. To ensure that every instance of the specific content type is updated, you must set the PushDown attribute to true, as shown in the following example.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange EndVersion ="2.0.0.0">
      <AddContentTypeField ContentTypeId="0x0101002b0e208ace0a4b7e83e706b19f32cab9"
                           FieldId="{ccbcd479-94c9-4f3a-95c4-58897da434fe}"
                           PushDown="True"/>
    </VersionRange>
  </UpgradeActions>
</Feature>

For more information about working with content types programmatically, see Introduction to Content Types.

The ApplyElementManifests upgrade action can be used to apply new artifacts to a SharePoint 2010 site without reactivating features. Just as you can add new elements to any new SharePoint elements.xml file, you can instruct SharePoint to apply content from a specific elements file to sites where a given feature is activated.

You can use this upgrade action if you are upgrading an existing feature whose FeatureActivating event receiver performs actions that you do not want to execute again on sites where the feature is deployed. The following example demonstrates how to include this upgrade action in a Feature.xml file.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange EndVersion ="2.0.0.0">
      <ApplyElementManifests>
        <ElementManifest Location="AdditionalV2Fields\Elements.xml"/>
      </ApplyElementManifests>
    </VersionRange>
  </UpgradeActions>
</Feature>

An example of a use case for this upgrade action involves adding new .webpart files to a feature in a site collection. You can use the ApplyElementManifests upgrade action to add those files without reactivating the feature. Another example would involve page layouts, which contain initial Web Part instances that are defined in the file element structure of the feature element file. If you reactivate this feature, you will get duplicates of these Web Parts on each of the page layouts. In this case, you can use the ElementManifest element of the ApplyElementManifests upgrade action to add new page layouts to a site collection that uses the feature without reactivating the feature.

The MapFile element enables you to map a URL request to an alternative URL. The following example demonstrates how to include this upgrade action in a Feature.xml file.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <MapFile FromPath="Features\MapPathDemo_MapPathDemo\PageDeployment\MyExamplePage.aspx"
             ToPath="Features\MapPathDemo_MapPathDemo\PageDeployment\MyExamplePage2.aspx" />
  </UpgradeActions>
</Feature>

Mapping URLs in this way would be useful to you in a case where you have to deploy a new version of a page that was customized by using SharePoint Designer 2010. The resulting customized page would be served from the content database. When you deploy the new version of the page, the new version will not appear because content for that page is coming from the database and not from the file system. You could work around this problem by using the MapFile element to redirect requests for the old version of the page to the newer version.

It is important to understand that the FeatureUpgrading method is called for each feature instance that will be updated. If you have 10 sites in your site collection and you update a web-scoped feature, the feature receiver will be called 10 times, once for each site context. For more information about how to use these new declarative feature elements, see Feature.xml Changes.

Upgrading SharePoint 2010 Features: A High-Level Walkthrough

This section describes at a high level how you can put these feature-versioning and upgrading capabilities to work. When you create a new version of a feature that is already deployed on a large SharePoint 2010 farm, you must consider two different scenarios: what happens when the feature is activated on a new site and what happens on sites where the feature already exists. When you add new content to the feature, you must first update all of the existing definitions and include instructions for upgrading the feature where it is already deployed.

For example, perhaps you have developed a content type to which you must add a custom site column named City. You do this in the following way:

  1. Add a new element file to the feature. This element file defines the new site column. Then modify the Feature.xml file to include the new element file.
  2. Update the existing definition of the content type in the existing feature element file. This update will apply to all sites where the feature is newly deployed and activated.
  3. Define the required upgrade actions for the existing sites. In this case, you must ensure that the newly added element file for the additional site column is deployed and that the new site column is associated with the existing content types. To achieve these two objectives, you add the ApplyElementManifests and the AddContentTypeField upgrade actions to your Feature.xml file, as shown in the sketch after this list.
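The following is a minimal sketch of how these two upgrade actions might be combined in the Feature.xml file. The content type and field IDs are reused from the earlier examples, and the AdditionalV2Fields\Elements.xml location stands in for the element file that defines the City column.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange EndVersion="2.0.0.0">
      <!-- Deploy the element file that defines the new City site column. -->
      <ApplyElementManifests>
        <ElementManifest Location="AdditionalV2Fields\Elements.xml"/>
      </ApplyElementManifests>
      <!-- Add the new column to the existing content type and push the change
           down to all child instances. -->
      <AddContentTypeField ContentTypeId="0x0101002b0e208ace0a4b7e83e706b19f32cab9"
                           FieldId="{ccbcd479-94c9-4f3a-95c4-58897da434fe}"
                           PushDown="True"/>
    </VersionRange>
  </UpgradeActions>
</Feature>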

When you deploy the new version of the feature to existing sites and upgrade it, the upgrade actions are applied to sites one by one. If you have defined custom upgrade actions, the FeatureUpgrading method will be called as many times as there are instances of the feature activated in your site collection or farm.

Figure 12 shows how the different components of this scenario work together when you perform the upgrade.

Figure 12. Components of a feature upgrade that adds a new element to an existing feature

Different sites might have different versions of a feature deployed on them. In this case, you can create version ranges, which define specific actions to perform when you are upgrading from one version to another. If a version range is not defined, all upgrade actions will be applied during each upgrade.

Figure 13 shows how different upgrade actions can be applied to version ranges.

Figure 13. Applying different upgrade actions to version ranges

In this example, if a given site is upgrading directly from version 1.0 to version 3.0, all configurations will be applied because you have defined specific actions for upgrading from version 1.0 to version 2.0 and from 2.0 to version 3.0. You have also defined actions that will be applied regardless of feature version.

Code Design Guidelines for Upgrading SharePoint 2010 Features

To provide more flexibility for your code, you should not place your upgrade code directly inside the FeatureUpgrading event receiver. Instead, put the code in some centralized location and refer to it inside the event receiver, as shown in Figure 14.

Figure 14. Centralized feature upgrade manager

By placing your upgrade code inside a centralized utility class, you increase both the reusability and the testability of your code, because you can perform the same actions in multiple locations. You should also try to design your custom upgrade actions as generically as possible, using parameters to make them applicable to specific upgrade scenarios.
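The following is a minimal sketch of such a class. FeatureUpgradeManager and its methods are hypothetical helpers that correspond to the calls made from the FeatureUpgrading event receiver shown earlier.

// A centralized upgrade utility class. Keeping the actions generic and
// parameterized makes them reusable and testable outside the receiver.
public static class FeatureUpgradeManager
{
    public static void UpgradeAction1(string parameter1,
                                      string anotherParameter,
                                      string thirdParameter)
    {
        // Perform the actual upgrade work here, for example provisioning
        // artifacts or updating existing list instances.
    }

    public static void UpgradeAction2(string someParameter1,
                                      string someParameter2,
                                      string someParameter3)
    {
        // A second, independently testable upgrade action.
    }
}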

Solution Lifecycles: Upgrading SharePoint 2010 Solutions

If you are upgrading a farm (full-trust) solution, you must first deploy the new version of your solution package to a farm.

Execute either of the following commands to deploy updates to a SharePoint farm. The first example uses the Stsadm.exe command-line tool.

stsadm -o upgradesolution -name solution.wsp -filename solution.wsp

The second example uses the Update-SPSolution Windows PowerShell cmdlet.

Update-SPSolution -Identity contoso_solution.wsp -LiteralPath c:\contoso_solution_v2.wsp -GACDeployment

After the new version is deployed, you can perform the actual upgrade, which executes the upgrade actions that you defined in your Feature.xml files.

A farm solution upgrade can be performed either farm-wide or at a more granular level by using the object model. A farm-wide upgrade is performed by using the Psconfig command-line tool, as shown in the following example.

psconfig -cmd upgrade -inplace b2b
Note
This tool causes a service break on the existing sites. During the upgrade, all feature instances throughout the farm for which newer versions are available will be upgraded.

You can also perform upgrades for individual features at the site level by using the Upgrade method of the SPFeature class. This method causes no service break on your farm, but you are responsible for managing the version upgrade from your code. For a code example that demonstrates how to use this method, see SPFeature.Upgrade.
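For illustration, the following minimal sketch combines the QueryFeatures method with SPFeature.Upgrade to upgrade the web-scoped feature instances in a single site collection. The URL is an assumption; substitute your own site collection.

using System;
using Microsoft.SharePoint;

class SiteFeatureUpgrader
{
    static void Main()
    {
        using (SPSite site = new SPSite("http://intranet"))
        {
            // Find the web-scoped feature instances in this site collection
            // that have a newer version available.
            SPFeatureQueryResultCollection features =
                site.QueryFeatures(SPFeatureScope.Web, true);

            foreach (SPFeature feature in features)
            {
                // Passing false avoids forcing an upgrade of instances
                // that are already up to date.
                feature.Upgrade(false);
            }
        }
    }
}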

Upgrading a sandboxed solution at the site collection level is much more straightforward. Just upload the SharePoint solution package (.wsp file) that contains the upgraded features. If you have a previous version of a sandboxed solution in your solution gallery and you upload a newer version, an Upgrade option appears in the UI, as shown in Figure 15.

Figure 15. Upgrading a sandboxed solution

After you select the Upgrade option and the upgrade starts, all features in the sandboxed solution are upgraded.

Conclusion

This article has discussed some considerations and examples of Application Lifecycle Management (ALM) design that are specific to SharePoint 2010, and it has also enumerated and described the most important capabilities and tools that you can integrate into the ALM processes that you choose to establish in your enterprise. The SharePoint 2010 feature framework and solution packaging model provide flexibility and power that you can put to work in your ALM processes.

A look at: ALM and Lab Environments

What is a lab environment?

A lab environment is a collection of computers that are managed as a single unit, and on which you deploy the system under test along with test software. Here is a typical configuration of machines in a lab environment:

Typical lab environment configuration

This one is set up for automated tests of an ice cream vending service. The software product itself consists of a web service that runs on Internet Information Services (IIS) and a database that runs on a separate machine. The tests drive a web browser on a client machine.

With a lab environment, you can run a build-deploy-test workflow in which you can automatically build your system, deploy its components to the appropriate machines in the environment, run the tests, and collect test data. (The fully automated version of this is described in Chapter 5, “Automating System Tests.”)

The workflow is controlled by a test controller with the help of test agents installed on each test machine. The test controller runs on a separate computer.

Now you might ask why you need lab environments, since you could deploy your system and tests to any machines you choose.

Well, you could, but lab environments make several things easier:

  • You can set up automated build-deploy-test workflows. The scripts in the workflow use the lab role names of the machines, such as “Web Client,” so that they are independent of the domain names of the computers.
  • The results of tests can be shown on charts that relate them to requirements.
  • Lab Manager automatically installs test agents on each machine, enabling test data to be collected. Lab Manager manages the test settings of the virtual environment, which define what data to collect.
  • You can view the consoles of the machines through a single viewer, switching easily from one machine to the other.
  • Lab environments manage the allocation of machines to tests, which prevents, for example, two team members from mistakenly assigning the same machine to different tests.

Lab environments come in two varieties. A standard lab environment (roughly equivalent to a physical environment in Visual Studio 2010) can be composed of any computers that you have available, such as physical computers or virtual machines running on third-party frameworks.

An SCVMM environment is made up entirely of virtual machines controlled by System Center Virtual Machine Manager (SCVMM). SCVMM environments provide you with several valuable facilities; they allow you to:

  • Create fresh test environments within minutes. You can store a complete environment in a library and deploy running copies of it. For example, you could store an environment of three machines containing a web client, a web server, and a database. Whenever you want to test a system in that configuration, you deploy and run a new instance of it.
  • Take snapshots of the states of the machines. For example, whenever you start a test, you can revert to a snapshot that you took when everything was freshly installed. Also, when you find a bug, you can take a snapshot of the environment for later investigation.
  • Pause and resume all the virtual machines in the environment at the same time.

Standard environments are useful for tests that have to run on real hardware, such as some kinds of performance tests. You can also use them if you haven’t installed SCVMM or Hyper-V, as would be the case if, for example, you already use another virtualization framework. But as you can see, we think there are great benefits to using SCVMM environments.

Stored SCVMM environments

Because you can store them in a library, SCVMM environments help to make your tests repeatable; when you run them for the next build, or when a new release is planned after a six-month break, you can be sure that the tests are running under the same conditions.

A stored SCVMM environment

For example, on Fabrikam’s ice cream sales project, the team often wants to deploy and test a new build of the sales system. It has several components that have to be installed on different machines. Of course, the sales system software is a new build each time. But the platform software, such as the operating system, database, and web browser, doesn’t change.

So at the start of the project, the team creates an environment that has the platform software, but no installation of the ice cream system. In addition, each machine has a test agent. The Fabrikam team stores this environment in the library as a template.

Whenever a new build is to be tested, a team member selects the stored platform environment, and chooses Deploy. Lab Manager takes a few minutes to copy and start the environment. Then they only have to install the latest build of the system under test.

While an environment is running, its machines execute on one or more virtualization hosts that have been set up by the system administrator. The stored version from which new copies can be deployed is stored on an SCVMM library server.

Lab management with third-party virtualization frameworks

Some teams have already invested in other virtualization frameworks such as VMware or Citrix XenServer. If that is your situation, the case for switching to Hyper-V and SCVMM might be less clear. But even if you don’t install SCVMM or Hyper-V, you can still use Lab Manager by using standard environments.

With standard environments, you get many of the benefits of lab management, but without the ability to save and quickly set up fresh environments. Instead, you’d have to use your third-party machine manager to set up new machines.

When you assign a machine to a standard environment, Lab Manager will automatically install a test agent and couple it to your test controller. This makes the machine ready for an automatic build-deploy-test workflow and for test data collection. (In Visual Studio 2010, you have to install the test agent manually, but coupling it to the test controller is automatic.)

How to use lab environments

Prerequisites

To enable your team to use lab environments, you first have to set up:

  • Visual Studio Team Foundation Server, with the Lab Manager feature enabled.
  • A test controller, linked to your team project in Team Foundation Server.
  • (Preferable, but not mandatory) System Center Virtual Machine Manager (SCVMM) and Hyper-V.

You only need to set up these things once for the whole team, so we have put the details in the Appendix. If someone else has kindly set up SCVMM, Lab Manager, and a test controller, just continue on here.

Lab center

You manage environments by using Lab Center, which is part of Microsoft Test Manager (MTM). MTM is installed as part of Visual Studio Ultimate or Test Professional. You’ll find it on the Windows Start menu under Visual Studio. If it’s your first time using it, you’ll be asked for the URL of your team project collection. Switch to the Lab Center view (it’s the alternative to Test Center). On the Environments page, you’ll see a list of environments that are in use by your team. Some of them might be marked “in use” by individual team members:

Managing environments in Lab Center

(Use the Home button if you want to switch to another team project.)

More information is available from the MSDN website topic: Getting Started with Lab Management.

Connecting to a lab environment

If your team has been using lab environments for a while, then when you open Lab Center, you might already see some environments that are available to use. Pick an environment with a status of Ready, without an In Use flag, and that looks as if it has the characteristics you want, which ought to be indicated by its name. Select it and choose Connect.

The Environment View opens. From here you can log into any of the machines in the environment.

The environment view

Typically, a deployed environment will have a recent build of your system already installed. If you’re sure that it’s free for you to use, you could decide to run some tests on it. However, make sure you know your team’s conventions; for example, if the environment’s name contains the name of a team member, ask if it is ok to use.

Using a deployed (running) environment

Log in. Choose the Connect button to open a console view of the environment. From there you can log into any of its machines. More about the Connect button can be found on MSDN in the topic How to: Connect to a Virtual Environment.

Reserve the environment. You can mark it as In Use to discourage other team members from interfering with it. This doesn’t prevent access by others, but simply sets a flag in Lab Center.

Revert a virtual environment to a clean snapshot. In the environment viewer, look at the Snapshots tab. If the Snapshots tab isn’t available, then this is a standard environment composed of existing machines. You might need to make sure that the latest version of your system is installed.

In a virtual environment, the team member who created the environment should have made a snapshot immediately after installing the system under test. Select the snapshot and restore the environment to that state. If there isn’t a snapshot available, that’s (hopefully) because the previous user has already restored it to the clean state. Again, you might need to check the conventions of your team.

Explore and test your system. Now you can start testing your system, which is the topic of the next chapter.

Restore the snapshot when you’re done with a virtual environment, to restore it to the newly installed state. This makes it easier for other team members to use. This option isn’t available for standard environments, so you might want to clean up any test materials that you have created.

Clear the “in use” flag when you’re done. Typically, a team will keep a number of running environments that contain a recent build, and share them. Reusing an environment and restoring it to its initial snapshot is the quickest way of assigning an environment for a test run.

Deploying an environment

If there is no running environment that is suitable for what you want to do, you can look for one in the library. The library contains a selection of stored virtual environments that have previously been created by your colleagues. You can learn more from the topic: Using a Virtual Lab for Your Application Lifecycle, on MSDN.

The environment library in MTM Lab Center

(If the library isn’t available, that might mean that your team has not set Lab Manager to use SCVMM. But you can still create standard environments, which are made up of computers not controlled by SCVMM. Skip to the section about them near the end of this chapter. Alternatively, you could set up SCVMM as we describe in the Appendix.)

Environments stored in the library are templates; you can’t connect to one because its virtual machines aren’t running. Instead, you must first deploy it. Deploying copies the virtual machines from the library to the virtual machine host, and then starts them.

In MTM, in Lab Center, choose Deploy. Choose an environment from the list. They should have names that help you decide which one you want.

After you have picked an environment, Lab Center takes a few minutes to copy the virtual machines and to make sure that the test agents (which help deploy software and collect data) are running.

Eventually the environment is listed under the Lab tab as Ready (or Running in Visual Studio 2010). Then you’re all set to use it. If it shows up as Not Ready, then try the Repair command. This reinstalls test agents and reconnects them to your test controller. In most cases that fixes it.

Install your system

Typically, stored environments contain installations of the base platform: operating systems, databases, and so on. They don’t usually include an installation of the system under test. Your next step is therefore to install the latest build of your system.

To help choose a good recent build, open the build status report in your web browser. The URL is similar to http://contoso-tfs:8080/tfs/web. Click on Builds. You might have to set the date and other filters. The quality and location of each build is summarized.

In Lab Center, under the Lab tab, select the running environment and choose Connect. Log into the environment’s machines.

Use the installer (typically an .msi file) that is generated by the build process. The location can be obtained from the build status reports. Pick an installer that was generated from the Debug build configuration. You need to put each component on the right machine. Each machine has a role name such as Client, Web Server, or Database, to help you make the right choice.

Later we’ll discuss how you can write scripts to automate the deployment of the system under test.

Review the name you gave to the environment to make sure it reflects the system and build you installed.

Take a snapshot of the environment

Create a snapshot of the environment. This will enable subsequent users to get the environment back to its nice clean state. Do this immediately after you have installed your system, and before you run any tests, other than perhaps a quick smoke test to make sure the installation is OK.

You can create a snapshot either from the Snapshots tab in Environment Viewer, or from the context menu of the environment in the Lab listing.

Use it

After you’ve taken a snapshot, you can start using it as we described earlier. When you’ve finished testing, you can revert to the snapshot.

Delete it (eventually)

Delete an environment when the build it uses is superseded.

Creating a new virtual environment

What if there are no environments in the stored library, or none have the mix of machines you need? Then you’ll have to create one. And if you’re feeling generous, you could add it to the library for other team members to use.

You can either store an environment directly in the library, or you can create it as a running environment and then store it in the library. Storing directly is preferable if you don’t need to configure the constituent virtual machines in any way.

To add a new environment directly to the library, open MTM; choose Lab Center, Library, Environments, and then the New command.

Figure: Creating a new environment in the library

Alternatively, to create a new running environment that you can store later, choose Lab Center, Lab, and then New. In the wizard, choose SCVMM Environment. (In Visual Studio 2010, the New command has a submenu, New Virtual Environment.)

With either method, you continue through the wizard to choose virtual machines from the library. If your team has been working for a while, there should be a good stock of virtual machines, with names that indicate what software is installed on them.

Choose library machines that have type Template if they are available. Unlike a plain virtual machine, you can deploy more than one copy of a template. This is because when a template VM is deployed, it gets a new ID so that there are no naming conflicts on your network. Using templates to create a stored environment allows more than one copy of it to be deployed at a time.

Figure: Creating a new virtual environment

You have to name each machine uniquely within your new lab environment. Notice that the name of the computer in the environment is not automatically the same as its name in the domain or workgroup.

You also have to assign a role name to each machine, such as Desktop Client or Web Server. More than one machine can have the same role name. There is a predefined set to choose from, but you can also invent your own role names. These roles are used to help deploy the correct software to each machine. If you automate the deployment process, you will use these names; if you deploy manually, they will just help you remember which machine you intended for each software component.

When you complete the wizard, there will be a few minutes’ wait while VMs are copied.

MTM should now show that your environment is in the library, or that it is already deployed as a running environment, depending on what method of creation you chose to begin with. If it’s in the library, you can deploy it as we described before.

After creating an environment, you typically deploy software components and then keep the environment in existence until you want to move to a new build. Different team members might use it, or you might keep it to yourself. You can mark an environment as “In Use” to discourage others from interfering with it while your work is in progress.

Stored and running machines

The lab manager library can store both individual virtual machines and complete environments. There are command buttons for creating new environments, storing them in the library, and for deploying environments from the library. You have to shut down an environment before you can store it.

Figure: Stored and deployed environments

Creating and importing virtual machines

You can store individual virtual machines from the test host to the library. Therefore, if your team starts off with a set of virtual machines in the library that include a basic set of platforms—for example, Windows 7 and Windows Server 2008—then you can deploy a machine in an environment, add extra bits, and then store it back in the library.

Figure: System Center Virtual Machine Manager (SCVMM)

But how do you create those first virtual machines? For this you need access to SCVMM, on which Lab Manager is based. It’s typically an administrator’s task, so you’ll find the details in the Appendix. Briefly (a scripted sketch follows the list):

  1. You can create a new machine in the SCVMM console and then install an operating system on it, either with your DVD or from your corporate PXE server.
  2. Every test machine needs a copy of the Team Foundation Server Test Agent, which you can get from the Team Foundation Server installation DVD.
  3. Use the SCVMM console to store the VM in the library as a template. This is preferable to storing it as a plain VM.
  4. In Lab Manager, use the Import command on the Library tab in order to make the SCVMM library items visible in the Lab Center library.
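
If you prefer scripting, the same flow can be sketched with the VMM 2012 PowerShell cmdlets. This is only a sketch: the VM name, library server, and share path are hypothetical, and storing a VM as a template generalizes it with Sysprep, so work on a copy you can afford to lose.

    # Run where the VMM console and its PowerShell module are installed; all names are placeholders.
    Import-Module virtualmachinemanager
    Get-SCVMMServer -ComputerName "vmm-server.contoso.com"
    $vm = Get-SCVirtualMachine -Name "BaseWin2008R2"
    # Store the VM in the library as a template (preferable to a plain VM).
    New-SCVMTemplate -Name "BaseWin2008R2-Template" -VM $vm `
        -LibraryServer (Get-SCLibraryServer -ComputerName "vmm-library.contoso.com") `
        -SharePath "\\vmm-library\MSSCVMMLibrary"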

Figure: How environments are managed

Composed environments

A composed environment is made up of virtual machines that are already running. When you compose an environment from running machines, they are assigned to your environment; when you delete the environment, they are returned to the available pool. You can create a composed environment very quickly because there is no copying of virtual machines.

We recommend composed environments for quick exploratory tests of a recent build. The team should periodically place new machines in the pool on which a recent build is installed. Team members should remember to delete composed environments when they are no longer using them.

Figure: Composed environments

In Visual Studio 2012, you make a composed environment the same way you create a virtual environment: by choosing New and then SCVMM environment. In the wizard, you’ll see that the list of available machines includes both VM templates and running pool machines. If you want, you can mix pool machines and freshly created VMs in the same environment. For example, you might use new VMs for your system under test, and a pool machine for a database of test data or a fake of an external system. Because the external system doesn’t change, there is no need to keep creating new versions of it.

In Visual Studio 2010, use the New Composed Environment command and choose machines from the list.

Standard environments

Standard environments are made up of existing computers. They can be either physical or virtual machines, or a mixture. They must be domain-joined.

You can create standard environments even if your team hasn’t yet set up SCVMM. For example, if you are already using VMware to run virtual machines and don’t want to switch to Hyper-V and SCVMM, you can use Lab Manager to set up standard environments. You can’t stop, start, or take snapshots of standard environments, but Lab Manager will install test agents on them and you can use them to run a build-deploy-test workflow.

You can also use standard environments when it is important to use a real machine—for example, in performance tests.

To create a standard environment, click New and then choose Standard Environment.

(In Visual Studio 2010, choose New Physical Environment. You must manually install test and lab agents on the computers. These agents can be installed from the Team Foundation Server DVD.)

For an example, see Lab Management walkthrough using Visual Studio 11 Developer Preview Virtual Machine on the Visual Studio Lab Management team blog.

Summary

There’s a lot of pain and overhead in configuring physical boxes to build test environments. The task is made much easier by Visual Studio Lab Manager, particularly if you use virtual environments.

With Lab Manager you can:

  • Manage the allocation of lab machines, grouping them into lab environments.
  • Configure machines for the collection of test data.
  • Rapidly create fresh virtual environments already set up with a base platform of operating system, database, and so on.

Differences between Visual Studio 2010 and Visual Studio 2012

  • System Center Virtual Machine Manager 2012. Lab Management in Visual Studio 2012 works with SCVMM 2012 in addition to SCVMM 2008.
  • Standard environments. Lab Manager in Visual Studio 2012 is easier to use with third-party virtualization frameworks as well as physical computers. It will install test agents if necessary.
  • Test agents. In Visual Studio 2010, you must install test and lab agents on the machines that you want to use in the lab. In Visual Studio 2012, there is only one type of agent, and it is installed automatically by Lab Manager on each of the machines in a lab environment. You can still install the test agent yourself to save time when lab environments are created.
  • Compatibility. Most combinations of 2010 and 2012 RC products work together. For example, you can create environments on Visual Studio Team Foundation Server 2010 using Microsoft Test Manager 2012 RC.


System Center Virtual Machine Manager (VMM) 2012 as Private Cloud Enabler (3/5): Deployment with Service Template

By this time, I assume we all have some clarity that virtualization is not cloud; there are many significant differences between the two. A main departure is the approach to deploying applications. In the third article of the five-part series listed below, I examine the service-based deployment introduced in VMM 2012 for building a private cloud.

  • Part 1. Private Cloud Concepts
  • Part 2. Fabric, Oh, Fabric
  • Part 3. Deployment with Service Template (This article)
  • Part 4. Working with Service Templates
  • Part 5. App Controller

VMM 2012 can carry out both traditional virtual machine (VM)-centric deployments and the emerging service-based deployment. The former is virtualization-focused and operates at the VM level, while the latter is a service-centric approach intended for private cloud deployment.

This article is intended for those with some experience administering a VMM 2008 R2 infrastructure. Note that in cloud computing, “service” is a critical, must-understand concept which I have discussed elsewhere. To be clear, in the context of cloud computing a “service” and an “application” mean the same thing, since in the cloud everything is delivered to users as a service, for example SaaS, PaaS, and IaaS. Throughout this article, I use the terms service and application interchangeably.

VM-Centric Deployment

In virtualization, deploying a server conceptually becomes building, shipping, and booting from a virtual hard disk (VHD) file. Those who would like to refresh their knowledge of virtualization are invited to review the 20-Part Webcast Series on Microsoft Virtualization Solutions.

Virtualization has brought many opportunities for IT to improve processes and operations. With system management software such as System Center Virtual Machine Manager 2008 R2 (VMM 2008 R2), we can deploy VMs and install operating systems in a target environment with few or no operator interventions. From an application point of view, however, with or without automation the associated VMs are essentially deployed and configured individually. For instance, a multi-tier web application like the one shown above is typically deployed with a pre-determined number of VMs, followed by installing and configuring the application on the deployed VMs individually, based on application requirements. Particularly when a back-end database is involved, a system administrator typically must follow a particular sequence: first bring a target database server instance online by configuring specific login accounts with specific database roles, securing specific ports, and registering in Active Directory, before proceeding with subsequent deployment steps. These operator interventions are required largely because there is no cost-effective, systematic, and automatic way to streamline and manage the concurrent, event-driven inter-VM dependencies that become relevant at various moments during an application deployment.

Even with a system management infrastructure in place, such as VMM 2008 R2 integrated with other System Center members, at an operational level VMs are largely managed and maintained individually in a VM-centric deployment model. Perhaps more significantly, in a VM-centric deployment it is too often labor-intensive, and carries a relatively high TCO, to deploy a multi-tier application on demand (in other words, as a service), deploy it multiple times, or run multiple releases concurrently in the same IT environment, if it is technically feasible at all. In VMM 2012, the ability to deploy services on demand, deploy them multiple times, and run multiple releases concurrently in the same environment becomes noticeably straightforward and amazingly simple with a service-based deployment model.

Service-Based Deployment

A VM-centric model lacks an effective way to address event-driven, inter-VM dependencies during a deployment, and it has no concept of fabric, which is an essential abstraction of cloud computing. In VMM 2012, a service-based deployment means all the resources encompassing an application, i.e., the configurations, installations, instances, dependencies, and so on, are deployed and managed as one entity with fabric. The integration of fabric in VMM 2012 is a key delivery, clearly illustrated in the VMM 2012 admin console as shown on the left. The precondition for deploying services to a private cloud is first laying out the private cloud fabric.

Constructing Fabric

To deploy a service, the process normally employs administrator and service accounts to install and configure infrastructure and the application across servers, networking, and storage, based on application requirements. Here the servers collectively act as a compute engine providing a target runtime environment for executing code. Networking interconnects all relevant application resources and peripherals to support all management and communication needs, while storage is where code and data actually reside and are maintained. In VMM 2012, the server, networking, and storage infrastructure components are collectively managed under a single concept: the private cloud fabric.

There are three resource pools/nodes encompassing fabric: Servers, Networking, and Storage. Servers contain various types of servers, including virtualization host groups, PXE, Update (i.e., WSUS), and other servers. Host groups are containers that logically group servers with virtualization hosting capabilities; they ultimately represent the physical boxes where VMs can be deployed, either with specific network settings or to hosts dynamically selected by VMM Intelligent Placement based on defined criteria. VMM 2012 can manage Hyper-V, VMware, and other virtualization solutions. When a host is added to a host group, VMM 2012 installs an agent on the target host, which then becomes a managed resource of the fabric.
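
For instance, adding a Hyper-V host to a host group can be scripted with the VMM cmdlets. In this sketch the host name, host group, and Run As account are hypothetical:

    # All names are placeholders; run where the VMM PowerShell module is available.
    $hostGroup = Get-SCVMHostGroup -Name "Production Hosts"
    $runAs     = Get-SCRunAsAccount -Name "HostManagementAccount"
    # VMM installs its agent on the host as part of this operation.
    Add-SCVMHost "hyperv01.contoso.com" -VMHostGroup $hostGroup -Credential $runAs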

A Library Server is a repository where the resources for deploying services and VMs are made available via network shares. As a Library Server is added to the fabric, by specifying the network shares defined on it, file-based resources like VM templates, VHDs, ISO images, service templates, scripts, Server App-V packages, and so on become available to be used as building blocks for composing VM and service templates. As various types of servers are brought into the Servers pool, coverage expands and capabilities increase, as if additional fibers were woven into the fabric.
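
If you copy new resources into a share and don’t want to wait for the periodic refresh, you can re-index the library on demand. A minimal sketch:

    # Re-index all library shares so newly copied files (templates, VHDs, scripts) show up.
    Get-SCLibraryShare | Read-SCLibraryShare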

Networking presents the wiring among resource repositories, running instances, deployed clouds and VMs, together with the intelligence for managing and maintaining the fabric. It essentially forms the nervous system that filters noise, isolates traffic, and establishes interconnectivity among VMs, based on how Logical Networks and Network Sites are put in place.

Storage reveals the underlying storage complexities and how storage is virtualized. In VMM 2012, a cloud administrator can discover, classify, and provision remote storage on supported storage arrays through the VMM 2012 console. VMM 2012 fully automates the assignment of storage to a Hyper-V host or Hyper-V host cluster, and tracks the storage that is managed by VMM 2012.

Deploying Private Cloud

A leading feature of VMM 2012 is the ability to deploy a private cloud, or more specifically, to deploy a service to a private cloud. The focus of this article is the operational aspects of deploying a private cloud, with the assumption that the intended application has been well tested, signed off, and sealed for deployment, and that the application resources, including code, service template, scripts, Server App-V packages, and so on, are packaged and handed to a cloud administrator for deployment. In essence, this package has all the intelligence, settings, and content needed for it to be deployed as a service. The self-contained package can then be deployed on demand by validating instance-dependent global variables and repeating the deployment tasks on a target cloud. The following illustrates the concept: a service is deployed in update releases and various editions with specific feature compositions, all running concurrently in VMM 2012 fabric. Not only is this relatively easy to do by streamlining and automating all deployment tasks with a service template, the service template can also be configured and deployed to different private clouds.

Figure: A service deployed in multiple releases and editions, all running concurrently in VMM 2012 fabric

The secret sauce is a service template, which includes all the where, what, how, and when of deploying all the resources of an intended application as a service. The skill set and amount of effort needed to develop a solid service template are not trivial: a service template needs to capture not only intimate knowledge of the application, but also best practices of Windows deployment, system and network administration, Server App-V, and system management of Windows servers and workloads. The following is a sample service template of StockTrader imported into VMM 2012 and viewed with Designer, where StockTrader is a sample application for cloud deployment downloaded from Windows Connect.

Figure: The StockTrader service template viewed in Designer

Here are the logical steps I follow to deploy StockTrader with the VMM 2012 admin console (a scripted sketch of the core steps follows the list):

  • Step 1: Acquire the Stock Trader application package from Windows Connect.
  • Step 2: Extract and place the package in a designated network share of a target Library Server of VMM 2012 and refresh the Library share. By default, the refresh cycle of a Library Server is every 60 minutes; to make newly added resources available immediately, refresh the intended Library share, which validates and re-indexes the resources in the added network shares.
  • Step 3: Import the service templates of Stock Trader and follow the step-by-step guide to remap the application resources.
  • Step 4: Identify/create a target cloud with VMM 2012 admin console.
  • Step 5: Open Designer to validate the VM templates included in the service template. Make sure SQLAdminRAA is correctly defined as a Run As account.
  • Step 6: Configure deployment of the service template and validate the global variables on the specialization page.
  • Step 7: Deploy Stock Trader to a target cloud and monitor the progress in the Jobs panel.
  • Step 8: Troubleshoot the deployment process as needed; restart the deployment job and repeat this step until the deployment succeeds.
  • Step 9: Upon successful deployment of the service, test the service and verify the results.
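
For repeatable deployments, steps 3 through 7 can also be scripted with the VMM cmdlets. The following is a minimal sketch rather than the exact StockTrader procedure; the cloud, template, and configuration names are hypothetical:

    # Run where the VMM PowerShell module is available; all names are placeholders.
    Import-Module virtualmachinemanager
    $cloud    = Get-SCCloud -Name "FinanceCloud"
    $template = Get-SCServiceTemplate -Name "Stock Trader"
    # Stage a service configuration against the target cloud.
    $config = New-SCServiceConfiguration -ServiceTemplate $template -Name "StockTrader Pro" -Cloud $cloud
    # Review instance-dependent global variables (the specialization step).
    Get-SCServiceSetting -ServiceConfiguration $config
    # Let VMM compute placement, then kick off the deployment job.
    $config = Update-SCServiceConfiguration -ServiceConfiguration $config
    New-SCService -ServiceConfiguration $config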

A successful deployment of Stock Trader with minimal instances in my all-in-one laptop demo environment (running on a Lenovo W510 with sufficient RAM) took about 75 to 90 minutes, as reported in the Job Summary shown below.

Figure: Stock Trader deployment job summary

Once the service template is successfully deployed, Stock Trader becomes a service in the target private cloud supported by VMM 2012 fabric. The following two screen captures show a Pro Release of Stock Trader deployed to a private cloud in VMM 2012 and the user experience of accessing a trader’s home page.

Figure: A Pro Release of Stock Trader deployed to a private cloud in VMM 2012

Figure: Accessing a trader’s home page in the deployed service

Not If, But When

Witnessing the way the IT industry has been progressing, I envision that private cloud will soon become, just like virtualization, a core IT competency and no longer a specialty. While private cloud is still a topic being actively debated and shaped, the upcoming release of VMM 2012 arrives just in time with a methodical approach for constructing a private cloud based on service-based deployment with fabric. It is a high-speed train, and the next logical step for enterprises seeking to accelerate private cloud adoption.

Closing Thoughts

I hereby forecast that the future is mostly cloudy with scattered showers. In the long run, I see a clear cloudy day coming.

Be ambitious and opportunistic is what I encourage everyone to be. When it comes to Microsoft private cloud, the essentials are Windows Server 2008 R2 SP1 with Hyper-V and VMM 2012. Those who master these skills first will stand out, become the next private cloud subject matter experts, and lead the IT pro communities. Recognizing that private cloud adoption is not a technology issue but a culture shift and an opportunity for career progression, IT pros must make the first move.

In an upcoming series of articles, tentatively titled “Deploying StockTrader as Service to Private Cloud with VMM 2012,” I will walk through the operations of the above steps and detail the process of deploying a service template to a private cloud.