Category Archives: ALM

New TFS Power Tools Available!! TFS Extensions for Source Control and Work Item Tracking

The TFS Productivity Tools project was designed to provide TFS administrators with helper tools when doing Source Control or Work Item Tracking tasks.

Running these extensions requires Visual Studio 2010 Professional or newer.


VisualStudio Extensions

  • ChangeLinkTypes.vsix
    Modifies Work Item link types.
    Runs from: Team Explorer
  • DestroyWorkItems.vsix
    Completely destroys Work Items.
    Runs from: Query Results
  • Export2Word.vsix
    Exports Work Items to a Microsoft Word document.
    Runs from: Team Explorer
  • ExtendedMerge.vsix / ExtendedMerge2012.vsix
    Provides a workaround for several merge features not implemented by TFS 2010/2012:
    1. TFS merge leads to a bulk check-in operation that puts files from all previous changesets into one big merge changeset.
    2. TFS allows only consecutive changesets to be cherry-picked by a merge operation.
    3. TFS doesn’t allow choosing changesets for a cherry-pick merge by selecting work items.
    4. The TFS merge dialog doesn’t have “force” and “baseless” options.
    Runs from: Source Control Explorer, Query Results
  • GetPreview.vsix
    Emulates the command-line task and writes its output to the Output window:
    tf.exe get /recursive /preview "itemspec"
    Runs from: Source Control Explorer
  • ModifyCheckinDate.vsix
    1. Updates modification time for checked-out files to their latest check-in time.
    2. Directly modifies a changeset’s check-in time in the TFS database.
    Runs from: Source Control Explorer, History

Custom WorkItem tracking controls

  • WITDataGridView.dll (under development)
    Wrapper for the standard WindowsForms.DataGridView control.
    The control serializes table data to XML and saves it as a Work Item attachment, with a default file size limit of 2 MB.

Custom Build workflow activities

  • QueueNewBuild.dll (under development)
    Contains the activities QueueNewBuild, QueueNewBuildBegin, and QueueNewBuildEnd.
    Works like LoadAndInvokeWorkflow but is intended exclusively for build workflows. When placed inside a ParallelSequence, QueueNewBuildBegin can run builds simultaneously in separate threads, based on the free build agents.

Installation

To install any of the extensions, double-click the VSIX file and follow the installer instructions.

image001

ChangeLinkTypes

This extension modifies link types between the Work Items returned by a Work Item query.
To run it, select any query under the Work Items node in the Team Explorer window and click “Change Query Link Types”.

image002

In the next dialog, choose the original link type and the new link type to replace it with.
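
For reference, the same kind of change can be scripted against the TFS work item tracking client API. The following is only a rough sketch of the idea; the collection URL, WIQL query, and link type name are placeholder assumptions, not the extension’s actual code.

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class ChangeLinkTypesSketch
{
    static void Main()
    {
        // Placeholder collection URL and query.
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var store = collection.GetService<WorkItemStore>();

        // The link type end that every existing work item link will be converted to.
        WorkItemLinkTypeEnd newEnd = store.WorkItemLinkTypes.LinkTypeEnds["Related"];

        WorkItemCollection results = store.Query(
            "SELECT [System.Id] FROM WorkItems WHERE [System.TeamProject] = 'MyProject'");

        foreach (WorkItem workItem in results)
        {
            // Snapshot the links first, because the collection is modified in the loop.
            List<WorkItemLink> existingLinks = workItem.WorkItemLinks.Cast<WorkItemLink>().ToList();
            foreach (WorkItemLink link in existingLinks)
            {
                int targetId = link.TargetId;
                workItem.WorkItemLinks.Remove(link);
                workItem.WorkItemLinks.Add(new WorkItemLink(newEnd, targetId));
            }
            workItem.Save();
        }
    }
}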

image003

DestroyWorkItems

This extension completely destroys (not just closes) Work Items returned by a Work Item query.
To run it, select one or more Work Items in the Query Results window and click “Destroy Work Items”.
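
Programmatically, the same operation maps to a single call on the work item tracking client API. A minimal, hedged sketch (the collection URL and work item IDs are placeholders):

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class DestroyWorkItemsSketch
{
    static void Main()
    {
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var store = collection.GetService<WorkItemStore>();

        // DestroyWorkItems permanently removes the work items; this cannot be undone.
        var errors = store.DestroyWorkItems(new[] { 101, 102, 103 });
        foreach (WorkItemOperationError error in errors)
        {
            Console.WriteLine("Failed to destroy work item {0}: {1}", error.Id, error.Exception.Message);
        }
    }
}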

image004

Export2Word

This extension exports the Work Items returned by a Work Item query to a Microsoft Word document in paragraph style.
To run it, select any query under the Work Items node in the Team Explorer window and click “Export to Microsoft Word”.

image005

Exported document sample

ExtendedMerge

The Extended Merge extension provides a workaround for several merge features not implemented by TFS:

  1. TFS merge leads to a bulk check-in operation that puts files from all previous changesets into one big merge changeset.
  2. TFS allows only consecutive changesets to be cherry-picked by a merge operation.
  3. TFS doesn’t allow choosing changesets for a cherry-pick merge by selecting work items.
  4. The TFS merge dialog doesn’t have “force” and “baseless” options.

Initializing ExtendedMerge extension

The list of merge candidates can be obtained in two ways:

  1. From the Source Control Explorer window.
    In this case, all history changesets for the selected server path are parsed.
    image006
  2. From the Query Results window.
    In this case, all changesets linked to the selected Work Items are parsed.
    image007

After the initialization stage, the extension opens a Visual Studio tool window with a grid that shows useful information about the parsed changesets:

  • Checked/unchecked status.
  • Work item ID.
  • Changeset ID.
  • Changeset check-in date.
  • Changeset creator.
  • The change types this changeset contains.
  • Possible merge options
    • None – merge is impossible (not to be confused with the “discard” command-line option)
    • Baseless – baseless merge
    • Force – force merge
    • Candidate – regular merge
  • Merge source path (can be modified).
  • Changeset comment field.

image008

The Merge Target Location field contains the path to a folder or a branch in source control.

Merge types are calculated from the shared Merge Target Location path and each changeset’s individual Source Path.
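
For reference, a single-changeset (cherry-pick) merge with the “baseless” and “force” options can be expressed through the version control client API roughly as follows; the branch paths, changeset number, and workspace location are placeholder assumptions.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

class CherryPickMergeSketch
{
    static void Main()
    {
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var vcs = collection.GetService<VersionControlServer>();

        // Assumes a workspace already maps the target branch locally.
        Workspace workspace = vcs.GetWorkspace(@"C:\src\Target");

        // Merge only changeset 1234 from the source branch into the target branch.
        VersionSpec changeset = new ChangesetVersionSpec(1234);
        GetStatus status = workspace.Merge(
            "$/Project/Main",                 // merge source path
            "$/Project/Release",              // merge target path
            changeset,                        // version-from
            changeset,                        // version-to (same changeset = cherry-pick)
            LockLevel.None,
            RecursionType.Full,
            MergeOptions.Baseless | MergeOptions.ForceMerge);

        Console.WriteLine("{0} conflicts, {1} operations", status.NumConflicts, status.NumOperations);
    }
}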

Running ExtendedMerge extension

The extension runs all of its actions from toolbar buttons (some are also duplicated on the grid context menu).

image009

  • Link to Work Items: When checking in merge results, link the new changesets to work items the same way they were linked in the original changesets.
  • Normal Merge: Perform a regular merge based on the merge candidates.
  • Conservative Merge: Use the “Conservative” merge option. It produces more merge conflicts.
  • Refresh: Recalculate the changeset merge types.
  • Merge: Perform the merge.
  • Resolve: Show merge conflicts.
  • Changeset Details: Show changeset details.
  • Work Item Details: Show Work Item details.
  • Navigate to Server Path: Navigate to the server path in Source Control Explorer.
  • Edit Server Path: Edit server paths (one or multiple).
  • Copy to Clipboard: Copy selected changeset details to the clipboard.
  • Mark All Items: Check all grid items.
  • Unmark All Items: Uncheck all grid items.

Resolving merge conflicts

Merge conflicts can occur during the merge operation.
In that case, click OK and resolve the conflicts with the standard TFS Resolve Conflicts dialog.

image010

image011

GetPreview

This extension emulates the TFS command-line operation: tf.exe get /recursive /preview "itemspec"
To run it, select a path in the Source Control Explorer window and click “Emulate Get-Preview”.

The results are printed to the Output window.
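
The same preview can be approximated with the version control client API by passing GetOptions.Preview and listening to the Getting event. This is a hedged sketch; the server path and local workspace path are placeholders.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

class GetPreviewSketch
{
    static void Main()
    {
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var vcs = collection.GetService<VersionControlServer>();

        // Report each item that *would* be downloaded or changed.
        vcs.Getting += (sender, e) => Console.WriteLine("{0}: {1}", e.Status, e.TargetLocalItem);

        Workspace workspace = vcs.GetWorkspace(@"C:\src\MyWorkspace");
        var request = new GetRequest(
            new ItemSpec("$/Project/Main", RecursionType.Full), VersionSpec.Latest);

        // GetOptions.Preview evaluates the get without touching files on disk,
        // which mirrors: tf.exe get /recursive /preview "itemspec"
        workspace.Get(request, GetOptions.Preview);
    }
}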

image012

ModifyCheckinDate

This extension consists of two parts:

  1. Updates the modification time of checked-out files to their latest check-in time (see the sketch after this list).
    It is accessible from the Source Control window and the History window.
  2. Directly modifies a changeset’s check-in time in the TFS database.
    It runs the SQL statement: UPDATE tbl_Changeset SET CreationDate='?' WHERE ChangeSetId='?'
    It is accessible from the History window.
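
A rough sketch of the first part, which stamps local files with their latest check-in time through the version control client API (server path and workspace path are placeholders):

using System;
using System.IO;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

class ModifyCheckinDateSketch
{
    static void Main()
    {
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var vcs = collection.GetService<VersionControlServer>();
        Workspace workspace = vcs.GetWorkspace(@"C:\src\MyWorkspace");

        // Set each local file's modification time to the item's latest check-in time.
        ItemSet items = vcs.GetItems("$/Project/Main", RecursionType.Full);
        foreach (Item item in items.Items)
        {
            if (item.ItemType != ItemType.File)
                continue;

            string localPath = workspace.GetLocalItemForServerItem(item.ServerItem);
            if (File.Exists(localPath))
            {
                File.SetLastWriteTime(localPath, item.CheckinDate);
            }
        }
    }
}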

image013

image014

Release Notes

  • ExtendedMerge uses a number of internal Microsoft classes, which may cause the tool to malfunction in future TFS versions:
    • Show browse server folder dialog: Microsoft.TeamFoundation.Build.Controls.VersionControlHelper.ShowServerFolderBrowser
    • Show merge progress dialog: Microsoft.TeamFoundation.VersionControl.Controls.ProgressMerge
    • Show resolve conflicts dialog: Microsoft.TeamFoundation.Client.Arguments, Microsoft.TeamFoundation.VersionControl.CommandLine.CommandResolve
  • GetPreview uses the environment variable %VS100COMNTOOLS%.

How To: From the Trenches – Use SharePoint to Implement an ALM in Your Organisation

After my successful creation and implementation of an ALM for Business Connexion using the SharePoint Platform, I thought I’d share the lessons I have learned and show you, step by step, how you can implement your own ALM leveraging the power of the SharePoint Platform.


In this article:

  • An overview of SharePoint Application Lifecycle Management.
  • How to plan and manage Application Lifecycle Management (ALM) in Microsoft SharePoint 2010 projects by using Microsoft Visual Studio 2010 and Microsoft SharePoint Designer 2010.
  • What to consider when setting up team development environments.
  • Establishing upgrade management processes.
  • Creating a standard SharePoint development model.
  • Extending your SharePoint ALM to include other departments, such as Java, Mobile, .NET, and even SAP development.

Introduction to Application Lifecycle Management in SharePoint 2010

The Microsoft SharePoint 2010 development platform, which includes Microsoft SharePoint Foundation 2010 and Microsoft SharePoint Server 2010, contains many capabilities to help you develop, deploy, and update customizations and custom functionalities for your SharePoint sites. The activities that take advantage of these capabilities all fall under the category of Application Lifecycle Management (ALM).

Key considerations when establishing ALM processes include not only the development and testing practices that you use before the initial deployment of a single customization, but also the processes that you must implement to manage updates and integrate customizations and custom functionality on an existing farm.

This article discusses the capabilities and tools that you can use when implementing an ALM process on a SharePoint farm, and also specific concerns and things to consider when you create and hone your ALM process for SharePoint development.

This article assumes that each development team will develop a unique ALM process that fits its specific size and needs, so its guidance is necessarily broad. However, it also assumes that regardless of the size of your team and the specific nature of your custom solutions, you will need to address similar sets of concerns and use capabilities and tools that are common to all SharePoint developers.

The guidance in this article will help you create a development model that exploits all the advantages of the SharePoint 2010 platform and addresses the needs of your organization.

SharePoint Application Lifecycle Management: An Overview

Although the specific details of your SharePoint 2010 ALM process will differ according to the requirements of your organization, most development teams will follow the same general set of steps. Figure 1 depicts an example ALM process for a midsize or large SharePoint 2010 deployment. Obviously, the process and required tasks depend on the project size.

Figure 1. Example ALM process

The following are the specific steps in the process illustrated in Figure 1 (see corresponding callouts 1 through 10):

  1. Someone (for example, a project manager or lead developer) collects initial requirements and turns them into tasks.
  2. Developers use Microsoft Visual Studio Team Foundation Server 2010 or other tools to track the development progress and store custom source code.
  3. Because source code is stored in a centralized location, you can create automated builds for integration and unit testing purposes. You can also automate testing activities to increase the overall quality of the customizations.
  4. After custom solutions have successfully gone through acceptance testing, your development team can continue to the pre-production or quality assurance environment.
  5. The pre-production environment should resemble the production environment as much as possible. This often means that the pre-production environment has the same patch level and configurations as the production environment. The purpose of this environment is to ensure that your custom solutions will work in production.
  6. Occasionally, copy the production database to the pre-production environment, so that you can imitate the upgrade actions that will be performed in the production environment.
  7. After the customizations are verified in the pre-production environment, they are deployed either directly to production or to a production staging environment and then to production.
  8. After the customizations are deployed to production, they run against the production database.
  9. End users work in the production environment, and give feedback and ideas concerning the different functionalities. Issues and bugs are reported and tracked through established reporting and tracking processes.
  10. Feedback, bugs, and other issues in the production environment are turned into requirements, which are prioritized and turned into developer tasks. Figure 2 shows how multiple developer teams can work with and process bug reports and change requests that are received from end users of the production environment. The model in Figure 2 also shows how development teams might coordinate their solution packages. For example, the framework team and the functionality development team might follow separate versioning models that must be coordinated as they track bugs and changes.
    Figure 2. Change management involving multiple developer teams

Integrating Testing and Build Verification Environments into a SharePoint 2010 ALM Process

In larger projects, quality assurance (QA) personnel might use an additional build verification or user acceptance testing (UAT) farm to test and verify the builds in an environment that more closely resembles the production environment.

Typically, a build verification farm has multiple servers to ensure that custom solutions are deployed correctly. Figure 3 shows a potential model for relating development integration and testing environments, build verification farms, and production environments. In this particular model, the pre-production or QA farm and the production farm switch places after each release. This model minimizes any downtime that is related to maintaining the environments.

Figure 3. Model for relating development integration and testing environments

Integrating SharePoint Designer 2010 into a SharePoint 2010 ALM Process

Another significant consideration in your ALM model is Microsoft SharePoint Designer 2010. SharePoint 2010 is an excellent platform for no-code solutions, which can be created and then deployed directly to the production environment by using SharePoint Designer 2010. These customizations are stored in the content database and are not stored in your source code repository.

General designer activities and how they interact with development activities are another consideration. Will you be creating page layouts directly within your production environment, or will you deploy them as part of your packaged solutions? There are advantages and disadvantages to both options.

Your specific ALM model depends completely on the custom solutions and the customizations that you plan to make, and on your own policies. Your ALM process does not have to be as complex as the one described in this section. However, you must establish a firm ALM model early in the process as you plan and create your development environment and before you start creating your custom solutions.

Next, we discuss specific tools and capabilities that are related to SharePoint 2010 development that you can use when considering how to create a model for SharePoint ALM that will work best for your development team.

Solution Packages and SharePoint Development Tools

One major advantage of the SharePoint 2010 development platform is that it provides the ability to save sites as solution packages. A solution package is a deployable, reusable package stored in a CAB file with a .wsp extension. You can create a solution package either by using the SharePoint 2010 user interface (UI) in the browser, SharePoint Designer 2010, or Microsoft Visual Studio 2010. In the browser and SharePoint Designer 2010 UIs, solution packages are also called templates. This flexibility enables you to create and design site structures in a browser or in SharePoint Designer 2010, and then import these customizations into Visual Studio 2010 for more development. Figure 4 shows this process.

Figure 4. Flow through the SharePoint development tools

When the customizations are completed, you can deploy your solution package to SharePoint for use. After modifying the existing site structure by using a browser, you can start the cycle again by saving the updated site as a solution package.

This interaction among the tools also enables you to use other tools. For example, you can design a workflow process in Microsoft Visio 2010 and then import it to SharePoint Designer 2010 and from there to Visual Studio 2010. For instructions on how to design and import a workflow process, see Create, Import, and Export SharePoint Workflows in Visio.

For more information about creating solution packages in SharePoint Designer 2010, see Save a SharePoint Site as a Template. For more information about creating solution packages in Visual Studio 2010, see Creating SharePoint Solution Packages.

Using SharePoint Designer 2010 as a Development Tool

SharePoint Designer 2010 differs from Microsoft Office SharePoint Designer 2007 in that its orientation has shifted from the page to features and functionality. The improved UI provides greater flexibility for creating and designing different functionalities. It provides rich tooling for building complete, reusable, and process-centric applications. For more information about the new capabilities and features of SharePoint Designer 2010, see Getting Started with SharePoint Designer.

You can also use SharePoint Designer 2010 to modify modular components developed with Visual Studio 2010. For example, you can create Web Parts and other controls in Visual Studio 2010, deploy them to a SharePoint farm, and then edit them in SharePoint Designer 2010.

The primary target users for SharePoint Designer 2010 are IT personnel and information workers who can use this application to create customizations in a production environment. For this reason, you must decide on an ALM model for your particular environment that defines which kinds of customizations will follow the complete ALM development process and which customizations can be done by using SharePoint Designer 2010. Developers are secondary target users. They can use SharePoint Designer 2010 as a part of their development activities, especially during initial creation of customization packages and also for rapid development and prototyping. Your ALM process must also define where and how to fit SharePoint Designer 2010 into the broader development model.

A key challenge of using SharePoint Designer 2010 is that when you use it to modify files, all of your changes are stored in the content database instead of in the file system. For example, if you customize a master page for a specific site by using SharePoint Designer 2010 and then design and deploy new branding elements inside a solution package, the changes are not available for the site that has the customized master page, because that site is using the version of the master page that is stored in the content database.

To minimize challenges such as these, SharePoint Designer 2010 contains new features that enable you to control usage of SharePoint Designer 2010 in a specific environment. You can apply these control settings at the web application level or site collection level. If you disable some action at the web application level, that setting cannot be changed at the site collection level.

SharePoint Designer 2010 makes the following settings available:

  • Allow site to be opened in SharePoint Designer 2010.
  • Allow customization of files.
  • Allow customization of master pages and layout pages.
  • Allow site collection administrators to see the site URL structure.

Because the primary purpose of SharePoint Designer 2010 is to customize content on an existing site, it does not support source code control. By default, pages that you customize by using SharePoint Designer 2010 are stored inside a versioned SharePoint library. This provides you with simple support for versioning, but not for full-featured source code control.

Importing Solution Packages into Visual Studio 2010

When you save a site as a solution package in the browser (from the Save as Template page in Site Settings), SharePoint 2010 stores the site as a solution package (.wsp) file and places it in the Solution Gallery of that site collection. You can then download the solution package from the Solution Gallery and import it into Visual Studio 2010 by using the Import SharePoint Solution Package template, as shown in Figure 5.

Figure 5. Import SharePoint Solution Package template

SharePoint 2010 solution packages contain many improvements that take advantage of new capabilities that are available in its feature framework. The following list contains some of the new feature elements that can help you manage your development projects and upgrades.

  • SourceVersion for WebFeature and SiteFeature
  • WebTemplate feature element
  • PropertyBag feature element
  • $ListId:Lists
  • WorkflowAssociation feature element
  • CustomSchema attribute on ListInstance
  • Solution dependencies

After you import your project, you can start customizing it any way you like.

Note: Because this capability is based on the WebTemplate feature element, which is based on a corresponding site definition, the resulting solution package will contain definitions for everything within the site. For more information about creating and using web templates, see Web Templates.

Visual Studio 2010 supports source code control (as shown in Figure 6), so that you can store the source code for your customizations in a safer and more secure central location, and enable easy sharing of customizations among developers.

Figure 6. Visual Studio 2010 source code control

The specific way in which your developers access this source code and interact with each other depends on the structure of your team development environment. The next section of this article discusses key concerns and considerations that you should consider when you build a team development environment for SharePoint 2010.

Team Development Environment for SharePoint 2010: An Overview

As in any ALM planning process, your SharePoint 2010 planning should include the following steps:

  1. Identify and create a process for initiating projects.
  2. Identify and implement a versioning system for your source code and other deployed resources.
  3. Plan and implement version control policies.
  4. Identify and create a process for work item and defect tracking and reporting.
  5. Write documentation for your requirements and plans.
  6. Identify and create a process for automated builds and continuous integration.
  7. Standardize your development model for repeatability.

Microsoft Visual Studio Team Foundation Server 2010 (shown in Figure 7) provides a good potential platform for many of these elements of your ALM model.

Figure 7. Visual Studio 2010 Team Foundation Server

When you have established your model for team development, you must choose either a collection of tools or Microsoft Visual Studio Team Foundation Server 2010 to manage your development. Microsoft Visual Studio Team Foundation Server 2010 provides direct integration into Visual Studio 2010, and it can be used to manage your development process efficiently. It provides many capabilities, but how you use it will depend on your projects.

You can use the Microsoft Visual Studio Team Foundation Server 2010 for the following activities:

  • Tracking work items and reporting the progress of your development. Microsoft Visual Studio Team Foundation Server 2010 provides tools to create and modify work items that are delivered not only from Visual Studio 2010, but also from the Visual Studio 2010 web client.
  • Storing all source code for your custom solutions.
  • Logging bugs and defects.
  • Creating, executing, and managing your testing with comprehensive testing capabilities.
  • Enabling continuous integration of your code by using the automated build capabilities.

Microsoft Visual Studio Team Foundation Server 2010 also provides a basic installation option that installs all required functionalities for source control and automated builds. These are typically the most used capabilities of Microsoft Visual Studio Team Foundation Server 2010, and this option helps you set up your development environment more easily.

Setting Up a Team Development Environment for SharePoint 2010

SharePoint 2010 must be installed on a development computer to take full advantage of its development capabilities. If you are developing only remote applications, such as solutions that use SharePoint web services, the client object model, or REST, you could potentially develop solutions on a computer where SharePoint 2010 is not installed. However, even in this case, your developers’ productivity would suffer, because they would not be able to take advantage of the full debugging experience that comes with having SharePoint 2010 installed directly on the development computer.

The design of your development environment depends on the size and needs of your development team. Your choice of operating system also has a significant impact on the overall design of your team development process. You have three main options for creating your development environments, as follows:

  1. You can run SharePoint 2010 directly on your computer’s client operating system. This option is available only when you use the 64-bit version of Windows 7, Windows Vista Service Pack 1, or Windows Vista Service Pack 2.
  2. You can use the boot to Virtual Hard Drive (VHD) option, which means that you start your laptop by using the operating system in VHD. This option is available only when you use Windows 7 as your primary operating system.
  3. You can use virtualization capabilities. If you choose this option, you have a choice of many options. But from an operational viewpoint, the option that is most likely the easiest to implement is a centralized virtualized environment that hosts each developer’s individual development environment.

The following sections take a closer look at these three options.

SharePoint 2010 on a Client Operating System

If you are using the 64-bit version of Windows 7, Windows Vista Service Pack 1, or Windows Vista Service Pack 2, you can install SharePoint Foundation 2010 or SharePoint Server 2010. For more information about installing SharePoint 2010 on supported operating systems, see Setting Up the Development Environment for SharePoint 2010 on Windows Vista, Windows 7, and Windows Server 2008.

Figure 8 shows how a computer that is running a client operating system would operate within a team development environment.

Figure 8. Computer running a client operating system in a team development environment

A benefit of this approach is that you can take full advantage of any of your existing hardware that is running one of the targeted client operating systems. You can also take advantage of pre-existing configurations, domains, and enterprise resources that your enterprise supports. This could mean that you would require little or no additional IT support. Your developers would also face no delays (such as booting up a virtual machine or connecting to an environment remotely) in accessing their development environments.

However, if you take this approach, you must ensure that your developers have access to sufficient hardware resources. In any development environment, you should use a computer that has an x64-capable CPU, and at least 2 gigabytes (GB) of RAM to install and run SharePoint Foundation 2010; 4 GB of RAM is preferable for good performance. You should use a computer that has 6 GB to 8 GB of RAM to install and run SharePoint Server 2010.

A disadvantage of this approach is that your environments will not be centrally managed, and it will be difficult to keep all of your project-dependent environmental requirements in sync. It might also be advisable to write batch files that start and stop some of the SharePoint-related services so that when your developers are not working with SharePoint 2010, these services will not consume resources and degrade the performance of their computers.

The lack of centralized maintenance could hurt developer productivity in other ways. For example, this might be an unwieldy approach if your team is working on a large Microsoft SharePoint Online project that is developing custom solutions for multiple services (for example, the equivalents of http://intranet, http://mysite, http://teams, http://secure, http://search, http://partners, and http://www.internet.com) and deploying these solutions in multiple countries or regions.

If you are developing on a computer that is running a client operating system in a corporate domain, each development computer would have its own name (and each local domain name would be different, such as http://dev1 or http://dev2). If each developer is implementing custom functionalities for multiple services, you must use different port numbers to differentiate each service (for example, http://dev1 for http://intranet and http://dev1:81 for http://mysite). If all of your developers are using the same Visual Studio 2010 projects, the project debugging URL must be changed manually whenever a developer takes the latest version of a project from your source code repository.

This would create a manual step that could hurt developer productivity, and it would also diminish the efficiency of any scripts that you have written for setting up development environments, because the individual environments are not standardized. Some form of centralization with virtualization is preferable for large enterprise development projects.

SharePoint 2010 on Windows 7 and Booting to Virtual Hard Drive

If you are using Windows 7, you can also create a VHD out of an existing Windows Server 2008 image on which SharePoint 2010 is installed in Windows Hyper-V, and then configure Windows 7 with BCDEdit.exe so that it boots directly to the operating system on the VHD. To learn more about this kind of configuration, see Deploy Windows on a Virtual Hard Disk with Native Boot and Boot from VHD in Win 7.

Figure 9 shows how a computer that is running Windows 7 and booting to VHD would operate within a team development environment.

Figure 9. Windows 7 and booting to VHD in a team environment

An advantage of this approach is the flexibility of having multiple dedicated environments for an individual project, enabling you to isolate each development environment. Your developers will not accidentally cross-reference any artifacts within their projects, and they can create project-dependent environments.

However, this option has considerable hardware requirements, because you are using the available hardware and resources directly on your computers.

SharePoint 2010 in Centralized Virtualized Environments

In a centralized virtualized environment, you host your development environments in one centralized location, and developers access these environments through remote connections. This means that you use Windows Hyper-V in the centralized location and copy a VHD for every developer as needed. Each VHD is configured to be available from the corporate network, so that when it starts, it can be accessed by using remote connections.

Figure 10 shows how a centralized virtualized team development environment would operate.

Figure 10. Centralized virtualized team development environment

An advantage of this approach is that the hardware requirements for individual developer computers are relatively few because the actual work happens in a centralized environment. Developers could even use computers with 1 GB of RAM as their clients and then connect remotely to the centralized location. You can also manage environments easily from one centralized location, making adjustments to them whenever necessary.

Your centralized host will have significantly high hardware requirements, but developers can easily start and stop these environments. This enables you to use the hardware that you have allocated for your development environments more efficiently. Additionally, this approach provides a ready platform for more extensive testing environments for your custom code (such as multi-server farms).

After you set up your team development environment, you can start taking advantage of the deployment and upgrade capabilities that are included with the new solution packaging model in SharePoint 2010. The following sections describe how to take advantage of these new capabilities in your ALM model.

Models for Solution Lifecycle Management in SharePoint 2010

The SharePoint 2010 solution packaging model provides many useful features that will help you plan for deploying custom solutions and managing the upgrade process. You can implement assembly versioning by applying binding redirects in your web application configuration file. You can also apply versioning to your feature upgrades, and feature upgrade actions enable you to manage changes that will be necessary on your existing sites to accommodate feature upgrades. These upgrade actions can be handled declaratively or programmatically.

The feature upgrade query object model enables you to create queries in your code that look for features on your existing sites that can be upgraded. You can use this object model to obtain relevant information about all of the features and feature versions that are deployed on your SharePoint 2010 sites. In your solution manifest file, you can also configure the type of Internet Information Services (IIS) recycling to perform during a solution upgrade.

The following sections go into greater details about these capabilities and how you can use them.

Using Assembly BindingRedirect with SharePoint 2010 Assemblies

The BindingRedirect feature element can be added to your web application’s configuration file. It enables you to redirect from earlier versions of installed assemblies to newer versions. Figure 11 shows how the XML configuration from the solution manifest file instructs SharePoint to add binding redirection rules to the web application configuration file. These rules forward any reference to version 1.0 of the assembly to version 2.0. This is required in your solution manifest file if you are upgrading a custom solution that uses assembly versioning and if there are existing instances of the solution and the assembly on your sites.

Figure 11. Binding redirection rules in a solution manifest file

It is a best practice to use assembly versioning, because it gives you an easy way to track the versions of a solution that are deployed to your production environments.

SharePoint 2010 Feature Versioning

The support for feature versioning in SharePoint 2010 provides many capabilities that you can use when you are upgrading features. For example, you can use the SPFeature.Version property to determine which versions of a feature are deployed on your farm, and therefore which features must be upgraded. For a code sample that demonstrates how to do this, see Version.

Feature versioning in SharePoint 2010 also enables you to define a value for the SPFeatureDependency.MinimumVersion property to handle feature dependencies. For example, you can use the MinimumVersion property to ensure that a particular version of a dependent feature is activated. Feature dependencies can be added or removed in each new version of a feature.

The SharePoint 2010 feature framework has also been enhanced at the object model level to support feature versioning more easily. You can use the QueryFeatures method to retrieve a list of features, and you can specify both the feature version and whether a feature requires an upgrade. The QueryFeatures method returns an instance of SPFeatureQueryResultCollection, which you can use to access all of the features that must be updated. This method is available from multiple scopes, because it is exposed by the SPWebService, SPWebApplication, SPContentDatabase, and SPSite classes. For more information about the overloads of this method, see QueryFeatures. For an overview of the feature upgrade object model, see Feature Upgrade Object Model.
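
As an illustration of this object model (the feature ID and site URL below are placeholders), you could enumerate and upgrade the instances of a feature that need upgrading roughly like this:

using System;
using Microsoft.SharePoint;

class FeatureUpgradeQuerySketch
{
    static void Main()
    {
        Guid featureId = new Guid("11111111-2222-3333-4444-555555555555"); // placeholder feature ID

        using (SPSite site = new SPSite("http://intranet"))
        {
            // Return only the instances whose deployed definition is newer
            // than the version that is currently activated.
            SPFeatureQueryResultCollection features = site.QueryFeatures(featureId, true);

            foreach (SPFeature feature in features)
            {
                Console.WriteLine("Upgrading feature at version {0}", feature.Version);

                // Runs the upgrade actions declared in Feature.xml;
                // pass true to force the upgrade even if no upgrade is pending.
                feature.Upgrade(false);
            }
        }
    }
}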

The following section summarizes many of the new upgrade actions that you can apply when you are upgrading from one version of a feature to another.

SharePoint 2010 Feature Upgrade Actions

Upgrade actions are defined in the Feature.xml file. The SPFeatureReceiver class contains a FeatureUpgrading method, which you can use to define actions to perform during an upgrade. This method is called during feature upgrade when the feature’s Feature.xml file contains one or more <CustomUpgradeAction> tags, as shown in the following example.

<UpgradeActions>
  <CustomUpgradeAction Name="text">
    ...
  </CustomUpgradeAction>
</UpgradeActions>

Each custom upgrade action has a name, which can be used to differentiate the code that must be executed in the feature receiver. As shown in the following example, you can parameterize custom action instances.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange EndVersion ="2.0.0.0">
      <!-- First action-->
      <CustomUpgradeAction Name="example">
        <Parameters>
          <Parameter Name="parameter1">Whatever</Parameter>
          <Parameter Name="anotherparameter">Something meaningful</Parameter>
          <Parameter Name="thirdparameter">additional configurations</Parameter>
        </Parameters>
      </CustomUpgradeAction>
      <!-- Second action-->
      <CustomUpgradeAction Name="SecondAction">
        <Parameters>
          <Parameter Name="SomeParameter1">Value</Parameter>
          <Parameter Name="SomeParameter2">Value2</Parameter>
          <Parameter Name="SomeParameter3">Value3</Parameter>
        </Parameters>
      </CustomUpgradeAction>
    </VersionRange>
  </UpgradeActions>
</Feature>

This example contains two CustomUpgradeAction elements, one named example and the other named SecondAction. Both elements have different parameters, which are dependent on the code that you wrote for the FeatureUpgrading event receiver. The following example shows how you can use these upgrade actions and their parameters in your code.

/// <summary>
/// Called when a feature instance is upgraded, once for each custom upgrade action in the Feature.xml file.
/// </summary>
/// <param name="properties">Feature receiver properties</param>
/// <param name="upgradeActionName">Upgrade action name</param>
/// <param name="parameters">Custom upgrade action parameters</param>
public override void FeatureUpgrading(SPFeatureReceiverProperties properties,
                                      string upgradeActionName,
                                      System.Collections.Generic.IDictionary<string, string> parameters)
{
    // Do not do anything if the feature scope is not correct.
    if (!(properties.Feature.Parent is SPWeb))
    {
        // Log that the feature scope is incorrect.
        return;
    }

    switch (upgradeActionName)
    {
        case "example":
            FeatureUpgradeManager.UpgradeAction1(parameters["parameter1"], parameters["anotherparameter"],
                                                 parameters["thirdparameter"]);
            break;
        case "SecondAction":
            FeatureUpgradeManager.UpgradeAction1(parameters["SomeParameter1"], parameters["SomeParameter2"],
                                                 parameters["SomeParameter3"]);
            break;
        default:
            // Log that no code exists for this action.
            break;
    }
}

You can have as many upgrade actions as you want, and you can apply them to version ranges. The following example shows how you can apply upgrade actions to version ranges of a feature.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange BeginVersion="1.0.0.0" EndVersion ="2.0.0.0">
      ...
    </VersionRange>
    <VersionRange BeginVersion="2.0.0.1" EndVersion="3.0.0.0">
      ...
    </VersionRange>
    <VersionRange BeginVersion="3.0.0.1" EndVersion="4.0.0.0">
      ...
    </VersionRange>
  </UpgradeActions>
</Feature>

The AddContentTypeField upgrade action can be used to define additional fields for an existing content type. It also provides the option of pushing these changes down to child instances, which is often the preferred behavior. When you initially deploy a content type to a site collection, a definition for it is created at the site collection level. If that content type is used in any subsite or list, a child instance of the content type is created. To ensure that every instance of the specific content type is updated, you must set the PushDown attribute to True, as shown in the following example.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange EndVersion ="2.0.0.0">
      <AddContentTypeField ContentTypeId="0x0101002b0e208ace0a4b7e83e706b19f32cab9"
                           FieldId="{ccbcd479-94c9-4f3a-95c4-58897da434fe}"
                           PushDown="True"/>
    </VersionRange>
  </UpgradeActions>
</Feature>

For more information about working with content types programmatically, see Introduction to Content Types.

The ApplyElementManifests upgrade action can be used to apply new artifacts to a SharePoint 2010 site without reactivating features. Just as you can add new elements to any new SharePoint elements.xml file, you can instruct SharePoint to apply content from a specific elements file to sites where a given feature is activated.

You can use this upgrade action if you are upgrading an existing feature whose FeatureActivating event receiver performs actions that you do not want to execute again on sites where the feature is deployed. The following example demonstrates how to include this upgrade action in a Feature.xml file.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange EndVersion ="2.0.0.0">
      <ApplyElementManifests>
        <ElementManifest Location="AdditionalV2Fields\Elements.xml"/>
      </ApplyElementManifests>
    </VersionRange>
  </UpgradeActions>
</Feature>

An example of a use case for this upgrade action involves adding new .webpart files to a feature in a site collection. You can use the ApplyElementManifests upgrade action to add those files without reactivating the feature. Another example would involve page layouts, which contain initial Web Part instances that are defined in the file element structure of the feature element file. If you reactivate this feature, you will get duplicates of these Web Parts on each of the page layouts. In this case, you can use the ElementManifest element of the ApplyElementManifests upgrade action to add new page layouts to a site collection that uses the feature without reactivating the feature.

The MapFile element enables you to map a URL request to an alternative URL. The following example demonstrates how to include this upgrade action in a Feature.xml file.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <MapFile FromPath="Features\MapPathDemo_MapPathDemo\PageDeployment\MyExamplePage.aspx"
             ToPath="Features\MapPathDemo_MapPathDemo\PageDeployment\MyExamplePage2.aspx" />
  </UpgradeActions>
</Feature>

Mapping URLs in this way would be useful to you in a case where you have to deploy a new version of a page that was customized by using SharePoint Designer 2010. The resulting customized page would be served from the content database. When you deploy the new version of the page, the new version will not appear because content for that page is coming from the database and not from the file system. You could work around this problem by using the MapFile element to redirect requests for the old version of the page to the newer version.

It is important to understand that the FeatureUpgrading method is called for each feature instance that will be updated. If you have 10 sites in your site collection and you update a web-scoped feature, the feature receiver will be called 10 times, once for each site context. For more information about how to use these new declarative feature elements, see Feature.xml Changes.

Upgrading SharePoint 2010 Features: A High-Level Walkthrough

This section describes at a high level how you can put these feature-versioning and upgrading capabilities to work. When you create a new version of a feature that is already deployed on a large SharePoint 2010 farm, you must consider two different scenarios: what happens when the feature is activated on a new site and what happens on sites where the feature already exists. When you add new content to the feature, you must first update all of the existing definitions and include instructions for upgrading the feature where it is already deployed.

For example, perhaps you have developed a content type to which you must add a custom site column named City. You do this in the following way:

  1. Add a new element file to the feature. This element file defines the new site column and modifies the Feature.xml file to include the element file.
  2. Update the existing definition of the content type in the existing feature element file. This update will apply to all sites where the feature is newly deployed and activated.
  3. Define the required upgrade actions for the existing sites. In this case, you must ensure that the newly added element file for the additional site column is deployed and that the new site column is associated with the existing content types. To achieve these two objectives, you add the ApplyElementManifests and the AddContentTypeField upgrade actions to your Feature.xml file.

When you deploy the new version of the feature to existing sites and upgrade it, the upgrade actions are applied to sites one by one. If you have defined custom upgrade actions, the FeatureUpgrading method will be called as many times as there are instances of the feature activated in your site collection or farm.

Figure 12 shows how the different components of this scenario work together when you perform the upgrade.

Figure 12. Components of a feature upgrade that adds a new element to an existing feature

Different sites might have different versions of a feature deployed on them. In this case, you can create version ranges, which define specific actions to perform when you are upgrading from one version to another. If a version range is not defined, all upgrade actions will be applied during each upgrade.

Figure 13 shows how different upgrade actions can be applied to version ranges.

Figure 13. Applying different upgrade actions to version ranges.

In this example, if a given site is upgrading directly from version 1.0 to version 3.0, all configurations will be applied because you have defined specific actions for upgrading from version 1.0 to version 2.0 and from 2.0 to version 3.0. You have also defined actions that will be applied regardless of feature version.

Code Design Guidelines for Upgrading SharePoint 2010 Features

To provide more flexibility for your code, you should not place your upgrade code directly inside the FeatureUpgrading event receiver. Instead, put the code in some centralized location and refer to it inside the event receiver, as shown in Figure 14.

Figure 14. Centralized feature upgrade manager

By placing your upgrade code inside a centralized utility class, you increase both the reusability and the testability of your code, because you can perform the same actions in multiple locations. You should also try to design your custom upgrade actions as generically as possible, using parameters to make them applicable to specific upgrade scenarios.
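
A minimal sketch of such a utility class, assuming the FeatureUpgrading receiver shown earlier and treating FeatureUpgradeManager.UpgradeAction1 as a hypothetical helper rather than a real SharePoint API:

using System;
using Microsoft.SharePoint.Administration;

// Hypothetical centralized upgrade utility referenced from the FeatureUpgrading
// event receiver shown earlier. The class and method names are illustrative only.
public static class FeatureUpgradeManager
{
    // Performs one named upgrade action; the arguments come from the
    // <Parameter> elements of the <CustomUpgradeAction> in Feature.xml.
    public static void UpgradeAction1(string parameter1, string anotherParameter, string thirdParameter)
    {
        // Trace the call so the upgrade can be followed in the ULS log,
        // then perform the real upgrade work (for example, updating list
        // settings or web properties) in one reusable, testable place.
        SPDiagnosticsService.Local.WriteTrace(
            0,
            new SPDiagnosticsCategory("FeatureUpgrade", TraceSeverity.Medium, EventSeverity.Information),
            TraceSeverity.Medium,
            "Upgrade action executed with parameters: {0}, {1}, {2}",
            new object[] { parameter1, anotherParameter, thirdParameter });
    }
}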

Solution Lifecycles: Upgrading SharePoint 2010 Solutions

If you are upgrading a farm (full-trust) solution, you must first deploy the new version of your solution package to a farm.

Execute either of the following scripts from a command prompt to deploy updates to a SharePoint farm. The first example uses the Stsadm.exe command-line tool.

stsadm -o upgradesolution -name solution.wsp -filename solution.wsp

The second example uses the Update-SPSolution Windows PowerShell cmdlet.

Update-SPSolution -Identity contoso_solution.wsp -LiteralPath c:\contoso_solution_v2.wsp -GACDeployment

After the new version is deployed, you can perform the actual upgrade, which executes the upgrade actions that you defined in your Feature.xml files.

A farm solution upgrade can be performed either farm-wide or at a more granular level by using the object model. A farm-wide upgrade is performed by using the Psconfig command-line tool, as shown in the following example.

psconfig -cmd upgrade -inplace b2b

Note: This tool causes a service break on the existing sites. During the upgrade, all feature instances throughout the farm for which newer versions are available will be upgraded.

You can also perform upgrades for individual features at the site level by using the Upgrade method of the SPFeature class. This method causes no service break on your farm, but you are responsible for managing the version upgrade from your code. For a code example that demonstrates how to use this method, see SPFeature.Upgrade.

Upgrading a sandboxed solution at the site collection level is much more straightforward. Just upload the SharePoint solution package (.wsp file) that contains the upgraded features. If you have a previous version of a sandboxed solution in your solution gallery and you upload a newer version, an Upgrade option appears in the UI, as shown in Figure 15.

Figure 15. Upgrading a sandboxed solution

After you select the Upgrade option and the upgrade starts, all features in the sandboxed solution are upgraded.

Conclusion

This article has discussed some considerations and examples of Application Lifecycle Management (ALM) design that are specific to SharePoint 2010, and it has also enumerated and described the most important capabilities and tools that you can integrate into the ALM processes that you choose to establish in your enterprise. The SharePoint 2010 feature framework and solution packaging model provide flexibility and power that you can put to work in your ALM processes.

How To: Automate a Test Case in Microsoft Test Manager & Set Up Automated Build-Deploy-Test Workflows

To automate a test case, link it to a coded test method. You can link any unit test, coded UI test, or generic test to a test case. You’ll want to link a test method that performs the test described by the test case. Typically these are integration tests.

The results of automated and manual tests appear together. If the test cases are linked to backlog items, stories, or other requirements, you can review the test results by requirement.

You can make links one at a time, or you can generate test cases from an assembly of test classes.

  1. Using Visual Studio, create or choose a test method. It can be an ordinary test method, a coded UI test, an ordered test, or a generic test method.

    Check the method into Team Foundation Server.

    Keep the solution open in Visual Studio.

  2. Open the test case in Visual Studio.
    Open Test Case Using Microsoft Visual Studio
  3. Associate the test method with your test case.
    Associate Automation With Test Case

    If you want to change or delete the association later, choose Remove Association.

We don’t recommend linking load tests or web tests to test cases.

  1. Open a Developer Command Prompt, and change directory to the output directory of your Visual Studio solution.

    cd MySolution\MyProject\bin\Debug

  2. To import all the test methods from the solution:

    tcm testcase /collection:CollectionUrl /teamproject:MyProject /import /storage:MyAssembly.dll /category:"MyIntegrationTestCategory"

    The category parameter is optional but recommended. You only want to create test cases from integration or system tests, which you can mark by using the [TestCategory("category")] attribute (see the sketch after this list).

  3. In the Test hub in Team Web Access or in Microsoft Test Manager, use Add Existing to add the test cases to a test suite.
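
As a hedged illustration, an integration test marked with that category might look like the sketch below; the class, method, and category names are hypothetical.

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderServiceIntegrationTests
{
    // The TestCategory attribute is what lets tcm /category:"MyIntegrationTestCategory"
    // import only the integration tests from this assembly.
    [TestMethod]
    [TestCategory("MyIntegrationTestCategory")]
    public void SubmitOrder_WritesOrderToDatabase()
    {
        // Arrange and act against the deployed test environment here,
        // then assert on the expected end-to-end result.
        Assert.IsTrue(true); // placeholder assertion for the sketch
    }
}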

Provide the build location so that the test method can be found.

  1. In Microsoft Test Manager, choose Testing Center, Plan, Properties.
  2. Under Builds, set Filter for builds. You can set the build definition and quality attribute of the builds you want to choose from.
  3. Choose Modify to assign a build to the test plan. You can compare your current build with a build you plan to take. The associated items list shows the changes to work items between the builds. You can then assign the latest build to take and use for testing with this plan. For more information, see What development has been done since a previous build?.
Q: I’m not using Team Foundation Build to build my application and tests. How can I run automated lab tests?
Create a build definition that contains just the location where your assemblies are shared. Then create a fake instance of this build from the developer command prompt:

TfsCreateBuild.exe /collection:http://tfsservername:8080/tfs/collectionname /project:projectname /builddefinition:"MyBuildDefinition" /buildnumber:"FakeBuild_1.0"

Specify the build definition in your test plan.

To run your automated tests using Microsoft Test Manager, you must use a lab environment. It must have roles for each of the client and server machines used in your tests. (If you’ve used lab environments for manual tests, notice that automated tests must have a machine for the client role.)

  1. Create or choose either a standard lab environment or an SCVMM lab environment.

    If you create a new environment, choose a machine for each role.

    The machines tab in the new environment wizard.

    If you’re planning to run coded UI tests, configure the environment on the Advanced page of the wizard. This sets the test agent to run as a user. You have to supply a user name under which the agent will run.

    We recommend that you use a different user account than the lab service account used by the test controller.

    The advanced tab in the new environment wizard.
  2. Set the test plan to use your environment for automated tests.
    Automation on test plan properties
  3. If you want to collect more than the basic diagnostic data from the test machines, create a test settings file.
    New test settings

    In the test settings wizard, choose the data you want to collect for each machine.

    Select diagnostics for each machine role

Start automated tests the same way you do manual tests.

In Microsoft Test Manager, choose Testing Center, Test. Then select a test suite or an individual test and choose Run.

If you want to run a test in a different environment or with different test settings, choose Run with Options.

If you want to run an automated test manually, choose Run with Options.

If you have multiple build configurations, the test assemblies to run the automated tests are searched for recursively from the root directory of the build drop folder. If it is important which assemblies are selected when you run your automated tests, you should use Run with options to specify the build configuration.

  1. In Microsoft Test Manager, choose Testing Center, Test, Analyze Test Runs.
  2. Double-click a test run to open it and view the details. You can:
    • Update the title of the test run to reflect the outcome.
    • Choose Resolution to indicate a reason, if the test failed.
    • Add comments.
    • View the details of an individual test.
    • Create a bug.
Q: Can I generate the test method from a manual run of the test case?
Yes. See Verifying Code by Using UI Automation (various posts about Coded UI test automation can be found on my blog).

Q: I want my automated test to repeat with different data. Do I use the same test parameters that the manual version of the test case uses?
To make the automated test iterate over different data, write that into the code of the test method.

Test parameters are only used with the manual version of the test. They aren’t visible to the automated test code.
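
For example, a hedged sketch of a data-driven MSTest method that iterates over rows of a CSV file; the file name, column names, and the code under test are hypothetical.

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DiscountCalculationTests
{
    // MSTest injects the current data row through TestContext.
    public TestContext TestContext { get; set; }

    [TestMethod]
    [DeploymentItem("TestData.csv")]
    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
                "|DataDirectory|\\TestData.csv", "TestData#csv",
                DataAccessMethod.Sequential)]
    public void CalculateDiscount_MatchesExpectedValue()
    {
        // Each row of TestData.csv produces one iteration of this automated test.
        decimal input = decimal.Parse(TestContext.DataRow["Input"].ToString());
        decimal expected = decimal.Parse(TestContext.DataRow["Expected"].ToString());

        // Replace the next line with a call to the code under test.
        decimal actual = input;

        Assert.AreEqual(expected, actual);
    }
}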

Automated build-deploy-test workflows

You can use a build-deploy-test workflow on Team Foundation Server to deploy and test your application when you run a build. This lets you schedule and run the build, deployment, and testing of your application with one build process. Build-deploy-test workflows work with Lab Management to deploy your applications to a lab environment and run tests on them as part of the build process.

If your lab environment is an SCVMM environment, you can also use workflows to create and restore snapshots that automatically create a clean environment before you run tests, and to save the state of your environment when a test fails. This ensures that each test isn’t influenced by changes to the lab environment from previous test runs. In addition, it ensures that testers can accurately reproduce the state of a lab environment when they reproduce bugs.

Requirements

  • Visual Studio Ultimate, Visual Studio Premium, Visual Studio Test Professional

You can use a build-deploy-test workflow in the following scenarios:

Tip: Build, or Build and Test − If you are building your application in a drop folder without deploying it to a lab environment, then you can use the default build process template. For more information, see Use the Default Template for your build process. If you also want to test your application without deploying it, see Run tests in your build process.
  • Build, Deploy, and Test − Build your application, then deploy it and run tests on it in a lab environment. This workflow enables you to run a series of tests from a test plan, on a deployed application, as part of your build process. This scenario is common when running build verification tests.
  • Deploy and Test − This scenario is similar to the “build, deploy, and test” scenario, except a new build isn’t created during the workflow. Instead, the workflow uses an existing build from a drop folder.
  • Deploy Only – Deploy an existing build from a drop folder to a lab environment without running automated tests during the workflow. Once a build has passed your build verification tests, and is ready to be sent to a test team, you might want to send that build to the test team so they can run additional tests that aren’t part of your workflow. This scenario is common when running manual tests.
  • Build and Deploy – This scenario is similar to the “deploy only” scenario, except a new build is created during the workflow.

A build-deploy-test workflow is a Windows Workflow file that defines how a build definition will run a build, deploy an application, and run tests. A build-deploy-test workflow is created in a build definition by choosing a build process template called the lab default template (LabDefaultTemplate.11.xaml), and configuring the settings.

You can also create a customized build process template for your workflow depending on your requirements. You configure your build definition after you set up your build machine, test machines, and lab environments.

The deployment settings in a build-deploy-test workflow define how an application is deployed by specifying the deployment scripts to run on machines in your lab environment. You can specify a lab management role to run each deployment script on, or you can specify a machine in your lab environment.

Creating deployment scripts is a major part of setting up build-deploy-test workflows. Deployment scripts copy files from your build to your lab environment, and then run your installation packages (a minimal example script is sketched after the numbered steps below).

The following diagram describes how a build is deployed by a build-deploy-test workflow:

Dataflow for deployment scripts.

The following steps are displayed in the diagram above.

  1. The build-deploy-test workflow starts a build, and then gets the deployment scripts.
  2. The build definition copies the build files to the drop location.
  3. The workflow runs each deployment script in the working directory of the specific machine or machine role that the script is assigned to.
  4. Each deployment script retrieves build files from the drop location.
  5. Each deployment script copies or installs the specified build files onto machines in the lab environment.
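To make the shape of such a script concrete, here is a minimal sketch of a deployment script for a web server role. Every path, component name, and installer name below is hypothetical, and the script assumes the workflow passes the build drop location as its first argument:

rem Sketch of a deployment script for a web server role; all paths and names are hypothetical.
rem The workflow is assumed to pass the build drop location as the first argument.
set DropLocation=%~1
rem Copy this role's component from the drop folder onto the lab machine.
xcopy /y /e /i "%DropLocation%\Release\IceCreamService" "C:\Deploy\IceCreamService"
rem Run the installation package silently.
msiexec /i "C:\Deploy\IceCreamService\Setup.msi" /quiet /norestart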

A Look At : ‘Kaizen’ and its Philosophy in Kanban

kaizen%20not%20kaizan[1]

Kaizen, or rapid improvement processes, often is considered to be the “building block” of all lean production methods. Kaizen focuses on eliminating waste, improving productivity, and achieving sustained continual improvement in targeted activities and processes of an organization.

Lean production is founded on the idea of kaizen – or continual improvement. This philosophy implies that small, incremental changes routinely applied and sustained over a long period result in significant improvements. The kaizen strategy aims to involve workers from multiple functions and levels in the organization in working together to address a problem or improve a process.

The team uses analytical techniques, such as value stream mapping and “the 5 whys”, to identify opportunities quickly to eliminate waste in a targeted process or production area. The team works to implement chosen improvements rapidly (often within 72 hours of initiating the kaizen event), typically focusing on solutions that do not involve large capital outlays.

Periodic follow-up events aim to ensure that the improvements from the kaizen “blitz” are sustained over time. Kaizen can be used as an analytical method for implementing several other lean methods, including conversions to cellular manufacturing and just-in-time production systems.


Method and Implementation Approach

Rapid continual improvement processes typically require an organization to foster a culture where employees are empowered to identify and solve problems. Most organizations implementing kaizen-type improvement processes have established methods and ground rules that are well communicated in the organization and reinforced through training. The basic steps for implementing a kaizen “event” are outlined below, although organizations typically adapt and sequence these activities to work effectively in their unique circumstances.

Phase 1: Planning and Preparation. The first challenge is to identify an appropriate target area for a rapid improvement event. Such areas might include: areas with substantial work-in-progress; an administrative process or production area where significant bottlenecks or delays occur; areas where everything is a “mess” and/or quality or performance does not meet customer expectations; and/or areas that have significant market or financial impact (i.e., the most “value added” activities).

Once a suitable production process, administrative process, or area in a factory is selected, a more specific “waste elimination” problem within that area is chosen for the focus of the kaizen event ( i.e., the specific problem that needs improvement, such as lead time reduction, quality improvement, or production yield improvement). Once the problem area is chosen, managers typically assemble a cross-functional team of employees.

It is important for teams to involve workers from the targeted administrative or production process area, although individuals with “fresh perspectives” may sometimes supplement the team. Team members should all be familiar with the organization’s rapid improvement process or receive training on it prior to the “event”. Kaizen events are generally organized to last between one day and seven days, depending on the scale of the targeted process and problem. Team members are expected to shed most of their operational responsibilities during this period, so that they can focus on the kaizen event.

Phase 2: Implementation. The team first works to develop a clear understanding of the “current state” of the targeted process so that all team members have a similar understanding of the problem they are working to solve. Two techniques are commonly used to define the current state and identify manufacturing wastes:

  • Five Whys. Toyota developed the practice of asking “why” five times and answering it each time to uncover the root cause of a problem. An example, “Repeating ‘Why’ Five Times,” is shown below.
    1. Why did the machine stop?
      There was an overload, and the fuse blew.
    2. Why was there an overload?
      The bearing was not sufficiently lubricated.
    3. Why was it not lubricated sufficiently?
      The lubrication pump was not pumping sufficiently.
    4. Why was it not pumping sufficiently?
      The shaft of the pump was worn and rattling.
    5. Why was the shaft worn out?
      There was no strainer attached, and metal scrap got in.
  • Value Stream Mapping. This technique involves flowcharting the steps, activities, material flows, communications, and other process elements that are involved with a process or transformation (e.g., transformation of raw materials into a finished product, completion of an administrative process). Value stream mapping helps an organization identify the non-value-adding elements in a targeted process. This technique is similar to process mapping, which is frequently used to support pollution prevention planning in organizations. In some cases, value stream mapping can be used in phase 1 to identify areas for which to target kaizen events.

During the kaizen event, it is typically necessary to collect information on the targeted process, such as measurements of overall product quality; scrap rate and source of scrap; a routing of products; total product distance traveled; total square feet occupied by necessary equipment; number and frequency of changeovers; source of bottlenecks; amount of work-in-progress; and amount of staffing for specific tasks. Team members are assigned specific roles for research and analysis. As more information is gathered, team members add detail to value stream maps of the process and conduct time studies of relevant operations (e.g., takt time, lead-time).

Once data is gathered, it is analyzed and assessed to find areas for improvement. Team members identify and record all observed waste, by asking what the goal of the process is and whether each step or element adds value towards meeting this goal. Once waste, or non-value added activity, is identified and measured, team members then brainstorm improvement options. Ideas are often tested on the shopfloor or in process “mock-ups”. Ideas deemed most promising are selected and implemented. To fully realize the benefits of the kaizen event, team members should observe and record new cycle times, and calculate overall savings from eliminated waste, operator motion, part conveyance, square footage utilized, and throughput time.

Phase 3: Follow-up. A key part of a kaizen event is the follow-up activity that aims to ensure that improvements are sustained, and not just temporary. Following the kaizen event, team members routinely track key performance measures (i.e., metrics) to document the improvement gains. Metrics often include lead and cycle times, process defect rates, movement required, and square footage utilized, although the metrics vary when the targeted process is an administrative process. Follow-up events are sometimes scheduled at 30 and 90 days following the initial kaizen event to assess performance and identify follow-up modifications that may be necessary to sustain the improvements. As part of this follow-up, personnel involved in the targeted process are tapped for feedback and suggestions. As discussed under the 5S method, visual feedback on process performance is often logged on scoreboards that are visible to all employees.


Implications for Environmental Performance

Potential Benefits:

  • At its core, kaizen represents a process of continuous improvement that creates a sustained focus on eliminating all forms of waste from a targeted process. The resulting continual improvement culture and process is typically very similar to those sought under environmental management systems (EMS), ISO 14001, and pollution prevention programs. An advantage of kaizen is that it involves workers from multiple functions who may have a role in a given process, and strongly encourages them to participate in waste reduction activities. Workers close to a particular process often have suggestions and insights that can be tapped about ways to improve the process and reduce waste. Organizations have found, however, that it is often difficult to sustain employee involvement and commitment to continual improvement activities (e.g., P2, environmental management) that are not necessarily perceived to be directly related to core operations. In some cases, kaizen may provide a vehicle for engaging broad-based organizational participation in continual improvement activities that target, in part, physical wastes and environmental impacts.
  • Kaizen can be a powerful tool for uncovering hidden wastes or waste-generating activities and eliminating them.
  • Kaizen focuses on waste elimination activities that optimize existing processes and that can be accomplished quickly without significant capital investment. This creates a higher likelihood of quick, sustained results.

Potential Shortcomings:

  • Failure to involve environmental personnel in a quick kaizen event can potentially result in changes that do not satisfy applicable environmental regulatory requirements (e.g., waste handling requirements, permitting requirements). Care should be taken to consult with environmental staff regarding changes made to environmentally sensitive processes.
  • Failure to incorporate environmental considerations into kaizen can potentially result in solutions that do not consider inherent environmental risk associated with new processes. For example, an organization might select a change in process chemistry that addresses one improvement need (e.g., product quality, process cycle time) but that might be sub-optimal if the organization considered the material hazards or toxicity and the associated chemical and hazardous waste management obligations.
  • By not explicitly incorporating environmental considerations into kaizen, valuable pollution prevention and sustainability opportunities may be disregarded. For example, an evident opportunity to conserve water resources may not be explored if water use is not expensive and therefore not considered a wasteful expense that needs to be addressed. Similarly, including environmental considerations in the kaizen event goals can lead to solutions that rely less on hazardous materials or that create less hazardous wastes.

Useful Resources

Productivity Press Development Team. Kaizen for the Shopfloor (Portland, Oregon: Productivity Press, 2002).

Soltero, Conrad and Gregory Waldrip. “Using Kaizen to Reduce Waste and Prevent Pollution.” Environmental Quality Management (Spring 2002), 23-37.

A Look At : Application Management and Governance in SharePoint 2013

Summary: Learn how to govern applications for SharePoint 2013 by creating a customization policy and understanding the app model, branding, and life-cycle management.

8322.sharepoint_2D00_2010_5F00_4855E582[1]

How will you manage the applications that are developed for your environment? What customizations do you allow in your applications, and what are your processes for managing those applications?

 

For effective and manageable applications, your organization should consider the following:

  • Customization policy   SharePoint 2013 includes customizable features and capabilities that span multiple product areas, such as business intelligence, forms, workflow, and content management. Customization can introduce risks to the stability, maintenance, and security of the environment. To support customization while controlling its scope, you should develop a customization policy.
  • Life-cycle management   Follow best practices to manage applications and keep your environments in sync.
  • Branding   If you are designing an information architecture and a set of sites to use across an organization, consider including branding in your governance plan. A formal set of branding policies helps ensure that sites consistently use enterprise imagery, fonts, themes, and other design elements.
  • Solutions or apps for SharePoint?   Decide whether a solution or an app for SharePoint would be the best choice for specific customizations.

Get developer guidance about customizing and branding SharePoint 2013 on MSDN: Build sites for SharePoint 2013.

This article is part of a set of articles about governance; related articles describe other aspects of governance.

The What is governance? poster gives a summary of this content. Download the PDF version or Visio version, or Zoom into the model in full detail with Zoom.it from Microsoft.

Determine the types of customizations you want to allow and how to manage them. Your customization policy should include:

  • Service-level descriptions   What are the parameters for supporting and managing customizations in your environments? See Service-level agreements.
  • Guidelines for updating customizations   How do you manage changes to customizations, and how do you roll out those changes to your environments? Consider ways to manage source code, such as a source control system and standards for documenting the code.
  • Processes for analyzing   How do you understand whether a particular customization is working well in your environment, or how do you decide which ones to create, change, or retire?
  • Approved tools for customization   Consider development standards, such as coding best practices and the tools that you will use across your organization. For example, you should decide whether to allow the use of SharePoint Designer 2013 and Design Manager, and specify which site elements can be customized and by whom.
  • Process for piloting and testing customizations   How do you test and deploy customizations? How many people should be in a pilot testing group? What are your standards for testing and validating customizations?
  • Who is responsible for ongoing support   Who will be responsible for supporting customizations in your environments—individual teams or a central group?
  • Guidelines for packaging and deploying customizations   Do you have individual packages for each, or do you include several in a feature or solution? Which customizations should be apps for SharePoint instead of solutions? How do you ensure that customizations in one environment do not affect the rest of your SharePoint implementation?
  • Specific policies regarding each potential type of customization   What types of customizations do you allow?

    For more information about kinds of customizations and their potential risks, see the Customizations table later in this article. For more information about processes for managing customizations, see the white paper SharePoint Products and Technologies customization policy. Most of this content still applies to SharePoint 2013.

  • Policies around using the App Catalog and SharePoint Store   Which apps for SharePoint do you want to make available to your organization? Can users purchase apps directly? See Solutions or apps for SharePoint? later in this article for more information.

The highly customizable design of SharePoint products enables you to provide the look, behavior, or functionality that meets your business needs. Customizations can introduce risk to your environment, whether that risk is to the environment’s performance, availability, or supportability. Conversely, a “no customizations” policy severely restricts your organization’s ability to take advantage of the SharePoint platform.

Not all customizations are the same. You must decide carefully which kinds of customizations to allow in your environment, and ensure that those customizations support the performance, availability, and supportability you want for your environment. Your governance policy should balance a level of acceptable risk against the business needs for your organization.

What is considered a customization? All of the following are considered kinds of customizations in SharePoint products:

  • Configuration   Using the SharePoint user interface to configure SharePoint products.
  • Branding   Changing logos, styles, colors, master pages and page layouts, and so on to create a custom look for your SharePoint sites. See more about branding.
  • Custom code   Using developer tools to add or change functionality in SharePoint products or to interact with other applications. Risk can vary depending on kind of functionality and level of trust (full trust solutions should be rarely used; consider apps for SharePoint first).
    Tip: Sandboxed solutions are deprecated in this release, so they are not the best option for custom code in the long term.

Some customizations have very little risk or impact on your environment. Others have the potential for much higher risk and impact. The following table provides examples of different kinds of customizations, the risk level associated with that kind of customization, and potential issues that you might face if you allow that kind of customization.

Customizations

Risk level Types of customizations and examples Considerations or impact
Unsupported/High Unsupported customizations such as direct changes to the database schema or modifying files on the file system.
  • Will not be supported through Microsoft Customer Support.
  • Will be unable to upgrade.

Do not use.

Moderate to high Creating applications that interact with or redirect actions in key pipelines, such as events, claims, and so on.
  • Potential for service outage or performance issues.
  • Might require rework at upgrade.
Moderate to low Using a custom Web Part outside a sandbox environment, creating custom actions such as adding a menu item, or creating a custom site provisioning process.
  • Short or long-term performance issues or page errors.
  • Might require rework at upgrade.
Low Using solutions in a sandbox environment. Short-term performance issues; you can avoid some performance issues by using resource throttling and quotas.
Very low to no risk Using apps for SharePoint or using functionality within the product or configurations, such as associating a workflow with a list or using an instance of a built-in Web Part. Minor configuration or page errors that would have to be addressed. Apps can be uninstalled or updated.
Note:
For more information about customizations and upgrade, see Considerations for specific customizations.

 

 

Also, when you think through the customizations to allow in your environment, consider carefully whether a particular customization is necessary. If it recreates functionality that is already available in the product (such as creating a Web Part that does the same thing as the Content Editor Web Part or the Content by Query Web Part), then that might be unnecessary work.

Consider first whether the standard functionality can do what you want, or check the SharePoint Store to see if there is an app for SharePoint available that does what you need.

Follow these best practices to manage applications based on SharePoint 2013 throughout their life cycle:

  • Use separate development, preproduction, and production environments, and keep these environments as synchronized as possible so that you can accurately test your customizations.
  • Test all customizations before you release them for the first time, and test again after any updates, before you release them to your production environment.
  • Use source code control and solution and feature versioning to track changes to code.

Development, test, and production environments

Consistent branding with a corporate style guide makes for more cohesive-looking sites and easier development. Store approved themes in the theme gallery for consistency so that users will know when they visit the site that they are in the right place.

SharePoint 2013 includes a new feature to use for branding, Design Manager. By using Design Manager, you can create a visual design for your website with whatever web design tool or HTML editor you prefer and then upload that design into SharePoint. Design Manager is the central hub and interface where you manage all aspects of a custom design.

Creating the visual design of a site often fits into a larger process, in which multiple people or organizations are involved. For a roadmap of the tasks from a larger perspective, see Design and branding in SharePoint 2013.

SharePoint 2013 has a new development model based on apps for SharePoint. Apps for SharePoint are self-contained pieces of functionality that extend the capabilities of a SharePoint website. An app may include SharePoint features such as lists, workflows, and site pages, but it can also use a remote web application and remote data in SharePoint. An app has few or no dependencies on any other software on the device or platform where it is installed, other than what is built into the platform. Apps have no custom code that runs on the SharePoint servers.

The guidance for whether to use apps for SharePoint or SharePoint solutions is to:

  • Design apps for end users

    Apps for SharePoint:

    • Are easy for users (tenant administrators and site owners) to discover and install.
    • Use safe SharePoint extensions.
    • Provide the flexibility to develop future upgrades.
    • Can integrate with cloud-based resources.
    • Are available for both SharePoint Online and on-premises SharePoint sites.
  • Use farm solutions for administrators

    SharePoint solutions:

    • Can access the server-side object-model APIs that are needed to extend SharePoint management, configuration, and security
    • Can extend Central Administration, Windows PowerShell cmdlets, timer jobs, custom backups, and so on.
    • Are installed by administrators.
    • Can have farm, web application, or site-collection scope.

Go to MSDN to get more information about the new development model, Apps for SharePoint compared with SharePoint solutions, and Deciding between apps for SharePoint and SharePoint solutions.

Set a policy for using apps for SharePoint in your organization. Can users purchase and download apps? How do you make your organization’s apps available? How do you tell if they’re being used?

  • SharePoint Store   Determine whether users can purchase or download apps from the SharePoint Store.
  • App Catalog   Make specific apps for SharePoint available to your users by adding them to the App Catalog.
  • App requests   Configure app requests to control which apps are purchased and how many licenses are available.
  • Monitor apps   Monitor specific apps in SharePoint Server 2013 to check for errors and to track usage.

In the market

Microsoft Site Templates have been upgraded and are now available

 

One of the main goals of the application templates is to demonstrate the application-building power of SharePoint and to provide a potential starting point for larger, more robust applications. While these templates are fully functional and usable out of the box, I’ll be happy to reply to your comments and support you as needed.

 

Note: these templates were collected from several sources, and no source code is available for them.

All templates are compatible with SharePoint Server 2010 and SharePoint Foundation 2010.

Case Management

The Case Management application template helps case managers track the status and tasks required to complete their work. When a case is created, standard tasks and documents are created which are modified based on the work each case manager has completed.

Clinical Trial Initiation and Management

For those who work in Academic Medical Centers, the Clinical Trial Initiation and Management application template helps teams manage the process of tracking clinical trial protocols, objective setting, subject selection and budget activities.

Employee Activities Site
employee activities
The Employee Activities Site application template helps departments, such as HR and Marketing, manage the creation and attendance of events for employees.

Employee Training Scheduling and Materials

The Employee Training Scheduling and Materials application template helps Instructors add new courses and organize course materials. Employees use the site to schedule attendance at a course, track courses they’ve attended and to provide feedback.

Employee Training 01

Employee Training

Employee Training 03

Absence Request and Vacation Schedule Management

The Absence Request and Vacation Schedule Management application template helps provider departments manage requests for out of office days and provides dashboards showing which users are signed up for a set of responsibilities

Event Planning

The Event Planning application template helps teams organize events efficiently through the use of online registration, schedules, communication and feedback.

Discussion Database

The Discussion Database application template provides a location where team members can create and reply to discussion topics.

Team Work Site

The Team Work Site application template provides a place where clinical and business teams, can upload background documents, track scheduled calendar events and submit action items that result from team meetings.

Document Library and Review

The Document Library and Review application template helps people to manage the review cycle common to processes like publication, knowledge management and project plan development.

Knowledgebase

The Knowledgebase application template helps teams manage the information that is resident within their organization. The template enables team members to upload/create documents using Web-based tools and tag them with relevant identifying information.

Policies and Procedures Solution Accelerator

The Policies and Procedures Solution Accelerator assists healthcare organizations create, maintain, publish and easily access policy and procedure information. It also provides the ability to upload documents, maintain a version history and manage tasks.

Board of Directors

The Board of Directors application template provides a single location for an external group of members to store and locate common documents such as quarterly reviews, shareholder meeting notes and annual strategy documents.

Business Performance Reporting

The Business Performance Reporting application template helps health organization managers track the satisfaction of internal customers/patients through a combination of surveys and discussions.

Request for Proposal

The Request for Proposal application template helps manage the process of creating and releasing an initial RFP, collecting submissions of proposals and formally accepting the selected proposal from amongst those submitted.

Compliance Process Support Site

The Compliance Process Support Site application template helps both teams and executive sponsors to manage compliance implementation endeavors, such as HIPAA.

Expense Reimbursement and Approval

The Expense Reimbursement and Approval application template helps manage elements of the expense approval process, including creation and approval. Users can monitor the status of their reimbursement request through a filtered view listing.

Scorecards Solution Accelerator

The Scorecards solution accelerator acts as a template for configuring a management dashboard to track organizational metrics. It contains four example dashboards ranging from a primary care practice to a healthcare organization’s CEO dashboard.

Call Center
call center
The Call Center application template helps departments manage the process of handling customer service requests. The application template helps teams manage service requests from issue identification to cause analysis and resolution.

Help Desk
help desk
The Help Desk application template helps departments manage the process of handling service requests. Team members use the application template to identify a service request, manage identification of the root cause and track solution status.

Physical Asset Tracking and Management

The Physical Asset Tracking and Management application template helps departments, such as Facilities, BioMedical, Surgery, etc. manage requests and the tracking of physical assets.

Inventory Tracking
inventory
The Inventory Tracking application template helps organizations track elements associated with inventory, including creation of inventory. Users are notified when a part reaches its reorder quantity, and the template helps manage customer and supplier information.

Cafeteria Menu Management

The Cafeteria Menu Management application template helps hospital Food & Nutrition staff easily communicate daily menu choices to hospital staff and visitors. It allows staff to develop/schedule menus and provide related nutritional information.

Budgeting and Tracking Multiple Projects

The Budgeting and Tracking Multiple Projects application template helps project teams track and budget multiple, interrelated sets of activities. It includes management tools such as assignment of new tasks, Gantt Charts and common status designators.

Change Request Management
change request management
The Change Request Management application template helps users track risks associated with a design change. Team members can submit a change request, notifying stakeholders of the risks involved with the change.

IT Team Workspace

The IT Team Workspace application template helps teams manage the development, deployment and support of software projects. It also includes help desk functionality, allowing team members to guide service requests from initiation to resolution.

Project Tracking Workspace

The Project Tracking Workspace application template helps small team projects manage project information in a single location. The application template provides a place where a team can list and view project issues and tasks.

 

 

 

How To : Set up Git on your dev machine (configure, create, clone, add)

Git-Logo[2]
Applies to: Visual Studio 2013

When you start using Visual Studio with Git, choose the way that works best for you and the kind of project you’re working on. For example, you can start an experimental solo effort in a new or an existing local repository and continue developing there as long as you want.

Or you can join a collaborative effort in a remote Git repository, hosted either in Team Foundation Server (TFS) or on another service.

Before you start

What do you want to do?

Start from a local repository


You can create a local repository on your dev machine—whether or not you have a network connection—and start developing right away: coding, committing, branching, and merging code. When you’re ready to collaborate with your team, you can publish one or more branches from your local repository into a team project.

Create a new solution under local Git version control


You’ve got an idea for a new app, so you want to experiment on your dev machine. In less than a minute you can use Visual Studio with Git to create a new code project under local version control. (And no Internet required!) Create a new code project (Keyboard: Ctrl + Shift + N). We suggest that you put your new project in c:\Users\YourName\Source\Repos\.

New Project Choose Git Source Control

Put an existing solution under local Git version control


You’ve already got an app in progress and you want to start working on it under local Git version control.
Tip Tip
Before you add the solution to Git version control, we recommend you first move the solution to the TFS Git default location: c:\Users\YourName\Source\Repos\
  1. If you have not already done so, open your solution (Keyboard: Ctrl + Shift + O), and then open Solution Explorer (Keyboard: Ctrl + Alt + L).
  2. Add your solution to source control.
    Adding a solution to version control
  3. On the Choose Source Control dialog box, choose Git.
  4. Now that your repository is created, you are ready to commit your files. Go to the Changes page (Keyboard: Ctrl + 0, G) and commit.
    Open changes page

    Committing the new solution
    (If you are prompted to configure your user name and email address, do that now. See Configure Git settings.)

    The commit succeeded
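If you prefer the Git command prompt, the same result can be achieved with a few commands. This is only a sketch, and the solution folder shown is hypothetical:

rem Move to the folder that contains the solution (hypothetical path), then create the repository
cd /d C:\Users\YourName\Source\Repos\MySolution
git init
rem Stage everything in the folder and record the first commit
git add .
git commit -m "Initial commit of existing solution"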

 

You can create an empty local repository and add files later. It’s possible to track your changes to the files whether or not they are part of a solution. Or, if you already have a local repository, just start working with it in Visual Studio. Open the Connect page (Keyboard: Press Ctrl + 0, C).

Team Explorer Connect page
To create an empty local repository, choose New. To open a local repository that already exists on your dev machine, choose Add.

Creating a new local Git repository
Specify the local path and then choose Create or Add.

Publish your local repository into TFS


When you are ready to share your code and collaborate with your teammates, publish your local repository into TFS.

  1. Make sure you have committed all your changes in the local repository. See Manage and commit your changes.
  2. If you haven’t already done so, create a new team project (choose Git version control) or create a new Git repository in an existing Git team project.
  3. From the Connect page (Keyboard: Ctrl + 0, C), connect to the empty Git repository and publish the local repository to it.
    Publishing a local repository into TFS
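Alternatively, you can publish from the Git command prompt by adding the team project’s repository as a remote and pushing to it. This is only a sketch; the clone URL shown is hypothetical, and the real one is displayed on the team project’s home page:

rem Point the local repository at the team project repository (URL is hypothetical) and push
git remote add origin http://tfsservername:8080/tfs/collectionname/_git/projectname
git push -u origin master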

 

Your friends have invited you to work with them on a new project. Or maybe you are setting up a new project or a new dev machine. You can use Visual Studio and Git to collaborate on TFS (on-premises or in the cloud), on CodePlex, or on a third-party service such as GitHub or Bitbucket.

What do you want to do?

 

If you haven’t already done so, go ahead and create or get access to a Git team project.

From Visual Studio: Go to the Team Explorer Connect page (Keyboard: Press Ctrl + 0, C) and then connect to the team project.

Connect to the Git team project
(If the team project you want to open is not listed, choose Select Team Projects and then connect to the team project.)

From the web: Open a team project from its home page in your web browser (Keyboard: Ctrl + 0, A).

Open a team project from web access
After you connect to the Git team project, if you have not already done so, you must clone it to your dev machine before you can work in it.

Prompt to clone the remote repository

Cloning a Git repository in a team project
Just specify the local path and choose Clone.

Clone a remote Git repository from a third-party service


Does your team have some code in GitHub or another service such as CodePlex or Bitbucket? To start working in Visual Studio, clone the code to your dev machine.
Cloning a remote third-party repository

Note Note
You can use Visual Studio’s Git capabilities with services other than TFS. However, if you use these repositories, you will not be able to use TFS features such as project planning and tracking and Team Foundation Build.
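For example, cloning from a third-party host at the Git command prompt looks like this; the repository URL and local folder are placeholders:

git clone https://github.com/your-organization/your-repo.git C:\Users\YourName\Source\Repos\your-repo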

 

To customize your Git settings, you must be connected to a local or remote Git repository. Open the Git Settings page.
Opening the Git Settings page

  • Apply global settings Apply global Git settings to control aspects of how Git functions for the current user on the dev machine. For example, you can specify how you identify yourself on the changes you commit.
  • Apply repository settings Apply settings to control how Git functions in each individual local repository on your dev machine. For example, you can fine tune how the system blocks clutter from entering your user experience and repository.
  • Apply more settings Visual Studio respects all Git settings but provides you with control over only a few of them. Use the Git command prompt to customize all Git settings.

 

Git global settings
User Name and Email Address: Git associates each commit you create with your name and email address. When you start using Visual Studio with Git on your dev machine, if you connect to a Git team project first, then Visual Studio fills in your name and email address for you.

Default Repository Location: Specify the default root directory where you want to create or clone new local Git repositories.

Author images: Use images to more easily see the author of each commit.

  • If your Git repo remote origin is in a TFS Git team project, team members can specify their images in their TFS profiles. How? See tips below.
  • If your Git repo remote origin is in a non-TFS Git service (such as CodePlex, GitHub, or Bitbucket), select Enable download of author images from 3rd party source, and then ask team members to set up Gravatar accounts for their email addresses.
Note Note
Enable download of author images from 3rd party source also works for TFS Git team projects in cases where the author has not supplied a profile image.

An example of how author images enhance the collaborative experience:

Git author image examples: branches and history

 

Adding Git repository settings files
If your repository does not have settings files, you should probably use Visual Studio to add some default files that apply the most typically useful settings. You’ll avoid distraction and potential clutter in your repository from non-source files such as locally-built binaries.

.gitignore file: See Use the Git ignore file to avoid file clutter in your work and in your repository.

.gitattributes file: To specify options such as how the system handles line-breaks, specify a .gitattributes file. See Customizing Git – Git Attributes

 

Commit your repository settings files: In most cases you should commit and push these files so that everyone else on your team uses the same repository settings on their dev machines.

Committing settings file changes
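From the Git command prompt, committing and sharing the settings files is a sketch along these lines (run from the repository root):

rem Stage, commit, and share the settings files so teammates pick them up
git add .gitignore .gitattributes
git commit -m "Add repository settings files"
git push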

Apply more Git settings


You can specify three kinds of Git settings, listed in order of supersedence:

  • Repository settings apply to the work done in the local repository.
  • Global settings apply to the work done by the current user on the dev machine.
  • System settings apply to all work done on the client dev machine. (Visual Studio respects these settings, but does not expose them.)

If you need to modify system settings, or if you prefer the command prompt, then modify your Git settings from there. See Work from the Git command prompt, Customizing Git – Git Configuration, and git-config command.
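For instance, the following commands set a few common values; the name, email address, and setting shown here are only examples:

rem Global settings apply to the work done by the current user on this dev machine
git config --global user.name "Your Name"
git config --global user.email you@example.com
rem Repository settings (run inside a repository) take precedence over global settings
git config core.autocrlf true
rem List every setting that currently applies
git config --list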

Q & A


Q: I’m really new to all this. How can I get more help?


A: Follow a step-by-step walkthrough to get started using Git to work locally on a new project and then to begin collaborating with a team on Visual Studio Online.
A: In most cases, it’s best to use a short, understandable folder path. For example: C:\Users\YourName\Source\Repos\FabrikamGit\SolutionName\.

Some tips on effective folder names:

  • Keep all folder, sub-folder, and file names short to simplify your work and avoid potential long-path issues that can occur with some types of code projects.
  • Avoid whitespace if you want to make command-line operations a little easier to perform.
A: If your Git repo remote origin is in a TFS Git team project, you can specify your images in your TFS profile from your web browser (Keyboard: Ctrl + 0, A). On the Home page, choose Web Access.
My Profile link on Account menu

A: Yes, any contributor to your team project can claim any user name and any email address they want when authoring a commit. However, TFS does authenticate who pushes the commit. To see who pushed a commit, open your team project in your web browser (Keyboard: Ctrl + 0, A). Open the commit you want to examine from the Commits section, and then expand the commit details.

Commit “Pushed by” field

A look at : ALM and Lab Environments

What is a lab environment?

A lab environment is a collection of computers that are managed as a single unit, and on which you deploy the system under test along with test software. Here is a typical configuration of machines in a lab environment:

JJ159341.75955053636946458F51D7F086CADE6D(en-us,PandP.10).png

 

Typical lab environment configuration

This one is set up for automated tests of an ice cream vending service. The software product itself consists of a web service that runs on Internet Information Services (IIS) and a database that runs on a separate machine. The tests drive a web browser on a client machine.

With a lab environment, you can run a build-deploy-test workflow in which you can automatically build your system, deploy its components to the appropriate machines in the environment, run the tests, and collect test data. (The fully automated version of this is described in Chapter 5, “Automating System Tests.”)

The workflow is controlled by a test controller with the help of test agents installed on each test machine. The test controller runs on a separate computer.

Now you might ask why you need lab environments, since you could deploy your system and tests to any machines you choose.

Well, you could, but lab environments make several things easier:

  • You can set up automated build-deploy-test workflows. The scripts in the workflow use the lab role names of the machines, such as “Web Client,” so that they are independent of the domain names of the computers.
  • The results of tests can be shown on charts that relate them to requirements.
  • Lab Manager automatically installs test agents on each machine, enabling test data to be collected. Lab Manager manages the test settings of the virtual environment, which define what data to collect.
  • You can view the consoles of the machines through a single viewer, switching easily from one machine to the other.
  • Lab environments manage the allocation of machines to tests for reasons that include preventing two team members from mistakenly assigning the same machine to different tests.

Lab environments come in two varieties. A standard lab environment (roughly equivalent to a physical environment in Visual Studio 2010) can be composed of any computers that you have available, such as physical computers or virtual machines running on third-party frameworks.

An SCVMM environment is made up entirely of virtual machines controlled by System Center Virtual Machine Manager (SCVMM). SCVMM environments provide you with several valuable facilities; they allow you to:

  • Create fresh test environments within minutes. You can store a complete environment in a library and deploy running copies of it. For example, you could store an environment of three machines containing a web client, a web server, and a database. Whenever you want to test a system in that configuration, you deploy and run a new instance of it.
  • Take snapshots of the states of the machines. For example whenever you start a test, you can revert to a snapshot that you took when everything was freshly installed. Also, when you find a bug, you can take a snapshot of the environment for later investigation.
  • Pause and resume all the virtual machines in the environment at the same time.

Standard environments are useful for tests that have to run on real hardware, such as some kinds of performance tests. You can also use them if you haven’t installed SCVMM or Hyper-V, as would be the case if, for example, you already use another virtualization framework. But as you can see, we think there are great benefits to using SCVMM environments.

Stored SCVMM environments

Because you can store them in a library, SCVMM environments help to make your tests repeatable; when you run them for the next build, or when a new release is planned after a six-month break, you can be sure that the tests are running under the same conditions.

JJ159341.94F2E0510A25C5D3BB02389020654B0D(en-us,PandP.10).png

 

A stored SCVMM environment

For example, on Fabrikam’s ice cream sales project, the team often wants to deploy and test a new build of the sales system. It has several components that have to be installed on different machines. Of course, the sales system software is a new build each time. But the platform software, such as operating system, database, and web browser don’t change.

So at the start of the project, the team creates an environment that has the platform software, but no installation of the ice cream system. In addition, each machine has a test agent. The Fabrikam team stores this environment in the library as a template.

Whenever a new build is to be tested, a team member selects the stored platform environment, and chooses Deploy. Lab Manager takes a few minutes to copy and start the environment. Then they only have to install the latest build of the system under test.

While an environment is running, its machines execute on one or more virtualization hosts that have been set up by the system administrator. The stored version from which new copies can be deployed is stored on an SCVMM library server.

Lab management with third-party virtualization frameworks

Some teams have already invested in other virtualization frameworks such as VMware or Citrix XenServer. If that is your situation, the case for switching to Hyper-V and SCVMM might be less clear. But even if you don’t install SCVMM or Hyper-V, you can still use Lab Manager by using standard environments.

With standard environments, you get many of the benefits of lab management, but without the ability to save and quickly set up fresh environments. Instead, you’d have to use your third-party machine manager to set up new machines.

When you assign a machine to a standard environment, Lab Manager will automatically install a test agent and couple it to your test controller. This makes the machine ready for an automatic build-deploy-test workflow and for test data collection. (In Visual Studio 2010, you have to install the test agent manually, but coupling it to the test controller is automatic.)

How to use lab environments

Prerequisites

To enable your team to use lab environments, you first have to set up:

  • Visual Studio Team Foundation Server, with the Lab Manager feature enabled.
  • A test controller, linked to your team project in Team Foundation Server.
  • (Preferable, but not mandatory) System Center Virtual Machine Manager (SCVMM) and Hyper-V.

You only need to set up these things once for the whole team, so we have put the details in the Appendix. If someone else has kindly set up SCVMM, Lab Manager, and a test controller, just continue on here.

Lab center

You manage environments by using Lab Center, which is part of Microsoft Test Manager (MTM). MTM is installed as part of Visual Studio Ultimate or Test Professional. You’ll find it on the Windows Start menu under Visual Studio. If it’s your first time using it, you’ll be asked for the URL of your team project collection. Switch to the Lab Center view (it’s the alternative to Test Center). On the Environments page, you’ll see a list of environments that are in use by your team. Some of them might be marked “in use” by individual team members:

JJ159341.C0DD8E47801E316B77E376F020E16388(en-us,PandP.10).png

 

Managing environments in Lab Center

(Use the Home button if you want to switch to another team project.)

More information is available from the MSDN website topic: Getting Started with Lab Management.

Connecting to a lab environment

If your team has been using lab environments for a while, then when you open Lab Center, you might already see some environments that are available to use. Pick an environment with a status of Ready, without an In Use flag, and that looks as if it has the characteristics you want, which ought to be indicated by its name. Select it and choose Connect.

The Environment View opens. From here you can log into any of the machines in the environment.

JJ159341.551E17E6D88750A24E53C143E0C6CB73(en-us,PandP.10).png

 

The environment view

Typically, a deployed environment will have a recent build of your system already installed. If you’re sure that it’s free for you to use, you could decide to run some tests on it. However, make sure you know your team’s conventions; for example, if the environment’s name contains the name of a team member, ask if it is ok to use.

Using a deployed (running) environment

Log in. Choose the Connect button to open a console view of the environment. From there you can log into any of its machines. More about the Connect button can be found on MSDN in the topic How to: Connect to a Virtual Environment.

Reserve the environment. You can mark it as In Use to discourage other team members from interfering with it. This doesn’t prevent access by others, but simply sets a flag in Lab Center.

Revert a virtual environment to a clean snapshot. In the environment viewer, look at the Snapshots tab. If the Snapshots tab isn’t available, then this is a standard environment composed of existing machines. You might need to make sure that the latest version of your system is installed.

In a virtual environment, the team member who created the environment should have made a snapshot immediately after installing the system under test. Select the snapshot and restore the environment to that state. If there isn’t a snapshot available, that’s (hopefully) because the previous user has already restored it to the clean state. Again, you might need to check the conventions of your team.

Explore and test your system. Now you can start testing your system, which is the topic of the next chapter.

Restore the snapshot when you’re done with a virtual environment, to restore it to the newly installed state. This makes it easier for other team members to use. This option isn’t available for standard environments, so you might want to clean up any test materials that you have created.

Clear the “in use” flag when you’re done. Typically, a team will keep a number of running environments that contain a recent build, and share them. Reusing the environment and restoring it to its initial snapshot is the quickest way of assigning an environment for a test run.

Deploying an environment

If there is no running environment that is suitable for what you want to do, you can look for one in the library. The library contains a selection of stored virtual environments that have previously been created by your colleagues. You can learn more from the topic: Using a Virtual Lab for Your Application Lifecycle, on MSDN.

JJ159341.DDEFBF57250131604C3C6FD0605EAC74(en-us,PandP.10).png

 

The environment library in MTM Lab Center

(If the library isn’t available, that might mean that your team has not set Lab Manager to use SCVMM. But you can still create standard environments, which are made up of computers not controlled by SCVMM. Skip to the section about them near the end of this chapter. Alternatively, you could set up SCVMM as we describe in the Appendix.)

Environments stored in the library are templates; you can’t connect to one because its virtual machines aren’t running. Instead, you must first deploy it. Deploying copies the virtual machines from the library to the virtual machine host, and then starts them.

In MTM, in Lab Center, choose Deploy. Choose an environment from the list. They should have names that help you decide which one you want.

After you have picked an environment, Lab Center takes a few minutes to copy the virtual machines and to make sure that the test agents (which help deploy software and collect data) are running.

Eventually the environment is listed under the Lab tab as Ready (or Running in Visual Studio 2010). Then you’re all set to use it. If it shows up as Not Ready, then try the Repair command. This reinstalls test agents and reconnects them to your test controller. In most cases that fixes it.

Install your system

Typically, stored environments contain installations of the base platform: operating systems, databases, and so on. They don’t usually include an installation of the system under test. Your next step is therefore to install the latest build of your system.

To help choose a good recent build, open the build status report in your web browser. The URL is similar to http://contoso-tfs:8080/tfs/web. Click on Builds. You might have to set the date and other filters. The quality and location of each build is summarized.

In Lab Center, under the Lab tab, select the running environment and choose Connect. Log into the environment’s machines.

Use the installer (typically an .msi file) that is generated by the build process. The location can be obtained from the build status reports. Pick an installer that was generated from the Debug build configuration. You need to put each component on the right machine. Each machine has a role name such as Client, Web Server, or Database, to help you make the right choice.

Later we’ll discuss how you can write scripts to automate the deployment of the system under test.

Review the name you gave to the environment to make sure it reflects the system and build you installed.

Take a snapshot of the environment

Create a snapshot of the environment. This will enable subsequent users to get the environment back to its nice clean state. Do this immediately after you have installed your system, and before you run any tests, other than perhaps a quick smoke test to make sure the installation is OK.

You can create a snapshot either from the Snapshots tab in Environment Viewer, or from the context menu of the environment in the Lab listing.

Use it

After you’ve taken a snapshot, you can start using it as we described earlier. When you’ve finished testing, you can revert to the snapshot.

Delete it (eventually)

Delete an environment when the build it uses is superseded.

Creating a new virtual environment

What if there are no environments in the stored library, or none have the mix of machines you need? Then you’ll have to create one. And if you’re feeling generous, you could add it to the library for other team members to use.

You can either store an environment directly in the library, or you can create it as a running environment and then store it in the library. Storing directly is preferable if you don’t need to configure the constituent virtual machines in any way.

To add a new environment directly to the library, open MTM; choose Lab Center, Library, Environments, and then the New command.


 

Creating a new environment in the library

Alternatively, to create a new running environment that you can store later, choose Lab Center, Lab, and then New. In the wizard, choose SCVMM Environment. (In Visual Studio 2010, the New command has a submenu, New Virtual Environment.)

In either method, you continue through the wizard to choose virtual machines from the library. If your team has been working for a while, there should be a good stock of virtual machines. Their names should indicate what software is installed on them.

Choose library machines that have type Template if they are available. Unlike a plain virtual machine, you can deploy more than one copy of a template. This is because when a template VM is deployed, it gets a new ID so that there are no naming conflicts on your network. Using templates to create a stored environment allows more than one copy of it to be deployed at a time.


 

Creating a new virtual environment

You have to name each machine uniquely within your new lab environment. Notice that the name of the computer in the environment is not automatically the same as its name in the domain or workgroup.

You also have to assign a role name to each machine, such as Desktop Client or Web Server. More than one machine can have the same role name. There is a predefined set to choose from, but you can also invent your own role names. These roles are used to help deploy the correct software to each machine. If you automate the deployment process, you will use these names; if you deploy manually, they will just help you remember which machine you intended for each software component.
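
If you script that deployment, the role names are the natural key for deciding which component goes to which machine. Below is a minimal, purely hypothetical sketch; the machine names, share paths, and msiexec command are placeholders for illustration and are not anything Lab Manager generates for you.

```python
# Hypothetical sketch: map lab role names to the component each machine gets.
# Machine names, paths, and commands are placeholders for illustration only.
ROLE_TO_INSTALLER = {
    "Web Server": r"\\buildshare\LatestDrop\WebSite.msi",
    "Database":   r"\\buildshare\LatestDrop\Database.msi",
    "Client":     r"\\buildshare\LatestDrop\Client.msi",
}

def plan_deployment(machines):
    """machines: iterable of (machine_name, role_name) pairs."""
    for machine, role in machines:
        installer = ROLE_TO_INSTALLER.get(role)
        if installer is None:
            print(f"{machine}: no component mapped to role '{role}', skipping")
            continue
        # In a real lab workflow you would run this command remotely on the machine.
        print(f"{machine}: msiexec /i {installer} /qn")

plan_deployment([
    ("lab-web01", "Web Server"),
    ("lab-db01", "Database"),
    ("lab-client01", "Client"),
])
```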

When you complete the wizard, there will be a few minutes’ wait while VMs are copied.

MTM should now show that your environment is in the library, or that it is already deployed as a running environment, depending on what method of creation you chose to begin with. If it’s in the library, you can deploy it as we described before.

After creating an environment, you typically deploy software components and then keep the environment in existence until you want to move to a new build. Different team members might use it, or you might keep it to yourself. You can mark an environment as “In Use” to discourage others from interfering with it while your work is in progress.

Stored and running machines

The lab manager library can store both individual virtual machines and complete environments. There are command buttons for creating new environments, storing them in the library, and for deploying environments from the library. You have to shut down an environment before you can store it.


 

Stored and deployed environments

Creating and importing virtual machines

You can store individual virtual machines from the test host to the library. Therefore, if your team starts off with a set of virtual machines in the library that include a basic set of platforms—for example, Windows 7 and Windows Server 2008—then you can deploy a machine in an environment, add extra bits, and then store it back in the library.


 

System Center Virtual Machine Manager (SCVMM)

But how do you create those first virtual machines? For this you need to access SCVMM, on which Lab Manager is based. It’s typically an administrator’s task, so you’ll find those details in the Appendix. Briefly:

  1. You can create a new machine in the SCVMM console and then install an operating system on it, either with your DVD or from your corporate PXE server.
  2. Every test machine needs a copy of the Team Foundation Server Test Agent, which you can get from the Team Foundation Server installation DVD.
  3. Use the SCVMM console to store the VM in the library as a template. This is preferable to storing it as a plain VM.
  4. In Lab Manager, use the Import command on the Library tab in order to make the SCVMM library items visible in the Lab Center library.


 

How environments are managed

Composed environments

A composed environment is made up of virtual machines that are already running. When you compose an environment from running machines, they are assigned to your environment; when you delete the environment, they are returned to the available pool. You can create a composed environment very quickly because there is no copying of virtual machines.

We recommend composed environments for quick exploratory tests of a recent build. The team should periodically place new machines in the pool on which a recent build is installed. Team members should remember to delete composed environments when they are no longer using them.


 

Composed environments

In Visual Studio 2012, you make a composed environment the same way you create a virtual environment: by choosing New and then SCVMM Environment. In the wizard, you’ll see that the list of available machines includes both VM templates and running pool machines. If you want, you can mix pool machines and freshly created VMs in the same environment. For example, you might use new VMs for your system under test, and a pool machine for a database of test data, or a fake of an external system. Because the external system doesn’t change, there is no need to keep creating new versions of it.

In Visual Studio 2010, use the New Composed Environment command and choose machines from the list.

Standard environments

Standard environments are made up of existing computers. They can be either physical or virtual machines, or a mixture. They must be domain-joined.

You can create standard environments even if your team hasn’t yet set up SCVMM. For example, if you are already using VMware to run virtual machines and don’t want to switch to Hyper-V and SCVMM, you can use Lab Manager to set up standard environments. You can’t stop, start, or take snapshots of standard environments, but Lab Manager will install test agents on them and you can use them to run a build-deploy-test workflow.

You can also use standard environments when it is important to use a real machine—for example, in performance tests.

To create a standard environment, click New and then choose Standard Environment.

(In Visual Studio 2010, choose New Physical Environment. You must manually install test and lab agents on the computers. These agents can be installed from the Team Foundation Server DVD.)

For an example, see Lab Management walkthrough using Visual Studio 11 Developer Preview Virtual Machine on the Visual Studio Lab Management team blog.

Summary

There’s a lot of pain and overhead in configuring physical boxes to build test environments. The task is made much easier by Visual Studio Lab Manager, particularly if you use virtual environments.

With Lab Manager you can:

  • Manage the allocation of lab machines, grouping them into lab environments.
  • Configure machines for the collection of test data.
  • Rapidly create fresh virtual environments already set up with a base platform of operating system, database, and so on.

Differences between Visual Studio 2010 and Visual Studio 2012

  • System Center Virtual Machine Manager 2012. Lab Management in Visual Studio 2012 works with SCVMM 2012 in addition to SCVMM 2008.
  • Standard environments. Lab Manager in Visual Studio 2012 is easier to use with third-party virtualization frameworks as well as physical computers. It will install test agents if necessary.
  • Test agents. In Visual Studio 2010, you must install test and lab agents on the machines that you want to use in the lab. In Visual Studio 2012, there is only one type of agent, and it is installed automatically by Lab Manager on each of the machines in a lab environment. You can still install the test agent yourself to save time when lab environments are created.
  • Compatibility. Most combinations of 2010 and 2012 RC products work together. For example, you can create environments on Visual Studio Team Foundation Server 2010 using Microsoft Test Manager 2012 RC.


How To : Design the Physical Architecture to Support Collaborative Development and ALM of SharePoint Foundation 2010 Application

Introduction

This article explains the physical architecture that best supports collaborative development and ALM for a SharePoint Foundation 2010 application, describes the servers and tools needed, and shows how each plays a key role in the ALM of SharePoint Foundation 2010. The purpose of this article is to provide an overall understanding of how the various servers and farms in SharePoint Foundation are connected to each other.

Background

A basic understanding of the different server operating systems and SharePoint Foundation 2010 is required.

Solution

Application Life-cycle Management (ALM) is the co-ordination of development life-cycle activities—including requirements, modeling, development, build, and testing. Recently, ALM has expanded beyond the application and the software development life cycle to also include business solution governance, infrastructure management, operations, and support.

You can use ALM to help align your organization in the context of a software solution in business, development, and operations. With an application development platform that supports ALM, you can provide integration between the various tools used and activities performed within each of these capabilities.

There are four main types of staging environments, plus a standalone developer environment, that play a key role in the ALM of a SharePoint 2010 application:

  1. Development SharePoint Farm
  2. Team Foundation Server
  3. Integration/Testing Farm
  4. Production Farm
  plus each Developer’s Workstation

The figure below shows the physical architecture and depicts how each server is interconnected to support collaborative development and ALM for a SharePoint Foundation 2010 application:

[Figure: physical architecture connecting the development SharePoint farm, Team Foundation Server, developer workstations, the integration/testing farm, and the production farm]

Development SharePoint Farm

A SharePoint farm is fundamentally a collection of SharePoint role servers that provide for the base infrastructure required to house SharePoint sites. The farm level is the highest level of SharePoint architecture, providing a distinct operational boundary for a SharePoint environment. Each farm in an environment is a self-encompassing unit made up of one or more servers, such as web servers, service application servers, and SharePoint database servers.

A SharePoint development farm is needed because developers in an organization that makes heavy use of SharePoint often need environments to test new applications, web parts, solutions, and other SharePoint customizations. These developers often need a sandbox area where farm-level features and solutions can be tested.

I have considered a two-tier topology for the SharePoint Foundation 2010 farm; however, the right choice depends entirely on the needs of your application. If your application is a relatively small intranet application, you can choose a single-tier topology; if you are going to integrate a separate search server with Foundation, you can choose a three-tier topology with an application server as the middle tier (remember that SharePoint Foundation 2010 doesn’t include enterprise search). It may make sense to deploy one or more development farms so that developers can run their tests and develop software for SharePoint independently of the existing production environment.

A two-tier development farm of SharePoint Foundation 2010 basically includes two types of servers:

  1. Web server
  2. Content database server

In the figure above, there are three front-end web servers and one SharePoint content database server. However, you can choose a single front-end web server connected to the content database server, depending on your application’s needs and the architecture of the production environment. All web servers share the same content database. This is called a two-tier deployment farm, where the SharePoint server components and the content database are installed on separate servers. As mentioned before, you can choose a one-tier, two-tier, or three-tier deployment topology based on your application architecture and the topology of the production environment.

Each web server has SharePoint Foundation 2010 and the SharePoint extensions for TFS 2010 installed on it. The SharePoint extensions for TFS 2010 are needed to connect to Team Foundation Server for source control, build management, and project management.

Advantages of the Development SharePoint Farm:

  1. A single place where the SharePoint admin can integrate all the final artifacts from multiple developers.
  2. Developers can sync the latest SharePoint site to their standalone developer workstations.
  3. The admin can easily approve artifacts and migrate them to the integration server.
  4. It is a unit testing environment where developers can test dependent functionality or farm-level features.

Team Foundation Server

Team Foundation Server plays a key role in ALM by providing source control, build management, and work item tracking. You can install TFS on the same server that hosts the content database, but if you are going to use TFS build management, it is advisable to have a separate Team Foundation Server, because processing builds is CPU-intensive.

As shown in the figure above, a separate Team Foundation Server is connected to both the SharePoint farm and the standalone developer workstations, so that it can provide source control for customized content as well as for developers’ artifacts and resources.

Advantages of TFS
  1. Source control for SharePoint artifacts and customizations
  2. Build management for SharePoint
  3. Work item and bug tracking for SharePoint
  4. Admin console for all management activities
  5. Easy integration with SharePoint Foundation server and VS 2010
  6. Easy check-in and check-out
  7. Web-based console to manage ALM activities

Developer’s Workstation

As shown in the figure above, the developer environment includes two developer workstations. In practice, you can have as many workstations as your development team requires.

Each developer workstation should run Windows 7 or Windows Vista with a standalone SharePoint Foundation server and a local content database, so that one developer’s work doesn’t affect another’s and each developer can debug artifacts locally.

Each developer workstation should have the following installed:

  1. Windows 7 or Windows Vista 64-bit OS
  2. Standalone SharePoint Foundation Server 2010
  3. SharePoint Designer 2010
  4. Visual Studio 2010 (connected to TFS)

Developer workstations should be connected to Team Foundation Server 2010 so that when a developer completes an artifact, it can be checked in to TFS and other developers can get the latest code from TFS if needed. This way, parallel development can happen without affecting other developers’ work.

Integration/Testing Farm

Any production SharePoint environment should have a test environment in which new SharePoint web parts, solutions, service packs, patches, and add-ons can be tested. It is critical to deploy test farms, because many SharePoint add-ons could potentially disrupt or corrupt the formatting or structure of a production environment, and trying to test these new solutions on site collections or different web applications is not enough because the solutions often install directly on the SharePoint servers themselves. If there is an issue, the issue will be reflected in the entire farm.

The integration or testing farm should be similar to the existing environment, with the same add-ons and solutions installed, and should ideally include restores of production site collections to make it as similar as possible to the existing production environment. All changes and new products or solutions should subsequently be tested in this environment first.

The integration/testing servers host the final SharePoint sites and site collections as defined by the business requirements. QA tests all the business functionality here, and the customer can also perform their user acceptance test (UAT) before going live on the production server.

After the user acceptance test passes, all the sites and site collections are deployed to the production server.

Advantages of the integration/testing farm:

  1. A clean environment with the same physical architecture as production
  2. QA can test all dependent business functionality in one place
  3. The customer can participate in UAT
  4. Easy deployment/migration from the integration/testing farm to the production farm

Production Farm

The final stage is rolling your farm into a production environment. At this stage, you will have incorporated the necessary solution and infrastructure adjustments that were identified during the user acceptance test stage. These servers are generally on the customer’s premises, and the development and testing teams do not have control over them.

There are various third-party tools available on the market for SharePoint data protection, administration, migration, compliance, and integration.


Summary

In this way, you can design a physical architecture where the development SharePoint farm and the developer workstations are integrated with TFS 2010. TFS and the content database are connected to the testing farm, where all the artifacts and content are integrated for QA and UAT. Finally, after UAT, everything is deployed to the production farm.

You can use virtual machines (VMs) for all the servers and workstations for a more resilient infrastructure, because if a server crashes for some reason, you can quickly create a new VM with the required OS from images.

Note: In the figure above, the integration/testing farm and the production farm are each shown as a single server for clarity, but in reality they will be as large as the development farm, with multiple front-end web servers and a content database server. All servers run Windows Server 2008 R2 SP2 64-bit. See the hardware and software requirements for SharePoint Foundation 2010 for more information.

A Look At : Visual Studio 2013 Update 3 CTP2


New technology improvements in Visual Studio 2013 Update 3 CTP 2

 

Technology improvements

The following technology improvements were made in this release.

CodeLens

  • CodeLens jobs that are running on the Team Foundation Server job agent have been optimized for performance specifically while processing branching and merging changesets.

Debugger

  • If you have more than one monitor, Visual Studio will remember which monitor a Windows Store application was last run on.
  • You can debug x86 applications that are built by .NET Native.
  • When you analyze managed memory dump files, you can go to Definition and Find All References of the selected type.
  • You can debug dump files from .NET Native applications by using the Visual Studio debugger.

General

  • The Application Insights Tools for Visual Studio are now included in Visual Studio 2013 Update 3 CTP2. This initial integration as part of CTP2 includes some software updates and performance improvements.

IntelliTrace

  • You can skip straight to the details of performance events that are exported from Application Insights to IntelliTrace.

Profiler

  • The Performance and Diagnostics hub can open profiling sessions (.diagsession files) that were exported from the F12 tools in the latest developer preview of Internet Explorer 11.
  • Windows Presentation Foundation (WPF) and Win32 applications are supported by the new Memory Usage Tool in the Performance and Diagnostics Hub. For more information about how to use the tool to troubleshoot issues in native and managed memory, go to the following blog post:
    Diagnosing memory issues with the new Memory Usage Tool in Visual Studio

Release Management

  • You can use Windows PowerShell or the Windows PowerShell Desired State Configuration (DSC) feature to deploy and manage configuration data. Additionally, you can deploy to the following environments without having to set up Microsoft Deployment Agent:
    • Microsoft Azure environments
    • On-premises environments (standard environments)

Testing Tools

  • You can add custom fields and custom workflows for test plans and test suites.
  • You can use Manage Test Suites permission for granting access to test suites.
  • You can track changes to test plans and test suites by using work item history.

For more information about these features, see the following Visual Studio Developer Tools blog article:

Test Plan and Test Suite Customization with TFS2013 Update3

Visual Studio IDE

  • CodeLens authors and changes indicators are now available for Git repositories.
  • In Code Map, links are styled by using colors, and they display in the improved Legend.
  • Debugger Map automatically zooms to the call stack entry of interest and preserves user’s zoom preferences.
  • You can drag binaries from the Windows file explorer to a code map, and then start exploring binaries by using Code Map.

Known issues

Testing Tools

  • When you try to upgrade an existing TFS server that has Test management data to Visual Studio 2013 Team Foundation Server Update 3 CTP2 in JPN or CHS, the upgrade of Test Case Management service does not work.

Visual Studio IDE

  • In Visual Studio 2013 Ultimate Update 3 CTP2 localized (non en-us) drops, when trying to request a Code Map, or a Dependency Graph for the solution, the directed graph is not produced.

 

For more information on Visual Studio 2013 and other upgrades, visit http://support.microsoft.com/kb/2933779/en-us

List of all Visual Studio ALM Virtual Machines


COURTESY OF THE ALM RANGERS @

 


Given the growing list of virtual machines we have published to showcase various application lifecycle management scenarios, I created this blog post to be a permanent location you can bookmark any time you want to find the latest and greatest. An easy-to-remember URL for this page is http://aka.ms/ALMVMs.

Visual Studio 2013 ALM Virtual Machine and Hands-on-Labs / Demo Scripts
Last Update: January 9, 2014
This virtual machine is based on the RTM release of Visual Studio 2013 and includes hands-on-labs / demo scripts which showcase the new ALM capabilities introduced in this release. This VM was also upgraded on January 9, 2014 to include the content and hands-on-labs / demo scripts for capabilities which were originally introduced in Visual Studio 2010/2012.

Visual Studio 2012 Update 2 ALM Virtual Machine and Hands-on-Labs / Demo Scripts
Last Updated: April 17, 2013
This is the primary ALM virtual machine which demonstrates many of the scenarios introduced in Visual Studio 2010/2012 for application lifecycle management. This includes project management, source control, developer productivity and collaboration, testing, lab management, and IntelliTrace.

Team Foundation Server 2012 and Project Server 2013 Integration Virtual Machine and Hands-on-Labs / Demo Scripts
Last Updated: April 17, 2013
This VM highlights the integration scenarios which are possible between Team Foundation Server and Project Server which allow development teams to automatically synchronize the status of their projects with a centralized project management office (PMO).

Team Foundation Server 2012 and System Center 2012 Operations Manager Integration Virtual Machine and Hands-on-Lab / Demo Script
Last Updated: February 7, 2013
This VM highlights the integration scenarios which are possible between System Center 2012 Operations Manager and Team Foundation Server 2012 which allow operations teams to easily surface incidents from production in a rich, actionable way for developers to quickly diagnose these problems.

A Look At : The importance of people in a SharePoint project


As with all other sizeable new business software implementations, a successful SharePoint deployment is one that is well thought-out and carefully managed every step of the way.

However in one key respect a SharePoint deployment is different from most others in the way it should be carried out. Whereas the majority of ERP solutions are very rigid in terms of their functionality and in the nature of the business problems they solve, SharePoint is far more of a jack-of-all-trades type of system. It’s a solution that typically spreads its tentacles across several areas within an organisation, and which has several people putting in their two cents worth about what functions SharePoint should be geared to perform.

So what is the best approach? And what makes for a good SharePoint project manager?

From my experience with SharePoint implementations, I would say first and foremost that a SharePoint deployment should be approached from a business perspective, rather than from a strictly technology standpoint. A SharePoint project delivered within the allotted time and budget can still fail if it’s executed without the broader business objectives in mind. If the project manager understands, and can effectively demonstrate, how SharePoint can solve the organisation’s real-world business problems and increase business value, SharePoint will be a welcome addition to the organisation’s software arsenal.

Also crucial is an understanding of people. An effective SharePoint project manager understands the concerns, limitations and capabilities of those who will be using the solution once it’s implemented. No matter how technically well-executed your SharePoint implementation is, it will amount to little if hardly anyone’s using the system. The objective here is to maximise user adoption and engagement, and this can be achieved by maximising user involvement in the deployment process.

 

Rather than only talk to managers about SharePoint and what they want from the system, also talk to those below them who will be using the product on a day-to-day basis. This means not only collaborating with, for example, the marketing director but also with the various marketing executives and co-ordinators.

 

It means not only talking with the human resources manager but also with the HR assistant, and so on. By engaging with a wide range of (what will be) SharePoint end-users and getting them involved in the system design process, the rate of sustained user adoption will be a lot higher than it would have been otherwise.

 

An example of user engagement in action concerns a SharePoint implementation I oversaw for an insurance company. The business wanted to improve the tracking of its documentation using a SharePoint-based records management system. Essentially the system was deployed to enhance the management and flow of health insurance and other key documentation within the organisation to ensure that the company meets its compliance obligations.

 

The project was a great success, largely because we ensured that there was a high level of end-user input right from the start. We got all the relevant managers and staff involved from the outset, we began training people on SharePoint early on and we made sure the change management part of the process was well-covered.

 

Also, and very importantly, the business value of the project was sharply defined and clearly explained from the get-go. As everyone set about making the transition to a SharePoint-driven system, they knew why it was important to the company and why it was going to be good for them too.

By contrast a follow-up SharePoint project for the company some months later was not as successful. Why? Because with that project, in which the company abandoned its existing intranet and developed a new one, the business benefits were poorly defined and were not effectively communicated to stakeholders. That particular implementation was driven by the company’s IT department which approached the project from a technical, rather than a business, perspective. User buy-in was not sought and was not achieved.

 

When the SharePoint solution went live hardly anyone used it because they didn’t see why they should. No-one had educated them on that. That’s the danger when you don’t engage all your prospective system end-users throughout every phase of a SharePoint implementation project.

As can be seen, while it is of course critical that the technical necessities of a SharePoint deployment be met, that’s only part of the picture. Without people using the system, or with people using the system to less than its maximum potential, the return on your SharePoint investment will never materialize.

Comprehensive engagement with all stakeholders, that’s where the other part of the picture comes in. That’s where a return on investment, an investment of time and effort, will most assuredly be achieved.

Microsoft releases .Net 4.5.2 Framework and Developer Pack

You can download the releases now,


We incorporated feedback we received for the .NET Framework 4.5.1 from different feedback sources to provide a faster release cadence. In this blog post we will talk about some of the new features we are delivering in the .NET Framework 4.5.2.

ASP.NET improvements

  • New HostingEnvironment.QueueBackgroundWorkItem method that lets you schedule small background work items. ASP.NET tracks these items and prevents IIS from abruptly terminating the worker process until all background work items have completed. This enables ASP.NET applications to reliably schedule async work items (see the conceptual sketch after this list).
  • New HttpResponse.AddOnSendingHeaders and HttpResponseBase.AddOnSendingHeaders methods are more reliable and efficient than HttpApplication.PreSendRequestContent and HttpApplication.PreSendRequestHeaders. These APIs let you inspect and modify response headers and status codes as the HTTP response is being flushed to the client application. These reliability improvements minimize deadlocks and crashes between IIS and ASP.NET.
  • New HttpResponse.HeadersWritten and HttpResponseBase.HeadersWritten properties that return Boolean values to indicate whether the response headers have been written. You can use these properties to make sure that calls to APIs such as HttpResponse.StatusCode succeed. This enables shared hosting scenarios for ASP.NET applications.
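
To make the idea behind tracked background work items concrete, here is a conceptual sketch only (plain Python, not the ASP.NET API): the host keeps a reference to every queued item and refuses to shut down until all of them have finished, which is essentially the guarantee QueueBackgroundWorkItem gives you inside IIS.

```python
# Conceptual analogue of tracked background work items: shutdown waits for
# every queued item to finish. Illustrative only, not the ASP.NET API.
import threading

class BackgroundWorkTracker:
    def __init__(self):
        self._pending = []

    def queue(self, func, *args):
        worker = threading.Thread(target=func, args=args)
        self._pending.append(worker)
        worker.start()

    def shutdown(self):
        # Block until all tracked work items complete before tearing down.
        for worker in self._pending:
            worker.join()

tracker = BackgroundWorkTracker()
tracker.queue(lambda n: print(f"sent {n} notification emails"), 42)
tracker.shutdown()  # the "worker process" only exits after the item completes
```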

High DPI improvements are an opt-in feature that enables resizing according to the system DPI settings for several glyphs or icons in the following Windows Forms controls: DataGridView, ComboBox, ToolStripComboBox, ToolStripMenuItem, and Cursor. Here are examples of the before and after views once this change is opted into.

With .NET 4.5.1 controls at a high DPI setting, the red error glyph barely shows up and will eventually disappear at high scaling, and the ToolStripMenu drop-down arrow is barely visible and eventually becomes unusable. With .NET 4.5.2 controls at the same setting, both the red error glyph and the ToolStripMenu drop-down arrow scale correctly.

Distributed transactions enhancement enables promotion of local transactions to Microsoft Distributed Transaction Coordinator (MSDTC) transactions without the use of another application domain or unmanaged code. This has a significant positive impact on the performance of distributed transactions.

More robust profiling with new profiling APIs that require dependent assemblies that are injected by the profiler to be loadable immediately, instead of being loaded after the app is fully initialized. This change does not affect users of the existing ICorProfiler APIs. Before this feature, diagnostics tools that do IL instrumentation via profiling API could cause unhandled exceptions to be thrown, unexpectedly terminating the process.

Improved activity tracing support in runtime and framework – The .NET Framework 4.5.2 enables out-of-process, Event Tracing for Windows (ETW)-based activity tracing for a larger surface area. This enables Application Performance Management vendors to provide lightweight tools that accurately track the costs of individual requests and activities that cross threads. These events are raised only when ETW controllers enable them.

For more information on usage of these features please refer to “What’s New in the .NET Framework 4.5.2”. Besides these features, there are many reliability and performance improvements across different areas of the .NET Framework.

Here are additional installers – pick package(s) most suitable for your needs based on your deployment scenario:

  • .NET Framework 4.5.2 Web Installer – A Bootstrapper that pulls in components based on the target OS/platform specs on which the .NET Framework is being deployed. Internet access is required.
  • .NET Framework 4.5.2 Offline Installer – The Full Package for offline deployments. Internet access is not required.
  • .NET Framework 4.5.2 Language Packs – Language specific support. You need to install the .NET Framework (language neutral) package before installing one or more language packs.
  • .NET Framework 4.5.2 Developer Pack – This will install .NET Framework Multi-targeting pack for building apps targeting .NET Framework 4.5.2 and also .NET Framework 4.5.2 runtime. Useful for build machines that need both the runtime and the multi-targeting pack

 

Great Agile Development Tool – Agile Planner

Great Agile Development project – http://agileplanner.codeplex.com/

Project Description

This project is to develop an iteration planning tool for agile project management.

What’s new?

Release 1.0.0 runs in Visual Studio integrated mode. See the “How to use Agile Planner” section below for details.

What is Agile Planner?

This tool is for agile project teams who currently use sticky notes on the wall. With this tool, stories, the backlog, and iterations are managed in a graphical designer, saved as files within Visual Studio projects, and can be exported to images, reports, and so on.

The main features are:

  • Stories can be dragged and dropped between the backlog and iterations
  • An iteration’s capacity is calculated automatically based on the stories within it
  • Stories are rendered based on a customizable status or priority color scheme
  • Diagrams can be exported to PNG images for printing, documentation, and sharing

Here are examples.

Stories rendered based on status

Stories rendered base on priority

 

How to use Agile Planner

1. Install
To install Agile Planner,

  • Download the runtime binary zip file from the latest releases
  • Extract all files from the runtime binary zip file
  • Run the Windows installer AgilePlanner.msi (requires an elevated command prompt on Vista and administrator rights on XP/2003).

2. Get Started
To start using Agile Planner in a Visual Studio 2008 project:

  • Start Visual Studio 2008, create new project or load existing project
  • Right click the project name and select menu “Add | New Item …”


  • Select AgilePlanner
  • Dismiss the security warning if it shows up

You will be presented with a design environment like the one below.


  1. Graphical designer for iterations and stories
  2. Toolbox for iterations and stories
  3. Treeview Explorer for iterations and stories
  4. Property window for iterations and stories

3. Plan your project’s iterations

  • Create iterations by dragging the iteration tool from the toolbox to the graphical designer
  • Create stories by dragging the story tool from the toolbox to the backlog and iterations
  • Edit story properties such as name, capacity, priority, and status in the property window


Note: the capacity of each iteration is updated automatically after stories are dragged between iterations and after a story’s capacity property is changed, so that you can balance the workload between iterations.
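
As a rough illustration of the idea (not the tool’s actual implementation), an iteration’s capacity can simply be the sum of the capacities of the stories currently assigned to it, recomputed whenever a story moves; the class and field names below are hypothetical.

```python
# Hypothetical sketch of deriving an iteration's capacity from its stories.
from dataclasses import dataclass, field

@dataclass
class Story:
    name: str
    capacity: int              # e.g. story points or ideal hours
    status: str = "New"
    priority: str = "Medium"

@dataclass
class Iteration:
    name: str
    stories: list = field(default_factory=list)

    @property
    def capacity(self) -> int:
        # Recomputed on demand, so it stays current as stories are dragged around.
        return sum(story.capacity for story in self.stories)

backlog = Iteration("Backlog", [Story("Login page", 5), Story("Reports", 8)])
iteration1 = Iteration("Iteration 1", [Story("Search", 3)])

# Simulate dragging a story from the backlog into Iteration 1.
iteration1.stories.append(backlog.stories.pop(0))
print(iteration1.name, iteration1.capacity)   # -> Iteration 1 8
```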

4. Render the diagram
The stories can be colored based on either their status or priority. To switch between these two options, right-click the diagram and select the menu “Color on Status” or “Color on Priority”. The color schemes are customizable as properties of the project.


5. Export
The rendered diagram can be exported to a PNG file by right-clicking the diagram and selecting the menu “Export to image”.



How To : Use SharePoint Dashboards & MSRS Reports for your Agile Development Life Cycle

The Problem We Solve

Agile BI is not a term many would associate with MSRS Reports and SharePoint Dashboards. While many organizations first turn to the Microsoft BI stack because of its familiarity, stitching together Microsoft’s patchwork of SharePoint, SQL Server, SSAS, MSRS, and Office creates administrative headaches and requires considerable time spent integrating and writing custom code.

This Showcase outlines the ease of accomplishing three of the most fundamental BI tasks with LogiXML technology as compared to MSRS and SharePoint:

  • Building a dashboard with multiple data sources
  • Creating interactive reports that reduce the load on IT by providing users self-service
  • Integrating disparate data sources

Read below to learn how an agile BI methodology can make your life much easier when it comes to dashboards and reports.

Building a Dashboard with LogiXML vs. MSRS + SharePoint

Microsoft’s only solution for dashboards is to either write your own code from scratch, manipulate SharePoint to serve a purpose for which it wasn’t initially designed, or look to third party apps. Below are some of the limitations to Microsoft’s approach to dashboards:

  • Limited Pre-Built Elements: Microsoft components come with only limited libraries of pre-built elements. In addition to actual development work, you will need to come up with an idea of how everything will work together. This necessitates becoming familiar with best practices in dashboards and reporting.
  • Sophisticated Development Expertise Required: While Microsoft components provide basic capabilities, anything more sophisticated is development resource-intensive and requires you to take on design, execution, and delivery. Any complex report visualizations and logic, such as interactive filters, must be written in code by the developer.
  • Limited Charts and Visualizations: Microsoft has a smaller sub-set of charts and visualization tools. If you want access to the complete library of .NET-capable charts, you’ll still need to OEM another charting solution at additional expense.
  • Lack of Integrated Workflow: Microsoft does not include workflow features sets out of the box in their BI offering.

LogiXML technology is centered on Logi Studio: an elemental, agile BI design environment which lets you simply choose from hundreds of powerful and configurable pre-built elements. Logi’s pre-built elements equip developers with tools to speed development, as well as the processes and logic required to build and manage BI projects. Below is a screen shot of the Logi Studio while building new dashboards.



Logi developers can easily create static or user-customizable dashboards using the Dashboard element. A dashboard is a collection of panels containing Logi reports, which in turn contain table, charts, images, etc. At runtime, the user can customize the dashboard by rearranging these panels on the browser page, by showing or hiding them, and even by changing their contents using adjustable reporting criteria. The data displayed within the panels can be configured, as in any Logi report, to link to other reports, providing drill-down functionality.

 


The dashboard displayed above has tabs and user customization enabled. The Dashboard element provides customization features, such as drag-and-drop panel positioning, support for built-in parameters the user can access to adjust the panel’s data contents, and a panel selection list that determines which panels will be displayed. AJAX techniques are utilized for web server interactions, allowing selective updates of portions of the dashboard. Dashboard customizations can be saved on an individual-user basis to create a highly personalized view of the data.

The Dashboard Wizard

The ‘Create a Dashboard’ wizard assists developers in creating dashboards by populating the report definition with the necessary dashboard-related elements. You can easily point to any data source by selecting from a variety of DataLayer types, including SQL, StoredProcedures, Web Services, Files, and more. A simple to use drag and drop SQL Query builder is also integrated, to offer a guided approach to constructing queries when connecting to your database.


Using the Dashboard Element

The Dashboard element is used to create the top level structure for all of your interactive panels within the final output. Under your dashboards, you can optionally add any number of Dashboard Panels, Panel Parameters for dynamic filtering, and even automatic refresh features with AJAX-based refresh timers.


Changing Appearance Using Themes and Style Sheets

The appearance of a dashboard can be changed easily by assigning a theme to your report. In addition, or as an alternative, you can change dashboard appearance using style. The Dashboard element has its own Cascading Style Sheet (CSS) file containing predefined classes that affect the display colors, font sizes, button labels, and spacing seen when the dashboard is displayed. You can override these classes by adding classes with the same name to your own style sheet file.


Ad Hoc Reporting Creation with LogiXML: Analysis Grid

The Analysis Grid is a managed reporting feature giving end users virtual ad hoc capability. It is an easy to use tool that allows business users to analyze and manipulate data and outputs in multiple and powerful ways.



Create an Analysis Grid by using the “Create Analysis Grid” wizard, or by simply adding the AnalysisGrid element into your definition file. Like the dashboard, data for the Analysis Grid can be accessed from any of the data options, including SQL databases, web sources, or files. You also have the option to launch the interactive query builder wizard for easy, drag-drop, SQL query creation.

The Analysis Grid is composed of three main parts: the data grid itself, i.e. a table of data to be analyzed; various action buttons at the top, allowing the user to perform actions such as create new columns with custom calculations, sort columns, add charts, and perform aggregations; and the ability to export the grid to Excel, CSV, or PDF format.

The Analysis Grid makes it easy to perform what-if analyses through features like filtering. The Grid also makes data-presentation impactful through visualization features including data driven color formatting, inline gauges, and custom formula creation.

Ad Hoc Reporting Creation with Microsoft

While simple ad hoc capabilities, such as enabling the selection of parameters like date ranges, can be accomplished quickly and easily with Microsoft, more sophisticated ad hoc analysis is challenging due to the following shortcomings.

Platform Integration Problems

Microsoft’s BI strategy is not unified and is strongly tied to SQL Server. To obtain analysis capabilities, you must build cubes in Analysis Services, which is a separate product with its own security architecture. Next, you will need to build reports that talk to SQL Server, again using separate products.

Dashboards require a SharePoint portal which is, again, a separate product with separate requirements and licensing. If you don’t use this, you must completely code your dashboards from scratch. Unfortunately, Microsoft Reporting Services doesn’t play well with Analysis Services or SharePoint since these were built on different technologies.

SharePoint itself offers an out of the box portal and dashboard solution but unfortunately with a number of significant shortcomings. SharePoint was designed as a document management and collaboration tool as opposed to an interactive BI dashboard solution. Therefore, in order to have a dashboard solution optimized for BI, reporting, and interactivity you are faced with two options:

  • Build it yourself using .NET and a combination of third party components
  • Buy a separate third party product

Many IT professionals find these to be rather unappealing options, since they require evaluating a new product or components, and/or a lot of work to build and make sure it integrates with the rest of the Microsoft stack.

Additionally, while SQL Server and other products support different types of security architectures, Analysis Services only has support for using integrated Windows NT security models to access cubes and therefore creates integration challenges.

Moreover, for client/ad hoc tools, you need Report Writer, a desktop product, or Excel – another desktop application. In addition to requiring separate licenses, these products don’t even talk to one another in the same ways, as they were built by different companies and subsequently acquired by Microsoft.

Each product requires a separate and often disconnected development environment with different design and administration features. Therefore to manage Microsoft BI, you must have all of these development environments available and know how to use them all.

Integration of Various Data Sources: LogiXML vs. Microsoft

LogiXML is data neutral, allowing you to easily connect to all of your organization’s data spread across multiple applications and databases. You can connect with any data source or data model and even combine data sources such as current data accessed through a web service with past data in spreadsheets.

Integration of Various Data Sources with Microsoft

Working with Microsoft components for BI means you will be faced with the challenge of limited support for non-Microsoft based databases and outside data sources. The Microsoft BI stack is centered on SQL Server databases and therefore the data source is optimized to work with SQL Server. Unfortunately, if you need outside content it can be very difficult to integrate.

Finally, Microsoft BI tools are designed with the total Microsoft experience in mind and are therefore optimized for Internet Explorer. While other browsers and devices might be useable, the experience isn’t optimized and may potentially lack in features or visualize differently.

 

Free & Licensed Windows 8, Azure, Office 365, SharePoint On-Premise and Online Tools, Web Parts, Apps available.
For more detail visit https://sharepointsamurai.wordpress.com or contact me at tomas.floyd@outlook.com

Visio for Developers in Office 365

In this post, I’ll introduce some of the new features of interest to developers in Visio 2013. Among these features are:

  • New file format
  • Robust updates to themes
  • The Change Shape feature (which allows you to replace one shape with another while maintaining shape text)
  • New shape effects
  • Improvements to commenting
  • Coauthoring on SharePoint Server 2013
  • Customizable image clipping
  • Relative geometry
  • Support for Business Connectivity Services (BCS) data
  • Updates to Visio Services in Microsoft SharePoint Server 2013
  • Duplicate page feature

At the end of the post, I provide you with some additional resources for both Visio and general Office development.

 

New file format

Visio 2013 introduces a new file format, based on the Open Packaging Conventions (OPC) standard (ISO 29500, Part 2) and the XML elements from the previous Visio XML file format (.vdx). It is a zipped, XML-based file format similar to the file formats used in other applications.

Because the new file format is supported by both Visio 2013 and Visio Services in Microsoft SharePoint Server 2013, you can save a Visio drawing directly to a SharePoint Server library without having to first publish the file as a Visio Web Drawing (.vdw). Even so, Visio Services can still read and display Visio Web Drawing files.

The new file format includes the following file types (by extension):

  • .vsdx (Visio drawing)
  • .vsdm (Visio macro-enabled drawing)
  • .vssx (Visio stencil)
  • .vssm (Visio macro-enabled stencil)
  • .vstx (Visio template)
  • .vstm (Visio macro-enabled template)

By using existing support for reading and writing to the file format package (such as System.IO.Packaging) and for parsing XML (System.Xml.Linq), you can work with the new file formats programmatically.
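
For example, because a .vsdx file is an ordinary OPC (zip) package, you can inspect it with any zip and XML tooling. The following is a minimal Python sketch; the file name is hypothetical, and apart from [Content_Types].xml (which the OPC standard requires) the exact part names inside the package will vary by drawing.

```python
# Minimal sketch: peek inside a Visio 2013 .vsdx package, which is an OPC (zip)
# container of XML parts. "drawing.vsdx" is a hypothetical file name.
import zipfile
import xml.etree.ElementTree as ET

with zipfile.ZipFile("drawing.vsdx") as package:
    # List every part stored in the package.
    for name in package.namelist():
        print(name)

    # Parse one XML part; [Content_Types].xml is present in every OPC package.
    with package.open("[Content_Types].xml") as part:
        root = ET.parse(part).getroot()
        for child in root:
            print(child.tag, child.attrib)
```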

Visio 2013 retains the ability to read the old file formats (.vsd, .vss, .vst, .vdx, .vsx, .vtx, .vdw, .vwi). Visio 2013 does not save to the previous Visio XML file format (.vdx). Solutions or tools that consume the previous Visio XML file format (.vdx) files may need to be refactored in order to read the new file format and its schemas.

Visio Services retains the ability to display the Visio Web Drawing (.vdw) format in the browser. It now also renders the new Visio drawing (.vsdx) and Visio macro-enabled drawing (.vsdm) formats.

For more information about the new file format, see the article How to: Manipulate the Visio 2013 file format programmatically.

Themes

Themes have been redesigned in Visio 2013, making use of a greater variety of effects and styles including the integration of Shape Art effects. Users can now decide on an overarching style by applying a theme, personalize the diagram with theme variants, and highlight individual shapes with Quick Styles. ShapeSheet developers can take advantage of these features with new functions and cells in the ShapeSheet.

The user interface for applying theme variants is shown in the following figure.

 

 

You can also manipulate themes at the Page, Shape, and Selection object level. New APIs for working with themes include Page.SetTheme method, Page.SetThemeVariant method, Shape.SetQuickStyle method, and the Selection.SetQuickStyle method.

For more information about new VBA objects and members in Visio 2013, see the Visio Automation reference. For more information about the new ShapeSheet cells in Visio 2013, see the article What’s new for ShapeSheet developers in Visio 2013.

Change Shape

Visio 2013 includes a shape replacement API that enables you to swap one or more shapes for another shape contained in a stencil, while retaining some of the local values from the original shape, like the shape text, shape data, or shape formatting. Shape developers can update the ShapeSheet settings of their custom shapes to specify the Change Shape behavior for their shapes. Among the new APIs for Change Shape are the Shape.ReplaceShape and Selection.ReplaceShape methods and the ReplaceShapesEvent object.

The Change Shape feature lets you easily change a shape (in this case, the green rectangle)…

 

…to another shape, the green diamond.

For more information about the Change Shape feature, see Eric Schmidt’s blog post, Change shapes in Visio 2013.

For more information about new VBA objects and members in Visio 2013, see the Visio Automation reference. For more information about the new ShapeSheet cells in Visio 2013, see the article What’s new for ShapeSheet developers in Visio 2013.

Shape effects

New shape effects such as bevel, 3-D rotation, glow, reflection, and sketching have been added to Visio 2013. The ShapeSheet includes new cells for working with these effects. The following figure shows a shape to which effects have been applied.

You can also use Office VBA objects such as TextFrame2, GlowFormat, and ReflectionFormat and their members to apply shape effects.

For more information about the new ShapeSheet cells in Visio 2013, see the article What’s new for ShapeSheet developers in Visio 2013.

Commenting

Visio 2013 includes a new commenting framework. Comments can now be associated with a particular shape or page. Visio 2013 includes two new objects, Comments and Comment. New APIs for accessing comments programmatically include the Document.Comments, Page.Comments, Shape.Comments, and Page.ShapeComments properties.

The following images show what comments looked like in Visio 2010 and what they look like in Visio 2013.

 

 

Visio Services includes JavaScript APIs to read the comments from a page or shape in a diagram.

Note: You can no longer access comments in the ShapeSheet.

Coauthoring

Visio 2013 includes the ability to co-author diagrams stored on SharePoint or OneDrive. Developers have access to the Document.AfterDocumentMerge event which provides information about diagram changes due to coauthoring. Solution developers also have the ability to disable coauthoring to suit their custom needs by using the NoCoauth cell on the Document ShapeSheet.

Customizable image clipping

Visio 2013 supports defining a custom image clipping path to crop images to any shape. This extends the capabilities of Visio 2010, which supported clipping images only in a rectangular way. This functionality is available in the ShapeSheet by using the ClippingPath cell in the Foreign Image Info section.

Relative geometries

In previous versions of Visio, shape geometry was defined by formulas that depended on the height or width of the shape. For example, in Visio 2010 the vertices of many built-in Visio shapes were defined by multiplying the height or width of the shape by a constant. These shapes had Geometry sections that included MoveTo or LineTo rows (for example) with formulas like Width*1 and Height*0.

Visio 2013 now supports relative geometry in the ShapeSheet. Shape developers can now use relative geometries to specify geometries as simple values or formulas, which multiply by the height or width automatically. You can now express Shape vertices by using constants, for instance—you no longer need to express vertices as multiples of the shape width or height. This makes it easier for you to create shapes that have better performance and smaller file sizes. New rows include the RelMoveTo and RelLineTo rows where the X and Y cell values are automatically multiplied by the width or height of the shape (respectively).
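
As a rough conceptual illustration only (not the ShapeSheet engine itself), a relative vertex is just a fraction of the shape’s size, so the absolute position falls out of a simple multiplication:

```python
# Conceptual sketch: a relative vertex (values typically between 0 and 1) maps
# to absolute local coordinates by scaling by the shape's width and height.
def to_absolute(rel_x: float, rel_y: float, width: float, height: float):
    return rel_x * width, rel_y * height

# A vertex at (0.5, 1.0) stays at the horizontal center of the top edge,
# whatever size the shape is resized to.
print(to_absolute(0.5, 1.0, width=2.0, height=1.5))   # -> (1.0, 1.5)
```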

Support for Business Connectivity Services (BCS) data

Visio 2013 diagrams can now be connected to external lists on SharePoint Server 2013 servers. An external list is a content source external to SharePoint (for example, a SQL Server table) that has been connected to a SharePoint list by using Microsoft Business Connectivity Services (BCS). Visio Services supports the ability to refresh the Visio diagrams as the data updates.

For more information about what’s new in Visio Services, see the article Visio Services in SharePoint 2013. For more information about Business Connectivity Services (BCS), see Business Connectivity Services in SharePoint 2013.

Improvements in Visio Services

Visio Services in Microsoft SharePoint Server 2013 includes many improvements. As mentioned previously, Visio Services supports the new Visio file format (.vsdx and .vsdm). Visio Services has expanded data refresh and recalculation, including the ability to recalculate formulas across an entire diagram.

For more information about what’s new in Visio Services, see the article Visio Services in SharePoint 2013.

Duplicate page

You can now copy a page and all of its shapes within the same document in Visio 2013. Accordingly, the Page object has a new method, Duplicate, which duplicates the page and returns a new Page object.

Additional resources

Microsoft Application Insights – A comprehensive guide : Part 1 – Getting Started

Application Insights Overview

With Application Insights, we have the same excellent capabilities from the Application Performance Monitoring (APM) feature (formerly AviCode) available in SCOM 2012 R2 – with the exception that it’s all run from the cloud as part of Visual Studio Online. With this type of monitoring for your applications, you can ensure that they are available and performing optimally, while leveraging usage data to drive improvements and trends.

The Microsoft Management Agent used by Application Insights and the one used by SCOM 2012 R2 share the same source code; they differ only slightly in serialization, which determines the REST interface and location to which the agent sends its performance data.

Application Insights supports both .NET and Java web applications. On the Java side of the house, it supports monitoring Tomcat 6, Tomcat 7 or JBoss 6. For the purposes of this deep-dive series however, I’m going to concentrate on demonstrating its capabilities monitoring .NET web applications and I’ll save the Java monitoring for a later post.

You can use Application Insights to monitor web applications that are running in an on-premise/virtual machine scenario and of course, it’s also fully supported to monitor web applications running as a web role in Windows Azure Cloud Services. If you’re a Windows Phone app developer, you might be interested in the capability to view usage trends and other analytical data as users download and use your app on a daily basis.

The method of getting these different environments monitored varies depending on the scenario, but typically it's a straightforward enough process, as you'll see when you read through this blog series.

Regardless of the environment you’re monitoring with Application Insights, you can ensure you’re kept up to date with any performance issues (slow responses, uncaught exceptions etc.) by enabling email notification direct to your inbox. If you want to use Visual Studio to view the stack trace to help triage the problem, then this is an easy option too.

So, that’s a high-level overview of what Application Insights can do, now let’s get started!

Creating Your Account

The first thing you’ll need to do is to create a new Visual Studio Online account by clicking on the following link to sign up:

http://www.visualstudio.com/products/visual-studio-online-overview-vs

Click on the ‘Ready to Go?’ tile

Enter your Microsoft Account (formerly Windows Live ID) details, then click the Sign In button. If you don’t yet have a Microsoft Account, you can sign up for a new one here.

Input all your details into the ‘Create a Visual Studio Online Account’ window, then hit the Create Account button to move on.

Once you’ve created your Visual Studio Online account, you’ll need to specify a name for your first project. Call it what you want, then hit the Create Project button to finish.

Now, at the Overview screen of your Home page, you should see a Blue tile titled ‘Try Application Insights’ as shown below. If you can’t see the Blue AI tile, then click the Help button and choose the ‘Display Announcement’ option from the resulting menu.

Click on the Blue tile and you'll be taken to the *Insights view, where you'll be prompted for an invitation code to gain access.

Invitation Code? I hear you ask.
Don't worry if you don't have one. Even though AI is still only available as a Preview, the good folks over at Microsoft have made a public code available at the following link:
Type in your code, then hit the Get Started button to enter the new world of Application Insights!
That’s it for Part 1 of this series. In Part 2, we’ll start work on building a demo .NET web application environment for us to give our new Application Insights account a test drive in.

Troubleshooting issues with the “Eligible MSDN Subscriber” license type

As you adopt Visual Studio Online (VSO) and assign licenses to your users you may want to assign the “Eligible MSDN Subscriber” license type to team members. MSDN subscriptions are purchased outside of VSO and assigned to individual users. Before an MSDN subscriber can log in to VSO as an eligible MSDN subscriber, the subscription process must first be completed. The general flow looks like this:

Image

  • MSDN subscription is purchased by or assigned to a team member. Assignment could be via the Volume Licensing Service Center, MPN, etc.
  • Team member receives an email asking them to activate their subscription by signing in with a Microsoft account and registering with the subscription details provided.
  • Team member activates the subscription and associates a Microsoft account (MSA) with the subscription. The team member logs in to msdn.microsoft.com/en-us/subscriptions/manage using this MSA to manage the subscription going forward.
  • A VSO admin adds the MSA (the one the team member associated with their subscription in the previous step) to the Users Hub (https://Contoso.visualstudio.com/_user) and assigns them an “Eligible MSDN Subscriber” license.
  • If 24-48 hours have passed since the subscription was activated, the team member should be able to log in to VSO as an “Eligible MSDN Subscriber” and enjoy full access to all VSO features (a benefit of this license level).

 

Sometimes, though, the team member may run into problems after this process. Specifically, they may try to log in and receive an error:

 

Figure 1: (403 Forbidden \ …does not have license rights to access this account \ VSS012019: No MSDN subscription found for this user)

 

The team member could be seeing this error for a number of reasons: 

1. The specific MSDN subscription may not be eligible for Visual Studio Online access
Not all MSDN subscription types include VSO as a benefit. Please refer to the list of eligible MSDN subscriptions to verify.

 

2. The MSDN subscription was assigned to the user within the last 48 hours
There is a delay between MSDN subscription assignments and when VSO will see it \ allow the user to log in. If the subscription was assigned less than 24 hours ago please wait 24-48 hours and have the user try again.

 

3. The “Software Downloads” benefit has not been provided with the MSDN subscription
VSO usage is tied to the “Software Downloads” benefit of an MSDN subscription. In order to access VSO when assigned the “Eligible MSDN Subscriber” license type, the user’s MSDN subscription must include this “Software Downloads” benefit.  Here’s an example of enabling that benefit via the Volume Licensing Service Center:

 

 

4. The wrong email address has been added to the VSO Users Hub
This one is important. When an MSDN subscription is assigned to a user they receive an email (usually to their work email address) asking them to associate their new MSDN subscription with a Microsoft account\email address (MSA). This could be the same address where they received the mail or the user could pick a completely different MSA. The MSA they pick to associate with their new MSDN subscription is the one that needs to be added to the VSO Users Hub (https://Contoso.visualstudio.com/_user) and assigned the “Eligible MSDN Subscriber” license. It’s the one they use to manage their MSDN subscription via https://msdn.microsoft.com/subscriptions/manage.

Check with your user to see what MSA they’ve associated with & use to manage their MSDN subscription, and ensure that’s the one on the Users Hub.

 

If all this checks out, have the team member try this:

  1. Go to msdn.microsoft.com/en-us/subscriptions/manage and log in with your Microsoft account
  2. From the “My Subscriptions” section in the top-left, copy the name, email and Subscriber ID for the MSDN subscription you want to use with VSO. Paste this into Notepad or other text editor.
  3. Click “Add an existing subscription to my account” in the “My Subscriptions” section in the top-left.
  4. Fill out the form with the values you copied in step 2 and then click NEXT.
  5. Acknowledge and accept the license notice and then click ACCEPT.

This will not change your subscription in any way but will essentially reactivate it and with any luck this will allow you to log in to VSO.

 

I hope you are not having any issues with the "Eligible MSDN Subscriber" license type in VSO, but if you are, please run through this checklist to try and fix them. If you are still blocked, please open a support case and we can assist:

 

How To : Get data from Windows Azure Marketplace into your Office application

This post walks through a published app for Office, showing you along the way everything you need to get started building your own app for Office that uses a data service from the Windows Azure Marketplace.

Ever wondered how to get premium, curated data from Windows Azure Marketplace, into your Office applications, to create a rich and powerful experience for your users? If you have, you are in luck.

Introducing the first ever app for Office that builds this integration with the Windows Azure Marketplace – US Crime Stats. This app enables users to insert crime statistics, provided by DATA.GOV, right into an Excel spreadsheet, without ever having to leave the Office client.

One challenge faced by Excel users is finding the right set of data, and apps for Office provides a great opportunity to create rich, immersive experiences by connecting to premium data sources from the Windows Azure Marketplace.

What is the Windows Azure Marketplace?

The Windows Azure Marketplace (also called Windows Azure Marketplace DataMarket or just DataMarket) is a marketplace for datasets, data services and complete applications. Learn more about Windows Azure Marketplace.

This blog article is organized into two sections:

  1. The U.S. Crime Stats Experience
  2. Writing your own Office Application that gets data from the Windows Azure Marketplace

The US Crime Stats Experience

You can find the app on the Office Store. Once you add the US Crime Stats app to your collection, you can go to Excel 2013, and add the US Crime Stats app to your spreadsheet.

Figure 1. Open Excel 2013 spreadsheet

blog_CrimeStats_fig01

Once you choose US Crime Stats, the application is shown in the right pane. You can search for crime statistics based on City, State, and Year.

Figure 2. US Crime Stats app is shown in the right task pane

blog_CrimeStats_fig02

Once you enter the city, state, and year, click ‘Insert Crime Data’ and the data will be inserted into your spreadsheet.

Figure 3. Data is inserted into an Excel 2013 spreadsheet

blog_CrimeStats_fig03

What is going on under the hood?

In short, when the ‘Insert Crime Data’ button is chosen, the application takes the input (city, state, and year) and makes a request to the DataMarket services endpoint for DATA.GOV in the form of an OData Call. When the response is received, it is then parsed, and inserted into the spreadsheet using the JavaScript API for Office.
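
The shipped app issues this request with the JavaScript API for Office, but the request pattern itself is easy to see in a hedged C# sketch. The service URL and query options below are illustrative placeholders rather than the app's actual endpoint; the account-key Basic authentication shown is the standard pattern for DataMarket data services.

```csharp
// Hedged sketch of an OData query against a Windows Azure Marketplace (DataMarket) service.
// The endpoint and entity set are hypothetical; substitute the real service root from your
// Marketplace subscription. DataMarket accepts Basic authentication with the account key
// as the password (any user name).
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class DataMarketSample
{
    static async Task<string> QueryCrimeStatsAsync(string accountKey, string city, string state, int year)
    {
        string url = string.Format(
            "https://api.datamarket.azure.com/data.gov/Crimes/CityCrime" +
            "?$filter=City eq '{0}' and State eq '{1}' and Year eq {2}&$format=json",
            city, state, year);

        using (var client = new HttpClient())
        {
            string credentials = Convert.ToBase64String(Encoding.ASCII.GetBytes("user:" + accountKey));
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", credentials);

            // The OData response is then parsed and written into the worksheet;
            // in the real app that last step uses the JavaScript API for Office.
            return await client.GetStringAsync(url);
        }
    }
}
```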

Writing your own Office application that gets data from the Windows Azure Marketplace

Prerequisites for writing Office applications that get data from Windows Azure Marketplace

How to write Office applications using data from Windows Azure Marketplace

The MSDN article, Create a Marketplace application, covers everything necessary for creating a Marketplace application, but below are the steps in order.

  1. Register with the Windows Azure Marketplace:
    • You need to register your application first on the Windows Azure Marketplace Application Registration page. Instructions on how to register your application for the Windows Azure Marketplace are found in the MSDN topic, Register your Marketplace Application.
  2. Authentication:
  3. Receiving Data from the Windows Azure Marketplace DataMarket service

Application Analytics: What Every Developer Should Know

Imagine how much more efficient your developers would be if they didn't have to guess which features were being used and which could safely be deprecated.

What would the impact on user satisfaction be if exception data replete with usage context were delivered before your users had a chance to complain?

How would the quality of your software improve if your test plans were aligned with actual usage patterns and user preferences in production?

Application analytics is the branch of analytics purpose-built to make these scenarios a reality: to satisfy the "selfish interests" of application stakeholders, e.g. development, test, product owners, and operations.

 

Diagram of Application Analytics cycles

Figure 1: Application analytics enhances both development and operations by providing deep insight into application adoption and user behavior within established development and operations platforms.

Application analytics combines application usage data with app-centric analytics software and heuristics, integrated into development and operations.

The diversity of today's analytics solutions is (as it should be) customer driven. For example, web analytics' principal customer is marketing and sales, resulting in a focus on page views, clicks and conversions. Web analytics solutions share common:

  • Objectives: monetization of web properties,
  • Requirements: analysis of visitors, impressions, clicks and conversions, and
  • Restrictions: meeting privacy and performance obligations.

In agile parlance, application analytics encompass analytics solutions where the primary customer is one or more application development “personas” who share common objectives, requirements and restrictions.

The Agile Manifesto states that development's "highest priority is to satisfy the customer through early and continuous delivery of valuable software." In that context, development's success can only be accurately measured where users and their applications meet – at the "point of work" (or play). Application analytics provides empirical evidence of application usage and end-user behavior that, when properly integrated into a development process, provides:

  • Insight into user requirements,
  • Validation of development priorities, and
  • An objective measure of test plan accuracy and completeness.

Examples include:

  • The Microsoft customer experience improvement program (CEIP) was “created to give all Microsoft customers the ability to contribute to the design and development of Microsoft products.” CEIP collects information about how Microsoft programs are being used “in the wild”.

The Agile Manifesto also states that "working software is the primary measure of progress." Operations' mission is to get the most out of today's applications – future application iterations cannot address immediate stability, performance, user experience, or security concerns. Application analytics, when properly integrated into operations and support, provides:

  1. Application adoption and usage metrics within a specific operations framework,
  2. Production incident alerts from application exceptions, and
  3. Organizational adoption and productivity analysis connecting application investment to enterprise ROI.

Examples include:

  • PreEmptive Analytics Community Edition, which gives developers using Microsoft Visual Studio 2012 Professional the ability to create their own CEIP by allowing development and operations to identify and quickly respond to application exceptions that occur in production.

Given these objectives, the value of application analytics seems obvious, but the details can make it difficult to realize. Collecting, analyzing and acting on application runtime data poses unique challenges both in terms of the kinds of data that need to be gathered and the metrics that measure success.

Effective application analytics implementations must accommodate the diversity of today’s applications and the emergence of cloud, mobile and distributed computing platforms. The following application analytics requirements make plain why narrower analytics technologies should never be expected to fully satisfy development objectives.

The runtime data that streams from an application is typically far more complex and heterogeneous than what streams from a web page or portal.

Data types Runtime telemetry: the variety, semantics and location of application runtime data
Feature An application feature is not a click. A feature can span one or more methods, incorporate multiple components, run across runtime surfaces and even be implemented multiple times in different languages, e.g. Windows Presentation Foundation (WPF), Microsoft Silverlight and HTML5. Measuring usage and performance of an arbitrarily defined scope is required for monitoring across devices and platforms.
Application Data Many of today’s applications are data-driven where the actual behavior itself is encoded in the data. Knowing what templates, workflows and other “modern” content is being processed can be more valuable than knowing which workflow or rendering engine processed that data.
Session Session information can be defined differently within an app server, a mobile session, within a browser or distributed across all of the above serviced by a cloud-based service.
Event Unhandled exceptions, caught and thrown exceptions, unexpected performance or suspicious user behavior can all constitute a “production event.”
Application Applications are often comprised of multiple components – some on-premises and some service-based. These applications (and their components) are versioned at unpredictable cadences. Calculating the workflow across distributed applications and then reconciling this activity over time and across versions is an applications analytics requirement.
Stack While many applications are run inside a “sandbox”, e.g. mobile, Windows Runtime, Windows Azure – many other applications have full access to the underlying OS and computing metal. Tracking screen resolution, chip manufacturers and hardware availability is often essential to understanding user experience and application behavior.
Identity A users’ identity can be defined and tracked by device ID, IP address, user credentials, software license and more. Application analytics solutions must have the capacity to enforce privacy and security policies at both client and aggregate levels. Ensuring effective data governance is a precursor to effectively analyzing the resulting runtime data.
Given the complexity, diversity and distribution of today’s production platforms, it is no longer possible to simulate production. Application analytics can fill this gap only if there is comprehensive support for today’s computing platforms.

Categories Runtime architectures and technologies: existing and emerging languages and platforms
Architectures and surfaces Applications are much more than a simple presentation layer and a sequence of user actions. Instrumentation must span client-server, cloud services (public and private), web servlet and mobile platforms, architectures and surfaces.
Languages and runtimes Today’s applications will incorporate managed, native and scripted components including the Microsoft .NET Framework, C++, Java and JavaScript.
For application analytics to have impact, the right information has to be delivered to the right roles at the right time and in the right context. This includes integrating the instrumentation task within the development and build process and surfacing application analytics inside the development, testing, deployment, and management phases.

Flowchart showing five phases in sequence

Figure 2: Five functional phases of application analytics implementation.

DevOps phase IDE & application lifecycle management (ALM) integration: role and use case driven scenarios
1. Instrumentation Instrumentation is the logic inside an application that creates the runtime data to be analyzed. Instrumentation can be coded via an API (required for native and scripted apps) or injected post-compile into managed assemblies.
2. Build and deploy Apps can be built manually, as part of a continuous build process and automated to support cloud platforms. Support for the various manufacturing processes and payload formats is required to ensure efficient and scalable deployments.
3. Runtime data management Managing runtime data will require scale, governance, and security controls. Application runtime data management requirements shift dramatically across industry, use case and jurisdictional boundaries.
4. Runtime data publishing Different stakeholders require distinct presentations and analysis; developers, architects, product owners and line of business management bring different perspectives and priorities to the same underlying runtime data. Reports, dashboards, export and programmatic access are all typically required when use cases span sprint planning, customer support and business performance monitoring.
5. Integration Integration of application analytics into development platforms, e.g. Visual Studio and TFS, operations, e.g. operations manager, and customer relationship management, e.g. Microsoft Dynamics through reports and event scheduling delivers the “last mile” of application analytics’ productivity.

The cure cannot be worse than the disease. For application analytics, this means that the inclusion of application analytics into development and operations cannot result in productivity, performance, security or user experience risks greater than those that it is designed to mitigate. Given the many forms and roles that today's applications can take, this is no small task.

Risk management Restrictions: performance, stability, privacy, and complexity
Performance and stability Collecting, caching, and transmitting runtime data efficiently across devices without performance penalties (when working properly) while not impacting application stability or user experience if/when one or more aspects of the analytics solution should fail. This can be especially challenging when considering special dependencies such as battery life, data plans, network characteristics…
Security and privacy Consumer, business-to-business, and line-of-business applications each come with their own security and privacy obligations. These obligations are further fragmented by industry and by jurisdiction. Application analytics instrumentation, transport and content management must be extensible and able to enforce these requirements on an application-by-application basis.
Complexity Complexity introduces waste and risk – and ultimately resistance to adoption. Integration into existing platforms, processes and methodologies is a requisite to effective application analytics implementations.

Visual Studio 2012 includes PreEmptive Analytics for TFS Community Edition (PA for TFS CE), an application analytics solution that monitors exceptions and creates or updates work items inside Team Foundation Server (TFS) based upon user-defined thresholds.

PA for TFS CE is designed to track unhandled exceptions in applications running on the .NET Framework and Java runtimes. Support for the five phases of application analytics is as follows:

DevOps phase PreEmptive Analytics for TFS Community Edition
1. Instrumentation Instrumentation is accomplished with Dotfuscator Community Edition. Unhandled exception monitoring with optional opt-in and user-feedback is supported for the .NET Framework, Silverlight, Microsoft Windows Phone and XNA applications. An API for Java applications is also available as a free download that includes support for Android. Support for native code, JavaScript and Java is provided by PreEmptive Solutions.
2. Build and deploy Dotfuscator Community Edition is interactive. The command line interface and MSBuild support are available with Dotfuscator Professional from PreEmptive Solutions.
3. Runtime data management A server-side data collector is included with Visual Studio Team Foundation Server 2012. The collector endpoint is referenced via a URL embedded into the application being monitored as part of the instrumentation phase above. The collector can sit next to the TFS server, on an entirely different server, and even inside Microsoft Windows Azure.
4. Runtime data publishing An aggregator service is also included with TFS in Visual Studio 2012 that polls the collector endpoint. When user-defined thresholds are met, the aggregator will create (or update) a Production Incident work item inside TFS in Visual Studio 2012.
5. Integration In Visual Studio 2012, TFS work items created by PA for TFS CE are tracked, assigned, prioritized and reported against as any other first class work item type.

Screen shot of location on Tools menu

Figure 3: Finding PreEmptive Analytics inside Visual Studio 2012 off of the tools menu

Screen shot showing Dotfuscator CE

Figure 4: Instrumentation. Inside Dotfuscator CE, adding the setup attribute identifies the collector endpoint for runtime data. The endpoint can be on-premises next to a TFS server or hosted on Windows Azure remotely.

Screen shot of Visual Studio showing integration

Figure 5: Visual Studio 2012 integration. Production Incident work items are surfaced inside Visual Studio automatically when volume thresholds are met. In this “All Items” query, you can see the type of exception, how many exceptions of this type have been detected and on how many machines. You can see more detail on this work item below including a stack track as well as the work item’s assignment, prioritization and classification.

Screen shot of example summary chart

Figure 6: Reporting. One example of summary charts included with PA for TFS CE shows the status of all open incidents.

In addition to enhanced TFS integration and instrumentation options, the Professional Edition of PreEmptive Analytics includes feature, session and user analytics designed to measure trends, usage patterns and user preferences throughout the lifetime of a production application.

Use case Community Edition Professional
Track unhandled exceptions on .NET Framework and Java applications Yes Yes
Automatically create and update TFS work items in Visual Studio 2012 Yes Yes
Provide opt-in and user-feedback options at runtime Yes Yes
Support Visual Studio 2010 Yes
Track caught and thrown exceptions Yes
Support custom data and extensible rule and work item definitions Yes
Support JavaScript and native application monitoring Yes
Measure feature and session usage Yes
  • Development has unique requirements that are not being met by web, BI or other non-development centered analytics solutions.
  • Application analytics offers specific capabilities designed to meet development and operation’s needs.
  • Visual Studio 2012 offers integrated application analytics “out of the box” with opportunities to extend these capabilities through integration and partner options.

You can visit these sites to learn more:

  1. http://www.preemptive.com/pa
  2. http://www.microsoft.com/visualstudio/11/en-us/products/alm
  3. http://blogs.msdn.com/b/bharry/archive/2012/04/11/preemptive-analytics-in-visual-studio-and-tfs-11.aspx

Microsoft Patterns and Practices : An overview of the Security Development Lifecycle

Pre-SDL Requirements: Security Training

SDL Practice 1: Training Requirements

All members of a software development team must receive appropriate training to stay informed about security basics and recent trends in security and privacy. Individuals in technical roles (developers, testers, and program managers) who are directly involved with the development of software programs must attend at least one unique security training class each year.

Basic software security training should cover foundational concepts such as:

  • Secure design, including the following topics:
    • Attack surface reduction
    • Defense in depth
    • Principle of least privilege
    • Secure defaults
  • Threat modeling, including the following topics:
    • Overview of threat modeling
    • Design implications of a threat model
    • Coding constraints based on a threat model
  • Secure coding, including the following topics:
    • Buffer overruns (for applications using C and C++)
    • Integer arithmetic errors (for applications using C and C++)
    • Cross-site scripting (for managed code and Web applications)
    • SQL injection (for managed code and Web applications)
    • Weak cryptography
  • Security testing, including the following topics:
    • Differences between security testing and functional testing
    • Risk assessment
    • Security testing methods
  • Privacy, including the following topics:
    • Types of privacy-sensitive data
    • Privacy design best practices
    • Risk assessment
    • Privacy development best practices
    • Privacy testing best practices

The preceding training establishes an adequate knowledge baseline for technical personnel. As time and resources permit, training in advanced concepts may be necessary. Examples include, but are not limited to, the following:

  • Advanced security design and architecture
  • Trusted user interface design
  • Security vulnerabilities in detail
  • Implementing custom threat mitigations

Phase One: Requirements

SDL Practice 2: Security Requirements

The need to consider security and privacy “up front” is a fundamental aspect of secure system development. The optimal point to define trustworthiness requirements for a software project is during the initial planning stages. This early definition of requirements allows development teams to identify key milestones and deliverables, and permits the integration of security and privacy in a way that minimizes any disruption to plans and schedules. Security and privacy requirements analysis is performed at project inception and includes specification of minimum security requirements for the application as it is designed to run in its planned operational environment and specification and deployment of a security vulnerability/work item tracking system.

SDL Practice 3: Quality Gates/Bug Bars

Quality gates and bug bars are used to establish minimum acceptable levels of security and privacy quality. Defining these criteria at the start of a project improves the understanding of risks associated with security issues and enables teams to identify and fix security bugs during development. A project team must negotiate quality gates (for example, all compiler warnings must be triaged and fixed prior to code check-in) for each development phase, and then have them approved by the security advisor, who may add project-specific clarifications and more stringent security requirements as appropriate. The project team must also illustrate compliance with the negotiated quality gates in order to complete the Final Security Review (FSR).

A bug bar is a quality gate that applies to the entire software development project. It is used to define the severity thresholds of security vulnerabilities—for example, no known vulnerabilities in the application with a “critical” or “important” rating at time of release. The bug bar, once set, should never be relaxed. A dynamic bug bar is a moving target that is likely to be poorly understood within the development organization.

SDL Practice 4: Security and Privacy Risk Assessment

Security risk assessments (SRAs) and privacy risk assessments (PRAs) are mandatory processes that identify functional aspects of the software that require deep review. Such assessments must include the following information:

  1. (Security) Which portions of the project will require threat models before release?
  2. (Security) Which portions of the project will require security design reviews before release?
  3. (Security) Which portions of the project (if any) will require penetration testing by a mutually agreed upon group that is external to the project team?
  4. (Security) Are there any additional testing or analysis requirements the security advisor deems necessary to mitigate security risks?
  5. (Security) What is the specific scope of the fuzz testing requirements?
  6. (Privacy) What is the Privacy Impact Rating? The answer to this question is based on the following guidelines:
  • P1 High Privacy Risk. The feature, product, or service stores or transfers PII, changes settings or file type associations, or installs software.
  • P2 Moderate Privacy Risk. The sole behavior that affects privacy in the feature, product, or service is a one-time, user-initiated, anonymous data transfer (for example, the user clicks on a link and the software goes out to a Web site).
  • P3 Low Privacy Risk. No behaviors exist within the feature, product, or service that affect privacy. No anonymous or personal data is transferred, no PII is stored on the machine, no settings are changed on the user's behalf, and no software is installed.

Phase Two: Design

SDL Practice 5: Design Requirements

The optimal time to influence a project’s design trustworthiness is early in its life cycle. It is critically important to consider security and privacy concerns carefully during the design phase. Mitigation of security and privacy issues is much less expensive when performed during the opening stages of a project life cycle. Project teams should refrain from the practice of “bolting on” security and privacy features and mitigations near the end of a project’s development.

A formal exception or bug deferral method should be considered as part of any software development process. Many applications are based on legacy designs and code, so it may be necessary to defer certain security or privacy measures as a result of technical constraints.

In addition, it is crucially important for project teams to understand the distinction between "secure features" and "security features." It is quite possible to implement security features that are, in fact, insecure. Secure features are defined as features whose functionality is well engineered with respect to security, including rigorous validation of all data before processing or cryptographically robust implementation of libraries for cryptographic services. The term security features describes program functionality with security implications, such as Kerberos authentication or a firewall.

The design requirements activity contains a number of required actions. Examples include the creation of security and privacy design specifications, specification review, and specification of minimal cryptographic design requirements. Design specifications should describe security or privacy features that will be directly exposed to users, such as those that require user authentication to access specific data or user consent before use of a high-risk privacy feature. In addition, all design specifications should describe how to securely implement all functionality provided by a given feature or function. It’s a good practice to validate design specifications against the application’s functional specification. The functional specification should:

  • Accurately and completely describe the intended use of a feature or function.
  • Describe how to deploy the feature or function in a secure fashion.

SDL Practice 6: Attack Surface Reduction

Attack surface reduction is closely aligned with threat modeling, although it addresses security issues from a slightly different perspective. Attack surface reduction is a means of reducing risk by giving attackers less opportunity to exploit a potential weak spot or vulnerability. Attack surface reduction encompasses shutting off or restricting access to system services, applying the principle of least privilege, and employing layered defenses wherever possible.

SDL Practice 7: Threat Modeling

The preferred method for threat modeling is to use the SDL Threat Modeling Tool. The SDL Threat Modeling Tool is based on the STRIDE threat classification taxonomy.

Threat modeling is used in environments where there is meaningful security risk. It is a practice that allows development teams to consider, document, and discuss the security implications of designs in the context of their planned operational environment and in a structured fashion. Threat modeling also allows consideration of security issues at the component or application level. Threat modeling is a team exercise, encompassing program/project managers, developers, and testers, and represents the primary security analysis task performed during the software design stage.

Phase Three: Implementation

SDL Practice 8: Use Approved Tools

All development teams should define and publish a list of approved tools and their associated security checks, such as compiler/linker options and warnings. This list should be approved by the security advisor for the project team. Generally speaking, development teams should strive to use the latest version of approved tools to take advantage of new security analysis functionality and protections.

SDL Practice 9: Deprecate Unsafe Functions

Many commonly used functions and APIs are not secure in the face of the current threat environment. Project teams should analyze all functions and APIs that will be used in conjunction with a software development project and prohibit those that are determined to be unsafe. Once the banned list is determined, project teams should use header files (such as banned.h and strsafe.h), newer compilers, or code scanning tools to check code (including legacy code where appropriate) for the existence of banned functions, and replace those banned functions with safer alternatives.


Generally speaking, development teams should decide the optimal frequency for performing static analysis – to balance productivity with adequate security coverage.

SDL Practice 10: Static Analysis

Project teams should perform static analysis of source code. Static analysis of source code provides a scalable capability for security code review and can help ensure that secure coding policies are being followed. Static code analysis by itself is generally insufficient to replace a manual code review. The security team and security advisors should be aware of the strengths and weaknesses of static analysis tools and be prepared to augment static analysis tools with other tools or human review as appropriate.

Phase Four: Verification

SDL Practice 11: Dynamic Program Analysis

Run-time verification of software programs is necessary to ensure that a program’s functionality works as designed. This verification task should specify tools that monitor application behavior for memory corruption, user privilege issues, and other critical security problems. The SDL process uses run-time tools like AppVerifier, along with other techniques such as fuzz testing, to achieve desired levels of security test coverage.

SDL Practice 12: Fuzz Testing

Fuzz testing is a specialized form of dynamic analysis used to induce program failure by deliberately introducing malformed or random data to an application. The fuzz testing strategy is derived from the intended use of the application and the functional and design specifications for the application. The security advisor may require additional fuzz tests or increases in the scope and duration of fuzz testing.

SDL Practice 13: Threat Model and Attack Surface Review

It is common for an application to deviate significantly from the functional and design specifications created during the requirements and design phases of a software development project. Therefore, it is critical to re-review threat models and attack surface measurement of a given application when it is code complete. This review ensures that any design or implementation changes to the system have been accounted for, and that any new attack vectors created as a result of the changes have been reviewed and mitigated.

Phase Five: Release

SDL Practice 14: Incident Response Plan

Every software release subject to the requirements of the SDL must include an incident response plan. Even programs with no known vulnerabilities at the time of release can be subject to new threats that emerge over time. The incident response plan should include:

  • An identified sustained engineering (SE) team, or if the team is too small to have SE resources, an emergency response plan (ERP) that identifies the appropriate engineering, marketing, communications, and management staff to act as points of first contact in a security emergency.
  • On-call contacts with decision-making authority that are available 24 hours a day, seven days a week.
  • Security servicing plans for code inherited from other groups within the organization.
  • Security servicing plans for licensed third-party code, including file names, versions, source code, third-party contact information, and contractual permission to make changes (if appropriate).

SDL Practice 15: Final Security Review

The Final Security Review (FSR) is a deliberate examination of all the security activities performed on a software application prior to release. The FSR is performed by the security advisor with assistance from the regular development staff and the security and privacy team leads. The FSR is not a “penetrate and patch” exercise, nor is it a chance to perform security activities that were previously ignored or forgotten. The FSR usually includes an examination of threat models, exception requests, tool output, and performance against the previously determined quality gates or bug bars. The FSR results in one of three different outcomes:

  • Passed FSR. All security and privacy issues identified by the FSR process are fixed or mitigated.
  • Passed FSR with exceptions. All security and privacy issues identified by the FSR process are fixed or mitigated and/or all exceptions are satisfactorily resolved. Those issues that cannot be addressed (for example, vulnerabilities posed by legacy “design level” issues) are logged and corrected in the next release.
  • FSR with escalation. If a team does not meet all SDL requirements and the security advisor and the product team cannot reach an acceptable compromise, the security advisor cannot approve the project, and the project cannot be released. Teams must either address whatever SDL requirements that they can prior to launch or escalate to executive management for a decision.

SDL Practice 16: Release/Archive

Software release to manufacturing (RTM) or release to Web (RTW) is conditional on completion of the SDL process. The security advisor assigned to the release must certify (using the FSR and other data) that the project team has satisfied security requirements. Similarly, for all products that have at least one component with a Privacy Impact Rating of P1, the project’s privacy advisor must certify that the project team has satisfied the privacy requirements before the software can be shipped.

In addition, all pertinent information and data must be archived to allow for post-release servicing of the software. This includes all specifications, source code, binaries, private symbols, threat models, documentation, emergency response plans, license and servicing terms for any third-party software and any other data necessary to perform post-release servicing tasks.

Optional Security Activities

Optional security activities are generally performed when a software application is likely to be used in critical environments or scenarios. They are often specified by a security advisor as part of a negotiated set of additional requirements to ensure a greater level of security analysis for certain software components. The practices in this section provide examples of optional security tasks and should not be considered an exhaustive list.

Manual Code Review

Manual code review is an optional task in the SDL and is usually performed by highly skilled individuals on the application security team and/or the security advisor. While analysis tools can do much of the work of finding and flagging vulnerabilities, they are not perfect. As a result, manual code review is usually focused on the “critical” components of an application. Most often it is used where sensitive data, such as personally identifiable information (PII), is processed or stored. It is also used to examine other critical functionality such as cryptographic implementations.


Penetration Testing

Any issues identified during penetration testing must be addressed and resolved before the project is approved for release.

Penetration testing is a white box security analysis of a software system performed by skilled security professionals simulating the actions of a hacker. The objective of a penetration test is to uncover potential vulnerabilities resulting from coding errors, system configuration faults, or other operational deployment weaknesses. Penetration tests are often performed in conjunction with automated and manual code reviews to provide a greater level of analysis than would ordinarily be possible.

Vulnerability Analysis of Similar Applications

Many reputable sources of information about software vulnerabilities can be found on the Internet. In some cases, the analysis of vulnerabilities found in analogous software applications can shed light on potential design or implementation issues in software under development.

Other Process Requirements

Root Cause Analysis

While not traditionally a part of the software development process, root cause analysis plays an important part in ensuring software security. Upon discovery of a previously unknown vulnerability, an investigation should be performed to ascertain precisely where the security processes failed. These vulnerabilities can be attributed to a variety of causes, including human error, tool failure, and policy failure. The goal of root cause analysis is to understand the precise nature of the failure. This information helps to ensure that errors of the same type are accounted for in future revisions of the SDL.

Periodic Process Updates

Software threats are not static. As a result, the process used to secure software cannot be static. Organizations should take the knowledge learned from practices such as root cause analysis, policy changes, and improvements in technology and automation, and apply them to the SDL on a predictable schedule. Generally speaking, a yearly update schedule should suffice. The exception to this rule is when new, previously unknown vulnerability types are identified. This phenomenon requires immediate, out-of-cycle revision of the SDL to ensure proper mitigations are in place going forward.

Application Security Verification Process

Organizations developing secure software will naturally want a means to verify that the processes outlined in the Microsoft SDL have been followed. Access to centralized development and test data helps decision-making in a number of important scenarios, such as the Final Security Review, SDL requirement exception handling, and security audits. The process of verifying application security involves a number of different processes and actors:

  • A specially designated application should be used to track compliance with the SDL. This application serves as the central repository for all SDL process artifacts, including (but not limited to) design and implementation notes, threat models, tool log uploads, and other process attestations. As with any critical application, it should use access controls to ensure:
    • Only authorized personnel can use the application.
    • Strong separation between roles. For example, a developer may be able to use the application and upload data, but should be prohibited from accessing functionality reserved for the security and privacy advisors, security team leads, and testers.
  • The security and privacy team leads are responsible for ensuring that the data necessary for an objective judgment is properly categorized and entered into the tracking application.
  • The information entered into the tracking application is used by the security and privacy advisors to provide the analysis framework for the Final Security Review.
  • The security and privacy advisors are responsible for reviewing the data entered into the tracking application (including the FSR results and other additional security tasks assigned by the advisors) and certifying that all requirements are met and/or all exceptions are satisfactorily resolved.

This document focuses on the Advanced level of the SDL Optimization Model, where rudimentary tracking processes are (in most instances) insufficient to the task. However, organizations with less sophisticated processes or smaller resource pools—those who fit within the Basic or Standardized levels of the SDL Optimization Model—can likely make do with a simpler tracking process.

It is very important that the tracking and verification process accurately capture:

  • The security and privacy requirements of the organization (for example, no known critical vulnerabilities at release).
  • The functional and technical requirements of the application under development.
  • The application’s operational context.

For example, if a development team creates a process control application to run in a critical environment, the proper investment of time and resources must be allocated to the creation and maintenance of the tracking process to enable objective analyses by the organization’s security and privacy principals, executive leadership, and relevant third parties such as compliance auditors or evaluators.

Put differently, skimping on the tracking process inevitably leads to problems later, usually during an emergency. Ensure that reliable systems are in place to answer critical questions at critical times.

Resources :

Home page of the SDL – http://www.microsoft.com/security/sdl/default.aspx

In need of a formal test case management system? Look no further than VS2013 Web Access (TWA)

In Visual Studio 2012, Microsoft provided test case management and test execution capabilities in TFS Web Access as part of Visual Studio 2012 Update 2.

Now in Visual Studio 2013, new capabilities and features have been added to create and modify test plans in Visual Studio 2013 Web Access (TWA).

You don’t have to switch to Microsoft Test Manager for Test Plans, Test Suites and Shared Steps creation. The entire ‘Test Case Management’ can be done from the Web Access.

TWA connects you to Visual Studio Team Foundation Server (TFS) or Team Foundation Service to manage source code, work items, builds, and test efforts.

In order to access the Test tab in Web Access, we need to grant Full access to the Windows user or group who will use the functionality.

web-access-testing01

Observe the Test tab. If the settings are not configured, we need to go to Access Levels in the Control Panel, where we can find the three different Access Level settings, as shown here.

access-levels

The default option is Limited Access. We need to set the option to Full for the user to get the Test tab.

Now observe the Test tab (right next to the Build tab as seen in Figure 1). I had already created a Test Plan and Test Suites using Microsoft Test Manager 2013. These artifacts start appearing in Web Access under the Test tab immediately.

The Test Plan has 3 Test Suites – Requirement based, Static and Query based. Each suite has some Test Cases. We can also see the different status of an individual test case.

test-case-status

Note: I am also enclosing a similar Test Plan screenshot from Microsoft Test Manager 2013 for comparing the two. Observe the categorization of test status with Microsoft Test Manager.

microsoft-test-manager

In case we want to customize the columns, we can do so as follows. The bracketed number seen in the screenshot below shows the width of each column. We can add columns from the left-hand side and the view will change.

web-access-testing03

We can even create a new Test Plan. The Test Plan can have a name, Area Path and Iteration Path. Features like configuration settings, Test Settings and Test Environment can be added with Microsoft Test Manager 2013.

create-test-plan
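
For teams that prefer automation, the same Test Plan artifacts can also be created through the TFS client object model. The sketch below is a hedged illustration only; it assumes the Visual Studio 2013 Team Explorer object model assemblies are installed, and the server URL, project name, and paths are placeholders.

```csharp
// Hedged sketch: create a Test Plan programmatically via the TFS test management API.
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.TestManagement.Client;

class CreateTestPlanSample
{
    static void Main()
    {
        // Placeholder collection URL and project name.
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));

        ITestManagementService service = collection.GetService<ITestManagementService>();
        ITestManagementTeamProject project = service.GetTeamProject("MyProject");

        // Create and save the plan; suites and test cases can then be added under plan.RootSuite.
        ITestPlan plan = project.TestPlans.Create();
        plan.Name = "Sprint 1 Test Plan";
        plan.AreaPath = "MyProject";
        plan.Iteration = "MyProject\\Sprint 1";
        plan.Save();
    }
}
```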

Once a new Test Plan is added, we can add Test Suites to it. Test Suites can be of three types – Requirement Based, Static or Query Based. Even Shared Steps can be added if required. Creating any kind of Test Suite provides the option of adding new or existing test cases to the Suite.

test-plan

A new Test Case can be created either with the normal IDE-style form or using the Grid.

new-test-case

We can provide all the details to the Test Case like Title, Iteration, Area, and Assigned To (ownership of Test Case). The Steps to the test case can be added as action and expected result. If the test case is testing a requirement, Tested Backlog Items tab can be selected and a link can be provided to it. Any attachments we add to the test step can be viewed inline when the test case gets executed. If the attachment is in the form of image, the test step will show the actual image. If the attachment is in the form of a file, the file name with size appears. You can also add attachments when you run test case. These can be  log files etc.

If we select the option of creation of New Test Case with Grid, we get a similar screen as follows.

gird-test-case

We can add actions and expected results to each test case. Create a new Test Case by providing the title. All the test cases can be stored in Team Foundation Server at once. When we have finished creating the Test Plan, Test Suites and Test Cases, we can start executing the Test Cases. While running a test case, we have the option of running it the default way with Web Access or using the client (Microsoft Test Manager).

Once we start the test runner, the test case with its steps is displayed in the left-hand pane (around one third of the screen) and the remaining area is available for the actual execution, as seen with Microsoft Test Manager.

test-runner

Once the execution starts and we encounter an error, we can create a bug, add a comment to it or add an attachment. Once the execution is completed, we can save and close the runner. We have various options to mark the test case as Pass, Fail, Block or Not applicable. The test case can also be marked as Paused; for a paused test case, we later get Resume test as the option.

web-access-testing20

While creating a bug with Web Access, it is possible to add comments and attachments along with the bug; however, we cannot create a rich bug. To create a rich bug we have to use Microsoft Test Manager 2013. The comments and attachments can be seen in the bug as follows.

web-access-testing22

While testing with Web Access there are some limitations. Basic testing can be executed, but creating a rich bug, viewing test results, and exploratory testing require the Microsoft Test Manager 2013 client.

However, despite these limitations, being able to plan tests, manage a full test suite and execute test cases using the lightweight browser-based Team Web Access helps us improve quality in software projects without leaving our familiar workspace.

As the following illustration shows, you can access a number of features from the Home page of TWA. You switch to different views and pages by first choosing one of the context view links at the top and then one of the pages within the context view. You can switch context between teams and team projects from the project context menu toward the top-right of each page. You access the administration pages by choosing the Settings (gear) icon.

Home page (Team Web Access)

Important
The links and pages that you have access to depend on (1) the Web Access Permissions group to which you are assigned (Limited, Standard, or Full; see Change access levels), and (2) whether the resource has been configured for your team project or team project collection. The following links appear on the home page for the associated Web Access Permissions group shown in parentheses:

  • View backlog (Full): Opens the Product Backlog page which provides access to both the product backlog and iteration or sprint pages. See Create and organize the product backlog.
  • View board (Full): Opens the Task Board page used to review progress during a sprint and update information for work performed. See Work in sprints.
  • View work items: Opens the Work Items page used to create work items and work item queries. See Query for Bugs, Tasks, or Other Work Items.
  • Request feedback (Full): Opens the Feedback Request form to invite stakeholders to provide feedback. See Request and review feedback.
  • Go to project portal: Requires a project portal has been enabled for your team project. See Access a Team Project Portal or Process Guidance.
  • View reports:  Requires Reporting to be enabled for the instance of TFS. See Add a report server.
  • Open new instance of Visual Studio: Opens an instance of Visual Studio 2012 and automatically connects to the team project context selected in TWA. Requires that you have a recent version of Visual Studio installed.

 

Free integration guide -Microsoft Dynamics CRM Online and Office 365

Combining the online services of Office 365 with Microsoft Dynamics CRM Online empowers your teams to work where and when they want with best-of-class cloud services.

This guide is intended for Microsoft Dynamics CRM administrators and technical decision makers interested in exploring Office 365 services and how they integrate with Microsoft Dynamics CRM Online. Integration with Office 365 becomes increasingly relevant to Microsoft Dynamics CRM Online users as management of Microsoft Dynamics CRM Online shifts to the Microsoft online services environment.

For a PDF version of this document, Integration Guide: Microsoft Dynamics CRM Online and Office 365, visit http://download.microsoft.com/download/D/4/F/D4F5A3C3-E3CB-48C9-85DE-4ED0B7FFBD60/CRMO365Integration.pdf

Some of what this paper covers:

  • Add an Office 365 trial subscription to Microsoft Dynamics CRM Online
  • Set up CRM Online to use Exchange Online
  • Set up CRM Online to use SharePoint Online
  • Set up CRM Online to use Lync Online
  • Set up CRM Online to use Yammer

Looking for a standard project management approach to developing SharePoint solutions? Look no further than Microsoft’s SharePoint Server 2013 Application Life Cycle Management

Microsoft SharePoint Server 2013 gives developers several options for creating and deploying applications that are based on SharePoint technologies, both on-premises and on hosted or public cloud platforms. SharePoint Server 2013 offers increased flexibility in the shape applications can take as well as new options for using standards-based technologies with applications. Although the application capabilities and deployment options afforded by the new application model in SharePoint provide an effective means for developers to create new and immersive applications, developers must be able to infuse quality, testing, and ALM considerations into the development process. This article applies common ALM concepts and practices to application development using SharePoint Server 2013 technologies.

What’s new

SharePoint Server 2013 establishes a new paradigm for implementing applications. Because of this shift in application development with SharePoint technologies, developers and architects should have a thorough understanding of the new application development patterns, practices, and deployment models for SharePoint Server 2013. It’s important to note that, while the application model for developing solutions with SharePoint has changed, many of the patterns used for solution development, including the choice of technologies and implementation techniques, are in line with existing web application development practices.

The following resources outline the application types that can be constructed using SharePoint Server 2013 technologies and contain considerations for both on-premises and cloud applications. To understand hosting options for apps for SharePoint, see Choose patterns for developing and hosting your app for SharePoint.

Additionally, Microsoft advises customers to evaluate the technologies used when developing applications with SharePoint Server 2013, as there is a wider set of choices for solution implementation. When creating applications, customers can focus on leveraging standards-based technologies such as HTML5 and JavaScript for presentation and user experience layers, while OData and OAuth can be leveraged for service-based access to back-end services, including SharePoint. Customers should consider carefully whether full trust code (that is, compiled assemblies deployed to SharePoint) is required, because continuing to use that development paradigm, while still valid and required in some situations, does impose significant overhead on the ALM process.

For more information about the new flexible development technologies for applications on SharePoint Server 2013, see SharePoint 2013 development overview.

Benefits and changes

Because SharePoint-supported application development technologies now offer a more flexible assortment of languages and programming architectures, developers need to adapt existing ALM practices around mainstream development techniques to accommodate their presence within SharePoint. Concepts such as testing, build establishment, deployment, and quality control can be expanded to include deployment to SharePoint as a SharePoint application. This means that, although many developers are accustomed to writing and deploying server-side farm solutions that extend the core capabilities of SharePoint, common ALM practices for the new flexible development model facilitated by SharePoint Server 2013 applications must be applied to the implementation process.

As customers continue the transition to cloud-hosted implementations of SharePoint Server 2013, developers will need to understand how to extend ALM concepts to include development, testing, and deployment target environments that sit outside the physical boundaries of the organization. This includes evaluating the technology strategy for conducting application development, testing, and deployment.

Developers and architects alike can become well-versed in synthesizing solutions that consist of multiple application components that span or combine different types of hosting options. During this adaptation process, ALM procedures should be applied uniformly to these applications. For example, developers may need to deploy an application that spans on-premises services deployment (that is, IIS, ASP.NET, MVC, WebAPI, and WCF), Windows Azure, SharePoint Server 2013, and SQL Azure, while also being able to test the application components to determine quality or whether any regressions have been introduced since a previous build. These requirements may signify a significant shift in how developers and teams regard the daily build and deployment process that is a well-known procedure for on-premises or server-side solutions.

Development team considerations

For organizations that have more than one application developer or architect, team development for SharePoint Server 2013 should be carefully planned to provide the highest-quality applications as well as support sufficient developer productivity. Because the method for conducting application development has increased in flexibility, teams will need to be clear and confident not only on ALM practices and patterns, but also on how each developer will write code and ensure that quality code becomes part of the application build process.

These considerations begin with selecting the appropriate development environment. Traditionally, development has meant conducting separate development in virtual machines connected to a common code repository that provides build, deployment, and testing capabilities, such as Visual Studio TFS 2012. TFS remains an instrumental component of an ALM strategy, and central to the development effort, but teams should consider how to leverage TFS across the different types of development environment options.

Depending on the target environment and the solution type (that is, which components will be on-premises and which will be hosted in cloud infrastructure or services), developers can now select from a combination of new development environment options. These options include new choices such as the SharePoint developer site template and an Office 365 developer tenant, as well as legacy choices such as virtual machine-based development using Hyper-V in Windows 8 or Windows Server 2012.

The following section describes development environment considerations for application developers and development teams.

The selection of a development environment should be made based on multiple factors. These considerations are largely influenced by the type of application being developed as well as the target platform for the application. Traditionally, when creating applications for SharePoint Server 2010, developers would provision virtual machines and conduct development in isolation. This was due to the fact that deployment of full trust solutions may have required restarts of core SharePoint dependencies, such as IIS, which would prevent multiple developers from using a single SharePoint environment. Because development technologies have changed and the options for developers creating applications have increased, developers and teams should understand the choice of development environments available to them. Figure 1 shows the development environment and tool mix, and includes the types of solutions that can be deployed to the target environments.

Figure 1. Development environment components and tools

The app development environment can include Office 365, Visual Studio, and virtual machines.

Development environment philosophy

Because of the investments made in how applications can be designed and implemented using SharePoint Server 2013, developers should determine if there is a need to conduct development using server-side code. As developers create applications that use the cloud-hosted model, the requirement to conduct development that relies on virtualized environments, specifically for SharePoint, diminishes. Developers should seek to build solutions with the remote-development model that uses existing cloud-based (both public and private) infrastructure. If development environments can be quickly and easily provisioned without having to create and orchestrate virtualization, developers can invest more time in focusing on development productivity and quality, rather than infrastructure management.

The decision to require a virtualized instance of SharePoint Server 2013 versus the new SharePoint development site template will depend on whether or not the application requires full trust code to be deployed to SharePoint and run there. If no full trust code is required, we recommend using the developer site template, which can be found in Office 365 development tenants or within an organization’s implementation of on-premises SharePoint. Developer site templates are designed for developers to deploy applications directly to SharePoint from Visual Studio. Office 365 developer sites are preconfigured for application isolation and OAuth so that developers can begin writing and testing applications right away.

The following sections describe in detail when developers can use the different environment options to build applications.

O365 development sites (public cloud)

Figure 2 shows how developers can use Office 365 as a development environment and includes the types of tools used to produce SharePoint applications that can be hosted in Office 365.

Figure 2. Office 365 app development

Build apps for SharePoint with Office 365, Visual Studio, and "Napa."

Developers with MSDN subscriptions can obtain a development tenant that contains a SharePoint Developer Site. The SharePoint Developer Site is preconfigured for developing applications. Developers can use not only Visual Studio 2012 to develop applications; with Office 365 developer sites, the “Napa” Office 365 Development Tools can also be used within the site to construct applications. For more information about getting started with an Office 365 Developer Site, see Sign up for an Office 365 Developer Site.

Developers can start creating applications that will be hosted in Office 365, on-premises, or on other infrastructure in a provider-hosted model. The benefit of this environment is that infrastructure, virtualization, and other hosting considerations for a SharePoint development environment are abstracted by Office 365, allowing developers to create applications instantly. A prime consideration for this type of development environment is that applications that require full trust code to be deployed to SharePoint cannot be accommodated. Microsoft recommends using the SharePoint client-side object model (CSOM) and client-side technologies such as JavaScript as much as possible. In the event that full trust code is required (but deployment of the code to run on SharePoint is not required), we recommend deploying the server-side code in an autohosted or provider-hosted model. Note that these full trust code solutions that are deployed to provider-hosted infrastructure also use the CSOM but can use languages such as C#. It’s also important to note that applications deployed in a provider-hosted model can use other technology stacks and still use the CSOM to interact with SharePoint Server 2013.
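As a minimal sketch of the CSOM approach, the following C# fragment reads the title of a SharePoint web in a single round trip. The site URL and the credential handling are placeholders, not part of the original article; a real provider-hosted app would attach an OAuth access token to the request.

using System;
using Microsoft.SharePoint.Client;

class CsomSketch
{
    static void Main()
    {
        // Placeholder site URL; a real app would target its host web or app web.
        using (var context = new ClientContext("https://contoso.sharepoint.com/sites/dev"))
        {
            // This sketch assumes the default credentials of the current user.
            Web web = context.Web;
            context.Load(web, w => w.Title);   // queue the properties to retrieve
            context.ExecuteQuery();            // one round trip to SharePoint
            Console.WriteLine(web.Title);
        }
    }
}

The same pattern works from JavaScript (JSOM) or from server-side code running on provider-hosted infrastructure, which is why Microsoft positions the CSOM as the default remote API surface for the new app model.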

Development teams creating separate features or applications that are part of a larger solution will need a centralized deployment target in which to integration-test the components. Because each developer is creating features or applications on their own Office 365 developer site, a centralized site collection in a target tenant or on-premises environment should be provisioned so that each developer’s application components can be deployed there. This approach provides a centralized place to conduct integration testing between solution components. The testing section of this document reviews this process in more detail.

“Napa” Office 365 Development Tools

The “Napa” Office 365 Development Tools can be used by developers for the simpler creation of applications within an Office 365 developer site. The intention of these tools is for developers, or power users who are proficient in client-side technologies, to quickly develop and deploy applications in a prototype, proof-of-concept, or rapid business solution scenario. The tools provide a means of developing application functionality on SharePoint. However, during the lifecycle of an application, there may be points at which the application should be imported into Visual Studio. These conditions are outlined as follows:

  • When more than one developer has to contribute or develop part of the solution

  • When an application reaches a level of dependence by users that requires the application of lifecycle management practices

  • When functional requirements for the application change over time to require supplementary solution components (such as compiled services or data sources)

  • When the application requires integration with other applications or solution components

  • When developers have to use quality control measures such as automated builds and testing

Once these or other similar conditions occur, developers must export the solution into a source-controlled environment such as TFS and apply ALM design considerations and procedures to the application’s future development.

Development sites (remote development)

For organizations or developers who choose not to use Office 365 developer sites as a primary means for SharePoint application development, on-premises developer sites can be used to develop SharePoint applications. In this model, the Office 365 developer sites’ capability is replaced with on-premises developer sites hosted within a SharePoint farm. Customers can create a development private cloud by deploying a SharePoint farm to house developer site instances. Customers can supply their own governance automation in order to provide developer site template creation or use the SharePoint in-product capabilities to provision developer site instances. Figure 3 illustrates this setup.

Figure 3. On-premises app development with the developer site template

Build apps for SharePoint in an on-premises deployment of SharePoint 2013 with the developer site template

Figure 3 describes the development tools and application types that can be enabled with developer sites when using an on-premises SharePoint farm as a host. Note that the “Napa” Office 365 Development Tools cannot be used in this environment, as they are a capability present only in Office 365 developer sites.

The SharePoint farm that hosts Developer Site instances must be monitored and must meet service-level and recovery point and time objectives so that developers who rely on it to create applications can be productive and not experience outages. Customers can apply private cloud concepts such as elasticity, scale units, and a management fabric to this environment. Operations and management also have to be applied to the SharePoint farm where the developer sites are hosted. This helps control unmonitored sprawl of multiple developer sites that are stale or unused and provides a way to understand when the environment has to scale.

Customers can decide to use infrastructure as a service (IaaS) capabilities like Windows Azure to host the SharePoint farms that contain and host developer sites, or their own on-premises virtual or physical environments. Note that this model does not require a SharePoint installation for each developer. Remote application development requires only Visual Studio and the Office and SharePoint 2013 development tools on the developer workstation.

Developers must establish provider-hosted infrastructure to deploy provider-hosted applications. Although provider-hosted components of a SharePoint application can be implemented in a wide array of technologies, developers must provide an infrastructure for hosting those components of the application that run outside SharePoint. For example, if a team is developing a SharePoint application whose user experience and other components reside in an ASP.NET application, the development team should use local versions of IIS, SQL Server, and so on, and engage in traditional ALM team development patterns for ASP.NET.

Self-contained farm environments (virtualized farm development)

For solutions that require the deployment of full trust code to run on a SharePoint farm, a full (often virtualized) implementation of SharePoint Server 2013 is required. For guidance on how to create an on-premises development environment for SharePoint, see How to: Set up an on-premises development environment for apps for SharePoint.

Figure 4 shows the types of applications that can be created using an on-premises virtualized environment.

Figure 4. On-premises development with a virtual environment

Build apps for SharePoint in a virtual on-premises environment

Developers can conduct remote development for SharePoint and cloud-hosted applications within their own SharePoint farms, as well as development of full trust farm solutions. These farms are often hosted within a virtualization server running either on the developer’s workstation or in a centralized virtualization private cloud that is easily accessible to developers. The SharePoint farm environment is usually separate from other developers’ farms and provides the level of isolation that is required when developing full trust code that may require the restart of critical services (that is, IIS).

Both remote development and full trust code development can occur within the self-contained farm, because each development farm is isolated and dedicated to a single developer.

Organizations or developers will have to manage and update the SharePoint farms running within the virtual computers. For developers who are contributing to a single application, parity across the development farms running inside the virtual computers must be maintained. This practice ensures that each component of code developed for the application has consistency. Other common considerations are a standard configuration for the farms including domain access and credentials, service application credentials, testing identities or accounts and other environmental configuration elements (such as certificates).

Similar to a centralized farm for development sites, these virtual machines running developer SharePoint farms can be hosted on IaaS platforms such as Windows Azure, or on on-premises private cloud offerings.

Note that, although virtual machines offer a great deal of isolation and independence from other developer virtual machines, teams should strive for uniformity between the virtual machine configurations. This includes a common domain, account and security settings, SharePoint configurations, and a connection to a source control repository such as Visual Studio Team Foundation Server (TFS).

When constructing SharePoint applications, there are several considerations that have to be addressed to provide governance and common development practices for consistency and quality. When applying ALM principles to SharePoint application development, developers must focus on technical considerations as well as process-driven considerations.

The support of an ALM platform, such as Visual Studio Team Foundation Server 2012, is usually a requirement when conducting application development especially with teams of developers working on the same set of projects. SharePoint applications, like other technical solutions, require code repository management and versioning, build services, testing services, and release management practices. The following section describes considerations for ALM as applied to the different application models for SharePoint applications.

Overview

For each type of SharePoint application, the ALM considerations must be applied without variation in concept. However, practices and procedures around build, testing, and change management must be adjusted.

Some SharePoint applications will use client-side technologies. Most developers who have SharePoint Server 2010 application development experience will have to adjust to developing and applying ALM principles to non-compiled code. This adjustment includes applying concepts such as a “build” to a solution that may not have compiled code. ALM platforms such as Visual Studio 2012 have built-in capabilities to validate builds by first compiling the code, and second, by running build verification tests (BVTs) against the build.

For SharePoint applications, the process relating to build and testing should remain consistent with traditional application development processes. This includes the creation of a build schedule by the ALM platform, which will compile the solution and deploy it into the target environment.

Build processes

The SharePoint application build processes are facilitated by the ALM platform. Visual Studio Team Foundation Server 2012 provides both build and test services that can be triggered on solution check in from Visual Studio 2012 (continuous integration) or at specified scheduled intervals.

SharePoint build components

When planning build processes for SharePoint application development, developers have to consider the interactions between the components, as shown in Figure 5.

Figure 5. SharePoint-hosted app build components

Visual Studio builds work with app manifests, pages, and supporting files.

The illustration in Figure 5 is a logical representation of a SharePoint application. This illustration shows a SharePoint-hosted app and highlights key application objects as part of a Visual Studio 2012SharePoint-hosted app project. The SharePoint app project contains the features, package, and manifest that will be registered with SharePoint. The project also contains pages, script libraries, and other elements of user experience that constitute the SharePoint application. In addition, the SharePoint project has supporting files which include the necessary certificates for deployment to a target SharePoint environment.

Figure 6. Provider-hosted and autohosted app build components

Provider-hosted apps contain both SharePoint app packages and cloud-hosted components.

Figure 6 shows a SharePoint cloud-hosted application (that is, autohosted or provider-hosted). The main difference in the project structure is that the Visual Studio 2012 solution contains a SharePoint application project in addition to one or more projects that contain the cloud-hosted application components. These may include web applications, SQL database projects, or service applications that will be deployed to Windows Azure or an on-premises provider hosted infrastructure (such as ASP.NET) and other solution components. For guidance on packaging and deployment with high-trust applications, see How to: Package and publish high-trust apps for SharePoint 2013.

Figure 7. ALM with Visual Studio Team Foundation Server

TFS can be configured to conduct build and deployment activities with a SharePoint application through build definitions.

Figure 7 shows TFS as the ALM platform. Teams will use TFS to store code and conduct team development either using TFS deployed on-premises or using Microsoft cloud-based TFS services. TFS can be configured to conduct build and deployment activities with a SharePoint application through build definitions. TFS can also be used to conduct build verification tests (BVTs) that may be automated through the execution of coded UI tests that are part of the build definition.

Figure 8. TFS build targets

Scripts executed by a TFS build definition will deploy the SharePoint application components to SharePoint Online and on-premises SharePoint.

Figure 8 shows the target environments where scripts executed by a TFS build definition will deploy the SharePoint application components. For SharePoint-hosted applications this includes deployment to either SharePoint Online or to on-premises SharePoint application catalogs.

For cloud-hosted SharePoint applications, the components of the solution that require additional infrastructure outside SharePoint are deployed to target environments. For autohosted applications, this will be Windows Azure. For provider-hosted applications, this infrastructure can be Windows Azure, or other on-premises or IaaS-hosted environments.

Creating a build for SharePoint applications

TFS provides build services that can compile solutions checked into source control and place the output in a centralized drop location for deployment to target environments in an automated manner. The primary method of configuring TFS to conduct automated builds, deployments, and testing of SharePoint applications is to create a build definition in Visual Studio. The build definition contains information about which code projects to compile, as well as any post-compilation activities such as testing and actual deployment to the target environments. For more information about the Team Foundation Build Service, see Set up Team Foundation Build Service.

To achieve continuous integration, the build definition can be triggered when developers check in code. Additionally, the build definition can be scheduled to execute at set intervals.
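For teams that prefer to script this configuration rather than create the definition through the Visual Studio dialogs, a build definition with a continuous integration trigger can also be created through the TFS 2012 client object model. The sketch below is illustrative only: the collection URL, team project name, and drop share are placeholders, and a real definition would still need its build process template parameters configured (the downloadable build definitions mentioned in the next paragraph handle that for you).

using System;
using Microsoft.TeamFoundation.Build.Client;
using Microsoft.TeamFoundation.Client;

class CreateCiBuildDefinition
{
    static void Main()
    {
        // Placeholder collection URL and team project name.
        var collection = new TfsTeamProjectCollection(
            new Uri("http://tfs.contoso.com:8080/tfs/DefaultCollection"));
        IBuildServer buildServer = collection.GetService<IBuildServer>();

        IBuildDefinition definition = buildServer.CreateBuildDefinition("ContosoApp");
        definition.Name = "ContosoApp.CI";
        definition.ContinuousIntegrationType = ContinuousIntegrationType.Individual; // build on every check-in
        definition.BuildController = buildServer.QueryBuildControllers()[0];          // first available controller
        definition.DefaultDropLocation = @"\\buildserver\drops\ContosoApp";           // placeholder drop share
        definition.Workspace.AddMapping("$/ContosoApp", "$(SourceDir)", WorkspaceMappingType.Map);
        definition.Process = buildServer.QueryProcessTemplates("ContosoApp")[0];      // pick a registered process template
        definition.Save();
    }
}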

For SharePoint applications, developers should use the Office/SharePoint 2013 Continuous Integration with TFS 2012 build definitions project to achieve scheduled builds or continuous integration. This project provides build definitions, Windows PowerShell scripts, and process instructions on how to configure Visual Studio Online or an on-premises version of TFS to build and deploy SharePoint applications in a continuous integration model. Developers should download the components in this project and configure their instance of TFS accordingly. For instructions on how to configure TFS with the provided build definition for SharePoint applications and configure the build definition to use the Windows PowerShell scripts to deploy the SharePoint application to a target environment, see the Office/SharePoint 2013 Continuous Integration with TFS 2012 documentation.

Configuring build and deployment procedures

Figure 9 shows a standard process for SharePoint application builds and deployments when the build definition has been created, configured, and deployed to the team’s instance of TFS.

Figure 9. Build and deployment process with TFS

TFS build services execute the steps defined by the SharePoint application build definition.

The developer checks in the SharePoint application Visual Studio 2012 solution. Depending on the desired configuration (that is, continuous integration or scheduled build), TFS build services will execute the steps defined by the SharePoint application build definition. This definition, configured by developers, contains the continuous integration build process template as well as post-build instructions to execute Windows PowerShell scripts for application deployment. Note that the SharePoint Online Management Shell extensions will be required in order to deploy the application to SharePoint Online. For more information about SharePoint Online Management Shell extensions, see SharePoint Online Management Shell page on the Download Center.

Once the build has been triggered, TFS will compile the projects associated with the SharePoint application and execute Windows PowerShell scripts to deploy the solution to the target SharePoint environment.

Trusting the SharePoint application

Following deployment of the application components to the target environments, it is important to note that, before anyone accesses the application (including any automated tests that may be part of the build), a tenant (or site collection) administrator will have to trust the application on the app information page in SharePoint. This requirement applies to autohosted and SharePoint-hosted apps. This manual step represents a change in the build process, as tests that would typically run following the deployment to the target environment have to be suspended until the application is trusted.

Note that for cloud-hosted (auto and provider) applications, developers can deploy the non-SharePoint components to the cloud-hosted infrastructure separately from the application package that is deployed to SharePoint.

Figure 10. Deploying non-SharePoint components

As developers make changes to the solution that represents the SharePoint application, there may be circumstances where changes are made to the projects within the solution that do not apply to the SharePoint application project itself.

As shown in Figure 10, when developers make changes to the solution that represents the SharePoint application, there may be circumstances where changes are made to the projects within the solution that do not apply to the SharePoint application project itself. In this circumstance, the SharePoint application project does not have to be redeployed because it has not changed. Only the changes associated with the cloud-hosted projects must be redeployed.

Changes to the application components that will be deployed to infrastructure outside SharePoint can be made separately from the application components that get deployed into the target site collection or tenant. For developers, this means that an automated build process can be created to deploy the cloud-hosted components on a frequent (triggered) basis, separately from the SharePoint application project. Thus, the manual step to approve the application’s permissions on the app information page in SharePoint is not required, allowing for a more continuous deployment and testing process for the build definition. The SharePoint application component of the solution only has to be deployed when the items within that project have changed and require redeployment.

Testing

As described in the build processes section, application testing is a method of determining whether the compilation and deployment of the application was successful. By using testing as a means of verifying the build and deployment of the application, the team is provided with an understanding of quality, as well as a way of knowing when a recent change to the application’s code has compromised the SharePoint application.

Figure 11 shows the types of testing approaches that are best used with SharePoint application models.

Figure 11. Testing approaches

Coded UI tests should be leveraged against SharePoint-hosted applications where the business logic and the user experience reside in the same layer.

Figure 11 suggests the use of different types of tests for testing SharePoint applications by type. Coded UI tests should be used against SharePoint-hosted applications where the business logic and the user experience reside in the same layer. While business logic can be abstracted to JavaScript libraries, a primary means of testing that logic will be through the user experience.

Cloud-hosted applications (that is, autohosted and provider-hosted) can use fully coded UI tests while also using unit tests for verification of the service components of the solution. This provides developer confidence in the quality of the application’s hosted infrastructure implementation from a functional perspective.

The following sections review the considerations for coded UI tests and other test types in relation to SharePoint applications.

Client-side code and coded UI tests

For build verification testing (BVT) as well as complete system testing, coded UI tests are recommended. These tests rely on recorded actions to test not only the business logic and middle tier of the application, but the user experience as well. For SharePoint applications that use client-side code, much of the business logic’s entry points and execution may exist in the user experience tier. For this reason, coded UI tests can not only test the user experience, but the business logic of the application as well. For more information about the coded UI test, see Verifying Code by Using UI Automation.

Coded UI tests can be used in SharePoint-hosted apps where much of the UX and the business logic may be intermixed. These tests, like others, can be run from a build definition in TFS so that they can verify an application’s functionality after deployment (and after the application has been trusted by SharePoint).
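A minimal coded UI test skeleton, sketched below, shows the shape such a build verification test might take. The page URL is a placeholder, and the recorded actions and assertions (normally generated into a UIMap class by the Coded UI Test Builder) are only indicated as comments, since they differ per project.

using System;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class SharePointAppUiTests
{
    [TestMethod]
    public void AddItem_ItemAppearsInList()
    {
        // Placeholder URL for the deployed (and trusted) app page.
        BrowserWindow browser = BrowserWindow.Launch(
            new Uri("https://contoso.sharepoint.com/sites/test/ContosoApp/Default.aspx"));

        // Recorded actions and assertions normally live in a generated UIMap, for example:
        // this.UIMap.AddItemToList();
        // this.UIMap.AssertItemAppearsInList();

        browser.Close();
    }
}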

Non-coded UI tests

For circumstances where the application logic exists outside the application’s UX layer, such as in cloud-hosted applications, a combination of coded UI tests and non-coded UI tests should be leveraged. Tests such as traditional unit tests can be used to validate the build quality of service logic that is implemented on a provider-hosted infrastructure. This gives the developer confidence that the provider-hosted components of the solution function correctly and are covered from a testing point of view.
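For illustration, a conventional MSTest unit test against a piece of provider-hosted service logic might look like the following sketch. The OrderService class is a hypothetical stand-in for the solution’s real service code and is inlined here only so the example is self-contained.

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical service logic from the provider-hosted (ASP.NET/Web API) layer.
public class OrderService
{
    public decimal CalculateTotal(decimal amount, decimal discountRate)
    {
        // Apply the discount only to orders of 500 or more.
        return amount >= 500m ? amount * (1 - discountRate) : amount;
    }
}

[TestClass]
public class OrderServiceTests
{
    [TestMethod]
    public void CalculateTotal_AppliesDiscountOverThreshold()
    {
        var service = new OrderService();
        Assert.AreEqual(900m, service.CalculateTotal(1000m, 0.1m)); // business rule verified without any UI
    }
}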

Web performance and load tests

Web performance and load tests provide developers with the confidence that the application functions under expected or anticipated user loads. This testing includes determining the application’s capability to concurrently handle a predictable user base that will reasonably scale over time. Both coded UI and unit tests can be the source of the web performance and load test. Using an ALM platform like TFS, these tests can be used to load test the application.

Note that the testing of the infrastructure is not a primary goal of these tests when using them to test SharePoint applications. The infrastructure, whether SharePoint-hosted or provider-hosted, should have a similar load and performance baseline established. The web performance and load tests for the application will highlight infrastructure challenges, but should be regarded primarily as a means to test the application’s performance.

For more information about web performance and load tests, see Run performance tests on an application before a release.

Quality and testing environments

Many organizations have several testing environments, which may be physical or virtual and separate from each other. These environments can vary based on a team’s ALM process, regulatory requirements, or a combination of both. The following guidance for determining the number and types of testing environments that teams should use is based on functional practices specific to SharePoint applications as well as ALM practices for software development at large.

Developer testing

Developer testing can occur in the environment where the developers are creating their component of the solution. Multiple developers, working on different aspects or components of a larger application, will each have unit tests, coded UI tests, and the application code deployed into their development site.

Figure 12. Developer testing process

Developers will execute tests from Visual Studio against the solution components deployed in their own Office 365 or on-premises developer site.

Developers will execute tests from Visual Studio against the solution components deployed in their own Office 365 or on-premises developer site. For cloud-hosted applications, the tests will also be executed from Visual Studio against the solution components that reside on provider-hosted infrastructure. These components will reside in the developer’s Windows Azure subscription.

Note that this approach assumes that developers have individual Office 365 developer sites and Windows Azure subscriptions, which are supplied through MSDN subscriptions. Even if developers are creating applications for on-premises deployment, these developer services can be used for development and testing.

If developers do not have these services, or are required to do development entirely on-premises, then they will execute tests for their components against an on-premises farm’s developer site collection and developer-specific, provider-hosted infrastructure. The provider-hosted infrastructure can reside in developer-dedicated virtual machines. For the development of full-trust solutions, developers would require their own virtual SharePoint farm and provider-hosted infrastructure.

Integration and systems testing

In order to test the application, all of the development components should be assembled and deployed in a centralized environment. This integration environment provides a place where developers can deploy and observe the components of the solution they created interacting with other solution components written by other developers.

Figure 13. Integration testing environment

TFS will build and deploy the SharePoint application and any required components to the target environments.

For this type of testing, the ALM platform will build and deploy the SharePoint application and any required components to the target environments. For SharePoint-hosted applications, this will either be an Office 365 site or an on-premises/IaaS SharePoint Server 2013 site collection specifically established for integration and systems testing. For SharePoint cloud-hosted applications, TFS will also deploy the components to a centralized Windows Azure subscription where the services will be configured specifically for integration/systems testing. TFS will then execute coded UI or unit tests against the SharePoint application, as well as any components that the solution requires on hosted infrastructure.

UAT and QA testing

For user acceptance testing (UAT), organizations often have separate environments where this function is performed apart from integration and systems testing. Separating these testing environments prevents the cadence of automated continuous release and testing from interfering with UAT activities where users may be executing tests against a specified build of the application over an extended period of time.

Figure 14. UAT testing

Users assigned to conduct acceptance testing or organizational testing resources conduct test scripts in a stable environment that is focused on a well-publicized build version of the application.

As shown in Figure 14, users assigned to conduct acceptance testing or organizational testing resources conduct test scripts in a stable environment that is focused on a well-publicized build of the application. While code deployment and testing continues in the integration environment, users will conduct manual testing to validate that the application meets required use or test cases. The application and any provider-hosted infrastructure will be deployed, typically by a release manager, into this testing environment. An automated deployment is possible as well. This sort of deployment uses a dedicated UAT build definition in TFS that mirrors the one that conducts deployment for the integration and systems testing environment.

For cloud-hosted infrastructure, deployment into a Windows Azure subscription that is shared with the integration and systems test environments is possible if the services are named and configured to be deployed side by side as different services or databases. This approach provides a set of services and databases in the testing Windows Azure subscription for integration and systems testing as well as UAT and QA testing, as shown in Figure 15.

Figure 15. Integration and UAT testing

Deployment into a Windows Azure subscription that is shared with the integration/systems test environment is possible if the services are named and configured to be deployed side by side as different services or databases.

Code promotion practices

The code promotion process between the development and testing environments as well as the production environment should be done using a well-defined release management process. In Figure 16, developers conduct deployment of their solution components to development environments for unit testing.

Figure 16. Release management process

Following a check-in to TFS, an automated build procedure will compile and deploy the solution to the target environment where build verification tests will be executed as part of the build definition in TFS.

Following a check-in to TFS, an automated build procedure will compile and deploy the solution to the target integration and system testing environment where build verification tests will be executed as part of the build definition in TFS. This approach includes deploying the provider-hosted components of the solution to the target environment (Windows Azure or on-premises environments). Note that for Windows Azure infrastructure, the Windows Azure subscription can be the same one used for both integration and system testing as well as UAT and QA assuming they are deployed to different namespaces and separate SQL databases.

A release manager or a separate TFS build definition, manually invoked in most cases, can deploy to the UAT or QA environment. This approach helps control the build version that users will be testing against. Release managers can pick up the builds from a TFS share and execute the deployment process themselves. For promotion to production, release management will be involved to deploy the application to the production environment and monitor its installation and build verification through tests.

Microsoft has specific guidance on how application developers can upgrade applications. The SharePoint Server 2013 platform supports the notification of new application versions to users.

For considerations on establishing a strategy around SharePoint application upgrades and patching, see How to: Update apps for SharePoint.

For changes to applications, the recommended pattern to follow is consistent with existing code development and sustained engineering patterns. This includes disciplined branching and merging for bug fixes and feature development as well as incremental deployments to target application catalogs. The preceding guidance can be used to complete changes to applications for SharePoint and deploy them to target application catalogs or the store.

The information in App for SharePoint update process provides additional tactical guidance on techniques for updating SharePoint applications, including accelerating deployment testing by shortening the cycle by which application updates are reflected in the farm in test environments. Additionally, that article has guidance on how to accommodate state within various application deployment models.

The Case For Agile Over Waterfall

This article came about as the result of a recent trip I made to a customer. I was presenting on TFS and made the oft-repeated statement that Agile is better than Waterfall. Now, I have to admit that I have never really had anyone challenge me on this assumption, since most of the people I know just accept it as truth.

On this particular day there were a couple of project managers in the audience, and they were none too pleased about my assertion. For the rest of the hour, we went back and forth on the issue. Following are a few of the exchanges, to the best of my recollection:

 

Exchange 1

Project managers: “You can’t say that agile is better than waterfall, it simply isn’t true.”

Me: “I have twenty years of evidence backing up my claim and I have personally seen it work this way.”

 

Exchange 2

Project managers: “Well, we have government regulations we deal with and you just don’t understand what we do here.”

Me: “I have [another customer] that has to deal with HIPAA requirements and they use Agile, so I don’t think your requirements are that strict, are they?”

 

Exchange 3

Project managers: “We don’t use pure Waterfall; we use a modified version.”

Me: “So... you’ve already modified your methodology because it’s inadequate. Why not finish the job?”

 

 

Essentially, the arguments went on from there but were just variations on these three exchanges. To be fair, I tried to explain that I believe Waterfall has its place in many, many areas, just not in software development, but this argument fell on deaf ears. So it got me to thinking: “Am I right?” I had looked at some evidence years ago and had proven it out myself on countless occasions, but was that still the case all these years later? Did Agile methodologies still hold sway over Waterfall? Did the evidence prove it? To that end, I have assembled the evidence for myself and for anyone else who has to fight the Agile battle. Please feel free to add your own evidence (pro or con) in the comments.

 

Please note that I deal in empirical data. Period. I can find any number of people (including myself) who have had good or bad experiences with Waterfall and good or bad experiences with Agile. In the main, I’ve found Agile to be better in the situations I have dealt with, but I have seen plenty of people swear by Waterfall the same way. To level the playing field, I have to use objective data as the measure rather than feelings. The data I found shows many data points, but there are two that I think are vital for people to understand:

  1. Overall, Agile methodologies are better for software development projects than traditional project management (Waterfall).
  2. Even Agile projects fail; having a superior methodology is no substitute for a good team and solid project discipline.

 

Waterfall Over Agile

I looked for any empirical data that would show traditional project management (Waterfall) beating Agile methodologies for software development. After several hours I gave up. I don’t know if such data exists, but if it does, it is very hard to find.

 

Agile Over Waterfall

The exact opposite is true for Agile being better for software development than traditional project management. There were too many studies to include, so I decided to focus on one of the key studies that would be most appealing to management. To that end, I found an excellent article written by Dr. David F. Rico, PMP, ACP, CSM entitled “The Business Value of Using Agile Project Management for New Products and Services” (http://davidfrico.com/rico-apm-roi.pdf). It is an extremely concise and well-documented survey of data in support of Agile and is a “study of studies” that really summarizes the Agile impact. I suggest you read the entire article, but below are the most compelling pieces of data showing the improvement of Agile over traditional project management within several studies:

image

http://davidfrico.com/rico-apm-roi.pdf

 

 

In the same article Dr. Rico also mentions the 2008 Maryland study that “[…]developed a database with over 153 data points on the costs and benefits of agile project management from 72 studies.” [Rico, D. F. (2008). What is the ROI of agile vs. traditional methods? An analysis of extreme programming, test-driven development, pair programming, and scrum (using real options). TickIT International, 10(4), 9-18]:

image

http://davidfrico.com/rico-apm-roi.pdf

 

 

Additionally, in 2009 there was another study in which “[c]ost and benefit metrics, models, and measures were developed based on 52 data points from 32 studies.” [Rico, D. F., Sayani, H. H., & Sone, S. (2009). The business value of agile software methods: Maximizing ROI with just-in-time processes and documentation. Ft. Lauderdale, FL: J. Ross Publishing.]:

image

http://davidfrico.com/rico-apm-roi.pdf

 

 

Naturally, there are many, many more articles on the benefits of Agile. One other that springs to mind is Ben Linders’ article entitled “Evidence of Success of Agile Projects” (http://www.infoq.com/news/2012/11/success-agile-projects). Linders cites references to other studies that have shown Agile project success, as well as guidance for implementation. Feel free to add your own favorite study in the comments section and, if you have a study that shows Waterfall beating Agile, I will gladly add it to the main article for balance.

Visualize and Debug Code Execution with Call Stacks in Visual Studio 2013

Create a code map to trace the call stack visually while you’re debugging. You can make notes on the map to track what the code is doing so you can focus on finding bugs.

Debugging with call stacks on code maps

You’ll need:

See: Video: Debug visually with Code Map debugger integration (Channel 9) | Map the call stack | Make notes about the code | Update the map with the next call stack | Add related code to the map | Find bugs using the map | Q & A

  1. Start debugging.
  2. When your app enters break mode or you step into a function, choose Code Map (Keyboard: Ctrl + Shift + `). The current call stack appears in orange on a new code map. The map might not look interesting at first, but it updates automatically while you continue debugging. See Update the map with the next call stack.
  3. Add comments to track what’s happening in the code. To add a new line in a comment, press Shift + Return.
  4. Run your app to the next breakpoint or step into a function. The map adds a new call stack.
  5. Now you’ve got a map – what next? If you’re working with Visual C# .NET or Visual Basic .NET, add items such as fields, properties, and other methods to track what’s happening in the code. Double-click a method to see its code definition (Keyboard: select the method on the map and press F12), then add the items that you want to track on the map.

Here you can easily see which methods use the same fields. The most recently added items appear in green.

Continue building the map to see more code.

Methods that use a field on the call stack code map

Visualizing your code can help you find bugs faster. For example, suppose you’re investigating a bug in a drawing program. When you draw a line and try to undo it, nothing happens until you draw another line. So you set breakpoints, start debugging, and build a map like this one. You notice that all the user gestures on the map call Repaint, except for undo. This might explain why undo doesn’t work immediately. After you fix the bug and continue running the program, the map adds the new call from undo to Repaint.
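A hypothetical sketch of the bug described above might look like the following; the DrawingSurface class and its members are illustrative only, not taken from the article.

using System.Collections.Generic;

public class DrawingSurface
{
    private readonly List<string> _lines = new List<string>();

    public void DrawLine(string line)
    {
        _lines.Add(line);
        Repaint();                          // every drawing gesture repaints the canvas
    }

    public void Undo()
    {
        if (_lines.Count > 0)
            _lines.RemoveAt(_lines.Count - 1);
        // Bug: the missing Repaint() call. Adding it here is the fix, and the new
        // Undo -> Repaint call then shows up on the code map.
    }

    private void Repaint()
    {
        // Redraw _lines onto the canvas (details omitted).
    }
}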

  • Not all calls appear on the map. Why? By default, only your code appears on the map. To see external code, turn it on in the Call Stack window or turn off Enable Just My Code in the Visual Studio debugging options.
  • Does changing the map affect the code? Changing the map doesn’t affect the code in any way. Feel free to rename, move, or remove anything on the map.
  • What does this message mean: “The diagram may be based on an older version of the code”? The code might have changed after you last updated the map. For example, a call on the map might not exist in code anymore. Close the message, then try rebuilding the solution before updating the map again.
  • How do I control the map’s layout? Open the Layout menu on the map toolbar:
    • Change the default layout.
    • To stop rearranging the map automatically, turn off Automatic Layout when Debugging.
    • To rearrange the map as little as possible when you add items, turn off Incremental Layout.
  • Can I share the map with others? You can export the map, send it to others if you have Microsoft Outlook, or save it to your solution so you can check it into Team Foundation version control.
  • How do I stop the map from adding new call stacks automatically? Choose Show call stack on code map automatically on the map toolbar to turn it off. To manually add the current call stack to the map, press Ctrl + Shift + `. The map will continue highlighting existing call stacks while you’re debugging.
  • What do the item icons and arrows mean? To get more info about an item, look at the item’s tooltip. You can also look at the Legend to learn what each icon means.

New from the ALM Rangers – Better Unit Testing with the Fakes framework

This is Part 1 in a Series about the Microsoft Agile SDLC

Unit Testing – With Fakes

For modern development teams, the value of effective and efficient unit testing is something everyone can agree on.

Fast, reliable, automated tests that enable developers to verify that their code does what they think it should add significantly to overall code quality. Creating good, effective unit tests is harder than it seems, though.

A good unit test is like a good scientific experiment: it isolates as many variables as possible (these are called control variables) and then validates or rejects a specific hypothesis about what happens when the one variable (the independent variable) changes.

Creating code that allows for this kind of isolation puts strain on the design, idioms, and patterns used by developers. In some cases, the code is designed so that isolating one component from another is easy.

However, in most other cases, achieving this isolation is very difficult. Often, it’s so difficult that, for many developers, it is unachievable.

First included in Visual Studio 2012, Microsoft Fakes helps you — our developers — cross this gap. It makes it easier and faster to create well-isolated unit tests when you do have systems that are “testable,” letting you focus on writing good tests and not on test plumbing.

It also enables you to isolate and test code that is not traditionally easy to test, by using a technology called Shims. Shims use runtime interception to let you detour around challenging dependencies and replace them with something you can control.
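To make this concrete, the canonical shim scenario is detouring DateTime.Now. The sketch below is illustrative: the Y2KChecker class is a hypothetical piece of code under test, and the test assumes a Fakes assembly has been generated for mscorlib/System so that System.Fakes.ShimDateTime exists.

using System;
using Microsoft.QualityTools.Testing.Fakes;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical code under test: hard to isolate because it reads DateTime.Now directly.
public static class Y2KChecker
{
    public static void Check()
    {
        if (DateTime.Now == new DateTime(2000, 1, 1))
            throw new ApplicationException("y2kbug!");
    }
}

[TestClass]
public class Y2KCheckerTests
{
    [TestMethod]
    [ExpectedException(typeof(ApplicationException))]
    public void Check_ThrowsOnFirstOfJanuary2000()
    {
        // Shims are active only inside a ShimsContext.
        using (ShimsContext.Create())
        {
            // Detour every call to DateTime.Now made during this context.
            System.Fakes.ShimDateTime.NowGet = () => new DateTime(2000, 1, 1);
            Y2KChecker.Check();
        }
    }
}

The shim turns the environment's clock into a control variable, which is exactly the kind of isolation the guidance describes.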

As we have mentioned, being able to create this control variable is imperative when creating high-quality, fast-running unit tests.

Shims provide a very powerful capability that will let you circumvent all kinds of roadblocks when unit testing your code. As with all powerful tools, there are a number of patterns, techniques and other “gotchas” that can take time to learn.

This guidance document provides you with a jump-start on acquiring that knowledge by sharing a large number of examples and techniques for effectively using Microsoft Fakes in your projects.

We are happy to introduce this excellent guidance document produced by the Visual Studio ALM Rangers.

We are sure that it will help you and your team realize the power and capabilities Microsoft Fakes provides you in creating better unit tests and better code.

FREE Ebook : http://aka.ms/ebookFakes

SharePoint Samurai