Category Archives: TFS

New Azure To TFS Deploy Tool Available

Deploy Windows Azure project directly from TFS 2010 Build Server

 

Build Definition Template
DeployToAzure allows automating deployment of a Windows Azure project and making it part of the TFS 2010 build process without using PowerShell or the Azure Management CmdLets.

Solution includes:

  • A set of custom workflow actions wrapping Azure Management API operations such as GetDeployment, GetOperationStatus, NewDeployment, RemoveDeployment and SetDeploymentStatus;
  • Helper actions such as FindPackageAndConfigurationFiles, LoadCertificate and WaitForOperationToComplete;
  • Designer activity DeployToAzure implementing the deployment logic;
  • Reusable build definition template.

How it works:

Create build definition

Open the New Build Definition dialog and select the Process tab. Click the New button in the Build process template section. Choose the Select an existing XAML file option and specify the path to DefaultTemplateWithDeploymentToAzure.xaml in your source control.


The Deployment to Azure section will appear in the Build process parameters. Click the Refresh button if you don’t see it.


Now define the build properties. First, open the 1. Required / Items to build dialog, select your solution and specify the configuration to build.

Open the Deployment to Azure section and provide the following parameters:

  • API Certificate store location – the store location of your management certificate. Select LocalMachine if the certificate was created by the command above.
  • API Certificate Thumbprint – the thumbprint of the management certificate (a sketch of how the certificate can be resolved from these values follows this list).
  • API Certificate store – the store where the management certificate is located. Select Root if the certificate was created by the command above.
  • Cloud Project – the cloud project to be packaged and deployed. It will be built with the same configuration as the one specified for solution building.
  • Deployment label – the label of the deployment. The label can contain the same set of macros as the Build Label.
  • Hosted Service Name – the DNS prefix of the hosted service. You can find it on the Windows Azure portal.
  • Service configuration – the service configuration to be used for deployment, for example Cloud. Keep this field empty to use the default configuration.
  • Slot – select Staging or Production.
  • Storage Service Name – the DNS prefix of the storage service that will be used to upload the deployment package.
  • Subscription Id – your Azure subscription ID.
  • Wait for roles to start – set to true if the build should wait for all instances to start.
  • Initialization Timeout – if Wait for roles to start is true, specify how long the build should wait before a timeout exception is raised.

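The deployment activities load the management certificate described by the three API Certificate parameters above. As a rough, hypothetical sketch (not the tool’s actual implementation; the class and method names are illustrative only), a helper such as LoadCertificate could resolve the certificate from the Windows certificate store using standard .NET APIs:

using System;
using System.Security.Cryptography.X509Certificates;

static class CertificateLoader
{
    // Find the Azure management certificate by thumbprint in the given store.
    // The store location and name should match the build definition parameters
    // (for example LocalMachine / Root when the certificate was created locally).
    public static X509Certificate2 LoadManagementCertificate(
        StoreLocation location, StoreName storeName, string thumbprint)
    {
        X509Store store = new X509Store(storeName, location);
        store.Open(OpenFlags.ReadOnly);
        try
        {
            X509Certificate2Collection matches = store.Certificates.Find(
                X509FindType.FindByThumbprint, thumbprint, false);

            if (matches.Count == 0)
            {
                throw new InvalidOperationException(
                    "Management certificate with thumbprint " + thumbprint + " was not found.");
            }

            return matches[0];
        }
        finally
        {
            store.Close();
        }
    }
}
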
Contact me at floyd_tomas@yahoo.com for this and other Azure, SharePoint, Office365,  TFS and Agile Tools


New TFS Power Tools Available!! TFS Extensions for Source Control and Work Item Tracking

The TFS Productivity Tools project was designed to provide TFS administrators with helper tools when doing Source Control or Work Item Tracking tasks.

Running these extensions requires Visual Studio 2010 Professional or newer.


Visual Studio Extensions

Each extension is listed below with a description and the Visual Studio window it runs from.

  • ChangeLinkTypes.vsix – Modify Work Item link types. Runs from: Team Explorer.
  • DestroyWorkItems.vsix – Completely destroy Work Items. Runs from: Query Results.
  • Export2Word.vsix – Export Work Items to a Microsoft Word document. Runs from: Team Explorer.
  • ExtendedMerge.vsix / ExtendedMerge2012.vsix – Provide a workaround for several merge features not implemented by TFS 2010/2012 (see the full list in the ExtendedMerge section below). Runs from: Source Control Explorer, Query Results.
  • GetPreview.vsix – Emulate the command-line task tf.exe get /recursive /preview “itemspec” and write the output to the Output window. Runs from: Source Control Explorer.
  • ModifyCheckinDate.vsix – 1. Update the modification time of checked-out files to their latest check-in time. 2. Directly modify a changeset’s check-in time in the TFS database. Runs from: Source Control Explorer, History.

Custom WorkItem tracking controls

  • WITDataGridView.dll [under development] – An envelope (wrapper) for the standard WindowsForms.DataGridView control. It serializes table data to XML format and saves it as a Work Item attachment with a default file size limit of 2 MB.

Custom Build workflow activities

  • QueueNewBuild.dll [under development] – Contains the activities QueueNewBuild, QueueNewBuildBegin and QueueNewBuildEnd. It works like LoadAndInvokeWorkflow but is intended exclusively for build workflows. When placed inside a ParallelSequence, QueueNewBuildBegin can run builds simultaneously in separate threads based on free build agents.

Installation

To install any of the extensions, double-click the VSIX file and follow the installer instructions.


ChangeLinkTypes

This extension modifies link types between the Work Items returned from a Work Item query.
To execute it, select any query under the Work Items node in the Team Explorer window and click “Change Query Link Types”.


In the next dialog, choose the original link type and the new link type you want.


DestroyWorkItems

This extension completely destroys (rather than closes) Work Items returned from a Work Item query.
To execute it, select a number of Work Items in the Query Results window and click “Destroy Work Items”.


Export2Word

This extension exports Work Items returned from a Work Item query to a Microsoft Word document in paragraph style.
To execute it, select any query under the Work Items node in the Team Explorer window and click “Export to Microsoft Word”.


Exported document sample

ExtendedMerge

The Extended Merge extension provides a workaround for several merge features not implemented by TFS:

  1. TFS merge leads to a bulk check-in operation that puts files from all previous changesets into one big merge changeset.
  2. TFS allows only consecutive changesets to be cherry-picked by a merge operation.
  3. TFS doesn’t allow choosing changesets for a cherry-pick merge by selecting work items.
  4. The TFS merge dialog doesn’t have “force” and “baseless” options.

Initializing ExtendedMerge extension

The list of merge candidates can be obtained in two ways:

  1. From the Source Control Explorer window.
    In this case all history changesets for a specific server path are parsed.
  2. From the Query Results window.
    In this case all changesets linked to the selected Work Items are parsed.

After the initialization stage, the extension opens a Visual Studio tool window pane with a grid that shows useful information about the parsed changesets:

  • Checked/unchecked status.
  • Work item ID.
  • Changeset ID.
  • Changeset check-in date.
  • Changeset creator.
  • The change types this changeset contains.
  • Possible merge options:
    • None – merge is impossible (not to be confused with the “discard” command-line option)
    • Baseless – baseless merge
    • Force – force merge
    • Candidate – regular merge
  • Merge source path (can be modified).
  • Changeset comment field.


The Merge Target Location field contains the path to a folder or a branch in source control.

Merge types are calculated based on the shared Merge Target Location path and each changeset’s individual Source Path.

Running ExtendedMerge extension

The extension runs all actions from toolbar buttons (some of them are also duplicated on the grid context menu).


  • Link to Work Items – When checking in merge results, link the new changesets to work items the same way as they were linked in the original changesets.
  • Normal Merge – Do a regular merge based on merge candidates.
  • Conservative Merge – Use the “Conservative” merge option. It produces more merge conflicts.
  • Refresh – Refill changeset merge types.
  • Merge – Do the merge.
  • Resolve – Show merge conflicts.
  • Changeset details – Show changeset details.
  • Work Item details – Show Work Item details.
  • Navigate to Server Path – Navigate to the server path in Source Control Explorer.
  • Edit Server Path – Edit server paths (one or multiple).
  • Copy to Clipboard – Copy selected changeset details to the clipboard.
  • Mark All Items – Check all grid items.
  • Unmark All Items – Uncheck all grid items.

Resolving merge conflicts

During a merge operation, merge conflicts can occur.
In this case, click the OK button and resolve the conflicts with the standard TFS Resolve Conflicts dialog.



GetPreview

This extension emulates the TFS command-line operation: tf.exe get /recursive /preview “itemspec”
To execute it, select a path in the Source Control Explorer window and click “Emulate Get-Preview”.

The results are printed to the Output window.


ModifyCheckinDate

This extension consists of two parts:

  1. Updates the modification time of checked-out files to their latest check-in time.
    It is accessible from the Source Control window and the History window.
  2. Directly modifies a changeset’s check-in time in the TFS database.
    It actually runs the SQL statement: UPDATE tbl_Changeset SET CreationDate='?' WHERE ChangeSetId='?'
    It is accessible from the History window.



Release Notes

  • The ExtendedMerge extension uses a number of internal Microsoft classes, which can possibly lead to tool malfunctions in future TFS versions:
    • Show browse server folder dialog – Microsoft.TeamFoundation.Build.Controls.VersionControlHelper.ShowServerFolderBrowser
    • Show merge progress dialog – Microsoft.TeamFoundation.VersionControl.Controls.ProgressMerge
    • Show resolve conflicts dialog – Microsoft.TeamFoundation.Client.Arguments and Microsoft.TeamFoundation.VersionControl.CommandLine.CommandResolve
  • The GetPreview extension uses the environment variable %VS100COMNTOOLS%.

How to : From the Trenches – Use SharePoint to Implement an ALM in Your Organisation

After my successful creation and implementation of an ALM for Business Connexion using the SharePoint Platform, I thought I’d share the lessons I have learned and show you, step by step, how you can implement your own ALM leveraging the power of the SharePoint Platform.


In this article:

  • An overview of SharePoint Application Lifecycle Management.
  • How to plan and manage Application Lifecycle Management (ALM) in Microsoft SharePoint 2010 projects by using Microsoft Visual Studio 2010 and Microsoft SharePoint Designer 2010.
  • What to consider when setting up team development environments.
  • Establishing upgrade management processes.
  • Creating a standard SharePoint development model.
  • Extending your SharePoint ALM to include other departments such as Java, Mobile, .NET and even SAP development.

Introduction to Application Lifecycle Management in SharePoint 2010

The Microsoft SharePoint 2010 development platform, which includes Microsoft SharePoint Foundation 2010 and Microsoft SharePoint Server 2010, contains many capabilities to help you develop, deploy, and update customizations and custom functionalities for your SharePoint sites. The activities that take advantage of these capabilities all fall under the category of Application Lifecycle Management (ALM).

Key considerations when establishing ALM processes include not only the development and testing practices that you use before the initial deployment of a single customization, but also the processes that you must implement to manage updates and integrate customizations and custom functionality on an existing farm.

This article discusses the capabilities and tools that you can use when implementing an ALM process on a SharePoint farm, and also specific concerns and things to consider when you create and hone your ALM process for SharePoint development.

This article assumes that each development team will develop a unique ALM process that fits its specific size and needs, so its guidance is necessarily broad. However, it also assumes that regardless of the size of your team and the specific nature of your custom solutions, you will need to address similar sets of concerns and use capabilities and tools that are common to all SharePoint developers.

The guidance in this article will help you create a development model that exploits all the advantages of the SharePoint 2010 platform and addresses the needs of your organization.

SharePoint Application Lifecycle Management: An Overview

Although the specific details of your SharePoint 2010 ALM process will differ according to the requirements of your organization, most development teams will follow the same general set of steps. Figure 1 depicts an example ALM process for a midsize or large SharePoint 2010 deployment. Obviously, the process and required tasks depend on the project size.

Figure 1. Example ALM process

The following are the specific steps in the process illustrated in Figure 1 (see corresponding callouts 1 through 10):

  1. Someone (for example, a project manager or lead developer) collects initial requirements and turns them into tasks.
  2. Developers use Microsoft Visual Studio Team Foundation Server 2010 or other tools to track the development progress and store custom source code.
  3. Because source code is stored in a centralized location, you can create automated builds for integration and unit testing purposes. You can also automate testing activities to increase the overall quality of the customizations.
  4. After custom solutions have successfully gone through acceptance testing, your development team can continue to the pre-production or quality assurance environment.
  5. The pre-production environment should resemble the production environment as much as possible. This often means that the pre-production environment has the same patch level and configurations as the production environment. The purpose of this environment is to ensure that your custom solutions will work in production.
  6. Occasionally, copy the production database to the pre-production environment, so that you can imitate the upgrade actions that will be performed in the production environment.
  7. After the customizations are verified in the pre-production environment, they are deployed either directly to production or to a production staging environment and then to production.
  8. After the customizations are deployed to production, they run against the production database.
  9. End users work in the production environment, and give feedback and ideas concerning the different functionalities. Issues and bugs are reported and tracked through established reporting and tracking processes.
  10. Feedback, bugs, and other issues in the production environment are turned into requirements, which are prioritized and turned into developer tasks. Figure 2 shows how multiple developer teams can work with and process bug reports and change requests that are received from end users of the production environment. The model in Figure 2 also shows how development teams might coordinate their solution packages. For example, the framework team and the functionality development team might follow separate versioning models that must be coordinated as they track bugs and changes.
    Figure 2. Change management involving multiple developer teams

Integrating Testing and Build Verification Environments into a SharePoint 2010 ALM Process

In larger projects, quality assurance (QA) personnel might use an additional build verification or user acceptance testing (UAT) farm to test and verify the builds in an environment that more closely resembles the production environment.

Typically, a build verification farm has multiple servers to ensure that custom solutions are deployed correctly. Figure 3 shows a potential model for relating development integration and testing environments, build verification farms, and production environments. In this particular model, the pre-production or QA farm and the production farm switch places after each release. This model minimizes any downtime that is related to maintaining the environments.

Figure 3. Model for relating development integration and testing environments

Integrating SharePoint Designer 2010 into a SharePoint 2010 ALM Process

Another significant consideration in your ALM model is Microsoft SharePoint Designer 2010. SharePoint 2010 is an excellent platform for no-code solutions, which can be created and then deployed directly to the production environment by using SharePoint Designer 2010. These customizations are stored in the content database and are not stored in your source code repository.

General designer activities and how they interact with development activities are another consideration. Will you be creating page layouts directly within your production environment, or will you deploy them as part of your packaged solutions? There are advantages and disadvantages to both options.

Your specific ALM model depends completely on the custom solutions and the customizations that you plan to make, and on your own policies. Your ALM process does not have to be as complex as the one described in this section. However, you must establish a firm ALM model early in the process as you plan and create your development environment and before you start creating your custom solutions.

Next, we discuss specific tools and capabilities that are related to SharePoint 2010 development that you can use when considering how to create a model for SharePoint ALM that will work best for your development team.

Solution Packages and SharePoint Development Tools

One major advantage of the SharePoint 2010 development platform is that it provides the ability to save sites as solution packages. A solution package is a deployable, reusable package stored in a CAB file with a .wsp extension. You can create a solution package either by using the SharePoint 2010 user interface (UI) in the browser, SharePoint Designer 2010, or Microsoft Visual Studio 2010. In the browser and SharePoint Designer 2010 UIs, solution packages are also called templates. This flexibility enables you to create and design site structures in a browser or in SharePoint Designer 2010, and then import these customizations into Visual Studio 2010 for more development. Figure 4 shows this process.

Figure 4. Flow through the SharePoint development tools

When the customizations are completed, you can deploy your solution package to SharePoint for use. After modifying the existing site structure by using a browser, you can start the cycle again by saving the updated site as a solution package.

This interaction among the tools also enables you to use other tools. For example, you can design a workflow process in Microsoft Visio 2010 and then import it to SharePoint Designer 2010 and from there to Visual Studio 2010. For instructions on how to design and import a workflow process, see Create, Import, and Export SharePoint Workflows in Visio.

For more information about creating solution packages in SharePoint Designer 2010, see Save a SharePoint Site as a Template. For more information about creating solution packages in Visual Studio 2010, see Creating SharePoint Solution Packages.

Using SharePoint Designer 2010 as a Development Tool

SharePoint Designer 2010 differs from Microsoft Office SharePoint Designer 2007 in that its orientation has shifted from the page to features and functionality. The improved UI provides greater flexibility for creating and designing different functionalities. It provides rich tooling for building complete, reusable, and process-centric applications. For more information about the new capabilities and features of SharePoint Designer 2010, see Getting Started with SharePoint Designer.

You can also use SharePoint Designer 2010 to modify modular components developed with Visual Studio 2010. For example, you can create Web Parts and other controls in Visual Studio 2010, deploy them to a SharePoint farm, and then edit them in SharePoint Designer 2010.

The primary target users for SharePoint Designer 2010 are IT personnel and information workers who can use this application to create customizations in a production environment. For this reason, you must decide on an ALM model for your particular environment that defines which kinds of customizations will follow the complete ALM development process and which customizations can be done by using SharePoint Designer 2010. Developers are secondary target users. They can use SharePoint Designer 2010 as a part of their development activities, especially during initial creation of customization packages and also for rapid development and prototyping. Your ALM process must also define where and how to fit SharePoint Designer 2010 into the broader development model.

A key challenge of using SharePoint Designer 2010 is that when you use it to modify files, all of your changes are stored in the content database instead of in the file system. For example, if you customize a master page for a specific site by using SharePoint Designer 2010 and then design and deploy new branding elements inside a solution package, the changes are not available for the site that has the customized master page, because that site is using the version of the master page that is stored in the content database.

To minimize challenges such as these, SharePoint Designer 2010 contains new features that enable you to control usage of SharePoint Designer 2010 in a specific environment. You can apply these control settings at the web application level or site collection level. If you disable some action at the web application level, that setting cannot be changed at the site collection level.

SharePoint Designer 2010 makes the following settings available:

  • Allow site to be opened in SharePoint Designer 2010.
  • Allow customization of files.
  • Allow customization of master pages and layout pages.
  • Allow site collection administrators to see the site URL structure.

Because the primary purpose of SharePoint Designer 2010 is to customize content on an existing site, it does not support source code control. By default, pages that you customize by using SharePoint Designer 2010 are stored inside a versioned SharePoint library. This provides you with simple support for versioning, but not for full-featured source code control.

Importing Solution Packages into Visual Studio 2010

When you save a site as a solution package in the browser (from the Save as Template page in Site Settings), SharePoint 2010 stores the site as a solution package (.wsp) file and places it in the Solution Gallery of that site collection. You can then download the solution package from the Solution Gallery and import it into Visual Studio 2010 by using the Import SharePoint Solution Package template, as shown in Figure 5.

Figure 5. Import SharePoint Solution Package template

SharePoint 2010 solution packages contain many improvements that take advantage of new capabilities that are available in its feature framework. The following list contains some of the new feature elements that can help you manage your development projects and upgrades.

  • SourceVersion for WebFeature and SiteFeature
  • WebTemplate feature element
  • PropertyBag feature element
  • $ListId:Lists
  • WorkflowAssociation feature element
  • CustomSchema attribute on ListInstance
  • Solution dependencies

After you import your project, you can start customizing it any way you like.

Note
Because this capability is based on the WebTemplate feature element, which is based on a corresponding site definition, the resulting solution package will contain definitions for everything within the site. For more information about creating and using web templates, see Web Templates.

Visual Studio 2010 supports source code control (as shown in Figure 6), so that you can store the source code for your customizations in a safer and more secure central location, and enable easy sharing of customizations among developers.

Figure 6. Visual Studio 2010 source code control

The specific way in which your developers access this source code and interact with each other depends on the structure of your team development environment. The next section of this article discusses key concerns and considerations to keep in mind when you build a team development environment for SharePoint 2010.

Team Development Environment for SharePoint 2010: An Overview

As in any ALM planning process, your SharePoint 2010 planning should include the following steps:

  1. Identify and create a process for initiating projects.
  2. Identify and implement a versioning system for your source code and other deployed resources.
  3. Plan and implement version control policies.
  4. Identify and create a process for work item and defect tracking and reporting.
  5. Write documentation for your requirements and plans.
  6. Identify and create a process for automated builds and continuous integration.
  7. Standardize your development model for repeatability.

Microsoft Visual Studio Team Foundation Server 2010 (shown in Figure 7) provides a good potential platform for many of these elements of your ALM model.

Figure 7. Visual Studio 2010 Team Foundation Server

When you have established your model for team development, you must choose either a collection of tools or Microsoft Visual Studio Team Foundation Server 2010 to manage your development. Microsoft Visual Studio Team Foundation Server 2010 provides direct integration into Visual Studio 2010, and it can be used to manage your development process efficiently. It provides many capabilities, but how you use it will depend on your projects.

You can use the Microsoft Visual Studio Team Foundation Server 2010 for the following activities:

  • Tracking work items and reporting the progress of your development. Microsoft Visual Studio Team Foundation Server 2010 provides tools to create and modify work items that are delivered not only from Visual Studio 2010, but also from the Visual Studio 2010 web client.
  • Storing all source code for your custom solutions.
  • Logging bugs and defects.
  • Creating, executing, and managing your testing with comprehensive testing capabilities.
  • Enabling continuous integration of your code by using the automated build capabilities.

Microsoft Visual Studio Team Foundation Server 2010 also provides a basic installation option that installs all required functionalities for source control and automated builds. These are typically the most used capabilities of Microsoft Visual Studio Team Foundation Server 2010, and this option helps you set up your development environment more easily.

Setting Up a Team Development Environment for SharePoint 2010

SharePoint 2010 must be installed on a development computer to take full advantage of its development capabilities. If you are developing only remote applications, such as solutions that use SharePoint web services, the client object model, or REST, you could potentially develop solutions on a computer where SharePoint 2010 is not installed. However, even in this case, your developers’ productivity would suffer, because they would not be able to take advantage of the full debugging experience that comes with having SharePoint 2010 installed directly on the development computer.

The design of your development environment depends on the size and needs of your development team. Your choice of operating system also has a significant impact on the overall design of your team development process. You have three main options for creating your development environments, as follows:

  1. You can run SharePoint 2010 directly on your computer’s client operating system. This option is available only when you use the 64-bit version of Windows 7, Windows Vista Service Pack 1, or Windows Vista Service Pack 2.
  2. You can use the boot to Virtual Hard Drive (VHD) option, which means that you start your laptop by using the operating system in VHD. This option is available only when you use Windows 7 as your primary operating system.
  3. You can use virtualization capabilities. If you choose this option, you have a choice of many options. But from an operational viewpoint, the option that is most likely the easiest to implement is a centralized virtualized environment that hosts each developer’s individual development environment.

The following sections take a closer look at these three options.

SharePoint 2010 on a Client Operating System

If you are using the 64-bit version of Windows 7, Windows Vista Service Pack 1, or Windows Vista Service Pack 2, you can install SharePoint Foundation 2010 or SharePoint Server 2010. For more information about installing SharePoint 2010 on supported operating systems, see Setting Up the Development Environment for SharePoint 2010 on Windows Vista, Windows 7, and Windows Server 2008.

Figure 8 shows how a computer that is running a client operating system would operate within a team development environment.

Figure 8. Computer running a client operating system in a team development environment

A benefit of this approach is that you can take full advantage of any of your existing hardware that is running one of the targeted client operating systems. You can also take advantage of pre-existing configurations, domains, and enterprise resources that your enterprise supports. This could mean that you would require little or no additional IT support. Your developers would also face no delays (such as booting up a virtual machine or connecting to an environment remotely) in accessing their development environments.

However, if you take this approach, you must ensure that your developers have access to sufficient hardware resources. In any development environment, you should use a computer that has an x64-capable CPU, and at least 2 gigabytes (GB) of RAM to install and run SharePoint Foundation 2010; 4 GB of RAM is preferable for good performance. You should use a computer that has 6 GB to 8 GB of RAM to install and run SharePoint Server 2010.

A disadvantage of this approach is that your environments will not be centrally managed, and it will be difficult to keep all of your project-dependent environmental requirements in sync. It might also be advisable to write batch files that start and stop some of the SharePoint-related services so that when your developers are not working with SharePoint 2010, these services will not consume resources and degrade the performance of their computers.

The lack of centralized maintenance could hurt developer productivity in other ways. For example, this might be an unwieldy approach if your team is working on a large Microsoft SharePoint Online project that is developing custom solutions for multiple services (for example, the equivalents of http://intranet, http://mysite, http://teams, http://secure, http://search, http://partners, and http://www.internet.com) and deploying these solutions in multiple countries or regions.

If you are developing on a computer that is running a client operating system in a corporate domain, each development computer would have its own name (and each local domain name would be different, such as http://dev1 or http://dev2). If each developer is implementing custom functionalities for multiple services, you must use different port numbers to differentiate each service (for example, http://dev1 for http://intranet and http://dev1:81 for http://mysite). If all of your developers are using the same Visual Studio 2010 projects, the project debugging URL must be changed manually whenever a developer takes the latest version of a project from your source code repository.

This would create a manual step that could hurt developer productivity, and it would also diminish the efficiency of any scripts that you have written for setting up development environments, because the individual environments are not standardized. Some form of centralization with virtualization is preferable for large enterprise development projects.

SharePoint 2010 on Windows 7 and Booting to Virtual Hard Drive

If you are using Windows 7, you can also create a VHD out of an existing Windows Server 2008 image on which SharePoint 2010 is installed in Windows Hyper-V, and then configure Windows 7 with BCDEdit.exe so that it boots directly to the operating system on the VHD. To learn more about this kind of configuration, see Deploy Windows on a Virtual Hard Disk with Native Boot and Boot from VHD in Win 7.

Figure 9 shows how a computer that is running Windows 7 and booting to VHD would operate within a team development environment.

Figure 9. Windows 7 and booting to VHD in a team environment

An advantage of this approach is the flexibility of having multiple dedicated environments for an individual project, enabling you to isolate each development environment. Your developers will not accidentally cross-reference any artifacts within their projects, and they can create project-dependent environments.

However, this option has considerable hardware requirements, because you are using the available hardware and resources directly on your computers.

SharePoint 2010 in Centralized Virtualized Environments

In a centralized virtualized environment, you host your development environments in one centralized location, and developers access these environments through remote connections. This means that you use Windows Hyper-V in the centralized location and copy a VHD for every developer as needed. Each VHD is configured to be available from the corporate network, so that when it starts, it can be accessed by using remote connections.

Figure 10 shows how a centralized virtualized team development environment would operate.

Figure 10. Centralized virtualized team development environment

An advantage of this approach is that the hardware requirements for individual developer computers are relatively few because the actual work happens in a centralized environment. Developers could even use computers with 1 GB of RAM as their clients and then connect remotely to the centralized location. You can also manage environments easily from one centralized location, making adjustments to them whenever necessary.

Your centralized host will have significantly high hardware requirements, but developers can easily start and stop these environments. This enables you to use the hardware that you have allocated for your development environments more efficiently. Additionally, this approach provides a ready platform for more extensive testing environments for your custom code (such as multi-server farms).

After you set up your team development environment, you can start taking advantage of the deployment and upgrade capabilities that are included with the new solution packaging model in SharePoint 2010. The following sections describe how to take advantage of these new capabilities in your ALM model.

Models for Solution Lifecycle Management in SharePoint 2010

The SharePoint 2010 solution packaging model provides many useful features that will help you plan for deploying custom solutions and managing the upgrade process. You can implement assembly versioning by applying binding redirects in your web application configuration file. You can also apply versioning to your feature upgrades, and feature upgrade actions enable you to manage changes that will be necessary on your existing sites to accommodate feature upgrades. These upgrade actions can be handled declaratively or programmatically.

The feature upgrade query object model enables you to create queries in your code that look for features on your existing sites that can be upgraded. You can use this object model to obtain relevant information about all of the features and feature versions that are deployed on your SharePoint 2010 sites. In your solution manifest file, you can also configure the type of Internet Information Services (IIS) recycling to perform during a solution upgrade.

The following sections go into greater details about these capabilities and how you can use them.

Using Assembly BindingRedirect with SharePoint 2010 Assemblies

The BindingRedirect feature element can be added to your web application's configuration file. It enables you to redirect from earlier versions of installed assemblies to newer versions. Figure 11 shows how the XML configuration from the solution manifest file instructs SharePoint to add binding redirection rules to the web application configuration file. These rules forward any reference to version 1.0 of the assembly to version 2.0. This is required in your solution manifest file if you are upgrading a custom solution that uses assembly versioning and if there are existing instances of the solution and the assembly on your sites.

Figure 11. Binding redirection rules in a solution manifest file

It is a best practice to use assembly versioning, because it gives you an easy way to track the versions of a solution that are deployed to your production environments.

SharePoint 2010 Feature Versioning

The support for feature versioning in SharePoint 2010 provides many capabilities that you can use when you are upgrading features. For example, you can use the SPFeature.Version property to determine which versions of a feature are deployed on your farm, and therefore which features must be upgraded. For a code sample that demonstrates how to do this, see Version.

Feature versioning in SharePoint 2010 also enables you to define a value for the SPFeatureDependency.MinimumVersion property to handle feature dependencies. For example, you can use the MinimumVersion property to ensure that a particular version of a dependent feature is activated. Feature dependencies can be added or removed in each new version of a feature.

The SharePoint 2010 feature framework has also enhanced the object model level to support feature versioning more easily. You can use the QueryFeatures method to retrieve a list of features, and you can specify both the feature version and whether a feature requires an upgrade. The QueryFeatures method returns an instance of SPFeatureQueryResultCollection, which you can use to access all of the features that must be updated. This method is available from multiple scopes, because it is available from the SPWebService, SPWebApplication, SPContentDatabase, and SPSite classes. For more information about this overloaded method, see the QueryFeatures documentation for each of those classes. For an overview of the feature upgrade object model, see Feature Upgrade Object Model.

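As a small illustration of this query object model (a sketch that assumes the SharePoint 2010 server object model is available; the URL and scope are placeholders, and the exact QueryFeatures overload used here is one of several), the following code lists the web-scoped features in one web application that can be upgraded:

using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

class FeatureUpgradeReport
{
    static void Main()
    {
        // Look up one web application (illustrative URL) and query it for
        // web-scoped features that need an upgrade.
        SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://intranet"));

        SPFeatureQueryResultCollection results =
            webApp.QueryFeatures(SPFeatureScope.Web, true); // true = only features needing an upgrade

        foreach (SPFeature feature in results)
        {
            // SPFeature.Version is the version of the deployed feature instance.
            Console.WriteLine("{0} is at version {1} and can be upgraded.",
                feature.DefinitionId, feature.Version);
        }
    }
}
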
The following section summarizes many of the new upgrade actions that you can apply when you are upgrading from one version of a feature to another.

SharePoint 2010 Feature Upgrade Actions

Upgrade actions are defined in the Feature.xml file. The SPFeatureReceiver class contains a FeatureUpgrading method, which you can use to define actions to perform during an upgrade. This method is called during feature upgrade when the feature’s Feature.xml file contains one or more <CustomUpgradeAction> tags, as shown in the following example.

<UpgradeActions>
  <CustomUpgradeAction Name="text">
    ...
  </CustomUpgradeAction>
</UpgradeActions>

Each custom upgrade action has a name, which can be used to differentiate the code that must be executed in the feature receiver. As shown in the following example, you can parameterize custom action instances.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange EndVersion ="2.0.0.0">
      <!-- First action-->
      <CustomUpgradeAction Name="example">
        <Parameters>
          <Parameter Name="parameter1">Whatever</Parameter>
          <Parameter Name="anotherparameter">Something meaningful</Parameter>
          <Parameter Name="thirdparameter">additional configurations</Parameter>
        </Parameters>
      </CustomUpgradeAction>
      <!-- Second action-->
      <CustomUpgradeAction Name="SecondAction">
        <Parameters>
          <Parameter Name="SomeParameter1">Value</Parameter>
          <Parameter Name="SomeParameter2">Value2</Parameter>
          <Parameter Name="SomeParameter3">Value3</Parameter>
        </Parameters>
      </CustomUpgradeAction>
    </VersionRange>
  </UpgradeActions>
</Feature>

This example contains two CustomUpgradeAction elements, one named example and the other named SecondAction. Both elements have different parameters, which are dependent on the code that you wrote for the FeatureUpgrading event receiver. The following example shows how you can use these upgrade actions and their parameters in your code.

/// <summary>
/// Called when a feature instance is upgraded, once for each of the custom upgrade actions in the Feature.xml file.
/// </summary>
/// <param name="properties">Feature receiver properties</param>
/// <param name="upgradeActionName">Upgrade action name</param>
/// <param name="parameters">Custom upgrade action parameters</param>
public override void FeatureUpgrading(SPFeatureReceiverProperties properties,
                                      string upgradeActionName,
                                      System.Collections.Generic.IDictionary<string, string> parameters)
{
    // Do not do anything if the feature scope is not correct.
    if (!(properties.Feature.Parent is SPWeb))
    {
        // Log that the feature scope is incorrect.
        return;
    }

    switch (upgradeActionName)
    {
        case "example":
            FeatureUpgradeManager.UpgradeAction1(parameters["parameter1"], parameters["anotherparameter"],
                                                 parameters["thirdparameter"]);
            break;
        case "SecondAction":
            FeatureUpgradeManager.UpgradeAction1(parameters["SomeParameter1"], parameters["SomeParameter2"],
                                                 parameters["SomeParameter3"]);
            break;
        default:
            // Log that code for the action does not exist.
            break;
    }
}

You can have as many upgrade actions as you want, and you can apply them to version ranges. The following example shows how you can apply upgrade actions to version ranges of a feature.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange BeginVersion="1.0.0.0" EndVersion ="2.0.0.0">
      ...
    </VersionRange>
    <VersionRange BeginVersion="2.0.0.1" EndVersion="3.0.0.0">
      ...
    </VersionRange>
    <VersionRange BeginVersion="3.0.0.1" EndVersion="4.0.0.0">
      ...
    </VersionRange>
  </UpgradeActions>
</Feature>

The AddContentTypeField upgrade action can be used to define additional fields for an existing content type. It also provides the option of pushing these changes down to child instances, which is often the preferred behavior. When you initially deploy a content type to a site collection, a definition for it is created at the site collection level. If that content type is used in any subsite or list, a child instance of the content type is created. To ensure that every instance of the specific content type is updated, you must set the PushDown attribute to True, as shown in the following example.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange EndVersion ="2.0.0.0">
      <AddContentTypeField ContentTypeId="0x0101002b0e208ace0a4b7e83e706b19f32cab9"
                           FieldId="{ccbcd479-94c9-4f3a-95c4-58897da434fe}"
                           PushDown="True"/>
    </VersionRange>
  </UpgradeActions>
</Feature>

For more information about working with content types programmatically, see Introduction to Content Types.

The ApplyElementManifests upgrade action can be used to apply new artifacts to a SharePoint 2010 site without reactivating features. Just as you can add new elements to any new SharePoint elements.xml file, you can instruct SharePoint to apply content from a specific elements file to sites where a given feature is activated.

You can use this upgrade action if you are upgrading an existing feature whose FeatureActivating event receiver performs actions that you do not want to execute again on sites where the feature is deployed. The following example demonstrates how to include this upgrade action in a Feature.xml file.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <VersionRange EndVersion ="2.0.0.0">
      <ApplyElementManifests>
        <ElementManifest Location="AdditionalV2Fields\Elements.xml"/>
      </ApplyElementManifests>
    </VersionRange>
  </UpgradeActions>
</Feature>

An example of a use case for this upgrade action involves adding new .webpart files to a feature in a site collection. You can use the ApplyElementManifests upgrade action to add those files without reactivating the feature. Another example would involve page layouts, which contain initial Web Part instances that are defined in the file element structure of the feature element file. If you reactivate this feature, you will get duplicates of these Web Parts on each of the page layouts. In this case, you can use the ElementManifest element of the ApplyElementManifests upgrade action to add new page layouts to a site collection that uses the feature without reactivating the feature.

The MapFile element enables you to map a URL request to an alternative URL. The following example demonstrates how to include this upgrade action in a Feature.xml file.

<Feature xmlns="http://schemas.microsoft.com/sharepoint/">
  <UpgradeActions>
    <MapFile FromPath="Features\MapPathDemo_MapPathDemo\PageDeployment\MyExamplePage.aspx"
             ToPath="Features\MapPathDemo_MapPathDemo\PageDeployment\MyExamplePage2.aspx" />
  </UpgradeActions>
</Feature>

Mapping URLs in this way would be useful to you in a case where you have to deploy a new version of a page that was customized by using SharePoint Designer 2010. The resulting customized page would be served from the content database. When you deploy the new version of the page, the new version will not appear because content for that page is coming from the database and not from the file system. You could work around this problem by using the MapFile element to redirect requests for the old version of the page to the newer version.

It is important to understand that the FeatureUpgrading method is called for each feature instance that will be updated. If you have 10 sites in your site collection and you update a web-scoped feature, the feature receiver will be called 10 times for each site context. For more information about how to use these new declarative feature elements, see Feature.xml Changes.

Upgrading SharePoint 2010 Features: A High-Level Walkthrough

This section describes at a high level how you can put these feature-versioning and upgrading capabilities to work. When you create a new version of a feature that is already deployed on a large SharePoint 2010 farm, you must consider two different scenarios: what happens when the feature is activated on a new site and what happens on sites where the feature already exists. When you add new content to the feature, you must first update all of the existing definitions and include instructions for upgrading the feature where it is already deployed.

For example, perhaps you have developed a content type to which you must add a custom site column named City. You do this in the following way:

  1. Add a new element file to the feature. This element file defines the new site column and modifies the Feature.xml file to include the element file.
  2. Update the existing definition of the content type in the existing feature element file. This update will apply to all sites where the feature is newly deployed and activated.
  3. Define the required upgrade actions for the existing sites. In this case, you must ensure that the newly added element file for the additional site column is deployed and that the new site column is associated with the existing content types. To achieve these two objectives, you add the ApplyElementManifests and the AddContentTypeField upgrade actions to your Feature.xml file.

When you deploy the new version of the feature to existing sites and upgrade it, the upgrade actions are applied to sites one by one. If you have defined custom upgrade actions, the FeatureUpgrading method will be called as many times as there are instances of the feature activated in your site collection or farm.

Figure 12 shows how the different components of this scenario work together when you perform the upgrade.

Figure 12. Components of a feature upgrade that adds a new element to an existing feature

Different sites might have different versions of a feature deployed on them. In this case, you can create version ranges, which define specific actions to perform when you are upgrading from one version to another. If a version range is not defined, all upgrade actions will be applied during each upgrade.

Figure 13 shows how different upgrade actions can be applied to version ranges.

Figure 13. Applying different upgrade actions to version ranges.

In this example, if a given site is upgrading directly from version 1.0 to version 3.0, all configurations will be applied because you have defined specific actions for upgrading from version 1.0 to version 2.0 and from 2.0 to version 3.0. You have also defined actions that will be applied regardless of feature version.

Code Design Guidelines for Upgrading SharePoint 2010 Features

To provide more flexibility for your code, you should not place your upgrade code directly inside the FeatureUpgrading event receiver. Instead, put the code in some centralized location and refer to it inside the event receiver, as shown in Figure 14.

Figure 14. Centralized feature upgrade manager

By placing your upgrade code inside a centralized utility class, you increase both the reusability and the testability of your code, because you can perform the same actions in multiple locations. You should also try to design your custom upgrade actions as generically as possible, using parameters to make them applicable to specific upgrade scenarios.

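As a rough sketch of that guidance (FeatureUpgradeManager and UpgradeAction1 are the hypothetical names already used in the earlier code example, not an actual SharePoint API), the centralized utility class might look like this:

// Hypothetical centralized upgrade utility referenced by the FeatureUpgrading
// receiver shown earlier. Keeping the upgrade logic here makes it reusable
// from multiple receivers and easier to unit test.
public static class FeatureUpgradeManager
{
    public static void UpgradeAction1(string parameter1, string anotherParameter, string thirdParameter)
    {
        // Perform the actual upgrade work here, for example associating new
        // site columns with existing content types or fixing existing content.
        // The parameter values come from the <CustomUpgradeAction> definition
        // in the Feature.xml file.
    }
}
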
Solution Lifecycles: Upgrading SharePoint 2010 Solutions

If you are upgrading a farm (full-trust) solution, you must first deploy the new version of your solution package to a farm.

Execute either of the following scripts from a command prompt to deploy updates to a SharePoint farm. The first example uses the Stsadm.exe command-line tool.

stsadm -o upgradesolution -name solution.wsp -filename solution.wsp

The second example uses the Update-SPSolution Windows PowerShell cmdlet.

Update-SPSolution -Identity contoso_solution.wsp -LiteralPath c:\contoso_solution_v2.wsp -GACDeployment

After the new version is deployed, you can perform the actual upgrade, which executes the upgrade actions that you defined in your Feature.xml files.

A farm solution upgrade can be performed either farm-wide or at a more granular level by using the object model. A farm-wide upgrade is performed by using the Psconfig command-line tool, as shown in the following example.

psconfig -cmd upgrade -inplace b2b
Note
This tool causes a service break on the existing sites. During the upgrade, all feature instances throughout the farm for which newer versions are available will be upgraded.

You can also perform upgrades for individual features at the site level by using the Upgrade method of the SPFeature class. This method causes no service break on your farm, but you are responsible for managing the version upgrade from your code. For a code example that demonstrates how to use this method, see SPFeature.Upgrade.

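The following is a minimal sketch of that site-level approach (assuming the server object model; the feature ID and site URL are placeholders), upgrading one feature instance on a single site:

using System;
using Microsoft.SharePoint;

class SingleFeatureUpgrade
{
    static void Main()
    {
        Guid featureId = new Guid("00000000-0000-0000-0000-000000000000"); // placeholder feature ID

        using (SPSite site = new SPSite("http://intranet")) // illustrative URL
        using (SPWeb web = site.OpenWeb())
        {
            SPFeature feature = web.Features[featureId];

            // Upgrade only if the feature is activated and an older version is deployed.
            if (feature != null && feature.Version < feature.Definition.Version)
            {
                // Runs the upgrade actions declared in Feature.xml, including
                // any FeatureUpgrading receiver code, for this instance only.
                feature.Upgrade(false); // false = do not force the upgrade
            }
        }
    }
}
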
Upgrading a sandboxed solution at the site collection level is much more straightforward. Just upload the SharePoint solution package (.wsp file) that contains the upgraded features. If you have a previous version of a sandboxed solution in your solution gallery and you upload a newer version, an Upgrade option appears in the UI, as shown in Figure 15.

Figure 15. Upgrading a sandboxed solution

After you select the Upgrade option and the upgrade starts, all features in the sandboxed solution are upgraded.

Conclusion

This article has discussed some considerations and examples of Application Lifecycle Management (ALM) design that are specific to SharePoint 2010, and it has also enumerated and described the most important capabilities and tools that you can integrate into the ALM processes that you choose to establish in your enterprise. The SharePoint 2010 feature framework and solution packaging model provide flexibility and power that you can put to work in your ALM processes.

How To : Use Git Tools for TFS Integration

Git – TFS Integration – Why it matters


For many small development shops, the idea of using TFS and a centralized source control repository is anathema. The mere thought of being restricted by a software configuration manager on how and when to branch or merge cuts against everything they cherish in software development.

 

Git is their natural and chosen ground for managing source code. The freedom and flexibility of using Git enables them to work where they are. This is especially true if they are working as part of a distributed team on modular projects.

Microsoft addressed many of the existing concerns with TFS source control with the advent of TFS 2012 and local workspaces. However, even though local workspaces enable great flexibility in offline work, they are still ultimately tied to a central repository and the policies and restrictions imposed on it.

 

Enter Git support in TFS. Git support currently comes in two forms: stand-alone Git support in Visual Studio and Git support with TFS.

Git support with Visual Studio is completely straightforward. Simply change the source control plug-in selection to the Microsoft Git Provider, and all the power and flexibility of Git is available to the Visual Studio developer, including private branches and online collaboration with Git hosts such as GitHub and Bitbucket.


Configuring Git for Visual Studio Source Control

However, from an ALM perspective, the real power and the compelling feature of Microsoft’s integration with Git is the ability to work with TFS.

 

Developers still get all the advantages and flexibility of Git, but can also take advantage of the ALM features of TFS such as work item tracking, team tools and integrated build. The Git – TFS integration gets us much closer to the ultimate goal of true cross-platform support in a single ALM toolset.

 

The TFS – Git integration can be utilized in a couple of ways. The first option is the ability to essentially synchronize a Git repository with TFS source control with the Git-TF utility. This utility makes it easy to clone sources from TFS, fetch updates from TFS and push changes back to TFS.

 

What’s more, it fully supports TFS shelvesets and work item integration, which presents some exciting possibilities. The features and functionality Git-TF provides makes it a compelling solution and a credible compromise between centrally managed teams with source control and distributed teams with distributed source control.

The second option, available now only through Microsoft’s hosted TFS Service, is the ability for organizations to create TFS Team Projects with Git-hosted source control (this ability is reportedly planned for on-premises TFS support in the next release). This is a fairly exciting development.

 

Having the choice between native TFS version control and Git when creating a team project opens many doors that hitherto were locked shut.


XCode IDE connected to a TFS hosted repository

Eclipse, XCode, Visual Studio and any other IDE that supports Git can now be used to leverage the powerful ALM features TFS provides.

As an ALM consultant, that’s the part that excites me the most. Hosting all development efforts in a single environment, one that supports all the various technologies in play, and being able to track and manage those efforts with agility and transparency is a huge benefit to any organization that provides multiple-platform solutions.

 

Even those who don’t will now have the option to at least evaluate the feasibility of utilizing TFS in development environments not typically associated with a Microsoft project.

The mythical promised land of cross-platform ALM may have just become quite less mythical.

 

Microsoft’s Tool for Git and TFS Integration

 http://gittf.codeplex.com


Working with Teams

The Git-TF tool is most easily used by a single developer or multiple developers working independently with their own isolated Git repos. That is, each developer uses Git-TF to clone a local repo where they can then use Git to manage their local development that will eventually be checked in to TFS. In this “hub and spoke” configuration, all code is shared through TFS at the “hub” and each developer using Git becomes a “spoke”. Developers looking to collaborate using Git’s distributed sharing capabilities will want to work in a specific configuration described below.

Most often, developers collaborating with Git have cloned from a common repo. When it comes time to share divergent changes, conflict resolution is easy because each repository shares the same common base version. Many times, conflicts are automatically resolved. One of the keys to this merging of histories is that each commit is assigned a unique identifier that is generated by the contents of the commit. When working with Git-TF, two repositories cloned from the same TFS path will not have the same commit IDs unless the clones were done at the same point in TFS history, and with the same depth. In the event that two Git repos that were independently cloned using Git-TF share changes directly, the result will be a baseless merge of the repositories and a large number of conflicts. For this reason, it is not recommended that teams using Git-TF ever share changes directly through Git (i.e. using git push and git pull).

Instead, it is recommended that a team working with Git-TF and collaborating with Git do so by designating a single repo as the point of contact with TFS. This configuration may look as follows for a team of three developers:

          [TFS]      [Shared Git repo]
            |         ^ (2)  |       \
            |        /       |        \
            |       /        |         \
            V (1)  /         V (3)      V (4)
       [Alice's Repo]   [Bob's Repo]   [Charlie's Repo]
 

In the configuration above the actions would be as follows:

  1. Using the git tf clone command, Alice clones a path from TFS into a local Git repo.
  2. Next, Alice uses git push to push the commit created in her local Git repo into the team’s shared Git repo.
  3. Bob can then use git clone to clone down the changes that Alice pushed.
  4. Charlie can also use git clone to clone down the changes that Alice pushed.

Both Bob and Charlie only ever interact with the team’s shared Git repo using git push and git pull. They can also interact directly with one another’s repos (or with Alice’s), but should never use Git-TF commands to interact with TFVC.

When working with the team, Alice will typically develop locally and use git push and git pull to share changes with the team. When the team decides they have changes to share with TFS, Alice will use a git tf checkin to share those changes (typically a git tf checkin --shallow will be used). Likewise, if there are changes that the team needs from TFVC, Alice will perform a git tf pull, using the --merge or --rebase options as appropriate, and then use git push to share the changes with the team.
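
To make that concrete, a sketch of Alice’s routine might look like the following (the TFS URL and the shared repo remote are placeholders):

# One-time setup: clone from TFVC, then publish to the team's shared Git repo
git tf clone http://tfsserver:8080/tfs/DefaultCollection $/MyProject/Main
git remote add origin https://example.com/git/shared-repo.git
git push origin master

# Pushing the team's accumulated work up to TFVC as a single changeset
git tf checkin --shallow

# Bringing TFVC changes down and sharing them with the team
git tf pull --merge
git push origin master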

Note that (until Issue 77 is addressed) all changes coming into the TFVC repository will come in as if from Alice’s TFS identity. This is fine if only Alice has an identity on that TFVC project but it may well not be what you want if Bob and Charlie also had valid identities in that TFS project.

Rebase vs. Merge

Once changes have been fetched from TFS using git tf pull (or git tf fetch), those changes must either be merged with the HEAD or have any changes since the last fetch rebased on top of FETCH_HEAD. Git-TF allows developers to work in either manner, though if the repo that is sharing changes with TFS has shared any commits with other Git users, then this rebase may result in significant conflicts (see The Perils of Rebasing). For this reason, it is recommended that any team working in the aforementioned team configuration use git tf pull with the default --merge option (or use git merge FETCH_HEAD to incorporate changes made in TFS after fetching manually).
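
In command form, the two approaches after a manual fetch look roughly like this (choose one, not both):

git tf fetch
git merge FETCH_HEAD    # recommended once commits have been shared with other Git users
git rebase FETCH_HEAD   # only if the local commits have never been shared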

Recommended Git Settings

When using the Git-TF tools, there are a few recommended settings that should make it easier to work with other developers that are using TFS.

Line Endings

core.autocrlf = false

Git has a feature to allow line endings to be normalized for a repository, and it provides options for how those line endings should be set when files are checked out. TFS does not have any feature to normalize line endings – it stores exactly what is checked in by the user. When using Git-TF, choosing to normalize line endings to Unix-style line endings (LF) will likely result in TFS users (especially those using VS) changing the line endings back to Windows-style line endings (CRLF). As a result, it is recommended to set the core.autocrlf option to false, which will keep line endings unchanged in the Git repo.

Ignore case

core.ignorecase = true

TFS does not allow multiple files that differ only in case to exist in the same folder at the same time. Git users working on non-Windows machines could commit files to their repo that differ only in case, and attempting to check in those changes to TFS will result in an error. To avoid these types of errors, the core.ignorecase option should be set to true.
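
Both settings can be applied per repository (or with --global for every repository on the machine), for example:

git config core.autocrlf false
git config core.ignorecase true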

How To : Design the Physical Architecture to Support Collaborative Development and ALM of SharePoint Foundation 2010 Application

Introduction

This article explains the physical architecture which best supports collaborative development and ALM of a SharePoint Foundation 2010 application, which servers and tools are needed, and how they play key roles in ALM of SharePoint Foundation 2010. The purpose of this article is to provide an overall understanding of the various servers and farms connected to each other in SharePoint Foundation.

Background

A basic understanding of the different server operating systems & SharePoint Foundation 2010 is required.

Solution

Application Life-cycle Management (ALM) is the coordination of development life-cycle activities—including requirements, modeling, development, build, and testing. Recently, ALM has expanded beyond the application and the software development life cycle to also include business solution governance, infrastructure management, operations, and support.

You can use ALM to help align your organization in the context of a software solution in business, development, and operations. With an application development platform that supports ALM, you can provide integration between the various tools used and activities performed within each of these capabilities.

There are four main types of staging servers, along with a standalone developer environment, which play a key role in ALM of a SharePoint 2010 application:

  1. Development SharePoint Farm
  2. Team foundation server
  3. Integration/Testing Farm
  4. Production Farm
    +
    Developer’s Workstation

The figure below shows a physical architecture which depicts how each server is interconnected to support collaborative development and ALM for a SharePoint Foundation 2010 application:


Development SharePoint Farm

A SharePoint farm is fundamentally a collection of SharePoint role servers that provide for the base infrastructure required to house SharePoint sites. The farm level is the highest level of SharePoint architecture, providing a distinct operational boundary for a SharePoint environment. Each farm in an environment is a self-encompassing unit made up of one or more servers, such as web servers, service application servers, and SharePoint database servers.

A SharePoint development farm is needed because developers in an organization that makes heavy use of SharePoint often need environments to test new applications, web parts, solutions, and other SharePoint customizations. These developers often need a sandbox area where these farm-level features and solutions can be tested.

I have considered a two-tier topology for the SharePoint Foundation 2010 farm. However, it will be entirely based on the needs of your application. If your application is a relatively small intranet application, then you can choose a single-tier topology, or if you are going to integrate another search server with Foundation, then you can choose a three-tier topology with an application server as the middle tier (remember that SharePoint Foundation 2010 doesn’t include enterprise search). It may make sense to deploy one or more development farms so that developers have the opportunity to run their tests and develop software for SharePoint independent of the existing production environment.

There are basically two types of servers included in two-tier development farm of SharePoint Foundation 2010:

  1. Web server
  2. Content database server

In the above figure, there are three front-end web servers and one SharePoint content database server. However, you can choose a single front-end web server connected to the content database server based on your application’s needs and the architecture of the production environment. All web servers share the same content database. This is called a two-tier deployment farm, where the SharePoint server components and the content database are installed on separate servers. As mentioned before, you can choose a one-tier, two-tier or three-tier deployment topology based on your application architecture and the topology of the production architecture.

Each web server has SharePoint Foundation 2010 and the SharePoint extension for TFS 2010 installed on it. It needs the SharePoint extension for TFS 2010 to connect with Team Foundation Server for source control, build management & project management.

Advantage of Development SharePoint Farm:

  1. Single place where SharePoint Admin can integrate all the final artifacts from multiple developers.
  2. Developer can sync with latest SharePoint site on its standalone developer workstation.
  3. Admin can easily approve artifacts and migrate to integration server.
  4. It is a unit testing environment for developers where they can test dependent functionality or farm level features.

Team Foundation Server

Team Foundation Server plays a key role in ALM by providing source control, build management and work item tracking. You can have TFS installed on the same server as the content database server, but if you are going to use TFS build management, then it is advisable to have a separate Team Foundation Server because it utilizes the CPU intensively when it processes builds.

As per the above figure, there is a separate Team Foundation Server which is connected to the SharePoint farm as well as the standalone development workstations, so that it can provide source control for customized content as well as developers’ artifacts and resources.

Advantages of TFS
  1. Source control for SharePoint artifacts and customization
  2. Build management for SharePoint
  3. Work item and bug tracking tool for SharePoint
  4. Admin console for all management activity
  5. Easy integration with SharePoint foundation server and VS 2010
  6. Easy check-in & check-out
  7. Web based console to manage ALM activity

Developer’s Workstation

As per the above figure, the developers’ environment includes two developer workstations. In practice, you can have as many workstations as your development team size requires.

Each developer workstation should have a Windows 7 or Windows Vista operating system with a standalone SharePoint Foundation server and a local content database, so that one developer’s work doesn’t affect another developer’s and each developer can debug artifacts locally.

Developer workstation will include the following stuff installed:

  1. Windows 7 or Windows vista 64 bit OS
  2. Stand alone SharePoint Foundation server 2010
  3. SharePoint designer 2010
  4. Visual Studio 2010 (connected to TFS)

Each developer workstation should be connected to Team Foundation Server 2010 so that when a developer completes an artifact, he can check it in to TFS and other developers can take the latest code from TFS if needed. This way, parallel development can happen without affecting other developers’ work.

Integration/Testing Farm

Any production SharePoint environment should have a test environment in which new SharePoint web parts, solutions, service packs, patches, and add-ons can be tested. It is critical to deploy test farms, because many SharePoint add-ons could potentially disrupt or corrupt the formatting or structure of a production environment, and trying to test these new solutions on site collections or different web applications is not enough because the solutions often install directly on the SharePoint servers themselves. If there is an issue, the issue will be reflected in the entire farm.

Integration or testing server farm should be similar to the existing environments, with the same add-ons and solutions installed and should ideally include restores of production site collections to make it as similar as possible to the existing production environment. All changes and new products or solutions installed into an environment should subsequently be tested first in this environment.

Integration/testing servers will have final SharePoint sites and site collection as per the business requirements. QA will test all the business functionality here. Customer can also do their ‘User acceptance test’ before going live to the production server.

After user acceptance test passed, all the sites & site collection will be deployed on production server.

Advantage of Integration testing server:

  1. Clean environments and same physical architecture as production
  2. QA can test all dependent business functionality at one place
  3. Customer can participate in UAT
  4. Easy deployment/migration from integration testing server to production server

Production Farm

The final stage is rolling your farm into a production environment. At this stage, you will have incorporated the necessary solution and infrastructure adjustments that were identified during the user acceptance test stage. These servers are generally in the customer’s premises. Development team and testing team do not have control over it.

There are various 3rd party tools available in the market for SharePoint data protection, administration, migration, compliance and integration.


Summary

In this way, you can design a physical architecture where the development SharePoint farm and developers’ workstations are integrated with TFS 2010. TFS and the content database are connected to the testing server or testing farm, where all the artifacts and content are integrated for QA and UAT. Finally, after UAT, everything is deployed to the production farm.

You can use VMs (virtual machines) for all the servers and workstations for an effective infrastructure, because if a server crashes for any reason, you can quickly create a new VM with the needed OS from images.

Note: In the above figure, the integration/testing farm and production farm are shown as single servers just for clarity, but in reality they will be as large as the development farm, with a number of front-end web servers and content database servers. All the server OSs are Windows Server 2008 R2 SP2 64-bit. Please visit here for more information on hardware & software requirements for SharePoint Foundation 2010.

A Look At : Visual Studio 2013 Update 3 CTP2


New technology improvements in Visual Studio 2013 Update 3 CTP 2

 

Technology improvements

The following technology improvements were made in this release.

CodeLens

  • CodeLens jobs that are running on the Team Foundation Server job agent have been optimized for performance specifically while processing branching and merging changesets.

Debugger

  • If you have more than one monitor, Visual Studio will remember which monitor a Windows Store application was last run on.
  • You can debug x86 applications that are built by .NET Native.
  • When you analyze managed memory dump files, you can go to Definition and Find All References of the selected type.
  • You can debug the dump files from .NET Native applications by using Visual Studio debugger.

General

  • The Application Insights Tools for Visual Studio are now included in Visual Studio 2013 Update 3 CTP2. This initial integration as part of CTP2 includes some software updates and performance improvements.

IntelliTrace

  • You can skip straight to the details of performance events that are exported from Application Insights to IntelliTrace.

Profiler

  • The Performance and Diagnostics hub can open profiling sessions (.diagsession files) that were exported from the F12 tools in the latest developer preview of Internet Explorer 11.
  • Windows Presentation Foundation (WPF) and Win32 applications are supported by the new Memory Usage Tool in the Performance and Diagnostics Hub. For more information about how to use the tool to troubleshoot issues in native and managed memory, go to the following blog post:
    Diagnosing memory issues with the new Memory Usage Tool in Visual Studio

Release Management

  • You can use Windows PowerShell or the Windows PowerShell Desired State Configuration (DSC) feature to deploy and manage configuration data (a minimal DSC sketch follows this list). Additionally, you can deploy to the following environments without having to set up Microsoft Deployment Agent:
    • Microsoft Azure environments
    • On-premise environments (Standard environments)
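
As a minimal illustration of the DSC side (the node name, feature and output path below are placeholders, not part of the Update 3 release notes), a configuration that ensures IIS is present on a target server could look like this:

Configuration WebServerBaseline {
    Node "webserver01" {
        WindowsFeature IIS {
            Ensure = "Present"   # install the Web-Server role if it is missing
            Name   = "Web-Server"
        }
    }
}

WebServerBaseline -OutputPath .\Mof               # compile the configuration to a MOF file
Start-DscConfiguration -Path .\Mof -Wait -Verbose # push the configuration to the target node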

Testing Tools

  • You can add custom fields and custom work flows for test plans and test suites.
  • You can use Manage Test Suites permission for granting access to test suites.
  • You can track changes to test plans and test suites by using work item history.

For more information about these features, see the following Visual Studio Developer Tools blog article:

Test Plan and Test Suite Customization with TFS2013 Update3

Visual Studio IDE

  • CodeLens authors and changes indicators are now available for Git repositories.
  • In Code Map, links are styled by using colors, and they display in the improved Legend.
  • Debugger Map automatically zooms to the call stack entry of interest and preserves user’s zoom preferences.
  • You can drag binaries from the Windows file explorer to a code map, and then start exploring binaries by using Code Map.

Known issues

Testing Tools

  • When you try to upgrade an existing TFS server that has Test management data to Visual Studio 2013 Team Foundation Server Update 3 CTP2 in JPN or CHS, the upgrade of Test Case Management service does not work.

Visual Studio IDE

  • In Visual Studio 2013 Ultimate Update 3 CTP2 localized (non en-us) drops, when trying to request a Code Map, or a Dependency Graph for the solution, the directed graph is not produced.

 

For more information on Visual Studio 2013 and other upgrades, visit http://support.microsoft.com/kb/2933779/en-us

How To : Use PowerShell and TFS together

The absolute basics

Where does a newbie to Windows PowerShell start—particularly in regards to TFS? There are a few obvious places. I’m hardly the first person to trip across the natural peanut-butter-and-chocolate nature of TFS and Windows PowerShell together. In fact, the TFS Power Tools contain a set of cmdlets for version control and a few other functions.


There is one issue when downloading them, however. The “typical” installation of the Power Tools leaves out the Windows PowerShell cmdlets! So make sure you choose “custom” and select those Windows PowerShell cmdlets manually.

After they’re installed, you also might need to manually add them to Windows PowerShell before you can start using them. If you try Get-Help for one of the cmdlets and see nothing but an error message, you know you’ll need to do so (and not simply use Update-Help, as the error message implies).

Fortunately, that’s simple. Using the following command will fix the issue:

add-pssnapin Microsoft.TeamFoundation.PowerShell

See the before and after:

Image of command output

A better way to review what’s in the Power Tools and to get the full list of cmdlets installed by the TFS Power Tools is to use:

Get-Command -module Microsoft.TeamFoundation.PowerShell

This method doesn’t depend on the developers including “TFS” in all the cmdlet names. But as it happens, they did follow the Cmdlet Development Guidelines, so both commands return the same results.

Something else I realized when working with the TFS PowerShell cmdlets: for administrative tasks, like those I’m most interested in, you’ll want to launch Windows PowerShell as an administrator. And as long-time Windows PowerShell users already know, if you want to enable the execution of remote scripts, make sure that you set your script execution policy to RemoteSigned. For more information, see How Can I Write and Run a Windows PowerShell Script?.
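
If you haven’t already done so, that is a one-liner from an elevated Windows PowerShell prompt:

Set-ExecutionPolicy RemoteSigned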

Of all the cmdlets provided with the TFS Power Tools, one of my personal favorites is Get-TfsServer, which lets me get the instance ID of my server, among other useful things.  My least favorite thing about the cmdlets in the Power Tools? There is little to no useful information for TFS cmdlets in Get-Help. Awkward! (There’s a community bug about this if you want to add your comments or vote on it.)
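
As a quick sketch (the collection URL is a placeholder), it can be used along these lines:

$server = Get-TfsServer -Name http://tfsserver:8080/tfs/DefaultCollection
$server.InstanceId   # the instance ID of the server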

A different favorite: Get-TfsItemHistory. The following example not only demonstrates the power of the cmdlets, but also some of their limitations:

Get-TfsItemHistory -HistoryItem . -Recurse -Stopafter 5 |
    ForEach-Object { Get-TfsChangeset -ChangesetNumber $_.ChangesetId } |
    Select-Object -ExpandProperty Changes |
    Select-Object -ExpandProperty Item

This snippet gets the last five changesets in or under the current directory, and then it gets the list of files that were changed in those changesets. Sadly, this example also highlights one of the shortcomings of the Power Tools cmdlets: Get-TfsItemHistory cannot be directly piped to Get-TfsChangeset because the former outputs objects with ChangesetId properties, and the latter expects a ChangesetNumber parameter.

One of the nice things is that raw TFS API objects are being returned, and the snap-ins define custom Windows PowerShell formatting rules for these objects. In the previous example, the objects are instances of VersionControl.Client.Item, but the formatting approximates that seen with Get-ChildItem.

So the cmdlets included in the TFS Power Tools are a good place to start if you’re just getting started with TFS and Windows PowerShell, but they’re somewhat limited in scope. Most of them are simply piping results of the tf.exe commands that are already available in TFS. You’ll probably find yourself wanting to do more than just work with these.
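
One common next step, shown here only as a sketch (assembly loading details vary by TFS and Visual Studio version, and the collection URL is a placeholder), is to script directly against the TFS client object model:

# Load the TFS client assemblies from the GAC (partial names; exact versions vary by installation)
Add-Type -AssemblyName "Microsoft.TeamFoundation.Client"
Add-Type -AssemblyName "Microsoft.TeamFoundation.VersionControl.Client"

$collection = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection(
    [Uri]"http://tfsserver:8080/tfs/DefaultCollection")
$vcs = $collection.GetService([Microsoft.TeamFoundation.VersionControl.Client.VersionControlServer])
$vcs.GetAllTeamProjects($true) | Select-Object Name   # list the team projects under version control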

 

Microsoft Application Insights – A comprehensive guide : Part 1 – Getting Started

Application Insights Overview

With Application Insights, we have the same excellent capabilities from the Application Performance Monitoring (APM) feature (formerly AviCode) available in SCOM 2012 R2 – with the exception that it’s all run from the cloud as part of Visual Studio Online. With this type of monitoring for your applications, you can ensure that they are available and performing optimally, while leveraging usage data to drive improvements and trends.

In relation to the Microsoft Management Agent used for both Application Insights and SCOM 2012 R2, they share the same source code but have a slight difference with serialization that determines which REST interface and location the agent sends its performance data to.

Application Insights supports both .NET and Java web applications. On the Java side of the house, it supports monitoring Tomcat 6, Tomcat 7 or JBoss 6. For the purposes of this deep-dive series however, I’m going to concentrate on demonstrating its capabilities monitoring .NET web applications and I’ll save the Java monitoring for a later post.

You can use Application Insights to monitor web applications that are running in an on-premise/virtual machine scenario and of course, it’s also fully supported to monitor web applications running as a web role in Windows Azure Cloud Services. If you’re a Windows Phone app developer, you might be interested in the capability to view usage trends and other analytical data as users download and use your app on a daily basis.

The method of getting these different environments monitored varies depending on the scenario, but typically it’s a straightforward enough process, as you’ll understand when you read through this blog series.

Regardless of the environment you’re monitoring with Application Insights, you can ensure you’re kept up to date with any performance issues (slow responses, uncaught exceptions etc.) by enabling email notification direct to your inbox. If you want to use Visual Studio to view the stack trace to help triage the problem, then this is an easy option too.

So, that’s a high-level overview of what Application Insights can do, now let’s get started!

Creating Your Account

The first thing you’ll need to do is to create a new Visual Studio Online account by clicking on the following link to sign up:

http://www.visualstudio.com/products/visual-studio-online-overview-vs

Click on the ‘Ready to Go?’ tile

Enter your Microsoft Account (formerly Windows Live ID) details, then click the Sign In button. If you don’t yet have a Microsoft Account, you can sign up for a new one here.

Input all your details into the ‘Create a Visual Studio Online Account’ window, then hit the Create Account button to move on.

Once you’ve created your Visual Studio Online account, you’ll need to specify a name for your first project. Call it what you want, then hit the Create Project button to finish.

Now, at the Overview screen of your Home page, you should see a Blue tile titled ‘Try Application Insights’ as shown below. If you can’t see the Blue AI tile, then click the Help button and choose the ‘Display Announcement’ option from the resulting menu.

Click on the Blue tile and you’ll be taken to the *Insights view where you’ll be prompted for an invitation code to gain access

Invitation Code? I hear you ask.
Don’t worry if you don’t have one, even though AI is still only available as a Preview, the good folks over at Microsoft have made a public code available at the following link:
Type in your code, then hit the Get Started button to enter the new world of Application Insights!
That’s it for Part 1 of this series. In Part 2, we’ll start work on building a demo .NET web application environment for us to give our new Application Insights account a test drive in.

The Essential How To : Upgrade from Team Foundation Server 2012 to 2013.

This blog gives you the step by step actions required for the upgrade from Team Foundation Server 2012 to 2013.

> Prerequisites: (IMPORTANT)

Make sure you have appropriate accounts with permissions for SQL, SharePoint, TFS and Machine level Admin privileges on both the servers.

  1. You must be an Administrator on the Server.
  2. Must be a part of the Farm Administrator’s group in SharePoint.

A detailed information about this can be found here.

   Back up all the databases from TFS 2012.

Make sure you stop all TFS services, using the TFSServiceControl.exe quiesce command. More info here.
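
For example, from an elevated PowerShell prompt on the TFS 2012 application tier (the path below is the default install location; adjust it if TFS was installed elsewhere):

cd "C:\Program Files\Microsoft Team Foundation Server 11.0\Tools"
.\TfsServiceControl.exe quiesce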

Databases to backup,

  1. The Configuration Database (For instance, Tfs_Configuration)
  2. The Collection Database(s) (For instance, Tfs_DefaultCollection)
  3. The Warehouse Database* (For instance, Tfs_Warehouse)
  4. The Analysis Database* (For instance, Tfs_Analysis)
  5. Report Server Databases* (For instance, ReportServer and ReportServerTempDB)
  6. SharePoint Content Databases** (For instance, WSS_Content)

* If you have configured Reporting in TFS 2012

** If you have configured SharePoint in TFS 2012

Also, you have to back up the Encryption keys of ReportServer database.
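
If you prefer to script these backups, a rough sketch using the SQL Server PowerShell module might look like this (the instance name, database list, paths and password are examples only; adjust them to your environment):

Import-Module SQLPS -DisableNameChecking

# Back up each TFS-related database to a file
"Tfs_Configuration", "Tfs_DefaultCollection", "Tfs_Warehouse", "ReportServer", "ReportServerTempDB" |
    ForEach-Object {
        Backup-SqlDatabase -ServerInstance "OLDTFSSQL" -Database $_ -BackupFile "D:\TfsBackups\$_.bak"
    }

# Back up the Reporting Services encryption key (rskeymgmt.exe ships with SQL Server Reporting Services)
& "C:\Program Files\Microsoft SQL Server\110\Tools\Binn\rskeymgmt.exe" -e -f "D:\TfsBackups\rs_key.snk" -p "P@ssw0rd!"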

   In the new Server,

  1. Install a compatible version of SharePoint (Foundation/Standard/Enterprise).

Note: Configuration fails with SQL Server 2012 RTM. You need the SP1 update installed.

> Step 1: Restoring Databases.

On the new server, using SQL Server Management Studio, connect to the Database Engine and restore all the databases (Configuration, Collection(s), SharePoint Content, Warehouse and Reporting). Also, connect to the Analysis Services engine and restore the Analysis database.

clip_image002

If you have Reports configured and want to do the same in 2013, go to Step 2, else skip it.

> Step 2: Configure Reporting.

Open Reporting Services Configuration Manager,

clip_image004

Click on Database on the left pane and click on Change Database.

Choose an existing report server database and click Next.

clip_image006

Enter the credentials and click Next.

clip_image009

Select the ReportServer database and click Next. Review and complete the wizard.

clip_image011

Now, you’ll have to restore the Encryption key. To do that Click on Encryption Keys on the left pane and click on Restore.

Specify the file and enter the password you used to back up the encryption key and click on OK.

clip_image013

Go to Report Manager URL, and Click on the URLs to see if you are able to successfully browse through the reports.

> Step 3: Install and Configure SharePoint.

Install a compatible version of SharePoint and run the SharePoint Products Configuration Wizard.

To check compatible versions, refer to the Team Foundation Server 2013 Installation Guide, available here.

Click on Next.

clip_image017

Configuring a new farm would require reset of some services. Confirm by clicking on Yes.

clip_image019

Select Create a new server farm and click on Next.

clip_image021

Enter the Database Server and give a name for the configuration Database. Specify the service account that has the permissions to access that database server.

clip_image023

The recent versions of SharePoint would ask for a passphrase to secure the farm configuration data. Enter a passphrase.

clip_image025

Click on Specify Port Number and give “17012”. TFS usually uses this port for SharePoint. You can however give another unused port number too.
Select NTLM for default security settings.

clip_image027

Review and complete the configuration wizard.

clip_image029

Now that we’ve configured SharePoint, go to SharePoint Central Admin page, give the admin credentials and create a new web application for TFS. It will automatically create a new content database for you.
If you want to restore the content database from the previous version, say SharePoint 2010, you must first upgrade the content database and attach it to the web application.

  • First, Click on Application Management -> Manage Content Databases.
    Select the web application you created for TFS and remove that content database.
  • Upgrade and Attach the old content database by using the Mount-SPContentDatabase. Example,
    Mount-SPContentDatabase "MyDatabase" -DatabaseServer "MyServer" -WebApplication http://sitename
    (More information on that command, here.)

> Step 4: Configure Team Foundation Server.

Once we’ve restored all the databases and the encryption key and/or configured reporting, we are all set to upgrade TFS to the latest version.
Start Team Foundation Administration Console and Click on Application tier.

clip_image031

Click on Configure Installed Features and choose Upgrade.

clip_image032

Enter the SQL Server/Instance name and Click on list available databases.

clip_image033

Confirm that you have taken a backup and click Next (yes, taking a backup is that important).

Choose the Account and Authentication Method.

clip_image034

Select Configure Reporting for use with Team Foundation Server and click Next.

clip_image035

Note: If your earlier deployment was not using Reporting Service then you would not be able to add Reporting Service during the upgrade (this option would be disabled). You can configure Reporting Service with TFS later on after the upgrade is complete.

Give the Reporting Services instance name and click Populate URLs to populate the Report Server and Report Manager URLs. Click Next.

clip_image036

Since we are configuring with Reports, we need to specify the Warehouse database. Enter the New SQL Instance that hosts the Warehouse database, do a Test and Click on List Available Databases. Select Tfs_Warehouse and click on Next.

clip_image037

In the next screen, Input the New Analysis Services Instance, do a Test and Click on Next.
Note: If you’ve restored the Analysis Database (Tfs_Analysis) from the previous instance, TFS will automatically identify and use the same.

clip_image038

Enter the Report Reader account Credentials, do a Test and click on Next.

clip_image039

Check Configure SharePoint Products for Use with Team Foundation Server. Click on Next.
Note: This is for Single Server Configuration, meaning SharePoint is installed on the same server as TFS.

clip_image040

Note: If your earlier deployment was not using SharePoint then you would not be able to add SharePoint during the upgrade (this option would be disabled). You can configure SharePoint with TFS later on after the upgrade is complete.

In this screen, make sure you point to the correct SharePoint farm. Click on Test to test out the connection to the server. In our case, this is a Single-Server deployment. We’ve configured SharePoint manually, and created a web for TFS.

clip_image041

Make sure all the Readiness Checks are passed without any errors. Click on Configure.

clip_image042

Now, the basic TFS configuration has completed successfully. Click Next to initiate the Project Collection(s) upgrade.

clip_image043

That’s it. Project Collections are now upgraded and attached.

clip_image044

Click Next to review a summary and close this wizard.

clip_image045

We are almost done. Notice that in the summary of TFS Administration Console, we still see the old server URL.

This is very important.

We need to change this to reflect the new server by using the Change URLs option.

clip_image047

clip_image049

Do a test on both the URLs and Click on OK.

Now, try browsing the Notification URL to see if you are able to view the web access without any errors.

clip_image051

Next, Click on the Team Project Collections in the left pane, Select your Collection(s) and click on Team Projects. See if you have the projects listed.

clip_image053

Under SharePoint site, check if the URL points to the correct location, if not, Click on Edit Default Site Location and edit it.

clip_image055

See if the URL under Reports Folder points to the correct location, if not, Click on Edit Default Folder Location and edit it.

clip_image057

Next, Click on Reporting in the left pane and see if the configurations are saved and are started.

clip_image059

> Step 5: Configuring Extensions for SharePoint (Only when SharePoint is on a different server)

If you’ve configured SharePoint on a different server, you need to add Extensions for SharePoint products manually.
First you need to install the Remote SharePoint Extensions on that server; they are available as part of the TFS installation media. Then run the configuration wizard for Extensions for SharePoint Products. For a detailed blog on how to do that, click here.

That’s it. With that TFS is configured correctly.
As a final step, open Team Explorer, connect to the Team Foundation Server and create a new Team Project. See if it is created properly with Reports and a SharePoint site.

Now, your new TFS 2013 is up and running, with upgraded collections.

How does TFS backup scheduling tool handle collection add/delete/detach?

We are used to creating collections upon request. This implies collections would be created/attached/detached continuously.

What is the impact of these on TFS backup plan?
Does it see new collections or should we re-configure each time?
What about detached collections?

I took some time to probe this. This is what I did. Firstly I scheduled a backup plan.

clip_image002

I have a simple TFS setup. There are no SharePoint or Reporting databases (these aren’t usually affected by collection level activities). Three databases are backed up

    1. Tfs_Configuration
    2. Tfs_DefaultCollection
    3. Tfs_Warehouse

    clip_image004

    I waited for some time to confirm the backups run fine.
    clip_image006

    Then I added another collection – backup_test.

    clip_image008

    But that collection is not listed in the wizard. We still see only the three initial databases.

    clip_image010

    So should we add it? – By manually reconfiguring the backup plan? That would be tedious and also would create new backupset.xml.
    Luckily no – we don’t have to.
    We should wait for the next scheduled job – in my case a transactional backup in another 3 minutes.

    And voila! Once the transaction job is kicked off – the new collection is added to the existing plan.

    clip_image012

    If you look into the backup folder you will see that it took a full backup of the new collection first and then the transaction log (instead of just the transaction). This is to maintain the SQL backup relationship between a full backup and a transaction/differential backup – the LSN continuity.

    Moving forward this database would be backed just like the other databases in the plan.

    The same behavior can be seen in a TFS collection delete or detach. The backup plan would modify itself when its next job is kicked off.

    PS – Do note this doesn’t apply to the Reporting or SharePoint databases. The tool doesn’t pick up new databases, and a delete of a database listed in the plan may break the backup schedule. You would have to re-configure the plan to make changes.

    Application Analytics: What Every Developer Should Know

    Imagine how much more efficient your developers would be if they didn’t have to guess which features were being used and which could safely be deprecated?

    What would the impact on user satisfaction be if exception data replete with usage context were delivered before your users had a chance to complain?

    How would the quality of your software improve if your test plans were aligned with actual usage patterns and user preferences in production?

    Application analytics is the branch of analytics purpose-built to make these scenarios a reality; to satisfy the “selfish interests” of application stakeholders, e.g. development, test, product owners, operations, etc.

     

    Diagram of Application Analytics cycles

    Figure 1: Application analytics enhances both development and operations by providing deep insight into application adoption and user behavior within established development and operations platforms.

    Application analytics integrates application usage data, app-centric analytics software and heuristics into development and operations.

    The diversity of today’s analytics solutions is (as it should be) customer driven. For example, web analytics’ principal customer is marketing and sales, resulting in a focus on page views, clicks and conversions. Web analytics solutions share common:

    • Objectives: monetization of web properties,
    • Requirements: analysis of visitors, impressions, clicks and conversions, and
    • Restrictions: meeting privacy and performance obligations.

    In agile parlance, application analytics encompass analytics solutions where the primary customer is one or more application development “personas” who share common objectives, requirements and restrictions.

    The Agile Manifesto states that development’s “highest priority is to satisfy the customer through early and continuous delivery of valuable software.” In that context, development’s success can only be accurately measured where users and their applications meet – at the “point of work” (or play). Application analytics provides empirical evidence of application usage and end-user behavior that, when properly integrated into a development process, provides:

    • Insight into user requirements,
    • Validation of development priorities and an
    • Objective measure of test plan accuracy and completeness

    Examples include:

    • The Microsoft customer experience improvement program (CEIP) was “created to give all Microsoft customers the ability to contribute to the design and development of Microsoft products.” CEIP collects information about how Microsoft programs are being used “in the wild”.

    The Agile Manifesto also states that “working software is the primary measure of progress.” Operations’ mission is to get the most out of today’s applications – future application iterations cannot address immediate stability, performance, user experience, or security concerns. Application analytics, when properly integrated into operations and support, provides:

    1. Application adoption and usage metrics within a specific operations framework,
    2. Production incident alerts from application exceptions, and
    3. Organizational adoption and productivity analysis connecting application investment to enterprise ROI.

    Examples include:

    • PreEmptive Analytics Community Edition that gives developers using Microsoft Visual Studio 2012 Professional the ability to create their own CEIP by allowing development and operations to identify and quickly respond to application exceptions that occur in production.

    Given these objectives, the value of application analytics seems obvious, but the details can make it difficult. Collecting, analyzing and acting on application runtime data poses unique challenges, both in terms of the kinds of data that need to be gathered and the metrics that measure success.

    Effective application analytics implementations must accommodate the diversity of today’s applications and the emergence of cloud, mobile and distributed computing platforms. The following application analytics requirements make plain why narrower analytics technologies should never be expected to fully satisfy development objectives.

    The runtime data that streams from an application is typically far more complex and heterogeneous than what is streamed from a web page or portal.

    Runtime telemetry – the variety, semantics and location of application runtime data – covers the following data types:

    • Feature: An application feature is not a click. A feature can span one or more methods, incorporate multiple components, run across runtime surfaces and even be implemented multiple times in different languages, e.g. Windows Presentation Foundation (WPF), Microsoft Silverlight and HTML5. Measuring usage and performance of an arbitrarily defined scope is required for monitoring across devices and platforms.
    • Application data: Many of today’s applications are data-driven, where the actual behavior itself is encoded in the data. Knowing what templates, workflows and other “modern” content is being processed can be more valuable than knowing which workflow or rendering engine processed that data.
    • Session: Session information can be defined differently within an app server, a mobile session, within a browser or distributed across all of the above serviced by a cloud-based service.
    • Event: Unhandled exceptions, caught and thrown exceptions, unexpected performance or suspicious user behavior can all constitute a “production event.”
    • Application: Applications are often comprised of multiple components – some on-premises and some service-based. These applications (and their components) are versioned at unpredictable cadences. Calculating the workflow across distributed applications and then reconciling this activity over time and across versions is an application analytics requirement.
    • Stack: While many applications run inside a “sandbox”, e.g. mobile, Windows Runtime, Windows Azure – many other applications have full access to the underlying OS and computing metal. Tracking screen resolution, chip manufacturers and hardware availability is often essential to understanding user experience and application behavior.
    • Identity: A user’s identity can be defined and tracked by device ID, IP address, user credentials, software license and more. Application analytics solutions must have the capacity to enforce privacy and security policies at both client and aggregate levels. Ensuring effective data governance is a precursor to effectively analyzing the resulting runtime data.
    Given the complexity, diversity and distribution of today’s production platforms, it is no longer possible to simulate production. Application analytics can fill this gap only if there is comprehensive support for today’s computing platforms.

    Runtime architectures and technologies – existing and emerging languages and platforms – fall into the following categories:

    • Architectures and surfaces: Applications are much more than a simple presentation layer and a sequence of user actions. Instrumentation must span client-server, cloud services (public and private), web servlet and mobile platforms, architectures and surfaces.
    • Languages and runtimes: Today’s applications will incorporate managed, native and scripted components including the Microsoft .NET Framework, C++, Java and JavaScript.
    For application analytics to have impact, the right information has to be delivered to the right roles at the right time and in the right context. This includes integrating the instrumentation task within the development and build process and surfacing application analytics inside the development, testing, deployment, and management phases.

    Flowchart showing five phases in sequence

    Figure 2: Five functional phases of application analytics implementation.

    Each DevOps phase maps to IDE & application lifecycle management (ALM) integration through role and use case driven scenarios:

    1. Instrumentation: Instrumentation is the logic inside an application that creates the runtime data to be analyzed. Instrumentation can be coded via an API (required for native and scripted apps) or injected post-compile into managed assemblies.
    2. Build and deploy: Apps can be built manually, as part of a continuous build process and automated to support cloud platforms. Support for the various manufacturing processes and payload formats is required to ensure efficient and scalable deployments.
    3. Runtime data management: Managing runtime data will require scale, governance, and security controls. Application runtime data management requirements shift dramatically across industry, use case and jurisdictional boundaries.
    4. Runtime data publishing: Different stakeholders require distinct presentations and analysis; developers, architects, product owners and line of business management bring different perspectives and priorities to the same underlying runtime data. Reports, dashboards, export and programmatic access are all typically required when use cases span sprint planning, customer support and business performance monitoring.
    5. Integration: Integration of application analytics into development platforms, e.g. Visual Studio and TFS, operations, e.g. operations manager, and customer relationship management, e.g. Microsoft Dynamics through reports and event scheduling delivers the “last mile” of application analytics’ productivity.

    The cure cannot be worse than the disease. For application analytics, this means that the inclusion of application analytics into development and operations cannot result in productivity, performance, security or user experience risks greater than those that it is designed to mitigate. Given the many forms and roles that today’s applications can take, this is no small task.

    Risk management restrictions cover performance, stability, privacy, and complexity:

    • Performance and stability: Collecting, caching, and transmitting runtime data efficiently across devices without performance penalties (when working properly) while not impacting application stability or user experience if/when one or more aspects of the analytics solution should fail. This can be especially challenging when considering special dependencies such as battery life, data plans, network characteristics…
    • Security and privacy: Consumer, business-to-business, and line-of-business applications each come with their own security and privacy obligations. These obligations are further fragmented by industry and by jurisdiction. Application analytics instrumentation, transport and content management must be extensible and able to enforce these requirements on an application-by-application basis.
    • Complexity: Complexity introduces waste and risk – and ultimately resistance to adoption. Integration into existing platforms, processes and methodologies is a requisite to effective application analytics implementations.

    Visual Studio 2012 includes PreEmptive Analytics for TFS Community Edition (PA for TFS CE), an application analytics solution that monitors exceptions and creates or updates work items inside Team Foundation Server (TFS) based upon user-defined thresholds.

    PA for TFS CE is designed to track unhandled exceptions on applications running on the .NET Framework and Java runtimes. Support for the five phases of application analytics is as follows:

    PreEmptive Analytics for TFS Community Edition supports each DevOps phase as follows:

    1. Instrumentation: Instrumentation is accomplished with Dotfuscator Community Edition. Unhandled exception monitoring with optional opt-in and user-feedback is supported for the .NET Framework, Silverlight, Microsoft Windows Phone and XNA applications. An API for Java applications is also available as a free download that includes support for Android. Support for native code, JavaScript and Java is provided by PreEmptive Solutions.
    2. Build and deploy: Dotfuscator Community Edition is interactive. The command line interface and MSBuild support is available with Dotfuscator Professional from PreEmptive Solutions.
    3. Runtime data management: A server-side data collector is included with Visual Studio Team Foundation Server 2012. The collector endpoint is referenced via a URL embedded into the application being monitored as part of the instrumentation phase above. The collector can sit next to the TFS server, on an entirely different server, and even inside Microsoft Windows Azure.
    4. Runtime data publishing: An aggregator service is also included with TFS in Visual Studio 2012 that polls the collector endpoint. When user-defined thresholds are met, the aggregator will create (or update) a Production Incident work item inside TFS in Visual Studio 2012.
    5. Integration: In Visual Studio 2012, TFS work items created by PA for TFS CE are tracked, assigned, prioritized and reported against as any other first-class work item type.

    Screen shot of location on Tools menu

    Figure 3: Finding PreEmptive Analytics inside Visual Studio 2012 off of the tools menu

    Screen shot showing Dotfuscator CE

    Figure 4: Instrumentation. Inside Dotfuscator CE, adding the setup attribute identifies the collector endpoint for runtime data. The endpoint can be on-premises next to a TFS server or hosted on Windows Azure remotely.

    Screen shot of Visual Studio showing integration

    Figure 5: Visual Studio 2012 integration. Production Incident work items are surfaced inside Visual Studio automatically when volume thresholds are met. In this “All Items” query, you can see the type of exception, how many exceptions of this type have been detected and on how many machines. You can see more detail on this work item below, including a stack trace as well as the work item’s assignment, prioritization and classification.

    Screen shot of example summary chart

    Figure 6: Reporting. One example of summary charts included with PA for TFS CE shows the status of all open incidents.

    In addition to enhanced TFS integration and instrumentation options, the Professional Edition of PreEmptive Analytics includes feature, session and user analytics designed to measure trends, usage patterns and user preferences throughout the lifetime of a production application.

    Use case comparison between the Community Edition and Professional:

    • Track unhandled exceptions on .NET Framework and Java applications: Community Edition and Professional
    • Automatically create and update TFS work items in Visual Studio 2012: Community Edition and Professional
    • Provide opt-in and user-feedback options at runtime: Community Edition and Professional
    • Support Visual Studio 2010: Professional only
    • Track caught and thrown exceptions: Professional only
    • Support custom data and extensible rule and work item definitions: Professional only
    • Support JavaScript and native application monitoring: Professional only
    • Measure feature and session usage: Professional only
    • Development has unique requirements that are not being met by web, BI or other non-development centered analytics solutions.
    • Application analytics offers specific capabilities designed to meet development and operation’s needs.
    • Visual Studio 2012 offers integrated application analytics “out of the box” with opportunities to extend these capabilities through integration and partner options.

    You can visit these sites to learn more:

    1. http://www.preemptive.com/pa
    2. http://www.microsoft.com/visualstudio/11/en-us/products/alm
    3. http://blogs.msdn.com/b/bharry/archive/2012/04/11/preemptive-analytics-in-visual-studio-and-tfs-11.aspx

    In need of a formal test case management system? Look no further than VS2013 Web Access (TWA)

    In Visual Studio 2012 Microsoft provided test case management and test execution capabilities in TFS Web Access. It was part of Visual Studio 2012 Update 2.

    Now in Visual Studio 2013, new capabilities and features have been added to create and modify test plans in Visual Studio 2013 Web Access (TWA).

    You don’t have to switch to Microsoft Test Manager for Test Plans, Test Suites and Shared Steps creation. The entire ‘Test Case Management’ can be done from the Web Access.

    TWA connects you to Visual Studio Team Foundation Server (TFS) or Team Foundation Service to manage source code, work items, builds, and test efforts.

    In order to access the Test tab in Web Access, we need to provide Full access to the Windows user or group who will be using the functionality.

    web-access-testing01

    Observe the Test tab. If the settings are not configured, we need to go to Access Levels from the Control Panel where we can find the 3 different Access Level settings as shown here

    access-levels

    The default option is Limited Access. We need to add the user to get Test tab by setting the option to Full.

    Now observe the Test tab (right next to the Build tab as seen in Figure 1). I had already created Test Plan and Test Suites using Microsoft Test Manager 2013. These artifacts start appearing in the Web Access under the Test tab immediately.

    The Test Plan has 3 Test Suites – Requirement based, Static and Query based. Each suite has some Test Cases. We can also see the different status of an individual test case.

    test-case-status

    Note: I am also enclosing a similar Test Plan screenshot from Microsoft Test Manager 2013 for comparing the two. Observe the categorization of test status with Microsoft Test Manager.

    microsoft-test-manager

    In case we want to customize the columns, we can do so as follows. The bracketed quantity as seen in the screenshot below shows the width of the columns. We can add certain columns from left hand side and the view will change.

    web-access-testing03

    We can even create a new Test Plan. The Test Plan can have a name, Area Path and Iteration Path. Features like configuration settings, Test Settings and Test Environment can be added with Microsoft Test Manager 2013.

    create-test-plan

    Once a new Test Plan is added, we can add Test Suites to it. Test Suites can be of three types – Requirement Based, Static or Query Based. Even Shared Steps can be added if required. Creation of any kind of Test Suites will provide option for adding new or existing test cases to the Suite.

    test-plan

    A new Test Case can either be created with normal IDE or using Grid.

    new-test-case

    We can provide all the details to the Test Case like Title, Iteration, Area, and Assigned To (ownership of Test Case). The Steps to the test case can be added as action and expected result. If the test case is testing a requirement, Tested Backlog Items tab can be selected and a link can be provided to it. Any attachments we add to the test step can be viewed inline when the test case gets executed. If the attachment is in the form of image, the test step will show the actual image. If the attachment is in the form of a file, the file name with size appears. You can also add attachments when you run test case. These can be  log files etc.

    If we select the option to create a new Test Case with the Grid, we get a screen similar to the following.

    gird-test-case

    We can add actions and expected results to each test case. Create a new Test Case by providing the title. All the test cases can be stored in Team Foundation Server at once. When we have finished creating the Test Plan, Test Suites and Test Cases, we can start executing the test cases. While running a test case, we have the option of running it the default way with Web Access or using the client (Microsoft Test Manager).

    Once we start the test runner, the test case with its steps is displayed in the left-hand pane (around one third of the screen) and the remaining area is available for the actual execution, as with Microsoft Test Manager.

    test-runner

    Once the execution starts and we encounter an error, we can create a bug, add a comment to it, or add an attachment. Once the execution is completed, we can save and close the runner. We have various options to mark the test case as Pass, Fail, Block or Not applicable. The test case can also be marked as Paused; for a paused test case, we later get Resume test as the option.

    web-access-testing20

    While creating a bug with Web Access, it is possible to add comments and attachments to the bug; however, we cannot create a rich bug. To create a rich bug we have to use Microsoft Test Manager 2013. The linked comments and attachments can be seen in the bug as follows.

    web-access-testing22

    Testing with Web Access has some limitations. Basic testing can be executed, but creating rich bugs, viewing test results, and exploratory testing require the Microsoft Test Manager 2013 client.

    Despite these limitations, being able to plan tests, manage a full test suite and execute test cases right from the lightweight, browser-based Team Web Access helps us improve the quality of software projects without leaving our familiar workspace.

    As the following illustration shows, you can access a number of features from the Home page of TWA. You switch to different views and pages by first choosing one of the context view links at the top and then one of the pages within the context view. You can switch context between teams and team projects from the project context menu toward the top-right of each page. You access the administration pages by choosing the gear (Settings) icon.

    Home page (Team Web Access)

    Important note:
    The links and pages that you have access to depend on (1) the Web Access permissions group to which you are assigned – Limited, Standard, or Full (see Change access levels) – and (2) whether the resource has been configured for your team project or team project collection. The following links appear on the Home page for the associated Web Access permissions group shown in parentheses:

    • View backlog (Full): Opens the Product Backlog page which provides access to both the product backlog and iteration or sprint pages. See Create and organize the product backlog.
    • View board (Full): Opens the Task Board page used to review progress during a sprint and update information for work performed. See Work in sprints.
    • View work items: Opens the Work Items page used to create work items and work item queries. See Query for Bugs, Tasks, or Other Work Items.
    • Request feedback (Full): Opens the Feedback Request form to invite stakeholders to provide feedback. See Request and review feedback.
    • Go to project portal: Requires that a project portal has been enabled for your team project. See Access a Team Project Portal or Process Guidance.
    • View reports: Requires reporting to be enabled for the instance of TFS. See Add a report server.
    • Open new instance of Visual Studio: Opens an instance of Visual Studio 2012 and automatically connects to the team project context selected in TWA. Requires that you have a recent version of Visual Studio installed.

     

    A look at the new features of Visual Studio 2013 – Part 1 : Git for TFS 2013

    This is a mini-series of blog posts looking at the various new features of Visual Studio 2013.

    Part 1: Git for TFS 2013

    One of the great new features of TFS 2013 is the addition of Git as a source code repository.

    Git is a Distributed Version Control System (DVCS) that has gained a lot of popularity in the past few years.  Git allows you and your team to work completely disconnected by keeping a copy of your source code locally, including all your change history.

    By doing this, you are able to commit your changes locally, do file comparisons, create branches, merge your code, and much more.  Once you are ready to share your changes with the rest of the team, you are able to push your changes to the centralized Git repository contained in your TFS Server.
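
    As a rough sketch of what that disconnected work looks like from Git’s command line (the file name and commit message below are just placeholders), everything here runs against the local repository only:

        git status                           # see what has changed locally
        git diff                             # compare working files against the last commit
        git add HomeController.cs            # stage a changed file
        git commit -m "Refactor controller"  # record the change in the local repository
        git log --oneline                    # browse local history, no server connection needed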

    Git is not replacing Team Foundation Version Control (TFVC) but it does give you another option for you and your team to use.

    So starting with TFS 2013 and Team Foundation Service, when you create a new Team Project, you are able to decide what source control repository you will use.

    Implementation

    TFS’s Git implementation is based on msysGit so this is not just a small subset of Git functionality.  If you are already used to working with Git, you should be able to get up and running quickly.

    Even though the Git commands that you will use are the same as other Git implementations, the backend is very different.  As you may know, Team Foundation Server uses SQL Server to store all of its data, and TFS Git is no exception.  This means that your backup and restore procedures won’t change whether you are using TFVC or Git.

    This implementation also means that all the integration points that you have when using TFVC are also available when using Git.  This includes Work Item associations, build integration (Continuous Integration, Gated Builds, Associated Changesets in the Build Summary), alerts, and more.

    Getting Started

    Before you get started, you have to create a Team Project that uses Git for Source Control.  To do so, you follow the same steps as you would normally follow to create a Team Project from Team Explorer, but you will now see a new step called “Specify Source Control Settings” which will allow you to pick between Team Foundation Version Control and Git.

    As described in this step: “Git is a Distributed Version Control System (DVCS) that uses a local repository to track and version files.  Changes are shared with other developers by pushing and pulling changes through a remote, shared repository.”

    Once you get through the Team Project creation wizard, you will now have a fully-working Team Project with all the great features that you are already used to (and some new ones with 2013), but instead of using TFVC you will now use Git.

    Working with Git from Visual Studio 2013

    When you open Visual Studio and connect to your Team Project, Team Explorer will look and behave differently since it is aware that you are using Git.

    This is what Team Explorer looks like when you connect to a Team Project that uses Git:

    Before you can start working with any code stored in TFS, you have to clone the repository (see highlighted link above). This creates the mapping between your local Git repository and your Git repository that is hosted by TFS.

    Clicking on the link allows you to specify the location of the server repository and initializes your local repository:
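
    The same step can also be done from Git’s command line. The server, collection and project names below are purely illustrative, but TFS 2013 exposes its Git repositories under a _git URL of this general shape:

        git clone http://tfsserver:8080/tfs/DefaultCollection/_git/MyTeamProject
        cd MyTeamProject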

    As part of your local repository initialization, you will see a hidden .git folder, and two files used by Git: .gitattributes and .gitignore.
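
    The generated .gitignore keeps build output and per-user files out of the repository. A trimmed sketch of the kind of entries you will typically find for a Visual Studio solution (the generated file is considerably longer):

        # build output
        bin/
        obj/

        # per-user settings
        *.suo
        *.user

        # NuGet packages restored at build time
        packages/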

    You can now get started with either a brand new project (in my case I don’t yet have anything in TFS) or making changes to existing projects.

    When you are done with your changes, you are able to access features similar to what you would get when working with TFVC, except that the workflow changes.  Git expects you to commit locally at least once and then push those changes to the server.

    Since your commits don’t affect the rest of your team until you push them to the server, you should feel encouraged to commit often since Git makes branching, merging, and rollbacks a very trivial process.
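
    For instance, undoing a commit that only exists locally is straightforward from the command line; the hash below is hypothetical:

        git log --oneline        # find the commit to undo, e.g. a1b2c3d
        git revert a1b2c3d       # create a new commit that reverses it
        git reset --hard HEAD~1  # or drop the last local commit entirely (only safe before pushing)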

    When you are ready to commit, you are taken to the  “Changes” tab in Team Explorer.  Here, you can enter a comment, select your included changes, and associate your commit to a Work Item:

    You can create associations in two different ways.  You can either select “Add Work Item by ID”, the way you are probably used to doing it, or, as part of your comment, you can enter the Work Item ID prepended with a hash sign.

    For example, in my comment above, I’m associating my commit with Work Item ID 2 by entering my comment like this: “Created new project #2”.  One of the reasons for this feature is that in some cases your team members may need to access your source control from outside of Visual Studio, for example by using Git’s command-line support, and this still lets them create associations with your TFS Work Items.
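
    The same association works from a plain command-line commit, because the #ID is read from the commit message once the commit reaches TFS; the ID matches the example above:

        git commit -m "Created new project #2"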

    You can now push your changes to your server by accessing the Unsynced Commits tab in Team Explorer. You can access it from the link in the Changes tab or from your Team Explorer home.  When you get there, you will see a list of all your local commits.

    Before pushing your changes to the server, you should perform a Pull operation, which brings down changes made by the rest of your team and lets you resolve any conflicts with them.  Once those conflicts are resolved, you can proceed with a Push operation, which will move all your local commits to the centralized repository.
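
    From the command line the equivalent sequence is a pull followed by a push; origin and master are the defaults created by the clone, but your remote and branch names may differ:

        git pull origin master   # fetch and merge the team's changes, resolving any conflicts
        git push origin master   # then move the local commits to the centralized repository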

    Once you push your changes to the server, you are able to view history by selecting the “View History” option from the “Actions” dropdown in the Unsynced Commits tab.  Double-clicking one of the commits in the history window brings up the Commit Details, which show the commit comment, the related Work Items (remember that I used the hash sign to associate), and the files that were affected.

    Branching

    One of the most important features of Git is its ability to painlessly create branches and merge between your branches.

    Since these operations are performed locally, you don’t need any special permissions to perform them in your local repository.  Team Explorer gives you a whole section dedicated just to branching:

    From here you can create new branches, merge between them, view unsynced changes in each branch, and publish your local branches to the server.

    Creating a Branch:

    Merging between branches:
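
    The Team Explorer actions above correspond roughly to the following commands; the branch name is just an example:

        git checkout -b feature/login   # create and switch to a topic branch
        # ...commit work on the branch locally...
        git checkout master             # switch back to the main branch
        git merge feature/login         # merge the topic branch in
        git push origin feature/login   # publish the branch so the team can see it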

    Build Integration

    Just like when you are using Team Foundation Version Control, you are able to get full integration with the TFS automated build system.  When setting up a Build Definition, you are able to specify your repository name, the Branch that should be used to pull code from, and you are even able to pull code from a Git repository outside of TFS.

    The Build Process Arguments are also a little different since you are now dealing with a different structure than you would when connecting to a TFVC repository, but once you configure the build, all build operations work the same as they would in prior versions of TFS.

    This is an awesome new addition to TFS, which will give you and your team another option for source control.  If you have team members working outside of Visual Studio, they are able to connect to Git from their favorite Git plugin and collaborate with the rest of the team.

    The greatest part is that all the integrations that we all love about TFS are still there, so you get all the great features of Git while being able to collaborate with the rest of the team using Work Item Tracking, Build Integration, Microsoft Test Manager, and more.

    Now go out and give it a try!