Category Archives: Testing

How To : Automate a Test Case in Microsoft Test Manager & Set Up Automated Build-Deploy-Test Workflows

To automate a test case, link it to a coded test method. You can link any unit test, coded UI test, or generic test to a test case. You’ll want to link a test method that performs the test described by the test case. Typically these are integration tests.
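For instance, a linked method might be a small integration test like the following minimal sketch (the OrderService class and its behavior are hypothetical placeholders I made up for illustration, not part of any product API):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Hypothetical system under test, stubbed so the sample compiles standalone.
    public class OrderService
    {
        public int CreateOrder(string customer, int quantity)
        {
            return 42; // a real service would persist the order and return its id
        }
    }

    [TestClass]
    public class OrderIntegrationTests
    {
        // An integration-style method that performs the steps the manual
        // test case describes; this is the kind of method you link to the case.
        [TestMethod]
        public void CreatingAnOrderReturnsAValidOrderId()
        {
            var service = new OrderService();
            int orderId = service.CreateOrder("Contoso", 5);
            Assert.IsTrue(orderId > 0, "A saved order should receive a positive id.");
        }
    }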

The results of automated and manual tests appear together. If the test cases are linked to backlog items, stories, or other requirements, you can review the test results by requirement.

You can make links one at a time, or you can generate test cases from an assembly of test classes.

  1. Using Visual Studio, create or choose a test method. It can be an ordinary test method, a coded UI test, an ordered test, or a generic test method.

    Check the method into Team Foundation Server.

    Keep the solution open in Visual Studio.

  2. Open the test case in Visual Studio.
    Open Test Case Using Microsoft Visual Studio
  3. Associate the test method with your test case.
    Associate Automation With Test Case

    If you want to change or delete the association later, choose Remove Association.

We don’t recommend linking load tests or web tests to test cases.

  1. Open a Developer Command Prompt, and change directory to the output directory of your Visual Studio solution.

    cd MySolution\MyProject\bin\Debug

  2. To import all the test methods from the solution:

    tcm testcase /collection:CollectionUrl /teamproject:MyProject /import /storage:MyAssembly.dll /category:"MyIntegrationTestCategory"

    The category parameter is optional but recommended. You only want to create test cases from integration or system tests, which you can mark by using the [TestCategory("category")] attribute, as shown in the sketch after this list.

  3. In the Test hub in Team Web Access or in Microsoft Test Manager, use Add Existing to add the test cases to a test suite.
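To illustrate the category marking from step 2, here is a minimal sketch of an integration test tagged so that the /category filter above will pick it up (the class and method names are hypothetical):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class PaymentIntegrationTests
    {
        // Only methods tagged with this category are imported by the
        // tcm /category:"MyIntegrationTestCategory" filter shown above.
        [TestMethod]
        [TestCategory("MyIntegrationTestCategory")]
        public void ProcessingAPaymentSucceeds()
        {
            Assert.IsTrue(true); // placeholder body for the sketch
        }
    }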

Provide the build location so that the test method can be found.

  1. In Microsoft Test Manager, choose Testing Center, Plan, Properties.
  2. Under Builds, set Filter for builds. You can set the build definition and quality attribute of the builds you want to choose from.
  3. Choose Modify to assign a build to the test plan. You can compare your current build with a build you plan to take. The associated items list shows the changes to work items between the builds. You can then assign the latest build to take and use for testing with this plan. For more information, see What development has been done since a previous build?.
Q: I’m not using Team Foundation Build to build my application and tests. How can I run automated lab tests?
Create a build definition that contains just the location where your assemblies are shared. Then create a fake instance of this build from the developer command prompt:

TfsCreateBuild.exe /collection:http://tfsservername:8080/tfs/collectionname /project:projectname /builddefinition:"MyBuildDefinition" /buildnumber:"FakeBuild_1.0"

Specify the build definition in your test plan.

To run your automated tests using Microsoft Test Manager, you must use a lab environment. It must have roles for each of the client and server machines used in your tests. (If you’ve used lab environments for manual tests, notice that automated tests must have a machine for the client role.)

  1. Create or choose either a standard lab environment or an SCVMM lab environment.

    If you create a new environment, choose a machine for each role.

    The machines tab in the new environment wizard.

    If you’re planning to run coded UI tests, configure this on the Advanced page of the wizard. This sets the test agent to run as a user. You have to supply a user name under which the agent will run.

    We recommend that you use a different user account than the lab service account used by the test controller.

    The advanced tab in the new environment wizard.
  2. Set the test plan to use your environment for automated tests.
    Automation on test plan properties
  3. If you want to collect more than the basic diagnostic data from the test machines, create a test settings file.
    New test settings

    In the test settings wizard, choose the data you want to collect for each machine.

    Select diagnostics for each machine role

Start automated tests the same way you do manual tests.

In Microsoft Test Manager, choose Testing Center, Test. Then select a test suite or an individual test and choose Run.

If you want to run a test in a different environment or with different test settings, choose Run with Options.

If you want to run an automated test manually, choose Run with Options.

If you have multiple build configurations, the test assemblies to run the automated tests are searched for recursively from the root directory of the build drop folder. If it is important which assemblies are selected when you run your automated tests, you should use Run with options to specify the build configuration.

  1. In Microsoft Test Manager, choose Testing Center, Test, Analyze Test Runs.
  2. Double-click a test run to open it and view the details. You can:
    • Update the title of the test run to reflect the outcome.
    • Choose Resolution to indicate a reason, if the test failed.
    • Add comments.
    • View the details of an individual test.
    • Create a bug.
Q: Can I generate the test method from a manual run of the test case?
Yes. See Verifying Code by Using UI Automation (various posts about Coded UI test automation can be found on my blog).

Q: I want my automated test to repeat with different data. Do I use the same test parameters that the manual version of the test case uses?
To make the automated test iterate over different data, write that into the code of the test method.

Test parameters are only used with the manual version of the test. They aren’t visible to the automated test code.
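For instance, MSTest's data-driven support is one way to write that iteration into the test method itself. A minimal sketch, assuming a hypothetical TestData.csv with columns A, B, and Sum deployed next to the test:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class DataDrivenTests
    {
        // Populated by the test framework; DataRow exposes the current row.
        public TestContext TestContext { get; set; }

        // The method runs once per row in the hypothetical TestData.csv.
        [TestMethod]
        [DeploymentItem("TestData.csv")]
        [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
                    "|DataDirectory|\\TestData.csv", "TestData#csv",
                    DataAccessMethod.Sequential)]
        public void AddingTwoNumbersMatchesExpectedSum()
        {
            int a = int.Parse(TestContext.DataRow["A"].ToString());
            int b = int.Parse(TestContext.DataRow["B"].ToString());
            int expected = int.Parse(TestContext.DataRow["Sum"].ToString());

            Assert.AreEqual(expected, a + b);
        }
    }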

Automated build-deploy-test workflows

You can use a build-deploy-test workflow on Team Foundation Server to deploy and test your application when you run a build. This lets you schedule and run the build, deployment, and testing of your application with one build process. Build-deploy-test workflows work with Lab Management to deploy your applications to a lab environment and run tests on them as part of the build process.

If your lab environment is an SCVMM environment, you can also use workflows to create and restore snapshots that automatically give you a clean environment before you run tests, and to save the state of your environment when a test fails. This ensures that each test isn’t influenced by changes to the lab environment from previous test runs. In addition, it ensures that testers can accurately reproduce the state of a lab environment when they reproduce bugs.

Requirements

  • Visual Studio Ultimate, Visual Studio Premium, Visual Studio Test Professional

You can use a build-deploy-test workflow in the following scenarios:

Tip
Build, or Build and Test: If you are building your application in a drop folder without deploying it to a lab environment, then you can use the default build process template. For more information, see Use the Default Template for your build process. If you also want to test your application without deploying it, see Run tests in your build process.
  • Build, Deploy, and Test − Build your application, then deploy it and run tests on it in a lab environment. This workflow enables you to run a series of tests from a test plan, on a deployed application, as part of your build process. This scenario is common when running build verification tests.
  • Deploy and Test − This scenario is similar to the “build, deploy, and test” scenario, except a new build isn’t created during the workflow. Instead, the workflow uses an existing build from a drop folder.
  • Deploy Only – Deploy an existing build from a drop folder to a lab environment without running automated tests during the workflow. Once a build has passed your build verification tests, and is ready to be sent to a test team, you might want to send that build to the test team so they can run additional tests that aren’t part of your workflow. This scenario is common when running manual tests.
  • Build and Deploy – This scenario is similar to the “deploy only” scenario, except a new build is created during the workflow.

A build-deploy-test workflow is a Windows Workflow file that defines how a build definition will run a build, deploy an application, and run tests. A build-deploy-test workflow is created in a build definition by choosing a build process template called the lab default template (LabDefaultTemplate.11.xaml), and configuring the settings.

You can also create a customized build process template for your workflow depending on your requirements. You configure your build definition after you set up your build machine, test machines, and lab environments.

The deployment settings in a build-deploy-test workflow define how an application is deployed by specifying the deployment scripts to run in your lab environment. You can specify a lab management role to run each deployment script on, or you can specify a machine in your lab environment.

Creating deployment scripts is a major part of setting up build-deploy-test workflows. Deployment scripts copy files from your build to your lab environment, and then run your installation packages.

The following diagram describes how a build is deployed by a build-deploy-test workflow:

Dataflow for deployment scripts.

The following steps are displayed in the diagram above.

  1. The build-deploy-test workflow starts a build, and then gets the deployment scripts.
  2. The build definition copies the build files to the drop location.
  3. The workflow runs each deployment script in the working directory of the specific machine or machine role that the script is assigned to.
  4. Each deployment script retrieves build files from the drop location.
  5. Each deployment script copies or installs the specified build files onto machines in the lab environment.

A Look at : ‘Kaizen’ and its Philosophy in Kanban


Kaizen, or rapid improvement processes, is often considered to be the “building block” of all lean production methods. Kaizen focuses on eliminating waste, improving productivity, and achieving sustained continual improvement in targeted activities and processes of an organization.

Lean production is founded on the idea of kaizen – or continual improvement. This philosophy implies that small, incremental changes routinely applied and sustained over a long period result in significant improvements. The kaizen strategy aims to involve workers from multiple functions and levels in the organization in working together to address a problem or improve a process.

The team uses analytical techniques, such as value stream mapping and “the 5 whys”, to identify opportunities quickly to eliminate waste in a targeted process or production area. The team works to implement chosen improvements rapidly (often within 72 hours of initiating the kaizen event), typically focusing on solutions that do not involve large capital outlays.

Periodic follow-up events aim to ensure that the improvements from the kaizen “blitz” are sustained over time. Kaizen can be used as an analytical method for implementing several other lean methods, including conversions to cellular manufacturing and just-in-time production systems.


Method and Implementation Approach

Rapid continual improvement processes typically require an organization to foster a culture where employees are empowered to identify and solve problems. Most organizations implementing kaizen-type improvement processes have established methods and ground rules that are well communicated in the organization and reinforced through training. The basic steps for implementing a kaizen “event” are outlined below, although organizations typically adapt and sequence these activities to work effectively in their unique circumstances.

Phase 1: Planning and Preparation. The first challenge is to identify an appropriate target area for a rapid improvement event. Such areas might include: areas with substantial work-in-progress; an administrative process or production area where significant bottlenecks or delays occur; areas where everything is a “mess” and/or quality or performance does not meet customer expectations; and/or areas that have significant market or financial impact (i.e., the most “value added” activities).

Once a suitable production process, administrative process, or area in a factory is selected, a more specific “waste elimination” problem within that area is chosen for the focus of the kaizen event (i.e., the specific problem that needs improvement, such as lead time reduction, quality improvement, or production yield improvement). Once the problem area is chosen, managers typically assemble a cross-functional team of employees.

It is important for teams to involve workers from the targeted administrative or production process area, although individuals with “fresh perspectives” may sometimes supplement the team. Team members should all be familiar with the organization’s rapid improvement process or receive training on it prior to the “event”. Kaizen events are generally organized to last between one day and seven days, depending on the scale of the targeted process and problem. Team members are expected to shed most of their operational responsibilities during this period, so that they can focus on the kaizen event.

Phase 2: Implementation. The team first works to develop a clear understanding of the “current state” of the targeted process so that all team members have a similar understanding of the problem they are working to solve. Two techniques are commonly used to define the current state and identify manufacturing wastes:

  • Five Whys. Toyota developed the practice of asking “why” five times and answering it each time to uncover the root cause of a problem. An example, “Repeating ‘Why’ Five Times,” is shown below.
    1. Why did the machine stop?
      There was an overload, and the fuse blew.
    2. Why was there an overload?
      The bearing was not sufficiently lubricated.
    3. Why was it not lubricated sufficiently?
      The lubrication pump was not pumping sufficiently.
    4. Why was it not pumping sufficiently?
      The shaft of the pump was worn and rattling.
    5. Why was the shaft worn out?
      There was no strainer attached, and metal scrap got in.
  • Value Stream Mapping. This technique involves flowcharting the steps, activities, material flows, communications, and other process elements that are involved with a process or transformation (e.g., transformation of raw materials into a finished product, completion of an administrative process). Value stream mapping helps an organization identify the non-value-adding elements in a targeted process. This technique is similar to process mapping, which is frequently used to support pollution prevention planning in organizations. In some cases, value stream mapping can be used in phase 1 to identify areas for which to target kaizen events.

During the kaizen event, it is typically necessary to collect information on the targeted process, such as measurements of overall product quality; scrap rate and source of scrap; a routing of products; total product distance traveled; total square feet occupied by necessary equipment; number and frequency of changeovers; source of bottlenecks; amount of work-in-progress; and amount of staffing for specific tasks. Team members are assigned specific roles for research and analysis. As more information is gathered, team members add detail to value stream maps of the process and conduct time studies of relevant operations (e.g., takt time, lead-time).

Once data is gathered, it is analyzed and assessed to find areas for improvement. Team members identify and record all observed waste, by asking what the goal of the process is and whether each step or element adds value towards meeting this goal. Once waste, or non-value added activity, is identified and measured, team members then brainstorm improvement options. Ideas are often tested on the shopfloor or in process “mock-ups”. Ideas deemed most promising are selected and implemented. To fully realize the benefits of the kaizen event, team members should observe and record new cycle times, and calculate overall savings from eliminated waste, operator motion, part conveyance, square footage utilized, and throughput time.

Phase 3: Follow-up. A key part of a kaizen event is the follow-up activity that aims to ensure that improvements are sustained, and not just temporary. Following the kaizen event, team members routinely track key performance measures (i.e., metrics) to document the improvement gains. Metrics often include lead and cycle times, process defect rates, movement required, and square footage utilized, although the metrics vary when the targeted process is an administrative process. Follow-up events are sometimes scheduled at 30 and 90 days following the initial kaizen event to assess performance and identify follow-up modifications that may be necessary to sustain the improvements. As part of this follow-up, personnel involved in the targeted process are tapped for feedback and suggestions. As discussed under the 5S method, visual feedback on process performance is often logged on scoreboards that are visible to all employees.


Implications for Environmental Performance

Potential Benefits:
  • At its core, kaizen represents a process of continuous improvement that creates a sustained focus on eliminating all forms of waste from a targeted process. The resulting continual improvement culture and process is typically very similar to those sought under environmental management systems (EMS), ISO 14001, and pollution prevention programs. An advantage of kaizen is that it involves workers from multiple functions who may have a role in a given process, and strongly encourages them to participate in waste reduction activities. Workers close to a particular process often have suggestions and insights that can be tapped about ways to improve the process and reduce waste. Organizations have found, however, that it is often difficult to sustain employee involvement and commitment to continual improvement activities (e.g., P2, environmental management) that are not necessarily perceived to be directly related to core operations. In some cases, kaizen may provide a vehicle for engaging broad-based organizational participation in continual improvement activities that target, in part, physical wastes and environmental impacts.
  • Kaizen can be a powerful tool for uncovering hidden wastes or waste-generating activities and eliminating them.
  • Kaizen focuses on waste elimination activities that optimize existing processes and that can be accomplished quickly without significant capital investment. This creates a higher likelihood of quick, sustained results.
Potential Shortcomings:
  • Failure to involve environmental personnel in a quick kaizen event can potentially result in changes that do not satisfy applicable environmental regulatory requirements (e.g., waste handling requirements, permitting requirements). Care should be taken to consult with environmental staff regarding changes made to environmentally sensitive processes.
  • Failure to incorporate environmental considerations into kaizen can potentially result in solutions that do not consider inherent environmental risk associated with new processes. For example, an organization might select a change in process chemistry that addresses one improvement need (e.g., product quality, process cycle time) but that might be sub-optimal if the organization considered the material hazards or toxicity and the associated chemical and hazardous waste management obligations.
  • By not explicitly incorporating environmental considerations into kaizen, valuable pollution prevention and sustainability opportunities may be disregarded. For example, an evident opportunity to conserve water resources may not be explored if water use is not expensive and therefore not considered a wasteful expense that needs to be addressed. Similarly, including environmental considerations in the kaizen event goals can lead to solutions that rely less on hazardous materials or that create less hazardous wastes.

Useful Resources

Productivity Press Development Team. Kaizen for the Shopfloor (Portland, Oregon: Productivity Press, 2002).

Soltero, Conrad and Gregory Waldrip. “Using Kaizen to Reduce Waste and Prevent Pollution.” Environmental Quality Management (Spring 2002), 23-37.

How To : Create, Edit and Maintain a Coded UI Test for a Silverlight Application

Using the Microsoft Visual Studio 2013 Coded UI Test plugin for Silverlight, you can create Coded UI Tests or action recordings for Silverlight 5.0 applications.

Using Microsoft Visual Studio 2010 Feature Pack 2, you can create coded UI tests or action recordings for Silverlight 4 applications. Action recordings let you fast forward through steps in a manual test. For more information about action recordings or coded UI tests, see How to: Create an Action Recording or How to: Create a Coded UI Test.

In this walkthrough, you will learn the procedures that are required to test a Silverlight control in a Silverlight-based application. The walkthrough takes you through the following procedures: preparing the walkthrough, adding the SilverlightUIAutomationHelper.dll to your Silverlight project, and creating and running a coded UI test on the RIAServicesExample application.

Prerequisites

 

For this walkthrough you will need:

To prepare the walkthrough

  1. Verify that you have the Silverlight 4 developer runtime, available at Silverlight Developer 4 for Developers.

  2. Verify that you have completed the procedures in Walkthrough: Creating a RIA Services Solution.

    The result will be a simple Silverlight application that uses a Silverlight grid control. Later, you will use the grid control in this walkthrough and perform coded UI tests on it.

  3. For more information about supported and unsupported Silverlight controls, see How to: Set Up Your Silverlight Application for Testing.

  4. With the RIAServicesExample you created in Walkthrough: Creating a RIA Services Solution running, copy the address of the Web application to the clipboard or a notepad file. For example, the address might resemble this: http://localhost: <port number>/RIAServicesExampleTestPage.aspx.

Add the SilverlightUIAutomationHelper.dll to Your Silverlight 4 Project

 

To test your Silverlight applications, you must add Microsoft.VisualStudio.TestTools.UITest.Extension.SilverlightUIAutomationHelper.dll as a reference to your Silverlight 4 application so that the Silverlight controls can be identified. This helper assembly instruments your Silverlight application so that information about a control is available to the Silverlight plugin API used by your coded UI test or action recording. This assembly cannot be redistributed. Therefore, you must add this reference conditionally when you build the application; that way, the assembly is not redistributed when you deploy your software to a customer.

To add the SilverlightUIAutomationHelper.dll

  1. For each Silverlight project in your solution that you want to test, you must add the SilverlightUIAutomationHelper.dll. In Solution Explorer, right-click the RIAServicesExample project and select Unload Project.

    The project is displayed in Solution Explorer as RIAServicesExample (unavailable).

  2. Right-click the project again and then click Edit RIAServicesExample.csproj.

    The RIAServicesExample.csproj file is opened in the Code Editor. You will see <PropertyGroup> nodes followed by <ItemGroup> nodes. You must make the following two modifications:

    1. To set the production condition, add the following entry to the first <PropertyGroup> node:

       
      <Production Condition="'$(Production)'==''">False</Production>
      
    2. To add the DLL when the build is not a production build, insert the following <Choose> node after the <PropertyGroup> nodes, but before the <ItemGroup> nodes:

       
      <Choose>
        <When Condition=" '$(Production)'=='False' ">
          <ItemGroup>
            <Reference Include="Microsoft.VisualStudio.TestTools.UITest.Extension.SilverlightUIAutomationHelper">
            </Reference>
          </ItemGroup>
        </When>
      </Choose>
      
  3. To save the file, click Save.

  4. To reload these changes, right-click the server project and then click Reload Project.

    Caution

    If you have multiple Silverlight projects that you want to test, you must follow these steps for each project.

    Important

    To remove the SilverlightUIAutomationHelper.dll so that it is not redistributed with your production code, set the production condition value to true in the first <PropertyGroup> node. In this manner, the DLL is no longer added as a reference by the Choose node that you added to the project in the previous procedure. You can also set an environment variable named Production to the value True. Then you can use MSBuild to build the Silverlight project and remove the SilverlightUIAutomationHelper.dll.
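    For example (a sketch; adjust the project path to your own solution layout), you could produce a production build from a developer command prompt by passing the Production property directly to MSBuild:

        msbuild RIAServicesExample.csproj /p:Production=True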

Create a Coded UI Test for RIAServicesExample Silverlight Application

 

To Create a Coded UI Test

  1. In Solution Explorer, right-click the solution, click Add and then select New Project.

    The Add New Project dialog box appears.

  2. In the Installed Templates pane, expand either Visual C# or Visual Basic, and then select Test.

  3. In the middle pane, select the Test Project template.

  4. Click OK.

    In Solution Explorer, the new test project named TestProject1 is added to your solution. Either the UnitTest1.cs or UnitTest1.vb file appears in the Code Editor. You can close the UnitTest1 file because it is not used in this walkthrough.

  5. In Solution Explorer, right-click TestProject1, click Add and then select Coded UI test.

    The Generate Code for Coded UI Test dialog box appears.

  6. Select the Record actions, edit UI map or add assertions option and then click OK.

    The UIMap – Coded UI Test Builder appears.

    For more information about the options in the dialog box, see How to: Create a Coded UI Test.

  7. Click Start Recording on the UIMap – Coded UI Test Builder. In several seconds, the Coded UI Test Builder will be ready.

    Start recording UI

  8. Launch Internet Explorer.

  9. In Internet Explorer’s address bar, enter the address of the Web application that you copied in a previous procedure. For example:

    http://localhost: <port number>/RIAServicesExampleTestPage.aspx

  10. Click one or two of the column headers to sort the data.

  11. Close Internet Explorer.

  12. On the UIMap – Coded UI Test Builder, click Generate Code.

  13. In Method Name, type SimpleSilverlightAppTest and then click Add and Generate. In several seconds, the coded UI test appears and is added to the solution.

  14. Close the UIMap – Coded UI Test Builder.

    The CodedUITest1.cs file appears in the Code Editor.

    Note

    You can assign a unique automation property based on the type of Silverlight control in your application. For more information, see Set a Unique Automation Property for Silverlight Controls for Testing.
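For orientation, the generated file typically has a shape like the following trimmed sketch; the UIMap class and the SimpleSilverlightAppTest method are produced by the Coded UI Test Builder from your recording, so treat this as illustrative rather than exact:

    using Microsoft.VisualStudio.TestTools.UITesting;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [CodedUITest]
    public class CodedUITest1
    {
        [TestMethod]
        public void CodedUITestMethod1()
        {
            // Replays the recorded actions: open the page and sort the grid columns.
            this.UIMap.SimpleSilverlightAppTest();
        }

        // Lazily creates the designer-generated UIMap that holds the recording.
        public UIMap UIMap
        {
            get
            {
                if (this.map == null)
                {
                    this.map = new UIMap();
                }
                return this.map;
            }
        }

        private UIMap map;
    }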

Run the Coded UI Test on the RIAServicesExample Silverlight Application

 

To run the coded UI test

  • On the Test menu, select Windows and then click Test View. In Test View, select CodedUITestMethod1 under the Test Name column and then click Run Selection in the toolbar.

    The coded UI test should successfully run using the Silverlight data grid control.

How To : Design the Physical Architecture to Support Collaborative Development and ALM of a SharePoint Foundation 2010 Application

Introduction

This article explains the physical architecture that best fits collaborative development and ALM of a SharePoint Foundation 2010 application, the servers and tools that are needed, and how they play key roles in ALM of SharePoint Foundation 2010. The purpose of this article is to provide an overall understanding of the various servers and farms connected to each other in SharePoint Foundation.

Background

A basic understanding of the different server operating systems & SharePoint Foundation 2010 is required.

Solution

Application Life-cycle Management (ALM) is the co-ordination of development life-cycle activities—including requirements, modeling, development, build, and testing. Recently, ALM has expanded beyond the application and the software development life cycle to also include business solution governance, infrastructure management, operations, and support.

You can use ALM to help align your organization in the context of a software solution in business, development, and operations. With an application development platform that supports ALM, you can provide integration between the various tools used and activities performed within each of these capabilities.

There are four main types of staging servers, plus a standalone developer’s environment, that play a key role in ALM of a SharePoint 2010 application:

  1. Development SharePoint Farm
  2. Team Foundation Server
  3. Integration/Testing Farm
  4. Production Farm
    +
    Developer’s Workstation

The figure below shows a physical architecture that depicts how each server is interconnected to support collaborative development and ALM for a SharePoint Foundation 2010 application:


Development SharePoint Farm

A SharePoint farm is fundamentally a collection of SharePoint role servers that provide for the base infrastructure required to house SharePoint sites. The farm level is the highest level of SharePoint architecture, providing a distinct operational boundary for a SharePoint environment. Each farm in an environment is a self-encompassing unit made up of one or more servers, such as web servers, service application servers, and SharePoint database servers.

A SharePoint development farm is needed because developers in an organization that makes heavy use of SharePoint often need environments to test new applications, web parts, solutions, and other SharePoint customizations. These developers often need a sandbox area where farm-level features and solutions can be tested.

I have considered a two-tier topology for the SharePoint Foundation 2010 farm. However, it will be entirely based on the needs of your application. If your application is a relatively small intranet application, you can choose a single-tier topology; if you are going to integrate another search server with Foundation, you can choose a three-tier topology with an application server as the middle tier (remember that SharePoint Foundation 2010 doesn’t include enterprise search). It may make sense to deploy one or more development farms so that developers have the opportunity to run their tests and develop software for SharePoint independent of the existing production environment.

There are basically two types of servers included in a two-tier development farm of SharePoint Foundation 2010:

  1. Web server
  2. Content database server

In the above figure, there are three front-end web servers and one SharePoint content database server. However, you can choose a single front-end web server connected to the content database server, based on your application's needs and the architecture of your production environment. All web servers share the same content database. This is called a two-tier deployment farm, where the SharePoint server components and the content database are installed on separate servers. As I mentioned before, you can choose a one-tier, two-tier or three-tier deployment topology based on your application architecture and the topology of the production architecture.

Each web server has SharePoint Foundation 2010 and the SharePoint extension for TFS 2010 installed on it. It needs the SharePoint extension for TFS 2010 to connect with Team Foundation Server for source control, build management & project management.

Advantages of the Development SharePoint Farm:

  1. Single place where SharePoint Admin can integrate all the final artifacts from multiple developers.
  2. Developers can sync with the latest SharePoint site on their standalone developer workstations.
  3. Admin can easily approve artifacts and migrate to integration server.
  4. It is a unit testing environment for developers where they can test dependent functionality or farm level features.

Team Foundation Server

Team Foundation Server plays a key role in ALM; it provides source control, build management and work item tracking. You can have TFS installed on the same server that hosts the content database, but if you are going to use the build management features of TFS, it is advisable to have a separate Team Foundation Server because it utilizes the CPU intensively when it processes builds.

As per the above figure, there is a separate Team Foundation Server, connected to the SharePoint farm as well as the standalone development workstations, so that it can provide source control for customized content as well as developers’ artifacts and resources.

Advantages of TFS
  1. Source control for SharePoint artifacts and customization
  2. Build management for SharePoint
  3. Work item and bug tracking tool for SharePoint
  4. Admin console for all management activity
  5. Easy integration with SharePoint foundation server and VS 2010
  6. Easy check-in & check-out
  7. Web based console to manage ALM activity

Developer’s Workstation

As per the above figure, the developers’ environment includes two developer workstations. In practice, you can have as many workstations as your development team size requires.

Each developer workstation should have a Windows 7 or Windows Vista operating system with a standalone SharePoint Foundation server and a local content database, so that one developer’s work doesn’t affect another’s and each developer can debug artifacts locally.

Each developer workstation will have the following installed:

  1. Windows 7 or Windows Vista 64-bit OS
  2. Standalone SharePoint Foundation Server 2010
  3. SharePoint Designer 2010
  4. Visual Studio 2010 (connected to TFS)

Each developer workstation should be connected to Team Foundation Server 2010 so that when a developer completes an artifact, he can check it in to TFS, and other developers can take the latest code from TFS if needed. This way, parallel development can happen without affecting other developers’ work.

Integration/Testing Farm

Any production SharePoint environment should have a test environment in which new SharePoint web parts, solutions, service packs, patches, and add-ons can be tested. It is critical to deploy test farms, because many SharePoint add-ons could potentially disrupt or corrupt the formatting or structure of a production environment, and trying to test these new solutions on site collections or different web applications is not enough because the solutions often install directly on the SharePoint servers themselves. If there is an issue, the issue will be reflected in the entire farm.

Integration or testing server farm should be similar to the existing environments, with the same add-ons and solutions installed and should ideally include restores of production site collections to make it as similar as possible to the existing production environment. All changes and new products or solutions installed into an environment should subsequently be tested first in this environment.

Integration/testing servers will have the final SharePoint sites and site collections as per the business requirements. QA will test all the business functionality here. Customers can also do their user acceptance test (UAT) before going live on the production server.

After the user acceptance test passes, all the sites & site collections will be deployed on the production server.

Advantages of the integration/testing server:

  1. Clean environments and same physical architecture as production
  2. QA can test all dependent business functionality at one place
  3. Customer can participate in UAT
  4. Easy deployment/migration from integration testing server to production server

Production Farm

The final stage is rolling your farm into a production environment. At this stage, you will have incorporated the necessary solution and infrastructure adjustments that were identified during the user acceptance test stage. These servers are generally in the customer’s premises. Development team and testing team do not have control over it.

There are various 3rd party tools available in the market for SharePoint data protection, administration, migration, compliance and integration.


Summary

In this way, you can design a physical architecture where the development SharePoint farm and developers’ workstations are integrated with TFS 2010. TFS and the content database are connected to the testing server or testing farm, where all the artifacts and content will be integrated for QA and UAT. Finally, after UAT, everything will be deployed on the production farm.

You can use VMs (virtual machines) for all the servers and workstations for an effective infrastructure, because if a server crashes for some reason, you can quickly create a new VM with the needed OS from images.

Note: In the above figure, the integration/testing farm and the production farm are shown as single servers just for clear understanding, but in reality they will be as large as the development farm, with a number of front-end web servers and a content database server. All the servers run Windows Server 2008 R2 SP2 64-bit. Please visit here for more information on hardware & software requirements for SharePoint Foundation 2010.

How Microsoft’s Research Team is making Testing and the use of Pex & Moles Fun and Interesting

Try it out on the web

Go to www.pex4fun.com, and click Learn to start tutorials.

Main Publication to cite

Nikolai Tillmann, Jonathan De Halleux, Tao Xie, Sumit Gulwani, and Judith Bishop, Teaching and Learning Programming and Software Engineering via Interactive Gaming, in Proc. 35th International Conference on Software Engineering (ICSE 2013), Software Engineering Education (SEE), May 2013

 

Massive Open Online Courses (MOOCs) have recently gained high popularity among various universities and even in global societies. A critical factor for their success in teaching and learning effectiveness is assignment grading. Traditional ways of assignment grading are not scalable and do not give timely or interactive feedback to students.

 

To address these issues, we present an interactive-gaming-based teaching and learning platform called Pex4Fun. Pex4Fun is a browser-based teaching and learning environment targeting teachers and students for introductory to advanced programming or software engineering courses. At the core of the platform is an automated grading engine based on symbolic execution.

 

In Pex4Fun, teachers can create virtual classrooms, customize existing courses, and publish new learning material including learning games. Pex4Fun was released to the public in June 2010 and since then the number of attempts made by users to solve games has reached over one million.

 

Our work on Pex4Fun illustrates that a sophisticated software engineering technique – automated test generation – can be successfully used to underpin automatic grading in an online programming system that can scale to hundreds of thousands of users.

 

 

Code Hunt is an educational coding game.

Play: win levels, earn points!

Analyze with the capture code button

Code Hunt is a game! The player, the code hunter, has to discover missing code fragments. The player wins points for each level won, with an extra bonus for elegant solutions.

Code in Java or C#

Discover a code fragment

Play in Java or C#… or in both! Code Hunt allows you to play in those two curly-brace languages. Code Hunt provides a rich editing experience with syntax coloring, squiggles, search and keyboard shortcuts.

Learn algorithms

Discover a code fragment

As players progress through the sectors, they learn about arithmetic operators, conditional statements, loops, strings, search algorithms and more. Code Hunt is a great tool to build or sharpen your algorithm skills. Starting from simple problems, Code Hunt provides fun for the most skilled coders.

Graded for correctness and quality

Modify the code to match the code fragment

At the core of the game experience is an automated grading engine based on dynamic symbolic execution. The grading engine automatically analyzes the user code and the secret code to generate the result table.
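Conceptually, a level boils down to a hidden reference implementation and the player's visible stub, as in this hypothetical C# pair (my illustration, not actual Code Hunt content):

    public class Level
    {
        // Hidden reference implementation; the player never sees this.
        public static int Secret(int x)
        {
            return x * x;
        }

        // The player's editable code. The grading engine uses dynamic symbolic
        // execution to find inputs where Puzzle(x) != Secret(x) and lists them
        // in the result table until the player's code matches on all inputs.
        public static int Puzzle(int x)
        {
            return x; // disagrees at x = 2: expected 4, got 2
        }
    }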

MOOCs with Office Mix

Add Code Hunt to your presentations

Code Hunt can be included in any PowerPoint presentation and published as an Office Mix Online Lesson. Use this PowerPoint template to create Code Hunt-themed presentations.

Web: no installs, it just works

It just works

Code Hunt runs in most modern browsers including Internet Explorer 10, 11 and recent versions of Chrome and Firefox. Yup, it works on iPad.

Extras: play your own levels

Play your own levels

Extra Zones with new sectors and levels can be created and reused. Read the designer usage manual to create your own zone.

Compete: so you think you can code

Compete

Code Hunt can be used to run small-scale or large-scale, private or public coding competitions. Each competition gets its own set of sectors and levels and its own leaderboard to determine the outcome. Please contact codehunt@microsoft.com for more information about running your own competition using Code Hunt.

Credits: the team

Capture the working code fragment

Code Hunt was developed by the Research in Software Engineering (RiSE) group and Connections group at Microsoft Research. Go to our Microsoft Research page to find a list of publications around Code Hunt.

PressurePoint – great tool to Stress, Load and Performance test your SharePoint Site


 

Awesome tool developed by Margriet Bruggeman.

This version of PressurePoint only works with SharePoint 2013.

There’s a generic version of PressurePoint that works for all versions of SharePoint and even normal web sites at: http://gallery.technet.microsoft.com/PressurePoint-Dragon-for-58648ae4

Requires: The presence of the .NET 4.5 framework, because it makes extensive use of Parallel Programming techniques. Supports anonymous and Windows (NTLM) authentication.

About PressurePoint

When you apply enough pressure, every application you or somebody else builds has a point where it breaks. I call this point the pressure point.

I’d strongly advise undertaking some activities to find out where the pressure point of the application that you’re responsible for lies. Several kinds of tests are commonly used to find out about these:

  • Performance testing, the umbrella term for testing an application’s responsiveness and stability. Below, I’ll list some more specific, relevant types of performance testing.
  • Load testing, makes requests of an application to simulate normal or anticipated load conditions. This kind of test helps greatly when you want to determine what your end users should expect.
  • Endurance testing, tests if an application is able to hold up under continuous prolonged, but normal or expected, load. Typically looks for memory consumption and gradually decreasing performance.
  • Stress testing, here, you try to find the breaking point by applying maximum application capacity and observe in what ways the application breaks. It finds bottlenecks and root causes for performance degradation.
  • Spike testing, applies a sudden and dramatic increase in load and sees how the application responds to that.
  • Isolation testing, tests a specific part of the application. Usually, this involves an area that has proved to be troublesome.

It helps a lot if such tests are repeated throughout development/test/staging/production environments. This allows you to get a feel for your application.

During these tests, you’ll typically look at server response time (instead of rendering time): the time it takes the client to make the request and get the final response back. Because of this, I advise executing performance tests as close to the server or server farm as possible, to eliminate network latency issues.

Most of the time, as an application developer or admin you don’t have much or any control over the network, and you’ll be more interested in how the specific application holds up.

Also, but this is quite obvious, if you can avoid it don’t place test clients on the server or server farm itself, or on the host hosting the virtual machines containing server or server farms. This can have quite the effect on the test outcome, although I have to say that in my experience the effect is limited enough to be able to undertake meaningful performance tests launched from the server or server farm. Other quick tips: it typically works better if you execute performance tests using multiple client computers and you should preferably execute performance tests using multiple user accounts.

Whatever types of tests you’re planning to do, please remember that forgetting to do any type of performance testing will result in an interesting product release experience. Lately, I can’t keep track anymore of the number of times companies contact me wishing they would have spent some time doing performance testing.

Lots of Tools

There are lots of tools out there that can help you do performance testing, but in my experience (and I have looked at 100+ of these tools) there are two types of tools: tools that are just a preview of a commercial version and too limited to do anything useful without buying the license and then there are tools that are insanely complex to use. See my blog post at http://sharepointdragons.com/2012/12/26/the-great-free-performance-load-and-stress-testing-tools-that-can-be-used-with-sharepoint-verdict/ for more information. The following overview at http://en.wikipedia.org/wiki/Test_tool is also nice and more objective (well, it would be more accurate to say that it refrains from giving any opinion).

So, it depends on your situation how to proceed. If you have budget, you can buy a great performance test tool and use that. I found myself in situations where I had to do performance testing in companies that didn’t have a budget to invest in performance tooling. There was also another issue…

About SharePoint

As I mainly work in SharePoint environments, I prefer to use a tool that is able to do performance testing specifically targeted towards SharePoint. I found none. During my SharePoint testing, uhm, dare I say, adventures, I found that SharePoint page requests are typically handled just fine and it’s hard to bring a SharePoint environment to its knees just doing that. Request times tend to increase linearly, which is a good sign for an application. On top of that, SharePoint handles excessive page requests gracefully, without falling back into throwing all kinds of errors. Things get a lot more interesting and dangerous when you do one of the following things:

  • Execute custom code
  • Upload and retrieve documents of various sizes and batch sizes
  • Work with custom SharePoint Services, such as Search, Forms Services or SQL Server Reporting Services (let’s just say I picked out these as examples for no particular reason)

When using a testing tool that doesn’t have knowledge about SharePoint, it will be quite hard to test these aspects.

My conclusion

It may come as no surprise that eventually I decided that it was easier to build my own tool that has specific knowledge about SharePoint, can be extended by me at will, and is easy to use. Making extensive use of the .NET parallel programming capabilities, I found it was quite easy to do. When I was done, I decided that I wanted to share the basic version of it (basic, since I build custom extensions in it dedicated to the projects I’m doing) with the community. Later, I’m planning to add a specific version dedicated to SharePoint 2013, but I’m not quite there yet.

What to look for?

Doing performance testing in SharePoint environments without knowing what to look for is not the most useful thing one can do with one’s time. There are specific performance counters you should look out for on SharePoint WFEs and different ones to check out on the back-end database server. Depending on your needs, you might also need to spend some time coming up with the right set of performance counters you need for monitoring dedicated application servers. If you want to learn more about this topic, I can definitely recommend my gallery contribution at http://gallery.technet.microsoft.com/PowerShell-script-for-59cf3f70. I’d also recommend the use of my SharePoint Flavored Weblog Reader (SFWR) tool at http://gallery.technet.microsoft.com/The-SharePoint-Flavored-5b03f323, which helps to analyze IIS log files.

Whether you use these tools or not: bear in mind that running a performance test tool without analyzing what happens on the server is absolutely useless!

How to use the PressurePoint Dragon for SharePoint

PressurePoint is a command line tool that reads an XML file that describes the test you want to execute. Currently, it only supports Windows (NTLM) or anonymous authentication. When you download the PressurePoint ZIP file it contains three things:

  • PressurePoint.exe, the actual performance test tool that can be executed by calling it from the command line. It requires the presence of the .NET 4 framework since it makes extensive use of parallel programming techniques.
  • PressurePoint.exe.config, the configuration file that is mandatory for the PressurePoint tool. Check out the TestLocation app setting and point it to the location of the XML file describing your test:

    <appSettings>
      <add key="TestLocation" value="C:\Clients\XYZ\PressurePoint\test.xml"/>
    </appSettings>
  • Test.xml, an example XML Test Description file describing an example test.

Explanation of the structure of a Test Description file

The test description file can do a couple of simple things. It contains a test body that is repeated x times, determined by the repeat attribute of the <Test> element.

<?xml version="1.0" encoding="utf-8" ?>
<Test repeat="10">
[body omitted for clarity]
</Test>

The <Test> element is the root element and only occurs once. It contains 1 or more <Session> elements. In a Session, you can specify important configuration info:

  • user, password, and domain, the credentials used for the session.
  • concurrentUsers, the number of concurrent users that start the session (e.g. 20 instances of user A start executing the actions as described in the session).
  • friendlySessionName, a friendly name that is output to the console window to make it easier to identify which session is executed at a given time.

Please note: If you’re using anonymous authentication, the values for user, password, and domain can just be left blank.

The following example shows the Session section:

<Session user="administrator" password="verySecret" domain="lc" concurrentUsers="1" friendlySessionName="SessionA"> 
[body omitted for clarity] 
</Session>

Then there are various actions that can be used within a Session. These are:

  • Comment, outputs a text to the console window. Example:

    <Comment>Start Moon session A for administrator</Comment> 
    
  • Request, makes a request to a page. Please note: specify a page here, instead of a generic site url such as http://moon, because right now PressurePoint doesn’t support redirects. Example:

    <Request>http://moon/pages/default.aspx</Request>
  • DelaySeconds, waits for a given amount of time to simulate think time. Example:

    <DelaySeconds value="3" />
  • RandomDelaySeconds, waits for a random amount of time within a given range to provide a more realistic simulation of think time (which might or might not be what you want, since a fixed delay keeps the test more predictable). Example:

    <RandomDelaySeconds min="1" max="3" />
  • RandomRequest, makes a random request to a page from a given list. Example:

    <RandomRequest>
      <URL>http://moon/pages/default.aspx</URL>
      <URL>http://moon:28827/sitepages/home.aspx</URL>
    </RandomRequest>

    The next example is a full-blown example of a single session by a single user, repeated 10 times:

<?xml version="1.0" encoding="utf-8" ?>
<Test repeat="10">
  <Session user="administrator" password="superSecret" domain="lc" concurrentUsers="1" friendlySessionName="SessionA">
    <Comment>Start Moon session A for administrator</Comment>
    <Request>http://moon/pages/default.aspx</Request>
    <RandomRequest>
      <URL>http://moon/pages/default.aspx</URL>
      <RandomDelaySeconds min="1" max="3" />
      <URL>http://moon:28827/sitepages/home.aspx</URL>
    </RandomRequest>
  </Session>
</Test>

The next example shows how to simulate 1000 concurrent users, using 2 different user accounts, in a test that is repeated 100 times:

<?xml version="1.0" encoding="utf-8" ?>
<Test repeat="100">
  <Session user="administrator" password="secretPwd" domain="test" concurrentUsers="500" friendlySessionName="SessionA">
    <Comment>Start session "Home page" for administrator</Comment>
    <Request>http://mysrv/sitepages/home.aspx</Request>
  </Session>

  <Session user="jBlack" password="superSecret" domain="test" concurrentUsers="500" friendlySessionName="SessionA">
    <Comment>Start session A for Jack Black</Comment>
    <Request>http://mysrv/sitepages/home.aspx</Request>
  </Session>
</Test>

The following section contains SharePoint 2013 specific actions.

  • ClientSite, fetches the URL of a SharePoint site collection. Looks like this:

    <ClientSite>
      <Url>http://moon</Url>
    </ClientSite>

 

Quick tips for constructing performance test cases

The following link contains interesting information about the typical type of use of a SharePoint environment: http://office.microsoft.com/en-us/windows-sharepoint-services-it/capacity-planning-for-windows-sharepoint-services-HA001160774.aspx. The quick takeaway is this:

  • Light usage: the end user makes 20 requests per hour.
  • Typical usage: the end user makes 36 requests per hour.
  • Heavy usage: the end user makes 60 requests per hour.
  • Extreme usage: the end user makes 120 requests per hour.

This will help you build test cases that are more realistic; especially in situations where the customer isn’t really sure how much the application will be used. Concerning this topic, I’ve also found the following topic to be quite interesting: http://blogs.technet.com/b/wbaer/archive/2007/07/06/requests-per-second-required-for-sharepoint-products-and-technologies.aspx

As a final guideline, I’ve also worked with the following rule of thumb that may help you: in a typical enterprise application, 1% of the users makes a request per second during peak time, in an enterprise application that is used extremely, 3% of the users makes a request per second during peak time.
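To make that concrete: 2,000 “typical usage” users at 36 requests per hour average 2,000 × 36 / 3,600 = 20 requests per second, and the 1% peak-time rule applied to the same 2,000 users also predicts 20 requests per second at peak, so the two guidelines line up reasonably well for typical usage.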

Support Tools

It can be frustrating to try a new community tool that doesn’t seem to work. It makes you wonder whether you made a mistake in constructing the XML for the test case, or whether the tool simply doesn’t work. I’ve built two tools that support PressurePoint: Ping Dragon for SharePoint 2010 (http://gallery.technet.microsoft.com/Ping-Dragon-for-SharePoint-70fb299e ) and WinPing Dragon for SharePoint 2010 (http://gallery.technet.microsoft.com/WinPing-Dragon-for-eefb6dd3 ). The tools fulfill a single purpose: ping SharePoint using the same method leveraged by PressurePoint. In other words, if these tools work, PressurePoint will work too. The difference between both support tools is that the WinPing Dragon tool hides the password from view, while the Ping Dragon doesn’t.

What’s going on under the covers?

Use the Resource Monitor tool (resmon.exe) to “check the heartbeat” of PressurePoint, since the tool is a bit of a black box and watching it do its work can be a boring experience. Resource Monitor clearly shows how PressurePoint builds up to the point where it can simulate the load you require for the number of different users and sessions you need. PressurePoint executes each session in a separate thread, and Resource Monitor will show an increase of the PressurePoint thread counter until it approximates the intended load.

The System image normally, as you’d expect, has the highest number of active threads (a couple of hundred), but once you’re simulating loads of hundreds or even thousands of end users, PressurePoint surpasses this. One of the things that I found interesting was that it can take quite a long time until you get to the point where you can actually run hundreds or even thousands of separate threads in a single application (on the environments I’ve tested it on, it can take 1 hour or more to reach those kinds of numbers). It makes sense, since those are a lot of threads, other threads finish their work, and your system has other tasks to take care of. But still, before building the tool, I didn’t anticipate this.
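For a feel of what goes on internally, the following is a minimal sketch (my illustration, not PressurePoint's actual code) of firing concurrent authenticated requests with .NET parallel programming; the URL and user count stand in for what PressurePoint reads from its XML test description:

    using System;
    using System.Diagnostics;
    using System.Net;
    using System.Threading.Tasks;

    class LoadSketch
    {
        static void Main()
        {
            const string url = "http://moon/pages/default.aspx"; // hypothetical target page
            const int concurrentUsers = 50;                      // hypothetical session count

            Parallel.For(0, concurrentUsers, i =>
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                // Windows (NTLM) authentication, which PressurePoint also supports.
                request.Credentials = CredentialCache.DefaultNetworkCredentials;

                var watch = Stopwatch.StartNew();
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    watch.Stop();
                    Console.WriteLine("User {0}: {1} in {2} ms",
                        i, response.StatusCode, watch.ElapsedMilliseconds);
                }
            });
        }
    }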