
A Look At: Visual Studio 2013 Update 3 CTP2


New technology improvements in Visual Studio 2013 Update 3 CTP 2


Technology improvements

The following technology improvements were made in this release.


  • CodeLens jobs that are running on the Team Foundation Server job agent have been optimized for performance specifically while processing branching and merging changesets.


  • If you have more than one monitor, Visual Studio will remember which monitor a Windows Store application was last run on.
  • You can debug x86 applications that are built by .NET Native.
  • When you analyze managed memory dump files, you can go to Definition and Find All References of the selected type.
  • You can debug the dump files from .NET Native applications by using the Visual Studio debugger.


  • The Application Insights Tools for Visual Studio are now included in Visual Studio 2013 Update 3 CTP2. This initial integration as part of CTP2 includes some software updates and performance improvements.


  • You can skip straight to the details of performance events that are exported from Application Insights to IntelliTrace.


  • The Performance and Diagnostics hub can open profiling sessions (.diagsession files) that were exported from the F12 tools in the latest developer preview of Internet Explorer 11.
  • Windows Presentation Foundation (WPF) and Win32 applications are supported by the new Memory Usage Tool in the Performance and Diagnostics Hub. For more information about how to use the tool to troubleshoot issues in native and managed memory, go to the following blog post:
    Diagnosing memory issues with the new Memory Usage Tool in Visual Studio

Release Management

  • You can use Windows PowerShell or the Windows PowerShell Desired State Configuration (DSC) feature to deploy and manage configuration data. Additionally, you can deploy to the following environments without having to set up Microsoft Deployment Agent:
    • Microsoft Azure environments
    • On-premises environments (standard environments)

Testing Tools

  • You can add custom fields and custom work flows for test plans and test suites.
  • You can use the Manage Test Suites permission for granting access to test suites.
  • You can track changes to test plans and test suites by using work item history.

For more information about these features, see the following Visual Studio Developer Tools blog article:

Test Plan and Test Suite Customization with TFS2013 Update3

Visual Studio IDE

  • CodeLens authors and changes indicators are now available for Git repositories.
  • In Code Map, links are styled by using colors, and they display in the improved Legend.
  • Debugger Map automatically zooms to the call stack entry of interest and preserves the user’s zoom preferences.
  • You can drag binaries from the Windows file explorer to a code map, and then start exploring binaries by using Code Map.

Known issues

Testing Tools

  • When you try to upgrade an existing TFS server that has Test management data to Visual Studio 2013 Team Foundation Server Update 3 CTP2 in JPN or CHS, the upgrade of Test Case Management service does not work.

Visual Studio IDE

  • In Visual Studio 2013 Ultimate Update 3 CTP2 localized (non en-us) drops, when trying to request a Code Map or a Dependency Graph for the solution, the directed graph is not produced.


For more information on Visual Studio 2013 and other upgrades, visit

FREE Microsoft Dynamics CRM 2011 List Component for Microsoft SharePoint Server 2010



CRM2011 – SharePoint 2010 Integration? Glue CRM 2011 & Share Point 2010 together? Make CRM 2011 and Share Point 2010 converse? I wasn’t sure what to call this exactly. “Hooking together” works for me!

Now that we have a CRM 2011 instance and a Share Point site working, let’s get them connected up! Go to this website and download Microsoft Dynamics CRM 2011 List Component for Microsoft SharePoint Server 2010:

Accept the License Terms.

Extract the files to a folder (I chose C:\CRM List).

You will get a prompt “The Installation is complete.” Click OK.

Let’s go over to the Share Point Central Administration Server to install the list component we just extracted. Connect to http://localhost:48835/ (your port might be different, be aware of this). Click Manage web applications.

Click the new Share Point site, and then “General Settings” (the blue cogs).

Scroll down to Browser File Handling, choose Permissive, and click OK.

Let’s head back over to our new Share Point Site. Click Site Actions up top left, and then “Site Settings”.

Under Galleries click “Solutions”.

Click the Word “Solutions” up top (you have to click the word “Solutions”, even though it looks selected), and then click “Upload Solution”.

Select the .wsp component that we extracted wayyy back at the top of this. I used C:\CRM List as my extract folder. Click OK.

You’ll get prompted at this point; I couldn’t activate the control on this screen (but it still needs to be done). We need to make sure some services are running before we can activate the solution. Click Close.

Head back to the Share Point Central Administration at http://localhost:48835.

Click System Settings –> Manage Services on this server

Click Start beside “Share Point Foundation Sandboxed Code Service”. I also started “Microsoft SharePoint Foundation Subscription Settings Service” by accident, which is why that one shows as started too.

Now to head back to our Share Point site http://localhost:39083/

Under Galleries click “Solutions”.

Click Solutions again, select crmlistcomponent, and then click “Activate” up top. Activate is no longer greyed out! Click Activate!

The solution has now been activated! Hurray!

There seems to be some confusion about whether or not you need to run a PowerShell script to enable activation of Share Point 2010 solutions (AllowHtcExtn). According to what I’ve read, you would need to run this if Share Point 2010 is running on a domain controller. I didn’t have to do this (and we’re on a domain controller), and I’ve yet to run into a problem with .htc stuff. Even the Microsoft Dynamics CRM 2011 readme says:
“If you are using Microsoft SharePoint Server 2010 (On-Premises), you must add .htc extensions to the list of allowed file types:
a. Copy the AllowHtcExtn.ps1 script file to the server that is running Microsoft SharePoint Server 2010.
b. In the Windows PowerShell window or in the SharePoint Management Console, run the command: AllowHtcExtn.ps1 .
Example: AllowHtcExtn.ps1 http://servername”

Some people say the script works for them, and some say that using just the blog method (what we did) works.
The SharePoint configuration is complete at this point. You probably want to take a snapshot and name it “After SharePoint Configuration”. Let’s head over to our CRM server (localhost:85).

In CRM Click Settings –> Document Management –> Document Management Settings

Select the entities that you want to have documents enabled on. This will create a “Documents” area when you open an instance of the entity. I’ll just leave the defaults for now. At the bottom, punch in the Share Point site that you’ve created and click Next. This is the Share Point server we installed the list component on. You’re not allowed to use localhost:port; use the computer name:port instead, like below.

Don’t select the box, otherwise it will relate the files to those entities. Without checking the box you will end up with something like Site/EntityName/Record Name (which is what I want, especially if you’re using custom entities). Click Next.

If “Libraries are being created in the path”, click Next.

Everything should “Succeed”, Click Finish.

Let’s test this bad boy out now.

Create a new account called “Test”.

Click Save! Click “Documents” on the left side. You’ll get a prompt saying that the folder (Test) is being created under “Account”. Click OK.

Click Add.

Now you’ll probably get these errors! /crmgrid/scripts/DialogContainer.js and 403 FORBIDDEN! Depressing. The only real information I could find on this error wasn’t very clear, but I stumbled through it. It seems that CRM 2011 doesn’t enjoy being called localhost. Let’s fix these up.

The fix for this was to run inetmgr –> Click Microsoft Dynamics CRM –> click Stop

Click “Bindings…” on the right side. Click “Edit” on the items that show “localhost” and change them to my machine name: “win-b80icqrvluf”. This gives CRM a “real” name to connect to.



Now click “Start” on the right side.

Head back over to the CRM (http://win-b80icqrvluf:85/CRMTest/main.aspx) make sure to use the host name, as it might give you the error if you use localhost. Open your Test Account again.

Click Documents –> Add, you should now see this popup (it can take a while to load for the first time on the VM). If you continue to get the error, stop both CRM 2011 and Share Point 2010 servers and restart them. If that doesn’t work, try restarting the whole server.

Pick a file, and click OK.

The file should be uploaded to Share Point now.

Head over to Share Point at http://win-b80icqrvluf:39083 and click “All Site Content” or “Libraries”.

Click Account.

You can see that CRM has created a folder “Test” (for our record). It creates 1 folder per record. Click it to see the files associated to that record!!

The files associated to the record “Test” in Accounts.

Share Point and CRM have combined into a super awesome force of doom. But we’re still missing 1 core piece of functionality (due to not picking a port when we installed CRM).



10 Must-Have Visual Studio Productivity Add-Ins I use every day and recommend to every .NET Developer

Visual Studio provides a rich extensibility model that developers at Microsoft and in the community have taken advantage of to provide a host of quality add-ins. Some add-ins contribute significant how-did-I-live-without-this functionality, while others just help you automate that small redundant task you constantly find yourself performing.
10 Must-Have Add-Ins

VSWindowManager PowerToy
Cache Visualizer

In this article, I introduce you to some of the best Visual Studio add-ins available today that can be downloaded for free. I walk through using each of the add-ins, but because I am covering so many I only have room to introduce you to the basic functionality.
Each of these add-ins works with Visual Studio .NET 2003 and most of them already have versions available for Visual Studio 2005. If a Visual Studio 2005 version is not available as of the time of this writing, it should be shortly.


Test-driven development is the practice of writing unit tests before you write code, and then writing the code to make those tests pass. By writing tests before you write code, you identify the exact behavior your code should exhibit and, as a bonus, at the end you have 100 percent test coverage, which makes extensive refactoring possible.
NUnit gives you the ability to write unit tests using a simple syntax and then execute those tests one by one or all together against your app. If you are using Visual Studio Team System, you have unit testing functionality built into the Visual Studio IDE. Before Visual Studio Team System, there was TestDriven.NET, an add-in that integrates NUnit directly into the Visual Studio IDE. If you are using a non-Team System version of Visual Studio 2005 or Visual Studio .NET 2003, it is, in my opinion, still the best solution available.
TestDriven.NET adds unit testing functionality directly to the Visual Studio IDE. Instead of writing a unit test, switching over to the NUnit GUI tool, running the test, then switching back to the IDE to code, and so on, you can do it all right in the IDE.


Figure 1 New Testing Options from TestDriven.NET 
After installing TestDriven.NET you will find a number of new menu items on the right-click context menu as shown in Figure 1. You can right-click directly on a unit test and run it. The results will be displayed in the output window as shown in Figure 2.


Figure 2 Output of a Unit Test 
While executing unit tests in the IDE is invaluable by itself, perhaps the best feature is that you can also quickly launch into the debugger by right-clicking on a test and selecting Test With | Debugger. This will launch the debugger and then execute your unit tests, hitting any breakpoints you have set in those tests.
In fact, it doesn’t even have to be a unit test for TestDriven.NET to execute it. You could just as easily test any public method that returns void. This means that if you are testing an old app and need to walk through some code, you can write a quick test and execute it right away.
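To make this concrete, here is a minimal sketch of the kind of test you might run this way. The Calculator class and its test are hypothetical, invented for this example, and assume NUnit's [TestFixture]/[Test] attributes and Assert API:

```csharp
using NUnit.Framework;

// Hypothetical class under test, invented for this example.
public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_ReturnsSumOfOperands()
    {
        Calculator calculator = new Calculator();
        Assert.AreEqual(5, calculator.Add(2, 3));
    }
}
```

With TestDriven.NET installed, you can right-click inside Add_ReturnsSumOfOperands and run or debug just that one test without ever leaving the editor.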
TestDriven.NET is an essential add-in if you work with unit tests or practice test-driven development. (If you don’t already, you should seriously consider it.) TestDriven.NET was written by Jamie Cansdale and can be downloaded from


XML comments are invaluable tools when documenting your application. Using XML comments, you can mark up your code and then, using a tool like NDoc, you can generate help files or MSDN-like Web documentation based on those comments. The only problem with XML documentation is the time it takes to write it; you often end up writing similar statements over and over again. The goal of GhostDoc is to automate the tedious parts of writing XML comments by looking at the name of your class or method, as well as any parameters, and making an educated guess as to how the documentation should appear based on recommended naming conventions. This is not a replacement for writing thorough documentation of your business rules and providing examples, but it will automate the mindless part of your documentation generation.
For instance consider the method shown here:

private void SavePerson(Person person) { }

After installing GhostDoc, you can right-click on the method declaration and choose Document this. The following comments will then be added to your document:

/// <summary>
/// Saves the person.
/// </summary>
/// <param name="person">Person.</param>
private void SavePerson(Person person) { }

As you can see, GhostDoc has automatically generated a summary based on how the method was named and has also populated the parameter comments. Don’t stop here; you should add additional comments stating where the person is being saved to or perhaps give an example of creating and saving a person. Here is my comment after adding some additional information by hand:

/// <summary>
/// Saves a person using the configured persistence provider.
/// </summary>
/// <param name="person">The Person to be saved</param>
private void SavePerson(Person person) { }
Adding these extra comments is much easier since the basic, redundant portion is automatically generated by GhostDoc. GhostDoc also includes options that allow you to modify existing rules and add additional rules that determine what kind of comments should be generated.
GhostDoc was written by Roland Weigelt and can be downloaded from


Smart Paster
Strings play a large role in most applications, whether they are comments being used to describe the behavior of the system, messages being sent to the user, or SQL statements that will be executed. One of the frustrating parts of working with strings is that they never seem to paste correctly into the IDE. When you are pasting comments, the strings might be too long or not aligned correctly, leaving you to spend time inserting line breaks, comment characters, and tabbing. When working with strings that will actually be concatenated, you have to do even more work, usually separating the parts of the string and inserting concatenation symbols or using a string builder.
The Smart Paster add-in helps to limit some of this by providing a number of commands on the right-click menu that let you paste a string from the clipboard into Visual Studio using a certain format. After installing Smart Paster, you will see the new paste options available on the right-click menu (see Figure 3).


Figure 3 String Pasting Options from Smart Paster 
For instance, you might have the following string detailing some of your business logic:

To update a person record, a user must be a member of the customer service group or the manager group. After the person has been updated, a letter needs to be generated to notify the customer of the information change.

You can copy and paste this into Visual Studio using the Paste As | Comment option, and you would get the following:

//To update a person record, a user must be a member of the customer
//service group or the manager group. After the person has been updated,
//a letter needs to be generated to notify the customer of the
//information change.
The correct comment characters and carriage returns are automatically inserted (you can configure at what length to insert a character return). If you were inserting this text without the help of Smart Paster, it would paste as one long line, forcing you to manually add all the line breaks and comment characters. As another example, let’s say you have the following error message that you need to insert values into at run time:

You do not have the correct permissions to perform <insert action>. You must be a member of the <insert group> to perform this action.

Using the Paste As | StringBuilder command, you can insert this string as a StringBuilder into Visual Studio. The results would look like this:

StringBuilder stringBuilder = new StringBuilder(134);
stringBuilder.AppendFormat(
    @"You do not have the correct permissions to ");
stringBuilder.AppendFormat(
    @"perform <insert action>. You must be a member of ");
stringBuilder.AppendFormat(
    @"the <insert group> to perform this action.");

It would then simply be a matter of modifying the code to replace the variables sections of the string:

StringBuilder stringBuilder = new StringBuilder(134);
stringBuilder.AppendFormat(
    @"You do not have the correct permissions to ");
stringBuilder.AppendFormat(
    @"perform {0}. You must be a member of ", action);
stringBuilder.AppendFormat(
    @"the {0} to perform this action.", group);

Smart Paster is a time-saving add-in that eliminates a lot of the busy work associated with working with strings in Visual Studio. It was written by Alex Papadimoulis.


Throughout the process of software development, it is common to reuse small snippets of code. Perhaps you reuse an example of how to get an enum value from a string or a starting point on how to implement a certain pattern in your language of choice.
Visual Studio offers some built-in functionality for working with code snippets, but it assumes a couple of things. First, it assumes that you are going to store all of your snippets on your local machine, so if you switch machines or move jobs you have to remember to pack up your snippets and take them with you. Second, these snippets can only be viewed by you. There is no built-in mechanism for sharing snippets between users, groups, or the general public.
This is where CodeKeep comes to the rescue. CodeKeep is a Web application that provides a place for people to create and share snippets of code in any language. The true usefulness of CodeKeep is its Visual Studio add-in, which allows you to search quickly through the CodeKeep database, as well as submit your own snippets.
After installing CodeKeep, you can search the existing code snippets by selecting Tools | CodeKeep | Search, and then using the search screen shown in Figure 4.


Figure 4 Searching Code Snippets with CodeKeep 
From this screen you can view your own snippets or search all of the snippets that have been submitted to CodeKeep. When searching for snippets, you see all of the snippets that other users have submitted and marked as public (you can also mark code as private if you want to hide your bad practices). If you find the snippet you are looking for, you can view its details and then quickly copy it to the clipboard to insert into your code.
You can also quickly and easily add your own code snippets to CodeKeep by selecting the code you want to save, right-clicking, and then selecting Send to CodeKeep. This will open a new screen that allows you to wrap some metadata around your snippet, including comments, what language it is written in, and whether it should be private or public for all to see.
Whenever you write a piece of code and you can imagine needing to use it in the future, simply take a moment to submit it; this way, you won’t have to worry about managing your snippets or rewriting them in the future. Since CodeKeep stores all of your snippets on the server, they are centralized so you don’t have to worry about moving your code from system to system or job to job.
CodeKeep was written by Arcware’s Dave Donaldson and is available from


P/Invoke is the mechanism for making unmanaged Win32 API calls from within the .NET Framework. One of the hard parts of using P/Invoke is determining the method signature you need to use; this can often be an exercise in trial and error. Sending incorrect data types or values to an unmanaged API can often result in memory leaks or other unexpected results.
PInvoke.NET is a wiki that can be used to document the correct P/Invoke signatures to be used when calling unmanaged Win32 APIs. A wiki is a collaborative Web site that anyone can edit, which means there are thousands of signatures, examples, and notes about using P/Invoke. Since the wiki can be edited by anyone, you can contribute as well as make use of the information there.
While the wiki and the information stored there are extremely valuable, what makes them most valuable is the PInvoke.NET Visual Studio add-in. Once you have downloaded and installed the PInvoke.NET add-in, you will be able to search for signatures as well as submit new content from inside Visual Studio. Simply right-click on your code file and you will see two new context items: Insert PInvoke Signatures and Contribute PInvoke Signatures and Types.

Figure 5 Using PInvoke.NET 
When you choose Insert PInvoke Signatures, you’ll see the dialog box shown in Figure 5. Using this simple dialog box, you can search for the function you want to call. Optionally, you can include the module that this function is a part of. Now, a crucial part of all major applications is the ability to make the computer Beep. So I will search for the Beep function and see what shows up. The results can be seen in Figure 6.

Figure 6 Finding the Beep Function in PInvoke.NET 
One nice feature of the wiki is that it suggests alternative managed APIs, letting you know that there is a new method, System.Console.Beep, in the .NET Framework 2.0.
There is also a link at the bottom of the dialog box that will take you to the corresponding page on the wiki for the Beep method. In this case, that page includes documentation on the various parameters that can be used with this method as well as some code examples on how to use it.
After selecting the signature you want to insert, click the Insert button and it will be placed into your code document. In this example, the following code would be automatically created for you:

[DllImport("kernel32.dll", SetLastError=true)]
[return: MarshalAs(UnmanagedType.Bool)]
static extern bool Beep(uint dwFreq, uint dwDuration);

You then simply need to write a call to this method and your computer will be beeping in no time.
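As a quick sketch of what that call might look like on a Windows machine (the frequency and duration values here are arbitrary; per the Win32 documentation, dwFreq is in hertz and dwDuration in milliseconds):

```csharp
using System;
using System.Runtime.InteropServices;

class BeepDemo
{
    // The P/Invoke signature inserted from PInvoke.NET.
    [DllImport("kernel32.dll", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    static extern bool Beep(uint dwFreq, uint dwDuration);

    static void Main()
    {
        // Play a 750 Hz tone for 300 milliseconds.
        if (!Beep(750, 300))
        {
            Console.WriteLine("Beep failed with Win32 error {0}",
                Marshal.GetLastWin32Error());
        }
    }
}
```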

The PInvoke.NET wiki and Visual Studio add-in take away a lot of the pain and research time sometimes involved when working with the Win32 API from managed code. The wiki can be accessed at, and the add-in can be downloaded from the Helpful Tools link found in the bottom-left corner of the site.


VSWindowManager PowerToy
The Visual Studio IDE includes a huge number of different Windows, all of which are useful at different times. If you are like me, you have different window layouts that you like to use at various points in your dev work. When I am writing HTML, I like to hide the toolbox and the task list window. When I am designing forms, I want to display the toolbox and the task list. When I am writing code, I like to hide all the windows except for the task list. Having to constantly open, close, and move windows based on what I am doing can be both frustrating and time consuming.
Visual Studio includes the concept of window layouts. You may have noticed that when you start debugging, the windows will automatically go back to the layout they were in the last time you were debugging. This is because Visual Studio includes a normal and a debugging window layout.
Wouldn’t it be nice if there were additional layouts you could use for when you are coding versus designing? Well, that is exactly what VSWindowManager PowerToy does.
After installing VSWindowManager PowerToy, you will see some new options in the Window menu as shown in Figure 7.


Figure 7 VSWindowManager Layout Commands 
The Save Window Layout As menu provides commands that let you save the current layout of your windows. To start using this power toy, set up your windows the way you like them for design and then navigate to the Window | Save Window Layout As | My Design Layout command. This will save the current layout. Do the same for your favorite coding layout (selecting My Coding Layout), and then for up to three different custom layouts.
VSWindowManager will automatically switch between the design and coding layouts depending on whether you are currently viewing a designer or a code file. You can also use the commands on the Apply Window Layout menu to choose from your currently saved layouts. When you select one of the layouts you have saved, VSWindowManager will automatically hide, show, and rearrange windows so they are in the exact same layout as before.
VSWindowManager PowerToy is very simple, but can save you a lot of time and frustration. VSWindowManager is available from


Visual Studio makes creating Web services deceptively easy. You simply create an .asmx file, add some code, and you are ready to go. ASP.NET can then create a Web Services Description Language (WSDL) file used to describe behavior and message patterns for your Web service.
There are a couple problems with letting ASP.NET generate this file for you. The main issue is that you are no longer in control of the contract you are creating for your Web service. This is where contract-first development comes to the rescue. Contract-first development, also called contract-driven development, is the practice of writing the contract (the WSDL file) for your Web service before you actually write the Web service itself. By writing your own WSDL file, you have complete control over how your Web service will be seen and used, including the interface and data structures that are exposed.
Writing a WSDL document is not a lot of fun. It’s kind of like writing a legal contract, but using lots of XML. This is where the WSContractFirst add-in comes into play. WSContractFirst makes it easier to write your WSDL file, and will generate client-side and server-side code for you based on that WSDL file. You get the best of both worlds: control over your contract and the rapid development you are used to from Visual Studio-style service development.
The first step to using WSContractFirst is to create your XML schema files. These files will define the message or messages that will be used with your Web services. Visual Studio provides an easy-to-use GUI interface to define your schemas; this is particularly helpful since this is one of the key steps of the Web service development process. Once you have defined your schemas, you simply need to right-click on one of them and choose Create WSDL Interface Description. This will launch the Generate WSDL Wizard, the first step of which is shown in Figure 8.

Figure 8 Building a WSDL File with WSContractFirst  
Step 1 collects the basics about your service, including its name, namespace, and documentation. Step 2 allows you to specify the .xsd files you want to include in your service. The schema you selected to launch this wizard is included by default. Step 3 allows you to specify the operations of your service. You can name the operation as well as specify whether it is a one-way or request/response operation. Step 4 gives you the opportunity to enter the details for the operations and messages. Step 5 allows you to specify whether an element should be created and whether or not to launch the code generation dialog automatically when this wizard is done. Step 6 lets you specify alternative .xsd paths. Once the wizard is complete, your new WSDL file is added to your project.
Now that you have your WSDL file, there are a couple more things WSContractFirst can do for you. To launch the code generation portion of WSContractFirst, you simply need to right-click on your WSDL file and select Generate Web Service Code. This will launch the code generation dialog box shown in Figure 9.

Figure 9 WSContractFirst Code Generation Options 
You can choose to generate a client-side proxy or a service-side stub, as well as configure some other options about the code and what features it should include. Using these code generation features helps speed up development tremendously.
If you are developing Web services using Visual Studio you should definitely look into WSContractFirst and contract-first development. WSContractFirst was written by Thinktecture’s Christian Weyer.


Your mouse probably has five buttons, so why are you only using three of them? The VSMouseBindings power toy provides an easy-to-use interface that lets you assign each of your mouse buttons to a Visual Studio command.
VSMouseBindings makes extensive use of the command pattern. You can bind mouse buttons to various commands, such as open a new file, copy the selected text to the clipboard, or just about anything else you can do in Visual Studio. After installing VSMouseBindings you will see a new section in the Options dialog box called VsMouseBindings. The interface can be seen in Figure 10.

Figure 10 VSMouseBindings Options for Visual Studio 
As you can see, you can select a command for each of the main buttons. You probably shouldn’t mess around with the left and right mouse buttons, though, as their normal functionality is pretty useful. The back and forward buttons, however, are begging to be assigned to different commands. If you enjoy having functionality similar to a browser’s back and forward buttons, then you can set the buttons to the Navigate.Backward and Navigate.Forward commands in Visual Studio.
The Use this mouse shortcut in menu lets you set the scope of your settings. This means you can configure different settings when you are in the HTML designer as opposed to when you are working in the source editor.
VSMouseBindings is available from


Code is exponentially more readable when certain parts of that code are differentiated from the rest by using a different color text. Reading code in Visual Studio is generally much easier than trying to read code in an editor like Notepad.
Chances are you may have your own blog by now, or at least have spent some time reading them. Normally, when you try to post a cool code snippet to your blog it ends up being plain old text, which isn’t the easiest thing to read. This is where the CopySourceAsHTML add-in comes into play. This add-in allows you to copy code as HTML, meaning you can easily post it to your blog or Web site and retain the coloring applied through Visual Studio.
After installing the CopySourceAsHTML add-in, simply select the code you want to copy and then select the Copy Source as HTML command from the right-click menu. After selecting this option you will see the dialog box shown in Figure 11.

Figure 11 Options for CopySourceAsHTML 

From here you can choose what kind of HTML view you want to create. This can include line numbers, specific tab sizes, and many other settings. After clicking OK, the HTML is saved to the clipboard. For instance, suppose you were starting with the following code snippet inside Visual Studio:

private long Add(int d, int d2)
{
    return (long) d + d2;
}
Figure 12 HTML Formatted Code  
After you select Copy As HTML and configure the HTML to include line numbers, this code will look like Figure 12 in the browser. Anything that makes it easier to share and understand code benefits all of us as it means more people will go to the trouble of sharing knowledge and learning from each other.
CopySourceAsHTML was written by Colin Coller and can be downloaded from


Cache Visualizer
Visual Studio 2005 includes a new debugging feature called visualizers, which can be used to create a human-readable view of data for use during the debugging process. Visual Studio 2005 includes a number of debugger visualizers by default, most notably the DataSet visualizer, which provides a tabular interface to view and edit the data inside a DataSet. While the default visualizers are very valuable, perhaps the best part of this new interface is that it is completely extensible. With just a little bit of work you can write your own visualizers to make debugging that much easier.
While a lot of users will write visualizers for their own custom complex types, some developers are already posting visualizers for parts of the Framework. I am going to look at one of the community-built visualizers that is already available and how it can be used to make debugging much easier.
The ASP.NET Cache represents a collection of objects that are being stored for later use. Each object has some settings wrapped around it, such as how long it will be cached for or any cache dependencies. There is no easy way while debugging to get an idea of what is in the cache, how long it will be there, or what it is watching. Brett Johnson saw that gap and wrote Cache Visualizer to examine the ASP.NET cache.
Once you have downloaded and installed the visualizer you will see a new icon appear next to the cache object in your debug windows, as shown in Figure 13.

Figure 13 Selecting Cache Visualizer While Debugging 
When you click on the magnifying glass to use the Cache Visualizer, a dialog box appears that includes information about all of the objects currently stored in the ASP.NET cache, as you can see in Figure 14.

Figure 14 Cache Visualizer Shows Objects in the ASP.NET Cache 
Under Public Cache Entries, you can see the entries that I have added to the cache. The entries under Private Cache Entries are ones added by ASP.NET. Note that you can see the expiration information as well as the file dependency for the cache entry.
The Cache Visualizer is a great tool when you are working with ASP.NET. It is also representative of some of the great community-developed visualizers we will see in the future.


Wrapping It Up
While this article has been dedicated to freely available add-ins, there are also a host of add-ins that can be purchased for a reasonable price. I encourage you to check out these other options, as in some cases they can add some tremendous functionality to the IDE. This article has been a quick tour of some of the best freely available add-ins for Visual Studio. Each of these add-ins may only do a small thing, but together they help to increase your productivity and enable you to write better code.

modern.IE – Great tool for testing Web Sites! Get it now!

The modern.IE scan analyzes the HTML, CSS, and JavaScript of a site or application for common coding issues. It warns about practices such as incomplete specification of CSS properties, invalid or incorrect doctypes, and obsolete versions of popular JavaScript libraries.

It’s easiest to use modern.IE by going to the modern.IE site and entering the URL to scan there. To customize the scan, or to use the scan to process files behind a firewall, you can clone and build the files from this repo and run the scan locally.

How it works

The modern.IE local scan runs on a system behind your firewall; that system must have access to the internal web site or application that is to be scanned. Once the files have been analyzed, the analysis results are sent back to the modern.IE site to generate a complete formatted report that includes advice on remediating any issues. The report generation code and formatted pages from the modern.IE site are not included in this repo.

Since the local scan generates JSON output, you can alternatively use it as a standalone scanner or incorporate it into a project’s build process by processing the JSON with a local script.

The main service for the scan is in the app.js file; it acts as an HTTP server. It loads the contents of the web page and calls the individual tests, located in /lib/checks/. Once all the checks have completed, it responds with a JSON object representing the results.
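The dispatch pattern described above (load the page once, run every check against it, aggregate the results into one JSON object) can be sketched as follows. This is a Python illustration of the pattern, not the project's actual Node.js code, and the check names are invented:

```python
def check_doctype(page):
    """Hypothetical check: does the page start with an HTML5 doctype?"""
    passed = page.lstrip().lower().startswith("<!doctype html>")
    return {"testName": "Valid HTML5 doctype", "passed": passed}

def check_has_title(page):
    """Hypothetical check: does the page declare a <title>?"""
    passed = "<title>" in page.lower()
    return {"testName": "Page declares a title element", "passed": passed}

# Registry of checks, analogous to the modules under /lib/checks/.
CHECKS = {"doctype": check_doctype, "title": check_has_title}

def run_scan(page):
    # One result object per check, mirroring the scan's JSON output shape.
    return {name: check(page) for name, check in CHECKS.items()}

results = run_scan("<!DOCTYPE html><html><head><title>Demo</title></head></html>")
```

Each check stays independent and only receives the already-fetched page content, which is what lets the real scanner run all checks from a single HTTP request.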

Installation and configuration

  • Install node.js. You can use a pre-compiled Windows executable if desired. Version 0.10 or higher is required.
  • If you want to modify the code and submit revisions, install git (or GitHub for Windows, if you prefer) and then clone this repository. If you just want to run the scan locally, download the latest version from here and unzip it.
  • Install dependencies. From the subdirectory, type: npm install
  • If desired, set an environment variable PORT to define the port the service will listen on. By default the port number is 1337. The Windows command to set the port to 8181 would be: set PORT=8181
  • Start the scan service: From the subdirectory, type: node app.js and the service should respond with a status message containing the port number it is using.
  • Run a browser and go to the service’s URL; assuming you are using the default port and are on the same computer, the URL would be: http://localhost:1337/
  • Follow the instructions on the page.


The project contains a set of unit tests in the /test/ directory. To run the unit tests, type grunt nodeunit.

JSON output

Once the scan completes, it produces a set of scan results in JSON format:

    "testName" : {
        "testName": "Short description of the test",
        "passed" : true/false,
        "data": { /* optional data describing the results found */ }

The data property will vary depending on the test, but will generally provide further detail about any issues found.
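Because the output is plain JSON, folding the scan into a build step is straightforward. The sketch below parses a result object of the shape shown above and reports the failing checks; the check names and data values are made up for illustration:

```python
import json

# Example scan output in the shape described above (values are invented).
raw = """
{
    "doctype": {"testName": "Valid doctype", "passed": true},
    "jslib": {"testName": "Up-to-date JS libraries", "passed": false,
              "data": {"outdated": ["jquery 1.4.2"]}}
}
"""

results = json.loads(raw)
failed = [name for name, r in results.items() if not r["passed"]]
for name in failed:
    detail = results[name].get("data", {})
    print(f"FAILED: {results[name]['testName']} {detail}")
```

A build script could exit non-zero when `failed` is non-empty, turning the scan into a gating step.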

In need of a formal test case management system? Look no further than VS2013 Web Access (TWA)

In Visual Studio 2012 Microsoft provided test case management and test execution capabilities in TFS Web Access. It was part of Visual Studio 2012 Update 2.

Now in Visual Studio 2013, new capabilities and features have been added to create and modify test plans in Visual Studio 2013 Web Access (TWA).

You don’t have to switch to Microsoft Test Manager for Test Plans, Test Suites and Shared Steps creation. The entire ‘Test Case Management’ can be done from the Web Access.

TWA connects you to Visual Studio Team Foundation Server (TFS) or Team Foundation Service to manage source code, work items, builds, and test efforts.

In order to access the Test tab in Web Access, we need to give Full access to the Windows user or group who is accessing the functionality.


Observe the Test tab. If the settings are not configured, we need to go to Access Levels from the Control Panel where we can find the 3 different Access Level settings as shown here


The default option is Limited Access. To give a user the Test tab, we need to set the user's option to Full.

Now observe the Test tab (right next to the Build tab as seen in Figure 1). I had already created Test Plan and Test Suites using Microsoft Test Manager 2013. These artifacts start appearing in the Web Access under the Test tab immediately.

The Test Plan has 3 Test Suites – Requirement based, Static and Query based. Each suite has some Test Cases. We can also see the different status of an individual test case.


Note: I am also enclosing a similar Test Plan screenshot from Microsoft Test Manager 2013 for comparing the two. Observe the categorization of test status with Microsoft Test Manager.


In case we want to customize the columns, we can do so as follows. The bracketed quantity seen in the screenshot below shows the width of each column. We can add columns from the left-hand side and the view will change.


We can even create a new Test Plan. The Test Plan can have a name, Area Path and Iteration Path. Features like configuration settings, Test Settings and Test Environment can be added with Microsoft Test Manager 2013.


Once a new Test Plan is added, we can add Test Suites to it. Test Suites can be of three types – Requirement Based, Static or Query Based. Even Shared Steps can be added if required. Creation of any kind of Test Suites will provide option for adding new or existing test cases to the Suite.


A new Test Case can be created either with the normal IDE or using the Grid.


We can provide all the details for the Test Case, like Title, Iteration, Area, and Assigned To (ownership of the Test Case). The steps of the test case can be added as an action and an expected result. If the test case is testing a requirement, the Tested Backlog Items tab can be selected and a link provided to it. Any attachments we add to a test step can be viewed inline when the test case is executed. If the attachment is an image, the test step will show the actual image. If the attachment is a file, the file name with its size appears. You can also add attachments, such as log files, when you run the test case.

If we select the option of creation of New Test Case with Grid, we get a similar screen as follows.


We can add actions and expected results to each test case, and create a new Test Case by providing the title. All the test cases can be stored in Team Foundation Server at once. When we have finished creating the Test Plan, Test Suites, and Test Cases, we can start executing the Test Cases. While running a test case, we have the option of running it the default way with Web Access or using the client (Microsoft Test Manager).

Once we start the test runner, the test case with its steps is displayed in the left-hand pane (around 1/3 of the screen) and the remaining area is available for actual execution, as seen with Microsoft Test Manager.


Once the execution starts and we encounter an error, we can create a bug, add a comment to it, or add an attachment. Once the execution is completed, we can save and close the runner. We have various options to mark the test case as Pass, Fail, Blocked, or Not Applicable. The test case can also be marked as Paused; for a Paused test case, we later get Resume test as the option.


While creating a bug with Web Access, it is possible to add comments and attachments along with the bug; however, we cannot create a rich bug. For creating a rich bug we will have to use Microsoft Test Manager 2013. The links in the form of comments and attachments can be seen in the bug as follows.


While testing with Web Access there are some limitations: basic testing can be executed, but creating a rich bug, viewing test results, and exploratory testing require the client, Microsoft Test Manager 2013.

However, despite these limitations, being able to plan tests, manage a full test suite, and execute test cases right from the lightweight browser-based Team Web Access helps us improve quality in software projects without leaving our favorite IDE workspace.

As the following illustration shows, you can access a number of features from the Home page of TWA. You switch to different views and pages by first choosing one of the context view links at the top and then one of the pages within that context view. You can switch context between teams and team projects from the project context menu toward the top-right of each page. You access the administration pages by choosing the gear-shaped Settings icon.

Home page (Team Web Access)

Important
The links and pages that you have access to depend on (1) the Web Access Permissions group to which you are assigned (Limited, Standard, or Full; see Change access levels), or (2) whether the resource has been configured for your team project or team project collection. The following links appear on the home page for the associated Web Access Permissions group shown in parentheses:

  • View backlog (Full): Opens the Product Backlog page which provides access to both the product backlog and iteration or sprint pages. See Create and organize the product backlog.
  • View board (Full): Opens the Task Board page used to review progress during a sprint and update information for work performed. See Work in sprints.
  • View work items: Opens the Work Items page used to create work items and work item queries. See Query for Bugs, Tasks, or Other Work Items.
  • Request feedback (Full): Opens the Feedback Request form to invite stakeholders to provide feedback. See Request and review feedback.
  • Go to project portal: Requires a project portal has been enabled for your team project. See Access a Team Project Portal or Process Guidance.
  • View reports:  Requires Reporting to be enabled for the instance of TFS. See Add a report server.
  • Open new instance of Visual Studio: Opens an instance of Visual Studio 2012 and automatically connects to the team project context selected in TWA. Requires that you have a recent version of Visual Studio installed.


Access OneDrive for Business using the SharePoint 2013 APIs

OneDrive for Business, a personal cloud library for business, is a place where users can store files and documents, sync them with their devices, and share them with others. It comes as a part of SharePoint Server 2013 or SharePoint Online (Office 365). Essentially it’s a SharePoint Document Library under the covers, so you can access it just like any other document library in SharePoint 2013 using the SharePoint APIs. Whether you use the client-side object model (CSOM) or Representational State Transfer (REST)—it’s your choice. In this post, learn how to construct the REST URLs to access files and folders in OneDrive for Business.

From a user’s perspective, to access your OneDrive for Business library, you simply click OneDrive in the Office 365 menu bar, as shown in Figure 1.

Figure 1. Office 365 menu bar

Or you can always navigate directly there using this URL pattern:
YourUserName_YourO365DomainHere_onmicrosoft_com/.

But that’s from the end-user perspective. How do you access OneDrive for Business as a developer? In this example, we will use REST.

Note: If your Office 365 site is set up to use a custom domain (for example, contoso.com), your MySite URL will be of the pattern contoso_com/.

Start with the basics

  1. Sign in to your Office 365 SharePoint site, and navigate to your OneDrive for Business library using one of the two methods mentioned above.
  2. Click the Shared with Everyone folder and upload a document. For this example, the document name is myDocument.docx.
  3. To use the REST API to view the information on the uploaded document, construct a URL with the following pattern:
    Documents/Shared with Everyone/myDocument.docx')
  4. Copy/paste it into your browser. The XML returned should look like this:
    Figure 2. Example of XML returned by the REST API
  5. To download the document, append /$value to the URL. When prompted to save the file, name it myDocumentDownload.docx, and save it.

Work with documents and other files as “items”

  1. For definitive read/write guidance, see Working with lists and list items with REST on MSDN.
  2. To experiment, upload a couple of files to the root Documents folder in your OneDrive for Business library. Now you can test out a few REST read calls in your signed-in browser.
  3. Using this URL pattern:
    Append lists/Documents/items/ to it. This returns all the items.

    1. To get the metadata for a particular item, modify items/ to items(n)/ where (n) is the specific item number you want to view.
    2. To see the metadata for the file, append file/ (for example, items(n)/file/)
    3. To download the file, append $value (for example, items(n)/file/$value)
  4. You can also use lists/GetByTitle('Documents')/… in place of the above pattern, and the API will return the same results.
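The URL fragments above compose mechanically, so a small helper can build them. The sketch below assumes the standard SharePoint 2013 REST base path (_api/web) and uses the GetByTitle('Documents') form mentioned above; the function name and site URL are illustrative:

```python
def item_url(site_url, n, download=False):
    """Build the REST URL for item n in the Documents library.

    site_url is your MySite URL, e.g. (hypothetically)
    https://contoso-my.sharepoint.com/personal/user_contoso_onmicrosoft_com
    The exact host depends on your tenant.
    """
    url = f"{site_url}/_api/web/lists/GetByTitle('Documents')/items({n})/file"
    if download:
        # Appending /$value retrieves the file content itself.
        url += "/$value"
    return url

# Metadata for item 2, and the download URL for the same file:
print(item_url("https://contoso-my.sharepoint.com/personal/user_contoso_onmicrosoft_com", 2))
print(item_url("https://contoso-my.sharepoint.com/personal/user_contoso_onmicrosoft_com", 2, download=True))
```

Actually issuing the request requires an authenticated session (browser cookie or OAuth token), which is omitted here.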

Work with folders and files

  1. Files are often nested in folders, and you may need to drill down into the folder structure; or you may want to represent the folder structure and files in a user interface (UI). Using the following REST calls, you can work with folders and files in a more logical way than the sequential "items(n)" pattern shown above. This is where getting folders by relative URL and subsequently enumerating all the files within a folder is really handy.
    For definitive read/write guidance, see Working with folders and files with REST on MSDN.
  2. Assume the OneDrive file structure shown in Figure 3, where you have both folders and documents at the same level.
    Figure 3. OneDrive file structure with folders and documents at the same level
  3. To retrieve all the folders, you will use GetFolderByServerRelativeUrl with the following URL pattern:
    To this URL, append GetFolderByServerRelativeUrl('/personal/YourUserName_YourO365DomainHere_onmicrosoft_com/Documents')/folders/.
    All the folders will be returned. You can then subsequently use the ServerRelativeURL property for each folder to continue to “walk down” each folder until you reach its end node.

    Figure 4. ServerRelativeUrl property of a folder
  4. Likewise, if you want to return metadata about all the files in a folder, simply replace folders/ with files/, and all the files will be enumerated.
    Figure 5. ServerRelativeUrl property of a file

    Then, if you want to retrieve the file, use the GetFileByServerRelativeUrl URL pattern, described in the first section above, with /$value appended to the URL.
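The "walk down each folder until you reach its end node" step above is a plain depth-first traversal driven by each folder's ServerRelativeUrl. The sketch below separates the traversal from the REST call itself: `fetch` is a caller-supplied function (hypothetical, stubbed here) that would issue the GetFolderByServerRelativeUrl(...)/folders/ request and return the subfolder ServerRelativeUrl values:

```python
def walk_folders(fetch, folder_relative_url):
    """Recursively enumerate folder ServerRelativeUrls, depth-first.

    `fetch` takes a folder's server-relative URL and returns the list of
    its immediate subfolders' ServerRelativeUrl values (in a real app it
    would call GetFolderByServerRelativeUrl(...)/folders/ over REST).
    """
    found = [folder_relative_url]
    for sub in fetch(folder_relative_url):
        found.extend(walk_folders(fetch, sub))
    return found

# Usage with a stub standing in for the real REST call:
tree = {"/personal/user/Documents": ["/personal/user/Documents/Reports"],
        "/personal/user/Documents/Reports": []}
result = walk_folders(lambda u: tree.get(u, []), "/personal/user/Documents")
```

Keeping the REST call behind `fetch` also makes the traversal trivially testable without a live tenant, as the stub shows.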

The above URL patterns show how to construct the REST calls for use in the browser for simplicity. However, you can readily implement these URL patterns in your code.

For example, if you are developing an app for SharePoint, the app can call into a user’s MySite site collection and access their OneDrive for Business documents using REST or CSOM.

The REST call to get to the file would be:

To programmatically get the OneDrive for Business URL for the signed-in user, you can make a call to the user Profile service:

Remember, your app for SharePoint needs to request the right set of permissions in the app manifest to access OneDrive for Business content (for example, AllSites.Read) and, if using the User Profile service, Social.Read. When you request a token from Access Control Service (ACS), make sure you request the right audience for the OneDrive for Business (MySite) host. Also remember to encode all the query parameters in the URL.

This post does not detail these calls for CSOM, but the CSOM equivalents are available: see the CSOM, JSOM, and REST API Index. Other valuable resources are the articles on how to complete basic operations using CSOM and JSOM, and getting started with SharePoint 2013 REST.

Lastly, for sample code, download the Apps for SharePoint sample pack, which provides examples across C#, REST, and JavaScript. It contains useful samples, including:


Troubleshooting WCF Services during Runtime with WMI

One of the coolest features of WCF when it comes to troubleshooting is the WCF message logging and tracing feature. With message logging you can write all the messages your WCF service receives and returns to a log file. With tracing, you can log any trace message emitted by the WCF infrastructure, as well as traces emitted from your service code.

The issue with message logs and tracing is that you usually turn them off in production, or at the very least, reduce the amount of data they output, mainly to conserve disk space and reduce the latency involved in writing the logs to the disk. Usually this isn’t a problem, until you find yourself in the need to turn them back on, for example when you detect an issue with your service, and you need the log files to track the origin of the problem.

Unfortunately, changing the configuration of your service requires restarting it, which might result in loss of data, your service becoming unavailable for a couple of seconds, and possibly the problem resolving on its own if the strange behavior was due to a faulty state of the service.

There is, however, a way to change the logging configuration of the service at runtime, without restarting it, with the help of the Windows Management Instrumentation (WMI) environment.

In short, WMI provides you with a way to view information about running services in your network. You can view a service’s process information, service information, endpoint configuration, and even change some of the service’s configuration in runtime, without needing to restart the service.

Little has been written about the WMI support in WCF. The basics are documented on MSDN, and contain instructions on what you need to set in your configuration to make the WMI provider available. The MSDN article also provides a link to download the WMI Administrative Tools which you can use to manage services with WMI. However that tool requires some work on your end before getting you to the configuration you need to change, in addition to it requiring you to run IE as an administrator with backwards compatibility set to IE 9, which makes the entire process a bit tedious. Instead, I found it easier to use PowerShell to write six lines of script which do the job.

The following steps demonstrate how to create a WCF service with minimal message logging and tracing configuration, start it, test it, and then use PowerShell with WMI to change the logging configuration in runtime.

  1. Open Visual Studio 2012 and create a new project using the WCF Service Application template.

After the project is created, the service code is shown. Notice that in the GetDataUsingDataContract method, an exception is thrown when the composite parameter is null.

2. In Solution Explorer, right-click the Web.config file and then click Edit WCF Configuration.

3. In the Service Configuration Editor window, click Diagnostics, and then enable the WMI Provider, MessageLogging, and Tracing.

By default, enabling message logging will enable logging of all the message from the transport layer and any malformed message. Enabling tracing will log all activities and any trace message with severity Warning and up (Warning, Error, and Critical). Although those settings are useful during development, in production we probably want to change them so we will get smaller log files with only the most relevant information.

4. Under MessageLogging, click the link next to Log Level, uncheck Transport Messages, and then click OK.

The above setting will only log malformed messages, which are messages that do not fit any of the service’s endpoints, and are therefore rejected by the service.

5. Under Tracing, click the link next to Trace Level, uncheck Activity Tracing, and then click OK.

The above setting will prevent every operation from being logged, except those that output a trace message of Warning severity or higher. You can read more about the different types of trace messages on MSDN.

By default, message logging only logs the headers of a message. To also log the body of a message, we need to change the message logging configuration. Unfortunately, we cannot change that setting in runtime with WMI, so we will set it now.

6. In the configuration tree, expand Diagnostics, click Message Logging, and set the LogEntireMessage property to True.

7. Press Ctrl+S to save the changes, close the configuration editor window, and return to Visual Studio.

The trace listener we are using buffers the output and will only write to the log files when the buffer is full. Since this is a demonstration, we would like to see the output immediately, and therefore we need to change this behavior.

8. In Solution Explorer, open the Web.config file, locate the <system.diagnostics> section, and place the following xml before the </system.diagnostics> closing tag: <trace autoflush="true"/>

Now let us run the service, test it, and check the created log files.

9. In Solution Explorer, click Service1.svc, and then press Ctrl+F5 to start the WCF Test Client without debugging.

10. In the WCF Test Client window, double-click the GetDataUsingDataContract node, and then click Invoke. Repeat this step 2-3 times.

Note: If a Security Warning dialog appears, click OK.

11. In the Request area, open the drop-down next to the composite parameter, and set it to (null).

12. Click Invoke and wait for the exception to show. Notice that the exception is general (“The server was unable to process the request due to an internal error.”) and does not provide any meaningful information about the true exception. Click Close to close the dialog.

Let us now check the content of the two log files. We should be able to see the traced exception, but the message wouldn’t have been logged.

13. Keep the WCF Test Client tool running and return to Visual Studio. Right-click the project and then click Open Folder in File Explorer.

14. In the File Explorer window, double-click the web_tracelog.svclog file. The file will open in the Service Trace Viewer tool.

15. In the Service Trace Viewer tool, click the 000000000000 activity in red, and then click the row starting with “Handling an exception”. In the Formatted tab, scroll down to view the exception information.

As you can see in the above screenshot, the trace file contains the entire exception information, including the message, and the stack trace.

Note: The configuration evaluation warning message which appears first on the list means that the service we are hosting does not have any specific configuration, and therefore is using a set of default endpoints. The two exceptions that follow are ones thrown by WCF after receiving two requests that did not match any endpoint. Those requests originated from the WCF Test Client tool when it attempted to locate the service’s metadata.

Next, we want to verify no message was logged for the above argument exception.

16. Return to the File Explorer window, select the web_messages.svclog file, and drag it to the Service Trace Viewer tool. Drop the file anywhere in the tool.

There are now two new rows for the malformed messages sent by the WCF Test Client metadata fetching. There is no logged message for the faulted service operation.

Imagine this is the state you now have in your production environment. You have a trace file that shows the service is experiencing problems, but you only see the exception. To properly debug such issues we need more information about the request itself, and any other information which might have been traced while processing the request.

To get all that information, we need to turn on activity tracing and include messages from the transport level in our logs.

If we open the Web.config file and change it manually, this would cause the Web application to restart, as discussed before. So instead, we will use WMI to change the configuration settings in runtime.

17. Keep the Service Trace Viewer tool open, and open a PowerShell window as an Administrator.

18. To get the WMI object for the service, type the following commands in the PowerShell window and press Enter:

$wmiService = Get-WmiObject Service -filter "Name='Service1'" -Namespace "root\servicemodel" -ComputerName localhost
$processId = $wmiService.ProcessId
$wmiAppDomain = Get-WmiObject AppDomainInfo -filter "ProcessId=$processId" -Namespace "root\servicemodel" -ComputerName localhost

Note: The above script assumes the name of the service is ‘Service1’. If you have changed the name of the service class, change the script and run it again. If you want to change the configuration of a remote service, replace the localhost value in the ComputerName parameter with your server name.

19. To turn on transport layer message logging, type the following command and press Enter: $wmiAppDomain.LogMessagesAtTransportLevel = $true

20. To turn on activity tracing, type the following command and press Enter: $wmiAppDomain.TraceLevel = "Warning, ActivityTracing"

21. Lastly, to save the changes you made to the service configuration, type the following command and press Enter: $wmiAppDomain.Put()

22. Switch back to the WCF Test Client. In the Request area, open the drop-down next to the composite parameter, and set it to a new CompositeType object. Click Invoke 2-3 times to generate several successful calls to the service.

23. In the Request area, open the drop-down next to the composite parameter, and set it to (null).

24. Click Invoke and wait for the exception to show. Click Close to close the dialog.

25. Switch back to the Service Trace Viewer tool and press F5 to refresh the activities list.

You will notice that now there is a separate set of logs for each request handled by the service. You can read more on how to use the Service Trace Viewer tool to view traces and troubleshoot WCF services on MSDN.

26. From the activity list, select the last row in red that starts with “Process action”.

You will notice that now you can see the request message, the exception thrown in the service operation, and the response message, all in the same place. In addition, the set of traces is shown for each activity separately, making it easy to identify a specific request and its related traces.

27. On the right pane, select the first “Message Log Trace” row, click the Message tab, and observe the body of the message.

Now that we have the logged messages, we can select the request message and try to figure out the cause of the exception. As you can see, the composite parameter is empty (nil).

If this was a production environment, you would probably want to restore the message logging and tracing to their original settings at this point. To do this, return to the PowerShell window, re-run the commands from before with their previous values, and then call $wmiAppDomain.Put() again to save them:

$wmiAppDomain.LogMessagesAtTransportLevel = $false

$wmiAppDomain.TraceLevel = "Warning"


Before we conclude, now that your service is manageable through WMI, you can use other commands to get information about the service and its components. For example, the following command returns the service endpoints’ information: Get-WmiObject Endpoint -filter "ProcessId=$processId" -Namespace "root\servicemodel" -ComputerName localhost

The Exception Assistant in Visual Studio 2013

The Exception Assistant, which appears whenever a run-time exception occurs, shows the type of exception, troubleshooting tips, and corrective actions. The Exception Assistant can also be used to see the details of an exception object.

An exception is an object that inherits from the Exception class. An exception is thrown by code when a problem occurs, and it is passed up the stack until the application handles it or the program fails.

Note
The options available in dialog boxes, and the names and locations of menu commands you see, might differ from what is described in Help, depending on your active settings or edition. This Help page was written with General Development Settings in mind. To change your settings, choose Import and Export Settings on the Tools menu. For more information, see Customizing Development Settings.

The following table lists and describes an exception object’s properties. Depending on the type of exception, not all may appear.

  • Data: An IDictionary object that contains user-defined key/value pairs. The default is an empty collection.
  • FileName: The name of the file that caused the exception.
  • FusionLog: The log file that describes why an assembly load failed.
  • HelpLink: A link to the help file associated with the exception.
  • HResult: A coded numerical value assigned to a specific exception.
  • InnerException: The exception instance that caused the current exception. It is sometimes useful to catch an exception thrown in a helper routine and throw a new exception that is more indicative of the error, thereby providing more information. In such cases, the InnerException property is set to the original exception.
  • Message: The message associated with the exception. It is displayed in the language specified by the CurrentUICulture property of the thread that throws the exception.
  • Source: The name of the application or object that caused the exception. If Source is not set, the name of the assembly where the exception originated is returned.
  • StackTrace: A string representation of the method calls on the call stack at the time the current exception was thrown. The stack trace includes the source-file name and program line number if debugging information is available. StackTrace may not report as many method calls as expected, due to code transformations that occur during optimization. The stack trace is captured immediately before an exception is thrown.
  • TargetSite: The method that throws the current exception. If that method is not available and the stack trace is not a null reference (Nothing in Visual Basic), TargetSite obtains the method from the stack trace. If the stack trace is a null reference, TargetSite also returns a null reference.
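The InnerException and StackTrace patterns described above can be sketched in Python, where `Exception.__cause__` and the `traceback` module play analogous roles (the names in this example are hypothetical, not from the article):

```python
import traceback

class ConfigError(Exception):
    """Hypothetical higher-level exception, more indicative of the error."""

def parse_port(text):
    try:
        return int(text)
    except ValueError as e:
        # Catch the low-level exception from the helper and throw a new,
        # more descriptive one. The original is preserved on __cause__,
        # which corresponds to .NET's InnerException property.
        raise ConfigError(f"invalid port setting: {text!r}") from e

try:
    parse_port("eighty")
except ConfigError as e:
    print(e)                           # the Message analogue
    print(type(e.__cause__).__name__)  # the InnerException analogue
    # The StackTrace analogue: the method calls on the stack when the
    # exception was raised, with file names and line numbers when
    # debugging information is available.
    print("".join(traceback.format_tb(e.__traceback__)))
```

Wrapping the low-level exception rather than swallowing it mirrors the InnerException guidance in the table: the caller sees a meaningful error, and the original cause remains available for diagnosis.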

To find out more about an exception object

  • Click View Details in the Actions pane. A dialog box appears showing the properties of the exception.

The Exception Assistant dialog box appears when a run-time exception is thrown. The Exception Assistant displays the type of exception, provides additional information and links to troubleshooting tips, provides a way to search for additional help online, and allows the user to perform certain actions, such as viewing details of the exception.

To see a topic dealing with troubleshooting the type of exception you have encountered, click one of the tip messages displayed in the Troubleshooting Tips pane.

To perform actions associated with the exception, click one of the actions displayed in the Actions pane.

For information about how to enable or disable the Exception Assistant, see General, Debugging, Options Dialog Box.

Type of Exception
Displays the type of exception thrown.
Additional Information
Displays additional information about the exception.
Troubleshooting Tips
Displays links to troubleshooting tips that may help you discover the source of the exception.
Actions
Lists actions that can be performed, such as seeing more information about the exception object.
Get Help Online
Allows you to search for additional help online.

The ability to debug another process gives you extremely broad powers that you would not otherwise have, especially when debugging remotely. A malicious debugger could inflict widespread damage on the machine being debugged. Because of this, there are restrictions on who can do debugging. For more information, see Remote Debugging Permissions.

However, many developers do not realize that the security threat can also flow in the opposite direction. It is possible for malicious code in the debuggee process to jeopardize the security of the debugging machine: there are a number of security exploits that must be guarded against.

There is an implicit trust relationship between the code you are debugging and the debugger. If you are willing to debug something, you should also be willing to run it. The bottom line is that you must be able to trust what you are debugging. If you cannot trust it, you should not debug it, or you should debug it from a machine that you can afford to jeopardize, in an isolated environment. To reduce the potential attack surface, debugging should be disabled on production machines. For the same reason, debugging should never be left enabled indefinitely.

Here are some general recommendations that apply to all managed debugging.

  • Be careful when attaching to an untrusted user’s process: when you do so, you assume that it is trustworthy. When you attempt to attach to such a process, a security warning dialog box asks you to confirm whether you want to attach; if anything in the dialog looks suspicious, or you are unsure, do not attach. “Trusted users” include you and a set of standard accounts commonly defined on machines that have the .NET Framework installed, such as aspnet, localsystem, networkservice, and localservice. For more information, see Security Warning: Attaching to a process owned by an untrusted user can be dangerous.
  • Be careful when downloading a project from the Internet and loading it into Visual Studio. This is very risky even without debugging: when you do it, you are assuming that the project and the code it contains are trustworthy.

For more information, see Debugging Managed Code.

Local debugging is generally safer than remote debugging, which increases the total surface area that can be probed. The Visual Studio Remote Debugging Monitor (msvsmon.exe) is used in remote debugging, and there are several security recommendations for configuring it. The preferred authentication mode is Windows Authentication; No Authentication mode is insecure.

When using Windows Authentication mode, be aware that granting an untrusted user permission to connect to msvsmon is dangerous, because that user is granted all your permissions on the computer.

Do not debug an unknown process on a remote machine: there are potential exploits that might affect the machine running the debugger, or that might compromise msvsmon.exe, the Visual Studio Remote Debugging Monitor. If you absolutely must debug an unknown process, try debugging locally, and use a firewall to keep any potential threats localized.

For more information, see Remote Debugging in Visual Studio.

It is safer to debug locally, but since you probably do not have Visual Studio installed on the web server, local debugging might not be practical. Generally, debugging Web services is done remotely, except during development, so the recommendations for remote debugging security also apply to Web services debugging. Here are some additional best practices. For more information, see Debugging XML Web Services.

  • Do not enable debugging on a Web server that has been compromised.
  • Make sure you know the Web server is secure before debugging it. If you are not sure it is secure, do not debug it.
  • Be especially careful if you are debugging a Web service that is exposed on the Internet.
Be aware of the trust status of the external components that your program interacts with, especially if you did not write the code. Also consider components that Visual Studio or the debugger might use.

Two Visual Studio tools that require thinking about security are the following:

  • Source Server, which provides you with versions of source code from a source code repository. It is useful when you do not have the current version of a program’s source code. For more information, see Security Warning: Debugger Must Execute Untrusted Command.
  • Symbol Server, which is used to supply the symbols needed to debug a crash during a system call.

For more information, see Specify Symbol (.pdb) and Source Files in the Visual Studio Debugger.

