Category Archives: My CV

ASE 4.0 Availability

Admin Script Editor

We have overcome all the obstacles that were holding us back from this release, so we are finally ready to share our latest build of ASE 4.0. We have tested on Windows 7 and 8, 32-bit and 64-bit, and are not anticipating you will run into any problems. If you do, please make us aware of any issues you encounter using our online ticket system. We will maintain a list of known issues here in the support knowledge base.

We are considering this a release candidate, but it is available to customers only (not as a trial version). In order to download this release you will need to be current with maintenance on your license. We realize that it has been a very long time since we had an official release, and we made it clear we would honor any expired maintenance extensions once we released. We’ve decided to do better than that and have…

View original post 106 more words


XI/PI: Understanding the RFC Adapter

SAP XI provides different ways for SAP systems to communicate via SAP XI. You have three options, namely IDoc Adapters, RFC Adapters and Proxies. In one of the earlier posts that covered your first XI scenario, we learned how to configure the IDoc receiver adapter. In the coming articles, I shall throw light on the different adapters. This article specifically deals with the basics of the RFC adapter on the sender and the receiver side.

SAP XI RFC Sender Adapter

The RFC Adapter converts incoming RFC calls to XML, and XML messages to outgoing RFC calls. We can have both synchronous (sRFC) and asynchronous (tRFC) communication with SAP systems. The former works with the Best Effort QoS (Quality of Service) while the latter works with Exactly Once (EO).

Unlike IDoc adapter, RFC Adapter is installed on the J2EE Adapter Engine and can be monitored via Adapter Monitoring and Communication Channel Monitoring in the Runtime Workbench.

Now let us understand the configuration needed to set up RFC communication.

RFC Sender Adapter

In this case, Sender SAP system requests XI Integration Engine to process RFC calls. This could either be synchronous or asynchronous.

On the source SAP system, go to transaction SM59 and create a new RFC connection of type ‘T’ (TCP/IP Connection). On the Technical Settings tab, select “Registered Server Program” radio button and specify an arbitrary Program ID. Note that the same program ID must be specified in the configuration of the sender adapter communication channel. Also note that this program ID is case-sensitive.

When calling the RFC in your ABAP program you should specify the RFC destination created above, for example `CALL FUNCTION 'Z_MY_FUNC' DESTINATION 'XI_DEST'` (the function module and destination names here are placeholders).


Also, in case you are setting up an asynchronous interface, the RFC should be called in the background, for example `CALL FUNCTION 'Z_MY_FUNC' IN BACKGROUND TASK DESTINATION 'XI_DEST'`, followed by a `COMMIT WORK` (again, placeholder names).


SAP XI RFC Receiver Adapter

Now, create the relevant communication channel in the XI Integration Directory. Select the Adapter Type as RFC Sender (please see the figure above). Specify the Application server and Gateway service of the sender SAP system. Specify exactly the same program ID that you provided while creating the RFC destination in the SAP system; note that this program ID is case-sensitive. Provide the Application server details and logon credentials in the RFC metadata repository parameter. Save and activate the channel. Note that the RFC definition that you import in the Integration Repository is used only at design time. At runtime, XI loads the metadata from the sender SAP system by using the credentials provided here.

RFC Receiver Adapter

In this case, XI sends the data in the RFC format (after conversion from XML format by the receiver adapter) to the target system where the RFC is executed.

Configuring the receiver adapter is even simpler. Create a communication channel in ID of type RFC Receiver (please see the figure above on the left). Specify the RFC Client parameters like the Application server details, logon credentials etc., and activate the channel.

Testing the Connectivity

Sometimes, especially when new SAP environments are set up, you may want to test their RFC connectivity to SAP XI before you create your actual RFC-based interfaces/scenarios. There is a quick and easy way to accomplish this.

STFC_CONNECTION Input

Create an RFC destination of type ‘T’ in the SAP system as described previously. Then, go to the XI Integration Repository and import the RFC Function Module STFC_CONNECTION from the SAP system. Activate your change list.

Configure sender and receiver communication channels in ID by specifying the relevant parameters of the SAP system as discussed previously. Remember that the Program ID in sender communication channel and RFC destination in SAP system must match (case-sensitive).

STFC_CONNECTION Output

Accordingly, complete the remaining ID configuration objects like Sender Agreement, Receiver Determination, Interface Determination and Receiver Agreement. No Interface Mapping is necessary. Activate your change list.

Now, go back to the SAP system and execute the function module STFC_CONNECTION using transaction SE37. Specify the above RFC destination in ‘RFC target sys’ input box. You can specify any arbitrary input as REQUTEXT. If everything works fine, you should receive the same text as a response. You can also see two corresponding messages in SXMB_MONI transaction in SAP XI. This verifies the connection between SAP system and SAP XI.

How To : Use jQuery and JSON in MVC 5 for Autocomplete


Imagine that you want to create an edit view for a Company entity which has two properties: Name (type string) and Boss (type Person). You want both properties to be editable. For Company.Name a simple text input is enough, but for Company.Boss you want to use the jQuery UI Autocomplete widget. This widget has to meet the following requirements:

  • suggestions should appear when user starts typing person’s last name or presses down arrow key;
  • identifier of person selected as boss should be sent to the server;
  • items in the list should provide additional information (first name and date of birth);
  • user has to select one of the suggested items (arbitrary text is not acceptable);
  • the boss property should be validated (with validation message and style set for appropriate input field).

The requirements above appear quite often in web applications. I’ve seen many over-complicated ways in which they were implemented. I want to show you how to do it quickly and cleanly… The assumption is that you have basic knowledge about jQuery UI Autocomplete and ASP.NET MVC. In this post I will show only the code which is related to the autocomplete functionality, but you can download the full demo project here. It’s an ASP.NET MVC 5/Entity Framework 6/jQuery UI 1.10.4 project created in Visual Studio 2013 Express for Web and tested in Chrome 34, FF 28 and IE 11 (in 11 and 8 mode).

So here are our domain classes:

public class Company
{
    public int Id { get; set; }

    public string Name { get; set; }

    public Person Boss { get; set; }
}

public class Person
{
    public int Id { get; set; }

    [DisplayName("First Name")]
    public string FirstName { get; set; }

    [DisplayName("Last Name")]
    public string LastName { get; set; }

    [DisplayName("Date of Birth")]
    public DateTime DateOfBirth { get; set; }

    public override string ToString()
    {
        return string.Format("{0}, {1} ({2})", LastName, FirstName, DateOfBirth.ToShortDateString());
    }
}
Nothing fancy there: a few properties with standard attributes for validation and good-looking display. The Person class has a ToString override – the text from this method will be used in the autocomplete suggestions list.

Edit view for Company is based on this view model:

public class CompanyEditViewModel
{
    public int Id { get; set; }

    public string Name { get; set; }

    public int BossId { get; set; }

    [Required(ErrorMessage="Please select the boss")]
    public string BossLastName { get; set; }
}

Notice that there are two properties for Boss related data.

Below is the part of edit view that is responsible for displaying input field with jQuery UI Autocomplete widget for Boss property:

<div class="form-group">
    @Html.LabelFor(model => model.BossLastName, new { @class = "control-label col-md-2" })
    <div class="col-md-10">
        @Html.TextBoxFor(Model => Model.BossLastName, new { @class = "autocomplete-with-hidden", data_url = Url.Action("GetListForAutocomplete", "Person") })
        @Html.HiddenFor(Model => Model.BossId)
        @Html.ValidationMessageFor(model => model.BossLastName)
    </div>
</div>

form-group and col-md-10 classes belong to the Bootstrap framework which is used in the MVC 5 web project template – don’t bother with them. The BossLastName property is used for the label, the visible input field and the validation message. There’s a hidden input field which stores the identifier of the selected boss (Person entity). The @Html.TextBoxFor helper which is responsible for rendering the visible input field defines a class and a data attribute. The autocomplete-with-hidden class marks inputs that should obtain the widget. The data-url attribute value holds the address of the action method that provides data for the autocomplete. Using Url.Action is better than hardcoding such an address in a JavaScript file because the helper takes into account routing rules, which might change.

This is HTML markup that is produced by above Razor code:

<div class="form-group">
    <label class="control-label col-md-2" for="BossLastName">Boss</label>
    <div class="col-md-10">
        <span class="ui-helper-hidden-accessible" role="status" aria-live="polite"></span>
        <input name="BossLastName" class="autocomplete-with-hidden ui-autocomplete-input" id="BossLastName" type="text" value="Kowalski" 
         data-val-required="Please select the boss" data-val="true" data-url="/Person/GetListForAutocomplete" autocomplete="off">
        <input name="BossId" id="BossId" type="hidden" value="4" data-val-required="The BossId field is required." data-val-number="The field BossId must be a number." data-val="true">
        <span class="field-validation-valid" data-valmsg-replace="true" data-valmsg-for="BossLastName"></span>
    </div>
</div>

This is JavaScript code responsible for installing jQuery UI Autocomplete widget:

$(function () {
    $('.autocomplete-with-hidden').autocomplete({
        minLength: 0,
        source: function (request, response) {
            var url = $(this.element).data('url');
            $.getJSON(url, { term: request.term }, function (data) { response(data); });
        },
        select: function (event, ui) {
            $(this).siblings(':hidden').val(ui.item.id); // store the selected person's id
        },
        change: function (event, ui) {
            if (!ui.item) { // arbitrary text entered: clear both fields
                $(this).val('');
                $(this).siblings(':hidden').val('');
            }
        }
    });
});
The widget’s source option is set to a function. This function pulls data from the server with a $.getJSON call. The URL is extracted from the data-url attribute. If you want to control caching or provide error handling, you may want to switch to the $.ajax function. The purpose of the change event handler is to ensure that values for BossId and BossLastName are set only if the user selected an item from the suggestions list.
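Whichever transport you choose, the contract of the source function stays the same: take the typed term from request and pass an array of matching items to response. A minimal transport-free sketch of that contract, using hypothetical in-memory data instead of a server call:

```javascript
// Hypothetical data standing in for the server response
var people = [
    { id: 1, value: 'Kowalski', label: 'Kowalski, Jan (1/1/1980)' },
    { id: 2, value: 'Nowak', label: 'Nowak, Anna (2/2/1985)' }
];

// The widget calls source(request, response) as the user types
function source(request, response) {
    var term = (request.term || '').toUpperCase();
    response(people.filter(function (p) {
        return p.value.toUpperCase().indexOf(term) === 0;
    }));
}

source({ term: 'kow' }, function (matches) {
    console.log(matches.length + ' match: ' + matches[0].label);
});
```

An empty term returns the whole list, which is what makes the down-arrow-with-no-text behaviour (minLength: 0) work.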

This is the action method that provides data for autocomplete:

public JsonResult GetListForAutocomplete(string term)
{
    Person[] matching = string.IsNullOrWhiteSpace(term) ?
        db.Persons.ToArray() :
        db.Persons.Where(p => p.LastName.ToUpper().StartsWith(term.ToUpper())).ToArray();

    return Json(matching.Select(m => new { id = m.Id, value = m.LastName, label = m.ToString() }), JsonRequestBehavior.AllowGet);
}

value and label are standard properties expected by the widget. label determines the text which is shown in the suggestions list; value designates what data is presented in the input field on which the widget is installed. id is a custom property indicating which Person entity was selected. It is used in the select event handler, where it is set as the value of the hidden input field – this way it will be sent in the HTTP request when the user decides to save Company data.
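For illustration, a response from this action might look like the following (the person data here is hypothetical); the snippet shows which property each consumer reads:

```javascript
// Hypothetical JSON payload returned by the autocomplete action
var payload = '[{"id":4,"value":"Kowalski","label":"Kowalski, Jan (1/1/1980)"}]';

var items = JSON.parse(payload);
console.log(items[0].label); // shown in the suggestions list
console.log(items[0].value); // placed in the visible input field
console.log(items[0].id);    // stored in the hidden BossId field
```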

Finally this is the controller method responsible for saving Company data:

public ActionResult Edit([Bind(Include="Id,Name,BossId,BossLastName")] CompanyEditViewModel companyEdit)
{
    if (ModelState.IsValid)
    {
        Company company = db.Companies.Find(companyEdit.Id);
        if (company == null)
        {
            return HttpNotFound();
        }

        company.Name = companyEdit.Name;

        Person boss = db.Persons.Find(companyEdit.BossId);
        company.Boss = boss;

        db.Entry(company).State = EntityState.Modified;
        db.SaveChanges(); // persist the changes before redirecting
        return RedirectToAction("Index");
    }

    return View(companyEdit);
}

Pretty standard stuff. If you’ve ever used Entity Framework, the above method should be clear to you. If it’s not, don’t worry. For the purpose of this post the important thing to notice is that we can use companyEdit.BossId because it was properly filled by the model binder thanks to our hidden input field.

That’s it, all requirements are met! Easy, huh?

You may be wondering why I want to use jQuery UI widget in Visual Studio 2013 project which by default uses Twitter Bootstrap. It’s true that Bootstrap has some widgets and plugins but after a bit of experimentation I’ve found that for some more complicated scenarios jQ UI does a better job. The set of controls is simply more mature…

How To : Use SharePoint Dashboards & MSRS Reports for your Agile Development Life Cycle

The Problem We Solve

Agile BI is not a term many would associate with MSRS Reports and SharePoint Dashboards. While many organizations first turn to the Microsoft BI stack because of its familiarity, stitching together Microsoft’s patchwork of SharePoint, SQL Server, SSAS, MSRS, and Office creates administrative headaches and requires considerable time spent integrating and writing custom code.

This Showcase outlines the ease of accomplishing three of the most fundamental BI tasks with LogiXML technology as compared to MSRS and SharePoint:

  • Building a dashboard with multiple data sources
  • Creating interactive reports that reduce the load on IT by providing users self-service
  • Integrating disparate data sources

Read below to learn how an agile BI methodology can make your life much easier when it comes to dashboards and reports. Don’t feel like reading?

Building a Dashboard with LogiXML vs. MSRS + SharePoint

Microsoft’s options for dashboards are to write your own code from scratch, manipulate SharePoint to serve a purpose for which it wasn’t initially designed, or look to third-party apps. Below are some of the limitations of Microsoft’s approach to dashboards:

  • Limited Pre-Built Elements: Microsoft components come with only limited libraries of pre-built elements. In addition to actual development work, you will need to come up with an idea of how everything will work together. This necessitates becoming familiar with best practices in dashboards and reporting.
  • Sophisticated Development Expertise Required: While Microsoft components provide basic capabilities, anything more sophisticated is development resource-intensive and requires you to take on design, execution, and delivery. Any complex report visualizations and logic, such as interactive filters, must be written in code by the developer.
  • Limited Charts and Visualizations: Microsoft has a smaller sub-set of charts and visualization tools. If you want access to the complete library of .NET-capable charts, you’ll still need to OEM another charting solution at additional expense.
  • Lack of Integrated Workflow: Microsoft does not include workflow feature sets out of the box in their BI offering.

LogiXML technology is centered on Logi Studio: an elemental, agile BI design environment which lets you simply choose from hundreds of powerful and configurable pre-built elements. Logi’s pre-built elements equip developers with tools to speed development, as well as the processes and logic required to build and manage BI projects. Below is a screen shot of the Logi Studio while building new dashboards.


Start a free LogiXML trial now.

Logi developers can easily create static or user-customizable dashboards using the Dashboard element. A dashboard is a collection of panels containing Logi reports, which in turn contain tables, charts, images, etc. At runtime, the user can customize the dashboard by rearranging these panels on the browser page, by showing or hiding them, and even by changing their contents using adjustable reporting criteria. The data displayed within the panels can be configured, as in any Logi report, to link to other reports, providing drill-down functionality.



The dashboard displayed above has tabs and user customization enabled. The Dashboard element provides customization features, such as drag-and-drop panel positioning, support for built-in parameters the user can access to adjust the panel’s data contents, and a panel selection list that determines which panels will be displayed. AJAX techniques are utilized for web server interactions, allowing selective updates of portions of the dashboard. Dashboard customizations can be saved on an individual-user basis to create a highly personalized view of the data.

The Dashboard Wizard

The ‘Create a Dashboard’ wizard assists developers in creating dashboards by populating the report definition with the necessary dashboard-related elements. You can easily point to any data source by selecting from a variety of DataLayer types, including SQL, StoredProcedures, Web Services, Files, and more. A simple to use drag and drop SQL Query builder is also integrated, to offer a guided approach to constructing queries when connecting to your database.


Using the Dashboard Element

The Dashboard element is used to create the top level structure for all of your interactive panels within the final output. Under your dashboards, you can optionally add any number of Dashboard Panels, Panel Parameters for dynamic filtering, and even automatic refresh features with AJAX-based refresh timers.


Changing Appearance Using Themes and Style Sheets

The appearance of a dashboard can be changed easily by assigning a theme to your report. In addition, or as an alternative, you can change dashboard appearance using style sheets. The Dashboard element has its own Cascading Style Sheet (CSS) file containing predefined classes that affect the display colors, font sizes, button labels, and spacing seen when the dashboard is displayed. You can override these classes by adding classes with the same name to your own style sheet file.

See us build a BI app with 3 data sources in under 10 minutes.

Ad Hoc Reporting Creation with LogiXML: Analysis Grid

The Analysis Grid is a managed reporting feature giving end users virtual ad hoc capability. It is an easy to use tool that allows business users to analyze and manipulate data and outputs in multiple and powerful ways.


Start a free LogiXML trial now.

Create an Analysis Grid by using the “Create Analysis Grid” wizard, or by simply adding the AnalysisGrid element into your definition file. Like the dashboard, data for the Analysis Grid can be accessed from any of the data options, including SQL databases, web sources, or files. You also have the option to launch the interactive query builder wizard for easy, drag-drop, SQL query creation.

The Analysis Grid is composed of three main parts: the data grid itself, i.e. a table of data to be analyzed; various action buttons at the top, allowing the user to perform actions such as create new columns with custom calculations, sort columns, add charts, and perform aggregations; and the ability to export the grid to Excel, CSV, or PDF format.

The Analysis Grid makes it easy to perform what-if analyses through features like filtering. The Grid also makes data-presentation impactful through visualization features including data driven color formatting, inline gauges, and custom formula creation.

Ad Hoc Reporting Creation with Microsoft

While simple ad hoc capabilities, such as enabling the selection of parameters like date ranges, can be accomplished quickly and easily with Microsoft, more sophisticated ad hoc analysis is challenging due to the following shortcomings.

Platform Integration Problems

Microsoft’s BI strategy is not unified and is strongly tied to SQL Server. To obtain analysis capabilities, you must build cubes with Analysis Services, a separate product with its own security architecture. Next, you will need to build reports that talk to SQL Server, again using separate products.

Dashboards require a SharePoint portal which is, again, a separate product with separate requirements and licensing. If you don’t use this, you must completely code your dashboards from scratch. Unfortunately, Microsoft Reporting Services doesn’t play well with Analysis Services or SharePoint since these were built on different technologies.

SharePoint itself offers an out of the box portal and dashboard solution but unfortunately with a number of significant shortcomings. SharePoint was designed as a document management and collaboration tool as opposed to an interactive BI dashboard solution. Therefore, in order to have a dashboard solution optimized for BI, reporting, and interactivity you are faced with two options:

  • Build it yourself using .NET and a combination of third party components
  • Buy a separate third party product

Many IT professionals find these to be rather unappealing options, since they require evaluating a new product or components, and/or a lot of work to build and make sure it integrates with the rest of the Microsoft stack.

Additionally, while SQL Server and other products support different types of security architectures, Analysis Services only has support for using integrated Windows NT security models to access cubes and therefore creates integration challenges.

Moreover, for client/ad hoc tools, you need Report Writer, a desktop product, or Excel – another desktop application. In addition to requiring separate licenses, these products don’t even talk to one another in the same ways, as they were built by different companies and subsequently acquired by Microsoft.

Each product requires a separate and often disconnected development environment with different design and administration features. Therefore to manage Microsoft BI, you must have all of these development environments available and know how to use them all.

Integration of Various Data Sources: LogiXML vs. Microsoft

LogiXML is data neutral, allowing you to easily connect to all of your organization’s data spread across multiple applications and databases. You can connect with any data source or data model and even combine data sources such as current data accessed through a web service with past data in spreadsheets.

Integration of Various Data Sources with Microsoft

Working with Microsoft components for BI means you will be faced with the challenge of limited support for non-Microsoft based databases and outside data sources. The Microsoft BI stack is centered on SQL Server databases and therefore the data source is optimized to work with SQL Server. Unfortunately, if you need outside content it can be very difficult to integrate.

Finally, Microsoft BI tools are designed with the total Microsoft experience in mind and are therefore optimized for Internet Explorer. While other browsers and devices might be usable, the experience isn’t optimized and may lack features or render differently.


Free & Licensed Windows 8, Azure, Office 365, SharePoint On-Premise and Online Tools, Web Parts, Apps available.
For more detail visit or contact me at

Building Distributed Node.js Apps with Azure Service Bus Queue

Azure Service Bus Queues provide queue-based, brokered messaging communication between apps, which lets developers build distributed apps on the Cloud and also for hybrid Cloud environments. Azure Service Bus Queues provide a First In, First Out (FIFO) messaging infrastructure. Service Bus Queues can be leveraged for communicating between apps, whether the apps are hosted on the cloud or on on-premises servers.


Service Bus Queues are primarily used for distributing application logic into multiple apps. For example, let’s say we are building an order processing app with a frontend web app for receiving orders from customers, and we want to move the order processing logic into a backend service where we can implement it in an asynchronous manner. In this scenario, the frontend app creates an order processing message in the Service Bus Queue, and the backend order processing app receives the message from the Queue and processes the request in an efficient manner. This approach also enables better scalability as we can scale up the frontend app and backend app separately.

For this sample scenario, we can deploy the frontend app onto an Azure Web Role and the backend app onto an Azure Worker Role and can separately scale up both the Web Role and Worker Role apps. We can also use Service Bus Queues for hybrid Cloud scenarios where we communicate between apps hosted on the Cloud and on-premises servers.

Using Azure Service Bus Queues in Node.js Apps

In order to work with Azure Service Bus, we need to create a Service Bus namespace from the Azure portal.


We can take the connection information of Service Bus namespace from the Connection Information tab in the bottom section, after choosing the Service Bus namespace.


Creating the Service Bus Client

Firstly, we need to install the npm module azure to work with Azure services from a Node.js app.

npm install azure

The code block below creates a Service Bus client object using the Node.js module azure.

var azure = require('azure');
var config=require('./config');

var serviceBusClient = azure.createServiceBusService(config.sbConnection);

We create the Service Bus client object by using the createServiceBusService method of the azure module. In the above code block, we pass the Service Bus connection info from a config file. The azure module can also read the environment variables AZURE_SERVICEBUS_NAMESPACE and AZURE_SERVICEBUS_ACCESS_KEY for the information required to connect with Azure Service Bus, in which case we can call createServiceBusService without specifying the connection information.
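That fallback pattern can be sketched as below. Note this is only an illustration of the idea: resolveConnection is a hypothetical helper written for this post, not part of the azure module, and the connection-string format shown is an assumption.

```javascript
// Hypothetical helper mirroring the fallback described above: prefer an
// explicit connection string, otherwise build one from environment variables.
function resolveConnection(explicit) {
    if (explicit) { return explicit; }
    var ns = process.env.AZURE_SERVICEBUS_NAMESPACE;
    var key = process.env.AZURE_SERVICEBUS_ACCESS_KEY;
    // Connection-string format shown for illustration only
    return 'Endpoint=sb://' + ns + '.servicebus.windows.net/;SharedAccessKey=' + key;
}

process.env.AZURE_SERVICEBUS_NAMESPACE = 'mynamespace';
process.env.AZURE_SERVICEBUS_ACCESS_KEY = 'secret';
console.log(resolveConnection(null));
```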

Creating a Service Bus Queue

The createQueueIfNotExists method of the Service Bus client object returns the queue if it already exists, or creates a new queue if it does not.

var azure = require('azure');
var config = require('./config');
var queue = 'ordersqueue';

var serviceBusClient = azure.createServiceBusService(config.sbConnection);

function createQueue() {
    serviceBusClient.createQueueIfNotExists(queue, function (error) {
        if (error) {
            console.log(error);
        } else {
            console.log('Queue ' + queue + ' exists');
        }
    });
}

Sending Messages to the Service Bus Queue

The function sendMessage below sends a given message to the Service Bus Queue.

function sendMessage(message) {
    serviceBusClient.sendQueueMessage(queue, message, function (error) {
        if (error) { console.log(error); }
        else { console.log('Message sent to queue'); }
    });
}

The following code creates the queue and sends a message to it by calling the methods createQueue and sendMessage which we created in the previous steps.

var orderMessage = {
    "OrderId": 101, // arbitrary example value
    "OrderDate": new Date().toDateString()
};
createQueue();
sendMessage(JSON.stringify(orderMessage));
We create a JSON object with the properties OrderId and OrderDate and send it to the Service Bus Queue. We can send these messages to the Queue for communicating with other apps, where the receiver apps can read the messages from the Queue and perform the application logic based on the messages we have provided.
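Since a queue message travels as a string, the round trip between sender and receiver is plain JSON serialization. A minimal sketch, with an arbitrary example OrderId:

```javascript
var order = {
    OrderId: 101, // arbitrary example value
    OrderDate: new Date().toDateString()
};

// The sender serializes the object before putting it on the queue...
var body = JSON.stringify(order);

// ...and the receiver recovers it from message.body
var received = JSON.parse(body);
console.log('Processing Order# ' + received.OrderId + ' placed on ' + received.OrderDate);
```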

Receiving Messages from the Service Bus Queue

Typically, we will receive the Service Bus Queue messages from a backend app. The code block below receives messages from the Service Bus Queue and extracts information from the JSON data.

var azure = require('azure');
var config = require('./config');
var queue = 'ordersqueue';

var serviceBusClient = azure.createServiceBusService(config.sbConnection);

function receiveMessages() {
    serviceBusClient.receiveQueueMessage(queue,
      function (error, message) {
        if (error) {
            console.log(error);
        } else {
            var order = JSON.parse(message.body);
            console.log('Processing Order# ' + order.OrderId
                + ' placed on ' + order.OrderDate);
        }
    });
}

By default, messages are deleted from the Service Bus Queue after being read. This behaviour can be changed by specifying the optional parameter isPeekLock as true, as shown in the code block below.

function receiveMessages() {
    serviceBusClient.receiveQueueMessage(queue, { isPeekLock: true },
      function (error, message) {
        if (error) {
            console.log(error);
        } else {
            var order = JSON.parse(message.body);
            console.log('Processing Order# ' + order.OrderId
                + ' placed on ' + order.OrderDate);
            serviceBusClient.deleteMessage(message,
              function (deleteError) {
                if (!deleteError) {
                    console.log('Message deleted from Queue');
                }
            });
        }
    });
}

Here the message will not be automatically deleted from the Queue, and we can explicitly delete the messages from the Queue after reading them and successfully implementing the application logic.

Hadoop : The Basics

Problems with conventional database system

While a large number of CPU cores can be placed in a single server, it’s not feasible to deliver input data (especially big data) to those cores fast enough for processing. Using hard drives that can individually sustain read speeds of approx. 100 MB/s, and 4 independent I/O channels, merely reading a 4 TB data set takes close to three hours. Thus a distributed system with many servers working on the problem in parallel is necessary in the big data domain.
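Plugging those figures in makes the bottleneck concrete (the numbers below simply restate the assumptions above):

```javascript
// Back-of-the-envelope scan time for a single machine
var dataSetBytes = 4e12; // 4 TB
var mbPerSecond = 100;   // sustained read speed per drive
var channels = 4;        // independent I/O channels
var seconds = dataSetBytes / (mbPerSecond * 1e6 * channels);
console.log((seconds / 3600).toFixed(1) + ' hours'); // prints "2.8 hours"
```

And that is for a single pass over the data; iterative workloads multiply this cost, which is why spreading the reads over many machines pays off.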

Solution: Apache Hadoop Framework

The Apache Hadoop framework supports distributed processing of large data sets using a cluster of commodity hardware that can scale up to thousands of machines. Each node in the cluster offers local computation and storage and is assumed to be prone to failures. It’s designed to detect and handle failures at the application layer, and therefore transparently delivers a highly-available service without the need for expensive hardware or complex programming. Performing distributed computing on large volumes of data has been done before; what sets Hadoop apart is its simplified programming model for client applications and seamless handling of distribution of data and work across the cluster.

Architecture of Hadoop

Let’s begin by looking at the basic architecture of Hadoop. A typical Hadoop multi-machine cluster consists of one or two “master” nodes (running NameNode and JobTracker processes), and many “slave” or “worker” nodes (running TaskTracker and DataNode processes) spread across many racks. Two main components of the Hadoop framework are described below – a distributed file system to store large amounts of data, and a computational paradigm called MapReduce.

Hadoop multi node cluster

Hadoop Distributed File System (HDFS)

Since the complete data set is unlikely to fit on a single computer’s hard drive, a distributed file system which breaks up input data and stores it on different machines in the cluster is needed. Hadoop Distributed File System (HDFS) is a distributed and scalable file system which is included in the Hadoop framework. It is designed to store a very large amount of information (terabytes or petabytes) reliably and is optimized for long sequential streaming reads rather than random access into the files. HDFS also provides data location awareness (such as the name of the rack or the network switch where a node is). Reliability is achieved by replicating the data across multiple nodes in the cluster rather than traditional means such as RAID storage. The default replication value is 3, so data is stored on three nodes – two on the same rack, and one on a different rack. Thus a single machine failure does not result in any data being unavailable.

Individual machines in the cluster that store blocks of individual files are referred to as DataNodes. DataNodes communicate with each other to rebalance data and to re-replicate it in response to system failures. The Hadoop framework schedules processes on the DataNodes that operate on the local subset of data (moving computation to the data instead of the other way around), so data is read from the local disk into the CPU without network transfers, achieving high performance.

The metadata for the file system is stored by a single machine called the NameNode. The large block size and low amount of metadata per file allow the NameNode to store all of this information in main memory, allowing fast access to the metadata from clients. To open a file, a client contacts the NameNode, retrieves a list of DataNodes that contain the blocks that comprise the file, and then reads the file data in bulk directly from the DataNode servers in parallel, without directly involving the NameNode. A Secondary NameNode regularly connects to the primary NameNode and builds snapshots of the directory information, mitigating the NameNode's role as a single point of failure.

The Windows Azure HDInsight Service supports HDFS for storing data, but also uses an alternative approach called Azure Storage Vault (ASV) which provides a seamless HDFS interface to Azure Blob Storage, a general purpose Windows Azure storage solution that does not co-locate compute with storage, but offers other benefits. In our next blog, we will explore the HDInsight service in more detail.

MapReduce Programming Model

Hadoop programs must be written to conform to the “MapReduce” programming model, which is designed for processing large volumes of data in parallel by dividing the work into a set of independent tasks. The records are initially processed in isolation by tasks called Mappers, and their output is then brought together by a second set of tasks called Reducers, as shown below.
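The flow of records through these phases is easiest to see in miniature. The following single-process Python sketch of a word count shows the map, shuffle, and reduce steps; it is purely illustrative, since real Hadoop jobs are written against the MapReduce API, typically in Java.

```python
from collections import defaultdict
from itertools import chain

def mapper(line):
    # Mappers process each record in isolation, emitting (key, value) pairs.
    for word in line.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # The framework groups all values that share a key ("shuffle" phase).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(key, values):
    # Reducers combine the grouped values for each key.
    return (key, sum(values))

lines = ["the quick brown fox", "the lazy dog", "the fox"]
pairs = chain.from_iterable(mapper(line) for line in lines)
counts = dict(reducer(k, v) for k, v in shuffle(pairs).items())
print(counts["the"])  # 3
```

In Hadoop, the mappers and reducers run on different machines, and the shuffle step is the cross-machine data transfer described in the following paragraphs; the division of labor is the same.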

MapReduce Process

MapReduce input comes from files loaded into HDFS on the processing cluster. Client applications submit MapReduce jobs to the JobTracker node, which divides and pushes work out to available TaskTracker nodes in the cluster while trying to keep the work as close to the data as possible. Hadoop manages cluster topology internally: the rack-aware HDFS file system lets the JobTracker know which nodes contain the data and which other machines are nearby. If the work cannot be hosted on one of the nodes where the data resides, priority is given to nodes in the same rack. This reduces the data moved across the network.

When the mapping phase completes, the intermediate (key, value) pairs are exchanged between machines to send all values with the same key to a single reducer. The reduce tasks are spread across the same nodes in the cluster as the mappers. This data transfer is handled by the Hadoop infrastructure (guided by the different keys and their associated values) without the individual map or reduce tasks communicating or being aware of one another’s existence. Each TaskTracker frequently sends a heartbeat to the JobTracker to update its status. If any node or TaskTracker in the cluster fails or times out, that part of the job is rescheduled by the underlying Hadoop layer without any explicit action by the workers. The TaskTracker spawns each task in a separate Java Virtual Machine process, so that the TaskTracker itself does not fail if the running job crashes its JVM. User-level tasks do not communicate explicitly with one another, and workers continue to operate, leaving the challenging aspects of partially restarting the program to the underlying Hadoop layer. Thus the Hadoop distributed system is reliable and fault tolerant.

Hadoop also has a very flat scalability curve: a Hadoop program requires no recoding to work on a much larger data set by using a larger cluster of machines. Hadoop is designed for work that is batch-oriented rather than real-time in nature (due to the overhead involved in starting MapReduce programs), is very data-intensive, and lends itself to processing pieces of data in parallel. This includes use cases such as log or clickstream analysis, sophisticated data mining, web crawling and indexing, archiving data for compliance, and so on.

Select Master Page App for SharePoint 2013 now available!! (Get the SharePoint 2010 Select Master Page Web Part Free)

In publishing sites, there is a layouts (application) page through which we can set a custom
or another master page as the default master page. Unfortunately, this is missing in Team Sites.

This is what this solution is all about. It is targeted mainly at Team Sites, since publishing sites already have this provision.

It adds a custom ribbon button to the Share and Track group of the Files tab in the Master Page Gallery. This is a SharePoint 2013 hosted app. Refer to the documentation for the technical details.


The following screen shots depict the functionality.


The custom ribbon button will not be enabled if a folder is selected or if more than one item is selected.
But if a file is selected, the button will be enabled, irrespective of the file extension. Upon selecting a file and clicking the ribbon button, a pop-up dialog will appear with the text “Working on it..”.

Then a confirmation alert will appear, asking “Are you sure?”. Once confirmed by the user, a progress message will be displayed in the pop-up dialog. If the selected file does not have the .master extension, the user will be shown the alert “This will work only for master pages.”.

If a master page that is already set as the default is selected and the ribbon button is clicked, the user will be shown the alert “The file at <url> is the current default master page. So please select another master page.”. If another master page is selected, the user will be shown the alert “Master Page Changed Successfully. Please press CTRL + F5 for changes to reflect.”. Once the user clicks OK on the alert, the pop-up dialog closes, and pressing CTRL + F5 will reflect the updated master page. Any time the user clicks OK or Cancel on the alert screens, the parent screen will be refreshed and the current selection will be cleared.

The app requires Full Control on the host web, since this is required for setting the master page, and that’s precisely the reason why I couldn’t publish this in the Office Store.

The app has been tested on IE9 and the latest versions of Chrome and Firefox. It may not work on IE8 or on older versions of other browsers if they don’t support HTML5. Also, the app currently supports only English, and it will set the default master page only on the host web (where the app is installed), not on the sub-webs.

The app uses jQuery AJAX and REST APIs of SharePoint 2013.

To use the app, just upload the .app file to the App Catalog, add/install it to the host team site, trust it, and navigate to the Master Page Gallery. You are good to go.


With this App, you will also receive the FREE SharePoint 2010 Select Master Page Web Part!!

It adds a custom ribbon button to the Share and Track group of the Documents tab in the Master Page Gallery.

It is a sandboxed solution, implemented to set the master page of only the root site of a site collection, though it can be customized / extended for sub-sites. It requires a user to be at least a site owner, to avoid unnecessary manipulation of the master page by contributors or other users. Refer to the documentation for the technical details.

The following screen shots depict the functionality.



Using Word Automation Services and OpenXML to Change Document Formats

There are some tasks that are difficult when using the Open XML SDK 2.0 for Microsoft Office, such as repagination, conversion to other document formats such as PDF, or updating the table of contents, fields, and other dynamic content in documents. Word Automation Services is a new feature of SharePoint 2010 that can help in these scenarios. It is a shared service that provides unattended, server-side conversion of documents into other formats, along with some other important pieces of functionality. It was designed from the outset to work on servers and can process many documents in a reliable and predictable manner.


Using Word Automation Services, you can convert from Open XML WordprocessingML to other document formats. For example, you may want to convert many documents to the PDF format and spool them to a printer or send them by e-mail to your customers. Or, you can convert from other document formats (such as HTML or Word 97-2003 binary documents) to Open XML word-processing documents.

In addition to the document conversion facilities, there are other important areas of functionality that Word Automation Services provides, such as updating field codes in documents and converting altChunk content to paragraphs with the normal style applied. These tasks can be difficult to perform using the Open XML SDK 2.0, but it is easy to use Word Automation Services to do them. In the past, you automated the Word client application to perform tasks like these. However, that approach is problematic: the Word client is an application best suited for authoring documents interactively, and it was not designed for high-volume processing on a server. When performing these tasks, Word may display a dialog box reporting an error, and if the Word client is being automated on a server, there is no user to respond to the dialog box, and the process can come to an untimely stop. The issues associated with automation of Word are documented in the Knowledge Base article Considerations for Server-side Automation of Office.

This scenario describes how you can use Word Automation Services to automate processing documents on a server.

  • An expert creates some Word template documents that follow specific conventions. She might use content controls to give structure to the template documents. This provides a good user experience and a reliable programmatic approach for determining the locations in the template document where data should be replaced in the document generation process. These template documents are typically stored in a SharePoint document library.

  • A program runs on the server to merge the template documents together with data, generating a set of Open XML WordprocessingML (DOCX) documents. This program is best written by using the Open XML SDK 2.0 for Microsoft Office, which is designed specifically for generating documents on a server. These documents are placed in a SharePoint document library.

  • After the set of documents is generated, they might be automatically printed. Or, they might be sent by e-mail to a set of users, either as WordprocessingML documents, or perhaps as PDF, XPS, or MHTML documents after conversion from WordprocessingML to the desired format.

  • As part of the conversion, you can instruct Word Automation Services to update fields, such as the table of contents.

Using the Open XML SDK 2.0 for Microsoft Office together with Word Automation Services enables you to create rich, end-to-end solutions that perform well and do not require automation of the Word client application.

One of the key advantages of Word Automation Services is that it can scale out to your needs. Unlike the Word client application, you can configure it to use multiple processors. Further, you can configure it to load balance across multiple servers if your needs require that.

Another key advantage is that Word Automation Services has perfect fidelity with the Word client application. Document layout, including pagination, is identical regardless of whether the document is processed on the server or the client.

Supported Source Document Formats

The supported source document formats are as follows.

  1. Open XML File Format documents (.docx, .docm, .dotx, .dotm)

  2. Word 97-2003 documents (.doc, .dot)

  3. Rich Text Format files (.rtf)

  4. Single File Web Pages (.mht, .mhtml)

  5. Word 2003 XML Documents (.xml)


Supported Destination Document Formats

The supported destination document formats include all of the supported source document formats, plus the following.

  1. Portable Document Format (.pdf)

  2. Open XML Paper Specification (.xps)

Other Capabilities of Word Automation Services

In addition to the ability to load and save documents in various formats, Word Automation Services includes other capabilities.

You can cause Word Automation Services to update the table of contents, the table of authorities, and index fields. This is important when generating documents. After generating a document, if the document has a table of contents, it is an especially difficult task to determine document pagination so that the table of contents is updated correctly. Word Automation Services handles this for you easily.

Open XML word-processing documents can contain various field types, which enables you to add dynamic content into a document. You can use Word Automation Services to cause all fields to be recalculated. For example, you can include a field type that inserts the current date into a document. When fields are updated, the associated content is also updated, so that the document displays the current date at the location of the field.

One of the powerful ways that you can use content controls is to bind them to XML elements in a custom XML part. See the article, Building Document Generation Systems from Templates with Word 2010 and Word 2007 for an explanation of bound content controls, and links to several resources to help you get started. You can replace the contents of bound content controls by replacing the XML in the custom XML part. You do not have to alter the main document part. The main document part contains cached values for all bound content controls, and if you replace the XML in the custom XML part, the cached values in the main document part are not updated. This is not a problem if you expect users to view these generated documents only by using the Word client. However, if you want to process the WordprocessingML markup more, you must update the cached values in the main document part. Word Automation Services can do this.

Alternate format content (as represented by the altChunk element) is a great way to import HTML content into a WordprocessingML document. The article Building Document Generation Systems from Templates with Word 2010 and Word 2007 discusses alternate format content and its uses, and provides links to help you get started. However, until you open and save a document that contains altChunk elements, the document contains HTML rather than ordinary WordprocessingML markup such as paragraphs, runs, and text elements. You can use Word Automation Services to import the HTML (or other forms of alternate content) and convert it to WordprocessingML markup that contains familiar WordprocessingML paragraphs with styles applied.

You can also convert to and from formats that were used by previous versions of Word. If you are building an enterprise-class application that is used by thousands of users, you may have some users who are using Word 2007 or Word 2003 to edit Open XML documents. You can convert Open XML documents so that they contain only the markup and features that are used by either Word 2007 or Word 2003.

Limitations of Word Automation Services

Word Automation Services does not include capabilities for printing documents. However, it is straightforward to convert WordprocessingML documents to PDF or XPS and spool them to a printer.

A question that sometimes arises is whether you can use Word Automation Services without purchasing and installing SharePoint Server 2010. Word Automation Services takes advantage of facilities of SharePoint 2010 and is a feature of it; you must purchase and install SharePoint Server 2010 to use it. Word Automation Services is included in both the Standard and Enterprise editions.

By default, Word Automation Services is a service that installs and runs with a stand-alone SharePoint Server 2010 installation. If you are using SharePoint 2010 in a server farm, you must explicitly enable Word Automation Services.

You use its programming interface to start a conversion job. For each conversion job, you specify which files, folders, or document libraries you want the conversion job to process. Based on your configuration, when you start a conversion job, it begins a specified number of conversion processes on each server. You can specify the frequency with which it starts conversion jobs and the number of conversions to start for each conversion process. In addition, you can specify the maximum percentage of memory that Word Automation Services can use.

The configuration settings enable you to configure Word Automation Services so that it does not consume too many resources on SharePoint servers that are part of your important infrastructure. The settings that you must use are dictated by how you want to use the SharePoint Server. If it is only used for document conversions, you want to configure the settings so that the conversion service can consume most of your processor time. If you are using the conversion service for low priority background conversions, you want to configure accordingly.


In addition to writing code that starts conversion processes, you can also write code to monitor the progress of conversions. This lets you inform users or post alert results when very large conversion jobs are completed.

Word Automation Services lets you configure four additional aspects of conversions.

  1. You can limit the number of file formats that it supports.

  2. You can specify the number of documents converted by a conversion process before it is restarted. This is valuable because invalid documents can cause Word Automation Services to consume too much memory. All memory is reclaimed when the process is restarted.

  3. You can specify the number of times that Word Automation Services attempts to convert a document. By default, this is set to two so that if Word Automation Services fails in its attempts to convert a document, it attempts to convert it only one more time (in that conversion job).

  4. You can specify the length of elapsed time before conversion processes are monitored. This is valuable because Word Automation Services monitors conversions to make sure that conversions have not stalled.

Unless you have installed a server farm, by default, Word Automation Services is installed and started in SharePoint Server 2010. However, as a developer, you want to alter its configuration so that you have a better development experience. By default, it starts conversion processes at 15 minute intervals. If you are testing code that uses it, you can benefit from setting the interval to one minute. In addition, there are scenarios where you may want Word Automation Services to use as much resources as possible. Those scenarios may also benefit from setting the interval to one minute.

To adjust the conversion process interval to one minute

  1. Start SharePoint 2010 Central Administration.

  2. On the home page of SharePoint 2010 Central Administration, Click Manage Service Applications.

  3. On the Service Applications administration page, service applications are sorted alphabetically. Scroll to the bottom of the page, and then click Word Automation Services. If you are installing a server farm and have installed Word Automation Services manually, whatever you entered for the name of the service is what you see on this page.

  4. In the Word Automation Services administration page, configure the conversion throughput field to the desired frequency for starting conversion jobs.

  5. As a developer, you may also want to set the number of conversion processes, and to adjust the number of conversions per worker process. If you adjust the frequency with which conversion processes start without adjusting the other two values, and you attempt to convert many documents, you make the conversion process much less efficient. The best value for these numbers should take into consideration the power of your computer that is running SharePoint Server 2010.

  6. Scroll to the bottom of the page and then click OK.

Because Word Automation Services is a service of SharePoint Server 2010, you can only use it in an application that runs directly on a SharePoint Server. You must build the application as a farm solution. You cannot use Word Automation Services from a sandboxed solution.

A convenient way to use Word Automation Services is to write a web service that you can use from client applications.

However, the easiest way to show how to write code that uses Word Automation Services is to build a console application. You must build and run the console application on the SharePoint Server, not on a client computer. The code to start and monitor conversion jobs is identical to the code that you would write for a Web Part, a workflow, or an event handler. Showing how to use Word Automation Services from a console application enables us to discuss the API without adding the complexities of a Web Part, an event handler, or a workflow.

Important: The following sample applications call Sleep(Int32) so that the examples query for status every five seconds. This is not the best approach for code that you intend to deploy on production servers; instead, write a workflow with a delay activity.

To build the application

  1. Start Microsoft Visual Studio 2010.

  2. On the File menu, point to New, and then click Project.

  3. In the New Project dialog box, in the Recent Template pane, expand Visual C#, and then click Windows.

  4. To the right side of the Recent Template pane, click Console Application.

  5. By default, Visual Studio creates a project that targets the .NET Framework 4. However, you must target the .NET Framework 3.5. In the list at the upper part of the New Project dialog box, select .NET Framework 3.5.

  6. In the Name box, type the name that you want to use for your project, such as FirstWordAutomationServicesApplication.

  7. In the Location box, type the location where you want to place the project.

    Figure 1. Creating a solution in the New Project dialog box

    Creating solution in the New Project box

  8. Click OK to create the solution.

  9. By default, Visual Studio 2010 creates projects that target x86 CPUs, but to build SharePoint Server applications, you must target Any CPU.

  10. If you are building a Microsoft Visual C# application, in Solution Explorer window, right-click the project, and then click Properties.

  11. In the project properties window, click Build.

  12. Point to the Platform Target list, and select Any CPU.

    Figure 2. Target Any CPU when building a C# console application

    Changing target to any CPU

  13. If you are building a Microsoft Visual Basic .NET Framework application, in the project properties window, click Compile.

    Figure 3. Compile options for a Visual Basic application

    Compile options for Visual Basic applications

  14. Click Advanced Compile Options.

    Figure 4. Advanced Compiler Settings dialog box

    Advanced Compiler Settings dialog box

  15. Point to the Platform Target list, and then click Any CPU.

  16. To add a reference to the Microsoft.Office.Word.Server assembly, on the Project menu, click Add Reference to open the Add Reference dialog box.

  17. Select the .NET tab, and add the component named Microsoft Office 2010 component.

    Figure 5. Adding a reference to Microsoft Office 2010 component

    Add reference to Microsoft Office 2010 component

  18. Next, add a reference to the Microsoft.SharePoint assembly.

    Figure 6. Adding a reference to Microsoft SharePoint

    Adding reference to Microsoft SharePoint

The following example provides the complete C# listing for the simplest Word Automation Services application.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.SharePoint;
using Microsoft.Office.Word.Server.Conversions;

class Program
{
    static void Main(string[] args)
    {
        string siteUrl = "http://localhost";
        // If you manually installed Word Automation Services, then replace the name
        // in the following line with the name that you assigned to the service when
        // you installed it.
        string wordAutomationServiceName = "Word Automation Services";
        using (SPSite spSite = new SPSite(siteUrl))
        {
            // Create a conversion job that runs as the current user and
            // converts Test.docx to PDF, updating fields along the way.
            ConversionJob job = new ConversionJob(wordAutomationServiceName);
            job.UserToken = spSite.UserToken;
            job.Settings.UpdateFields = true;
            job.Settings.OutputFormat = SaveFormat.PDF;
            job.AddFile(siteUrl + "/Shared%20Documents/Test.docx",
                siteUrl + "/Shared%20Documents/Test.pdf");
            // Start the conversion job.
            job.Start();
        }
    }
}

Replace the URL assigned to siteUrl with the URL to the SharePoint site.

To build and run the example

  1. Add a Word document named Test.docx to the Shared Documents folder in the SharePoint site.

  2. Build and run the example.

  3. After waiting one minute for the conversion process to run, navigate to the Shared Documents folder in the SharePoint site, and refresh the page. The document library now contains a new PDF document, Test.pdf.

In many scenarios, you want to monitor the status of conversions, to inform the user when the conversion process is complete, or to process the converted documents in additional ways. You can use the ConversionJobStatus class to query Word Automation Services about the status of a conversion job. You pass the name of the WordServiceApplicationProxy class as a string (by default, “Word Automation Services”), and the conversion job identifier, which you can get from the ConversionJob object. You can also pass a GUID that specifies a tenant partition. However, if the SharePoint Server farm is not configured for multiple tenants, you can pass null (Nothing in Visual Basic) as the argument for this parameter.

After you instantiate a ConversionJobStatus object, you can access several properties that indicate the status of the conversion job. The following are the three most interesting properties.

ConversionJobStatus Properties

Count: Number of documents currently in the conversion job.

Succeeded: Number of documents successfully converted.

Failed: Number of documents that failed conversion.

Whereas the first example specified a single document to convert, the following example converts all documents in a specified document library. You have the option of creating all converted documents in a different document library than the source library, but for simplicity, the following example specifies the same document library for both the input and output document libraries. In addition, the following example specifies that the conversion job should overwrite the output document if it already exists.


Console.WriteLine("Starting conversion job");
ConversionJob job = new ConversionJob(wordAutomationServiceName);
job.UserToken = spSite.UserToken;
job.Settings.UpdateFields = true;
job.Settings.OutputFormat = SaveFormat.PDF;
job.Settings.OutputSaveBehavior = SaveBehavior.AlwaysOverwrite;
SPList listToConvert = spSite.RootWeb.Lists["Shared Documents"];
job.AddLibrary(listToConvert, listToConvert);
job.Start();
Console.WriteLine("Conversion job started");
ConversionJobStatus status = new ConversionJobStatus(wordAutomationServiceName,
    job.JobId, null);
Console.WriteLine("Number of documents in conversion job: {0}", status.Count);
while (true)
{
    // Query for status every five seconds (see the note above).
    Thread.Sleep(5000);
    status = new ConversionJobStatus(wordAutomationServiceName, job.JobId, null);
    if (status.Count == status.Succeeded + status.Failed)
    {
        Console.WriteLine("Completed, Successful: {0}, Failed: {1}",
            status.Succeeded, status.Failed);
        break;
    }
    Console.WriteLine("In progress, Successful: {0}, Failed: {1}",
        status.Succeeded, status.Failed);
}

To run this example, add some WordprocessingML documents to the Shared Documents library. When you run the example, you see output similar to the following.

Starting conversion job
Conversion job started
Number of documents in conversion job: 4
In progress, Successful: 0, Failed: 0
In progress, Successful: 0, Failed: 0
Completed, Successful: 4, Failed: 0

You may want to determine which documents failed conversion, perhaps to inform the user, or to take remedial action such as removing the invalid document from the input document library. You can call the GetItems() method, which returns a collection of ConversionItemInfo objects. When you call GetItems(), you pass a parameter that specifies whether you want to retrieve the collection of failed conversions or successful conversions. The following example shows how to do this.


Console.WriteLine("Starting conversion job");
ConversionJob job = new ConversionJob(wordAutomationServiceName);
job.UserToken = spSite.UserToken;
job.Settings.UpdateFields = true;
job.Settings.OutputFormat = SaveFormat.PDF;
job.Settings.OutputSaveBehavior = SaveBehavior.AlwaysOverwrite;
SPList listToConvert = spSite.RootWeb.Lists["Shared Documents"];
job.AddLibrary(listToConvert, listToConvert);
job.Start();
Console.WriteLine("Conversion job started");
ConversionJobStatus status = new ConversionJobStatus(wordAutomationServiceName,
    job.JobId, null);
Console.WriteLine("Number of documents in conversion job: {0}", status.Count);
while (true)
{
    Thread.Sleep(5000);
    status = new ConversionJobStatus(wordAutomationServiceName, job.JobId, null);
    if (status.Count == status.Succeeded + status.Failed)
    {
        Console.WriteLine("Completed, Successful: {0}, Failed: {1}",
            status.Succeeded, status.Failed);
        // Retrieve and report the items that failed conversion.
        ReadOnlyCollection<ConversionItemInfo> failedItems =
            status.GetItems(ItemTypes.Failed);
        foreach (var failedItem in failedItems)
        {
            Console.WriteLine("Failed item: Name:{0}", failedItem.InputFile);
        }
        break;
    }
    Console.WriteLine("In progress, Successful: {0}, Failed: {1}",
        status.Succeeded, status.Failed);
}

To run this example, create an invalid document and upload it to the document library. An easy way to create an invalid document is to rename a WordprocessingML document, appending .zip to the file name. Then delete the main document part (document.xml), which is in the word folder of the package. Rename the document, removing the .zip extension so that it again has the normal .docx extension.
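If you would rather script that manipulation than do it by hand, the same effect can be achieved with a few lines of Python using the standard zipfile module. This is an illustrative sketch; the `remove_main_document_part` helper and the minimal stand-in package are invented for this example (a real Test.docx would work the same way, since .docx files are ZIP packages).

```python
import os
import tempfile
import zipfile

def remove_main_document_part(src_docx, dst_docx):
    """Copy a .docx package, omitting word/document.xml, to produce an
    intentionally invalid document for testing failed conversions."""
    with zipfile.ZipFile(src_docx) as src, \
         zipfile.ZipFile(dst_docx, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            if item.filename != "word/document.xml":
                dst.writestr(item, src.read(item.filename))

# Demonstrate on a minimal stand-in package.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "Test.docx")
dst = os.path.join(tmp, "Invalid.docx")
with zipfile.ZipFile(src, "w") as z:
    z.writestr("word/document.xml", "<w:document/>")
    z.writestr("[Content_Types].xml", "<Types/>")
remove_main_document_part(src, dst)
with zipfile.ZipFile(dst) as z:
    remaining = z.namelist()
print(remaining)  # word/document.xml is gone
```

Uploading the resulting file to the document library gives you a document that is guaranteed to fail conversion.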

When you run this example, it produces output similar to the following.

Starting conversion job
Conversion job started
Number of documents in conversion job: 5
In progress, Successful: 0, Failed: 0
In progress, Successful: 0, Failed: 0
In progress, Successful: 4, Failed: 0
In progress, Successful: 4, Failed: 0
In progress, Successful: 4, Failed: 0
Completed, Successful: 4, Failed: 1
Failed item: Name:

Another approach to monitoring a conversion process is to use event handlers on a SharePoint list to determine when a converted document is added to the output document library.

In some situations, you may want to delete the source documents after conversion. The following example shows how to do this. 

Console.WriteLine("Starting conversion job");
ConversionJob job = new ConversionJob(wordAutomationServiceName);
job.UserToken = spSite.UserToken;
job.Settings.UpdateFields = true;
job.Settings.OutputFormat = SaveFormat.PDF;
job.Settings.OutputSaveBehavior = SaveBehavior.AlwaysOverwrite;
SPFolder folderToConvert = spSite.RootWeb.GetFolder("Shared Documents");
job.AddFolder(folderToConvert, folderToConvert, false);
job.Start();
Console.WriteLine("Conversion job started");
ConversionJobStatus status = new ConversionJobStatus(wordAutomationServiceName,
    job.JobId, null);
Console.WriteLine("Number of documents in conversion job: {0}", status.Count);
while (true)
{
    Thread.Sleep(5000);
    status = new ConversionJobStatus(wordAutomationServiceName, job.JobId, null);
    if (status.Count == status.Succeeded + status.Failed)
    {
        Console.WriteLine("Completed, Successful: {0}, Failed: {1}",
            status.Succeeded, status.Failed);
        Console.WriteLine("Deleting only items that successfully converted");
        // Retrieve the items that converted successfully and delete their
        // source documents.
        ReadOnlyCollection<ConversionItemInfo> convertedItems =
            status.GetItems(ItemTypes.Succeeded);
        foreach (var convertedItem in convertedItems)
        {
            Console.WriteLine("Deleting item: Name:{0}", convertedItem.InputFile);
            spSite.RootWeb.GetFile(convertedItem.InputFile).Delete();
        }
        break;
    }
    Console.WriteLine("In progress, Successful: {0}, Failed: {1}",
        status.Succeeded, status.Failed);
}
The equivalent example in Visual Basic:

Console.WriteLine("Starting conversion job")
Dim job As ConversionJob = New ConversionJob(wordAutomationServiceName)
job.UserToken = spSite.UserToken
job.Settings.UpdateFields = True
job.Settings.OutputFormat = SaveFormat.PDF
job.Settings.OutputSaveBehavior = SaveBehavior.AlwaysOverwrite
Dim folderToConvert As SPFolder = spSite.RootWeb.GetFolder("Shared Documents")
job.AddFolder(folderToConvert, folderToConvert, False)
job.Start()
Console.WriteLine("Conversion job started")
Dim status As ConversionJobStatus = _
    New ConversionJobStatus(wordAutomationServiceName, job.JobId, Nothing)
Console.WriteLine("Number of documents in conversion job: {0}", status.Count)
While True
    Thread.Sleep(5000)
    status = New ConversionJobStatus(wordAutomationServiceName, job.JobId, Nothing)
    If status.Count = status.Succeeded + status.Failed Then
        Console.WriteLine("Completed, Successful: {0}, Failed: {1}", _
                          status.Succeeded, status.Failed)
        Console.WriteLine("Deleting only items that successfully converted")
        Dim convertedItems As ReadOnlyCollection(Of ConversionItemInfo) = _
            status.GetItems(ItemTypes.Succeeded)
        For Each convertedItem In convertedItems
            Console.WriteLine("Deleting item: Name:{0}", convertedItem.InputFile)
            ' Delete the source document from the input library.
            spSite.RootWeb.GetFile(convertedItem.InputFile).Delete()
        Next
        Exit While
    End If
    Console.WriteLine("In progress, Successful: {0}, Failed: {1}", _
                      status.Succeeded, status.Failed)
End While

The power of Word Automation Services becomes clear when you use it in combination with the Open XML SDK 2.0 for Microsoft Office. You can programmatically modify a document in a document library by using the Open XML SDK, and then use Word Automation Services to perform one of the tasks that is difficult to do with the Open XML SDK alone. A common need is to programmatically generate a document, and then generate or update its table of contents. Consider the following document, which contains a table of contents.

Figure 7. Document with a table of contents

Document with table of contents

Let’s assume you want to modify this document, adding content that should be included in the table of contents. This next example takes the following steps.

  1. Opens the site and retrieves the Test.docx document by using a Collaborative Application Markup Language (CAML) query.

  2. Opens the document by using the Open XML SDK 2.0, and adds a new paragraph styled as Heading 1 at the beginning of the document.

  3. Starts a conversion job, converting Test.docx to TestWithNewToc.docx. It waits for the conversion to complete, and reports whether it was converted successfully.

Console.WriteLine("Querying for Test.docx");
SPList list = spSite.RootWeb.Lists["Shared Documents"];
SPQuery query = new SPQuery();
query.ViewFields = @"<FieldRef Name='FileLeafRef' />";
query.Query =
    @"<Where>
        <Eq>
          <FieldRef Name='FileLeafRef' />
          <Value Type='Text'>Test.docx</Value>
        </Eq>
      </Where>";
SPListItemCollection collection = list.GetItems(query);
if (collection.Count != 1)
{
    Console.WriteLine("Test.docx not found");
    Environment.Exit(0);
}
SPFile file = collection[0].File;
byte[] byteArray = file.OpenBinary();
using (MemoryStream memStr = new MemoryStream())
{
    memStr.Write(byteArray, 0, byteArray.Length);
    using (WordprocessingDocument wordDoc =
        WordprocessingDocument.Open(memStr, true))
    {
        Document document = wordDoc.MainDocumentPart.Document;
        Paragraph firstParagraph = document.Body.Elements<Paragraph>()
            .FirstOrDefault();
        if (firstParagraph != null)
        {
            Paragraph newParagraph = new Paragraph(
                new ParagraphProperties(
                    new ParagraphStyleId() { Val = "Heading1" }),
                new Run(
                    new Text("About the Author")));
            Paragraph aboutAuthorParagraph = new Paragraph(
                new Run(
                    new Text("Eric White")));
            firstParagraph.Parent.InsertBefore(newParagraph, firstParagraph);
            firstParagraph.Parent.InsertBefore(aboutAuthorParagraph,
                firstParagraph);
        }
    }
    string linkFileName = file.Item["LinkFilename"] as string;
    file.ParentFolder.Files.Add(linkFileName, memStr, true);
}
Console.WriteLine("Starting conversion job");
ConversionJob job = new ConversionJob(wordAutomationServiceName);
job.UserToken = spSite.UserToken;
job.Settings.UpdateFields = true;
job.Settings.OutputFormat = SaveFormat.Document;
job.AddFile(siteUrl + "/Shared%20Documents/Test.docx",
    siteUrl + "/Shared%20Documents/TestWithNewToc.docx");
job.Start();
Console.WriteLine("After starting conversion job");
while (true)
{
    Thread.Sleep(5000);
    ConversionJobStatus status = new ConversionJobStatus(
        wordAutomationServiceName, job.JobId, null);
    if (status.Count == status.Succeeded + status.Failed)
    {
        Console.WriteLine("Completed, Successful: {0}, Failed: {1}",
            status.Succeeded, status.Failed);
        break;
    }
}

After running this example with a document similar to the one used earlier in this section, a new document is produced, as shown in Figure 8.

Figure 8. Document with updated table of contents

Document with updated table of contents

The Open XML SDK 2.0 is a powerful tool for building server-side document generation and document processing systems. However, some aspects of document manipulation are difficult with the SDK alone, such as document conversion, updating fields, and regenerating tables of contents. Word Automation Services fills this gap with a high-performance solution that can scale out to your requirements. Using the Open XML SDK 2.0 in combination with Word Automation Services enables many scenarios that are difficult when using only the Open XML SDK 2.0.

Common Techniques in Responsive Web Design

In this article, I’ll dive into some of the most common practices for building responsive site layouts and experiences. I’ll describe the emerging and available techniques for site layouts that flexibly resize based on screen real estate (referred to as “fluid grids”) so as to ensure that users get complete experiences across whatever screen size they are using. Additionally, I’ll show how to present rich media, especially images, and how developers can ensure that visitors on small-screen devices do not incur additional bandwidth costs for high-quality media.


As you play with some of the techniques I describe, here are a few ways to test what your site looks like at different device resolutions:

  1. Benjamin Keen has a responsive Web design bookmarklet that you can add to your Favorites bar (or Bookmarks bar) on your browser of choice. You can click on this bookmarklet to test your site layout in different resolutions.
  2. If you’re using Windows 8, you can always test your page layout on Internet Explorer 10 by employing the Windows 8 snap modes. In Windows 8, you can use Internet Explorer 10 on your full screen (full mode), or you can multitask by docking the browser to snap mode, where it emulates the layout characteristics of a smart phone browser. Additionally, you can dock the browser into fill mode, where it occupies 1024 x 768 pixels (px) on a default Windows 8 screen size of 1366 x 768 px. This is a great proxy for how your site will look on iPad screens as well as traditional 4:3 screens.
  3. Lastly, you’ll probably do a lot of what you see in Figure 1 (image courtesy

Basic Testing for Responsive Web Design
Figure 1. Basic Testing for Responsive Web Design

Media Queries

Traditionally, developers have relied on sniffing out the browser’s user-agent string to identify whether a user is visiting a site from a PC or a mobile device. Often, after doing so, they redirect users to different subsites that serve up virtually the same content but with different layout and information design. For example, in the past, users who visited could see the traditional PC experience or get hardware-specific mobile experiences by being redirected to

But redirections require two separate engineering efforts. Also, this approach was optimized for two screen layouts (mobile with 320-px width and desktop with 1024-px width). It did not intelligently provide a great experience for users visiting from intermediate device sizes (such as tablets) as well as users with significantly larger screens.

CSS3 looks to help Web developers separate content creation (their page markup and functionality in HTML and JavaScript) from the presentation of that content and handle layout for different dimensions entirely within CSS via the introduction of media queries.

A media query is a way for a developer to write a CSS3 stylesheet and declare styles for all UI elements that are conditional to the screen size, media type and other physical aspects of the screen. With media queries, you can declare different styles for the same markup by asking the browser about relevant factors, such as device width, device pixel density and device orientation.

But even with CSS3, it’s very easy to fall into the trap of building three different fixed-width layouts for the same Web page markup to target common screen dimensions today (desktop, tablet and phone). However, this is not truly responsive Web design and doesn’t provide an optimal experience for all devices. Media queries are one part of the solution to providing truly responsive Web layout; the other is content that scales proportionally to fill the available screen. I’ll address this later.

What Does “Pixel” Mean Anymore?

The pixel has been used for Web design and layout for some time now and has traditionally referred to a single point on the user’s screen capable of displaying a red-blue-green dot. Pixel-based Web design has been the de facto way of doing Web layout, for declaring the dimensions of individual elements of a Web page as well as for typography. This is primarily because most sites employ images in their headers, navigation and other page UI elements and pick a site layout with a fixed pixel width in which their images look great.

However, the recent emergence of high-pixel-density screens and retina displays has added another layer of meaning to this term. In contemporary Web design, a pixel (that is, the hardware pixel we just discussed) is no longer the single smallest point that can be rendered by a screen.

Visit a Web site on your iPhone 4, and its 640 x 960 px hardware screen will tell your browser that it has 320 x 480 px. This is probably for the best, since you don’t want a 640-px wide column of text fitted into a screen merely 2 inches wide. But what the iPhone screen and other high-density devices highlight is that we’re not developing for the hardware pixel anymore.

The W3C defines a reference pixel as the visual angle of 1 px on a device with 96 ppi density at an arm’s length distance from the reader. Complicated definitions aside, all this means is that when you design for modern-day, high-quality screens, your media queries will respond to reference pixels, also referred to as CSS pixels. The number of CSS pixels is often going to be less than the number of hardware pixels, which is a good thing! (Beware: hardware pixels are what most device-manufacturers use to advertise their high-quality phones, slates and retina displays—they’ll lead your CSS astray.)

This ratio of hardware pixels to CSS pixels is called device pixel ratio. A higher device pixel ratio just means that each CSS pixel is being rendered by more hardware pixels, which makes your text and layout look sharper.
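The relationship can be sketched as a small calculation (the helper function below is illustrative, not from the article): dividing a screen's hardware resolution by its device pixel ratio yields the CSS-pixel dimensions that your media queries and layout actually respond to.

```javascript
// Convert a screen's hardware resolution to the CSS-pixel (reference-pixel)
// dimensions that media queries and layout code see.
// devicePixelRatio is the value a device reports, e.g. window.devicePixelRatio.
function toCssPixels(hardwareWidth, hardwareHeight, devicePixelRatio) {
  return {
    width: Math.round(hardwareWidth / devicePixelRatio),
    height: Math.round(hardwareHeight / devicePixelRatio)
  };
}

// The iPhone 4 example from the text: a 640 x 960 px hardware screen with a
// device pixel ratio of 2 reports 320 x 480 CSS pixels to the browser.
const iphone4 = toCssPixels(640, 960, 2);
```

This is why advertised hardware resolutions overstate the space your layout really has on high-density screens.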

Wikipedia has a great list of recent displays by pixel density, which includes device pixel ratio. You can always use CSS3 media queries to identify the device pixel ratio if you must, as so:

/* Note that the device-pixel-ratio property below might need to be
   vendor-prefixed for some browsers */
@media screen and (device-pixel-ratio: 1.5)
{
  /* Adjust your layout for 1.5 hardware pixels to each reference pixel */
}
@media screen and (device-pixel-ratio: 2)
{
  /* Adjust your layout, font sizes and so on for 2 hardware pixels to each reference pixel */
}

There are also some open source libraries that let developers calculate device pixel ratio using JavaScript across browsers, such as GetDevicePixelRatio by Tyson Matanich. Note that this result is available only in JavaScript, but it can be used to optimize image downloads so that high-quality images (with larger file sizes) are not downloaded on nonretina displays.
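As a minimal sketch of that optimization (the file-naming convention, the `heroImage` element id and the 1.5 threshold are illustrative assumptions, not part of any particular library), the decision logic can be kept separate from the browser wiring:

```javascript
// Pick an image variant based on the device pixel ratio, so that
// nonretina screens never pay for the larger download.
function pickImageSrc(baseName, devicePixelRatio) {
  // Serve the high-resolution asset only when the screen can actually
  // show the extra detail (the 1.5 cutoff is an illustrative choice).
  return devicePixelRatio >= 1.5
    ? baseName + "@2x.jpg"
    : baseName + ".jpg";
}

// Browser wiring: only runs where window/document exist.
if (typeof window !== "undefined" && typeof document !== "undefined") {
  const img = document.getElementById("heroImage"); // hypothetical element id
  if (img) {
    img.src = pickImageSrc("hero", window.devicePixelRatio || 1);
  }
}
```

Keeping the pure decision function separate makes the threshold easy to tune and to test outside a browser.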

However, it is not recommended that you use device pixel ratio to define your page and content layout. While the reference pixel vs. hardware pixel disparity can be confusing, it’s easy to understand why this is crucial in offering users a better experience. An iPhone 3GS and iPhone 4 have approximately the same physical screen size and have identical use patterns, so it stands to reason that a block of text should have approximately the same physical size.

Similarly, just because you have an HDTV with a 1920 x 1080p screen, this doesn’t mean sites should render content at this native resolution. Users sit several feet away from their TV and also use imprecise input mechanisms (such as joysticks) to interact with it, which is why it’s preferred that TV browsers pack multiple hardware pixels into a reference pixel. As long as you’ve designed your site with a 960-px wide layout for desktop browsers, the site will look comparable and be usable, regardless of whether your TV is 1080p or an older model with 720p.

As a general rule of thumb, your text content will look better on these devices. However, your image content may look pixelated and blurry. Thus, from a practical perspective, device pixel ratio matters most when you’re trying to serve high-quality photography/image data to your users on high-pixel-density screens. Moreover, you want to make sure that your brand logo looks sharp on your users’ fancy new phones. Later in this article, I’ll talk about techniques for implementing responsive images and point to some existing JavaScript libraries that can address this.

As we continue, I’ll use the term pixel to mean reference pixel and explicitly call out hardware pixel as needed.

Scaling Your Site Layout Responsively

Grid-based layout is a key component of Web site design. Most sites you visit can easily be visualized as a series of rectangles for page components such as headers, site navigation, content, sidebars, footers and so on.

Ideally, when we design responsive sites, we want to make the grid layout agnostic of the user’s screen size. This means we want our layout and content to scale to as much screen real estate as is available (within reason), instead of providing two or three fixed-width layouts.

Mobile-First Design

As I pointed out in the first article of this series, more than 12 percent of the world’s Web traffic comes from mobile devices. This fraction is significantly higher in nations with higher smartphone penetration and is expected to increase notably in the next few years as adoption picks up in Asia, Latin America and Africa.

Additionally, taking a mobile-first approach to site design is a good exercise in information design. Basically, it helps you prioritize the content and functionality that you want to make available on the mobile version of a site and then progressively enhance the site layout for larger devices. This way you ensure that your users have a valuable experience on their mobile devices—not just an afterthought to their desktop experience—and you can take advantage of additional real estate when available to provide a more visually engaging experience as well as navigation to additional “tier-two” content.

Case Study: The Redesigned

When you visit on a mobile phone or narrow your PC browser width (with screen width under 540 px), you see a single hero image as part of a touch-friendly, visually rich slide show advertising one product at a time. (See Figure 2.) The top products are highlighted in the Discover section. Additional navigation is below the fold or in an accordion-style menu that is collapsed by default and is exposed when the user taps the list icon. Similarly, the search box is hidden by default to conserve screen real estate—the user needs to tap the search icon. This way, the mobile layout shows top products and links one below the other and only requires vertical panning. Content and product links below the fold are prioritized from top to bottom.

Figure 2. As Designed for Mobile Phones

Once the width of the viewport exceeds 540 px (at which point we can assume that the user is no longer viewing the site on a phone but on a tablet or a low-resolution PC), you notice the first layout change (Figure 3). The search box is now visible by default, as is the top-level navigation, which was previously hidden under the list icon. The products in the Discover section are now presented in a single line, since they fit in the available width. Most importantly, notice that in this transition the hero image always occupies the available width of the screen.

Figure 3. After Exceeding 540 px

The next layout change, shown in Figure 4, occurs at a width of 640 px or higher. As always, the hero image takes up all available screen width. The individual items within the For Work section are laid out side-by-side. The additional real estate also allows the caption for the hero image to be presented in line with the image and with motion, which is very eye-catching.

Layout Change at 640 px or Higher
Figure 4. Layout Change at 640 px or Higher

The last layout change occurs at screen widths of 900 px and higher (Figure 5). The Discover menu floats to the left to take advantage of available horizontal space, which reduces the need for vertical scrolling.

Layout at Screen Widths of 900 px and Higher
Figure 5. Layout at Screen Widths of 900 px and Higher

Finally, and most importantly, the page layout—especially the hero image—continues to scale and fill the available horizontal real estate (until 1600 px) so as to maximize the impact of the visual eye-candy (Figure 6). In fact, this is the case for all screen widths between 200 px and 1600 px—there is never any wasted whitespace on the sides of the hero image. (Similarly, the relative layouts of the Discover and For Work sections don’t change, but the images scale proportionally as well.)

Maximizing Impact at Higher Resolutions
Figure 6. Maximizing Impact at Higher Resolutions

Techniques for Responsive Layout

Great, so how do we implement this kind of experience? Generally, the adaptive layout for Web sites boils down to two techniques:

  • Identify break points where your layout needs to change.
  • In between break points, scale content proportionally.

Let’s examine these techniques independently.

Scaling Content Proportionally Between Break Points

As pointed out in the case study earlier, the relative layout of the header, hero image, navigation area and content area on the home page does not change for a screen width of 900 px or higher. This is valuable because when users visit the site on a 1280 x 720 monitor, they are not seeing a 900-px wide Web site with more than 25 percent of the screen going to whitespace in the right and left margins.

Similarly, two users might visit the site, one with an iPhone 4 with 480 x 320 px resolution (in CSS pixels) and another using a Samsung Galaxy S3 with 640 x 360 px resolution. For any screen width less than 512 px, the site scales down the layout proportionally so that for both users the entire mobile browser is devoted to Web content and not whitespace, regardless of whether they are viewing the site in portrait or landscape mode.

There are a couple of ways to implement this, including the CSS3 proposal of fluid grids. However, this is not supported across major browsers yet. You can see it working on Internet Explorer 10 (with vendor prefixes), and MSDN has examples of the CSS3 grid implementation.

In the meantime, we’re going to use the tried-and-tested methods of percentage-based widths to achieve a fluid grid layout. Consider the simplistic example illustrated in Figure 7, which has the following design requirements:

  1. A #header that spans across the width of the screen.
  2. A #mainContent div that spans 60 percent of the width of the screen.
  3. A #sideContent div that spans 40 percent of the screen width.
  4. 20-px fixed spacing between #mainContent and #sideContent.
  5. A #mainImage img element that occupies all the available width inside #mainContent, excluding a fixed 10-px gutter around it.

Set Up for a Fluid Grid
Figure 7. Set Up for a Fluid Grid

The markup for this page would look like the following:

<!doctype html>
<html>
<head>
  <title>Proportional Grid page</title>
  <style>
    body {
      /* Note the below properties for body are illustrative only.
         Not needed for responsiveness */
      font-size: 40px;
      text-align: center;
      line-height: 100px;
      vertical-align: middle;
    }
    #header
    {
      /* Note the below properties for #header are illustrative only.
         Not needed for responsiveness */
      height: 150px;
      border: solid 1px blue;
    }
    #mainContent {
      width: 60%;
      float: right;
      /* This way the mainContent div is always 60% of the width of its parent container,
         which in this case is the <body> tag that defaults to 100% page width anyway */
      background: #999999;
    }
    #imageContainer {
      margin: 10px;
      width: auto;
      /* This forces there to always be a fixed margin of 10px around the image */
    }
    #mainImage {
      width: 100%;
      /* The image grows proportionally with #mainContent, but still maintains 10px of gutter */
    }
    #sideContentWrapper {
      width: 40%;
      float: left;
    }
    #sideContent {
      margin-right: 20px;
      /* sideContent always keeps 20px of right margin, relative to its parent container, namely
         #sideContentWrapper. Otherwise it grows proportionally. */
      background: #cccccc;
      min-height: 200px;
    }
  </style>
</head>
<body>
  <div id="header">Header</div>
  <div id="mainContent">
    <div id="imageContainer">
      <img id="mainImage" src="microsoft_pc_1.png" />
    </div>
    Main Content
  </div>
  <div id="sideContentWrapper">
    <div id="sideContent">
      Side Content
    </div>
  </div>
</body>
</html>

A similar technique is employed by Wikipedia for its pages. You’ll notice that the content of an article seems to always fit the available screen width. Most interestingly, the sidebars (the left navigation bar as well as the right column with the HTML5 emblem) have a fixed pixel width and seem to “stick” to their respective sides of the screen. The central area with the textual content grows and shrinks in response to the screen size. Figure 8 and Figure 9 show examples. Notice the sidebars remain at a fixed width, and the available width for the remaining text content in the center gets proportionally scaled.

Wikipedia on a 1920-px wide monitor
Figure 8. Wikipedia on a 1920-px wide monitor

Wikipedia on a 800-px wide monitor
Figure 9. Wikipedia on a 800-px wide monitor

Such an effect for a site with a fixed navigation menu on the left can easily be achieved with the following code:

<!DOCTYPE html>
<html>
  <head><title>Fixed-width left navigation</title>
  <style type="text/css">
  body
  {
    /* Note the below properties for body are illustrative only.
       Not needed for responsiveness */
    font-size: 40px;
    text-align: center;
    line-height: 198px;
    vertical-align: middle;
  }
  #mainContent
  {
    margin-left: 200px;
    min-height: 200px;
    background: #cccccc;
  }
  #leftNavigation
  {
    width: 180px;
    margin: 0 5px;
    float: left;
    border: solid 1px red;
    min-height: 198px;
  }
  </style>
</head>
<body>
<div id="leftNavigation">Navigation</div>
<div id="mainContent">SomeContent</div>
</body>
</html>

Changing the Page Layout Based on Breakpoints

Proportional scaling is only part of the solution—because we don’t want to scale down all content equally for phones and other small-screen devices. This is where we can use CSS3 media queries to progressively enhance our site experience and add additional columns as screen size grows larger. Similarly, for small screen widths, we might use media queries to hide entire blocks of low-priority content. is a great resource to browse to see what kinds of layout changes sites undergo at their breakpoints. Consider the example of Simon Collision shown in Figure 10.

Simon Collision at Different Screen Sizes
Figure 10. Simon Collision at Different Screen Sizes

We can achieve a similar experience using CSS3 media queries. Let’s examine the simple example illustrated in Figure 11, where I have four divs: #red, #green, #yellow and #blue.

Example for CSS Media Queries
Figure 11. Example for CSS Media Queries

Here’s the sample code:

<!doctype html>
<html>
<head>
<title>Break points with media queries</title>
  <style type="text/css">
    /* Default styling info */
    /* Four columns stacked one below the other in a phone layout */
    /* Remember to plan and style your sites mobile-first */
    #mainContent
    {
      margin: 40px;
    }
    #red, #yellow, #green, #blue
    {
      height: 200px;
    }
    #red
    {
      background: red;
    }
    #green
    {
      background: green;
    }
    #yellow
    {
      background: yellow;
    }
    #blue
    {
      background: blue;
    }
    @media screen and (max-width: 800px) and (min-width: 540px)
    {
      /* This is the breakpoint where we transition from the first layout, of four stacked
         columns, to the square layout with a 2x2 grid */
      #red, #blue, #green, #yellow {
        width: 50%;
        display: inline-block;
      }
    }
    @media screen and (min-width: 800px)
    {
      /* Custom styling for larger screens;
         all four columns are displayed side by side */
      #red, #yellow, #green, #blue {
        width: 25%;
        display: inline-block;
        white-space: nowrap;
      }
    }
  </style>
</head>
<body>
  <div id="mainContent">
    <div id="red"></div><div id="green"></div><div id="yellow"></div><div id="blue"></div>
  </div>
</body>
</html>

Often though, you don’t need to write such stylesheets from scratch. After all, what’s Web development without taking advantage of the abundance of open-source frameworks out there and available, right? Existing grid-layout frameworks, such as Gumby Framework (which is built on top of Nathan Smith’s tried-and-true 960gs) and the Skeleton Framework, already provide out-of-box support for reordering the number of grid columns based on available screen width. Another great starting point, especially for a Wikipedia-esque layout, is the simply named CSS Grid. This provides users with the standard fixed-width left navigation bar, which disappears when the screen resolution shifts to that of tablets and smartphones, giving you a single-column layout.

More Media Queries

Depending on the needs of your site design, you might require other pieces of data about the device/viewport before making your CSS decisions. Media queries let you poll the browser for other attributes as well, such as width, height, device-width, device-height, orientation, aspect-ratio and resolution. Others are defined in the W3C Media Queries specification.

Earlier, we broke down the two components of responsive layout to examine how they’re implemented. It’s crucial to remember that truly responsive layout is device agnostic—that is, not optimized for specific device widths—and is therefore a combination of the two techniques.

Images and Photos

Images are used on the Web for photo content as well as for styling (for background textures, custom borders and shadows and icons). Images make the Web beautiful, and we certainly want our sites to look rich and inviting to all users. However, the biggest concerns around images relate arguably to the most important part of the user experience—namely, performance and page load time.

Bandwidth Impact of Images

Our Web sites get served up in text—HTML, CSS and JavaScript. Often, these files don’t take more than 50 kilobytes or so to download. Images and other media are usually the most bandwidth-hungry parts of our pages. All the images on the homepage of a news site can add up to a couple of megabytes of content, which the browser must download as it renders the page. Additionally, if all the image content comes from separate files, each individual image file request causes additional network overhead. This is not a great experience for someone accessing your site on a 3G network, especially if you’re looking to serve up a gorgeous 8-megapixel panoramic landscape background. Besides, your user’s 320 x 480 px phone will not do justice to this high-quality image content. So, how do you ensure that users get a quick, responsive experience on their phones, which can then scale up to a richer media experience on larger devices?

Consider the following techniques, which you can combine to save users image downloads on the order of several hundred kilobytes, if not more, and provide a better performing experience.

Can You Replace Your Images with CSS?

CSS3 can help Web developers avoid using images altogether for a lot of common scenarios. In the past, developers have used images to achieve simple effects like text with custom fonts, drop-shadows, rounded corners, gradient backgrounds and so on.

Most modern browsers (Internet Explorer 10, Google Chrome, Mozilla Firefox and Safari) support the following CSS3 features, which developers can use to reduce the number of image downloads a user needs while accessing a site. Also, for older browsers, a number of these techniques degrade naturally (for example, the rounded border just gives way to a square border on Internet Explorer 8 and earlier), and this way your sites are still functional and usable on older browsers.

  • Custom font support using @font-face. With CSS3, you can upload custom fonts to your site (as long as you own the license to do so) and just point to them in your stylesheet. You don’t need to create images to capture your page titles and headers or embed custom fonts for impactful titles and headers
  • Background-gradients. Go to a lot of top sites, and you’ll notice that the background of the site is usually a gradient color, which helps the page look less “flat.” This can easily be achieved with CSS3, as seen here.
  • Rounded corners. CSS3 allows you to declaratively specify a border-radius for each of the four corners of an HTML element and avoid having to rely on those pesky 20 x 20 px images of circles to create a rounded box on your site design.
  • 2-D transforms. CSS3 allows you to declare 2-D transforms such as translate(), rotate(), skew() and others to change the appearance of your markup. IETestDrive has a great working example here. Common transforms such as rotation might cut back on the number of image downloads.
  • Box-shadow and text-shadow. Modern browsers support box-shadow and text-shadow, which allow site developers to make their content look more three-dimensional and add prominence to important pieces of content (such as header text, images, floating menus and the like).

Some of these properties might require a browser-specific implementation (using a vendor prefix) in addition to the standard implementation. HTML5Please is a convenient resource for identifying whether you need to use additional vendor prefixing for your CSS3.
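As a sketch of how these features combine, the following stylesheet fragment replaces what would otherwise be several image downloads with pure CSS. The class name .promo-box, the colors and the shadow values are all illustrative, not taken from any particular site; vendor-prefixed gradient forms are included for older WebKit and Gecko builds:

```css
/* Hypothetical .promo-box rule: rounded corners, gradient background and
   drop shadow, all without a single image download. */
.promo-box {
  border-radius: 8px;
  background: #4a90d9; /* solid-color fallback for browsers without gradient support */
  background: -webkit-linear-gradient(top, #6db3f2, #1e69de);
  background: -moz-linear-gradient(top, #6db3f2, #1e69de);
  background: linear-gradient(to bottom, #6db3f2, #1e69de);
  box-shadow: 0 2px 6px rgba(0, 0, 0, 0.4);
  text-shadow: 0 1px 1px rgba(0, 0, 0, 0.3);
}
```

On Internet Explorer 8 and earlier, this rule degrades to a square, solid-blue box, which matches the graceful-degradation behavior described above.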

Now, to be fair, users visiting your site on older browsers will see a functional but less polished version of your site content. The trade-off is that the ever-growing number of people who visit your sites via cutting-edge mobile devices and 3G Internet get a fast, responsive site experience.

Use JavaScript to Download the Right Image Size for the Right Context

If your site experience inherently relies on pictures, you need a solution that scales across the spectrum of devices and network conditions to offer users a compelling experience in the context of the device they use. This means that on high-quality cinema displays you want to wow your audience with high-quality (that is, large file size) images. At the same time, you don’t want to surface your 1600 x 1200 px photographs to users on a 4-inch cellphone screen with a metered 3G data connection.

While the W3C is working on proposals for how to declare different image sources for a given picture, a few emerging JavaScript technologies can help you get started right now.

Media Query Listeners

Media Query Listeners are supported in modern browsers. They let developers use JavaScript to verify whether certain media query conditions have been met, and accordingly decide what resources to download.

For example, say your Web page has a photograph that someone posted. As a developer, you need to do two things:

  • Decide the thresholds (or break points) between the large-screen and the small-screen experience, and based on that decision download either the high-quality set of resources or the low-bandwidth set. Include the following script at load time to ensure that your page downloads the appropriate set of assets and provides users with the right experience:
  var mediaQueryList = window.matchMedia("(min-width:480px)");
  //NOTE: for IE10 you will have to use .msMatchMedia, the vendor-prefixed
  //implementation, instead
  var isRegularScreen = mediaQueryList.matches;
  //this returns a Boolean you can use to decide whether to use
  //high-quality assets or low-bandwidth assets

  if (isRegularScreen)
  {
    //run script to download the high-quality images
  }
  else
  {
    //the condition has failed; the user is on a smartphone or in snap mode
    //run script to download low-bandwidth images
  }
  • Optionally, add an event listener to watch for changes to the media size so that as a user resizes her browser window, you can run different sets of scripts to acquire high-quality resources as needed. For example, a user might first visit your site on Windows 8 in snap mode with a 320-px width. Later, the user might find your content interesting and open the browser in full mode (and even share what she is seeing on her HDTV). At this point, you might want to provide a better experience for your media:
  mediaQueryList.addListener(mediaSizeChange);
  function mediaSizeChange(mediaQueryList)
  {
    //Executed whenever the media query changes from true to false or vice versa
    if (mediaQueryList.matches)
    {
      //run script to acquire high-quality assets
    }
    else
    {
      //in this case the user has gone from a large screen to a small screen
      //by resizing the browser down
      //if the high-quality images are already downloaded, we could treat
      //this as a no-op and just use the existing high-quality assets
      //alternatively, if the smaller image shows a clipped version of the
      //high-quality image, trigger the download of low-bandwidth images
    }
  }
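One way to keep this branching logic easy to test is to factor the breakpoint decision out of the browser-specific wiring. The sketch below is not from the original sample: pickAssetSet and applyAssets are hypothetical helper names, and the guard covers older browsers that lack matchMedia entirely.

```javascript
// Hypothetical helper: the pure breakpoint decision, independent of any browser API.
function pickAssetSet(viewportWidth, breakpoint) {
  return viewportWidth >= breakpoint ? "high-quality" : "low-bandwidth";
}

// Hypothetical browser-side wiring, guarded for browsers without matchMedia.
function applyAssets(win, breakpoint) {
  if (!win.matchMedia) {
    return "low-bandwidth"; // conservative fallback for old browsers
  }
  var mql = win.matchMedia("(min-width:" + breakpoint + "px)");
  return mql.matches ? "high-quality" : "low-bandwidth";
}
```

Because pickAssetSet takes plain numbers, you can unit-test your breakpoints without a browser, and applyAssets stays a thin shell around the matchMedia call.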

Custom JS Libraries

Of course, there are also custom libraries to help you with this. These libraries work in a similar way by identifying the size and resolution of the user’s device and then delivering, on-the-fly, a scaled-down version of your source image over the network. Here are some examples:

  • The Filament Group, which redesigned the Boston Globe site to be responsive, has a technique available here, which requires you to add some JavaScript files to your site and alter your site’s .htaccess file. Then, for each of your <img> tags, you provide a regular-size version as well as a hi-res version, and their plug-in takes care of the rest.
  <img src="smallRes.jpg" data-fullsrc="largeRes.jpg">
  • A similar hosted technique is also available. The benefit of this technique is that it does not require developers to hand-code their markup to point to low-resolution and high-resolution images, nor does it require them to manually upload two different versions of the same image.

  • Tyson Matanich has made the Polyfill codebase publicly available, which is the technique used in the adaptive redesign detailed earlier. Tyson also sheds light on the rationale behind the functionality available in the Polyfill library in his blog post. Some of the optimizations that Tyson and his team have made in the Polyfill codebase include the following (which work across browsers, even on Internet Explorer 6):

  1. Allow developers to specify which images should load before the DOM is ready (must-have images for page content).
  2. Allow developers to specify which images to load only after the rest of the page is ready (for example, images in a slide show that will only toggle 10 seconds later).
  3. Allow developers to decide whether new images should be downloaded and swapped in at the time a browser is resized.

The blog post details all the optimizations that have been exposed to developers in the Polyfill API.


Scaling Your Text
Sites use text to communicate organization and content to their users in two predominant ways: body text and header text. It’s definitely valuable to think through how your site is going to scale both across different contexts.

Body text is particularly interesting if your site features articles, blog posts and tons of written content that users consume. Users want to read the same 500-word article on their desktop, their TV and their 320-px-wide phone screen, and as the site developer you want to balance readability with convenience (that is, not making users scroll too much). The width of the article’s body can be scaled up to match the screen size, but more than that, you can offer larger type and improved line spacing to further improve readability for users with bigger screens.

Blocks of text are usually most readable when they hold approximately 66 characters per line, so if your site really depends on readability of long articles, optimizing type responsively for users can really improve their overall experience.

The following example uses the CSS3 media query max-width to progressively increase the readability of paragraph text:

  /* pack content more tightly on mobile screens to reduce scrolling in really long articles */
  p {
    font-size: 0.6em;
    line-height: 1em;
    letter-spacing: -0.05em;
  }

  @media screen and (max-width:800px) and (min-width:400px)
  {
    /* intermediate text density on tablet devices */
    p
    {
      font-size: 0.8em;
      line-height: 1.2em;
      letter-spacing: 0;
    }
  }

  @media screen and (min-width:800px)
  {
    /* text can be spaced out a little better on a larger screen */
    p
    {
      font: 1em 'Verdana', 'Arial', sans-serif;
      line-height: 1.5em;
      letter-spacing: 0.05em;
    }
  }

There is a great working example online of an article with responsively scaled type.

Additionally, your site probably uses headlines to break up content—to make it easier for a user who is scanning through your site’s pages to quickly identify how information and functionality are structured. Sites often use headlines with large impactful type and add margins and padding.

Headers in HTML (specifically <h1>, <h2>, and similar tags) usually are automatically styled not just to use a large font-size attribute but also spacious margins and padding to ensure that they stand out and break the flow of content.

With a similar technique, you can consider scaling down the text size, margins, padding and other spacing attributes you use for your headlines as a function of the available device real-estate. You can also use available open-source solutions, such as FitText, to achieve this.

Optimizing Form Fields

If your site requires users to fill in forms, you might want to ensure that you can minimize interactions for touch users. This is especially relevant if you have a lot of text inputs.

HTML5 extends the type attribute for the <input> tag to let developers add semantic meaning to a textbox. For example, if a user is filling out a contact form, the phone number input can be marked up as <input type="tel" /> and the email address field can be marked up as <input type="email" />.

Modern browsers, especially those on touch devices, will parse this attribute and optimize the layout of the touch-screen keyboard accordingly. For example, when a user taps on a phone number field, the browser’s touch keyboard will prominently display a numpad, and when the user taps on the email address field, the touch keyboard will surface the @ key, as well as a .com key to minimize typing. This is a minor tweak that can really improve the overall form-filling experience on your site for users visiting via touchscreen phones and tablets.
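A minimal contact-form fragment illustrates the idea; the field names and the action URL here are illustrative, not from any real site:

```html
<!-- Hypothetical contact form: the type attributes hint to touch browsers
     which keyboard layout to display for each field. -->
<form action="/contact" method="post">
  <input type="text" name="name" placeholder="Full name" />
  <input type="tel" name="phone" placeholder="Phone number" />
  <input type="email" name="email" placeholder="Email address" />
  <input type="submit" value="Send" />
</form>
```

Browsers that don’t recognize a given type value simply fall back to rendering a plain textbox, so there is no downside on older browsers.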

10 Must-Have Visual Studio Productivity Add-Ins I Use Every Day and Recommend to Every .NET Developer

Visual Studio provides a rich extensibility model that developers at Microsoft and in the community have taken advantage of to provide a host of quality add-ins. Some add-ins contribute significant how-did-I-live-without-this functionality, while others just help you automate that small redundant task you constantly find yourself performing.

In this article, I introduce you to some of the best Visual Studio add-ins available today that can be downloaded for free. I walk through using each of the add-ins, but because I am covering so many I only have room to introduce you to the basic functionality.
Each of these add-ins works with Visual Studio .NET 2003 and most of them already have versions available for Visual Studio 2005. If a Visual Studio 2005 version is not available as of the time of this writing, it should be shortly.


TestDriven.NET
Test-driven development is the practice of writing unit tests before you write code, and then writing the code to make those tests pass. By writing tests before you write code, you identify the exact behavior your code should exhibit and, as a bonus, at the end you have 100 percent test coverage, which makes extensive refactoring possible.
NUnit gives you the ability to write unit tests using a simple syntax and then execute those tests, one by one or all together, against your app. If you are using Visual Studio Team System, you have unit-testing functionality built into the Visual Studio IDE. Before Visual Studio Team System there was TestDriven.NET, an add-in that integrates NUnit directly into the Visual Studio IDE, and if you are using a non-Team System version of Visual Studio 2005 or Visual Studio .NET 2003, it is, in my opinion, still the best solution available.
TestDriven.NET adds unit testing functionality directly to the Visual Studio IDE. Instead of writing a unit test, switching over to the NUnit GUI tool, running the test, then switching back to the IDE to code, and so on, you can do it all right in the IDE.


Figure 1 New Testing Options from TestDriven.NET 
After installing TestDriven.NET you will find a number of new menu items on the right-click context menu as shown in Figure 1. You can right-click directly on a unit test and run it. The results will be displayed in the output window as shown in Figure 2.


Figure 2 Output of a Unit Test 
While executing unit tests in the IDE is invaluable by itself, perhaps the best feature is that you can also quickly launch into the debugger by right-clicking on a test and selecting Test With | Debugger. This will launch the debugger and then execute your unit tests, hitting any breakpoints you have set in those tests.
In fact, it doesn’t even have to be a unit test for TestDriven.NET to execute it. You could just as easily test any public method that returns void. This means that if you are testing an old app and need to walk through some code, you can write a quick test and execute it right away.
TestDriven.NET is an essential add-in if you work with unit tests or practice test-driven development. (If you don’t already, you should seriously consider it.) TestDriven.NET was written by Jamie Cansdale and can be downloaded from


GhostDoc
XML comments are invaluable tools when documenting your application. Using XML comments, you can mark up your code and then, using a tool like nDoc, you can generate help files or MSDN-like Web documentation based on those comments. The only problem with XML documentation is the time it takes to write it; you often end up writing similar statements over and over again. The goal of GhostDoc is to automate the tedious parts of writing XML comments by looking at the name of your class or method, as well as any parameters, and making an educated guess as to how the documentation should appear based on recommended naming conventions. This is not a replacement for writing thorough documentation of your business rules and providing examples, but it will automate the mindless part of your documentation generation.
For instance consider the method shown here:

private void SavePerson(Person person) { }

After installing GhostDoc, you can right-click on the method declaration and choose Document this. The following comments will then be added to your document:

/// <summary>
/// Saves the person.
/// </summary>
/// <param name="person">Person.</param>
private void SavePerson(Person person) { }

As you can see, GhostDoc has automatically generated a summary based on how the method was named and has also populated the parameter comments. Don’t stop here; you should add additional comments stating where the person is being saved to or perhaps give an example of creating and saving a person. Here is my comment after adding some additional information by hand:

/// <summary>
/// Saves a person using the configured persistence provider.
/// </summary>
/// <param name="person">The Person to be saved</param>
private void SavePerson(Person person) { }
Adding these extra comments is much easier since the basic, redundant portion is automatically generated by GhostDoc. GhostDoc also includes options that allow you to modify existing rules and add additional rules that determine what kind of comments should be generated.
GhostDoc was written by Roland Weigelt and can be downloaded from


Smart Paster
Strings play a large role in most applications, whether they are comments being used to describe the behavior of the system, messages being sent to the user, or SQL statements that will be executed. One of the frustrating parts of working with strings is that they never seem to paste correctly into the IDE. When you are pasting comments, the strings might be too long or not aligned correctly, leaving you to spend time inserting line breaks, comment characters, and tabbing. When working with strings that will actually be concatenated, you have to do even more work, usually separating the parts of the string and inserting concatenation symbols or using a string builder.
The Smart Paster add-in helps to limit some of this by providing a number of commands on the right-click menu that let you paste a string from the clipboard into Visual Studio using a certain format. After installing Smart Paster, you will see the new paste options available on the right-click menu (see Figure 3).


Figure 3 String Pasting Options from Smart Paster 
For instance, you might have the following string detailing some of your business logic:

To update a person record, a user must be a member of the customer service group or the manager group. After the person has been updated, a letter needs to be generated to notify the customer of the information change.

You can copy and paste this into Visual Studio using the Paste As | Comment option, and you would get the following:

//To update a person record, a user must be a member of the customer
//service group or the manager group. After the person has been updated,
//a letter needs to be generated to notify the customer of the
//information change.
The correct comment characters and line breaks are automatically inserted (you can configure the line length at which to insert a break). If you were inserting this text without the help of Smart Paster, it would paste as one long line, forcing you to manually add all the line breaks and comment characters. As another example, let’s say you have the following error message that you need to insert values into at run time:

You do not have the correct permissions to perform <insert action>. You must be a member of the <insert group> to perform this action.

Using the Paste As | StringBuilder command, you can insert this string as a StringBuilder into Visual Studio. The results would look like this:

StringBuilder stringBuilder = new StringBuilder(134);
stringBuilder.AppendFormat(@"You do not have the correct permissions to ");
stringBuilder.AppendFormat(@"perform <insert action>. You must be a member of ");
stringBuilder.AppendFormat(@"the <insert group> to perform this action.");

It would then simply be a matter of modifying the code to replace the variable sections of the string:

StringBuilder stringBuilder = new StringBuilder(134);
stringBuilder.AppendFormat(@"You do not have the correct permissions to ");
stringBuilder.AppendFormat(@"perform {0}. You must be a member of ", action);
stringBuilder.AppendFormat(@"the {0} to perform this action.", group);

Smart Paster is a time-saving add-in that eliminates a lot of the busy work associated with working with strings in Visual Studio. It was written by Alex Papadimoulis.


CodeKeep
Throughout the process of software development, it is common to reuse small snippets of code. Perhaps you reuse an example of how to get an enum value from a string or a starting point on how to implement a certain pattern in your language of choice.
Visual Studio offers some built-in functionality for working with code snippets, but it assumes a couple of things. First, it assumes that you are going to store all of your snippets on your local machine, so if you switch machines or move jobs you have to remember to pack up your snippets and take them with you. Second, these snippets can only be viewed by you. There is no built-in mechanism for sharing snippets between users, groups, or the general public.
This is where CodeKeep comes to the rescue. CodeKeep is a Web application that provides a place for people to create and share snippets of code in any language. The true usefulness of CodeKeep is its Visual Studio add-in, which allows you to search quickly through the CodeKeep database, as well as submit your own snippets.
After installing CodeKeep, you can search the existing code snippets by selecting Tools | CodeKeep | Search, and then using the search screen shown in Figure 4.


Figure 4 Searching Code Snippets with CodeKeep 
From this screen you can view your own snippets or search all of the snippets that have been submitted to CodeKeep. When searching for snippets, you see all of the snippets that other users have submitted and marked as public (you can also mark code as private if you want to hide your bad practices). If you find the snippet you are looking for, you can view its details and then quickly copy it to the clipboard to insert into your code.
You can also quickly and easily add your own code snippets to CodeKeep by selecting the code you want to save, right-clicking, and then selecting Send to CodeKeep. This will open a new screen that allows you to wrap some metadata around your snippet, including comments, what language it is written in, and whether it should be private or public for all to see.
Whenever you write a piece of code and you can imagine needing to use it in the future, simply take a moment to submit it; this way, you won’t have to worry about managing your snippets or rewriting them in the future. Since CodeKeep stores all of your snippets on the server, they are centralized so you don’t have to worry about moving your code from system to system or job to job.
CodeKeep was written by Arcware’s Dave Donaldson and is available from


PInvoke.NET
P/Invoke is the mechanism for making unmanaged Win32 API calls from within the .NET Framework. One of the hard parts of using P/Invoke is determining the method signature you need to use; this can often be an exercise in trial and error. Sending incorrect data types or values to an unmanaged API can result in memory leaks or other unexpected results.
PInvoke.NET is a wiki that can be used to document the correct P/Invoke signatures to be used when calling unmanaged Win32 APIs. A wiki is a collaborative Web site that anyone can edit, which means there are thousands of signatures, examples, and notes about using P/Invoke. Since the wiki can be edited by anyone, you can contribute as well as make use of the information there.
While the wiki and the information stored there are extremely valuable, what makes them most valuable is the PInvoke.NET Visual Studio add-in. Once you have downloaded and installed the PInvoke.NET add-in, you will be able to search for signatures as well as submit new content from inside Visual Studio. Simply right-click on your code file and you will see two new context items: Insert PInvoke Signatures and Contribute PInvoke Signatures and Types.

Figure 5 Using PInvoke.NET 
When you choose Insert PInvoke Signatures, you’ll see the dialog box shown in Figure 5. Using this simple dialog box, you can search for the function you want to call. Optionally, you can include the module that this function is a part of. Now, a crucial part of all major applications is the ability to make the computer Beep. So I will search for the Beep function and see what shows up. The results can be seen in Figure 6.

Figure 6 Finding the Beep Function in PInvoke.NET 
The wiki also suggests alternative managed APIs, letting you know that there is a new method, System.Console.Beep, in the .NET Framework 2.0.
There is also a link at the bottom of the dialog box that will take you to the corresponding page on the wiki for the Beep method. In this case, that page includes documentation on the various parameters that can be used with this method as well as some code examples on how to use it.
After selecting the signature you want to insert, click the Insert button and it will be placed into your code document. In this example, the following code would be automatically created for you:

[DllImport("kernel32.dll", SetLastError=true)]
[return: MarshalAs(UnmanagedType.Bool)]
static extern bool Beep(uint dwFreq, uint dwDuration);

You then simply need to write a call to this method and your computer will be beeping in no time.

The PInvoke.NET wiki and Visual Studio add-in take away a lot of the pain and research time sometimes involved when working with the Win32 API from managed code. The wiki can be accessed at, and the add-in can be downloaded from the Helpful Tools link found in the bottom-left corner of the site.


VSWindowManager PowerToy
The Visual Studio IDE includes a huge number of different windows, all of which are useful at different times. If you are like me, you have different window layouts that you like to use at various points in your dev work. When I am writing HTML, I like to hide the toolbox and the task list window. When I am designing forms, I want to display the toolbox and the task list. When I am writing code, I like to hide all the windows except for the task list. Having to constantly open, close, and move windows based on what I am doing can be both frustrating and time-consuming.
Visual Studio includes the concept of window layouts. You may have noticed that when you start debugging, the windows will automatically go back to the layout they were in the last time you were debugging. This is because Visual Studio includes a normal and a debugging window layout.
Wouldn’t it be nice if there were additional layouts you could use for when you are coding versus designing? Well, that is exactly what VSWindowManager PowerToy does.
After installing VSWindowManager PowerToy, you will see some new options in the Window menu as shown in Figure 7.


Figure 7 VSWindowManager Layout Commands 
The Save Window Layout As menu provides commands that let you save the current layout of your windows. To start using this power toy, set up your windows the way you like them for design and then navigate to the Windows | Save Window Layout As | My Design Layout command. This will save the current layout. Do the same for your favorite coding layout (selecting My Coding Layout), and then for up to three different custom layouts.
VSWindowManager will automatically switch between the design and coding layouts depending on whether you are currently viewing a designer or a code file. You can also use the commands on the Apply Window Layout menu to choose from your currently saved layouts. When you select one of the layouts you have saved, VSWindowManager will automatically hide, show, and rearrange windows so they are in the exact same layout as before.
VSWindowManager PowerToy is very simple, but can save you a lot of time and frustration. VSWindowManager is available from


WSContractFirst
Visual Studio makes creating Web services deceptively easy. You simply create an .asmx file, add some code, and you are ready to go. ASP.NET can then create a Web Services Description Language (WSDL) file used to describe the behavior and message patterns of your Web service.
There are a couple problems with letting ASP.NET generate this file for you. The main issue is that you are no longer in control of the contract you are creating for your Web service. This is where contract-first development comes to the rescue. Contract-first development, also called contract-driven development, is the practice of writing the contract (the WSDL file) for your Web service before you actually write the Web service itself. By writing your own WSDL file, you have complete control over how your Web service will be seen and used, including the interface and data structures that are exposed.
Writing a WSDL document is not a lot of fun. It’s kind of like writing a legal contract, but using lots of XML. This is where the WSContractFirst add-in comes into play. WSContractFirst makes it easier to write your WSDL file, and will generate client-side and server-side code for you, based on that WSDL file. You get the best of both worlds: control over your contract and the rapid development you are used to from Visual Studio style services development.
The first step to using WSContractFirst is to create your XML schema files. These files will define the message or messages that will be used with your Web services. Visual Studio provides an easy-to-use GUI interface to define your schemas; this is particularly helpful since this is one of the key steps of the Web service development process. Once you have defined your schemas, you simply need to right-click on one of them and choose Create WSDL Interface Description. This will launch the Generate WSDL Wizard, the first step of which is shown in Figure 8.

Figure 8 Building a WSDL File with WSContractFirst  
Step 1 collects the basics about your service including its name, namespace, and documentation. Step 2 allows you to specify the .xsd files you want to include in your service. The schema you selected to launch this wizard is included by default. Step 3 allows you to specify the operations of your service. You can name the operation as well as specify whether it is a one-way or request/response operation. Step 4 gives you the opportunity to enter the details for the operations and messages. Step 5 allows you to specify whether a element should be created and whether or not to launch the code generation dialog automatically when this wizard is done. Step 6 lets you specify alternative .xsd paths. Once the wizard is complete, your new WSDL file is added to your project.
Now that you have your WSDL file, there are a couple more things WSContractFirst can do for you. To launch the code generation portion of WSContractFirst, you simply need to right-click on your WSDL file and select Generate Web Service Code. This will launch the code generation dialog box shown in Figure 9.

Figure 9 WSContractFirst Code Generation Options 
You can choose to generate a client-side proxy or a service-side stub, as well as configure some other options about the code and what features it should include. Using these code generation features helps speed up development tremendously.
If you are developing Web services using Visual Studio you should definitely look into WSContractFirst and contract-first development. WSContractFirst was written by Thinktecture’s Christian Weyer.


VSMouseBindings
Your mouse probably has five buttons, so why are you only using three of them? The VSMouseBindings power toy provides an easy-to-use interface that lets you assign each of your mouse buttons to a Visual Studio command.
VSMouseBindings makes extensive use of the command pattern. You can bind mouse buttons to various commands, such as open a new file, copy the selected text to the clipboard, or just about anything else you can do in Visual Studio. After installing VSMouseBindings you will see a new section in the Options dialog box called VsMouseBindings. The interface can be seen in Figure 10.

Figure 10 VSMouseBindings Options for Visual Studio 
As you can see, you can select a command for each of the main buttons. You probably shouldn’t mess around with the left and right mouse buttons, though, as their normal functionality is pretty useful. The back and forward buttons, however, are begging to be assigned to different commands. If you enjoy having functionality similar to a browser’s back and forward buttons, then you can set the buttons to the Navigate.Backward and Navigate.Forward commands in Visual Studio.
The Use this mouse shortcut in menu lets you set the scope of your settings. This means you can configure different settings when you are in the HTML designer as opposed to when you are working in the source editor.
VSMouseBindings is available from


CopySourceAsHTML
Code is exponentially more readable when certain parts of that code are differentiated from the rest by using a different color text. Reading code in Visual Studio is generally much easier than trying to read code in an editor like Notepad.
Chances are you may have your own blog by now, or at least have spent some time reading them. Normally, when you try to post a cool code snippet to your blog it ends up being plain old text, which isn’t the easiest thing to read. This is where the CopySourceAsHTML add-in comes in to play. This add-in allows you to copy code as HTML, meaning you can easily post it to your blog or Web site and retain the coloring applied through Visual Studio.
After installing the CopySourceAsHTML add-in, simply select the code you want to copy and then select the Copy Source as HTML command from the right-click menu. After selecting this option you will see the dialog box shown in Figure 11.

Figure 11 Options for CopySourceAsHTML 

From here you can choose what kind of HTML view you want to create. This can include line numbers, specific tab sizes, and many other settings. After clicking OK, the HTML is saved to the clipboard. For instance, suppose you were starting with the following code snippet inside Visual Studio:

private long Add(int d, int d2) { return (long) d + d2; }
Figure 12 HTML Formatted Code  
After you select Copy As HTML and configure the HTML to include line numbers, this code will look like Figure 12 in the browser. Anything that makes it easier to share and understand code benefits all of us as it means more people will go to the trouble of sharing knowledge and learning from each other.
CopySourceAsHTML was written by Colin Coller and can be downloaded from


Cache Visualizer
Visual Studio 2005 includes a new debugging feature called visualizers, which can be used to create a human-readable view of data for use during the debugging process. Visual Studio 2005 includes a number of debugger visualizers by default, most notably the DataSet visualizer, which provides a tabular interface to view and edit the data inside a DataSet. While the default visualizers are very valuable, perhaps the best part of this new interface is that it is completely extensible. With just a little bit of work you can write your own visualizers to make debugging that much easier.
While a lot of users will write visualizers for their own custom complex types, some developers are already posting visualizers for parts of the Framework. I am going to look at one of the community-built visualizers that is already available and how it can be used to make debugging much easier.
The ASP.NET Cache represents a collection of objects that are being stored for later use. Each object has some settings wrapped around it, such as how long it will be cached for or any cache dependencies. There is no easy way while debugging to get an idea of what is in the cache, how long it will be there, or what it is watching. Brett Johnson saw that gap and wrote Cache Visualizer to examine the ASP.NET cache.
Once you have downloaded and installed the visualizer you will see a new icon appear next to the cache object in your debug windows, as shown in Figure 13.

Figure 13 Selecting Cache Visualizer While Debugging 
When you click on the magnifying glass to use the Cache Visualizer, a dialog box appears that includes information about all of the objects currently stored in the ASP.NET cache, as you can see in Figure 14.

Figure 14 Cache Visualizer Shows Objects in the ASP.NET Cache 
Under Public Cache Entries, you can see the entries that I have added to the cache. The entries under Private Cache Entries are ones added by ASP.NET. Note that you can see the expiration information as well as the file dependency for the cache entry.
The Cache Visualizer is a great tool when you are working with ASP.NET. It is also representative of some of the great community-developed visualizers we will see in the future.


Wrapping It Up
While this article has been dedicated to freely available add-ins, there are also a host of add-ins that can be purchased for a reasonable price. I encourage you to check out these other options, as in some cases they can add some tremendous functionality to the IDE. This article has been a quick tour of some of the best freely available add-ins for Visual Studio. Each of these add-ins may only do a small thing, but together they help to increase your productivity and enable you to write better code.

Latest Enterprise Library Patterns & Practices Resources

Microsoft Enterprise Library is a collection of reusable software components (application blocks) addressing common cross-cutting concerns. Each application block is now hosted in its own repository, and this site serves as a hub for the entire Enterprise Library. The latest source code here includes only the sample application, which utilizes all of the application blocks.

Enterprise Library Conceptual Architecture

Enterprise Library is actively developed by the patterns & practices team in collaboration with the community. Together we are dedicated to building application blocks that help accelerate developers’ productivity on Microsoft platforms.

  • See individual blocks’ product backlogs

Design Pattern Automation

Software development projects are becoming bigger and more complex every day. The more complex a project, the more likely it is that the cost of developing and maintaining the software will far outweigh the hardware cost.

There’s a super-linear relationship between the size of software and the cost of developing and maintaining it. After all, large and complex software requires good engineers to develop and maintain it, and good engineers are hard to come by and expensive to keep around.


Despite the high total cost of ownership per line of code, a lot of boilerplate code is still written, much of which could be avoided with smarter compilers. Indeed, most boilerplate code stems from the repetitive implementation of design patterns. Some of these design patterns are so well understood that they could be implemented automatically, if only we could teach them to compilers.

Implementing the Observer pattern

Take, for instance, the Observer pattern. This design pattern was identified as early as 1995 and became the basis of the successful Model-View-Controller architecture. Elements of this pattern were implemented in the first versions of Java (1995, Observable interface) and .NET (2001, INotifyPropertyChanged interface). Although the interfaces are a part of the framework, they still need to be implemented manually by developers.

The INotifyPropertyChanged interface simply contains one event named PropertyChanged, which needs to be signaled whenever a property of the object is set to a different value.

Let’s have a look at a simple example in .NET:

public class Person : INotifyPropertyChanged
{
    string firstName, lastName;

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        if (this.PropertyChanged != null)
            this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }

    public string FirstName
    {
        get { return this.firstName; }
        set { this.firstName = value; OnPropertyChanged("FirstName"); OnPropertyChanged("FullName"); }
    }

    public string LastName
    {
        get { return this.lastName; }
        set { this.lastName = value; OnPropertyChanged("LastName"); OnPropertyChanged("FullName"); }
    }

    public string FullName
    {
        get { return string.Format("{0} {1}", this.firstName, this.lastName); }
    }
}

Each property ultimately depends on a set of fields, and we have to raise PropertyChanged for a property whenever we change a field that affects it.

Shouldn’t it be possible for the compiler to do this work automatically for us? The long answer: detecting dependencies between fields and properties is a daunting task if we consider all the corner cases: properties can depend on fields of other objects, they can call other methods, or, even worse, they can call virtual methods or delegates unknown to the compiler. So there is no general solution to this problem, at least not if we expect compilation times measured in seconds or minutes rather than hours or days. However, in real life, a large share of properties are simple enough to be fully understood by a compiler. So the short answer is yes: a compiler could generate notification code for more than 90% of the properties in a typical application.

In practice, the same class could be implemented as follows:

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string FullName { get { return string.Format("{0} {1}", this.FirstName, this.LastName); } }
}

This code tells the compiler what to do (implement INotifyPropertyChanged), not how to do it.
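As an illustration of what such automation looks like in a dynamic language, a JavaScript Proxy can synthesize change notifications at run time with no per-property boilerplate. This is a rough analogue only, with hypothetical names; it is not how PostSharp or a C# compiler extension actually works:

```javascript
// observable() wraps a plain object so that every property write raises
// an onPropertyChanged callback, playing the role a compiler extension
// would play at build time for INotifyPropertyChanged.
function observable(target, onPropertyChanged) {
  return new Proxy(target, {
    set(obj, prop, value) {
      if (obj[prop] !== value) {
        obj[prop] = value;
        onPropertyChanged(prop); // analogue of raising PropertyChanged
      }
      return true;
    }
  });
}

const changes = [];
const person = observable({ firstName: "", lastName: "" }, p => changes.push(p));
person.firstName = "Ada";
person.lastName = "Lovelace";
// changes now holds ["firstName", "lastName"]
```

The Person object itself stays free of notification code; the interception layer supplies it, which is exactly the separation the article argues for.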

Boilerplate Code is an Anti-Pattern

The Observer (INotifyPropertyChanged) pattern is just one example of a pattern that causes a lot of boilerplate code in large applications. A typical code base is full of patterns that generate boilerplate. Even when they are not recognized as “official” design patterns, they are patterns nonetheless, because they repeat massively throughout a code base. The most common causes of code repetition are:

  • Tracing, logging
  • Precondition and invariant checking
  • Authorization and audit
  • Locking and thread dispatching
  • Caching
  • Change tracking (for undo/redo)
  • Transaction handling
  • Exception handling

These features are difficult to encapsulate using normal OO techniques, which is why they are so often implemented with boilerplate code. Is that such a bad thing?


Addressing cross-cutting concerns with boilerplate code violates fundamental principles of good software engineering:

  • The Single Responsibility Principle is violated when multiple concerns are being implemented in the same method, such as Validation, Security, INotifyPropertyChanged, and Undo/Redo in a single property setter.
  • The Open/Closed Principle, which states that software entities should be open for extension, but closed for modification, is best respected when new features can be added without modifying the original source code.
  • The Don’t Repeat Yourself principle abhors the code repetition that results from implementing design patterns by hand.
  • The Loose Coupling principle is infringed when a pattern is implemented manually, because it then becomes difficult to change the implementation of that pattern. Note that coupling can occur not only between two components, but also between a component and a conceptual design. Trading one library for another is usually easy if they share the same conceptual design, but adopting a different design requires many more modifications to source code.

Additionally, boilerplate renders your code:

  • Harder to read and reason about when you are trying to understand what it does to address the functional requirement. This added complexity has a huge bearing on the cost of maintenance, considering that software maintenance consists of reading code 75% of the time!
  • Larger, which means not only lower productivity, but also higher cost of developing and maintaining the software, not counting a higher risk of introducing bugs.
  • Difficult to refactor and change. Changing a piece of boilerplate (to fix a bug, perhaps) requires changing every place where it has been applied. How do you even accurately identify where the boilerplate is used throughout a codebase that potentially spans many solutions and repositories? Find-and-replace…?

If left unchecked, boilerplate code has the nasty habit of growing around your code like a vine, taking over more space each time it is applied to a new method, until eventually you end up with a large codebase almost entirely covered by boilerplate. In one of my previous teams, a simple data-access class had over a thousand lines of code, 90% of which was boilerplate to handle different types of SQL exceptions and retries.

I hope by now you see why using boilerplate code is a terrible way to implement patterns. It is actually an anti-pattern to be avoided because it leads to unnecessary complexity, bugs, expensive maintenance, loss of productivity and ultimately, higher software cost.

Design Pattern Automation and Compiler Extensions

In many cases, the struggle to make common boilerplate code reusable stems from the lack of native metaprogramming support in mainstream statically typed languages such as C# and Java.

The compiler is in possession of an awful lot of information about our code that is normally outside our reach. Wouldn’t it be nice if we could benefit from this information and write compiler extensions to help with our design patterns?

A smarter compiler would allow for:

  1. Build-time program transformation: to allow us to add features whilst preserving the code semantics and keeping the complexity and number of lines of code in check, so we can automatically implement parts of a design pattern that can be automated;
  2. Static code validation: for build-time safety to ensure we have used the design pattern correctly or to check parts of a pattern that cannot be automated have been implemented according to a set of predefined rules.

Example: ‘using’ and ‘lock’ keywords in C#

If you want proof that design patterns can be supported directly by the compiler, look no further than the using and lock keywords. At first sight they are purely redundant in the language, but the designers of the language recognized their importance and created specific keywords for them.

Let’s have a look at the using keyword. The keyword is actually a part of the larger Disposable Pattern, composed of the following participants:

  • Resource Objects are objects that consume an external resource, such as a database connection.
  • Resource Consumers are instruction blocks or objects that consume Resource Objects during a given lifetime.

The Disposable Pattern is ruled by the following principles:

  1. Resource Objects must implement IDisposable.
  2. Implementation of IDisposable.Dispose must be idempotent, i.e. may be safely called several times.
  3. Resource Objects must have a finalizer (declared using destructor syntax, as in C++).
  4. Implementation of IDisposable.Dispose must call GC.SuppressFinalize.
  5. Generally, objects that store Resource Objects into their state (field) are also Resource Objects, and children Resource Objects should be disposed by the parent.
  6. Instruction blocks that allocate and consume a Resource Object should be enclosed with the using keyword (unless the reference to the resource is stored in the object state, see previous point).
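Rule 2, idempotence, is easy to get wrong, so here is a minimal JavaScript sketch of it; FileResource and its releaseCount counter are hypothetical stand-ins for a real handle:

```javascript
// A disposable-style resource: calling dispose() several times must be
// safe, so a guard flag makes every call after the first a no-op.
class FileResource {
  constructor() {
    this.disposed = false;
    this.releaseCount = 0;
  }
  dispose() {
    if (this.disposed) return; // already released: do nothing
    this.disposed = true;
    this.releaseCount += 1;    // stands in for releasing the real handle
  }
}

const r = new FileResource();
r.dispose();
r.dispose(); // safe: the underlying resource is released exactly once
```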

As you can see, the Disposable Pattern is richer than it appears at first sight. How is this pattern being automated and enforced?

  • The core .NET library provides the IDisposable interface.
  • The C# compiler provides the using keyword, which automates generation of some source code (a try/finally block).
  • FxCop can enforce a rule that says any disposable class must also implement a finalizer, and that the Dispose method must call GC.SuppressFinalize.

Therefore, the Disposable Pattern is a perfect example of a design pattern directly supported by the .NET platform.

But what about patterns not intrinsically supported? They can be implemented using a combination of class libraries and compiler extensions. Our next example also comes from Microsoft.

Example: Code Contracts

Checking preconditions (and optionally postconditions and invariants) has long been recognized as a best practice to prevent defects in one component causing symptoms in another component. The idea is:

  • every component (every class, typically) should be designed as a “cell”;
  • every cell is responsible for its own health; therefore,
  • every cell should check any input it receives from other cells.

Precondition checking can be considered a design pattern because it is a repeatable solution to a recurring problem.
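A rough sketch of the idea in JavaScript, assuming a hand-rolled `requires` helper rather than the real Code Contracts API (getBookById and its return value are purely illustrative):

```javascript
// The precondition runs at the boundary of the "cell", so a bad input
// fails fast in the caller's stack frame instead of corrupting state
// deeper inside the component.
function requires(condition, message) {
  if (!condition) throw new Error("Precondition failed: " + message);
}

function getBookById(id) {
  requires(typeof id === "string" && id.length > 0, "id must be non-empty");
  return { id: id, title: "stub" }; // stand-in for a data-access call
}
```

Calling `getBookById("42")` returns normally, while `getBookById("")` throws before any data access happens.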

Microsoft Code Contracts is a perfect example of design pattern automation. Based on plain-old C# or Visual Basic, it gives you an API for expressing validation rules in the form of preconditions, postconditions, and object invariants. However, this API is not just a class library: it translates into build-time transformation and validation of your program.

I won’t delve into too much detail on Code Contracts; simply put, it allows you to specify validation rules in code which can be checked at build time as well as at run time. For example:

public Book GetBookById(Guid id)
{
    Contract.Requires(id != Guid.Empty);
    return Dal.Get<Book>(id);
}

public Author GetAuthorById(Guid id)
{
    Contract.Requires(id != Guid.Empty);
    return Dal.Get<Author>(id);
}

Its binary rewriter can, based on your configuration, rewrite the built assembly and inject additional code to validate the various conditions you have specified. If you inspect the transformed code generated by the binary rewriter, you will see something along the lines of:

public Book GetBookById(Guid id)
{
    if (__ContractsRuntime.insideContractEvaluation <= 4)
        __ContractsRuntime.Requires(id != Guid.Empty, (string)null, "id != Guid.Empty");
    return Dal.Get<Program.Book>(id);
}

public Author GetAuthorById(Guid id)
{
    if (__ContractsRuntime.insideContractEvaluation <= 4)
        __ContractsRuntime.Requires(id != Guid.Empty, (string)null, "id != Guid.Empty");
    return Dal.Get<Program.Author>(id);
}

For more information on Microsoft Code Contracts, please read Jon Skeet’s excellent InfoQ article.

Whilst compiler extensions such as Code Contracts are great, officially supported extensions usually take years to develop, mature, and stabilize. There are so many different domains, each with its own set of problems, that it’s impossible for official extensions to cover them all.

What we need is a generic framework to help automate and enforce design patterns in a disciplined way so we are able to tackle domain-specific problems effectively ourselves.

Generic Framework to Automate and Enforce Design Patterns

It may be tempting to see dynamic languages, open compilers (such as Roslyn), or re-compilers (such as Cecil) as solutions, because they expose the very details of the abstract syntax tree. However, these technologies operate at too low a level of abstraction, making any but the simplest transformations very complex to implement.

What we need is a high-level framework for compiler extension, based on the following principles:

1. Provide a set of transformation primitives, for instance:

  • intercepting method calls;
  • executing code before and after method execution;
  • intercepting access to fields, properties, or events;
  • introducing interfaces, methods, properties, or events to an existing class.
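To make the first primitive concrete, here is a run-time analogue in JavaScript using a Proxy; a static weaver performs the equivalent rewriting at build time, and the helper names here are hypothetical:

```javascript
// intercept() returns a proxy that runs a callback before every method
// call on the target, then forwards the call unchanged.
function intercept(target, before) {
  return new Proxy(target, {
    get(obj, prop) {
      const member = obj[prop];
      if (typeof member !== "function") return member;
      return function (...args) {
        before(prop, args);             // runs before the original method
        return member.apply(obj, args); // then the original method runs
      };
    }
  });
}

const calls = [];
const service = intercept({ add: (a, b) => a + b },
                          (name, args) => calls.push(name));
const sum = service.add(2, 3); // intercepted, then executed: sum === 5
```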

2. Provide a way to express where primitives should be applied: it’s good to tell the compiler extension you want to intercept some methods, but it’s even better if we can specify which methods should be intercepted!

3. Primitives must be safely composable

It’s natural to want to be able to apply multiple transformations to the same location(s) in our code, so the framework should give us the ability to compose transformations.

When you’re able to apply multiple transformations simultaneously some transformations might need to occur in a specific order in relation to others. Therefore the ordering of transformations needs to follow a well-defined convention but still allow us to override the default ordering where appropriate.
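Assuming each transformation can be modeled as a function wrapper, composition and its ordering convention can be sketched in JavaScript (illustrative only, not an actual AOP framework):

```javascript
// compose() applies wrappers in array order, so [withA, withB] yields
// withB(withA(fn)): the last wrapper in the list is outermost and runs
// first. The array order is the well-defined ordering convention.
function compose(fn, wrappers) {
  return wrappers.reduce((acc, wrap) => wrap(acc), fn);
}

const trace = [];
const withA = fn => (...args) => { trace.push("A"); return fn(...args); };
const withB = fn => (...args) => { trace.push("B"); return fn(...args); };

const f = compose(x => x * 2, [withA, withB]);
const result = f(21); // trace records ["B", "A"]: B wraps A wraps the core
```

Reordering the array changes which advice runs first, which is exactly why the framework must let developers override the default ordering where appropriate.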

4. Semantics of enhanced code should not be affected

The transformation mechanism should be unobtrusive and leave the original code unaltered as much as possible whilst at the same time providing capabilities to validate the transformations statically. The framework should not make it too easy to “break” the intent of the source code.

5. Advanced reflection and validation abilities

By definition, a design pattern contains rules defining how it should be implemented. For instance, a locking design pattern may define instance fields can only be accessed from instance methods of the same object. The framework must offer a mechanism to query methods accessing a given field, and a way to emit clean build-time errors.

Aspect-Oriented Programming

Aspect-Oriented Programming (AOP) is a programming paradigm that aims to increase modularity by allowing the separation of concerns.

An aspect is a special kind of class containing code transformations (called advices), code matching rules (barbarically called pointcuts), and code validation rules. Design patterns are typically implemented by one or several aspects. There are several ways to apply aspects to code, and they greatly depend on the AOP framework. Custom attributes (annotations in Java) are a convenient way to add aspects to hand-picked elements of code. More complex pointcuts can be expressed declaratively using XML (e.g. Microsoft Policy Injection Application Block) or a Domain-Specific Language (e.g. AspectJ or Spring), or programmatically using reflection (e.g. LINQ over System.Reflection with PostSharp).

The weaving process combines advice with the original source code at the specified locations (not less barbarically called joinpoints). It has access to meta-data about the original source code so, for compiled languages such as C# or Java, there is opportunity for the static weaver to perform static analysis to ensure the validity of the advice in relation to the pointcuts where they are applied.

Although aspect-oriented programming and design patterns have been independently conceptualized, AOP is an excellent solution to those who seek to automate design patterns or enforce design rules. Unlike low-level metaprogramming, AOP has been designed according to the principles cited above so anyone, and not only compiler specialists, can implement design patterns.

AOP is a programming paradigm and not a technology. As such, it can be implemented using different approaches. AspectJ, the leading AOP framework for Java, is now implemented directly in the Eclipse Java compiler. In .NET, where compilers are not open-source, AOP is best implemented as a re-compiler, transforming the output of the C# or Visual Basic compiler. The leading tool in .NET is PostSharp (see below). Alternatively, a limited subset of AOP can be achieved using dynamic proxies and service containers, and most dependency injection frameworks are able to offer at least method interception aspects.

Example: Custom Design Patterns with PostSharp

PostSharp is a development tool for the automation and enforcement of design patterns in Microsoft .NET and features the most complete AOP framework for .NET.

To avoid turning this article into a PostSharp tutorial, let’s take a very simple pattern: dispatching of method execution back and forth between a foreground (UI) thread and a background thread. This pattern can be implemented using two simple aspects: one that dispatches a method to the background thread, and another that dispatches it to the foreground thread. Both aspects can be compiled by the free PostSharp Express. Let’s look at the first aspect: BackgroundThreadAttribute.

The generative part of the pattern is simple: we just need to create a Task that executes that method, and schedule execution of that Task.

public sealed class BackgroundThreadAttribute : MethodInterceptionAspect
{
    public override void OnInvoke(MethodInterceptionArgs args)
    {
        Task.Run(args.Proceed);
    }
}

The MethodInterceptionArgs class contains information about the context in which the method is invoked, such as the arguments and the return value. With this information, you will be able to invoke the original method, cache its return value, log its input arguments, or just about anything that’s required for your use case.
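The same idea can be sketched outside PostSharp: a JavaScript wrapper that "proceeds" with the original call and caches its return value. The names are illustrative; this is a memoizing analogue, not the PostSharp API:

```javascript
// cached() intercepts a one-argument function: on a cache miss it
// proceeds with the original call and stores the return value; repeat
// calls are served from the cache without re-running the original.
function cached(fn) {
  const cache = new Map();
  let misses = 0;
  const wrapped = arg => {
    if (!cache.has(arg)) {
      misses += 1;
      cache.set(arg, fn(arg)); // proceed with the original function
    }
    return cache.get(arg);
  };
  wrapped.misses = () => misses; // expose miss count for inspection
  return wrapped;
}

const square = cached(x => x * x);
square(4);
square(4);                        // second call is served from the cache
const hitCount = square.misses(); // the original ran only once
```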

For the validation part of the pattern, we would like to avoid having the custom attribute applied to methods that have a return value or a parameter passed by reference. If this happens, we would like to emit a build-time error. Therefore, we have to implement the CompileTimeValidate method in our BackgroundThreadAttribute class:

// Check that the method returns 'void' and has no out/ref argument.
public override bool CompileTimeValidate( MethodBase method )
{
    MethodInfo methodInfo = (MethodInfo) method;

    if ( methodInfo.ReturnType != typeof(void) ||
         methodInfo.GetParameters().Any( p => p.ParameterType.IsByRef ) )
    {
        ThreadingMessageSource.Instance.Write( method, SeverityType.Error, "THR006",
                                               method.DeclaringType.Name, method.Name );
        return false;
    }

    return true;
}

The ForegroundThreadAttribute would look similar, using the Dispatcher object in WPF or the BeginInvoke method in WinForms.

The above aspect can be applied just like any other attributes, for example:

[BackgroundThread]
private void ReadFile(string fileName)
{
    DisplayText( File.ReadAllText(fileName) );
}

[ForegroundThread]
private void DisplayText(string content)
{
    this.textBox.Text = content;
}

The resulting source code is much cleaner than what we would get by directly using tasks and dispatchers.

One may argue that C# 5.0 addresses the issue better with the async and await keywords. This is correct, and is a good example of the C# team identifying a recurring problem that they decided to address with a design pattern implemented directly in the compiler and in core class libraries. While the .NET developer community had to wait until 2012 for this solution, PostSharp offered one as early as 2006.

How long must the .NET community wait for solutions to other common design patterns, for instance INotifyPropertyChanged? And what about design patterns that are specific to your company’s application framework?

Smarter compilers would allow you to implement your own design patterns, so you would not have to rely on the compiler vendor to improve the productivity of your team.

Downsides of AOP

I hope by now you are convinced that AOP is a viable solution to automate design patterns and enforce good design, but it’s worth bearing in mind that there are several downsides too:

1. Lack of staff preparation

As a paradigm, AOP is not taught in undergraduate programs, and it’s rarely touched at master level. This lack of education has contributed towards a lack of general awareness about AOP amongst the developer community.

Despite being 20 years old, AOP is misperceived as a ‘new’ paradigm which often proves to be the stumbling block for adoption for all but the most adventurous development teams.

Design patterns are almost the same age, but the idea that design patterns can be automated and validated is more recent. We cited some meaningful precedents in this article, involving the C# compiler, the .NET class library, and Visual Studio Code Analysis (FxCop), but these precedents have not been generalized into a broader call for design pattern automation.

2. Surprise factor

Because staff and students alike are not well prepared, there can be an element of surprise when they encounter AOP, because the application has additional behaviors that are not directly visible in the source code. Note that what is surprising is the intended effect of AOP (the compiler doing more than usual), not any side effect.

There can also be some surprise of an unintended effect, when a bug in the use of an aspect (or in a pointcut) causes the transformation to be applied to unexpected classes and methods. Debugging such errors can be subtle, especially if the developer is not aware that aspects are being applied to the project.

These surprise factors can be addressed by:

  • IDE integration, which helps to visualize (a) which additional features have been applied to the source displayed in the editor and (b) to which elements of code a given aspect has been applied. At time of writing only two AOP frameworks provide correct IDE integration: AspectJ (with the AJDT plug-in for Eclipse) and PostSharp (for Visual Studio).
  • Unit testing by the developer: aspects, as well as the fact that aspects have been applied properly, must be unit tested like any other source code artifact.
  • Not relying on naming conventions when applying aspects to code, but instead relying on structural properties of the code such as type inheritance or custom attributes. Note that this debate is not unique to AOP: convention-based programming has been recently gaining momentum, although it is also subject to surprises.

3. Politics

Use of design pattern automation is generally a politically sensitive issue because it also addresses separation of concerns within a team. Typically, senior developers will select design patterns and implement aspects, and junior developers will use them. Senior developers will write validation rules to ensure hand-written code respects the architecture. The fact that junior developers don’t need to understand the whole code base is actually the intended effect.

This argument is typically delicate to tackle because it takes the point of view of a senior manager, and may injure the pride of junior developers.

Ready-Made Design Pattern Implementation with PostSharp Pattern Libraries

As we’ve seen with the Disposable Pattern, even seemingly simple design patterns can actually require complex code transformation or validation. Some of these transformations and validations are complex but still possible to implement automatically. Others can be too complex for automatic processing and must be done manually.

Fortunately, there are also simple design patterns that can be automated easily by anyone (exception handling, transaction handling, and security) with an AOP framework.

After many years of market experience, the PostSharp team realized most customers were implementing the same aspects over and over again, and so began to provide highly sophisticated, optimized ready-made implementations of the most common design patterns.

PostSharp currently provides ready-made implementations for the following design patterns:

  • Multithreading: reader-writer-synchronized threading model, actor threading model, thread-exclusive threading model, thread dispatching;
  • Diagnostics: high-performance and detailed logging to a variety of back-ends including NLog and Log4Net;
  • INotifyPropertyChanged: including support for composite properties and dependencies on other objects;
  • Contracts: validation of parameters, fields, and properties.

Now, with ready-made implementations of design patterns, teams can start enjoying the benefits of AOP without learning AOP.


So-called high-level languages such as Java and C# still force developers to write code at an inappropriately low level of abstraction. Because of the limitations of mainstream compilers, developers are forced to write a lot of boilerplate code, adding to the cost of developing and maintaining applications. Boilerplate stems from the massive implementation of patterns by hand, in what may be the largest use of copy-paste inheritance in the industry.

The inability to automate design pattern implementation probably costs billions to the software industry, not even counting the opportunity cost of having qualified software engineers spending their time on infrastructure issues instead of adding business value.

However, a large amount of boilerplate could be removed if we had smarter compilers to allow us to automate implementation of the most common patterns. Hopefully, future language designers will understand design patterns are first-class citizens of modern application development, and should have appropriate support in the compiler.

But actually, there is no need to wait for new compilers. They already exist, and are mature. Aspect-oriented programming was specifically designed to address the issue of boilerplate code. Both AspectJ and PostSharp are mature implementations of these concepts, and are used by the largest companies in the world. And both PostSharp and Spring Roo provide ready-made implementations of the most common patterns. As always, early adopters can get productivity gains several years before the masses follow.

What is Kendo UI

Kendo UI is an HTML5, jQuery-based framework for building modern web apps. The framework features lots of UI widgets, a rich data visualization framework, an auto-adaptive Mobile framework, and all of the tools needed for HTML5 app development, such as Data Binding, Templating, Drag-and-Drop API, and more.



Kendo UI comes in different bundles:

  • Kendo UI Web – HTML5 widgets for desktop browsing experience.
  • Kendo UI DataViz – HTML5 data visualization widgets.
  • Kendo UI Mobile – HTML5 framework for building hybrid mobile applications.
  • Kendo UI Complete – includes Kendo UI Web, Kendo UI DataViz and Kendo UI Mobile.
  • Telerik UI for ASP.NET MVC – Kendo UI Complete plus ASP.NET MVC wrappers for Kendo UI Web, DataViz and Mobile.
  • Telerik UI for JSP – Kendo UI Complete plus JSP wrappers for Kendo UI Web and Kendo UI DataViz.
  • Telerik UI for PHP – Kendo UI Complete plus PHP wrappers for Kendo UI Web and Kendo UI DataViz.

Installing and Getting Started with Kendo UI

You can download all Kendo UI bundles from the download page.

The distribution zip file contains the following:

  • /examples – quick start demos.
  • /js – minified JavaScript files.
  • /src – complete source code. Not available in the trial distribution.
  • /styles – minified CSS files and theme images.
  • /wrappers – server-side wrappers. Available in Telerik UI for ASP.NET MVC, JSP or PHP.
  • changelog.html – Kendo UI release notes.

Using Kendo UI

To use Kendo UI in your HTML page you need to include the required JavaScript and CSS files.

Kendo UI Web

  1. Download Kendo UI Web and extract the distribution zip file to a convenient location.
  2. Copy the /js and /styles directories of the Kendo UI Web distribution to your web application root directory.
  3. Include the Kendo UI Web JavaScript and CSS files in the head tag of your HTML page. Make sure the common CSS file is registered before the theme CSS file. Also make sure only one combined script file is registered. For more information, please refer to the JavaScript Dependencies page.
    <!-- Common Kendo UI Web CSS -->
    <link href="styles/kendo.common.min.css" rel="stylesheet" />
    <!-- Default Kendo UI Web theme CSS -->
    <link href="styles/kendo.default.min.css" rel="stylesheet" />
    <!-- jQuery JavaScript -->
    <script src="js/jquery.min.js"></script>
    <!-- Kendo UI Web combined JavaScript -->
    <script src="js/kendo.web.min.js"></script>
  4. Initialize a Kendo UI Web widget (the Kendo DatePicker in this example):
    <!-- HTML element from which the Kendo DatePicker would be initialized -->
    <input id="datepicker" />
    <script>
        // Initialize the Kendo DatePicker by calling the kendoDatePicker jQuery plugin
        $(function() { $("#datepicker").kendoDatePicker(); });
    </script>

Here is the complete example:

<!DOCTYPE html>
<html>
<head>
    <title>Kendo UI Web</title>
    <link href="styles/kendo.common.min.css" rel="stylesheet" />
    <link href="styles/kendo.default.min.css" rel="stylesheet" />
    <script src="js/jquery.min.js"></script>
    <script src="js/kendo.web.min.js"></script>
</head>
<body>
    <input id="datepicker" />
    <script>$(function() { $("#datepicker").kendoDatePicker(); });</script>
</body>
</html>

Kendo UI DataViz

  1. Download Kendo UI DataViz and extract the distribution zip file to a convenient location.
  2. Copy the /js and /styles directories of the Kendo UI DataViz distribution to your web application root directory.
  3. Include the Kendo UI DataViz JavaScript and CSS files in the head tag of your HTML page:
    <!-- Kendo UI DataViz CSS -->
    <link href="styles/kendo.dataviz.min.css" rel="stylesheet" />
    <!-- jQuery JavaScript -->
    <script src="js/jquery.min.js"></script>
    <!-- Kendo UI DataViz combined JavaScript -->
    <script src="js/kendo.dataviz.min.js"></script>
  4. Initialize a Kendo UI DataViz widget (the Kendo Radial Gauge in this example):
    <!-- HTML element from which the Kendo Radial Gauge would be initialized -->
    <div id="gauge"></div>
    <script>
        $(function() { $("#gauge").kendoRadialGauge(); });
    </script>

Here is the complete example:

<!DOCTYPE html>
<html>
<head>
    <title>Kendo UI DataViz</title>
    <link href="styles/kendo.dataviz.min.css" rel="stylesheet" />
    <script src="js/jquery.min.js"></script>
    <script src="js/kendo.dataviz.min.js"></script>
</head>
<body>
    <div id="gauge"></div>
    <script>$(function() { $("#gauge").kendoRadialGauge(); });</script>
</body>
</html>

Kendo UI Mobile

  1. Download Kendo UI Mobile and extract the distribution zip file to a convenient location.
  2. Copy the /js and /styles directories of the Kendo UI Mobile distribution to your web application root directory.
  3. Include the Kendo UI Mobile JavaScript and CSS files in the head tag of your HTML page:
    <!-- Kendo UI Mobile CSS -->
    <link href="styles/" rel="stylesheet" />
    <!-- jQuery JavaScript -->
    <script src="js/jquery.min.js"></script>
    <!-- Kendo UI Mobile combined JavaScript -->
    <script src="js/"></script>
  4. Initialize a Kendo Mobile Application:
    <!-- Kendo Mobile View -->
    <div data-role="view" data-title="View" id="index">
        <!-- Kendo Mobile Header -->
        <header data-role="header">
            <!-- Kendo Mobile NavBar widget -->
            <div data-role="navbar">
                <span data-role="view-title"></span>
            </div>
        </header>
        <!-- Kendo Mobile ListView widget -->
        <ul data-role="listview">
            <li>Item 1</li>
            <li>Item 2</li>
        </ul>
        <!-- Kendo Mobile Footer -->
        <footer data-role="footer">
            <!-- Kendo Mobile TabStrip widget -->
            <div data-role="tabstrip">
                <a data-icon="home" href="#index">Home</a>
                <a data-icon="settings" href="#settings">Settings</a>
            </div>
        </footer>
    </div>
    <script>
        // Initialize a new Kendo Mobile Application
        var app = new kendo.mobile.Application(document.body);
    </script>

Here is the complete example:

<!DOCTYPE html>
<html>
<head>
    <title>Kendo UI Mobile</title>
    <link href="styles/" rel="stylesheet" />
    <script src="js/jquery.min.js"></script>
    <script src="js/"></script>
</head>
<body>
    <div data-role="view" data-title="View" id="index">
        <header data-role="header">
            <div data-role="navbar">
                <span data-role="view-title"></span>
            </div>
        </header>
        <ul data-role="listview">
            <li>Item 1</li>
            <li>Item 2</li>
        </ul>
        <footer data-role="footer">
            <div data-role="tabstrip">
                <a data-icon="home" href="#index">Home</a>
                <a data-icon="settings" href="#settings">Settings</a>
            </div>
        </footer>
    </div>
    <script>
        var app = new kendo.mobile.Application(document.body);
    </script>
</body>
</html>

Server-side wrappers

Kendo UI provides server-side wrappers for ASP.NET, PHP and JSP. These are classes (ASP.NET and PHP) or XML tags (JSP) that allow you to configure Kendo UI widgets with server-side code.

You can find more info about the server-side wrappers here:

  • Get Started with Telerik UI for ASP.NET MVC
  • Get Started with Telerik UI for JSP
  • Get Started with Telerik UI for PHP

Next Steps

Kendo UI videos

You can watch the videos in the Kendo UI YouTube channel.

Kendo UI Dojo

A lot of interactive tutorials are available in the Kendo UI Dojo.

Further reading

  1. Kendo UI Widgets
  2. Data Attribute Initialization
  3. Requirements


  1. Online demos
  2. Code library projects
  3. Examples available on GitHub
    • ASP.NET MVC examples
    • ASP.NET MVC Kendo UI Music Store
    • ASP.NET WebForms examples
    • JSP examples
    • Kendo Mobile Sushi
    • PHP examples
    • Ruby on Rails examples

Help Us Improve Kendo UI Documentation, Samples, Tutorials and Demos

The Kendo UI team would LOVE your help to improve our documentation. We encourage you to contribute in the way that you choose:

Submit a New Issue at GitHub

Open a new issue on the topic if it does not exist already. When creating an issue, please provide a descriptive title, be as specific as possible, and link to the document in question. If you can provide a link to the closest anchor to the issue, that is even better.

Update the Documentation at GitHub

This is the most direct method. Follow the contribution instructions. The basic steps are that you fork our documentation and submit a pull request. That way you can contribute to exactly where you found the error and our technical writing team just needs to approve your change request. Please use only standard Markdown and follow the directions at the link. If you find an issue in the docs, or even feel like creating new content, we are happy to have your contributions!


Provide Feedback in the Kendo UI Forums

You can also go to the Kendo UI Forums and leave feedback. This method will take a bit longer to reach our documentation team, but if you like the accountability of forums and you want a fast reply from our amazing support team, leaving feedback in the Kendo UI forums guarantees that your suggestion has a support number and that we’ll follow up on it.

Thank you for contributing to the Kendo UI community!

Microsoft Research Tackles Ecosystem Modeling

Peter Lee, the head of Microsoft Research, shared some highlights of the organization in a recent interview with Scientific American:

Microsoft Research has

  • 1,100 Researchers
  • 13 Laboratories around the world
    • with a 14th opening soon in Brazil

To put it in perspective, Microsoft as a whole employs on the order of 100,000 people, making Microsoft Research about 1% of the organization.

In keeping with its mission of:

“promoting open publication of all research results
and encouraging deep collaborations with academic researchers.”


Microsoft Research crafted the following

Open Access Policy

  • Retention of Rights:
    Microsoft Research retains a license to make our Works available to the research community in our online Microsoft Research open-access repository. 
  • Authorization to enter into publisher agreements:
    Microsoft researchers are authorized to enter into standard publication agreements with Publishers on behalf of Microsoft, subject to the rights retained by Microsoft as per the previous paragraph.
  • Deposit:
    Microsoft Research will endeavor to make every Microsoft Research-authored article available to the public in an open-access repository.

The Open Access Policy introduction states:

“Microsoft Research is committed to disseminating the fruits of its research and scholarship as widely as possible because we recognize the benefits that accrue to scholarly enterprises from such wide dissemination, including more thorough review, consideration and critique, and general increase in scientific, scholarly and critical knowledge.”


In addition to adopting this policy, Microsoft Research also:

“…encourage researchers with whom we collaborate, and to whom we provide support, to embrace open access policies, and we will respect the policies enacted by their institutions.”

The MSDN blog closes with perspective on the ongoing changes in the structure of scientific publishing:

We are undoubtedly in the midst of a transition in academic publishing—a transition affecting publishers, institutions, librarians and curators, government agencies, corporations, and certainly researchers—in their roles both as authors and consumers. We know that there remain nuances to be understood and adjustments to be made, but we are excited and optimistic about the impact that open access will have on scientific discovery.


The MSDN blog was authored by

  • Jim Pinkelman, Senior Director, Microsoft Research Connections, and
  • Alex Wade, Director for Scholarly Communication, Microsoft Research


Microsoft Research Tackles Ecosystem Modeling (Josh Henretig, 17 Jan 2013)

What if there was a giant computer model that could dramatically enhance our understanding of the environment and lead to policy decisions that better support conservation and biodiversity?


A team of researchers at Microsoft Research is building just such a model, one that may eventually do just that, and has published an article today in Nature (paid access) arguing for other scientists to get on board and try doing the same. When Drew Purves, head of Microsoft’s Computational Ecology and Environmental Science Group (CEES), and his colleagues at Microsoft Research in Cambridge, United Kingdom, began working with the United Nations Environment Programme World Conservation Monitoring Centre (UNEP-WCMC), they didn’t know they would end up modeling life at global scales.


“UNEP-WCMC is an international hub of important conservation activity, and we were pretty open-minded about exactly what we might do together,” says Purves. But they quickly realized that what was really needed was a general ecosystem model (GEM) – something that hasn’t been possible to date because of the vast scale involved. In turn, findings from a GEM could contribute to better informed policy decisions about biodiversity. But first, a primer on terminology. A GCM (general circulation model) is a mathematical model that mimics the physics and chemistry of the planet’s land, ocean and atmosphere. While scientists use these models to better understand how the earth’s climate systems work, they are also used to make predictions about climate change and inform public policy. Because these models have been so successful, members of the conservation community are looking for a model that could improve their understanding of biodiversity.


Building a GEM is challenging—but not impossible. Microsoft Research and the UNEP-WCMC have spent the past two years developing a prototype GEM for terrestrial and marine ecosystems. The prototype is dubbed the Madingley Model, and is built on top of another hugely ambitious project that the group just finished, modeling the global carbon cycle. With this as a starting point, they set out to model all animal life too: herbivores, omnivores, and carnivores, of all sizes, on land and in the sea. The Computational Ecology group was in a unique position to do this, because the group includes actual ecologists (like Purves), doing novel research within Microsoft Research itself. In addition, they’re developing novel software tools for doing this kind of science. That has helped the team as it’s come up against all kinds of computational and technical challenges.


Nonetheless, the model’s outputs have been broadly consistent with current understandings of ecosystems. One challenge is that while some of the data needed to create an effective GEM has already been collected and is stored away in research institutions, more data is needed. A new major data-gathering program would be expensive, so supporters of GEMs are calling on governments around the world to support programs that manage large-scale collection of ecological and climate data. But if you build it, will they come?


Drew Purves knows building a realistic GEM is possible, but he believes the real challenge is constructing a model that will enable policy makers to manage our natural resources better – and that means making sure the predictions are accurate. If such an accurate, trustworthy model can be achieved, one day conservationists will be able to couple data from GEMs and models from other fields to provide a more comprehensive guide to global conservation policy. Finding solutions to climate change and ecosystem preservation is too big of a challenge for any one entity to tackle in isolation.


And that’s exactly why we think that computer modeling has potential. It’s another great example of the continually evolving role that technology will play in addressing the environmental challenges facing the planet, and we’re honored to be working hand in hand with the United Nations Environment Programme to begin solving those challenges.


NEW “Filter My Lists” Web Part now available + FREE Metro UI Master Page when ordering

“Filter My Lists” Web Part

Saves you time with optimal performance

Find what you are looking for with a few clicks, even in cluttered sites and lists with masses of items and documents.

Find exactly what you need and stop wasting your time browsing SharePoint. Filter the content of multiple lists and libraries in a single step.

Combine search and metadata filters

In a single panel combine item, document and attachment searches with metadata keyword searches and managed metadata filters.

Select multiple filter values from drop-down lists or alternatively use the keyword search of metadata fields with the help of wildcard characters and logical operators.
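As an illustration of the wildcard idea (a generic sketch, not the web part's actual implementation), a `*` wildcard can be translated into a regular expression and applied to item names:

```javascript
// Illustrative sketch: matching list items with a wildcard keyword filter,
// where * stands for any run of characters. Not the vendor's implementation.
function wildcardMatch(pattern, text) {
  var re = new RegExp("^" + pattern.split("*").map(function (p) {
    return p.replace(/[.+?^${}()|[\]\\]/g, "\\$&"); // escape regex specials
  }).join(".*") + "$", "i"); // case-insensitive, anchored at both ends
  return re.test(text);
}

var docs = ["Budget 2013.xlsx", "Budget draft.docx", "Minutes.docx"];
console.log(docs.filter(function (d) { return wildcardMatch("budget*", d); }));
// [ 'Budget 2013.xlsx', 'Budget draft.docx' ]
```

Logical operators (AND/OR between keywords) would layer on top of this same matching primitive.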

Export filtered views to Excel

Export filtered views and data to Excel. A print view enables you to print your results in a clear printable format with a single click.

Keep views clear and concise

Provides a complete set of filters without cluttering list views and keeps your list views clear, concise and speedy. Enables you to filter SharePoint using columns which aren’t visible in list views.

Refine filters and save them for future use, whether private, to share with others or to use as default filters.

FREE Metro Style UI Master Page



Modern UI Master Page and Styles for SharePoint 2010.

This will give the Metro/Modern UI styling of SharePoint 2013 to your SharePoint 2010 team sites.

Features include:
– Quick launch styling
– Global navigation and drop-down styling
– Search box styling and layout change
– Web part header styling
– Segoe UI font

SharePoint 2013 Basic Search Center Branding Problem

So, I had thought we were in the clear from the old 2010 Search Center branding disaster.

For the most part custom branding applies pretty easily to search sites in SharePoint 2013 thanks to the fact that it just uses the default Seattle.master for search branding.



However, there is a gotcha, specifically related to the Basic Search Center template. I think the problem is only with this one template, but maybe there are other areas affected. I tested the Enterprise Search Center and the default search and neither had issues.

Basically, what happens is that when you are creating your custom branding, chances are you will be applying a customized master page (one that is edited with a mapped drive or SharePoint Designer), and the Basic Search Center uses a snippet of code block to try to hide the ribbon when the Web Part management panel is up (I have no idea why this was so important, but I digress).

Okay, “so what” you might think… well code blocks are not permitted to run by default in customized master pages. They will work just fine in a custom master page deployed with a farm solution (according to comments below a sandbox solution will not fix the problem) but they will fail miserably in a customized master page like this:


So, how do you fix this problem? Well, the easiest solution is to package your custom master page into a farm solution and apply it to the site. The error should go away immediately. That doesn’t really help if you are still iterating in development or if you are using SharePoint Online (farm solutions are not allowed there).

Another option is to edit the aspx files on the Basic Search Site. From a mapped drive or from SPD you can edit default.aspx and results.aspx removing this StyleBlock section:

  <SharePoint:StyleBlock runat="server">
    <%
    WebPartManager webPartManager = SPWebPartManager.GetCurrentWebPartManager(this.Page);
    if (webPartManager != null && webPartManager.DisplayMode == SPWebPartManager.BrowseDisplayMode)
    { %>
    display: none;
    <% } %>
  </SharePoint:StyleBlock>

Note: one gotcha you may run into with this method is sometimes the search web parts will error on the page when you refresh it. You can fix this by removing the old web parts and re-adding them. I’m not sure why you have to do this sometimes, but it’s a relatively painless fix.

For some of you, editing these search files won’t be an acceptable solution. I’m hopeful someone will create a nice sandbox solution to fix the problem like we had in 2010…

5 Agile Project Management Techniques You Can Start Using Today

“The secret of getting ahead is getting started. The secret of getting started is breaking your complex overwhelming tasks into small manageable tasks, and then starting on the first one.” – Mark Twain

Mark Twain wasn’t a software development manager in any way; however, his words still resonate very accurately when it comes to how you can get started adopting Agile project management techniques.

Here are the top 5 practices you can use TODAY regardless of your situation or environment.


1. Don’t call it “Agile” – The term Agile seems to have become fatigued or simply over- or misused. In some organizations it even has negative connotations, synonymous with “developer centric” or “no documentation” or “no requirements.” This is not the intent of Agile. In addition, sometimes the term “Agile” conjures up dogma: a religious-like push to have an “all or nothing” adoption of all things “Agile”. In many cases, this simply isn’t reality. Ultimately we adopt “Agile” project management practices to help control chaos and, more importantly, reduce waste by helping us focus on the production of business value. The term “Agile” sometimes conjures up a nebulous end-state that seems unachievable or impractical to a lot of organizations. So, why call it Agile? The goal of adopting any practice is virtually the same: to minimize waste. You don’t need to label any of these practices with “Agile” if you don’t want to, and you will still get tremendous value.

2. Time-boxed High-Bandwidth Communication Cadences – The term “High Bandwidth” is used for a reason. Part of “Agile” is an acknowledgement that humans communicate more effectively when they are face-to-face, other forms being wasteful. We like face-to-face discussions: body language, facial expressions, and interactive conversation all add to the effectiveness of this form of collaboration. Given this, you should start to sprinkle in regular cadences of high-bandwidth communication across your project. To make this even more effective, consider placing a time-box around these sessions to make sure everyone stays focused. Don’t mistake this with “have more meetings”; implementing this correctly results in having fewer, more effective meetings focused on greater team communication. Start with a daily 15 minute “standup” meeting where the team simply acknowledges what they are working on and if they are having any problems. Next, sprinkle in a bi-weekly “show-and-tell” to show off what was accomplished during that time period with your customers. If you can’t meet face to face, then think about other forms of higher bandwidth communication such as video chat with Skype or Lync. Try NOT to rely upon email for your primary form of discussion, as meaning and intent are quickly lost or misinterpreted the more you rely upon written words for communication.


3. Be Visible – Find a way to simply and effectively communicate the current state of work. I might stress that this is not a 50 page print out of your Microsoft Project Gantt chart, but a very simple, highly visible board that shows what people are working on. This doesn’t need to be electronic and could be as simple as sticky notes on a whiteboard. There are lots and lots of examples of this; you should start off with something simple to create and, more importantly, simple to maintain and update.




4. Regular Checkup – The term “Post Mortem” is a horrible term. It means to investigate something after it is dead. Instead of using an autopsy to drive organizational learning, why not have a “regular checkup” to make sure you are doing all that you can to keep the project and team healthy? This doesn’t need to be formal; it can be as easy as a “pizza Friday” on the last Friday of a month where the team gets together to chat about what they think is going well, and what they know needs fixing.

5. Define Done – Instead of having arbitrary numbers that represent how “done” something is, create a checklist that you and your team agree defines exactly what it means to be “done” with each type of task. You can have a “done” checklist for your analysis tasks as well as for any other step in the development process. Checklists are very easy to produce and should start off very simple. You can use your regular checkup meetings to add or remove items from your checklists to continue to capture team learning and improve consistency across your project and team members.
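A definition of done can be expressed directly in code, which makes the "checklist, not percentage" point concrete. The checklist items below are invented examples of what a team might agree on:

```javascript
// Sketch: a "definition of done" as an explicit checklist rather than a
// percentage. The items are illustrative assumptions, not a prescribed set.
var doneChecklist = ["code reviewed", "unit tests pass", "docs updated"];

// A task is "done" only when every checklist item has been completed.
function isDone(task) {
  return doneChecklist.every(function (item) {
    return task.completed.indexOf(item) !== -1;
  });
}

console.log(isDone({ completed: ["code reviewed", "unit tests pass"] })); // false
console.log(isDone({ completed: ["code reviewed", "unit tests pass", "docs updated"] })); // true
```

A binary done/not-done answer like this removes the ambiguity of "90% done" status reports.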

These 5 practices should not conflict with any method or model you are working with today, and they go a long way toward helping you along your path to the adoption of even more Agile practices.

The top 4 issues faced with an Agile SDLC

Despite trying to achieve simplicity, Agile teams may still run across difficult issues. In this section some of the more common challenges are explored.


It is not uncommon for a team to have incomplete work at the end of an iteration. Unfinished work is an important issue to identify as it signals a potential problem with one or more aspects of the team. When an iteration is planned, the team sets an expectation with the customer. When those expectations are not met, the customer could lose faith in the team’s ability to deliver, which introduces conditions that make success less achievable. Unfinished work should always be analyzed by the team during every iteration retrospective. This is where the team can better understand why the work was not completed and is likewise an opportunity to chart a better course going forward.

There are only a few things that teams can do to manage unfinished work. First, the team can move the work forward into the next iteration. This is normally done with work that has been started and is close to completion. Work that is not done—and was not even started— is usually moved back to the main backlog and is prioritized and rescheduled with the rest of the backlog items.

Handling Incomplete Work

Incomplete work normally does not count towards team velocity; however, some teams like to take an earned-value approach. Most teams only factor in completed work when calculating velocity, since one of the reasons work may be left incomplete is that more was scheduled in the iteration than its capacity allowed.
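The completed-work-only convention can be sketched as a small function (the story names and point values are invented for illustration):

```javascript
// Sketch: computing iteration velocity from completed work only.
// Story points for incomplete items are excluded (no partial credit).
function velocity(stories) {
  return stories
    .filter(function (s) { return s.done; })
    .reduce(function (sum, s) { return sum + s.points; }, 0);
}

var iteration = [
  { name: "Login page", points: 5, done: true },
  { name: "Search API", points: 8, done: true },
  { name: "Reports", points: 13, done: false } // carried forward, not counted
];

console.log(velocity(iteration)); // 13
```

Excluding the 13-point unfinished story keeps velocity an honest measure of what the team can actually deliver per iteration.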

Teams should avoid extending the duration of an iteration to finish incomplete work. To reinforce the hard stop, some teams end iterations mid-week to reduce the temptation of using weekends as iteration buffers. Strict time boxing of iterations is an important tool in helping the team produce more predictable and repeatable results. If work does not fit within an iteration, the team should take the opportunity to assess the reason why so they can eliminate such issues in future iterations.


Bugs are a different type of requirement, one the team does not value. Bugs are waste in the eyes of an Agile team. Any time spent fixing a bug is time taken away from producing customer value, which is one of the reasons Agile teams strive for “zero defect” products. Nevertheless, bugs are inevitable and must be addressed by all Agile teams. But being unpredictable, they cannot be planned for as consistently as requirements from an iterative perspective.

Perhaps the most common way to handle bugs on a project is to allocate a particular amount of capacity in the iteration toward fixing bugs. Obviously, iterations early in the project will not need the same bug-fixing allocation as an iteration immediately preceding a production release, so this allocation should be adjusted by the team over time.

Handling Bugs

It is very difficult for even the most experienced Agile team to forecast bugs and predetermine how they will affect the time allocation in an iteration. Consequently, bugs are pulled into each iteration’s bug allocation based on their priority and impact. Since critical bugs can manifest daily, the bug backlog must also be managed daily, a process more commonly known as bug triage. Bug resolution is also very difficult to estimate, since teams usually have to work to reproduce bugs and research the root cause before any time estimate for a bug can be made.
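The allocation-plus-triage idea might be sketched like this (the point estimates, priorities, and the 10-point allocation are assumed values, not a prescription):

```javascript
// Illustrative sketch: pulling bugs into an iteration's fixed bug-fixing
// allocation, highest priority first, until the allocation is used up.
function triage(bugBacklog, allocationPoints) {
  var sorted = bugBacklog.slice().sort(function (a, b) {
    return a.priority - b.priority; // priority 1 = most critical
  });
  var pulled = [];
  var used = 0;
  for (var i = 0; i < sorted.length; i++) {
    if (used + sorted[i].points <= allocationPoints) {
      pulled.push(sorted[i].name);
      used += sorted[i].points;
    }
  }
  return pulled;
}

var bugs = [
  { name: "Crash on save", priority: 1, points: 5 },
  { name: "Typo in footer", priority: 3, points: 1 },
  { name: "Slow search", priority: 2, points: 8 }
];

// With 10 points reserved for bugs this iteration, the critical crash and the
// small typo fix fit; the 8-point search issue waits for tomorrow's triage.
console.log(triage(bugs, 10)); // [ 'Crash on save', 'Typo in footer' ]
```

Because the backlog is re-triaged daily, a new critical bug can displace lower-priority fixes without disturbing the overall allocation.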



Every project will contain a degree of uncertainty. Agile teams encounter technical uncertainty (what technology, approach, or design to employ), as well as uncertain requirements. Uncertainty is usually resolved through a combination of experimentation and further research. Agile teams work to address uncertainty with a “spike.” Like almost all things on an Agile project, a spike is a timeboxed amount of time dedicated to addressing an uncertainty. For example, an Agile team may not know the correct approach for a certain technical problem or integration. In this case, the team will schedule a spike time boxed to 3 or 4 hours, where one or more members of the team would perform further research or experimentation required to help resolve the uncertainty. Spikes are scheduled in the iteration as any other requirement and decomposed into tasks accordingly.


Agile teams focus on removing waste and sharing knowledge. The following four meetings are critical to this process and should never be skipped because they are a critical best practice in Agile planning:

  1. Daily standup (Daily Scrum in Scrum)
  2. Iteration planning (Sprint planning in Scrum)
  3. Iteration review (Sprint review in Scrum)
  4. Iteration retrospective (Sprint retrospective in Scrum)

Some teams who adopt Agile struggle with a process design that, of necessity, allocates meeting time to maximize communication and collaboration within an iteration. It can be tempting to skip these meetings in order to complete work instead, but beware: this can have a detrimental impact on the end result and hamper the team’s ability to achieve its project goals.


Many teams choose to stop doing iteration retrospectives, which are meetings held at the end of an iteration where the team gets together and openly share what they believe went well, and specifically, what needs improvement. Retrospective meetings are perhaps the most important part of any Agile process as they are a direct conduit to delivering customer value more effectively.


In fact, the retrospective meetings and the process improvement ideas that result in action are exactly what makes the team “agile.” Retrospective meetings are the critical mechanism to help drive continual and ongoing process improvement on a team.


Another potential meeting casualty is the daily standup, known in the Scrum methodology as the Daily Scrum. The daily standup is typically held at the same time and location every single day and is strictly time-boxed. Each member of the team answers three questions.

  • What did you do yesterday?
  • What will you do today?
  • Are there any problems that are getting in your way?

Daily standups provide a mechanism to ensure that each team member has an understanding of what work has been done and what work remains. Daily standups are not status meetings and they are not meant to inspire discussion or problem solving – they are a mechanism for the team to share state and to uncover problems inhibiting their ability to achieve their goal.


Another version of the daily standup focuses more on the flow of work rather than individual inhibitors. Instead of the team answering the three questions above, the team looks at each of the items that are in progress and quickly discusses what can be done to usher the items through the workflow.


Again, the daily standup is an extremely important tool for teams to help identify issues during the development process providing the team with the opportunity to remove those issues before they impact delivery. Without these meetings teams can easily miss opportunities to resolve issues as quickly as possible.


Another critical meeting is the iteration review meeting. During these meetings the team meets with the product owner to demonstrate what was accomplished during the iteration. This meeting is simply a demonstration rather than a presentation. During this review meeting, the project is assessed against the iteration goal that was determined during the iteration planning meeting. This meeting is a critical feedback loop with the team. Neglecting to perform this meeting means that valuable feedback is ignored. Some teams feel that they haven’t produced enough during an iteration to warrant an iteration review.


However, Agile teams strive to get feedback on everything they produce as quickly and as early as possible to help ensure the appropriate value is being targeted and delivered. These are essential if the team is to make beneficial course corrections that may come out of the review meetings.


Perhaps one of the most important meetings of an Agile team is the iteration planning meeting. During the iteration planning meeting the business sponsor/users (referred to as the Product Owner in Scrum) describes the highest priority features to the entire team. The team has the ability to ask questions so that they can appropriately define the work they need to accomplish. In addition, during this meeting the team will help determine the goal for the iteration, helping to drive decision making and focus during the iteration. The iteration planning meeting is critical for a number of reasons.


First, it helps ensure that the entire team has a crystal clear understanding of the user requirements. It facilitates direct interaction with the users instead of relying upon documentation alone for the information transfer. Without this forum for communication and alignment, there is a strong chance the team will misunderstand project requirements. This cascades to poor decomposition of work, poor sizing and estimation, and most importantly, the team risks missing the opportunity to provide optimum value possible to its users.

SAP Weekend : Part 2 – Using the Microsoft BizTalk Server for B2B Integration with SharePoint

This is Part 2 of my past weekend’s activities with SharePoint and SAP Integration methods.


In this post I am looking at how to use the BizTalk Adapter with SharePoint



  • Abstract
  • Goal
  • Business Scenario
  • Environment
  • Document Flow
  • Integration Steps
  • .NET Support
  • Summary



In the past few years, the way business is done has shifted toward implementing Enterprise Resource Planning (ERP) systems for key areas like marketing, sales, and manufacturing operations. Today most large organizations that deal with the major world markets rely heavily on such systems.

An organization’s operational systems span its worldwide network of marketing teams as well as its manufacturing and distribution operations. To provide customers with accurate information, each of these systems needs to be integrated as part of the larger enterprise.

This ultimately results in a more efficient enterprise overall, providing more reliable information and better customer service. This article addresses the integration of BizTalk Server with an Enterprise Resource Planning system, the need for that integration, and its role in the current e-business scenario.



Several key business drivers, such as customers and partners, need to communicate on different fronts for a successful business relationship. Achieving this communication requires integrating various systems, which in turn means evaluating and developing a B2B integration capability and e-business strategy. Doing so improves the quality of the business information at the organization’s disposal, helping to improve delivery times, reduce costs, and offer customers a higher level of overall service.

To provide B2B capabilities, the organization needs to expose its business application data, giving partners the ability to execute global business transactions. Facing internal integration and business-to-business (B2B) challenges on a global scale, the organization needs to look for a suitable solution.

To integrate its worldwide marketing, manufacturing, and distribution facilities, based on a core ERP system, with a variety of information systems, the organization needs a strategic deployment of integration technology products and integration service capabilities.


Business Scenario

Now take the example of ABC Manufacturing Company, whose success rests on the strength of its Europe-wide trading relationships. The company recognizes the need to strengthen these relationships by processing orders faster and more efficiently than ever before.

The company needed a new platform that could integrate orders from several countries, accepting payments in multiple currencies and translating measurements according to each country’s standards. The bottom line of ABC’s e-strategy was to accelerate order processing; the basic necessity was to eliminate multiple collections of the same data and the use of invalid data.

By using less paper, ABC would cut processing costs and speed up the information flow. Keeping this long term goal in mind, ABC Manufacturing Company can now think of integrating its four key countries into a new business-to-business (B2B) platform.


Here is another example, XYZ Marketing Company. Users visit this company’s website to explore a variety of products offered to its thousands of customers all over the world. The company always understood that it could offer greater benefits if it could more efficiently integrate its customers’ back-end systems. With such integration, customers could enjoy the advantages of highly efficient e-commerce sites, where a visitor on the Web could place an order that would flow smoothly from the website into the customer’s order entry system.


Some of those back-end order entry systems are built on the latest, most sophisticated enterprise resource planning (ERP) systems on the market, while others are built on legacy systems that have never been upgraded. Different customers require information formatted in different ways, but XYZ has no elegant way to transform the information coming out of the website to meet customer needs. With the traditional approach:

For each new e-commerce customer on the site, XYZ’s staff had to spend significant time creating a transformation application to facilitate the exchange of information. A better approach: XYZ needs a robust messaging solution that provides the flexibility and agility to meet a range of customer needs quickly and effectively. XYZ can then integrate customer back-end systems through a business-to-business (B2B) platform.



Many large-scale organizations maintain a centralized SAP environment as their core enterprise resource planning (ERP) system, used for the management and processing of all global business processes and practices. B2B integration mainly relies on asynchronous messaging, Electronic Data Interchange (EDI), and XML document transformation mechanisms to facilitate the transformation and exchange of information between an ERP system and other applications, including legacy systems.

For business document routing, transformation, and tracking, the existing SAP XML/EDI technology road map needs an XML service engine. This allows development of a complex set of mappings to and from SAP to meet internal and external XML/EDI technology and business strategy. Microsoft BizTalk Server is the best choice to handle the data interchange and mapping requirements. BizTalk Server has the most comprehensive development and management support among business-to-business platforms. Microsoft BizTalk Server and the BizTalk XML Framework version 2.0 with Simple Object Access Protocol (SOAP) version 1.1 provide precisely the kind of messaging solution needed to facilitate integration in a cost-effective manner.


Document Flow

Friends, now let’s look at the actual flow of document from Source System to Customer Target System using BizTalk Server. When a document is created, it is sent to a TCP/IP-based Application Linking and Enabling (ALE) port—a BizTalk-based receive function that is used for XML conversion. Then the document passes the XML to a processing script (VBScript) that is running as a BizTalk Application Integration Component (AIC). The following figure shows how BizTalk Server acts as a hub between applications that reside in two different organizations:

The data is serialized to the customer/vendor XML format using the Extensible Stylesheet Language Transformations (XSLT) generated from the BizTalk Mapper using a BizTalk channel. The XML document is sent using synchronous Hypertext Transfer Protocol Secure (HTTPS) or another requested transport protocol such as the Simple Mail Transfer Protocol (SMTP), as specified by the customer.

The following figure shows steps for XML document transformation:

The total serialized XML result is passed back to the processing script that is running as a BizTalk AIC. An XML “receipt” document then is created and submitted to another BizTalk channel that serializes the XML status document into a SAP IDOC status message. Finally, a Remote Function Call (RFC) is triggered to the SAP instance/client using a compiled C++/VB program to update the SAP IDOC status record. A complete loop of document reconciliation is achieved. If the status is not successful, an e-mail message is created and sent to one of the Support Teams that own the customer/vendor business XML/EDI transactions so that the conflict can be resolved. All of this happens instantaneously in a completely event-driven infrastructure between SAP and BizTalk.
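The reconciliation loop described above can be modeled as a small decision routine. The following is a minimal, illustrative Python sketch of that logic, not part of any BizTalk or SAP API; the function and field names are hypothetical, and the real flow runs inside BizTalk AICs and a compiled RFC program:

```python
# Illustrative model of the document-reconciliation loop described above.
# All names here are hypothetical; status values are placeholders rather
# than real SAP IDOC status codes.

def reconcile(idoc_number, delivery_succeeded):
    """Build an IDOC status update and decide whether to alert support."""
    status_record = {
        "idoc": idoc_number,
        "status": "OK" if delivery_succeeded else "ERROR",
    }
    # On failure, an e-mail goes to the support team that owns the
    # customer/vendor business XML/EDI transactions.
    alert_support = not delivery_succeeded
    return status_record, alert_support
```

The key point the sketch captures is that the loop is closed in both the success and failure cases: SAP always receives a status update, and a human is notified only when reconciliation fails.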

Integration Steps

Let’s talk about a very popular Order Entry and tracking scenario while discussing integration hereafter. The following sections describe the high-level steps required to transmit order information from Order Processing pipeline Component into the SAP/R3 application, and to receive order status update information from the SAP/R3 application.

The integration of AFS purchase order reception with SAP is achieved using the BizTalk Adapter for SAP (BTS-SAP). The IDOC handler is used by the BizTalk Adapter to provide the transactional support for bridging tRFC (Transactional Remote Function Calls) to MSMQ DTC (Distributed Transaction Coordinator). The IDOC handler is a COM object that processes IDOC documents sent from SAP through the Com4ABAP service, and ensures their successful arrival at the appropriate MSMQ destination. The handler supports the methods defined by the SAP tRFC protocol. When integrating purchase order reception with the SAP/R3 application, BizTalk Server (BTS) provides the transformation and messaging functionality, and the BizTalk Adapter for SAP provides the transport and routing functionality.

The following two sequential steps indicate how the whole integration takes place:

  • Purchase order reception integration
  • Order Status Update Integration

Purchase Order Reception Integration

  1. Suppose a new pipeline component is added to the Order Processing pipeline. This component creates an XML document that is equivalent to the OrderForm object that is passed through the pipeline. This XML purchase order is in Commerce Server Order XML v1.0 format and, once created, is sent to a special Microsoft Message Queue (MSMQ) queue created specifically for this purpose. Writing the order from the pipeline to MSMQ:

    The first step in sending order data to the SAP/R3 application involves building a new pipeline component to run within the Order Processing pipeline. This component must perform the following two tasks:

    A] Make an XML-formatted copy of the OrderForm object that is passing through the order processing pipeline. The GenerateXMLForDictionaryUsingSchema method of the DictionaryXMLTransforms object is used to create the copy.

    Private Function IPipelineComponent_Execute(ByVal objOrderForm As Object, _
        ByVal objContext As Object, ByVal lFlags As Long) As Long
    On Error GoTo ERROR_Execute
    Dim oXMLTransforms As Object
    Dim oXMLSchema As Object
    Dim oOrderFormXML As Object
    ' Return 1 for Success.
    IPipelineComponent_Execute = 1
    ' Create a DictionaryXMLTransforms object.
    Set oXMLTransforms = CreateObject("Commerce.DictionaryXMLTransforms")
    ' Create a PO schema object.
    Set oXMLSchema = oXMLTransforms.GetXMLFromFile(sSchemaLocation)
    ' Create an XML version of the order form.
    Set oOrderFormXML = oXMLTransforms.GenerateXMLForDictionaryUsingSchema _
        (objOrderForm, oXMLSchema)
    ' Write the XML order to the queue (WritePO2MSMQ is defined below).
    WritePO2MSMQ sQueueName, oOrderFormXML.xml, PO_TO_ERP_QUEUE_LABEL, sServerName
    Exit Function
    ERROR_Execute:
    App.LogEvent "QueuePO.CQueuePO -> Execute Error: " & _
        vbCrLf & Err.Description, vbLogEventTypeError
    ' Set warning level.
    IPipelineComponent_Execute = 2
    Resume Next
    End Function

    B] Send the newly created XML order document to the MSMQ queue defined for this purpose.

    Option Explicit
    ' MSMQ constants.
    ' Access modes.
    Const MQ_SEND_ACCESS = 2
    Const MQ_PEEK_ACCESS = 32
    ' Sharing modes.
    Const MQ_DENY_NONE = 0
    ' Transaction options.
    Const MQ_NO_TRANSACTION = 0
    Const MQ_SINGLE_MESSAGE = 3
    ' Error messages.
    Const MQ_ERROR_QUEUE_NOT_EXIST = -1072824317
    Function WritePO2MSMQ(sQueueName As String, sMsgBody As String, _
        sMsgLabel As String, sServerName As String, _
        Optional MaxTimeToReachQueue As Variant) As Long
    Dim lMaxTime As Long
    If IsMissing(MaxTimeToReachQueue) Then
        lMaxTime = -1 ' Default: INFINITE, no time limit to reach the queue.
    Else
        lMaxTime = MaxTimeToReachQueue
    End If
    Dim objQueueInfo As MSMQ.MSMQQueueInfo
    Dim objQueue As MSMQ.MSMQQueue, objAdminQueue As MSMQ.MSMQQueue
    Dim objQueueMsg As MSMQ.MSMQMessage
    On Error GoTo MSMQ_Error
    Set objQueueInfo = New MSMQ.MSMQQueueInfo
    objQueueInfo.FormatName = "DIRECT=OS:" & sServerName & "\PRIVATE$\" & sQueueName
    Set objQueue = objQueueInfo.Open(MQ_SEND_ACCESS, MQ_DENY_NONE)
    Set objQueueMsg = New MSMQ.MSMQMessage
    objQueueMsg.Label = sMsgLabel ' Set the message label property.
    objQueueMsg.Body = sMsgBody ' Set the message body property.
    objQueueMsg.MaxTimeToReachQueue = lMaxTime
    objQueueMsg.Send objQueue, MQ_SINGLE_MESSAGE
    On Error Resume Next
    Set objQueueMsg = Nothing
    Set objQueue = Nothing
    Set objQueueInfo = Nothing
    Exit Function
    MSMQ_Error:
    App.LogEvent "Error in WritePO2MSMQ: " & Error
    Resume Next
    End Function
  2. A BTS MSMQ receive function picks up the document from the MSMQ queue and sends it to a BTS channel that has been configured for this purpose. Receiving the XML order from MSMQ: The second step in sending order data to the SAP/R3 application involves BTS receiving the order data from the MSMQ queue into which it was placed at the end of the first step. You must configure a BTS MSMQ receive function to monitor the MSMQ queue to which the XML order was sent in the previous step. This receive function forwards the XML message to the configured BTS channel for transformation.
  3. The third step in sending order data to the SAP/R3 application involves BTS transforming the order data from Commerce Server Order XML v1.0 format into ORDERS01 IDOC format. A BTS channel must be configured to perform this transformation. After the transformation is complete, the BTS channel sends the resulting ORDERS01 IDOC message to the corresponding BTS messaging port. The BTS messaging port is configured to send the transformed message to an MSMQ queue called the 840 Queue. Once the message is placed in this queue, the BizTalk Adapter for SAP is responsible for further processing. 
  4. The BizTalk Adapter for SAP sends the ORDERS01 document to the DCOM Connector, which writes the order to the SAP/R3 application. The DCOM Connector is an SAP software product that provides a mechanism to send data to, and receive data from, an SAP system. When an IDOC message is placed in the 840 Queue, the DCOM Connector retrieves the message and sends it to SAP for processing. Although this processing is in the domain of the BizTalk Adapter for SAP, the steps involved are reviewed here as background information:
    • Determine the version of the IDOC schema in use and generate a BizTalk Server document specification.
    • Create a routing key from the contents of the Control Record of the IDOC schema.
    • Request a SAP Destination from the Manager Data Store given the constructed routing key.
    • Submit the IDOC message to the SAP System using the DCOM Connector 4.6D Submit functionality.
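The routing-key step above can be illustrated with a short sketch. SNDPRN (sender partner), RCVPRN (receiver partner), and IDOCTYP are standard IDOC control-record fields, but the key format and the destination table here are assumptions for illustration; the real lookup is performed by the adapter’s Manager Data Store:

```python
# Sketch of building a routing key from an IDOC control record and using it
# to look up an SAP destination. Key format and DESTINATIONS contents are
# hypothetical; only the control-record field names are standard.

def routing_key(control_record):
    # SNDPRN/RCVPRN identify the trading partners; IDOCTYP names the schema.
    return "{SNDPRN}:{RCVPRN}:{IDOCTYP}".format(**control_record)

DESTINATIONS = {  # hypothetical Manager Data Store contents
    "COMMERCE:SAPPRD:ORDERS01": {"host": "sapprd", "client": "100"},
}

def find_destination(control_record):
    """Return the SAP destination for this IDOC, or None if unroutable."""
    return DESTINATIONS.get(routing_key(control_record))
```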

Order Status Update Integration

Order status update integration can be achieved by providing a mechanism for sending information about updates made within the SAP/R3 application back to the Commerce Server order system.

The following sequence of steps describes such a mechanism:

  1. BizTalk Adapter for SAP processing:
    After a user has updated a purchase order using the SAP client, and the IDOC has been submitted to the appropriate tRFC port, the BizTalk Adapter for SAP uses the DCOM connector to send the resulting information to the 840 Queue, packaged as an ORDERS01 IDOC message. The 840 Queue is an MSMQ queue into which the BizTalk Adapter for SAP places IDOC messages so that they can be retrieved and processed by interested parties. This process is within the domain of the BizTalk Adapter for SAP, and is used by this solution to achieve the order update integration.
  2. Receiving the ORDERS01 IDOC message from MSMQ:
    The second step in updating order status from the SAP/R3 application involves BTS receiving ORDERS01 IDOC message from the MSMQ queue (840 Queue) into which it was placed at the end of the first step. You must configure a BTS MSMQ receive function to monitor the 840 Queue into which the XML order status message was placed. This receive function must be configured to forward the XML message to the configured BTS channel for transformation.
  3. Transforming the order update from IDOC format:
    Using a BTS MSMQ receive function, the document is retrieved and passed to a BTS transformation channel. The BTS channel transforms the ORDERS01 IDOC message into Commerce Server Order XML v1.0 format and then forwards it to the corresponding BTS messaging port. You must configure a BTS channel to perform this transformation. The following BizTalk Server (BTS) map, used in prototyping this solution, transforms an SAP ORDERS01 IDOC message into an XML document in Commerce Server Order XML v1.0 format. It allows a change to an order in the SAP/R3 application to be reflected in the Commerce Server orders database.

    This map used in the prototype only maps the order ID, demonstrating how the order in the SAP/R3 application can be synchronized with the order in the Commerce Server orders database. The mapping of other fields is specific to a particular implementation, and was not done for the prototype.

<xsl:stylesheet xmlns:xsl='http://www.w3.org/1999/XSL/Transform'
xmlns:msxsl='urn:schemas-microsoft-com:xslt' xmlns:var='urn:var'
xmlns:user='urn:user' exclude-result-prefixes='msxsl var user' version='1.0'>
<xsl:output method='xml' omit-xml-declaration='yes' />
<xsl:template match='/'>
<xsl:apply-templates select='ORDERS01'/>
</xsl:template>
<xsl:template match='ORDERS01'>
<orderform>

<!-- Connection from source node "BELNR" to destination node "OrderID" -->

<xsl:if test='E2EDK02/@BELNR'>
<xsl:attribute name='OrderID'>
<xsl:value-of select='E2EDK02/@BELNR'/>
</xsl:attribute>
</xsl:if>
</orderform>
</xsl:template>
</xsl:stylesheet>
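For readers who want to experiment with the mapping without a BizTalk environment, the same BELNR-to-OrderID transformation can be expressed with Python’s standard library. This is a minimal sketch mirroring the map above, assuming BELNR is an attribute of a direct E2EDK02 child as in the map:

```python
import xml.etree.ElementTree as ET

def idoc_to_orderform(idoc_xml):
    """Map the BELNR attribute of an ORDERS01 IDOC to <orderform OrderID=...>.

    Mirrors the XSLT map above; as in the prototype, only the order ID
    is mapped, so the order in SAP/R3 can be synchronized with the order
    in the Commerce Server orders database.
    """
    root = ET.fromstring(idoc_xml)
    orderform = ET.Element("orderform")
    e2edk02 = root.find("E2EDK02")
    if e2edk02 is not None and "BELNR" in e2edk02.attrib:
        orderform.set("OrderID", e2edk02.attrib["BELNR"])
    return ET.tostring(orderform, encoding="unicode")
```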

The BTS message port posts the transformed order update document to the configured ASP page for further processing. The configured ASP page retrieves the message posted to it and uses the Commerce Server OrderGroupManager and OrderGroup objects to update the order status information in the Commerce Server orders database.

  4. Updating the Commerce Server order system:
    The fourth step in updating order status from the SAP/R3 application involves updating the Commerce Server order system to reflect the change in status. This is accomplished by adding the page _OrderStatusUpdate.asp to the AFS Solution Site and configuring the BTS messaging port to post the transformed XML document to that page. The update is performed using the Commerce Server OrderGroupManager and OrderGroup objects.

    The routine ProcessOrderStatus is the primary routine in the page. It uses the DOM and XPath to extract enough information to find the appropriate order using the OrderGroupManager object. Once the correct order is located, it is loaded into an OrderGroup object so that any of the entries in the OrderGroup object can be updated as needed.

    The following code implements page _OrderStatusUpdate.asp:

    < %@ Language="VBScript" %>
    < % 
    const TEMPORARY_FOLDER = 2
    call Main()
    Sub Main()
    call ProcessOrderStatus( ParseRequestForm() )
    End Sub
    Sub ProcessOrderStatus(sDocument)
    Dim oOrderGroupMgr 
    Dim oOrderGroup 
    Dim rs
    Dim sPONum
    Dim oAttr 
    Dim vResult
    Dim vTracking 
    Dim oXML
    Dim dictConfig
    Dim oElement
    Set oOrderGroupMgr = Server.CreateObject("CS_Req.OrderGroupManager")
    Set oOrderGroup = Server.CreateObject("CS_Req.OrderGroup")
    Set oXML = Server.CreateObject("MSXML.DOMDocument")
    oXML.async = False
    If oXML.loadXML (sDocument) Then
    ' Get the orderform element.
    Set oElement = oXML.selectSingleNode("/orderform")
    ' Get the poNum.
    sPONum = oElement.getAttribute("OrderID")
    Set dictConfig = Application("MSCSAppConfig").GetOptionsDictionary("")
    ' Use ordergroupmgr to find the order by OrderID.
    oOrderGroupMgr.Initialize (dictConfig.s_CatalogConnectionString)
    Set rs = oOrderGroupMgr.Find(Array("order_requisition_number='" & sPONum & "'"), _
        Array(""), Array(""))
    If rs.EOF And rs.BOF Then
    ' Create a new one. - Not implemented in this version.
    Else
    ' Edit the current one.
    oOrderGroup.Initialize dictConfig.s_CatalogConnectionString, rs("User_ID")
    ' Load the found order.
    oOrderGroup.LoadOrder rs("ordergroup_id")
    ' For the purposes of the prototype, we only update the status.
    oOrderGroup.Value.order_status_code = 2 ' 2 = Saved order
    ' Save it.
    vResult = oOrderGroup.SaveAsOrder(vTracking)
    End If
    Else
    WriteError "Unable to load received XML into DOM."
    End If
    End Sub

    Function ParseRequestForm()
    Dim PostedDocument
    Dim ContentType
    Dim CharSet
    Dim EntityBody
    Dim Stream
    Dim StartPos
    Dim EndPos
    ContentType = Request.ServerVariables( "CONTENT_TYPE" )
    ' Determine request entity body character set (default to us-ascii).
    CharSet = "us-ascii"
    StartPos = InStr( 1, ContentType, "CharSet=""", 1)
    If (StartPos > 0 ) then
    StartPos = StartPos + Len("CharSet=""")
    EndPos = InStr( StartPos, ContentType, """",1 )
    CharSet = Mid (ContentType, StartPos, EndPos - StartPos )
    End If
    ' Check for multipart MIME message.
    PostedDocument = ""
    if ( ContentType = "" or Request.TotalBytes = 0) then
    ' Content-Type is required as well as an entity body.
    Response.Status = "406 Not Acceptable"
    Response.Write "Content-type or Entity body is missing" & VbCrlf
    Response.Write "Message headers follow below:" & VbCrlf
    Response.Write Request.ServerVariables("ALL_RAW") & VbCrlf
    If ( InStr( 1,ContentType,"multipart/" ) >
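The character-set extraction at the top of ParseRequestForm can be sketched equivalently in Python. This mirrors the InStr/Mid logic above (a case-insensitive search, defaulting to us-ascii when no CharSet is present); it is an illustration, not part of the ASP page:

```python
def parse_charset(content_type, default="us-ascii"):
    """Extract CharSet="..." from a Content-Type header, as the ASP code does."""
    marker = 'charset="'
    lower = content_type.lower()  # case-insensitive, like InStr with compare=1
    start = lower.find(marker)
    if start < 0:
        return default
    start += len(marker)
    end = content_type.find('"', start)
    if end < 0:
        return default
    return content_type[start:end]
```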

    .NET Support

    This multi-tier application environment can be implemented successfully with the help of a Web portal that utilizes the Microsoft .NET Enterprise Server model. The Microsoft BizTalk Server Toolkit for Microsoft .NET provides the ability to leverage the power of XML Web services and Visual Studio .NET to build dynamic, transaction-based, fault-tolerant systems with full access to existing applications.


    Microsoft BizTalk Server can help organizations quickly establish and manage Internet relationships with other organizations. It makes it possible for them to automate document interchange with any other organization, regardless of the conversion requirements and data formats used. This provides a cost-effective approach for integrating business processes across large Enterprise Resource Planning systems, designed to facilitate collaborative e-commerce business processes. BizTalk Server includes a document interchange engine, a business process execution engine, and a set of business document and server management tools. In addition, a business document editor and mapper tools are provided for managing trading partner relationships, administering server clusters, and tracking transactions.


    System Center Virtual Machine Manager (VMM) 2012 as Private Cloud Enabler (3/5): Deployment with Service Template

    By this time, I assume we all have some clarity that virtualization is not cloud. There are indeed many significant differences between the two; a main departure is the approach to deploying applications. In the third article of the five-part series listed below, I examine the service-based deployment introduced in VMM 2012 for building a private cloud.

    • Part 1. Private Cloud Concepts
    • Part 2. Fabric, Oh, Fabric
    • Part 3. Deployment with Service Template (This article)
    • Part 4. Working with Service Templates
    • Part 5. App Controller

    VMM 2012 can carry out both traditional virtual machine (VM)-centric and emerging service-based deployments. The former is virtualization-focused and operates at the VM level, while the latter is a service-centric approach intended for private cloud deployment.

    This article is intended for those with some experience administering a VMM 2008 R2 infrastructure. Notice that in cloud computing, “service” is a critical and must-understand concept, which I have discussed elsewhere. And just to be clear, in the context of cloud computing a “service” and an “application” mean the same thing, since in the cloud everything is delivered to the user as a service, for example SaaS, PaaS, and IaaS. Throughout this article, I use the terms service and application interchangeably.

    VM-Centric Deployment

    In virtualization, deploying a server conceptually becomes building, shipping, and booting from a virtual hard disk (VHD) file. Those who would like to refresh their knowledge of virtualization are invited to review the 20-Part Webcast Series on Microsoft Virtualization Solutions.

    Virtualization has brought many opportunities for IT to improve processes and operations. With system management software such as System Center Virtual Machine Manager 2008 R2 (VMM 2008 R2), we can deploy VMs and install an OS to a target environment with few or no operator interventions. From an application point of view, with or without automation, the associated VMs are essentially deployed and configured individually. For instance, a multi-tier web application like the one shown above is typically deployed with a pre-determined number of VMs, followed by installing and configuring the application among the deployed VMs individually, based on application requirements. Particularly when there is a back-end database involved, a system administrator typically must follow a particular sequence: first bring a target database server instance online by configuring specific login accounts with specific DB roles, securing specific ports, and registering in AD, before proceeding with subsequent deployment steps. These operator interventions are required largely because there is no cost-effective, systematic, and automatic way to streamline and manage the concurrent, event-driven inter-VM dependencies that become relevant at various moments during an application deployment.

    Even when a system management infrastructure is in place, such as VMM 2008 R2 integrated with other System Center members, at an operational level VMs are largely managed and maintained individually in a VM-centric deployment model. Perhaps more significantly, in a VM-centric deployment it is often labor-intensive, with relatively high TCO, to deploy a multi-tier application on demand (in other words, as a service), deploy it multiple times, and run multiple releases concurrently in the same IT environment, if it is technically feasible at all. In VMM 2012, the ability to deploy services on demand, deploy them multiple times, and run multiple releases concurrently in the same environment becomes noticeably straightforward and amazingly simple with a service-based deployment model.

    Service-Based Deployment

    A VM-centric model lacks an effective way to address event-driven, inter-VM dependencies during a deployment, nor does it have a concept of fabric, which is an essential abstraction of cloud computing. In VMM 2012, a service-based deployment means that all the resources encompassing an application, i.e. the configurations, installations, instances, dependencies, etc., are deployed and managed as one entity with fabric. The integration of fabric in VMM 2012 is a key delivery, clearly illustrated in the VMM 2012 admin console as shown on the left. The precondition for deploying services to a private cloud is first laying out the private cloud fabric.

    Constructing Fabric

    To deploy a service, the process normally employs administrator and service accounts to carry out the tasks of installing and configuring the infrastructure and application on servers, networking, and storage, based on application requirements. Here, the servers collectively act as a compute engine providing a target runtime environment for executing code. Networking interconnects all relevant application resources and peripherals to support all management and communication needs, while storage is where code and data actually reside and are maintained. In VMM 2012, the server, networking, and storage infrastructure components are collectively managed under a single concept: the private cloud fabric.

    There are three resource pools/nodes encompassing fabric: Servers, Networking, and Storage. Servers contain various types of servers, including virtualization host groups, PXE, Update (i.e. WSUS), and other servers. Host groups are containers that logically group servers with virtualization hosting capabilities; they ultimately represent the physical boxes where VMs can be deployed, either with specific network settings or dynamically selected by VMM Intelligent Placement, as applicable, based on defined criteria. VMM 2012 can manage Hyper-V based, VMware, and other virtualization solutions. When a host is added into a host group, VMM 2012 installs an agent on the target host, which then becomes a managed resource of the fabric.

    A Library Server is a repository where the resources for deploying services and VMs are made available via network shares. As a Library Server is added into fabric, by specifying the network shares defined on the Library Server, file-based resources like VM templates, VHDs, ISO images, service templates, scripts, Server App-V packages, etc. become available to be used as building blocks for composing VM and service templates. As various types of servers are brought into the Server pool, the coverage expands and the capabilities increase, as if additional fibers are woven into the fabric.

    Networking presents the wiring among resource repositories, running instances, deployed clouds and VMs, and the intelligence for managing and maintaining the fabric. It essentially forms the nervous system that filters noise, isolates traffic, and establishes interconnectivity among VMs based on how Logical Networks and Network Sites are put in place.

    Storage reveals the underlying storage complexities and how storage is virtualized. In VMM 2012, a cloud administrator can discover, classify, and provision remote storage on supported storage arrays through the VMM 2012 console. VMM 2012 fully automates the assignment of storage to a Hyper-V host or Hyper-V host cluster, and tracks the storage that it manages.

    Deploying Private Cloud

    A leading feature of VMM 2012 is the ability to deploy a private cloud, or more specifically to deploy a service to a private cloud. The focus of this article is the operational aspects of that deployment, with the assumption that the intended application has been well tested, signed off, and sealed for deployment, and that the application resources, including code, service template, scripts, Server App-V packages, etc., are packaged and provided to a cloud administrator. In essence, this package has all the intelligence, settings, and content needed to be deployed as a service. The self-contained package can then be easily deployed on demand by validating instance-dependent global variables and repeating the deployment tasks on a target cloud. The following illustrates the concept, where a service is deployed in update releases and various editions with specific feature compositions, all running concurrently in VMM 2012 fabric. Not only is this relatively easy to do by streamlining and automating all deployment tasks with a service template; the service template can also be configured and deployed to different private clouds.


    The secret sauce is the service template, which includes all the where, what, how, and when of deploying all the resources of an intended application as a service. The skill sets and amount of effort needed to develop a solid service template are not trivial, because a service template must include not only intimate knowledge of the application, but also best practices for Windows deployment, system and network administration, Server App-V, and system management of Windows servers and workloads. The following is a sample service template of StockTrader imported into VMM 2012 and viewed with Designer, where StockTrader is a sample application for cloud deployment downloaded from Windows Connect.


    Here are the logical steps I follow to deploy StockTrader with VMM 2012 admin console:

    • Step 1: Acquire the Stock Trader application package from Windows Connect.
    • Step 2: Extract and place the package in a designated network share of a target Library Server of VMM 2012 and refresh the Library share. By default, the refresh cycle of a Library Server is every 60 minutes. To make the newly added resources immediately available, refreshing an intended Library share will validate and re-index the resources in added network shares.
    • Step 3: Import the service templates of Stock Trader and follow the step-by-step guide to remap the application resources.
    • Step 4: Identify/create a target cloud with VMM 2012 admin console.
    • Step 5: Open Designer to validate the VM templates included in the service template. Make sure SQLAdminRAA is correctly defined as RunAs account.
    • Step 6: Configure deployment of the service template and validate global variables in specialization page.
    • Step 7: Deploy Stock Trader to a target cloud and monitor the progress in Job panel.
    • Step 8: Troubleshoot the deployment process as needed, restart the deployment job, and repeat this step until the deployment succeeds.
    • Step 9: Upon successful deployment of the service, test the service and verify the results.

    A successful deployment of Stock Trader with minimal instances in my all-in-one laptop demo environment (running on a Lenovo W510 with sufficient RAM) took about 75 to 90 minutes, as reported in the Job Summary shown below.


    Once the service template is successfully deployed, Stock Trader becomes a service in the target private cloud supported by VMM 2012 fabric. The following two screen captures show a Pro Release of Stock Trader deployed to a private cloud in VMM 2012 and the user experience of accessing a trader’s home page.



    Not If, But When

    Witnessing the way the IT industry has been progressing, I envision that private cloud will soon become, just like virtualization, a core IT competency and no longer a specialty. While private cloud is still a topic being actively debated and shaped, the upcoming release of VMM 2012 arrives just in time with a methodical approach for constructing a private cloud based on service-based deployment with fabric. It is a high-speed train and the next logical step for enterprises to accelerate private cloud adoption.

    Closing Thoughts

    I here forecast the future is mostly cloudy with scattered showers. In the long run, I see a clear cloudy day coming.

    My encouragement to everyone: be ambitious and opportunistic. When it comes to Microsoft private cloud, the essentials are Windows Server 2008 R2 SP1 with Hyper-V and VMM 2012. Those who master these skills first will stand out, become the next private cloud subject matter experts, and lead the IT pro communities. Recognizing that private cloud adoption is not a technology issue, but a culture shift and an opportunity for career progression, IT pros must make the first move.

    In an upcoming series of articles tentatively titled “Deploying StockTrader as Service to Private Cloud with VMM 2012,” I will walk through the operations of the above steps and detail the process of deploying a service template to a private cloud.

    New Web Part released – List Search Web Part now available!!

    The List Search Web Part reads the entries from a Sharepoint List or Library (located anywhere in the site collection) and displays the selected user fields in a grid with an optional interactive search filter.

    It can be used for WSS3.0, MOSS 2007, Sharepoint 2010 and Sharepoint 2013.


    The following parameters can be configured:

    • Sharepoint Site
    • List Columns to be displayed
    • Filtering, Grouping, Searching, Paging and Sorting of rows
    • AZ Index
    • optional Header text

    Installation Instructions:

    1. download the List Search Web Part Installation Instructions
    2. either install the web part manually or deploy the feature to your server/farm as described in the instructions. 
    3. Security Note:
      if you get the following error message: “Only an administrator may enumerate through all user profiles”, you will need to grant the application pool account(s) for the web application(s) “Manage User Profiles” permissions within the User Profile Service (SSP in case of MOSS 2007).  
      This ensures that the application pool is able to retrieve the list of user profiles. 
      To assign this permission, access your active “User Profile Service” (SP 2010 Server) or the “Shared Services Provider” (MOSS 2007) via Central Admin. 
      From the “User Profiles and My Sites” group, click “Personalization services permissions”.  
      Add the “Manage User Profiles” permission to your application pool account(s).
    4. Configure the following Web Part properties in the Web Part Editor “Miscellaneous” pane section as needed:
      • Site Name: Enter the name of the site that contains the List or Library:
        – leave this field empty if the List is in the current site (i.e. the Web Part is placed in the same site)
        – enter a “/” character if the List is contained in the top site
        – enter a path if the List is in a subsite of the current site (e.g. in the form of “current site/subsite”)
      • List Name: Enter the name of the desired Sharepoint List or Library
        Example: Project Documents
      • View Name: Optionally enter the desired List View of the list specified above. A List View allows you to define data filtering and sorting. 
        Leave this field empty if you want to use the List default view.
      • Field Template: Enter the List columns to be displayed (separated by semicolons).
        Pictures can be attached (via File Upload) to the Sharepoint List items and displayed using the symbolic “Picture” column name.
        If you want to allow users to edit their own entries, please add the symbolic “Username” column name to the Field Template. An “Edit” symbol will then be displayed to allow the user to navigate to the corresponding Edit Form. Example:
        Type;Name;Title;Modified;Modified By;Created By

        Friendly Header Names:
        If you would like to display a “friendly header name” instead of the default column name, please append it to the column name, separated by the “|” pipe symbol.

        Picture;LastName|Last Name;FirstName;Department;Email|Email Address

        Hiding individual columns:
        You can hide a column by prefixing it with a “!” character. 
        The following example hides the “Department” column: 

        Picture;LastName|Last Name;FirstName;!Department;Email|Email Address

        Suppress Column wrapping:
        You can suppress the wrapping of text inside a column by prefixing it with a “^” character.

        Showing the E-Mail address as plain text:
        You can opt to display the plain e-mail address (instead of the envelope icon) by appending “/plain” to the WorkEmail column:
        Example: WorkEmail/plain

      • Group By: enter an optional List column to group the rows.
      • Sort By: enter the List column(s) to define the default sort order. You can add multiple properties separated by commas. Append “/desc” to sort the column descending.

        The columns headings can be clicked by the users to manually define the sort order.
      • AZ Index Column: enter an optional List column to display the AZ filter in the list header. 
        If an “!” character is appended to the property name, the “A” index will be forced when visiting the page.
        Example: LastName! 

      • Search Box: enter one or more List columns (separated by semicolons) to allow for interactive searching.Example: LastName;FirstName

        If you want to display a search filter as a dropdown combo, please enter it with a leading “@” character:
        Example: @Department

        Friendly Search Box Labels:
        If you would like to display a “friendly label” instead of the default column name, please append it to the column name, separated by the “|” pipe symbol:
        WorkPhone|Office Phone;Office|Office Nbr


      • Align Search Filters vertically: allows you to align the search input boxes vertically to save horizontal space:
      • Rows per page: the List Search Web Part supports paging and lets you specify the desired number of rows per page. 
      • Image Height: specify the image height in pixels if you include the “Picture” property. 
        Enter “0” if you want to use the default picture size.
      • Header Text: enter an optional header text. Please note that you can embed HTML tags if needed. You can additionally specify the text to be displayed if the “Show all entries” option is unchecked and the user has not yet performed a search, by appending a “|” character followed by the text.
        This is the regular header text|This text is only shown if the user has not yet performed a search
      • Detail View Page: enter an optional column name prefixed by “detailview=” to link a column to the item detail view page. Append the “/popup” option if you want to open the detail page in a Sharepoint 2010/2013 dialog popup window.
      • Alternating Row Color: enter the optional color of the alternating row background (leave blank to use default).
        Enter either the HTML color names (as eg. “red” etc.) or use hexadecimal RRGGBB coding (as eg. “#CCFFCC”). Enter the values without the double quotes.
        You can also change the default background color of the non-alternating rows by appending a second color value separated by a semicolon.
        Example: #ffffcc;#ffff99 

        The default Header style can be changed by adding the “AESD_Headerstyle” appSettings variable to the web.config “appSettings” section:

        <add key="AESD_Headerstyle" value="background:green;font-size:10pt;color:white" />


      • Show Column Headers: either show or suppress the List column header row.
      • Header Row CSS Style: enter the optional header row CSS style(s) as needed.
      • Show Groups collapsed: either show the groups (if you specify a column in the “Group By” setting) collapsed or expanded when entering the page.
      • Enforce Security: hides the web part if user has no access to the site or the list. This avoids a login prompt if the user has not at least “View” permission on the list or site containing the list.
      • Show all entries: either show all directory entries or none when first visiting the page. 
        You can append a specific text to the “Header Text” field (see above) which is only displayed if this option is unchecked and no search has yet been performed by the user.
      • Open Links in new window: either open the links in a new window or in the same browser window.
      • Link Documents to Office365: open the Word, Excel and Powerpoint documents in the Office365 web viewer.
      • Show ‘Add New Item’ Button: either show or suppress the “Add new item button” to let users add new items to the list (this option is security-trimmed).
      • Export to CSV: Show/hide the “Export” button for Excel CSV File Export
      • CSV Separator: Enter the desired CSV field separator character (Default=Comma). Use a semicolon in countries which use the commas as a decimal separator.
      • Localization: enter the following 4 values (separated by semicolons) in your local language if you need to override the English strings corresponding to:
        – the Search button text, 
        – the A..Z menu “View all” option, 
        – the text displayed for Hyperlink columns, 
        – the optional “Group By” name (if grouping is enabled).
        Default:
        Search;View all;Visit

      • License Key: enter your Product License Key (as supplied after purchase of the “List Search Web Part” license).
        Leave this field empty if you are using the free 30 day evaluation version.

     Contact me now for the List Search Web Part and other free & paid Web Parts and Apps for SharePoint 2010, 2013, Azure, Office 365, and SharePoint Online.

    Virtualization vs. Private Cloud – What exactly is the difference? Part 1

    Virtualization vs. private cloud has confused many IT pros. Are they the same? Or different? In what way, and how? We have already virtualized most of our computing resources, so is a private cloud still relevant to us? These are questions I am frequently asked. Before getting to the answers, in the first article of the two-part series listed below I want to set a baseline.

    • Part 1: Cloud Computing Goes Far Beyond Virtualization (This article)
    • Part 2: A Private Cloud Delivers IT as a Service

    Lately, many IT shops have introduced virtualization into their existing computing environments. Consolidating servers, mimicking production environments, virtualizing test networks, securing resources with honey pots, and adding disaster recovery options are just a few applications of virtualization. Some shops also run highly virtualized IT with automation provided by system management solutions. I imagine many IT pros recognize the benefits of virtualization, including better utilization of servers and the associated savings from reducing the physical footprint. Now that we are moving into a cloud era, the question becomes “Is virtualization the same as a private cloud?” or “We are already running highly virtualized computing today; do we still need a private cloud?” The answers to these questions should always start with “What business problem are you trying to address?” Then assess whether a private cloud solution can fundamentally solve the problem, or whether virtualization is sufficient. This of course assumes a clear understanding of what virtualization is and what a private cloud is. The point is that virtualization and cloud computing are not the same. They address IT challenges in different dimensions and operate in different scopes with different levels of impact on a business.


    To make a long story short, virtualization in the context of IT is to “isolate” computing resources such that an object (i.e. an application, a task, a component) in a layer above can operate without concern for changes made in the layers below. A lengthy discussion of virtualization is beyond the scope of this article. Nonetheless, let me point out that the terms “virtualization” and “isolation” are chosen for specific reasons, since there are technical discrepancies between “virtualization” and “emulation,” and between “isolation” and “redirection.” Virtualization isolates computing resources, and hence offers an opportunity to relocate and consolidate isolated resources for better utilization and higher efficiency. Virtualization is rooted in infrastructure management, operations, and deployment flexibility. It is about consolidating servers, moving workloads, streaming desktops, and so on, which without virtualization are not technically feasible or may simply be cost-prohibitive.

    Cloud Computing

    Cloud computing, on the other hand, is a state, a concept, a set of capabilities. There are statements on what to expect in general from cloud computing. A definition of cloud computing published in NIST SP 800-145 outlines the essential characteristics, delivery models, and deployment models required to be cloud-qualified. Chou further simplifies this and offers a plain and simple way to describe cloud computing with the 5-3-2 Principle, as illustrated below.



    Unequivocally Different

    Realizing the fundamental differences between virtualization and a private cloud is therefore rather straightforward. In essence, virtualization is not based on the 5-3-2 Principle, whereas cloud computing is. For instance, a self-service model is not an essential component of virtualization, while it is essential in cloud computing. One can certainly argue that some virtualization solutions may include a self-service component; the point is that self-service is neither a necessary nor a sufficient condition for virtualization. In cloud computing, by contrast, self-service is a crucial concept for delivering anytime availability to users, which is what a service is all about. Furthermore, self-service is an effective mechanism to reduce training and support at all levels. It is a crucial vehicle for accelerating the ROI of a cloud computing solution and making it sustainable in the long run.

    So what, specifically, distinguishes a highly virtualized computing environment from a private cloud?

    How To : Implement a WCF 4 Routing Manager




    • Manageable Routing Service
    • Mapping physical to logical endpoints
    • Managing routing messages from Repository
    • No Service interruptions
    • Adding more outbound endpoints on the fly
    • Changing routing rules on the fly
    • .Net 4 Technologies


    The recently released Microsoft .NET 4 technology represents a foundation for metadata-driven model applications. This article focuses on one of the unique components of the WCF 4 model for logical connectivity, the Routing Service. I will demonstrate how we can extend its usage for enterprise applications driven by metadata stored in the Runtime Repository. For more details about the concept, strategy, and implementation of metadata-driven applications, please see my previous articles on the Contract Model and Manageable Services.

    I will highlight the main features of the Routing Service; more details can be found in the following links [1], [2], [3].

    From an architectural standpoint, the Router represents a logical connection between inbound and outbound endpoints. This is a “short wire” virtual connection in the same appDomain space, described by metadata stored in the routing table. Based on these rules, messages are exchanged via the router using specific built-in router patterns. For instance, the WCF Routing Service allows the following Message Exchange Patterns (MEPs) with contract options:

    • OneWay (SessionMode.Required, SessionMode.Allowed, TransactionFlowOption.Allowed)
    • Duplex (OneWay, SessionMode.Required, CallbackContract, TransactionFlowOption.Allowed)
    • RequestResponse (SessionMode.Allowed, TransactionFlowOption.Allowed)

    In addition to the standard MEPs, the Routing Service has a built-in pattern for Multicast messaging and Error handling.



    The routing process is very straightforward: an untyped message received by an inbound endpoint is forwarded to one or more outbound endpoints based on prioritized rules represented by message filter types. The rules are evaluated starting with the highest priority. The router's complexity depends on the number of inbound and outbound endpoints, the types of message filters, the MEPs, and the priorities.

    Note that the MEP between the routed inbound and outbound endpoints must be the same; for example, a OneWay message can be routed only to a OneWay endpoint. Therefore, to route a Request/Response message to a OneWay endpoint, a service mediator must be invoked to change the MEP before the message is passed back to the router.

    A routing service enables an application to be physically decoupled into business-oriented services that are then logically connected through a centralized repository using declarative programming. From an architectural point of view, we can consider a routing service a central integration point (hub) for private and public communication.

    The following picture shows this architecture:



    As we can see in the above picture, the centralized place of connectivity is represented by the Routing Table. The Routing Table is the key component of the Routing Service, where all connectivity is declaratively described. This metadata is projected at runtime for message flow between the inbound and outbound points. Messages are routed between the endpoints in a fully transparent manner. The above picture also shows service integration with MSMQ via routing; queues can be plugged into the Routing Table like any other service.

    From the metadata-driven model point of view, the Routing Table metadata is part of the logical business model, centralized in the Repository. The following picture shows an abstraction of the Routing Service for the Composite Application:




    The virtualization of connectivity allows a composite application to be encapsulated from the physical connectivity. This is a great advantage for model-driven architecture, where all internal connectivity can use well-known contracts. Note that the private channels (see the above picture, logical connectivity) are between well-known endpoints such as the Routing Service and the Composite Application. 

    Decoupling sources (consumers) from the target endpoints, with their logical connections driven by metadata, enables the application integration process to offer additional features such as:

    • Mapping physical to logical endpoints
    • Virtualization connectivity
    • Centralized entry point
    • Error handling with alternative endpoints
    • Service versioning
    • Message versioning
    • Service aggregation
    • Business encapsulation
    • Message filtering based on priority routing
    • Metadata driven model
    • Protocol bridging
    • Transacted message routing

    As I mentioned earlier, in the model-driven architecture the Routing Table is part of the metadata stored in the Repository as a Logically Centralized Model. The model is created at design time and then physically decentralized to the target. Note that deploying a model for its runtime projection requires recycling the host process. To minimize this interruption, the Routing Service can isolate this glitch by managing the Routing Table dynamically.

    The WCF Routing Service has a built-in feature for updating the Routing Table at runtime. By default, the routing service bootstraps by loading the table from the config file (routing section). We need to plug in a custom service for updating the routing table at runtime. That is the job of the Routing Manager component; see the following picture:



    The Routing Manager is a custom WCF service responsible for refreshing the Table located in the Routing Service from the config file or the Repository. The first inbound message (after the host process is opened) boots the Table from the Repository automatically. At runtime, while the routing service is active, the Routing Manager must receive an inquiry message to load routing metadata from the Repository and then update the Table.

    Basically, the Routing Manager enables our virtualized composite application to manage this integration process without recycling the host process. There is one conceptual architectural change in this model. As we know, the Repository pushes (deploys) metadata for its runtime projection to the host environment (for instance, IIS or WAS). This runtime metadata is stored in a local, private repository such as the file system and is used for booting our applications and services.

    That is the standard Push phase: Model -> Repository -> Deploy. The Routing Service is a good candidate for introducing the concept of a Pull phase, Runtime -> Repository, where the model created at design time can be changed transparently while it is running. Therefore, we can also decide at design time which runtime metadata the Pull model will use.

    Isolating the metadata used for the boot projection from the metadata used for runtime updates enables our Repository to administer the application without interruptions; for instance, we can change a physical endpoint or binding, or plug in a new service version or a new contract. Of course, we can build a more sophisticated tuning system where runtime metadata is created and/or modified by an analyzer; in that case, we have a full control loop between the Repository and the Runtime.

    Finally, the following picture is an example of the manageable routing service:



    As the above picture shows, the manageable Routing Service represents a central virtualization of the connectivity between workflow services, services, queues, and web services. The Runtime Repository holds the routing metadata for the boot process as well as for runtime changes. The routing behavior can be changed on the fly in the common shareable database (Repository) and then synchronized with the runtime model by the Routing Manager.

    One more “hidden” feature of the Routing Service can be seen in the above picture: scalability. By decomposing the application into small business-oriented services and composing them via a routing service, we can control the scalability of the application. We can assign a localhost or cluster address to the logical endpoints based on the routing rules.

    From the virtualization point of the view, the following picture shows a manageable composite application:


    As you can see, the Composite Application is driven by logical endpoints and can therefore be declared within the logical model without physical knowledge of where they are located. Only the Runtime Repository knows these locations, which can easily be managed based on requirements, for example: dev, staging, QA, production, etc.

    This article focuses on the Routing Manager hosted on IIS/WAS. I am assuming you have some working experience with, or understand the features of, the WCF4 Routing Service.

    OK, let’s get started with Concept and Design of the Manageable Routing Service.


    Concept and Design

    The concept and design of the Routing Manager hosted on IIS/WAS is based on extending the WCF4 Routing Service with additional features such as downloading metadata from the Repository in a loosely coupled manner. The plumbing is implemented as a common extension to both services (RoutingService and RoutingManager), named routingManager. By adding the routingManager behavior alongside the routing behavior (see the following code snippet), we can boot a routing service from the repositoryEndpointName.

    The routingManager behavior has the same attributes as the routing behavior, with an additional attribute for declaring the repository endpoint. As you can see, the startup configuration of the routing service is very straightforward. Note that the routingManager behavior is paired with the routing behavior.
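    As a minimal sketch of the paired behaviors described above (the routingManager element is this article's custom extension, so its exact attribute names and the endpoint name here are assumptions; the routing element follows the standard WCF 4 schema):

    ```xml
    <serviceBehaviors>
      <behavior>
        <!-- standard WCF 4 routing behavior -->
        <routing filterTableName="RoutingTable" routeOnHeadersOnly="true" />
        <!-- custom extension: same attributes plus the repository endpoint -->
        <routingManager filterTableName="RoutingTable" routeOnHeadersOnly="true"
                        repositoryEndpointName="RepositoryEndpoint" />
      </behavior>
    </serviceBehaviors>
    ```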

    Now, to update the routing behavior at runtime, we need to plug in a Routing Manager service and its service behavior routingManager. The following code snippet shows an example of activation without an .svc file within the same application:
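    A minimal sketch of such file-less activation, using the standard WCF 4 serviceActivations section (the RoutingManager type and factory names are assumptions; the relative addresses match the ~/Pilot.svc and ~/PilotManager.svc addresses used later in this article):

    ```xml
    <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true">
      <serviceActivations>
        <!-- the WCF 4 Routing Service, activated without an .svc file -->
        <add relativeAddress="Pilot.svc"
             service="System.ServiceModel.Routing.RoutingService, System.ServiceModel.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
        <!-- the custom Routing Manager service hosted by its own factory (names assumed) -->
        <add relativeAddress="PilotManager.svc"
             service="RoutingManager.RoutingManager, RoutingManager"
             factory="RoutingManager.RoutingManagerHostFactory, RoutingManager" />
      </serviceActivations>
    </serviceHostingEnvironment>
    ```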

    and its service behavior:

    Note that the repositoryEndpointName can address a different Repository than the one used at process startup.


    Routing Manager Contract

    The Routing Manager Contract allows communication with the Routing Manager service from outside the appDomain. The contract operations are designed for broadcast and point-to-point patterns. An operation can target a specific machine/application or an unknown target (*), based on the machine and application names. The following code snippet shows the IRoutingManager contract:

    [ServiceContract(Namespace = "urn:rkiss/2010/09/ms/core/routing", 
      SessionMode = SessionMode.Allowed)]
    public interface IRoutingManager
    {
      [OperationContract(IsOneWay = true)]
      void Refresh(string machineName, string applicationName);

      [OperationContract]
      void Set(string machineName, string applicationName, RoutingMetadata metadata);

      [OperationContract]
      void Reset(string machineName, string applicationName);

      [OperationContract]
      string GetStatus(string machineName, string applicationName);
    }

    The Refresh operation represents an inquiry event for refreshing a routing table from the Repository. On this event, the Routing Manager picks up fresh routing metadata from the Repository (addressed by repositoryEndpointName) and updates the routing table.
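    As a hedged usage sketch (the endpoint address is an assumption, and the wildcard semantics follow the contract description above), a client could trigger a refresh over the IRoutingManager contract like this:

    ```csharp
    // Hypothetical sketch: ask the Routing Manager to re-pull routing metadata.
    var factory = new ChannelFactory<IRoutingManager>(
        new BasicHttpBinding(),
        new EndpointAddress("http://localhost/Router/PilotManager.svc"));

    IRoutingManager manager = factory.CreateChannel();
    manager.Refresh("*", "*");   // "*" targets an unknown (any) machine/application
    ((IClientChannel)manager).Close();
    factory.Close();
    ```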

    Routing Metadata Contract

    This is a data contract between the Routing Manager and the Repository. I decided to carry the routing configuration section as XML-formatted text. This choice gives me integration and implementation simplicity and an easy future migration path. The following code snippet is an example of the routing section:
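    A representative sketch of such a routing section, using the standard WCF 4 routing configuration schema (the filter names, action URI, and endpoint names here are assumptions); note how the priority attribute drives the highest-priority-first evaluation described earlier:

    ```xml
    <routing>
      <filters>
        <!-- match by SOAP Action header; evaluated first (higher priority) -->
        <filter name="notifyFilter" filterType="Action"
                filterData="http://tempuri.org/IService/Notify" />
        <!-- fallback filter that matches every message -->
        <filter name="matchAll" filterType="MatchAll" />
      </filters>
      <filterTables>
        <filterTable name="RoutingTable">
          <add filterName="notifyFilter" endpointName="Service1" priority="1" />
          <add filterName="matchAll" endpointName="Service2" priority="0" />
        </filterTable>
      </filterTables>
    </routing>
    ```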

    and the following is Data Contract for repository:

    [DataContract(Namespace = "urn:rkiss/2010/09/ms/core/routing")]
    public class RoutingMetadata
    {
      [DataMember]
      public string Config { get; set; }

      [DataMember]
      public bool RouteOnHeadersOnly { get; set; }

      [DataMember]
      public bool SoapProcessingEnabled { get; set; }

      [DataMember]
      public string TableName { get; set; }

      [DataMember]
      public string Version { get; set; }
    }

    OK, that should be all from the concept and design point of view, except for one thing that was necessary to figure out, especially for hosting services on IIS/WAS. As we know [1], there is a RoutingExtension class in the System.ServiceModel.Routing namespace with a workhorse method, ApplyConfiguration, for updating the routing service's internal tables. 

    I used the following small trick to access this RoutingExtension from another service, such as the RoutingManager hosted by its own factory.

    The first routing message stores a reference to the RoutingExtension in an AppDomain data slot under a well-known name, the value of CurrentVirtualPath.

    serviceHostBase.Opened += delegate(object sender, EventArgs e)
    {
        ServiceHostBase host = sender as ServiceHostBase;
        RoutingExtension re = host.Extensions.Find<RoutingExtension>();
        if (configuration != null && re != null)
        {
            lock (AppDomain.CurrentDomain.FriendlyName)
            {
                AppDomain.CurrentDomain.SetData(this.RouterKey, re);
            }
        }
    };

    Note that both services (RoutingService and RoutingManager) are hosted under the same virtual path; therefore, the RoutingManager can read this data slot value and cast it to RoutingExtension. The following code snippet shows this fragment:

    private RoutingExtension GetRouter(RoutingManagerBehavior manager)
    {
      // ...
      lock (AppDomain.CurrentDomain.FriendlyName)
      {
        return AppDomain.CurrentDomain.GetData(manager.RouterKey) as RoutingExtension;
      }
    }

    OK, now it is time to show the usage of the Routing Manager.


    Usage and Test

    The features and usage of the manageable Router (RoutingService + RoutingManager) hosted on IIS/WAS can be demonstrated with the following test solution. The solution consists of the Router and simulators for the Client, Services, and LocalRepository. 

    The primary focus is on the Router, as shown in the above picture. As you can see, there are no .svc files, just a configuration file. That's right: all tasks to set up and configure the manageable Router are based on metadata stored in the web.config and the Repository (see the discussion later).

    The Router project was created as an empty web project under the http://localhost/Router/ virtual path, adding an assembly reference to the RoutingManager project and declaring the following sections in the web.config.

    Let’s describe these sections in more details.

    Part 1. – Activations

    These sections are mandatory for any Router: activation of the RoutingService and RoutingManager services, the behavior extension for routingManager, and the client endpoint for Repository connectivity. The following picture shows these sections:

    Note that this part also declares the relative addresses for our Router. In the above example, the entry point for routing messages is ~/Pilot.svc, and the RoutingManager is reachable at ~/PilotManager.svc.

    Part 2. – Service Endpoints

    In these sections, we have to declare endpoints for the two services, RoutingService and RoutingManager. The RoutingService endpoints use the untyped contracts defined in the System.ServiceModel.Routing namespace. In this test solution we use two contracts, one for notification (OneWay) and one for request/reply message exchange. The binding used is basicHttpBinding, but it can be any standard or custom binding, based on the requirements.
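    A hedged sketch of such endpoint declarations, using the untyped router contracts that ship in System.ServiceModel.Routing (the endpoint names and relative addresses are assumptions):

    ```xml
    <services>
      <service name="System.ServiceModel.Routing.RoutingService">
        <!-- OneWay (notification) entry point -->
        <endpoint name="NotifyRouter" address="" binding="basicHttpBinding"
                  contract="System.ServiceModel.Routing.ISimplexDatagramRouter" />
        <!-- Request/Reply entry point -->
        <endpoint name="ReplyRouter" address="reply" binding="basicHttpBinding"
                  contract="System.ServiceModel.Routing.IRequestReplyRouter" />
      </service>
    </services>
    ```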

    The RoutingManager service is configured with a simple basicHttpBinding endpoint, but in a production version it should use a custom UDP channel for broadcasting the message that triggers the metadata pull from the Repository across the cluster.
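    As a sketch, the endpoint declarations could look like the following. The untyped router contracts are the standard ones from System.ServiceModel.Routing; the RoutingManager service and contract names are assumptions.

    ```xml
    <services>
      <service name="System.ServiceModel.Routing.RoutingService">
        <!-- OneWay (notification) messages -->
        <endpoint name="endpointNotify" address="Notify" binding="basicHttpBinding"
                  contract="System.ServiceModel.Routing.ISimplexDatagramRouter" />
        <!-- RequestReply message exchange -->
        <endpoint address="" binding="basicHttpBinding"
                  contract="System.ServiceModel.Routing.IRequestReplyRouter" />
      </service>
      <!-- management service (type and contract names assumed) -->
      <service name="RoutingManager.RoutingManagerService">
        <endpoint address="" binding="basicHttpBinding"
                  contract="RoutingManager.IRoutingManager" />
      </service>
    </services>
    ```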


    Part 3. – Plumbing RoutingService and Manager together

    This section attaches the RoutingService to the RoutingManager so that its RoutingExtension can be accessed at runtime.

    The first extBehavior section configures the routing boot process; the second one configures the download of routing metadata at runtime.
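    A minimal sketch of this plumbing, assuming the behavior extension element is named routingManager and implemented in the RoutingManager assembly (the exact type and attribute names are assumptions):

    ```xml
    <extensions>
      <behaviorExtensions>
        <!-- registers the custom routingManager behavior element (type name assumed) -->
        <add name="routingManager"
             type="RoutingManager.RoutingManagerBehaviorExtension, RoutingManager" />
      </behaviorExtensions>
    </extensions>
    <behaviors>
      <serviceBehaviors>
        <behavior>
          <!-- boot process: the standard WCF4 routing behavior with an initial filter table -->
          <routing filterTableName="RoutingTable" routeOnHeadersOnly="false" />
          <!-- runtime: pulls fresh routing metadata from the Repository -->
          <routingManager routerKey="Pilot" filterTableName="RoutingTable" />
        </behavior>
      </serviceBehaviors>
    </behaviors>
    ```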

     OK, that’s all for creating a manageable Router.

    The following picture shows the full solution:



    As you can see, there is the Manageable Router (hosted in IIS/WAS) integrating the composite application. On the left-hand side is the Client simulator, which generates two operations (Notify and Echo) against Service1 and/or Service2 based on the routing rules. The Client and Services are regular self-hosted WCF applications; the client can talk to the Services directly or via the Router.

    Local Repository

    The Local Repository represents storage for metadata such as the logically centralized application model, deployment model, runtime model, etc. Models are created at design time and, based on the deployment model, are pushed (physically decentralized) to the targets, where they are projected onto the runtime. For example, the RoutingManager assembly and web.config are the metadata for deploying the model from the Repository to the IIS/WAS target.

    At runtime, some components have the capability to update their behavior based on new metadata pulled from the Repository; the manageable Router (RoutingService + RoutingManager) is one such component. Building an Enterprise Repository and its tools is not a simple task – see more details about the Microsoft strategy here [6] and interesting examples here [7], [8].

    In this article, I included a very simple Local Repository for routing metadata, with a self-hosted service, to demonstrate the capability of the RoutingManager service. The routing metadata is described by the system.serviceModel section. The following picture shows the configuration root section for remote routing metadata:


    As you can see, the content of the system.serviceModel section is similar to the target web.config. Note that the Repository holds only these sections, as they are the ones related to the routing metadata. By selecting the RoutingTable tab, the routing rules are displayed in table form:

    Any changes in the Routing Table are saved to the local routing metadata by pressing the Finish button, but the runtime Router must be notified of the change by pressing the Refresh button. In this scenario, the LocalRepository sends an inquiry message to the RoutingManager to pull the new routing metadata. You can see this action in the Status tab.

    Routing Rules

    The above picture shows the Routing Table as a representation of the routing rules mapped to the routing section in the configuration metadata. This test solution ships with a pre-built ruleset of four rules. Let's describe these rules. Note that we are using an XPath filter on the message body, therefore the RouteOnHeadersOnly option must be unchecked; otherwise the Router will throw an exception.

    The Router is deployed with two inbound endpoints, SimplexDatagram and RequestReply, so a received message is routed based on the following rules:

    Request/Reply rules

    First, at the highest priority (level 3), the message is buffered and evaluated against the XPath body expression

    starts-with(/s11:Envelope/s11:Body/rk:Echo/rk:topic, 2)

    If the expression is true, the copied message is routed to the outbound endpoint TE_Service1; otherwise the copied message is forwarded to TE_Service2 (see the filter aa).
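    Expressed in the standard WCF4 routing configuration, such a rule could look roughly like this. The namespace prefixes mirror the expression above, but the rk namespace URI, filter names and exact priorities are assumptions for illustration:

    ```xml
    <routing>
      <namespaceTable>
        <add prefix="s11" namespace="http://schemas.xmlsoap.org/soap/envelope/" />
        <!-- the rk namespace URI is an assumption -->
        <add prefix="rk" namespace="urn:demo:routing" />
      </namespaceTable>
      <filters>
        <filter name="topicFilter" filterType="XPath"
                filterData="starts-with(/s11:Envelope/s11:Body/rk:Echo/rk:topic, 2)" />
        <!-- fallback filter 'aa' matches everything else -->
        <filter name="aa" filterType="MatchAll" />
      </filters>
      <filterTables>
        <filterTable name="RoutingTable">
          <!-- the higher-priority entry is evaluated first -->
          <add filterName="topicFilter" endpointName="TE_Service1" priority="3" />
          <add filterName="aa" endpointName="TE_Service2" priority="1" />
        </filterTable>
      </filterTables>
    </routing>
    ```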

    SimplexDatagram rules

    This is a multicast routing (same priority 2) to the two outbound endpoints TE_Service1 and TE_Service2. If the message cannot be delivered to TE_Service2, the alternative endpoint from the backup list is used, in this case Test1 (a queue). Note that the message is not buffered; it is passed directly (filterType = EndpointName) from the endpointNotify endpoint.
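    A sketch of the multicast rule with a backup list follows. The endpoint names (endpointNotify, TE_Service1, TE_Service2, Test1) are from the article; the filter and backup-list names are assumptions:

    ```xml
    <routing>
      <filters>
        <!-- matches messages arriving on the endpointNotify inbound endpoint -->
        <filter name="notifyFilter" filterType="EndpointName" filterData="endpointNotify" />
      </filters>
      <filterTables>
        <filterTable name="RoutingTable">
          <!-- equal priority: the message is multicast to both endpoints -->
          <add filterName="notifyFilter" endpointName="TE_Service1" priority="2" />
          <add filterName="notifyFilter" endpointName="TE_Service2" priority="2"
               backupList="notifyBackup" />
        </filterTable>
      </filterTables>
      <backupLists>
        <!-- used when TE_Service2 is unreachable -->
        <backupList name="notifyBackup">
          <add endpointName="Test1" />
        </backupList>
      </backupLists>
    </routing>
    ```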



    Testing the Router is a very simple process. Launch the Client, Service1, Service2 and Local Repository programs, create a virtual directory in IIS/WAS for the http://localhost/Router project, and the solution is ready for testing. Follow these steps:

    1. On the Client form, press the Echo button. You should see the message routed to Service2.
    2. Press the Echo button again; the message is routed to Service1.
    3. Press the Echo button again; the message is routed to Service2.
    4. Change the Router combo box to: http://localhost/Router/Pilot.svc/Notify
    5. Press the Event button and see the notification messages in both services.

    You can play with the Local Repository, changing the routing rules to see how the Router handles message delivery.

    To test the backup rule, create a transactional queue Test1, close the Service2 console program and repeat steps 2 and 3. You should see messages in Service1 and in the queue.



    There are two kinds of troubleshooting in the manageable Router. The first is the built-in standard WCF tracing log, viewed with the Microsoft Service Trace Viewer program. The Router web.config file already specifies this diagnostics section, but it is commented out.

    The second way to troubleshoot the Router, with a focus on message flow, is the built-in custom message trace inspector. This inspector is injected automatically by the RoutingManager and its service behavior. We can use the DebugView utility from Windows Sysinternals to view the trace output from the Router.


    Some Router Tips

    1. Centralizing physical outbound connections into one Router enables the usage of logical connections (alias addresses) for multiple applications. The following picture shows this schema:

    Instead of using physical outbound endpoints in each application router, we can create one master Router that virtualizes all public outbound endpoints. In this scenario, we need to manage each physical outbound connection only in the master Router. The same strategy can be used for physical inbound endpoints. Another advantage of this router hierarchy is centralized pre-processing and post-processing services.

    2. As I mentioned earlier, the Router enables decomposition of a business workflow into small business-oriented services, for instance managers, workers, etc. The composition of the business workflow is declared by the Routing Table, which acts as a dispatcher of messages to the correct service. We should use a WS binding with custom headers to simplify dispatching messages via the router based on headers only.



    Based on the concept and design, there are two pieces to implement: the RoutingManager service and its extension behavior. Both modules use the same custom config library, which contains useful static methods for getting CLR types from metadata stored in an xml-formatted resource. I used this library in my previous articles (.Net 3) and extended it for the new routing section.

    The following code snippets show how straightforward the implementation is.

    The following code snippet demonstrates the Refresh implementation for the RoutingManager service. We need to get some configurable properties from the routingManager behavior and obtain access to the RoutingExtension. Once we have them, the RepositoryProxy.GetRouting method is invoked to obtain the routing metadata from the Repository.

    We get the configuration section as xml-formatted text, just as it would appear in an application config file. Then, using the workhorse config library, we can deserialize the text resource into a CLR object such as MessageFilterTable<IEnumerable<ServiceEndpoint>>.

    Then the RoutingConfiguration instance is created and passed to the router.ApplyConfiguration process. The rest of the magic is done in the RoutingService.

    public void Refresh(string machineName, string applicationName)
    {
      RoutingConfiguration rc = null;
      // the behavior is looked up on the service host (this lookup is reconstructed;
      // the original line was truncated)
      RoutingManagerBehavior manager =
        OperationContext.Current.Host.Description.Behaviors.Find<RoutingManagerBehavior>();
      try
      {
        RoutingExtension router = this.GetRouter(machineName, applicationName, manager);
        if (router != null)
        {
          RoutingMetadata metadata = RepositoryProxy.GetRouting(
            Environment.MachineName, manager.RouterKey, manager.FilterTableName);
          string tn = metadata.TableName == null ?
                      manager.FilterTableName : metadata.TableName;
          var ft = ServiceModelConfigHelper.CreateFilterTable(metadata.Config, tn);
          rc = new RoutingConfiguration(ft, metadata.RouteOnHeadersOnly);
          rc.SoapProcessingEnabled = metadata.SoapProcessingEnabled;

          // insert a routing message inspector
          foreach (var filter in rc.FilterTable)
            foreach (var se in filter.Value as IEnumerable<ServiceEndpoint>)
              if (se.Behaviors.Find<TraceMessageEndpointBehavior>() == null)
                se.Behaviors.Add(new TraceMessageEndpointBehavior());

          // apply the new routing configuration to the running RoutingService
          router.ApplyConfiguration(rc);
        }
      }
      catch (Exception ex)
      {
        RepositoryProxy.Event( ...);
      }
    }

    The last action in the above Refresh method injects the TraceMessageEndpointBehavior, which traces messages flowing through the RoutingService to the trace output device.

    The next code snippets show some details from the ConfigHelper library:

    public static MessageFilterTable<IEnumerable<ServiceEndpoint>> CreateFilterTable(string config, string tableName)
    {
      var model = ConfigHelper.DeserializeSection<ServiceModelSection>(config);
      if (model == null || model.Routing == null || model.Client == null)
        throw new Exception("Failed for validation ...");
      return CreateFilterTable(model, tableName);
    }


    The following code snippet shows a generic method for deserializing a specific section type from xml-formatted config text:

    public static T DeserializeSection<T>(string config) where T : class
    {
      T cfgSection = Activator.CreateInstance<T>();
      byte[] buffer =
        new ASCIIEncoding().GetBytes(config.TrimStart(new char[] { '\r', '\n', ' ' }));
      XmlReaderSettings xmlReaderSettings = new XmlReaderSettings();
      xmlReaderSettings.ConformanceLevel = ConformanceLevel.Fragment;
      using (MemoryStream ms = new MemoryStream(buffer))
      {
        using (XmlReader reader = XmlReader.Create(ms, xmlReaderSettings))
        {
          try
          {
            // invoke the protected ConfigurationSection.DeserializeSection method
            Type cfgType = typeof(ConfigurationSection);
            MethodInfo mi = cfgType.GetMethod("DeserializeSection",
                    BindingFlags.Instance | BindingFlags.NonPublic);
            mi.Invoke(cfgSection, new object[] { reader });
          }
          catch (Exception ex)
          {
            throw new Exception("....");
          }
        }
      }
      return cfgSection;
    }

    Note that the above static method is a powerful, reusable way to obtain any type of config section from an xml-formatted text resource, which allows us to use metadata stored in a database instead of in the file system (the application config file).




    In conclusion, this article described a manageable Router based on the WCF4 Routing Service. The manageable Router allows routing rules to be changed dynamically from a centralized logical model stored in the Repository. The Router represents a virtualization component that maps logical endpoints to physical ones, and it is a fundamental component in a model-driven distributed architecture.








    [6] SQL Server Modeling CTP and Model-Driven Applications

    [7] Model Driven Content Based Routing using SQL Server Modeling CTP – Part I

    [8] Model Driven Content Based Routing using SQL Server Modeling CTP – Part II

    [9] Intermediate Routing


    Application Lifecycle Management in SharePoint 2013 & Office 365 using Team Foundation Server 2013 & Visual Studio Online – The Dev Environment(s)

    Ayman El-Hattab's Technology Blog

    This is the 3rd article of a series of posts on SharePoint 2013 & Office 365 Application Lifecycle Management:
    • Introduction
    • Infrastructure Overview
    • The Development Environment(s) → You are here!
    • The ALM Platform(s)
    • The Testing Environment(s)
    • Automated Build & Deployment for Full-Trust & Sandboxed Solutions
    • Automated Build & Deployment for SharePoint-hosted & Autohosted Apps
    • Automated Build & Deployment for Provider-hosted Apps (Azure-hosted)
    • Automated Build, Deployment & Testing for Full-Trust & Sandboxed Solutions
    • Automated Build, Deployment & Testing for Apps
    • Release Management Overview
    • Release Management for Apps
    • Release Management for Full-Trust & Sandboxed Solutions


    In the previous post, I quickly gave you an overview of the full ALM environment that we will be building. The environment comprises some on-premises components & services as well as some cloud services. The on-premises components & services will be combined & hosted in three virtual machines, as explained earlier.

    Here is a…

    View original post 1,413 more words

    Power BI connectivity to SAP BusinessObjects BI

    Microsoft and SAP are jointly delivering business intelligence (BI) interoperability in Microsoft Excel, Microsoft Power BI for Office 365, and SAP BusinessObjects BI.


    Microsoft Power Query for Excel seamlessly connects to SAP BusinessObjects BI Universes enabling users to access and analyze data across the enterprise and share their data and insights through Power BI.

    This connectivity drives a single version of truth, instant productivity, and optimized business performance for your organization.


    Download Microsoft Power Query Preview for Excel

    Preview contains SAP BusinessObjects BI Universe connectivity.


    Microsoft Power Query Preview for Excel, providing SAP BusinessObjects BI Universe connectivity, is an add-in that provides a seamless experience for data discovery, data transformation and enrichment for Information Workers, BI professionals and other Excel users. This preview provides an early look into the upcoming SAP BusinessObjects BI Universe connectivity feature. As with most previews, this feature may appear differently in the final product.

    System Requirements

    Supported operating systems

    • Windows Vista (requires .NET 3.5 SP1)
    • Windows Server 2008 (requires .NET 3.5 SP1)
    • Windows 7
    • Windows 8
    • Windows 8.1

    The following Office versions are supported:

    • Microsoft Office 2010 Professional Plus with Software Assurance
    • Microsoft Office 2013 Professional Plus, Office 365 ProPlus or Excel 2013 Standalone

    Microsoft Power Query Preview for Excel requires Internet Explorer 9 or greater.

    Microsoft Power Query Preview for Excel is available for 32-bit (x86) and 64-bit (x64) platforms; your selection must match the architecture of the installed version of Office.

    Installation Instructions

    Download the version of the Power Query add-in that matches the architecture (x86 or x64) of your Office installation. Run the MSI installer and follow the setup steps.

    Access and analyze your trusted enterprise data

    Learn how Microsoft and SAP deliver a combination of trusted enterprise data and familiar market leading tools.

    Single version of truth

    Deliver the latest, accurate and trusted data from across the enterprise, such as from SAP applications, directly into the hands of users in Microsoft Excel. They no longer need to constantly copy and paste or import data using a manual process leading to inaccuracy. Users can instead focus on leveraging their knowledge to analyze data from within and outside your organization. They can get answers and uncover new insights to better deal with the challenges facing your organization, eliminating costly decisions based on inaccurate data.

    Instant productivity

    Users can continue to work in their familiar Microsoft Excel environment with access to business friendly terms from SAP BusinessObjects BI Universes at their fingertips, allowing for deeper analysis on their own. Using familiar tools enables them to easily integrate data and insights into existing workflows without the need to learn new complex tools and skills. Any uncovered data and insights can be kept up to date with no hassle refreshing from on-premises and the cloud, increasing productivity.

    Optimized business performance

    Leveraging existing investments from both companies together enables your organization to unlock insights faster and react accordingly. Your organization can identify patterns, cost drivers, and opportunities for savings in an agile, accurate, and visual manner. Specific trends and goals can be measured while having visually attractive and up to date dashboards. Relying on trusted enterprise data reduces costs and increases profitability by allowing faster, better, and timelier decisions. All of this drives broader BI adoption to create an information driven culture across your organization.

    Unlocked data and insights

    Drive trusted enterprise data and insights from an SAP BusinessObjects BI Universe throughout your organization by sharing and collaborating with Power BI from anywhere. Anyone can create a collaborative BI site to share data and insights relying on the latest data from either on-premises or the cloud using scheduled refreshing. Users no longer have to struggle to think up and answer every single question in advance, instead interactively investigating data when they need it through natural language Q&A. Better yet, they can stay connected with mobile access to data and insights generating a deeper understanding of the business and communicating more effectively from anywhere.



    Microsoft Patterns and Practices : A look at the Security Development Lifecycle (SDL)

    Microsoft Security Development Lifecycle (SDL) is an industry-leading software security assurance process. A Microsoft-wide initiative and a mandatory policy since 2004, the SDL has played a critical role in embedding security and privacy in Microsoft software and culture.

    Combining a holistic and practical approach, the SDL introduces security and privacy early and throughout all phases of the development process. It has led Microsoft to measurable and widely-recognized security improvements in flagship products such as Windows Vista and SQL Server. Microsoft is publishing its detailed SDL process guidance to provide transparency on the secure software development process used to develop its products.

    As part of the design phase of the SDL, threat modeling allows software architects to identify and mitigate potential security issues early, when they are relatively easy and cost-effective to resolve. Therefore, it helps reduce the total cost of development.

    The SDL Threat Modeling Tool Is Not Just a Tool for Security Experts

    The SDL Threat Modeling Tool is the first threat modeling tool that isn't designed for security experts. It makes threat modeling easier for all developers by providing guidance on creating and analyzing threat models. The SDL Threat Modeling Tool enables any developer or software architect to:

    • Communicate about the security design of their systems
    • Analyze those designs for potential security issues using a proven methodology
    • Suggest and manage mitigations for security issues

    SDL Threat Modeling Process

    Capabilities and Innovations of the SDL Threat Modeling Tool

    The SDL Threat Modeling Tool plugs into any issue-tracking system, making the threat modeling process a part of the standard development process. Innovative features include:

    • Integration: issue-tracking systems
    • Automation: guidance and feedback in drawing a model
    • STRIDE per-element framework: guided analysis of threats and mitigations
    • Reporting capabilities: security activities and testing in the verification phase

    The Unique Methodology of the SDL Threat Modeling Tool

    The SDL Threat Modeling Tool differs from other tools and approaches in two key areas:

    • It is designed for developers and centered on software. Many threat modeling approaches center on assets or attackers. In contrast, the SDL approach to threat modeling is centered on the software. The tool builds on activities that all software developers and architects are familiar with, such as drawing pictures of their software architecture.

    • It is focused on design analysis. The term "threat modeling" can refer to either a requirements or a design analysis technique; sometimes it refers to a complex blend of the two. The Microsoft SDL approach to threat modeling is a focused design analysis technique.


    Great tool to handle storing constants in SharePoint Development



    Today I want to introduce something I’ve been working on recently which could be of use to you if you’re a SharePoint developer. Often when developing SharePoint solutions which require coding, the developer faces a decision about what to do with values he/she doesn’t want to ‘hardcode’ into the C# or VB.Net code. Example values for a SharePoint site/application/control could be:

    ‘AdministratorEmail’ – ‘’
    ‘SendWorkflowEmails’ – ‘true’

    Generally we avoid hardcoding such values, since if the value needs to be changed we have to recompile the code, test and redeploy. So, alternatives to hardcoding which folks might consider could be:

    • store values in appSettings section of web.config
    • store values in different places, e.g. Feature properties, event handler registration data, etc.
    • store values in a custom SQL table
    • store values in a SharePoint list

    Personally, although I like the facility to store complex custom config sections in web.config, I’m not a big fan of appSettings. If I need to change a value, I have to connect to the server using Remote Desktop and open and modify the file – if I’m in a server farm with multiple front-ends, I need to repeat this for each, and I’m also causing the next request to be slow because the app domain will unload to refresh the config. Going through the other options, the second isn’t great because we’d need to be reactivating Features/event receiver registrations every time (even more hassle), though the third (using SQL) is probably fine, unless I want a front-end to make modifying the values easier, in which case we’d have some work to do.

    So storing config values in a SharePoint list could be nice, and is the basis for my solution. Now I'd bet lots of people are doing this already – it's pretty quick to write some simple code to fetch the value, and this means we avoid the hardcoding problem. However, the Config Store 'framework' goes further than this – for one thing it's highly optimized, so you can be sure there is no negative performance impact, but there are also some bonuses in terms of where it can be used from and the ease of deployment. So what I hope to show is that the Config Store takes things quite a bit further than just a simple implementation of 'retrieving config values from a list'.

    Details (reproduced from the Codeplex site)

    The list used to store config items looks like this (N.B. the items shown are my examples, you’d add your own):

    There is a special content type associated with the list, so adding a new item is easy:


    ..and the content type is defined so that all the required fields are mandatory:


    Retrieving values

    Once a value has been added to the Config Store, it can be retrieved in code as follows (you’ll also need to add a reference to the Config Store assembly and ‘using’ statement of course):

    string sAdminEmail = ConfigStore.GetValue("MyApplication", "AdminEmail");

    Note that there is also a method to retrieve multiple values with a single query. This avoids the need to perform multiple queries, so it should be used for performance reasons if your control/page retrieves many items from the Config Store – think of it as a best practice. The code is slightly more involved, but should make sense when you think it through. We create a generic List of ConfigIdentifiers (a ConfigIdentifier specifies the category and name of the item, e.g. 'MyApplication', 'AdminEmail'), add each identifier to the list, and pass it to the 'GetMultipleValues()' method:

    List<ConfigIdentifier> configIds = new List<ConfigIdentifier>();

    ConfigIdentifier adminEmail = new ConfigIdentifier("MyApplication", "AdminEmail");
    ConfigIdentifier sendMails = new ConfigIdentifier("MyApplication", "SendWorkflowEmails");
    // add the identifiers to the list before querying
    configIds.Add(adminEmail);
    configIds.Add(sendMails);
    Dictionary<ConfigIdentifier, string> configItems = ConfigStore.GetMultipleValues(configIds);
    string sAdminEmail = configItems[adminEmail];
    string sSendMails = configItems[sendMails];

    ..the method returns a generic Dictionary containing the values, and we retrieve each one by passing the respective ConfigIdentifier we created earlier to the indexer.

    Other notes

    • All items are wrapped up in a Solution/Feature, so there is no need to manually create site columns, content types, the Config Store list, etc. There is also an install script so you can easily install the Solution.

    • Config items are cached in memory, so where possible there won't even be a database lookup!

    • The Config Store is also designed to operate where no SPContext is present, e.g. in a list event receiver. In this scenario, it will look for values in your SharePoint web application's web.config file to establish the URL for the site containing the Config Store (N.B. these web.config keys get added automatically when the Config Store is installed to your site). This also means it can be used outside your SharePoint application, e.g. from a console app.

    • The Config Store can be moved from its default location in the root web of your site. For example, sites I create usually have a hidden 'config' web, so I put the Config Store in there, along with other items. (To do this, create a new list (in whatever child web you want) from the 'Configuration Store list' template (added during the install), and modify the 'ConfigWebName'/'ConfigListName' keys which were added to your web.config to point to the new location. As an alternative, if you have already added 100 items which you don't want to recreate, you could use my other tool, the SharePoint Content Deployment Wizard, to move the list.)

    • All source code and Solution/Feature files are included, so if you want to change anything, you can.

    • Installation instructions are in the readme.txt in the download.


    Hopefully you might agree that the Config Store goes beyond a simple implementation of 'storing config values in a list'. For me, the key aspects are the caching and the fact that the entire solution is 'joined up', so it's easy to deploy quickly and reliably as a piece of functionality. Many organizations are probably at the stage where their SharePoint codebase is becoming mature and perhaps based on a foundation of a 'core API' – the Config Store could provide a useful piece in such a toolbox.

    You can download the Config Store framework and all the source code from

    How To : Reserve Resources on the Calendar in SharePoint 2013 / Online

    I suppose many of you know about a great calendar feature in SharePoint 2010 called resource reservation. It enables organizing meetings through a useful interface that lets you select multiple resources (such as meeting rooms, projectors and other facilities) and the required participants, and then pick a time frame that is free for all participants and facilities in the calendar view.

    You can switch between week and day views.

    Here is a screenshot of the calendar with the resource reservation and member scheduling features:

    You can change resources and participants in your meeting form, find free time frames in the diagram and check for double booking:

    There are two ways to add the resource reservation feature to a SharePoint 2010 calendar:

    1. Enable the 'Group Work Lists' web feature, add a calendar and go to its settings. Click the 'Title, description and navigation' link in the 'General settings' section. Check 'Use this calendar to share member's schedule?' and 'Use this calendar for Resource Reservation?'
    2. Create a site based on the 'Group Work Site' template.

    Here are the detailed instructions:

    SharePoint 2013 on-premise

    After migration to SharePoint 2013, I discovered that these features were excluded from the new platform and kept only for backward compatibility.

    So, you can migrate an application with an installed booking calendar from SharePoint 2010 to SharePoint 2013 and keep the resource reservation functionality, but you cannot activate it on a new SharePoint 2013 application through the default interface.

    Microsoft officially explained this restriction by the unpopularity of the resource reservation feature:

    First, I found a solution for SharePoint 2013 on-premise. It is possible to display the missing site templates, including 'Group Work Site'. Then you just need to create a site based on this template and you will get the resource calendar.

    Go to C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\TEMPLATE\1033\XML, open the WEBTEMP.XML file, find the element with the 'Group Work Site' title attribute and change its Hidden attribute from TRUE to FALSE.
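    For illustration, a visible template entry ends up looking roughly like this; the template name, ID and description below are assumptions, and Hidden="FALSE" is what makes the 'Group Work Site' template appear in the UI:

    ```xml
    <!-- WEBTEMP.XML fragment (sketch only; your file's attributes may differ) -->
    <Template Name="SGS" ID="40">
      <Configuration ID="0" Title="Group Work Site" Hidden="FALSE"
                     Description="A site for teams to create, organize, and share information." />
    </Template>
    ```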

    SharePoint 2013 Online in Office 365

    Perfect: now we can use a free SharePoint booking system based on the standard calendar. But what about SharePoint Online in Office 365? We do not have access to WEBTEMP.XML in its file system.

    After some research, I developed a sandbox solution that enables the hidden 'Group Work Lists' feature and adds a calendar with the resource reservation and member scheduling features. Please download it and follow these instructions to install it:

    1. Go to the site collection settings.
    2. Open the 'Solutions' area in the 'Web Designer Galleries' section.
    3. Upload the CalendarWithResources.wsp package and activate it.
    4. Navigate to the site where you wish to add the calendar with the resource reservation feature enabled.
    5. Open site settings -> site features.
    6. Activate the 'Calendar With Resources' feature.

    Great, now you have a Group Calendar with the ability to book resources and schedule meetings.

    This solution works for SharePoint 2013 on-premise as well, so you can use it instead of modifying the WEBTEMP.XML file.

    Free download :

    WSP File –


    Why Recruiters Are Bad For Your Career By Brandon Savage – Let's fix this South Africa!!

    Why Recruiters Are Bad For Your Career


    At some point or another, every technical person will conduct a job search. And either by design or accident, they will encounter the nemesis of job searching: the recruiter. These individuals are employed by companies whose sole purpose is to serve as an intermediary between job seekers and potential employers. Their marketing literature will say that they match you to potential jobs, and since they spend their days looking around for potential job openings, they have a better grasp of what's out there than you do. That's their claim, anyway.

    The big problem with recruiters is that they are typically paid based on two criteria: the salary of the jobs they put people in, and how many people they place. This might sound like a win-win, but really it's a win for the recruiter and a loss for the job candidate. What common strategies do recruiters use to lure job applicants, and why are they bad for you? Let's take a look.


                Disclaimer: this post reflects my personal experience with recruiters. Your personal experience may vary, and that’s fine. Choosing to use recruiters is a personal decision, and one that should be entered into with full understanding. And there are also many recruiters who practice none of these tactics; those are people you should definitely seek out and get to know.
                Disclaimer: many large companies have recruiters either under contract or as employees. These types of recruiters (in-house recruiters) are not the kind of recruiters being discussed here. In-house recruiters are paid to headhunt, and working with them is perfectly fine and normal. The best way to know whether or not a recruiter is an in-house recruiter is to find out if the email address they have actually contains the domain name of the company you’re trying to work for.

                The Uninvited Solicitation

                Anyone in the technology industry will eventually be solicited by a recruiter who found their contact information somewhere and decided to contact them about an open position, out of the blue. The conversation (either via email or telephone) usually starts off with some praise (like “I reviewed your resume and I think you have some fantastic technical skills that could apply to this position I have open”) and then a pitch for a job that they’re trying to fill. This is often a flattering proposition: someone found YOU, and decided that YOUR skills would match THEIR job. But it’s a scam.

                How can you tell it’s a scam? There will often be tell-tale signs, some more subtle than others. Obvious ones include a recruiter who emails you, praising your skills as a match for this job, and then proceeds to describe a job in a completely different field from your background. That recruiter is bullshitting you, and is really not interested in finding “the best fit.” They just want to collect the commission. Other signs include jobs that are somewhat in your field, but not necessarily fitted to your skill set. Most of the time, if a recruiter says “I have a client who…” they’re lying. Most recruiters don’t have exclusive locks on jobs; they get their jobs the same way you do, they just happen to know who the hiring manager is and thus can make more direct contact. Submitting your resume the old fashioned way still gets it seen by the same person.

                The Vague, Rewritten Job Posting

                As previously mentioned, most recruiters working for staffing companies don’t have exclusive contracts to offer a job, don’t actually screen candidates, and aren’t otherwise directly involved in the hiring process. Their role is largely self-defined, where they match candidates to a job posting; their success is dependent upon their network of contacts and their ability to get their candidates directly in front of the hiring manager.

                As a result, most recruiters are pretty vague about the company they’re posting for when they write a job posting. They’ll usually write something along the lines of “My client…” or “We have a client who…” They won’t post any identifying information about the company in the ad, and for good reason: if you knew what company was hiring, you could go apply for the job yourself!

                Recruiters will often post job descriptions that are vague, and more often than not, rewritten from the original. Recruiters are aiming to get the widest possible number of people interested in the posting, since that increases the applicant pool and increases the chances they’ll win a commission by making a placement. The result is that they will rewrite the job description, often adding keywords that they think might be related, and end up posting a job description that is vague, keyword-filled, and really useless for knowing what job they’re actually trying to find a candidate to fill.

                This is bad for you because it means that you cannot target yourself to a particular position as easily. Most hiring managers want to know how you’re going to satisfy their needs, and a shotgun approach to providing such satisfaction will turn them off. If they’re looking for someone with Postgres experience, they probably don’t care that you worked with MySQL, Oracle, and SQL Server, for example. Speaking of rewriting…

                The Rewriting Of The Candidate’s Resume

                Recruiters will often ask candidates to send them a resume in Word format. This is often for two reasons: first, because there is no contract in place between the company and the recruiter, the recruiter doesn’t want to have the company be able to hire the applicant directly, thus bypassing the recruiter and his commission. The Word-based resume allows them to remove the contact information of the candidate before sending the information along. But second, and more dubious, many times recruiters rewrite resumes.

                That’s right: they’ll rewrite your resume to “match” the job description.

                The reasons this is bad for you should be obvious: lying on your resume is typically grounds for automatic rejection or termination, regardless of who was responsible (most employers won’t take “the recruiter rewrote it!” as an excuse). In addition, most recruiters are not technical but are convinced that keywords sell job candidates, so they’ll load up resumes with tons of bullshit terms, trying to match the resume to the job description to improve their chances of success. Finally, it’s possible you may not see the finished product, meaning you could get asked about something on your resume you’ve never even seen or heard of (that one is awkward).

                The Pre-Interview Interview At The Recruiter’s Office

                Car salesmen like to get people into the showroom. They know that if they can get people into the showroom, to make the investment of time in coming to the showroom, the sale becomes much easier. Recruiters are often car salesmen in better clothing, and practice the same philosophy. In recruiter world, this often takes the form of a pre-interview interview. It serves two purposes, both bad for the candidate.

                First and foremost, it causes you to make an investment in the position you’re applying for. You’ve invested the time to dress up, keep an appointment, and answer questions. Recruiters will tell you the purpose of the interview is to make sure you’re sane and qualified, but I’m firmly convinced that it’s really designed to have you make an investment of your time and energy in the recruiter and the role.

                The second (and often more dubious) reason is to get you to fill out paperwork for the recruiter. Remembering that the recruiter has no contract with the companies they’re trying to recruit for, many try to do an end run around this by making you sign paperwork (usually as part of a “job application”) promising not to accept a position with any company they put you in contact with, unless that offer comes through them. The goal here is to get one of the two parties in a situation where the recruiter can almost be guaranteed their commission; this has nothing to do with protecting the interests of the candidate, and everything to do with protecting the interests of the recruiter.

                Now that the candidate is under contract, it’s time for the recruiter’s next trick…

                The Unprompted Blasting Of Your Resume All Over Town

                You applied for a single position. You sent the recruiter a copy of your resume in Word, came down to his office, spent a couple hours being “interviewed” and signed a piece of paper promising to inform the recruiter if you got a job thanks to their efforts. Turns out the role you wanted either got filled or wasn’t a good match for you. You move on to your next job prospect. And then the worst thing in the world happens.

                The next company refuses to call you in for an interview because they’ve already seen your resume. Seems that your recruiter sent it to them last week, but the company you’ve applied to doesn’t want to pay a 25% commission to hire someone. They know you’re under contract not to take their job if they don’t pay the commission. And so they aren’t able to work with you at this time.

                Now that the recruiter has you under contract, he’s free to do whatever he wants with your information. This is a curse upon your house. Every place the recruiter now sends your information is off-limits to you if they decide the other candidate is cheaper. You’ve just lost control of your job search.

                Sure, you can ask your recruiter to stop, but the damage has probably already been done. A recruiter’s success at their job depends on their ability to know pretty much everything going on in a given job field, which means there’s a chance everyone hiring for your field within 50 miles has gotten your resume and now can’t hire you.

                Think this can’t happen to you? This is exactly what happened to me once upon a time. In fact, I had contacted two recruiters, and they had both submitted me for the same job. I had even interviewed for that job, but when they realized that two recruiters had submitted me, they pulled out of the process, out of fear that one of them would sue if the other got the commission.

                The Complete Disregard For Your Preferences

                By now we’ve established what recruiters are after in the process. This often leads to recruiters putting you up for a job that you have no interest in winning. The vagueness of the job posting, as well as the vagueness of most recruiters, means that you may not have a good understanding of what job you’re interviewing for.

                For example, I had gone through the whole pre-interview interview (while avoiding signing paperwork) with a recruiter. I had spent a whole two hours in their office, expressing my preferences, likes, and dislikes. I had explicitly stated that I didn’t want to interview for Drupal-heavy jobs.

                They scheduled a call with a hiring manager, which ended up being rescheduled. I had made a major investment of time and effort in this interview by this point. In the first five minutes, the hiring manager described the position as being “primarily a Drupal developer with a few legacy applications that will eventually be moved to Drupal.” I was furious.

                Here I had wasted more than two hours in their office, plus travel time, plus scheduling and rescheduling the interview, plus actually having the interview, only to find out I had no interest in the job. I could have found that out in five minutes if they had simply been up front and saved all of us a lot of time and energy. But recruiters are typically selfish and don’t care about your preferences – they care about their commissions!

                And this focus on commissions leads to the last recruiter strategy that hurts developers…

                Playing Mr. Positive Until They Don’t Need You Anymore

                If you ever had a girlfriend who broke up with you after you wrecked your really nice car, causing you to realize it was the car and not you that she loved, you’ll understand how a recruiter behaves when you tell them you’re not interested in the job they have open.

                They’ll drop you like a hot potato.

                Oh, it’s not personal. It’s just that you’re not useful to them anymore. They’re after a commission, after all. They’re not social workers, they’re capitalists. Their product is you, and you suddenly have no value to the goal they’re trying to achieve. And so, they’ll stop returning calls. Until they need you again, that is.

                Recruiters are not really interested in taking a candidate, finding the best position for them, placing them in that position and making sure they’re happy. If they were, they would work with a candidate to find them a role that fit their experience and preferences, and go the extra mile. To date, I’ve never seen it.

                Telling You Things To Boost Your Ego, But Being Full Of Lies

                A recruiter will tell you lots of things, aimed at boosting your ego and also convincing you to work with the recruiter. They’ll tell you things like “I want to help you get the highest salary possible” or “I’m working on this for you.” All lies. Recruiters’ commissions are based on salary, so of course they want to get you the highest salary possible – for their own benefit. But remember: the commission a company pays to hire you will inevitably reduce the available cash for a given position, reducing your salary offer. And since a starting salary is often the place companies start from when giving raises, you will permanently reduce your lifetime earnings.

                As for a recruiter “working on this for you” that’s bullshit too. The recruiter is working on it for themselves. They’ve been tasked with filling that job. They don’t care if it’s you that fills it or the next guy who applies; they just want to get the job out of their portfolio.

                Working with recruiters is also a lot of bad news. Recruiters have three lines that they like to give candidates after interviews. The first is “the company has decided not to hire for this role at this time.” The second is “the company has already filled the position.” And the third is “the company has decided you’re not a good fit for the role.”

                The first line is the biggest amount of bullshit of the three. The company never decides not to hire; they decide the commission would be too expensive and so that’s what they tell the recruiter they’ve decided. Either the recruiter is too stupid to know he’s being lied to or doesn’t care; that’s what he tells you.

                The second line, about filling the position, may well be true. It may also be a knee-jerk reaction of the company to being contacted by a recruiter. Most companies will bite on a recruiter if the resume they get is top notch, but since recruiters take a shotgun approach to getting folks hired, most of the time this is not the case (you may well be the finest resume he has; but then again, if he’s rewritten it, maybe it sucks now). Either way, working with a recruiter is going to feel a lot like always being late to the party.

                Finally, and the most honest of the lines, is the company talking with you and then deciding to go an entirely different direction. This happens often, and at least you find out, but be prepared for this to happen more often with a recruiter. The company knows that they’re going to have to pay you the same as other candidates but they’ll also tack on an extra 25% for the recruiter, so if they’ve got qualified candidates who aren’t tied to a recruiter, you’re going to get this line from them almost instantly.

                The Bottom Line

                What’s the upshot of all this? I strongly recommend you avoid recruiters at all costs. While many people find jobs every day using a recruiter, the reality of the job market for developers is that good developers don’t need recruiters to find good positions. Recruiters do nothing but make it harder for you to find work, and their commission (which is based on your salary on hire) often drags down your pay package. Finding a job without a recruiter may take a bit longer, or be a bit more stressful, but is more rewarding, provides more flexibility, and ultimately improves the odds of getting hired in a great position without strings attached.

                Are there any honest recruiters?

                Of course! Sadly they are few and far between. However, it is possible to find a job with the right recruiter. Lonnie Brown is one such honest recruiter; give him a shout if you’re in the PHP job market!

                Windows Azure is now “Microsoft Azure”

                Thanks as always for keeping the community up to date Alexandre!!

                Alexandre Brisebois ☁

                This shift in Microsoft’s strategy just makes sense, because its cloud ecosystem has outgrown the “Windows” label.

                Microsoft Azure is about being able to host your solutions at scale. Whether we develop using Java, Node, PHP, .NET… we should be able to deploy to the cloud and benefit from its rich ecosystem.

                I, for one, can’t wait to see the new branding. But for now, I will leave you with a quote from the blog post by the Microsoft Azure Team:

                Today we are announcing that Windows Azure will be renamed to Microsoft Azure, beginning April 3, 2014. This change reflects Microsoft’s strategy and focus on Azure as the public cloud platform for customers as well as for our own services Office 365, Dynamics CRM, Bing, OneDrive, Skype, and Xbox Live.

                Our commitment to deliver an enterprise-grade cloud platform for the world’s applications is greater…

                View original post 58 more words

                WebEssentials on Steroids – Grunt

                Yes you can. Grunt is completely independent of IDEs and text editors and is a really helpful tool for all kinds of web development in any editor.

                Grunt is described as “a node based javascript task runner with which you can automate tasks like minification, compilation, unit testing, linting and more”. You can use it, for example, when you do not want to depend on a web server doing minification and bundling for you, or when you simply want to use tools that are not (yet) supported by the Visual Studio ecosystem.

                I use it when I create web applications which require TypeScript compilation, CSS and JS minification, and separate configurations for development and live deployments. Initially I used WebEssentials for that, but I found that Grunt gives me more flexibility and power.

                To use Grunt you only need to install Node on your computer. With that in place you tell Grunt how to perform its magic with the help of two configuration files:

                1. package.json (which contains the npm information about your project and its dependencies)
                2. gruntfile.js (which contains information about your build tasks)

                Npm, the Node Package Manager, can help us create a package.json. Just enter npm init from your application folder:

                Follow the process in a 5-minute screen recording here.


                Answer the questions (just use the default options if you like, it’s not important now) and it will create a package.json for you.
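The generated file is plain JSON. A minimal result might look something like the sketch below — the name, version, and description are hypothetical placeholders; yours will reflect the answers you gave during npm init:

```json
{
  "name": "my-web-app",
  "version": "0.0.1",
  "description": "Sample web application built with Grunt",
  "main": "index.js",
  "author": ""
}
```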

                Next we’ll open the project as a web site within Visual Studio:


                You can edit the information in the file if you like. But we’ll let npm add some information for us automatically, so please save the file before you go on to the next step.

                Whenever we use an external tool to edit files for us, Visual Studio will notice and ask us if we would like to refresh them from file, or keep the version in the editor. I find this a bit annoying, and I prefer it when Visual Studio simply refreshes the files automatically (unless I changed them, of course). You can change that with the option “Auto-load changes if saved”:


                Okay. Now let’s add the grunt specific features.

                First, add Grunt itself locally, as well as the task we will use. Go back to the command prompt and install grunt and grunt-ts for TypeScript compilation:

                npm install grunt-ts --save-dev

                Grunt-ts has grunt as a dependency, so it will install that automatically. (Now don’t be surprised by the load of packages that come with Grunt. It’s just something you need to get used to when it comes to npm packages.)


                When npm is done installing everything you will get a command prompt like the one below. Also notice that package.json has gotten a few new lines:

                "devDependencies": {
                    "grunt": "~0.4.1",
                    "grunt-ts": "~0.9.1"

                Now we need to add the Grunt Command Line Interface. We install that globally by adding -g so we can use it from any project later:

                npm install grunt-cli -g

                Now let’s add the gruntfile.js with everything grunt needs to perform the particular tasks for us:

                module.exports = function (grunt) {
                    "use strict";

                    grunt.initConfig({
                        ts: {
                            options: {                  // use to override the default options
                                target: 'es3',          // es3 (default) | es5
                                module: 'amd',          // amd (default) | commonjs
                                sourcemap: true,        // true (default) | false
                                declaration: false,     // true | false (default)
                                nolib: false,           // true | false (default)
                                comments: false         // true | false (default)
                            },
                            dev: {                      // a particular target
                                src: ["ts/*.ts"],       // the source typescript files
                                watch: 'ts',            // watch the specified directory for ts changes and rerun
                                out: 'dev',
                                options: {              // override the main options
                                    sourcemap: true
                                }
                            },
                            live: {                     // a particular target
                                src: ["ts/**/*.ts"],
                                watch: 'ts',
                                out: 'scripts/app.js',
                                options: {              // override the main options
                                    sourcemap: false
                                }
                            }
                        }
                    });

                    grunt.loadNpmTasks("grunt-ts");     // load the grunt-ts task
                    grunt.registerTask("default", ["ts"]);
                };

                It’s quite a lot of code, but mostly self-explanatory. Now let’s add some necessary folders:

                • ts : for the typescript code files.
                • scripts : for the compiled javascript. Bundled to one file.
                • dev : for the compiled separate javascript files. With sourcemaps.

                Add a basic typescript file in the ts folder.
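If you don’t have one handy, a tiny hypothetical file such as ts/greeter.ts will do — it sticks to es3-friendly syntax so it compiles fine with the options above:

```typescript
// ts/greeter.ts - a minimal class to exercise the compilation pipeline
class Greeter {
    private name: string;

    constructor(name: string) {
        this.name = name;
    }

    greet(): string {
        return "Hello, " + this.name;
    }
}

var greeter = new Greeter("Grunt");
console.log(greeter.greet());
```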

                Now run grunt,


                which will make it compile the typescript file and then keep running to watch for file changes.

                Open up the compiled JavaScript file within Visual Studio and, if you like, arrange the windows to view the JavaScript file together with the TypeScript file:


                Notice it re-compiles and refreshes the JavaScript file whenever you change the TypeScript file, or any other TypeScript file within the ts folder.

                The last thing we’ll do is to stop the current grunt process (just by hitting Ctrl+C) and make grunt perform the live task instead:

                grunt ts:live

                See that we get the compiled file in the scripts folder instead, and without a source map.

                Read more about grunt-ts and grunt.

                Make grunt do more
                After we compile the live version of our script we’d like to minify it. For that we can use the uglify plugin:

                npm install grunt-contrib-uglify --save-dev

                -contrib- is for plugins that are being maintained by the grunt core team.

                Now change and add a few lines at the bottom in the gruntfile:

                module.exports = function (grunt) {
                    "use strict";

                    grunt.initConfig({
                        ts: {
                            options: {                  // use to override the default options
                                target: 'es3',          // es3 (default) | es5
                                module: 'amd',          // amd (default) | commonjs
                                sourcemap: true,        // true (default) | false
                                declaration: false,     // true | false (default)
                                nolib: false,           // true | false (default)
                                comments: false         // true | false (default)
                            },
                            dev: {                      // a particular target
                                src: ["ts/**/*.ts"],    // the source typescript files
                                watch: 'ts',            // watch the specified directory for ts changes and rerun
                                out: 'dev',
                                options: {              // override the main options
                                    sourcemap: true
                                }
                            },
                            live: {                     // a particular target
                                src: ["ts/**/*.ts"],
                                out: 'scripts/app.js',
                                options: {              // override the main options
                                    sourcemap: false
                                }
                            }
                        },
                        uglify: {
                            my_target: {
                                files: {
                                    'scripts/app.min.js': ['scripts/app.js']
                                }
                            }
                        }
                    });

                    grunt.loadNpmTasks('grunt-ts');             // compiles typescript
                    grunt.loadNpmTasks('grunt-contrib-uglify'); // minifies
                    grunt.registerTask("default", ["ts:live"]);
                    grunt.registerTask("ts-dev", "Compile all typescript files to dev folder and watch for changes", ["ts:dev"]);
                    grunt.registerTask("ts-live-uglify", "Compiles all Typescript files into one and minifies it", ["ts:live", "uglify"]);
                };

                Notice I have added “uglify”, a few explicit tasks, and descriptions. Now, if we’d like to compile the TypeScript for live usage and minify the JS, we run the command:

                grunt ts-live-uglify

                If we forget our task names we can list them with their descriptions by running:

                grunt --help


                Run MSBuild from Grunt with grunt-msbuild.

                GruntLauncher is a Visual Studio add-in that adds your gruntfile commands to a context menu.

                Using SharePoint FAST to unlock SAP data and make it accessible to your entire business

                An important new mantra is search-driven applications. In fact, “search” is the new way of navigating through your information. In many organizations an important part of the business data is stored in SAP business suites.
                A frequent need is to navigate through the business data stored in SAP via a user-friendly and intuitive application context.
                For many organizations (78% according to Microsoft numbers), SharePoint is the basis for the integrated employee environment. Starting with SharePoint 2010, the FAST Enterprise Search Platform (FAST ESP) is part of the SharePoint platform.
                All analyst firms assess FAST ESP as a leader in their scorecards for enterprise search technology. For organizations that have both SAP and Microsoft SharePoint in their infrastructure, the FAST search engine provides opportunities that one should not miss.

                SharePoint Search

                Search is one of the supporting pillars in SharePoint, and an extremely important one for realizing the SharePoint proposition of an information hub plus collaboration workplace. It is essential that information you put into SharePoint is easy to find again.

                By yourself of course, but especially by your colleagues. However, from the context of a ‘central information hub’, more is needed. You must also be able to find and review, via the SharePoint workplace, the data that is administrated outside SharePoint. Examples are the business data stored in line-of-business systems [SAP, Oracle, Microsoft Dynamics], but also data stored on network shares.
                With the purchase of FAST ESP, the search power of Microsoft’s SharePoint platform sharply increased. All analyst firms consider FAST, along with competitors Autonomy and Google Search Appliance, as ‘best in class’ for enterprise search technology.
                For example, Gartner positioned FAST as a leader in the Magic Quadrant for Enterprise Search, just above Autonomy. In the SharePoint 2010 context, FAST was introduced as a standalone extension to the Enterprise Edition, parallel to SharePoint Enterprise Search.
                In SharePoint 2013, Microsoft has simplified the architecture. FAST and Enterprise Search are merged, and FAST is integrated into the standard Enterprise edition and license.

                SharePoint FAST Search architecture

                The logical SharePoint FAST search architecture provides two main responsibilities:

                1. Build the search index: in bulk, automatically index all data and information that you want to search later. Depending on the environment, the data sources include SharePoint itself, administrative systems (SAP, Oracle, custom), file shares, …
                2. Execute search queries against the accumulated index, and expose the search results to the user.

                In the indexation step, SharePoint FAST must thus retrieve the data from each of the linked systems. FAST Search supports this via the connector framework. There are standard connectors for (web) service invocation and for database queries. It is also supported to custom-build a .NET connector for other ways of unlocking external systems, and then plug this connector into the search indexation pipeline. Examples are connecting to SAP via RFC, or ‘quick-and-dirty’ integration access into an in-house system.
                In this context of searching (or better: finding) SAP data, SharePoint FAST supports the indexation process via Business Connectivity Services, for connecting to the SAP business system from the SharePoint environment and retrieving the business data. What still needs to be arranged is the runtime interoperability with the SAP landscape: authentication, authorization and monitoring.
                An option is to build these typical plumbing aspects into a custom .NET connector. But this is not an easy matter. More significantly, it is something that end-user organizations nowadays no longer aim to do themselves, due to the development and maintenance costs involved.
                An alternative is to apply Duet Enterprise for the plumbing aspects listed. Combined with SharePoint FAST, Duet Enterprise plays a role in two manners: (1) first upon content indexing, providing the connectivity to the SAP system to retrieve the data.
                The SAP data is then available within the SharePoint environment (stored in the FAST index files). Search query execution next happens outside of (without a link into) SAP. (2) Optionally, you go from the SharePoint application back to SAP if the use case requires that more detail be exposed per SAP entity selected from the search result. An example is a situation where it is absolutely necessary to show the actual status: for a product in the warehouse, how many orders have been placed?

                Security trimmed: Applying the SAP permissions on the data

                Duet Enterprise retrieves data under the SAP account of the individual SharePoint user. This ensures that, also from the SharePoint application, you can only view those SAP data entities to which you have rights according to the SAP authorization model. The retrieval of detail data is thus only allowed if you are allowed to see that data in the SAP system itself.

Due to the FAST architecture, matters are different for search query execution. As mentioned, the SAP data has by then already been brought into the SharePoint context, so no runtime link into the SAP system is necessary to execute the query. The consequence is that Duet Enterprise is not applied by default in this context.
In many cases this is fine (for instance in the customer example described below); in other cases it is absolutely mandatory to respect the specific SAP permissions at the moment of query execution as well.
The FAST search architecture supports this by enabling you to augment the indexed SAP data with the SAP authorizations as metadata.
To do this, you extend the scope of the FAST indexing process with retrieval of the SAP permissions per data entity. This meta-information is used to compile an ACL per data entity. FAST query execution processes this ACL meta-information and checks, for each item in the search result, whether it is allowed to be exposed to this SharePoint [SAP] user.
This approach of assembling the ACL information is a static snapshot of the SAP authorizations at the time the FAST indexing process runs. If the SAP authorizations are dynamic, this is not sufficient.
For such a situation it is required that, at the time of FAST query execution, the SAP authorizations that apply at that moment can be retrieved dynamically. The FAST framework offers an option to achieve this. It does require custom code, but that code is then plugged into the standard FAST processing pipeline.
SharePoint FAST combined with Duet Enterprise thus provides standard support and multiple options for implementing SAP security trimming. In the typical cases the standard support is sufficient.


                Applied in customer situation

The above is not only theory; we have actually applied it in practice. The context was opening up SAP Enterprise Learning functionality for use by employees from their familiar SharePoint-based intranet. One of the use cases is that an employee searches the course catalog for a suitable training.

This is a striking example of a search-driven application. You want a classified list of available courses, the ability to zoom in on relevant trainings through refinement, and to see per applied classification and refinement how many trainings are available. And of course you also always want the ability to search freely through the full text of the courses.
In the solution, we make the SAP data available for FAST indexing via Duet Enterprise. Duet Enterprise here takes care of the connectivity, Single Sign-On, and the feed into SharePoint BCS. From there FAST takes over: indexing of the exposed SAP data is done via the standard FAST index pipeline, and searching and displaying the results found via the standard FAST query execution and display functionality.
In this application context, specific user authorization per SAP course element does not apply: every employee is allowed to find and review all training data. As a result, we could suffice with the standard application of FAST and Duet Enterprise, without the need for additional customization.


Microsoft SharePoint Enterprise Search and FAST are both powerful tools for making SAP business data (and other Line of Business administrations) accessible. The rich feature set of FAST ESP makes it possible to offer your employees an intuitive, search-driven user experience over the SAP data.

                COMING SOON – The “User Poll” Web Part for SharePoint 2010 & 2013

The User Poll Web Part provides your SharePoint environment with a set of web parts that allow your end users to create simple polls. It does this without the hassle of the standard SharePoint surveys, which are not intended for creating a simple one-question poll.

The User Poll Web Part is a poll web part for SharePoint that allows site users to quickly create polls anywhere in the Site Collection. The poll Web Part is designed to provide a user-friendly interface: important settings and actions are available from within the Web Part.

                There is no direct need to work with the SharePoint Web Part setting menu and poll data is managed from normal SharePoint lists. Administrators can manage and keep track of all created polls from a centralized list.

A standard SharePoint installation also comes with a polling mechanism as part of its Survey Lists, but these surveys are complicated and require quite some time to configure.

The User Poll Web Part allows users to set up a single-topic poll within a few minutes.

                The roadmap for the project is provided below.

                Basic functionality

                • Poll settings are configured directly from the web part display or SharePoint lists
                • Publish and unpublish functionality


                Project road map:

                • Release production build of The User Poll Web Part 2013
                • Automated security management on the poll response and answer list
                • Result view only web part
                • Add multiple HTML5 chart options (currently only horizontal bar)
                • Documentation

Contact me!


How To: Enable RSS feed in SharePoint 2013

SharePoint 2013 has out-of-the-box support for publishing RSS feeds for lists and libraries. For publishing portals it is important to expose the site contents as RSS feeds. In this walkthrough I am going to explain how you can configure RSS feeds for lists, libraries, etc. in a SharePoint portal with your own display.


                For the purpose of this walkthrough, I have created a picture Library named “MyPictures” and added five pictures to this library. The thumbnail view of the picture library is as follows.


                Now go to the library tab of your list page.


You can see the RSS feed icon is disabled; this is because I didn't enable RSS feeds for my site. Now let us see how we can enable the RSS feed settings.

                Enable RSS feed for Sites/Site Collection

                Initially you need to enable the RSS feed for the site collection and the site where you need to expose your contents as RSS feeds.

Go to your top-level site settings page. If you are in a subsite, make sure you click on "Go to top level site settings".


Now, in the top-level site settings page, you can find the RSS link under Site Administration.


In the RSS settings page, you can enable RSS for the site or the entire site collection. You can also define certain properties such as copyright, managing editor, webmaster, etc. Click OK once you are done.


If you want to disable RSS feeds for any site, go to the site settings page, click the RSS link, uncheck the "Enable RSS" checkbox and click OK.

Now let us see the effect of the changes we have made. Go to the "MyPictures" page and, under the Library tab, check the RSS icon; you should see that it is enabled now.


                Click on the RSS feed icon, you will see the RSS feed in the browser.


                Customize the RSS feed

In SharePoint 2013, the RSS feed is exposed using listfeed.aspx, which you can find under the layouts folder.


By default the RSS feed uses the RssXslt.aspx file, which renders the XSL-transformed output to the browser.


You can find these files under the layouts folder of the 15 hive. The following is the path to the layouts folder when SharePoint is installed on the C: drive.

                C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\TEMPLATE\LAYOUTS


Though it is possible to customize RssXslt.aspx to apply your own design, this is not recommended: it is a SharePoint system file, and there is a chance it will be replaced by patches, service packs, etc.

Though RSS feeds are meant for machine consumption, you may need to present the RSS feed in your site and provide subscription instructions. You need to do two things here: first, select the data for the RSS feed to expose, and second, display the RSS feed in your site in a formatted way. Let us see how we can achieve this.

                Choose the data for RSS Feed

Navigate to your list page and, under the Library tab, click the Library Settings icon.


Once you have enabled RSS for the site, you will see a "Communications" column; locate "RSS Settings" under Communications.


Click on RSS Settings and you will see the settings page below.


First you can define whether to enable RSS for this list; if you don't want a particular list exposed as an RSS feed, you can select No here.

Under the RSS channel information you can define a title, description and icon for the feed. Under document options, "Include file enclosures" allows you to link the content of the file as an enclosure so that the feed reader can optionally download the file. As for "Link RSS items directly to their files": since this is a picture library, I want each item to link directly to the image. You can select these options depending on your needs.


In the columns section, you can select the columns to include in the RSS feed, and you can also specify the order of the columns in the feed.


Under Item Limit, you can specify the maximum number of items and the maximum age for an item; by default, items from the past 7 days are included. You can configure this based on your needs. Click OK once you are done.

                See the output XML generated by listfeed.aspx


                Display RSS feed in your page

Now you may want to include this RSS feed in a page. To do that, you can use the XML Viewer web part available in SharePoint 2013 and specify an XSLT for transforming the feed into formatted output.

First create a page and insert a web part: in the web part selection menu, select Content Rollup as the category, then select the XML Viewer web part and click the Add button.


In the web part settings you can define the XML URL and the XSL URL.


                I just created a simple XSL file, you can download the text version of the file from here.
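The original attachment is not reproduced here, but a minimal XSLT of the kind described might look as follows. It is only a sketch: the element names matched are the standard RSS 2.0 ones that listfeed.aspx emits, while the HTML structure and CSS class names are purely illustrative.

```xml
<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html" indent="yes"/>
  <!-- Render the channel title, then each item as a linked title with its description -->
  <xsl:template match="/rss/channel">
    <div class="rss-feed">
      <h2><xsl:value-of select="title"/></h2>
      <xsl:for-each select="item">
        <div class="rss-item">
          <a href="{link}"><xsl:value-of select="title"/></a>
          <p><xsl:value-of select="description" disable-output-escaping="yes"/></p>
        </div>
      </xsl:for-each>
    </div>
  </xsl:template>
</xsl:stylesheet>
```

Pointing the XML Viewer web part's XSL URL at a file like this transforms the raw feed XML into the formatted listing shown below.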

The following is the output generated in the browser after I applied the attached XSL.



SharePoint 2013 supports exposing and consuming RSS feeds without writing a single line of code, and you have complete control over what content is exposed as RSS feeds.

                Microsoft Announces Release of ASP.NET Identity 2.0.0

                Download this release

                You can download ASP.NET Identity from the NuGet gallery. You can install or update to these packages through NuGet using the NuGet Package Manager Console, like this:
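The exact commands are not preserved here, but installing the 2.0.0 packages from the Package Manager Console looks along these lines (the package names are the ones ASP.NET Identity publishes on the NuGet gallery):

```powershell
# Core interfaces and types for ASP.NET Identity
Install-Package Microsoft.AspNet.Identity.Core -Version 2.0.0
# Entity Framework implementation (persists users and roles to SQL Server)
Install-Package Microsoft.AspNet.Identity.EntityFramework -Version 2.0.0
# OWIN integration (cookie middleware plumbing)
Install-Package Microsoft.AspNet.Identity.Owin -Version 2.0.0
```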

                  What’s in this release?

                  Following is the list of features and major issues that were fixed in 2.0.0.

                  Two-Factor Authentication

ASP.NET Identity now supports two-factor authentication. Two-factor authentication provides an extra layer of security for your user accounts in case your password gets compromised. Most websites protect their data by having users create an account with a username and password. Passwords are not very secure, and users sometimes choose weak passwords, which can lead to accounts being compromised.

SMS is the preferred way of sending codes, but you can also use email in case the user does not have access to their phone. You can extend this and write your own providers, such as QR code generators, and use authenticator apps on phones to validate the codes.

There is also protection against brute-force attacks on the two-factor codes. If a user enters incorrect codes a specified number of times, the user account is locked out for a specified period. These values are configurable.

                  To try out this feature, you can install ASP.NET Identity Samples NuGet package (in an Empty ASP.NET app) and follow the steps to configure and run the project.
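As a sketch of the moving parts, a provider is registered on the UserManager and codes are then generated and verified against it. The provider name and message text below are made up; the RegisterTwoFactorProvider, GenerateTwoFactorTokenAsync and VerifyTwoFactorTokenAsync calls are the Identity 2.0 APIs:

```csharp
// Register an SMS-based two-factor provider (e.g. inside ApplicationUserManager.Create)
manager.RegisterTwoFactorProvider("Phone Code",
    new PhoneNumberTokenProvider<ApplicationUser>
    {
        // Hypothetical message text; {0} is replaced with the generated code
        MessageFormat = "Your security code is {0}"
    });

// Later: generate a code for the user and verify what they typed back
string code = await manager.GenerateTwoFactorTokenAsync(userId, "Phone Code");
bool valid = await manager.VerifyTwoFactorTokenAsync(userId, "Phone Code", code);
```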

                  Account Lockout

Provides a way to lock out a user if they enter their password or two-factor codes incorrectly. The number of invalid attempts and the timespan for which users are locked out can be configured. A developer can optionally turn off account lockout for certain user accounts should they need to.
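As a sketch, these knobs live on the UserManager and would typically be set in ApplicationUserManager.Create. The specific values below are illustrative, not defaults:

```csharp
// Turn lockout on for newly created users
manager.UserLockoutEnabledByDefault = true;
// How long an account stays locked once the threshold is hit (illustrative value)
manager.DefaultAccountLockoutTimeSpan = TimeSpan.FromMinutes(5);
// How many failed password/two-factor attempts trigger the lockout (illustrative value)
manager.MaxFailedAccessAttemptsBeforeLockout = 5;
```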

                  Account Confirmation

The ASP.NET Identity system now supports account confirmation by confirming the user's email. This is a fairly common scenario in most websites today: when you register for a new account, you are required to confirm your email before you can do anything on the site. Email confirmation is useful because it prevents bogus accounts from being created. It is extremely useful if you use email as a method of communicating with the users of your website, such as forum, banking, e-commerce or social web sites.

Note: To send emails you can configure an SMTP server, or use one of the popular email services such as SendGrid, which integrates nicely with Windows Azure and requires no configuration on the part of the application developer.

In the sample project below, you need to hook up the email service for sending emails. You will not be able to reset your password until you have confirmed your account.
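A sketch of the token round-trip: a confirmation token is generated, sent by email, and validated when the user clicks through. Building the callback URL is app-specific and omitted here; the UserManager methods shown are the Identity 2.0 ones:

```csharp
// Generate an email-confirmation token for the newly registered user
string token = await manager.GenerateEmailConfirmationTokenAsync(user.Id);

// Email a message carrying the token (delivery goes through your IIdentityMessageService)
await manager.SendEmailAsync(user.Id, "Confirm your account",
    "Please confirm your account using this code: " + token);

// When the user clicks through, validate the token and mark the email confirmed
IdentityResult result = await manager.ConfirmEmailAsync(user.Id, token);
```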

                  Password Reset

Password reset is a feature that lets users reset their password if they have forgotten it.
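The reset flow mirrors email confirmation. A sketch using the Identity 2.0 token APIs, where newPassword stands in for whatever the user submits on the reset form:

```csharp
// Generate a single-use reset token and mail it to the user
string resetToken = await manager.GeneratePasswordResetTokenAsync(user.Id);
await manager.SendEmailAsync(user.Id, "Reset Password",
    "Reset your password using this code: " + resetToken);

// When the user returns with the token and a new password, apply it
IdentityResult result = await manager.ResetPasswordAsync(user.Id, resetToken, newPassword);
```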

                  Security Stamp (Sign out everywhere)

Supports a way to regenerate the security stamp for a user when the user changes their password or other security-related information, such as removing an associated login (Facebook, Google, Microsoft account, etc.). This is needed to ensure that any tokens (cookies) generated with the old password are invalidated. In the sample project, if you change the user's password, a new token is generated for the user and any previous tokens are invalidated.

This feature provides an extra layer of security for your application: when you change your password, you will be logged out everywhere you were logged in to this application. You can also extend this to sign out from all places where you have logged in. The sample shows how to do it.

                  You can configure this in Startup.Auth.cs by registering a CookieAuthenticationProvider as follows.

Code Snippet

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
    LoginPath = new PathString("/Account/Login"),
    Provider = new CookieAuthenticationProvider
    {
        // Enables the application to validate the security stamp when the user logs in.
        // This is a security feature which is used when you change a password or add an external login to your account.
        OnValidateIdentity = SecurityStampValidator.OnValidateIdentity<ApplicationUserManager, ApplicationUser>(
            validateInterval: TimeSpan.FromMinutes(30),
            regenerateIdentity: (manager, user) => user.GenerateUserIdentityAsync(manager))
    }
});

Make the type of the primary key extensible for Users and Roles

In 1.0 the type of the PK for Users and Roles was string. This means that when the ASP.NET Identity system was persisted in SQL Server using Entity Framework, we were using nvarchar. There were lots of discussions around this default implementation on Stack Overflow, and based on the incoming feedback we have provided an extensibility hook where you can specify the PK of your Users and Roles tables. This extensibility hook is particularly useful if you are migrating an application that was storing UserIds as GUIDs or ints.

Since you are changing the type of the PK for Users and Roles, you need to plug in the corresponding classes for Claims and Logins that take the correct PK type. The following snippet shows how you can change the PK to int.

                  For a full working sample please see


Code Snippet

public class ApplicationUser : IdentityUser<int, CustomUserLogin, CustomUserRole, CustomUserClaim>
{
}

public class CustomRole : IdentityRole<int, CustomUserRole>
{
    public CustomRole() { }
    public CustomRole(string name) { Name = name; }
}

public class CustomUserRole : IdentityUserRole<int> { }
public class CustomUserClaim : IdentityUserClaim<int> { }
public class CustomUserLogin : IdentityUserLogin<int> { }

public class ApplicationDbContext : IdentityDbContext<ApplicationUser, CustomRole, int, CustomUserLogin, CustomUserRole, CustomUserClaim>
{
}



                  Support IQueryable on Users and Roles

We have added support for IQueryable on the user and role stores so you can easily get the list of Users and Roles.

For example, the following code uses IQueryable to get the list of users from the UserManager. You can do the same to get the list of roles from the RoleManager.


Code Snippet

// GET: /Users/
public async Task<ActionResult> Index()
{
    return View(await UserManager.Users.ToListAsync());
}

                  Delete User account

In 1.0, if you had to delete a user, you could not do it through the UserManager. We have fixed this issue in this release, so you can do the following to delete a user:

Code Snippet

var result = await UserManager.DeleteAsync(user);

                  IdentityFactory Middleware/ CreatePerOwinContext


You can use a factory implementation to get an instance of UserManager from the OWIN context. This pattern is similar to the one we use for getting the AuthenticationManager from the OWIN context for sign-in and sign-out, and it is the recommended way of getting an instance of UserManager per request for the application.

The following snippet shows how you can configure this middleware in StartupAuth.cs. This is in the sample project listed below.


Code Snippet

app.CreatePerOwinContext<ApplicationUserManager>(ApplicationUserManager.Create);

The following snippet shows how you can get an instance of UserManager:

Code Snippet

HttpContext.GetOwinContext().GetUserManager<ApplicationUserManager>();


ASP.NET Identity uses Entity Framework for persisting the Identity system in SQL Server. To do this, the Identity system has a reference to the ApplicationDbContext. The DbContextFactory middleware returns an instance of the ApplicationDbContext per request, which you can use in your application.

The following code shows how you can configure it in StartupAuth.cs. The code for this middleware is in the sample project.

Code Snippet

app.CreatePerOwinContext(ApplicationDbContext.Create);

                  Indexing on Username

In the ASP.NET Identity Entity Framework implementation, we have added a unique index on the username using the new IndexAttribute in EF 6.1.0. We did this to ensure that usernames are always unique, removing a race condition in which you could end up with duplicate usernames.
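For illustration, this is roughly what the EF 6.1 IndexAttribute looks like on a property. The entity below is a made-up example, not the Identity model itself:

```csharp
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

public class Account // hypothetical entity, for illustration only
{
    public int Id { get; set; }

    // Unique index; EF 6.1 migrations emit CREATE UNIQUE INDEX for this column
    [Index("UserNameIndex", IsUnique = true)]
    [MaxLength(256)] // indexed nvarchar columns need a bounded length in SQL Server
    public string UserName { get; set; }
}
```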

                  Enhanced Password Validator

The password validator that shipped in ASP.NET Identity 1.0 was fairly basic and only validated the minimum length. There is a new password validator that gives you more control over the complexity of the password. Please note that even if you turn on all the settings in this validator, we still encourage you to enable two-factor authentication for user accounts.

Code Snippet

// Configure validation logic for passwords
manager.PasswordValidator = new PasswordValidator
{
    RequiredLength = 6,
    RequireNonLetterOrDigit = true,
    RequireDigit = true,
    RequireLowercase = true,
    RequireUppercase = true,
};

You can also add password policies to suit your own requirements. The following sample shows how you can extend Identity for this scenario.

                  ASP.NET Identity Samples NuGet package

We are releasing a Samples NuGet package to make it easier to install samples for ASP.NET Identity. This is a sample ASP.NET MVC application; please modify the code to suit your application before you deploy it in production. The sample should be installed in an empty ASP.NET application.

                  Following are the features in this samples package

                    • Initialize ASP.NET Identity to create an Admin user and Admin role.
                      • Since ASP.NET Identity is Entity Framework based in this sample, you can use the existing methods of initializing the database as you would have done in EF.
                    • Configure user and password validation.
                    • Register a user and login using username and password
                    • Login using a social account such as Facebook, Twitter, Google, Microsoft account etc.
    • Basic user management
      • Create, update, list and delete users; assign a role to a new user.
    • Basic role management
      • Create, update, list and delete roles.
                    • Account Confirmation by confirming email.
                    • Password Reset
                    • Two-Factor authentication
                    • Account Lockout
                    • Security Stamp (Sign out everywhere)
                    • Configure the Db context, UserManager and RoleManager  using IdentityFactory Middleware/ PerOwinContext.
                    • The AccountController has been split into Account and Manage controller. This was done to simplify the account management code.

The sample is still in preview, since we are still working on improving it and fixing issues, but it is in a state where you can easily see how to add ASP.NET Identity features to an application.

                    Entity Framework 6.1.0

ASP.NET Identity 2.0.0 depends on Entity Framework 6.1.0, which was also released earlier in the week. For more details please read the announcement post.

                    List of bugs fixed

                    You can look at all the bugs that were fixed in this release by clicking this link.

                    Samples/ Documentation

                      Known Issues/ Change list

                      Migrating from ASP.NET Identity 1.0 to 2.0.0

If you are migrating from ASP.NET Identity 1.0 to 2.0.0, please refer to this article on how you can use Entity Framework Code First migrations to migrate your database.

This article is based on migrating to ASP.NET Identity 2.0.0-alpha1, but the same steps apply to ASP.NET Identity 2.0.0.

                      Following are some changes to be aware of while migrating

    • The migration adds the missing columns to the AspNetUsers table. One of the columns is 'LockoutEnabled', which is set to false by default. This means that account lockout will not be enabled for existing user accounts. To enable account lockout for existing users, set 'defaultValue: true' in the migration code.
    • In Identity 2.0 we changed the IdentityDbContext to handle generic user types differently. You will not see the discriminator column, because the IdentityDbContext now works with 'ApplicationUser' instead of the generic 'IdentityUser'. Apps that have more than one type deriving from IdentityUser need to change their DbContext to call out all the derived classes explicitly. For example:

    Code Snippet

    public class ApplicationDbContext : IdentityDbContext<IdentityUser>
    {
        public ApplicationDbContext()
            : base("DefaultConnection", false)
        {
        }

        protected override void OnModelCreating(System.Data.Entity.DbModelBuilder modelBuilder)
        {
            base.OnModelCreating(modelBuilder);
            modelBuilder.Entity<ApplicationUser>();
            modelBuilder.Entity<FooUser>();
        }
    }

                        Migrating from ASP.NET Identity 2.0.0-Beta1 to 2.0.0

Following are the changes you will have to make to your application if you are upgrading from 2.0.0-Beta1 to 2.0.0 of Identity:

    • We have added the Account Lockout feature, which is new in 2.0.0 RTM.
    • GenerateTwoFactorAuthAsync generates the two-factor code only; users need to explicitly call 'NotifyTwoFactorTokenAsync' to send the code.
    • While migrating data, the EF migrations may add 'CreateIndex' for existing indices.

Tired of apps crashing without displaying an error? Blame Apple! – MS MVP Opinion… (I love it! Time for Apple to take some responsibility as well.)

                          Corey Roth [MVP]

                          A SharePoint MVP bringing you the latest time saving tips for SharePoint 2013, Office 365 / SharePoint Online and Visual Studio 2013.

                          Tired of apps crashing without displaying an error? Blame Apple!

Does anyone remember back in the day when an application had an error, it actually displayed some kind of error message?  Albeit the errors were usually cryptic, it was at least something you could run with.  Now if you look across the app ecosystem, there is a disturbing trend: when something goes wrong with an app, it just closes and dumps you to your home screen with no idea why.  It didn't use to be this way, though.  When an application had trouble starting up or hit some kind of unhandled exception, it would at least display some ugly modal dialog box, give you some kind of message, and then close.  Maybe those errors weren't user friendly (OK, they definitely weren't), but you could at least pass the information along to a developer or support person, and they would have something to work with.  Now applications just crash and dump you back to your home / start screen, and the poor person who has to support them has nothing to work from.  My question is: how have we as consumers been desensitized into believing this is now acceptable behavior for an application?


                          I could be totally wrong here, but I believe this all started with Apple’s launch of the iOS App Store in 2008.  Apple, a company noted for putting user experience first, seems to be the root cause of this.  Back when I could stand using an iPhone, I remember the first time I launched an app and it had an issue.  It just closed and I was sitting there looking at my screen full of icons.  I launched the app again and again it crashed and took me back to the home screen.  Lame.  I kept trying to open the app and there was nothing I could do.  Finally, I uninstalled the app and reinstalled it and it started working again.  How is that for user experience?  This is the same company that shows a computer with an unhappy face when your hard drive goes bad.  I get that consumers don’t always need the details of an error, but they should be available.

I thought surely it might get better over the years, but in fact it has gotten worse.  Much worse.  You see, Apple convinced all of us that it was perfectly acceptable for an app to crash and not tell you why.  So much so that when competitors started implementing their own marketplaces, we saw the same behavior.  Just a few weeks ago, I pulled out the Delta app on my Windows Phone to try to check in for a flight (one of the rare exceptions where I didn't fly Southwest).  My connection in the building was a bit sketchy, so I launched the app, it tried to connect, had trouble, and what ended up happening?  That's right, the app crashed.  Frustrating.

                          The software development world is shifting into a world of sandboxes and Windows Store applications are no exception.  Just a few minutes ago, I tried to launch iHeartRadio.  It gets the screen loaded and then what does it do?  It crashes and I am sitting there staring at my prize-winning start screen (I won $10 in the Windows 8.1 contest. 🙂 ).  This happens fairly regularly with Windows Store apps, so the tradition continues.

                          Holy crap!  What if software developers start using this error handling technique on software that really matters?  I would hate for the software for air traffic control when my flight is coming in to just crash and drop the operator to a command prompt.  I sure don’t want that happening to the software that controls our defense systems.  I know this is a totally different user segment, but we’re already seeing some of this behavior in business products like SharePoint where we get messages like “Something went wrong.”

The point of this article isn't just to beat up on Apple.  However, I really think it started here.  The other competing app stores just followed the same model, and as a result we get the same experience.  It seems, for now, that we as consumers have come to accept this behavior.  Maybe this is a limitation of the app store technologies in use and proper exception handling just isn't feasible.  I don't know.  I do know that some platforms have crash logs buried in various places.  However, that doesn't help consumers one bit.  All they know is that the app they wanted to run isn't working.

How do I think error handling should be done?  Well, first, an app should never just crash without giving you a reason why.  All exceptions should be caught and handled, and the application closed afterwards if necessary.  A consumer-oriented error message should be displayed to the user.  Consumers don't need to know the real reason why the application had an error, but they do need something to report.  The error message should also offer a course of action besides "contact your system administrator": a link to click on, a phone number to call, whatever makes sense.  Lastly, there should be a way to get more detailed information about the exact error directly from the error screen.  This can be a link that shows the actual error, stack dump, etc.  The error should have a unique id (such as a correlation id in SharePoint) so that a developer or the support team can find it after the fact.  This assumes the error information is being logged somewhere; it should be.  Maybe that's a lot to ask for on a mobile device, but I don't really think it is.
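The pattern described above can be sketched in a few lines of .NET: a last-chance handler that logs the exception under a correlation id and shows a friendly message with something to report. The log file name and message text here are made up for illustration:

```csharp
using System;
using System.IO;

static class LastChanceHandler
{
    public static void Register()
    {
        // Catch anything that would otherwise silently kill the process
        AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
        {
            // Unique id the user can report; support can find it in the log afterwards
            var correlationId = Guid.NewGuid();
            File.AppendAllText("crash.log",
                correlationId + ": " + e.ExceptionObject + Environment.NewLine);

            // Consumer-oriented message with a course of action, instead of a silent exit
            Console.WriteLine("Something went wrong and the app needs to close. " +
                              "Please contact support and mention reference id " + correlationId + ".");
        };
    }
}
```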

                          Am I completely off base here?  Do you remember applications crashing in this manner prior to iOS?  Leave a comment and let us hear your ideas!  I’m not saying the old cryptic error messages were much better, but I think as developers we need to proactively think about a better error handling experience.  Coming from a development background, I know that error handling is one of the last things we think about.  It shouldn’t be, though.  With the advent of touch-screen devices, consumers demand higher-quality user experiences.  There are a lot of great applications and web sites out there that really put UX first.  These experiences are all great up until the point that something goes wrong.  As consumers, it’s time for us to demand a better error handling experience.

                          Are Cloud Services Still Relevant?

                          Can’t sum things up better than Alexandre did – “Although Cloud Services are more complex, they’re a very flexible solution.”

                          Alexandre Brisebois ☁

                          Recently, Windows Azure Webjobs were introduced to Windows Azure Web Sites. Webjobs are background tasks that come in 3 different flavors (Manual, Triggered and Continuous). These jobs can be used to execute tasks such as database maintenance, RSS aggregation and queue processing. As new features roll out for Windows Azure Web Sites, it’s reasonable to ask whether Cloud Services are still relevant.

                          I see Cloud Services as a flexible scale unit, where I can spread compute-intensive workloads over an ecosystem of Role instances. As always, with great power (flexibility) comes great responsibility (we need to care about details).

                          View original post 588 more words

                          Various Senior .NET Developer positions available at an MS Gold Partner, part of the Britehouse Group! Contact me now! (Sorry, no recruiters – I am filling private positions.)

                          Required (not-negotiable):

                          ·         A minimum of 4 years’ experience developing code in C# and / or VB.NET and ASP.NET

                          ·         A minimum of 48 months Visual Studio 2005 / 2006 and / or 2008 experience.

                          ·         A minimum of 48 months Transact-SQL (Stored procedures, views and triggers) experience.

                          ·         A minimum of 48 months relational database design implementation using MS SQL Server 2000 / 2005 and / or 2008 experience.

                          ·         A minimum of 48 months HTML experience.

                          ·         A minimum of 48 months Javascript experience.

                          ·         5 years’ experience leading a development team.

                          ·         Proficiency in technical architecture and high-level design, as well as test framework design and implementation.

                          Senior Developers must be able to perform as Tech-Lead developers, with the following tasks:

                          ·         Technical lead for development, design and implementation of .NET based solutions as part of the projects team.

                          ·         Collaborate with Developers, Account Managers and Project Managers.

                          ·         Estimate development tasks and execute well on project schedules.
                          ·         Interact with clients to create requirement specifications for projects.
                          ·          Innovate new solutions and keep up with new emerging technologies.

                          ·         Mentoring of other developers.
                          Advantageous (nice-to-have):

                          ·         A 3-year computer science degree or equivalent.

                          Windows SharePoint Server.
                          Microsoft Office SharePoint Server.
                          Microsoft CRM.
                          Experience in web analytics.

                          A look at the 3 new options in the Task Parallel Library in .Net 4.5

                          Astute users of the Task Parallel Library might have noticed three new options available across TaskCreationOptions and TaskContinuationOptions in .NET 4.5: DenyChildAttach, HideScheduler, and (on TaskContinuationOptions) LazyCancellation.  I wanted to take a few minutes to share more about what these are and why we added them.


                          As a reminder, when a Task is created with TaskCreationOptions.AttachedToParent or TaskContinuationOptions.AttachedToParent, the creation code looks to see what task is currently running on the current thread (this Task’s Id is available from the static Task.CurrentId property, which will return null if there isn’t one).  If it finds there is one, the Task being created registers with that parent Task as a child, leading to two additional behaviors: the parent Task won’t transition to a completed state until all of its children have completed as well, and any exceptions from faulted children will propagate up to the parent Task (unless the parent Task observes those exceptions before it completes).  This parent/child relationship and hierarchy is visible in Visual Studio’s Parallel Tasks window.

                          If you’re responsible for all of the code in your solution, you have control over whether the tasks you create try to attach to a parent task.  But what if your code creates some Tasks, and from those Tasks calls out to code you don’t own?  The code you call might use AttachedToParent and attach children to your tasks.  Did you expect that?  Is your code reliable against that?  Have you done all of the necessary testing to ensure it?

                          For this situation, we introduced DenyChildAttach.  When a task uses AttachedToParent but finds there is no current Task, it just doesn’t attach to anything, behaving as if AttachedToParent wasn’t supplied.  If there is a parent task, but that parent task was created with DenyChildAttach, the same thing happens: the task using AttachedToParent won’t see a parent and thus won’t attach to anything, even though technically there was a task to which it could have been attached.  It’s a sleight of hand or Jedi mind trick: “this is not the parent you’re looking for.”
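                          To make the behavior concrete, here is a minimal sketch (mine, not from the original article) of DenyChildAttach masking a would-be parent:

```csharp
using System;
using System.Threading.Tasks;

class DenyChildAttachSketch
{
    static void Main()
    {
        // Parent created with DenyChildAttach: tasks created inside it
        // with AttachedToParent will not see it as a parent.
        var parent = Task.Factory.StartNew(() =>
        {
            Task.Factory.StartNew(() =>
            {
                // This would-be child runs detached: the parent won't
                // wait for it, and its exceptions won't propagate up.
            }, TaskCreationOptions.AttachedToParent);
        }, TaskCreationOptions.DenyChildAttach);

        parent.Wait(); // returns as soon as the outer delegate finishes
    }
}
```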

                          With 20/20 hindsight, if we could do .NET 4 over again, I personally would have chosen to make both sides of the equation opt-in.  Today, the child task gets to opt in to being a child by specifying AttachedToParent, but the parent must opt out if it doesn’t want to be one.  In retrospect, I think it would have been better if both sides had the choice to opt in, with the parent specifying a mythical flag like AllowsChildren to opt in rather than DenyChildAttach to opt out.  Nevertheless, this is just a question of default.  You’ll notice that the new Task.Run method internally specifies DenyChildAttach when creating its Tasks, in effect making this the default for the API we expect to become the most common way of launching tasks.  If you want explicit control over the TaskCreationOptions used, you can instead use the existing Task.Factory.StartNew method, which becomes the more advanced mechanism and allows you to control the options, object state, scheduler, and so on.


                          With code written in .NET 4, we saw this pattern to be relatively common:

                          private void button_Click(…)
                          {
                              … // #1 on the UI thread
                              Task.Factory.StartNew(() =>
                              {
                                  … // #2 long-running work, so offloaded to non-UI thread
                              }).ContinueWith(t =>
                              {
                                  … // #3 back on the UI thread
                              }, TaskScheduler.FromCurrentSynchronizationContext());
                          }

                          In other words, Tasks and continuations became a way to offload some work from the UI thread, and then run some follow-up work back on the UI thread.  This was accomplished by using the TaskScheduler.FromCurrentSynchronizationContext method, which looks up SynchronizationContext.Current and constructs a new TaskScheduler instance around it: when you schedule a Task to this TaskScheduler, the scheduler will then pass the task along to the SynchronizationContext to be invoked.

                          That’s all well and good, but it’s important to keep in mind the behavior of the Task-related APIs introduced in .NET 4 when no TaskScheduler is explicitly provided.  The TaskFactory class has a bunch of overloaded methods (e.g. StartNew), and when you construct a TaskFactory class, you have the option to provide a TaskScheduler.  Then, when you call one of its methods (like StartNew) that doesn’t take a TaskScheduler, the scheduler that was provided to the TaskFactory’s constructor is used.  If no scheduler was provided to the TaskFactory, then if you call an overload that doesn’t take a TaskScheduler, the TaskFactory ends up using TaskScheduler.Current at the time the call is made (TaskScheduler.Current returns the scheduler associated with whatever Task is currently running on that thread, or if there is no such task, it returns TaskScheduler.Default, which represents the ThreadPool).  Now, the TaskFactory returned from Task.Factory is constructed without a specific scheduler, so for example when you write Task.Factory.StartNew(Action), you’re telling TPL to create a Task for that Action and schedule it to TaskScheduler.Current.
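                          As a small illustration of those resolution rules (my sketch, not the article’s; it assumes it runs on a thread with a SynchronizationContext, such as a UI thread):

```csharp
using System.Threading.Tasks;

class FactoryDefaultsSketch
{
    static void Demo()
    {
        // A factory pinned to a specific scheduler: StartNew overloads
        // that don't take a scheduler will always use the pinned one.
        var uiFactory = new TaskFactory(
            TaskScheduler.FromCurrentSynchronizationContext());
        uiFactory.StartNew(() => { /* runs via the captured context */ });

        // Task.Factory has no pinned scheduler, so this resolves to
        // TaskScheduler.Current at call time (which falls back to
        // TaskScheduler.Default if no task is currently running).
        Task.Factory.StartNew(() => { /* scheduler depends on caller */ });
    }
}
```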

                          In many situations, that’s the right behavior.  For example, let’s say you’re implementing a recursive divide-and-conquer problem, where you have a task that’s supposed to process some chunk of work, and it in turn subdivides its work and schedules tasks to process those chunks.  If that task was running on a scheduler representing a particular pool of threads, or if it was running on a scheduler that had a concurrency limit, and so on, you’d typically want those tasks it then created to also run on the same scheduler.

                          However, it turns out that in other situations, it’s not the right behavior.  And one such situation is like that I showed previously.  Imagine now that your code looked like this:

                          private void button_Click(…)
                          {
                              … // #1 on the UI thread
                              Task.Factory.StartNew(() =>
                              {
                                  … // #2 long-running work, so offloaded to non-UI thread
                              }).ContinueWith(t =>
                              {
                                  … // #3 back on the UI thread
                                  Task.Factory.StartNew(() =>
                                  {
                                      … // #4 compute-intensive work we want offloaded to non-UI thread (bug!)
                                  });
                              }, TaskScheduler.FromCurrentSynchronizationContext());
                          }

                          This seems logical: we do some work on the UI thread, then we offload some work to the background, when that work completes we hop back to the UI thread, and then we kick off another task to run in the background.  Unfortunately, this is buggy.  Because the continuation was scheduled to TaskScheduler.FromCurrentSynchronizationContext, that scheduler is TaskScheduler.Current during the execution of the continuation.  And in that continuation we’re calling Task.Factory.StartNew using an overload that doesn’t accept a TaskScheduler.  Which means that this compute-intensive work is actually going to be scheduled back to the UI thread! Ugh.

                          There are of course already solutions to this.  For example, if you own all of this code, you could explicitly specify TaskScheduler.Default (the ThreadPool scheduler) when calling StartNew, or you could change the structure of the code so that the StartNew became a continuation off of the continuation, e.g.

                          private void button_Click(…)
                          {
                              … // #1 on the UI thread
                              Task.Factory.StartNew(() =>
                              {
                                  … // #2 long-running work, so offloaded to non-UI thread
                              }).ContinueWith(t =>
                              {
                                  … // #3 back on the UI thread
                              }, TaskScheduler.FromCurrentSynchronizationContext()).ContinueWith(t =>
                              {
                                  … // #4 compute-intensive work we want offloaded to non-UI thread
                              });
                          }

                          But neither of those solutions is relevant if the code inside of the continuation is code you don’t own, e.g. if you’re calling out to some 3rd-party code which might unsuspectingly use Task.Factory.StartNew without specifying a scheduler and inadvertently end up running its code on the UI thread.  This is why in production library code I write, I always explicitly specify the scheduler I want to use.

                          For .NET 4.5, we introduced the TaskCreationOptions.HideScheduler and TaskContinuationOptions.HideScheduler values.  When supplied to a Task, this makes it so that in the body of that Task, TaskScheduler.Current returns TaskScheduler.Default, even if the Task is running on a different scheduler: in other words, it hides it, making it look like there isn’t a Task running, and thus TaskScheduler.Default is returned.  This option helps to make your code more reliable if you find yourself calling out to code you don’t own. Again with our initial example, I can now specify HideScheduler, and my bug will be fixed:

                          private void button_Click(…)
                          {
                              … // #1 on the UI thread
                              Task.Factory.StartNew(() =>
                              {
                                  … // #2 long-running work, so offloaded to non-UI thread
                              }).ContinueWith(t =>
                              {
                                  … // #3 back on the UI thread
                                  Task.Factory.StartNew(() =>
                                  {
                                      … // #4 compute-intensive work offloaded to non-UI thread (fixed!)
                                  });
                              }, CancellationToken.None,
                                 TaskContinuationOptions.HideScheduler,
                                 TaskScheduler.FromCurrentSynchronizationContext());
                          }

                          One additional thing to note is around the new Task.Run method, which is really just a simple wrapper around Task.Factory.StartNew.  We expect Task.Run to become the most common method for launching new tasks, with developers falling back to using Task.Factory.StartNew directly only for more advanced situations where they need more fine-grained control, e.g. over which scheduler to target.  I already noted that Task.Run specifies DenyChildAttach, so that no tasks created within a Task.Run task can attach to it.  Additionally, Task.Run always specifies TaskScheduler.Default, so that Task.Run always uses the ThreadPool and ignores TaskScheduler.Current.  So, even without HideScheduler, if I’d used Task.Run(Action) instead of Task.Factory.StartNew(Action) in my initially buggy code, it would have been fine.
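                          As a rough sketch of those defaults (my approximation, not the article’s), Task.Run(action) behaves much like this explicit Task.Factory.StartNew call:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class TaskRunSketch
{
    // Approximates Task.Run(Action): DenyChildAttach so nothing can
    // attach to the task, and always the ThreadPool (default)
    // scheduler, regardless of TaskScheduler.Current.
    public static Task RunLike(Action action)
    {
        return Task.Factory.StartNew(
            action,
            CancellationToken.None,
            TaskCreationOptions.DenyChildAttach,
            TaskScheduler.Default);
    }
}
```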


                          Consider the following code:

                          Task a = Task.Run(…);
                          Task b = a.ContinueWith(…, cancellationToken);

                          The ContinueWith method will create Task ‘b’ such that ‘b’ will be scheduled when ‘a’ completes.  However, because a CancellationToken was provided to ContinueWith, if cancellation is requested before Task ‘a’ completes, then Task ‘b’ will just immediately transition to the Canceled state.  So far so good… there’s no point in doing any work for ‘b’ if we know the user wants to cancel it.  Might as well be aggressive about it.

                          But now consider a slightly more complicated variation:

                          Task a = Task.Run(…);
                          Task b = a.ContinueWith(…, cancellationToken);
                          Task c = b.ContinueWith(…);

                          Here there’s a second continuation, off of Task ‘b’, resulting in Task ‘c’.  When Task ‘b’ completes, regardless of what state ‘b’ completes in (RanToCompletion, Faulted, or Canceled), Task ‘c’ will be scheduled.  Now consider the following situation: Task ‘a’ starts running.  Then a cancellation request comes in before ‘a’ finishes, so ‘b’ transitions to Canceled as we’d expect.  Now that ‘b’ is completed, Task ‘c’ gets scheduled, again as we’d expect.  However, this now means that Task ‘a’ and Task ‘c’ could be running concurrently.  In many situations, that’s fine.  But if you’d constructed your chain of continuations under the notion that no two tasks in the chain could ever run concurrently, you’d be sorely disappointed.

                          Enter LazyCancellation.  By specifying this flag on a continuation that has a CancellationToken, you’re telling TPL to ignore that CancellationToken until the antecedent has already completed.  In other words, the cancellation check is lazy: rather than the continuation doing the work to register with the token to be notified of a cancellation request, it instead doesn’t do anything, and then only when the antecedent completes and the continuation is about to run does it poll the token and potentially transition to Canceled. In our previous example, if I did want to avoid ‘a’ and ‘c’ potentially running concurrently, we could have instead written:

                          Task a = Task.Run(…);
                          Task b = a.ContinueWith(…, cancellationToken,
                              TaskContinuationOptions.LazyCancellation, TaskScheduler.Default);
                          Task c = b.ContinueWith(…);

                          Here, even if cancellation is requested early, ‘b’ won’t transition to Canceled until ‘a’ completes, such that ‘c’ won’t be able to start until ‘a’ has completed, and all would be right in the world again.

                          The Essential How To: Upgrade from Team Foundation Server 2012 to 2013.

                          This blog gives you the step by step actions required for the upgrade from Team Foundation Server 2012 to 2013.

                          > Prerequisites: (IMPORTANT)

                          Make sure you have appropriate accounts with permissions for SQL, SharePoint, TFS and Machine level Admin privileges on both the servers.

                          1. You must be an Administrator on the Server.
                          2. You must be a part of the Farm Administrators group in SharePoint.

                          Detailed information about this can be found here.

                             Back up all the databases from TFS 2012.

                          Make sure you stop all TFS services, using the TFSServiceControl.exe quiesce command. More info here.

                          Databases to back up:

                          1. The Configuration Database (For instance, Tfs_Configuration)
                          2. The Collection Database(s) (For instance, Tfs_DefaultCollection)
                          3. The Warehouse Database* (For instance, Tfs_Warehouse)
                          4. The Analysis Database* (For instance, Tfs_Analysis)
                          5. Report Server Databases* (For instance, ReportServer and ReportServerTempDB)
                          6. SharePoint Content Databases** (For instance, WSS_Content)

                          * If you have configured Reporting in TFS 2012

                          ** If you have configured SharePoint in TFS 2012

                          Also, you have to back up the Encryption keys of ReportServer database.

                             In the new Server,

                          1. Install a compatible version of SharePoint (Foundation/Standard/Enterprise).
                             Note: Configuration fails with SQL Server 2012 RTM. You need the SP1 update installed.

                          > Step 1: Restoring Databases.

                          On the new server, using SQL Server Management Studio, connect to the Database Engine and restore all the databases (Configuration, Collection(s), SharePoint Content, Warehouse and Reporting). Also, connect to the Analysis Engine and restore the Analysis database.


                          If you have Reports configured and want to do the same in 2013, go to Step 2, else skip it.

                          > Step 2: Configure Reporting.

                          Open Reporting Services Configuration Manager,


                          Click on Database on the left pane and click on Change Database.

                          Choose an existing report server database and click on Next.


                          Enter the credentials and click on Next.


                          Select the ReportServer database and click on Next. Review and complete the wizard.


                          Now, you’ll have to restore the Encryption key. To do that, click on Encryption Keys in the left pane and click on Restore.

                          Specify the file and enter the password you used to back up the encryption key and click on OK.


                          Go to the Report Manager URL and click on the URLs to see if you are able to successfully browse through the reports.

                          > Step 3: Install and Configure SharePoint.

                          Install a compatible version of SharePoint and run the SharePoint Products Configuration Wizard.

                          To check compatible versions, refer to the Team Foundation Server 2013 Installation Guide, available here.

                          Click on Next.


                          Configuring a new farm would require reset of some services. Confirm by clicking on Yes.


                          Select Create a new server farm and click on Next.


                          Enter the Database Server and give a name for the configuration Database. Specify the service account that has the permissions to access that database server.


                          The recent versions of SharePoint would ask for a passphrase to secure the farm configuration data. Enter a passphrase.


                          Click on Specify Port Number and give “17012”. TFS usually uses this port for SharePoint. You can however give another unused port number too.
                          Select NTLM for default security settings.


                          Review and complete the configuration wizard.


                          Now that we’ve configured SharePoint, go to SharePoint Central Admin page, give the admin credentials and create a new web application for TFS. It will automatically create a new content database for you.
                          If you want to restore the content database from the previous version, say SharePoint 2010, you must first upgrade the content database and attach it to the web application.

                          • First, click on Application Management -> Manage Content Databases. Select the web application you created for TFS and remove its content database.
                          • Upgrade and attach the old content database by using Mount-SPContentDatabase. Example:
                            Mount-SPContentDatabase “MyDatabase” -DatabaseServer “MyServer” -WebApplication http://sitename
                            (More information on that command, here.)

                          > Step 4: Configure Team Foundation Server.

                          Once we’ve restored all the databases and the encryption key and/or configured reporting, we are all set to upgrade TFS to the latest version.
                          Start Team Foundation Administration Console and Click on Application tier.


                          Click on Configure Installed Features and choose Upgrade.


                          Enter the SQL Server/Instance name and Click on list available databases.


                          Confirm that you have taken a backup and click on Next (yes, taking a backup is that important).

                          Choose the Account and Authentication Method.


                          Select Configure Reporting for use with Team Foundation Server and click on Next.


                          Note: If your earlier deployment was not using Reporting Service then you would not be able to add Reporting Service during the upgrade (this option would be disabled). You can configure Reporting Service with TFS later on after the upgrade is complete.

                          Give the Reporting Services Instance name and click on Populate URLs to populate the Report Server and Report Manager URLs. Click on Next.


                          Since we are configuring with Reports, we need to specify the Warehouse database. Enter the New SQL Instance that hosts the Warehouse database, do a Test and Click on List Available Databases. Select Tfs_Warehouse and click on Next.


                          In the next screen, Input the New Analysis Services Instance, do a Test and Click on Next.
                          Note: If you’ve restored the Analysis Database (Tfs_Analysis) from the previous instance, TFS will automatically identify and use the same.


                          Enter the Report Reader account Credentials, do a Test and click on Next.


                          Check Configure SharePoint Products for Use with Team Foundation Server. Click on Next.
                          Note: This is for Single Server Configuration, meaning SharePoint is installed on the same server as TFS.


                          Note: If your earlier deployment was not using SharePoint then you would not be able to add SharePoint during the upgrade (this option would be disabled). You can configure SharePoint with TFS later on after the upgrade is complete.

                          In this screen, make sure you point to the correct SharePoint farm. Click on Test to test out the connection to the server. In our case, this is a Single-Server deployment. We’ve configured SharePoint manually, and created a web for TFS.


                          Make sure all the Readiness Checks are passed without any errors. Click on Configure.


                          Now, the basic TFS configuration is completed successfully. Click on Next to initiate the Project Collection(s) upgrade.


                          That’s it. Project Collections are now upgraded and attached.


                          Click on Next to review a summary and close this wizard.


                          We are almost done. Notice that in the summary of TFS Administration Console, we still see the old server URL.

                          This is very important.

                          We need to change this to reflect the new server, using the Change URLs option.



                          Do a test on both the URLs and Click on OK.

                          Now, try browsing the Notification URL to see if you are able to view the web access without any errors.


                          Next, Click on the Team Project Collections in the left pane, Select your Collection(s) and click on Team Projects. See if you have the projects listed.


                          Under SharePoint site, check if the URL points to the correct location, if not, Click on Edit Default Site Location and edit it.


                          See if the URL under Reports Folder points to the correct location, if not, Click on Edit Default Folder Location and edit it.


                          Next, Click on Reporting in the left pane and see if the configurations are saved and are started.


                          > Step 5: Configuring Extensions for SharePoint (Only when SharePoint is on a different server)

                          If you’ve configured SharePoint on a different server, you need to add Extensions for SharePoint products manually.
                          First, you need to install the Remote SharePoint Extensions on the server; these are available as part of the TFS installation media. Run the configuration wizard for Extensions for SharePoint. For a detailed blog on how to do that, click here.

                          That’s it. With that TFS is configured correctly.
                          As a final step, open Team Explorer, connect to the Team Foundation Server and create a new Team Project. See if it creates properly with Reports and a SharePoint site.

                          Now, your new TFS 2013 is up and running, with upgraded collections.

                          How does TFS backup scheduling tool handle collection add/delete/detach?

                          We are used to creating collections upon request. This implies collections would be created/attached/detached continuously.

                          What is the impact of these on TFS backup plan?
                          Does it see new collections or should we re-configure each time?
                          What about detached collections?

                          I took some time to probe this. This is what I did. Firstly I scheduled a backup plan.


                          I have a simple TFS setup. There are no SharePoint or Reporting databases (these aren’t usually affected by collection level activities). Three databases are backed up

                            1. Tfs_Configuration
                            2. Tfs_DefaultCollection
                            3. Tfs_Warehouse


                            I waited for some time to confirm the backups run fine.

                            Then I added another collection – backup_test.


                            But that collection is not listed in the wizard. We still see only the three initial databases.


                            So should we add it – by manually reconfiguring the backup plan? That would be tedious and would also create a new backupset.xml.
                            Luckily no – we don’t have to.
                            We should wait for the next scheduled job – in my case a transactional backup in another 3 minutes.

                            And voila! Once the transaction job is kicked off – the new collection is added to the existing plan.


                            If you look into the backup folder you will see that it took a full backup of the new collection first and then the transaction log (instead of just the transaction log). This is to maintain the SQL backup relationship between a full backup and a transaction/differential backup – the LSN continuity.

                            Moving forward, this database would be backed up just like the other databases in the plan.

                            The same behavior can be seen in a TFS collection delete or detach. The backup plan would modify itself when its next job is kicked off.

                            PS – Do note this doesn’t apply to the Reporting or SharePoint databases. The tool doesn’t pick up new databases, and a delete of the databases listed in the plan may break the backup schedule. You would have to re-configure the plan to make changes.

                            How To : Get data from Windows Azure Marketplace into your Office application

                            This post walks through a published app for Office, along the way showing you everything you need to get started building your own app for Office that uses a data service from the Windows Azure Marketplace.

                            Ever wondered how to get premium, curated data from Windows Azure Marketplace, into your Office applications, to create a rich and powerful experience for your users? If you have, you are in luck.

                            Introducing the first ever app for Office that builds this integration with the Windows Azure Marketplace – US Crime Stats. This app enables users to insert crime statistics, provided by DATA.GOV, right into an Excel spreadsheet, without ever having to leave the Office client.

                            One challenge faced by Excel users is finding the right set of data, and apps for Office provides a great opportunity to create rich, immersive experiences by connecting to premium data sources from the Windows Azure Marketplace.

                            What is the Windows Azure Marketplace?

                            The Windows Azure Marketplace (also called Windows Azure Marketplace DataMarket or just DataMarket) is a marketplace for datasets, data services and complete applications. Learn more about Windows Azure Marketplace.

                            This blog article is organized into two sections:

                            1. The U.S. Crime Stats Experience
                            2. Writing your own Office Application that gets data from the Windows Azure Marketplace

                            The US Crime Stats Experience

                            You can find the app on the Office Store. Once you add the US Crime Stats app to your collection, you can go to Excel 2013, and add the US Crime Stats app to your spreadsheet.

                            Figure 1. Open Excel 2013 spreadsheet


                            Once you choose US Crime Stats, the application is shown in the right pane. You can search for crime statistics based on City, State, and Year.

                            Figure 2. US Crime Stats app is shown in the right task pane


                            Once you enter the city, state, and year, click ‘Insert Crime Data’ and the data will be inserted into your spreadsheet.

                            Figure 3. Data is inserted into an Excel 2013 spreadsheet


                            What is going on under the hood?

                            In short, when the ‘Insert Crime Data’ button is chosen, the application takes the input (city, state, and year) and makes a request to the DataMarket services endpoint for DATA.GOV in the form of an OData Call. When the response is received, it is then parsed, and inserted into the spreadsheet using the JavaScript API for Office.
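                            The post doesn’t show the app’s actual query, but the shape of such an OData call can be sketched in JavaScript. The service root, the “CityCrime” entity set and the field names below are assumptions for illustration only, not the app’s real endpoint:

```javascript
// Illustrative only: the service root, "CityCrime" entity set and field
// names are assumptions, not the US Crime Stats app's actual endpoint.
function buildCrimeStatsQuery(serviceRoot, city, state, year) {
  // OData filters go in the $filter query option; $format=json asks the
  // service to return JSON instead of the default Atom feed.
  var filter = "City eq '" + city + "' and State eq '" + state +
               "' and Year eq " + year;
  return serviceRoot + "/CityCrime?$filter=" +
         encodeURIComponent(filter) + "&$format=json";
}

var url = buildCrimeStatsQuery(
  "https://api.datamarket.azure.com/data.gov/Crimes", "Seattle", "WA", 2008);
console.log(url);
```

                            The response would then be parsed and written into the spreadsheet with the JavaScript API for Office, as described above.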

                            Writing your own Office application that gets data from the Windows Azure Marketplace

                            Prerequisites for writing Office applications that get data from Windows Azure Marketplace

                            How to write Office applications using data from Windows Azure Marketplace

                            The MSDN article, Create a Marketplace application, covers everything necessary for creating a Marketplace application, but below are the steps in order.

                            1. Register with the Windows Azure Marketplace:
                              • You need to register your application first on the Windows Azure Marketplace Application Registration page. Instructions on how to register your application for the Windows Azure Marketplace are found in the MSDN topic, Register your Marketplace Application.
                            2. Authentication:
                            3. Receiving Data from the Windows Azure Marketplace DataMarket service
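                            For the authentication step, DataMarket data services accept HTTP Basic authentication, with your Marketplace account key supplied as the password (the username can be any string). A node.js sketch of building the header (the key below is a placeholder):

```javascript
// DataMarket data services use HTTP Basic authentication: any username,
// with your Marketplace account key as the password.
function dataMarketAuthHeader(accountKey) {
  var token = Buffer.from("user:" + accountKey).toString("base64");
  return "Basic " + token;
}

// The result is sent as the "Authorization" header on each OData request.
console.log(dataMarketAuthHeader("YOUR-ACCOUNT-KEY"));
```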

                            modern.IE – Great tool for testing Web Sites! Get it now!

                            The modern.IE scan analyzes the HTML, CSS, and JavaScript of a site or application for common coding issues. It warns about practices such as incomplete specification of CSS properties, invalid or incorrect doctypes, and obsolete versions of popular JavaScript libraries.

                            It’s easiest to use modern.IE by going to the modern.IE site and entering the URL to scan there. To customize the scan, or to use the scan to process files behind a firewall, you can clone and build the files from this repo and run the scan locally.

                            How it works

                            The modern.IE local scan runs on a system behind your firewall; that system must have access to the internal web site or application that is to be scanned. Once the files have been analyzed, the analysis results are sent back to the modern.IE site to generate a complete formatted report that includes advice on remediating any issues. The report generation code and formatted pages from the modern.IE site are not included in this repo.

                            Since the local scan generates JSON output, you can alternatively use it as a standalone scanner or incorporate it into a project’s build process by processing the JSON with a local script.

                            The main service for the scan is in the app.js file; it acts as an HTTP server. It loads the contents of the web page and calls the individual tests, located in /lib/checks/. Once all the checks have completed, it responds with a JSON object representing the results.

                            Installation and configuration

                            • Install node.js. You can use a pre-compiled Windows executable if desired. Version 0.10 or higher is required.
                            • If you want to modify the code and submit revisions, install git. You can choose GitHub for Windows instead if you prefer. Then clone this repository. If you just want to run locally, download the latest version from here and unzip it.
                            • Install dependencies. From the subdirectory, type: npm install
                            • If desired, set an environment variable PORT to define the port the service will listen on. By default the port number is 1337. The Windows command to set the port to 8181 would be: set PORT=8181
                            • Start the scan service: From the subdirectory, type: node app.js and the service should respond with a status message containing the port number it is using.
                            • Run a browser and go to the service’s URL; assuming you are using the default port and are on the same computer, the URL would be: http://localhost:1337/
                            • Follow the instructions on the page.
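                            The port handling described above likely follows the standard node.js pattern (an assumption; check app.js for the actual code):

```javascript
// Assumed sketch of how app.js resolves its port: use the PORT environment
// variable when set, otherwise fall back to the documented default of 1337.
function resolvePort(env) {
  var parsed = parseInt(env.PORT, 10);
  return isNaN(parsed) ? 1337 : parsed;
}

console.log(resolvePort({}));               // 1337 (the default)
console.log(resolvePort({ PORT: "8181" })); // 8181 (after `set PORT=8181`)
```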


                            The project contains a set of unit tests in the /test/ directory. To run the unit tests, type grunt nodeunit.

                            JSON output

                            Once the scan completes, it produces a set of scan results in JSON format:

                                "testName" : {
                                    "testName": "Short description of the test",
                                    "passed" : true/false,
                                    "data": { /* optional data describing the results found */ }

                            The data property will vary depending on the test, but will generally provide further detail about any issues found.
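                            Because the output is plain JSON, a local script can process it directly, for example to fail a build when any check fails. A sketch (the sample results object below is made up for illustration):

```javascript
// Hypothetical sample of scan output, following the shape documented above.
var results = {
  cssprefixes: { testName: "CSS prefixes checked", passed: false,
                 data: { missing: ["-ms-transform"] } },
  doctype:     { testName: "Valid doctype", passed: true }
};

// Collect the names of failed checks so a build script can act on them.
function failedChecks(results) {
  return Object.keys(results).filter(function (name) {
    return !results[name].passed;
  });
}

console.log(failedChecks(results)); // [ 'cssprefixes' ]
```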

                            Creating Nintex Workflow Custom Actions

                            Nintex is a great product for creating workflows within SharePoint: much more flexible than SharePoint Designer, and far less complicated than using Visual Studio. There is already an extensive list of Actions for Nintex Workflow, but sometimes you need a custom action that is specific to your business, such as bringing back data from your bespoke application or a UNIX text file. This blog will explain the different parts of creating a custom action.

                            The picture above shows the collection of Actions that comes with Nintex Workflow.

                            To build a custom action there is, unfortunately, a collection of files you are required to create, even though only one section performs the actual business logic; the rest is all supporting code. Below is a basic project setup of all the files you need. I will explain each section throughout this post, with a walkthrough at the end of how to create a ReadFromPropertyBag custom action.


                            Before we can even start we require the following References.

                            • System.Workflow.Activities.dll (Not shown in the picture above; I forgot to add it before taking the screenshot.)
                            • Microsoft.SharePoint.dll
                            • Microsoft.SharePoint.WorkflowActions.dll
                            • System.Workflow.ComponentModel.dll
                            • Nintex.Workflow.dll – Can be found at C:\Program Files\Nintex\Nintex Workflow 2010\Binaries.
                            • Nintex.Workflow.ServerControls.dll – Can be found at C:\Program Files\Nintex\Nintex Workflow 2010\Binaries. (Not shown in the picture above; I forgot to add it before taking the screenshot.)


                            A web application feature that, when activated, adds the custom action to the web application and authorizes it to be used there.

                            CustomActions – ActionName – NWAFile

                            An element file holding an XML file with all the required details for a NintexWorkflowActivity. It is this file that is read by the feature receiver to add the custom action to the web application.

                            CustomActions – ActionName – ActionNameActivity

                            A class that inherits from Nintex.Workflow.Activities.ProgressTrackingActivity. This file contains all the dependency properties of the activity. Dependency properties are object properties that can be bound to other elements of a workflow, such as workflow variables or dependency properties of other activities. They are used to store the data the activity requires, or to output data from the activity. This is also the file that contains the execution of the actual business logic.

                            CustomActions – ActionName – ActionNameAdapter

                            A class that inherits from Nintex.Workflow.Activities.Adapters.GenericRenderingAction. You will need to implement the abstract methods of GenericRenderingAction:

                            • GetDefaultConfig() – defines the parameters that the user can configure for this action and sets the default label for the action.
                            • ValidateConfig() – adds logic to validate the configuration, and displays any error messages to the user if there is an issue.
                            • AddActivityToWorkflow() – creates an instance of the Activity, sets its properties based on the config, then adds it to the parent activity.
                            • GetConfig() – reads the properties from context.Activity and updates the values in the NWActionConfig.
                            • BuildSummary() – constructs an ActionSummary class to display details about this action.

                            I find the adapter is very similar for every action. Once you have the basics of one, you can quickly put an adapter together for any custom action just by adding or removing a parameter.

                            Layouts – NintexWorkflow – CustomActions – ActionName – Images

                            I have two icon .png files in here. One is sized at 49×49 pixels and the other at 30×30 pixels. These files are referenced in the NWAFile, and used to display the custom action to the user in the toolbox area (30×30), or in the actual workflow itself (49×49).

                            You could add a third icon here for a warning, shown when the custom action isn’t configured. This would be 49×49 pixels too.

                            Layouts – NintexWorkflow – CustomActions – ActionName – ActionNameDialog

                            An application page inheriting from Nintex.Workflow.ServerControls.NintexLayoutsBase. The dialog is what appears to the user when they configure the custom action through the browser. There is no code behind here: you mainly lay out the controls in the aspx and set up two JavaScript functions to read the configuration in on load and write it out on save.

                            Walkthrough creating ReadFromPropertyBag Custom Action.

                            Now that you understand the basic roles of all the files required to make one custom action, I will walk through creating a custom action that reads from the current SPWeb property bag. The user will pass in the property name to obtain the value.

                            If you create a solution with a layout similar to mine above, replacing “ActionName” with ReadFromPropertyBag, your solution and file layout should look similar to the one below.


                            Starting with the ReadFromPropertyBagActivity file. This inherits from Nintex.Workflow.Activities.ProgressTrackingActivity. We will first add all the public static DependencyProperties. The default ones are __ListItem, __Context and __ListId. Then we will add two of our own: one to hold the property name and one for the result output. You can delete the designer.cs file.

                            Each DependencyProperty will have its own public property.

                            using System;
                            using System.Workflow.ComponentModel;
                            using Microsoft.SharePoint.Workflow;
                            using Microsoft.SharePoint.WorkflowActions;
                            using Nintex.Workflow;
                            using Microsoft.SharePoint;

                            namespace CFSP.CustomActions.ReadFromPropertyBag
                            {
                                public class ReadFromPropertyBagActivity : Nintex.Workflow.Activities.ProgressTrackingActivity
                                {
                                    public static DependencyProperty __ListItemProperty = DependencyProperty.Register("__ListItem", typeof(SPItemKey), typeof(ReadFromPropertyBagActivity));
                                    public static DependencyProperty __ContextProperty = DependencyProperty.Register("__Context", typeof(WorkflowContext), typeof(ReadFromPropertyBagActivity));
                                    public static DependencyProperty __ListIdProperty = DependencyProperty.Register("__ListId", typeof(string), typeof(ReadFromPropertyBagActivity));
                                    public static DependencyProperty PropertyProperty = DependencyProperty.Register("Property", typeof(string), typeof(ReadFromPropertyBagActivity));
                                    public static DependencyProperty ResultOutputProperty = DependencyProperty.Register("ResultOutput", typeof(string), typeof(ReadFromPropertyBagActivity));

                                    public WorkflowContext __Context
                                    {
                                        get { return (WorkflowContext) base.GetValue(__ContextProperty); }
                                        set { base.SetValue(__ContextProperty, value); }
                                    }
                                    public SPItemKey __ListItem
                                    {
                                        get { return (SPItemKey) base.GetValue(__ListItemProperty); }
                                        set { base.SetValue(__ListItemProperty, value); }
                                    }
                                    public string __ListId
                                    {
                                        get { return (string) base.GetValue(__ListIdProperty); }
                                        set { base.SetValue(__ListIdProperty, value); }
                                    }
                                    public string Property
                                    {
                                        get { return (string) base.GetValue(PropertyProperty); }
                                        set { base.SetValue(PropertyProperty, value); }
                                    }
                                    public string ResultOutput
                                    {
                                        get { return (string) base.GetValue(ResultOutputProperty); }
                                        set { base.SetValue(ResultOutputProperty, value); }
                                    }

                                    public ReadFromPropertyBagActivity() { }

                                    protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
                                    {
                                        // Implementation shown in the next listing.
                                        return ActivityExecutionStatus.Closed;
                                    }

                                    protected override ActivityExecutionStatus HandleFault(ActivityExecutionContext executionContext, Exception exception)
                                    {
                                        // Implementation shown further below.
                                        return base.HandleFault(executionContext, exception);
                                    }
                                }
                            }

                            Now we will need to override the Execute() method. This is the main business logic of your Custom Action.

                            protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
                            {
                                //Standard Nintex code to obtain context.
                                ActivityActivationReference.IsAllowed(this, __Context.Web);
                                NWWorkflowContext ctx = NWWorkflowContext.GetContext(
                                    this.__Context,
                                    new Guid(this.__ListId),
                                    this.WorkflowInstanceId,
                                    this);
                                base.LogProgressStart(ctx);

                                //Get the property value.
                                string resolvedProperty = ctx.AddContextDataToString(this.Property);
                                var result = "";

                                //Using the context get the property if it exists.
                                if (ctx.Web.AllProperties.ContainsKey(resolvedProperty))
                                {
                                    result = ctx.Web.AllProperties[resolvedProperty].ToString();
                                }

                                //Store the result.
                                this.ResultOutput = result;

                                //End execution.
                                base.LogProgressEnd(ctx, executionContext);
                                return ActivityExecutionStatus.Closed;
                            }

                            The last thing we need to do in this class is handle any fault during execution. Override the HandleFault method with the following code. You can make the error message say whatever you like; I’m just referencing the item that called the workflow.

                            protected override ActivityExecutionStatus HandleFault(ActivityExecutionContext executionContext, Exception exception)
                            {
                                Nintex.Workflow.Diagnostics.ActivityErrorHandler.HandleFault(executionContext, exception,
                                    this.WorkflowInstanceId, "Error Reading from Property Bag", __ListItem.Id, __ListId, __Context);
                                return base.HandleFault(executionContext, exception);
                            }


                            Moving on to the adapter file now. This class inherits from Nintex.Workflow.Activities.Adapters.GenericRenderingAction and needs to implement five overrides. I have also included two private constant strings. These are the property names we declared in the Activity class. Ensure these names match, or you will encounter errors later which take a while to debug.

                            using System;
                            using System.Collections.Generic;
                            using System.Workflow.ComponentModel;
                            using Microsoft.SharePoint;
                            using Nintex.Workflow;
                            using Nintex.Workflow.Activities.Adapters;

                            namespace CFSP.CustomActions.ReadFromPropertyBag
                            {
                                public class ReadFromPropertyBagAdapter : GenericRenderingAction
                                {
                                    //Values should match the property names in the ReadFromPropertyBagActivity class.
                                    private const string PropertyProperty = "Property";
                                    private const string ResultOutputProperty = "ResultOutput";

                                    public override NWActionConfig GetDefaultConfig(GetDefaultConfigContext context)
                                    {
                                        throw new NotImplementedException();
                                    }
                                    public override bool ValidateConfig(ActivityContext context)
                                    {
                                        throw new NotImplementedException();
                                    }
                                    public override CompositeActivity AddActivityToWorkflow(PublishContext context)
                                    {
                                        throw new NotImplementedException();
                                    }
                                    public override NWActionConfig GetConfig(RetrieveConfigContext context)
                                    {
                                        throw new NotImplementedException();
                                    }
                                    public override ActionSummary BuildSummary(ActivityContext context)
                                    {
                                        throw new NotImplementedException();
                                    }
                                }
                            }

                            I will explain each override before showing you the code.

                            The GetDefaultConfig override lets you set up the parameters for user inputs and outputs. If you want the user to freely type a value, use a PrimitiveValue. If you want the user to pick a predefined value held in a workflow variable, use an NWWorkflowVariable value. Typically the output is written back to a workflow variable, so it uses an NWWorkflowVariable. Add an ActivityParameter for each property.

                            public override NWActionConfig GetDefaultConfig(GetDefaultConfigContext context)
                            {
                                NWActionConfig config = new NWActionConfig(this);

                                //define the number of parameters: one for each custom parameter.
                                config.Parameters = new ActivityParameter[2];

                                //define the parameters that the user can configure for this action.
                                config.Parameters[0] = new ActivityParameter();
                                config.Parameters[0].Name = PropertyProperty;
                                config.Parameters[0].PrimitiveValue = new PrimitiveValue();
                                config.Parameters[0].PrimitiveValue.Value = string.Empty;
                                config.Parameters[0].PrimitiveValue.ValueType = SPFieldType.Text.ToString();

                                config.Parameters[1] = new ActivityParameter();
                                config.Parameters[1].Name = ResultOutputProperty;
                                config.Parameters[1].Variable = new NWWorkflowVariable();

                                //set the default label for the action.
                                config.TLabel = ActivityReferenceCollection.FindByAdapter(this).Name;
                                return config;
                            }

                            The ValidateConfig override lets you validate the values entered. Here I’m just ensuring the value is not blank. You would add a validation for each input property.

                            public override bool ValidateConfig(ActivityContext context)
                            {
                                //Add logic to validate the configuration here.
                                bool isValid = true;
                                Dictionary<string, ActivityParameterHelper> parameters = context.Configuration.GetParameterHelpers();
                                if (!parameters[PropertyProperty].Validate(typeof(string), context))
                                {
                                    isValid &= false;
                                    validationSummary.AddError("Property Bag", ValidationSummaryErrorType.CannotBeBlank);
                                }
                                return isValid;
                            }

                            Validation is shown in image below.

                            AddActivityToWorkflow creates an instance of the Activity and sets its properties based on the config. You also bind the default properties, and assign each parameter you have here. Lastly, attach the ActivityFlags, then add it all to the parent activity.

                            public override CompositeActivity AddActivityToWorkflow(PublishContext context)
                            {
                                Dictionary<string, ActivityParameterHelper> parameters = context.Config.GetParameterHelpers();
                                ReadFromPropertyBagActivity activity = new ReadFromPropertyBagActivity();

                                parameters[PropertyProperty].AssignTo(activity, ReadFromPropertyBagActivity.PropertyProperty, context);
                                parameters[ResultOutputProperty].AssignTo(activity, ReadFromPropertyBagActivity.ResultOutputProperty, context);

                                activity.SetBinding(ReadFromPropertyBagActivity.__ContextProperty, new ActivityBind(context.ParentWorkflow.Name, StandardWorkflowDataItems.__context));
                                activity.SetBinding(ReadFromPropertyBagActivity.__ListItemProperty, new ActivityBind(context.ParentWorkflow.Name, StandardWorkflowDataItems.__item));
                                activity.SetBinding(ReadFromPropertyBagActivity.__ListIdProperty, new ActivityBind(context.ParentWorkflow.Name, StandardWorkflowDataItems.__list));

                                ActivityFlags f = new ActivityFlags();
                                return null;
                            }

                            GetConfig reads the properties from context.Activity and updates the values in the NWActionConfig. Add a new parameter for each property. You can see that when we call RetrieveValue, we are grabbing the corresponding DependencyProperty from our activity.

                            public override NWActionConfig GetConfig(RetrieveConfigContext context)
                            {
                                //Read the properties from the context.Activity and update the values in the NWActionConfig.
                                NWActionConfig config = this.GetDefaultConfig(context);
                                Dictionary<string, ActivityParameterHelper> parameters = config.GetParameterHelpers();
                                parameters[PropertyProperty].RetrieveValue(context.Activity, ReadFromPropertyBagActivity.PropertyProperty, context);
                                parameters[ResultOutputProperty].RetrieveValue(context.Activity, ReadFromPropertyBagActivity.ResultOutputProperty, context);
                                return config;
                            }

                            BuildSummary is the last override to implement. The code here writes out the summary displayed to the user after they have configured the action and hovered the mouse over it.

                            public override ActionSummary BuildSummary(ActivityContext context)
                            {
                                //Construct an ActionSummary class to display details about this action.
                                Dictionary<string, ActivityParameterHelper> parameters = context.Configuration.GetParameterHelpers();
                                return new ActionSummary("Retrieve the following Property bag: {0}", parameters[PropertyProperty].Value);
                            }

                            The BuildSummary output is displayed below, shown on mouse hover once the action has been configured.


                            The code behind for this aspx file inherits from Nintex.Workflow.ServerControls.NintexLayoutsBase. Apart from changing the inherited type, there is no need to do anything else in the .cs file. In the aspx file we have the basic structure below: the link to your code behind, registration of all the required Nintex controls, the two main JavaScript functions to read and save the configuration, and lastly the display section of your page.

                            <%@ Page Language="C#" DynamicMasterPageFile="~masterurl/default.master" AutoEventWireup="true" CodeBehind="ReadFromPropertyBagDialog.aspx.cs" EnableEventValidation="false"
                                Inherits="CFSP.CustomActions.ReadFromPropertyBag.ReadFromPropertyBagDialog, $SharePoint.Project.AssemblyFullName$" %>
                            <%@ Register TagPrefix="Nintex" Namespace="Nintex.Workflow.ServerControls" Assembly="Nintex.Workflow.ServerControls, Version=, Culture=neutral, PublicKeyToken=913f6bae0ca5ae12" %>
                            <%@ Register TagPrefix="Nintex" TagName="ConfigurationPropertySection" src="~/_layouts/NintexWorkflow/ConfigurationPropertySection.ascx" %>
                            <%@ Register TagPrefix="Nintex" TagName="ConfigurationProperty" src="~/_layouts/NintexWorkflow/ConfigurationProperty.ascx" %>
                            <%@ Register TagPrefix="Nintex" TagName="DialogLoad" Src="~/_layouts/NintexWorkflow/DialogLoad.ascx" %>
                            <%@ Register TagPrefix="Nintex" TagName="DialogBody" Src="~/_layouts/NintexWorkflow/DialogBody.ascx" %>
                            <%@ Register TagPrefix="Nintex" TagName="SingleLineInput" Src="~/_layouts/NintexWorkflow/SingleLineInput.ascx" %>
                            <%@ Register TagPrefix="Nintex" TagName="PlainTextWebControl" Src="~/_layouts/NintexWorkflow/PlainTextWebControl.ascx" %>
                            <asp:Content ID="ContentHead" ContentPlaceHolderID="PlaceHolderAdditionalPageHead" runat="server">
                                <Nintex:DialogLoad runat="server" />
                                <script type="text/javascript" language="javascript">
                                    function TPARetrieveConfig() {
                                        //To Do
                                    }

                                    function TPAWriteConfig() {
                                        //To Do
                                    }

                                    onLoadFunctions[onLoadFunctions.length] = function () {
                                        dialogSectionsArray["<%= MainControls1.ClientID %>"] = true;
                                    };
                                </script>
                            </asp:Content>

                            <asp:Content ID="ContentBody" ContentPlaceHolderID="PlaceHolderMain" runat="Server">
                                <Nintex:ConfigurationPropertySection runat="server" Id="MainControls1">
                                </Nintex:ConfigurationPropertySection>
                                <Nintex:DialogBody runat="server" id="DialogBody">
                                </Nintex:DialogBody>
                            </asp:Content>

                            The first section we will fill in is the Nintex:ConfigurationPropertySection within the ContentPlaceHolderID PlaceHolderMain. In here we need to create a Nintex:ConfigurationProperty for each configuration property; in our case that is the property bag name and the result. You can see below that, for consistency, I have given the controls the same IDs as the dependency properties. Also note that, because I want the user to assign the result to a workflow variable, the output uses the Nintex:VariableSelector control.

                            <Nintex:ConfigurationProperty runat="server" FieldTitle="Property Bag Property" RequiredField="True">
                                <Nintex:SingleLineInput clearFieldOnInsert="true" filter="number" runat="server" id="propertyProperty"></Nintex:SingleLineInput>
                            </Nintex:ConfigurationProperty>
                            <Nintex:ConfigurationProperty runat="server" FieldTitle="Result Output" RequiredField="False">
                                <Nintex:VariableSelector id="resultOutput" runat="server" IncludeTextVars="True"></Nintex:VariableSelector>
                            </Nintex:ConfigurationProperty>

                            Next we are going to look at the two JavaScript functions. When the dialog page is rendered and saved it is passed an XML document, known as configXml. We read values out of and into this XML using XPath. Please note, when you come to deploying: if you find that your dialog loads but the ribbon bar at the top of the dialog is disabled, it is most likely that you have an error in the JavaScript. This took me a while to diagnose, but now that I know what causes the issue I can fix it straight away.

                            In the TPARetrieveConfig code, the [@Name=’ ‘] value will always be the public property name you gave it in the ReadFromPropertyBagActivity.cs file. As you can see from the code below, the value is obtained differently depending on whether the configuration property is a PrimitiveValue or a workflow Variable; this is what you defined in the GetDefaultConfig() method in the ReadFromPropertyBagAdapter.cs file. Lastly, if you are still having problems getting the value, ensure your XPath is correct by debugging the JavaScript and inspecting the configXml variable.

                            function TPARetrieveConfig() {
                                setRTEValue('<%=propertyProperty.ClientID%>', configXml.selectSingleNode("/NWActionConfig/Parameters/Parameter[@Name='Property']/PrimitiveValue/@Value").text);
                                document.getElementById('<%=resultOutput.ClientID%>').value = configXml.selectSingleNode("/NWActionConfig/Parameters/Parameter[@Name='ResultOutput']/Variable/@Name").text;
                            }

The TPAWriteConfig code does essentially the opposite of TPARetrieveConfig, except that it first checks that a value has been selected in the dropdown control (resultOutput) before saving.

                            function TPAWriteConfig() {
                                configXml.selectSingleNode("/NWActionConfig/Parameters/Parameter[@Name='Property']/PrimitiveValue/@Value").text = getRTEValue('<%=propertyProperty.ClientID%>');
                                var resultOutputCtrl = document.getElementById('<%=resultOutput.ClientID%>');
                                if (resultOutputCtrl.value.length > 0) {
                                    configXml.selectSingleNode("/NWActionConfig/Parameters/Parameter[@Name='ResultOutput']/Variable/@Name").text = resultOutputCtrl.value;
                                }
                                return true;
                            }


The NWA file, as stated previously, is just an XML file. It is used by the feature receiver to register the custom action within the SharePoint web application.

First, with the NWA file selected, go to the Properties window (press F4) and change the Build Action from None to Content, change the Deployment Type to ElementFile, and remove the Path "\NWAFile".

                            From within the XML file, remove all text and then place the following in.

                              <Name>Retrieve from Property Bag</Name>
                              <Category>CannonFodder Category</Category>
                              <Description>A custom action to retrieve a property from the SharePoint Web Property Bag.</Description>

                            Let me explain each line to you.

                            • Name – The display name of the custom action.
                            • Category – The category in the toolbox area that the custom action will be displayed under.
                            • Description – A description of the custom action.
                            • ActivityType – The Namespace of the Activity.cs file.
                            • ActivityAssembly – The Full assembly name. (I’m using a token, which I’ll show how to set up afterwards)
                            • AdapterType – The Namespace of the Adapter.cs file.
                            • AdapterAssembly – The full assembly name. (I’m using a token, which I’ll show how to set up afterwards)
                            • HandlerUrl – The Nintex handler, this will always be ActivityServer.ashx
                            • Icon – The URL to the larger Icon.
                            • ToolboxIcon – The URL to the smaller icon.
                            • WarningIcon – The URL to the Warning Icon <-Not used in the above XML.
                            • ConfigurationDialogUrl – The URL to the Action Dialog file. Note that we don’t put /_layouts/NintexWorkflow at the front.
                            • ShowInCommonActions – If this custom action shows up in CommonActions on the toolbox.
                            • DocumentLibrariesOnly – If this custom action should only be used in DocumentLibraries or not.
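Putting those elements together, a complete NWA definition will look roughly like the sketch below. The type names, icon paths, dialog URL, and root element are illustrative assumptions based on the names used in this walkthrough (CFSP.CustomActions, ReadFromPropertyBagActivity, ReadFromPropertyBagAdapter); take the exact structure from a working Nintex example.

```xml
<!-- Sketch only: the root element name and all values below are assumptions
     based on the project names used in this walkthrough. -->
<NWActionConfig>
  <Name>Retrieve from Property Bag</Name>
  <Category>CannonFodder Category</Category>
  <Description>A custom action to retrieve a property from the SharePoint Web Property Bag.</Description>
  <ActivityType>CFSP.CustomActions.ReadFromPropertyBagActivity</ActivityType>
  <ActivityAssembly>$SharePoint.Project.AssemblyFullName$</ActivityAssembly>
  <AdapterType>CFSP.CustomActions.ReadFromPropertyBagAdapter</AdapterType>
  <AdapterAssembly>$SharePoint.Project.AssemblyFullName$</AdapterAssembly>
  <HandlerUrl>ActivityServer.ashx</HandlerUrl>
  <Icon>/_layouts/CFSP.CustomActions/ReadFromPropertyBagIcon.png</Icon>
  <ToolboxIcon>/_layouts/CFSP.CustomActions/ReadFromPropertyBagToolboxIcon.png</ToolboxIcon>
  <ConfigurationDialogUrl>CustomActions/ReadFromPropertyBag/ReadFromPropertyBagDialog.aspx</ConfigurationDialogUrl>
  <ShowInCommonActions>yes</ShowInCommonActions>
  <DocumentLibrariesOnly>false</DocumentLibrariesOnly>
</NWActionConfig>
```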

                            Getting the Token $SharePoint.Project.AssemblyFullName$ to replace on a build.

At this point, save and close your solution. Now open your .csproj file in Notepad or Notepad++. At the bottom of the first <PropertyGroup> section, add the following XML. Save the file and re-open your solution in Visual Studio.
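The property to add is TokenReplacementFileExtensions, which extends the set of file extensions the SharePoint project system performs token replacement on. A minimal sketch, assuming you want to keep the default extensions and simply append nwa:

```xml
<!-- Inside the first <PropertyGroup>: adds .nwa to the extensions that have
     tokens such as $SharePoint.Project.AssemblyFullName$ replaced at build time. -->
<TokenReplacementFileExtensions>$(TokenReplacementFileExtensions);nwa</TokenReplacementFileExtensions>
```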


When you build your solution, Visual Studio will replace the token with the actual full assembly name. More information is available in the documentation on TokenReplacementFileExtensions.

                            WebApplication – Custom Action EventReceiver.cs

The feature event receiver is the final piece of our custom action. It deploys or removes the custom action and makes it available to the web application by modifying the web.config and registering the action with Nintex within the farm. To add the custom action we use the NWA file; to remove it we need to know the namespace of the adapter and the assembly name. As you build more custom actions you can reuse this feature, de-activating and re-activating it each time you deploy a new custom action.

                            using System;
                            using System.Collections.ObjectModel;
                            using System.IO;
                            using System.Runtime.InteropServices;
                            using System.Xml;
                            using Microsoft.SharePoint;
                            using Microsoft.SharePoint.Administration;
                            using Nintex.Workflow;
                            using Nintex.Workflow.Administration;
                            using Nintex.Workflow.Common;
                            using System.Reflection;
                            namespace CFSP.CustomActions.Features.WebApplication___Custom_Actions
                            {
                                public class WebApplication___Custom_ActionsEventReceiver : SPFeatureReceiver
                                {
                                    public override void FeatureActivated(SPFeatureReceiverProperties properties)
                                    {
                                        SPWebApplication parent = (SPWebApplication) properties.Feature.Parent;
                                        AddCustomAction(parent, properties, "ReadFromPropertyBagAction.nwa");
                                        // Add additional custom action nwa files here.
                                    }

                                    public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
                                    {
                                        SPWebApplication parent = (SPWebApplication) properties.Feature.Parent;
                                        // The adapter type and assembly arguments were truncated in the original
                                        // listing; they are the adapter's full type name and full assembly name.
                                        RemoveCustomAction(parent, properties,
                                            "CFSP.CustomActions.ReadFromPropertyBagAdapter",
                                            Assembly.GetExecutingAssembly().FullName);
                                        // Remove additional custom actions here.
                                    }

                                    protected void AddCustomAction(SPWebApplication parent, SPFeatureReceiverProperties properties,
                                        string pathToNWAFile)
                                    {
                                        // First step is to register the action in the Nintex Workflow database.
                                        XmlDocument nwaXml = GetNWADefinition(properties, pathToNWAFile);
                                        ActivityReference newActivityReference = ActivityReference.ReadFromNWA(nwaXml);

                                        ActivityReference action = ActivityReferenceCollection.FindByAdapter(
                                            newActivityReference.AdapterType, newActivityReference.AdapterAssembly);
                                        if (action != null)
                                        {
                                            // Update the details if the adapter already exists.
                                            // The final argument of this call was truncated in the original listing.
                                            ActivityReferenceCollection.UpdateActivity(action.ActivityId, newActivityReference.Name,
                                                newActivityReference.Description, newActivityReference.Category,
                                                newActivityReference.ActivityAssembly, newActivityReference.ActivityType,
                                                newActivityReference.AdapterAssembly, newActivityReference.AdapterType,
                                                newActivityReference.HandlerUrl, newActivityReference.ConfigPage,
                                                newActivityReference.RenderBehaviour, newActivityReference.Icon, newActivityReference.ToolboxIcon,
                                                newActivityReference.WarningIcon, newActivityReference.QuickAccess,
                                                newActivityReference.ListTypes);
                                        }
                                        else
                                        {
                                            ActivityReferenceCollection.AddActivity(newActivityReference.Name, newActivityReference.Description,
                                                newActivityReference.Category, newActivityReference.ActivityAssembly,
                                                newActivityReference.ActivityType, newActivityReference.AdapterAssembly,
                                                newActivityReference.AdapterType, newActivityReference.HandlerUrl, newActivityReference.ConfigPage,
                                                newActivityReference.RenderBehaviour, newActivityReference.Icon, newActivityReference.ToolboxIcon,
                                                newActivityReference.WarningIcon, newActivityReference.QuickAccess,
                                                newActivityReference.ListTypes);
                                            action = ActivityReferenceCollection.FindByAdapter(
                                                newActivityReference.AdapterType, newActivityReference.AdapterAssembly);
                                        }

                                        // Second step is to modify the web.config file to allow use of the activity in declarative workflows.
                                        string activityTypeName = string.Empty;
                                        string activityNamespace = string.Empty;
                                        Utility.ExtractNamespaceAndClassName(action.ActivityType, out activityTypeName, out activityNamespace);
                                        AuthorisedTypes.InstallAuthorizedWorkflowTypes(parent, action.ActivityAssembly, activityNamespace,
                                            activityTypeName);

                                        // Third step is to activate the action for the farm.
                                        ActivityActivationReference reference = new ActivityActivationReference(action.ActivityId, Guid.Empty,
                                            Guid.Empty);
                                        reference.AddOrUpdateActivationReference();
                                    }

                                    protected void RemoveCustomAction(SPWebApplication parent, SPFeatureReceiverProperties properties,
                                        string adapterType, string adapterAssembly)
                                    {
                                        ActivityReference action = ActivityReferenceCollection.FindByAdapter(adapterType, adapterAssembly);
                                        if (action != null)
                                        {
                                            // Remove the action definition from the workflow configuration database
                                            // if the Feature is not activated elsewhere.
                                            if (!IsFeatureActivatedInAnyWebApp(parent, properties.Definition.Id))
                                            {
                                                ActivityReferenceCollection.RemoveActivity(action.ActivityId);
                                            }

                                            string activityTypeName = string.Empty;
                                            string activityNamespace = string.Empty;
                                            Utility.ExtractNamespaceAndClassName(action.ActivityType, out activityTypeName, out activityNamespace);

                                            // Remove the web.config entry.
                                            Collection<SPWebConfigModification> modifications = parent.WebConfigModifications;
                                            foreach (SPWebConfigModification modification in modifications)
                                            {
                                                if (modification.Owner == AuthorisedTypes.OWNER_TOKEN)
                                                {
                                                    // OWNER_TOKEN is the owner for any web config modification added by Nintex Workflow.
                                                    if (IsAuthorizedTypeMatch(modification.Value, action.ActivityAssembly, activityTypeName,
                                                        activityNamespace))
                                                    {
                                                        modifications.Remove(modification);
                                                        parent.Update();
                                                        parent.WebService.ApplyWebConfigModifications();
                                                        break;
                                                    }
                                                }
                                            }
                                        }
                                    }

                                    private bool IsAuthorizedTypeMatch(string modification, string activityAssembly, string activityType,
                                        string activityNamespace)
                                    {
                                        XmlDocument doc = new XmlDocument();
                                        doc.LoadXml(modification);
                                        if (doc.FirstChild.Name == "authorizedType")
                                        {
                                            return (doc.SelectSingleNode("//@TypeName").Value == activityType
                                                    && doc.SelectSingleNode("//@Namespace").Value == activityNamespace
                                                    && doc.SelectSingleNode("//@Assembly").Value == activityAssembly);
                                        }
                                        return false;
                                    }

                                    private bool IsFeatureActivatedInAnyWebApp(SPWebApplication thisWebApplication, Guid thisFeatureId)
                                    {
                                        // Check whether the feature is activated on any other web application in the farm.
                                        SPWebService webService = SPWebService.ContentService;
                                        if (webService == null)
                                            throw new ApplicationException("Cannot access ContentService");

                                        SPWebApplicationCollection webApps = webService.WebApplications;
                                        foreach (SPWebApplication webApp in webApps)
                                        {
                                            if (webApp != thisWebApplication)
                                            {
                                                if (webApp.Features[thisFeatureId] != null)
                                                    return true;
                                            }
                                        }
                                        return false;
                                    }

                                    private XmlDocument GetNWADefinition(SPFeatureReceiverProperties properties, string pathToNWAFile)
                                    {
                                        using (Stream stream = properties.Definition.GetFile(pathToNWAFile))
                                        {
                                            XmlDocument nwaXml = new XmlDocument();
                                            nwaXml.Load(stream);
                                            return nwaXml;
                                        }
                                    }
                                }
                            }

                            Deploying and checking everything has worked.

If you have done everything correctly, at this point go ahead and deploy your solution. Ensure your feature has been activated for a given web application, then open Central Administration and, under the Nintex Workflow Management section, select Manage allowed actions.

In Manage allowed actions you should see your action, and it should be ticked, meaning it is allowed to be used.

Let us go to our site now and create a new Nintex workflow for our custom list. My list has a Single line of text column called Title and another called PropertyValue. In the toolbox panel of my Nintex workflow, I can now see my CannonFodder Category and my custom action.

Drag this onto your page. If you find it doesn't stick to your workflow page, go back and check in your nwa file that all your types and assemblies match up correctly. Once it is on your page, configure the custom action; your dialog will be presented to you.

                            Assign the Property Bag Property to the Item Property Title.

                            Create a new Workflow Variable and name this PBResult. Then assign Result Output to PBResult. Click Save on the dialog.

Under Libraries and Lists find the action Set field value and drag it onto the form underneath our custom action. Then configure it so that it sets our column PropertyValue to the Workflow Data variable PBResult that we created in the last step. Click Save on the dialog.

Lastly, before we test this out: on the ribbon bar of the workflow page, under the Nintex Workflow 2010 tab, click Workflow Settings and configure the workflow so that it starts when items are created.

                            Save and Publish the workflow.


                            I already have a value in my property bag called cann0nf0dderpb. So I’m going to create a new item in my list, and set the title to cann0nf0dderpb and save the form.

                            After a moment or two the workflow has kicked in. Once I refresh my list I can see that in PropertyValue, the value of my PropertyBag item is displayed. I purposely made the property bag value say ‘Nintex Workflow Worked’.

                            New Office 365 Tool available to help you re-design for the App Model

                            Learn about a tool that analyzes your SharePoint full-trust code solutions and Office add-ins and macros to help you redesign them for the app model. Security is important to us—your code remains private while using the tool.

                            The app model is a great tool that fully embraces the benefits of moving to the cloud, but migrating to the model can be a time-consuming task. SharePoint is a complex enterprise-level collaboration system, and custom solutions built on top of the SharePoint platform using full trust code don’t easily map to a cloud-based deployment. Similarly, Office client solutions – managed add-ins and VBA macros built on individual client object models – are widely deployed on desktops and need to be ported to work in the cloud. We understand that creating these solutions required a significant investment. We want to help you translate these solutions to cloud-friendly apps as painlessly as possible.

The SharePoint and VBA Code Analyzer is a tool to help you understand how you can refactor your SharePoint and Office client solutions for Office 365. Working with one of our long-standing partners, we've created a web portal where you can upload your SharePoint and Office client solutions and get a complete analysis of the existing code. We'll provide guidance and recommendations on the level of effort needed to move them to the cloud, so you can start refactoring your custom business solutions as soon as possible.

                            “But, wait!” you think. “I can’t send my company’s code where external parties look at it!” No worries—we have put several security measures in place to prevent unauthorized access, and the code runs through a completely black-box process. The analysis is done with automated tools which only collect metadata about files, lines of code, ASP.NET application pages, web parts, libraries, workflows, and other platform-dependent objects. We then use this data to generate reports on how you can map your existing code to the new model.

                            The tool is also hosted behind a digital certificate-enabled site, which ensures that everything that goes across the wire to our black box process is encrypted.

                            Learn about the all new Office Web Widgets

                            Client controls, such as the Office Web Widgets – Experimental, can greatly reduce the amount of time required to build apps and, at the same time, increase the quality of those apps. For this to be true, we have to be sure the widgets meet certain criteria:

                            • Widgets must be designed to be used in any webpage, even if the page is not hosted on SharePoint.
                            • Widgets work within the Office controls runtime. This lets us provide a common set of requirements and a consistent syntax for using the widgets.
                            • Widgets that communicate back to SharePoint use the cross-domain library. The widgets don't have a dependency on a particular server-side platform or technology; you can use the widgets regardless of your choice of server technology.
                            • Widgets must coexist with other elements in the page. The inclusion of a widget in a page should not modify other elements in it.
                            • Widgets play nicely with existing frameworks. We want to be sure you can still use the tools and technologies that you are used to.

                            Figure 1. An app using Office Web Widgets – Experimental

You can use the widgets by installing the Office Web Widgets – Experimental NuGet package from Visual Studio. For more information, see Managing NuGet Packages Using the Dialog. You can also browse the NuGet gallery page.


Your feedback and comments helped us decide which widgets to provide. As you can see in Figure 1, the (1) People Picker and (2) Desktop List View widgets are ready for you to try and experiment with. Please keep the feedback coming at the Office Developer Platform UserVoice site.


                            You can also see the widgets in action in the Office Web Widgets – Experimental Demo code sample.





People Picker widget





                                                                             You can use the experimental People Picker widget in apps to help your users find and select people and groups in a tenant. Users can start typing in the text box and the widget retrieves the people whose name or e-mail matches the text.


                            Figure 2. People Picker widget solving a query


You can declare the widget in the HTML markup or programmatically using JavaScript. In either case, you use a div element as a placeholder for the widget. You can also set properties and event handlers on the People Picker widget. The following are the available properties and events of the People Picker widget.
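For the declarative form, the placeholder is just a div tagged with the control type. The attribute and control names below are assumptions to be verified against the Office Web Widgets – Experimental demo sample:

```html
<!-- Assumed markup for a declarative People Picker placeholder; programmatically,
     you would instead pass this div to the widget's JavaScript constructor. -->
<div id="peoplePickerDiv" data-office-control="Office.Controls.PeoplePicker"></div>
```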


















                            • Object type (JSON object, list of strings) – the type of items the widget will resolve. Defaults to users only.
                            • Allow multiple selections – True/False. If False, the widget allows selecting only one item at a time. Default is False.
                            • Group – if provided, the widget limits the selection to items in this group. If not provided, the widget queries objects from the whole tenancy.
                            • Selected items (JSON array) – the list of items selected. Each item is returned as an object representing a user or group.
                            • Added event – fires when a new object is added to the selection. The handler function receives the object added.
                            • Removed event – fires when an object is removed from the selection. The handler function receives the object removed.
                            • Changed event – triggered by either adding or removing objects. No parameters are passed to the handler function.
                            • Errors – an array of possible validation errors.
                            • Show errors – True/False, whether validation errors are displayed.
                            • Has errors – True if there are one or more validation errors, False if there are none.
                            The CSS classes for the People Picker widget are defined in the Office.Controls.css style sheet. You can override the classes and style the widget for your app.


                            For more information, see How to: Use the experimental People Picker widget in apps for SharePoint and Use the People Picker experimental widget in an app code sample.





Desktop List View widget






Your users can benefit from the Desktop List View widget, which displays the data in a list just like the regular SharePoint List View, but can be used in apps that are not necessarily hosted in SharePoint.


                            Figure 3. Desktop List View widget displaying the data in a list


You can specify an existing view on the list, and the widget renders the fields in the order that they appear in that view.



                            Note: At this moment, the Desktop List View widget only displays the data. It doesn't offer editing capabilities.




You can provide a placeholder for the widget using a div element, and use the widget either programmatically or declaratively.


You can also set properties and event handlers on the Desktop List View widget. The following are the available properties and events of the Desktop List View widget.





















                            • List URL – the URL of the list to draw items from. It can be a relative URL, in which case it is assumed to be on the app web itself, or an absolute URL.
                            • View name – the name of the view to show. This is the programmatic name of the view (not its display name).
                            • Item selected event – fires when an item is selected in the list.
                            • Item added event – fires when a new item is added to the list.
                            • Item removed event – fires when an item is removed from the list.
                            • Selected items – the list of selected items, in JSON format.




                            The widget requires the SharePoint website style sheet. You can reference the SharePoint style sheet directly or use the chrome widget. For more information about the style sheet, see How to: Use a SharePoint website’s style sheet in apps for SharePoint and How to: Use the client chrome control in apps for SharePoint.


To see the List View widget in action, see the Use the Desktop List View experimental widget in an app code sample. Also see How to: Use the experimental Desktop List View widget in apps for SharePoint.

Widgets can help speed up the development process and reduce the cost and time-to-market of your apps. Office Web Widgets – Experimental provides widgets that you can use in your non-production apps. Your feedback and comments are welcome on the Office Developer Platform UserVoice site.



Looking for a standard project management approach to developing SharePoint solutions? Look no further than Microsoft's SharePoint Server 2013 Application Life Cycle Management.

Microsoft SharePoint Server 2013 gives developers several options for creating and deploying applications that are based on SharePoint technologies, both on-premises and in hosted or public cloud platforms. SharePoint Server 2013 offers increased flexibility in the shape applications can take, as well as new options for using standards-based technologies with applications. Although the application capabilities and deployment options afforded by the new application model in SharePoint provide an effective means for developers to create new and immersive applications, developers must be able to infuse quality, testing, and ALM considerations into the development process. This article applies common ALM concepts and practices to application development using SharePoint Server 2013 technologies.

                            What’s new

SharePoint Server 2013 establishes a new paradigm for implementing applications. Because of this shift in application development with SharePoint technologies, developers and architects should have a thorough understanding of the new application development patterns, practices, and deployment models for SharePoint Server 2013. It's important to note that while the application model for developing solutions with SharePoint has changed, many of the patterns used for solution development, including the choice of technologies and implementation techniques, are in line with existing web application development technologies.

                            The following resources outline the application types that can be constructed using SharePoint Server 2013 technologies and contain considerations for both on-premises and cloud applications. To understand hosting options for apps for SharePoint, see Choose patterns for developing and hosting your app for SharePoint.

Additionally, Microsoft advises customers to evaluate the technologies used when developing applications with SharePoint Server 2013, as there is a wider set of choices for solution implementation. When creating applications, customers can focus on leveraging standards-based technologies such as HTML5 and JavaScript for presentation and user experience layers, while OData and OAuth can be leveraged for service-based access to back-end services, including SharePoint. Customers should consider carefully whether full trust code (that is, compiled assemblies deployed to SharePoint) is required, as continuing to use that development paradigm, while still valid and required in some situations, imposes significant overhead on the ALM process.

                            For more information about the new flexible development technologies for applications on SharePoint Server 2013, see SharePoint 2013 development overview.

                            Benefits and changes

Because SharePoint-supported application development technologies now offer a more flexible assortment of languages and programming architectures, developers need to adapt existing ALM practices around mainstream development techniques to accommodate their presence within SharePoint. Concepts such as testing, build establishment, deployment, and quality control can be expanded to include deployment to SharePoint as a SharePoint application. This means that even for developers accustomed to writing and deploying server-side farm solutions that extend the core capabilities of SharePoint, common ALM practices must be applied to the new flexible development model facilitated by SharePoint Server 2013 applications.

                            As customers continue the transition to cloud-hosted implementations of SharePoint Server 2013, developers will need to understand how to extend ALM concepts to include development, testing, and deployment target environments that sit outside the physical boundaries of the organization. This includes evaluating the technology strategy for conducting application development, testing, and deployment.

Developers and architects alike can become well-versed in synthesizing solutions that consist of multiple application components spanning or combining different types of hosting options. During this adaptation process, ALM procedures should be applied uniformly to these applications. For example, developers may need to deploy an application that spans an on-premises services deployment (that is, IIS, ASP.NET, MVC, WebAPI, and WCF), Windows Azure, SharePoint Server 2013, and SQL Azure, while also being able to test the application components to determine quality or whether any regressions have been introduced since a previous build. These requirements may signify a significant shift in how developers and teams regard the daily build and deployment process that is well established for on-premises or server-side solutions.

                            Development team considerations

                            For organizations that have more than one application developer or architect, team development for SharePoint Server 2013 should be carefully planned to provide the highest-quality applications as well as support sufficient developer productivity. Because the method for conducting application development has increased in flexibility, teams will need to be clear and confident not only on ALM practices and patterns, but also on how each developer will write code and ensure that quality code becomes part of the application build process.

These considerations begin with selecting the appropriate development environment. Traditionally, development has been conducted in separate virtual machines connected to a common code repository that provides build, deployment, and testing capabilities, such as Visual Studio TFS 2012. TFS remains an instrumental component of an ALM strategy, and central to the development effort, but teams should consider how to leverage TFS across the different development environment options.

Depending on the target environment and the solution type (that is, which components will be on-premises and which will be hosted in cloud infrastructure or services), developers can now select from a combination of development environment options. These include new choices such as the SharePoint developer site template and an Office 365 developer tenant, as well as legacy choices such as virtual machine-based development using Hyper-V in Windows 8 or Windows Server 2012.

                            The following section describes development environment considerations for application developers and development teams.

The selection of a development environment should be based on multiple factors, largely influenced by the type of application being developed and the target platform for the application. Traditionally, when creating applications for SharePoint Server 2010, developers would provision virtual machines and conduct development in isolation, because deployment of full trust solutions could require restarts of core SharePoint dependencies, such as IIS, which would prevent multiple developers from sharing a single SharePoint environment. Because development technologies have changed and the options for developers creating applications have increased, developers and teams should understand the development environment choices available to them. Figure 1 shows the development environment and tool mix, and includes the types of solutions that can be deployed to the target environments.

                            Figure 1. Development environment components and tools

                            The app development environment can include Office 365, Visual Studio, and virtual machines.


                            Development environment philosophy

                            Because of the investments made in how applications can be designed and implemented using SharePoint Server 2013, developers should determine if there is a need to conduct development using server-side code. As developers create applications that use the cloud-hosted model, the requirement to conduct development that relies on virtualized environments, specifically for SharePoint, diminishes. Developers should seek to build solutions with the remote-development model that uses existing cloud-based (both public and private) infrastructure. If development environments can be quickly and easily provisioned without having to create and orchestrate virtualization, developers can invest more time in focusing on development productivity and quality, rather than infrastructure management.

                            The decision to require a virtualized instance of SharePoint Server 2013 versus the new SharePoint development site template will depend on whether or not the application requires full trust code to be deployed to SharePoint and run there. If no full trust code is required, we recommend using the developer site template, which can be found in Office 365 development tenants or within an organization’s implementation of on-premises SharePoint. Developer site templates are designed for developers to deploy applications directly to SharePoint from Visual Studio. Office 365 developer sites are preconfigured for application isolation and OAuth so that developers can begin writing and testing applications right away.

                            The following sections describe in detail when developers can use the different environment options to build applications.

Office 365 development sites (public cloud)

Figure 2 shows how developers can use Office 365 as a development environment and includes the types of tools used to produce SharePoint applications that can be hosted in Office 365.

                            Figure 2. Office 365 app development

                            Build apps for SharePoint with Office 365, Visual Studio, and "Napa."


Developers with MSDN subscriptions can obtain a development tenant that contains a SharePoint Developer Site. The SharePoint Developer Site is preconfigured for developing applications. Developers can use not only Visual Studio 2012 to develop applications; with Office 365 developer sites, the “Napa” Office 365 Development Tools can be used within the site to construct applications. For more information about getting started with an Office 365 Developer Site, see Sign up for an Office 365 Developer Site.

Developers can start creating applications that will be hosted in Office 365, on-premises, or on other infrastructure in a provider-hosted model. The benefit of this environment is that infrastructure, virtualization, and other hosting considerations for a SharePoint development environment are abstracted by Office 365, allowing developers to create applications instantly. A prime consideration for this type of development environment is that applications that require full trust code to be deployed to SharePoint cannot be accommodated. Microsoft recommends using the SharePoint client-side object model (CSOM) and client-side technologies such as JavaScript as much as possible. In the event that full trust code is required (but deployment of the code to run on SharePoint is not), we recommend deploying the server-side code in an autohosted or provider-hosted model. Note that these full trust code solutions that are deployed to provider-hosted infrastructure also use the CSOM but can use languages such as C#. It is also important to note that applications deployed in a provider-hosted model can use other technology stacks and still use the CSOM to interact with SharePoint Server 2013.
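As a small illustration of the client-side approach, the sketch below builds the REST endpoint URL a SharePoint app could use to read list items. The helper name is hypothetical (not part of any SharePoint library); in a real app the resulting URL would be requested through the cross-domain library or an OAuth-authorized HTTP call.

```javascript
// Sketch (hypothetical helper): compose the SharePoint REST endpoint for
// reading items from a list. Only the URL construction is shown here; a
// real app would issue the request with appropriate authorization.
function listItemsUrl(webUrl, listTitle) {
  // Trim trailing slashes so the _api segment joins cleanly.
  var base = webUrl.replace(/\/+$/, "");
  // List titles may contain spaces or other reserved characters.
  return base + "/_api/web/lists/getbytitle('" +
    encodeURIComponent(listTitle) + "')/items";
}
```

For example, `listItemsUrl("https://contoso.sharepoint.com/", "Team Tasks")` yields `https://contoso.sharepoint.com/_api/web/lists/getbytitle('Team%20Tasks')/items`.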

Development teams creating separate features or applications that make up a larger solution will need a centralized deployment target where components can be integration tested. Because each developer is creating features or applications in their own Office 365 developer site, a centralized site collection in a target tenant or on-premises environment should be provisioned so that each developer’s application components can be deployed there. This approach provides a centralized place to conduct integration testing between solution components. The testing section of this document reviews this process in more detail.

“Napa” Office 365 Development Tools

The “Napa” Office 365 Development Tools can be used by developers for the simpler creation of applications within an Office 365 developer site. These tools are intended to let developers, or power users who are proficient in client-side technologies, quickly develop and deploy applications in a prototype, proof-of-concept, or rapid business solution scenario. They provide a means of developing application functionality on SharePoint. However, during the lifecycle of an application, there may be points at which the application should be imported into Visual Studio. These conditions are outlined as follows:

                            • When more than one developer has to contribute or develop part of the solution

• When an application reaches a level of dependence by users that requires the application of lifecycle management practices

                            • When functional requirements for the application change over time to require supplementary solution components (such as compiled services or data sources)

                            • When the application requires integration with other applications or solution components

                            • When developers have to use quality control measures such as automated builds and testing

Once these or similar conditions occur, developers must export the solution into a source-controlled environment such as TFS and apply ALM design considerations and procedures to the application’s future development.

                            Development sites (remote development)

                            For organizations or developers who choose not to use Office 365 developer sites as a primary means for SharePoint application development, on-premises developer sites can be used to develop SharePoint applications. In this model, the Office 365 developer sites’ capability is replaced with on-premises developer sites hosted within a SharePoint farm. Customers can create a development private cloud by deploying a SharePoint farm to house developer site instances. Customers can supply their own governance automation in order to provide developer site template creation or use the SharePoint in-product capabilities to provision developer site instances. Figure 3 illustrates this setup.

                            Figure 3. On-premises app development with the developer site template

                            Build apps for SharePoint in an on-premises deployment of SharePoint 2013 with the developer site template


Figure 3 describes the development tools and application types that can be enabled with developer sites when using an on-premises SharePoint farm as a host. Note that the “Napa” Office 365 Development Tools cannot be used in this environment because they are available only in Office 365 developer sites.

The SharePoint farm that hosts Developer Site instances must be monitored and must meet service-level and recovery point and time objectives so that developers who rely on it to create applications remain productive and do not experience outages. Customers can apply private cloud concepts such as elasticity, scale units, and a management fabric to this environment. Operations and management must also be applied to the SharePoint farm where the developer sites are hosted. This helps control unmonitored sprawl of stale or unused developer sites and provides a way to understand when the environment has to scale.

Customers can decide to use infrastructure as a service (IaaS) capabilities such as Windows Azure to host the SharePoint farms that contain and host developer sites, or their own on-premises virtual or physical environments. Note that using this model does not require a SharePoint installation for each developer. Remote application development requires only Visual Studio and the Office and SharePoint 2013 development tools on the developer workstation.

Developers must establish provider-hosted infrastructure to deploy provider-hosted applications. Although the provider-hosted components of a SharePoint application can be implemented in a wide array of technologies, developers must provide an infrastructure for hosting the components of the application that run outside SharePoint. For example, if a team is developing a SharePoint application whose user experience and other components reside in an ASP.NET application, the development team should use local versions of IIS, SQL Server, and so on, and engage in traditional ALM team development patterns for ASP.NET.

                            Self-contained farm environments (virtualized farm development)

For solutions that require the deployment of full trust code to run on a SharePoint farm, a full (often virtualized) implementation of SharePoint Server 2013 is required. For guidance on how to create an on-premises development environment for SharePoint, see How to: Set up an on-premises development environment for apps for SharePoint.

                            Figure 4 shows the types of applications that can be created using an on-premises virtualized environment.

                            Figure 4. On-premises development with a virtual environment

                            Build apps for SharePoint in a virtual on-premises environment


Developers can conduct remote development for SharePoint and cloud-hosted applications within their own SharePoint farms, as well as the development of full trust farm solutions. These farms are often hosted on a virtualization server running either on the developer’s workstation or in a centralized virtualization private cloud that is easily accessible to developers. Each SharePoint farm environment is usually separate from other developers’ farms and provides the level of isolation that is required when developing full trust code that may require the restart of critical services (that is, IIS).

Because each development farm is isolated and dedicated to a single developer, both remote development and the development of full trust code can occur within the self-contained farm.

                            Organizations or developers will have to manage and update the SharePoint farms running within the virtual computers. For developers who are contributing to a single application, parity across the development farms running inside the virtual computers must be maintained. This practice ensures that each component of code developed for the application has consistency. Other common considerations are a standard configuration for the farms including domain access and credentials, service application credentials, testing identities or accounts and other environmental configuration elements (such as certificates).

Similar to a centralized farm for development sites, these virtual machines running developer SharePoint farms can be hosted on IaaS platforms such as Windows Azure, or in on-premises private cloud offerings.

Note that, although virtual machines offer a great deal of isolation and independence from other developer virtual machines, teams should strive for uniformity across the virtual machine configurations. This includes a common domain, common account and security settings, common SharePoint configurations, and a connection to a source control repository such as Visual Studio Team Foundation Server (TFS).

                            When constructing SharePoint applications, there are several considerations that have to be addressed to provide governance and common development practices for consistency and quality. When applying ALM principles to SharePoint application development, developers must focus on technical considerations as well as process-driven considerations.

The support of an ALM platform, such as Visual Studio Team Foundation Server 2012, is usually a requirement when conducting application development, especially with teams of developers working on the same set of projects. SharePoint applications, like other technical solutions, require code repository management and versioning, build services, testing services, and release management practices. The following section describes considerations for ALM as applied to the different application models for SharePoint applications.


                            For each type of SharePoint application, the ALM considerations must be applied without variation in concept. However, practices and procedures around build, testing, and change management must be adjusted.

                            Some SharePoint applications will use client-side technologies. Most developers who have SharePoint Server 2010 application development experience will have to adjust to developing and applying ALM principles to non-compiled code. This adjustment includes applying concepts such as a “build” to a solution that may not have compiled code. ALM platforms such as Visual Studio 2012 have built-in capabilities to validate builds by first compiling the code, and second, by running build verification tests (BVTs) against the build.

                            For SharePoint applications, the process relating to build and testing should remain consistent with traditional application development processes. This includes the creation of a build schedule by the ALM platform, which will compile the solution and deploy it into the target environment.

                            Build processes

                            The SharePoint application build processes are facilitated by the ALM platform. Visual Studio Team Foundation Server 2012 provides both build and test services that can be triggered on solution check in from Visual Studio 2012 (continuous integration) or at specified scheduled intervals.

                            SharePoint build components

                            When planning build processes for SharePoint application development, developers have to consider the interactions between the components, as shown in Figure 5.

                            Figure 5. SharePoint-hosted app build components

                            Visual Studio builds work with app manifests, pages, and supporting files.

The illustration in Figure 5 is a logical representation of a SharePoint application. It shows a SharePoint-hosted app and highlights key application objects as part of a Visual Studio 2012 SharePoint-hosted app project. The SharePoint app project contains the features, package, and manifest that will be registered with SharePoint. The project also contains pages, script libraries, and other elements of user experience that constitute the SharePoint application. In addition, the SharePoint project has supporting files, which include the certificates necessary for deployment to a target SharePoint environment.

                            Figure 6. Provider-hosted and autohosted app build components

                            Provider-hosted apps contain both SharePoint app packages and cloud-hosted components.

Figure 6 shows a SharePoint cloud-hosted application (that is, autohosted or provider-hosted). The main difference in the project structure is that the Visual Studio 2012 solution contains a SharePoint application project in addition to one or more projects that contain the cloud-hosted application components. These may include web applications, SQL database projects, or service applications that will be deployed to Windows Azure or to an on-premises provider-hosted infrastructure (such as ASP.NET), along with other solution components. For guidance on packaging and deployment with high-trust applications, see How to: Package and publish high-trust apps for SharePoint 2013.

                            Figure 7. ALM with Visual Studio Team Foundation Server

                            TFS can be configured to conduct build and deployment activities with a SharePoint application through build definitions.

                            Figure 7 shows TFS as the ALM platform. Teams will use TFS to store code and conduct team development either using TFS deployed on-premises or using Microsoft cloud-based TFS services. TFS can be configured to conduct build and deployment activities with a SharePoint application through build definitions. TFS can also be used to conduct build verification tests (BVTs) that may be automated through the execution of coded UI tests that are part of the build definition.

                            Figure 8. TFS build targets

                            Scripts executed by a TFS build definition will deploy the SharePoint application components to SharePoint Online and on-premises SharePoint.

                            Figure 8 shows the target environments where scripts executed by a TFS build definition will deploy the SharePoint application components. For SharePoint-hosted applications this includes deployment to either SharePoint Online or to on-premises SharePoint application catalogs.

                            For cloud-hosted SharePoint applications, the components of the solution that require additional infrastructure outside SharePoint are deployed to target environments. For autohosted applications, this will be Windows Azure. For provider-hosted applications, this infrastructure can be Windows Azure, or other on-premises or IaaS-hosted environments.

                            Creating a build for SharePoint applications

                            TFS provides build services that can compile solutions checked into source control and place the output in a centralized drop location for deployment to target environments in an automated manner. The primary method of configuring TFS to conduct automated builds, deployments, and testing of SharePoint applications is to create a build definition in Visual Studio. The build definition contains information about which code projects to compile, as well as any post-compilation activities such as testing and actual deployment to the target environments. For more information about the Team Foundation Build Service, see Set up Team Foundation Build Service.

                            To achieve continuous integration, the build definition can be triggered when developers check in code. Additionally, the build definition can be scheduled to execute at set intervals.

For SharePoint applications, developers should use the Office/SharePoint 2013 Continuous Integration with TFS 2012 build definitions project to achieve scheduled builds or continuous integration. This project provides build definitions, Windows PowerShell scripts, and process instructions on how to configure Visual Studio Online or an on-premises version of TFS to build and deploy SharePoint applications in a continuous integration model. Developers should download the components in this project and configure their instance of TFS accordingly. For instructions on how to configure TFS with the provided build definition for SharePoint applications and configure the build definition to use the Windows PowerShell scripts to deploy the SharePoint application to a target environment, see the Office/SharePoint 2013 Continuous Integration with TFS 2012 documentation.

                            Configuring build and deployment procedures

                            Figure 9 shows a standard process for SharePoint application builds and deployments when the build definition has been created, configured, and deployed to the team’s instance of TFS.

                            Figure 9. Build and deployment process with TFS

                            TFS build services execute the steps defined by the SharePoint application build definition.


                            The developer checks in the SharePoint application Visual Studio 2012 solution. Depending on the desired configuration (that is, continuous integration or scheduled build), TFS build services will execute the steps defined by the SharePoint application build definition. This definition, configured by developers, contains the continuous integration build process template as well as post-build instructions to execute Windows PowerShell scripts for application deployment. Note that the SharePoint Online Management Shell extensions will be required in order to deploy the application to SharePoint Online. For more information about SharePoint Online Management Shell extensions, see SharePoint Online Management Shell page on the Download Center.

                            Once the build has been triggered, TFS will compile the projects associated with the SharePoint application and execute Windows PowerShell scripts to deploy the solution to the target SharePoint environment.

                            Trusting the SharePoint application

Following deployment of the application components to the target environments, it is important to note that before anyone accesses the application, including automated tests that may be part of the build, a tenant (or site collection) administrator must trust the application on the app information page in SharePoint. This requirement applies to autohosted and SharePoint-hosted apps. This manual step represents a change in the build process: tests that would typically run following deployment to the target environment must be suspended until the application is trusted.

                            Note that for cloud-hosted (auto and provider) applications, developers can deploy the non-SharePoint components to the cloud-hosted infrastructure separately from the application package that is deployed to SharePoint.

                            Figure 10. Deploying non-SharePoint components

                            As developers make changes to the solution that represents the SharePoint application, there may be circumstances where changes are made to the projects within the solution that do not apply to the SharePoint application project itself.


As shown in Figure 10, when developers make changes to the solution that represents the SharePoint application, there may be circumstances where changes are made to projects within the solution that do not apply to the SharePoint application project itself. In this circumstance, the SharePoint application project does not have to be redeployed because it has not changed. The changes associated with the cloud-hosted projects must be redeployed.

Changes to the application components that will be deployed to infrastructure outside SharePoint can be deployed separately from the application components that are deployed to the target site collection or tenant. For developers, this means that an automated build process can be created to deploy the cloud-hosted components on a frequent (triggered) basis, separately from the SharePoint application project. Thus, the manual step of approving the application’s permissions on the app information page in SharePoint is not required, allowing for a more continuous deployment and testing process in the build definition. The SharePoint application component of the solution has to be redeployed only when the items within that project change.


                            As described in the build processes section, application testing is a method of determining whether the compilation and deployment of the application was successful. By using testing as a means of verifying the build and deployment of the application, the team is provided with an understanding of quality, as well as a way of knowing when a recent change to the application’s code has compromised the SharePoint application.

                            Figure 11 shows the types of testing approaches that are best used with SharePoint application models.

                            Figure 11. Testing approaches

                            Coded UI tests should be leveraged against SharePoint-hosted applications where the business logic and the user experience reside in the same layer.


                            Figure 11 suggests the use of different types of tests for testing SharePoint applications by type. Coded UI tests should be used against SharePoint-hosted applications where the business logic and the user experience reside in the same layer. While business logic can be abstracted to JavaScript libraries, a primary means of testing that logic will be through the user experience.
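One way to keep such logic testable even when it lives in the client tier is to factor it out of page script into a plain JavaScript library, as in this sketch. The rule and its names are hypothetical illustrations, not part of any SharePoint API.

```javascript
// Sketch (hypothetical business rule): logic kept in a plain JavaScript
// library rather than inline page script, so a unit test can exercise it
// directly while a coded UI test exercises it through the page.
var approvalRules = {
  // A request needs a second approver when it is large (assumed 10,000
  // threshold) and the requester is not a manager.
  needsSecondApprover: function (amount, requesterIsManager) {
    return amount >= 10000 && !requesterIsManager;
  }
};
```

A coded UI test would drive the form that calls `needsSecondApprover`, while the same rule can also be verified directly, for example `approvalRules.needsSecondApprover(15000, false)` returns `true`.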

Cloud-hosted applications (that is, autohosted and provider-hosted) can make full use of coded UI tests while also using unit tests to verify the service components of the solution. This gives developers confidence in the quality of the application’s hosted infrastructure implementation from a functional perspective.

                            The following sections review the considerations for coded UI tests and other test types in relation to SharePoint applications.

                            Client-side code and coded UI tests

                            For build verification testing (BVT) as well as complete system testing, coded UI tests are recommended. These tests rely on recorded actions to test not only the business logic and middle tier of the application, but the user experience as well. For SharePoint applications that use client-side code, much of the business logic’s entry points and execution may exist in the user experience tier. For this reason, coded UI tests can not only test the user experience, but the business logic of the application as well. For more information about the coded UI test, see Verifying Code by Using UI Automation.

Coded UI tests can be used in SharePoint-hosted apps where much of the UX and the business logic may be intermixed. These tests, like others, can be run from a build definition in TFS so that they verify an application’s functionality after deployment (and after the application is trusted by SharePoint).

                            Non-coded UI tests

For circumstances where the application logic exists outside the application’s UX layer, such as in cloud-hosted applications, a combination of coded UI tests and non-coded UI tests should be leveraged. Tests such as traditional unit tests can be used to validate the build quality of service logic that is implemented on a provider-hosted infrastructure. This gives the developer holistic confidence that the provider-hosted components of the solution function correctly and are covered from a testing perspective.
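A minimal sketch of this idea, assuming no particular test framework: a hypothetical piece of service logic paired with a plain assertion-based unit test that a build could run without touching the UX layer.

```javascript
// Sketch: hypothetical service-side logic of the kind that would run on
// provider-hosted infrastructure, plus a framework-free unit test for it.
function discountFor(orderTotal) {
  // 10% discount at or above the (assumed) 100-unit threshold.
  return orderTotal >= 100 ? orderTotal / 10 : 0;
}

function testDiscountFor() {
  if (discountFor(50) !== 0) throw new Error("expected no discount below threshold");
  if (discountFor(200) !== 20) throw new Error("expected 10% discount");
  return "passed";
}
```

In practice, such tests would be written in the service's own language and framework (for example, MSTest or NUnit for C# services) and executed by the TFS build as part of build verification.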

                            Web performance and load tests

                            Web performance and load tests provide developers with the confidence that the application functions under expected or anticipated user loads. This testing includes determining the application’s capability to concurrently handle a predictable user base that will reasonably scale over time. Both coded UI and unit tests can be the source of the web performance and load test. Using an ALM platform like TFS, these tests can be used to load test the application.

                            Note that the testing of the infrastructure is not a primary goal of these tests when using them to test SharePoint applications. The infrastructure, whether SharePoint-hosted or provider-hosted, should have a similar load and performance baseline established. The web performance and load tests for the application will highlight infrastructure challenges, but should be regarded primarily as a means to test the application’s performance.

                            For more information about web performance and load tests, see Run performance tests on an application before a release.

                            Quality and testing environments

Many organizations have several testing environments, which may be physical or virtual and separate from each other. These environments can vary based on a team’s ALM process, regulatory requirements, or a combination of both. The following guidance on the number and types of testing environments that teams should use is based on functional practices specific to SharePoint applications, as well as on ALM practices for software development at large.

                            Developer testing

                            Developer testing can occur in the environment where the developers are creating their component of the solution. Multiple developers, working on different aspects or components of a larger application, will each have unit tests, coded UI tests, and the application code deployed into their development site.

                            Figure 12. Developer testing process

                            Developers will execute tests from Visual Studio against the solution components deployed in their own Office 365 or on-premises developer site. For cloud-hosted applications, the tests will also be executed from Visual Studio against the solution components that reside on provider-hosted infrastructure. These components will reside in the developer’s Windows Azure subscription.

                            Note that this approach assumes that developers have individual Office 365 developer sites and Windows Azure subscriptions, which can be supplied through MSDN subscriptions. Even if developers are creating applications for on-premises deployment, these developer services can be used for development and testing.

                            If developers do not have these services, or are required to do development entirely on-premises, then they will execute tests for their components against an on-premises farm’s developer site collection and developer-specific, provider-hosted infrastructure. The provider-hosted infrastructure can reside in developer-dedicated virtual machines. For the development of full-trust solutions, developers would require their own virtual SharePoint farm and provider-hosted infrastructure.

                            Integration and systems testing

                            In order to test the application, all of the development components should be assembled and deployed in a centralized environment. This integration environment provides a place where developers can deploy and observe the components of the solution they created interacting with other solution components written by other developers.

                            Figure 13. Integration testing environment

                            For this type of testing, the ALM platform will build and deploy the SharePoint application and any required components to the target environments. For SharePoint-hosted applications, this will either be an Office 365 site or an on-premises/IaaS SharePoint Server 2013 site collection specifically established for integration and systems testing. For SharePoint cloud-hosted applications, TFS will also deploy the components to a centralized Windows Azure subscription where the services will be configured specifically for integration/systems testing. TFS will then execute coded UI or unit tests against the SharePoint application, as well as any components that the solution requires on hosted infrastructure.

                            UAT and QA testing

                            For user acceptance testing (UAT), organizations often have separate environments where this function is performed apart from integration and systems testing. Separating these testing environments prevents the cadence of automated continuous release and testing from interfering with UAT activities where users may be executing tests against a specified build of the application over an extended period of time.

                            Figure 14. UAT testing

                            As shown in Figure 14, users assigned to conduct acceptance testing or organizational testing resources conduct test scripts in a stable environment that is focused on a well-publicized build of the application. While code deployment and testing continues in the integration environment, users will conduct manual testing to validate that the application meets required use or test cases. The application and any provider-hosted infrastructure will be deployed, typically by a release manager, into this testing environment. An automated deployment is possible as well. This sort of deployment uses a dedicated UAT build definition in TFS that mirrors the one that conducts deployment for the integration and systems testing environment.

                            For cloud-hosted infrastructure, deployment into a Windows Azure subscription that is shared with the integration and systems test environments is possible if the services are named and configured to be deployed side by side as different services or databases. This approach provides a set of services and databases in the testing Windows Azure subscription for integration and systems testing as well as UAT and QA testing, as shown in Figure 15.

                            Figure 15. Integration and UAT testing

                            Code promotion practices

                            The code promotion process between the development and testing environments as well as the production environment should be done using a well-defined release management process. In Figure 16, developers conduct deployment of their solution components to development environments for unit testing.

                            Figure 16. Release management process

                            Following a check-in to TFS, an automated build procedure will compile and deploy the solution to the target integration and system testing environment where build verification tests will be executed as part of the build definition in TFS. This approach includes deploying the provider-hosted components of the solution to the target environment (Windows Azure or on-premises environments). Note that for Windows Azure infrastructure, the Windows Azure subscription can be the same one used for both integration and system testing as well as UAT and QA assuming they are deployed to different namespaces and separate SQL databases.

                            A release manager or a separate TFS build definition, manually invoked in most cases, can deploy to the UAT or QA environment. This approach helps control the build version that users will be testing against. Release managers can pick up the builds from a TFS share and execute the deployment process themselves. For promotion to production, release management will be involved to deploy the application to the production environment and monitor its installation and build verification through tests.

                            Microsoft has specific guidance on how application developers can upgrade applications. The SharePoint Server 2013 platform supports the notification of new application versions to users.

                            For considerations on establishing a strategy around SharePoint application upgrades and patching, see How to: Update apps for SharePoint.

                            For changes to applications, the recommended pattern to follow is consistent with existing code development and sustained engineering patterns. This includes disciplined branching and merging for bug fixes and feature development as well as incremental deployments to target application catalogs. The preceding guidance can be used to complete changes to applications for SharePoint and deploy them to target application catalogs or the store.

                            The information in App for SharePoint update process provides additional tactical guidance on techniques for updating SharePoint applications. This includes accelerating deployment testing by shortening the cycle by which application updates are reflected in the farm in test environments. Additionally, this article has guidance on how to accommodate state within various application deployment models.

                            Free Oracle Trial available on Azure! Run your Java Applications on Azure!

                            6 simple steps

                            1. Create a new virtual machine
                            2. Select the Oracle software you want
                            3. Configure your virtual machine
                            4. Select a virtual machine mode
                            5. Create an endpoint and review legal terms
                            6. Connect to your new virtual machine

                            Microsoft and Oracle now provide best-in-class, end-to-end support for customers running mission-critical Oracle software on Windows Azure and Windows Server Hyper-V. Use your existing licenses to run Oracle software on Windows Azure and receive full support from Oracle.

                            Or, use license-included instances of Oracle Database, Oracle WebLogic Server, or the Java development environment on Windows Server – now in preview. Whether you’re an Oracle admin or a Java developer, you now have increased flexibility and choice in where to deploy your applications, and the safety and security that comes from knowing you will be fully supported by Oracle.


                            What are the key aspects of this partnership?

                            • Customers can run supported Oracle software on Windows Server Hyper-V and in Windows Azure.
                            • Oracle provides license mobility for customers who want to run Oracle software on Windows Azure.
                            • Microsoft has added images with popular configurations of Oracle software including Java, Oracle Database and Oracle WebLogic Server to the Windows Azure image gallery.
                            • Microsoft now offers fully licensed and supported Java in Windows Azure.
                            • Oracle now offers Oracle Linux, with a variety of Oracle software, as preconfigured images on Windows Azure.

                            When will these new capabilities be available to customers?

                            • Effective immediately, customers can use their existing Oracle software licenses on Windows Azure – and we have now made this even easier with pre-configured images in the gallery. Oracle fully supports its software running on Windows Azure.
                            • License-included instances of Oracle Database and Oracle WebLogic Server are now in preview. General availability has been set for March 12, 2014.

                            Is Java a first class citizen on Windows Azure?

                            • Yes. Microsoft’s goal with Windows Azure has always been to embrace and provide development platforms that meet the needs of developers and IT pros.
                            • With this partnership, we’re furthering our commitment to providing a heterogeneous development environment on Windows Azure.
                            • Fully licensed and supported Java is now available to Windows Azure customers.

                            Oracle Database virtual machine images

                            • Oracle Database 12c Enterprise Edition (Preview) on Windows Server 2012
                            • Oracle Database 12c Standard Edition (Preview) on Windows Server 2012
                            • Oracle Database 11g R2 Enterprise Edition (Preview) on Windows Server 2008 R2
                            • Oracle Database 11g R2 Standard Edition (Preview) on Windows Server 2008 R2

                            Oracle WebLogic Server virtual machine images

                            • Oracle WebLogic Server 12c Enterprise Edition (Preview) on Windows Server 2012
                            • Oracle WebLogic Server 12c Standard Edition (Preview) on Windows Server 2012
                            • Oracle WebLogic Server 11g Enterprise Edition (Preview) on Windows Server 2008 R2
                            • Oracle WebLogic Server 11g Standard Edition (Preview) on Windows Server 2008 R2

                            Oracle Database and WebLogic Server virtual machine images

                            • Oracle Database 12c and WebLogic Server 12c Enterprise Edition (Preview) on Windows Server 2012
                            • Oracle Database 12c and WebLogic Server 12c Standard Edition (Preview) on Windows Server 2012
                            • Oracle Database 11g and WebLogic Server 11g Enterprise Edition (Preview) on Windows Server 2008 R2
                            • Oracle Database 11g and WebLogic Server 11g Standard Edition (Preview) on Windows Server 2008 R2

                            Java virtual machine images

                            • JDK 7 (Preview) on Windows Server 2012
                            • JDK 6 (Preview) on Windows Server 2012

                            To install a Java application server on your virtual machine

                            You can copy a Java application server to your virtual machine, or install a Java application server through an installer.

                            For purposes of this tutorial, Tomcat will be installed.

                            1. While logged on to your virtual machine, open a browser session to
                            2. Double-click the link for 32-bit/64-bit Windows Service Installer. Using this technique, Tomcat will be installed as a Windows service.
                            3. When prompted, choose to run the installer.
                            4. Within the Apache Tomcat Setup wizard, follow the prompts to install Tomcat. For purposes of this tutorial, accepting the defaults is fine. When you reach the Completing the Apache Tomcat Setup Wizard dialog, you can optionally check Run Apache Tomcat, to have Tomcat started now. Click Finish to complete the Tomcat setup process.

                            To start Tomcat

                            If you did not choose to run Tomcat in the Completing the Apache Tomcat Setup Wizard dialog, start it by opening a command prompt on your virtual machine and running net start Tomcat7.

                            You should now see Tomcat running if you open http://localhost:8080 in the virtual machine’s browser.

                            To see Tomcat running from external machines, you’ll need to create an endpoint and open a port.
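
Before creating the endpoint and firewall rule, it can help to confirm that the private port is actually listening on the virtual machine. This small utility is not part of the tutorial — it is a hypothetical helper that simply attempts a TCP connection and reports the result:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    static boolean isPortOpen(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Check whether Tomcat's private port is listening locally.
        System.out.println("8080 open: " + isPortOpen("localhost", 8080, 1000));
    }
}
```

If the probe reports the port as closed, the Tomcat service is the likely culprit; if it reports open but external access still fails, look at the endpoint and firewall configuration instead.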

                            To create an endpoint for your virtual machine

                            1. Log in to the Management Portal.
                            2. Click Virtual machines.
                            3. Click the name of the virtual machine that is running your Java application server.
                            4. Click Endpoints.
                            5. Click Add.
                            6. In the Add endpoint dialog, ensure Add standalone endpoint is checked and then click Next.
                            7. In the New endpoint details dialog:
                              1. Specify a name for the endpoint; for example, HttpIn.
                              2. Specify TCP for the protocol.
                              3. Specify 80 for the public port.
                              4. Specify 8080 for the private port.
                              5. Click the Complete button to close the dialog. Your endpoint will now be created.

                            To open a port in the firewall for your virtual machine

                            1. Log in to your virtual machine.
                            2. Click Windows Start.
                            3. Click Control Panel.
                            4. Click System and Security, click Windows Firewall, and then click Advanced Settings.
                            5. Click Inbound Rules and then click New Rule.

                            New inbound rule

                            1. For the new rule, select Port for the Rule type and then click Next.

                            New inbound rule port

                            2. Select TCP for the protocol and specify 8080 for the port, and then click Next.

                            New inbound rule

                            3. Choose Allow the connection and then click Next.

                            New inbound rule action

                            4. Ensure Domain, Private, and Public are checked for the profile and then click Next.

                            New inbound rule profile

                            5. Specify a name for the rule, such as HttpIn (the rule name is not required to match the endpoint name, however), and then click Finish.

                            New inbound rule name

                            At this point, your Tomcat web site should now be viewable from an external browser, using a URL of the form, where your_DNS_name is the DNS name you specified when you created the virtual machine.

                            Application lifecycle considerations

                            • You could create your own web application archive (WAR) and add it to the webapps folder. For example, create a basic JavaServer Pages (JSP) dynamic web project and export it as a WAR file, copy the WAR to the Apache Tomcat webapps folder on the virtual machine, then run it in a browser.
                            • By default when the Tomcat service is installed, it will be set to start manually. You can switch it to start automatically by using the Services snap-in. Start the Services snap-in by clicking Windows Start, Administrative Tools, and then Services. To set Tomcat to start automatically, double-click the Apache Tomcat service in the Services snap-in and set Startup type to Automatic, as shown in the following.

                              Setting a service to start automatically

                              The benefit of having Tomcat start automatically is it will start again if the virtual machine is rebooted (for example, after software updates that require a reboot are installed).

                            Visualize and Debug Code Execution with Call Stacks in Visual Studio

                            Create a code map to trace the call stack visually while you’re debugging. You can make notes on the map to track what the code is doing so you can focus on finding bugs.

                            Debugging with call stacks on code maps          


                            You’ll need:

                            See: Video: Debug visually with Code Map debugger integration (Channel 9)

                            In this topic:

                            • Map the call stack
                            • Make notes about the code
                            • Update the map with the next call stack
                            • Add related code to the map
                            • Find bugs using the map
                            • Q & A


                            Map the call stack              


                            1. Start debugging.
                            2. When your app enters break mode or you step into a function, choose Code Map. (Keyboard: Ctrl + Shift + `)

                            Choose Code Map to start mapping call stack                  


                            The current call stack appears in orange on a new code map:

                            See call stack on code map                  


                            The map might not look interesting at first, but it updates automatically while you continue debugging. See Update the map with the next call stack.


                            Make notes about the code              

                                        Add comments to track what’s happening in the code. To add a new line in a comment, press Shift + Return.

                            Add comment to call stack on code map          


                            Update the map with the next call stack              

                                        Run your app to the next breakpoint or step into a function. The map adds a new call stack.

                            Update code map with next call stack          


                            Add related code to the map              

                                        Now you’ve got a map – what next? If you’re working with Visual C# .NET or Visual Basic .NET, add items, such as fields, properties, and other methods, to track what’s happening in the code.

                            Double-click a method to see its code definition. (Keyboard: Select the method on the map and press F12)

                            Go to code definition for a method on code map            


                            Add the items that you want to track on the map.

                            Show fields in a method on call stack code map            


                            Fields related to a method on call stack code map            


                            Here you can easily see which methods use the same fields. The most recently added items appear in green.

                            Continue building the map to see more code.

                            See methods that use a field: call stack code map            


                            Methods that use a field on call stack code map          


                            Find bugs using the map              

                                        Visualizing your code can help you find bugs faster. For example, suppose you’re investigating a bug in a drawing program. When you draw a line and try to undo it, nothing happens until you draw another line.

                            So you set breakpoints, start debugging, and build a map like this one:

                            Add another call stack to code map            


                            You notice that all the user gestures on the map call Repaint, except for undo. This might explain why undo doesn’t work immediately.

                            After you fix the bug and continue running the program, the map adds the new call from undo to Repaint:

                            Add new method call to call stack on code map          
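
The bug pattern in this example can also be sketched in code. The class below is a hypothetical reduction of the drawing program — the names Drawing, drawLine, undo, and repaint are illustrative only — showing the fixed version in which undo, like every other gesture, ends with a repaint:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the drawing program described above: every user
// gesture should end with a call to repaint(), and the original bug was an
// undo() that skipped it.
public class Drawing {
    private final Deque<String> lines = new ArrayDeque<>();
    int repaintCount = 0;   // exposed for illustration

    void repaint() { repaintCount++; }

    void drawLine(String line) {
        lines.push(line);
        repaint();           // gesture triggers a repaint
    }

    void undo() {
        if (!lines.isEmpty()) lines.pop();
        repaint();           // the fix: undo must also repaint
    }

    public static void main(String[] args) {
        Drawing d = new Drawing();
        d.drawLine("a");
        d.undo();
        System.out.println("repaints: " + d.repaintCount);
    }
}
```

On the code map, the missing repaint() call shows up as an absent edge from undo to Repaint, which is what makes the bug visible at a glance.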


                            Q & A              


                            • Not all calls appear on the map. Why?                

                              By default, only your code appears on the map. To see external code, turn it on in the Call Stack window or turn off Enable Just My Code in the Visual Studio debugging options.

                            • Does changing the map affect the code?                

                              Changing the map doesn’t affect the code in any way. Feel free to rename, move, or remove anything on the map.

                            • What does this message mean: “The diagram may be based on an older version of the code”?                

                              The code might have changed after you last updated the map. For example, a call on the map might not exist in code anymore. Close the message, then try rebuilding the solution before updating the map again.

                            • How do I control the map’s layout?                

                              Open the Layout menu on the map toolbar:

                              • Change the default layout.

                              • To stop rearranging the map automatically, turn off Automatic Layout when Debugging.

                              • To rearrange the map as little as possible when you add items, turn off Incremental Layout.

                            • Can I share the map with others?                

                              You can export the map, send it to others if you have Microsoft Outlook, or save it to your solution so you can check it into Team Foundation version control.

                              Share call stack code map with others                


                            • How do I stop the map from adding new call stacks automatically?                

                              On the map toolbar, turn off Show call stack on code map automatically. To manually add the current call stack to the map, press Ctrl + Shift + `.

                              The map will continue highlighting existing call stacks on the map while you’re debugging.

                            • What do the item icons and arrows mean?                

                              To get more info about an item, look at the item’s tooltip. You can also look at the Legend to learn what each icon means.

                              What do icons on the call stack code map mean?