Tag Archives: WCF

How To : Access SAP Business Data From Silverlight 4 Clients Using WCF RIA Services And LINQ to SAP

Introduction

The introduction of Microsoft's WCF RIA Services for Silverlight 4 greatly simplified the development of N-tier business applications built with Silverlight and ASP.NET. Using this technology, we can also easily access and integrate SAP business data in Silverlight clients.

This article shows how to provide an SAP domain service as a web service that is consumed by a Silverlight client. The sample application lets the user query customer data. The service uses LINQ to SAP from Theobald Software to connect to an SAP R/3 system.

Project Setup

The first step in setting up a new Silverlight 4 project with WCF RIA Services is to create a solution using the Visual Studio template Silverlight Navigation Application:

Visual Studio 2010 then asks you to create an additional web application, which hosts the Silverlight application. It's important to select the checkbox Enable WCF RIA Services (see screenshot below):

After clicking the Ok button, Visual Studio generates a solution with two projects, one Silverlight 4 project and one ASP.NET project. In the next section, we will create the SAP data access layer using the LINQ to SAP designer.

LINQ to SAP

The LINQ to SAP provider and its Visual Studio 2010 designer offer a very handy way to design SAP interfaces visually. The designer generates the code for the SAP data access layer automatically, similar to LINQ to SQL. The LINQ provider is part of the .NET library ERPConnect.net from Theobald Software. The company offers a demo version for download on its homepage.

The next step is to create the needed LINQ to SAP file by opening the Add New Item dialog:

LINQ to SAP is internally called LINQ to ERP.

Clicking the Add button creates a new ERP file and opens the LINQ designer. Now, drag the Function object from the toolbox and drop it onto the designer surface. If you have not entered the SAP connection data so far, you are now asked to do so:

Enter the connection data for your SAP R/3 system and then click the Ok button. Next, search for and select the SAP function module named SD_RFC_CUSTOMER_GET. The function module provides a list of customer data.

The RFC Function modules dialog opens and lets you define the necessary parameters:

In the above function dialog, change the method name to GetCustomers and mark the Pass checkbox for the NAME1 parameter in the Exports tab. Also set the variable name to namePattern. On the Tables tab, mark the Return checkbox for the table parameter CUSTOMER_T and set the table and structure names to CustomerTable and CustomerRow:

After clicking the Ok button and saving the ERP file, the LINQ designer will generate a SAPContext class which contains a method called GetCustomers with an input parameter named namePattern. This method executes a search for SAP customer data, allowing the user to enter a wildcard pattern. The method returns a table of customer data:

On the LINQ designer level (click on the free part of the LINQ designer surface), the property Create Object Outside Of Context Class must be set to True:

Now we finally add a Customer class which we will use in our SAP domain service later on. This class and its values will be transmitted to the Silverlight client by WCF RIA Services. It's important to set the Key attribute on the identifier fields; otherwise the project will not compile:
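
The original post shows this class as a screenshot; purely as a minimal sketch (the property names here are illustrative, not taken from the sample download), it might look like this:

using System.ComponentModel.DataAnnotations;

public class Customer
{
    // WCF RIA Services requires a [Key] member on every entity it transfers.
    [Key]
    public string CustomerNumber { get; set; }

    public string Name { get; set; }
    public string City { get; set; }
}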

That's it! We now have our SAP data access layer ready to use and can start adding the domain service in the next section.

SAP Domain Service

The next step is to add the SAP domain service to our web project. A domain service is a specialized WCF service and is one of the core constructs of WCF RIA Services. The service exposes operations that can be called from the generated client code. On the client side, we use the domain context to access the domain service on the server side.

Add a new Domain Service Class and name it SAPService:

In the upcoming dialog, create an empty domain service class by just clicking the Ok button:

Next, we add the service operation GetCustomers to the SAP service with a name pattern parameter. The operation then returns a list of Customer objects. The Query attribute limits the result set to 200 entries.

The operation uses the visually designed SAP data access logic to retrieve the SAP customer data. First of all, an instance of the SAPContext class will be created using a connection string (see sample in code). For more details regarding the SAP connection string, see the ERPConnect.net manual.

The LINQ to SAP context class contains the GetCustomers method, which we call with the given namePattern parameter. Next, the operation creates an instance of the Customer class for each customer record returned by SAP.

The license code for the ERPConnect.net library is set in the constructor of our domain service class.
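
The operation itself is shown as a screenshot in the original post; the following is only a rough sketch of how such a domain service might look. The connection string and the SAP field names (KUNNR, NAME1, ORT01) are placeholders, not the original code:

using System.Collections.Generic;
using System.Linq;
using System.ServiceModel.DomainServices.Hosting;
using System.ServiceModel.DomainServices.Server;

[EnableClientAccess]
public class SAPService : DomainService
{
    public SAPService()
    {
        // Set the ERPConnect.net license code here (see the ERPConnect.net manual).
    }

    [Query(ResultLimit = 200)]
    public IEnumerable<Customer> GetCustomers(string namePattern)
    {
        // Placeholder connection string; see the ERPConnect.net manual for the real format.
        var ctx = new SAPContext("USER=... PASSWD=... ASHOST=... CLIENT=800 LANG=EN SYSNR=00");

        // Call the visually designed LINQ to SAP method and map each row to a Customer.
        return ctx.GetCustomers(namePattern)
                  .Select(row => new Customer
                  {
                      CustomerNumber = row.KUNNR,   // assumed field names on CustomerRow
                      Name = row.NAME1,
                      City = row.ORT01
                  })
                  .ToList();
    }
}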

That's all we need on the server side.

In the next section, we will implement the Silverlight client.

Silverlight Client

The implementation of the client side is straightforward. The home view contains a DataGrid control to display the list of customer data as well as a search area with TextBox and Button controls that allows users to enter a name search pattern.

The click event handler of the load button, called OnLoadButtonClick, will execute the SAP service. The boilerplate code to access the web service was generated by WCF RIA Services in the subfolder Generated_Code in the Silverlight project.

First of all, an instance of the SAPContext will be created. Then, we load the GetCustomersQuery query and execute the service operation on the server side using WCF RIA Services. If the domain service returns an error, the anonymous callback method marks the error as handled and displays the error message.

If the execution of the service operation succeeds, the result set is displayed in the DataGrid control.
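
As a hedged sketch of that handler in the home view code-behind (the control names and the error display are assumptions, not the original code):

private void OnLoadButtonClick(object sender, RoutedEventArgs e)
{
    // SAPContext here is the client-side domain context generated by WCF RIA Services.
    var context = new SAPContext();

    context.Load(context.GetCustomersQuery(NamePatternTextBox.Text),
        loadOperation =>
        {
            if (loadOperation.HasError)
            {
                loadOperation.MarkErrorAsHandled();
                MessageBox.Show(loadOperation.Error.Message);
                return;
            }
            CustomersDataGrid.ItemsSource = loadOperation.Entities;
        },
        null);
}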

The next screenshot shows the final result:

That's it.

Summary

This article has shown how easily SAP customer data can be integrated within Silverlight clients using tools like WCF RIA Services and LINQ to SAP. It is quite simple to extend the SAP service to integrate all kinds of operations.


How To : Design the Physical Architecture to Support Collaborative Development and ALM of SharePoint Foundation 2010 Application

Introduction

This article explains the physical architecture that best fits collaborative development and ALM of a SharePoint Foundation 2010 application, which servers and tools are needed, and how they play key roles in the ALM of SharePoint Foundation 2010. The purpose of this article is to provide an overall understanding of how the various servers and farms are connected to each other in SharePoint Foundation.

Background

Basic understanding of different server OS & SharePoint Foundation 2010 is required.

Solution

Application Life-cycle Management (ALM) is the co-ordination of development life-cycle activities—including requirements, modeling, development, build, and testing. Recently, ALM has expanded beyond the application and the software development life cycle to also include business solution governance, infrastructure management, operations, and support.

You can use ALM to help align your organization in the context of a software solution in business, development, and operations. With an application development platform that supports ALM, you can provide integration between the various tools used and activities performed within each of these capabilities.

There are four main types of staging environments, along with a standalone developer environment, that play a key role in the ALM of a SharePoint 2010 application:

  1. Development SharePoint Farm
  2. Team Foundation Server
  3. Integration/Testing Farm
  4. Production Farm
     plus the Developer's Workstation

The figure below shows a physical architecture which depicts how each server is interconnected to support collaborative development and ALM for a SharePoint Foundation 2010 application:


Development SharePoint Farm

A SharePoint farm is fundamentally a collection of SharePoint role servers that provide for the base infrastructure required to house SharePoint sites. The farm level is the highest level of SharePoint architecture, providing a distinct operational boundary for a SharePoint environment. Each farm in an environment is a self-encompassing unit made up of one or more servers, such as web servers, service application servers, and SharePoint database servers.

A SharePoint development farm is needed because developers in an organization that makes heavy use of SharePoint often need environments to test new applications, web parts, solutions, and other SharePoint customizations. These developers often need a sandbox area where farm-level features and solutions can be tested.

I have considered a two-tier topology for the SharePoint Foundation 2010 farm. However, the topology depends entirely on the needs of your application. If your application is a relatively small intranet application, you can choose a single-tier topology; if you are going to integrate another search server with Foundation, you can choose a three-tier topology with an application server as the middle tier (remember that SharePoint Foundation 2010 doesn't include enterprise search). It may make sense to deploy one or more development farms so that developers have the opportunity to run their tests and develop software for SharePoint independent of the existing production environment.

There are basically two types of servers included in two-tier development farm of SharePoint Foundation 2010:

  1. Web server
  2. Content database server

In the above figure, there are three front-end web servers and one SharePoint content database server. However, you can choose a single front-end web server connected to the content database server, based on your application's needs and the architecture of the production environment. All web servers share the same content database. This is called a two-tier deployment farm, where the SharePoint server components and the content database are installed on separate servers. As I mentioned before, you can choose a one-tier, two-tier or three-tier deployment topology based on your application architecture and the topology of the production architecture.

Each web server has SharePoint Foundation 2010 and the SharePoint extension for TFS 2010 installed on it. The SharePoint extension for TFS 2010 is needed to connect with Team Foundation Server for source control, build management and project management.

Advantages of the Development SharePoint Farm:

  1. It is a single place where the SharePoint admin can integrate all the final artifacts from multiple developers.
  2. Developers can sync with the latest SharePoint site on their standalone developer workstations.
  3. The admin can easily approve artifacts and migrate them to the integration server.
  4. It is a unit testing environment for developers where they can test dependent functionality or farm-level features.

Team Foundation Server

Team Foundation Server plays a key role in ALM; it provides source control, build management and work item tracking. You can have TFS installed on the same server as the content database server, but if you are going to use the build management features of TFS, it is advisable to have a separate Team Foundation Server because it uses the CPU intensively when it processes builds.

As per the above figure, there are separate Team Foundation Servers which are connected to the SharePoint farm as well as the standalone development workstations, so that they can provide source control for customized content as well as the developers' artifacts and resources.

Advantages of TFS
  1. Source control for SharePoint artifacts and customization
  2. Build management for SharePoint
  3. Work item and bug tracking tool for SharePoint
  4. Admin console for all management activity
  5. Easy integration with SharePoint foundation server and VS 2010
  6. Easy check-in & check-out
  7. Web based console to manage ALM activity

Developer’s Workstation

As per the above figure, the developers' environment includes two developer workstations. In practice, you can have as many workstations as your development team size requires.

Each developer workstation should have a Windows 7 or Windows Vista operating system with a standalone SharePoint Foundation server and a local content database, so that one developer's work doesn't affect another developer's and artifacts can be debugged locally.

The developer workstation will have the following installed:

  1. Windows 7 or Windows Vista 64-bit OS
  2. Standalone SharePoint Foundation Server 2010
  3. SharePoint Designer 2010
  4. Visual Studio 2010 (connected to TFS)

The developer workstation should be connected to Team Foundation Server 2010 so that when a developer completes an artifact, he can check it in to TFS and other developers can take the latest code from TFS if needed. This way, parallel development can happen without affecting other developers' work.

Integration/Testing Farm

Any production SharePoint environment should have a test environment in which new SharePoint web parts, solutions, service packs, patches, and add-ons can be tested. It is critical to deploy test farms, because many SharePoint add-ons could potentially disrupt or corrupt the formatting or structure of a production environment, and trying to test these new solutions on site collections or different web applications is not enough because the solutions often install directly on the SharePoint servers themselves. If there is an issue, the issue will be reflected in the entire farm.

The integration or testing server farm should be similar to the existing production environment, with the same add-ons and solutions installed, and should ideally include restores of production site collections to make it as similar as possible to the existing production environment. All changes and new products or solutions installed into an environment should subsequently be tested first in this environment.

The integration/testing servers will host the final SharePoint sites and site collections as per the business requirements. QA will test all the business functionality here. The customer can also do their user acceptance testing here before going live on the production server.

After the user acceptance test has passed, all the sites and site collections will be deployed to the production server.

Advantages of the integration/testing server:

  1. Clean environments and same physical architecture as production
  2. QA can test all dependent business functionality at one place
  3. Customer can participate in UAT
  4. Easy deployment/migration from integration testing server to production server

Production Farm

The final stage is rolling your farm into a production environment. At this stage, you will have incorporated the necessary solution and infrastructure adjustments that were identified during the user acceptance test stage. These servers are generally on the customer's premises. The development team and testing team do not have control over them.

There are various 3rd party tools available in the market for SharePoint data protection, administration, migration, compliance and integration.


Summary

In this way, you can design a physical architecture where the Development SharePoint Farm and the developers' workstations are integrated with TFS 2010. TFS and the content database are connected to the testing server or testing farm, where all the artifacts and content are integrated for QA and UAT. Finally, after UAT, everything is deployed to the production farm.

You can use virtual machines (VMs) for all the servers and workstations for an effective infrastructure, because if a server crashes for some reason, you can quickly create a new VM for the needed OS from images.

Note: In the above figure, the integration/testing farm and production farm are shown as single servers just for clarity, but in reality they will be as large as the development farm, with a number of front-end web servers and content database servers. All the servers run Windows Server 2008 R2 SP2 64-bit. Please visit here for more information on hardware and software requirements for SharePoint Foundation 2010.

How To : Implement a WCF 4 Routing Manager


Features

  • Manageable Routing Service
  • Mapping physical to logical endpoints
  • Managing routing messages from Repository
  • No Service interruptions
  • Adding more outbound endpoints on the fly
  • Changing routing rules on the fly
  • .Net 4 Technologies

 Introduction

The recently released Microsoft .NET 4 technology provides a foundation for metadata-driven model applications. This article focuses on one of the unique components of the WCF 4 model for logical connectivity, the Routing Service. I will demonstrate how we can extend its usage for an enterprise application driven by metadata stored in the Runtime Repository. For more details about the concept, strategy and implementation of metadata-driven applications, please see my previous articles, Contract Model and Manageable Services.

I will highlight the main features of the Routing Service; more details can be found in the links [1], [2], [3].

Architecturally, the Router represents a logical connection between inbound and outbound endpoints. It is a "short wire" virtual connection in the same appDomain space, described by metadata stored in the router table. Based on these rules, messages are exchanged via the router in a specific built-in router pattern. For instance, the WCF Routing Service allows the following Message Exchange Patterns (MEPs) with contract options:

  • OneWay (SessionMode.Required, SessionMode.Allowed, TransactionFlowOption.Allowed)
  • Duplex (OneWay, SessionMode.Required, CallbackContract, TransactionFlowOption.Allowed)
  • RequestResponse (SessionMode.Allowed, TransactionFlowOption.Allowed)

In addition to the standard MEPs, the Routing Service has a built-in pattern for Multicast messaging and Error handling.

 

 

The routing process is very straightforward: the untyped message received on an inbound endpoint is forwarded to one or more outbound endpoints based on prioritized rules represented by message filter types. The rules are evaluated starting with the highest priority. The router's complexity depends on the number of inbound and outbound endpoints, the types of message filters, the MEPs and the priorities.

Note that the MEP between the routed inbound and outbound endpoints must be the same; for example, a OneWay message can be routed only to a OneWay endpoint. Therefore, to route a Request/Response message to a OneWay endpoint, a service mediator must be invoked to change the MEP and then route back to the router.

A routing service enables an application to be physically decoupled into business-oriented services that are then logically connected (tightly) in the centralized repository using declarative programming. From the architecture point of view, we can consider a routing service as a central integration point (hub) for private and public communication.

The following picture shows this architecture:

 

 

As we can see in the above picture, the centralized place of connectivity is represented by the Routing Table. The Routing Table is the key component of the Routing Service, where all connectivity is declaratively described. This metadata is projected at runtime for message flow between the inbound and outbound points. Messages are routed between the endpoints in a fully transparent manner. The above picture shows service integration with MSMQ technology via routing. The queues can be plugged into the Routing Table integration part like any other service.

From the metadata-driven model point of view, the Routing Table metadata is part of the logical business model, centralized in the Repository. The following picture shows an abstraction of the Routing Service for the Composite Application:

 

 

 

The virtualization of the connectivity allows a composite application to be encapsulated from the physical connectivity. This is a great advantage for model-driven architecture, where internally all connectivity can use well-known contracts. Note that the private channels (see the above picture – logical connectivity) are between well-known endpoints such as the Routing Service and the Composite Application.

Decoupling sources (consumers) from the target endpoints, with their logical connection driven by metadata, enables our application integration process to offer additional features such as:

  • Mapping physical to logical endpoints
  • Virtualization connectivity
  • Centralized entry point
  • Error handling with alternative endpoints
  • Service versioning
  • Message versioning
  • Service aggregation
  • Business encapsulation
  • Message filtering based on priority routing
  • Metadata driven model
  • Protocol bridging
  • Transacted message routing

As I mentioned earlier, in the model-driven architecture the Routing Table is part of the metadata stored in the Repository as a logical centralized model. The model is created at design time and then physically decentralized to the target. Note that deploying a model for runtime projection requires recycling the host process. To minimize this interruption, the Routing Service can help isolate this glitch by managing the Routing Table dynamically.

The WCF Routing Service has a built-in feature for updating the Routing Table at runtime. The default bootstrap of the routing service loads the table from the config file (routing section). We need to plug in a custom service for updating the routing table at runtime. That's the job of the Routing Manager component, see the following picture:

 

 

The Routing Manager is a custom WCF service responsible for refreshing the table located in the Routing Service from the config file or the Repository. The first inbound message (after the host process is opened) will boot the table from the Repository automatically. At runtime, when the routing service is active, the Routing Manager must receive an inquiry message to load routing metadata from the Repository and then update the table.

Basically, the Routing Manager enables our virtualized composite application to manage this integration process without recycling the host process. There is one conceptual architectural change in this model. As we know, the Repository pushes (deploys) metadata for runtime projection to the host environment (for instance IIS, WAS, etc.). This runtime metadata is stored in a local, private repository such as the file system and is used for booting our applications and services.

That's the standard Push phase: Model -> Repository -> Deploy. The Routing Service is a good candidate for introducing the concept of a Pull phase, Runtime -> Repository, where the model created at design time can be changed during processing in a transparent manner. Therefore, we can also decide at design time which runtime metadata is used by the Pull model.

Isolating the metadata used for boot projection from the runtime updates enables our Repository to administer the application without interruptions; for instance, we can change a physical endpoint or binding, or plug in a new service version or a new contract. Of course, we can build a more sophisticated tuning system where the runtime metadata is created and/or modified by an analyzer. In this case, we have a full control loop between the Repository and the Runtime.

Finally, the following picture is an example of the manageable routing service:

 

 

As the above picture shows, the manageable Routing Service represents a central virtualization of the connectivity between workflow services, services, queues and web services. The Runtime Repository holds the routing metadata for the boot process and also for runtime changes. The routing behavior can be changed on the fly in the common shareable database (Repository) and then synchronized with the runtime model by the Routing Manager.

One more "hidden" feature of the Routing Service can be seen in the above picture: scalability. By decomposing the application into small business-oriented services and composing them via a routing service, we can control the scalability of the application. We can assign a localhost or cluster address to the logical endpoints based on the routing rules.

From the virtualization point of view, the following picture shows a manageable composite application:

 

As you can see, the Composite Application is driven by logical endpoints and therefore can be declared within the logical model without physical knowledge of where they are located. Only the Runtime Repository knows these locations, and they can easily be managed based on requirements, for example dev, staging, QA, production, etc.

This article focuses on the Routing Manager hosted on IIS/WAS. I am assuming you have some working experience with, or understand the features of, the WCF4 Routing Service.

OK, let’s get started with Concept and Design of the Manageable Routing Service.

 

Concept and Design

The concept and design of the Routing Manager hosted on IIS/WAS is based on extending the WCF4 Routing Service with additional features such as downloading metadata from the Repository in a loosely coupled manner. The plumbing part is implemented as a common extension to the services (RoutingService and RoutingManager) named routingManager. By adding the routingManager behavior to the routing behavior (see the following code snippet), we can boot a routing service from the repositoryEndpointName.

The routingManager behavior has the same attributes as the routing one, with an additional attribute for declaring the repository endpoint. As you can see, it is a very straightforward configuration for starting up the routing service. Note that the routingManager behavior is paired with the routing behavior.

Now, in the case of updating the routing behavior at runtime, we need to plug in a Routing Manager service and its service behavior routingManager. The following code snippet shows an example of activation without an .svc file within the same application:

and its service behavior:

Note that the repositoryEndpointName can be addressed to a different Repository than the one we use at process startup.

 

Routing Manager Contract

The Routing Manager contract allows communication with the Routing Manager service outside of the appDomain. The contract operations are designed for broadcast and point-to-point patterns. An operation can be addressed to a specific target or to an unknown target (*), based on the machine and application names. The following code snippet shows the IRoutingManager contract:

[ServiceContract(Namespace = "urn:rkiss/2010/09/ms/core/routing", 
  SessionMode = SessionMode.Allowed)]
public interface IRoutingManager
{
  [OperationContract(IsOneWay = true)]
  void Refresh(string machineName, string applicationName);

  [OperationContract]
  void Set(string machineName, string applicationName, RoutingMetadata metadata);

  [OperationContract]
  void Reset(string machineName, string applicationName);

  [OperationContract]
  string GetStatus(string machineName, string applicationName);
}

The Refresh operation represents an inquiry event for refreshing a routing table from the Repository. Based on this event, the Routing Manager picks up "fresh" routing metadata from the Repository (addressed by repositoryEndpointName) and updates the routing table.
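
For illustration only, a caller (for example, the Repository or an administration tool) could send this inquiry roughly as follows, assuming the basicHttpBinding endpoint at the ~/PilotManager.svc address used later in this article:

using System.ServiceModel;

// Minimal sketch of sending the Refresh inquiry to the Routing Manager.
var factory = new ChannelFactory<IRoutingManager>(
    new BasicHttpBinding(),
    new EndpointAddress("http://localhost/Router/PilotManager.svc"));

IRoutingManager manager = factory.CreateChannel();
manager.Refresh("*", "*");            // "*" targets any machine/application
((IClientChannel)manager).Close();
factory.Close();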

Routing Metadata Contract

This is the data contract between the Routing Manager and the Repository. I decided to pass the routing configuration section as XML-formatted text. This choice gives me integration and implementation simplicity and easy future migration. The following code snippet is an example of the routing section:

and the following is the data contract for the repository:

[DataContract(Namespace = "urn:rkiss/2010/09/ms/core/routing")]
public class RoutingMetadata
{
  [DataMember]
  public string Config { get; set; }

  [DataMember]
  public bool RouteOnHeadersOnly { get; set; }

  [DataMember]
  public bool SoapProcessingEnabled { get; set; }

  [DataMember]
  public string TableName { get; set; }

  [DataMember]
  public string Version { get; set; }
}

OK, that should be all from the concept and design point of view, except for one thing that was necessary to figure out, especially for hosting services on IIS/WAS. As we know [1], there is a RoutingExtension class in the System.ServiceModel.Routing namespace with a workhorse method, ApplyConfiguration, for updating the routing service's internal tables.

I used the following small trick to access this RoutingExtension from another service such as the RoutingManager hosted by its own factory.

The first routing message will store the reference to the RoutingExtension in an AppDomain data slot under a well-known name, such as the value of the CurrentVirtualPath.

serviceHostBase.Opened += delegate(object sender, EventArgs e)
{
    ServiceHostBase host = sender as ServiceHostBase;
    RoutingExtension re = host.Extensions.Find<RoutingExtension>();
    if (configuration != null && re != null)
    {
        re.ApplyConfiguration(configuration);

        lock (AppDomain.CurrentDomain.FriendlyName)
        {
            AppDomain.CurrentDomain.SetData(this.RouterKey, re);
        }
    }
};

Note that both services, RoutingService and RoutingManager, are hosted under the same virtual path; therefore the RoutingManager can get this data slot value and cast it to the RoutingExtension. The following code snippet shows this fragment:

private RoutingExtension GetRouter(RoutingManagerBehavior manager)
{
  // ...
      
  lock (AppDomain.CurrentDomain.FriendlyName)
  {
    return AppDomain.CurrentDomain.GetData(manager.RouterKey) as RoutingExtension;
  }         
}

OK, now it is time to show the usage of the Routing Manager.

 

Usage and Test

The features and usage of the manageable Router (RoutingService + RoutingManager), hosted on IIS/WAS, can be demonstrated via the following test solution. The solution consists of the Router and simulators for the Client, the Services and the LocalRepository.

The primary focus is on the Router, as shown in the above picture. As you can see, there are no .svc files, just a configuration file. That's right: all tasks to set up and configure the manageable Router are based on the metadata stored in the web.config and the Repository (see the later discussion).

The Router project has been created as an empty web project under the http://localhost/Router/ virtual path, adding an assembly reference to the RoutingManager project and declaring the following sections in the web.config.

Let's describe these sections in more detail.

Part 1. – Activations

These sections are mandatory for any Router: activation of the RoutingService and RoutingManager services, the behavior extension for routingManager, and the client endpoint for Repository connectivity. The following picture shows these sections:

Note that this part also declares the relative addresses for our Router. In the above example, the entry point for routing messages is ~/Pilot.svc and the RoutingManager is reachable at ~/PilotManager.svc.

Part 2. – Service Endpoints

In these sections, we have to declare endpoints for the two services, RoutingService and RoutingManager. The RoutingService endpoints use the untyped contracts defined in the System.ServiceModel.Routing namespace. In this test solution, we are using two contracts, one for notification (OneWay) and the other for RequestReply message exchange. The binding used is basicHttpBinding, but it can be any standard or custom binding based on the requirements.

The RoutingManager service is configured with a simple basicHttpBinding endpoint, but in a production version it should use a custom UDP channel for broadcasting the message that triggers the metadata pull from the Repository across the cluster.

 

Part 3. – Plumbing RoutingService and Manager together

This section attaches the RoutingService to the RoutingManager so that its RoutingExtension can be accessed at runtime.

The first extBehavior section configures the routing boot process. The second one is the section for downloading routing metadata at runtime.

 OK, that’s all for creating a manageable Router.

The following picture shows the full solution:

 

 

As you can see, there is the manageable Router (hosted in IIS/WAS) for integration of the composite application. On the left-hand side is the Client simulator, which generates two operations (Notify and Echo) to Service1 and/or Service2 based on the routing rules. The Client and Services are regular WCF self-hosted applications; the client can talk directly to the Services or via the Router.

Local Repository

The Local Repository represents storage for metadata such as the logical centralized application model, the deployment model, the runtime model, etc. Models are created at design time and, based on the deployment model, are pushed (physically decentralized) to the targets where they are projected at runtime. For example, the RoutingManager assembly and web.config are metadata for deploying the model from the Repository to the IIS/WAS target.

At runtime, some components have the capability to update their behavior based on new metadata pulled from the Repository. One such component is the manageable Router (RoutingService + RoutingManager). Building an enterprise Repository and tools is not a simple task; see more details about the Microsoft strategy here [6] and interesting examples here [7], [8].

In this article, I included a very simple Local Repository for routing metadata, with a self-hosted service, to demonstrate the capability of the RoutingManager service. The routing metadata is described by the system.serviceModel section. The following picture shows a configuration root section for remote routing metadata:

 

As you can see, the content of the system.serviceModel section is similar to the target web.config. Note that the Repository holds only these sections; they are related to the routing metadata. By selecting the RoutingTable tab, the routing rules are displayed in table form:

Any change in the Routing Table updates the local routing metadata when you press the Finish button, but the runtime Router must be notified about the change by pressing the Refresh button. In this scenario, the LocalRepository sends an inquiry message to the RoutingManager to pull the new routing metadata. You can see this action in the Status tab.

Routing Rules

The above picture shows the Routing Table as a representation of the routing rules mapped to the routing section in the configuration metadata. This test solution has a pre-built routing ruleset with four rules. Let's describe these rules. Note that we are using an XPath filter on the message body, therefore the option RouteOnHeadersOnly must be unchecked; otherwise the Router will throw an exception.

The Router is deployed with two inbound endpoints, SimplexDatagram and RequestReply, therefore a received message will be routed based on the following rules:

Request/Reply rules

First, with the highest priority (level 3), the message is buffered and evaluated against the XPath body expression

starts-with(/s11:Envelope/s11:Body/rk:Echo/rk:topic, 2)

If the XPath expression is true, the copied message is routed to the outbound endpoint TE_Service1; otherwise the copied message is forwarded to TE_Service2 (see the filter aa).

SimplexDatagram rules

This is a multicast routing (same priority 2) to the two outbound endpoints TE_Service1 and TE_Service2. If the message cannot be delivered to TE_Service2, the alternative endpoint from the backup list is used, in this case Test1 (queue). Note that the message is not buffered; it is passed directly (filterType = EndpointName) from the endpointNotify endpoint.
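
These rules live in the routing section of the Repository metadata, edited via the Routing Table form. Purely as an illustration of the same idea expressed against the Routing Service API, a programmatic equivalent might look roughly like this; the endpoint addresses, bindings and the rk namespace URI are placeholders, and the complementary "aa" filter is omitted:

using System;
using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;
using System.ServiceModel.Routing;

// Helper that builds an outbound (client) endpoint for the router.
static ServiceEndpoint Outbound(Type routerContract, Binding binding, string address)
{
    return new ServiceEndpoint(
        ContractDescription.GetContract(routerContract),
        binding,
        new EndpointAddress(address));
}

static RoutingConfiguration BuildRoutingConfiguration()
{
    // Request/Reply destination for the Echo rule (addresses are placeholders).
    var echoService1 = Outbound(typeof(IRequestReplyRouter), new BasicHttpBinding(), "http://localhost:8001/Service1");

    // OneWay destinations for the Notify rule, with the Test1 queue as backup for Service2.
    var notifyService1 = Outbound(typeof(ISimplexDatagramRouter), new BasicHttpBinding(), "http://localhost:8001/Service1");
    var notifyService2 = Outbound(typeof(ISimplexDatagramRouter), new BasicHttpBinding(), "http://localhost:8002/Service2");
    var test1Queue     = Outbound(typeof(ISimplexDatagramRouter), new NetMsmqBinding(), "net.msmq://localhost/private/Test1");

    var table = new MessageFilterTable<IEnumerable<ServiceEndpoint>>();

    // Priority 3: Echo requests whose topic starts with "2" go to TE_Service1.
    var ns = new XPathMessageContext();          // predefines the s11/s12 envelope prefixes
    ns.AddNamespace("rk", "urn:rkiss/demo");     // placeholder contract namespace
    table.Add(
        new XPathMessageFilter("starts-with(/s11:Envelope/s11:Body/rk:Echo/rk:topic, 2)", ns),
        new List<ServiceEndpoint> { echoService1 },
        3);

    // Priority 2: multicast messages arriving on the endpointNotify endpoint to both services;
    // in the second entry the Test1 queue acts as the backup endpoint for TE_Service2.
    table.Add(new EndpointNameMessageFilter("endpointNotify"),
        new List<ServiceEndpoint> { notifyService1 }, 2);
    table.Add(new EndpointNameMessageFilter("endpointNotify"),
        new List<ServiceEndpoint> { notifyService2, test1Queue }, 2);

    return new RoutingConfiguration(table, false);   // false = RouteOnHeadersOnly off (the XPath rule needs the body)
}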

 

Test

Testing the Router is a very simple process. Launch the Client, Service1, Service2 and Local Repository programs, create a virtual directory in IIS/WAS for the http://localhost/Router project, and the solution is ready for testing. The instruction steps are as follows:

  1. On the Client form, press the Echo button. You should see a message routed to Service2
  2. Press the Echo button again; the message is routed to Service1
  3. Press the Echo button again; the message is routed to Service2
  4. Change the Router combo box to: http://localhost/Router/Pilot.svc/Notify
  5. Press the Event button and see notification messages in both services

You can play with the Local Repository to change the routing rules and see how the Router handles message delivery.

To test a backup rule, create a transactional queue Test1, close the Service2 console program and follow steps 2 and 3. You should see messages in Service1 and in the queue.

 

Troubleshooting

There are two kinds of troubleshooting in the manageable Router. The first is the built-in standard WCF tracing log, viewed with the Microsoft Service Trace Viewer program. The Router web.config file already specifies this section for diagnostics, but it is commented out.

The second way to troubleshoot the router, with a focus on message flow, is the built-in custom message trace inspector. This inspector is injected automatically by the RoutingManager and its service behavior. We can use the DebugView utility from Windows Sysinternals to view the trace output from the Router.

 

Some Router Tips

1. Centralizing physical outbound connections into one Router enables the use of logical connections (alias addresses) for multiple applications. The following picture shows this schema:

Instead of using physical outbound endpoints for each application router, we can create one master Router that virtualizes all public outbound endpoints. In this scenario, we need to manage only the master router for each physical outbound connection. The same strategy can be used for physical inbound endpoints. Another advantage of this router hierarchy is the centralization of pre-processing and post-processing services.

2. As I mentioned earlier, the Router enables decomposition of the business workflow into small business-oriented services, for instance managers, workers, etc. The composition of the business workflow is declared by the Routing Table, which represents a kind of dispatcher of the messages to the correct service. We should use a WS binding with custom headers to simplify dispatching messages via a router based on the headers only.

 

Implementation

Based on the concept and design, there are two pieces to implement: the RoutingManager service and its extension behavior. Both modules use the same custom config library, which contains useful static methods for getting CLR types from metadata stored in an XML-formatted resource. I used this library in my previous articles (.NET 3) and extended it for the new routing section.

The following code snippets show how straightforward the implementation is.

The following code snippet demonstrates the Refresh implementation of the RoutingManager service. We need to get some configurable properties from the routingManager behavior and access the RoutingExtension. Once we have them, the RepositoryProxy.GetRouting method is invoked to obtain the routing metadata from the Repository.

We get the configuration section as XML-formatted text, just as we would from the application config file. Now, using the workhorse config library, we can deserialize the text resource into a CLR object, such as MessageFilterTable&lt;IEnumerable&lt;ServiceEndpoint&gt;&gt;.

Then a RoutingConfiguration instance is created and passed to the router.ApplyConfiguration process. The rest of the magic is done in the RoutingService.

public void Refresh(string machineName, string applicationName)
{
  RoutingConfiguration rc = null;
  
  RoutingManagerBehavior manager = 
    OperationContext.Current.Host.Description.Behaviors.Find<RoutingManagerBehavior>();
  
  RoutingExtension router = this.GetRouter(machineName, applicationName, manager);

  try
  {
    if (router != null)
    {
      RoutingMetadata metadata = 
        RepositoryProxy.GetRouting(manager.RepositoryEndpointName, 
         Environment.MachineName, manager.RouterKey, manager.FilterTableName);
         
      string tn = metadata.TableName == null ? 
                  manager.FilterTableName : metadata.TableName;
        
      var ft = ServiceModelConfigHelper.CreateFilterTable(metadata.Config, tn);

      rc = new RoutingConfiguration(ft, metadata.RouteOnHeadersOnly);
      rc.SoapProcessingEnabled = metadata.SoapProcessingEnabled;
         
      router.ApplyConfiguration(rc);

      // insert a routing message inspector
      foreach (var filter in rc.FilterTable)
      {
        foreach (var se in filter.Value as IEnumerable<ServiceEndpoint>)
        {
          if (se.Behaviors.Find<TraceMessageEndpointBehavior>() == null)
            se.Behaviors.Add(new TraceMessageEndpointBehavior());
        }
      }
    }
  }
  catch (Exception ex)
  {
     RepositoryProxy.Event( ...);
  }
}

The last action in the above Refresh method is injecting the TraceMessageEndpointBehavior for troubleshooting messages within the RoutingService on the trace output device.

The next code snippets show some details from the ConfigHelper library:

public static MessageFilterTable<IEnumerable<ServiceEndpoint>> CreateFilterTable(string config, string tableName)
{
  var model = ConfigHelper.DeserializeSection<ServiceModelSection>(config);
  
  if (model == null || model.Routing == null || model.Client == null)
    throw new Exception("Failed for validation ...");

  return CreateFilterTable(model, tableName);
}

 

The following code snippet shows a generic method for deserializing a specific section type from XML-formatted config text:

public static T DeserializeSection<T>(string config) where T : class
{
  T cfgSection = Activator.CreateInstance<T>();
  byte[] buffer = 
    new ASCIIEncoding().GetBytes(config.TrimStart(new char[]{'\r','\n',' '}));
  XmlReaderSettings xmlReaderSettings = new XmlReaderSettings();
  xmlReaderSettings.ConformanceLevel = ConformanceLevel.Fragment;

  using (MemoryStream ms = new MemoryStream(buffer))
  {
    using (XmlReader reader = XmlReader.Create(ms, xmlReaderSettings))
    {
      try
      {
        Type cfgType = typeof(ConfigurationSection);
        
        MethodInfo mi = cfgType.GetMethod("DeserializeSection", 
                BindingFlags.Instance | BindingFlags.NonPublic);
                
        mi.Invoke(cfgSection, new object[] { reader });
      }
      catch (Exception ex)
      {
        throw new Exception("....");
      }
    }
  }
  return cfgSection;
}

Note that the above static method is very powerful and useful for getting any type of config section from an XML-formatted text resource, which allows us to use metadata stored in a database instead of the file system (application config file).
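
For example (a hedged sketch; how the text is loaded from the database and the "RoutingTable" name are placeholders), the routing section pulled from the Repository can be turned into a typed section and a filter table like this:

// xmlText holds the routing fragment fetched from the Repository (for example, a
// database column) rather than from web.config.
string xmlText = LoadRoutingSectionFromRepository();   // placeholder for your own data access

var model = ConfigHelper.DeserializeSection<ServiceModelSection>(xmlText);
var table = ServiceModelConfigHelper.CreateFilterTable(xmlText, "RoutingTable");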

 

 

Conclusion

In conclusion, this article described a manageable Router based on the WCF4 Routing Service. The manageable Router allows routing rules to be changed dynamically from the centralized logical model stored in the Repository. The Router represents a virtualization component for mapping logical endpoints to physical ones, and it is a fundamental component in a model-driven distributed architecture.

 

References:

[1] http://msdn.microsoft.com/en-us/library/ee517421(v=VS.100).aspx

[2] http://blogs.msdn.com/routingrules/archive/2010/02/09/routing-service-features-dynamic-reconfiguration.aspx

[3] http://weblogs.thinktecture.com/cweyer/2009/05/whats-new-in-wcf4-routing-service—or-look-ma-just-one-service-to-talk-to.html

[4] http://dannycohen.info/2010/03/02/wcf-4-routing-service-multicast-sample/

[5] http://blogs.profitbase.com/tsenn/?p=23

[6] SQL Server Modeling CTP and Model-Driven Applications

[7] Model Driven Content Based Routing using SQL Server Modeling CTP – Part I

[8] Model Driven Content Based Routing using SQL Server Modeling CTP – Part II

[9] Intermediate Routing

 

Now available – A SharePoint XML Indexing Connector

Most organizations have several systems holding their data. Data from these systems must be indexable and made available for search on the common Internal Search portal.

While most of the different data silos are able to dump or export their full dataset as XML, SharePoint does not include an OOTB general purpose XML indexing connector.

The SharePoint Server Search Connector Framework is known to be overly complex, and documentation out there about this subject is very limited.

There are basically two types of custom search connectors for SharePoint 2010 that can be implemented: the .NET Assembly Connector and the Custom Connector. More details about the differences between them can be found here. Mainly, a Custom Connector is agnostic of external content types, whereas each .NET Assembly Connector is specific to one external content type, and whenever the external content type changes, the .NET Assembly Connector must be re-compiled and re-deployed. If the entity model of the external system is dynamic and large-scale, a Custom Connector should be considered over the .NET Assembly Connector.

Also, a Custom Connector provides administration user interface integration, but a .NET Assembly Connector does not.

The XML File Indexing Connector

The XML File Indexing Connector presented here is a custom search indexing connector that can be used to crawl and index XML files. In this series of posts, I am going to first show you how to install, set up and configure the connector. In future posts, I will go into more implementation details, where we'll look into the code to see how the connector is implemented and how you can customize it to suit specific needs.

This post is divided into the following sections:

  • Installing and deploying the connector
  • Creating a new Content Source using the connector
  • Using the Start Address of the Content Source to configure the connector
  • Automatic and dynamic generation of Crawled Properties from XML elements
  • Full Crawl vs. Incremental Crawl
  • Optimizations and considerations when crawling large XML files
  • Future plans

Installing and deploying the connector

The package that can be downloaded at the bottom of this post includes the following components:

  1. model.xml: This is the BCS model file for the connector
  2. XmlFileConnector.dll: This is the DLL file of the connector
  3. The Folder XmlFileConnector: This includes the Visual Studio Solution of the connector

Follow these steps to install the connector:

  1. Install the XmlFileConnector.dll in the Global Assembly Cache on the SharePoint application server(s):

gacutil -i "XmlFileConnector.dll"

  2. Open the SharePoint 2010 Management Shell on the application server.
  3. At the command prompt, type the following command to get a reference to your FAST Content SSA:

$fastContentSSA = Get-SPEnterpriseSearchServiceApplication -Identity "FAST Content SSA"

  4. Add the following registry key to the application server:

[HKEY_LOCAL_MACHINE]\SOFTWARE\Microsoft\Office Server\14.0\Search\Setup\ProtocolHandlers\xmldoc

  5. Set the value of the registry key to "OSearch14.ConnectorProtocolHandler.1".
  6. Add the new Search Crawl Custom Connector:

New-SPEnterpriseSearchCrawlCustomConnector -SearchApplication $fastContentSSA -Protocol xmldoc -Name xmldoc -ModelFilePath "XmlFileConnectorModel.xml"

  7. Restart the SharePoint Server Search 14 service. At the command prompt run:

net stop osearch14

net start osearch14

  8. Create a new Crawled Property Category for the XML File Connector. Open the FAST Search Server 2010 for SharePoint Management Shell and run the following command:

New-FASTSearchMetadataCategory -Name "Custom XML Connector" -Propset "BCC9619B-BFBD-4BD6-8E51-466F9241A27A"

Note that the Propset GUID must be the one specified above, since this GUID is hardcoded in the connector code as the Crawled Properties category which will receive discovered crawled properties.

Creating a new Content Source using the XML File Connector

  1. Using the Central Administration UI, on the Search Administration page of the FAST Content SSA, click Content Sources, then New Content Source.
  2. Type a name for the content source, and in Content Source Type, select Custom Repository.
  3. In Type of Repository, select xmldoc.
  4. In Start Addresses, type the URLs for the folders that contain the XML files you want to index. The URL should be inserted in the following format:

xmldoc://hostname/folder_path/#x=:doc:id;;urielm=url;;titleelm=title#

The following section describes the different parts of the Start Address.

Using the Start Address of the Content Source to configure the connector

The Start Address specified for the Content Source must be of the following format. The XML File Connector will read this Start Address and use it when crawling the XML content.

xmldoc://hostname/folder_path/#x=:doc:id;;urielm=url;;titleelm=title#

xmldoc

xmldoc is the protocol corresponding to the registry key we added when installing the connector.

//hostname/folder_path/

//hostname/folder_path/ is the full path to the folder containing the XML files to crawl.

Example: //demo2010a/c$/enwiki

#x=:doc:id;;urielm=url;;titleelm=title#

#x=:doc:id;;urielm=url;;titleelm=title# is the special part of the Start Address that is used as configuration values by the connector:

x=:doc:id

Defines which elements in the XML file to use as the document and identifier elements. This configuration parameter is mandatory.

For example, say we have an XML file as follows:

<feed>
  <document>
    <id>Some id</id>
    <title></title>
    <url>some url</url>
    <field1>Content for field1</field1>
    <field2>Content for field2</field2>
  </document>
  <document>
    ...
  </document>
</feed>

Here the value for the x configuration parameter would be x=:document:id

urielm=url

urielm=url defines which element in the XML file to use as the URL. This will end up as the URL of the document used by the FS4SP processing pipeline and will go into the "url" managed property. This configuration parameter can be left out. In this case, the default URL of the document will be as follows: xmldoc://id/[id value]

titleelm=title

titleelm=title defines which element in the XML file to use as the Title. This will end up as the Title of the document, and the value of this element will go into the title managed property. This configuration parameter can be left out. If the parameter is left out, then the title of the document will be set to "notitle".

Automatic and dynamic generation of Crawled Properties from XML elements

The XML File Connector uses advanced BCS techniques to automatically discover crawled properties from the content of the XML files.

All elements in the XML document will be created as crawled properties. This provides the ability to dynamically crawl any XML file, without the need to pre-define the properties of the entities in the BCS Model file and re-deploy the model file for each change.

This is defined in the BCS Model file on the XML Document entity. The TypeDescriptor element named DocumentProperties defines a list of dynamic property names and values. The property names in this list will automatically be discovered by the BCS framework, and corresponding crawled properties will automatically be created for each property.

The following snippet from the BCS Model file shows how this is configured:

<TypeDescriptor Name="DocumentProperties" TypeName="XmlFileConnector.DocumentProperty[], XmlFileConnector, Version=1.0.0.0, Culture=neutral, PublicKeyToken=109e5afacbc0fbe2" IsCollection="true"> ... </TypeDescriptor>

In addition to the ability to discover crawled properties automatically from the XML content, the XML File Connector also creates a default property with the name "XMLContent". This property contains the raw XML of the document being processed. This enables the use of the XML content in a custom pipeline extensibility stage for further processing.
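
The DocumentProperty type itself ships with the connector source included in the download; purely as an illustration of the idea (this is a sketch, not the actual connector code), collecting one name/value pair per XML element plus the raw XML might look like this:

using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

// Hypothetical shape of a dynamic document property: one entry per XML element.
public class DocumentProperty
{
    public string Name { get; set; }
    public object Value { get; set; }
}

public static class DocumentPropertyBuilder
{
    // Sketch: turn every child element of a <document> node into a property,
    // plus the raw XML of the node as the default "XMLContent" property.
    public static DocumentProperty[] CollectProperties(XElement document)
    {
        var properties = document.Elements()
            .Select(e => new DocumentProperty { Name = e.Name.LocalName, Value = e.Value })
            .ToList();

        properties.Add(new DocumentProperty { Name = "XMLContent", Value = document.ToString() });
        return properties.ToArray();
    }
}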

Example

Say that we have the following XML file to index.

Wikipedia: Nobel Charitable Trust  http://en.wikipedia.org/wiki/Nobel_Charitable_Trust  The Nobel Charitable Trust (NCT) is a charity set up by members of the Swedish Nobel family, i.e.  Michael Nobel Energy Award  http://en.wikipedia.org/wiki/Nobel_Charitable_Trust#Michael_Nobel_Energy_Award  References  http://en.wikipedia.org/wiki/Nobel_Charitable_Trust#References  ...

When running the connector for the first time, we see the following crawled properties discovered in the Custom XML Connector Crawled Properties category.

Full Crawl vs. Incremental Crawl

The BCS Search Connector Framework is implemented in such a way that it keeps track of all crawled content in the crawl log database. For each search content source, a log of all document ids that have been crawled is stored. This log is used when running subsequent crawls of the content source, be it either a full or an incremental crawl.

When running an incremental crawl, the BCS framework compares the list of document ids it received from the connector against the list of ids stored in the crawl log database. If there are any document ids within the crawl log database that have not been received from the connector, the BCS framework assumes that these documents have been deleted and will attempt to issue deletion operations to the search system. This will cause many inconsistencies, and will make it very difficult to keep both the actual dataset and the BCS crawl log in sync.

So, when running either a full crawl or an incremental crawl of the content source, the full dataset of the XML files must be available for traversal. If there are any items missing in subsequent crawls, the SharePoint crawler will consider those as subject for deletion, and go ahead and delete them from the search index.

One possible workaround to tackle this limitation, and try to avoid (re-)generating the full data set each time something minor changes, would be to split the XML content into files of different known update frequencies, where content that is known to have higher update rates is placed in separate input folders with separately configured content sources within the FAST Content SSA.

Optimizations and considerations when crawling large XML files

When the XML File Connector starts crawling content, it will load and parse the XML files it finds one at a time. So, for each XML file found in the input directory, the whole XML file is read into memory and cached for all subsequent operations by the crawler, until all items found in the XML file have been submitted to the indexing subsystem. At that point, the memory cache is cleared, and the next file is loaded and parsed, until all files have been processed.

For the reason just described, it is recommended not to have large single XML files, but to split the content across multiple XML files, each consisting of a reasonable number of items that can be easily parsed and cached in memory.

Contact me at tomas.floyd to find out more about this connector and other custom-developed SharePoint and Office 365 Web Parts and Apps!

FREE – SharePoint 2013 Search Query Tool

This tool helps you understand and learn how the available parameters on the Search REST service should be formatted.

What does this tool offer:

  • Issue HTTP GET or POST search queries.
  • See how the different Query parameters are formatted.
  • Authenticate using different users to debug security trimming issues.
  • Use against your tenant on SharePoint Online and authenticate using your SPO User ID.

Go get it from:

http://sp2013searchtool.codeplex.com/

     

     

    Troubleshooting WCF Services during Runtime with WMI

    One of the coolest WCF features when it comes to troubleshooting is message logging and tracing. With message logging, you can write all the messages your WCF service receives and returns to a log file. With tracing, you can log any trace message emitted by the WCF infrastructure, as well as traces emitted from your service code.

    The issue with message logging and tracing is that you usually turn them off in production, or at the very least reduce the amount of data they output, mainly to conserve disk space and reduce the latency involved in writing the logs to disk. Usually this isn’t a problem, until you find yourself needing to turn them back on, for example when you detect an issue with your service and need the log files to track down the origin of the problem.

    Unfortunately, changing the configuration of your service requires restarting it, which might result in loss of data, your service becoming unavailable for a couple of seconds, and possibly the problem resolving itself if the strange behavior was caused by a faulty state of the service, leaving you with nothing left to investigate.

    There is, however, a way to change the logging configuration of the service at runtime, without needing to restart it, with the help of the Windows Management Instrumentation (WMI) environment.

    In short, WMI provides you with a way to view information about running services in your network. You can view a service’s process information, service information, and endpoint configuration, and even change some of the service’s configuration at runtime, without needing to restart the service.
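
    For example, once the WMI provider is enabled (step 3 below) and the service is running, a single PowerShell command lists the WCF services currently exposed through WMI. This is only a minimal sketch; it assumes an elevated PowerShell session, and Name and ProcessId are the same properties used later in this walkthrough.

    Get-WmiObject Service -Namespace "root\servicemodel" -ComputerName localhost | Select-Object Name, ProcessId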

    Little has been written about the WMI support in WCF. The basics are documented on MSDN, with instructions on what you need to set in your configuration to make the WMI provider available. The MSDN article also provides a link to download the WMI Administrative Tools, which you can use to manage services with WMI. However, that tool requires some work on your end before getting to the configuration you need to change, and it also requires running IE as an administrator with backwards compatibility set to IE 9, which makes the entire process a bit tedious. Instead, I found it easier to write six lines of PowerShell script that do the job.

    The following steps demonstrate how to create a WCF service with minimal message logging and tracing configuration, start it, test it, and then use PowerShell with WMI to change the logging configuration in runtime.

    1. Open Visual Studio 2012 and create a new project using the WCF Service Application template.

    After the project is created, the service code is shown. Notice that in the GetDataUsingDataContract method, an exception is thrown when the composite parameter is null.

    2. In Solution Explorer, right-click the Web.config file and then click Edit WCF Configuration.

    3. In the Service Configuration Editor window, click Diagnostics, and then enable the WMI Provider, Message Logging, and Tracing.

    By default, enabling message logging will log all messages at the transport layer as well as any malformed messages. Enabling tracing will log all activities and any trace message with severity Warning and up (Warning, Error, and Critical). Although those settings are useful during development, in production we probably want to change them so that we get smaller log files containing only the most relevant information.

    4. Under MessageLogging, click the link next to Log Level, uncheck Transport Messages, and then click OK.

    The above setting will only log malformed messages, which are messages that do not fit any of the service’s endpoints, and are therefore rejected by the service.

    5. Under Tracing, click the link next to Trace Level, uncheck Activity Tracing, and then click OK.

    The above setting prevents every operation from being logged, except those that output a trace message of severity Warning and up. You can read more about the different types of trace messages on MSDN: http://msdn.microsoft.com/en-us/library/ms733025(v=vs.110).aspx

    By default, message logging only logs the headers of a message. To also log the body of a message, we need to change the message logging configuration. Unfortunately, we cannot change that setting in runtime with WMI, so we will set it now.

    6. In the configuration tree, expand Diagnostics, click Message Logging, and set the LogEntireMessage property to True.

    7. Press Ctrl+S to save the changes, close the configuration editor window, and return to Visual Studio.

    The trace listener we are using buffers the output and will only write to the log files when the buffer is full. Since this is a demonstration, we would like to see the output immediately, and therefore we need to change this behavior.

    8. In Solution Explorer, open the Web.config file, locate the <system.diagnostics> section, and place the following XML before the </system.diagnostics> closing tag: <trace autoflush="true"/>

    Now let us run the service, test it, and check the created log files.

    9. In Solution Explorer, click Service1.svc, and then press Ctrl+F5 to start the WCF Test Client without debugging.

    10. In the WCF Test Client window, double-click the GetDataUsingDataContract node, and then click Invoke. Repeat this step 2-3 times.

    Note: If a Security Warning dialog appears, click OK.

    11. In the Request area, open the drop-down next to the composite parameter, and set it to (null).

    12. Click Invoke and wait for the exception to show. Notice that the exception is general (“The server was unable to process the request due to an internal error.”) and does not provide any meaningful information about the true exception. Click Close to close the dialog.

    Let us now check the content of the two log files. We should be able to see the traced exception, but the message will not have been logged.

    13. Keep the WCF Test Client tool running and return to Visual Studio. Right-click the project and then click Open Folder in File Explorer.

    14. In the File Explorer window, double-click the web_tracelog.svclog file. The file will open in the Service Trace Viewer tool.

    15. In the Service Trace Viewer tool, click the 000000000000 activity in red, and then click the row starting with “Handling an exception”. In the Formatted tab, scroll down to view the exception information.

    As you can see in the above screenshot, the trace file contains the entire exception information, including the message, and the stack trace.

    Note: The configuration evaluation warning message which appears first on the list means that the service we are hosting does not have any specific configuration, and therefore is using a set of default endpoints. The two exceptions that follow are ones thrown by WCF after receiving two requests that did not match any endpoint. Those requests originated from the WCF Test Client tool when it attempted to locate the service’s metadata.

    Next, we want to verify no message was logged for the above argument exception.

    16. Return to the File Explorer window, select the web_messages.svclog file, and drag it to the Service Trace Viewer tool. Drop the file anywhere in the tool.

    There are now two new rows for the malformed messages sent by the WCF Test Client metadata fetching. There is no logged message for the faulted service operation.

    Imagine this is the state you now have in your production environment. You have a trace file that shows the service is experiencing problems, but you only see the exception. To properly debug such issues we need more information about the request itself, and any other information which might have been traced while processing the request.

    To get all that information, we need to turn on activity tracing and include messages from the transport level in our logs.

    If we open the Web.config file and change it manually, this would cause the Web application to restart, as discussed before. So instead, we will use WMI to change the configuration settings in runtime.

    17. Keep the Service Trace Viewer tool open, and open a PowerShell window as an Administrator.

    18. To get the WMI object for the service, type the following commands in the PowerShell window and press Enter:

    $wmiService = Get-WmiObject Service -filter "Name='Service1'" -Namespace "root\servicemodel" -ComputerName localhost

    $processId = $wmiService.ProcessId

    $wmiAppDomain = Get-WmiObject AppDomainInfo -filter "ProcessId=$processId" -Namespace "root\servicemodel" -ComputerName localhost

    Note: The above script assumes the name of the service is 'Service1'. If you have changed the name of the service class, change the script and run it again. If you want to change the configuration of a remote service, replace the localhost value in the ComputerName parameter with your server name.
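
    Optionally, before changing anything, you can read the current values of the two settings you are about to modify directly from the same object; this is a quick way to confirm you are connected to the right application domain:

    $wmiAppDomain.LogMessagesAtTransportLevel

    $wmiAppDomain.TraceLevel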

    19. To turn on transport layer message logging, type the following command and press Enter: $wmiAppDomain.LogMessagesAtTransportLevel = $true

    20. To turn on activity tracing, type the following command and press Enter: $wmiAppDomain.TraceLevel = "Warning, ActivityTracing"

    21. Lastly, to save the changes you made to the service configuration, type the following command and press Enter: $wmiAppDomain.Put()

    22. Switch back to the WCF Test Client. In the Request area, open the drop-down next to the composite parameter, and set it to a new CompositeType object. Click Invoke 2-3 times to generate several successful calls to the service.

    23. In the Request area, open the drop-down next to the composite parameter, and set it to (null).

    24. Click Invoke and wait for the exception to show. Click Close to close the dialog.

    25. Switch back to the Service Trace Viewer tool and press F5 to refresh the activities list.

    You will notice that now there is a separate set of logs for each request handled by the service. You can read more on how to use the Service Trace Viewer tool to view traces and troubleshoot WCF services on MSDN. http://msdn.microsoft.com/en-us/library/aa751795(v=vs.110).aspx

    26. From the activity list, select the last row in red that starts with “Process action”.

    You will notice that now you can see the request message, the exception thrown in the service operation, and the response message, all in the same place. In addition, the set of traces is shown for each activity separately, making it easy to identify a specific request and its related traces.

    27. On the right pane, select the first “Message Log Trace” row, click the Message tab, and observe the body of the message.

    Now that we have the logged messages, we can select the request message and try to figure out the cause of the exception. As you can see, the composite parameter is empty (nil).

    If this was a production environment, you would probably want to restore the message logging and tracing to their original settings at this point. To do this, return to the PowerShell window, and re-run the commands from before with their previous values:

    $wmiAppDomain.LogMessagesAtTransportLevel = $false

    $wmiAppDomain.TraceLevel = "Warning"

    $wmiAppDomain.Put()

    Before we conclude, now that your service is manageable through WMI, you can use other commands to get information about the service and its components. For example, the following command will return the service endpoints’ information:

    Get-WmiObject Endpoint -filter "ProcessId=$processId" -Namespace "root\servicemodel" -ComputerName localhost
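
    If you only need a quick overview, you can store the result and first discover which properties are available with Get-Member before selecting the ones you care about. Name and ListenUri are shown here as examples only; treat them as assumptions and verify them against the Get-Member output.

    $endpoints = Get-WmiObject Endpoint -filter "ProcessId=$processId" -Namespace "root\servicemodel" -ComputerName localhost

    $endpoints | Get-Member -MemberType Property          # discover the available properties

    $endpoints | Select-Object Name, ListenUri            # example properties; adjust as needed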