Tag Archives: Visual Studio 2013

A Look At : Application Management and Governance in SharePoint 2013

Summary: Learn how to govern applications for SharePoint 2013 by creating a customization policy and understanding the app model, branding, and life-cycle management.


How will you manage the applications that are developed for your environment? What customizations do you allow in your applications, and what are your processes for managing those applications?

 

For effective and manageable applications, your organization should consider the following:

  • Customization policy   SharePoint 2013 includes customizable features and capabilities that span multiple product areas, such as business intelligence, forms, workflow, and content management. Customization can introduce risks to the stability, maintenance, and security of the environment. To support customization while controlling its scope, you should develop a customization policy.
  • Life-cycle management   Follow best practices to manage applications and keep your environments in sync.
  • Branding   If you are designing an information architecture and a set of sites to use across an organization, consider including branding in your governance plan. A formal set of branding policies helps ensure that sites consistently use enterprise imagery, fonts, themes, and other design elements.
  • Solutions or apps for SharePoint?   Decide whether a solution or an app for SharePoint would be the best choice for specific customizations.

Get developer guidance about customizing and branding SharePoint 2013 on MSDN: Build sites for SharePoint 2013.

This article is part of a set of articles about governance.

The What is governance? poster gives a summary of this content. Download the PDF version or Visio version, or Zoom into the model in full detail with Zoom.it from Microsoft.

Determine the types of customizations you want to allow and how to manage them. Your customization policy should include:

  • Service-level descriptions   What are the parameters for supporting and managing customizations in your environments? See Service-level agreements.
  • Guidelines for updating customizations   How do you manage changes to customizations, and how do you roll out those changes to your environments? Consider ways to manage source code, such as a source control system and standards for documenting the code.
  • Processes for analyzing   How do you understand whether a particular customization is working well in your environment, or how do you decide which ones to create, change, or retire?
  • Approved tools for customization   Consider development standards, such as coding best practices and the tools that you will use across your organization. For example, you should decide whether to allow the use of SharePoint Designer 2013 and Design Manager, and specify which site elements can be customized and by whom.
  • Process for piloting and testing customizations   How do you test and deploy customizations? How many people should be in a pilot testing group? What are your standards for testing and validating customizations?
  • Who is responsible for ongoing support   Who will be responsible for supporting customizations in your environments—individual teams or a central group?
  • Guidelines for packaging and deploying customizations   Do you have individual packages for each customization, or do you include several in a feature or solution? Which customizations should be apps for SharePoint instead of solutions? How do you ensure that customizations in one environment do not affect the rest of your SharePoint implementation?
  • Specific policies regarding each potential type of customization   What types of customizations do you allow?

    For more information about kinds of customizations and their potential risks, see the Customizations table later in this article. For more information about processes for managing customizations, see the white paper SharePoint Products and Technologies customization policy. Most of this content still applies to SharePoint 2013.

  • Policies around using the App Catalog and SharePoint Store   Which apps for SharePoint do you want to make available to your organization? Can users purchase apps directly? See Solutions or apps for SharePoint? later in this article for more information.

The highly customizable design of SharePoint products enables you to provide the look, behavior, or functionality that meets your business needs. Customizations can introduce risk to your environment, whether that risk is to the environment’s performance, availability, or supportability. Conversely, a “no customizations” policy severely restricts your organization’s ability to take advantage of the SharePoint platform.

Not all customizations are the same. You must decide carefully which kinds of customizations to allow in your environment and ensure that they support the performance, availability, and supportability you want for your environment. Your governance policy should balance the level of acceptable risk against the business needs of your organization.

What is considered a customization? All of the following are considered kinds of customizations in SharePoint products:

  • Configuration   Using the SharePoint user interface to configure SharePoint products.
  • Branding   Changing logos, styles, colors, master pages and page layouts, and so on to create a custom look for your SharePoint sites. See more about branding.
  • Custom code   Using developer tools to add or change functionality in SharePoint products or to interact with other applications. Risk can vary depending on the kind of functionality and the level of trust (full-trust solutions should be used rarely; consider apps for SharePoint first).
    Tip: Sandboxed solutions are deprecated in this release, so they are not the best option for custom code in the long term.

Some customizations have very little risk or impact on your environment. Others have the potential for much higher risk and impact. The following table provides examples of different kinds of customizations, the risk level associated with that kind of customization, and potential issues that you might face if you allow that kind of customization.

Customizations

Each entry below gives the risk level, the types of customizations with examples, and the considerations or impact.

  • Unsupported/High   Unsupported customizations, such as direct changes to the database schema or modifying files on the file system.
    • Will not be supported through Microsoft Customer Support.
    • Will be unable to upgrade.
    Do not use.
  • Moderate to high   Creating applications that interact with or redirect actions in key pipelines, such as events, claims, and so on.
    • Potential for service outage or performance issues.
    • Might require rework at upgrade.
  • Moderate to low   Using a custom Web Part outside a sandbox environment, creating custom actions such as adding a menu item, or creating a custom site provisioning process.
    • Short or long-term performance issues or page errors.
    • Might require rework at upgrade.
  • Low   Using solutions in a sandbox environment.
    • Short-term performance issues; you can avoid some performance issues by using resource throttling and quotas.
  • Very low to no risk   Using apps for SharePoint, or using functionality within the product or configurations, such as associating a workflow with a list or using an instance of a built-in Web Part.
    • Minor configuration or page errors that would have to be addressed. Apps can be uninstalled or updated.
Note:
For more information about customizations and upgrade, see Considerations for specific customizations.

 

 

Also, when you think through the customizations to allow in your environment, consider carefully whether a particular customization is necessary. If it recreates functionality that is already available in the product (such as creating a Web Part that does the same thing as the Content Editor Web Part or the Content by Query Web Part), then that might be unnecessary work.

Consider first whether the standard functionality can do what you want, or check the SharePoint Store to see if there is an app for SharePoint available that does what you need.

Follow these best practices to manage applications based on SharePoint 2013 throughout their life cycle:

  • Use separate development, preproduction, and production environments, and keep these environments as synchronized as possible so that you can accurately test your customizations.
  • Test all customizations before releasing them for the first time, and test again after any updates have been made, before you release them to your production environment.
  • Use source code control and solution and feature versioning to track changes to code (a versioning sketch follows this list).
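
As one hedged illustration of the versioning point above (a common convention, not guidance taken from this article), a farm solution assembly is often versioned so that its strong name stays stable while each build remains identifiable:

C#

// Properties/AssemblyInfo.cs - a minimal versioning sketch for a SharePoint solution assembly.
// The version numbers are placeholders.
using System.Reflection;

// Keep AssemblyVersion stable so that SafeControl entries, web part references, and other
// full-strong-name references do not break on every deployment.
[assembly: AssemblyVersion("1.0.0.0")]

// Bump AssemblyFileVersion (and the WSP/feature version) on every build so you can tell
// exactly which build is deployed in each environment.
[assembly: AssemblyFileVersion("1.0.3.0")]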

Development, test, and production environments

Consistent branding with a corporate style guide makes for more cohesive-looking sites and easier development. Store approved themes in the theme gallery for consistency so that users will know when they visit the site that they are in the right place.

SharePoint 2013 includes a new feature to use for branding, Design Manager. By using Design Manager, you can create a visual design for your website with whatever web design tool or HTML editor you prefer and then upload that design into SharePoint. Design Manager is the central hub and interface where you manage all aspects of a custom design.

Creating the visual design of a site often fits into a larger process, in which multiple people or organizations are involved. For a roadmap of the tasks from a larger perspective, see Design and branding in SharePoint 2013.

SharePoint 2013 has a new development model based on apps for SharePoint. Apps for SharePoint are self-contained pieces of functionality that extend the capabilities of a SharePoint website. An app may include SharePoint features such as lists, workflows, and site pages, but it can also use a remote web application and remote data in SharePoint. An app has few or no dependencies on any other software on the device or platform where it is installed, other than what is built into the platform. Apps have no custom code that runs on the SharePoint servers.
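
To make this concrete, here is a minimal sketch (not from this article; the site URL is a placeholder and authentication is omitted) of how an app's remote web application might read SharePoint data through the client-side object model instead of running code on the SharePoint servers:

C#

// Minimal CSOM sketch: read list titles from a SharePoint site from a remote web application.
// Requires references to Microsoft.SharePoint.Client and Microsoft.SharePoint.Client.Runtime.
using System;
using Microsoft.SharePoint.Client;

class ReadListsSample
{
    static void Main()
    {
        // Placeholder URL; a real app for SharePoint would get the host web URL and an
        // access token from its launch parameters (OAuth or the cross-domain library).
        using (var context = new ClientContext("https://contoso.sharepoint.com/sites/dev"))
        {
            Web web = context.Web;
            context.Load(web.Lists, lists => lists.Include(l => l.Title));
            context.ExecuteQuery();   // single round trip to the server

            foreach (List list in web.Lists)
            {
                Console.WriteLine(list.Title);
            }
        }
    }
}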

The guidance for whether to use apps for SharePoint or SharePoint solutions is to:

  • Design apps for end users

    Apps for SharePoint:

    • Are easy for users (tenant administrators and site owners) to discover and install.
    • Use safe SharePoint extensions.
    • Provide the flexibility to develop future upgrades.
    • Can integrate with cloud-based resources.
    • Are available for both SharePoint Online and on-premises SharePoint sites.
  • Use farm solutions for administrators

    SharePoint solutions:

    • Can access the server-side object-model APIs that are needed to extend SharePoint management, configuration, and security (see the sketch after this list).
    • Can extend Central Administration, Windows PowerShell cmdlets, timer jobs, custom backups, and so on.
    • Are installed by administrators.
    • Can have farm, web application, or site-collection scope.
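
For contrast, below is a hedged sketch of the kind of server-side object model code that only a farm solution can run; the class name and property bag key are illustrative, not from any product sample.

C#

// Hypothetical farm-solution feature receiver using the server-side object model
// (Microsoft.SharePoint). Code like this runs on the SharePoint servers themselves,
// which is exactly what apps for SharePoint avoid.
using System;
using Microsoft.SharePoint;

public class SampleFeatureReceiver : SPFeatureReceiver
{
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        // For a web-scoped feature, Feature.Parent is the SPWeb the feature is activated on.
        var web = properties.Feature.Parent as SPWeb;
        if (web == null)
        {
            return;
        }

        // Example of server-side configuration a farm solution might perform:
        // stamp the site's property bag so administrators can see when the solution touched it.
        web.Properties["SampleSolution_ActivatedOn"] = DateTime.UtcNow.ToString("o");
        web.Properties.Update();
    }
}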

Go to MSDN to get more information about the new development model, Apps for SharePoint compared with SharePoint solutions, and Deciding between apps for SharePoint and SharePoint solutions.

Set a policy for using apps for SharePoint in your organization. Can users purchase and download apps? How do you make your organization’s apps available? How do you tell if they’re being used?

  • SharePoint Store   Determine whether users can purchase or download apps from the SharePoint Store.
  • App Catalog   Make specific apps for SharePoint available to your users by adding them to the App Catalog.
  • App requests   Configure app requests to control which apps are purchased and how many licenses are available.
  • Monitor apps   Monitor specific apps in SharePoint Server 2013 to check for errors and to track usage.


A Look At : Visual Studio Codelens

A Visual Studio Full of Panels

Let’s say you’re looking at a code file, specifically a method.  Your Visual Studio environment may look like this:

image

I’m looking at the second Create method (the one that takes a Customer).  If I want to know where this method may be referenced, I can “Find All References”, either by selecting it from the context menu, or using Shift + F12. Now I have this:

image

Great!  Now, if I decide to change this code, will it still work?  Will my tests still work?  In order to figure that out, I need to open my Test Explorer window.

image

Which gives me a slightly more cluttered VS environment:

image

(Now I can see my tests, but I still need to try and identify which tests actually exercise my method.)

Another great point of context to have is knowing whether I’m looking at the latest version of my code.  I’d hate to make changes to an out-of-date version and earn myself a merge conflict.  So next I need to see the history of the file.

image

Cluttering my environment even more (because I don’t want to take my eyes off my code, I need to snap it somewhere else), I get this:

image

Okay, time out.

Yes, this looks pretty cluttered, but I can organize my panels better, right?  I can move some panels to a second monitor if I want, right?  Right on both counts.  By doing so, I can get a multi-faceted view of the code I’m looking at.  However, what if I start looking at another method, or another file?  The “context” of those other panels doesn’t follow what I’m doing.  Therefore, if I open the EmployeesController.cs file, my “views” are out of sync!

image

That’s not fun.

A Visual Studio Full of Context

So all of the above illustrates two main benefits of something like CodeLens.  CodeLens inserts easy, powerful, at-a-glance context for the code you’re looking at.  If it’s not turned on, you can enable it in Options:

image

While you’re there, look at all the information it’s going to give you!

Once you’ve enabled CodeLens, let’s reset to the top of our scenario and see what we have:

image

Notice an “overlay” line of text above each method.  That’s CodeLens in action. Each piece of information is called a CodeLens Indicator, and provides specific contextual information about the code you’re looking at.  Let’s look more closely.

image

References

image

References shows you exactly that – references to this method of code.  Click on that indicator and you can see and do some terrific things:

image

It shows you the references to this method, where those references are, and even allows you to display those references on a Code Map:

image

Tests

image

As you can imagine, this shows you tests for this method.  This is extremely helpful in understanding the viability of a code change.  This indicator lets you view the tests for this method, interrogate them, as well as run them.

image

As an example, if I double-click the failing test, it will open the test for me.  In that file, CodeLens will inform me of the error:

image

Dramatic pause: This CodeLens indicator is tremendously valuable in a TDD (Test-Driven Development) workflow. Imagine sitting with your test file and code file side by side, turning on “Run Tests After Build”, and using the CodeLens indicator to get immediate feedback about your progress.

Authors

image

This indicator gives you very similar information to the next one, but lists the authors of this method for at-a-glance context.  Note that the latest author is the one noted in the CodeLens overlay.  Clicking on this indicator provides several options, which I’ll explain in the next section.

image

Changes

image

The Changes indicator tells you about the history of the file as it exists in TFS, specifically changesets.  First, the overlay tells you how many recent changes there are to this method in the current working branch.  Second, if you click on the indicator you’ll see there are several valuable actions you can take right from that context:

image

What are we looking at?

  • Recent check-in history of the file, including Changeset ID, Branch, Changeset comments, Changeset author, and Date/time.
  • Status of my file compared to history (notice the blue “Local Version” tag telling me that my code is 1 version behind current).
  • Branch icons tell me where each change came from (current/parent/child/peer branch, farther branch, or merge from parent/child/unrelated (baseless)).

Right-clicking on a version of the file gives you additional options:

image

  • I can compare my working/local version against the selected version
  • I can open the full details of the Changeset
  • I can track the Changeset visually
  • I can get a specific version of the file
  • I can even email the author of that version of the file
  • (Not shown) If I’m using Lync, I can also collaborate with the author via IM, video, etc.

This is a heck of a lot easier way to understand the churn or velocity of this code.

Incoming Changes

image

The Incoming Changes indicator was added in 2013 Update 2, and gives you a heads up about changes occurring in other branches by other developers.  Clicking on it gives you information like:

image

Selecting the Changeset gives you the same options as the Authors and Changes indicators.

This indicator has a strong moral for anyone who’s ever been burned by having to merge a bunch of stuff as part of a forward or reverse integration exercise:  If you see an incoming change, check in first!

Work Items (Bugs, Work Items, Code Reviews)

image

I’m lumping these last indicators together because they are effectively filtered views of the same larger content: work items.  Each of these indicators gives you information about work items linked to the code in TFS.

image

image

Knowing if/when there were code reviews performed, tasks or bugs linked, etc., provides fantastic insight about how the code came to be.  It answers the “how” and “why” of the code’s current incarnation.

 

A couple final notes:

  • The indicators are cached so they don’t put unnecessary load on your machine.  As such they are scheduled to refresh at specific intervals.  If you don’t want to wait, you can refresh the indicators yourself by right-clicking the indicators and choosing “Refresh CodeLens Team Indicators”

image

  • There is an additional CodeLens indicator in the Visual Studio Gallery – the Code Health Indicator. It gives method maintainability numbers so you can see how your changes are affecting the overall maintainability of your code.
  • You can dock the CodeLens indicators as well – just know that if they dock, they act like other panels and will be static.  This means you’ll have to refresh them manually (this probably applies most to the References indicator).
  • If you want to adjust the font colors and sizes (perhaps to save screen real estate), you can do so in Tools –> Options –> Fonts and Colors.  Choose “Show settings for” and set it to “CodeLens”.

HTML5 SharePoint Pic Web Part Released and Available !!

This is a Sandbox web part control to display a matrix of image thumbnails.

Use it to build a Metro-style UI or a picture gallery to show products, news, or a social team page that integrates with pictures, etc. All this from any SharePoint picture library.

Supports : SharePoint 2010 & 2013 On-Premise Web Part,  SharePoint Online Web Part

FEATURES OF THE WEB PART (ver. 1.0)

     

PREVIEW EXAMPLE OF THE CONTROL





 

How To : Use the Modelling SDK to create UML Diagrams

Use Case Diagrams

A use case diagram is a summary of who uses your application and what they can do with it. It
describes the relationships among requirements, users, and the major components of the system, and
provides an overall view of how the system is used.

Activity Diagrams
Use case diagrams can be broken down into activity diagrams. An activity diagram shows the software
process as the flow of work through a series of actions. It can be a useful exercise to draw an
activity diagram showing the major tasks that a user will perform with the software application.

 

Sequence Diagrams

 

Sequence diagrams display interactions between different objects. This interaction usually takes
place as a series of messages between the different objects. Sequence diagrams can be considered an
alternate view to the activity diagram. A sequence diagram can show a clear view of the steps in a
use case. Figure 14-3 shows an example of a sequence diagram.
Component Diagrams

 

Component diagrams help visualize the high-level structure of the software system. They show the
major parts of a system and how those parts interact and depend on each other. One nice feature of
component diagrams is that they show how the different parts of the design interact with each other,
regardless of how those individual parts are actually implemented. Figure 14-4 shows an example of
a component diagram.

 

Class Diagrams

 

Class diagrams describe the objects in the application system. They do this without referencing any
particular implementation of the system itself. This type of UML modeling diagram is also referred
to as a conceptual class diagram. Figure 14-5 shows an example of a class diagram.

How to: Export UML Diagrams to Image Files

You can export a UML document from Visual Studio to an image file under program control. For example, you might want to do this as part of automatic document generation.

If you want to export a document to an image manually, you can copy and paste the shapes from a diagram into other programs such as Word. You can also print documents to XPS format. For more information, see Export Images of Diagrams.

The following code defines a shortcut menu command, also known as a context menu command, that saves an image to a file.

Note:

To make this code work as a menu command, you must incorporate it into a MEF component. For more information, see How to: Define a Menu Command on a Modeling Diagram.

The code first uses GetObject<T>() to get the Diagram of the underlying implementation. This type has a method CreateBitmap.

namespace SaveToImage
{
  using System.ComponentModel.Composition; // for [Import], [Export]
  using System.Drawing; // for Bitmap
  using System.Drawing.Imaging; // for ImageFormat
  using System.Linq; // for collection extensions
  using System.Windows.Forms; // for SaveFileDialog
  using Microsoft.VisualStudio.Modeling.Diagrams;
    // for Diagram
  using Microsoft.VisualStudio.Modeling.ExtensionEnablement;
    // for IGestureExtension, ICommandExtension, ILinkedUndoContext
  using Microsoft.VisualStudio.ArchitectureTools.Extensibility.Presentation;
    // for IDiagramContext
  using Microsoft.VisualStudio.ArchitectureTools.Extensibility.Uml;
    // for designer extension attributes


  /// <summary>
  /// Called when the user clicks the menu item.
  /// </summary>
  // Context menu command applicable to any UML diagram 
  [Export(typeof(ICommandExtension))]
  [ClassDesignerExtension]
  [UseCaseDesignerExtension]
  [SequenceDesignerExtension]
  [ComponentDesignerExtension]
  [ActivityDesignerExtension]
  class CommandExtension : ICommandExtension
  {
    [Import]
    IDiagramContext Context { get; set; }

    public void Execute(IMenuCommand command)
    {
      // Get the diagram of the underlying implementation.
      Diagram dslDiagram = Context.CurrentDiagram.GetObject<Diagram>();
      if (dslDiagram != null)
      {
        string imageFileName = FileNameFromUser();
        if (!string.IsNullOrEmpty(imageFileName))
        {
          Bitmap bitmap = dslDiagram.CreateBitmap(
           dslDiagram.NestedChildShapes,
           Diagram.CreateBitmapPreference.FavorClarityOverSmallSize);
          bitmap.Save(imageFileName, GetImageType(imageFileName));
        }
      }
    }

    /// <summary>
    /// Called when the user right-clicks the diagram.
    /// Set Enabled and Visible to specify the menu item status.
    /// </summary>
    /// <param name="command"></param>
    public void QueryStatus(IMenuCommand command)
    {
      command.Enabled = Context.CurrentDiagram != null 
        && Context.CurrentDiagram.ChildShapes.Count() > 0;
    }

    /// <summary>
    /// Menu text.
    /// </summary>
    public string Text
    {
      get { return "Save To Image..."; }
    }


    /// <summary>
    /// Ask the user for the path of an image file.
    /// </summary>
    /// <returns>image file path, or null</returns>
    private string FileNameFromUser()
    {
      SaveFileDialog dialog = new SaveFileDialog();
      dialog.AddExtension = true;
      dialog.DefaultExt = "image.bmp";
      dialog.Filter = "Bitmap ( *.bmp )|*.bmp|JPEG File ( *.jpg )|*.jpg|Enhanced Metafile (*.emf )|*.emf|Portable Network Graphic ( *.png )|*.png";
      dialog.FilterIndex = 1;
      dialog.Title = "Save Diagram to Image";
      return dialog.ShowDialog() == DialogResult.OK ? dialog.FileName : null;
    }

    /// <summary>
    /// Return the appropriate image type for a file extension.
    /// </summary>
    /// <param name="fileName"></param>
    /// <returns></returns>
    private ImageFormat GetImageType(string fileName)
    {
      string extension = System.IO.Path.GetExtension(fileName).ToLowerInvariant();
      ImageFormat result = ImageFormat.Bmp;
      switch (extension)
      {
        case ".jpg":
          result = ImageFormat.Jpeg;
          break;
        case ".emf":
          result = ImageFormat.Emf;
          break;
        case ".png":
          result = ImageFormat.Png;
          break;
      }
      return result;
    }
  }
}

How To : Design the Physical Architecture to Support Collaborative Development and ALM of SharePoint Foundation 2010 Application

Introduction

This article explains the physical architecture that fits best for collaborative development and ALM of a SharePoint Foundation 2010 application, what servers and tools are needed, and how they play key roles in ALM of SharePoint Foundation 2010. The purpose of this article is to provide an overall understanding of the various servers and farms connected to each other in SharePoint Foundation.

Background

A basic understanding of different server operating systems and SharePoint Foundation 2010 is required.

Solution

Application Life-cycle Management (ALM) is the co-ordination of development life-cycle activities—including requirements, modeling, development, build, and testing. Recently, ALM has expanded beyond the application and the software development life cycle to also include business solution governance, infrastructure management, operations, and support.

You can use ALM to help align your organization in the context of a software solution in business, development, and operations. With an application development platform that supports ALM, you can provide integration between the various tools used and activities performed within each of these capabilities.

There are four main types of staging servers, along with a standalone developer environment, which play a key role in ALM of a SharePoint 2010 application:

  1. Development SharePoint Farm
  2. Team Foundation Server
  3. Integration/Testing Farm
  4. Production Farm
    +
    Developer’s Workstation

The below figure shows a physical architecture which depicts how each server is interconnected to support collaborative development and ALM for a SharePoint Foundation 2010 application:


Development SharePoint Farm

A SharePoint farm is fundamentally a collection of SharePoint role servers that provide for the base infrastructure required to house SharePoint sites. The farm level is the highest level of SharePoint architecture, providing a distinct operational boundary for a SharePoint environment. Each farm in an environment is a self-encompassing unit made up of one or more servers, such as web servers, service application servers, and SharePoint database servers.

A SharePoint development farm is needed because developers in an organization that makes heavy use of SharePoint often need environments to test new applications, web parts, solutions, and other SharePoint customizations. These developers often need a sandbox area where these farm-level features and solutions can be tested.

I have considered a two-tier topology for the SharePoint Foundation 2010 farm. However, it will be entirely based on the needs of your application. If your application is a relatively small intranet application, then you can choose a single-tier topology; or, if you are going to integrate another search server with Foundation, then you can choose a three-tier topology with an application server as the middle tier (remember that SharePoint Foundation 2010 doesn’t include enterprise search). It may make sense to deploy one or more development farms so that developers have the opportunity to run their tests and develop software for SharePoint independent of the existing production environment.

There are basically two types of servers included in two-tier development farm of SharePoint Foundation 2010:

  1. Web server
  2. Content database server

In the above figure, there are three front-end web servers and one SharePoint content database server. However, you can choose a single front-end web server connected to the content database server, based on your application needs and the architecture of the production environment. All web servers share the same content database. This is called a two-tier deployment farm, where the SharePoint server components and the content database are installed on separate servers. As I mentioned before, you can choose a one-tier, two-tier, or three-tier deployment topology based on your application architecture and the topology of the production architecture.

Each web server has SharePoint Foundation 2010 and the SharePoint extension for TFS 2010 installed on it. It needs the SharePoint extension for TFS 2010 to connect with Team Foundation Server for source control, build management & project management.

Advantages of the Development SharePoint Farm:

  1. Single place where SharePoint Admin can integrate all the final artifacts from multiple developers.
  2. Developers can sync with the latest SharePoint site on their standalone developer workstations.
  3. Admin can easily approve artifacts and migrate to integration server.
  4. It is a unit testing environment for developers where they can test dependent functionality or farm level features.

Team Foundation Server

Team Foundation Server plays a key role in ALM, providing source control, build management, and work item tracking. You can have TFS installed on the same server as the content database, but if you are going to use TFS build management, then it is advisable to have a separate Team Foundation Server, because it uses the CPU intensively when it processes builds.

As per the above figure, there is a separate Team Foundation Server which is connected to the SharePoint farm as well as to the standalone development workstations, so that it can provide source control for customized content as well as for developers’ artifacts and resources.

Advantages of TFS
  1. Source control for SharePoint artifacts and customization
  2. Build management for SharePoint
  3. Work item and bug tracking tool for SharePoint
  4. Admin console for all management activity
  5. Easy integration with SharePoint foundation server and VS 2010
  6. Easy check-in & check-out
  7. Web based console to manage ALM activity

Developer’s Workstation

As per the above figure, the developers’ environment includes two developer workstations. In practice, you can have as many workstations as your development team size requires.

Developer workstations should have the Windows 7 or Windows Vista operating system with a standalone SharePoint Foundation server and a local content database, so that one developer’s work doesn’t affect another developer’s and each can debug artifacts locally.

Each developer workstation will have the following installed:

  1. Windows 7 or Windows Vista 64-bit OS
  2. Standalone SharePoint Foundation 2010 server
  3. SharePoint Designer 2010
  4. Visual Studio 2010 (connected to TFS)

The developer workstation should be connected to Team Foundation Server 2010 so that when a developer finally completes an artifact, he can check it in to TFS so that other developers can take the latest code from TFS if needed. This way, parallel development can happen without affecting other developers’ work.

Integration/Testing Farm

Any production SharePoint environment should have a test environment in which new SharePoint web parts, solutions, service packs, patches, and add-ons can be tested. It is critical to deploy test farms, because many SharePoint add-ons could potentially disrupt or corrupt the formatting or structure of a production environment, and trying to test these new solutions on site collections or different web applications is not enough because the solutions often install directly on the SharePoint servers themselves. If there is an issue, the issue will be reflected in the entire farm.

Integration or testing server farm should be similar to the existing environments, with the same add-ons and solutions installed and should ideally include restores of production site collections to make it as similar as possible to the existing production environment. All changes and new products or solutions installed into an environment should subsequently be tested first in this environment.

The integration/testing servers will have the final SharePoint sites and site collections as per the business requirements. QA will test all the business functionality here. Customers can also do their user acceptance test before going live on the production server.

After the user acceptance test passes, all the sites and site collections will be deployed to the production server.

Advantages of the integration/testing server:

  1. Clean environments and same physical architecture as production
  2. QA can test all dependent business functionality at one place
  3. Customer can participate in UAT
  4. Easy deployment/migration from integration testing server to production server

Production Farm

The final stage is rolling your farm into a production environment. At this stage, you will have incorporated the necessary solution and infrastructure adjustments that were identified during the user acceptance test stage. These servers are generally on the customer’s premises; the development and testing teams do not have control over them.

There are various 3rd party tools available in the market for SharePoint data protection, administration, migration, compliance and integration.


Summary

So this way, you can design a physical architecture where the development SharePoint farm and the developers’ workstations are integrated with TFS 2010. TFS and the content database are connected to the testing server or testing farm, where all the artifacts and content will be integrated for QA and UAT. Finally, after UAT, everything is deployed to the production farm.

You can use VMs (virtual machines) for all the servers and workstations for an effective infrastructure, because if a server crashes for some reason, you can quickly create a new VM with the needed OS from images.

Note: In the above figure, the integration/testing farm and the production farm are shown as single servers just for clear understanding, but in reality they will be as large as the development farm, with a number of front-end web servers and a content database server. All the server OSes are Windows Server 2008 R2 SP2 64-bit. See the hardware and software requirements for SharePoint Foundation 2010 for more information.

How Microsoft’s Research Team is making Testing and the use of Pex & Moles Fun and Interesting

Try it out on the web

Go to www.pex4fun.com, and click Learn to start tutorials.

Main Publication to cite

Nikolai Tillmann, Jonathan De Halleux, Tao Xie, Sumit Gulwani, and Judith Bishop, Teaching and Learning Programming and Software Engineering via Interactive Gaming, in Proc. 35th International Conference on Software Engineering (ICSE 2013), Software Engineering Education (SEE), May 2013

 

Massive Open Online Courses (MOOCs) have recently gained high popularity among various universities and even in global societies. A critical factor for their success in teaching and learning effectiveness is assignment grading. Traditional ways of assignment grading are not scalable and do not give timely or interactive feedback to students.

 

To address these issues, we present an interactive-gaming-based teaching and learning platform called Pex4Fun. Pex4Fun is a browser-based teaching and learning environment targeting teachers and students for introductory to advanced programming or software engineering courses. At the core of the platform is an automated grading engine based on symbolic execution.

 

In Pex4Fun, teachers can create virtual classrooms, customize existing courses, and publish new learning material including learning games. Pex4Fun was released to the public in June 2010 and since then the number of attempts made by users to solve games has reached over one million.

 

Our work on Pex4Fun illustrates that a sophisticated software engineering technique – automated test generation – can be successfully used to underpin automatic grading in an online programming system that can scale to hundreds of thousands of users.

 

 

Code Hunt is an educational coding game.

Play, win levels, earn points!

Analyze with the capture code button

Code Hunt is a game! The player, the code hunter, has to discover missing code fragments. The player wins points for each level won, with an extra bonus for elegant solutions.

Code in Java or C#

Discover a code fragment

Play in Java or C#… or in both! Code Hunt allows you to play in those two curly-brace languages. Code Hunt provides a rich editing experience with syntax coloring, squiggles, search and keyboard shortcuts.

Learn algorithms

Discover a code fragment

As players progress through the sectors, they learn about arithmetic operators, conditional statements, loops, strings, search algorithms and more. Code Hunt is a great tool to build or sharpen your algorithm skills. Starting from simple problems, Code Hunt provides fun for the most skilled coders.

Graded for correctness and quality

Modify the code to match the code fragment

At the core of the game experience is an automated grading engine based on dynamic symbolic execution. The grading engine automatically analyzes the user code and the secret code to generate the result table.

MOOCs with Office Mix

Add Code Hunt to your presentations

Code Hunt can be included in any PowerPoint presentation and published as an Office Mix Online Lesson. Use this PowerPoint template to create Code Hunt-themed presentations.

Web no installs, it just works

It just works

Code Hunt runs in most modern browsers including Internet Explorer 10, 11 and recent versions of Chrome and Firefox. Yup, it works on iPad.

Extras play your own levels

Play your own levels

Extra Zones with new sectors and levels can be created and reused. Read the designer usage manual to create your own zone.

Compete so you think you can code

Compete

Code Hunt can be used to run small-scale or large-scale, private or public, coding competitions. Each competition gets its own set of sectors and levels and its own leaderboard to determine the outcome. Please contact codehunt@microsoft.com for more information about running your own competition using Code Hunt.

Credits the team

Capture the working code fragment

Code Hunt was developed by the Research in Software Engineering (RiSE) group and Connections group at Microsoft Research. Go to our Microsoft Research page to find a list of publications around Code Hunt.

A Look At : Visual Studio 2013 Update 3 CTP2

avatar[2]

New technology improvements in Visual Studio 2013 Update 3 CTP 2

 

Technology improvements

The following technology improvements were made in this release.

CodeLens

  • CodeLens jobs that are running on the Team Foundation Server job agent have been optimized for performance specifically while processing branching and merging changesets.

Debugger

  • If you have more than one monitor, Visual Studio will remember which monitor a Windows Store application was last run on.
  • You can debug x86 applications that are built by .NET Native.
  • When you analyze managed memory dump files, you can go to Definition and Find All References of the selected type.
  • You can debug the dump files from .NET Native applications by using Visual Studio debugger.

General

  • The Application Insights Tools for Visual Studio are now included in Visual Studio 2013 Update 3 CTP2. This initial integration as part of CTP2 includes some software updates and performance improvements.

IntelliTrace

  • You can skip straight to the details of performance events that are exported from Application Insights to IntelliTrace.

Profiler

  • The Performance and Diagnostics hub can open profiling sessions (.diagsession files) that were exported from the F12 tools in the latest developer preview of Internet Explorer 11.
  • Windows Presentation Foundation (WPF) and Win32 applications are supported by the new Memory Usage Tool in the Performance and Diagnostics Hub. For more information about how to use the tool to troubleshoot issues in native and managed memory, go to the following blog post:
    Diagnosing memory issues with the new Memory Usage Tool in Visual Studio

Release Management

  • You can use Windows PowerShell or the Windows PowerShell Desired State Configuration (DSC) feature to deploy and manage configuration data. Additionally, you can deploy to the following environments without having to set up Microsoft Deployment Agent:
    • Microsoft Azure environments
    • On-premise environments (Standard environments)

Testing Tools

  • You can add custom fields and custom work flows for test plans and test suites.
  • You can use Manage Test Suites permission for granting access to test suites.
  • You can track changes to test plans and test suites by using work item history.

For more information about these features, see the following Visual Studio Developer Tools blog article:

Test Plan and Test Suite Customization with TFS2013 Update3

Visual Studio IDE

  • CodeLens authors and changes indicators are now available for Git repositories.
  • In Code Map, links are styled by using colors, and they display in the improved Legend.
  • Debugger Map automatically zooms to the call stack entry of interest and preserves user’s zoom preferences.
  • You can drag binaries from the Windows file explorer to a code map, and then start exploring binaries by using Code Map.

Known issues

Testing Tools

  • When you try to upgrade an existing TFS server that has Test management data to Visual Studio 2013 Team Foundation Server Update 3 CTP2 in JPN or CHS, the upgrade of Test Case Management service does not work.

Visual Studio IDE

  • In Visual Studio 2013 Ultimate Update 3 CTP2 localized (non en-us) drops, when trying to request a Code Map, or a Dependency Graph for the solution, the directed graph is not produced.

 

For more information on Visual Studio 2013 and other upgrades, visit http://support.microsoft.com/kb/2933779/en-us

FREE Web Part – Random “Quote of the day” SP 2010 Web Part

The “Random Quote of the Day” Web Part randomly selects a quote from the specified SharePoint list or from the selected RSS feed.

A timer can then be set and the web part will read a new, random post and place it within the web part.

It is great for team/company motivation, to display code snippets in on a Team or KB Site – Your imagination is the limit.

A “Starter” Excel list containing quotes for a quick start is supplied with the download package.

Image

The Web Part can be used with SharePoint 2010.

You can configure the following web part properties:
  • the SharePoint list
  • the SharePoint list column or the RSS feed URL for external tips
  • enable or suppress the daily calendar display
  • show an optional picture or calendar
  • show a tip every day or on every page refresh
  • configure CSS settings for individual formatting

 

Contact me at tomas.floyd@outlook.com for this cool free web part – Totally free of charge

List of all Visual Studio ALM Virtual Machines

List of all Visual Studio ALM Virtual Machines

COURTESY OF THE ALM RANGERS

 


Given the growing list of virtual machines we have published to showcase various application lifecycle management scenarios, I created this blog post to be a permanent location you can bookmark any time you want to find the latest and greatest. An easy-to-remember URL for this page is http://aka.ms/ALMVMs.

Visual Studio 2013 ALM Virtual Machine and Hands-on-Labs / Demo Scripts
Last Update: January 9, 2014
This virtual machine is based on the RTM release of Visual Studio 2013 and includes hands-on-labs / demo scripts which showcase the new ALM capabilities introduced in this release. This VM was also upgraded on January 9, 2014 to include the content and hands-on-labs / demo scripts for capabilities which were originally introduced in Visual Studio 2010/2012.

Visual Studio 2012 Update 2 ALM Virtual Machine and Hands-on-Labs / Demo Scripts
Last Updated: April 17, 2013
This is the primary ALM virtual machine which demonstrates many of the scenarios introduced in Visual Studio 2010/2012 for application lifecycle management. This includes project management, source control, developer productivity and collaboration, testing, lab management, and IntelliTrace.

Team Foundation Server 2012 and Project Server 2013 Integration Virtual Machine and Hands-on-Labs / Demo Scripts
Last Updated: April 17, 2013
This VM highlights the integration scenarios which are possible between Team Foundation Server and Project Server which allow development teams to automatically synchronize the status of their projects with a centralized project management office (PMO).

Team Foundation Server 2012 and System Center 2012 Operations Manager Integration Virtual Machine and Hands-on-Lab / Demo Script
Last Updated: February 7, 2013
This VM highlights the integration scenarios which are possible between System Center 2012 Operations Manager and Team Foundation Server 2012 which allow operations teams to easily surface incidents from production in a rich, actionable way for developers to quickly diagnose these problems.

How To : Use Javascript to enable Listview Folder Navigation

When a list view web part is added to a page and users navigate to different folders in the list view, there’s no way for users to know the current folder hierarchy. Basically, the breadcrumb for the list view web part is missing. If there were a way of showing users the exact location in the folder hierarchy they are currently in (as shown in the image below), wouldn’t that be great?


Image 1: Folder Navigation in action

Deploy the FolderNavigation.js File

Download FolderNavigation.js, and then deploy the script either to the Layouts folder (in the case of full-trust solutions) or to the Master Page gallery (for SharePoint Online or full trust). I would recommend deploying to the Master Page gallery so that even if you move to the cloud, it works without modification. If you deploy to the Master Page gallery, you don’t need to make any changes, but if you deploy to the Layouts folder, you need to make a small change in the script, which is described in the section ‘Option 2: Deploy in Layouts Folder’.

 

Option 1: Deploy in Master Page Gallery (Suggested)

If you are dealing with SharePoint Online, you don’t have the option to deploy to the Layouts folder; in that case you need to deploy to the Master Page gallery. Note that deploying the script to other libraries (like Site Assets or a site library) will not work; you need to deploy to the Master Page gallery. Otherwise, you can deploy to the Layouts folder as described in the next section. To deploy to the Master Page gallery manually, follow these steps:

  1. Download the JavaScript file attached.
  2. Navigate to Root web => site settings => Master Pages (under group ‘Web Designer Galleries’).
  3. From the ‘New Document’ ribbon try adding ’JavaScript Display Template’ and then upload the FolderNavigation.js file and set properties as shown below:

    Image 2: Upload the JavaScript file in master page gallery

    In the above image, we’ve set the content type to ‘JavaScript Display Template’ and the ‘Target Control Type’ to ‘View’ so that the js file is used in list views. Also, I’ve set the target scope to ‘/’, which means it will be applied to all sites and subsites. If you have a site collection at ‘/sites/HR’, then you need to use ‘/sites/HR’ instead. You can also use a List Template ID if you need to.

 

Option 2: Deploy in Layouts Folder

If you are deploying the FolderNavigation.js file in the Layouts folder, you need to make a small change to the RegisterModuleInit call in the downloaded script, as shown below:

RegisterModuleInit('FolderNavigation.js', folderNavigation);

 

In this case the first parameter of ‘RegisterModuleInit’ will be the path relative to the Layouts folder. If you deploy your file to the path ‘/_Layouts/folder1’, then you need to modify the code as shown below:

RegisterModuleInit('Folder1/FolderNavigation.js', folderNavigation);

 

If you are deploying in other subfolders of the Layouts folder, you need to update the path accordingly. What I’ve found so far is that you can only deploy to the Layouts folder and the Master Page gallery, but if you find that deploying to other folders works, please share. Basically, the first parameter in RegisterModuleInit is the file path, either:

  • Relative to ‘_Layouts’ folder
  • Or Master page gallery in which case the path is started with ‘/_catalogs/masterpage’

 

Use the FolderNavigation.js in List View WebPart

Once you deploy the FolderNavigation.js file to the Master Page gallery or the Layouts folder, you can start using it in a list view web part. Edit the list view web part properties and then, under the ‘Miscellaneous’ section, put the file url for JS Link as shown below:

Image 3: List View WebPart’s JS Link Property

 

Few points to note for this JS Link:

  • If you have deployed the js file in the Master Page gallery, you can use the ~site or ~siteCollection token, which means the current site or current site collection respectively. The URL for JS Link then might be ‘~siteCollection/_catalogs/masterpage/FolderNavigation.js’ or ‘~site/_catalogs/masterpage/FolderNavigation.js’. If you deploy the file to the site collection Master Page gallery only, you need to use the ~siteCollection token in subsites so that they use the JavaScript file from the site collection.
  • If you have deployed in Layouts folder, you need to use corresponding path in the JS Link properties. For example if you are deploying the file in Layouts folder, then use ‘/_layouts/15/FolderNavigation.js’, if you are deploying in ‘Layouts/Folder1’ then, use ‘/_layouts/15/Folder1/FolderNavigation.js’. Just to inform again, if you deploy in Layouts folder, you need to make small changes in the JavaScript file as described under ‘Option 2: Deploy in Layouts Folder’ section.

 

JavaScript file Description

In case you are interested to know how the code works, the code snippet is given below:

JavaScript

function replaceQueryStringAndGet(url, key, value) { 
    var re = new RegExp("([?|&])" + key + "=.*?(&|$)", "i"); 
    var separator = url.indexOf('?') !== -1 ? "&" : "?"; 
    if (url.match(re)) { 
        return url.replace(re, '$1' + key + "=" + value + '$2'); 
    } 
    else { 
        return url + separator + key + "=" + value; 
    } 
} 
 
 
function folderNavigation() { 
    function onPostRender(renderCtx) { 
        if (renderCtx.rootFolder) { 
            var listUrl = decodeURIComponent(renderCtx.listUrlDir); 
            var rootFolder = decodeURIComponent(renderCtx.rootFolder); 
            if (renderCtx.rootFolder == '' || rootFolder.toLowerCase() == listUrl.toLowerCase()) 
                return; 
 
            //get the folder path excluding list url. removing list url will give us path relative to current list url 
            var folderPath = rootFolder.toLowerCase().indexOf(listUrl.toLowerCase()) == 0 ? rootFolder.substr(listUrl.length) : rootFolder; 
            var pathArray = folderPath.split('/'); 
            var navigationItems = new Array(); 
            var currentFolderUrl = listUrl; 
 
            var rootNavItem = 
                { 
                    title: 'Root', 
                    url: replaceQueryStringAndGet(document.location.href, 'RootFolder', listUrl) 
                }; 
            navigationItems.push(rootNavItem); 
 
            for (var index = 0; index < pathArray.length; index++) { 
                if (pathArray[index] == '') 
                    continue; 
                var lastItem = index == pathArray.length - 1; 
                currentFolderUrl += '/' + pathArray[index]; 
                var item = 
                    { 
                        title: pathArray[index], 
                        url: lastItem ? '' : replaceQueryStringAndGet(document.location.href, 'RootFolder', encodeURIComponent(currentFolderUrl)) 
                    }; 
                navigationItems.push(item); 
            } 
            RenderItems(renderCtx, navigationItems); 
        } 
    } 
 
 
    //Add a div and then render navigation items inside span 
    function RenderItems(renderCtx, navigationItems) { 
        if (navigationItems.length == 0) return; 
        var folderNavDivId = 'foldernav_' + renderCtx.wpq; 
        var webpartDivId = 'WebPart' + renderCtx.wpq; 
 
 
        //a div is added beneth the header to show folder navigation 
        var folderNavDiv = document.getElementById(folderNavDivId); 
        var webpartDiv = document.getElementById(webpartDivId); 
        if(folderNavDiv!=null){ 
            folderNavDiv.parentNode.removeChild(folderNavDiv); 
            folderNavDiv =null; 
        } 
        if (folderNavDiv == null) { 
            var folderNavDiv = document.createElement('div'); 
            folderNavDiv.setAttribute('id', folderNavDivId) 
            webpartDiv.parentNode.insertBefore(folderNavDiv, webpartDiv); 
            folderNavDiv = document.getElementById(folderNavDivId); 
        } 
 
 
        for (var index = 0; index < navigationItems.length; index++) { 
            if (navigationItems[index].url == '') { 
                var span = document.createElement('span'); 
                span.innerHTML = navigationItems[index].title; 
                folderNavDiv.appendChild(span); 
            } 
            else { 
                var span = document.createElement('span'); 
                var anchor = document.createElement('a'); 
                anchor.setAttribute('href', navigationItems[index].url); 
                anchor.innerHTML = navigationItems[index].title; 
                span.appendChild(anchor); 
                folderNavDiv.appendChild(span); 
            } 
 
            //add arrow (>) to separate navigation items, except the last one 
            if (index != navigationItems.length - 1) { 
                var span = document.createElement('span'); 
                span.innerHTML = '&nbsp;> '; 
                folderNavDiv.appendChild(span); 
            } 
        } 
    } 
 
 
    function _registerTemplate() { 
        var viewContext = {}; 
 
        viewContext.Templates = {}; 
        viewContext.OnPostRender = onPostRender; 
        SPClientTemplates.TemplateManager.RegisterTemplateOverrides(viewContext); 
    } 
    //delay the execution of the script until clienttempltes.js gets loaded 
    ExecuteOrDelayUntilScriptLoaded(_registerTemplate, 'clienttemplates.js'); 
}; 
 
//RegisterModuleInit ensure folderNavigation() function get executed when Minimum Download Strategy is enabled. 
//if you deploy the FolderNavigation.js file in the '_layouts' folder, use 'FolderNavigation.js' as the first parameter. 
//if you deploy the FolderNavigation.js file in an '_layouts/folder/subfolder' folder, use 'folder/subfolder/FolderNavigation.js' as the first parameter. 
//if you are deploying in master page gallery, use '/_catalogs/masterpage/FolderNavigation.js' as first parameter 
RegisterModuleInit('/_catalogs/masterpage/FolderNavigation.js', folderNavigation); 
 
//this function get executed in case when Minimum Download Strategy not enabled. 
folderNavigation(); 

Let me explain the code briefly:

  • The method ‘replaceQueryStringAndGet’ is used to replace a query string parameter with a new value. For example, if you have the url ‘http://abc.com?key=value&name=sohel’ and you would like to replace the query string ‘key’ with the value ‘New Value’, you can use the method like

    replaceQueryStringAndGet("http://abc.com?key=value&name=sohel", "key", "New Value")

  • The function folderNavigation has three functions. The function ‘onPostRender’ is bound to the rendering context’s OnPostRender event. It first checks that the list view’s root folder is not null and that the root folder url is not the list url (which would mean the user is browsing the list’s/library’s root). Then the method splits the render context’s folder path and creates navigation items as shown below:

    var item = { title: title, url: lastItem ? : replaceQueryStringAndGet(document.location.href, ‘RootFolder’, encodeURIComponent(rootFolderUrl)) };

    As shown above, in case of last item (which means current folder user browsing), the url is empty as we’ll show a text instead of link for current folder.

  • Function ‘RenderItems’ renders the items in the page. This is probably the part you will want to customise: with all navigation items passed to this function, you can render them in your own way. renderContext.wpq is the unique web part ID in the page. As shown below, with a wpq value of ‘WPQ2’ the web part is rendered in a div with the id ‘WebPartWPQ2’.

    Image 4: List View WebPart in Firebug

    In the ‘RenderItems’ function I’ve added a div just before the web part div ‘WebPartWPQ2’ to hold the folder navigation, as shown in image 1.

  • In the method ‘_registerTemplate’, I’ve registered the template and bound the OnPostRender event.
  • The final piece is RegisterModuleInit. In some examples you will find that the folderNavigation function is executed immediately along with its declaration. However, there is a problem with Client Side Rendering and Minimal Download Strategy (MDS) working together.
  • To avoid this problem, we need to register the folderNavigation function with RegisterModuleInit to ensure the script gets executed on an MDS-enabled site. The last line, ‘folderNavigation()’, executes the function normally on an MDS-disabled site.

How To : 8 Steps for Successful SharePoint Change Management

As with virtually any other significant IT implementation project, a SharePoint deployment is as dependent on people as it is on technology for its success. If your end users are not using the system, or are not using it correctly or to its full potential, you will never achieve that all-important return on investment.



One hundred percent adoption by users who are proficient with SharePoint and are committed to gaining the greatest value from the software should be a key objective for all SharePoint project leaders. To bring this about it is crucial to develop and execute an effective change management strategy as a key component of your SharePoint implementation.

At Professional Advantage we have had great success with our SharePoint clients by informally following a change management process developed by Dr John Kotter, an American professor whose 1996 book, Leading Change, is still highly influential in the world of change management theory.

In his book Dr Kotter puts forward an eight-step process that change leaders can follow to avoid failure and adjust successfully to change. These steps, which can be usefully applied to a SharePoint deployment project, are:

 

1. Create a sense of urgency

No SharePoint project will get off the ground, let alone become successful, if there is no buy-in at the executive level. Here it is important to put a strong case forward as to why the move to SharePoint is very much in the organisation’s best interests. Begin by doing lots of research. Examine, for example, the competitive disadvantages that will be suffered if no change is made. Also highlight those business functions and processes within the organisation that could be significantly improved with SharePoint. Tie the benefits of SharePoint to the organisation’s broad business goals and ongoing strategic objectives. Explain as persuasively as possible why the current situation is unsustainable and why, when it comes to moving to SharePoint, it’s a case of ‘the sooner the better.’ The stronger your business case for a SharePoint implementation, the more likely it is that it will get the green light.

 

2. Create a guiding coalition

Once you’ve received the go-ahead for the SharePoint deployment the next step is to put together a coalition of people with the power and commitment to lead the change. This team will ideally be comprised of a wide variety of motivated individuals: department managers, technical experts and those at the coalface who will be using SharePoint on a day-to-day basis should all form part of the coalition. They should also be people who have grasped the urgency of the task ahead, who understand the business goals that will be achieved with a successful implementation and who recognise that 100% user adoption is a central goal of the project.

Crucial to the success of the coalition’s efforts is that its members all work well together. As the project evolves these change-drivers will be sharing ideas, making decisions and identifying and solving problems. Team members must be able to trust each other and collaborate effectively; if this does not occur the project will almost certainly stall.

 

3. Develop a change vision

By developing a clear vision for the project you give those involved a direction to follow and a goal to achieve. Ideally the vision will be easy to comprehend, achievable, flexible and something that all stakeholders can get enthusiastic about.

While the vision will by definition be broad, the strategies that underpin it will be specific. Priorities for the project should be defined and acted upon, with priority given to ‘low hanging fruit’, ie tasks that can be easily achieved and which will deliver visible, measurable and meaningful change within the organisation. This approach will add momentum to the project by enabling stakeholders to gain a real-world perspective on the changes that are in progress and why they’re good both for the organisation and for individual SharePoint users.

 

4. Communicate the vision for buy-in

Communicating your vision and promoting the behavioural changes that will drive it are critical for a successful SharePoint deployment. This step requires a top-down communications strategy that is consistent, creative, inspiring and ongoing.

At Professional Advantage our communication strategy forms part of our SharePoint adoption plan and includes a variety of tactics designed to get staff using SharePoint, and using it properly. In the past such tactics have included SharePoint launch parties, lunch sessions, system design competitions amongst staff, social media, blogs and the putting up of posters around the office promoting the use of SharePoint. The objective here is, of course, to get users educated and engaged. The more creative you are, the better. And always keep in mind that user adoption will likely be low unless you can answer the ‘What’s in it for me’ question.

 

5. Empower broad-based action

To achieve the highest possible level of SharePoint user adoption it’s best to remove any barriers that might impede that objective. This particularly applies to the laggards, ie those who are most resistant to change and least likely to make full use of the system.

Typically this will involve removing software and other technologies that make it easy for workers to continue doing things the old way. Too often organisations treat this as an afterthought, resulting in lower and slower user adoption. Here it is important to plan from the beginning, anticipate which systems will be made redundant (or scaled down) and schedule that into the SharePoint implementation plan.

Also important here is encouragement from above. Supported by proper ongoing training, those who will be using SharePoint need to be encouraged to step out of their comfort zone and embrace the new system.

 

6. Celebrate short-term wins

Short-term wins are essential to the success of your SharePoint deployment, as is the active celebration of these wins when they occur. The transition to a SharePoint environment is a long-term process and momentum must be maintained every step of the way. Perhaps, as a result of SharePoint, a new level of intra-office collaboration has been achieved, or the organisation has experienced dramatic time savings with particular processes, or has achieved new standards of compliance. Whatever the win, broadcasting it should form part of the SharePoint communications plan. If people can see how and why SharePoint is working, they will be more likely to embrace the system and, in so doing, contribute to the achievement of the organisation’s business goals.

 

7. Consolidate gains and generate more change

You’ve scored some wins and people are now comfortable using SharePoint. While that is a wonderful thing, the danger at this stage is complacency. Rather than take your foot off the accelerator it’s important to build on what’s been achieved and pursue larger, more ambitious objectives. To fully ingrain SharePoint into your organisation’s culture (and to avoid regression) ramp things up with new projects and initiatives.

 

8. Make it stick

To fully embed SharePoint into your organisation’s culture and business practices everyone needs to be on board. Just as during a life-threatening cyclone there are always some residents who refuse to heed advice to leave town, with a SharePoint deployment there will always be some who are unwilling to move. Here it is important to reinforce, and continue to celebrate, the victories that have been achieved and communicate how important it is that everyone adopt the system.

As the SharePoint project continues to evolve so too will its vision and purpose. With the right planning and execution, and with the right leadership, people will, over time, forget the old ways of doing things and fully embrace the new.

How To : Use JSON and SAP NetWeaver together

Background

Imagesap2[1]
In this example, SAP is used as the back-end data source, exposed through the NWGW (NetWeaver Gateway) adapter so that it can be consumed from a .NET client in OData format.

Since the NWGW component is hosted on premises and our .NET client is hosted in Azure, we consume this data from Azure through the Service Bus relay. While transferring data from on premises to Azure over the SB relay, we were facing performance issues for a single user with large volumes of data, as well as with relatively small data sets for concurrent users. So I did a POC to improve performance by consuming the OData service in JSON format.

What I Did

I’ve created a simple WCF Data Service which has no underlying data source connectivity. In this service, when the context is initialized, a list of text messages is generated and exposed as OData.

Here is that simple service code:

[Serializable]
public class Message
{
    public int ID { get; set; }
    public string MessageText { get; set; }
}

public class MessageService
{
    List<Message> _messages = new List<Message>();

    public MessageService()
    {
        for (int i = 0; i < 100; i++)
        {
            Message msg = new Message
            {
                ID = i,
                MessageText = string.Format("My Message No. {0}", i)
            };
            _messages.Add(msg);
        }
    }

    public IQueryable<Message> Messages
    {
        get
        {
            return _messages.AsQueryable<Message>();
        }
    }
}

[ServiceBehavior(IncludeExceptionDetailInFaults = true)]
public class WcfDataService1 : DataService<MessageService>
{
    // This method is called only once to initialize service-wide policies.
    public static void InitializeService(DataServiceConfiguration config)
    {
        // TODO: set rules to indicate which entity sets
        // and service operations are visible, updatable, etc.
        // Examples:
        config.SetEntitySetAccessRule("Messages", EntitySetRights.AllRead);
        config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V3;
    }
}
I then exposed one endpoint to the Azure Service Bus so that the client can consume this service through the SB endpoint. After hosting the service, I’m able to fetch data with a simple OData query from the browser.

I’m also able to fetch the data in JSON format.

After that, I create a console client application and consume the service from there.

Sample Client Code

class Program
{
    static void Main(string[] args)
    {
        List<Thread> lst = new List<Thread>();

        for (int i = 0; i < 100; i++)
        {
            Thread person = new Thread(new ThreadStart(MyClass.JsonInvokation));
            person.Name = string.Format("person{0}", i);
            lst.Add(person);
            Console.WriteLine("before start of {0}", person.Name);
            person.Start();
            //Console.WriteLine("{0} started", person.Name);
        }
        Console.ReadKey();
        foreach (var item in lst)
        {
            item.Abort();
        }
    }
}

public class MyClass
{
    public static void JsonInvokation()
    {
        string personName = Thread.CurrentThread.Name;
        Stopwatch watch = new Stopwatch();
        watch.Start();
        try
        {
            SimpleService.MessageService svcJson =
                new SimpleService.MessageService(new Uri
                ("https://abc.servicebus.windows.net/SimpleService/WcfDataService1"));
            svcJson.SendingRequest += svc_SendingRequest;
            svcJson.Format.UseJson();
            var jdata = svcJson.Messages.ToList();

            watch.Stop();
            Console.WriteLine("Person: {0} – JsonTime First Call time: {1}",
                personName, watch.ElapsedMilliseconds);

            for (int i = 1; i <= 10; i++)
            {
                watch.Reset(); watch.Start();
                jdata = svcJson.Messages.ToList();
                watch.Stop();
                Console.WriteLine("Person: {0} – Json Call {1} time: {2}",
                    personName, 1 + i, watch.ElapsedMilliseconds);
            }

            Console.WriteLine(jdata.Count);
        }
        catch (Exception ex)
        {
            Console.WriteLine(personName + ": " + ex.Message);
        }
        Thread.Sleep(100);
    }

    public static void AtomInvokation()
    {
        string personName = Thread.CurrentThread.Name;

        try
        {
            Stopwatch watch = new Stopwatch();
            watch.Start();
            SimpleService.MessageService svc =
                new SimpleService.MessageService(new Uri
                ("https://abc.servicebus.windows.net/SimpleService/WcfDataService1"));
            svc.SendingRequest += svc_SendingRequest;
            var data = svc.Messages.ToList();

            watch.Stop();
            Console.WriteLine("Person: {0} – XmlTime First Call time: {1}",
                personName, watch.ElapsedMilliseconds);

            for (int i = 1; i <= 10; i++)
            {
                watch.Reset(); watch.Start();
                data = svc.Messages.ToList();
                watch.Stop();
                Console.WriteLine("Person: {0} – Xml Call {1} time: {2}",
                    personName, 1 + i, watch.ElapsedMilliseconds);
            }

            Console.WriteLine(data.Count);
        }
        catch (Exception ex)
        {
            Console.WriteLine(personName + ": " + ex.Message);
        }
        Thread.Sleep(100);
    }

    // The original post wires up a SendingRequest handler but does not show it;
    // this minimal empty stub is assumed here so the sample compiles.
    static void svc_SendingRequest(object sender, SendingRequestEventArgs e)
    {
    }
}

 

What I Tested After That
I tested two separate scenarios:

Scenario I: Single user with small and large volume of data
I measured the data transfer time repeatedly, first in XML format and then in JSON format. You might notice that the first call is printed separately in each screen shot, as it takes additional time to connect to the SB endpoint; the secret key authentication happens on that first call.

Small data set (array size 10): consume in XML format.

 

Consume in JSON format:

 

For a small set of data, the JSON and XML response times over the Service Bus relay are almost the same.

Consuming Large volume of data (Array Size 100)

 

Here the XML message size is around 51 KB. Now I’m going to consume the same list of data (Array size 100) in JSON format.

 

So from the above test scenario it is clear that the JSON response time is much faster than the XML response time, and the reason is message size: in this test, the list of 100 records is 51.2 KB in XML format but only 4.4 KB in JSON.

Scenario II: 100 Concurrent user with large volume of data (array size 100)
In this concurrent user load test, I have not done any service throttling or max concurrent connection configuration.

 

In the above screen shot you will find some time-out errors in the XML response, which happen due to the high response time over the relay. But when I execute the same test with the JSON response, the response time is quite stable and faster than the XML response, and I’m not getting any time-outs.

 

How Easy Is It to Use UseJson()?
If you are using WCF Data Services 5.3 or above with VS2012 Update 3, then to consume the JSON structure from the client you just have to instantiate the proxy/context and call .Format.UseJson().

Here you don’t need to load the Edmx structure separately by writing any custom code. .NET CodeGen will generate that code when you add the service reference.

 

But if that code is not generated in your environment, then you have to write a few lines of code to load the EDMX and use it as .Format.UseJson(LoadEdmx(...)); a sample helper and its usage are shown below.

Sample Code for Loading Edmx

public static IEdmModel LoadEdmx(string srvName)
{
    string executionPath = Directory.GetCurrentDirectory();
    DirectoryInfo di = new DirectoryInfo(executionPath).Parent;
    var parent1 = di.Parent;
    var srv = parent1.GetDirectories("Service References\\" +
        srvName)[0].GetFiles("service.edmx")[0].FullName;

    XmlDocument doc = new XmlDocument();
    doc.Load(srv);
    var xmlreader = XmlReader.Create(new StringReader(doc.DocumentElement.OuterXml));

    IEdmModel edmModel = EdmxReader.Parse(xmlreader);
    return edmModel;
}
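
Putting the two pieces together, here is a minimal, hedged sketch of the client-side wiring (the SimpleService namespace and the Service Bus URL are the same placeholders used in the client sample earlier; substitute your own service reference name and address):

// Switch the generated context to JSON, supplying the model loaded by the LoadEdmx helper above.
// "SimpleService" is assumed to be the name of the service reference folder that contains service.edmx.
var context = new SimpleService.MessageService(
    new Uri("https://abc.servicebus.windows.net/SimpleService/WcfDataService1"));

context.Format.UseJson(LoadEdmx("SimpleService"));

var messages = context.Messages.ToList();
Console.WriteLine("Fetched {0} messages as JSON", messages.Count);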

How To : Make use of Application Insights (with a great VS extension available freely)

You can now add Application Insights telemetry right from Visual Studio to new or existing projects in 2 clicks or less.

This release of Application Insights Tools for Visual Studio is in PREVIEW; see Known Issues below.

Get Started

To get started with a new project, simply create a Web, Windows Store 8.1, or Windows Phone 8.0 project. In the New Project dialog, make sure that Add Application Insights to Project is checked.

To get started with an existing project, right-click on a Web, Windows Store 8.1, or Windows Phone 8.0 project in Solution Explorer and choose Add Application Insights Telemetry to Project.

That’s it! Then run your Web application locally (or deploy your application), and after 10-15 minutes, telemetry data will automatically start appearing in the Application Insights Portal in the Usage tab.

Additional project types are supported with partial automation (see Known Issues below)

Q & A

Q: I don’t see the Add Application Insights Telemetry to Project command.

A: The type of app you’re creating is not supported yet. Instead, add your application at the Application Insights portal.

Q: Oops. I closed the Application Insights browser. How do I get it back?

A: In Solution Explorer, in the context menu of the project, choose Open Application Insights Portal…

Q: What other telemetry can I log from my app?

A: You can log events and metrics. Take a look at Improve your application from live usage data

Known Issues

This release of Application Insights Tools for Visual Studio is in PREVIEW. There are a number of known issues, including:

  • If you are upgrading from version 0.6.56.3 of the Application Insights Telemetry SDK for Services using the Application Insights Tools for Visual Studio, you will need to manually copy any custom settings from Monitoring.CollectionPlan.config to ApplicationInsights.config file.
  • For Windows Store 8.1 C++ projects instrumented with Application Insights, updates for the “Application Insights Telemetry SDK for Windows Store Apps” from nuget.org do not show up in the “Manage NuGet Packages” dialog. You can install an updated nuget package from the “Online” section of the “Manage NuGet Packages” dialog. Or, you can execute this command in the Package Manager Console: update-package Microsoft.ApplicationInsights.Telemetry.WindowsStore

Microsoft releases .Net 4.5.2 Framework and Developer Pack

You can download the releases now,

Image

We incorporated the feedback we received for the .NET Framework 4.5.1 from different sources and are providing a faster release cadence. In this blog post we will talk about some of the new features we are delivering in the .NET Framework 4.5.2.

ASP.NET improvements

  • New HostingEnvironment.QueueBackgroundWorkItem method that lets you schedule small background work items. ASP.NET tracks these items and prevents IIS from abruptly terminating the worker process until all background work items have completed. This enables ASP.NET applications to reliably schedule async work items.
  • New HttpResponse.AddOnSendingHeaders and HttpResponseBase.AddOnSendingHeaders methods are more reliable and efficient than HttpApplication.PreSendRequestContent and HttpApplication.PreSendRequestHeaders. These APIs let you inspect and modify response headers and status codes as the HTTP response is being flushed to the client application. These reliability improvements minimize deadlocks and crashes between IIS and ASP.NET.
  • New HttpResponse.HeadersWritten and HttpResponseBase.HeadersWritten properties that return Boolean values to indicate whether the response headers have been written. You can use these properties to make sure that calls to APIs such as HttpResponse.StatusCode succeed. This enables shared hosting scenarios for ASP.NET applications. A short sketch of these three APIs follows this list.
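
As a rough illustration of how these three APIs fit together (a hedged sketch only: the ReportController class, its action and the X-Processed-By header are made up for the example and are not part of the 4.5.2 release notes), an ASP.NET MVC action could use them like this:

using System;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Hosting;
using System.Web.Mvc;

public class ReportController : Controller
{
    public ActionResult Generate()
    {
        // Schedule tracked background work; ASP.NET prevents IIS from abruptly
        // recycling the worker process while this item is still running.
        HostingEnvironment.QueueBackgroundWorkItem(async (CancellationToken ct) =>
        {
            await Task.Delay(TimeSpan.FromSeconds(30), ct); // e.g. send the report by e-mail
        });

        // Inspect or modify response headers just before they are flushed to the client.
        Response.AddOnSendingHeaders(context =>
        {
            context.Response.Headers["X-Processed-By"] = "ReportController"; // illustrative header
        });

        // HeadersWritten guards against "headers already sent" failures.
        if (!Response.HeadersWritten)
        {
            Response.StatusCode = 202;
        }

        return new EmptyResult();
    }
}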

High DPI Improvements is an opt-in feature to enable resizing according to the system DPI settings for several glyphs or icons for the following Windows Forms controls: DataGridView, ComboBox, ToolStripComboBox, ToolStripMenuItem and Cursor. Here are examples of the before and after views once this change is opted into.

Comparing .NET 4.5.1 controls with the high DPI setting against .NET 4.5.2 controls with the high DPI setting:
  • In 4.5.1 the red error glyph barely shows up and will eventually disappear with high scaling; in 4.5.2 the red error glyph scales correctly.
  • In 4.5.1 the ToolStripMenu drop-down arrow is barely visible and eventually won’t be usable with high scaling; in 4.5.2 the drop-down arrow in the ToolStripMenu scales correctly.

Distributed transactions enhancement enables promotion of local transactions to Microsoft Distributed Transaction Coordinator (MSDTC) transactions without the use of another application domain or unmanaged code. This has a significant positive impact on the performance of distributed transactions.

More robust profiling with new profiling APIs that require dependent assemblies that are injected by the profiler to be loadable immediately, instead of being loaded after the app is fully initialized. This change does not affect users of the existing ICorProfiler APIs. Before this feature, diagnostics tools that do IL instrumentation via profiling API could cause unhandled exceptions to be thrown, unexpectedly terminating the process.

Improved activity tracing support in runtime and framework – The .NET Framework 4.5.2 enables out-of-process, Event Tracing for Windows (ETW)-based activity tracing for a larger surface area. This enables Application Performance Management vendors to provide lightweight tools that accurately track the costs of individual requests and activities that cross threads. These events are raised only when ETW controllers enable them.

For more information on usage of these features please refer to “What’s New in the .NET Framework 4.5.2”. Besides these features, there are many reliability and performance improvements across different areas of the .NET Framework.

Here are additional installers – pick package(s) most suitable for your needs based on your deployment scenario:

  • .NET Framework 4.5.2 Web Installer – A Bootstrapper that pulls in components based on the target OS/platform specs on which the .NET Framework is being deployed. Internet access is required.
  • .NET Framework 4.5.2 Offline Installer – The Full Package for offline deployments. Internet access is not required.
  • .NET Framework 4.5.2 Language Packs – Language specific support. You need to install the .NET Framework (language neutral) package before installing one or more language packs.
  • .NET Framework 4.5.2 Developer Pack – This will install .NET Framework Multi-targeting pack for building apps targeting .NET Framework 4.5.2 and also .NET Framework 4.5.2 runtime. Useful for build machines that need both the runtime and the multi-targeting pack

 

DRY Architecture, Layered Architecture, Domain Driven Design and a Framework to build great Single-Page Web Applications – Boilerplate Part 1

DRY – Don’t Repeat Yourself! – is one of the main principles of a good developer when developing software. We try to apply it everywhere, from simple methods to classes and modules. What about developing a new web-based application? We, software developers, have similar needs when developing enterprise web applications.

Enterprise web applications need login pages, user/role management infrastructure, user/application setting management, localization and so on. Also, high-quality and large-scale software implements best practices such as Layered Architecture, Domain Driven Design (DDD) and Dependency Injection (DI). We also use tools for Object-Relational Mapping (ORM), database migrations, logging and so on. When it comes to the User Interface (UI), it’s not much different.

Starting a new enterprise web application is hard work. Since all applications need some common tasks, we end up repeating ourselves. Many companies develop their own application frameworks or libraries for such common tasks so that they do not re-develop the same things. Others copy parts of existing applications to prepare a starting point for their new application. The first approach is pretty good if your company is big enough and has time to develop such a framework.

As a software architect, I also developed such a framework in my company. But there is a point that bothers me: many companies repeat the same tasks. What if we could share more and repeat less? What if the DRY principle were applied universally instead of per project or per company? It sounds utopian, but I think there may be a starting point for that!

What is ASP.NET Boilerplate?

http://www.aspnetboilerplate.com/

ASP.NET Boilerplate [1] is a starting point for new modern web applications, using best practices and the most popular tools. It’s aimed to be a solid model, a general-purpose application framework and a project template. What does it do?

  • Server side
    • Based on latest ASP.NET MVC and Web API.
    • Implements Domain Driven Design (Entities, Repositories, Domain Services, Application Services, DTOs, Unit of Work… and so on)
    • Implements Layered Architecture (Domain, Application, Presentation and Infrastructure Layers).
    • Provides an infrastructure to develop reusable and composable modules for large projects.
    • Uses most popular frameworks/libraries as (probably) you’re already using.
    • Provides an infrastructure that makes it easy to use Dependency Injection (uses Castle Windsor as the DI container).
    • Provides a strict model and base classes to use Object-Relational Mapping easily (uses NHibernate, can work with many DBMSs).
    • Implements database migrations (uses FluentMigrator).
    • Includes a simple and flexible localization system.
    • Includes an EventBus for server-side global domain events.
    • Manages exception handling and validation.
    • Creates dynamic Web API layer for application services.
    • Provides base and helper classes to implement some common tasks.
    • Uses convention over configuration principle.
  • Client side
    • Provides two project templates: one for Single-Page Applications using Durandal.js, the other for Multi-Page Applications. Both templates use Twitter Bootstrap.
    • Most used libraries are included by default: Knockout.js, Require.js, jQuery and some useful plug-ins.
    • Creates dynamic JavaScript proxies to call application services easily (using the dynamic Web API layer).
    • Includes unique APIs for some common tasks: showing alerts & notifications, blocking the UI, making AJAX requests.

Besides this common infrastructure, the “Core Module” is being developed. It will provide a role- and permission-based authorization system (implementing the ASP.NET Identity Framework), a settings system and so on.

What ASP.NET Boilerplate is not?

ASP.NET Boilerplate provides an application development model with best practices. It has base classes, interfaces and tools that make it easy to build maintainable large-scale applications. But…

  • It’s not one of those RAD (Rapid Application Development) tools that provide an infrastructure for building applications without coding. Instead, it provides an infrastructure to code using best practices.
  • It’s not a code generation tool. While it has several features that build dynamic code at run time, it does not generate code.
  • It’s not an all-in-one framework. Instead, it uses well-known tools/libraries for specific tasks (such as NHibernate for O/RM, Log4Net for logging, Castle Windsor as the DI container).

Getting started

In this article, I’ll show how to develop a Single-Page and Responsive Web Application using ASP.NET Boilerplate (I’ll call it ABP from now on). This sample application is named “Simple Task System” and consists of two pages: one for the list of tasks, the other to add new tasks. A task can be assigned to a person and can be marked as completed. The application is localized in two languages. A screenshot of the Task List in the application is shown below:

A screenshot of 'Simple Task System'

Creating empty web application from template

ABP provides two templates to start a new project (even though you can manually create your project and get the ABP packages from NuGet, the template way is much easier). Go to www.aspnetboilerplate.com/Templates to create your application from one of the two templates (one for SPA (Single-Page Application) projects, one for MPA (classic, Multi-Page Application) projects):

Creating template from ABP web site

I named my project SimpleTaskSystem and created a SPA project. The site downloaded the project as a zip file. When I open the zip file, I see a ready-made solution that contains assemblies (projects) for each layer of Domain Driven Design:

Project files

The created project targets .NET Framework 4.5.1, and I advise opening it with Visual Studio 2013. The only prerequisite to be able to run the project is to create a database. The SPA template assumes that you’re using SQL Server 2008 or later, but you can easily change it to another DBMS.

See the connection string in web.config file of the web project:

<add name="MainDb" connectionString="Server=localhost; Database=SimpleTaskSystemDb; Trusted_Connection=True;" />

You can change connection string here. I don’t change the database name, so I’m creating an empty database, named SimpleTaskSystemDb, in SQL Server:

Empty database

That’s it, your project is ready to run! Open it in VS2013 and press F5:

First run

The template consists of two pages: one is the Home page, the other is the About page. It’s localized in English and Turkish. And it’s a Single-Page Application! Try navigating between pages: you’ll see that only the content changes, the navigation menu is fixed, and all scripts and styles are loaded only once. It’s also responsive; try changing the size of the browser.

Now, in the coming Part 2, I’ll show how to change this template into the Simple Task System application, layer by layer.

FREE Microsoft Dynamics CRM 2011 List Component for Microsoft SharePoint Server 2010

 

 

CRM2011 – SharePoint 2010 Integration? Glue CRM 2011 & Share Point 2010 together? Make CRM 2011 and Share Point 2010 converse? I wasn’t sure what to call this exactly. “Hooking together” works for me!

Now that we have a CRM 2011 instance and a Share Point site working, let’s get them connected up! Go to this website and download Microsoft Dynamics CRM 2011 List Component for Microsoft SharePoint Server 2010:

Accept the License Terms.

Extract the files to a folder (I chose C:\CRM List).

You will get a prompt “The Installation is complete.” Click OK.

Let’s go over to the Share Point Central Administration Server to install the list component we just extracted. Connect to http://localhost:48835/ (your port might be different, be aware of this). Click Manage web applications.

Click the new Share Point site, and then “General Settings” (the blue cogs).

Scroll down to Browser File Handling and choose Permissive, Click OK.

Let’s head back over to our new Share Point Site. Click Site Actions up top left, and then “Site Settings”.

Under Galleries click “Solutions”.


Click the Word “Solutions” up top (you have to click the word “Solutions”, even though it looks selected), and then click “Upload Solution”.

Select the .wsp component that we extracted wayyy back at the top of this. I used C:\CRM List as my extract folder. Click OK.

You’ll get prompted at this point; I couldn’t activate the control on this screen (but it still needs to be done). We need to make sure some services are running before we can activate the solution. Click Close.

Head back to the Share Point Central Administration, found at http://localhost:48835.

Click System Settings –> Manage Services on this server

Click Start beside “Share Point Foundation Sandboxed Code Service”. I also started “Microsoft SharePoint Foundation Subscription Settings Service” (by accident), so that’s why that one is started.

Now to head back to our Share Point site http://localhost:39083/

Under Galleries click “Solutions”.

Click Solutions again, select crmlistcomponent, and then click “Activate” up top. Activate is now un-greyed out! Click Activate!

The solution has now been activated! Hurray!

There seems to be some confusion about whether or not you need to run a PowerShell script to enable activation of Share Point 2010 solutions (AllowHtcExtn). According to what I’ve read, you would need to run this if Share Point 2010 is running on a domain controller. I didn’t have to do this (and we’re on a domain controller), and I’ve yet to run into a problem with .htc stuff. Even in the Microsoft Dynamics CRM 2011 Readme it says:
“If you are using Microsoft SharePoint Server 2010 (On-Premises), you must add .htc extensions to the list of allowed file types:
a. Copy the AllowHtcExtn.ps1 script file to the server that is running Microsoft SharePoint Server 2010.
b. In the Windows PowerShell window or in the SharePoint Management Console, run the command: AllowHtcExtn.ps1 .
Example: AllowHtcExtn.ps1 http://servername”

Some people say the script works for them, and some say that using just the blog method (what we did) works.
The SharePoint configuration is complete at this point. You probably want to take a snapshot and name it “After SharePoint Configuration”. Let’s head over to our CRM server (localhost:85).

In CRM Click Settings –> Document Management –> Document Management Settings

Select the entities that you want to have documents enabled on. This will create a “Documents” area when you open an instance of the entity. I’ll just leave the defaults for now. At the bottom punch in your Share Point site that you’ve created and click Next. This is the Share Point server we installed the list component on. You’re not allowed to use localhost:port, just use the computer name:port like below.

Don’t select the box, otherwise it will relate the files to those entities. Without checking the box you will end up with something like Site/EntityName/Record Name (which is what I want, especially if you’re using custom entities). Click Next.

If “Libraries are being created in the path”, click Next.

Everything should “Succeed”, Click Finish.

Let’s test this bad boy out now.

Create a new account called “Test”.

Click Save! Click “Documents” on the left side. You’ll get a prompt saying that the folder (Test) is being created under “Account”. Click OK.

Click Add.

Now you’ll probably get these errors! /crmgrid/scripts/DialogContainer.js and 403 FORBIDDEN! Depressing. The only real information on this error was here: . It wasn’t very clear, but I stumbled through it. It seems that CRM 2011 doesn’t enjoy being called localhost. Let’s fix these up.

The fix for this was to run inetmgr –> Click Microsoft Dynamics CRM –> click Stop

Click “Bindings…” on the right side. Click “Edit” on the items that show “localhost” and change it to my machine name: “win-b80icqrvluf”. This is so it has a “real” name to connect to.

Before:

After:

Now click “Start” on the right side.

Head back over to the CRM (http://win-b80icqrvluf:85/CRMTest/main.aspx) make sure to use the host name, as it might give you the error if you use localhost. Open your Test Account again.

Click Documents –> Add, you should now see this popup (it can take a while to load for the first time on the VM). If you continue to get the error, stop both CRM 2011 and Share Point 2010 servers and restart them. If that doesn’t work, try restarting the whole server.

Pick a file, and click OK.

The file should be uploaded to Share Point now.

Head over to Share Point at http://win-b80icqrvluf:39083 and click “All Site Content” or “Libraries”.

Click Account.

You can see that CRM has created a folder “Test” (for our record). It creates 1 folder per record. Click it to see the files associated to that record!!

The files associated to the record “Test” in Accounts.

Share Point and CRM have combined into a super awesome force of doom. But we’re still missing 1 core piece of functionality (due to not picking a port when we installed CRM).

 

 

Resource – Office 365 PowerShell Cmdlets

Before you can start working with the SharePoint Online cmdlets you must first download those cmdlets. Having the cmdlets as a separate download (separate from SharePoint on-premises that is) allows you to use any machine to run the cmdlets.

blog-office365

 

All we have to do is make sure we have PowerShell V3 installed along with the .NET Framework v4 or better (required by PowerShell V3). With these prerequisites in place simply download and install the cmdlets from Microsoft: http://www.microsoft.com/en-us/download/details.aspx?id=35588.

Once installed open the SharePoint Online Management Shell by clicking Start > All Programs > SharePoint Online Management Shell > SharePoint Online Management Shell.

Just like with the SharePoint Management Shell for on-premises deployments the SharePoint Online Management Shell is just a standard PowerShell window. You can see this by looking at the target attribute of the shortcut properties:

C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoExit -Command "Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking;"

As you can see from the shortcut, a PowerShell module is loaded: Microsoft.Online.SharePoint.PowerShell. Unlike with SharePoint on-premises, this is not a snap-in but a module, which is basically the new, better way of loading cmdlets. The nice thing about this is that, like with the snap-in, you can load the module in any PowerShell window and are not limited to using the SharePoint Online Management Shell.

(The -DisableNameChecking parameter of the Import-Module cmdlet simply tells PowerShell to not bother checking for valid verbs used by the loaded cmdlets and avoids warnings that are generated by the fact that the module does use an invalid verb – specifically, Upgrade). Note that unlike with the snap-in, there’s no need to specify the threading options because the cmdlets don’t use any unmanaged resources which need disposal.

Getting Connected

Now that you’ve got the SharePoint Online Management Shell installed you are now ready to connect to your tenant administration site. This initial connection is necessary to establish a connection context which stores the URL of the tenant administration site and the credentials used to connect to the site. To establish the connection use the Connect-SPOService cmdlet:

Connect-SPOService -Url https://contoso-admin.sharepoint.com -Credential gary@contoso.com

 

Running this cmdlet basically just stores a Microsoft.SharePoint.Client.ClientContext object in an internal static variable (or a sub-classed version of it at least). Future cmdlet calls then use this object to connect to the site, thereby negating the need to constantly provide the URL and credentials. (The downside of this object being internal is that we can’t extend the cmdlets to add our own, unless we want to use reflection which would be unsupported). To clear this internal variable (and make the session secure against other code that may attempt to use it) you can run the Disconnect-SPOService cmdlet. This cmdlet takes no parameters.

One tip to help make loading the module and then connecting to the site a tad bit easier would be to encapsulate the commands into a single helper method. In the following example I created a simple helper method named Connect-SPOSite which takes in the user and the tenant administration site to connect to, however, I default those values so that I only have to provide the password when I wish to get connected. I then put this method in my profile file (which you can edit by typing “ise $profile.CurrentUsersAllHosts”):

function Connect-SPOSite() {
    param (
        $user = "gary@contoso.com",
        $site = "https://contoso-admin.sharepoint.com"
    )
    if ((Get-Module Microsoft.Online.SharePoint.PowerShell).Count -eq 0) {
        Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking
    }
    $cred = Get-Credential $user
    Connect-SPOService -Url $site -Credential $cred
}

 

SPO Cmdlets

Now that you’re connected you can finally do something interesting. First let’s look at the cmdlets that are available. There are currently only 30 cmdlets available to us and you can see the list of those cmdlets by typing “Get-Command -Module Microsoft.Online.SharePoint.PowerShell”. Note that all of the cmdlets will have a noun which starts with “SPO”. The following is a list of all the available cmdlets:

  • Site Groups
  • Users
    • Add-SPOUser – Add a user to an existing Site Collection Site Group.
    • Get-SPOUser – Get an existing user.
    • Remove-SPOUser – Remove an existing user from the Site Collection or from an existing Site Collection Group.
    • Set-SPOUser – Set whether an existing Site Collection user is a Site Collection administrator or not.
    • Get-SPOExternalUser – Returns external users from the tenant’s folder.
    • Remove-SPOExternalUser – Removes a collection of external users from the tenancy’s folder.
  • Site Collections
    • Get-SPOSite – Retrieve an existing Site Collection.
    • New-SPOSite – Create a new Site Collection.
    • Remove-SPOSite – Move an existing Site Collection to the recycle bin.
    • Repair-SPOSite – If any failed Site Collection scoped health check rules can perform an automatic repair then initiate the repair.
    • Set-SPOSite – Set the Owner, Title, Storage Quota, Storage Quota Warning Level, Resource Quota, Resource Quota Warning Level, Locale ID, and/or whether the Site Collection allows Self Service Upgrade.
    • Test-SPOSite – Run all Site Collection health check rules against the specified Site Collection.
    • Upgrade-SPOSite – Upgrade the Site Collection. This can do a build-to-build (e.g., RTM to SP1) upgrade or a version-to-version (e.g., 2010 to 2013) upgrade. Use the -VersionUpgrade parameter for a version-to-version upgrade.
    • Get-SPODeletedSite – Get a Site Collection from the recycle bin.
    • Remove-SPODeletedSite – Remove a Site Collection from the recycle bin (permanently deletes it).
    • Restore-SPODeletedSite – Restores an item from the recycle bin.
    • Request-SPOUpgradeEvaluationSite  – Creates a copy of the specified Site Collection and performs an upgrade on that copy.
    • Get-SPOWebTemplate – Get a list of all available web templates.
  • Tenants
    • Get-SPOTenant – Retrieves information about the subscription tenant. This includes the Storage Quota size, Storage Quota Allocated (used), Resource Quota size, Resource Quota Allocated (used), Compatibility Range (14-14, 14-15, or 15-15), whether External Services are enabled, and the No Access Redirect URL.
    • Get-SPOTenantLogEntry – Retrieves company logs (as of B2 only BCS logs are available).
    • Get-SPOTenantLogLastAvailableTimeInUtc – Returns the time when the logs are collected.
    • Set-SPOTenant – Sets the Minimum and Maximum Compatibility Level, whether External Services are enabled, and the No Access Redirect URL.
  • Apps
  • Connections

It’s important to understand that when working with all of the cmdlets which retrieve an object you will only ever be getting a simple data object which has no ability to act upon the source object. For example, the Get-SPOSite cmdlet returns an SPOSite object which has no methods and, though some properties do have a setter, they are completely useless and the object and its properties are not used by any other cmdlet (such as Set-SPOSite). This also means that there is no ability to access child objects (such as SPWeb or SPList items, to name just a couple).

The other thing to note is the lack of cmdlets for items at a lower scope than the Site Collection. Specifically there is no Get-SPOWeb or Get-SPOList cmdlet or anything of the sort. This can potentially be quite limiting for most real-world uses of PowerShell and, in my opinion, limits the usefulness of these new cmdlets to just the initial setup of a subscription rather than the long-term maintenance of the subscription.

In the following examples I’ll walk through some examples of just a few of the more common cmdlets so that you can get an idea of the general usage of them.

Get a Site Collection

To see the list of Site Collections associated with a subscription or to see the details for a specific Site Collection use the Get-SPOSite cmdlet. This cmdlet has two parameter sets:

Get-SPOSite [[-Identity] <SpoSitePipeBind>] [-Limit <string>] [-Detailed] [<CommonParameters>]

Get-SPOSite [-Filter <string>] [-Limit <string>] [-Detailed] [<CommonParameters>]

The parameter that you’ll want to pay the most attention to is the -Detailed parameter. If this optional switch parameter is omitted then the SPOSite objects that will be returned will only have their properties partially set. Now you might think that this is in order to reduce the traffic between the server and the client, however, all the properties are still sent over the wire, they simply have default values for everything other than a couple core properties (so I would assume the only performance improvement would be in the query on the server). You can see the difference in the values that are returned by looking at a Site Collection with and without the details:

PS C:\> Get-SPOSite https://contoso.sharepoint.com/ | select *

LastContentModifiedDate   : 1/1/0001 12:00:00 AM
Status                    : Active
ResourceUsageCurrent      : 0
ResourceUsageAverage      : 0
StorageUsageCurrent       : 0
LockIssue                 :
WebsCount                 : 0
CompatibilityLevel        : 0
Url                       :
https://contoso.sharepoint.com/
LocaleId                  : 1033
LockState                 : Unlock
Owner                     :
StorageQuota              : 1000
StorageQuotaWarningLevel  : 0
ResourceQuota             : 300
ResourceQuotaWarningLevel : 255
Template                  : EHS#1
Title                     :
AllowSelfServiceUpgrade   : False

PS C:\> Get-SPOSite https://contoso.sharepoint.com/ -Detailed | select *

LastContentModifiedDate   : 11/2/2012 4:58:50 AM
Status                    : Active
ResourceUsageCurrent      : 0
ResourceUsageAverage      : 0
StorageUsageCurrent       : 1
LockIssue                 :
WebsCount                 : 1
CompatibilityLevel        : 15
Url                       :
https://contoso.sharepoint.com/
LocaleId                  : 1033
LockState                 : Unlock
Owner                     : s-1-5-21-3176901541-3072848581-1985638908-189897
StorageQuota              : 1000
StorageQuotaWarningLevel  : 0
ResourceQuota             : 300
ResourceQuotaWarningLevel : 255
Template                  : STS#0
Title                     : Contoso Team Site
AllowSelfServiceUpgrade   : True

Create a Site Collection

When we’re ready to create a Site Collection we can use the New-SPOSite cmdlet. This cmdlet is very similar to the New-SPSite cmdlet that we have for on-premises deployments. The following shows the syntax for the cmdlet:

New-SPOSite [-Url] <UrlCmdletPipeBind> -Owner <string> -StorageQuota <long> [-Title <string>] [-Template <string>] [-LocaleId <uint32>] [-CompatibilityLevel <int>] [-ResourceQuota <double>] [-TimeZoneId <int>] [-NoWait] [<CommonParameters>]

The following example demonstrates how we would call the cmdlet to create a new Site Collection called “Test”:

New-SPOSite -Url https://contoso.sharepoint.com/sites/Test -Title "Test" -Owner "gary@contoso.com" -Template "STS#0" -TimeZoneId 10 -StorageQuota 100

 

Note that the cmdlet also takes in a -NoWait parameter; this parameter tells the cmdlet to return immediately and not wait for the creation of the Site Collection to complete. If not specified then the cmdlet will poll the environment until it indicates that the Site Collection has been created. Using the -NoWait parameter is useful, however, when creating batches of Site Collections thereby allowing the operations to run asynchronously.

One issue you might bump into is in knowing which templates are available for your use. In the preceding example we are using the “STS#0” template, however, there are other templates available for our use and we can discover them using the Get-SPOWebTemplate cmdlet, as shown below:

PS C:\> Get-SPOWebTemplate

Name                     Title                         LocaleId  CompatibilityLevel
—-                     —–                         ——–  ——————
STS#0                    Team Site                         1033                  15
BLOG#0                   Blog                              1033                  15
BDR#0                    Document Center                   1033                  15
DEV#0                    Developer Site                    1033                  15
DOCMARKETPLACESITE#0     Academic Library                  1033                  15
OFFILE#1                 Records Center                    1033                  15
EHS#1                    Team Site – SharePoint Onl…     1033                  15
BICenterSite#0           Business Intelligence Center      1033                  15
SRCHCEN#0                Enterprise Search Center          1033                  15
BLANKINTERNETCONTAINER#0 Publishing Portal                 1033                  15
ENTERWIKI#0              Enterprise Wiki                   1033                  15
PROJECTSITE#0            Project Site                      1033                  15
COMMUNITY#0              Community Site                    1033                  15
COMMUNITYPORTAL#0        Community Portal                  1033                  15
SRCHCENTERLITE#0         Basic Search Center               1033                  15
visprus#0                Visio Process Repository          1033                  15

Give Access to a Site Collection

Once your Site Collection has been created you may wish to grant users access to the Site Collection. First you may want to create a new SharePoint group (if an appropriate one is not already present) and then you may want to add users to that group (or an existing one). To accomplish these tasks you use the New-SPOSiteGroup cmdlet and the Add-SPOUser cmdlet, respectively.

Looking at the New-SPOSiteGroup cmdlet you can see that it takes only three parameters, the name of the group to create, the permissions to add to the group, and the Site Collection within which to create the group:

New-SPOSiteGroup [-Site] <SpoSitePipeBind> [-Group] <string> [-PermissionLevels] <string[]> [<CommonParameters>]

In the following example I’m creating a new group named “Designers” and giving it the “Design” permission:

$site = Get-SPOSite https://contoso.sharepoint.com/sites/Test -Detailed

$group = New-SPOSiteGroup -Site $site -Group "Designers" -PermissionLevels "Design"

(Note that I’m saving the Site Collection to a variable just to keep the commands a little shorter; you could just as easily provide the string URL directly.)

Once the group is created we can then use the Add-SPOUser cmdlet to add a user to the group. Like the New-SPOSiteGroup cmdlet this cmdlet takes three parameters:

Add-SPOUser [-Site] <SpoSitePipeBind> [-LoginName] <string> [-Group] <string> [<CommonParameters>]

In the following example I’m adding a new user to the previously created group:

Add-SPOUser -Site $site -Group $group.LoginName -LoginName "tessa@contoso.com"

Delete and Recover a Site Collection

If you’ve created a Site Collection that you now wish to delete you can easily accomplish this by using the Remove-SPOSite cmdlet. When this cmdlet finishes the Site Collection will have been moved to the recycle bin and not actually deleted.

If you wish to permanently delete the Site Collection (and thus remove it from the recycle bin) then you must use the Remove-SPODeletedSite cmdlet. So to do a permanent delete it’s actually a two step process, as shown in the example below where I first move the “Test” Site Collection to the recycle bin and then delete it from the recycle bin:

Remove-SPOSite "https://contoso.sharepoint.com/sites/Test" -Confirm:$false

Remove-SPODeletedSite -Identity "https://contoso.sharepoint.com/sites/Test" -Confirm:$false

 

If you decide that you’d actually like to restore the Site Collection from the recycle bin you can simply use the Restore-SPODeletedSite cmdlet:

Restore-SPODeletedSite https://contoso.sharepoint.com/sites/Test

Both the Remove-SPOSite and the Restore-SPODeletedSite cmdlets accept a -NoWait parameter which you can provide to tell the cmdlet to return immediately.

Parting Thoughts

There are obviously many other cmdlets available to explore (per the previous list), however, I hope that in the simple samples shown in this article you will find that working with the cmdlets is quite easy and fairly intuitive.

The key thing to remember is that you are working in a stateless environment so changes to an object such as SPOSite will not affect the actual Site Collection in any way and cmdlets like the Set-SPOSite cmdlet will not honor changes made to the properties as it will use nothing more than the URL property to know which Site Collection you are updating.

Though the existence of these cmdlets is definitely a good start and absolutely better than nothing, I have to say that I’m extraordinarily displeased with the number of available cmdlets and with how the module was implemented.

My biggest gripe is that the module is not extensible in any way so if I wish to add cmdlets for the management of SPWeb objects or SPList objects I’d have to create a whole new framework which would require an additional login as I wouldn’t be able to leverage the context object created by Connect-SPOService cmdlet.

This results in a severely limiting product that prevents community and ISV generated solutions from “fitting in” to the existing model. Perhaps one day I’ll create my own set of cmdlets to show Microsoft how it should have been done…perhaps one day I’ll have time for such frivolities :) .

 

How To : Setup MyTask List in SharePoint 2013

Overview

You are using SharePoint 2013 and you have deployed My Sites. You or your users have tasks assigned. But when you or your users visit their My Site, they see the screen below. Despite the users having tasks assigned elsewhere in the system, the My Site still shows no tasks, which is incorrect.

123

 

What is My Task List in SharePoint 2013?

By the architecture of the Newsfeed site in SharePoint 2013, the My Tasks list aggregates and shows all SharePoint and Project Server (if installed) task assignments right on the user’s My Site page. The tasks can be either private tasks or public tasks.

Pre-requisites for proper sync of My Task?

  • Search Service Application – it is very important to have this service enabled and running. The aggregator checks every 3 hours for any new “Tasks” lists. Though the aggregator would look for SharePoint events/hints, these are known not to activate an aggregation, hence the importance given to the indexer. It is very important to have an incremental/continuous crawl running.
  • Work Management Service Application (WMA) and the service running on the server.
  • User Profile Synchronization Service

Refreshing the My Tasks Page

The code-behind aggregator is triggered by simply visiting the page within the Newsfeed site, as long as the last trigger is older than 5 minutes. This delay preserves the performance of the SharePoint farm. It can be changed using PowerShell, but I highly recommend against this for large farm deployments.

Possible problems causing sync not to work

  1. The Work Management Service wasn’t running
  2. Search wasn’t indexing anything yet. No indexer meant the aggregator could potentially not be performing any aggregation either.

1234

Solution

  1. The Work Management Service should run on the app server. If required, create one from Central Admin.
  2. The Work Management service application should be created with an app pool that runs under the profile app pool account.
  3. Create/ensure incremental crawls across all the content sources, and set up people search and My Sites search.
  4. Ensure that continuous crawl is running.
  5. Wait till the crawl completes.
  6. Review the permissions of the profile app pool and portal app pool accounts on the following databases (db_owner permissions are required):
  • social db
  • sync db
  • profile db
  • state service db
  • manage metadata db
  • my site db
  • portal content db
  • projects content db
  • teams content db
  • communities content db
  • Search db.
  1. User profile synchronization service should be running.
  2. Run IIS reset on all app and WFE servers at the same time.
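The following sketch, run from the SharePoint 2013 Management Shell, is one way to verify the Work Management and crawl prerequisites; the content source name in the commented line is illustrative.

# Is the Work Management service application present?
Get-SPServiceApplication | Where-Object { $_.TypeName -like "*Work Management*" } | Select-Object DisplayName, Status

# Is the Work Management service instance started on the application server?
Get-SPServiceInstance | Where-Object { $_.TypeName -like "*Work Management*" } | Select-Object Server, Status

# Is continuous crawl enabled on the Search content sources?
$ssa = Get-SPEnterpriseSearchServiceApplication
Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa | Select-Object Name, EnableContinuousCrawls

# Enable continuous crawl on a content source if required (name is illustrative).
# Set-SPEnterpriseSearchCrawlContentSource -Identity "Local SharePoint sites" -SearchApplication $ssa -EnableContinuousCrawls $true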


Introduction to the Unified Logging Service and Creating a Javascript Logging System

Microsoft SharePoint Foundation exposes a rich logging mechanism known as the Unified Logging Service (ULS) that enables developers to write useful information that helps them identify and troubleshoot issues during the application life cycle. The ULS writes SharePoint Foundation events to the SharePoint Trace Log and stores them in the file system, typically inside the SharePoint root folder, in files named \14\LOGS\<ServerName>-YYYYMMDD-HHMM.log.

ULS exposes a rich managed object model that enables developers to specify their own configurations, such as categories and severity, while writing exceptions or trace messages to the ULS logs. You can find more details on the managed API in the article Writing to the Trace Log from Custom Code.
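For comparison with the client-side approach in this article, the managed API can also be exercised from the SharePoint 2010 Management Shell. The sketch below is illustrative only and assumes the public WriteTrace overload shown here is available on your build.

$traceSeverity = [Microsoft.SharePoint.Administration.TraceSeverity]::Verbose
$eventSeverity = [Microsoft.SharePoint.Administration.EventSeverity]::Information

# An ad hoc category; the name is what appears in the Category column of the trace log.
$category = New-Object Microsoft.SharePoint.Administration.SPDiagnosticsCategory("Custom SharePoint Application", $traceSeverity, $eventSeverity)

# Write a Verbose trace message to the ULS log through the local diagnostics service.
[Microsoft.SharePoint.Administration.SPDiagnosticsService]::Local.WriteTrace(0, $category, $traceSeverity, "A trace message written through the managed ULS API")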

With the evolution of a rich client object model in SharePoint 2010 that enables developers to build complex client applications, it is very important to write useful information that is not visible in the user interface but is recorded on the server so it can be monitored by administrators and developers.

To address these scenarios for applications running in thin-client browsers, SharePoint Foundation provides a web service named SharePoint Diagnostics (diagnostics.asmx). This web service enables a client application to submit diagnostic reports directly to the ULS logs.

This article focuses on how you can leverage the SharePoint Diagnostics web service to write trace messages from a custom JavaScript application into the ULS logs.

The following points are discussed:

  • Overview of the SendClientScriptErrorReport web method
  • Creating a simple JavaScript application to log trace messages by using SharePoint Diagnostics web service
  • Setting up the required configurations for enabling logging via the Diagnostics web service
  • Using the application
  • Using the ULS logging script with sandboxed solutions
The Diagnostics web service exposes a single method named SendClientScriptErrorReport that enables client applications to report errors to the ULS service. The following list summarizes the parameters required by the SendClientScriptErrorReport method.

  • Message – A string containing the message to display to the client. Example: The value of the property 'displaypage' is null or undefined, not a Function object.
  • File – The URL file name associated with the current error. Example: customscript.js
  • Line – A string containing the line of code from which the error is being generated. Example: 9
  • Client – A string containing the client name that is experiencing the error. Example: <client><browser name='Internet Explorer' version='9.0'></browser><language>en-us</language></client>
  • Stack – A string containing the call-stack information from the generated error. Example: <stack><function depth='0' signature='myFunction()'>function myFunction() { displaypage(); }</function></stack>
  • Team – A string containing a team or product name. Example: Custom SharePoint Application
  • originalFile – The physical file name associated with the current error. Example: customscript.js

In the list above, notice that the example values for Client and Stack are XML fragments, not single lines of text. This format is stated in the protocol specification documented in 3.1.4.1.2.1 SendClientScriptErrorReport. Even though the protocol specification requires a valid XML fragment for these parameters, the web-service call still succeeds if the supplied values do not follow this schema; however, building the client and stack in this format does add more information to the trace.

The parameter list shows that, unlike the managed API, the SendClientScriptErrorReport web method does not provide any option to specify the category or severity of the message being logged. Also, based on the method name and description, you might expect the logged exception to be given a severity level of Error. However, any message logged through the SharePoint Diagnostics web service is always displayed under the category Unified Logging Service with a trace log severity level of Verbose.

Later in this article, you will see the steps required to view the traces written through the SharePoint Diagnostics web service.

In this section, you create a JavaScript application that uses the Diagnostics web service to report errors to the ULS. The application contains a JavaScript file named ULSLogScript.js that contains the necessary functions to communicate and log traces to the Diagnostics web service. These functions are then called directly from any consumer script.

Note
This is a relatively simple application with just one file, so you are not creating a formal SharePoint solution; instead, you save the files directly to the Layouts directory in the SharePoint hive structure.

To create a JavaScript library containing the ULS logging logic

  1. Start Microsoft Visual Studio 2010.
  2. From the File menu, create a new JScript file and save it in the following path: <SharePoint Installation Folder>\14\TEMPLATE\LAYOUTS\LoggingSample\ULSLogScript.js.

    For example, C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\TEMPLATE\LAYOUTS\LoggingSample\ULSLogScript.js.

    Note
    You need to create a new directory named LoggingSample in the Layouts folder.
  3. Because you are using the jQuery library in the application, download the jquery-1.6.4.min.js file from the jQuery portal and add it to the LoggingSample folder created previously.
  4. Type or paste the following code into the ULSLogScript.js file.
    // Creates a custom ulslog object 
    // with the required properties.
    function ulsObject() {
        this.message = null;
        this.file = null;
        this.line = null;
        this.client = null;
        this.stack = null;
        this.team = null;
        this.originalFile = null;
    }
    

    The ulsObject function returns a new instance of a custom object with properties mapped to the parameters required by the SendClientScriptErrorReport method. This object is used throughout the script for performing various operations.

  5. Define the methods that populate the property values specified in the ulsObject method. Begin by defining the function that retrieves the client property. Following the ulsObject method, type or paste the following code.
    // Detecting the browser to create the client information
    // in the required format.
    function getClientInfo() {
        var browserName = '';
    
        if (jQuery.browser.msie)
            browserName = "Internet Explorer";
        else if (jQuery.browser.mozilla)
            browserName = "Firefox";
        else if (jQuery.browser.safari)
            browserName = "Safari";
        else if (jQuery.browser.opera)
            browserName = "Opera";
        else
            browserName = "Unknown";
    
        var browserVersion = jQuery.browser.version;
        var browserLanguage = navigator.language;
        if (browserLanguage == undefined) {
            browserLanguage = navigator.userLanguage;
        }
    
        var client = "<client><browser name='{0}' version='{1}'></browser><language>{2}</language></client>";
        client = String.format(client, browserName, browserVersion, browserLanguage);
     
        return client;
    }
    
    // Utility function to assist string formatting.
    String.format = function () {
        var s = arguments[0];
        for (var i = 0; i < arguments.length - 1; i++) {
            var reg = new RegExp("\\{" + i + "\\}", "gm");
            s = s.replace(reg, arguments[i + 1]);
        }
    
        return s;
    }
    

    The getClientInfo function uses the jQuery library to detect the current browser properties, such as the name and version, and then creates an XML fragment (as discussed previously) describing the browser where the application is currently running. Additionally, a utility function named String.format assists with string formatting throughout the code.

  6. Next, you need a function to create the call stack for the exception raised in the script. Add the following functions to the ULSLogScript.js code.
    // Creates the callstack in the required format 
    // using the caller function definition.
    function getCallStack(functionDef, depth) {
        if (functionDef != null) {
            var signature = '';
            functionDef = functionDef.toString();
            signature = functionDef.substring(0, functionDef.indexOf("{"));
            if (signature.indexOf("function") == 0) {
                signature = signature.substring(8);
            }
    
            if (depth == 0) {
                var stack = "<stack><function depth='0' signature='{0}'>{1}</function></stack>";
                stack = String.format(stack, signature, functionDef);
            }
            else {
                var stack = "<stack><function depth='1' signature='{0}'></function></stack>";
                stack = String.format(stack, signature);
            }
    
            return stack;
        }
    
        return "";
    }
    

    The getCallStack function receives the function definition where the exception occurred and a depth value as parameters. The depth parameter is used to decide whether only the caller function signature is required or whether the complete function definition should be included. Based on the caller function definition, the getCallStack function extracts the required information, such as the signature and body, and creates an XML fragment as described in the protocol specification.

  7. Next, create a function that creates a SOAP packet in the format expected by the Diagnostics web service SendClientScriptErrorReport method. Type or paste the following functions in the ULSLogScript.js file.
    // Creates the SOAP packet required by SendClientScriptErrorReport
    // web method.
    function generateErrorPacket(ulsObj) {
        var soapPacket = "<?xml version=\"1.0\" encoding=\"utf-8\"?>" +
                            "<soap:Envelope xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" " +
                                           "xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" "+
                                           "xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                              "<soap:Body>" +
                                "<SendClientScriptErrorReport " +
                                  "xmlns=\"http://schemas.microsoft.com/sharepoint/diagnostics/\">" +
                                  "<message>{0}</message>" +
                                  "<file>{1}</file>" +
                                  "<line>{2}</line>" +
                                  "<stack>{3}</stack>" +
                                  "<client>{4}</client>" +
                                  "<team>{5}</team>" +
                                  "<originalFile>{6}</originalFile>" +
                                "</SendClientScriptErrorReport>" +
                              "</soap:Body>" +
                            "</soap:Envelope>";
     
        soapPacket = String.format(soapPacket, encodeXmlString(ulsObj.message), encodeXmlString(ulsObj.file), 
                     ulsObj.line, encodeXmlString(ulsObj.stack), encodeXmlString(ulsObj.client), 
                     encodeXmlString(ulsObj.team), encodeXmlString(ulsObj.originalFile));
     
        return soapPacket;
    }
    
    // Utility function to encode special characters in XML.
    function encodeXmlString(txt) {
        txt = String(txt);
        txt = jQuery.trim(txt);
        txt = txt.replace(/&/g, "&amp;");
        txt = txt.replace(/</g, "&lt;");
        txt = txt.replace(/>/g, "&gt;");
        txt = txt.replace(/'/g, "&apos;");
        txt = txt.replace(/"/g, "&quot;");
     
        return txt;
    }
    

    The generateErrorPacket function receives an instance of the ulsObj object and returns the SOAP packet for the SendClientScriptErrorReport function as a string in the expected format. Because the values for some parameters are expected to be XML fragments, the encodeXmlString function is used to encode the special characters.

  8. When the SOAP packet has been defined, you need a function to issue an asynchronous request to the Diagnostics web service. Add the code below to the ULSLogScript.js file.
    // Function to form the Diagnostics service URL.
    function getWebSvcUrl() {
        var serverurl = location.href;
        if (serverurl.indexOf("?") != -1) {
            serverurl = serverurl.replace(location.search, '');
        }
     
        var index = serverurl.lastIndexOf("/");
        serverurl = serverurl.substring(0, index); // Trim the page name, keeping everything up to the last '/'.
        serverurl = serverurl.concat('/_vti_bin/diagnostics.asmx');
     
        return serverurl;
    }
    
    // Method to post the SOAP packet to the Diagnostic web service.
    function postMessageToULSSvc(soapPacket) {
        $(document).ready(function () {
            $.ajax({
                url: getWebSvcUrl(),
                type: "POST",
                dataType: "xml",
                data: soapPacket, //soap packet.
                contentType: "text/xml; charset=\"utf-8\"",
                success: handleResponse, // Invoke when the web service call is successful.
                error: handleError// Invoke when the web service call fails.
            });
        });
    }
    
    // Invoked when the web service call succeeds.
    function handleResponse(data, textStatus, jqXHR) {
        // Custom code...
        alert('Successfully logged trace to ULS');
     }
     
    // Invoked when the web service call fails.
    function handleError(jqXHR, textStatus, errorThrown) {
        //Custom code...
            alert('Error occurred in executing the web request');
    }
    

    The postMessageToULSSvc function performs an asynchronous HTTP request and posts the SOAP packet to the Diagnostics web service. The URL of the Diagnostics web service is constructed dynamically in the getWebSvcUrl function. The postMessageToULSSvc function also defines handlers for the success and error responses. Instead of displaying alerts in the handlers, you can write other logic as required by your application.

  9. Finally, you need a function that is invoked automatically when an error occurs in the code. To register this function globally for all the JavaScript functions on the page, you attach this function to the window.onerror event. Add the following lines of code as the first line of the ULSLogScript.js file.
    // Registering the ULS logging function on a global level.
    window.onerror = logErrorToULS;
    
    // Set default value for teamName.
    var teamName = "Custom SharePoint Application";
    
    // Further add the logErrorToULS method at the end of the script.
    
    // Function to log messages to Diagnostic web service.
    // Invoked by the window.onerror message.
    function logErrorToULS(msg, url, linenumber) {
        var ulsObj = new ulsObject();
        ulsObj.message = "Error occurred: " + msg;
        ulsObj.file = url.substring(url.lastIndexOf("/") + 1); // Get the current file name.
        ulsObj.line = linenumber;
        ulsObj.stack = getCallStack(logErrorToULS.caller, 0); // Create error call stack (depth 0 = include the full function definition).
        ulsObj.client = getClientInfo(); // Create client information.
        ulsObj.team = teamName; // Declared in the consumer script.
        ulsObj.originalFile = ulsObj.file;
    
        var soapPacket = generateErrorPacket(ulsObj); // Create the soap packet.
        postMessageToULSSvc(soapPacket); // Post to the web service.
    
        return true;
    }
    

    The line window.onerror = logErrorToULS links the function logErrorToULS with the window.onerror event. This enables you to capture the required information such as the error message, line number, and error file. The teamName variable enables you to set a unique value with respect to the calling application. This can be overridden in the consumer scripts. The logErrorToULS function creates an instance of the ulsObj object and populates all of its properties. Here, you see that the stack property of the ulsObj object is set to logErrorToULS.caller. This provides the function definition of the method that invoked this function. The postMessageToULSSvc function is called to write the error information to the trace logs.

    Note
    Because you cannot specify the security level of the trace message in the SendClientScriptErrorReport method, the message property of the ulsObj object is prepended with text indicating that the message logged is part of an exception.
  10. The logErrorToULS function is called automatically when an error occurs on the page, but to intentionally write a trace message to the ULS, you need one more function which can be called specifically. Add the following function just below the logErrorToULS function.
    // Function to log message to Diagnostic web service.
    // Specifically invoked by a consumer method.
    function logMessageToULS(message, fileName) {
        if (message != null) {
            var ulsObj = new ulsObject();
            ulsObj.message = message;
            ulsObj.file = fileName;
            ulsObj.line = 0; // We don't know the line, so we set it to zero.
            ulsObj.stack = getCallStack(logMessageToULS.caller, 1); // Depth 1 = caller signature only.
            ulsObj.client = getClientInfo();
            ulsObj.team = teamName;
            ulsObj.originalFile = ulsObj.file;
    
            var soapPacket = generateErrorPacket(ulsObj);
            postMessageToULSSvc(soapPacket);
        }
    }
    

    Unlike the logErrorToULS function, the logMessageToULS function accepts the message to be logged and the file name where the error occurred as parameters.

So far, you have created the required logic to write trace messages or exceptions to the ULS logs. Now you need to write a function that consumes the logErrorToULS or logMessageToULS functions.

To create the consumer application

  1. Navigate to your SharePoint site.
  2. Create a new Web Parts page.
  3. Add a Content Editor Web Part in any of the available Web Part zones.
  4. Edit the Web Part and type or paste the following text in the HTML source.
    <script src="/_layouts/LoggingSample/jquery-1.6.4.min.js" type="text/javascript"></script>
     <script src="/_layouts/LoggingSample/ULSLogScript.js" type="text/javascript"></script>
     <script type="text/javascript">
            var teamName = "Simple ULS Logging";
            function doWork() {
                unknownFunction();
            }
            function logMessage() {
                logMessageToULS('This is a trace message from CEWP', 'loggingsample.aspx');
            }
     </script>
    
    <input type="button" value="Log Exception" onclick="doWork();" />
        <br /><br />
      <input type="button" value="Log Trace" onclick="logMessage();" />
    
    

    This HTML code contains the required script references to include the jQuery library and the ULSLogScript.js file that you created in the previous section. It also contains two inline JavaScript functions and the respective input buttons to invoke them.

    To demonstrate exception handling, the doWork function makes a call to an unknownFunction function that does not exist. This raises an exception that is intercepted and logged by the ULSLogScript.js code. To demonstrate message logging, the logMessage function calls the logMessageToULS function to write trace messages to ULS.

  5. Exit the web page design mode.
  6. Save the Web Parts page.
Finally, you need to configure the Diagnostic Logging Service in SharePoint Central Administration to ensure that the traces and exceptions logged from the Diagnostics web service are visible in the ULS logs.

To configure the Diagnostic Logging Service

  1. Open SharePoint Central Administration.
  2. From the Quick Launch, click Monitoring.
    Figure 1. Click the Monitoring option

  3. On the monitoring page, in the Reporting section, click Configure diagnostic logging.
    Figure 2. Click Configure diagnostic logging

  4. From all categories, expand the SharePoint Foundation category.
    Figure 3. Expand the SharePoint Foundation category

  5. Select the Unified Logging Service category.
    Figure 4. Select Unified Logging Service

  6. In the Least critical event to report to the trace log list, select Verbose.
    Figure 5. In the dropdown list, select Verbose

  7. Click OK to save the configuration.

The server is now ready to log traces sent by the Diagnostics web service to ULS. These traces appear under the category Unified Logging Service with a severity set to Verbose.
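The same setting can also be applied from the SharePoint 2010 Management Shell; this is a hedged sketch that assumes the Area:Category identity string matches what you see in Central Administration.

# Raise the Unified Logging Service category to Verbose.
Set-SPLogLevel -Identity "SharePoint Foundation:Unified Logging Service" -TraceSeverity Verbose

# Revert to the default level when you are done testing.
# Clear-SPLogLevel -Identity "SharePoint Foundation:Unified Logging Service"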

In this section, you test the application by raising an alert that is logged to the ULS.

To test the logging application

  1. Click the Log Exception button inside the Content Editor Web Part (CEWP).
    Figure 6. Click the Log Exception button

  2. An alert indicates that the message has been logged successfully to ULS.
    Figure 7. Confirmation message

  3. To see the exception details in the ULS logs, navigate to the Logs folder in the SharePoint hive ({SP Installation Path}\14\LOGS\)
  4. Because multiple log files can be present in the Logs folder, perform a descending sort on the Date modified field.
  5. Open the recent log file in a text editor such as Notepad and then search for Simple ULS Logging (the team name specified previously). Now you should see all the web service parameters as supplied from the client application, from Message to OriginalFileName, in the following text:

    10/14/2011 21:00:37.87  w3wp.exe (0x097C)  0x14DC  SharePoint Foundation  Unified Logging Service  a084  Verbose  Message: Error occurred: The value of the property 'unknownFunction' is null or undefined, not a Function object  543a6672-9078-452f-93bd-545c4babefd5
    10/14/2011 21:00:37.87  w3wp.exe (0x097C)  0x14DC  SharePoint Foundation  Unified Logging Service  a085  Verbose  File: ULS%20Logging%20Sample.aspx  543a6672-9078-452f-93bd-545c4babefd5
    10/14/2011 21:00:37.87  w3wp.exe (0x097C)  0x14DC  SharePoint Foundation  Unified Logging Service  a086  Verbose  Line: 676  543a6672-9078-452f-93bd-545c4babefd5
    10/14/2011 21:00:37.87  w3wp.exe (0x097C)  0x14DC  SharePoint Foundation  Unified Logging Service  a087  Verbose  Client: <client><browser name='Internet Explorer' version='8.0'></browser><language>en-us</language></client>  543a6672-9078-452f-93bd-545c4babefd5
    10/14/2011 21:00:37.87  w3wp.exe (0x097C)  0x14DC  SharePoint Foundation  Unified Logging Service  a088  Verbose  Stack: <stack><function depth='0' signature='doWork()'>function doWork() { unknownFunction(); }</function></stack>  543a6672-9078-452f-93bd-545c4babefd5
    10/14/2011 21:00:37.87  w3wp.exe (0x097C)  0x14DC  SharePoint Foundation  Unified Logging Service  a089  Verbose  TeamName: Simple ULS Logging  543a6672-9078-452f-93bd-545c4babefd5
    10/14/2011 21:00:37.87  w3wp.exe (0x097C)  0x14DC  SharePoint Foundation  Unified Logging Service  a08a  Verbose  OriginalFileName: ULS%20Logging%20Sample.aspx  543a6672-9078-452f-93bd-545c4babefd5

    Looking at the log message, you can easily determine that the exception occurred because unknownFunction was not defined, along with other relevant details such as the line number.

  6. Similarly, clicking Log Trace on the CEWP writes the following trace message:

    10/14/2011 21:29:55.76  w3wp.exe (0x097C)  0x0F6C  SharePoint Foundation  Unified Logging Service  a084  Verbose  Message: This is a trace message from CEWP  8c182889-c323-46f3-a287-a538c379f152
    10/14/2011 21:29:55.76  w3wp.exe (0x097C)  0x0F6C  SharePoint Foundation  Unified Logging Service  a085  Verbose  File: loggingsample.aspx  8c182889-c323-46f3-a287-a538c379f152
    10/14/2011 21:29:55.76  w3wp.exe (0x097C)  0x0F6C  SharePoint Foundation  Unified Logging Service  a086  Verbose  Line: 0  8c182889-c323-46f3-a287-a538c379f152
    10/14/2011 21:29:55.76  w3wp.exe (0x097C)  0x0F6C  SharePoint Foundation  Unified Logging Service  a087  Verbose  Client: <client><browser name='Internet Explorer' version='8.0'></browser><language>en-us</language></client>  8c182889-c323-46f3-a287-a538c379f152
    10/14/2011 21:29:55.76  w3wp.exe (0x097C)  0x0F6C  SharePoint Foundation  Unified Logging Service  a088  Verbose  Stack: <stack><function depth='1' signature='logMessage()'></function></stack>  8c182889-c323-46f3-a287-a538c379f152
    10/14/2011 21:29:55.76  w3wp.exe (0x097C)  0x0F6C  SharePoint Foundation  Unified Logging Service  a089  Verbose  TeamName: Simple ULS Logging  8c182889-c323-46f3-a287-a538c379f152
    10/14/2011 21:29:55.76  w3wp.exe (0x097C)  0x0F6C  SharePoint Foundation  Unified Logging Service  a08a  Verbose  OriginalFileName: loggingsample.aspx  8c182889-c323-46f3-a287-a538c379f152

    In this log, you see that a trace message was sent by the logMessage function.

In a sandboxed solution, you cannot deploy any file to the server file system (the Layouts folder), so to make the ULS logging script work, you need to make the following two changes:

  1. Provision the jquery-1.6.4.min.js and ULSLogScript.js file to a Site Collection–relative Styles Library folder (or any other library with appropriate read access).
  2. Update the script references in the consumer Content Query Web Part (CQWP), as needed.

The remaining functionality should work as is.

10 Must-Have Visual Studio Productivity Add-Ins I use every day and recommend to every .NET Developer

Visual Studio provides a rich extensibility model that developers at Microsoft and in the community have taken advantage of to provide a host of quality add-ins. Some add-ins contribute significant how-did-I-live-without-this functionality, while others just help you automate that small redundant task you constantly find yourself performing.
10 Must-Have Add-Ins

TestDriven.NET
GhostDoc
Smart Paster
CodeKeep
PInvoke.NET
VSWindowManager PowerToy
WSContractFirst
VSMouseBindings
CopySourceAsHTML
Cache Visualizer

In this article, I introduce you to some of the best Visual Studio add-ins available today that can be downloaded for free. I walk through using each of the add-ins, but because I am covering so many I only have room to introduce you to the basic functionality.
Each of these add-ins works with Visual Studio .NET 2003 and most of them already have versions available for Visual Studio 2005. If a Visual Studio 2005 version is not available as of the time of this writing, it should be shortly.

 

TestDriven.NET
Test-driven development is the practice of writing unit tests before you write code, and then writing the code to make those tests pass. By writing tests before you write code, you identify the exact behavior your code should exhibit and, as a bonus, at the end you have 100 percent test coverage, which makes extensive refactoring possible.
NUnit gives you the ability to write unit tests using a simple syntax and then execute those tests one by one or altogether against your app. If you are using Visual Studio Team System, you have unit testing functionality built into the Visual Studio IDE. Before Visual Studio Team System, there was TestDriven.NET, an add-in that integrates NUnit directly into the Visual Studio IDE. If you are using a non-Team System version of Visual Studio 2005 or Visual Studio .NET 2003, it is, in my opinion, still the best solution available.
TestDriven.NET adds unit testing functionality directly to the Visual Studio IDE. Instead of writing a unit test, switching over to the NUnit GUI tool, running the test, then switching back to the IDE to code, and so on, you can do it all right in the IDE.

 

Figure 1 New Testing Options from TestDriven.NET 
After installing TestDriven.NET you will find a number of new menu items on the right-click context menu as shown in Figure 1. You can right-click directly on a unit test and run it. The results will be displayed in the output window as shown in Figure 2.

 

Figure 2 Output of a Unit Test 
While executing unit tests in the IDE is invaluable by itself, perhaps the best feature is that you can also quickly launch into the debugger by right-clicking on a test and selecting Test With | Debugger. This will launch the debugger and then execute your unit tests, hitting any breakpoints you have set in those tests.
In fact, it doesn’t even have to be a unit test for TestDriven.NET to execute it. You could just as easily test any public method that returns void. This means that if you are testing an old app and need to walk through some code, you can write a quick test and execute it right away.
TestDriven.NET is an essential add-in if you work with unit tests or practice test-driven development. (If you don’t already, you should seriously consider it.) TestDriven.NET was written by Jamie Cansdale and can be downloaded from http://www.testdriven.net.

 

GhostDoc
XML comments are invaluable tools when documenting your application. Using XML comments, you can mark up your code and then, using a tool like NDoc, you can generate help files or MSDN-like Web documentation based on those comments. The only problem with XML documentation is the time it takes to write it: you often end up writing similar statements over and over again. The goal of GhostDoc is to automate the tedious parts of writing XML comments by looking at the name of your class or method, as well as any parameters, and making an educated guess as to how the documentation should appear based on recommended naming conventions. This is not a replacement for writing thorough documentation of your business rules and providing examples, but it will automate the mindless part of your documentation generation.
For instance, consider the method shown here:

private void SavePerson(Person person) { }

After installing GhostDoc, you can right-click on the method declaration and choose Document this. The following comments will then be added to your document:

/// <summary>
/// Saves the person.
/// </summary>
/// <param name="person">Person.</param>
private void SavePerson(Person person) { }

As you can see, GhostDoc has automatically generated a summary based on how the method was named and has also populated the parameter comments. Don’t stop here; you should add additional comments stating where the person is being saved to, or perhaps give an example of creating and saving a person. Here is my comment after adding some additional information by hand:

/// <summary>
/// Saves a person using the configured persistence provider.
/// </summary>
/// <param name="person">The Person to be saved</param>
private void SavePerson(Person person) { }
Adding these extra comments is much easier since the basic, redundant portion is automatically generated by GhostDoc. GhostDoc also includes options that allow you to modify existing rules and add additional rules that determine what kind of comments should be generated.
GhostDoc was written by Roland Weigelt and can be downloaded from http://www.roland-weigelt.de/ghostdoc.

 

Smart Paster
Strings play a large role in most applications, whether they are comments being used to describe the behavior of the system, messages being sent to the user, or SQL statements that will be executed. One of the frustrating parts of working with strings is that they never seem to paste correctly into the IDE. When you are pasting comments, the strings might be too long or not aligned correctly, leaving you to spend time inserting line breaks, comment characters, and tabbing. When working with strings that will actually be concatenated, you have to do even more work, usually separating the parts of the string and inserting concatenation symbols or using a string builder.
The Smart Paster add-in helps to limit some of this by providing a number of commands on the right-click menu that let you paste a string from the clipboard into Visual Studio using a certain format. After installing Smart Paster, you will see the new paste options available on the right-click menu (see Figure 3).

 

Figure 3 String Pasting Options from Smart Paster 
For instance, you might have the following string detailing some of your business logic:

To update a person record, a user must be a member of the customer service group or the manager group. After the person has been updated, a letter needs to be generated to notify the customer of the information change.

You can copy and paste this into Visual Studio using the Paste As | Comment option, and you would get the following:

//To update a person record a user must be a member of the customer
//service group or the manager group. After the person has been updated
//a letter needs to be generated to notify the customer of the
//information change.
The correct comment characters and carriage returns are automatically inserted (you can configure at what length to insert a carriage return). If you were inserting this text without the help of Smart Paster, it would paste as one long line, forcing you to manually add all the line breaks and comment characters. As another example, let’s say you have the following error message that you need to insert values into at run time:

You do not have the correct permissions to perform <insert action>. You must be a member of the <insert group> to perform this action.

Using the Paste As | StringBuilder command, you can insert this string as a StringBuilder into Visual Studio. The results would look like this:

StringBuilder stringBuilder = new StringBuilder(134); stringBuilder.AppendFormat( @"You do not have the correct permissions to "); stringBuilder.AppendFormat( @"perform . You must be a member of "); stringBuilder.AppendFormat( @"the  to perform this action.");

It would then simply be a matter of modifying the code to replace the variable sections of the string:

StringBuilder stringBuilder = new StringBuilder(134);
stringBuilder.AppendFormat(
    @"You do not have the correct permissions to ");
stringBuilder.AppendFormat(
    @"perform {0}. You must be a member of ", action);
stringBuilder.AppendFormat(
    @"the {0} to perform this action.", group);

Smart Paster is a time-saving add-in that eliminates a lot of the busy work associated with working with strings in Visual Studio. It was written by Alex Papadimoulis.

 

CodeKeep
Throughout the process of software development, it is common to reuse small snippets of code. Perhaps you reuse an example of how to get an enum value from a string or a starting point on how to implement a certain pattern in your language of choice.
Visual Studio offers some built-in functionality for working with code snippets, but it assumes a couple of things. First, it assumes that you are going to store all of your snippets on your local machine, so if you switch machines or move jobs you have to remember to pack up your snippets and take them with you. Second, these snippets can only be viewed by you. There is no built-in mechanism for sharing snippets between users, groups, or the general public.
This is where CodeKeep comes to the rescue. CodeKeep is a Web application that provides a place for people to create and share snippets of code in any language. The true usefulness of CodeKeep is its Visual Studio add-in, which allows you to search quickly through the CodeKeep database, as well as submit your own snippets.
After installing CodeKeep, you can search the existing code snippets by selecting Tools | CodeKeep | Search, and then using the search screen shown in Figure 4.

 

Figure 4 Searching Code Snippets with CodeKeep 
From this screen you can view your own snippets or search all of the snippets that have been submitted to CodeKeep. When searching for snippets, you see all of the snippets that other users have submitted and marked as public (you can also mark code as private if you want to hide your bad practices). If you find the snippet you are looking for, you can view its details and then quickly copy it to the clipboard to insert into your code.
You can also quickly and easily add your own code snippets to CodeKeep by selecting the code you want to save, right-clicking, and then selecting Send to CodeKeep. This will open a new screen that allows you to wrap some metadata around your snippet, including comments, what language it is written in, and whether it should be private or public for all to see.
Whenever you write a piece of code and you can imagine needing to use it in the future, simply take a moment to submit it; this way, you won’t have to worry about managing your snippets or rewriting them in the future. Since CodeKeep stores all of your snippets on the server, they are centralized so you don’t have to worry about moving your code from system to system or job to job.
CodeKeep was written by Arcware’s Dave Donaldson and is available from http://www.codekeep.net.

 

PInvoke.NET
P/Invoke (Platform Invocation Services) is the mechanism for making unmanaged Win32 API calls from within the .NET Framework. One of the hard parts of using P/Invoke is determining the method signature you need to use; this can often be an exercise in trial and error. Sending incorrect data types or values to an unmanaged API can result in memory leaks or other unexpected results.
PInvoke.NET is a wiki that can be used to document the correct P/Invoke signatures to be used when calling unmanaged Win32 APIs. A wiki is a collaborative Web site that anyone can edit, which means there are thousands of signatures, examples, and notes about using P/Invoke. Since the wiki can be edited by anyone, you can contribute as well as make use of the information there.
While the wiki and the information stored there are extremely valuable, what makes them most valuable is the PInvoke.NET Visual Studio add-in. Once you have downloaded and installed the PInvoke.NET add-in, you will be able to search for signatures as well as submit new content from inside Visual Studio. Simply right-click on your code file and you will see two new context items: Insert PInvoke Signatures and Contribute PInvoke Signatures and Types.

Figure 5 Using PInvoke.NET 
When you choose Insert PInvoke Signatures, you’ll see the dialog box shown in Figure 5. Using this simple dialog box, you can search for the function you want to call. Optionally, you can include the module that this function is a part of. Now, a crucial part of all major applications is the ability to make the computer Beep. So I will search for the Beep function and see what shows up. The results can be seen in Figure 6.

Figure 6 Finding the Beep Function in PInvoke.NET 
The results also point out when an equivalent managed API exists in .NET. Here the wiki suggests an alternative managed API, letting you know that there is a new method, System.Console.Beep, in the .NET Framework 2.0.
There is also a link at the bottom of the dialog box that will take you to the corresponding page on the wiki for the Beep method. In this case, that page includes documentation on the various parameters that can be used with this method as well as some code examples on how to use it.
After selecting the signature you want to insert, click the Insert button and it will be placed into your code document. In this example, the following code would be automatically created for you:

[DllImport("kernel32.dll", SetLastError=true)] [return: MarshalAs(UnmanagedType.Bool)] static extern bool Beep( uint dwFreq, uint dwDuration);

You then simply need to write a call to this method and your computer will be beeping in no time.

The PInvoke.NET wiki and Visual Studio add-in take away a lot of the pain and research time sometimes involved when working with the Win32 API from managed code. The wiki can be accessed at http://www.pinvoke.net, and the add-in can be downloaded from the Helpful Tools link found in the bottom-left corner of the site.

 

VSWindowManager PowerToy
The Visual Studio IDE includes a huge number of different Windows, all of which are useful at different times. If you are like me, you have different window layouts that you like to use at various points in your dev work. When I am writing HTML, I like to hide the toolbox and the task list window. When I am designing forms, I want to display the toolbox and the task list. When I am writing code, I like to hide all the windows except for the task list. Having to constantly open, close, and move windows based on what I am doing can be both frustrating and time consuming.
Visual Studio includes the concept of window layouts. You may have noticed that when you start debugging, the windows will automatically go back to the layout they were in the last time you were debugging. This is because Visual Studio includes a normal and a debugging window layout.
Wouldn’t it be nice if there were additional layouts you could use for when you are coding versus designing? Well, that is exactly what VSWindowManager PowerToy does.
After installing VSWindowManager PowerToy, you will see some new options in the Window menu as shown in Figure 7.

 

Figure 7 VSWindowManager Layout Commands 
The Save Window Layout As menu provides commands that let you save the current layout of your windows. To start using this power toy, set up your windows the way you like them for design and then navigate to the Windows | Save Window Layout As | My Design Layout command. This will save the current layout. Do the same for your favorite coding layout (selecting My Coding Layout), and then for up to three different custom layouts.
VSWindowManager will automatically switch between the design and coding layouts depending on whether you are currently viewing a designer or a code file. You can also use the commands on the Apply Window Layout menu to choose from your currently saved layouts. When you select one of the layouts you have saved, VSWindowManager will automatically hide, show, and rearrange windows so they are in the exact same layout as before.
VSWindowManager PowerToy is very simple, but can save you a lot of time and frustration. VSWindowManager is available from vswindowmanager.codeplex.com/.

 

WSContractFirst
Visual Studio makes creating Web services deceptively easy. You simply create an .asmx file, add some code, and you are ready to go. ASP.NET can then create a Web Services Description Language (WSDL) file used to describe behavior and message patterns for your Web service.
There are a couple problems with letting ASP.NET generate this file for you. The main issue is that you are no longer in control of the contract you are creating for your Web service. This is where contract-first development comes to the rescue. Contract-first development, also called contract-driven development, is the practice of writing the contract (the WSDL file) for your Web service before you actually write the Web service itself. By writing your own WSDL file, you have complete control over how your Web service will be seen and used, including the interface and data structures that are exposed.
Writing a WSDL document is not a lot of fun. It’s kind of like writing a legal contract, but using lots of XML. This is where the WSContractFirst add-in comes into play. WSContractFirst makes it easier to write your WSDL file, and will generate client-side and server-side code for you, based on that WSDL file. You get the best of both worlds: control over your contract and the rapid development you are used to from Visual Studio style services development.
The first step to using WSContractFirst is to create your XML schema files. These files will define the message or messages that will be used with your Web services. Visual Studio provides an easy-to-use GUI interface to define your schemas; this is particularly helpful because it is one of the key steps of the Web service development process. Once you have defined your schemas, you simply need to right-click on one of them and choose Create WSDL Interface Description. This will launch the Generate WSDL Wizard, the first step of which is shown in Figure 8.

Figure 8 Building a WSDL File with WSContractFirst  
Step 1 collects the basics about your service, including its name, namespace, and documentation. Step 2 allows you to specify the .xsd files you want to include in your service. The schema you selected to launch this wizard is included by default. Step 3 allows you to specify the operations of your service. You can name the operation as well as specify whether it is a one-way or request/response operation. Step 4 gives you the opportunity to enter the details for the operations and messages. Step 5 allows you to specify whether a <service> element should be created and whether or not to launch the code generation dialog automatically when this wizard is done. Step 6 lets you specify alternative .xsd paths. Once the wizard is complete, your new WSDL file is added to your project.
Now that you have your WSDL file, there are a couple more things WSContractFirst can do for you. To launch the code generation portion of WSContractFirst, you simply need to right-click on your WSDL file and select Generate Web Service Code. This will launch the code generation dialog box shown in Figure 9.

Figure 9 WSContractFirst Code Generation Options 
You can choose to generate a client-side proxy or a service-side stub, as well as configure some other options about the code and what features it should include. Using these code generation features helps speed up development tremendously.
If you are developing Web services using Visual Studio you should definitely look into WSContractFirst and contract-first development. WSContractFirst was written by Thinktecture’s Christian Weyer.

 

VSMouseBindings
Your mouse probably has five buttons, so why are you only using three of them? The VSMouseBindings power toy provides an easy-to-use interface that lets you assign each of your mouse buttons to a Visual Studio command.
VSMouseBindings makes extensive use of the command pattern. You can bind mouse buttons to various commands, such as open a new file, copy the selected text to the clipboard, or just about anything else you can do in Visual Studio. After installing VSMouseBindings you will see a new section in the Options dialog box called VsMouseBindings. The interface can be seen in Figure 10.

Figure 10 VSMouseBindings Options for Visual Studio 
As you can see, you can select a command for each of the main buttons. You probably shouldn’t mess around with the left and right mouse buttons, though, as their normal functionality is pretty useful. The back and forward buttons, however, are begging to be assigned to different commands. If you enjoy having functionality similar to a browser’s back and forward buttons, then you can set the buttons to the Navigate.Backward and Navigate.Forward commands in Visual Studio.
The Use this mouse shortcut in menu lets you set the scope of your settings. This means you can configure different settings when you are in the HTML designer as opposed to when you are working in the source editor.
VSMouseBindings is available from archive.msdn.microsoft.com/VSMouseBindings.

 

CopySourceAsHTML
Code is exponentially more readable when certain parts of that code are differentiated from the rest by using differently colored text. Reading code in Visual Studio is generally much easier than trying to read code in an editor like Notepad.
Chances are you may have your own blog by now, or at least have spent some time reading them. Normally, when you try to post a cool code snippet to your blog it ends up being plain old text, which isn’t the easiest thing to read. This is where the CopySourceAsHTML add-in comes into play. This add-in allows you to copy code as HTML, meaning you can easily post it to your blog or Web site and retain the coloring applied through Visual Studio.
After installing the CopySourceAsHTML add-in, simply select the code you want to copy and then select the Copy Source as HTML command from the right-click menu. After selecting this option you will see the dialog box shown in Figure 11.

Figure 11 Options for CopySourceAsHTML 

From here you can choose what kind of HTML view you want to create. This can include line numbers, specific tab sizes, and many other settings. After clicking OK, the HTML is saved to the clipboard. For instance, suppose you were starting with the following code snippet inside Visual Studio:

private long Add(int d, int d2) { return (long) d + d2; }
Figure 12 HTML Formatted Code  
After you select Copy As HTML and configure the HTML to include line numbers, this code will look like Figure 12 in the browser. Anything that makes it easier to share and understand code benefits all of us as it means more people will go to the trouble of sharing knowledge and learning from each other.
CopySourceAsHTML was written by Colin Coller and can be downloaded from copysourceashtml.codeplex.com/.

 

Cache Visualizer
Visual Studio 2005 includes a new debugging feature called visualizers, which can be used to create a human-readable view of data for use during the debugging process. Visual Studio 2005 includes a number of debugger visualizers by default, most notably the DataSet visualizer, which provides a tabular interface to view and edit the data inside a DataSet. While the default visualizers are very valuable, perhaps the best part of this new interface is that it is completely extensible. With just a little bit of work you can write your own visualizers to make debugging that much easier.
While a lot of users will write visualizers for their own custom complex types, some developers are already posting visualizers for parts of the Framework. I am going to look at one of the community-built visualizers that is already available and how it can be used to make debugging much easier.
The ASP.NET Cache represents a collection of objects that are being stored for later use. Each object has some settings wrapped around it, such as how long it will be cached for or any cache dependencies. There is no easy way while debugging to get an idea of what is in the cache, how long it will be there, or what it is watching. Brett Johnson saw that gap and wrote Cache Visualizer to examine the ASP.NET cache.
Once you have downloaded and installed the visualizer you will see a new icon appear next to the cache object in your debug windows, as shown in Figure 13.

Figure 13 Selecting Cache Visualizer While Debugging 
When you click on the magnifying glass to use the Cache Visualizer, a dialog box appears that includes information about all of the objects currently stored in the ASP.NET cache, as you can see in Figure 14.

Figure 14 Cache Visualizer Shows Objects in the ASP.NET Cache 
Under Public Cache Entries, you can see the entries that I have added to the cache. The entries under Private Cache Entries are ones added by ASP.NET. Note that you can see the expiration information as well as the file dependency for the cache entry.
The Cache Visualizer is a great tool when you are working with ASP.NET. It is also representative of some of the great community-developed visualizers we will see in the future.

 

Wrapping It Up
While this article has been dedicated to freely available add-ins, there are also a host of add-ins that can be purchased for a reasonable price. I encourage you to check out these other options, as in some cases they can add some tremendous functionality to the IDE. This article has been a quick tour of some of the best freely available add-ins for Visual Studio. Each of these add-ins may only do a small thing, but together they help to increase your productivity and enable you to write better code.

NEW “Filter My Lists” Web Part now available + FREE Metro UI Master Page when ordering

“Filter My Lists” Web Part

Saves you time with optimal performance

Find what you are looking for with a few clicks, even in cluttered sites and lists with masses of items and documents.

Find exactly what you need and stop wasting your time browsing SharePoint.
Filter the content of multiple lists and libraries in a single step.

Combine search and metadata filters

In a single panel combine item, document and attachment searches with metadata keyword searches and managed metadata filters.

Select multiple filter values from drop-down lists or alternatively use the keyword search of metadata fields with the help of wildcard characters and logical operators.

Export filtered views to Excel

Export filtered views and data to Excel. A print view enables you to print your results in a clear printable format with a single click.

Keep views clear and concise

Provides a complete set of filters without cluttering list views and keeps your list views clear, concise and speedy. Enables you to filter SharePoint using columns which aren’t visible in list views.

Refine filters and save them for future use, whether private, to share with others or to use as default filters.

FREE Metro Style UI Master Page

 


Modern UI Master Page and Styles for SharePoint 2010.

This will give the Metro/Modern UI styling of SharePoint 2013 to your SharePoint 2010 team sites.

Features include:
– Quick launch styling
– Global navigation and drop-down styling
– Search box styling and layout change
– Web part header styling
– Segoe UI font

Architecture and Practical Application – BizTalk Adapter for mySAP Business Suite

Architecture for BizTalk Adapter for mySAP Business Suite


The Microsoft BizTalk Adapter for mySAP Business Suite implements a Windows Communication Foundation (WCF) custom binding, which contains a single custom transport binding element that enables communication with an SAP system.


The SAP adapter is wrapped by the Microsoft Windows Communication Foundation (WCF) Line of Business (LOB) Adapter SDK runtime and is exposed to applications through the WCF channel architecture. The SAP adapter communicates with the SAP system through either the 64-bit or 32-bit version of the SAP Unicode RFC SDK (librfc32u.dll).

The following figure shows the end-to-end architecture for solutions that are developed by using the SAP adapter.
SAP End-to-End Architecture
Consuming the Adapter

The SAP adapter exposes the SAP system as a WCF service to client applications. Client applications exchange SOAP messages with the SAP adapter through WCF channels to perform operations and to access data on the SAP system. The preceding figure and the following list show the ways in which the SAP adapter can be consumed.

• Through a WCF channel application that performs operations on the SAP system by using the WCF channel model to exchange SOAP messages directly with the SAP adapter. For more information about developing solutions for the SAP adapter by using WCF channel model programming, see Developing Applications by Using the WCF Channel Model.

• Through a WCF service model application that calls methods on a WCF client to perform operations on the SAP system. A WCF client models the operations exposed by the SAP adapter as .NET methods. You can use the Microsoft Windows Communication Foundation (WCF) Line of Business (LOB) Adapter SDK or the svcutil.exe tool to create a WCF client class from metadata exposed by the SAP adapter. For more information about WCF service model programming and the SAP adapter, see Developing Applications by Using the WCF Service Model.

• Through a BizTalk port that is configured to use the BizTalk WCF-Custom adapter with the SAP Binding configured as the binding for the WCF-Custom transport type in a BizTalk Server application. The BizTalk WCF-Custom adapter enables communication between a BizTalk Server application and a WCF service.

The BizTalk WCF-Custom adapter supports custom WCF bindings through its WCF-Custom transport type, which enables you to configure any WCF binding exposed to the configuration system as the binding used by the BizTalk WCF-Custom adapter. For more information about how to use the SAP adapter in BizTalk Server solutions, see Developing BizTalk Applications. BizTalk transactions are supported by the BizTalk Layered Channel binding element which can be loaded by setting a binding property on the SAP Binding.

• Through an IIS-hosted Web service. In this scenario, the SAP adapter is exposed through a WCF Service proxy, which is hosted in IIS by using one of the standard WCF HTTP bindings.

• Through the .NET Framework Data Provider for mySAP Business Suite. The Data Provider for SAP runs on top of the SAP adapter and provides an ADO.NET interface to an SAP system.

The SAP adapter and the SAP RFC library are always hosted in-process with the application or service that consumes the adapter.

The SAP Adapter and WCF

WCF presents a programming model based on the exchange of SOAP messages over channels between clients and services. These messages are sent between endpoints exposed by a communicating client and service.

An endpoint consists of an endpoint address which specifies the location at which messages are received, a binding which specifies the communication protocols used to exchange messages, and a contract which specifies the operations and data types exposed by the endpoint.

A binding consists of one or more binding elements that stack on top of each other to define how messages are exchanged with the endpoint.

 

At a minimum, a binding must specify the transport and encoding used to exchange messages with the endpoint. Message exchange between endpoints occurs over a channel stack that is composed of one or more channels. Each channel is a concrete implementation of one of the binding elements in the binding configured for the endpoint.

For more information about WCF and the WCF programming model, see “Windows Communication Foundation” at http://go.microsoft.com/fwlink/?LinkId=89726.

The Microsoft BizTalk Adapter for mySAP Business Suite exposes a WCF custom binding, the SAP Binding (Microsoft.Adapters.SAP.SAPBinding). By default, this binding contains a single custom transport binding element, the SAP Adapter Binding Element (Microsoft.Adapters.SAP.SAPAdapter), which enables operations on an SAP system. When using the SAP adapter with BizTalk Server, you can set the EnableBizTalkCompatibilityMode binding property to load a custom binding element, the BizTalk Layered Channel Binding Element, on top of the SAP Adapter Binding Element. The BizTalk Layered Channel Binding Element is implemented internally by the SAP adapter and is not exposed outside the SAP Binding.
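As a minimal sketch (assuming a project reference to the Microsoft.Adapters.SAP assembly), creating the SAP Binding and loading the BizTalk Layered Channel Binding Element looks roughly like this in code:

// Assumes: using Microsoft.Adapters.SAP;
SAPBinding sapBinding = new SAPBinding();

// Setting this binding property loads the BizTalk Layered Channel Binding Element
// on top of the SAP Adapter Binding Element (relevant only in BizTalk scenarios).
sapBinding.EnableBizTalkCompatibilityMode = true;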

Microsoft.Adapters.SAP.SAPBinding (the SAP Binding) and Microsoft.Adapters.SAP.SAPAdapter (the SAP Adapter Binding Element) are public classes and are also exposed to the configuration system. Because the SAP Adapter Binding Element is exposed publicly, you can build your own custom WCF bindings capable of extending the functionality of the SAP adapter. For example, you could implement a custom binding to support Enterprise Single Sign-On (SSO) in a WCF channel or a WCF service model programming solution, to aggregate database operations into a single multifunction operation, or to perform schema transformation between operations implemented by a custom application and operations on the SAP system.
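Because the SAP Adapter Binding Element is public, composing it into your own binding is a matter of stacking binding elements. The following fragment is a hedged sketch only: MySsoBindingElement stands in for whatever custom binding element you would implement yourself (for SSO, operation aggregation, or schema transformation), while CustomBinding and BindingElement come from System.ServiceModel.Channels and SAPAdapter from Microsoft.Adapters.SAP.

// Hedged sketch: stack a hypothetical custom binding element on top of the
// SAP Adapter Binding Element to form a composite binding.
// Assumes: using System.ServiceModel.Channels; using Microsoft.Adapters.SAP;
BindingElement customElement = new MySsoBindingElement();   // hypothetical custom element
CustomBinding compositeBinding = new CustomBinding(
    customElement,        // custom behavior layered above the transport
    new SAPAdapter());    // the SAP transport binding element goes last in the stack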

The SAP adapter is built on top of the Microsoft Windows Communication Foundation (WCF) Line of Business (LOB) Adapter SDK and runs on top of the WCF LOB Adapter SDK runtime. The WCF LOB Adapter SDK provides a software framework and tooling infrastructure that the SAP adapter leverages to provide a rich set of features to users and adapter clients.

The Connection to the SAP System

The SAP adapter connects with the SAP system through the SAP Unicode RFC SDK Library (librfc32u.dll). The SAP adapter supports both the 32 bit and the 64 bit versions of the SAP RFC SDK. The SAP RFC SDK enables external programs to call ABAP functions on a SAP system.

You establish a connection to an SAP system by providing a connection URI to the SAP adapter. The SAP adapter supports the following kinds of connections to an SAP system (indicative examples of the corresponding connection URIs follow the list):
• An application host–based connection (A), in which the SAP adapter connects directly to an SAP application server.

• A load balancing connection (B), in which the SAP adapter connects to an SAP messaging server.

• A destination-based connection (D), in which the connection to the SAP system is specified by a destination in the saprfc.ini configuration file. A, B, and R type connections are supported.

• A listener connection (R), in which the adapter receives RFCs, tRFC and IDOCs through an RFC Destination on the SAP system that is specified by a listener host, a listener gateway service, and a listener program ID, either directly in the connection URI or by an R-based destination in the saprfc.ini configuration file.
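For orientation only, the connection URIs for these connection types take roughly the following shapes. This is a hedged sketch, not the authoritative syntax: the client and language values and the placeholders in angle brackets are assumptions, and the adapter documentation describes the full set of URI parameters.

sap://Client=800;lang=EN@a/<AppServerHost>/<SystemNumber>        (application host–based, A)
sap://Client=800;lang=EN@b/<MessageServerHost>/<R3SystemName>/<GroupName>        (load balancing, B)
sap://Client=800;lang=EN@d/<DestinationName>        (destination-based, D; destination defined in saprfc.ini)
sap://Client=800;lang=EN@r/<RfcDestination>?ListenerGwServ=<GatewayService>&ListenerGwHost=<GatewayHost>&ListenerProgramId=<ProgramId>        (listener, R)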


So – How Do I Use a Custom Web Part?


This section provides information about using a custom Web Part with Microsoft Office SharePoint Server. To use a custom Web Part, you must do the following:

1. Create a custom Web Part
2. Deploy the custom Web Part to a SharePoint portal
3. Configure the SharePoint portal to use the custom Web Part

Before You Begin

Before you create a custom Web Part:
• Publish the SAP artifacts as a WCF service. For more information, see Step 1: Publish the SAP Artifacts as a WCF Service in Tutorial 1: Presenting Data from an SAP System on a SharePoint Site.

• Create an application definition file for the SAP artifacts using the Business Data Catalog in Microsoft Office SharePoint Server. For more information, see Step 2: Create an Application Definition File for the SAP Artifacts in Tutorial 1: Presenting Data from an SAP System on a SharePoint Site.

Step 1: Create a custom Web Part

To create a custom Web Part using Visual Studio, do the following:
1. Start Visual Studio, and then create a project. In the New Project dialog box, in the Project types pane, select Visual C#. In the Templates pane, select Class Library.

2. Specify a name and location for the solution. For this topic, specify CustomWebPart in the Name and Solution Name boxes. Specify a location, and then click OK.

3. Add a reference to the System.Web component to the project. Right-click the project name in Solution Explorer, and then click Add Reference. In the Add Reference dialog box, on the .NET tab, select System.Web, and then click OK. The System.Web component contains the required System.Web.UI.WebControls.WebParts namespace.

4. Add the code required for your scenario to the project; a minimal skeleton is sketched after this list. For the code sample that is relevant to a particular issue, see “Issues Involving Custom Web Parts” in Considerations While Using the SAP Adapter with Microsoft Office SharePoint Server.

5. Build the project. On a successful build, a .dll file, CustomWebPart.dll, is generated in the /bin/Debug folder.

6. Only for a 64-bit computer: Sign the CustomWebPart.dll file with a strong name before performing the following steps. Otherwise, you will not be able to import, and hence use, CustomWebPart.dll in the SharePoint portal in “Step 3: Configure the SharePoint portal to use the custom Web Part.” For information about how to sign an assembly with a strong name, see http://go.microsoft.com/fwlink/?LinkId=197171.
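As a point of reference, a bare-bones custom Web Part class has the following shape. This is a minimal illustrative sketch only (the class name SampleWebPart and its output are placeholders); use the issue-specific code sample referenced above for real scenarios.

using System.Web.UI;
using System.Web.UI.WebControls.WebParts;

namespace CustomWebPart
{
    // Minimal illustrative Web Part; replace the rendering logic with the
    // code that addresses your specific issue.
    public class SampleWebPart : WebPart
    {
        protected override void CreateChildControls()
        {
            // Placeholder output so the Web Part renders something visible.
            Controls.Add(new LiteralControl("SampleWebPart placeholder output"));
        }
    }
}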

Step 2: Deploy the custom Web Part to a SharePoint Portal

    You must do the following to make the CustomWebPart.dll file (custom Web Part) that is created in “Step 1: Create a custom Web Part” of this topic usable on the SharePoint portal:
• Copy the CustomWebPart.dll file to the bin folder of the SharePoint portal: Microsoft Office SharePoint Server creates portals under the :\Inetpub\wwwroot\wss\VirtualDirectories folder. A folder is created for each portal and can be identified by its port number. You must copy the CustomWebPart.dll file created in “Step 1: Create a custom Web Part” of this topic to the :\Inetpub\wwwroot\wss\VirtualDirectories\<port number>\bin folder. For example, if the port number of your SharePoint portal is 13614, you must copy the CustomWebPart.dll file to the :\Inetpub\wwwroot\wss\VirtualDirectories\13614\bin folder.

Tip

    Another way to find the folder location of your SharePoint portal is by using the Internet Information Services (IIS) Manager window (Start > Run > inetmgr). Locate your SharePoint portal in the Internet Information Services (IIS) Manager window ([computer_name] > Web Sites > [Portal-Name]), right-click, and then click Properties in the shortcut menu. In the properties dialog box of the SharePoint portal, click the Home Directory tab, and then select the Local path box.

• Add the Safe Control Entry in the web.config File: Because the CustomWebPart.dll file will be used on different computers and by multiple users, you must declare the file as “safe.” To do so, open the web.config file located in the SharePoint portal folder at :\Inetpub\wwwroot\wss\VirtualDirectories. Under the <SafeControls> section of the web.config file, add a safe control entry for the assembly. The entry differs between a 32-bit computer and a 64-bit computer, because on a 64-bit computer it must reference the strong-named CustomWebPart.dll.
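The entries themselves are not reproduced here. As an illustration only, a SafeControl entry generally has the following shape; the Version, PublicKeyToken, and Namespace values are placeholders and must match your own assembly (on a 64-bit computer, use the PublicKeyToken of the strong-named assembly):

<SafeControl Assembly="CustomWebPart, Version=1.0.0.0, Culture=neutral, PublicKeyToken=<your public key token>"
             Namespace="CustomWebPart" TypeName="*" Safe="True" />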

     

    Save the web.config file, and then close it.

    Step 3: Configure the SharePoint portal to use the custom Web Part

    You need to add the custom Web Part to the Microsoft Office SharePoint Server Web Part Gallery, so that you can use it on your SharePoint portal. To do so:

1. Start SharePoint 3.0 Central Administration. Click Start, point to All Programs, point to Microsoft Office Server, and then click SharePoint 3.0 Central Administration.

2. In the left navigation pane, click the name of the Shared Services Provider (SSP) to which you want to add the custom Web Part.

3. On the Shared Services Administration page, in the upper-right corner, click Site Actions, and then click Create.

4. On the Site Settings page, under the Galleries column, click Web Parts.

5. On the Web Part Gallery page, click New to add the custom Web Part to the gallery. At this point the custom Web Part is not yet listed on the Web Part Gallery page.

6. On the New Web Parts page, locate CustomWebPart (the name of the custom Web Part) in the list, select the check box on the left, and then click Populate Gallery at the top of the page. This adds the CustomWebPart entry to the Web Part Gallery page.

7. You can now use the custom Web Part (CustomWebPart) to create Web Parts in your SharePoint portal. The custom Web Part (CustomWebPart) will appear under the Miscellaneous section on the Add Web Parts page.

     


    BizTalk Adapter for mySAP Business Suite and the WCF LOB Adapter SDK


    The Microsoft BizTalk Adapter for mySAP Business Suite implements a set of core components that leverage functionality provided by the Microsoft Windows Communication Foundation (WCF) Line of Business (LOB) Adapter SDK and provide connectivity to the SAP system through the SAP Unicode RFC SDK Library (librfc32u.dll).

    The WCF LOB Adapter SDK serves as the software layer through which the SAP adapter interfaces with Windows Communication Foundation (WCF), and the RFC SDK serves as the layer through which the SAP adapter interfaces with the SAP system.

    The following figure shows the relationships between the internal components of the SAP adapter and between these components and the RFC SDK.

    The relationship of internal adapter components


     

    SAP Weekend : Part 1 – ERPConnect Services for SharePoint 2010 (ECS)

This weekend was spent completing my new “List Search Web Part” and also two free Web Parts that are included in the “List Web Part Pack” – more about this in a future blog post.


In between, the “SAP bug” bit me again, and I decided to write a blog post series on the various adapters I have used in SharePoint and SAP integration projects, giving you a basic run-down of how, and with which technologies, each adapter connects the two systems.

ERPConnect was first on the list…

     

    Yes, I can hear the grumblings of those of us who have worked with SAP and SharePoint  Integration and the ERPConnect adapter before 🙂

For starters, you need an SAP developer key to be allowed to use the SAP web service wizard, as well as the required SAP authorizations. In other cases, IT operations may not allow any modification to the SAP environment, even if it is limited to the fully automatic generation and activation of the BAPI web service(s).

Another reason, from a system architecture viewpoint, is that individual BAPI and/or RFC calls may be of too low a granularity. You actually want to perform a ‘business transaction’, consisting of multiple method invocations that must be treated as one Logical Unit of Work (LUW). SAP has introduced the concept of SAP Enterprise Services for this and has delivered a first set of them; that set is far from complete, and SAP will augment it over the coming years.

SharePoint 2010 provides developers with the capability to integrate external data sources, such as SAP business data, into the SharePoint system via Business Connectivity Services (BCS). The concept of BCS is based on entities and associated stereotyped operations, which suits flat, simply structured data sets like SAP tables very well.

Another, far more flexible, option for using SAP data in SharePoint is ERPConnect Services for SharePoint 2010 (ECS). The product suite consists of three components: the ERPConnect Services runtime, the BCS Connector application, and Xtract PPS for PerformancePoint Services.

The runtime provides a Service Application that integrates with the new service architecture of SharePoint 2010. It offers a secure middle-tier layer for integrating different kinds of SAP objects, such as tables and function modules, into your SharePoint applications.

The BCS Connector application allows developers to create BDC models for BCS without programming knowledge. You may export the BDC models created by the BCS Connector to Visual Studio 2010 for further customization.

     

The Xtract PPS component offers an SAP data source provider for the PerformancePoint Services of SharePoint 2010.

This article gives you an overview of the ERPConnect Services runtime and shows how you can create and incorporate business data from SAP in different SharePoint application types, such as Web Parts, Application Pages, or Silverlight modules.

    This article does not introduce the other components.

    Background

This section gives you a short explanation and background of the SAP objects that can be used in ERPConnect Services. The most important objects are SAP tables and function modules. A function module is basically similar to a normal procedure in conventional programming languages. Function modules are written in ABAP, the SAP programming language, and are accessible from any other program within an SAP system. They accept import and export parameters as well as other kinds of special parameters.

     

In addition, BAPIs (Business APIs) are special function modules that are organized within the SAP Business Object Repository. In order to use function modules with the runtime, they must be marked as remote-enabled (RFC). SAP table data can also be retrieved; tables in SAP are basically relational database tables. Other SAP objects, like BW cubes or SAP Queries, can be accessed via the XtractQL query language (see below).

     

    Installation & Configuration

Installing ERPConnect Services on a SharePoint 2010 server is done by an installer and is straightforward. The SharePoint Administration Service must be running on the local server (see Windows Services); for more information, see the product documentation.

After the installation has completed successfully, navigate to the Service Applications screen within the Central Administration (CA) of SharePoint:

     

Before creating your first Service Application, a Secure Store must be created, in which ERPConnect Services will save the SAP user credentials. On the settings page for the “Secure Store Service”, create a new Target Application and name the application “ERPConnect Services”. Click the “Next” button to define the store fields as follows:

     

Finish the creation process by clicking “Next” and defining the application administrators. Then select the application, click “Set Credentials”, and enter the SAP user credentials:

     

    Let’s go on and create a new ERPConnect Service Application!

    Click the “ERPConnect Service Application” link in the “New” menu of the Service Applications page (see also first screenshot above). This opens a dialog to define the name of the service application, the SAP connection data and the IIS application pool:

     

    Click “Create” after entering all data and you will see the following entries in the Service Applications screen:

     

    That’s it! You are now done setting up your first ERPConnect Service Application.

    Development

    The runtime functionality covers different programming demands such as generically retrievable interface functions. The service applications are managed by the Central Administration of SharePoint. The following service and function areas are provided:

    1. Executing and retrieving data directly from SAP tables
    2. Executing SAP function modules / BAPIs
    3. Executing XtractQL query statements

The next sections show how to use these service and function areas and how to access different SAP objects from within your custom SharePoint applications using ERPConnect Services. The runtime can be used in applications within the SharePoint context, such as Web Parts or Application Pages.

In order to do so, you need to reference the ERPConnectServices.Server.Common.dll assembly in the project. Before you can access data from the SAP system, you must create an instance of the ERPConnectServiceClient class. This is the gateway to all SAP objects and to the generic API of the runtime overall. In the SharePoint context there are two options for creating a client object instance:

    // Option #1
    ERPConnectServiceClient client = new ERPConnectServiceClient();
    
    // Option #2
    ERPConnectServiceApplicationProxy proxy = SPServiceContext.Current.GetDefaultProxy(
       typeof(ERPConnectServiceApplicationProxy)) as ERPConnectServiceApplicationProxy;
    ERPConnectServiceClient client = proxy.GetClient();

    For more details on using ECS in Silverlight or desktop applications see the specific sections below.

    Querying Tables

Querying and retrieving table data is a common task for developers. The runtime allows retrieving data directly from SAP tables. The ERPConnectServiceClient class provides a method called ExecuteTableQuery, with two overloads, which queries SAP tables in a simple way.

The method also supports passing miscellaneous parameters such as a row count and skip value, a custom function, a where-clause definition, and a list of fields to return. These parameters can be defined by using an ExecuteTableQuerySettings instance.

    DataTable dt = client.ExecuteTableQuery("T001");
    
    …
        
    ExecuteTableQuerySettings settings = new ExecuteTableQuerySettings {
      RowCount = 100,
      WhereClause = "ORT01 = 'Paris' AND LAND1 = 'FR'",
      Fields = new ERPCollection<string> { "BUKRS", "BUTXT", "ORT01", "LAND1" }
    };
    
    DataTable dt = client.ExecuteTableQuery("T001", settings);
    
    …
    
    // Sample 2
    DataTable dt = client.ExecuteTableQuery("MAKT",
                new ExecuteTableQuerySettings {
                    RowCount = 10,
                    WhereClause = "MATNR = '60-100C'",
                    OrderClause = "SPRAS DESC"
                });

The first query with settings reads records from the SAP table T001 where the field ORT01 equals Paris and the field LAND1 equals FR (France). The query returns the top 100 records, and the result set contains only the fields BUKRS, BUTXT, ORT01, and LAND1.

The second query returns the top ten records of the SAP table MAKT where the field MATNR equals the material number 60-100C. The result set is ordered by the field SPRAS in descending order.

    Executing Function Modules

In addition to querying SAP tables, the runtime API executes SAP function modules (BAPIs). Function modules must be marked as remote-enabled (RFC) within SAP.

The ERPConnectServiceClient class provides a method called CreateFunction to create a metadata structure for the function module. The method returns an instance of the ERPFunction data structure. This object instance contains all parameter types (import, export, changing, and tables) that can be used with function modules.

In the sample below we call the function SD_RFC_CUSTOMER_GET and pass a name pattern (T*) for the export parameter named NAME1. Then we call the Execute method on the ERPFunction instance. Once the method has been executed, the data structure is updated. The function returns all customers in the table CUSTOMER_T.

    ERPFunction function = client.CreateFunction("SD_RFC_CUSTOMER_GET");
    function.Exports["NAME1"].ParamValue = "T*";
    function.Execute();
    
    foreach(ERPStructure row in function.Tables["CUSTOMER_T"])
      Console.WriteLine(row["NAME1"] + ", " + row["ORT01"]);

The following code shows an additional sample. Before we can execute this function module, we need to define a table with HR data as the input parameter.

Which parameters you need, and what values the function module returns, depends on the implementation of the function module.

    ERPFunction function = client.CreateFunction("BAPI_CATIMESHEETMGR_INSERT");
    function.Exports["PROFILE"].ParamValue = "TEST";
    function.Exports["TESTRUN"].ParamValue = "X";
    
    ERPTable records = function.Tables["CATSRECORDS_IN"];
    ERPStructure r1 = records.AddRow();
    r1["EMPLOYEENUMBER"] = "100096";
    r1["WORKDATE"] = "20110704";
    r1["ABS_ATT_TYPE"] = "0001";
    r1["CATSHOURS"] = (decimal)8.0;
    r1["UNIT"] = "H";
    
    function.Execute();
    
    ERPTable ret = function.Tables["RETURN"]; 
    
    foreach(var i in ret)
      Console.WriteLine("{0} - {1}", i["TYPE"], i["MESSAGE"]);

    Executing XtractQL Query Statements

The ECS runtime offers an SAP query language called XtractQL. The XtractQL query language, also known as XQL, consists of ABAP and SQL syntax elements. XtractQL allows querying SAP tables, BW cubes, and SAP Queries, and executing function modules.

It's possible to return metadata for the objects, and MDX statements can also be executed with XQL. All XQL queries return a data table object as the result set. When executing function modules, the caller must define the returning table (see the sample below – INTO @RETVAL). XQL is very useful in situations where you need to handle dynamic statements. The following examples show some typical XQL statements.

    SELECT TOP 5 * FROM T001W WHERE FABKL = 'US'

    This query selects the top 5 records of the SAP table T001W where the field FABKL equals the value US.

SELECT * FROM MARA WITH-OPTIONS(CUSTOMFUNCTIONNAME = 'Z_XTRACT_IS_TABLE')

This query selects all records of the SAP table MARA and, via the CUSTOMFUNCTIONNAME option, tells the runtime to read the data through the custom function module Z_XTRACT_IS_TABLE.

    SELECT MAKTX AS [ShortDesc], MANDT, SPRAS AS Language FROM MAKT

This query selects all records of the SAP table MAKT. The result set will contain three fields named ShortDesc, MANDT, and Language.

     

    EXECUTE FUNCTION 'SD_RFC_CUSTOMER_GET'
       EXPORTS KUNNR='0000003340'
       TABLES CUSTOMER_T INTO @RETVAL;

    This query executes the SAP function module SD_RFC_CUSTOMER_GET and returns as result the table CUSTOMER_T (defined as @RETVAL).

    DESCRIBE FUNCTION 'SD_RFC_CUSTOMER_GET' GET EXPORTS

This query returns metadata about the export parameters of the function module SD_RFC_CUSTOMER_GET.

SELECT TOP 30 LIPS-LFIMG, LIPS-MATNR, TEXT_LIKP_KUNNR AS CustomerID
   FROM QUERY 'S|ZTHEO02|ZLIKP'
   WHERE SP$00002 BT '0080011000' AND '0080011999'

This statement executes the SAP Query “S|ZTHEO02|ZLIKP” (the name includes the workspace, the user group, and the query name). As you can see, XtractQL extends the SQL syntax with ABAP- or SAP-specific syntax elements. This way you can define fields using the LIPS-MATNR format and SAP-like where clauses such as “SP$00002 BT '0080011000' AND '0080011999'”.

ERPConnect Services provides a little helper tool, the XtractQL Explorer (see screenshot below), to learn more about the query language and to test XQL queries. You can use this tool independently of SharePoint, but you need access to an SAP system.

To find out more about the XtractQL language syntax, see the product manual.

    Silverlight And Desktop Applications

So far, all samples use the assembly ERPConnectServices.Server.Common.dll as the project reference, and all code snippets shown run within the SharePoint context, for example in a Web Part.

ERPConnect Services also provides client libraries for Silverlight and desktop applications:

• ERPConnectServices.Client.dll for desktop applications
• ERPConnectServices.Client.Silverlight.dll for Silverlight applications

Add the reference that matches the type of project you are implementing.

In Silverlight the implementation and design pattern are a little more complicated, since all web services are called asynchronously. It's also not possible to use the DataTable class; it is simply not implemented for Silverlight.

The runtime provides a similar class called ERPDataTable, which the API uses in these cases. The ERPConnectServiceClient class for Silverlight provides the ExecuteTableQueryAsync method and an ExecuteTableQueryCompleted event as the callback delegate.

    public event EventHandler<ExecuteTableQueryCompletedEventArgs> ExecuteTableQueryCompleted;
    
    public void ExecuteTableQueryAsync(string tableName)
    public void ExecuteTableQueryAsync(string tableName, ExecuteTableQuerySettings settings)

    The following code sample shows a simple query of the SAP table T001 within a Silverlight client.

First of all, an instance of the ERPConnectServiceClient is created using the URI of the ERPConnectService.svc, and then a delegate is attached to handle the completion callback. Next, the query is executed with a RowCount setting that limits the number of records returned in the result set.

    Once the result is returned the data set will be attached to a DataGrid control (see screenshot below) within the callback method.

     

    void OnGetTableDataButtonClick(object sender, RoutedEventArgs e)
    {
      ERPConnectServiceClient client = new ERPConnectServiceClient(
        new Uri("http://<SERVERNAME>/_vti_bin/ERPConnectService.svc"));
    
      client.ExecuteTableQueryCompleted += OnExecuteTableQueryCompleted;
      client.ExecuteTableQueryAsync("T001", 
        new ExecuteTableQuerySettings { RowCount = 150 });
    }
    
    void OnExecuteTableQueryCompleted(object sender, ExecuteTableQueryCompletedEventArgs e)
    {
      if(e.Error != null)
        MessageBox.Show(e.Error.Message);
      else
      {
        e.Table.View.GroupDescriptions.Add(new PropertyGroupDescription("ORT01"));
        TableGrid.ItemsSource = e.Table.View;
      }
    }

    The screenshot below shows the XAML of the Silverlight page:

    The final result can be seen below:

    ECS Designer

ERPConnect Services includes a Visual Studio 2010 plug-in, the ECS Designer, that allows developers to visually design SAP interfaces. It works similarly to the LINQ to SAP designer I wrote about a while ago; see the article at CodeProject: LINQ to SAP.

The ECS Designer is not automatically installed when you install the product; you need to run its installation program manually. The setup adds a new project item type with the file extension .ecs to Visual Studio 2010 and links it with the designer. The needed references are added automatically after you add an ECS project item.

    The designer generates source code to integrate with the ERPConnect Services runtime after the project item is saved. The generated context class contains methods and sub-classes that represent the defined SAP objects (see screenshots below).

     

Before you access the SAP system for the first time, you will be asked to enter the connection data. You may also load the connection data from the SharePoint system. The designer GUI is shown in the screenshots below:

     

The screenshot above, for instance, shows the tables dialog. After you click the Add (+) button in the main designer screen and search for an SAP table in the search dialog, the designer opens the tables dialog.

     

    In this dialog you can change the name of the generated class, the class modifier and all needed properties (fields) the final class should contain.

     

    To preview your selection press the Preview button. The next screenshot shows the automatically generated classes in the file named EC1.Designer.cs:

     

Using the generated code is simple. The project type we are using for this sample is a standard console application, so the designer references ERPConnectServices.Client.dll for desktop applications.

    Since we are not within the SharePoint context, we have to define the URI of the SharePoint system by passing this value into the constructor of the ERPConnectServicesContext class.

For the table MAKT, the designer has generated a class named MAKT and an access property named MAKTList on the context class. The type of the MAKTList property is ERPTableQuery<MAKT>, which is a LINQ-queryable data type.

     

This means you can use LINQ statements to define the underlying query. Internally, the ERPTableQuery<T> type translates your LINQ query into a call to ExecuteTableQuery, as in the sketch below.
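The following fragment is a hedged sketch of such a query. The field properties MATNR, SPRAS, and MAKTX are assumed to have been selected in the designer, and the SharePoint URL is a placeholder:

// Hedged sketch: query the generated MAKTList property with LINQ.
// ERPTableQuery<MAKT> translates the LINQ query into an ExecuteTableQuery call.
ERPConnectServicesContext context =
    new ERPConnectServicesContext("http://<SERVERNAME>");

var shortTexts = from m in context.MAKTList
                 where m.MATNR == "60-100C" && m.SPRAS == "EN"
                 select m;

foreach (MAKT makt in shortTexts)
    Console.WriteLine("{0} - {1}", makt.MATNR, makt.MAKTX);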

     

    That’s it!

     

    Advanced Techniques

    There are situations when you have to use the exact same SAP connection while calling a series of function modules in order to receive the correct result. Let’s take the following code:

    ERPConnectServiceClient client = new ERPConnectServiceClient();
    
    using(client.BeginConnectionScope())
    {
      ERPFunction f = client.CreateFunction("BAPI_GOODSMVT_CREATE");
    
      ERPStructure s = f.Exports["GOODSMVT_HEADER"].ToStructure();
      s["PSTNG_DATE"] = "20110609"; // Posting Date in the Document
      s["PR_UNAME"] = "BAEURLE";    // UserName
      s["HEADER_TXT"] = "XXX";      // HeaderText
      s["DOC_DATE"] = "20110609";   // Document Date in Document
    
      f.Exports["GOODSMVT_CODE"].ToStructure()["GM_CODE"] = "01";
    
      ERPStructure r = f.Tables["GOODSMVT_ITEM"].AddRow();
      r["PLANT"] = "1000";          // Plant
      r["PO_NUMBER"] = "4500017210"; // Purchase Order Number
      r["PO_ITEM"] = "010";      // Item Number of Purchasing Document 
      r["ENTRY_QNT"] = 1;          // Quantity in Unit of Entry
      r["MOVE_TYPE"] = "101";        // Movement Type
      r["MVT_IND"] = "B";            // Movement Indicator
      r["STGE_LOC"] = "0001";        // Storage Location
    
      f.Execute();
    
      string matDocument = f.Imports["MATERIALDOCUMENT"].ParamValue as string;
      string matDocumentYear = f.Imports["MATDOCUMENTYEAR"].ParamValue as string;
    
      ERPTable ret = f.Tables["RETURN"]; //.ToADOTable();
    
      foreach(var i in ret)
        Console.WriteLine("{0} - {1}", i["TYPE"], i["MESSAGE"]);
    
      ERPFunction fCommit = client.CreateFunction("BAPI_TRANSACTION_COMMIT");
      fCommit.Exports["WAIT"].ParamValue = "X";
      fCommit.Execute();
    }

In this sample we create a goods receipt for a goods movement with BAPI_GOODSMVT_CREATE. The final call to BAPI_TRANSACTION_COMMIT will only work if the system under the hood is using the same connection object.

     

The runtime does not provide direct access to the underlying SAP connection, but the library offers a mechanism called connection scoping. You can create a new connection scope with the client library, telling ECS to use the same SAP connection until you close the connection scope. Within the connection scope, every library call uses the same SAP connection.

    In order to create a new connection scope you need to call the BeginConnectionScope method of the class ERPConnectServiceClient.

    The method returns an IDisposable object, which can be used in conjunction with the using statement of C# to end the connection scope.

Alternatively, you may call the EndConnectionScope method explicitly, as in the sketch below.
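A minimal sketch of the explicit form, equivalent to the using block shown above:

// Explicitly opening and closing the connection scope instead of using a using block.
client.BeginConnectionScope();
try
{
    // Call the related function modules here; they all share the same SAP connection.
}
finally
{
    client.EndConnectionScope();
}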

It's also possible to use function modules with nested structures as parameters; this is a special SAP construct. The goods receipt sample above uses a nested structure for the export parameter GOODSMVT_CODE. For more detailed information about nested structures and tables, see the product documentation.

    Developing a Real Outlook Social Connector

This section contains a set of four Visual How Tos that show how to develop a real provider for the Microsoft Outlook Social Connector (OSC) by using the OSC Provider Proxy Library.


    An OSC provider allows Outlook users to view, in the People Pane, an aggregation of social information updates that are applied on a professional or social network site. An OSC provider is a Component Object Model (COM) DLL. The OSC provider extensibility interfaces form the medium through which the OSC and an OSC provider communicate. OSC provider extensibility consists of a set of interfaces that is available as an open platform. These interfaces allow the OSC to access social network data in a way that is independent of the APIs of each social network. An OSC provider obtains social network data from the corresponding social network and, through implementing the extensibility interfaces, feeds that social network data to the OSC.

The OSC Provider Proxy Library simplifies the implementation of the OSC provider extensibility interfaces. Instead of a provider explicitly implementing the OSC provider extensibility interfaces, the proxy library implements them and routes the calls to a set of abstract and virtual methods in the proxy library.

A provider, in turn, overrides this set of abstract and virtual methods with business logic specific to the social network to return the social network data that the OSC requires.

    To show how a provider can use the OSC Provider Proxy Library, this set of Visual How Tos describes a real provider for OfficeTalk. OfficeTalk is a social network in a private corporate environment and is not publicly available.

    Nonetheless, it is a good example of the kind of social network that you might want to develop a custom OSC provider for. You can use the procedures for creating the OSC provider for OfficeTalk to create a custom OSC provider for any social network.

    Developing a Real Outlook Social Connector Provider by Using a Proxy Library

     

    Overview

    The Microsoft Outlook Social Connector (OSC) provides a communication hub for personal and professional communications. Just by selecting an Outlook item such as an email or meeting request and clicking the sender or a recipient of that item, users can see, in the People Pane, activities, photos, and status updates for the person on their favorite social networks.

    The OSC obtains social network data by calling an OSC provider, which behaves like a translation layer between Outlook and the social network. The OSC provider model is open, and you can develop a custom OSC provider by implementing the required OSC provider extensibility interfaces. To retrieve social network data, the OSC makes calls to the OSC provider through these interface members. The OSC provider communicates with the social network and returns the social network data to the OSC as a string or as XML that conforms to the Outlook Social Connector XML schema. Figure 1 shows the various components of the sample OfficeTalk OSC provider reviewed in this Visual How To.

    Figure 1. Relationships of the sample OfficeTalk OSC provider with related components

    Relationship of sample provider with components

    This Visual How To shows the procedures to create a custom OSC provider for OfficeTalk. OfficeTalk is not publicly available and is being used as an example of the kind of social network you might want to develop a custom OSC provider for. You can use the procedures for creating the OSC provider for OfficeTalk to create a custom OSC provider for any social network.

    The OfficeTalk provider uses the Outlook Social Connector Provider Proxy Library to simplify the implementation of the OSC provider extensibility interfaces. The OSC Provider Proxy Library implements all of the OSC provider extensibility interface members. These interface members, in turn, call a consolidated set of abstract and virtual methods that provide the social network data that the OSC requires. To create a custom OSC provider that uses the OSC Provider Proxy Library, a developer overrides these abstract and virtual methods with the business logic to communicate with the social network.

    Code It

    The sample solution for this article includes all of the code for a custom OSC provider for OfficeTalk. However, this Visual How To does not show all of the code in the sample solution. Instead, it focuses on creating a custom OSC provider by using the OSC Provider Proxy Library.

    The sample solution contains two projects:

    • OSCProvider—This project is an unmodified version of the OSC Provider Proxy Library that is used to simplify the creation of the OfficeTalk OSC provider.
    • OfficeTalkOSCProvider—This project includes the source code files that are specific to the OfficeTalk OSC provider.

    The OfficeTalkOSCProvider project includes the following source code files:

    • OfficeTalkHelper—This class contains helper methods that are used throughout the sample solution.
    • OTProvider—This is a partial class that contains the OSC Provider Proxy Library override methods that return information about the OSC provider, information about the social network, and information for the current user.
    • OTProvider_Activities—This is a partial class that contains the OSC Provider Proxy Library override methods that return activity information.
    • OTProvider_Friends—This is a partial class that contains the OSC Provider Proxy Library override methods that return friends information.

    Creating the OfficeTalk OSC Provider Solution

    The following sections show the procedures to create the OfficeTalk OSC provider sample solution, and add OSC Provider Proxy Library override methods to return information about the OSC provider, the social network, and the current user.

    You must create the OSC provider as a class library. For this Visual How To, the solution was created with a name of OfficeTalkOSCProvider.

    Adding the OSC Provider Proxy Library Project

    You must download the Outlook Social Connector Provider Proxy Library from MSDN Code Gallery, and then extract it to the local computer.

    To add the OSC Provider Proxy Library to the OfficeTalkOSCProvider solution

    1. Copy the OSCProvider project to the OfficeTalkOSCProvider directory.
    2. On the File menu in Visual Studio 2010, point to Add, and then click Existing Project.
3. Select the OSCProvider.csproj project that you copied in Step 1.

    Adding References

    Add the following references to the OfficeTalkOSCProvider:

    • Outlook Social Provider COM component. The name in the COM tab is Microsoft Outlook Social Provider Extensibility. If there are multiple versions, select TypeLib Version 1.1.
    • System.Drawing

    Adding Social Network Specific References and Files

    Add other appropriate references and files for the social network. The sample solution does not include the OfficeTalk API assembly. To support the social network for which you are developing an OSC provider, replace the OfficeTalk API references and files with the references and files that are specific to your social network.

    The sample solution for OfficeTalk contains the following references and files:

    • The OfficeTalk API assembly.
    • The OfficeTalk icon file.

    Creating a Subclass of the OSC Provider Proxy Library OSCProvider

    Use the OSC Provider Proxy Library to create a subclass of the OSCProvider class, OTProvider, which represents the sample OSC provider. Add a class named OTProvider to the OfficeTalkOSCProvider project. OTProvider is defined as a partial class so that logic for OSC provider core methods, friends, and activities can be defined in separate source code files.

    Replace the class definition with the code in the following section. The code example starts with the using statements for the OSC Provider Proxy Library and OfficeTalk API. The OTProvider partial class then inherits from the OSCProvider class. Note that the OTProvider class has the ComVisible attribute so that the Outlook Social Connector can call it.

    using System;
    using System.Globalization;
    using System.Collections.Generic;
    using System.IO;
    using System.Reflection;
    using System.Drawing;
    using System.Drawing.Imaging;
    
    // Using statements for the OSC Provider Proxy Library.
    using OSCProvider;
    using OSCProvider.Schema;
    
    // Using statements for the social network.
    using OfficeTalkAPI;
    
    namespace OfficeTalkOSCProvider
    {
        // SubClass of the OSC Provider Proxy Library OSCProvider
        // used to create a custom OSC provider.
        [System.Runtime.InteropServices.ComVisible(true)]
        public partial class OTProvider : OSCProvider.OSCProvider
        {
        ...
    
    

    After the OTProvider class is defined, add the following code for constants used throughout the OfficeTalkOSCProvider solution.

    // Constants for the OfficeTalk OSC provider.
    internal static string NETWORK_NAME = @"OfficeTalk";
    internal static string NETWORK_GUID = @"YourNetworkGuid";
    internal static string API_VERSION = @"YourApiVersion";
    internal static string API_URL = @"YourApiUrl";
    internal static OSCProvider.ProviderSchemaVersion SCHEMA_VERSION =
        ProviderSchemaVersion.v1_1;
    
    

    Allowing for Debugging

To debug the OfficeTalkOSCProvider, you must modify the OfficeTalkOSCProvider project to start Outlook, and register the OfficeTalkOSCProvider as an OSC provider.

    To set up the OfficeTalkOSCProvider project for debugging

    1. Right-click the OfficeTalkOSCProvider project, and then click Properties.
    2. Select the Debug tab.
    3. Under Start Action, select Start External Program.
    4. Specify the full path to the version of Outlook that is installed on your computer. The default path for 32-bit Outlook on 32-bit Windows is C:\Program Files\Microsoft Office\Office14\OUTLOOK.EXE.

The Outlook Social Connector will not call the OfficeTalkOSCProvider until it is registered as an OSC provider. The sample solution includes a file named RegisterProvider.reg that updates the registry with the entries that are required to register the OfficeTalkOSCProvider as an OSC provider. You can update the registry by opening the RegisterProvider.reg file in Windows Explorer.

    The RegisterProvider.reg file assumes that the sample solution is located in the C:\temp directory. If the sample solution is located in a different directory, update the CodeBase entry in the RegisterProvider.reg file to point to the correct location.

    Adding Helper Methods

    The OfficeTalkHelper class contains helper methods, including the GetOfficeTalkClient and ConvertUserToPerson methods, that are used throughout the sample solution.

    The following GetOfficeTalkClient method returns an OfficeTalkClient object that is used to communicate with OfficeTalk. If the OfficeTalkClient has not been initialized, GetOfficeTalkClient creates and configures a new OfficeTalkClient by using the API_URL and API_VERSION constants that are defined in OTProvider.

    // Returns a reference to the OfficeTalk client.
    private static OfficeTalkClient officeTalkClient = null;
    internal static OfficeTalkClient GetOfficeTalkClient()
    {
        if (officeTalkClient == null)
        {
            officeTalkClient =
              new OfficeTalkClient(OTProvider.API_URL);
            OfficeTalkClient.UserAgent =
              @"OfficeTalkOSC/" + OTProvider.API_VERSION;
        }
        return officeTalkClient;
    }
    
    

    The ConvertUserToPerson method converts an OfficeTalk User object to an OSC Provider Proxy Library Person object that is usable within the OSC Provider Proxy Library. The ConvertUserToPerson method creates a new OSC Provider Proxy Library Person and then maps the User properties to the related Person properties.

    // Converts an Office Talk User to an OSC Provider Proxy Library Person.
    internal static Person ConvertUserToPerson(OfficeTalkAPI.OTUser user)
    {
        // Create the OSC Provider Proxy Library Person.
        Person person = new Person();
    
        // Map the User properties to the Person properties.
        person.FullName = user.name;
        person.Email = user.email;
        person.Company = user.department;
        person.UserID = user.id.ToString(CultureInfo.InvariantCulture);
        person.Title = user.title;
        person.CreationTime = user.created_atAsDateTime;
    
        // FriendStatus is based on whether the user is being followed 
        // by the currently logged-on user.
        person.FriendStatus = 
            user.following ? FriendStatus.friend : FriendStatus.notfriend;
    
        // Set the PictureUrl if a profile picture is loaded in OfficeTalk.
        if (user.image_url != null)
        {
            person.PictureUrl = new Uri(OTProvider.API_URL + user.image_url);
        }
    
        // WebProfilePage is set to the user's home page in OfficeTalk.
        person.WebProfilePage = 
            OTProvider.API_URL + @"/Home/index/" + user.alias + "#User";
    
        return person;
    }
    
    

    Overriding the GetProviderData Method

    The OSC ISocialProvider interface contains members that return information about the OSC provider. This includes the capabilities of the social network, how to communicate with the social network, and general information about the social network. The OSC Provider Proxy Library provides the GetProviderData abstract method, which you can override to return OSC provider information. The GetProviderData abstract method returns the OSC Provider Proxy Library ProviderData object, which encapsulates the provider information.

    The following section of the GetProviderData override method initializes a ProviderData object and sets the properties for the OfficeTalk provider.

    // The ProviderData contains information about the social network and is 
    // used by the OSC ISocialProvider members to return information.
    ProviderData providerData = new ProviderData();
    
    // Friendly name of the social network to display in Outlook.
    providerData.NetworkName = NETWORK_NAME;
    
    // GUID that represents the social network.
    // This GUID should not change between versions.
    providerData.NetworkGuid = new Guid(NETWORK_GUID);
    
    // Version of the social network provider.
    providerData.Version = API_VERSION;
    
    // Array of URLs that the social network provider uses.
    // The default URL should be the first item in the array.
    providerData.Urls = new string[] { API_URL };
    
    // The icon of the social network to display in Outlook.
    Byte[] icon = null;
    Assembly assembly = Assembly.GetExecutingAssembly();
    using (Stream imageStream =
        assembly.GetManifestResourceStream("OfficeTalkOSCProvider.OTIcon16.bmp"))
    {
        using (MemoryStream memoryStream = new MemoryStream())
        {
            using (Image socialNetworkIcon = Image.FromStream(imageStream))
            {
                socialNetworkIcon.Save(memoryStream, ImageFormat.Bmp);
                icon = memoryStream.ToArray();
            }
        }
    }
    providerData.Icon = icon;
    
    

The following section of the GetProviderData override method uses the Proxy Library Capabilities class to identify the capabilities and requirements for the OfficeTalk OSC provider. The Capabilities class defines capabilities by setting the CapabilityFlags property. The CapabilityFlags property uses a bitmask and is set by using the bitwise OR operator to combine constants that the OSC Provider Proxy Library has defined for each capability.

    // Define the capabilities for the provider.
    // The Capabilities object will generate the appropriate XML string.
    Capabilities capabilities = new Capabilities(SCHEMA_VERSION);
    capabilities.CapabilityFlags =
        // OSC should call the GetAutoConfiguredSession method to get a 
        // configured session for the user.
        Capabilities.CAP_SUPPORTSAUTOCONFIGURE |
    
        // OSC should hide all links in the Account configuration dialog box.
        Capabilities.CAP_HIDEHYPERLINKS |
        Capabilities.CAP_HIDEREMEMBERMYPASSWORD |
    
        // The following activity settings identify that Activities uses
        // hybrid synchronization.
        // OSC will store activities for friends in a hidden folder and 
        // activities for non-friends in memory.
        Capabilities.CAP_GETACTIVITIES |
        Capabilities.CAP_DYNAMICACTIVITIESLOOKUP |
        Capabilities.CAP_DYNAMICACTIVITIESLOOKUPEX |
        Capabilities.CAP_CACHEACTIVITIES |
    
        // The following Friends settings identify that friend information
        // uses hybrid synchronization.
        // OSC will call the GetPeopleDetails method every time the People Pane 
        // is refreshed to ensure the latest user information is displayed.
        Capabilities.CAP_GETFRIENDS |
        Capabilities.CAP_DYNAMICCONTACTSLOOKUP |
        Capabilities.CAP_CACHEFRIENDS |
    
        // The following Friends settings identify that OfficeTalks supports
        // the FollowPerson and UnFollowPerson calls.
        Capabilities.CAP_DONOTFOLLOWPERSON |
        Capabilities.CAP_FOLLOWPERSON;
    
    // Set the email HashFunction.
    // Setting the EmailHashFunction is required if CAP_DYNAMICCONTACTSLOOKUP
    // or CAP_DYNAMICACTIVITIESLOOKUPEX are set.
    capabilities.EmailHashFunction = HashFunction.SHA1;
    
    // Set the capabilities property on the providerData object.
    providerData.ProviderCapabilities = capabilities;
    
    

    The capabilities and requirements defined in the preceding code example are specific to OfficeTalk. A custom OSC provider that is developed for a different social network must define a set of capabilities and requirements that are specific to that social network.

    The following list shows the CapabilityFlag constants that are available in the OSC Provider Proxy Library Capabilities class.

    CAP_SUPPORTSAUTOCONFIGURE
    The provider supports calling the ISocialProvider.GetAutoConfiguredSession method to attempt automatic configuration of the network for the user.
    CAP_GETFRIENDS
    The provider supports the ISocialPerson.GetFriendsAndColleagues or ISocialSession2.GetPeopleDetails method. The OSC uses the CAP_CACHEFRIENDS and CAP_DYNAMICCONTACTSLOOKUP settings to determine whether friends are stored as Outlook contact items or are stored in memory.
    CAP_CACHEFRIENDS
    The provider supports storing friends as Outlook contact items in a social-network-specific contacts folder.
    CAP_DYNAMICCONTACTSLOOKUP
    The provider supports the ISocialSession2.GetPeopleDetails method for on-demand synchronization of friends and non-friends. If CAP_DYNAMICCONTACTSLOOKUP is set, the OSC calls the ISocialSession2.GetPeopleDetails method every time the People Pane is refreshed.
    CAP_SHOWONDEMANDCONTACTSWHENMINIMIZED
    Indicates that the OSC should carry out on-demand synchronization for friends and non-friends when the People Pane is minimized.
    CAP_FOLLOWPERSON
    The provider supports the ISocialSession.FollowPerson method for adding the person as a friend on the social network.
    CAP_DONOTFOLLOWPERSON
    The provider supports the ISocialSession.UnFollowPerson method for removing the person as a friend on the social network.
    CAP_GETACTIVITIES
    The provider supports the ISocialPerson.GetActivities or ISocialSession2.GetActivitiesEx method. The OSC uses the CAP_CACHEACTIVITIES and CAP_DYNAMICACTIVITIESLOOKUPEX settings to determine whether activities are stored as Outlook RSS items or are stored in memory.
    CAP_CACHEACTIVITIES
    The provider supports storing activities as Outlook RSS items in a hidden News Feed folder. To support cached synchronization of activities CAP_CACHEACTIVITIES should be set and CAP_DYNAMICACTIVITIESLOOKUPEX should not be set. With cached synchronization of activities, the OSC stores all activities as Outlook RSS items in a hidden News Feed folder. To support hybrid synchronization of activities, both CAP_CACHEACTIVITIES and CAP_DYNAMICACTIVITIESLOOKUPEX should be set. With hybrid synchronization of activities, the OSC stores activities for friends as Outlook RSS items in a hidden News Feed folder and caches activities for non-friends in memory. To support on-demand synchronization of activities, CAP_CACHEACTIVITIES should not be set and CAP_DYNAMICACTIVITIESLOOKUPEX should be set. With on-demand synchronization of activities, the OSC caches all activities in memory.
    CAP_DYNAMICACTIVITIESLOOKUP
    Deprecated in OSC 1.1. Use the CAP_DYNAMICACTIVITIESLOOKUPEX setting instead.
    CAP_DYNAMICACTIVITIESLOOKUPEX
    The provider supports the ISocialSession2.GetActivitiesEx method for on-demand or hybrid synchronization of activities. To support on-demand synchronization of activities, CAP_DYNAMICACTIVITIESLOOKUPEX should be set and CAP_CACHEACTIVITIES should not be set. With on-demand synchronization of activities, the OSC calls ISocialSession2.GetActivitiesEx every time the People Pane is refreshed. To support hybrid synchronization of activities, both CAP_DYNAMICACTIVITIESLOOKUPEX and CAP_CACHEACTIVITIES should be set. With hybrid synchronization of activities, the OSC calls ISocialSession2.GetActivitiesEx every 30 minutes to refresh activities information. When CAP_DYNAMICACTIVITIESLOOKUPEX is not set, the OSC does not call ISocialSession2.GetActivitiesEx.
    CAP_SHOWONDEMANDACTIVITIESWHENMINIMIZED
    Indicates that the OSC should carry out on-demand synchronization for activities when the People Pane is minimized.
    CAP_DISPLAYURL
    Indicates that the OSC should display the network URL in the account configuration dialog box.
    CAP_HIDEHYPERLINKS
    Indicates that the OSC should hide the “Click here to create an account” and the “Forgot your password?” hyperlinks in the account configuration dialog box.
    CAP_HIDEREMEMBERMYPASSWORD
    Indicates that the OSC should hide the Remember my password check box in the account configuration dialog box.
    CAP_USELOGONWEBAUTH
    Indicates that the OSC should use forms-based authentication. When CAP_USELOGONWEBAUTH is set, the OSC uses forms-based authentication and calls the ISocialSession.LogonWeb method. When CAP_USELOGONWEBAUTH is not set, the OSC uses basic authentication and calls the ISocialSession.Logon method.
    CAP_USELOGONCACHED
    The provider supports the ISocialSession2.LogonCached method to log on with cached credentials. When CAP_USELOGONCACHED is set, the OSC ignores the CAP_USELOGONWEBAUTH setting and calls ISocialSession2.LogonCached for authentication.
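
    To make the effect of these flags more concrete, here is a small illustrative fragment (not taken from the OfficeTalk sample) showing how a hypothetical provider that supports only cached synchronization of friends and activities, plus forms-based authentication, might set its capabilities:

    // Illustrative only - not part of the OfficeTalk sample.
    Capabilities capabilities = new Capabilities(SCHEMA_VERSION);
    capabilities.CapabilityFlags =
        Capabilities.CAP_GETFRIENDS |        // friends are retrieved...
        Capabilities.CAP_CACHEFRIENDS |      // ...and stored as Outlook contact items
        Capabilities.CAP_GETACTIVITIES |     // activities are retrieved...
        Capabilities.CAP_CACHEACTIVITIES |   // ...and stored as Outlook RSS items
        Capabilities.CAP_USELOGONWEBAUTH;    // log on with forms-based authentication

    // No EmailHashFunction is required here because neither CAP_DYNAMICCONTACTSLOOKUP
    // nor CAP_DYNAMICACTIVITIESLOOKUPEX is set.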

    Overriding the GetMe Method

    Many of the OSC interface members and OSC Provider Proxy Library override methods require information about the current user. The OSC Provider Proxy Library provides the GetMe abstract method, which you can override to return information about the current user from the social network. The GetMe abstract method returns a Person object, which contains all social network data for the current user.

    The GetMe override method shown in the following example gets an OfficeTalkClient object to communicate with OfficeTalk. The GetMe override method then calls the OfficeTalk GetUser method by using the user name that is used to log on to Windows. After obtaining the OfficeTalk User, the GetMe override method calls the OfficeTalkHelper ConvertUserToPerson method to convert the OfficeTalk User to a Person that can be used within the OSC Provider Proxy Library.

    After the conversion is complete, the GetMe override method sets the Person.UserName property for the ISocialSession.LoggedOnUserName interface member. Only the GetMe override method sets the Person.UserName property when it returns information about the current user.

    // OSC Proxy Library override method used to return information 
    // for the current user.
    public override Person GetMe()
    {
        // Get a reference to the OfficeTalk client.
        OfficeTalkClient officeTalkClient =
            OfficeTalkHelper.GetOfficeTalkClient();
    
        // Look up the user based on credentials used to log on to Windows.
        OTUser user =
            officeTalkClient.GetUser(System.Environment.UserName, Format.JSON);
    
        // Convert the OfficeTalk User to an OSC Provider Proxy Person.
        Person p = OfficeTalkHelper.ConvertUserToPerson(user);
    
        // Set the UserName property.
        // This is used only by the Person that the GetMe method returns to
        // support the OSC ISocialSession.LoggedOnUserName property.
        p.UserName = System.Environment.UserName;
    
        return p;
    }
    
    

    Overriding OSC Provider Proxy Library Friends Methods

    A custom OSC provider that uses the OSC Provider Proxy Library must override the abstract and virtual methods for returning friends social network data. In the sample solution, the overrides for these OTProvider methods are located in the OTProvider_Friends source file.

    The abstract and virtual methods for friends are as follows:

    • GetPeopleDetails—Returns detailed user information for the email addresses that are passed into the method.
    • GetFriends—Returns a list of friends for the current user.
    • FollowPersonEx—Adds the person who is identified by the email address as a friend on the social network.
    • UnFollowPerson—Removes the person who is identified by the user ID as a friend on the social network.

    Reviewing these methods is outside of the scope of this Visual How To. For more information about returning friends social network data, see Part 2: Getting Friends Information by Using the Proxy Library for Outlook Social Connector Provider Extensibility.
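
    Although those methods are covered in Part 2, the following minimal sketch shows the general shape of a GetFriends override, reusing the helpers from the GetMe example above. The return type and the GetFollowedUsers call are assumptions made for illustration only; the actual signatures are defined by the proxy library and the OfficeTalk API.

    // Minimal sketch only. The real implementation is in the sample's
    // OTProvider_Friends source file and is described in Part 2 of this series.
    public override Person[] GetFriends()
    {
        // Reuse the same client used by the GetMe override.
        OfficeTalkClient officeTalkClient = OfficeTalkHelper.GetOfficeTalkClient();

        // Hypothetical OfficeTalk call that returns the users the current user follows.
        OTUser[] followedUsers =
            officeTalkClient.GetFollowedUsers(System.Environment.UserName, Format.JSON);

        // Convert each OfficeTalk user to an OSC Provider Proxy Person.
        List<Person> friends = new List<Person>();
        foreach (OTUser followedUser in followedUsers)
        {
            friends.Add(OfficeTalkHelper.ConvertUserToPerson(followedUser));
        }

        return friends.ToArray();
    }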

    Overriding OSC Provider Proxy Library Activity Methods

    A custom OSC provider that uses the OSC Provider Proxy Library must override the abstract and virtual methods for returning activity social network data. In the sample solution, the overrides for these OTProvider methods are located in the OTProvider_Activities source file.

    There is only one method to override for activities:

    • GetActivities—Returns activities for all users who are identified by the email addresses that are passed into the method.

    Covering these methods in detail is outside of the scope of this Visual How To. For more information about returning activities social network data, see Part 3: Getting Activities Information by Using the Proxy Library for Outlook Social Connector Provider Extensibility Visual How To.

    Read It

    Creating a custom Outlook Social Connector (OSC) provider for a social network is a straightforward process of implementing the OSC Provider extensibility interfaces to return social network data.

    The OSC Provider Proxy Library simplifies this process by removing the requirement to implement each individual interface member. Instead the OSC Provider Proxy Library defines a consolidated set of abstract and virtual methods to provide social network data. The developer of the OSC provider can focus on overriding these methods with the business logic required to interface with the social network API.

    The sample solution for this article includes all of the code required for a custom OSC provider for OfficeTalk. This Visual How To does not cover all of the code in the sample solution. This Visual How To focuses on creating a custom OSC provider solution, and returning information about the OSC provider, the social network capabilities, and the current user. The social network data that the OfficeTalk provider returns is shown in Figure 2.

    Figure 2. OSC showing OfficeTalk social network data in the People Pane


    For more information about returning friends social network data, see Part 2: Getting Friends Information by Using the Proxy Library for Outlook Social Connector Provider Extensibility.

    For more information about returning activities social network data, see Part 3: Getting Activities Information by Using the Proxy Library for Outlook Social Connector Provider Extensibility.

    New Web Part released – List Search Web Part now available!!

    The List Search Web Part reads the entries from a SharePoint List or Library (located anywhere in the site collection) and displays the selected user fields in a grid with an optional interactive search filter.

    It can be used for WSS 3.0, MOSS 2007, SharePoint 2010 and SharePoint 2013.


    The following parameters can be configured:

    • SharePoint Site
    • List Columns to be displayed
    • Filtering, Grouping, Searching, Paging and Sorting of rows
    • AZ Index
    • optional Header text

    Installation Instructions:

    1. download the List Search Web Part Installation Instructions
    2. either install the web part manually or deploy the feature to your server/farm as described in the instructions. 
    3. Security Note:
      If you get the following error message: "Only an administrator may enumerate through all user profiles", you will need to grant the application pool account(s) for the web application(s) "Manage User Profiles" permissions within the User Profile Service (SSP in the case of MOSS 2007).
      This ensures that the application pool is able to retrieve the list of user profiles.
      To assign this permission, access your active "User Profile Service" (SP 2010 Server) or the "Shared Services Provider" (MOSS 2007) via Central Admin.
      From the "User Profiles and My Sites" group, click "Personalization services permissions".
      Add the "Manage User Profiles" permission to your application pool account(s).
    4. Configure the following Web Part properties in the Web Part Editor “Miscellaneous” pane section as needed:
      • Site Name: Enter the name of the site that contains the List or Library:
        – Leave this field empty if the List is in the current site (e.g. the Web Part is placed in the same site)
        – Enter a "/" character if the List is contained in the top site
        – Enter a path if the List is in a subsite of the current site (e.g. in the form of "current site/subsite")
      • List Name: Enter the name of the desired SharePoint List or Library
        Example: Project Documents
      • View Name: Optionally enter the desired List View of the list specified above. A List View allows you to specify specific data filtering and sorting. 
        Leave this field empty if you want to use the List default view.
      • Field Template: Enter the List columns to be displayed (separated by semicolons).
        Pictures can be attached (via File Upload) to the SharePoint List items and displayed using the symbolic "Picture" column name.
        If you want to allow users to edit their own entries, please add the symbolic "Username" column name to the Field Template. An "Edit" symbol will then be displayed to allow the user to navigate to the corresponding Edit Form. Example:
        Type;Name;Title;Modified;Modified By;Created By

        Friendly Header Names:
        If you would like to display a “friendly header name” instead of the default property name please append it to the User property, separated by the “|” pipe symbol.

        Example:
        Picture;LastName|Last Name;FirstName;Department;Email|Email Address

        Hiding individual columns:
        You can hide a column by prefixing it with a “!” character. 
        The following example hides the “Department” column: 
        LastName;FirstName;!Department;WorkEmail

        Suppress Column wrapping:
        You can suppress the wrapping of text inside a column by prefixing it with a “^” character.
        LastName;FirstName;Department;^AboutMe

        Showing the E-Mail address as plain text:
        You can opt to display the plain e-mail address (instead of the envelope icon) by appending “/plain” to the WorkEmail column:
        LastName;WorkEmail/plain;Department

      • Group By: enter an optional User property to group the rows.
      • Sort By: enter the List column(s) to define the default sort order. You can add multiple properties separated by commas. Append “/desc” to sort the column descending.
        Examples:
        Department
        Department,LastName
        LastName/desc

        The column headings can be clicked by the users to manually define the sort order.
      • AZ Index Column: enter an optional List column to display the AZ filter in the list header. 
        If an “!” character is appended to the property name, the “A” index will be forced when visiting the page.
        Example: LastName! 

         
      • Search Box: enter one or more List columns (separated by semicolons) to allow for interactive searching. Example: LastName;FirstName

        If you want to display a search filter as a dropdown combo, please enter it with a leading “@” character:
        LastName;FirstName;Department;@Office

        Friendly Search Box Labels:
        If you would like to display a “friendly label” instead of the default property name please append it to the User property, separated by the “|” pipe symbol.
        Example:
        WorkPhone|Office Phone;Office|Office Nbr

         

      • Align Search Filters vertically: allows you to align the search input boxes vertically to save horizontal space.
      • Rows per page: the web part supports paging and lets you specify the desired number of rows per page.
      • Image Height: specify the image height in pixels if you include the “Picture” property. 
        Enter “0” if you want to use the default picture size.
      • Header Text: enter an optional header text. Please note that you can embed HTML tags if needed. You can additionally specify the text to be displayed if the "Show all entries" option is unchecked and the user has not performed a search yet by appending a "|" character followed by the text.
        Example:
        This is the regular header text|This text is only shown if the user has not yet performed a search
      • Detail View Page: enter an optional column name prefixed by "detailview=" to link a column to the item detail view page. Append the "/popup" option if you want to open the detail page in a SharePoint 2010/2013 dialog popup window.
        Examples:
        detailview=LastName
        detailview/popup=Title
      • Alternating Row Color: enter the optional color of the alternating row background (leave blank to use the default).
        Enter either an HTML color name (e.g. "red") or a hexadecimal RRGGBB value (e.g. "#CCFFCC"). Enter the values without the double quotes.
        You can also change the default background color of the non-alternating rows by appending a second color value separated by a semicolon.
        Example: #ffffcc;#ffff99

        The default Header style can be changed by adding the “AESD_Headerstyle” appSettings variable to the web.config “appSettings” section:

        <appSettings>
            <add key="AESD_Headerstyle" value="background:green;font-size:10pt;color:white" />
        </appSettings>

         

      • Show Column Headers: either show or suppress the List column header row.
      • Header Row CSS Style: enter the optional header row CSS style(s) as needed.
        Example:
        color:blue;white-space:nowrap
      • Show Groups collapsed: either show the groups (if you specify a column in the “Group By” setting) collapsed or expanded when entering the page.
      • Enforce Security: hides the web part if the user has no access to the site or the list. This avoids a login prompt if the user does not have at least "View" permission on the list or the site containing the list.
      • Show all entries: either show all entries or none when first visiting the page.
        You can append a specific text to the “Header Text” field (see above) which is only displayed if this option is unchecked and no search has yet been performed by the user.
      • Open Links in new window: either open the links in a new window or in the same browser window.
      • Link Documents to Office365: open the Word, Excel and PowerPoint documents in the Office365 web viewer.
      • Show ‘Add New Item’ Button: either show or suppress the “Add new item button” to let users add new items to the list (this option is security-trimmed).
      • Export to CSV: Show/hide the “Export” button for Excel CSV File Export
      • CSV Separator: Enter the desired CSV field separator character (Default=Comma). Use a semicolon in countries which use the comma as a decimal separator.
      • Localization: enter the following 4 values (separated by semicolons) in your local language if you need to override the English strings corresponding to the
        – Search button text,
        – A..Z menu "View all" option,
        – the text displayed for Hyperlink columns,
        – the optional "Group By" name (if grouping is enabled).
        Default:
        Search;View all;Visit

      • License Key: enter your Product License Key (as supplied after purchase of the "List Search Web Part" license).
        Leave this field empty if you are using the free 30 day evaluation version.

     Contact me now at tomas.floyd@outlook.com for the List Search Web Part and other Free & Paid Web Parts and Apps for SharePoint 2010, 2013, Azure, Office 365, SharePoint Online

    3 Brand New LINQ to Office Providers Available Now!!

    The SPSamurai.Office.LINQ namespace contains 3 classes –

    OutlookProvider (LINQ to Outlook), OneNoteProvider (LINQ to OneNote) and ExcelProvider (LINQ to Excel).

    The OutlookProvider is a wrapper class which provides IEnumerable collections over the data exposed by the Outlook COM interface (appointments, contacts, mail items, tasks, …).

    The OneNoteProvider provides collections of notebooks, sections and pages by manipulating the XML hierarchy tree of OneNote. And the ExcelProvider loads an Excel worksheet and provides column definition and row collections.

    All collections are IEnumerable so you can query them with LINQ. The full source code is provided.
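
    As a rough illustration, a LINQ query over the OutlookProvider might look like the following. The collection and property names used here (Appointments, Start, Subject) are assumptions for illustration; check the articles and the provided source code for the actual API.

    // Hypothetical usage of the OutlookProvider - member names are illustrative only.
    OutlookProvider outlook = new OutlookProvider();

    var upcomingAppointments =
        from appointment in outlook.Appointments
        where appointment.Start > DateTime.Now
              && appointment.Start < DateTime.Now.AddDays(7)
        orderby appointment.Start
        select new { appointment.Subject, appointment.Start };

    foreach (var item in upcomingAppointments)
    {
        Console.WriteLine("{0}: {1}", item.Start, item.Subject);
    }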

    Check out my articles where I describe the implementation of these 3 classes and how to use them. These articles also contain a lot of LINQ query examples.

    Class diagrams:

    Features:
    • Set flag with due date from a predefined list: Today, Tomorrow, This Week, Next Week or Custom
    • Different options of follow-up visualization using combinations of flag, text and date
    • Support for sorting and filtering features
    • Support for different calendars (Gregorian, Japanese Emperor Era, Korean Tangun Era, Hijri, etc.)
    • Datasheet view supported
    • Two-way conversion between ArtfulBits Follow-Up and the standard Microsoft® SharePoint® Date and Time column
    • Language pack support (desired localization can be added by request)

     

    Contact me at tomas.floyd@outlook.com for these tools and more SharePoint, Azure and Office 365 Apps, Tools and Web Parts or for specialised custom SharePoint Development

    How To : Reserve Resources on the Calendar in SharePoint 2013 / Online

    I suppose many of you know about a great calendar feature in SharePoint 2010 called resource reservation. It enables the organization of meetings through a useful interface that lets you select multiple resources, such as meeting rooms, projectors and other facilities, along with the required participants, and then pick a time frame that is free for all participants and facilities in the calendar view.

    You can switch between week and day views.

    Here is a screenshot of the calendar with resource reservation and member scheduling features:

    You can change resources and participants in the form of your meeting, find free time frames in the diagram, and check for double bookings:

    There are two ways to add the resource reservation feature into SharePoint 2010 calendar:

    1. Enable the ‘Group Work Lists’ web feature, add a calendar, and go to its settings. Click the ‘Title, description and navigation’ link in the ‘General Settings’ section. There, check ‘Use this calendar to share member’s schedule?’ and ‘Use this calendar for Resource Reservation?’
    2. Create a site based on ‘Group Work Site’ template.

    Here are the detailed instructions: http://office.microsoft.com/en-us/sharepoint-server-help/enable-reservation-of-resources-in-a-calendar-HA101810595.aspx

    SharePoint 2013 on-premise

    After migrating to SharePoint 2013, I discovered that these features were excluded from the new platform and retained only for backward compatibility.

    So, you can migrate your application with an installed booking calendar from SharePoint 2010 to SharePoint 2013 and keep the resource reservation functionality, but you cannot activate it on a new SharePoint 2013 application through the default interface.

    Microsoft officially attributed these restrictions to the unpopularity of the resource reservation feature: http://technet.microsoft.com/en-us/library/ff607742(v=office.15).aspx#section1

    First, I found a solution for SharePoint 2013 on-premise. It is possible to display the missing site templates, including ‘Group Work Site’. Then you just need to create a site based on this template and you will get the resource reservation calendar.

    Go to C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\TEMPLATE\1033\XML, open the WEBTEMP.XML file, find the element whose Title attribute is ‘Group Work Site’, and change its Hidden attribute from TRUE to FALSE.
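
    For illustration, the relevant entry in WEBTEMP.XML looks roughly like the fragment below after editing. The template name, ID and the other attributes shown here are indicative only and may differ in your file; only the Title and Hidden attributes matter for this change.

    <!-- Illustrative fragment; attribute values other than Title and Hidden are placeholders -->
    <Template Name="SGS" ID="40">
      <Configuration ID="0" Title="Group Work Site" Hidden="FALSE"
        Description="..." DisplayCategory="Collaboration" />
    </Template>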

    SharePoint 2013 Online in Office 365

    Perfect, now we can use a free SharePoint booking system based on the standard calendar. But what about SharePoint Online in Office 365? We do not have access to WEBTEMP.XML in its file system.

    After some research I developed a sandbox solution that enables the hidden ‘Group Work Lists’ feature and adds a calendar with the resource reservation and member scheduling features. Please download it and follow the instructions to install it:

    1. Go to the site collection settings.
    2. Open ‘Solutions’ area from ‘Web Designer Galleries’ section.
    3. Upload CalendarWithResources.wsp package and activate it.
    4. Now, navigate into the site where you wish to add the calendar with the enabled resource reservation feature.
    5. Open site settings -> site features.
    6. Activate ‘Calendar With Resources’ feature.

    Great, now you have Group Calendar with an ability to book resources and schedule meetings.

    This solution works for SharePoint 2013 on-premise as well, so you can use it instead of WEBTEMP.XML file modification.

    Free download :

    WSP File – http://1drv.ms/1f7ZSqO

     

    Sandbox Solutions available in CRM 2013 and Online

    You may have noticed changes to the navigation bar at the top of one or more of your Dynamics CRM Online instances.

    For both free and paid test instances, you’ll now see an orange navigation bar with a SANDBOX watermark.  Production instances will continue to display the blue bar you’ve come to expect.

     

    dynamicsCRMfeatures[1]

    This post covers what this change means to you and how this relates to some new capabilities we’ll be releasing in the near future.

    Changes to Non-production Instances

    Dynamics CRM Online 2013 introduced the concept of non-production (test) instances.  These instances could either be purchased as an add-on to your subscription, or you would be granted a single free non-production instance if you purchased 25 or more user licenses for CRM Online.

    Since they are either free or purchased at a considerable discount compared to additional production instances, these instances may only be used for non-production purposes.  Earlier this year we published a thorough blog post introducing test instances.

    Up until now, the only noticeable difference between a production and non-production instance was the instance type displayed on the instance’s edit settings page in the CRM Online admin center.  The type displayed was either Production instance, Paid Test instance, or Free Test instance, depending on how it was obtained.  The type is set when an instance is provisioned and can’t be changed by a customer administrator.

    The most recent change amounts to the following:

    • Free Test instance and Paid Test instance types have been renamed Sandbox instance.  See the edit settings page in the CRM Online admin center:

    • When you sign into a Sandbox instance, you’ll see the orange nav bar and SANDBOX watermark:

    We’ve made this change to ensure that end users know when they’ve signed into a sandbox instance and do not make production changes by mistake.

    Other than the changes mentioned above, there are no functional differences between production and sandbox instances.  You can perform all of the customization, development, and testing work in a sandbox instance without concerns that the experience will be different in production.

    Welcoming Sandbox Instances to CRM Online

    These changes are part of a much larger wave of improvements we are making in Dynamics CRM Online to better support enterprise application development.

    Your mission-critical business applications run on Dynamics CRM Online, and all changes must be managed carefully.  Without the proper development time, evaluation, and testing, the stability of your application may suffer, resulting in unnecessary downtime that could have been avoided by making these changes elsewhere.  The ideal place to develop and test new application changes is a Sandbox instance that is isolated from your production application.

    We don’t treat running Sandbox instances any differently from Production instances.  They are both given the same level of resources and support.  By design, though, your Sandbox instance application database is completely isolated from production.  It may contain a full or partial copy of production data, users, and customizations.  Since changes in a Sandbox instance do not affect production, you can build your applications with the confidence that your users’ daily productivity will not be adversely affected.

    In the near future, we’ll release additional capabilities to the CRM Online admin center that target Sandbox instances exclusively.

    Reset Instance (RESET)

    Delete the instance completely and re-provision from scratch.  This is particularly useful when you are starting a new implementation or have completed a project and you’d like to free up the resources consumed by a large sandbox instance.

    Copy Instance (COPY)

    Make a copy of an instance into a sandbox.  You can copy either a production or sandbox instance, but the target must be a sandbox.  There are two types of copies you can perform:

    • Full Copy

    Copy the full application database from the source to the target.  This makes an exact copy of the source instance, including all application data, users, customizations, etc.  You’ll need to make sure you have enough available storage space before you copy the instance.

    • Minimal Copy

    Copy only the customizations, core configuration data, and users from the source to the target.  This is primarily useful for development scenarios when the full production database is not needed.  You will need to import your custom configuration and sample data to complete the process.

    Administration Mode (ADMIN)

    Even though the production and sandbox databases are isolated from each other, you may have customizations that reach out to external services.  Without updates to these connections, you could inadvertently perform operations in a production service while working in a sandbox instance.  We’re introducing a new administration mode for sandbox instances to reduce the risk of production impact.  When you perform a copy operation, for example, the target sandbox instance is placed in administration mode.  After the copy is complete, the admin will have an opportunity to resolve any issues in the sandbox instance before bringing the instance fully back online.

    • Enable Administration Mode

    Only users with the System Administrator or System Customizer role can sign in at this time.  This allows an admin to lock out end users and make customization changes without having end users signed into the system.

    • Disable Background Operations

    Sometimes, even with no users signed into the system, asynchronous operations may result in your CRM application reaching out to an external service.  With this mode enabled, all asynchronous operations will be cancelled.  This includes workflows, sending email, Exchange sync, and Yammer.

    • Custom message for end users

    This text will be displayed to end users when they attempt to sign in.  Admins can use it to provide more information on what is going on in the sandbox instance and when the instance is expected to be available.

    Between these new sandbox admin capabilities and our rich development tools, we are making it easier than ever before to build, test, deploy, and maintain your Dynamics CRM Online solutions.  Keep an eye on this blog for announcements of future updates to your CRM Online administration experience.

    How To : Use the Content Query Web Part for SharePoint 2013 Search

    Meeting client requirements with SharePoint often involves aggregating items somehow – often we want to display things like “all the overdue tasks across all finance sites”, or “navigation links to all of the subsites of this area” or “related items (e.g. tagged with the same term)” and so on. In SharePoint 2010 there have been two main ways of accomplishing this:


    • Content Query web part
    • Custom solution built on SPSiteDataQuery (site collection-scoped), SPQuery (list-scoped) or search API

    To a lesser extent, using the search web parts as part of a custom solution may also have been an option. Regardless, it was common to need custom code to meet such requirements. Maybe we needed to add paging to the results, or we needed to use some value obtained dynamically through code (e.g. from the current site/current page/current user/something else) – several Codeplex solutions arose from this gap, and lots of lines of code were written.

    SharePoint 2013 presents the Content Search web part as a new option – its capabilities mean that simply using the web part (with some front-end work to meet look and feel requirements) will meet many needs, without the use of custom code. If you’re a developer, the following screenshot should give you a clue as to why code won’t be required too often (with one of my favorite options highlighted):

    CSWP_BasicsTab_AdvancedMode_PropertyFilterValues

    It’s incredibly powerful, and it’s a good idea to understand what it can do.

    Understanding the deal with search-based solutions

    As the name suggests, the Content Search web part is powered by SharePoint’s search function. As such, there are the following considerations:

    • The CSWP can be configured to “see” items anywhere in SharePoint (potential advantage)
      • In contrast, the CQWP and related SPSiteDataQuery can only search within the current site collection – the site collection “boundary” is a factor
    • Results shown are not guaranteed to be 100% up-to-date (potential disadvantage) 
      • Since a search crawl has to run before any content changes will be shown in search results (remember this can include titles, summaries, images and so on for pages/documents), if a user creates/edits an item it will not be shown immediately. This can be a critical point.
      • Furthermore, my understanding from a FAST engineer is that in SharePoint 2013 there is no longer any means of pushing a document directly into the search index – in previous FAST incarnations, including FAST for SharePoint 2010, there were options such as docpush.exe for “proactively” adding an item to the index, rather than waiting for the next search crawl.
      • That said, it should be possible to obtain much lower indexing latencies in SharePoint 2013 via the “Continuous Crawl” capability. In most deployments, my guess would be that changes would be reflected within a few minutes at most if this is enabled (where previously you may have had an incremental crawl scheduled every 15, 30 or 60 minutes for a SharePoint sites content source).

    Summary – if the functionality you are creating needs fully up-to-date results (e.g. a user has created/edited something and it needs to be immediately reflected in the site) then you will probably need to stick with the original approaches (i.e. a query-based rather than search-based solution).

    Terminology – new concepts in SharePoint 2013 search

    So if we’re going to build solutions built on SP2013 search, we need to have a basic understanding of some concepts – we’ll run into these time and time again:

    Concept – My quick definition

    Result Source – Like a search ‘scope’ in SP2007/SP2010, but on steroids. Rules are specified to say what the scope consists of – e.g. DOCUMENTS in my TEAM SITES area (constraining on content type and path in this example). Created centrally, or at the web level. Result Sources can be used in just about any search-related functionality, including the Content Search web part.

    Query Rule – Like a ‘best bet’ on steroids. Ability to show specially formatted results at the top of the results list (e.g. a Promoted Result) for highly-recommended content. In addition to a Promoted Result, we can also show a Result Block (an example could be a block of 5 image results within the main list of text links). Another option is to Change the Ranked Results – i.e. put something at the top, promote or demote something by 1-10 (previously known as a ‘boost’ in FAST). LOTS of flexibility in matching the user’s query, including regular expressions and matching terms in the Managed Metadata store.

    Display Templates – A Display Template is a JavaScript template (similar to jQuery templates) which controls formatting – in the case of the CSWP, this effectively replaces the use of XSL for look and feel. There is a separate template to pick for the overall control and for the formatting of an individual item. The .js files for the templates are stored in the ‘Content Web Parts’ subfolder of the Master Page Gallery. Side note – in the context of a search results page (rather than the CSWP), a Display Template is associated with a Result Type (e.g. Word doc, wiki page, PowerPoint file etc.) and so we have granular control over how each is displayed (and when). Extremely cool.

    So, lots of flexibility in the search infrastructure. Let’s see some of this in the context of the Content Search web part.

    Configuring the Content Search web part

    There are two main aspects to this:

    • Displaying the right items (Search Criteria)
    • Look and feel (Display Templates)

    In terms of the search criteria, there is enormous flexibility in what the CSWP – and the underlying search capability – can do. For one thing, it’s possible to either directly configure the query entirely in the properties of this web part instance (e.g. show me all documents which meet criteria X), and/or start from a pre-existing Result Source to do some of the filtering. Combining the approaches will be fairly common – an example could be “search only on wiki pages” (an OOTB Result Source) but only show items tagged with X (this defined directly in the CSWP properties).

    Interestingly, configuring a centralized Result Source and a Content Search web part on a page are very similar, even though it would seem some sort of “reusable scope” and a web part are very different things in SharePoint. The overlap comes because underneath both there is a search query which does the work of isolating the desired results – indeed, as we’ll see later the same “Query Builder” UI is used in both places (with a couple of minor differences). So, if you’ve learnt how to configure a CSWP you’ve essentially also learned how to create  a custom Result Source.

     

    Configuring the web part

    The first thing to understand is that the Content Search web part appears in different guises in the web part gallery. The ‘main’ web part is in the ‘Content Rollup’ category:

    CBS_MainWebPartInAdder

    But there are also many pre-configured versions available, each of which finds a specific type of content. This is great for end-users who don’t necessarily think in terms of needing a ‘Content Search’ web part:

    CBS_WebPartsInAdder
    And just to prove the point, the web parts above correspond to the following .webpart definition files in the Web Part Gallery:

    CBS_WebParts

    Once the web part has been added to the page, it can be configured via its tool pane. The main configuration item is the query to use, and this can be started by clicking the ‘Change query’ button:

    CSWP_properties
    This opens the “Build Your Query” dialog – this has tabs labeled BASICS, REFINERS, SORTING, SETTINGS and TEST. This thing is known (unsurprisingly) as the Query Builder – what you might not realize is that it’s used in several places in SharePoint 2013:

    • Configuring a Content Search web part (obviously)
    • Creating a Result Source (specifically in the Query Transform section)
    • Configuring a Search Results web part

    There are some differences – for example, when configuring a Search Results web part there is no SORTING tab because this will be handled in the Result Source or the query. I’m going to talk about things from the perspective of the Content Search web part, but will call out any differences for the other usages – so hopefully by learning the CSWP, you also get to learn 75% of the search infrastructure.

    BASICS tab – Quick Mode

    Although the first tab is labeled ‘BASICS’, I’d say it’s actually the most involved – this is where the query itself is configured, and there is a ‘Quick Mode’ and an ‘Advanced Mode’. You’ll also notice – and let me just say I’d personally be willing to give the Product Manager for this feature A BIG HUG for this – that there’s a “live” results preview pane, permanently visible on the right-hand side of the Query Builder. This shows the first 10 results which would display from running the currently configured search against the current index, without the need to save the web part after each change:

    CSWP_BasicsTab_QuickMode

    Note that if you create your own query, then this preview pane is only able to show results when you are on the TEST tab. And we’ll talk about that towards the end.

    Let’s now walk through the various configuration steps in here.

    Select a query

    In Quick Mode, the dropdown contains the Result Sources (see my definition above if you’ve forgotten already :)) which come out-of-the-box with SharePoint 2013 – one of these may provide a good foundation for what you need:

    CSWP_BasicsTab_QuickMode_SelectQuery
    As you select a Result Source from the dropdown, other options may become available lower down. So if I want to find items matching a specific content type, I get this:

    RestrictByContentType
    In fact, this option to restrict by content type appears for many of the pre-defined Result Sources, not just “Items matching a content type” – which makes sense, because it’s a common thing to include as a filter. Similarly, “Items matching a tag” and several other queries give this interface for selecting a tag to filter on:

    RestrictByTag
    And, happy days, if I specify the tag by typing one I get auto-complete to help me pick the term – this is a fully-fledged Managed Metadata input field. Consequently there’s also full validation of the terms you type-in (though this takes a few seconds to show), so if an author accidentally enters something which isn’t a known term, he/she should spot the mistake immediately:

    TermValidation

    Consider also that those middle options of using the navigation term associated with the current page are exactly what’s needed to build many types of ‘related items’ functionality – again, no code needed now.

    Restrict results by app

    In the next section, I can restrict the scope of the results to a particular location (e.g. the current site). This enables me to get something like the Content Query web part behavior of only searching within the current site collection if needed – because although we now have the power, it won’t always make sense to go across the entire farm 🙂

    RestrictByApp

    Add additional filters

    In the next section I can supplement the query with any valid query text, e.g. a property filter. In this example, I’m adding a filter to only present items which were created by the current user:

    AdditionalFilter

    Sort results

    When we scope our query to a pre-defined Result Source (as we are here in the CSWP ‘Quick Mode’), then sorting is usually pre-defined at that level. The CSWP does give us the opportunity to override sorting based on some popularity ranking models (around most viewed/most clicked) instead though – expect proper wording to appear in this dropdown in the RTM version, but you get the idea:

    SortResults
    So what happens if none of the options presented so far do what you want? An example could be wanting to use an existing Result Source (e.g. ‘wiki pages’) but sort on Last Modified in descending order. Obviously the dropdown above does not allow that. We could create a custom Result Source and implement the query/sorting there, but that only really makes sense if we expect it to be re-used in multiple places.

    In these cases, we can click into Advanced Mode (still on the BASICS tab).

    BASICS tab – Advanced Mode

    In Advanced Mode you basically get to specify the full query text yourself. In my mind, this is like building a solution with the search API in SP2007/SP2010 – I saw many custom solutions (and built several myself) which used the FullTextSqlQuery or KeywordQuery classes to find the right items. SharePoint 2013 makes it much easier to have this full control whilst still piggybacking onto the out-of-the-box web parts – meaning less work and more productivity.

    When switching to the Advanced Mode, a couple of things become available:

    • A SORTING tab (details later)
    • Controls to help you build the query (which you’d previously do essentially by hand in earlier versions), with ‘Keyword filter’ and ‘Property filter’ options. These can be combined as you like, and the resulting query text appears in the textbox at the bottom:

    CSWP_BasicsTab_AdvancedMode

    Avoid custom code by using tokens

    There are many tokens which can be used when building a query in this way – often you might want to pass something into the query, such as a URL (querystring) parameter, the value in a particular field on the page, and so on. Being able to do this unlocks a huge range of possibilities for building solutions. This is where the first image in this article comes from – here’s a reminder:

    CSWP_BasicsTab_AdvancedMode_PropertyFilterValues
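
    To give a feel for the kind of query text these tokens produce, here are a couple of indicative examples. The managed property names and token spellings below are illustrative only; use the dropdown shown above to see the exact list available in your environment:

    Author:{User.Name} path:{Site.URL}
    owstaxIdMetadataAllTagsInfo:{Term.IdWithChildren}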

    In summary, when using the Advanced Mode of the query builder you should be able to target just about any content in your SharePoint environment.

    SORTING tab (Advanced Mode only)

    In SharePoint 2010 Enterprise Search, you could only sort by relevance/rank (the normal search engine approach) or date. FAST for SharePoint 2010 had more options (you could sort by a Managed Property). In SharePoint 2013, frankly the sort options alone are enough to blow your mind 🙂  If you don’t need anything specific around sorting then you can skip this bit, but if you do then here are your options:

    First you can sort by way more things than just rank and date:

    CSWP_SortTab
    One thing to note there – I’m unclear as to what makes it into that ‘Sort by’ list and what does not. It’s not Managed Properties as far as I can tell, so although the list is long many options may not be hugely useful. Still, better than before.

    Usefully, you can now do multi-level sorting (sort by this, then by that). The ‘Add sort level’ link in the image above adds another row, allowing me to do things like sorting by URL depth (so items higher up in the site hierarchy show at the top), and then by rank (that makes sense, because there’ll be lots of items at the same URL depth so I do need two levels of sorting):

    CSWP_SortTab_Custom

    Note that effectively what I’m doing here is building some sort of custom ranking model. This works great if I need something very specific on sorting, but also note SharePoint 2013 comes with several ranking models – the next section allows me to pick from these if I’ve left the ‘Sort by’ dropdown on ‘Rank’, unlike in the image above. This is because all these options are effectively different forms of rank – most are around People Search or popularity:

    CSWP_SortTab_RankingModel

    And for those occasions when the client is telling you that his/her strategic document really has to be on page 1 of the results (but not a Promoted Result/best bet), you have ‘Dynamic ordering’ – you can boost/demote results, including the option to promote to the top:

    CSWP_SortTab_DynamicOrdering

    REFINERS tab

    In the context of search, refiners are usually the links on the search engine’s results page (typically in the left nav) which allow the user to further filter the results. So if I do a search for “meeting minutes” and get lots of results, it would be nice to be able to filter by, say:

    • Date range
    • SharePoint site (since minutes might be stored in individual project sites)
    • Author
    • ..and so on

    However, in the context of the Content Search web part, refiners actually allow you to do this filtering as part of the initial query. The REFINERS tab is effectively a convenience to you, the person configuring the web part – what happens is that a search is performed whilst in edit mode, and all relevant refiners (e.g. managed properties) are presented as available refiners. These can be selected and moved over to the right-hand list:

    CSWP_RefinersTab
    The effect of this is that a further filter is added to my query. In the example above, this may be easier than using a Property Filter on the BASICS tab – since there I have little support, I just select the property and type the value:

    CSWP_BasicsTab_PropertyFilter
    In the REFINERS tab, SharePoint is doing the search for me (as it’s configured so far), and only coming back with values which have been found in the returned results.

    SETTINGS tab

    The SETTINGS tab controls some high-level options for running the search:

    CSWP_SettingsTab

    Query rules

    Since these can be defined at the parent site or search service, it could be the case that your CSWP gets affected by one of these. As the radio button shows, this can be overridden, but consider that some types of Query Rules may not have an effect anyway – as a reminder (from the table at the beginning), a Query Rule can either:

    • Add a promoted result
    • Add a result block
    • Change the ranked results somehow (by modifying the query)

    Out of these 3 actions, 1.5 of them could affect the results of a ‘default’ CSWP. This can be summarized:

    Query Rule Action – Will it affect CSWP results?

    Add a promoted result – Not by default. When a search runs in SharePoint, multiple result sets are returned (e.g. ‘main results’, ‘best bet results’ and so on – in SP2013, the real names for these are ‘RelevantResults’, ‘SpecialTermResults’, ‘PersonalFavoriteResults’ and ‘RefinementResults’). Although a CSWP can be configured to show any table, the default is ‘RelevantResults’ – and a promoted result gets added to ‘SpecialTermResults’.
    Add a result block – Yes, if the result block is configured to show ‘ranked within core results’ (the default), rather than ‘shown above core results’.
    Change ranked results – Yes.

    For completeness, here’s the place in the CSWP where you select which search result set to use (e.g. if you want to switch from the default of ‘RelevantResults’):

    CSWP_ResultTableSelection

    Options in the Results Table dropdown (shown to the left):

    CSWP_ResultTableSelectionOptions

    URL rewriting

    This one is fairly simple – if results are being returned from a catalog which is using “friendly” URLs, then the CSWP can override this to use the original URLs. It may not always make sense to use rewritten URLs in aggregations outside of the catalog pages, especially if you’ve implemented anything funky there.

    Loading behavior

    This is useful – specify whether the CSWP web part instance should load in the main page load (default) or in an AJAX manner after the main page has finished. Considering that a CSWP could either be the centerpiece of your landing page or merely some page footer navigation, it’s nice to be able to prioritize in this way.

    Priority

    Similarly, we can actually specify High, Medium or Low priority for each CSWP instance we use – great for the different usages we will have, although as per the description, note this only has any effect if the search service is overloaded.

    TEST tab

    The TEST tab is hugely useful – it provides you the ability:

    • To see the underlying query text (in Keyword Query Language [KQL]) which has been generated (though it must be edited in other tabs)
    • To see the preview when you are defining a query yourself (the preview pane will be empty on other tabs in this scenario)

    CSWP_TestTab_Less
    Which is all great, but at first glance it’s easy to miss some extra functionality – if the ‘Show more’ link is clicked, other information becomes visible including details on any refiners and Query Rules which have been applied. So below I can see that a custom Query Rule I created has indeed been used, so there’s no guesswork on (for example) whether a certain item is actually being promoted or not:

    CSWP_TestTab_More

    Sidenote – listing items from ONE site/list/library with the Content Search web part

    Worthy of a quick note – if all you need to do is roll-up content from one list/library, then you can do this with the CSWP – in the query, simply restrict the search using PATH:[URL to document library]. The Query Builder UI helps you do this by providing the ‘Restrict by app’ area:

    CSWPrestricttositeorlibrary_thumb2

    N.B. One potential gotcha here can be that you need ‘HTTP’ if your sites are browsed over HTTPS but crawled over HTTP (as in my case).
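
    For example, the additional query text for a single library might look something like this (the URL shown is hypothetical):

    path:"http://intranet/sites/projects/Shared Documents/"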

    If you do want to filter by site/list/library, consider of course that the good ol’ Content Query web part will work just fine here, and you’ll get instant changes as content is changed. What you won’t have is the Content Search web part’s ability to automatically use tokens in the query (e.g. the value of the current navigation category, a value from the current user’s profile etc.).

    Summary

    The Content Search web part is a great tool in the SharePoint consultant’s box of tricks. Configuration may prove quite simple for some scenarios, but there is also a huge amount of flexibility, and so a certain degree of complexity comes with that. Many advanced scenarios which make use of SP2013 search capabilities (such as Result Sources, Query Rules, promoted results and so on) will be possible – knowing the details will help you identify whether the CSWP can be the answer to a particular problem or not.

    Introduction to the Microsoft Monitoring Agent

    You can now download the Microsoft Monitoring Agent and install it on any server in your enterprise that meets the minimum installation requirements.

    The Microsoft Monitoring Agent is a simple installation that is included with System Center Operations Manager 2012 R2 or can be installed separately to be used in a standalone manner. When using the Microsoft Monitoring Agent as a standalone tool the data captured is available as a Visual Studio IntelliTrace file. These files can be opened with Visual Studio Ultimate 2013 Release Candidate.

    The rest of this post will focus on using the Microsoft Monitoring Agent in a standalone mode.

    Using MMA in standalone mode

    After installing the Microsoft Monitoring Agent you enable monitoring of your IIS hosted application using PowerShell commands. Let’s say that I have installed my application (FabrikamFiber) under the default application node on my IIS server as shown below.

    To start monitoring this application I use the following PowerShell command running as Administrator:
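
    A representative example is shown below. The exact argument format and the output path are assumptions here; see the TechNet documentation referenced at the end of this post for the definitive syntax:

    Start-WebApplicationMonitoring "Default Web Site\FabrikamFiber.Web" Monitor "C:\IntelliTraceLogs"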

    A couple of things to notice:

    • The name of the application that I am monitoring includes the site name (Default Web Site) as well as the application name (FabrikamFiber.Web).
    • I chose the monitor mode. My options are to use monitor, trace, or custom. I will focus on the monitor mode in this blog post.
    • The output path is a file location that I have already created. You may wish to make this location shared so that both developers and operations team members can access the files that get created in that location.

    Monitoring mode has been designed to have minimal impact on the running application and only records data when the Microsoft Monitoring Agent detects undesirable behavior, such as when problematic exceptions occur or performance is poor. The default settings record exceptions that bubble up to the global exception handler and pages that take longer than 5 seconds to respond from the server. In addition to the conditions that trigger data collection, the Microsoft Monitoring Agent will also limit the frequency at which it records data to a maximum of 60 similar events per day.

    There are a couple of approaches that you can take to use the Microsoft Monitoring Agent in monitoring mode effectively. One method is to gather an IntelliTrace file after a problem has been reported. A more proactive approach is to gather an IntelliTrace file on a regular schedule, such as daily, to check for problematic behavior in your application that may not have been reported by your users.

    Generating and Using the IntelliTrace file

    To produce an IntelliTrace file you use the Checkpoint-WebApplicationMonitoring command as shown below:
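
    Again, the original command appeared as a screenshot; a minimal sketch using the same application name is:

    # Writes a snapshot IntelliTrace (.iTrace) file to the configured output folder without stopping monitoring
    Checkpoint-WebApplicationMonitoring "Default Web Site\FabrikamFiber.Web"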

    The IntelliTrace file can then be opened using Visual Studio 2013 Release Candidate. Any performance violations are listed in the Performance Data section of the IntelliTrace Summary page and exceptions are shown in the Exceptions section.

    Diagnosing exceptions remains very similar to the experience available with Visual Studio 2012. Diagnosing performance events is detailed in a separate blog post.

    For detailed information on the Microsoft Monitoring Agent refer to TechNet.

    For expanded scenarios and usage information refer to MSDN.

    How To : Customize the Duet Workflow Task form in InfoPath 2013

    Contents

    • Introduction
    • Displaying the SAP business properties
    • Adding and deleting controls on the form
    • Adding heading images to the form and applying a theme

    Duet

    Introduction

    After a task site is published through SharePoint Designer, the task form ApprovalProcess.xsn is generated. The form has a default layout. But we may also want to display the SAP business properties, add some more controls relevant to the use of the form, or delete some irrelevant controls. We may also want to give a nice look and feel to the form. We can do these customizations easily with the help of InfoPath 2013.

    Scenario

    We have published a task site of the task type TestTask using SharePoint Designer 2013. The task form has the default layout shown below. We want to customize the form to include a SAP business property, add a control, remove a control, add a heading image, and apply a theme.

    Figure 1. TestTask task form with default layout

    Displaying the SAP business properties

    Prerequisite: We can display the SAP business properties in the workflow task form provided that we have included the properties in the Extended Business Properties text box while creating the task site.

    Steps:

    1. In SharePoint Designer, click ApprovalProcess.xsn.

    Figure 2. ApprovalProcess.xsn in SharePoint Designer

    InfoPath Designer opens with an auto-generated layout of the form.

    Figure 3. InfoPath Designer with auto-generated layout of the TestTask form

    2. Insert a new row, wherever you want, for the business property LeaveDaysUsedTillToday that you want to display in the form.

    a. In the first column, enter the field name as you want to see it displayed in the form, for example, Leaves Used Till Today.

    Figure 4. Entering field name in the first column

    b. In the second column, we need to get the value for the business property LeavesUsedTillToday from the Workflow Business Document Library. Thus, we need to create a secondary data connection with the Workflow Business Document Library.

    3. Create a secondary data connection with the Workflow Business Document Library as follows: 

    a. Under Actions, click Manage Data Connections.

    Figure 5. Clicking Manage Data Connections in InfoPath

    The Data Connections dialog box appears.

    Figure 6. Data Connections dialog box

    b. In the Data Connections dialog box, select Context  from the list of Data Connections for the form template, and click Add. The Data Connection Wizard starts.

    Figure 7. Data Connection Wizard

    c. Click Next without changing any settings. The wizard now asks for the source of data. Select SharePoint library or list as the source of data.

    Figure 8. Selecting SharePoint library or list in the Data Connection Wizard

    d. Click Next. The wizard now prompts you to enter the location of the SharePoint site.

    e. Enter the URL of the task site, and click Next.

    Figure 9. Entering task site location in the Data Connection Wizard

    f. Select the Workflow Business Data Document Library for the data connection, and click Next.

    Figure 10. Selecting Workflow Business Data Document Library for the data connection

    g. Select the Title and LeavesUsedTillToday fields, and click Next. The Title field will help us filter the data corresponding to a task from the Workflow Business Data Document Library. The Title field is the concatenation of the Related Content field in the main data connection and the string “.xml“.

    Figure 11. Selecting the Title field and LeaveDaysUsedTillToday field

    h. Click Next.

    Figure 12. Data Connection Wizard

    i. Click Finish.

    Figure 13. Finishing the Data Connection Wizard

    j. Close the Data Connections dialog box that now has the data connection to the Workflow Business Data Document Library.

    Figure 14. Data Connections dialog box with the new data connection

    4. Click in the second column of the new row that we inserted in step 2. On the Home tab in the ribbon, click the Calculated Value (fx) button in the Controls pane.

    Figure 15. Choosing Calculated Value in InfoPath

    The Insert Calculated Value dialog box appears.

    5. Click the fx button next to the XPath text box.

    Figure 16. Insert Calculated Value dialog box

    The Insert Formula dialog box appears as shown in the following figure.

    6. Click Insert Field or Group.

    Figure 17. Insert Field or Group button

    The Select a Field or Group dialog box appears, as shown in the following figure.

    7. Click Show advanced view.

    Figure 18. Show advanced view link

    Now, we have the option to select the data connection also. 

    Figure 19. Select a Field or Group dialog box

    8. Select the secondary data connection to the Workflow Business Document Library from the drop-down list.

    Figure 20. Choosing a secondary data connection

    9. Expand the dataFields tree structure until you see LeaveDaysUsedTillToday. Select LeaveDaysUsedTillToday. Since we want to get only the business property for the corresponding task, we need to filter the data received from the data connection. Click Filter Data.

    Figure 21. Filter Data button

    10. The Filter Data dialog box appears. Click Add.

    Figure 22. Add button in the Filter Data dialog box

    The Specify Filter Conditions dialog box appears.

    Figure 23. Specify Filter Conditions dialog box

    11. Specify the filter conditions as follows:

    a. In the first drop-down list, choose Select a field or group.

    Figure 24. Choosing Select a field or group

    b. Choose Workflow Business Data Document Library as the data source.

    c. Expand the dataFields tree structure until you see Title. Select Title, and then click OK.

    Figure 25. Title in the Select a Field or Group dialog box

    d. In the second drop-down list in the Specify Filter Conditions dialog box, select is equal to.

    e. In the third drop-down list in the Specify Filter Conditions dialog box, select Use a formula.

    Figure 26. Selecting Use a formula in the list

    f. The Insert Formula dialog box opens. Click Insert Function.

    Figure 27. Insert Function button

    The Insert Function dialog box opens.

    Figure 28. Insert Function dialog box

    g. Select Text in the Categories list, and then select concat in the Functions list. Click OK.

    Figure 29. Choosing category and function

    The formula corresponding to the selection appears in the Insert Formula dialog box.

    Figure 30. Concat Formula prototype (skeleton) in the Insert Formula dialog box

    h. Double-click the first argument in the concat function. The Select a Field or Group dialog box opens. Under the Main data connection, expand the dataFields tree structure till you see Related Content. Select the subfield :Description under Related Content. Click OK.

    Figure 31. :Description subfield under Related Content

    i. Write the string “.xml” as the second argument in the concat function. Delete the comma following the second argument and the third argument.

    The updated formula is as shown in the following figure. Click OK.

    Figure 32. Updated concat formula (with provided arguments) in Insert Formula dialog box

    j. Click OK in the dialog boxes in the order: Specify Filter Conditions, Filter Data, Select a Field or Group.

    The final overall formula appears in the Insert Formula dialog box. (This dialog box was opened in step 5 and is still open.)
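
    Conceptually, the calculated value we have built selects the business property from the secondary data source, filtered down to the current task – something along these lines (the exact field names InfoPath displays will differ):

    LeaveDaysUsedTillToday[Title = concat(Related Content, ".xml")]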

    Figure 33. Final formula in the Insert Formula dialog box

    12. Click OK in the Insert Formula dialog box (shown above) to return to the Insert Calculated Value dialog box, where the XPath corresponding to our selections has been updated.

    Figure 34. Updated XPath in the Insert Calculated Value dialog box

    Click OK.

    13. Click the File tab on the ribbon. Click Quick Publish.

    Figure 35. Publish your form

    14. The Save As dialog box opens.

    Figure 36. Saving the form template

    15. Click Save. The Microsoft InfoPath dialog box stating the successful publishing of the form template appears. Click OK.

    Figure 37. Form template published successfully

    16. Open the task site and look up any of the tasks. The task appears as shown in the following figure. The SAP business property LeavesUsedTillToday has the value 10 in this task.

    Figure 38. Task on the task site with LeavesUsedTillToday business property

    Adding and deleting controls on a form

    Suppose we want to add a control—for example, ID—from the main data connection to the workflow task form, and delete the control Consolidated Comments from the form.

    1. Insert a new row for the ID field.

    Figure 39. Inserting a new row for the ID field

    2.  Drag the ID field from the Fields task pane onto the canvas. The label for the control appears automatically in the left column when you drag the field into the right column of the table. However, this is true only if you highlight both columns when you release the mouse.

    Figure 40. ID field in right column

    3. Delete the row containing the control for Consolidated Comments.

    Figure 41. Consolidated Comments row deleted

    4. Click the File tab on the ribbon. Click Quick Publish. The Microsoft InfoPath dialog box stating the successful publishing of the form template appears.

    5. Click OK.

    6. Open the task site and look up any of the tasks. The task appears as shown in the following figure. The task has the ID field with value 1 and no Consolidated Comments control.

    Figure 42. Task on the task site with ID control and without Consolidated Comments control

    Adding heading images to the form and applying a theme

    1. Place your cursor in the title area of the page layout. Add a title—for example, MyTask—in the required format and font.

    Figure 43. Title area of the page layout

    2. Add the heading image to the form by inserting a picture from the Insert tab on the ribbon.

    3. On the Page Design tab, apply the Professional – Standard theme. The easiest way to select the theme is to expand the Themes gallery by clicking the arrow at the lower-right corner. Professional – Standard is the first theme in the Professional section.

    Figure 44. Page Design tab

    The title, heading image, and page design should now resemble the following figure.

    Figure 45. New title, heading image, and page design

    4. Click the File tab on the ribbon. Click Quick Publish. The Microsoft InfoPath dialog box stating the successful publishing of the form template appears.

    5. Click OK.

    6. Open the task site and look up any of the tasks. The task appears as shown in the following figure. The task has the desired heading image and theme.

    Figure 46. Task on the task site with desired heading image and theme

    Using SharePoint FAST to unlock SAP data and make it accessible to your entire business

    An important new mantra is search-driven applications. In fact, “search” is the new way of navigating through your information. In many organizations an important part of the business data is stored in SAP business suites.
    A frequent requirement is to navigate through the business data stored in SAP via a user-friendly and intuitive application context.
    For many organizations (78% according to Microsoft numbers), SharePoint is the basis for the integrated employee environment. Starting with SharePoint 2010, FAST Enterprise Search Platform (FAST ESP) is part of the SharePoint platform.
    All analyst firms assess FAST ESP as a leader in their scorecards for Enterprise Search technology. For organizations that have both SAP and Microsoft SharePoint in their infrastructure, the FAST search engine provides opportunities that should not be missed.

    SharePoint Search

    Search is one of the supporting pillars in SharePoint, and an extremely important one for realizing the SharePoint proposition of an information hub plus collaboration workplace. It is essential that information you put into SharePoint is easy to find again.

    By yourself, of course, but especially by your colleagues. However, in the context of a ‘central information hub’, more is needed. You must also be able to find and review, via the SharePoint workplace, data that is administered outside SharePoint. Examples are the business data stored in line-of-business systems [SAP, Oracle, Microsoft Dynamics], but also data stored on network shares.
    With the acquisition of FAST ESP, the search power of Microsoft’s SharePoint platform increased sharply. All analyst firms consider FAST, along with competitors Autonomy and Google Search Appliance, as ‘best in class’ for enterprise search technology.
    For example, Gartner positioned FAST as a leader in the Magic Quadrant for Enterprise Search, just above Autonomy. In the SharePoint 2010 context, FAST was introduced as a standalone extension to the Enterprise Edition, parallel to SharePoint Enterprise Search.
    In SharePoint 2013, Microsoft has simplified the architecture: FAST and Enterprise Search are merged, and FAST is integrated into the standard Enterprise edition and license.

    SharePoint FAST Search architecture

    The logical SharePoint FAST search architecture has two main responsibilities:

    1. Build the search index: automatically index, in bulk, all data and information that you want to search later. Depending on the environment, the data sources include SharePoint itself, administrative systems (SAP, Oracle, custom), file shares, and so on.
    2. Execute search queries against the accumulated index, and expose the search results to the user.

    In the indexing step, SharePoint FAST must therefore retrieve the data from each of the linked systems. FAST Search supports this via the connector framework. There are standard connectors for (web) service invocation and for database queries, and you can also custom-build a .NET connector for other ways of unlocking an external system and then plug that connector into the search indexing pipeline. Examples are connecting to SAP via RFC, or ‘quick-and-dirty’ integration with an internally built system.
    In this context of searching (or better: finding) SAP data, SharePoint FAST supports the indexing process via Business Connectivity Services, which connects to the SAP business system from the SharePoint environment and retrieves the business data. What still needs to be arranged is the runtime interoperability with the SAP landscape: authentication, authorization, and monitoring.
    One option is to build these typical plumbing aspects into a custom .NET connector. But this is not an easy matter, and more significantly, it is something that end-user organizations nowadays no longer aim to do themselves, due to the development and maintenance costs involved.
    An alternative is to apply Duet Enterprise for the plumbing aspects listed. Combined with SharePoint FAST, Duet Enterprise plays a role in two ways: (1) first, upon content indexing, it provides the connectivity to the SAP system to retrieve the data.
    The SAP data is then available within the SharePoint environment (stored in the FAST index files), and search query execution happens without a runtime link into SAP. (2) Optionally, you go from the SharePoint application back to SAP when the use case requires more detail to be exposed for an SAP entity selected from the search results. An example is a situation where it is absolutely necessary to show the actual status – say, for a product in the warehouse, how many orders have been placed?

    Security trimmed: Applying the SAP permissions on the data

    Duet Enterprise retrieves data under the SAP account of the individual SharePoint user. This ensures that, also from the SharePoint application, you can only view those SAP data entities to which you have rights according to the SAP authorization model. Retrieval of detail data is thus only allowed if you are permitted to see that data in the SAP system itself.

    Due to the FAST architecture, matters are different for search query execution. As mentioned, the SAP data has by then already been brought into the SharePoint context, so no runtime link into the SAP system is necessary to execute the query. The consequence is that Duet Enterprise is, in this context, not applied by default.
    In many cases this is fine (for instance in the customer example described below); in other cases it is absolutely mandatory to also respect the specific SAP permissions at the moment of query execution.
    The FAST search architecture supports this by enabling you to augment the indexed SAP data with the SAP authorizations as metadata.
    To do this, you extend the scope of the FAST indexing process with the retrieval of SAP permissions per data entity. This meta-information is used to compile an ACL per data entity. FAST query execution processes this ACL meta-information and checks, for each item in the search result, whether it may be exposed to this SharePoint [SAP] user.
    This approach of assembling the ACL information is a static snapshot of the SAP authorizations at the time the FAST indexing process runs. If the SAP authorizations are dynamic, this is not sufficient.
    For such situations it is required that, at the time of FAST query execution, the SAP authorizations that apply at that moment can be retrieved dynamically. The FAST framework offers an option to achieve this; it does require custom code, but that code is then plugged into the standard FAST processing pipeline.
    SharePoint FAST combined with Duet Enterprise thus provides standard support, and multiple options, for implementing SAP security trimming. In the typical cases the standard support is sufficient.


    Applied in customer situation

    The above is not only theory; we actually applied it in practice. The context was opening up SAP Enterprise Learning functionality for use by employees from their familiar SharePoint-based intranet. One of the use cases is that the employee searches the course catalog for a suitable training.

    This is a striking example of a search-driven application. You want a classified list of available courses, the ability to zoom in to relevant training through refinement, and, per applied classification and refinement, to see how many trainings are available. And of course you also always want the ability to freely search the complete texts of the courses.
    In the solution, we make the SAP data available for FAST indexing via Duet Enterprise. Duet Enterprise here takes care of the connectivity, single sign-on, and the feed into SharePoint BCS. From there FAST takes over: indexing of the exposed SAP data is done via the standard FAST index pipeline, and searching and displaying the results happens via the standard FAST query execution and display functionality.
    In this application context, specific user authorization per SAP course element does not apply; every employee is allowed to find and review all training data. As a result, the standard application of FAST and Duet Enterprise sufficed, without the need for additional customization.

    Conclusion

    Microsoft SharePoint Enterprise Search and FAST are both very powerful tools for making SAP business data (and other line-of-business administrations) accessible. The rich feature set of FAST ESP makes it possible to offer your employees an intuitive, search-driven user experience over the SAP data.

    Microsoft Application Insights – A comprehensive guide : Part 1 – Getting Started

    Application Insights Overview

    With Application Insights, we have the same excellent capabilities from the Application Performance Monitoring (APM) feature (formerly AviCode) available in SCOM 2012 R2 – with the exception that it’s all run from the cloud as part of Visual Studio Online. With this type of monitoring for your applications, you can ensure that they are available and performing optimally, while leveraging usage data to drive improvements and trends.

    Regarding the Microsoft Monitoring Agent used for both Application Insights and SCOM 2012 R2: the two share the same source code but differ slightly in serialization, which determines which REST interface and location the agent sends its performance data to.

    Application Insights supports both .NET and Java web applications. On the Java side of the house, it supports monitoring Tomcat 6, Tomcat 7 or JBoss 6. For the purposes of this deep-dive series however, I’m going to concentrate on demonstrating its capabilities monitoring .NET web applications and I’ll save the Java monitoring for a later post.

    You can use Application Insights to monitor web applications that are running in an on-premise/virtual machine scenario and of course, it’s also fully supported to monitor web applications running as a web role in Windows Azure Cloud Services. If you’re a Windows Phone app developer, you might be interested in the capability to view usage trends and other analytical data as users download and use your app on a daily basis.

    The method of getting these different environments monitored varies depending on the scenario but typically, it’s a straightforward enough process as you’ll understand when you read through this blog series.

    Regardless of the environment you’re monitoring with Application Insights, you can ensure you’re kept up to date with any performance issues (slow responses, uncaught exceptions etc.) by enabling email notification direct to your inbox. If you want to use Visual Studio to view the stack trace to help triage the problem, then this is an easy option too.

    So, that’s a high-level overview of what Application Insights can do, now let’s get started!

    Creating Your Account

    The first thing you’ll need to do is to create a new Visual Studio Online account by clicking on the following link to sign up:

    http://www.visualstudio.com/products/visual-studio-online-overview-vs

    Click on the ‘Ready to Go?’ tile

    Enter your Microsoft Account (formerly Windows Live ID) details, then click the Sign In button. If you don’t yet have a Microsoft Account, you can sign up for a new one here.

    Input all your details into the ‘Create a Visual Studio Online Account’ window, then hit the Create Account button to move on.

    Once you’ve created your Visual Studio Online account, you’ll need to specify a name for your first project. Call it what you want, then hit the Create Project button to finish.

    Now, at the Overview screen of your Home page, you should see a Blue tile titled ‘Try Application Insights’ as shown below. If you can’t see the Blue AI tile, then click the Help button and choose the ‘Display Announcement’ option from the resulting menu.

    Click on the Blue tile and you’ll be taken to the *Insights view where you’ll be prompted for an invitation code to gain access

    Invitation Code? I hear you ask.
    Don’t worry if you don’t have one, even though AI is still only available as a Preview, the good folks over at Microsoft have made a public code available at the following link:
    Type in your code, then hit the Get Started button to enter the new world of Application Insights!
    That’s it for Part 1 of this series. In Part 2, we’ll start work on building a demo .NET web application environment for us to give our new Application Insights account a test drive in.

    SharePoint Development roles urgently need to be filled at MS Gold Partner – Contact me now for more information (Sorry, no recruiters, I am filling private positions)

    Senior SharePoint Developers needed urgently for MS Gold Partner in Sandton/Bryanston :

    3 – 5 years of development experience.

    2 years’ experience in SharePoint.

    3 years experience in C#.

    A minimum of 3 years experience in Visual Studio .NET 2005 – 2008.

    A minimum of 3 years experience in ASP.NET , HTML web development.

    A minimum of 3 years experience with Javascript.

    A minimum of 3 years experience with Windows XP, Windows 2003 and Windows Vista.

    A minimum of 3 years experience in relational database design and implementation with SQL Server
    Advantageous (nice-to-have):

    • Windows SharePoint Server.
    • Microsoft Office SharePoint Server.
    • BizTalk
    • Web Analytics
    • Microsoft CRM
    • K2

    Various Senior .NET Developer positions available at MS Gold Partner and part of the Britehouse Group! Contact me now! (Sorry, no recruiters – I am filling private positions)

    Required (not-negotiable):

    ·         A minimum of 4 years experience developing code in C# and / or VB.NET and ASP.NET

    ·         A minimum of 48 months Visual Studio 2005 / 2006 and / or 2008 experience.

    ·         A minimum of 48 months Transact-SQL (Stored procedures, views and triggers) experience.

    ·         A minimum of 48 months relational database design implementation using MS SQL Server 2000 / 2005 and / or 2008 experience.

    ·         A minimum of 48 months HTML experience.

    ·         A minimum of 48 months Javascript experience.

    ·         5 years experience of leading a development team.

    ·         Proficiency in technical architecture and high-level design, as well as test framework design and implementation.

    Senior Developers must be able to perform as Tech-Lead developers, with the following tasks:

    ·         Technical lead for development, design and implementation of .NET based solutions as part of the projects team.

    ·         Collaborate with Developers, Account Managers and Project Managers.

    ·         Estimate development tasks and execute well on project schedules.
    ·         Interact with clients to create requirement specifications for projects.
    ·          Innovate new solutions and keep up with new emerging technologies.

    ·         Mentoring of other developers.
    Advantageous (nice-to-have):

    ·         3 years computer science degree or equivalent.

    ·         Windows SharePoint Server.

    ·         Microsoft Office SharePoint Server.

    ·         BizTalk.

    ·         Microsoft CRM.

    ·         Experience in web analytics.

    Using OpenXML to build Office 365 Apps (OOXML)

    If you’re building apps for Office to run in Word, you might already know that the JavaScript API for Office (Office.js) offers several formats for reading and writing document content. These are called coercion types, and they include plain text, tables, HTML, and Office Open XML (OOXML).

    So what are your options when you need to add rich content to a document, such as images, formatted tables, charts, or even just formatted text?

    You can use HTML for inserting some types of rich content, such as pictures. Depending on your scenario, there can be drawbacks to HTML coercion, such as limitations in the formatting and positioning options available to your content.

    Because Office Open XML is the language in which Word documents (such as .docx and .dotx) are written, you can insert virtually any type of content that a user can add to a Word document, with virtually any type of formatting the user can apply. Determining the Office Open XML markup you need to get it done is easier than you might think.

    Note

    Office Open XML is also the language behind PowerPoint and Excel (and, as of Office 2013, Visio) documents. However, currently, you can coerce content as Office Open XML only in apps for Office created for Word. For more information about Office Open XML, including the complete language reference documentation, see Additional resources.

    To begin, take a look at some of the content types you can insert using OOXML coercion.

    Download the code sample Loading and Writing Office Open XML, which contains the Office Open XML markup and Office.js code required for inserting any of the following examples into Word.

    Note

    Throughout this article, the terms content types and rich content refer to the types of rich content you can insert into a Word document.

    Figure 1. Text with direct formatting.

    Text with direct formatting applied.

    You can use direct formatting to specify exactly what the text will look like regardless of existing formatting in the user’s document.

    Figure 2. Text formatted using a style.

    Text formatted with paragraph style.

    You can use a style to automatically coordinate the look of text you insert with the user’s document.

    Figure 3. A simple image.

    Image of a logo.

    You can use the same method for inserting any Office-supported image format.

    Figure 4. An image formatted using picture styles and effects.

    Formatted image in Word 2013.

    Adding high quality formatting and effects to your images requires much less markup than you might expect.

    Figure 5. A content control.

    Text within a bound content control.

    You can use content controls with your app to add content at a specified (bound) location rather than at the selection.

    Figure 6. A text box with WordArt formatting.

    Text formatted with WordArt text effects.

    Text effects are available in Word for text inside a text box (as shown here) or for regular body text.

    Figure 7. A shape.

    An Office 2013 drawing shape in Word 2013.

    You can insert built-in or custom drawing shapes, with or without text and formatting effects.

    Figure 8. A table with direct formatting.

    A formatted table in Word 2013.

    You can include text formatting, borders, shading, cell sizing, or any table formatting you need.

    Figure 9. A table formatted using a table style.

    A formatted table in Word 2013.

    You can use built-in or custom table styles just as easily as using a paragraph style for text.

    Figure 10. A SmartArt diagram.

    A dynamic SmartArt diagram in Word 2013.

    Office 2013 offers a wide array of SmartArt diagram layouts (and you can use Office Open XML to create your own).

    Figure 11. A chart.

    A chart in Word 2013.

    You can insert Excel charts as live charts in Word documents, which also means you can use them in your app for Word.

    As you can see by the preceding examples, you can use OOXML coercion to insert essentially any type of content that a user can insert into their own document.

    There are two simple ways to get the Office Open XML markup you need. Either add your rich content to an otherwise blank Word 2013 document and then save the file in Word XML Document format or use a test app with the getSelectedDataAsync method to grab the markup. Both approaches provide essentially the same result.
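
    If you go the test-app route, a minimal sketch of grabbing the markup for the current selection looks something like this (it assumes Office.js is already loaded in your app page):

    function getSelectionAsOoxml() {
        // Ask Word for the current selection as a flattened Office Open XML package (a string).
        Office.context.document.getSelectedDataAsync(Office.CoercionType.Ooxml, function (result) {
            if (result.status === Office.AsyncResultStatus.Succeeded) {
                console.log(result.value); // paste this into a file to inspect or edit the markup
            }
        });
    }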

    Note

    An Office Open XML document is actually a compressed package of files that represent the document contents. Saving the file in the Word XML Document format gives you the entire Office Open XML package flattened into one XML file, which is also what you get when using getSelectedDataAsync to retrieve the Office Open XML markup.

    If you save the file to an XML format from Word, note that there are two options under the Save as Type list in the Save As dialog box for .xml format files. Be sure to choose Word XML Document and not the Word 2003 option.

    Download the code sample named Get, Set, and Edit Office Open XML, which you can use as a tool to retrieve and test your markup.

    So is that all there is to it? Well, not quite. Yes, for many scenarios, you could use the full, flattened Office Open XML result you see with either of the preceding methods and it would work. The good news is that you probably don’t need most of that markup.

    If you’re one of the many app developers seeing Office Open XML markup for the first time, trying to make sense of the massive amount of markup you get for the simplest piece of content might seem overwhelming, but it doesn’t have to be.

    In this topic, we’ll use some common scenarios we’ve been hearing from the apps for Office developer community to show you techniques for simplifying Office Open XML for use in your app. We’ll explore the markup for some types of content shown earlier along with the information you need for minimizing the Office Open XML payload. We’ll also look at the code you need for inserting rich content into a document at the active selection and how to use Office Open XML with the bindings object to add or replace content at specified locations.

    When you use getSelectedDataAsync to retrieve the Office Open XML for a selection of content (or when you save the document in Word XML Document format), what you’re getting is not just the markup that describes your selected content; it’s an entire document with many options and settings that you almost certainly don’t need. In fact, if you use that method from a document that contains a task pane app, the markup you get even includes your task pane.

    Even a simple Word document package includes parts for document properties, styles, theme (formatting settings), web settings, fonts, and then some—in addition to parts for the actual content.

    For example, say that you want to insert just a paragraph of text with direct formatting, as shown earlier in Figure 1. When you grab the Office Open XML for the formatted text using getSelectedDataAsync, you see a large amount of markup. That markup includes a package element that represents an entire document, which contains several parts (commonly referred to as document parts or, in the Office Open XML, as package parts), as you see listed in Figure 13. Each part represents a separate file within the package.

    Tip

    You can edit Office Open XML markup in a text editor like Notepad. If you open it in Visual Studio 2012, you can use Edit > Advanced > Format Document (Ctrl+K, Ctrl+D) to format the package for easier editing. Then you can collapse or expand document parts or sections of them, as shown in Figure 12, to more easily review and edit the content of the Office Open XML package. Each document part begins with a pkg:part tag.

    Figure 12. Collapse and expand package parts for easier editing in Visual Studio 2012.

    Office Open XML code snippet for a package part.

    Figure 13. The parts included in a basic Word Office Open XML document package.

    Office Open XML code snippet for a package part.

    With all that markup, you might be surprised to discover that the only elements you actually need to insert the formatted text example are pieces of the .rels part and the document.xml part.

    Note

    The two lines of markup above the package tag (the XML declarations for version and Office program ID) are assumed when you use the OOXML coercion type, so you don’t need to include them. Keep them if you want to open your edited markup as a Word document to test it.

    Several of the other types of content shown at the start of this topic require additional parts as well (beyond those shown in Figure 13), and we’ll address those later in this topic. Meanwhile, since you’ll see most of the parts shown in Figure 13 in the markup for any Word document package, here’s a quick summary of what each of these parts is for and when you need it:

    • Inside the package tag, the first part is the .rels file, which defines relationships between the top-level parts of the package (these are typically the document properties, thumbnail (if any), and main document body). Some of the content in this part is always required in your markup because you need to define the relationship of the main document part (where your content resides) to the document package.

    • The document.xml.rels part defines relationships for additional parts required by the document.xml (main body) part, if any.

    Important

    The .rels files in your package (such as the top-level .rels, document.xml.rels, and others you may see for specific types of content) are an extremely important tool that you can use as a guide for helping you quickly edit down your Office Open XML package. To learn more about how to do this, see Creating your own markup: best practices later in this topic.

    • The document.xml part is the content in the main body of the document. You need elements of this part, of course, since that’s where your content appears. But, you don’t need everything you see in this part. We’ll look at that in more detail later.

    • Many parts are automatically ignored by the Set methods when inserting content into a document using OOXML coercion, so you might as well remove them. These include the theme1.xml file (the document’s formatting theme), the document properties parts (core, app, and thumbnail), and setting files (including settings, webSettings, and fontTable).

    • In the Figure 1 example, text formatting is directly applied (that is, each font and paragraph formatting setting applied individually). But, if you use a style (such as if you want your text to automatically take on the formatting of the Heading 1 style in the destination document) as shown earlier in Figure 2, then you would need part of the styles.xml part as well as a relationship definition for it. For more information, see the topic section Adding objects that use additional Office Open XML parts.

    Let’s take a look at the minimal Office Open XML markup required for the formatted text example shown in Figure 1 and the JavaScript required for inserting it at the active selection in the document.

    Simplified Office Open XML markup

    We’ve edited the Office Open XML example shown here, as described in the preceding section, to leave just required document parts and only required elements within each of those parts. We’ll walk through how to edit the markup yourself (and explain a bit more about the pieces that remain here) in the next section of the topic.

    <pkg:package xmlns:pkg="http://schemas.microsoft.com/office/2006/xmlPackage">
      <pkg:part pkg:name="/_rels/.rels" pkg:contentType="application/vnd.openxmlformats-package.relationships+xml" pkg:padding="512">
        <pkg:xmlData>
          <Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
            <Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument" Target="word/document.xml"/>
          </Relationships>
        </pkg:xmlData>
      </pkg:part>
      <pkg:part pkg:name="/word/document.xml" pkg:contentType="application/vnd.openxmlformats-officedocument.wordprocessingml.document.main+xml">
        <pkg:xmlData>
          <w:document xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" >
            <w:body>
              <w:p>
                <w:pPr>
                  <w:spacing w:before="360" w:after="0" w:line="480" w:lineRule="auto"/>
                  <w:rPr>
                    <w:color w:val="70AD47" w:themeColor="accent6"/>
                    <w:sz w:val="28"/>
                  </w:rPr>
                </w:pPr>
                <w:r>
                  <w:rPr>
                    <w:color w:val="70AD47" w:themeColor="accent6"/>
                    <w:sz w:val="28"/>
                  </w:rPr>
                  <w:t>This text has formatting directly applied to achieve its font size, color, line spacing, and paragraph spacing.</w:t>
                </w:r>
              </w:p>
            </w:body>
          </w:document>
        </pkg:xmlData>
      </pkg:part>
    </pkg:package>
    
    Note

    If you add the markup shown here to an XML file along with the XML declaration tags for version and mso-application at the top of the file (shown in Figure 13), you can open it in Word as a Word document. Or, without those tags, you can still open it using File > Open in Word. You’ll see Compatibility Mode on the title bar in Word 2013, because you removed the settings that tell Word this is a 2013 document. Since you’re adding this markup to an existing Word 2013 document, that won’t affect your content at all.

    JavaScript for using setSelectedDataAsync

    Once you save the preceding Office Open XML as an XML file that’s accessible from your solution, you can use the following function to set the formatted text content in the document using OOXML coercion.

    In this function, notice that all but the last line are used to get your saved markup for use in the setSelectedDataAsync method call at the end of the function. setSelectedDataAsync requires only that you specify the content to be inserted and the coercion type.

    Note

    Replace yourXMLfilename with the name and path of the XML file as you’ve saved it in your solution. If you’re not sure where to include XML files in your solution or how to reference them in your code, see the Loading and Writing Office Open XML code sample for examples of that and a working example of the markup and JavaScript shown here.

    function writeContent() {
        // Load the edited Office Open XML markup from a file included in your solution.
        var myOOXMLRequest = new XMLHttpRequest();
        var myXML;
        myOOXMLRequest.open('GET', 'yourXMLfilename', false);
        myOOXMLRequest.send();
        if (myOOXMLRequest.status === 200) {
            myXML = myOOXMLRequest.responseText;
        }
        // Insert the markup at the current selection, coercing it as Office Open XML.
        Office.context.document.setSelectedDataAsync(myXML, { coercionType: 'ooxml' });
    }
    

    Let’s take a closer look at the markup you need to insert the preceding formatted text example.

    For this example, start by simply deleting all document parts from the package other than .rels and document.xml. Then, we’ll edit those two required parts to simplify things further.

    Important

    Use the .rels parts as a map to quickly gauge what’s included in the package and determine what parts you can delete completely (that is, any parts not related to or referenced by your content). Remember that every document part must have a relationship defined in the package and those relationships appear in the .rels files. So you should see all of them listed in either .rels, document.xml.rels, or a content-specific .rels file.

    The following markup shows the required .rels part before editing. Since we’re deleting the app and core document property parts, and the thumbnail part, we need to delete those relationships from .rels as well. Notice that this will leave only the relationship (with the relationship ID “rId1” in the following example) for document.xml.

      <pkg:part pkg:name="/_rels/.rels" pkg:contentType="application/vnd.openxmlformats-package.relationships+xml" pkg:padding="512">
        <pkg:xmlData>
          <Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
            <Relationship Id="rId3" Type="http://schemas.openxmlformats.org/package/2006/relationships/metadata/core-properties" Target="docProps/core.xml"/>
            <Relationship Id="rId2" Type="http://schemas.openxmlformats.org/package/2006/relationships/metadata/thumbnail" Target="docProps/thumbnail.emf"/>
            <Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument" Target="word/document.xml"/>
            <Relationship Id="rId4" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/extended-properties" Target="docProps/app.xml"/>
          </Relationships>
        </pkg:xmlData>
      </pkg:part>
    
    Important

    Remove the relationships (that is, the <Relationship…> tag) for any parts that you completely remove from the package. Including a part without a corresponding relationship, or excluding a part and leaving its relationship in the package, will result in an error.

    The following markup shows the document.xml part—which includes our sample formatted text content—before editing.

    <pkg:part pkg:name="/word/document.xml" pkg:contentType="application/vnd.openxmlformats-officedocument.wordprocessingml.document.main+xml">
        <pkg:xmlData>
          <w:document mc:Ignorable="w14 w15 wp14" xmlns:wpc="http://schemas.microsoft.com/office/word/2010/wordprocessingCanvas" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math" xmlns:v="urn:schemas-microsoft-com:vml" xmlns:wp14="http://schemas.microsoft.com/office/word/2010/wordprocessingDrawing" xmlns:wp="http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing" xmlns:w10="urn:schemas-microsoft-com:office:word" xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" xmlns:w14="http://schemas.microsoft.com/office/word/2010/wordml" xmlns:w15="http://schemas.microsoft.com/office/word/2012/wordml" xmlns:wpg="http://schemas.microsoft.com/office/word/2010/wordprocessingGroup" xmlns:wpi="http://schemas.microsoft.com/office/word/2010/wordprocessingInk" xmlns:wne="http://schemas.microsoft.com/office/word/2006/wordml" xmlns:wps="http://schemas.microsoft.com/office/word/2010/wordprocessingShape">
            <w:body>
              <w:p>
                <w:pPr>
                  <w:spacing w:before="360" w:after="0" w:line="480" w:lineRule="auto"/>
                  <w:rPr>
                    <w:color w:val="70AD47" w:themeColor="accent6"/>
                    <w:sz w:val="28"/>
                  </w:rPr>
                </w:pPr>
                <w:r>
                  <w:rPr>
                    <w:color w:val="70AD47" w:themeColor="accent6"/>
                    <w:sz w:val="28"/>
                  </w:rPr>
                  <w:t>This text has formatting directly applied to achieve its font size, color, line spacing, and paragraph spacing.</w:t>
                </w:r>
                <w:bookmarkStart w:id="0" w:name="_GoBack"/>
                <w:bookmarkEnd w:id="0"/>
              </w:p>
              <w:p/>
              <w:sectPr>
                <w:pgSz w:w="12240" w:h="15840"/>
                <w:pgMar w:top="1440" w:right="1440" w:bottom="1440" w:left="1440" w:header="720" w:footer="720" w:gutter="0"/>
                <w:cols w:space="720"/>
              </w:sectPr>
            </w:body>
          </w:document>
        </pkg:xmlData>
      </pkg:part>
    

    Since document.xml is the primary document part where you place your content, let’s take a quick walk through that part. (Figure 14, which follows this list, provides a visual reference to show how some of the core content and formatting tags explained here relate to what you see in a Word document.)

    • The opening w:document tag includes several namespace (xmlns) listings. Many of those namespaces refer to specific types of content and you only need them if they’re relevant to your content.

      Notice that the prefix for the tags throughout a document part refers back to the namespaces. In this example, the only prefix used in the tags throughout the document.xml part is w:, so the only namespace that we need to leave in the opening w:document tag is xmlns:w.

    Tip

    If you’re editing your markup in Visual Studio 2012, after you delete namespaces in any part, look through all tags of that part. If you’ve removed a namespace that’s required for your markup, you’ll see a red squiggly underline on the relevant prefix for affected tags. Also note that, if you remove the xmlns:mc namespace, you must also remove the mc:Ignorable attribute that precedes the namespace listings.

    • Inside the opening body tag, you see a paragraph tag (w:p), which includes our sample content for this example.

    • The w:pPr tag includes properties for directly-applied paragraph formatting, such as space before or after the paragraph, paragraph alignment, or indents. (Direct formatting refers to attributes that you apply individually to content rather than as part of a style.) This tag also includes direct font formatting that’s applied to the entire paragraph, in a nested w:rPr (run properties) tag, which contains the font color and size set in our sample.

    Note

    You might notice that font sizes and some other formatting settings in Word Office Open XML markup look like they’re double the actual size. That’s because font sizes are specified in half-points, while paragraph and line spacing (as well as some section formatting properties shown in the preceding markup) are specified in twips (one-twentieth of a point). A short worked example follows this list.

    Depending on the types of content you work with in Office Open XML, you may see several additional units of measure, including English Metric Units (914,400 EMUs to an inch), which are used for some Office Art (drawingML) values, and values expressed at 100,000 times their actual size, which are used in both drawingML and PowerPoint markup. PowerPoint also expresses some values as 100 times actual, and Excel commonly uses actual values.

    • Within a paragraph, any content with like properties is included in a run (w:r), such as is the case with the sample text. Each time there’s a change in formatting or content type, a new run starts. (That is, if just one word in the sample text was bold, it would be separated into its own run.) In this example, the content includes just the one text run.

      Notice that, because the formatting included in this sample is font formatting (that is, formatting that can be applied to as little as one character), it also appears in the properties for the individual run.

    • Also notice the tags for the hidden “_GoBack” bookmark (w:bookmarkStart and w:bookmarkEnd), which appear in Word 2013 documents by default. You can always delete the start and end tags for the GoBack bookmark from your markup.

    • The last piece of the document body is the w:sectPr tag, or section properties. This tag includes settings such as margins and page orientation. The content you insert using setSelectedDataAsync will take on the active section properties in the destination document by default. So, unless your content includes a section break (in which case you’ll see more than one w:sectPr tag), you can delete this tag.
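
    As promised above, a quick orientation example using values from the sample markup (annotations only – not additional markup you need to add):

    <w:sz w:val="28"/>  <!-- 28 half-points = 14 pt font size -->
    <w:spacing w:before="360" w:line="480" w:lineRule="auto"/>  <!-- 360 twips = 18 pt of space before the paragraph; with lineRule="auto", w:line is in 240ths of a line, so 480 = double spacing -->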

    Figure 14. How common tags in document.xml relate to the content and layout of a Word document.

    Office Open XML elements in a Word document.

    Tip

    In markup you create, you might see another attribute in several tags that includes the characters w:rsid, which you don’t see in the examples used in this topic. These are revision identifiers. They’re used in Word for the Combine Documents feature and they’re on by default. You’ll never need them in markup you’re inserting with your app and turning them off makes for much cleaner markup. You can easily remove existing RSID tags or disable the feature (as described in the following procedure) so that they’re not added to your markup for new content.

    Be aware that if you use the co-authoring capabilities in Word (such as the ability to simultaneously edit documents with others), you should enable the feature again when finished generating the markup for your app.

    To turn off RSID attributes in Word for documents you create going forward, do the following:

    1. In Word 2013, choose File and then choose Options.

    2. In the Word Options dialog box, choose Trust Center and then choose Trust Center Settings.

    3. In the Trust Center dialog box, choose Privacy Options and then disable the setting Store Random Number to Improve Combine Accuracy.

    To remove RSID tags from an existing document, try the following shortcut with the document open in Word:

    1. With your insertion point in the main body of the document, press Ctrl+Home to go to the top of the document.

    2. On the keyboard, press Spacebar, Delete, Spacebar. Then, save the document.

    After removing the majority of the markup from this package, we’re left with the minimal markup that needs to be inserted for the sample, as shown in the preceding section.

    Several types of rich content require only the .rels and document.xml components shown in the preceding example, including content controls, Office drawing shapes and text boxes, and tables (unless a style is applied to the table). In fact, you can reuse the same edited package parts and swap out just the <body> content in document.xml for the markup of your content.

    To check out the Office Open XML markup for the examples of each of these content types shown earlier in Figures 5 through 8, explore the Loading and Writing Office Open XML code sample referenced in the Overview section.

    Before we move on, let’s take a look at differences to note for a couple of these content types and how to swap out the pieces you need.

    Understanding drawingML markup (Office graphics) in Word: What are fallbacks?

    If the markup for your shape or text box looks far more complex than you would expect, there is a reason for it. With the release of Office 2007, we saw the introduction of the Office Open XML Formats as well as the introduction of a new Office graphics engine that PowerPoint and Excel fully adopted. In the 2007 release, Word only incorporated part of that graphics engine—adopting the updated Excel charting engine, SmartArt graphics, and advanced picture tools. For shapes and text boxes, Word 2007 continued to use legacy drawing objects (VML). It was in the 2010 release that Word took the additional steps with the graphics engine to incorporate updated shapes and drawing tools.

    So, for shapes and text boxes to display correctly when an Office Open XML Format Word document is opened in Word 2007, shapes (including text boxes) require fallback VML markup.

    Typically, as you see for the shape and text box examples included in the Loading and Writing Office Open XML code sample, the fallback markup can be removed. Word 2013 automatically adds missing fallback markup to shapes when a document is saved. However, if you prefer to keep the fallback markup to ensure that you’re supporting all user scenarios, there’s no harm in retaining it.

    Note also that, if you have grouped drawing objects included in your content, you’ll see additional (and apparently repetitive) markup, but this must be retained. Portions of the markup for drawing shapes are duplicated when the object is included in a group.

    Important

    When working with text boxes and drawing shapes, be sure to check namespaces carefully before removing them from document.xml. (Or, if you’re reusing markup from another object type, be sure to add back any required namespaces you might have previously removed from document.xml.) A substantial portion of the namespaces included by default in document.xml are there for drawing object requirements.

    Note about graphic positioning

    In the code samples Loading and Writing Office Open XML and Get, Set, and Edit Office Open XML, the text box and shape are set up using different types of text wrapping and positioning settings. (Also be aware that the image examples in those code samples are set up using “in line with text” formatting, which positions a graphic object on the text baseline.)

    The shape in those code samples is positioned relative to the right and bottom page margins. Relative positioning lets you more easily coordinate with a user’s unknown document setup because it will adjust to the user’s margins and run less risk of looking awkward because of paper size, orientation, or margin settings. To retain relative positioning settings when you insert a graphic object, you must retain the paragraph mark (w:p) in which the positioning (known in Word as an anchor) is stored. If you insert the content into an existing paragraph mark rather than including your own, you may be able to retain the same initial visual, but many types of relative references that enable the positioning to automatically adjust to the user’s layout may be lost.

    Working with content controls

    Content controls are an important feature in Word 2013 that can greatly enhance the power of your app for Word in multiple ways, including giving you the ability to insert content at designated places in the document rather than only at the selection.

    In Word, find content controls on the Developer tab of the ribbon, as shown here in Figure 15.

    Figure 15. The Controls group on the Developer tab in Word.


    Types of content controls in Word include rich text, plain text, picture, building block gallery, check box, dropdown list, combo box, date picker, and repeating section.

    • Use the Properties command, shown in Figure 15, to edit the title of the control and to set preferences such as hiding the control container.

    • Enable Design Mode to edit placeholder content in the control.

    If your app works with a Word template, you can include controls in that template to enhance the behavior of the content. You can also use XML data binding in a Word document to bind content controls to data, such as document properties, for easy form completion or similar tasks. (Find controls that are already bound to built-in document properties in Word on the Insert tab, under Quick Parts.)

    When you use content controls with your app, you can also greatly expand the options for what your app can do using a different type of binding. You can bind to a content control from within the app and then write content to the binding rather than to the active selection.

    Note

    Don’t confuse XML data binding in Word with the ability to bind to a control via your app. These are completely separate features. However, you can include named content controls in the content you insert via your app using OOXML coercion and then use code in the app to bind to those controls.

    Also be aware that both XML data binding and Office.js can interact with custom XML parts in your app—so it is possible to integrate these powerful tools. To learn about working with custom XML parts in the Office JavaScript API, see the Additional resources section of this topic.
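    As a quick illustration of that integration point, the following sketch adds a custom XML part from the app. The namespace and element names are made-up placeholders; customXmlParts.addAsync is the standard Office JavaScript API call for this.

    // Sketch: add a custom XML part that Word content controls could later be data-bound to.
    // The namespace and element names below are placeholder values, not a required schema.
    function addCustomerPart() {
        var xml = '<customer xmlns="http://contoso.com/customer"><name>Example Name</name></customer>';
        Office.context.document.customXmlParts.addAsync(xml, function (result) {
            if (result.status === Office.AsyncResultStatus.Succeeded) {
                console.log('Added custom XML part with id: ' + result.value.id);
            }
        });
    }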

    Working with bindings in your Word app is covered in the next section of the topic. First, let’s take a look at an example of the Office Open XML required for inserting a rich text content control that you can bind to using your app.

    Important

    Rich text controls are the only type of content control you can use to bind to a content control from within your app.

    Copy
    <pkg:package xmlns:pkg="http://schemas.microsoft.com/office/2006/xmlPackage">
      <pkg:part pkg:name="/_rels/.rels" pkg:contentType="application/vnd.openxmlformats-package.relationships+xml" pkg:padding="512">
        <pkg:xmlData>
          <Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
            <Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument" Target="word/document.xml"/>
          </Relationships>
        </pkg:xmlData>
      </pkg:part>
      <pkg:part pkg:name="/word/document.xml" pkg:contentType="application/vnd.openxmlformats-officedocument.wordprocessingml.document.main+xml">
        <pkg:xmlData>
          <w:document xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" xmlns:w15="http://schemas.microsoft.com/office/word/2012/wordml" >
            <w:body>
              <w:p/>
              <w:sdt>
                  <w:sdtPr>
                    <w:alias w:val="MyContentControlTitle"/>
                    <w:id w:val="1382295294"/>
                    <w15:appearance w15:val="hidden"/>
                    <w:showingPlcHdr/>
                  </w:sdtPr>
                  <w:sdtContent>
                    <w:p>
                      <w:r>
                      <w:t>[This text is inside a content control that has its container hidden. You can bind to a content control to add or interact with content at a specified location in the document.]</w:t>
                    </w:r>
                    </w:p>
                  </w:sdtContent>
                </w:sdt>
              </w:body>
          </w:document>
        </pkg:xmlData>
      </pkg:part>
     </pkg:package>
    

    As already mentioned, content controls—like formatted text—don’t require additional document parts, so only edited versions of the .rels and document.xml parts are included here.

    The w:sdt tag that you see within the document.xml body represents the content control. If you generate the Office Open XML markup for a content control, you’ll see that several attributes have been removed from this example, including the tag and document part properties. Only essential (and a couple of best practice) elements have been retained, including the following:

    • The alias is the title property from the Content Control Properties dialog box in Word. This is a required property (representing the name of the item) if you plan to bind to the control from within your app.

    • The unique id is a required property. If you bind to the control from within your app, the ID is the property the binding uses in the document to identify the applicable named content control.

    • The appearance attribute is used to hide the control container, for a cleaner look. This is a new feature in Word 2013, as you see by the use of the w15 namespace. Because this property is used, the w15 namespace is retained at the start of the document.xml part.

    • The showingPlcHdr attribute is an optional setting that sets the default content you include inside the control (text in this example) as placeholder content. So, if the user clicks or taps in the control area, the entire content is selected rather than behaving like editable content in which the user can make changes.

    • Although the empty paragraph mark (<w:p/>) that precedes the sdt tag is not required for adding a content control (and will add vertical space above the control in the Word document), it ensures that the control is placed in its own paragraph. This may be important, depending upon the type and formatting of content that will be added in the control.

    • If you intend to bind to the control, the default content for the control (what’s inside the sdtContent tag) must include at least one complete paragraph (as in this example), in order for your binding to accept multi-paragraph rich content.

    Note

    The document part attribute that was removed from this sample w:sdt tag may appear in a content control to reference a separate part in the package where placeholder content information can be stored (parts located in a glossary directory in the Office Open XML package). Although document part is the term used for XML parts (that is, files) within an Office Open XML package, the term document parts as used in the sdt property refers to the same term in Word that is used to describe some content types including building blocks and document property quick parts (for example, built-in XML data-bound controls). If you see parts under a glossary directory in your Office Open XML package, you may need to retain them if the content you’re inserting includes these features. For a typical content control that you intend to use to bind to from your app, they’re not required. Just remember that, if you do delete the glossary parts from the package, you must also remove the document part attribute from the w:sdt tag.

    The next section will discuss how to create and use bindings in your Word app.

    We’ve already looked at how to insert content at the active selection in a Word document. If you bind to a named content control that’s in the document, you can insert any of the same content types into that control.

    So when might you want to use this approach?

    • When you need to add or replace content at specified locations in a template, such as to populate portions of the document from a database

    • When you want the option to replace content that you’re inserting at the active selection, such as to provide design element options to the user

    • When you want the user to add data in the document that you can access for use with your app, such as to populate fields in the task pane based upon information the user adds in the document

    Download the code sample Add and Populate a Binding in Word , which provides a working example of how to insert and bind to a content control, and how to populate the binding.

    Add and bind to a named content control

    As you examine the JavaScript that follows, consider these requirements:

    • As previously mentioned, you must use a rich text content control in order to bind to the control from your Word app.

    • The content control must have a name (this is the Title field in the Content Control Properties dialog box, which corresponds to the Alias tag in the Office Open XML markup). This is how the code identifies where to place the binding.

    • You can have several named controls and bind to them as needed. Use a unique content control name, unique content control ID, and a unique binding ID.

    Copy
    function addAndBindControl() {
        Office.context.document.bindings.addFromNamedItemAsync("MyContentControlTitle", "text", { id: 'myBinding' }, function (result) {
            if (result.status == "failed") {
                if (result.error.message == "The named item does not exist.") {
                    // The named control isn't in the document yet: insert it at the
                    // selection, and then bind to it in the callback.
                    var myOOXMLRequest = new XMLHttpRequest();
                    var myXML;
                    myOOXMLRequest.open('GET', '../../Snippets_BindAndPopulate/ContentControl.xml', false);
                    myOOXMLRequest.send();
                    if (myOOXMLRequest.status === 200) {
                        myXML = myOOXMLRequest.responseText;
                    }
                    Office.context.document.setSelectedDataAsync(myXML, { coercionType: 'ooxml' }, function (result) {
                        Office.context.document.bindings.addFromNamedItemAsync("MyContentControlTitle", "text", { id: 'myBinding' });
                    });
                }
            }
        });
    }
    

    The code shown here takes the following steps:

    • Attempts to bind to the named content control, using addFromNamedItemAsync.

      Take this step first if there is a possible scenario for your app where the named control could already exist in the document when the code executes. For example, you’ll want to do this if the app was inserted into and saved with a template that’s been designed to work with the app, where the control was placed in advance. You also need to do this if you need to bind to a control that was placed earlier by the app.

    • The callback in the first call to the addFromNamedItemAsync method checks the status of the result to see if the binding failed because the named item doesn’t exist in the document (that is, the content control named MyContentControlTitle in this example). If so, the code adds the control at the active selection point (using setSelectedDataAsync) and then binds to it.

    Note

    As mentioned earlier and shown in the preceding code, the name of the content control is used to determine where to create the binding. However, in the Office Open XML markup, the code adds the binding to the document using both the name and the ID attribute of the content control.

    After code execution, if you examine the markup of the document in which your app created bindings, you’ll see two parts to each binding. In the markup for the content control where a binding was added (in document.xml), you’ll see the attribute <w15:webExtensionLinked/>.

    In the document part named webExtensions1.xml, you’ll see a list of the bindings you’ve created. Each is identified using the binding ID and the ID attribute of the applicable control, such as the following, where the appref attribute is the content control ID: <we:binding id="myBinding" type="text" appref="1382295294"/>.

    Important

    You must add the binding at the time you intend to act upon it. Don’t include the markup for the binding in the Office Open XML for inserting the content control because the process of inserting that markup will strip the binding.

    Populate a binding

    The code for writing content to a binding is similar to that for writing content to a selection.

    Copy
    function populateBinding(filename) {
        var myOOXMLRequest = new XMLHttpRequest();
        var myXML;
        myOOXMLRequest.open('GET', filename, false);
        myOOXMLRequest.send();
        if (myOOXMLRequest.status === 200) {
            myXML = myOOXMLRequest.responseText;
        }
        Office.select("bindings#myBinding").setDataAsync(myXML, { coercionType: 'ooxml' });
    }
    

    As with setSelectedDataAsync, you specify the content to be inserted and the coercion type. The only additional requirement for writing to a binding is to identify the binding by ID. Notice how the binding ID used in this code (bindings#myBinding) corresponds to the binding ID established (myBinding) when the binding was created in the previous function.
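    If you prefer not to use the Office.select shortcut, the same binding can be retrieved by its ID from the bindings collection. The following sketch assumes the myBinding binding created earlier and a variable named ooxml that holds the markup to write; getByIdAsync and setDataAsync are the standard Office JavaScript API calls involved.

    // Sketch: retrieve the binding created earlier by its ID and write OOXML to it.
    function populateBindingById(ooxml) {
        Office.context.document.bindings.getByIdAsync('myBinding', function (getResult) {
            if (getResult.status === Office.AsyncResultStatus.Succeeded) {
                getResult.value.setDataAsync(ooxml, { coercionType: 'ooxml' }, function (setResult) {
                    if (setResult.status === Office.AsyncResultStatus.Failed) {
                        console.log('Could not write to the binding: ' + setResult.error.message);
                    }
                });
            }
        });
    }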

    Note

    The preceding code is all you need whether you are initially populating or replacing the content in a binding. When you insert a new piece of content at a bound location, the existing content in that binding is automatically replaced. Check out an example of this in the previously-referenced code sample Add and Populate a Binding in Word, which provides two separate content samples that you can use interchangeably to populate the same binding.

    Many types of content require additional document parts in the Office Open XML package, meaning that they either reference information in another part or the content itself is stored in one or more additional parts and referenced in document.xml.

    For example, consider the following:

    • Content that uses styles for formatting (such as the styled text shown earlier in Figure 2 or the styled table shown in Figure 9) requires the styles.xml part.

    • Images (such as those shown in Figures 3 and 4) include the binary image data in one (and sometimes two) additional parts.

    • SmartArt diagrams (such as the one shown in Figure 10) require multiple additional parts to describe the layout and content.

    • Charts (such as the one shown in Figure 11) require multiple additional parts, including their own relationship (.rels) part.

    You can see edited examples of the markup for all of these content types in the previously-referenced code sample Loading and Writing Office Open XML. You can insert all of these content types using the same JavaScript code shown earlier (and provided in the referenced code samples) for inserting content at the active selection and writing content to a specified location using bindings.

    Before you explore the samples, let’s take a look at few tips for working with each of these content types.

    Important

    Remember, if you are retaining any additional parts referenced in document.xml, you will need to retain document.xml.rels and the relationship definitions for the applicable parts you’re keeping, such as styles.xml or an image file.

    Working with styles

    The same approach to editing the markup that we looked at for the preceding example with directly-formatted text applies when using paragraph styles or table styles to format your content. However, the markup for working with paragraph styles is considerably simpler, so that is the example described here.

    Editing the markup for content using paragraph styles

    The following markup represents the body content for the styled text example shown in Figure 2.

    Copy
    <w:body>
      <w:p>
        <w:pPr>
          <w:pStyle w:val="Heading1"/>
        </w:pPr>
        <w:r>
          <w:t>This text is formatted using the Heading 1 paragraph style.</w:t>
        </w:r>
      </w:p>
    </w:body>
    
    Note

    As you see, the markup for formatted text in document.xml is considerably simpler when you use a style, because the style contains all of the paragraph and font formatting that you otherwise need to reference individually. However, as explained earlier, you might want to use styles or direct formatting for different purposes: use direct formatting to specify the appearance of your text regardless of the formatting in the user’s document; use a paragraph style (particularly a built-in paragraph style name, such as Heading 1 shown here) to have the text formatting automatically coordinate with the user’s document.

    Use of a style is a good example of how important it is to read and understand the markup for the content you’re inserting, because it’s not explicit that another document part is referenced here. If you include the style definition in this markup and don’t include the styles.xml part, the style information in document.xml will be ignored regardless of whether or not that style is in use in the user’s document.

    However, if you take a look at the styles.xml part, you’ll see that only a small portion of this long piece of markup is required when editing markup for use in your app:

    • The styles.xml part includes several namespaces by default. If you are only retaining the required style information for your content, in most cases you only need to keep the xmlns:w namespace.

    • The w:docDefaults tag content that falls at the top of the styles part will be ignored when your markup is inserted via the app and can be removed.

    • The largest piece of markup in a styles.xml part is for the w:latentStyles tag that appears after docDefaults, which provides information (such as appearance attributes for the Styles pane and Styles gallery) for every available style. This information is also ignored when inserting content via your app and so it can be removed.

    • Following the latent styles information, you see a definition for each style in use in the document from which your markup was generated. This includes some default styles that are in use when you create a new document and may not be relevant to your content. You can delete the definitions for any styles that aren’t used by your content.

    Note

    Each built-in heading style has an associated Char style that is a character style version of the same heading format. Unless you’ve applied the heading style as a character style, you can remove it. If the style is used as a character style, it appears in document.xml in a run properties tag (w:rPr) rather than a paragraph properties (w:pPr) tag. This should only be the case if you’ve applied the style to just part of a paragraph, but it can occur inadvertently if the style was incorrectly applied.

    • If you’re using a built-in style for your content, you don’t have to include a full definition. You need only include the style name, style ID, and at least one formatting attribute in order for the coerced OOXML to apply the style to your content upon insertion.

      However, it’s a best practice to include a complete style definition (even if it’s the default for built-in styles). If a style is already in use in the destination document, your content will take on the resident definition for the style, regardless of what you include in styles.xml. If the style isn’t yet in use in the destination document, your content will use the style definition you provide in the markup.

    So, for example, the only content we needed to retain from the styles.xml part for the sample text shown in Figure 2, which is formatted using Heading 1 style, is the following.

    Note

    A complete Word 2013 definition for the Heading 1 style has been retained in this example.

    Copy
    <pkg:part pkg:name="/word/styles.xml" pkg:contentType="application/vnd.openxmlformats-officedocument.wordprocessingml.styles+xml">
      <pkg:xmlData>
        <w:styles xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" >
          <w:style w:type="paragraph" w:styleId="Heading1">
            <w:name w:val="heading 1"/>
            <w:basedOn w:val="Normal"/>
            <w:next w:val="Normal"/>
            <w:link w:val="Heading1Char"/>
            <w:uiPriority w:val="9"/>
            <w:qFormat/>
            <w:pPr>
              <w:keepNext/>
              <w:keepLines/>
              <w:spacing w:before="240" w:after="0" w:line="259" w:lineRule="auto"/>
              <w:outlineLvl w:val="0"/>
            </w:pPr>
            <w:rPr>
              <w:rFonts w:asciiTheme="majorHAnsi" w:eastAsiaTheme="majorEastAsia" w:hAnsiTheme="majorHAnsi" w:cstheme="majorBidi"/>
              <w:color w:val="2E74B5" w:themeColor="accent1" w:themeShade="BF"/>
              <w:sz w:val="32"/>
              <w:szCs w:val="32"/>
            </w:rPr>
          </w:style>
        </w:styles>
      </pkg:xmlData>
    </pkg:part>
    

    Editing the markup for content using table styles

    When your content uses a table style, you need the same relative part of styles.xml as described for working with paragraph styles. That is, you only need to retain the information for the style you’re using in your content—and you must include the name, ID, and at least one formatting attribute—but are better off including a complete style definition to address all potential user scenarios.

    However, when you look at the markup both for your table in document.xml and for your table style definition in styles.xml, you see far more markup than when working with paragraph styles.

    • In document.xml, formatting is applied by cell even if it’s included in a style. Using a table style won’t reduce the volume of markup. The benefit of using table styles for the content is that it makes it easy to update formatting and to coordinate a consistent look across multiple tables.

    • In styles.xml, you’ll see a substantial amount of markup for a single table style as well, because table styles include several types of possible formatting attributes for each of several table areas, such as the entire table, heading rows, odd and even banded rows and columns (separately), the first column, etc.

    Working with images

    The markup for an image includes a reference to at least one part that includes the binary data to describe your image. For a complex image, this can be hundreds of pages of markup and you can’t edit it. Since you don’t ever have to touch the binary part(s), you can simply collapse it if you’re using a structured editor such as Visual Studio 2012, so that you can still easily review and edit the rest of the package.

    If you check out the example markup for the simple image shown earlier in Figure 3, available in the previously-referenced code sample Loading and Writing Office Open XML, you’ll see that the markup for the image in document.xml includes size and position information as well as a relationship reference to the part that contains the binary image data. That reference is included in the <a:blip> tag, as follows:

    Copy
    <a:blip r:embed="rId4" cstate="print">
    

    Be aware that, because a relationship reference is explicitly used (r:embed="rId4") and that related part is required in order to render the image, if you don’t include the binary data in your Office Open XML package, you will get an error. This is different from styles.xml, explained previously, which won’t throw an error if omitted, since the relationship is not explicitly referenced and the relationship is to a part that provides attributes to the content (formatting) rather than being part of the content itself.

    Note

    When you review the markup, notice the additional namespaces used in the a:blip tag. You’ll see in document.xml that the xmlns:a namespace (the main drawingML namespace) is dynamically placed at the beginning of the use of drawingML references rather than at the top of the document.xml part. However, the relationships namespace (r) must be retained where it appears at the start of document.xml. Check your picture markup for additional namespace requirements. Remember that you don’t have to memorize which types of content require what namespaces—you can easily tell by reviewing the prefixes of the tags throughout document.xml.

    Understanding additional image parts and formatting

    When you use some Office picture formatting effects on your image—such as for the image shown in Figure 4, which uses adjusted brightness and contrast settings (in addition to picture styling)—a second binary data part for an HD format copy of the image data may be required. This additional HD format is required for formatting considered a layering effect, and the reference to it appears in document.xml similar to the following:

    Copy
    <a14:imgLayer r:embed="rId5">
    

    See the required markup for the formatted image shown in Figure 4 (which uses layering effects among others) in the Loading and Writing Office Open XML code sample.

    Working with SmartArt diagrams

    A SmartArt diagram has four associated parts, but only two are always required. You can examine an example of SmartArt markup in the Loading and Writing Office Open XML code sample. First, take a look at a brief description of each of the parts and why they are or are not required:

    Note

    If your content includes more than one diagram, they will be numbered consecutively, replacing the 1 in the file names listed here.

    • layout1.xml: This part is required. It includes the markup definition for the layout appearance and functionality.

    • data1.xml: This part is required. It includes the data in use in your instance of the diagram.

    • drawing1.xml: This part is not always required but if you apply custom formatting to elements in your instance of a diagram—such as directly formatting individual shapes—you might need to retain it.

    • colors1.xml: This part is not required. It includes color style information, but the colors of your diagram will coordinate by default with the colors of the active formatting theme in the destination document, based on the SmartArt color style you apply from the SmartArt Tools design tab in Word before saving out your Office Open XML markup.

    • quickStyles1.xml: This part is not required. Similar to the colors part, you can remove this as your diagram will take on the definition of the applied SmartArt style that’s available in the destination document (that is, it will automatically coordinate with the formatting theme in the destination document).

    Tip

    The SmartArt layout1.xml file is a good example of a place where you may be able to trim your markup further but where it might not be worth the extra time (because doing so removes such a small amount of markup relative to the entire package). If you do want to get rid of every last line of markup you can, delete the <dgm:sampData…> tag and its contents. This sample data defines how the thumbnail preview for the diagram will appear in the SmartArt styles galleries. However, if it’s omitted, default sample data is used.

    Be aware that the markup for a SmartArt diagram in document.xml contains relationship ID references to the layout, data, colors, and quick styles parts. You can delete the references in document.xml to the colors and styles parts when you delete those parts and their relationship definitions (and it’s certainly a best practice to do so, since you’re deleting those relationships), but you won’t get an error if you leave them, since they aren’t required for your diagram to be inserted into a document. Find these references in document.xml in the dgm:relIds tag. Regardless of whether or not you take this step, retain the relationship ID references for the required layout and data parts.

    Working with charts

    Similar to SmartArt diagrams, charts contain several additional parts. However, the setup for charts is a bit different from SmartArt, in that a chart has its own relationship file. Following is a description of required and removable document parts for a chart:

    Note

    As with SmartArt diagrams, if your content includes more than one chart, they will be numbered consecutively, replacing the 1 in the file names listed here.

    • In document.xml.rels, you’ll see a reference to the required part that contains the data that describes the chart (chart1.xml).

    • You also see a separate relationship file for each chart in your Office Open XML package, such as chart1.xml.rels.

      There are three files referenced in chart1.xml.rels, but only one is required. These include the binary Excel workbook data (required) and the color and style parts (colors1.xml and styles1.xml) that you can remove.

    Charts that you can create and edit natively in Word 2013 are Excel 2013 charts, and their data is maintained on an Excel worksheet that’s embedded as binary data in your Office Open XML package. Like the binary data parts for images, this Excel binary data is required, but there’s nothing to edit in this part. So you can just collapse the part in the editor to avoid having to manually scroll through it all to examine the rest of your Office Open XML package.

    However, similar to SmartArt, you can delete the colors and styles parts. If you’ve used the chart styles and color styles available in Word 2013 to format your chart, the chart will take on the applicable formatting automatically when it is inserted into the destination document.

    See the edited markup for the example chart shown in Figure 11 in the Loading and Writing Office Open XML code sample.

    You’ve already seen how to identify and edit the content in your markup. If the task still seems difficult when you take a look at the massive Open XML package generated for your document, following is a quick summary of recommended steps to help you edit that package down quickly:

    Note

    Remember that you can use all .rels parts in the package as a map to quickly check for document parts that you can remove.

    1. Open the flattened XML file in Visual Studio 2012 and press Ctrl+K, Ctrl+D to format the file. Then use the collapse/expand buttons on the left to collapse the parts you know you need to remove. You might also want to collapse long parts you need, but know you won’t need to edit (such as the base64 binary data for an image file), making the markup faster and easier to visually scan.

    2. There are several parts of the document package that you can almost always remove when you are preparing Open XML markup for use in your app. You might want to start by removing these (and their associated relationship definitions), which will greatly reduce the package right away. These include the theme1, fontTable, settings, webSettings, thumbnail, both the core and app properties files, and any taskpane or webExtension parts.

    3. Remove any parts that don’t relate to your content, such as footnotes, headers, or footers that you don’t require. Again, remember to also delete their associated relationships.

    4. Review the document.xml.rels part to see if any files referenced in that part are required for your content, such as an image file, the styles part, or SmartArt diagram parts. Delete the relationships for any parts your content doesn’t require and confirm that you have also deleted the associated part. If your content doesn’t require any of the document parts referenced in document.xml.rels, you can delete that file also.

    5. If your content has an additional .rels part (such as chart#.xml.rels), review it to see if there are other parts referenced there that you can remove (such as quick styles for charts) and delete both the relationship from that file as well as the associated part.

    6. Edit document.xml to remove namespaces not referenced in the part, section properties if your content doesn’t include a section break, and any markup that’s not related to the content that you want to insert. If inserting shapes or text boxes, you might also want to remove extensive fallback markup.

    7. Edit any additional required parts where you know that you can remove substantial markup without affecting your content, such as the styles part.

    After you’ve taken the preceding seven steps, you’ve likely cut roughly 90 percent or more of the markup that can be removed, depending on your content. In most cases, this is likely to be as far as you want to trim.

    Regardless of whether you leave it here or choose to delve further into your content to find every last line of markup you can cut, remember that you can use the previously-referenced code sample Get, Set, and Edit Office Open XML as a scratch pad to quickly and easily test your edited markup.

    Tip

    If you update an OOXML snippet in an existing solution while developing, clear temporary Internet files before you run the solution again to update the Open XML used by your code. Markup that’s included in your solution in XML files is cached on your computer.

    You can, of course, clear temporary Internet files from your default web browser. To access Internet options and delete these settings from inside Visual Studio 2012, on the Debug menu, choose Options and Settings. Then, under Environment, choose Web Browser and then choose Internet Explorer Options.

    In this topic, you’ve seen several examples of what you can do with Open XML in your apps for Word. We’ve looked at a wide range of rich content type examples that you can insert into documents by using the OOXML coercion type, together with the JavaScript methods for inserting that content at the selection or to a specified (bound) location.

    So, what else do you need to know if you’re creating your app both for stand-alone use (that is, inserted from the Store or a proprietary server location) and for use in a pre-created template that’s designed to work with your app? The answer might be that you already know all you need.

    The markup for a given content type and the methods for inserting it are the same whether your app is designed to stand alone or to work with a template. If you are using templates designed to work with your app, just be sure that your JavaScript includes callbacks that account for scenarios where referenced content might already exist in the document (as demonstrated in the binding example shown in the section Add and bind to a named content control).

    When using templates with your app—whether the app is resident in the template at the time the user creates the document or the app inserts the template—you might also want to incorporate other elements of the API to help you create a more robust, interactive experience. For example, you may want to include identifying data in a customXML part that you can use to determine the template type, in order to provide template-specific options to the user. To learn more about how to work with customXML in your apps, see the additional resources that follow.
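    As a rough illustration of that idea, the following sketch reads a custom XML part to decide which template the document is based on. The namespace, element name, and template identifiers are hypothetical placeholders; customXmlParts.getByNamespaceAsync and getXmlAsync are the standard Office JavaScript API calls for reading custom XML parts.

    // Sketch: detect a hypothetical template type stored in a custom XML part.
    // The namespace and <templateType> element are assumptions for illustration only.
    function getTemplateType(callback) {
        Office.context.document.customXmlParts.getByNamespaceAsync('http://contoso.com/templateinfo', function (partsResult) {
            if (partsResult.status === Office.AsyncResultStatus.Succeeded && partsResult.value.length > 0) {
                partsResult.value[0].getXmlAsync(function (xmlResult) {
                    // Very simple check; a real app would parse the XML properly.
                    var isInvoice = xmlResult.value.indexOf('<templateType>Invoice</templateType>') > -1;
                    callback(isInvoice ? 'Invoice' : 'Unknown');
                });
            } else {
                callback('None'); // No template info part; the app was likely inserted stand-alone.
            }
        });
    }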

    For more information, see the following resources:

    Changing Site Access Request Email in SharePoint 2013 (Office 365)

    The option to set the email address for SharePoint site access requests has moved around over the last few versions, so I thought I’d post this for those searching through old posts and looking for one about SharePoint 2013.

    This setting determines who will receive an email when a user requests access to a particular site—usually when the user tries to access the site and is denied. The tricky part is that the email address for this request is not related to the site owner permissions; it’s just a string.

    To find the setting, navigate to:

    Gear icon (top-right) > Site settings > Site permissions > Access Request Settings (in the ribbon)

    First use the gear icon in the top-right corner to get to the Site Settings page. If you don’t see the Site Settings link, you probably don’t have sufficient rights to make this change.


    Once there, click on Site permissions.


    This will open the “Permissions: <site name>” page where you can access the Access Request option from the ribbon at the top of the screen. The option you’re changing is “Send all access requests to the following e-mail address.”


    Simply enter the email address you’d like to use for access requests and you’re done.

    Free e-book can help you develop a location-based Windows Store app

    If you need help getting started on developing a location-based Windows Store app, there’s a new, free e-book you can download, “Location Intelligence for Windows Store Apps.”

    Written by Ricky Brundritt, the e-book dives into location intelligence and the different options available for creating location-aware apps for Windows 8.1.

    The first half of the book focuses on the inner workings of Windows Store apps and the location-related tools available (e.g., sensors and the Bing Maps SDK).

    The second half goes through the process of creating location-intelligent apps, with code samples provided in JavaScript, C# and Visual Basic.

    Head over to the Bing Dev Center Team Blog to read more about this e-book.

     

     

    Duet Enterprise and NetWeaver – Features and Benefits for businesses of integrating SAP with SharePoint


    For years organisations have been scratching their heads trying to figure out how to provide collaboration capabilities on data held within SAP systems.

    Many have developed custom solutions and achieved mixed results. We have also seen the rise of Duet 1.x from Microsoft and SAP, which provided 11 specific solutions that surfaced data residing in SAP via Microsoft Office. These were good solutions, but limited due to their lack of extensibility.

    Hence the excitement over Duet Enterprise: the benefits that have been realised are exactly what everyone has been looking for. InfoSys Technologies have just released a case study with Microsoft that discusses their Duet Enterprise project, and it is an interesting read, particularly for the benefits realised:

    1. Minimal Development Effort
    2. Higher Adoption
    3. Enhanced Productivity

    How can these benefits be real?

    Well, to start with, Duet Enterprise provides all the relevant plumbing to blend both SAP and SharePoint “Platforms”. SAP and Microsoft position Duet Enterprise as the “Foundation” layer between the two platforms.

    Figure: Duet Enterprise architecture

    Architecturally speaking, the Duet Enterprise add-ons must be installed on the SAP NetWeaver 7.02 servers and on the SharePoint 2010 farm. The SAP NetWeaver 7.02 servers act as the gateway to the SAP backend systems and can support any version of SAP system. Duet Enterprise is built on SharePoint 2010 and makes use of Business Connectivity Services, which enables full Create, Read, Update, Delete (CRUD) operations on data residing in external systems. The options for the user experience are the same as for any SharePoint 2010 solution: mobile, Office, or a browser. There are no SAP or Duet Enterprise clients.

    Ok, but what does this do?

    This provides organisations with a jointly supported (Microsoft and SAP) technical approach and OOB tools to address the common challenges faced when integrating SAP systems, such as:

    Authentication: Using Claims-Based authentication to ensure SSO is available between SharePoint 2010 sites and SAP backend systems.

    Authorisation: Able to secure objects in SharePoint using SAP roles, supporting concepts such as a manager seeing more than an employee.

    Monitoring: Duet Enterprise SharePoint health rules are provided to proactively monitor your Duet Enterprise solutions. Also available is the Duet Enterprise Management Pack for System Center Operations Manager; a quick video here shows it in action.

    It is also worth taking a look at the Duet Enterprise Architecture for more information.

    These are just some of the behind-the-scenes aspects of Duet Enterprise that essentially provide a jump start in any SAP/SharePoint integration project. However, there is also another layer that exemplifies how data from SAP can be surfaced through SharePoint 2010 and Office 2010. (Office 2010 isn’t a prerequisite, but it certainly gives a better experience!)

    Currently included are:

    • Duet Enterprise Workflow

    • Duet Enterprise Profile

    • Duet Enterprise Collaboration

    • Duet Enterprise Sites

    • Duet Enterprise Reporting

    All of these provide a way to jump-start development of a solution that can be used OOB or easily extended to suit specific requirements. The result is that organisations are able to surface data typically locked away in backend SAP systems to a much wider audience, and in an inexpensive way. I may blog in the future on what each of these is and how to extend them.

    Once deployed, IT departments have a “Foundation” in place to support future extensibility. Once users start seeing what is possible, I fully expect the floodgates to open with requests for composite applications.

    What else do you get?

    1. Long-term product roadmap commitment from SAP and Microsoft
    2. Platform for innovation: broad ecosystem of ISV and service partners offering Business Pack Solutions
    3. Partner solutions/apps certified by SAP

    How To : Get data from Windows Azure Marketplace into your Office application

    This post walks through a published app for Office, showing along the way everything you need to get started building your own app for Office that uses a data service from the Windows Azure Marketplace.

    Ever wondered how to get premium, curated data from Windows Azure Marketplace, into your Office applications, to create a rich and powerful experience for your users? If you have, you are in luck.

    Introducing the first ever app for Office that builds this integration with the Windows Azure Marketplace – US Crime Stats. This app enables users to insert crime statistics, provided by DATA.GOV, right into an Excel spreadsheet, without ever having to leave the Office client.

    One challenge faced by Excel users is finding the right set of data, and apps for Office provides a great opportunity to create rich, immersive experiences by connecting to premium data sources from the Windows Azure Marketplace.

    What is the Windows Azure Marketplace?

    The Windows Azure Marketplace (also called Windows Azure Marketplace DataMarket or just DataMarket) is a marketplace for datasets, data services and complete applications. Learn more about Windows Azure Marketplace.

    This blog article is organized into two sections:

    1. The U.S. Crime Stats Experience
    2. Writing your own Office Application that gets data from the Windows Azure Marketplace

    The US Crime Stats Experience

    You can find the app on the Office Store. Once you add the US Crime Stats app to your collection, you can go to Excel 2013, and add the US Crime Stats app to your spreadsheet.

    Figure 1. Open Excel 2013 spreadsheet


    Once you choose US Crime Stats, the application is shown in the right pane. You can search for crime statistics based on City, State, and Year.

    Figure 2. US Crime Stats app is shown in the right task pane


    Once you enter the city, state, and year, click ‘Insert Crime Data’ and the data will be inserted into your spreadsheet.

    Figure 3. Data is inserted into an Excel 2013 spreadsheet


    What is going on under the hood?

    In short, when the ‘Insert Crime Data’ button is chosen, the application takes the input (city, state, and year) and makes a request to the DataMarket services endpoint for DATA.GOV in the form of an OData Call. When the response is received, it is then parsed, and inserted into the spreadsheet using the JavaScript API for Office.
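    The app’s own source isn’t reproduced here, but the overall flow looks roughly like the following sketch. The service URL, query, and field names are placeholders (and a production app would normally route the request through its own web service rather than calling the data service directly from the task pane); the insertion itself uses the documented setSelectedDataAsync call with matrix coercion.

    // Sketch of the flow: query an OData feed, shape the rows into a matrix,
    // and write the matrix into the spreadsheet at the selection.
    // The URL, query, and property names are placeholders, not the app's real values.
    function insertCrimeData(city, state, year) {
        var url = 'https://api.datamarket.azure.com/data.gov/Crimes/CityCrime?' +
            '$filter=City eq \'' + city + '\' and State eq \'' + state + '\' and Year eq ' + year +
            '&$format=json';
        var request = new XMLHttpRequest();
        request.open('GET', url, false); // The samples in this topic use synchronous requests for brevity.
        request.send();
        if (request.status === 200) {
            var rows = JSON.parse(request.responseText).d.results;
            var matrix = [['City', 'State', 'Year', 'Violent crime']];
            for (var i = 0; i < rows.length; i++) {
                matrix.push([rows[i].City, rows[i].State, rows[i].Year, rows[i].ViolentCrime]);
            }
            Office.context.document.setSelectedDataAsync(matrix, { coercionType: 'matrix' });
        }
    }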

    Writing your own Office application that gets data from the Windows Azure Marketplace

    Prerequisites for writing Office applications that get data from Windows Azure Marketplace

    How to write Office applications using data from Windows Azure Marketplace

    The MSDN article, Create a Marketplace application, covers everything necessary for creating a Marketplace application, but below are the steps in order.

    1. Register with the Windows Azure Marketplace:
      • You need to register your application first on the Windows Azure Marketplace Application Registration page. Instructions on how to register your application for the Windows Azure Marketplace are found in the MSDN topic, Register your Marketplace Application.
    2. Authentication (a minimal sketch covering this step follows the list)
    3. Receiving data from the Windows Azure Marketplace DataMarket service (also covered in the sketch below)
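    The MSDN articles linked above cover the registration and authentication details; as a rough sketch of steps 2 and 3, Marketplace data services are OData feeds that can be queried over HTTPS, typically using Basic authentication with your Marketplace account key as the password. The endpoint URL, dataset path, and response fields below are placeholders, and as noted earlier, a production app would usually make this call from its own server-side service rather than directly from the task pane.

    // Sketch: authenticate to a Marketplace OData service with an account key (Basic auth)
    // and read the JSON response. URL, dataset, and field names are placeholder values.
    function queryMarketplace(accountKey) {
        var url = 'https://api.datamarket.azure.com/SomePublisher/SomeDataset/Items?$top=10&$format=json';
        var request = new XMLHttpRequest();
        request.open('GET', url, false);
        // Basic authentication: any user name, with the account key as the password.
        request.setRequestHeader('Authorization', 'Basic ' + btoa('user:' + accountKey));
        request.send();
        if (request.status === 200) {
            return JSON.parse(request.responseText).d.results; // Rows to hand to code like the insertCrimeData sketch above.
        }
        return [];
    }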

    New “Filter My ListView” SharePoint Web Part and App now available for SP 2010 & 2013 On-premise and Office 365!!

    What is it?

    The “Filter My ListView” Web Part / App is a SharePoint web part that enables you to create a custom filter to find information in a SharePoint list or document library.


    Why do you need it?

    When working with SharePoint and with large lists or document libraries containing 100K+ items, users frequently find that there is no usable tool for filtering data.

    SharePoint lets us create views, but their functionality doesn’t always meet users’ requirements, and the most common complaint is that a list view is static and users can’t modify it on the fly.

    On the other hand, the “Filter My List” web part can only filter data represented in the current view’s columns, and the user can’t apply multiple filters to the list or use other criteria (date ranges, filter operators, and so on).

    All this means that a custom solution is needed to address these limitations.

    Usage

    The “Filter My ListView” Web Part / App is a simple-to-use SharePoint list view filter. It enables you to create a custom filter form composed from all list fields (not only the fields contained in the current list view).

    Supported field types

    • Simple text (jQuery UI is used to provide autocomplete)

    • Text with options, which enables the user to select the filtering type

    • Date

    • DateRange

    • Boolean

    • DropDown list representing the unique values of a field

    • User or Group
    • Taxonomy Term Picker

    • Multi-select CheckBoxList

    The “Filter My ListView” Web Part / App builds a filter form using different types of controls:

    • TextBox. “Contains” criteria filter
    • TextBox with autocomplete
    • TextBox with options. Allows user to choose filter criteria that can be one of these:
      • Equals
      • Not equals
      • Contains
      • Begins with
    • Date
    • Date Range
    • DropDownList
    • DropDownList with multiple selection
    • People picker
    • MetaData picker

    The relation between each field type and the filter types it supports is summarised in a matrix.

    Contact me now through my blog, https://sharepointsamurai.wordpress.com or at tomas.floyd@outlook.com for this and more SharePoint and Office 365 custom developed Web Parts and Apps

    Tip: Bypass WebProxy for BCS service application in Duet Enterprise landscape

    Setting up a fresh Duet Enterprise landscape, I was confronted with an issue trying to import BDC Models from the SAP Gateway system into SharePoint BCS:
    Application definition import failed. The following error occurred: Error loading url: “http://….”. This normally happens when url does not point to a valid discovery document, or XSD schema.
    Using Fiddler I detected that the cause of the problem is a “(407) Proxy Authentication Required” issue: “The ISA server requires authorization to fulfill the request. Access to proxy filter is denied.” Although I did set up a rule in Windows Credential Manager for automatic authentication against the web proxy, this is not picked up in the context of the BCS service application running as an autonomous process. As it turns out, by default .NET web applications and services will attempt to use a proxy, even if they don’t need one.
    So how then to resolve this situation? Multiple approaches are possible:

    1. Explicitly set the proxy credentials for the BCS application process. It is not possible to set the proxy credentials directly in the web.config of 14hive\webservices\bdc. Instead you must use a two-step delegation approach: refer in the web.config to a custom proxy module implementation, and build the custom proxy to explicitly set the proxy credentials:
      // The proxy address below is a placeholder; replace it with your own proxy server.
      using System;
      using System.Net;

      namespace ByPassProxyAuthentication
      {
          public class ByPassProxy : IWebProxy
          {
              // Explicit credentials for authenticating against the web proxy.
              public ICredentials Credentials
              {
                  get { return new NetworkCredential("username", "password", "domain"); }
                  set { }
              }

              public Uri GetProxy(Uri destination) { return new Uri("http://proxyserver:8080"); }

              public bool IsBypassed(Uri host) { return false; }
          }
      }

      Then register the custom proxy module in the web.config:

          <system.net>
            <defaultProxy enabled="true" useDefaultCredentials="false">
              <module type="ByPassProxyAuthentication.ByPassProxy, ByPassProxyAuthentication" />
            </defaultProxy>
          </system.net>
    2. Disable usage of (default)proxy altogether for the BCS application process. This is a viable approach in case the consumed external systems are all within the internal company network infra.
      <system.net>  
        <defaultProxy  
          enabled="false"  
          useDefaultCredentials="false"/>  
        </system.net>
    3. Disable usage of (default)proxy for specific addresses for the BCS application process.
      <system.net>
          <defaultProxy>
              <bypasslist>
                  <add address="[a-z]+\.contoso\.com" />
                  <add address="192\.168\..*" />
                  <add address="Netbios name of server" />
              </bypasslist>
          </defaultProxy>
      </system.net>

      The first entry bypasses the proxy for all servers in the contoso.com domain; the second bypasses the proxy for all servers whose IP addresses begin with 192.168. The third bypass entry is for a server referenced by its NetBIOS name.

    4. Disable usage of the proxy for specific addresses at the system level. This is in fact the simplest approach: just disable proxy usage for certain URLs for all processes on the system. That is also the potential disadvantage; it may not be acceptable to disable proxy usage for all processes. You disable the proxy via IE \ Internet Options \ Connections \ LAN Settings \ Advanced \ Proxy Server \ Exception <Do not use proxy server for addresses beginning with>.

    Example of how to use the SAP NetWeaver Gateway in building a Cloud App

    Overview

    SAP provides a tool called SAP NetWeaver Gateway that makes it possible to expose SAP application data as an OData service. This OData service can then be used by a Cloud Business App (CBA) to create custom line-of-business apps. SAP has several sample gateway services you can use for testing and app building. For our example, we will use the SAP Enterprise Procurement Model (EPM) service. Read the SAP documentation to learn how to get access to the EPM service and other sample services from SAP. Be aware that these sample services are read-only; however, NetWeaver Gateway does support read-write services.

    Our SAP CBA app will be based on a fictional company that sells computers and accessories. This company has several locations worldwide, including a distribution branch that we will be building a line of business app for, named Contoso Shipping Management. Specifically, our app will help the branch manager of Contoso Shipping Management with their daily tasks. The branch manager routinely views product information in the system and adds supplemental production information that is specific to their branch (such as the item location and whether items are out of stock).

    Define the data model

    Begin by creating a CBA app in Visual Studio. Choose the Cloud Business App project template under the Office/SharePoint > Apps node.

    Attach to SAP Data Source

    When you have created the app, attach it to the SAP service.

    1. In the Server Explorer, under the Server project, choose Data Sources > Add Data Source.
    2. In the Attach Data Source Wizard, notice the option to select SAP as a data source; after selecting it, choose Next.
      Figure 1. Select SAP in the Attach Data Source Wizard
    3. On the Enter Connection Information page, enter the URL of the SAP EPM service along with the credentials that you received after signing up for access to the test feeds; choose Next. Although it is possible to select None for the authentication type, SAP feeds are typically configured to require authentication (CBA apps currently support connecting to SAP using basic authentication). For more information, see the Authentication section listed at the end of this post.
      Figure 2. Enter connection information in the Attach Data Source Wizard
    4. On the Choose your Entities page, select the BusinessPartner and Product entities and rename the data source to SAP_EPM_Service; choose Finish.
      Figure 3. Select the BusinessPartner and Product entities in the Attach Data Source Wizard

    As a result, you will now see the SAP_EPM_Service added as a data source to your Server project including the BusinessPartner and Product entities that you selected.

    Figure 4. Entity Designer showing the Product entity

    You can use the Ctrl + Up Arrow/Down Arrow keys to change the order of the properties. It is useful to define the desired order on the entity, so that later when screens are created, the fields on the screen will automatically be added in the same order. For example, you may change the order of the properties so that ProductId, Name, ProductURL, and Description appear first.

    One feature of an SAP data source within a CBA is the recognition of certain SAP-specific annotations that can adorn entity properties within the service. Specifically, the annotations that will be recognized by a CBA are those that have the sap:semantics value set to “email”, “tel”, or “url”. The BusinessPartner entity that was selected in the Attach Data Source Wizard happens to have properties that demonstrate all three of these annotations. You can view the semantic annotations by viewing the $metadata from the SAP feed.

    https://sapes1.sapdevcenter.com/sap/opu/odata/sap/ZGWSAMPLE_SRV/$metadata

    clip_image008

    Viewing the BusinessPartner entity in the Entity Designer, observe that the EmailAddress, PhoneNumber, and WebAddress fields have their respective types set to the Email Address, Phone Number, and Web Address business types.

    clip_image010 Figure 5. Properties on the BusinessPartner entity have been set to the appropriate business type

    Extend the Product Entity Properties

    For our line of business app, we need to track some additional product information that is specific to our branch, Contoso Shipping Management. With a CBA, we can easily extend an entity's properties by relating data from the internal database of the app with data from an external data source. Furthermore, CBAs support relating data between external data sources, such as SharePoint/Office 365 and SAP.

    Add a Relationship

    In our example, we would like to track two additional pieces of product information: the item location and whether a product is out of stock. First, we need to add an entity to the internal database of our app that will be used to store this additional information. Second, we will relate this entity to the Product entity (that exists in the SAP data source) using a one-to-one or zero-to-one relationship.

    1. To add an entity in the internal database, choose the Data Sources node in the Solution Explorer and select Add Table.
    2. Rename the table to ProductDetail and add the following properties: OutOfStock and BackroomLocation. Clear the Required check box for these properties.
      clip_image011 Figure 6. The ProductDetail entity
    3. Add a relationship between Product and ProductDetail. To do this, open Product in the entity designer and choose the Add: Relationship button.
      clip_image013 Figure 7. Adding a relationship
    4. In the Add New Relationship dialog box, add a relationship so that each Product can have one ProductDetail (and a ProductDetail must have a Product). Choose OK to close the dialog box.
      clip_image014 Figure 8. Configuring the relationship

    Create the client screens

    Now that the data model is defined, add some screens to the app. While working with the screens, remember that the sample SAP service that we are using is read-only. As a result, the only data that we can edit is the ProductDetail entity because it is stored in the internal database of the app. If instead we were using a read-write SAP service, we would be able to edit all of the information on these screens and automatically save the data back to SAP.

    Create the Common Screen Set

    1. Choose the Screens node in the Solution Explorer and select Add Screen.
    2. In the Add New Screen dialog box, select the Common Screen Set template and set the Screen Data to SAP_EPM_Service.ProductCollection. Finally, choose OK to close the dialog box.
      clip_image016 Figure 9. Adding a new Common Screen Set

      As a result, you will now see a ProductCollection folder created in the Solution Explorer that contains three screens: AddEditProduct, BrowseProductCollection, and ViewProduct.

    3. Since we defined a one-to-one or zero-to-one relationship between Product and ProductDetail, we need to add code that automatically creates a new ProductDetail entity when the AddEditProduct screen is opened. Choose the AddEditProduct screen from the Solution Explorer, choose the Write Code drop-down in the Screen Designer toolbar and choose created.
      clip_image018 Figure 10. Writing “created” code on the AddEditProduct screen

      To create a ProductDetail instance for this Product instance, use the following code.

      myapp.AddEditProduct.created = function (screen) {
        if (!screen.Product.ProductDetail) {
          var productDetail = myapp.activeDataWorkspace.ApplicationData.ProductDetails.addNew();
          productDetail.Product = screen.Product;
        }
      };

      Now when the AddEditProduct screen is opened, a related ProductDetail is created, if one does not already exist.

    4. To make the fields that were added from the ProductDetail entity more prominent on the screen, move the Out of Stock and Backroom Location controls to the top of the right Rows Layout group on this screen. Do the same on the ViewProduct screen as well. Notice that you can also change other appearance properties on controls, such as the font. There are many other properties that you can set in the screen designer to customize the screens.
    5. Also on the ViewProduct screen, drag the Supplier field out under our ProductDetail controls to show data from the related BusinessPartner entity. This creates a group called Supplier (with properties from the related BusinessPartner entity). For this example, remove all the fields from this group except Email Address, Phone Number, and Web Address. Notice that these controls appear respectively as an Email Viewer, Phone Viewer, and Web Address Viewer (due to the annotations feature described earlier).
      clip_image020 Figure 11. Control layout
    6. Finally, we want to display the Product images on the ViewProduct screen. First change the Product Pic Url control to be an Image control. To do this, choose the ViewProduct screen in the Solution Explorer, then find the Product Pic Url control on the screen and change it from a Text control to an Image control.
      clip_image022 Figure 12. Changing Product Pic Url to an Image control

      Because this SAP service stores the image URLs in a relative format, we need to write more code to set the full URL to the image. With the Product Pic Url control still selected, choose the Edit PostRender Code link in the Properties window.

      clip_image024 Figure 13. Properties of the Product Pic Url control

      Add the following code to the PostRender method.

      myapp.ViewProduct.ProductPicUrl_postRender =  function (element, contentItem) {
        // add the URL of our SAP server to the relative ProductPicUrl
        var totalUri = "https://sapes1.sapdevcenter.com" + contentItem.value;
        $(element).find("img").attr("src", totalUri);
      };

    Run the app

    Now, run the app (press F5).

    If prompted, enter your SharePoint credentials. When the app starts, also choose Trust It (if prompted).

    Notice the following:

    • The app home screen allows you to browse Product data from the attached SAP data source.
    • When you choose a Product, the detailed Product information is displayed—this includes fields from both the attached SAP data source and the fields defined on the intrinsic ProductDetail entity.
      clip_image026 Figure 14. The ViewProduct screen
    • On the ViewProduct screen, you can choose the Edit button to open the AddEditProduct screen. Since the particular SAP service that the app is accessing is a read-only service, the fields defined on the SAP Product entity cannot be updated, but those defined on ProductDetail can be. If your app is accessing a read-only service, it is a good idea to remove the Add button on the ViewProduct screen and make the controls on the AddEditProduct screen “view” controls instead of “edit” controls.
      clip_image028 Figure 15. The AddEditProduct screen

    Additional notes

    Authentication

    SAP can be configured with a variety of authentication providers. For this release, we support HTTP Basic authentication. Basic authentication is enabled on most NetWeaver Gateway installations, and is easy to configure in both test and production. If you find that CBAs aren’t working with your SAP environment, let us know how we can support you better in the future.

    For our debut of SAP support, we wanted to enable a complete read and write scenario with SAP data. Therefore, in addition to basic authentication support, we’ve also implemented the session and CSRF token handling that SAP requires to be able to modify SAP data via the NetWeaver Gateway. This means that your CBAs will be able to write changes back to SAP if your SAP feeds support it. Don’t worry—when you attach to SAP data, we negotiate the SAP sessions and tokens automatically. There is nothing for you to configure.

    Non-addressable entities

    If your data fails to load and you see a diagnostics error similar to the following, it’s because you’re attempting to navigate directly to an SAP entity that is non-addressable.

    Error: The SalesOrderLineItemCollection is not addressable. Please use the Navigation Property via the SalesOrder Collection or Entity.

    For example, in the EPM service, the SalesOrderLineItemCollection entity set is marked as non-addressable, which is evident in the service root document (for example, https://sapes1.sapdevcenter.com/sap/opu/odata/sap/ZGWSAMPLE_SRV).

    clip_image030

    This means that SalesOrderLineItemCollection is a child of SalesOrderCollection and that you can only access it by navigating to it through the parent.

    To solve this, be sure that you do not have a Browse Data Screen that is bound directly to a non-addressable entity (for example, SalesOrderLineItem). Instead, bind the Browse Data Screen to the parent (for example, SalesOrder) and create a View Details Screen that displays the SalesOrderLineItems data for the selected SalesOrder. This is done for you if you use the Common Screen Set template and select the parent as the primary entity for the Browse Data Screen.

    One design to rule them all – Responsive design

    In the last few years, the use of mobile devices to surf the web has increased significantly.

    According to some researchers, by 2015, more mobile devices than desktop computers will be used to access the web. Mobile devices come in different sizes and capabilities. And since the desktop experience just isn’t good for mobile users, what are the options for improving the experience of mobile users on your website?

    Improving the user experience across devices

    Optimizing the user experience of a website across different devices is a complex process. Not only do you have to take into account the screen resolution of each device, but you also need to consider its capabilities (such as touch or pointer-based) and device size (1024×768 might be clearly readable for everyone on a 20-inch monitor but might result in a very bad experience on a 5-inch screen).

    When you are planning improvements to the user experience of your website on mobile devices, there is no silver bullet approach. You have to research who your users are, which devices they use, and what they are trying to achieve on your website.

    You also have to have clear goals for what purpose your website serves and how it should guide your visitors in the process of becoming your customers.

    You have several options for improving the user experience of a public-facing website. Which one you choose depends on the different factors that apply to your scenario.

    Mobile websites

    In the past, when web technology wasn't as sophisticated as it is nowadays, it was a common practice to provide mobile users with a separate mobile website to optimize their experience. Hosted on a separate URL, such as http://m.contoso.com, the mobile site would have a user experience optimized for mobile devices.

    In some scenarios, organizations would even go a step further and optimize the copy on the mobile website. When a user navigated to the website using a mobile device, the main website would detect the use of a mobile device and automatically redirect the visitor to the mobile version.
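    To give an idea of what such detection looked like, here is a deliberately simplistic, hypothetical client-side sketch (real implementations were usually done server-side and checked far more device signatures; m.contoso.com is just the placeholder host from the example above):

    // Hypothetical sketch: redirect obvious mobile browsers to the separate mobile site
    if (/iPhone|iPad|Android|Windows Phone/i.test(navigator.userAgent)) {
        window.location.replace("http://m.contoso.com" + window.location.pathname);
    }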

    It’s not that hard to imagine that building and maintaining two different sites is not only costly but also time consuming. Every update has to be done separately. Even then, with the diversity of today’s mobile devices, the question remains whether a single mobile website would suffice or whether you would need several websites to reach the whole spectrum of your customers.

    Being able to reuse the content across the main and mobile websites simplifies the content management process. But the need to separately maintain the functionality of both websites makes it hard to justify this approach in most scenarios.

    Mobile apps

    One of the recent developments of the mobile market is the increased popularity of companion apps. By using the native capabilities of specific devices, you can build rich mobile apps to support different use cases. There is no better user experience on a mobile device than the native one offered by the device itself. For more information, see Build mobile apps for SharePoint 2013.

    But is it realistic to build separate apps for all the different scenarios and for all the different mobile devices used to navigate the website? Although mobile apps are of great value for supporting specific use cases, there is still the need to access the information on the website from a mobile device in a user-friendly way.

    Responsive web design

    Instead of building separate mobile sites for mobile devices, what if we could have one website that automatically adapts itself to the particular device?

    Responsive web design is a concept based on the ability to separate the design from the content on a website. Using the CSS media queries capability implemented in all modern browsers, and based on the screen dimensions of the specific device, you can load different style sheets to ensure that the website is presented in a user-friendly manner. And because CSS has its limitations, you can use JavaScript to further optimize the interface and interaction of a website on mobile devices. For more information, see Implementing your responsive designs on SharePoint 2013.
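    As a rough illustration of the JavaScript side (a minimal sketch, not tied to any specific SharePoint API; the breakpoint value is an arbitrary example), modern browsers expose the same media query logic to script through window.matchMedia:

    // Run different behaviour depending on whether a media query matches
    var smallScreen = window.matchMedia("(max-width: 768px)");

    function applyLayout(mq) {
        if (mq.matches) {
            // small screens: enable the simplified, touch-friendly behaviour
        } else {
            // larger screens: enable the full desktop behaviour
        }
    }

    applyLayout(smallScreen);
    // Re-evaluate whenever the window crosses the breakpoint
    smallScreen.addListener(applyLayout);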

    From the search engine optimization (SEO) perspective, responsive web design is the recommended way to optimize public-facing websites for mobile devices. After all, since the same HTML is sent to every device, an Internet search engine only has to index the content once and can be confident that the indexed results are valid for every device.

    Implementing responsive web design on a public-facing website is relatively easy assuming you start planning for it from the beginning. The great advantage of responsive web design above other approaches is that you maintain your website once to support a variety of audiences, and the different experiences are future-proof as they depend on the devices’ dimensions rather than their identity.

    The following figures show how the sample Contoso Electronics website is displayed on different devices using responsive web design. Figure 1 shows a screen shot taken on a desktop device.

    Figure 1. The Contoso Electronics website displayed on a desktop device

    Figure 2 shows how the Contoso Electronics website looks on different mobile devices.

    Figure 2. The Contoso Electronics website displayed on mobile devices

    SharePoint 2013 device channels

    One of the new capabilities of SharePoint 2013 is device channels. You can use device channels to optimize how a website is displayed on different devices. By defining different channels and associating different devices with them, you can use different master pages to optimize how the website is presented to the user.

    Figure 3 shows a sample configuration of device channels for a public-facing website built with SharePoint 2013.

    Figure 3. Device channels configured for a public-facing website built on SharePoint 2013

    Whereas responsive web design uses a device’s screen size to determine the presentation layer, device channels in SharePoint 2013 use the identity of the browser on the particular device to decide which presentation style to use.

    Depending on how many different devices your site visitors use, managing the different devices and experiences can become complex. By using device channels, you get more flexibility in controlling the markup of your website for the different devices. Another benefit of using device channels is that you can serve different content to different devices, whereas the same content is served when using responsive web design. With device channels, you can apply additional optimizations to your website, such as resizing images and videos server-side using the renditions capability, which further improves the performance and user experience of your website. For more information, see How to: Manage image renditions in SharePoint 2013.

    With all the different options at our disposal, which one should we use to get the best results?

    Improving the user experience of a public-facing website in SharePoint 2013

    First and foremost, it’s important to note that SharePoint 2013 supports all the methods mentioned above for improving user experience on mobile devices.

    Whether you’re looking at building a separate website for mobile users, supporting certain use cases with a mobile app, implementing responsive web design, or using device channels, it can all be implemented in your website on top of SharePoint 2013.

    Not only does SharePoint 2013 not stand in your way, but it also supports you in implementing some of those improvements.

    For example, using the cross-site publishing capability, you can easily publish the centrally managed content on both the main and the mobile websites. With the Search REST API, you can have your content published in your mobile app, and if you’re looking at optimizing the presentation of your website across different devices, SharePoint 2013 offers plenty of features to help you.

    With all these techniques at your disposal, it is up to you to decide which method, or combination of methods, is the best choice for what you’re trying to achieve. While you might be interested in supporting a particular complex process with a dedicated mobile app, it might still be of added value to ensure that everyone, regardless of their device, can access all the information on your website.

    In most scenarios, it’s easy to choose whether or not a particular optimization technique offers added value. A slightly more difficult choice, partly due to the similarity of both methods, is whether you should use responsive web design or device channels to optimize the presentation of your website for mobile devices.

    Responsive web design and device channels comparison

    Responsive web design and the SharePoint 2013 device channels capability are similar in how they let you optimize a single website to be displayed in a user-friendly way on different devices. Despite this similarity, there are a few important differences between both approaches. Table 1 presents a comparison of the different properties of both approaches.

    Device channels | Responsive web design
    Device management | Property management
    Different HTML for every channel | Same HTML for every device
    More management (support for new devices) | Future proof (device size)
    More flexibility | Limited by CSS support and capabilities
    Custom Vary-By User Agent response header required | Preferred by Internet search engines
    Table 1. Comparison of device channels and responsive web design

    Applying user experience

    First of all, there is a difference in how both approaches determine which user experience should be applied for the particular user. Responsive web design uses the size of the screen to determine how the content should be laid out in the browser’s window. Device channels, on the other hand, use the identity of the browser to load the suitable channel.

    While responsive web design can cause different experiences to be loaded depending on the size of the browser window, device channels will always load the same experience for the same device regardless of the browser window size. Using device channels can have great advantages, for example, from the troubleshooting point of view where the user and the helpdesk employee would see the same interface despite the possible differences in their screen resolutions or browser window sizes.

    Page markup

    Another difference between device channels and responsive web design is how the page contents are served. Responsive web design changes only the presentation layer of the website. Although you can hide some pieces of the page in the browser using CSS, they are still present in the website’s code and therefore loaded. When using device channels, you can use different master pages to ensure that only the relevant markup is served to users. Additionally, you can use the device channel panels to further control the content elements loaded on specific pages.

    Although device channels allow for better control of the rendered HTML and therefore optimized performance of the website, more effort is required to ensure that Internet search engines will properly deal with all the different versions of the website presented to different devices. You can achieve this by using the Vary-By User Agent response header, but it has to be done manually.

    Future-proofing

    Responsive web design uses the size of the browser window to distinguish between the different experiences. This is a robust approach, and the chances are low that a new device will appear on the market that offers a poor user experience despite the configured breakpoints. If that did happen, it would most likely be because of some specific capability of the device, and such cases are rare.

    SharePoint 2013 device channels are based on the identity of the browser used to open the website. There are two challenges with this approach. First of all, in some situations it might be impossible to distinguish between the same browser installed on the same operating system but on two devices with distinct capabilities. Second, if a new device appears on the market, you would have to verify that this device is assigned to the right device channel on your website.

    Choosing the right approach for optimizing the user experience

    Although responsive web design and device channels are very similar, their capabilities differ and they have a different impact when used to optimize a website for mobile devices. Because each has its own strengths, choosing between the two approaches is often difficult. Why not combine both approaches to get the best of what they offer?

    Combining responsive web design and device channels

    An interesting scenario worth considering is to combine responsive web design and SharePoint 2013 device channels to benefit from the strengths of both approaches.

    When combining responsive web design and device channels, you could use responsive web design to create the baseline cross-device experience. Depending on your design for the different breakpoints, using responsive web design could be good for the 80%, or maybe even 90%, of the optimizations. The remainder—whether they’re caused by how the web design changes between the breakpoints or by the capabilities of the different devices that should be supported—could be covered by device channels and device channel panels.

    By using responsive web design to build the baseline for the cross-browser experience, we benefit from its future-proof nature and robustness. For the specific exceptions, we can rely on the granular control that SharePoint device channels offer.

    Using jQuery Mobile and ASP.NET to Pass Data from one Page to another

    ASP.NET developers working on either Web Forms or ASP.NET MVC can integrate jQuery Mobile into their Web sites to create rich mobile Web apps. jQuery Mobile is a lightweight JavaScript framework for developing cross-platform mobile/device Web applications.

    In mobile application development, you as a developer must think about how to make the best possible use of the available space. For this reason, the UI sometimes needs to be divided into separate pages. In such cases, you may need to transfer values entered on one page to a second page for further processing.

    Consider a scenario where the end user is asked to select the Product Category on Page 1 and based upon that, Page 2 displays products under that category. To get this done, values must be passed from page 1 to page 2.

    In jQuery Mobile, the $.mobile object provides the changePage() method, which accepts the page URL as its first parameter and the values to be transferred, as JSON data, in its second parameter. In this article, we will see how values are passed from one page to another in an ASP.NET application.
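    In its general form the call looks like this (a minimal sketch with placeholder values; the actual call used in this walkthrough appears in Step 6):

    // Navigate to another jQuery Mobile page and pass values along as JSON data
    $.mobile.changePage("TargetPage.html", { data: { id: 1, name: "example" } });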

    Step 1: Open Visual Studio and create a new ASP.NET empty web application, name it as ‘ASPNET_Mobile_PassingValuesAcrossPages’. In this application, using the NuGet Package Manager, add the latest jQuery and jQuery Mobile libraries.

    Step 2: To the project, add a new class file, name it as ‘ModelClasses.cs’ and add the following classes in it:

    public class Category
    {
        public int CategoryId { get; set; }
        public string CategoryName { get; set; }
    }

    public class Product
    {
        public int ProductId { get; set; }
        public string ProductName { get; set; }
        public int Price { get; set; }
        public int CategoryId { get; set; }
    }

    public class CategoryDataStore : List<Category>
    {
        public CategoryDataStore()
        {
            Add(new Category() { CategoryId = 1, CategoryName = "Food Items" });
            Add(new Category() { CategoryId = 2, CategoryName = "Home Appliances" });
            Add(new Category() { CategoryId = 3, CategoryName = "Electronics" });
            Add(new Category() { CategoryId = 4, CategoryName = "Wear" });
        }
    }

    public class CategoryDataSource
    {
        public List<Category> GetCategories()
        {
            return new CategoryDataStore();
        }
    }

    public class ProductDataStore : List<Product>
    {
        public ProductDataStore()
        {
            Add(new Product() { ProductId = 111, ProductName = "Apples", Price = 300, CategoryId = 1 });
            Add(new Product() { ProductId = 112, ProductName = "Date", Price = 600, CategoryId = 1 });
            Add(new Product() { ProductId = 113, ProductName = "Fridge", Price = 34000, CategoryId = 2 });
            Add(new Product() { ProductId = 114, ProductName = "T.V.", Price = 30000, CategoryId = 2 });
            Add(new Product() { ProductId = 115, ProductName = "Laptop", Price = 72000, CategoryId = 3 });
            Add(new Product() { ProductId = 116, ProductName = "Mobile", Price = 40000, CategoryId = 3 });
            Add(new Product() { ProductId = 117, ProductName = "T-Shirt", Price = 800, CategoryId = 4 });
            Add(new Product() { ProductId = 118, ProductName = "Jeans", Price = 700, CategoryId = 4 });
        }
    }

    public class ProductDataSource
    {
        public List<Product> GetProductsByCategoryId(int catId)
        {
            var Products = from p in new ProductDataStore()
                           where p.CategoryId == catId
                           select p;

            return Products.ToList();
        }
    }

    The above code defines the Category and Product entities along with in-memory data stores for them. Each Product belongs to a Category through its CategoryId.

    Step 3: In the project, add a WEB API controller class with the following code:

    public class CategoryProductController : ApiController
    {
        CategoryDataSource objCatDs;
        ProductDataSource objPrdDs;

        public CategoryProductController()
        {
            objCatDs = new CategoryDataSource();
            objPrdDs = new ProductDataSource();
        }

        public IEnumerable<Category> Get()
        {
            return objCatDs.GetCategories();
        }

        public IEnumerable<Product> Get(int id)
        {
            return objPrdDs.GetProductsByCategoryId(id);
        }
    }

    Step 4: To define routing for the WEB API class, add a new Global Application Class (Global.asax) in the project and add the following code in the Application_Start method:

    protected void Application_Start(object sender, EventArgs e)
    {
        RouteTable.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = System.Web.Http.RouteParameter.Optional }
        );
    }

    Step 5: Add two HTML pages, name them as Page_Category.html and Page_Products.html.

    Step 6: In both the HTML pages, add the jQuery and jQuery Mobile references:

    <link href="Content/jquery.mobile.structure-1.3.1.min.css" rel="stylesheet" />
    <link href="Content/jquery.mobile.theme-1.3.1.min.css" rel="stylesheet" />
    <script src="Scripts/jquery-2.0.2.min.js"></script>
    <script src="Scripts/jquery.mobile-1.3.1.min.js"></script>

    Add the following HTML in the Page_Category.html page:

    <div data-role="page" id="catpage">
        <div data-role="header">
            <h1>Select Category from List</h1>
        </div>
        <div data-role="content">
            <div>
                <select id="lstcat" data-native-menu="false">
                    <option>List of Categories.....</option>
                </select>
            </div>
            <br />
            <br />
            <input type="button" data-icon="search"
                   value="Get Products" data-inline="true" id="btngetproducts" />
        </div>
    </div>

    The above code defines a page using a <div> whose data-role attribute is set to page. The page contains a <select> for displaying the list of categories and an <input> button whose click event transfers control to the Page_Products.html page.

    In Page_Category.html, add the following script. The loadlistview method makes a call to the WEB API and retrieves the categories. These categories are added to the <select> with id lstcat. This method is executed when the pageinit event is raised. On the click event of the button, the category value and name are passed to Page_Products.html.

    <script type="text/javascript">
        $(document).on("pageinit", "#catpage", function () {

            // Pass the data to the other page
            $("#btngetproducts").on('click', function () {
                var categoryId = $("#lstcat").val();
                var categoryName = $("#lstcat").find(":selected").text();
                $.mobile.changePage("Page_Products.html", { data: { "catid": categoryId, "catname": categoryName } });
            });

            loadlistview();

            // Function to load all categories
            function loadlistview() {
                $.ajax({
                    type: "GET",
                    url: "/api/CategoryProduct",
                    contentType: "application/json; charset=utf-8",
                    dataType: "json"
                }).done(function (data) {
                    // Convert the JSON into an array
                    var array = $.map(data, function (i, idx) {
                        return [[i.CategoryId, i.CategoryName]];
                    });

                    // Add each category name to the list view
                    var res = "";
                    $.each(array, function (idx, cat) {
                        res += '<option value="' + cat[0] + '">' + cat[1] + '</option>';
                    });
                    $("#lstcat").append(res);
                    $("#lstcat").trigger("change");

                }).fail(function (err) {
                    alert("Error " + err.status);
                });
            }
        });
    </script>

    Just look at this piece of code:

    $.mobile.changePage("Page_Products.html", { data: { "catid": categoryId, "catname": categoryName } });

    This code is responsible for passing the CategoryId and CategoryName to Page_Products.html as JSON data.
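    Under the covers, the data object ends up as ordinary query string parameters on the target URL (as the URL shown later when you run the app confirms). You can reproduce the same encoding with jQuery's $.param (an illustration, not part of the sample code); note that spaces are encoded as '+', which is why the receiving page later replaces '+' with a space:

    var qs = $.param({ catid: 1, catname: "Food Items" });
    // qs is "catid=1&catname=Food+Items"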

    Step 7: Open Page_Products.html and add the following HTML markup:

    <div data-role="page" id="prodpage" data-add-back-btn="true">
        <div data-role="header">
            <h1>Products in Category</h1>
            <span id="catName"></span>
        </div>
        <table style="border:double" data-role="table" id="tblprd">
            <thead>
                <tr>
                    <td style='width:100px'>ProductId</td>
                    <td style='width:100px'>ProductName</td>
                    <td style='width:100px'>Price</td>
                </tr>
            </thead>
            <tbody>
            </tbody>
        </table>
    </div>

    Here the <div> with id prodpage is set with the attribute data-role="page". The attribute data-add-back-btn="true" adds a Back button to the page; clicking it returns to Page_Category.html. This is the default behavior provided by jQuery Mobile.

    Now the important thing is how to read, on the Page_Products.html page, the values that were sent in the URL from Page_Category.html. Unlike ASP.NET, jQuery offers no simple way to read the values passed from one page to another. To read these values we will create a readUrlHelper helper method. This method reads the URL and, using JavaScript string functions, extracts the key/value pairs passed from Page_Category.html. On Page_Products.html, add this method in a <script> block inside the page <div> with id prodpage.

    // This helper function reads the URL as a string
    // and returns the value of the requested query string parameter
    function readUrlHelper(pageurl, queryParamName) {
        var queryParamValue = "";

        // Get the total URL length
        var stringLength = pageurl.length;

        // Get the index of ? in the URL
        var indexOfQM = pageurl.indexOf("?");

        // Get the length of the string after ?
        var stringAfterQM = stringLength - indexOfQM;

        // Get the remaining string after ?
        var strBeforeQM = pageurl.substr(indexOfQM + 1, stringAfterQM);

        // Split the remaining string on the & sign
        var data = strBeforeQM.split("&");

        // Iterate through the array of strings after the split
        $.each(data, function (idx, val) {
            // Split the string on the = sign
            var queryExpression = val.split("=");

            if (queryExpression[0] == queryParamName) {
                queryParamValue = queryExpression[1];
                // If the query string data was concatenated,
                // replace the '+' with a blank space ' '
                queryParamValue = queryParamValue.replace('+', ' ');
            }
        });

        return queryParamValue;
    }
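    For example (a hypothetical call, using the URL format that Page_Category.html produces), the helper returns the individual parameter values:

    var url = "Page_Products.html?catid=1&catname=Food+Items";
    readUrlHelper(url, "catid");   // "1"
    readUrlHelper(url, "catname"); // "Food Items" ('+' replaced with a space)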

    To retrieve the products for the selected CategoryId, add a new method loadproducts inside the prodpage. This method calls the WEB API service and passes the CategoryId. It also calls the generatetable helper method, which generates an HTML table to display the products based on the JSON data received by loadproducts:

    // Method to call the WEB API and retrieve
    // products based on the CategoryId
    function loadproducts(id) {
        $.ajax({
            type: "GET",
            url: "/api/CategoryProduct/" + id,
            contentType: "application/json; charset=utf-8",
            dataType: "json"
        }).done(function (data) {
            generatetable(data);
        }).fail(function (err) { });
    }

    // Method to generate HTML table rows from the JSON data
    function generatetable(data) {
        // Convert the JSON into an array
        var array = $.map(data, function (i, idx) {
            return [[i.ProductId, i.ProductName, i.Price]];
        });
        var tblbody = $("#tblprd > tbody");

        var tblhtml = "";
        // Generate a table row for each product
        for (var i = 0; i < array.length; i++) {
            tblhtml = tblhtml + "<tr>";
            for (var j = 0; j < array[i].length; j++) {
                tblhtml = tblhtml + "<td style='width:100px'>" + array[i][j] + "</td>";
            }
            tblhtml = tblhtml + "</tr>";
        }

        tblbody.append(tblhtml);
        $("#tblprd").table("refresh");
    }

    Now, inside the prodpage, subscribe to the pageshow event, which calls the readUrlHelper and loadproducts methods:

    var catId;
    $("#prodpage").on("pageshow", function (e) {
        var pgurl = $("#prodpage").attr("data-url");

        // Read the value of catid passed from Page_Category.html
        catId = readUrlHelper(pgurl, "catid");

        // Read the value of catname passed from Page_Category.html
        var catName = readUrlHelper(pgurl, "catname");

        $("#catName").text(catName);

        loadproducts(catId);
    });

    The above code reads the CategoryId and CategoryName passed by Page_Category.html and, based on them, displays the products on the page.

    Make Page_Category.html the start page and run the application in a desktop browser (IE9, Firefox, or Chrome) or in the Opera Mobile Emulator.

    On Page_Category.html, select a category as shown below:

    jquery-mob-categories

    When you click Get Products, you are navigated to Page_Products.html with the following URL:

    http://localhost:5506/Page_Products.html?catid=1&catname=Food+Items

    The URL contains the key and value for CategoryId (catid=1) and CategoryName (catname=Food+Items).

    In the Page_Products.html, the readUrlHelper method will read CategoryId and CategoryName and based upon this data, Products will be displayed as below:

    jquery-transfer- values

    Conclusion: In a Mobile application it is very important for the developer to manage the UI and take care of data communication across these pages. If the data passed is complex (more than one key/value pair) then the URL must be read very carefully. In this article, we saw how we can use jQuery Mobile and ASP.NET to transfer values from one page to another.

    In need of a formal test case management system? Look no further than VS2013 Web Access (TWA)

    In Visual Studio 2012 Microsoft provided test case management and test execution capabilities in TFS Web Access. It was part of Visual Studio 2012 Update 2.

    Now in Visual Studio 2013, new capabilities and features have been added to create and modify test plans in Visual Studio 2013 Web Access (TWA).

    You don’t have to switch to Microsoft Test Manager to create Test Plans, Test Suites, and Shared Steps. The entire test case management workflow can be done from Web Access.

    TWA connects you to Visual Studio Team Foundation Server (TFS) or Team Foundation Service to manage source code, work items, builds, and test efforts.

    In order to access the Test tab in Web Access, we need to grant Full access to the Windows user or group who will use the functionality.

    web-access-testing01

    Observe the Test tab. If the settings are not configured, we need to go to Access Levels in the Control Panel, where we can find the three different access level settings as shown here.

    access-levels

    The default option is Limited access. To make the Test tab available to a user, add the user to the Full access level.

    Now observe the Test tab (right next to the Build tab as seen in Figure 1). I had already created a Test Plan and Test Suites using Microsoft Test Manager 2013. These artifacts immediately start appearing in Web Access under the Test tab.

    The Test Plan has three Test Suites: requirement-based, static, and query-based. Each suite has some Test Cases. We can also see the status of each individual test case.

    test-case-status

    Note: I am also enclosing a similar Test Plan screenshot from Microsoft Test Manager 2013 for comparing the two. Observe the categorization of test status with Microsoft Test Manager.

    microsoft-test-manager

    In case we want to customize the columns, we can do so as follows. The bracketed number seen in the screenshot below shows the width of each column. We can add columns from the left-hand side and the view will change.

    web-access-testing03

    We can even create a new Test Plan. The Test Plan can have a name, Area Path and Iteration Path. Features like configuration settings, Test Settings and Test Environment can be added with Microsoft Test Manager 2013.

    create-test-plan

    Once a new Test Plan is added, we can add Test Suites to it. Test Suites can be of three types: Requirement Based, Static, or Query Based. Even Shared Steps can be added if required. Creating any kind of Test Suite provides the option to add new or existing test cases to the suite.

    test-plan

    A new Test Case can be created either with the normal form or by using the grid.

    new-test-case

    We can provide all the details for the Test Case, such as Title, Iteration, Area, and Assigned To (ownership of the Test Case). The steps of the test case can be added as an action and an expected result. If the test case is testing a requirement, the Tested Backlog Items tab can be selected and a link provided to it. Any attachments we add to a test step can be viewed inline when the test case is executed. If the attachment is an image, the test step shows the actual image. If the attachment is a file, the file name with its size appears. You can also add attachments when you run the test case; these can be log files, etc.

    If we select the option to create a new Test Case with the grid, we get a screen similar to the following.

    gird-test-case

    We can add actions and expected results to each test case, and create a new Test Case by providing its title. All the test cases can be stored in Team Foundation Server at once. When we have finished creating the Test Plan, Test Suites, and Test Cases, we can start executing the test cases. While running a test case, we have the option of running it the default way with Web Access or using the client (Microsoft Test Manager).

    Once we start the test runner, the test case with its steps is displayed in the left-hand pane (around one third of the screen) and the remaining area is available for the actual execution, as with Microsoft Test Manager.

    test-runner

    Once the execution starts and we encounter an error, we can create a bug, add a comment to it, or add an attachment. Once the execution is completed, we can save and close the runner. We have various options to mark the test case as Passed, Failed, Blocked, or Not applicable. The test case can also be marked as Paused; for a paused test case, we later get the Resume test option.

    web-access-testing20

    While creating a bug with Web Access, it is possible to add comments and attachments to the bug; however, we cannot create a rich bug. To create a rich bug we have to use Microsoft Test Manager 2013. The linked comments and attachments can be seen in the bug as follows.

    web-access-testing22

    Testing with Web Access has some limitations. Basic testing can be executed, but creating rich bugs, viewing test results, and exploratory testing require the Microsoft Test Manager 2013 client.

    However, despite these limitations, being able to plan tests, manage a full test suite, and execute test cases right from the lightweight browser-based Team Web Access helps us improve quality in software projects without having to leave our familiar workspace.

    As the following illustration shows, you can access a number of features from the Home page of TWA. You switch to different views and pages by first choosing one of the context view links at the top and then one of the pages within that context view. You can switch context between teams and team projects from the project context menu toward the top-right of each page. You access the administration pages by choosing the gear (Settings) icon.

    Home page (Team Web Access)

    Important note
    The links and pages that you have access to depend on: (1) the Web Access permissions group to which you are assigned (Limited, Standard, or Full; see Change access levels), and (2) whether the resource has been configured for your team project or team project collection. The following links appear on the home page for the associated Web Access permissions group shown in parentheses:

    • View backlog (Full): Opens the Product Backlog page which provides access to both the product backlog and iteration or sprint pages. See Create and organize the product backlog.
    • View board (Full): Opens the Task Board page used to review progress during a sprint and update information for work performed. See Work in sprints.
    • View work items: Opens the Work Items page used to create work items and work item queries. See Query for Bugs, Tasks, or Other Work Items.
    • Request feedback (Full): Opens the Feedback Request form to invite stakeholders to provide feedback. See Request and review feedback.
    • Go to project portal: Requires a project portal has been enabled for your team project. See Access a Team Project Portal or Process Guidance.
    • View reports:  Requires Reporting to be enabled for the instance of TFS. See Add a report server.
    • Open new instance of Visual Studio: Opens an instance of Visual Studio 2012 and automatically connects to the team project context selected in TWA. Requires that you have a recent version of Visual Studio installed.

     

    Great tool available for Responsive Web Pages in SharePoint!!

    Web designers are crucial for a successful SharePoint implementation. We all know that. With this in mind, I wanted to write an article for our SharePoint web designers out there. Not being an authority on the subject, I decided to ask someone who has been working in web design for some time. By asking my contacts, I got the email address of an expert in SharePoint branding and UX customization. Eric Overfield was the name on the contact card. I set up a conference call, and very soon we were chatting and discussing UX, branding, artists, engineers, and SharePoint.

    The conversation quickly turned to devices and how to make SharePoint work as well as possible in this new and changing set of displays. Eric’s answer was: responsive web design. Responsive web design allows us to look at a site like a fluid grid. The fluid, dynamic grid adapts itself to fit the information in display resolutions as different as those in a phone, a tablet, and a full desktop monitor. Keep in mind that the mix of display resolutions doubles if you consider landscape and portrait orientations available in all these devices.

    The author of the original post about responsive web design, Ethan Marcotte, provided a reference site to demo the concepts explained in his post. In this demo, you can observe how the elements in the page rearrange themselves to fit the current resolution as you resize your browser window. The demo left me wondering how a SharePoint website would react to different resolutions by using the fluid grid characteristic of responsive frameworks. Fortunately, Eric, along with some other people, developed Responsive SharePoint. Responsive SharePoint is a CodePlex project that you can use to try responsive frameworks on your SharePoint website.

    I followed the provided instructions to install the resources by using Design Manager on an out-of-the-box publishing site. In no time, I was looking at how the site dynamically reacted to different resolutions as I resized my browser window. I decided to test the project by using the following display resolutions:

    • 1200×1900 (desktop, portrait orientation)
    • 768×1366 (tablet, portrait orientation)
    • 480×800 (smartphone, portrait orientation)

    The results were amazing. Within 10 minutes, I had a SharePoint website that automatically adapts to display resolutions commonly used in devices. The following figure compares the website in commonly used display resolutions:

    Figure 1. Comparison of resolutions of the SharePoint website using a responsive framework

    How is this achieved?

    In this post, I can only explain that Responsive SharePoint uses media queries to match the width of the display on the device and then applies a set of styles to present the content in the available space. For this to work, you need a browser that supports media queries. The latest versions of the major browsers support this functionality. The following code example shows how to declare media queries:

    @media (min-width: 769px) and (max-width: 979px) {
        /*
            Styles for display width 
            between 769 and 979 pixels
        */
    }
    
    @media (max-width: 768px) {
        /*
            Styles for display width 
            equal to 768 pixels and thinner
        */
    }
    
    @media (min-width: 1200px) {
        /*
            Styles for display width 
            equal to 1200 pixels and wider
        */
    }

    Of course there is much more to it. You can learn more by browsing the Responsive SharePoint CodePlex project.

    The new design and branding features in SharePoint 2013 make it easy to create and edit your web design, including responsive designs. You can even use the tools you are familiar with by mapping a network drive to the SharePoint 2013 Master Page Gallery. In my case, I used Microsoft Expression Web 4 to browse and edit the master pages and CSS files.

    Brand NEW “My Latest Documents” SharePoint Web Part and App released and available!!

    In every SharePoint team site with multiple document libraries, there is a recurring requirement to see what the latest changes are. Unfortunately, the out-of-the-box Microsoft web part only shows the documents changed by the current user.

    To provide a solution for this that does not require you to be an administrator or site owner, I created a definition for a recent documents web part. It can be deployed on Office 365, SharePoint 2010, and SharePoint 2007 sites. On SharePoint 2007 and SharePoint 2010, the only permission needed is the right to modify the site.

    The goal of the web part was:

    • Show the 10 latest changed documents
    • Show a More button that displays an additional 40 documents
    • Display the online status of users
    • Display the correct date format of each site
    • Display the name of the folder where the document is stored and a link to the folder.
    • Get documents recursively from all sub sites

    Example Image:

    The following instructions explain in detail how you can activate it:

     

    Activation on SharePoint 2010

    1. Edit your web page and add a new web part.
    2. Select Browse and upload the web part definition.

    3. Click Upload.

    4. Now, it is a bit confusing, but you have to click add new web part again.
    5. The uploaded web part is now available in your web part menu and you can add it.

    All these steps have to be done each time you want to add the web part. To make it available to all site owners, add it to the web part gallery.

    Activation on Office 365

    That means you have to upload it to your web part gallery:

    After uploading, the web part is available on your site.

    You can simply edit the site and click More Web Parts.

    Afterwards, you can find it in the Default Web Parts folder.

     

    Contact me NOW at tomas.floyd@outlook.com to order this brand new Web Part and/or App

    Create a new Search Service Application in SharePoint 2013 using PowerShell

    The search architecture in SharePoint 2013 has changed quite a bit when compared to SharePoint 2010. In fact the Search Service in SharePoint 2013 is completely overhauled. It is a combination of FAST Search and SharePoint Search components.

    apxvsdik

    As you can see, the query and crawl topologies are merged into a single topology, simply called the “Search topology”. Provisioning the search service application creates four databases:

    • SP2013_Enterprise_Search – This is a search administration database. It contains configuration and topology information
    • SP2013_Enterprise_Search_AnalyticsReportingStore – This database stores the result of usage analysis
    • SP2013_Enterprise_Search_CrawlStore – The crawl database contains detailed tracking and historical information about crawled items
    • SP2013_Enterprise_Search_LinksStore – Stores the information extracted by the content processing component and also stores click-through information

    # Create a new Search Service Application in SharePoint 2013

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # Settings
    $IndexLocation = "C:\Data\Search15Index" # Location must be empty, will be deleted during the process!
    $SearchAppPoolName = "Search App Pool"
    $SearchAppPoolAccountName = "Contoso\administrator"
    $SearchServerName = (Get-ChildItem env:computername).value
    $SearchServiceName = "Search15"
    $SearchServiceProxyName = "Search15 Proxy"
    $DatabaseName = "Search15_ADminDB"

    Write-Host -ForegroundColor Yellow "Checking if Search Application Pool exists"
    $SPAppPool = Get-SPServiceApplicationPool -Identity $SearchAppPoolName -ErrorAction SilentlyContinue

    if (!$SPAppPool)
    {
        Write-Host -ForegroundColor Green "Creating Search Application Pool"
        $spAppPool = New-SPServiceApplicationPool -Name $SearchAppPoolName -Account $SearchAppPoolAccountName -Verbose
    }

    # Start the search service instances
    Write-Host "Start Search Service instances...."
    Start-SPEnterpriseSearchServiceInstance $SearchServerName -ErrorAction SilentlyContinue
    Start-SPEnterpriseSearchQueryAndSiteSettingsServiceInstance $SearchServerName -ErrorAction SilentlyContinue

    Write-Host -ForegroundColor Yellow "Checking if Search Service Application exists"
    $ServiceApplication = Get-SPEnterpriseSearchServiceApplication -Identity $SearchServiceName -ErrorAction SilentlyContinue

    if (!$ServiceApplication)
    {
        Write-Host -ForegroundColor Green "Creating Search Service Application"
        $ServiceApplication = New-SPEnterpriseSearchServiceApplication -Partitioned -Name $SearchServiceName -ApplicationPool $spAppPool.Name -DatabaseName $DatabaseName
    }

    Write-Host -ForegroundColor Yellow "Checking if Search Service Application Proxy exists"
    $Proxy = Get-SPEnterpriseSearchServiceApplicationProxy -Identity $SearchServiceProxyName -ErrorAction SilentlyContinue

    if (!$Proxy)
    {