Category Archives: SharePoint 2010

BCS connector for Exchange private mailbox SharePoint and FAST search

SharePoint 2010 BCS Mailbox connector for Microsoft Exchange empowers you to search private mailboxes via SharePoint and FAST Search.
SharePoint 2010 BCS Mailbox connector for Microsoft Exchange allows you to:

  • Index all mailboxes, emails and attachments
  • Enable super users from an AD group to search against all mailboxes
  • Preview Exchange emails and attachments directly from the search user interface via SharePoint Business Connectivity Services.

    Microsoft provides an out-of-the-box Exchange connector for SharePoint 2010 search and FAST Search for SharePoint.
    Unfortunately, this connector is limited to Exchange public folders only.

    Please make sure you download the following dependencies:

How To : SAP Integration with .Net 4.0 (SAP Connection Manager) & SharePoint

This is a simple, C# class library project to connect .NET applications with SAP.


 

This component internally implements SAP .NET Connector 3.0. The SAP .NET Connector is a development environment that enables communication between the Microsoft .NET platform and SAP systems.

This connector supports RFCs and Web services, and allows you to write different applications such as Web form, Windows form, or console applications in Microsoft Visual Studio .NET.

With the SAP .NET Connector, you can use all common programming languages, such as Visual Basic .NET, C#, or Managed C++.

Features
Using the SAP .NET Connector you can:

Write .NET Windows and Web form applications that have access to SAP business objects (BAPIs).

Develop client applications for the SAP Server.

Write RFC server applications that run in a .NET environment and can be started from the SAP system.

Following are the steps to configure this utility on your project

Download and extract the attached file and place it on your machine. This package contains 3 libraries:

SAPConnectionManager.dll
sapnco.dll
sapnco_utils.dll

Now go to your project and add references to all three libraries. sapnco.dll and sapnco_utils.dll are the libraries supplied with the SAP .NET Connector. SAPConnectionManager.dll is the main component that provides the connection between .NET and SAP.

Once the above steps are complete, you need to make certain entries related to the SAP server in your configuration file. Here are the sample entries that you have to maintain in your own project. You need to change only the values to match your SAP system; the keys remain unchanged.

<appSettings>
  <add key="ServerHost" value="127.0.0.1"/>
  <add key="SystemNumber" value="00"/>
  <add key="User" value="sample"/>
  <add key="Password" value="pass"/>
  <add key="Client" value="50"/>
  <add key="Language" value="EN"/>
  <add key="PoolSize" value="5"/>
  <add key="PeakConnectionsLimit" value="10"/>
  <add key="IdleTimeout" value="600"/>
</appSettings>
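
If you are building the connection layer yourself rather than using the prebuilt SAPConnectionManager.dll, the SAPSystemConnect class used later is essentially a destination configuration for SAP .NET Connector 3.0. A minimal sketch of such a class, assuming the standard IDestinationConfiguration interface from sapnco.dll and the appSettings keys shown above, might look like this:

using System.Configuration;            // requires a reference to System.Configuration
using SAP.Middleware.Connector;        // from sapnco.dll

// Maps the appSettings entries above to SAP .NET Connector destination parameters.
public class SAPSystemConnect : IDestinationConfiguration
{
    public RfcConfigParameters GetParameters(string destinationName)
    {
        // "Dev" is the destination name requested later via GetDestination("Dev").
        if (destinationName != "Dev") return null;

        RfcConfigParameters parameters = new RfcConfigParameters();
        parameters.Add(RfcConfigParameters.AppServerHost, ConfigurationManager.AppSettings["ServerHost"]);
        parameters.Add(RfcConfigParameters.SystemNumber, ConfigurationManager.AppSettings["SystemNumber"]);
        parameters.Add(RfcConfigParameters.User, ConfigurationManager.AppSettings["User"]);
        parameters.Add(RfcConfigParameters.Password, ConfigurationManager.AppSettings["Password"]);
        parameters.Add(RfcConfigParameters.Client, ConfigurationManager.AppSettings["Client"]);
        parameters.Add(RfcConfigParameters.Language, ConfigurationManager.AppSettings["Language"]);
        parameters.Add(RfcConfigParameters.PoolSize, ConfigurationManager.AppSettings["PoolSize"]);
        return parameters;
    }

    // This sketch does not react to configuration changes at run time.
    public bool ChangeEventsSupported() { return false; }
    public event RfcDestinationManager.ConfigurationChangeHandler ConfigurationChanged;
}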

To test this component, create a Windows application. Add references to sapnco.dll, sapnco_utils.dll, and SAPConnectionManager.dll in your project.

Paste the code below into your Form Load event:

SAPSystemConnect sapCfg = new SAPSystemConnect();
RfcDestinationManager.RegisterDestinationConfiguration(sapCfg);
RfcDestination rfcDest = null;
rfcDest = RfcDestinationManager.GetDestination("Dev");

That’s it. Now you are successfully connected to your SAP server. Next you need to call SAP business objects (BAPIs), extract the data, and store it in a DataSet or list.
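
As an illustration, here is a minimal sketch of calling a BAPI through the destination obtained above, using the standard SAP .NET Connector 3.0 repository API. The BAPI name BAPI_COMPANYCODE_GETLIST, the table name COMPANYCODE_LIST, and the field names are only examples; substitute the function module and fields that exist in your SAP system:

// Needs: using System.Collections.Generic; using SAP.Middleware.Connector;
// Look up the function metadata in the destination's repository and invoke it.
RfcRepository repository = rfcDest.Repository;
IRfcFunction companyCodes = repository.CreateFunction("BAPI_COMPANYCODE_GETLIST");
companyCodes.Invoke(rfcDest);

// Copy the result table returned by the BAPI into a simple list.
IRfcTable resultTable = companyCodes.GetTable("COMPANYCODE_LIST");
List<string> codes = new List<string>();
foreach (IRfcStructure row in resultTable)
{
    codes.Add(row.GetString("COMP_CODE") + " - " + row.GetString("COMP_NAME"));
}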

Demo Code available on request!!

New Highly Customisable SharePoint CRM Template Available

A CRM/Project Management Site Template for SharePoint 2010 Enterprise or SharePoint Online tenants.
This extensive solution offers the following features:

  • User Friendly – Due to a custom User Interface & Pre-Populated InfoPath forms where possible
  • Contacts Management
  • Project Management – Associated sub tasks, documents & sales
  • Products & Services Catalog
  • Sales Register & Invoice Generation
  • Client Enquiry – Showing any items related to a client
  • Reporting
  • Integrated User Guide

To give you an idea of how SharePoint CRM looks, below is a selection of screenshots:

Home Screen

The buttons displayed are defined by a list within SharePoint CRM so can easily be modified.

Contacts

Project Management

Sales Register

Being built on SharePoint, all aspects of the SharePoint CRM Template can be customised to meet your organisation's needs:

For example, to customise the home page:

Customising your homepage

Rather than the buttons on the homepage being hardcoded, they are defined by a list within the CRM site. This means you can easily add, remove, or edit buttons using just your web browser. Here’s how to do so:

  1. From the Site Actions, choose View All Site Content
  2. Open the PortalMenu list

Items can then be edited in the same way as any other SharePoint 2010 list, below is a description of options available:

  • Section – Defines which section the button will be shown in on your homepage
  • Order – Defines the order of the buttons within a section
  • Button Name – The text that will be displayed within the button
  • Link – The URL that users will be taken to when the button is clicked
  • Hover – The text shown when a user hovers over a button
  • Dialog – Specifies whether the page that the button links to will open in a popup dialog box
  • New Project Form – If this option is checked, the Link field will be ignored and the button will open the new project form.

Adding new Sections

New sections can be added to the homepage, but this will require the use of SharePoint Designer:

  1. Open the PortalMenu list as described above, then go to the List Settings page
  2. Edit the Section column, then add the name of your new section as a choice
  3. Create a new list item with your new choice set within the Section field
  4. Navigate to your homepage, and set the page to edit mode
  5. Export any existing section’s web part, then import the web part and add it to any zone
  6. Using SharePoint Designer open the homepage (SitePages\default.aspx)
  7. Select the new web part, then update the filter to match the new choice you added to the Section column

This template and other SharePoint Web Parts, Apps, Custom SharePoint Templates, and Tools for SharePoint, Azure and Office 365 are available by contacting me through my website at http://sharepointsamurai.wordpress.com/

New SharePoint 2010 & 2013, Online Connector for Outlook available!!

SharePoint Connector for Outlook makes it easier for Office users to upload emails to SharePoint and attach SharePoint documents to an email message.

 

Please contact me through my blog or at tomas.floyd@outlook.com for more information on pricing, licensing and trials.

 


FEATURES

  • Workflow integration
  • Attach items as attachments to an email message
  • Multiple site configuration
  • Check Out/Check In functionality
  • Drag & drop files (attachments) into a folder
  • Copy, delete and open item/document functionality
  • Version history for files
  • Saving email metadata (title, to, from, etc.) to the SharePoint document item
  • Works with all SharePoint versions
  • Saving an email message as a list item and its attachments as attachments of the list item
  • Template-based search functionality
  • Windows Explorer right-click upload
  • Editing metadata when uploading a document
  • File system integration (allows you to integrate Live Mesh folders into Outlook)
  • Advanced alert system
  • Library content viewer

How To : Create, Edit and Maintaining a Coded UI Test for Silverlight Application

Using the Microsoft Visual Studio 2013 Coded UI Test plugin for Silverlight, you can create Coded UI Tests or action recordings for Silverlight 5.0 applications.

Using Microsoft Visual Studio 2010 Feature Pack 2, you can create coded UI tests or action recordings for Silverlight 4 applications. Action recordings let you fast-forward through steps in a manual test. For more information about action recordings or coded UI tests, see How to: Create an Action Recording or How to: Create a Coded UI Test.

In this walkthrough, you will learn the procedures that are required to test a Silverlight control in a Silverlight-based application. The walkthrough takes you through the following procedures:

Prerequisites

 

For this walkthrough you will need:

To prepare the walkthrough

  1. Verify that you have the Silverlight 4 developer runtime available at Silverlight Developer 4 for Developers.

  2. Verify that you have completed the procedures in Walkthrough: Creating a RIA Services Solution.

    The result will be a simple Silverlight application that uses a Silverlight grid control. Later, you will use the grid control in this walkthrough and perform coded UI tests on it.

  3. For more information about supported and unsupported Silverlight controls, see How to: Set Up Your Silverlight Application for Testing.

  4. With the RIAServicesExample you created in Walkthrough: Creating a RIA Services Solution running, copy the address of the Web application to the clipboard or a notepad file. For example, the address might resemble this: http://localhost: <port number>/RIAServicesExampleTestPage.aspx.

Add the SilverlightUIAutomationHelper.dll to Your Silverlight 4 Project

 

To test your Silverlight applications, you must add Microsoft.VisualStudio.TestTools.UITest.Extension.SilverlightUIAutomationHelper.dll as a reference to your Silverlight 4 application so that the Silverlight controls can be identified. This helper assembly instruments your Silverlight application so that information about a control is available to the Silverlight plugin API that you use in your coded UI test or action recording. This assembly cannot be redistributed. Therefore, you must add this reference conditionally when you build the application. By taking this approach, the assembly is not redistributed when you deploy your software to a customer.

To add the SilverlightUIAutomationHelper.dll

  1. For each Silverlight project in your solution that you want to test, you must add the SilverlightUIAutomationHelper.dll. In Solution Explorer, right-click the RIAServicesExample project and select Unload Project.

    The project is displayed in Solution Explorer as RIAServicesExample (unavailable).

  2. Right-click the project again and then click Edit RIAServicesExample.csproj.

    The RIAServicesExample.csproj file is opened in the Code Editor. You will see <PropertyGroup> nodes followed by <ItemGroup> nodes. You must make the following two modifications:

    1. To set the production condition, add the following entry to the first <PropertyGroup> node:

       
      <Production Condition="'$(Production)'==''">False</Production>
      
    2. To add the DLL when the build is not a production build, insert the following <Choose> node after the <PropertyGroup> nodes, but before the <ItemGroup> nodes:

       
      <Choose>
        <When Condition=" '$(Production)'=='False' ">
          <ItemGroup>
            <Reference Include="Microsoft.VisualStudio.TestTools.UITest.Extension.SilverlightUIAutomationHelper">
            </Reference>
          </ItemGroup>
        </When>
      </Choose>
      
  3. To save the file, click Save.

  4. To reload these changes, right-click the project and then click Reload Project.

    Caution:

    If you have multiple Silverlight projects that you want to test, you must follow these steps for each project.

    Important:

    To remove the SilverlightUIAutomationHelper.dll so that it is not redistributed with your production code, set the production condition value to true in the first <PropertyGroup> node. In this manner, the DLL is no longer added as a reference by the Choose node that you added to the project in the previous procedure. You can also set an environment variable named Production to the value True. Then you can use msbuild to build the Silverlight project and remove the SilverlightUIAutomationHelper.dll.

Create a Coded UI Test for RIAServicesExample Silverlight Application

 

To Create a Coded UI Test

  1. In Solution Explorer, right-click the solution, click Add and then select New Project.

    The Add New Project dialog box appears.

  2. In the Installed Templates pane, expand either Visual C# or Visual Basic, and then select Test.

  3. In the middle pane, select the Test Project template.

  4. Click OK.

    In Solution Explorer, the new test project named TestProject1 is added to your solution. Either the UnitTest1.cs or UnitTest1.vb file appears in the Code Editor. You can close the UnitTest1 file because it is not used in this walkthrough.

  5. In Solution Explorer, right-click TestProject1, click Add and then select Coded UI test.

    The Generate Code for Coded UI Test dialog box appears.

  6. Select the Record actions, edit UI map or add assertions option and then click OK.

    The UIMap – Coded UI Test Builder appears.

    For more information about the options in the dialog box, see How to: Create a Coded UI Test.

  7. Click Start Recording on the UIMap – Coded UI Test Builder. In several seconds, the Coded UI Test Builder will be ready.

    Start recording UI

  8. Launch Internet Explorer.

  9. In Internet Explorer’s address bar, enter the address of the Web application that you copied in a previous procedure. For example:

    http://localhost: <port number>/RIAServicesExampleTestPage.aspx

  10. Click one or two of the column headers to sort the data.

  11. Close Internet Explorer.

  12. On the UIMap – Coded UI Test Builder, click Generate Code.

  13. In the Method Name box, type SimpleSilverlightAppTest and then click Add and Generate. Within a few seconds, the coded UI test appears and is added to the solution.

  14. Close the UIMap – Coded UI Test Builder.

    The CodedUITest1.cs file appears in the Code Editor.

    Note:

    You can assign a unique automation property based on the type of Silverlight control in your application. For more information, see Set a Unique Automation Property for Silverlight Controls for Testing.

Run the Coded UI Test on the RIAServicesExample Silverlight Application

 

To run the coded UI test

  • On the Test menu, select Windows and then click Test View. In Test View, select CodedUITestMethod1 under the Test Name column and then click Run Selection in the toolbar.

    The coded UI test should successfully run using the Silverlight data grid control.
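
For reference, the generated CodedUITest1.cs typically contains little more than a test method that replays the recorded actions through the generated UIMap class. A trimmed sketch (member names follow the defaults used in this walkthrough) looks roughly like this:

using Microsoft.VisualStudio.TestTools.UITesting;   // Coded UI test infrastructure
using Microsoft.VisualStudio.TestTools.UnitTesting; // MSTest attributes

[CodedUITest]
public class CodedUITest1
{
    private UIMap map;

    // Generated UIMap instance that holds the recorded actions and assertions.
    public UIMap UIMap
    {
        get
        {
            if (this.map == null)
            {
                this.map = new UIMap();
            }
            return this.map;
        }
    }

    [TestMethod]
    public void CodedUITestMethod1()
    {
        // Replays the actions recorded as SimpleSilverlightAppTest:
        // open the test page, sort the grid columns, close the browser.
        this.UIMap.SimpleSilverlightAppTest();
    }
}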

How To : Understand and Edit the Onet.xml File


When Microsoft SharePoint Foundation is installed, several Onet.xml files are installed—one in %ProgramFiles%\Common Files\Microsoft Shared\web server extensions\14\TEMPLATE\GLOBAL\XML that applies globally to the deployment, and several in different folders within %ProgramFiles%\Common Files\Microsoft Shared\web server extensions\14\TEMPLATE\SiteTemplates. Each file in the latter group corresponds to a site definition that is included with SharePoint Foundation. They include, for example, Blog sites, the Central Administration site, Meeting Workspace sites, and team SharePoint sites. Only the last two of these families contain more than one site definition configuration.

The global Onet.xml file defines list templates for hidden lists, list base types, a default definition configuration, and modules that apply globally to the deployment. Each Onet.xml file in a subdirectory of the %ProgramFiles%\Common Files\Microsoft Shared\web server extensions\14\TEMPLATE\SiteTemplates directory can define navigational areas, list templates, document templates, configurations, modules, components, and server email footers that are used in the site definition to which it corresponds.

Note:
An Onet.xml is also part of a web template. Some Collaborative Application Markup Language (CAML) elements that are possible in the Onet.xml files of site definitions cannot be in the Onet.xml files that are part of web templates—for example, the DocumentTemplates element.
Depending on where an Onet.xml file is located and whether it is part of a site definition or a web template, the markup in the file does some or all of the following:

  • Specifies the web-scoped and site collection-scoped Features that are built-in to websites that are created from the site definition or web template.
  • Specifies the list types, pages, files, and Web Parts that are built-in to websites that are created from the site definition or web template.
  • Defines the top and side navigation areas that appear on the home page and in list views for a site definition.
  • Specifies the list definitions that are used in each site definition and whether they are available for creating lists in the user interface (UI).
  • Specifies document templates that are available in the site definition for creating document library lists in the UI, and specifies the files that are used in the document templates.
  • Defines the base list types from which default SharePoint Foundation lists are derived. (Only the global Onet.xml file serves this function. You cannot define new base list types.)
  • Specifies SharePoint Foundation components.
  • Defines the footer section used in server email.

 

You can perform the following kinds of tasks in a custom Onet.xml file that is used for either a custom site definition or a custom web template:

  • Specify an alternative cascading style sheet (CSS) file, JavaScript file, or ASPX header file for a site definition.
  • Modify navigation areas for the home page and list pages.
  • Add a new list definition as an option in the UI.
  • Define one configuration for the site definition or web template, specifying the lists, modules, files, and Web Parts that are included when the configuration is instantiated.
  • Specify Features to be included automatically with websites that are created from the site definition or web template.

You can perform the following kinds of tasks in a custom Onet.xml file that is used for a custom site definition, but not in one that is used for a custom web template:

  1. Add a document template for creating document libraries.
  2. Define more than one configuration for a site definition, specifying the lists, modules, files, and Web Parts that are included when the configuration is instantiated.
  3. Define a custom footer for email messages that are sent from websites that are based on the site definition.
  4. Define custom components, such as a file dialog box post processor, for websites that are based on the site definition.
Caution:
You cannot create new base list types in either a site definition or a web template. The base types that are defined in the global Onet.xml file are the only base types that are supported.
Caution:
We do not support making changes to an originally installed Onet.xml file. Changing this file can break existing sites. Also, when you install updates or service packs for SharePoint Foundation, or when you upgrade an installation to the next product version, there may be a new version of the Microsoft-supplied file, and installation cannot merge your changes with the new version. If you want a site type that is similar to a built-in site type, and you cannot use a web template, create a new site definition with its own Onet.xml file; do not modify the original file. For more information, see How to: Create a Custom Site Definition and Configuration. For more information about when you cannot use a web template, see Deciding Between Custom Web Templates and Custom Site Definitions.
The following sections define the various elements of the Onet.xml file.

Project Element

The top-level Project element specifies a default name for sites that are created through any of the site configurations in the site definition. It also specifies the directory that contains subfolders in which the files for each list definition reside.

Note:
Unless indicated otherwise, excerpts used in the following examples are taken from the Onet.xml file for the STS site definition.
<Project 
  Title="$Resources:core,onet_TeamWebSite;" 
  Revision="2" 
  ListDir="$Resources:core,lists_Folder;" 
  xmlns:ows="Microsoft SharePoint" 
  UIVersion="4">

Note:
In all the examples in this topic, the strings that begin with “$Resources” are constants that are defined in a .resx file. For example, “$Resources:onet_TeamWebSite” is defined in the core.resx file as “Team Site”. When you create a custom Onet.xml file, you can use literal strings.

This element can also have several other attributes. For more information, see Project Element (Site).

The Project element does not contain any attribute that identifies the site definition that it defines. Each Onet.xml file is associated with a site definition by virtue of the directory path in which it resides, which (except for the global Onet.xml) is %ProgramFiles%\Common Files\Microsoft Shared\web server extensions\14\TEMPLATE\SiteTemplates\site_type\XML\, where site_type is the name of the site definition, such as STS or MPS. The Onet.xml file for a web template is associated with the template by virtue of being in the .wsp package for the web template.

 

NavBars Element

The NavBars element contains definitions for the top navigation area that is displayed on the home page or in list views, and definitions for the side navigation area that is displayed on the home page.

Note:
A NavBar is not necessarily a toolbar. For example, it can be a tree of links.
<NavBars>
  <NavBar 
    Name="$Resources:core,category_Top;" 
    Separator="&amp;nbsp;&amp;nbsp;&amp;nbsp;" 
    Body="&lt;a ID='onettopnavbar#LABEL_ID#' href='#URL#' accesskey='J'&gt;#LABEL#&lt;/a&gt;" 
    ID="1002" />
  <NavBar 
    Name="$Resources:core,category_Documents;" 
    Prefix="&lt;table border='0' cellpadding='4' cellspacing='0'&gt;" 
    Body="&lt;tr&gt;&lt;td&gt;&lt;table border='0' cellpadding='0' cellspacing='0'&gt;&lt;tr&gt;&lt;td&gt;&lt;img src='/_layouts/images/blank.gif' id='100' alt='' border='0'&gt;&amp;nbsp;&lt;/td&gt;&lt;td valign='top'&gt;&lt;a id='onetleftnavbar#LABEL_ID#' href='#URL#'&gt;#LABEL#&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;&lt;/td&gt;&lt;/tr&gt;" 
    Suffix="&lt;/table&gt;" 
    ID="1004" />
    ...
</NavBars>

A NavBarLink element defines links for the top or side navigational area, and an entire NavBar section groups new links in the side area. Each NavBar element specifies a display name and a unique ID for the navigation bar, and it defines how to display the navigation bar.
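
For illustration, a link could be added to the top navigation bar defined above by nesting a NavBarLink element inside the corresponding NavBar element. The display name and URL below are only example values:

<NavBar 
  Name="$Resources:core,category_Top;" 
  ID="1002">
  <NavBarLink 
    Name="Team Documents" 
    Url="_layouts/viewlsts.aspx" />
</NavBar>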

For information about customizing the navigation areas on SharePoint Foundation pages, see Website Navigation.

ListTemplates Element

The ListTemplates section specifies the list definitions that are part of a site definition. This markup is still supported only for backward compatibility. New custom list types should be defined as Features. The following example is taken from the Onet.xml file for the Meetings Workspace site definition.

<ListTemplates>
  <ListTemplate 
    Name="meetings" 
    DisplayName="$Resources:xml_onet_mwsidmeetingDisp;" 
    Type="200" 
    BaseType="0" 
    Unique="TRUE" 
    Hidden="TRUE" 
    HiddenList="TRUE" 
    DontSaveInTemplate="TRUE" 
    SecurityBits="11" 
    Description="$Resources:xml_onet_mwsidmeetingDesc;"
    Image="/_layouts/images/itevent.gif">
  </ListTemplate>
  <ListTemplate 
    Name="agenda" 
    DisplayName="$Resources:xml_onet_mwsidagendaDisp;" 
    Type="201" 
    BaseType="0" 
    FolderCreation="FALSE" 
    DisallowContentTypes="TRUE" 
    SecurityBits="11" 
    Description="$Resources:xml_onet_mwsidagendaDesc" 
    Image="/_layouts/images/itagnda.gif">
  </ListTemplate>
    ...
</ListTemplates>

Each ListTemplate element specifies an internal name that identifies the list definition. The ListTemplate element also specifies a display name for the list definition and whether the option to add a link on the Quick Launch bar appears selected by default in the list-creation UI. In addition, this element specifies the description of the list definition and the path to the image that represents the list definition, both of which are displayed in the list-creation UI. If Hidden=”TRUE” is specified, the list definition does not appear as an option in the list-creation UI.

The ListTemplate element has two attributes for type: Type and BaseType. The Type attribute specifies a unique identifier for the list definition, and the BaseType attribute identifies the base list type for the list definition and corresponds to the Type value that is specified for one of the base list types that are defined in the global Onet.xml file.

For more information about creating new list types, see How to: Create a Custom List Definition.

DocumentTemplates Element

The DocumentTemplates section defines the document templates that are listed in the UI for creating a document library. This markup is still supported only for backward compatibility. You should define new document types as content types. For more information, see the Content Types section of this SDK.

<DocumentTemplates>
  ...
  <DocumentTemplate 
    Path="STS" 
    DisplayName="$Resources:core,doctemp_Word;" 
    Type="121" 
    Default="TRUE" 
    Description="$Resources:core,doctemp_Word_Desc;">
    <DocumentTemplateFiles>
      <DocumentTemplateFile 
        Name="doctemp\word\wdtmpl.dotx" 
        TargetName="Forms/template.dotx" 
        Default="TRUE" />
    </DocumentTemplateFiles>
  </DocumentTemplate>
  ...
</DocumentTemplates>

Each DocumentTemplate element specifies a display name, a unique identifier, and a description for the document template. If Default is set to TRUE, the template is the default template selected for document libraries that are created in sites based on one of the configurations in the site definition. Despite its singular name, a DocumentTemplate element actually can contain a collection of DocumentTemplateFile elements. The Name attribute of each DocumentTemplateFile element specifies the relative path to a local file that serves as the template. The TargetName attribute specifies the destination URL of the template file when a document library is created. The Default attribute specifies whether the file is the default template file.

Note:
An Onet.xml file in a web template cannot have a DocumentTemplate element.

For a development task that involves document templates, see How to: Add a Document Template, File Type, and Editing Application to a Site Definition.

BaseTypes Element

The BaseTypes element of the global Onet.xml file is used during site or list creation to define the basic list types on which all list definitions in SharePoint Foundation are based. Each list template that is specified in the list templates section is identified with one of the base types: Generic List, Document Library, Discussion Forum, Vote or Survey, or Issues List.

Note:
In SharePoint Foundation the BaseTypes section is implemented only in the global Onet.xml file, from which the following example is taken.
<BaseTypes>
  <BaseType 
    Title="Generic List" 
    Image="/_layouts/images/itgen.gif" 
    Type="0">
      <MetaData>
        <Fields>
          <Field 
            ID="{1d22ea11-1e32-424e-89ab-9fedbadb6ce1}" 
            ColName="tp_ID" 
            RowOrdinal="0" 
            ReadOnly="TRUE" 
            Type="Counter" 
            Name="ID" 
            PrimaryKey="TRUE" 
            DisplayName="$Resources:core,ID" 
            SourceID="http://schemas.microsoft.com/sharepoint/v3" 
            StaticName="ID">
          </Field>
          <Field 
            ID="{03e45e84-1992-4d42-9116-26f756012634}" 
            RowOrdinal="0" 
            Type="ContentTypeId" 
            Sealed="TRUE" 
            ReadOnly="TRUE" 
            Hidden="TRUE" 
            DisplayName="$Resources:core,Content_Type_ID;"
            Name="ContentTypeId" 
            DisplaceOnUpgrade="TRUE"
            SourceID="http://schemas.microsoft.com/sharepoint/v3" 
            StaticName="ContentTypeId" 
            ColName="tp_ContentTypeId">
          </Field>
          ...
      </Fields>
    </MetaData>
  </BaseType>
  ...
</BaseTypes>

Each BaseType element specifies the fields used in lists that are derived from the base type. The Type attribute of each Field element identifies the field with a field type that is defined in FldTypes.xml.

Caution:
Do not modify the contents of the global Onet.xml; doing so can break the installation. Base list types cannot be added. For information about how to add a list definition, see How to: Create a Custom List Definition.

Configurations Element

Each Configuration element in the Configurations section specifies the lists, modules, and Features that are created by default when the site definition configuration or web template is instantiated.

<Configurations>
  ...
  <Configuration 
    ID="0" 
    Name="Default">
    <Lists>
      <List 
        FeatureId="00BFEA71-E717-4E80-AA17-D0C71B360101" 
        Type="101" 
        Title="$Resources:core,shareddocuments_Title;" 
        Url="$Resources:core,shareddocuments_Folder;" 
        QuickLaunchUrl="$Resources:core,shareddocuments_Folder;/Forms/AllItems.aspx" />
      ...
    </Lists>
    <Modules>
      <Module 
        Name="Default" />
    </Modules>
    <SiteFeatures>
      <Feature 
        ID="00BFEA71-1C5E-4A24-B310-BA51C3EB7A57" />
      <Feature 
        ID="FDE5D850-671E-4143-950A-87B473922DC7" />
    </SiteFeatures>
    <WebFeatures>
      <Feature 
        ID="00BFEA71-4EA5-48D4-A4AD-7EA5C011ABE5" />
      <Feature 
        ID="F41CC668-37E5-4743-B4A8-74D1DB3FD8A4" />
    </WebFeatures>
  </Configuration>
  ...
</Configurations>

The ID attribute identifies the configuration (uniquely, relative to the other configurations, if any, within the Configurations element). If the Onet.xml file is part of a site definition, the ID value corresponds to the ID attribute of a Configuration element in a WebTemp*.xml file. (Web templates do not have WebTemp*.xml files.)

Each List element specifies the title of the list definition and the URL for where to create the list. You can use the QuickLaunchUrl attribute to set the URL of the view page to use when adding a link in the Quick Launch to a list that is created from the list definition. The value of the Type attribute corresponds to the Type attribute of a template in the list templates section. Each Module element specifies the name of a module that is defined in the modules section.

The SiteFeatures element and the WebFeatures element contain references to site collection and site-scoped Features to include in the site definition.

For post-processing capabilities, use an ExecuteUrl element within a Configuration element to specify the URL that is called following instantiation of the site.
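
For example, a configuration that calls a custom provisioning page after the site is created might include an entry along these lines (the page URL is hypothetical):

<Configuration ID="1" Name="CustomConfig">
  ...
  <ExecuteUrl Url="_layouts/CustomProvisioning.aspx" />
</Configuration>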

For more information about definition configurations, see How to: Create a Custom Site Definition and Configuration.

Modules Element

The Modules collection specifies a pool of modules. Any module in the pool can be referenced by a configuration if the module should be included in websites that are created from the configuration. Each Module element in turn specifies one or more files to include, often for Web Parts, which are cached in memory on the front-end web server along with the schema files. You can use the Url attribute of the Module element to provision a folder as part of the site definition. This markup is supported only for backward compatibility. New modules should be incorporated into Features.

<Modules>
    <Module 
      Name="Default" 
      Url="" 
      Path="">
      <File 
        Url="default.aspx" 
        NavBarHome="True">
        <View 
          List="$Resources:core,lists_Folder;
          /$Resources:core,announce_Folder;" 
          BaseViewID="0" 
          WebPartZoneID="Left" />
        <View 
          List="$Resources:core,lists_Folder;
          /$Resources:core,calendar_Folder;" 
          BaseViewID="0" 
          RecurrenceRowset="TRUE" 
          WebPartZoneID="Left" 
          WebPartOrder="2" />
        <AllUsersWebPart 
          WebPartZoneID="Right" 
          WebPartOrder="1"><![CDATA[<WebPart 
            xmlns="http://schemas.microsoft.com/WebPart/v2"
            xmlns:iwp="http://schemas.microsoft.com
            /WebPart/v2/Image">
            <Assembly>Microsoft.SharePoint, Version=12.0.0.0, 
              Culture=neutral, 
              PublicKeyToken=71e9bce111e9429c</Assembly>
            <TypeName>Microsoft.SharePoint.WebPartPages.ImageWebPart
            </TypeName>
            <FrameType>None</FrameType>
            <Title>$Resources:wp_SiteImage;</Title>
            <iwp:ImageLink>/_layouts/images/homepage.gif
            </iwp:ImageLink>
            <iwp:AlternativeText>$Resources:core,sitelogo_wss;
            </iwp:AlternativeText>
            </WebPart>]]>
        </AllUsersWebPart>
        <View 
          List="$Resources:core,lists_Folder;
          /$Resources:core,links_Folder;" 
          BaseViewID="0" 
          WebPartZoneID="Right" 
          WebPartOrder="2" />
          <NavBarPage 
            Name="$Resources:core,nav_Home;" 
            ID="1002" 
            Position="Start" />
          <NavBarPage 
            Name="$Resources:core,nav_Home;" 
            ID="0" 
            Position="Start" />
      </File>
    </Module>
  ...
</Modules>

The Module element specifies a name for the module, which corresponds to a module name that is specified within a configuration in Onet.xml.

The Url attribute of each File element in a module specifies the name of a file to create when a site is created. When the module includes a single file, such as default.aspx, NavBarHome=”TRUE” specifies that the file will serve as the destination page for the Home link in navigation bars. The File element for default.aspx also specifies the Web Parts to include on the home page and information about the home page for other pages that link to it.

A Module element can only be in an Onet.xml file that is part of a site definition, not in an Onet.xml file that is part of a web template.

For more information about using modules in SharePoint Foundation, see How to: Provision a File.

Components Element

The Components element specifies components to include in sites that are created through the definition.

<Components>
  <FileDialogPostProcessor ID="BDEADEE4-C265-11d0-BCED-00A0C90AB50F" />
</Components>

A Components element can only be included in an Onet.xml file that is part of a site definition, not in an Onet.xml file that is part of a web template.

ServerEmailFooter Element

The ServerEmailFooter element specifies the footer section used in email that is sent from the server.

<ServerEmailFooter>$Resources:ServerEmailFooter;</ServerEmailFooter>

A ServerEmailFooter element can only be included in an Onet.xml file that is part of a site definition, not in an Onet.xml file that is part of a web template.

A Look At : Application Management and Governance in SharePoint 2013

Summary: Learn how to govern applications for SharePoint 2013 by creating a customization policy and understanding the app model, branding, and life-cycle management.


How will you manage the applications that are developed for your environment? What customizations do you allow in your applications, and what are your processes for managing those applications?

 

For effective and manageable applications, your organization should consider the following:

  • Customization policy   SharePoint 2013 includes customizable features and capabilities that span multiple product areas, such as business intelligence, forms, workflow, and content management. Customization can introduce risks to the stability, maintenance, and security of the environment. To support customization while controlling its scope, you should develop a customization policy.
  • Life-cycle management   Follow best practices to manage applications and keep your environments in sync.
  • Branding   If you are designing an information architecture and a set of sites to use across an organization, consider including branding in your governance plan. A formal set of branding policies helps ensure that sites consistently use enterprise imagery, fonts, themes, and other design elements.
  • Solutions or apps for SharePoint?   Decide whether a solution or an app for SharePoint would be the best choice for specific customizations.

Get developer guidance about customizing and branding SharePoint 2013 on MSDN: Build sites for SharePoint 2013.

This article is part of a set of articles about governance. Other articles in the set describe other aspects of governance.

The What is governance? poster gives a summary of this content. Download the PDF version or Visio version, or Zoom into the model in full detail with Zoom.it from Microsoft.

Determine the types of customizations you want to allow and how to manage them. Your customization policy should include:

  • Service-level descriptions   What are the parameters for supporting and managing customizations in your environments? See Service-level agreements.
  • Guidelines for updating customizations   How do you manage changes to customizations, and how do you roll out those changes to your environments? Consider ways to manage source code, such as a source control system and standards for documenting the code.
  • Processes for analyzing   How do you understand whether a particular customization is working well in your environment, or how do you decide which ones to create, change, or retire?
  • Approved tools for customization   Consider development standards, such as coding best practices and the tools that you will use across your organization. For example, you should decide whether to allow the use of SharePoint Designer 2013 and Design Manager, and specify which site elements can be customized and by whom.
  • Process for piloting and testing customizations   How do you test and deploy customizations? How many people should be in a pilot testing group? What are your standards for testing and validating customizations?
  • Who is responsible for ongoing support   Who will be responsible for supporting customizations in your environments—individual teams or a central group?
  • Guidelines for packaging and deploying customizations   Do you have individual packages for each, or do you include several in a feature or solution? Which customizations should be apps for SharePoint instead of solutions? How do you ensure that customizations in one environment do not affect the rest of your SharePoint implementation?
  • Specific policies regarding each potential type of customization   What types of customizations do you allow?

    For more information about kinds of customizations and their potential risks, see the Customizations table later in this article. For more information about processes for managing customizations, see the white paper SharePoint Products and Technologies customization policy. Most of this content still applies to SharePoint 2013.

  • Policies around using the App Catalog and SharePoint Store   Which apps for SharePoint do you want to make available to your organization? Can users purchase apps directly? See Solutions or apps for SharePoint? later in this article for more information.

The highly customizable design of SharePoint products enables you to provide the look, behavior, or functionality that meets your business needs. Customizations can introduce risk to your environment, whether that risk is to the environment’s performance, availability, or supportability. Conversely, a “no customizations” policy severely restricts your organization’s ability to take advantage of the SharePoint platform.

Not all customizations are the same. You must decide carefully which kinds of customizations to allow in your environment, and ensure that the customizations support the performance, availability, and supportability you want for your environment. Your governance policy should balance a level of acceptable risk against the business needs of your organization.

What is considered a customization? All of the following are considered kinds of customizations in SharePoint products:

  • Configuration   Using the SharePoint user interface to configure SharePoint products.
  • Branding   Changing logos, styles, colors, master pages and page layouts, and so on to create a custom look for your SharePoint sites. See more about branding.
  • Custom code   Using developer tools to add or change functionality in SharePoint products or to interact with other applications. Risk can vary depending on the kind of functionality and the level of trust (full-trust solutions should rarely be used; consider apps for SharePoint first).
    Tip:
    Sandboxed solutions are deprecated in this release, so they are not the best option for custom code in the long term.

Some customizations have very little risk or impact on your environment. Others have the potential for much higher risk and impact. The following table provides examples of different kinds of customizations, the risk level associated with that kind of customization, and potential issues that you might face if you allow that kind of customization.

Customizations

  • Unsupported/High – Unsupported customizations such as direct changes to the database schema or modifying files on the file system. Considerations or impact: will not be supported through Microsoft Customer Support; will be unable to upgrade. Do not use.
  • Moderate to high – Creating applications that interact with or redirect actions in key pipelines, such as events, claims, and so on. Considerations or impact: potential for service outage or performance issues; might require rework at upgrade.
  • Moderate to low – Using a custom Web Part outside a sandbox environment, creating custom actions such as adding a menu item, or creating a custom site provisioning process. Considerations or impact: short- or long-term performance issues or page errors; might require rework at upgrade.
  • Low – Using solutions in a sandbox environment. Considerations or impact: short-term performance issues; you can avoid some performance issues by using resource throttling and quotas.
  • Very low to no risk – Using apps for SharePoint or using functionality within the product or configurations, such as associating a workflow with a list or using an instance of a built-in Web Part. Considerations or impact: minor configuration or page errors that would have to be addressed; apps can be uninstalled or updated.

Note:
For more information about customizations and upgrade, see Considerations for specific customizations.

 

 

Also, when you think through the customizations to allow in your environment, consider carefully whether a particular customization is necessary. If it recreates functionality that is already available in the product (such as creating a Web Part that does the same thing as the Content Editor Web Part or the Content by Query Web Part), then that might be unnecessary work.

Consider first whether the standard functionality can do what you want, or check the SharePoint Store to see if there is an app for SharePoint available that does what you need.

Follow these best practices to manage applications based on SharePoint 2013 throughout their life cycle:

  • Use separate development, preproduction, and production environments, and keep these environments as synchronized as possible so that you can accurately test your customizations.
  • Test all customizations before releasing the first time and after any updates have been made before you release them to your production environment.
  • Use source code control and solution and feature versioning to track changes to code.

Development, test, and production environments

Consistent branding with a corporate style guide makes for more cohesive-looking sites and easier development. Store approved themes in the theme gallery for consistency so that users will know when they visit the site that they are in the right place.

SharePoint 2013 includes a new feature to use for branding, Design Manager. By using Design Manager, you can create a visual design for your website with whatever web design tool or HTML editor you prefer and then upload that design into SharePoint. Design Manager is the central hub and interface where you manage all aspects of a custom design.

Creating the visual design of a site often fits into a larger process, in which multiple people or organizations are involved. For a roadmap of the tasks from a larger perspective, see Design and branding in SharePoint 2013.

SharePoint 2013 has a new development model based on apps for SharePoint. Apps for SharePoint are self-contained pieces of functionality that extend the capabilities of a SharePoint website. An app may include SharePoint features such as lists, workflows, and site pages, but it can also use a remote web application and remote data in SharePoint. An app has few or no dependencies on any other software on the device or platform where it is installed, other than what is built into the platform. Apps have no custom code that runs on the SharePoint servers.

The guidance for whether to use apps for SharePoint or SharePoint solutions is to:

  • Design apps for end users

    Apps for SharePoint:

    • Are easy for users (tenant administrators and site owners) to discover and install.
    • Use safe SharePoint extensions.
    • Provide the flexibility to develop future upgrades.
    • Can integrate with cloud-based resources.
    • Are available for both SharePoint Online and on-premises SharePoint sites.
  • Use farm solutions for administrators

    SharePoint solutions:

    • Can access the server-side object-model APIs that are needed to extend SharePoint management, configuration, and security
    • Can extend Central Administration, Windows PowerShell cmdlets, timer jobs, custom backups, and so on.
    • Are installed by administrators.
    • Can have farm, web application, or site-collection scope.

Go to MSDN to get more information about the new development model, Apps for SharePoint compared with SharePoint solutions, and Deciding between apps for SharePoint and SharePoint solutions.

Set a policy for using apps for SharePoint in your organization. Can users purchase and download apps? How do you make your organization’s apps available? How do you tell if they’re being used?

  • SharePoint Store   Determine whether users can purchase or download apps from the SharePoint Store.
  • App Catalog   Make specific apps for SharePoint available to your users by adding them to the App Catalog.
  • App requests   Configure app requests to control which apps are purchased and how many licenses are available.
  • Monitor apps   Monitor specific apps in SharePoint Server 2013 to check for errors and to track usage.

In the market

HTML5 SharePoint Pic Web Part Released and Available !!

This is a Sandbox web part control to display a matrix of image thumbnails.

Use it to build a Metro-style gallery or a picture gallery to show products, news, or a social team page that integrates with pictures, and so on. All this from any SharePoint picture library.

Supports: SharePoint 2010 & 2013 On-Premises Web Part, SharePoint Online Web Part

FEATURES OF THE WEB PART (ver. 1.0)

     

PREVIEW EXAMPLE OF THE CONTROL

How To : Use the Modelling SDK to create UML Diagrams

Use Case Diagrams

A use case diagram is a summary of who uses your application and what they can do with it. It
describes the relationships among requirements, users, and the major components of the system, and
provides an overall view of how the system is used.

Activity Diagrams
Use case diagrams can be broken down into activity diagrams. An activity diagram shows the software process as the flow of work through a series of actions. It can be a useful exercise to draw an activity diagram showing the major tasks that a user will perform with the software application.

 

Sequence Diagrams

 

Sequence diagrams display interactions between different objects. This interaction usually takes place as a series of messages between the different objects. Sequence diagrams can be considered an alternate view to the activity diagram. A sequence diagram can show a clear view of the steps in a use case.
Component Diagrams

 

Component diagrams help visualize the high-level structure of the software system. They show the major parts of a system and how those parts interact and depend on each other. One nice feature of component diagrams is that they show how the different parts of the design interact with each other, regardless of how those individual parts are actually implemented.

 

Class Diagrams

 

Class diagrams describe the objects in the application system. They do this without referencing any particular implementation of the system itself. This type of UML modeling diagram is also referred to as a conceptual class diagram.

How to: Export UML Diagrams to Image Files

You can export a UML document from Visual Studio to an image file under program control. For example, you might want to do this as part of automatic document generation.

If you want to export a document to an image manually, you can copy and paste the shapes from a diagram into other programs such as Word. You can also print documents to XPS format. For more information, see Export Images of Diagrams.

The following code defines a shortcut menu command, also known as a context menu command, that saves an image to a file.

Note Note

To make this code work as a menu command, you must incorporate it into a MEF component. For more information, see How to: Define a Menu Command on a Modeling Diagram.

The code first uses GetObject<T> to get the Diagram of the underlying implementation. This type has a CreateBitmap method.

namespace SaveToImage
{
  using System.ComponentModel.Composition; // for [Import], [Export]
  using System.Drawing; // for Bitmap
  using System.Drawing.Imaging; // for ImageFormat
  using System.Linq; // for collection extensions
  using System.Windows.Forms; // for SaveFileDialog
  using Microsoft.VisualStudio.Modeling.Diagrams;
    // for Diagram
  using Microsoft.VisualStudio.Modeling.ExtensionEnablement;
    // for IGestureExtension, ICommandExtension, ILinkedUndoContext
  using Microsoft.VisualStudio.ArchitectureTools.Extensibility.Presentation;
    // for IDiagramContext
  using Microsoft.VisualStudio.ArchitectureTools.Extensibility.Uml;
    // for designer extension attributes


  /// <summary>
  /// Called when the user clicks the menu item.
  /// </summary>
  // Context menu command applicable to any UML diagram 
  [Export(typeof(ICommandExtension))]
  [ClassDesignerExtension]
  [UseCaseDesignerExtension]
  [SequenceDesignerExtension]
  [ComponentDesignerExtension]
  [ActivityDesignerExtension]
  class CommandExtension : ICommandExtension
  {
    [Import]
    IDiagramContext Context { get; set; }

    public void Execute(IMenuCommand command)
    {
      // Get the diagram of the underlying implementation.
      Diagram dslDiagram = Context.CurrentDiagram.GetObject<Diagram>();
      if (dslDiagram != null)
      {
        string imageFileName = FileNameFromUser();
        if (!string.IsNullOrEmpty(imageFileName))
        {
          Bitmap bitmap = dslDiagram.CreateBitmap(
           dslDiagram.NestedChildShapes,
           Diagram.CreateBitmapPreference.FavorClarityOverSmallSize);
          bitmap.Save(imageFileName, GetImageType(imageFileName));
        }
      }
    }

    /// <summary>
    /// Called when the user right-clicks the diagram.
    /// Set Enabled and Visible to specify the menu item status.
    /// </summary>
    /// <param name="command"></param>
    public void QueryStatus(IMenuCommand command)
    {
      command.Enabled = Context.CurrentDiagram != null 
        && Context.CurrentDiagram.ChildShapes.Count() > 0;
    }

    /// <summary>
    /// Menu text.
    /// </summary>
    public string Text
    {
      get { return "Save To Image..."; }
    }


    /// <summary>
    /// Ask the user for the path of an image file.
    /// </summary>
    /// <returns>image file path, or null</returns>
    private string FileNameFromUser()
    {
      SaveFileDialog dialog = new SaveFileDialog();
      dialog.AddExtension = true;
      dialog.DefaultExt = "image.bmp";
      dialog.Filter = "Bitmap ( *.bmp )|*.bmp|JPEG File ( *.jpg )|*.jpg|Enhanced Metafile (*.emf )|*.emf|Portable Network Graphic ( *.png )|*.png";
      dialog.FilterIndex = 1;
      dialog.Title = "Save Diagram to Image";
      return dialog.ShowDialog() == DialogResult.OK ? dialog.FileName : null;
    }

    /// <summary>
    /// Return the appropriate image type for a file extension.
    /// </summary>
    /// <param name="fileName"></param>
    /// <returns></returns>
    private ImageFormat GetImageType(string fileName)
    {
      string extension = System.IO.Path.GetExtension(fileName).ToLowerInvariant();
      ImageFormat result = ImageFormat.Bmp;
      switch (extension)
      {
        case ".jpg":
          result = ImageFormat.Jpeg;
          break;
        case ".emf":
          result = ImageFormat.Emf;
          break;
        case ".png":
          result = ImageFormat.Png;
          break;
      }
      return result;
    }
  }
}

How To : Design the Physical Architecture to Support Collaborative Development and ALM of SharePoint Foundation 2010 Application

Introduction

This article explains the physical architecture which best fits collaborative development and ALM of a SharePoint Foundation 2010 application, what servers and tools are needed, and how they play key roles in ALM of SharePoint Foundation 2010. The purpose of this article is to provide an overall understanding of the various servers and farms connected to each other in SharePoint Foundation.

Background

A basic understanding of the different server operating systems and SharePoint Foundation 2010 is required.

Solution

Application Life-cycle Management (ALM) is the co-ordination of development life-cycle activities—including requirements, modeling, development, build, and testing. Recently, ALM has expanded beyond the application and the software development life cycle to also include business solution governance, infrastructure management, operations, and support.

You can use ALM to help align your organization in the context of a software solution in business, development, and operations. With an application development platform that supports ALM, you can provide integration between the various tools used and activities performed within each of these capabilities.

There are four main types of staging servers, along with a standalone developer environment, which play a key role in ALM of a SharePoint 2010 application:

  1. Development SharePoint Farm
  2. Team Foundation Server
  3. Integration/Testing Farm
  4. Production Farm
    +
    Developer’s Workstation

The figure below shows a physical architecture which depicts how each server is interconnected to support collaborative development and ALM for a SharePoint Foundation 2010 application:


Development SharePoint Farm

A SharePoint farm is fundamentally a collection of SharePoint role servers that provide for the base infrastructure required to house SharePoint sites. The farm level is the highest level of SharePoint architecture, providing a distinct operational boundary for a SharePoint environment. Each farm in an environment is a self-encompassing unit made up of one or more servers, such as web servers, service application servers, and SharePoint database servers.

A SharePoint development farm is needed because developers in an organization that makes heavy use of SharePoint often need environments to test new applications, web parts, solutions, and other SharePoint customizations. These developers often need a sandbox area where these farm-level features and solutions can be tested.

I have considered a two-tier topology for the SharePoint Foundation 2010 farm. However, it will be entirely based on the needs of your application. If your application is a relatively small intranet application, then you can choose a single-tier topology, or if you are going to integrate another search server with Foundation, then you can choose a three-tier topology with an application server as the middle tier (remember that SharePoint Foundation 2010 doesn’t include enterprise search). It may make sense to deploy one or more development farms so that developers have the opportunity to run their tests and develop software for SharePoint independent of the existing production environment.

There are basically two types of servers included in two-tier development farm of SharePoint Foundation 2010:

  1. Web server
  2. Content database server

In the above figure there are three front-end web servers and one SharePoint content database server; however, you can choose a single front-end web server connected to the content database server, depending on your application’s needs and the architecture of your production environment. All web servers share the same content database. This is called a two-tier deployment, where the SharePoint server components and the content database are installed on separate servers. As mentioned before, you can choose a one-tier, two-tier or three-tier deployment topology based on your application architecture and the topology of your production environment.

Each web server has SharePoint Foundation 2010 and the SharePoint extension for TFS 2010 installed on it. The SharePoint extension for TFS 2010 is needed to connect to Team Foundation Server for source control, build management and project management.

Advantages of the Development SharePoint Farm:

  1. Single place where SharePoint Admin can integrate all the final artifacts from multiple developers.
  2. Developers can sync the latest SharePoint site to their standalone developer workstations.
  3. Admin can easily approve artifacts and migrate to integration server.
  4. It is a unit testing environment for developers where they can test dependent functionality or farm level features.

Team Foundation Server

Team Foundation Server plays a key role in ALM by providing source control, build management and work item tracking. You can install TFS on the same server that hosts the content database, but if you are going to use TFS build management it is advisable to have a separate Team Foundation Server, because processing builds is CPU intensive.

As per the above figure, there is a separate Team Foundation Server which is connected to the SharePoint farm as well as the standalone developer workstations, so that it can provide source control for customized content as well as developers’ artifacts and resources.

Advantages of TFS
  1. Source control for SharePoint artifacts and customization
  2. Build management for SharePoint
  3. Work item and bug tracking tool for SharePoint
  4. Admin console for all management activity
  5. Easy integration with SharePoint foundation server and VS 2010
  6. Easy check-in & check-out
  7. Web based console to manage ALM activity

Developer’s Workstation

As per the above figure, the developers’ environment includes two developer workstations. In practice, you can have as many workstations as your development team requires.

Each developer workstation should have a Windows 7 or Windows Vista operating system with a standalone SharePoint Foundation server and a local content database, so that one developer’s work doesn’t affect another developer’s, and each developer can debug artifacts locally.

Each developer workstation will have the following installed:

  1. Windows 7 or Windows Vista 64-bit OS
  2. Standalone SharePoint Foundation 2010 server
  3. SharePoint Designer 2010
  4. Visual Studio 2010 (connected to TFS)

Developer workstations should be connected to Team Foundation Server 2010 so that when a developer completes an artifact, he can check it in to TFS and other developers can take the latest code from TFS if needed. This way, parallel development can happen without affecting other developers’ work.

Integration/Testing Farm

Any production SharePoint environment should have a test environment in which new SharePoint web parts, solutions, service packs, patches, and add-ons can be tested. It is critical to deploy test farms, because many SharePoint add-ons could potentially disrupt or corrupt the formatting or structure of a production environment, and trying to test these new solutions on site collections or different web applications is not enough because the solutions often install directly on the SharePoint servers themselves. If there is an issue, the issue will be reflected in the entire farm.

Integration or testing server farm should be similar to the existing environments, with the same add-ons and solutions installed and should ideally include restores of production site collections to make it as similar as possible to the existing production environment. All changes and new products or solutions installed into an environment should subsequently be tested first in this environment.

Integration/testing servers will have final SharePoint sites and site collection as per the business requirements. QA will test all the business functionality here. Customer can also do their ‘User acceptance test’ before going live to the production server.

After the user acceptance test has passed, all the sites and site collections will be deployed to the production server.

Advantages of the integration/testing farm:

  1. Clean environments and same physical architecture as production
  2. QA can test all dependent business functionality at one place
  3. Customer can participate in UAT
  4. Easy deployment/migration from integration testing server to production server

Production Farm

The final stage is rolling your farm into a production environment. At this stage, you will have incorporated the necessary solution and infrastructure adjustments that were identified during the user acceptance test stage. These servers are generally in the customer’s premises. Development team and testing team do not have control over it.

There are various 3rd party tools available in the market for SharePoint data protection, administration, migration, compliance and integration.


Summary

In this way you can design a physical architecture where the development SharePoint farm and the developer workstations are integrated with TFS 2010. TFS and the content database are connected to the integration/testing farm, where all the artifacts and content are brought together for QA and UAT. Finally, after UAT, everything is deployed to the production farm.

You can use virtual machines (VMs) for all the servers and workstations for a more effective infrastructure, because if a server crashes for any reason you can quickly create a new VM for the needed OS from images.

Note: In the above figure, the integration/testing farm and the production farm are shown as single servers just for clarity; in reality they will be as large as the development farm, with a number of front-end web servers and content database servers. All servers run Windows Server 2008 R2 SP2 64-bit. See the Microsoft documentation for more information on hardware and software requirements for SharePoint Foundation 2010.

How to: Customize the SharePoint HTML Editor Field Control using ECM

You can use the HTML Editor field control to insert HTML content into a publishing page. Page templates that include a Publishing HTML column type also include the HTML Editor field control.

This editor has special capabilities, such as customized styles, editing constraints, reusable content support, a spelling checker, and use of asset pickers to select documents and images to insert into a page’s content. This topic describes how to modify some features and attributes of the HTML Editor field control.


If the content type of a page layout supports the Page Content column, you can add a Rich HTML field control to your page layout by using markup such as the following.

<PublishingWebControls:RichHtmlField id="ArticleAbstract" FieldName="ArticleAbstract" 
          AllowExternalUrls="false" 
          AllowFonts="true" 
          AllowReusableContent="false" 
          AllowHeadings="false"
          AllowHyperlinks="false"
          AllowImages="false"
          AllowLists="false"
          AllowTables="false"
          AllowTextMarkup="false" 
          AllowHTMLSourceEditing="false"
          DisableBasicFormattingButtons="false"
          runat="server"/>

In the example above, RichHtmlField is the name of the field control that provides the richer HTML editing experience. Attributes such as AllowFonts and AllowTables specify restrictions on the field.

The HTML field control allows font tags, but the control does not allow URLs that are external to the current site collection, reusable content stored in a centralized list, standard HTML heading tags, hyperlinks, images, numbered or bulleted lists, tables, or text markup.

Table 1. HTML editor field control properties

  • AllowExternalUrls: Only URLs internal to the current site collection may be referenced in a link or an image.
  • AllowFonts: Content may contain Font tags.
  • AllowHtmlSourceEditing: The HTML editor can be switched into a mode that allows the HTML to be edited directly.
  • AllowReusableContent: Content may contain reusable content fragments stored in a centralized list.
  • AllowHeadings: Content may contain HTML heading tags (H1, H2, and so on).
  • AllowTextMarkup: Content may contain bold, italic, and underlined text.
  • AllowImages: Content may contain images.
  • AllowLists: Content may contain numbered or bulleted lists.
  • AllowTables: Content may contain table-related tags such as <table>, <tr>, and <td>.
  • AllowHyperlinks: Content may contain links to other URLs.

The fuller property reference adds the following details:

  • AllowHtmlSourceEditing: When set to false, the HTML editor cannot be switched into HTML source editing mode.
  • AllowHyperlinks: Gets or sets the constraint that allows hyperlinks to be added to the HTML. If this flag is set to false, <A>, <AREA>, and <MAP> tags are removed from the HTML. Default is true. This property also determines whether the editing user interface (UI) enables these operations.
  • AllowImageFormatting: Gets or sets image formatting items. This restriction disables only menus and does not force the content to adhere to this restriction.
  • AllowImagePositioning: Gets or sets the position of the image. This restriction disables only menus and does not force the content to adhere to this restriction.
  • AllowImageStyles: Gets or sets whether the Image Styles menu is enabled. This restriction disables only the menu and does not force the content to adhere to this restriction.
  • AllowInsert: Gets or sets whether Insert options are shown. This restriction disables only the menu and does not force the content to adhere to this restriction.
  • AllowLists: Gets or sets the constraint that allows list tags to be added to the HTML. If this flag is set to false, <LI>, <OL>, <UL>, <DD>, <DL>, <DT>, and <MENU> tags are removed from the HTML. Default is true. This also determines whether the editing UI enables these operations.
  • AllowParagraphFormatting: Gets or sets whether paragraph formatting items are enabled. This restriction disables only menus and does not force the content to adhere to this restriction.
  • AllowStandardFonts: Gets or sets whether standard fonts are enabled. This restriction disables only menus and does not force the content to adhere to this restriction.
  • AllowStyles: Gets or sets whether the Styles menu is enabled. This restriction disables only the menu and does not force the content to adhere to this restriction.
  • AllowTables: Gets or sets the constraint to allow tables to be added when editing this field.
  • AllowTableStyles: Gets or sets whether the Table Styles menu is enabled. This restriction disables only the menu and does not force the content to adhere to this restriction.
  • AllowTextMarkup: Gets or sets the constraint to allow text markup to be added when editing this field.
  • AllowThemeFonts: Gets or sets whether theme fonts are enabled. This restriction disables only menus and does not force the content to adhere to this restriction.
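If your page layout uses code-behind, the same restrictions can also be set programmatically. The following is a minimal sketch rather than a definitive implementation: the control ID ArticleAbstract matches the markup sample above, while the base class and the OnInit placement are assumptions for illustration only.

using System;
using Microsoft.SharePoint.Publishing;
using Microsoft.SharePoint.Publishing.WebControls;

public partial class ArticlePageLayout : PublishingLayoutPage
{
    // Declared here so the markup control with ID "ArticleAbstract" is available in code-behind.
    protected RichHtmlField ArticleAbstract;

    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);

        // Mirror the declarative restrictions from the markup sample above.
        ArticleAbstract.AllowExternalUrls = false;
        ArticleAbstract.AllowFonts = true;
        ArticleAbstract.AllowReusableContent = false;
        ArticleAbstract.AllowHeadings = false;
        ArticleAbstract.AllowHyperlinks = false;
        ArticleAbstract.AllowImages = false;
        ArticleAbstract.AllowLists = false;
        ArticleAbstract.AllowTables = false;
        ArticleAbstract.AllowTextMarkup = false;
        ArticleAbstract.AllowHtmlSourceEditing = false;
    }
}

Setting the properties in code can be useful if the restrictions depend on runtime conditions, for example the current user's permissions.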
Predefined Table Formats

The HTML editor includes a set of predefined table formats, but it can be customized to fit the styling of an individual page. Each table format is a collection of cascading style sheet (CSS) classes for each table tag. You can define styling for the first and last row, odd and even rows, first and last column, and so on.

The HTML Editor dynamically applies certain styles from the referenced style sheets on the page and makes them available to users when formatting a table. For a custom style to be available when formatting a table, the relevant class names must follow the PREFIXTableXXX-NNN format, where:

  • PREFIX is ms-rte by default, but you can override the default by using the PrefixStyleSheet property of the RichHtmlField control.
  • XXX is the specific table section, such as EvenRow or OddRow.
  • NNN is the name to identify the table styling.

The following example presents a complete set of classes for a table styling format.

.ms-rteTable-1 {border-collapse:collapse;border-top:gray 1.5pt;
    border-left:gray 1.5pt;border-bottom:gray 1.5pt;
    border-right:gray 1.5pt;border-style:solid;}
.ms-rteTableHeaderRow-1 {color:Green;background:yellow;text-align:left}
.ms-rteTableHeaderFirstCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableHeaderLastCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableHeaderOddCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableHeaderEvenCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableOddRow-1 {color:black;background:#FFFFDD;}
.ms-rteTableEvenRow-1 {color:black;background:#FFB4B4;}
.ms-rteTableFirstCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableLastCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableOddCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableEvenCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableFooterRow-1 {color:blue;font-style:bold;
    font-weight:bold;background:white;border-top:solid gray 1.0pt;
    border-bottom:solid gray 1.0pt;border-right:solid silver 1.0pt; 
    border-style:solid;}
.ms-rteTableFooterFirstCol-1 {padding:0in 5.4pt 0in 5.4pt;
    border-top:solid gray 1.0pt;text-align:left}
.ms-rteTableFooterLastCol-1 {padding:0in 5.4pt 0in 5.4pt;
    border-top:solid gray 1.0pt;text-align:left}
.ms-rteTableFooterOddCol-1 {padding:0in 5.4pt 0in 5.4pt;
    text-align:left;border-top:solid gray 1.0pt;}
.ms-rteTableFooterEvenCol-1 {padding:0in 5.4pt 0in 5.4pt;
    text-align:left;border-top:solid gray 1.0pt;}

Microsoft SharePoint Server 2010 includes a set of default table styles. However, if the system detects new styles that did not originate in the default .css file, it removes the default set and presents only those newly defined styles in the HTML editor dialog box.

Spelling Checker

In SharePoint Server 2010, the HTML editor includes a spelling checker, which can be customized by developers by using the SpellCheckV4Action Web control and the SpellCheckToolbarButton Web control. The spelling checker action registers client files and data during a spelling check.

It also includes a method to get the console tab and checks user rights to verify that the current user is allowed to perform a spelling check operation on the selected item. The spelling checker action calls the appropriate ECMAScript (JavaScript, JScript) code, and sends information to the client about available spellings and the default language to use for the request.

FREE Web Part – Random “Quote of the day” SP 2010 Web Part

The “Random Quote of the Day” Web Part randomly selects a quote from the specified SharePoint list or from the selected RSS feed.

A timer can then be set and the web part will read a new, random post and place it within the web part.

It is great for team/company motivation, or to display code snippets on a Team or KB site – your imagination is the limit.

A “Starter” Excel list containing quotes for a quick start is supplied with the download package.


The Web Part can be used with SharePoint 2010.

You can configure the following web part properties:

  • the SharePoint list
  • the SharePoint list column, or the RSS feed URL for external tips
  • enable or suppress the daily calendar display
  • show an optional picture or calendar
  • show a tip every day or on every page refresh
  • CSS settings for individual formatting

 

Contact me at tomas.floyd@outlook.com for this cool free web part – Totally free of charge

How To : Peel back the layers of data and information and reveal meaningful BI with SharePoint

Business Intelligence (BI) often takes on the mantle of exotic, rare, and almost unattainable technology. But at its core, business intelligence is simply a method of reporting on what happened.



Granted it is a type of reporting that reaches beyond an ordinary peek into the rearview mirror of past business events; business intelligence helps to spot future trends, make informed go/no-go decisions, or identify potential threats. BI technology is strongest when it rests on a large supply of valid, diverse and current data, and can leverage the proper tools to help users understand and visualize queries about that data.

This blog post is about how SharePoint 2013 can help users solve practical business information problems, even when they don’t have the time or the budget to custom build an enterprise-scale BI system. The underlying premise is to show how SharePoint 2013 can provide a reasonable cost-benefit ratio and justify investing in BI technology.


Before we jump into SharePoint 2013 and its capabilities, let’s take a high-level look at Business Intelligence.

What Problems Can BI Solve?


If the only tool you have in your toolbox is a hammer, then every problem might look like a nail. The fact is, most businesses are able to solve most problems without spending a dime on more technology. In other words, the ‘hammer’ most businesses have been using works just fine, because most of their problems look like nails. The challenge they face only comes into focus when their competition is able to solve the same type of problems, but they do it faster, cheaper, and with less effort. Obviously, this can be a doomsday scenario for the company falling behind, technologically speaking.

That said, Business Intelligence is a great tool…but what problems will it solve? Perhaps a better question would be…how do I figure out if BI can help my company? You are not alone in asking these questions. Just because we have the tools to do something amazing like BI doesn’t mean you need it or can afford it. But it certainly would be beneficial for you to find out if and how a Business Intelligence capability would help your business.

The starting-line to find out if BI makes sense for your organization runs right through your own conference room. You need to sit down with your senior executives and managers and talk to them about the information they rely on to run their part of the business. What information do they need, when do they need it, what do they do with it, what information are they missing, and so on? Initiate this type of conversation and you will, undoubtedly, open up a window of opportunity to discuss the merits of Business Intelligence.

SharePoint 2013 and Business Intelligence

Assuming that you see value in establishing BI capabilities in your organization, a very good first step would be to evaluate Microsoft’s SharePoint 2013. Because Microsoft products are generally used throughout both the back-office and front-office of most businesses, SharePoint 2013 is a very powerful tool to integrate the data with the technical systems required to build BI capabilities.

The main theme for BI is aggregation of data from multiple sources and then making that data available when, where, and how it is needed. BI must also be in complete alignment with all corporate goals while it supports the needs of individual managers who are responsible for achieving those goals. SharePoint 2013 is designed to access information and put it in the hands of employees when and where they need it. Because of SharePoint 2013’s capabilities to enable collaboration and teamwork, its very nature aligns the goals of the business with the goals of the employees.

Data Warehousing Measures and Dimensions

Perhaps the most fundamental requirement of BI is the need for information or data. Often this data is distributed throughout multiple databases and must be aggregated in some form.

In data warehousing, which is the term used to describe the functions necessary to aggregate, store and access data for the purpose of Business Intelligence and analytics, the data is often loaded into Online Analytical Processing (OLAP) cubes. The data stored in a cube can be sorted and filtered based on measures and dimensions. This technique lets users query the cube based on practical business categories which enable calculations to be made such as sum, count, average, min/max, etc. This is called a measure.

The other characteristic used in a cube is called a dimension. Dimensions are a collection of information or references about a measurable event. Each dimension can be measured.
For example, let’s say you wanted to run a report that gives you an up-to-the-minute total on sales volume and the number of units sold for each region of your company. In this example, the regions would be the dimensions and the sales volume and number of units are the measures.

SharePoint is designed to access cubes and work with the data stored in the cube, based on the available measures and dimensions.
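To make the measure and dimension idea more concrete, here is a minimal C# sketch that queries such a cube directly with ADOMD.NET, the Analysis Services client library. The server name, catalog, cube, measure and dimension names in the connection string and MDX are hypothetical placeholders; in practice, tools such as Excel Services or PerformancePoint issue equivalent queries for you.

using System;
using Microsoft.AnalysisServices.AdomdClient;

class RegionalSalesQuery
{
    static void Main()
    {
        // Hypothetical Analysis Services instance and catalog.
        using (var connection = new AdomdConnection("Data Source=localhost;Catalog=SalesAnalysis"))
        {
            connection.Open();

            // Sales Amount and Unit Count are the measures; Region is the dimension.
            var command = new AdomdCommand(
                "SELECT { [Measures].[Sales Amount], [Measures].[Unit Count] } ON COLUMNS, " +
                "[Region].[Region].Members ON ROWS " +
                "FROM [Sales]", connection);

            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // The first column holds the region member; the remaining columns hold the measure values.
                    Console.WriteLine("{0}: {1} / {2}", reader.GetValue(0), reader.GetValue(1), reader.GetValue(2));
                }
            }
        }
    }
}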

Key Performance Indicators

Business Intelligence enables visualization of raw data in the form of charts, graphs, pictures, etc. Typically Key Performance Indicators (KPIs), Score Cards, and Dashboards use the raw data and turn it into something that can be easily consumed by a viewer. For example, a project status KPI is commonly displayed as green, yellow or red lights to indicate that the project is on target/no issues, there are minor issues, or the project is in trouble. This BI technique is an easy way to visualize the data and cut through all the non-essential information and get to the point. This also allows the viewer to quickly gauge if the corporate goals are being met or are in jeopardy.

SharePoint 2013 Business Intelligence Solutions

SharePoint 2013 has several products that may be used as part of a BI system. The following is a list of commonly used MS components, all or just some of them can be used to create a practical and powerful BI system:

  • BI Data Services – MS SQL Server Data Services and Integration Services (both used to extract, transform and load data from disparate sources)
  • BI Engine – MS SQL Server Analysis Services (supports OLAP cubes by letting you design, create, and manage multidimensional structures that contain data aggregated from other data sources, such as relational databases)
  • PowerShell (a Microsoft task automation framework consisting of a command-line shell and associated scripting language built on .NET technology)
  • PowerPivot for SharePoint (an Analysis Services server running in SharePoint mode, providing server hosting of PowerPivot data)
  • Microsoft Excel (commonly used spreadsheet with PivotTables and PivotCharts; can be used with SharePoint)
  • Microsoft PerformancePoint Designer (integrated with SharePoint to create dashboards, scorecards, and analytics)

Setting Up SharePoint 2013


When SharePoint 2013 is installed and configured, Central Administration (CA) is provisioned. Central Administration is where you control all the settings and features of SharePoint sites and web applications, and of service applications like Excel Services or PerformancePoint. CA is a convenient tool that helps in linking the applications and tools required by SharePoint to set up a BI system. You will also use Microsoft’s PowerShell to set up the infrastructure for SharePoint sites so they can run in a multi-tenant environment on a single physical or virtual server.

Excel Services or Performance Point

You can use either or both of these tools to create dashboards. Either one will help you establish trusted locations (e.g. http:// links), data providers, libraries, and databases.
Excel is often the easiest and most familiar tool to display and analyze BI data. Since Excel has been around a long time and so many people are experienced in using it, it is a good choice as the front-end tool for your BI environment.

With Excel you can add measures and dimensions from a source data cube (created by Analysis Services) and then use the Pivot Chart capabilities in Excel to select the fields you want to display, such as sales amount, product categories, sales by geography, etc. You can also create Pivot Tables if you want to display a spreadsheet with multiple columns and rows, also using the fields from the cube.

SharePoint’s Practical Solution


Microsoft and SharePoint have all the tools you need to create a very robust and practical BI solution. It is probable that you currently own licenses to many of the components, if not all, that are required to build a solution. If you are interested in Business Intelligence and you would consider a Microsoft-based solution, you might find that you can be up and running in a matter of days with a minimal investment.

How To : Use a Site mailbox to collaborate with your team

Share documents with others


Every team has documents of some kind that need to be stored somewhere, and usually need to be shared with others. If you store your team’s documents on your SharePoint site, you can easily leverage the Site Mailbox app to share those documents with those who have site access.

 Important    When users view a site mailbox in Outlook, they will see a list of all the documents in that site’s document libraries. Site mailboxes present the same list of documents to all users, so some users may see documents they do not have access to open.

If you’re using Exchange, your documents will also appear in a folder in Outlook, making it even easier to forward documents to others.

Forwarding a document from the site mailbox

Organizations, and teams within organizations, often have several different email threads going in all directions at one time. It’s easy for lines to cross, information to get lost or overlooked, and for communication to break down. Site mailboxes enable you to store team or project-related email in one place, so that everyone on the team can see all communication.

On the Quick Launch, click Mailbox.

Mailbox on the Quick Launch

The site mailbox opens as a second, separate inbox and folder structure, next to your personal email account. Mail sent to and from the site mailbox account will be shared between all those who have Contributor permissions on the SharePoint site.

 Tip    Did you know you can also use a site mailbox to collaborate on documents?

Add a site mailbox as a mail recipient

By including the site mailbox on an important email thread, you ensure that a copy of the information in that thread is stored in a location that can be accessed by anyone on the team.

Simply add the site mailbox in the To, CC, or BCC line of an email message.

Email message with site mailbox included in CC field.

You could even consider adding the site mailbox email address to any team contact groups or distribution lists. That way, relevant email automatically gets stored in the team’s site mailbox.

Send email from the site mailbox

When you write and send email from the site mailbox, it will look as though it came from you.

Because everyone with Contributor permissions on a site can access the site mailbox, several people can work together to draft an email message.

To compose a message, simply click New Mail.

New mail button for site mailboxes.

This will open a new message in your site mailbox.

New mail message in a site mailbox.

How To : Use Javascript to enable Listview Folder Navigation

When a list view web part is added to a page and users navigate to different folders in the list view, there’s no way for them to know where they are in the folder hierarchy; the breadcrumb for the list view web part is simply missing. If there were a way of showing users the exact location in the folder hierarchy they are currently in (as shown in the image below), wouldn’t that be great?


Image 1: Folder Navigation in action

Deploy the FolderNavigation.js File

Download the FolderNavigation.js file; you can then deploy the script either to the Layouts folder (for full trust solutions) or to the Master Page gallery (for SharePoint Online or full trust). I recommend deploying to the Master Page gallery so that it works without modification even if you move to the cloud. If you deploy to the Master Page gallery you don’t need to make any changes, but if you deploy to the Layouts folder you need to make a small change in the script, as described in the section ‘Option 2: Deploy in Layouts Folder’.

 

Option 1: Deploy in Master Page Gallery (Suggested)

If you are dealing with SharePoint Online, you don’t have the option to deploy to the Layouts folder; in that case you need to deploy to the Master Page gallery. Note that deploying the script to other libraries (like Site Assets or a site library) will not work; it must be in the Master Page gallery. Otherwise you can deploy to the Layouts folder as described in the next section. To deploy to the Master Page gallery manually, follow these steps:

  1. Download the JavaScript file attached.
  2. Navigate to Root web => site settings => Master Pages (under group ‘Web Designer Galleries’).
  3. From the ‘New Document’ ribbon, choose ‘JavaScript Display Template’, then upload the FolderNavigation.js file and set its properties as shown below:

    Image 2: Upload the JavaScript file in master page gallery

    In the above image we’ve set the content type to ‘JavaScript Display Template’ and the ‘Target Control Type’ to ‘View’, so that the js file is used in list views. I’ve also set the target scope to ‘/’, which means it applies to all sites and subsites. If you have a site collection ‘/sites/HR’, you need to use ‘/sites/HR’ instead. You can also use a List Template ID if you need to.

 

Option 2: Deploy in Layouts Folder

If you are deploying the FolderNavigation.js file to the Layouts folder, you need to make a small change to the RegisterModuleInit call in the downloaded script, as shown below:

RegisterModuleInit('FolderNavigation.js', folderNavigation);

 

In this case the first parameter of ‘RegisterModuleInit’ is the path relative to the Layouts folder. If you deploy your file to the path ‘/_layouts/folder1’, then you need to modify the code as shown below:

RegisterModuleInit('Folder1/FolderNavigation.js', folderNavigation);

 

If you are deploying to other subfolders within the Layouts folder, you need to update the path accordingly. From what I’ve found so far, you can only deploy to the Layouts folder and the Master Page gallery, but if you find that deploying to other folders works, please share. Basically, the first parameter of RegisterModuleInit is the file path, either:

  • Relative to ‘_Layouts’ folder
  • Or Master page gallery in which case the path is started with ‘/_catalogs/masterpage’

 

Use the FolderNavigation.js in List View WebPart

Once you have deployed the JavaScript file to the Master Page gallery or the Layouts folder, you need to reference it from the List View web part. Edit the list view web part properties and then, under the ‘Miscellaneous’ section, put the file URL in the JS Link property as shown below:

Image 3: List View WebPart’s JS Link property

 

Few points to note for this JS Link:

  • If you have deployed the js file in the Master Page gallery, you can use the ~site or ~siteCollection token, which refers to the current site or current site collection respectively. The URL for JS Link then might be ‘~siteCollection/_catalogs/masterpage/FolderNavigation.js’ or ‘~site/_catalogs/masterpage/FolderNavigation.js’. If you deploy the file in the site collection Master Page gallery only, you need to use the ~siteCollection token in subsites so that they use the JavaScript file from the site collection.
  • If you have deployed to the Layouts folder, you need to use the corresponding path in the JS Link property. For example, if you are deploying the file directly in the Layouts folder, use ‘/_layouts/15/FolderNavigation.js’; if you are deploying in ‘Layouts/Folder1’, use ‘/_layouts/15/Folder1/FolderNavigation.js’. Just to repeat, if you deploy to the Layouts folder you need to make a small change in the JavaScript file, as described under the ‘Option 2: Deploy in Layouts Folder’ section.

 

JavaScript file Description

In case you are interested to know how the code works, the code snippet is given below:

JavaScript

function replaceQueryStringAndGet(url, key, value) { 
    var re = new RegExp("([?|&])" + key + "=.*?(&|$)", "i"); 
    var separator = url.indexOf('?') !== -1 ? "&" : "?"; 
    if (url.match(re)) { 
        return url.replace(re, '$1' + key + "=" + value + '$2'); 
    } 
    else { 
        return url + separator + key + "=" + value; 
    } 
} 
 
 
function folderNavigation() { 
    function onPostRender(renderCtx) { 
        if (renderCtx.rootFolder) { 
            var listUrl = decodeURIComponent(renderCtx.listUrlDir); 
            var rootFolder = decodeURIComponent(renderCtx.rootFolder); 
            if (renderCtx.rootFolder == '' || rootFolder.toLowerCase() == listUrl.toLowerCase()) 
                return; 
 
            //get the folder path excluding list url. removing list url will give us path relative to current list url 
            var folderPath = rootFolder.toLowerCase().indexOf(listUrl.toLowerCase()) == 0 ? rootFolder.substr(listUrl.length) : rootFolder; 
            var pathArray = folderPath.split('/'); 
            var navigationItems = new Array(); 
            var currentFolderUrl = listUrl; 
 
            var rootNavItem = 
                { 
                    title: 'Root', 
                    url: replaceQueryStringAndGet(document.location.href, 'RootFolder', listUrl) 
                }; 
            navigationItems.push(rootNavItem); 
 
            for (var index = 0; index < pathArray.length; index++) { 
                if (pathArray[index] == '') 
                    continue; 
                var lastItem = index == pathArray.length - 1; 
                currentFolderUrl += '/' + pathArray[index]; 
                var item = 
                    { 
                        title: pathArray[index], 
                        url: lastItem ? '' : replaceQueryStringAndGet(document.location.href, 'RootFolder', encodeURIComponent(currentFolderUrl)) 
                    }; 
                navigationItems.push(item); 
            } 
            RenderItems(renderCtx, navigationItems); 
        } 
    } 
 
 
    //Add a div and then render navigation items inside span 
    function RenderItems(renderCtx, navigationItems) { 
        if (navigationItems.length == 0) return; 
        var folderNavDivId = 'foldernav_' + renderCtx.wpq; 
        var webpartDivId = 'WebPart' + renderCtx.wpq; 
 
 
        //a div is added beneth the header to show folder navigation 
        var folderNavDiv = document.getElementById(folderNavDivId); 
        var webpartDiv = document.getElementById(webpartDivId); 
        if(folderNavDiv!=null){ 
            folderNavDiv.parentNode.removeChild(folderNavDiv); 
            folderNavDiv =null; 
        } 
        if (folderNavDiv == null) { 
            var folderNavDiv = document.createElement('div'); 
            folderNavDiv.setAttribute('id', folderNavDivId) 
            webpartDiv.parentNode.insertBefore(folderNavDiv, webpartDiv); 
            folderNavDiv = document.getElementById(folderNavDivId); 
        } 
 
 
        for (var index = 0; index < navigationItems.length; index++) { 
            if (navigationItems[index].url == '') { 
                var span = document.createElement('span'); 
                span.innerHTML = navigationItems[index].title; 
                folderNavDiv.appendChild(span); 
            } 
            else { 
                var span = document.createElement('span'); 
                var anchor = document.createElement('a'); 
                anchor.setAttribute('href', navigationItems[index].url); 
                anchor.innerHTML = navigationItems[index].title; 
                span.appendChild(anchor); 
                folderNavDiv.appendChild(span); 
            } 
 
            //add arrow (>) to separate navigation items, except the last one 
            if (index != navigationItems.length - 1) { 
                var span = document.createElement('span'); 
                span.innerHTML = '&nbsp;> '; 
                folderNavDiv.appendChild(span); 
            } 
        } 
    } 
 
 
    function _registerTemplate() { 
        var viewContext = {}; 
 
        viewContext.Templates = {}; 
        viewContext.OnPostRender = onPostRender; 
        SPClientTemplates.TemplateManager.RegisterTemplateOverrides(viewContext); 
    } 
    //delay the execution of the script until clienttempltes.js gets loaded 
    ExecuteOrDelayUntilScriptLoaded(_registerTemplate, 'clienttemplates.js'); 
}; 
 
//RegisterModuleInit ensure folderNavigation() function get executed when Minimum Download Strategy is enabled. 
//if you deploy the FolderNavigation.js file in '_layouts' folder use 'FolderNavigation.js' as first paramter. 
//if you deploy the FolderNavigation.js file in '_layouts/folder/subfolder' folder, use 'folder/subfolder/FolderNavigation.js as first parameter' 
//if you are deploying in master page gallery, use '/_catalogs/masterpage/FolderNavigation.js' as first parameter 
RegisterModuleInit('/_catalogs/masterpage/FolderNavigation.js', folderNavigation); 
 
//this function get executed in case when Minimum Download Strategy not enabled. 
folderNavigation(); 

Let me explain the code briefly:

  • The method ‘replaceQueryStringAndGet’ is used to replace a query string parameter with a new value. For example, if you have the URL ‘http://abc.com?key=value&name=sohel’ and you would like to replace the query string parameter ‘key’ with the value ‘New Value’, you can call the method like this:

    replaceQueryStringAndGet("http://abc.com?key=value&name=sohel", "key", "New Value")

  • The function folderNavigation has three inner functions. The function ‘onPostRender’ is bound to the rendering context’s OnPostRender event. It first checks that the list view’s root folder is not null and that the root folder URL is not the list URL (which would mean the user is browsing the list’s/library’s root). The method then splits the render context’s folder path and creates navigation items as shown below:

    var item = { title: title, url: lastItem ? '' : replaceQueryStringAndGet(document.location.href, 'RootFolder', encodeURIComponent(rootFolderUrl)) };

    As shown above, in the case of the last item (the current folder the user is browsing), the url is empty because we show plain text instead of a link for the current folder.

  • The function ‘RenderItems’ renders the items in the page. This is the place you are most likely to customise: with all navigation items passed to this function, you can render them in your own way. renderCtx.wpq is the unique web part id in the page. As shown below, with a wpq value of ‘WPQ2’ the web part is rendered in a div with id ‘WebPartWPQ2’.

    Image 4: List View WebPart in Firebug

    In ‘RenderItems’ function I’ve added a div just before the webpart div ‘WebPartWPQ2’ to put the folder navigation as shown in the image 1.

  • In the method ‘_registerTemplate’, I’ve registered the template and bound the OnPostRender event.
  • The final piece is RegisterModuleInit. In some examples you will find that the function folderNavigation is executed immediately along with its declaration. However, there’s a problem with Client Side Rendering and Minimal Download Strategy (MDS) working together.
  • To avoid this problem, we register the folderNavigation function with RegisterModuleInit to ensure the script gets executed on MDS-enabled sites. The last line, ‘folderNavigation()’, executes normally on sites where MDS is not enabled.

SharePoint 2013 and CRM 2011 integration. A customer portal approach

A Look At : Federated Authentication

More and more organisations are looking to collaborate with partners and customers in their ecosystem to help them achieve mutual goals. SharePoint is a great tool for enabling this collaboration but many organisations are reluctant to create and maintain identities for users from other organisations just to allow access to their own SharePoint farm. It’s hardly surprising; identity management is complex and expensive.

You have to pay for servers to host your identity provider (Microsoft Active Directory if you are using Windows); you have to keep it secure; you have to back it up and ensure that it is always available, and you have to pay for someone to maintain and administer it. Identity management becomes even more complicated when your organisation wants to give external users access to SharePoint; you have to ensure that they can only access SharePoint and can’t gain access to other systems; you have to buy additional client access licenses (CALs) for each external user because by adding them to your Active Directory you are making them an internal user.

 


Microsoft, Google and others all offer identity providers (also known as IdPs or claims providers) that are free to use, and by federating with a third party IdP you shift the ownership and management of identities on to them. You may even find that the partner or customer you are looking to collaborate with may offer their own IdP (most likely Active Directory Federation Services if they themselves run Windows). Of course, you have to trust whichever IdP you choose; they will be responsible for authenticating the user instead of you so you must be confident that they will do a good job. You must also check what pieces of information about a user (also known as claims; for example, name, email address etc) IdPs offer to ensure they can tell you enough about a user for your purposes as they don’t all offer the same.

Having introduced support for federated authentication in SharePoint 2010, Microsoft paved the way for us to federate with third party IdPs within SharePoint itself. Unfortunately, configuring SharePoint to do this is fiddly and there is no user interface for doing so (a task made more onerous if you want to federate with multiple IdPs or tweak the configuration at a later date). Fortunately Microsoft has also introduced Azure Access Control Services (ACS) which makes the process of federating with one or more IdPs simple and easy to maintain. ACS is a cloud-based service that enables you to manage the IdPs used by your applications. The following diagram illustrates, at a high-level, the components of ACS.

An ACS namespace is a container for mappings between IdPs and one or more relying parties (the applications that want to use ACS), in our case SharePoint. Associated with each mapping is a rule group which defines how the relying party handles the individual claims associated with an identity. Using rule groups you can choose to hide or expose certain claims to specific relying parties within the namespace.

So by creating an ACS namespace you are in effect creating your own unique IdP that encapsulates the configuration for federating with one or more additional IdPs. A key point to remember is that your ACS namespace can be used by other applications (relying parties) that want to share the same identities, not just SharePoint. 

Once your ACS namespace has been created you need to configure SharePoint to trust it, which most of the time will be a one off task and from that point on you can manage and maintain the IdPs you support from within ACS. The following diagram illustrates, at a high-level, the typical architecture for integrating SharePoint and ACS.

 

In the scenario above the SharePoint web application is using two different claims providers (they are referred to as claims providers in SharePoint rather than IdPs). One is for internal users and trusts an internal AD domain and another is for external users and trusts an ACS namespace.

When a user tries to access a site within the web application they will get the default SharePoint Sign In page asking them which provider they want to use.

This page can be customised and branded as required. If the user selects Windows Authentication they will get the standard authentication dialog. If they select Azure Provider (or whatever you happen to have called your claims provider) they will be redirected to your ACS Sign In page.

Again this page can be customised and branded as required. By clicking on one of the IdPs the user will be redirected to the appropriate Sign In page. Once they have been successfully authenticated by the IdP they will be redirected back to SharePoint.

 

Conclusion

By integrating SharePoint with ACS you can simplify the process of giving external users access to SharePoint. It could also save you money in licence fees and administration costs[i].

An important point to bear in mind when planning federated authentication for SharePoint is that in order for Search to be able to index content within SharePoint, you must enable Windows authentication on at least one zone within your web application. Also, if you use a reverse proxy to perform authentication, such as Microsoft Threat Management Gateway, before allowing traffic to hit your SharePoint servers, you will need to disable the authentication checks.

 

[i] The licensing model for external users differs between SharePoint 2010 and SharePoint 2013. With SharePoint 2010, if you expose your farm to external users, either anonymously or not, you have to purchase a separate licence for each server. The license covers you for any number of external users and you do not need to buy a CAL for each user. With SharePoint 2013, Microsoft did away with the server license for external users and you still don’t need to buy CALs for the external users.

A Look At : The importance of people in a SharePoint project


As with all other sizeable new business software implementations, a successful SharePoint deployment is one that is well thought-out and carefully managed every step of the way.

However in one key respect a SharePoint deployment is different from most others in the way it should be carried out. Whereas the majority of ERP solutions are very rigid in terms of their functionality and in the nature of the business problems they solve, SharePoint is far more of a jack-of-all-trades type of system. It’s a solution that typically spreads its tentacles across several areas within an organisation, and which has several people putting in their two cents worth about what functions SharePoint should be geared to perform.

So what is the best approach? And what makes for a good SharePoint project manager?

From my experience with SharePoint implementations, I would say first and foremost that a SharePoint deployment should be approached from a business perspective, rather than from a strictly technology standpoint. A SharePoint project delivered within the allotted time and budget can still fail if it’s executed without the broader business objectives in mind. If the project manager understands, and can effectively demonstrate, how SharePoint can solve the organisation’s real-world business problems and increase business value, SharePoint will be a welcome addition to the organisation’s software arsenal.

Also crucial is an understanding of people. An effective SharePoint project manager understands the concerns, limitations and capabilities of those who will be using the solution once it’s implemented. No matter how technically well-executed your SharePoint implementation is, it will amount to little if hardly anyone’s using the system. The objective here is to maximise user adoption and engagement, and this can be achieved by maximising user involvement in the deployment process.

 

Rather than only talk to managers about SharePoint and what they want from the system, also talk to those below them who will be using the product on a day-to-day basis. This means not only collaborating with, for example, the marketing director but also with the various marketing executives and co-ordinators.

 

It means not only talking with the human resources manager but also with the HR assistant, and so on. By engaging with a wide range of (what will be) SharePoint end-users and getting them involved in the system design process, the rate of sustained user adoption will be a lot higher than it would have been otherwise.

 

An example of user engagement in action concerns a SharePoint implementation I oversaw for an insurance company. The business wanted to improve the tracking of its documentation using a SharePoint-based records management system. Essentially the system was deployed to enhance the management and flow of health insurance and other key documentation within the organisation to ensure that the company meets its compliance obligations.

 

The project was a great success, largely because we ensured that there was a high level of end-user input right from the start. We got all the relevant managers and staff involved from the outset, we began training people on SharePoint early on and we made sure the change management part of the process was well-covered.

 

Also, and very importantly, the business value of the project was sharply defined and clearly explained from the get-go. As everyone set about making the transition to a SharePoint-driven system, they knew why it was important to the company and why it was going to be good for them too.

By contrast a follow-up SharePoint project for the company some months later was not as successful. Why? Because with that project, in which the company abandoned its existing intranet and developed a new one, the business benefits were poorly defined and were not effectively communicated to stakeholders. That particular implementation was driven by the company’s IT department which approached the project from a technical, rather than a business, perspective. User buy-in was not sought and was not achieved.

 

When the SharePoint solution went live hardly anyone used it because they didn’t see why they should. No-one had educated them on that. That’s the danger when you don’t engage all your prospective system end-users throughout every phase of a SharePoint implementation project.

As can be seen, while it is of course critical that the technical necessities of a SharePoint deployment be met, that’s only part of the picture. Without people using the system, or with people using the system to less than its maximum potential, the return on your SharePoint investment will never materialize.

Comprehensive engagement with all stakeholders, that’s where the other part of the picture comes in. That’s where a return on investment, an investment of time and effort, will most assuredly be achieved.

How To : 8 Steps for a successful SharePoint Change Management

As with virtually any other significant IT implementation project, a SharePoint deployment is as dependent on people as it is on technology for its success. If your system end users are in fact not using the system, or are not using it correctly or to its full potential, you will never achieve that all-important return on investment.



One hundred percent adoption by users who are proficient with SharePoint and are committed to gaining the greatest value from the software should be a key objective for all SharePoint project leaders. To bring this about it is crucial to develop and execute an effective change management strategy as a key component of your SharePoint implementation.

At Professional Advantage we have had great success with our SharePoint clients by informally following a change management process that was thought up by Dr John Kotter, an American professor whose 1996 book, Leading Change, is still highly influential in the world of change management theory.

In his book Dr Kotter puts forward an eight step process that change leaders can follow to avoid failure and adjust successfully to change. These steps, which can be usefully applied to a SharePoint deployment project, are:

 

1. Create a sense of urgency

No SharePoint project will get off the ground, let alone become successful, if there is no buy-in at the executive level. Here it is important to put a strong case forward as to why the move to SharePoint is very much in the organisation’s best interests. Begin by doing lots of research. Examine, for example, the competitive disadvantages that will be suffered if no change is made. Also highlight those business functions and processes within the organisation that could be significantly improved with SharePoint. Tie the benefits of SharePoint to the organisation’s broad business goals and ongoing strategic objectives. Explain as persuasively as possible why the current situation is unsustainable and why, when it comes to moving to SharePoint, it’s a case of ‘the sooner the better.’ The stronger your business case for a SharePoint implementation, the more likely it is that it will get the green light.

 

2. Create a guiding coalition

Once you’ve received the go-ahead for the SharePoint deployment the next step is to put together a coalition of people with the power and commitment to lead the change. This team will ideally be comprised of a wide variety of motivated individuals: department managers, technical experts and those at the coalface who will be using SharePoint on a day-to-day basis should all form part of the coalition. They should also be people who have grasped the urgency of the task ahead, who understand the business goals that will be achieved with a successful implementation and who recognise that 100% user adoption is a central goal of the project.

Crucial to the success of the coalition’s efforts is that its members all work well together. As the project evolves these change-drivers will be sharing ideas, making decisions and identifying and solving problems. Team members must be able to trust each other and collaborate effectively; if this does not occur the project will almost certainly stall.

 

3. Develop a change vision

By developing a clear vision for the project you give those involved a direction to follow and a goal to achieve. Ideally the vision will be easy to comprehend, achievable, flexible and something that all stakeholders can get enthusiastic about.

While the vision will by definition be broad, the strategies that underpin it will be specific. Priorities for the project should be defined and acted upon, with priority given to ‘low hanging fruit’, ie tasks that can be easily achieved and which will deliver visible, measurable and meaningful change within the organisation. This approach will add momentum to the project by enabling stakeholders to gain a real-world perspective on the changes that are in progress and why they’re good both for the organisation and for individual SharePoint users.

 

4. Communicate the vision for buy-in

Communicating your vision and promoting the behavioural changes that will drive it are critical for a successful SharePoint deployment. This step requires a top-down communications strategy that is consistent, creative, inspiring and ongoing.

At Professional Advantage our communication strategy forms part of our SharePoint adoption plan and includes a variety of tactics designed to get staff using SharePoint, and using it properly. In the past such tactics have included SharePoint launch parties, lunch sessions, system design competitions amongst staff, social media, blogs and the putting up of posters around the office promoting the use of SharePoint. The objective here is, of course, to get users educated and engaged. The more creative you are, the better. And always keep in mind that user adoption will likely be low unless you can answer the ‘What’s in it for me’ question.

 

5. Empower broad-based action

To achieve the highest possible level of SharePoint user adoption it’s best to remove any barriers that might impede that objective. This particularly applies to the laggards, ie those who are most resistant to change and least likely to make full use of the system.

Typically this will involve removing software and other technologies that make it easy for workers to continue doing things the old way. Too often organisations include this as an afterthought, resulting in smaller and slower user adoption. Here it is important to plan from the beginning, anticipate what systems will be made redundant (or scaled down) and schedule that in to the SharePoint implementation plan.

Also important here is encouragement from above. Supported by proper ongoing training, those who will be using SharePoint need to be encouraged to step out of their comfort zone and embrace the new system.

 

6. Celebrate short-term wins

Short-term wins are essential to the success of your SharePoint deployments, as are the active celebration of these wins when they occur. The transition to a SharePoint environment is a long-term process and momentum must be maintained every step of the way. Perhaps, as a result of SharePoint, a new level of intra-office collaboration has been achieved, or the organisation has experienced dramatic time savings with particular processes, or has achieved new standards of compliance. Whatever the win, the broadcasting of it should form part of the SharePoint communications plan. If people can see how and why SharePoint is working, they will be more likely to embrace the system and, in so doing, contribute to the achievement of the organisation’s business goals.

 

7. Consolidate gains and generate more change

You’ve scored some wins and people are now comfortable using SharePoint. While that is a wonderful thing, the danger at this stage is complacency. Rather than take your foot off the accelerator it’s important to build on what’s been achieved and pursue larger, more ambitious objectives. To fully ingrain SharePoint into your organisation’s culture (and to avoid regression) ramp things up with new projects and initiatives.

 

8. Making it stick

To fully embed SharePoint into your organisation’s culture and business practices everyone needs to be on board. Just as during a life-threatening cyclone there are always some residents who refuse to heed advice to leave town, with a SharePoint deployment there will always be some who are unwilling to move. Here it is important to reinforce, and continue to celebrate, the victories that have been achieved and communicate how important it is that everyone adopt the system.

As the SharePoint project continues to evolve so too will its vision and purpose. With the right planning and execution, and with the right leadership, people will, over time, forget the old ways of doing things and fully embrace the new.

Features from SharePoint 2010 Integration with SAP BusinessObjects BI 4.0

One of the core concepts of Business Connectivity Services (BCS) for SharePoint 2010 is the external content type. External content types are reusable metadata descriptions of connectivity information and behaviours (stereotyped operations) applied to external data. SharePoint offers developers several ways to create external content types and integrate them into the platform.

 

The SharePoint Designer 2010, for instance, allows you to create and manage external content types that are stored in supported external systems. Such an external system could be SQL Server, WCF Data Service, or a .NET Assembly Connector.

This article shows you how to create an external content type for SharePoint named Customer based on given SAP customer data. The definition of the content type will be provided as a .NET assembly, and the data are displayed in an external list in SharePoint.

The SAP customer data are retrieved from the function module SD_RFC_CUSTOMER_GET. In general, function modules in a SAP R/3 system are comparable to public, static C# class methods and can be accessed from outside SAP via RFC (Remote Function Call). Fortunately, we do not need to program RFC calls manually. We will use the very handy ERPConnect library from Theobald Software. The library includes a LINQ to SAP provider and designer that makes our lives easier.

.NET Assembly Connector for SAP

The first step in providing a custom connector for SAP is to create a SharePoint project with the SharePoint 2010 Developer Tools for Visual Studio 2010. Those tools are part of Visual Studio 2010. We will use the Business Data Connectivity Model project template to create our project:

After defining the Visual Studio solution name and clicking the OK button, the project wizard will ask what kind of SharePoint 2010 solution you want to create. The solution must be deployed as a farm solution, not as a sandboxed solution. Visual Studio now creates a new SharePoint project with a default BDC model (BdcModel1). You can also create an empty SharePoint project and add a Business Data Connectivity Model project item manually afterwards. This will also generate a new node in the Visual Studio Solution Explorer called BdcModel1. The node contains a couple of project files: the BDC model file (file extension .bdcm), and the Entity1.cs and EntityService.cs class files.

Next, we add a LINQ to SAP file to handle the SAP data access logic by selecting the LINQ to ERP item from the Add New Item dialog in Visual Studio. This will add a file called LINQtoERP1.erp to our project. The LINQ to SAP provider is internally called LINQ to ERP. Double click LINQtoERP1.erp to open the designer. Now, drag the Function object from the designer toolbox onto the design surface. This will open the SAP connection dialog since no connection data has been defined so far:

Enter the SAP connection data and your credentials. Click the Test Connection button to test the connectivity. If you could successfully connect to your SAP system, click the OK button to open the function module search dialog. Now search for SD_RFC_CUSTOMER_GET, then select the found item, and click OK to open the RFC Function Module /BAPI dialog:

[Screenshot: SP2010SAPToBCS/BCS12.png]

The dialog provides you the option to define the method name and parameters you want to use in your SAP context class. The context class is automatically generated by the LINQ to SAP designer including all SAP objects defined. Those objects are either C# (or VB.NET) class methods and/or additional object classes used by the methods.

For our project, we need to select the export parameters KUNNR and NAME1 by clicking the checkboxes in the Pass column. These two parameters become our input parameters in the generated context class method named SD_RFC_CUSTOMER_GET. We also need to return the customer list for the given input selection. Therefore, we select the table parameter CUSTOMER_T on the Tables tab and change the structure name to Customer. Then, click the OK button on the dialog, and the new objects get added to the designer surface.

IMPORTANT: The flag “Create Objects Outside Of Context Class” must be set to TRUE in the property editor of the LINQ designer, otherwise LINQ to SAP generates the Customer class as a nested class of the SAP context class. This feature and flag are only available in LINQ to SAP for Visual Studio 2010.

The LINQ designer has also automatically generated a class called Customer within the LINQtoERP1.Designer.cs file. This class will become our BDC model entity or external content type. But first, we need to adjust and rename our BDC model that was created by default from Visual Studio. Currently, the BDC model looks like this:

Rename the BdcModel1 node and file to CustomerModel. Since we already have an entity class (Customer), delete the file Entity1.cs and rename the EntityService.cs file to CustomerService.cs. Next, open the CustomerModel file and rename the designer object Entity1 to Customer. Then, change the entity identifier name from Identifier1 to KUNNR. You can also use the BDC Explorer for renaming. The final adjustment result should look as follows:

[Screenshot: SP2010SAPToBCS/BCS4.png]

The last step we need to do in our Visual Studio project is to change the code in the CustomerService class. The BDC model methods ReadItem and ReadList must be implemented using the automatically generated LINQ to SAP code. First of all, take a look at the code:

[Screenshot: SP2010SAPToBCS/BCS6.png]

As you can see, we basically have just a few lines of code. All of the SAP data access logic is encapsulated within the SAP context class (see the LINQtoERP1.Designer.cs file). The CustomerService class just implements a static constructor to set the ERPConnect license key and to initialize the static variable _sc with the SAP credentials as well as the two BDC model methods.

The ReadItem method, BCS stereotyped operation SpecificFinder, is called by BCS to fetch a specific item defined by the identifier KUNNR. In this case, we just call the SD_RFC_CUSTOMER_GET context method with the passed identifier (variable id) and return the first customer object we get from SAP.

The ReadList method, BCS stereotyped operation Finder, is called by BCS to return all entities. In this case, we just return all customer objects the SD_RFC_CUSTOMER_GET context method returns. The returned result is already of type IEnumerable<Customer>.
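
Since the article shows this code only as a screenshot, here is a minimal sketch of what the CustomerService class could look like. The context class name (SAPContext), its constructor arguments and the way the license key is set are assumptions based on the description above, not the exact code generated by the LINQ to SAP designer:

using System.Collections.Generic;
using System.Linq;

public class CustomerService
{
 // SAP context class generated by LINQ to SAP (name and constructor are assumed here)
 private static readonly SAPContext _sc;

 static CustomerService()
 {
  // Set the ERPConnect license key and the SAP connection data here;
  // all values below are placeholders.
  _sc = new SAPContext("sap-host", "00", "800", "user", "password", "EN");
 }

 // SpecificFinder: return the single customer identified by KUNNR
 public static Customer ReadItem(string id)
 {
  return _sc.SD_RFC_CUSTOMER_GET(id, string.Empty).FirstOrDefault();
 }

 // Finder: return all customers for the given selection
 public static IEnumerable<Customer> ReadList()
 {
  return _sc.SD_RFC_CUSTOMER_GET(string.Empty, "*");
 }
}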

The final step is to deploy the SharePoint solution. Right-click on the project node in Visual Studio Solution Explorer and select Deploy. This will install and deploy the SharePoint solution on the server. You can also debug your code by just setting a breakpoint in the CustomerService class and executing the project with F5.

That’s all we have to do!

Now, start the SharePoint Central Administration panel and follow the link “Manage Service Applications”, or navigate directly to the URL http://<SERVERNAME>/_admin/ServiceApplications.aspx. Click on Business Data Connectivity Service to show all the available external content types:

On this page, we find our deployed BDC model including the Customer entity. You can click on the name to retrieve more details about the entity. Right now, there is just one issue open. We need to set permissions!

Mark the checkbox for our entity and click on Set Object Permissions in the Ribbon menu bar. Now, define the permissions for the users you want to allow to access the entity, and click the OK button. In the screen shown above, the user administrator has all the permissions possible.

In the next and final step, we will create an external list based on our entity. To do this, we open SharePoint Designer 2010 and connect to the SharePoint website.

Click on External Content Types in the Site Objects panel to display all the content types (see above). Double click on the Customer entity to open the details. SharePoint Designer reads all the information made available by BCS.

In order to create an external list for our entity, click on Create Lists & Form on the Ribbon menu bar (see screenshot below) and enter CustomerList as the name for the external list.

OK, now we are done!

Open the list, and you should get the following result:

The external list shows all the fields defined for our entity, even though the Customer class automatically generated by LINQ to SAP has more than those four fields. This means you can display just a subset of the information available for your entity.

Another option is to select only the required fields within the LINQ to SAP designer. With the LINQ designer, you can access more than just SAP function modules: you can also integrate other SAP objects, like tables, BW cubes, SAP Query, or IDocs. A demo version of the ERPConnect library can be downloaded from the Theobald Software homepage.

If you click the associated link of one of the customer numbers in the column KUNNR (see screenshot above), SharePoint will open the details view:

[Screenshot: SP2010SAPToBCS/BCS10.png]

 

 

How To : A library to create .mht files (available on request)

There are a number of ways to do this, including hosting Word or Excel on the Web Server and dealing with COM Interop issues, or purchasing third-party MIME encoding libraries, some of which sell for $250.00 or more. But there is no native .NET solution. So, being the curious soul that I am, I decided to investigate a bit and see what I could come up with. Internet Explorer offers a File / Save As option to save a web page as “Web Archive, single file (*.mht)”.


What this does is create an RFC-compliant multipart MIME message. Resources such as images are serialized to their Base64 inline encoding representations and each resource is demarcated with the standard multipart MIME header breaks. Internet Explorer, Word, Excel and most newsreader programs all understand this format. The format, if saved with the file extension “.eml”, will come up as a web page inside Outlook Express; if saved with “.mht”, it will come up in Internet Explorer when the file is double-clicked out of Windows Explorer, and — what many do not know — if saved with a “*.doc” extension, it will load in MS Word, each with all the images intact, and in the case of the EML and MHT formats, with all of the hyperlinks fully functioning. The primary advantage of the format is, of course, that all the resources can be consolidated into a single file, making distribution and archiving much easier — including database storage in an NVarchar or NText type field.

 

System.Web.Mail, which .NET provides as a convenient wrapper around the CDO for Windows COM library, offers only a subset of the functionality exposed by the CDO library, and multipart MIME encoding is not a part of that functionality. However, through the wonders of COM Interop, we can create our own COM reference to CDO in the Visual Studio IDE, allowing it to generate a Runtime Callable Wrapper, and help ourselves to the entire rich set of functionality of CDO as we see fit.

 

One method in the CDO library that immediately came to my notice was the CreateMHTMLBody method. That’s MHTMLBody, meaning “Multipurpose Internet Mail Extension HTML (MHTML) Body”. Well, when I saw that, my eyes lit up like the LEDs on a 32-way Unisys box! This is a method on the CDO Message class; the method accepts a URI to the requested resource, along with some enumerations, and creates a multipart MIME-encoded email message out of the requested URI responses — including images, css and script — in one fell swoop.

 

“Ah”, you say, “How convenient!” Yes, and not only that, but we also get a free “multipart COM Interop Baggage” reference to the ADODB.Stream object. By simply calling the GetStream method on the Message class, and then using the Stream’s SaveToFile method, we can grab any resource including images, javascript, css and everything else (except video) and save it to a single MHT Web Archive file just as if we had chosen the “Save As” option in Internet Explorer.

 

If we choose not to save the file, but instead want to get back the stream contents, no problem. We just call Stream.ReadText(Stream.Size) and it returns a string containing the entire MHT-encoded content. At that point we can do whatever we want with it – set a content header and Response.Write the content to the browser, for instance – or whatever.

 

For example, when we get back our “MHT” string, we can write the following code:

Response.ContentType = "application/msword";
Response.AddHeader("Content-Disposition", "attachment;filename=NAME.doc");
Response.Write(myDataString);

 

— and the browser will dutifully offer to save the file as a Word Document. It will still be Multipart MIME encoded, but the .doc extension on the filename allows Word to load it, and Word is smart enough to be able to parse and render the file very nicely. “Ah”, you are saying, “this is nice, and so is the price!”. Yup!

And, if you are serving this MIME-encoded file from out of your database, for example, and you would like it to be able to be displayed in the browser, just change the “NAME.doc” to “NAME.MHT”, and don’t set a content-type header. Internet Explorer will prompt the user to either save or open the file. If they choose “open”, it will be saved to the IE Temporary files and open up in the browser just as if they had loaded it from their local file system.
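
For instance, a minimal sketch of that database scenario might look like this; LoadMhtStringFromDatabase is a hypothetical helper standing in for your own data-access code:

protected void SendMhtToBrowser()
{
 // LoadMhtStringFromDatabase() is a placeholder for your own data-access call
 string mht = LoadMhtStringFromDatabase();
 // Note: no Content-Type header is set, per the approach described above
 Response.AddHeader("Content-Disposition", "attachment;filename=NAME.MHT");
 Response.Write(mht);
 Response.End();
}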

 

So, to answer a couple of questions that came up recently: yes — you can use this method to MHTML-encode any web page – even one that is dynamically generated, as with a report – provided it has a URL, and save the MIME-encoded content as a string in either an NVarchar or NText column in your database. You can then bring this string back out and send it to the browser, images, css, javascript and all.

Now here is the code for a small, very basic “Converter” class I’ve written to take advantage of the two scenarios specified above. Bear in mind, there is much more available in CDO, but I leave this wondrous trail of ecstatic discovery to your whims of fancy:

using System;
using System.Web;
using CDO;
using ADODB;
using System.Text;
namespace PAB.Web.Utils
{
 public class MIMEConverter
 {
  //private ctor as our methods are all static here
  private MIMEConverter()
  {
   
  }   
  public static bool SaveWebPageToMHTFile( string url, string filePath)
  {
   bool result=false;
   CDO.Message  msg = new CDO.MessageClass(); 
   ADODB.Stream  stm=null ;
   try
   {
    msg.MimeFormatted =true;   
    msg.CreateMHTMLBody(url,CDO.CdoMHTMLFlags.cdoSuppressNone, "" ,"" );
    stm = msg.GetStream();
    stm.SaveToFile(filePath, ADODB.SaveOptionsEnum.adSaveCreateOverWrite);
    msg = null;
    stm.Close();
    result = true;
   }
   catch { throw; }
   finally
   {
    //cleanup here
   }
   return result;
  }

  public static string ConvertWebPageToMHTString( string url )
  {
   string data = String.Empty;
   CDO.Message msg = new CDO.MessageClass();
   ADODB.Stream stm = null;
   try
   {
    msg.MimeFormatted = true;
    msg.CreateMHTMLBody(url, CDO.CdoMHTMLFlags.cdoSuppressNone, "", "");
    stm = msg.GetStream();
    data = stm.ReadText(stm.Size);
   }
   catch { throw; }
   finally
   {
    //cleanup here
   }
   return data;
  }
 }
}

 

NOTE: When using this type of COM Interop from an ASP.NET web page, it is important to remember that you must set the AspCompat="true" directive in the Page declaration or you will be very disappointed at the results! This forces the ASP.NET page to run in the STA threading model, which permits “classic ASP” style COM calls. There is, of course, a significant performance penalty incurred, but realistically, this type of operation would only be performed upon user request and not on every page request.
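
For example, a page directive along the lines of <%@ Page Language="C#" AspCompat="true" %> (other attributes omitted) is enough to switch the page into that mode.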

The downloadable zip file below contains the entire class library and a web solution that will exercise both methods when you fill in a valid URI with protocol, and a valid file path and filename for saving on the server. Unzip this to a folder that you have named “ConvertToMHT” and then mark the folder as an IIS Application so that your request such as “http://localhost/ConvertToMHT/WebForm1.aspx” will function correctly. You can then load the Solution file and it should work “out of the box”. And, don’t forget – if you have an ASP.NET web application that wants to write a file to the file system on the server, it must be running under an identity that has been granted this permission.

How To : Use JSON and SAP NetWeaver together

Background

In this example, SAP is used as the backend data source, and the NWGW (NetWeaver Gateway) adapter exposes it so it can be consumed from a .NET client in OData format.

Since the NWGW component is hosted on premise and our .NET client is hosted in Azure, we are consuming this data from Azure through the Service Bus relay. While transferring data from on premise to Azure over the SB relay, we were facing performance issues for a single user with large volumes of data, as well as with relatively small data sets for concurrent users. So I did a POC to improve performance by consuming the OData service in JSON format.

What I Did?

I’ve created a simple WCF Data Service which has no underlying data source connectivity. In this service, when the context is initialized, a list of text messages is generated and exposed as OData.

Here is that simple service code:

[Serializable]
public class Message
{
public int ID { get; set; }
public string MessageText { get; set; }
}
public class MessageService
{
List<Message> _messages = new List<Message>();
public MessageService()
{
for (int i = 0; i < 100; i++)
{
Message msg = new Message
{
ID = i,
MessageText = string.Format("My Message No. {0}", i)
};
_messages.Add(msg);

}
}
public IQueryable<Message> Messages
{
get
{
return _messages.AsQueryable<Message>();
}
}
}
[ServiceBehavior(IncludeExceptionDetailInFaults = true)]
public class WcfDataService1 : DataService<MessageService>
{
// This method is called only once to initialize service-wide policies.
public static void InitializeService(DataServiceConfiguration config)
{
// TODO: set rules to indicate which entity sets
// and service operations are visible, updatable, etc.
// Examples:
config.SetEntitySetAccessRule("Messages", EntitySetRights.AllRead);
config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V3;
}
}
I exposed one endpoint to the Azure Service Bus so that the client can consume this service through the SB relay endpoint. After hosting the service, I’m able to fetch data with a simple OData query from the browser.

I’m also able to fetch the data in JSON format.
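
For example (the service path below is the one assumed in the client code later in this post), a browser request to https://abc.servicebus.windows.net/SimpleService/WcfDataService1/Messages returns the Atom/XML feed, while appending ?$format=json to the same URL (supported by WCF Data Services 5.1 and later) returns the same list as JSON.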

After that, I create a console client application and consume the service from there.

Sample Client Code

class Program
{
static void Main(string[] args)
{
List<Thread> lst = new List<Thread>();

for (int i = 0; i < 100; i++)
{
Thread person = new Thread(new ThreadStart(MyClass.JsonInvokation));
person.Name = string.Format("person{0}", i);
lst.Add(person);
Console.WriteLine("before start of {0}", person.Name);
person.Start();
//Console.WriteLine("{0} started", person.Name);
}
Console.ReadKey();
foreach (var item in lst)
{
item.Abort();
}
}
}

public class MyClass
{
public static void JsonInvokation()
{
string personName = Thread.CurrentThread.Name;
Stopwatch watch = new Stopwatch();
watch.Start();
try
{
SimpleService.MessageService svcJson =
new SimpleService.MessageService(new Uri
("https://abc.servicebus.windows.net/SimpleService/WcfDataService1"));
svcJson.SendingRequest += svc_SendingRequest;
svcJson.Format.UseJson();
var jdata = svcJson.Messages.ToList();

watch.Stop();
Console.WriteLine("Person: {0} – JsonTime First Call time: {1}",
personName, watch.ElapsedMilliseconds);

for (int i = 1; i <= 10; i++)
{
watch.Reset(); watch.Start();
jdata = svcJson.Messages.ToList();
watch.Stop();
Console.WriteLine("Person: {0} – Json Call {1} time: {2}", personName, 1 + i, watch.ElapsedMilliseconds);
}

Console.WriteLine(jdata.Count);
}
catch (Exception ex)
{
Console.WriteLine(personName + ": " + ex.Message);
}
Thread.Sleep(100);
}

public static void AtomInvokation()
{
string personName = Thread.CurrentThread.Name;

try
{
Stopwatch watch = new Stopwatch();
watch.Start();
SimpleService.MessageService svc =
new SimpleService.MessageService(new Uri
("https://abc.servicebus.windows.net/SimpleService/WcfDataService1"));
svc.SendingRequest += svc_SendingRequest;
var data = svc.Messages.ToList();

watch.Stop();
Console.WriteLine("Person: {0} – XmlTime First Call time: {1}",
personName, watch.ElapsedMilliseconds);

for (int i = 1; i <= 10; i++)
{
watch.Reset(); watch.Start();
data = svc.Messages.ToList();
watch.Stop();
Console.WriteLine("Person: {0} – Xml Call {1} time: {2}", personName, 1 + i, watch.ElapsedMilliseconds);
}

Console.WriteLine(data.Count);
}
catch (Exception ex)
{
Console.WriteLine(personName + ": " + ex.Message);
}
Thread.Sleep(100);
}
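
// NOTE: the code above wires up svc_SendingRequest, but the article never shows
// its body. The minimal placeholder below is an assumption (it simply logs each
// outgoing request); replace it with whatever header or tracing logic you need.
public static void svc_SendingRequest(object sender,
System.Data.Services.Client.SendingRequestEventArgs e)
{
Console.WriteLine("{0} requesting {1}", Thread.CurrentThread.Name, e.Request.RequestUri);
}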
}

 

What I Test After That
I tested two separate scenarios:

Scenario I: Single user with small and large volume of data
I measured the data transfer time periodically in XML format and then in JSON format. You might notice that the first call is printed separately in each screen shot, as it takes additional time to connect to the SB endpoint; the secret key authentication happens during that first call.

Small data set (array size 10): consume in XML format.

 

Consume in JSON format:

 

For a small set of data, the JSON and XML response times over the Service Bus relay are almost the same.

Consuming Large volume of data (Array Size 100)

 

Here the XML message size is around 51 KB. Now I’m going to consume the same list of data (Array size 100) in JSON format.

 

So from the above test scenario, it is very clear that the JSON response time is much faster than the XML response time, and the reason for that is message size. In this test, when I get the list of 100 records, the XML message size is 51.2 KB but the JSON message size is only 4.4 KB.

Scenario II: 100 Concurrent user with large volume of data (array size 100)
In this concurrent user load test, I haven’t done any service throttling or max concurrent connection configuration.

 

In the above screen shot, you will find some timeout errors in the XML response, which happen due to the high response time over the relay. But when I execute the same test with the JSON response, the response time is quite stable and faster than the XML response, and I don’t get any timeouts.

 

How Easy to Use UseJson()
If you are using WCF Data Services 5.3 or above and VS2012 Update 3, then to consume the JSON structure from the client you have to instantiate the proxy/context with .Format.UseJson().

Here you don’t need to load the Edmx structure separately by writing any custom code. .NET CodeGen will generate that code when you add the service reference.

 

But if that code is not generated from your environment, then you have to write a few lines of code to load the edmx and use it as .Format.UseJson(LoadEdmx());

Sample Code for Loading Edmx

public static IEdmModel LoadEdmx(string srvName)
{
string executionPath = Directory.GetCurrentDirectory();
DirectoryInfo di = new DirectoryInfo(executionPath).Parent;
var parent1 = di.Parent;
var srv = parent1.GetDirectories("Service References\\" +
srvName)[0].GetFiles("service.edmx")[0].FullName;

XmlDocument doc = new XmlDocument();
doc.Load(srv);
var xmlreader = XmlReader.Create(new StringReader(doc.DocumentElement.OuterXml));

IEdmModel edmModel = EdmxReader.Parse(xmlreader);
return edmModel;
}
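
A minimal usage sketch (the service reference name and URL are assumed from the client code earlier in this post, and LoadEdmx is the helper above) would then be:

SimpleService.MessageService ctx =
new SimpleService.MessageService(new Uri
("https://abc.servicebus.windows.net/SimpleService/WcfDataService1"));
ctx.Format.UseJson(LoadEdmx("SimpleService"));
var messages = ctx.Messages.ToList();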

DRY Architecture, Layered Architecture, Domain Driven Design and a Framework to build great Single-Page Web Applications – Boilerplate Part 1

DRY – Don’t Repeat Yourself! – is one of the main principles a good developer follows while developing software. We try to implement it everywhere, from simple methods to classes and modules. What about developing a new web-based application? We, software developers, have similar needs when developing enterprise web applications.

Enterprise web applications need login pages, user/role management infrastructure, user/application setting management, localization and so on. Also, high-quality, large-scale software implements best practices such as Layered Architecture, Domain Driven Design (DDD), Dependency Injection (DI). Also, we use tools for Object-Relational Mapping (ORM), Database Migrations, Logging… etc. When it comes to the User Interface (UI), it’s not much different.

Starting a new enterprise web application is hard work. Since all applications need some common tasks, we’re repeating ourselves. Many companies develop their own Application Frameworks or Libraries for such common tasks so as not to re-develop the same things. Others copy parts of existing applications and prepare a starting point for their new application. The first approach is pretty good if your company is big enough and has time to develop such a framework.

As a software architect, I also developed such a framework in my company. But there is a point that bothers me: many companies repeat the same tasks. What if we could share more and repeat less? What if the DRY principle were implemented universally instead of per project or per company? It sounds utopian, but I think there may be a starting point for that!

What is ASP.NET Boilerplate?

http://www.aspnetboilerplate.com/

ASP.NET Boilerplate [1] is a starting point for new modern web applications using best practices and the most popular tools. It aims to be a solid model, a general-purpose application framework and a project template. What does it do?

  • Server side
    • Based on latest ASP.NET MVC and Web API.
    • Implements Domain Driven Design (Entities, Repositories, Domain Services, Application Services, DTOs, Unit of Work… and so on)
    • Implements Layered Architecture (Domain, Application, Presentation and Infrastructure Layers).
    • Provides an infrastructure to develop reusable and composable modules for large projects.
    • Uses most popular frameworks/libraries as (probably) you’re already using.
    • Provides an infrastructure and makes it easy to use Dependency Injection (uses Castle Windsor as the DI container).
    • Provides a strict model and base classes to use Object-Relational Mapping easily (uses NHibernate, can work with many DBMSs).
    • Implements database migrations (uses FluentMigrator).
    • Includes a simple and flexible localization system.
    • Includes an EventBus for server-side global domain events.
    • Manages exception handling and validation.
    • Creates dynamic Web API layer for application services.
    • Provides base and helper classes to implement some common tasks.
    • Uses convention over configuration principle.
  • Client side
    • Provides two project templates. One is for Single-Page Applications using Durandaljs, the other is for a Multi-Page Application. Both templates use Twitter Bootstrap.
    • Most used libraries are included by default: Knockout.js, Require.js, jQuery and some useful plug-ins.
    • Creates dynamic javascript proxies to call application services (using dynamic Web API layer) easily.
    • Includes unique APIs for some common tasks: showing alerts & notifications, blocking the UI, making AJAX requests.

Besides this common infrastructure, the “Core Module” is being developed. It will provide a role- and permission-based authorization system (implementing the ASP.NET Identity Framework), a settings system and so on.

What ASP.NET Boilerplate is not?

ASP.NET Boilerplate provides an application development model with best practices. It has base classes, interfaces and tools that make it easy to build maintainable, large-scale applications. But…

  • It’s not one of those RAD (Rapid Application Development) tools that provide infrastructure for building applications without coding. Instead, it provides an infrastructure to code using best practices.
  • It’s not a code generation tool. While it has several features that build dynamic code at run-time, it does not generate code.
  • It’s not an all-in-one framework. Instead, it uses well-known tools/libraries for specific tasks (like NHibernate for O/RM, Log4Net for logging, Castle Windsor as DI container).

Getting started

In this article, I’ll show how to develop a Single-Page, responsive web application using ASP.NET Boilerplate (I’ll call it ABP from now on). This sample application is named “Simple Task System” and it consists of two pages: one listing tasks, the other for adding new tasks. A Task can be related to a person and can be completed. The application is localized in two languages. A screenshot of the Task List in the application is shown below:

A screenshot of 'Simple Task System'

Creating empty web application from template

ABP provides two templates to start a new project (even though you can manually create your project and get the ABP packages from NuGet, the template way is much easier). Go to www.aspnetboilerplate.com/Templates to create your application from one of two templates (one for SPA (Single-Page Application) projects, one for MPA (classic, Multi-Page Application) projects):

Creating template from ABP web site

I named my project SimpleTaskSystem and created a SPA project. It downloads the project as a zip file. When I open the zip file, I see a ready-made solution containing assemblies (projects) for each layer of Domain Driven Design:

Project files

The created project’s runtime is .NET Framework 4.5.1, so I advise opening it with Visual Studio 2013. The only prerequisite to be able to run the project is to create a database. The SPA template assumes that you’re using SQL Server 2008 or later, but you can change it easily to another DBMS.

See the connection string in web.config file of the web project:

<add name="MainDb" connectionString="Server=localhost; Database=SimpleTaskSystemDb; Trusted_Connection=True;" />

You can change the connection string here. I don’t change the database name, so I’m creating an empty database, named SimpleTaskSystemDb, in SQL Server:

Empty database

That’s it, your project is ready to run! Open it in VS2013 and press F5:

First run

The template consists of two pages: one for the Home page, the other an About page. It’s localized in English and Turkish. And it’s a Single-Page Application! Try navigating between pages; you’ll see that only the contents change, the navigation menu stays fixed, and all scripts and styles are loaded only once. It’s also responsive; try changing the size of the browser.

Now, I’ll show how to change this application into the Simple Task System application, layer by layer, in the coming part 2.

How To : Use SharePoint Dashboards & MSRS Reports for your Agile Development Life Cycle

The Problem We Solve

Agile BI is not a term many would associate with MSRS Reports and SharePoint Dashboards. While many organizations first turn to the Microsoft BI stack because of its familiarity, stitching together Microsoft’s patchwork of SharePoint, SQL Server, SSAS, MSRS, and Office creates administrative headaches and requires considerable time spent integrating and writing custom code.

This Showcase outlines the ease of accomplishing three of the most fundamental BI tasks with LogiXML technology as compared to MSRS and SharePoint:

  • Building a dashboard with multiple data sources
  • Creating interactive reports that reduce the load on IT by providing users self-service
  • Integrating disparate data sources

Read below to learn how an agile BI methodology can make your life much easier when it comes to dashboards and reports. Don’t feel like reading?

Building a Dashboard with LogiXML vs. MSRS + SharePoint

Microsoft’s only solution for dashboards is to either write your own code from scratch, manipulate SharePoint to serve a purpose for which it wasn’t initially designed, or look to third party apps. Below are some of the limitations to Microsoft’s approach to dashboards:

  • Limited Pre-Built Elements: Microsoft components come with only limited libraries of pre-built elements. In addition to actual development work, you will need to come up with an idea of how everything will work together. This necessitates becoming familiar with best practices in dashboards and reporting.
  • Sophisticated Development Expertise Required: While Microsoft components provide basic capabilities, anything more sophisticated is development resource-intensive and requires you to take on design, execution, and delivery. Any complex report visualizations and logic, such as interactive filters, must be written in code by the developer.
  • Limited Charts and Visualizations: Microsoft has a smaller sub-set of charts and visualization tools. If you want access to the complete library of .NET-capable charts, you’ll still need to OEM another charting solution at additional expense.
  • Lack of Integrated Workflow: Microsoft does not include workflow feature sets out of the box in their BI offering.

LogiXML technology is centered on Logi Studio: an elemental, agile BI design environment which lets you simply choose from hundreds of powerful and configurable pre-built elements. Logi’s pre-built elements equip developers with tools to speed development, as well as the processes and logic required to build and manage BI projects. Below is a screen shot of the Logi Studio while building new dashboards.

[Screenshot: agile-bi.jpg]

Start a free LogiXML trial now.

Logi developers can easily create static or user-customizable dashboards using the Dashboard element. A dashboard is a collection of panels containing Logi reports, which in turn contain table, charts, images, etc. At runtime, the user can customize the dashboard by rearranging these panels on the browser page, by showing or hiding them, and even by changing their contents using adjustable reporting criteria. The data displayed within the panels can be configured, as in any Logi report, to link to other reports, providing drill-down functionality.

 

[Screenshot: logi2.jpg]

The dashboard displayed above has tabs and user customization enabled. The Dashboard element provides customization features, such as drag-and-drop panel positioning, support for built-in parameters the user can access to adjust the panel’s data contents, and a panel selection list that determines which panels will be displayed. AJAX techniques are utilized for web server interactions, allowing selective updates of portions of the dashboard. Dashboard customizations can be saved on an individual-user basis to create a highly personalized view of the data.

The Dashboard Wizard

The ‘Create a Dashboard’ wizard assists developers in creating dashboards by populating the report definition with the necessary dashboard-related elements. You can easily point to any data source by selecting from a variety of DataLayer types, including SQL, StoredProcedures, Web Services, Files, and more. A simple to use drag and drop SQL Query builder is also integrated, to offer a guided approach to constructing queries when connecting to your database.

[Screenshot: logi3.jpg]

Using the Dashboard Element

The Dashboard element is used to create the top level structure for all of your interactive panels within the final output. Under your dashboards, you can optionally add any number of Dashboard Panels, Panel Parameters for dynamic filtering, and even automatic refresh features with AJAX-based refresh timers.

[Screenshot: logi4.jpg]

Changing Appearance Using Themes and Style Sheets

The appearance of a dashboard can be changed easily by assigning a theme to your report. In addition, or as an alternative, you can change dashboard appearance using style. The Dashboard element has its own Cascading Style Sheet (CSS) file containing predefined classes that affect the display colors, font sizes, button labels, and spacing seen when the dashboard is displayed. You can override these classes by adding classes with the same name to your own style sheet file.

See us build a BI app with 3 data sources in under 10 minutes.

Ad Hoc Reporting Creation with LogiXML: Analysis Grid

The Analysis Grid is a managed reporting feature giving end users virtual ad hoc capability. It is an easy to use tool that allows business users to analyze and manipulate data and outputs in multiple and powerful ways.

[Screenshot: logi5.jpg]

Start a free LogiXML trial now.

Create an Analysis Grid by using the “Create Analysis Grid” wizard, or by simply adding the AnalysisGrid element into your definition file. Like the dashboard, data for the Analysis Grid can be accessed from any of the data options, including SQL databases, web sources, or files. You also have the option to launch the interactive query builder wizard for easy, drag-drop, SQL query creation.

The Analysis Grid is composed of three main parts: the data grid itself, i.e. a table of data to be analyzed; various action buttons at the top, allowing the user to perform actions such as create new columns with custom calculations, sort columns, add charts, and perform aggregations; and the ability to export the grid to Excel, CSV, or PDF format.

The Analysis Grid makes it easy to perform what-if analyses through features like filtering. The Grid also makes data-presentation impactful through visualization features including data driven color formatting, inline gauges, and custom formula creation.

Ad Hoc Reporting Creation with Microsoft

While simple ad hoc capabilities, such as enabling the selection of parameters like date ranges, can be accomplished quickly and easily with Microsoft, more sophisticated ad hoc analysis is challenging due to the following shortcomings.

Platform Integration Problems

The Microsoft BI strategy is not unified and is strongly tied to SQL Server. To obtain analysis capabilities, you must build cubes in Analysis Services, which is a separate product with its own, different security architecture. Next, you will need to build reports that talk to SQL Server, also using separate products.

Dashboards require a SharePoint portal which is, again, a separate product with separate requirements and licensing. If you don’t use this, you must completely code your dashboards from scratch. Unfortunately, Microsoft Reporting Services doesn’t play well with Analysis Services or SharePoint since these were built on different technologies.

SharePoint itself offers an out of the box portal and dashboard solution but unfortunately with a number of significant shortcomings. SharePoint was designed as a document management and collaboration tool as opposed to an interactive BI dashboard solution. Therefore, in order to have a dashboard solution optimized for BI, reporting, and interactivity you are faced with two options:

  • Build it yourself using .NET and a combination of third party components
  • Buy a separate third party product

Many IT professionals find these to be rather unappealing options, since they require evaluating a new product or components, and/or a lot of work to build and make sure it integrates with the rest of the Microsoft stack.

Additionally, while SQL Server and other products support different types of security architectures, Analysis Services only has support for using integrated Windows NT security models to access cubes and therefore creates integration challenges.

Moreover, for client/ad hoc tools, you need Report Writer, a desktop product, or Excel – another desktop application. In addition to requiring separate licenses, these products don’t even talk to one another in the same ways, as they were built by different companies and subsequently acquired by Microsoft.

Each product requires a separate and often disconnected development environment with different design and administration features. Therefore to manage Microsoft BI, you must have all of these development environments available and know how to use them all.

Integration of Various Data Sources: LogiXML vs. Microsoft

LogiXML is data neutral, allowing you to easily connect to all of your organization’s data spread across multiple applications and databases. You can connect with any data source or data model and even combine data sources such as current data accessed through a web service with past data in spreadsheets.

Integration of Various Data Sources with Microsoft

Working with Microsoft components for BI means you will be faced with the challenge of limited support for non-Microsoft based databases and outside data sources. The Microsoft BI stack is centered on SQL Server databases and therefore the data source is optimized to work with SQL Server. Unfortunately, if you need outside content it can be very difficult to integrate.

Finally, Microsoft BI tools are designed with the total Microsoft experience in mind and are therefore optimized for Internet Explorer. While other browsers and devices might be useable, the experience isn’t optimized and may potentially lack in features or visualize differently.

 

Free & Licensed Windows 8, Azure, Office 365, SharePoint On-Premise and Online Tools, Web Parts, Apps available.
For more detail visit https://sharepointsamurai.wordpress.com or contact me at tomas.floyd@outlook.com

PressurePoint – great tool to Stress, Load and Performance test your SharePoint Site

SharePoint2013

 

Awesome tool developed by Margriet Bruggeman.

This version of PressurePoint only works with SharePoint 2013.

There’s a generic version of PressurePoint that works for all versions of SharePoint and even normal web sites at: http://gallery.technet.microsoft.com/PressurePoint-Dragon-for-58648ae4

Requires: The presence of the .NET 4.5 framework, because it makes extensive use of Parallel Programming techniques. Supports anonymous and Windows (NTLM) authentication.

About PressurePoint

When you apply enough pressure, every application you or somebody else builds has a point where it breaks. I call this point the pressure point.

I’d strongly advise undertaking some activities to find out where the pressure point of the application that you’re responsible for lies. Several kinds of tests are commonly used to find out about these:

  • Performance testing, the umbrella term for testing applications responsiveness and stability. Following, I’ll list some more specific relevant types of performance testing.
  • Load testing, makes requests of an application to simulate normal or anticipated load conditions. This kind of test helps greatly when you want to determine what your end users should expect.
  • Endurance testing, tests if an application is able to hold up under continuous prolonged, but normal or expected, load. Typically looks for memory consumption and gradually decreasing performance.
  • Stress testing, here, you try to find the breaking point by applying maximum application capacity and observe in what ways the application breaks. It finds bottlenecks and root causes for performance degradation.
  • Spike testing, applies a sudden and dramatic increase in load and sees how the application responds to that.
  • Isolation testing, tests a specific part of the application. Usually, this involves an area that has proved to be troublesome.

It helps a lot if such tests are repeated throughout development/test/staging/production environments. This allows you to get a feel for your application.

During these tests, you’ll typically look at server response time (instead of rendering time), the time it takes the client to make the request and get the final response back. Because of this, I can advise to execute performance tests as close to the server or server farm as possible to eliminate network latency issues.

Most of the times, as an application developer or admin you don’t have much or any control over the network and you’ll be more interested how the specific application holds up.

Also, but this is quite obvious, if you can avoid it don’t place test clients on the server or server farm itself, or on the host hosting the virtual machines containing server or server farms. This can have quite the effect on the test outcome, although I have to say that in my experience the effect is limited enough to be able to undertake meaningful performance tests launched from the server or server farm. Other quick tips: it typically works better if you execute performance tests using multiple client computers and you should preferably execute performance tests using multiple user accounts.

Whatever types of tests you’re planning to do, please remember that forgetting to do any type of performance testing will result in an interesting product release experience. Lately, I can’t keep track anymore of the number of times companies contact me wishing they would have spent some time doing performance testing.

Lots of Tools

There are lots of tools out there that can help you do performance testing, but in my experience (and I have looked at 100+ of these tools) there are two types of tools: tools that are just a preview of a commercial version and too limited to do anything useful without buying the license and then there are tools that are insanely complex to use. See my blog post at http://sharepointdragons.com/2012/12/26/the-great-free-performance-load-and-stress-testing-tools-that-can-be-used-with-sharepoint-verdict/ for more information. The following overview at http://en.wikipedia.org/wiki/Test_tool is also nice and more objective (well, it would be more accurate to say that it refrains from giving any opinion).

So, it depends on your situation how to proceed. If you have budget, you can buy a great performance test tool and use that. I found myself in situations where I had to do performance testing in companies that didn’t have a budget to invest in performance tooling. There was also another issue…

About SharePoint

As I mainly work in SharePoint environments, I prefer to use a tool that is able to do performance testing specifically targeted towards SharePoint. I found none. During my SharePoint testing, uhm, dare I say, adventures, I found that SharePoint page requests are typically handled just fine and it’s hard to get a SharePoint environment to its knees just doing that. Request times tend to increase linearly, which is a good sign for an application. On top, SharePoint handles excessive page requests gracefully, without falling back in throwing all kinds of errors. Things get a lot more interesting and dangerous when you do one of the following things:

  • Execute custom code
  • Upload and retrieve documents of various sizes and batch sizes
  • Work with custom SharePoint Services, such as Search, Forms Services or SQL Server Reporting Services (let’s just say I picked out these as examples for no particular reason)

When using a testing tool that doesn’t have knowledge about SharePoint, it will be quite hard to test these aspects.

My conclusion

It may come as no surprise that eventually I decided that it was easier to build my own tool that has specific knowledge about SharePoint, can be extended by me at will, and is easy to use. Making extensive use of the .NET parallel programming capabilities, I found it was quite easy to do. When I was done, I decided that I wanted to share the basic version of it (basic, since I build custom extensions in it dedicated to the projects I’m doing) with the community. Later, I’m planning to add a specific version dedicated to SharePoint 2013, but I’m not quite there yet.

What to look for?

Doing performance testing in SharePoint environments without knowing what to look for is not the most useful thing one can do with one’s time. There are specific performance counters you should look out for on SharePoint WFE’s and different ones to check out on the back-end databases server. Depending on your needs, you might also need to spend some time coming up with the right set of performance counters you need for monitoring dedicated application servers. If you want to learn more about this topic, I can definitely recommend my gallery contribution at: http://gallery.technet.microsoft.com/PowerShell-script-for-59cf3f70 I’d also recommend the use of my SharePoint Flavored Weblog Reader (SFWR) tool at http://gallery.technet.microsoft.com/The-SharePoint-Flavored-5b03f323 which helps to analyze IIS log files.

Whether you use these tools or not: bear in mind that running a performance test tool without analyzing what happens on the server is absolutely useless!

How to use the PressurePoint Dragon for SharePoint

PressurePoint is a command line tool that reads an XML file that describes the test you want to execute. Currently, it only supports Windows (NTLM) or anonymous authentication. When you download the PressurePoint ZIP file it contains three things:

  • PressurePoint.exe, the actual performance test tool that can be executed by calling it from the command line. It requires the presence of the .NET 4 framework since it makes extensive use of parallel programming techniques.
  • PressurePoint.exe.config, the configuration file that is mandatory for the PressurePoint tool. Check out the TestLocation app setting and point it to the location of the XML file describing your test:

      <appSettings> 
        <add key="TestLocation" value="C:\Clients\XYZ\PressurePoint\test.xml"/> 
    </appSettings> 
    
  • Test.xml, an example XML Test Description file describing an example test.

Explanation of the structure of a Test Description file

The test description file can do a couple of simple things. It contains a test body that is repeated x times, determined by the repeat attribute of the <Test> element.

<?xml version="1.0" encoding="utf-8" ?> 
<Test repeat="10"> 
[body omitted for clarity] 
</Test>

The <Test> element is the root element and only occurs once. It contains 1 or more <Session> elements. In a Session, you can specify important configuration info, such as the user name (user attribute), password (password attribute), domain name (domain attribute), the number of concurrent users that start a session (e.g. 20 instances of user A start executing the actions as described in a session) via the concurrentUsers attribute, a friendly name that is outputted to the console window to make it easier to identify which session is executed at a given time (friendlySessionName attribute).

Please note: If you’re using anonymous authentication, the values for user, password, and domain can just be left blank.

The following example shows the Session section:

<Session user="administrator" password="verySecret" domain="lc" concurrentUsers="1" friendlySessionName="SessionA"> 
[body omitted for clarity] 
</Session>

Then there are various actions that can be used within a Session. These are:

  • Comment, outputs a text to the console window. Example:

    <Comment>Start Moon session A for administrator</Comment> 
    
  • Request, makes a request to a page. Please note: specify a page here, instead of a generic site url such as http://moon. Because right now, PressurePoint doesn’t support redirects. Example:

    <Request>http://moon/pages/default.aspx</Request>
  • DelaySeconds, waits for a given amount of time to simulate think time. Example:

    <DelaySeconds value="3" />
  • RandomDelaySeconds, waits for a random amount of time within a given range to provide a more realistic simulation of think time (though this might not be what you want, since a fixed delay keeps the test more predictable). Example:

    <RandomDelaySeconds min="1" max="3" />
  • RandomRequest, makes a random request to a page from a given list. Example:

    <RandomRequest> 
      <URL>http://moon/pages/default.aspx</URL> 
      <URL>http://moon:28827/sitepages/home.aspx</URL> 
    </RandomRequest> 
    

    The next example is a full-blown example of a single session by a single user, repeated 10 times:

<?xml version="1.0" encoding="utf-8" ?> 
<Test repeat="10"> 
  <Session user="administrator" password="superSecret" domain="lc" concurrentUsers="1" friendlySessionName="SessionA"> 
    <Comment>Start Moon session A for administrator</Comment>    <Request>http://moon/pages/default.aspx</Request> 
    <RandomRequest> 
      <URL>http://moon/pages/default.aspx</URL> 
      <RandomDelaySeconds min="1" max="3" />  <URL>http://moon:28827/sitepages/home.aspx</URL> 
    </RandomRequest> 
  </Session> 
</Test>

The next example shows how to simulate 1000 concurrent users, using 2 different user accounts, in a test that is repeated 100 times:

<?xml version="1.0" encoding="utf-8" ?> 
<Test repeat="100"> 
  <Session user="administrator" password="secretPwd" domain="test" concurrentUsers="500" friendlySessionName="SessionA"> 
    <Comment>Start session "Home page" for administrator</Comment>    <Request>http://mysrv/sitepages/home.aspx</Request> 
  </Session>  
 
  <Session user="jBlack" password="superSecret" domain="test" concurrentUsers="500" friendlySessionName="SessionA"> 
    <Comment>Start session A for Jack Black</Comment>    <Request>http://mysrv/sitepages/home.aspx</Request> 
  </Session> 
</Test>

The following section contains SharePoint 2013 specific actions.

  • ClientSite, fetches the URL of a SharePoint site collection. Looks like this:

    <ClientSite>
      <Url>http://moon</Url>
    </ClientSite> 
    

 

Quick tips for constructing performance test cases

The following link contains interesting information about the typical type of use of a SharePoint environment: http://office.microsoft.com/en-us/windows-sharepoint-services-it/capacity-planning-for-windows-sharepoint-services-HA001160774.aspx . The quick take away is this:

  • Light usage: the end user makes 20 requests per hour.
  • Typical usage: the end user makes 36 requests per hour.
  • Heavy usage: the end user makes 60 requests per hour.
  • Extreme usage: the end user makes 120 requests per hour.

This will help you build test cases that are more realistic; especially in situations where the customer isn’t really sure how much the application will be used. Concerning this topic, I’ve also found the following topic to be quite interesting: http://blogs.technet.com/b/wbaer/archive/2007/07/06/requests-per-second-required-for-sharepoint-products-and-technologies.aspx

As a final guideline, I’ve also worked with the following rule of thumb that may help you: in a typical enterprise application, 1% of the users makes a request per second during peak time, in an enterprise application that is used extremely, 3% of the users makes a request per second during peak time.
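
To make that concrete: 5,000 users under the typical-usage profile (36 requests per hour each) generate 180,000 requests per hour, or 50 requests per second; applying the 1% rule of thumb to the same 5,000 users also gives a peak of about 50 requests per second, and the 3% rule gives about 150 requests per second.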

Support Tools

It can be frustrating to try a new community tool that doesn’t seem to work. It makes you wonder whether you made a mistake in constructing the XML for the test case, or whether the tool simply doesn’t work. I’ve built two tools that support PressurePoint: Ping Dragon for SharePoint 2010 (http://gallery.technet.microsoft.com/Ping-Dragon-for-SharePoint-70fb299e ) and WinPing Dragon for SharePoint 2010 (http://gallery.technet.microsoft.com/WinPing-Dragon-for-eefb6dd3 ). The tools fulfill a single purpose: ping SharePoint using the same method leveraged by PressurePoint. In other words, if these tools work, PressurePoint will work too. The difference between both support tools is that the WinPing Dragon tool hides the password from view, while the Ping Dragon doesn’t.

What’s going on under the covers?

Use the Resource Monitor tool (resmon.exe) to “check the heartbeat” of PressurePoint, since the tool is a bit of a black box to you and watching it doing its work can be a boring experience. Resource Monitor clearly shows how PressurePoint is building up to the point where it can simulate the load you require to simulate the number of different users and sessions you need. PressurePoint executes each session in a separate thread and Resource Monitor will show an increase of the PressurePoint thread counter until it approximates the intended load.

The System image normally has the highest number of active threads (a few hundred), as you’d expect, but once you’re simulating loads of hundreds or even thousands of end users, PressurePoint surpasses this. One of the things I found interesting is that it can take quite a long time to reach the point where you can actually run hundreds or even thousands of separate threads in a single application (on the environments I’ve tested it on, it can take an hour or more to reach those kinds of numbers). It makes sense, since those are a lot of threads, other threads have to finish their work, and your system has other tasks to take care of. But still, before building the tool, I didn’t anticipate this.

FREE Microsoft Dynamics CRM 2011 List Component for Microsoft SharePoint Server 2010

 

 

CRM2011 – SharePoint 2010 Integration? Glue CRM 2011 & Share Point 2010 together? Make CRM 2011 and Share Point 2010 converse? I wasn’t sure what to call this exactly. “Hooking together” works for me!

Now that we have a CRM 2011 instance and a Share Point site working, let’s get them connected up! Go to this website and download Microsoft Dynamics CRM 2011 List Component for Microsoft SharePoint Server 2010:

Accept the License Terms.

Extract the files to a folder (I chose C:\CRM List).

You will get a prompt “The Installation is complete.” Click OK.

Let’s go over to the Share Point Central Administration Server to install the list component we just extracted. Connect to http://localhost:48835/ (your port might be different, be aware of this). Click Manage web applications.

Click the new Share Point site, and then “General Settings” (the blue cogs).

Scroll down to Browser File Handling and choose Permissive, Click OK.

Let’s head back over to our new Share Point Site. Click Site Actions up top left, and then “Site Settings”.

Under Galleries click “Solutions”.


Click the Word “Solutions” up top (you have to click the word “Solutions”, even though it looks selected), and then click “Upload Solution”.

Select the .wsp component that we extracted wayyy back at the top of this. I used C:\CRM List as my extract folder. Click OK.

You’ll get prompted at this point; I couldn’t activate the control on this screen (but it still needs to be done). We need to make sure some services are running to activate the solution. Click Close.

Head back to the Share Point Central Administration at http://localhost:48835.

Click System Settings –> Manage Services on this server

Click Start beside “Share Point Foundation Sandboxed Code Service”. I also started “Microsoft SharePoint Foundation Subscription Settings Service” (by accident), so that’s why that one’s started.

Now to head back to our Share Point site http://localhost:39083/

Under Galleries click “Solutions”.

Click Solutions again, select crmlistcomponent, and then click “Activate” up top. Activate is now un-greyed out! Click Activate!

The solution has now been activated! Hurray!

There seems to be some confusion about whether or not you need to run a PowerShell script to enable activation of Share Point 2010 solutions (AllowHtcExtn). According to what I’ve read, you would need to run this if Share Point 2010 is running on a domain controller. I didn’t have to do this (and we’re on a domain controller), and I’ve yet to run into a problem with .htc stuff. Even the Microsoft Dynamics CRM 2011 Readme says:
“If you are using Microsoft SharePoint Server 2010 (On-Premises), you must add .htc extensions to the list of allowed file types:
a. Copy the AllowHtcExtn.ps1 script file to the server that is running Microsoft SharePoint Server 2010.
b. In the Windows PowerShell window or in the SharePoint Management Console, run the command: AllowHtcExtn.ps1 .
Example: AllowHtcExtn.ps1 http://servername”

Some people say the script works for them, and some say that just the method we used above works fine.
The SharePoint configuration is complete at this point. You probably want to take a snapshot and name it “After SharePoint Configuration”. Let’s head over to our CRM server (localhost:85).

In CRM Click Settings –> Document Management –> Document Management Settings

Select the entities that you want to have documents enabled on. This will create a “Documents” area when you open an instance of the entity. I’ll just leave the defaults for now. At the bottom punch in your Share Point site that you’ve created and click Next. This is the Share Point server we installed the list component on. You’re not allowed to use localhost:port, just use the computer name:port like below.

Don’t select the box, otherwise it will relate the files to those entities. Without checking the box you will end up with something like Site/EntityName/Record Name (which is what I want, especially if you’re using custom entities). Click Next.

If “Libraries are being created in the path”, click Next.

Everything should “Succeed”, Click Finish.

Let’s test this bad boy out now.

Create a new account called “Test”.

Click Save! Click “Documents” on the left side. You’ll get a prompt saying that the folder (Test) is being created under “Account”. Click OK.

Click Add.

Now you’ll probably get these errors! /crmgrid/scripts/DialogContainer.js and 403 FORBIDDEN! Depressing. The only real information on this error was here: . It wasn’t very clear, but I stumbled through it. It seems that CRM 2011 doesn’t enjoy being called localhost. Let’s fix these up.

The fix for this was to run inetmgr –> Click Microsoft Dynamics CRM –> click Stop

Click “Bindings…” on the right side. Click “Edit” on the items that show “localhost” and change them to the machine name: “win-b80icqrvluf”. This is so it has a “real” name to connect to.

Before:

After:

Now click “Start” on the right side.

Head back over to the CRM (http://win-b80icqrvluf:85/CRMTest/main.aspx) make sure to use the host name, as it might give you the error if you use localhost. Open your Test Account again.

Click Documents –> Add, you should now see this popup (it can take a while to load for the first time on the VM). If you continue to get the error, stop both CRM 2011 and Share Point 2010 servers and restart them. If that doesn’t work, try restarting the whole server.

Pick a file, and click OK.

The file should be uploaded to Share Point now.

Head over to Share Point at http://win-b80icqrvluf:39083 and click “All Site Content” or “Libraries”.

Click Account.

You can see that CRM has created a folder “Test” (for our record). It creates 1 folder per record. Click it to see the files associated to that record!!

The files associated to the record “Test” in Accounts.

Share Point and CRM have combined into a super awesome force of doom. But we’re still missing 1 core piece of functionality (due to not picking a port when we installed CRM).

 

 

Select Master Page App for SharePoint 2013 now available!! (Get the SharePoint 2010 Select Master Page Web Part Free)

In Publishing sites, there is a layouts (application) page through which we can set a custom or alternative master page as the default master page. Unfortunately, this is missing in Team Sites.

This is what this solution is all about. It is targeted mainly for Team sites, since publishing sites already have a provision.

It adds a custom ribbon button in the Share and Track group of the Files group of the Master Page Gallery. This is a SharePoint 2013 hosted app. Refer to the documentation for the technical details.

 

The following screen shots depict the functionality.







 

The custom ribbon button will not be enabled if a folder is selected or if more than one item is selected.
If a single file is selected, the button is enabled, irrespective of the file extension. Upon selecting a file and clicking the ribbon button, a pop-up dialog appears with the text “Working on it..”.

Then a confirmation alert appears, asking “Are you sure?”. Once the user confirms, a progress message is displayed in the pop-up dialog. If the selected file does not have a .master extension, the user is shown the alert “This will work only for master pages.”.

If a master page that is already set as the default is selected and the ribbon button is clicked, the user is shown the alert “The file at <url> is the current default master page. So please select another master page.”. If another master page is selected, the user is shown the alert “Master Page Changed Successfully. Please press CTRL + F5 for changes to reflect.”. Once the user clicks OK on the alert, the pop-up dialog closes, and pressing CTRL + F5 will reflect the updated master page. Any time the user clicks OK or Cancel on the alert screens, the parent screen is refreshed and the current selection is cleared.

The app requires Full Control on the host web, since this is required for setting the master page, and that’s precisely the reason why I couldn’t publish it in the Office Store.

The app has been tested on IE9 and the latest versions of Chrome and Firefox. It may not work on IE8 or on older versions of other browsers if they don’t support HTML5. The app currently supports only English. Also, the app will set the default master page only on the host web (where the app is installed) and not on sub webs.

The app uses jQuery AJAX and REST APIs of SharePoint 2013.
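The app’s source isn’t reproduced here, but to give an idea of the kind of call involved, the following is a minimal sketch (my own illustration, not the app’s actual code) of setting the default master page of a web through the SharePoint 2013 REST API with jQuery AJAX. It assumes the script runs on a page in the host web itself; the real app, being SharePoint-hosted, routes such calls through the cross-domain library.

// Minimal sketch (not the app's actual code): set the default master page of
// the current web via the SharePoint 2013 REST API using jQuery AJAX.
// masterPageUrl is assumed to be a server-relative URL such as
// "/_catalogs/masterpage/custom.master".
function setDefaultMasterPage(masterPageUrl) {
    return $.ajax({
        url: _spPageContextInfo.webAbsoluteUrl + "/_api/web",
        type: "POST",
        data: JSON.stringify({
            "__metadata": { "type": "SP.Web" },
            "MasterUrl": masterPageUrl,
            "CustomMasterUrl": masterPageUrl
        }),
        headers: {
            "Accept": "application/json;odata=verbose",
            "Content-Type": "application/json;odata=verbose",
            "X-RequestDigest": $("#__REQUESTDIGEST").val(),
            "X-HTTP-Method": "MERGE"
        }
    });
}

// Usage: setDefaultMasterPage("/_catalogs/masterpage/seattle.master");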

To use the app, just upload it (the .app file) to the App Catalog, add/install it to the host team site, trust it, and navigate to the Master Page Gallery. You are good to go.

 

With this App, you will also receive the FREE SharePoint 2010 Select Master Page Web Part!!

It adds a custom ribbon button in the Share and Track group of the Documents group of Master Page Gallery.

It is a sandboxed solution, and it is implemented to set the master page of only the root site of a site collection, though it can be customized / extended for sub sites. It requires a user to be at least a site owner, to avoid unnecessary manipulation of the master page by contributors or other users. Refer to the documentation for the technical details.

The following screen shots depict the functionality.





 

 

Introduction to the Unified Logging Service and Creating a Javascript Logging System

Microsoft SharePoint Foundation exposes a rich logging mechanism known as the Unified Logging Service (ULS) that enables developers to write useful information that helps them identify and troubleshoot issues during the application lifecycle. The ULS writes SharePoint Foundation events to the SharePoint Trace Log and stores them in the file system, typically inside the SharePoint root folder, in files named \14\LOGS\SERVER-YYYYMMDD-HHMM.log.

ULS exposes a rich managed object model that enables developers to specify their own configurations, such as categories and severity, while writing exceptions or trace messages to the ULS logs. You can find more details on the managed API in the article Writing to the Trace Log from Custom Code.

With the evolution of a rich client object model in SharePoint 2010 that enables developers to build complex client applications, it is very important to write useful information that is not visible in the user interface but is recorded on the server so it can be monitored by administrators and developers.

To address these scenarios for applications running in thin-client browsers, SharePoint Foundation provides a web service named SharePoint Diagnostics (diagnostics.asmx). This web service enables a client application to submit diagnostic reports directly to the ULS logs.

This article focuses on how you can leverage the SharePoint Diagnostics web service to write trace messages from a custom JavaScript application into the ULS logs.

The following points are discussed:

  • Overview of the SendClientScriptErrorReport web method
  • Creating a simple JavaScript application to log trace messages by using SharePoint Diagnostics web service
  • Setting up the required configurations for enabling logging via the Diagnostics web service
  • Using the application
  • Using the ULS logging script with sandboxed solutions
The Diagnostics web service exposes a single method named SendClientScriptErrorReport that enables client applications to report errors to the ULS service. The following list summarizes the parameters required by the SendClientScriptErrorReport method, with example values:

  • Message: A string containing the message to display to the client. Example: The value of the property 'displaypage' is null or undefined; not a Function object.
  • File: The URL file name associated with the current error. Example: customscript.js
  • Line: A string containing the line of code from which the error is being generated. Example: 9
  • Client: A string containing the client name that is experiencing the error. Example: <client><browser name='Internet Explorer' version='9.0'></browser><language>en-us</language></client>
  • Stack: A string containing the call-stack information from the generated error. Example: <stack><function depth='0' signature=' myFunction() '>function myFunction() { displaypage(); }</function></stack>
  • Team: A string containing a team or product name. Example: Custom SharePoint Application
  • originalFile: The physical file name associated with the current error. Example: customscript.js

In this list, notice that the example values for Client and Stack are XML fragments, not single lines of text. This is stated in the protocol specification documented in 3.1.4.1.2.1 SendClientScriptErrorReport. Even though the protocol specification requires a valid XML fragment for these parameters, the web-service call still succeeds if the supplied values do not follow this schema; building the client and stack as XML fragments simply adds more information to the trace.

The parameter list in the table shows that, unlike the managed API, the SendClientScriptErrorReport web method does not provide any option to specify the category or severity of the message being logged. Also looking at the method name and description, it appears that the exception logged should specify the severity level as Error. However, any message logged through the SharePoint Diagnostics web service is always displayed under the category Unified Logging Service and has a trace log severity level set to Verbose.

Later in this article, you will see the steps required to view the traces written through the SharePoint Diagnostics web service.

In this section, you create a JavaScript application that uses the Diagnostics web service to report errors to the ULS. The application contains a JavaScript file named ULSLogScript.js that contains the necessary functions to communicate and log traces to the Diagnostics web service. These functions are then called directly from any consumer script.

Note
This is a relatively simple application with just one file, so you are not creating a formal SharePoint solution; instead, you save the files directly to the Layouts directory in the SharePoint hive structure.

To create a JavaScript library containing the ULS logging logic

  1. Start Microsoft Visual Studio 2010.
  2. From the File menu, create a new JScript file and save it in the following path: <SharePoint Installation Folder>\14\TEMPLATE\LAYOUTS\LoggingSample\ULSLogScript.js.

    For example, C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\TEMPLATE\LAYOUTS\LoggingSample\ULSLogScript.js.

    Note
    You need to create a new directory named LoggingSample in the Layouts folder.
  3. Because you are using the JQuery library in the application, download the jquery-1.6.4.min.js file from the JQuery portal and add it to the LoggingSample folder created previously.
  4. Type or paste the following code into the ULSLogScript.js file.
    // Creates a custom ulslog object 
    // with the required properties.
    function ulsObject() {
        this.message = null;
        this.file = null;
        this.line = null;
        this.client = null;
        this.stack = null;
        this.team = null;
        this.originalFile = null;
    }
    

    The ulsObject function returns a new instance of a custom object with properties mapped to the parameters required by the SendClientScriptErrorReport method. This object is used throughout the script for performing various operations.

  5. Define the methods that populate the property values specified in the ulsObject method. Begin by defining the function that retrieves the client property. Following the ulsObject method, type or paste the following code.
    // Detecting the browser to create the client information
    // in the required format.
    function getClientInfo() {
        var browserName = '';
    
        if (jQuery.browser.msie)
            browserName = "Internet Explorer";
        else if (jQuery.browser.mozilla)
            browserName = "Firefox";
        else if (jQuery.browser.safari)
            browserName = "Safari";
        else if (jQuery.browser.opera)
            browserName = "Opera";
        else
            browserName = "Unknown";
    
        var browserVersion = jQuery.browser.version;
        var browserLanguage = navigator.language;
        if (browserLanguage == undefined) {
            browserLanguage = navigator.userLanguage;
        }
    
        var client = "<client><browser name='{0}' version='{1}'></browser><language>{2}</language></client>";
        client = String.format(client, browserName, browserVersion, browserLanguage);
     
        return client;
    }
    
    // Utility function to assist string formatting.
    String.format = function () {
        var s = arguments[0];
        for (var i = 0; i < arguments.length - 1; i++) {
            var reg = new RegExp("\\{" + i + "\\}", "gm");
            s = s.replace(reg, arguments[i + 1]);
        }
    
        return s;
    }
    

    The getClientInfo function uses the jQuery library to detect the current browser properties, such as the name and version, and then creates an XML fragment (as discussed previously) describing the browser on which the application is currently running (a compatibility note about jQuery.browser follows this procedure). Additionally, a utility function named String.format assists with string formatting throughout the code.

  6. Next, you need a function to create the call stack for the exception raised in the script. Add the following functions to the ULSLogScript.js code.
    // Creates the callstack in the required format 
    // using the caller function definition.
    function getCallStack(functionDef, depth) {
        if (functionDef != null) {
            var signature = '';
            functionDef = functionDef.toString();
            signature = functionDef.substring(0, functionDef.indexOf("{"));
            if (signature.indexOf("function") == 0) {
                signature = signature.substring(8);
            }
    
            if (depth == 0) {
                var stack = "<stack><function depth='0' signature='{0}'>{1}</function></stack>";
                stack = String.format(stack, signature, functionDef);
            }
            else {
                var stack = "<stack><function depth='1' signature='{0}'></function></stack>";
                stack = String.format(stack, signature);
            }
    
            return stack;
        }
    
        return "";
    }
    

    The getCallStack function receives the function definition where the exception occurred and a depth as parameters. The depth parameter is used by the function to decide whether only the caller function signature is required or whether the complete function definition should be included. Based on the caller function definition, the getCallStack function extracts the required information, such as the signature and body, and creates an XML fragment as described in the protocol specification.

  7. Next, create a function that creates a SOAP packet in the format expected by the Diagnostics web service SendClientScriptErrorReport method. Type or paste the following functions in the ULSLogScript.js file.
    // Creates the SOAP packet required by SendClientScriptErrorReport
    // web method.
    function generateErrorPacket(ulsObj) {
        var soapPacket = "<?xml version=\"1.0\" encoding=\"utf-8\"?>" +
                            "<soap:Envelope xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" " +
                                           "xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" "+
                                           "xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                              "<soap:Body>" +
                                "<SendClientScriptErrorReport " +
                                  "xmlns=\"http://schemas.microsoft.com/sharepoint/diagnostics/\">" +
                                  "<message>{0}</message>" +
                                  "<file>{1}</file>" +
                                  "<line>{2}</line>" +
                                  "<stack>{3}</stack>" +
                                  "<client>{4}</client>" +
                                  "<team>{5}</team>" +
                                  "<originalFile>{6}</originalFile>" +
                                "</SendClientScriptErrorReport>" +
                              "</soap:Body>" +
                            "</soap:Envelope>";
     
        soapPacket = String.format(soapPacket, encodeXmlString(ulsObj.message), encodeXmlString(ulsObj.file), 
                     ulsObj.line, encodeXmlString(ulsObj.stack), encodeXmlString(ulsObj.client), 
                     encodeXmlString(ulsObj.team), encodeXmlString(ulsObj.originalFile));
     
        return soapPacket;
    }
    
    // Utility function to encode special characters in XML.
    function encodeXmlString(txt) {
        txt = String(txt);
        txt = jQuery.trim(txt);
        txt = txt.replace(/&/g, "&amp;");
        txt = txt.replace(/</g, "&lt;");
        txt = txt.replace(/>/g, "&gt;");
        txt = txt.replace(/'/g, "&apos;");
        txt = txt.replace(/"/g, "&quot;");
     
        return txt;
    }
    

    The generateErrorPacket function receives an instance of the ulsObj object and returns the SOAP packet for the SendClientScriptErrorReport function as a string in the expected format. Because the values for some parameters are expected as XML fragments, the encodeXmlString function is used to encode the special characters.

  8. When the SOAP packet has been defined, you need a function to issue an asynchronous request to the Diagnostics web service. Add the code below to the ULSLogScript.js file.
    // Function to form the Diagnostics service URL.
    function getWebSvcUrl() {
        var serverurl = location.href;
        if (serverurl.indexOf("?") != -1) {
            serverurl = serverurl.replace(location.search, '');
        }
     
        var index = serverurl.lastIndexOf("/");
        serverurl = serverurl.substring(0, index - 1);
        serverurl = serverurl.concat('/_vti_bin/diagnostics.asmx');
     
        return serverurl;
    }
    
    // Method to post the SOAP packet to the Diagnostic web service.
    function postMessageToULSSvc(soapPacket) {
        $(document).ready(function () {
            $.ajax({
                url: getWebSvcUrl(),
                type: "POST",
                dataType: "xml",
                data: soapPacket, //soap packet.
                contentType: "text/xml; charset=\"utf-8\"",
                success: handleResponse, // Invoke when the web service call is successful.
                error: handleError// Invoke when the web service call fails.
            });
        });
    }
    
    // Invoked when the web service call succeeds.
    function handleResponse(data, textStatus, jqXHR) {
        // Custom code...
        alert('Successfully logged trace to ULS');
     }
     
    // Invoked when the web service call fails.
    function handleError(jqXHR, textStatus, errorThrown) {
        //Custom code...
            alert('Error occurred in executing the web request');
    }
    

    The postMessageToULSSvc function performs an asynchronous HTTP request and posts the SOAP packet to the Diagnostics web service. The URL of the Diagnostics web service is dynamically constructed in the getWebSvcUrl function. The postMessageToULSSvc function also defines handlers for success and error responses. Instead of displaying alerts in the handlers, other logic can be written as required by the application.

  9. Finally, you need a function that is invoked automatically when an error occurs in the code. To register this function globally for all the JavaScript functions on the page, you attach this function to the window.onerror event. Add the following lines of code as the first line of the ULSLogScript.js file.
    // Registering the ULS logging function on a global level.
    window.onerror = logErrorToULS;
    
    // Set default value for teamName.
    var teamName = "Custom SharePoint Application";
    
    // Further add the logErrorToULS method at the end of the script.
    
    // Function to log messages to Diagnostic web service.
    // Invoked by the window.onerror message.
    function logErrorToULS(msg, url, linenumber) {
        var ulsObj = new ulsObject();
        ulsObj.message = "Error occurred: " + msg;
        ulsObj.file = url.substring(url.lastIndexOf("/") + 1); // Get the current file name.
        ulsObj.line = linenumber;
        ulsObj.stack = getCallStack(logErrorToULS.caller); // Create error call stack.
        ulsObj.client = getClientInfo(); // Create client information.
        ulsObj.team = teamName; // Declared in the consumer script.
        ulsObj.originalFile = ulsObj.file;
    
        var soapPacket = generateErrorPacket(ulsObj); // Create the soap packet.
        postMessageToULSSvc(soapPacket); // Post to the web service.
    
        return true;
    }
    

    The line window.onerror = logErrorToULS links the function logErrorToULS with the window.onerror event. This enables you to capture the required information such as the error message, line number, and error file. The teamName variable enables you to set a unique value with respect to the calling application. This can be overridden in the consumer scripts. The logErrorToULS function creates an instance of the ulsObj object and populates all of its properties. Here, you see that the stack property of the ulsObj object is set to logErrorToULS.caller. This provides the function definition of the method that invoked this function. The postMessageToULSSvc function is called to write the error information to the trace logs.

    Note
    Because you cannot specify the severity level of the trace message in the SendClientScriptErrorReport method, the message property of the ulsObj object is prepended with text indicating that the message logged is part of an exception.
  10. The logErrorToULS function is called automatically when an error occurs on the page, but to intentionally write a trace message to the ULS, you need one more function which can be called specifically. Add the following function just below the logErrorToULS function.
    // Function to log message to Diagnostic web service.
    // Specifically invoked by a consumer method.
    function logMessageToULS(message, fileName) {
        if (message != null) {
            var ulsObj = new ulsObject();
            ulsObj.message = message;
            ulsObj.file = fileName;
            ulsObj.line = 0; // We don't know the line, so we set it to zero.
            ulsObj.stack = getCallStack(logMessageToULS.caller);
            ulsObj.client = getClientInfo();
            ulsObj.team = teamName;
            ulsObj.originalFile = ulsObj.file;
    
            var soapPacket = generateErrorPacket(ulsObj);
            postMessageToULSSvc(soapPacket);
        }
    }
    

    Unlike the logErrorToULS function, the logMessageToULS function accepts the message to be logged and the file name where the error occurred as parameters.

So far, you have created the required logic to write trace messages or exceptions to the ULS logs. Now you need to write a function that consumes the logErrorToULS or logMessageToULS functions.
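One compatibility note before building the consumer: the getClientInfo function above relies on jQuery.browser, which works with the jQuery 1.6.4 build used in this article but was deprecated and removed in jQuery 1.9. If you ever move to a newer jQuery version, a simple userAgent-based replacement (a rough sketch only, not an exhaustive detection routine) could look like this:

// Rough replacement for the jQuery.browser-based detection in getClientInfo,
// for jQuery versions (1.9 and later) where jQuery.browser no longer exists.
function getBrowserName() {
    var ua = navigator.userAgent;
    if (ua.indexOf("MSIE") !== -1 || ua.indexOf("Trident/") !== -1) return "Internet Explorer";
    if (ua.indexOf("OPR/") !== -1 || ua.indexOf("Opera") !== -1) return "Opera";
    if (ua.indexOf("Firefox") !== -1) return "Firefox";
    if (ua.indexOf("Chrome") !== -1) return "Chrome";
    if (ua.indexOf("Safari") !== -1) return "Safari";
    return "Unknown";
}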

To create the consumer application

  1. Navigate to your SharePoint site.
  2. Create a new Web Parts page.
  3. Add a Content Editor Web Part in any of the available Web Part zones.
  4. Edit the Web Part and type or paste the following text in the HTML source.
    <script src="/_layouts/LoggingSample/jquery-1.6.4.min.js" type="text/javascript"></script>
     <script src="/_layouts/LoggingSample/ULSLogScript.js" type="text/javascript"></script>
     <script type="text/javascript">
            var teamName = "Simple ULS Logging";
            function doWork() {
                unknownFunction();
            }
            function logMessage() {
                logMessageToULS('This is a trace message from CEWP', 'loggingsample.aspx');
            }
     </script>
    
    <input type="button" value="Log Exception" onclick="doWork();" />
        <br /><br />
      <input type="button" value="Log Trace" onclick="logMessage();" />
    
    

    This HTML code contains the required script references to include the JQuery library and the ULSLogScript.js file that you created in the previous section. It also contains two inline JavaScript functions and the respective input buttons to invoke them.

    To demonstrate exception handling, the doWork function makes a call to an unknownFunction function that does not exist. This invokes an exception that is intercepted and logged by the ULSLogScript.js code. To demonstrate message logging, the logMessage function calls the logMessageToULS function to write trace messages to ULS.

  5. Exit the web page design mode.
  6. Save the Web Parts page.
Finally, you need to configure the Diagnostic Logging Service in SharePoint Central Administration to ensure that the traces and exceptions logged from the Diagnostics web service are visible in the ULS logs.

To configure the Diagnostic Logging Service

  1. Open SharePoint Central Administration.
  2. From the Quick Launch, click Monitoring.
    Figure 1. Click the Monitoring option

    Click the Monitoring option

  3. On the monitoring page, in the Reporting section, click Configure diagnostic logging.
    Figure 2. Click Configure diagnostic logging

    Click Configure diagnostic logging

  4. From all categories, expand the SharePoint Foundation category.
    Figure 3. Expand the SharePoint Foundation category

    Expand the SharePoint Foundation category

  5. Select the Unified Logging Service category.
    Figure 4. Select Unified Logging Service

    Select Unified Logging Service

  6. In the Least critical event to report to the trace log list, select Verbose.
    Figure 5. In the dropdown list, select Verbose

    From the dropdown list, select Verbose

  7. Click OK to save the configuration.

The server is now ready to log traces sent by the Diagnostics web service to ULS. These traces appear under the category Unified Logging Service with a severity set to Verbose.

In this section, you test the application by raising an error that is logged to the ULS.

To test the logging application

  1. Click the Log Exception button inside the Content Editor Web Part (CEWP).
    Figure 6. Click the Log Exception button

    Click the Log Exception button

  2. An alert indicates that the message has been logged successfully to ULS.
    Figure 7. Confirmation message

    Confirmation message

  3. To see the exception details in the ULS logs, navigate to the Logs folder in the SharePoint hive ({SP Installation Path}\14\LOGS\)
  4. Because multiple log files can be present in the Logs folder, perform a descending sort on the Date modified field.
  5. Open the most recent log file in a text editor such as Notepad and then search for Simple ULS Logging (the team name specified previously). You should see all the web service parameters as supplied from the client application, from Message to OriginalFileName, in the following text:

    10/14/2011 21:00:37.87  w3wp.exe (0x097C)  0x14DC  SharePoint Foundation  Unified Logging Service  a084  Verbose  Message: Error occurred: The value of the property 'unknownFunction' is null or undefined, not a Function object  543a6672-9078-452f-93bd-545c4babefd5
    10/14/2011 21:00:37.87  w3wp.exe (0x097C)  0x14DC  SharePoint Foundation  Unified Logging Service  a085  Verbose  File: ULS%20Logging%20Sample.aspx  543a6672-9078-452f-93bd-545c4babefd5
    10/14/2011 21:00:37.87  w3wp.exe (0x097C)  0x14DC  SharePoint Foundation  Unified Logging Service  a086  Verbose  Line: 676  543a6672-9078-452f-93bd-545c4babefd5
    10/14/2011 21:00:37.87  w3wp.exe (0x097C)  0x14DC  SharePoint Foundation  Unified Logging Service  a087  Verbose  Client: <client><browser name='Internet Explorer' version='8.0'></browser><language>en-us</language></client>  543a6672-9078-452f-93bd-545c4babefd5
    10/14/2011 21:00:37.87  w3wp.exe (0x097C)  0x14DC  SharePoint Foundation  Unified Logging Service  a088  Verbose  Stack: <stack><function depth='0' signature=' doWork() '>function doWork() { unknownFunction(); }</function></stack>  543a6672-9078-452f-93bd-545c4babefd5
    10/14/2011 21:00:37.87  w3wp.exe (0x097C)  0x14DC  SharePoint Foundation  Unified Logging Service  a089  Verbose  TeamName: Simple ULS Logging  543a6672-9078-452f-93bd-545c4babefd5
    10/14/2011 21:00:37.87  w3wp.exe (0x097C)  0x14DC  SharePoint Foundation  Unified Logging Service  a08a  Verbose  OriginalFileName: ULS%20Logging%20Sample.aspx  543a6672-9078-452f-93bd-545c4babefd5

    Looking at the log message, you can easily determine that the exception occurred because unknownFunction was not defined, along with other relevant details such as the line number.

  6. Similarly, clicking Log Trace on the CEWP writes the following trace message:

    10/14/2011 21:29:55.76  w3wp.exe (0x097C)  0x0F6C  SharePoint Foundation  Unified Logging Service  a084  Verbose  Message: This is a trace message from CEWP  8c182889-c323-46f3-a287-a538c379f152
    10/14/2011 21:29:55.76  w3wp.exe (0x097C)  0x0F6C  SharePoint Foundation  Unified Logging Service  a085  Verbose  File: loggingsample.aspx  8c182889-c323-46f3-a287-a538c379f152
    10/14/2011 21:29:55.76  w3wp.exe (0x097C)  0x0F6C  SharePoint Foundation  Unified Logging Service  a086  Verbose  Line: 0  8c182889-c323-46f3-a287-a538c379f152
    10/14/2011 21:29:55.76  w3wp.exe (0x097C)  0x0F6C  SharePoint Foundation  Unified Logging Service  a087  Verbose  Client: <client><browser name='Internet Explorer' version='8.0'></browser><language>en-us</language></client>  8c182889-c323-46f3-a287-a538c379f152
    10/14/2011 21:29:55.76  w3wp.exe (0x097C)  0x0F6C  SharePoint Foundation  Unified Logging Service  a088  Verbose  Stack: <stack><function depth='1' signature=' logMessage() '></function></stack>  8c182889-c323-46f3-a287-a538c379f152
    10/14/2011 21:29:55.76  w3wp.exe (0x097C)  0x0F6C  SharePoint Foundation  Unified Logging Service  a089  Verbose  TeamName: Simple ULS Logging  8c182889-c323-46f3-a287-a538c379f152
    10/14/2011 21:29:55.76  w3wp.exe (0x097C)  0x0F6C  SharePoint Foundation  Unified Logging Service  a08a  Verbose  OriginalFileName: loggingsample.aspx  8c182889-c323-46f3-a287-a538c379f152

    In this log, you see that a trace message was sent by the logMessage function.

In a sandboxed solution, you cannot deploy any file to the server file system (the Layouts folder), so to make the ULS logging script work, you need to make the following two changes:

  1. Provision the jquery-1.6.4.min.js and ULSLogScript.js files to the Styles Library of the site collection (or any other library with appropriate read access).
  2. Update the script references in the consumer Content Editor Web Part (CEWP) as needed (see the example below this list).

The remaining functionality should work as is.
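For example, if both files are uploaded to a LoggingSample folder in the Styles Library of the site collection (the exact library and folder are up to you; this is only an illustration), the script references in the consumer web part would change along these lines:

<script src="/Style%20Library/LoggingSample/jquery-1.6.4.min.js" type="text/javascript"></script>
<script src="/Style%20Library/LoggingSample/ULSLogScript.js" type="text/javascript"></script>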

What is Kendo UI

Kendo UI is an HTML5, jQuery-based framework for building modern web apps. The framework features lots of UI widgets, a rich data visualization framework, an auto-adaptive Mobile framework, and all of the tools needed for HTML5 app development, such as Data Binding, Templating, Drag-and-Drop API, and more.
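As a quick taste of two of those features, templating and MVVM-style data binding, here is a small sketch (the element IDs and data are made up for illustration):

// Templating: compile a template once, then render it with data.
var rowTemplate = kendo.template("<li>#: name # (#: price #)</li>");
var html = rowTemplate({ name: "Coffee", price: 3.5 }); // "<li>Coffee (3.5)</li>"

// MVVM data binding: changes to the view model are reflected in bound elements.
var viewModel = kendo.observable({
    name: "Coffee",
    price: 3.5
});
// Binds any elements with data-bind attributes inside the #product container.
kendo.bind($("#product"), viewModel);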

Kendoui

 

Kendo UI comes in different bundles:

  • Kendo UI Web – HTML5 widgets for desktop browsing experience.
  • Kendo UI DataViz – HTML5 data visualization widgets.
  • Kendo UI Mobile – HTML5 framework for building hybrid mobile applications.
  • Kendo UI Complete – includes Kendo UI Web, Kendo UI DataViz and Kendo UI Mobile.
  • Telerik UI for ASP.NET MVC – Kendo UI Complete plus ASP.NET MVC wrappers for Kendo UI Web, DataViz and Mobile.
  • Telerik UI for JSP – Kendo UI Complete plus JSP wrappers for Kendo UI Web and Kendo UI DataViz.
  • Telerik UI for PHP – Kendo UI Complete plus PHP wrappers for Kendo UI Web and Kendo UI DataViz.

Installing and Getting Started with Kendo UI

You can download all Kendo UI bundles from the download page.

The distribution zip file contains the following:

  • /examples – quick start demos.
  • /js – minified JavaScript files.
  • /src – complete source code. Not available in the trial distribution.
  • /styles – minified CSS files and theme images.
  • /wrappers – server-side wrappers. Available in Telerik UI for ASP.NET MVC, JSP or PHP.
  • changelog.html – Kendo UI release notes.

Using Kendo UI

To use Kendo UI in your HTML page you need to include the required JavaScript and CSS files.

Kendo UI Web

  1. Download Kendo UI Web and extract the distribution zip file to a convenient location.
  2. Copy the /js and /styles directories of the Kendo UI Web distribution to your web application root directory.
  3. Include the Kendo UI Web JavaScript and CSS files in the head tag of your HTML page. Make sure the common CSS file is registered before the theme CSS file. Also make sure only one combined script file is registered. For more information, please refer to the Javascript Dependencies page.
    <!-- Common Kendo UI Web CSS -->
    <link href="styles/kendo.common.min.css" rel="stylesheet" />
    <!-- Default Kendo UI Web theme CSS -->
    <link href="styles/kendo.default.min.css" rel="stylesheet" />
    <!-- jQuery JavaScript -->
    <script src="js/jquery.min.js"></script>
    <!-- Kendo UI Web combined JavaScript -->
    <script src="js/kendo.web.min.js"></script>
    
  4. Initialize a Kendo UI Web Widget (the KendoDatePicker in this example):
    <!-- HTML element from which the Kendo DatePicker would be initialized -->
    <input id="datepicker" />
    <script>
    $(function() {
        // Initialize the Kendo DatePicker by calling the kendoDatePicker jQuery plugin
        $("#datepicker").kendoDatePicker();
    });
    </script>
    

Here is the complete example:

<!DOCTYPE html>
<html>
    <head>
        <title>Kendo UI Web</title>
        <link href="styles/kendo.common.min.css" rel="stylesheet" />
        <link href="styles/kendo.default.min.css" rel="stylesheet" />
        <script src="js/jquery.min.js"></script>
        <script src="js/kendo.web.min.js"></script>
    </head>
    <body>
        <input id="datepicker" />
        <script>
            $(function() {
                $("#datepicker").kendoDatePicker();
            });
        </script>
    </body>
</html>
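Most widgets also accept a configuration object at initialization time. For instance, the DatePicker above could be given a display format and a date range (a small sketch; see the Kendo UI documentation for the full option list):

$(function() {
    // Initialize the DatePicker with a few common configuration options.
    $("#datepicker").kendoDatePicker({
        format: "yyyy-MM-dd",          // display format
        value: new Date(),             // initial value
        min: new Date(2013, 0, 1),     // earliest selectable date
        max: new Date(2013, 11, 31)    // latest selectable date
    });
});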

Kendo UI DataViz

  1. Download Kendo UI DataViz and extract the distribution zip file to a convenient location.
  2. Copy the /js and /styles directories of the Kendo UI DataViz distribution to your web application root directory.
  3. Include the Kendo UI DataViz JavaScript and CSS files in the head tag of your HTML page:
    <!-- Kendo UI DataViz CSS -->
    <link href="styles/kendo.dataviz.min.css" rel="stylesheet" />
    <!-- jQuery JavaScript -->
    <script src="js/jquery.min.js"></script>
    <!-- Kendo UI DataViz combined JavaScript -->
    <script src="js/kendo.dataviz.min.js"></script>
    
  4. Initialize a Kendo UI DataViz widget (the Kendo Radial Gauge in this example):
    <!-- HTML element from which the Kendo Radial Gauge would be initialized -->
    <div id="gauge"></div>
    <script>
    $(function() {
        $("#gauge").kendoRadialGauge();
    });
    </script>
    

Here is the complete example:

<!DOCTYPE html>
<html>
    <head>
        <title>Kendo UI DataViz</title>
        <link href="styles/kendo.dataviz.min.css" rel="stylesheet" />
        <script src="js/jquery.min.js"></script>
        <script src="js/kendo.dataviz.min.js"></script>
    </head>
    <body>
        <div id="gauge"></div>
        <script>
        $(function() {
            $("#gauge").kendoRadialGauge();
        });
        </script>
    </body>
</html>
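As with the other widgets, the gauge accepts a configuration object; a small sketch (values chosen arbitrarily for illustration):

$(function() {
    // Configure the radial gauge with a pointer value and colored scale ranges.
    $("#gauge").kendoRadialGauge({
        pointer: { value: 65 },    // current needle position
        scale: {
            min: 0,
            max: 100,
            ranges: [
                { from: 0, to: 60, color: "#8dcb2a" },   // green zone
                { from: 60, to: 90, color: "#ffc700" },  // yellow zone
                { from: 90, to: 100, color: "#c20000" }  // red zone
            ]
        }
    });
});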

Kendo UI Mobile

  1. Download Kendo UI Mobile and extract the distribution zip file to a convenient location.
  2. Copy the /js and /styles directories of the Kendo UI Mobile distribution to your web application root directory.
  3. Include the Kendo UI Mobile JavaScript and CSS files in the head tag of your HTML page:
    <!-- Kendo UI Mobile CSS -->
    <link href="styles/kendo.mobile.all.min.css" rel="stylesheet" />
    <!-- jQuery JavaScript -->
    <script src="js/jquery.min.js"></script>
    <!-- Kendo UI Mobile combined JavaScript -->
    <script src="js/kendo.mobile.min.js"></script>
    
  4. Initialize a Kendo Mobile Application
    <!-- Kendo Mobile View -->
    <div data-role="view" data-title="View" id="index">
        <!--Kendo Mobile Header -->
        <header data-role="header">
            <!--Kendo Mobile NavBar widget -->
            <div data-role="navbar">
                <span data-role="view-title"></span>
            </div>
        </header>
        <!--Kendo Mobile ListView widget -->
        <ul data-role="listview">
          <li>Item 1</li>
          <li>Item 2</li>
        </ul>
        <!--Kendo Mobile Footer -->
        <footer data-role="footer">
            <!-- Kendo Mobile TabStrip widget -->
            <div data-role="tabstrip">
                <a data-icon="home" href="#index">Home</a>
                <a data-icon="settings" href="#settings">Settings</a>
            </div>
        </footer>
    </div>
    <script>
    // Initialize a new Kendo Mobile Application
    var app = new kendo.mobile.Application();
    </script>
    

Here is the complete example:

<!DOCTYPE html>
<html>
    <head>
        <title>Kendo UI Mobile</title>
        <link href="styles/kendo.mobile.all.min.css" rel="stylesheet" />
        <script src="js/jquery.min.js"></script>
        <script src="js/kendo.mobile.min.js"></script>
    </head>
    <body>
        <div data-role="view" data-title="View" id="index">
            <header data-role="header">
                <div data-role="navbar">
                    <span data-role="view-title"></span>
                </div>
            </header>
            <ul data-role="listview">
              <li>Item 1</li>
              <li>Item 2</li>
            </ul>
            <footer data-role="footer">
                <div data-role="tabstrip">
                    <a data-icon="home" href="#index">Home</a>
                    <a data-icon="settings" href="#settings">Settings</a>
                </div>
            </footer>
        </div>
        <script>
        var app = new kendo.mobile.Application();
        </script>
    </body>
</html>
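The Application object also accepts options and supports programmatic navigation between views; for example (a small sketch, option values chosen for illustration):

// Create the mobile application with a couple of common options.
var app = new kendo.mobile.Application(document.body, {
    initial: "#index",     // view to show on startup
    transition: "slide"    // default transition between views
});

// Navigate to the settings view from code (e.g. from a button click handler).
app.navigate("#settings");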

Server-side wrappers

Kendo UI provides server-side wrappers for ASP.NET, PHP and JSP. Those are classes (ASP.NET and PHP) or XML tags (JSP) which allow configuring the Kendo UI widgets with server-side code.

You can find more info about the server-side wrappers here:

  • Get Started with Telerik UI for ASP.NET MVC
  • Get Started with Telerik UI for JSP
  • Get Started with Telerik UI for PHP

Next Steps

Kendo UI videos

You can watch the videos in the Kendo UI YouTube channel.

Kendo UI Dojo

A lot of interactive tutorials are available in the Kendo UI Dojo.

Further reading

  1. Kendo UI Widgets
  2. Data Attribute Initialization
  3. Requirements

Examples

  1. Online demos
  2. Code library projects
  3. Examples available on GitHub
    • ASP.NET MVC examples
    • ASP.NET MVC Kendo UI Music Store
    • ASP.NET WebForms examples
    • JSP examples
    • Kendo Mobile Sushi
    • PHP examples
    • Ruby on Rails examples

Help Us Improve Kendo UI Documentation, Samples, Tutorials and Demos

The Kendo UI team would LOVE your help to improve our documentation. We encourage you to contribute in the way that you choose:

Submit a New Issue at GitHub

Open a new issue on the topic if it does not exist already. When creating an issue, please provide a descriptive title, be as specific as possible, and link to the document in question. If you can provide a link to the closest anchor to the issue, that is even better.

Update the Documentation at GitHub

This is the most direct method. Follow the contribution instructions. The basic steps are that you fork our documentation and submit a pull request. That way you can contribute to exactly where you found the error and our technical writing team just needs to approve your change request. Please use only standard Markdown and follow the directions at the link. If you find an issue in the docs, or even feel like creating new content, we are happy to have your contributions!

Forums

You can also go to the Kendo UI Forums and leave feedback. This method will take a bit longer to reach our documentation team, but if you like the accountability of forums and you want a fast reply from our amazing support team, leaving feedback in the Kendo UI forums guarantees that your suggestion has a support number and that we’ll follow up on it. Thank you for contributing to the Kendo UI community!

NEW “Filter My Lists” Web Part now available + FREE Metro UI Master Page when ordering

“Filter My Lists” Web Part

Saves you time with optimal performance

Find what you are looking for with a few clicks, even in cluttered sites and lists with masses of items and documents.

Find exactly what you need and stop wasting your time browsing SharePoint.
Filter the content of multiple lists and libraries in a single step.

Combine search and metadata filters

In a single panel combine item, document and attachment searches with metadata keyword searches and managed metadata filters.

Select multiple filter values from drop-down lists or alternatively use the keyword search of metadata fields with the help of wildcard characters and logical operators.

Export filtered views to Excel

Export filtered views and data to Excel. A print view enables you to print your results in a clear printable format with a single click.

Keep views clear and concise

Provides a complete set of filters without cluttering list views and keeps your list views clear, concise and speedy. Enables you to filter SharePoint using columns which aren’t visible in list views.

Refine filters and save them for future use, whether private, to share with others or to use as default filters.

FREE Metro Style UI Master Page

 

Screen Capture Medium

Modern UI Master Page and Styles for SharePoint 2010.

This will give the Metro/Modern UI styling of SharePoint 2013 to your SharePoint 2010 team sites.

Features include:
– Quick launch styling
– Global navigation and drop-down styling
– Search box styling and layout change
– Web part header styling
– Segoe UI font

SAP Weekend : Part 2 – Using the Microsoft BizTalk Server for B2B Integration with SharePoint

This is Part 2 of my past weekend’s activities with SharePoint and SAP Integration methods.

 

In this post I am looking at how to use the BizTalk Adapter with SharePoint

 

Topics

  • Abstract
  • Goal
  • Business Scenario
  • Environment
  • Document Flow
  • Integration Steps
  • .NET Support
  • Summary

 

Abstract

In the past few years, the way business is done has moved towards implementing Enterprise Resource Planning (ERP) systems for key areas like marketing, sales and manufacturing operations. Today, most large organizations that deal with all major world markets rely heavily on such systems.

The operational systems of any organization span its worldwide network of marketing teams as well as its manufacturing and distribution operations. In order to provide customers with reliable information, each of these systems needs to be integrated as part of the larger enterprise.

This ultimately results in a more efficient enterprise overall, providing more reliable information and better customer service. This paper addresses the integration of BizTalk Server with an Enterprise Resource Planning system, the need for that integration, and its role in the current e-business scenario.

 

Goal

There are several key business drivers, such as customers and partners, that need to communicate on different fronts for a successful business relationship. To achieve this communication, various systems need to be integrated, which leads to evaluating and developing a B2B integration capability and e-business strategy. This improves the quality of the business information at the organization’s disposal, helping to improve delivery times and costs and to offer customers a higher level of overall service.

To provide B2B capabilities, there is a need to give access to the business application data, providing partners with the ability to execute global business transactions. Facing internal integration and business-to-business (B2B) challenges on a global scale, the organization needs to look for a suitable solution.

To integrate worldwide marketing, manufacturing and distribution facilities based on a core ERP with a variety of information systems, the organization needs to come up with a strategic deployment of integration technology products and integration service capabilities.

 

Business Scenario

Now take the example of ABC Manufacturing Company, whose success rests on the strength of its Europe-wide trading relationships. The company recognizes the need to strengthen these relationships by processing orders faster and more efficiently than ever before.

The company needed a new platform that could integrate orders from several countries, accepting payments in multiple currencies and translating measurements according to each country’s standards. The bottom line for ABC’s e-strategy was to accelerate order processing. To achieve this, the basic necessity was to eliminate multiple collections of data and the use of invalid data.

By using less paper, ABC would cut processing costs and speed up the information flow. Keeping this long term goal in mind, ABC Manufacturing Company can now think of integrating its four key countries into a new business-to-business (B2B) platform.

 

Here is another example: XYZ Marketing Company. Users visit this company’s website to explore a variety of products offered to its thousands of customers all over the world. This company always understood that it could offer greater benefits to customers if it could more efficiently integrate their back-end systems. With such integration, customers could enjoy the advantages of highly efficient e-commerce sites, where a visitor on the Web could place an order that would flow smoothly from the website to the customer’s order entry system.

 

Some of those back-end order entry systems are built on the latest, most sophisticated enterprise resource planning (ERP) system on the market, while others are built on legacy systems that have never been upgraded. Different customers require information formatted in different ways, but XYZ has no elegant way to transform the information coming out of the website to meet customer needs. With the traditional approach:

For each new e-commerce customer on the site, XYZ’s staff has to spend a significant amount of time creating a transformation application to facilitate the exchange of information. With a better approach, XYZ needs a robust messaging solution that provides the flexibility and agility to meet a range of customer needs quickly and effectively. Here again, XYZ can think of integrating its customers’ back-end systems with the help of a business-to-business (B2B) platform.

 

Environment

Many large-scale organizations maintain a centralized SAP environment as their core enterprise resource planning (ERP) system. The SAP system is used for the management and processing of all global business processes and practices. B2B integration mainly relies on asynchronous messaging, Electronic Data Interchange (EDI) and XML document transformation mechanisms to facilitate the transformation and exchange of information between the ERP system and other applications, including legacy systems.

For business document routing, transformation, and tracking, the existing SAP-XML/EDI technology road map needs an XML service engine. This allows the development of a complex set of mappings from and to SAP to meet internal and external XML/EDI technology and business strategy. Microsoft BizTalk Server is the best choice to handle the data interchange and mapping requirements. BizTalk Server has the most comprehensive development and management support among business-to-business platforms. Microsoft BizTalk Server and BizTalk XML Framework version 2.0 with Simple Object Access Protocol (SOAP) version 1.1 provide precisely the kind of messaging solution that is needed to facilitate integration in a cost-effective manner.

 

Document Flow

Friends, now let’s look at the actual flow of a document from the source system to the customer’s target system using BizTalk Server. When a document is created, it is sent to a TCP/IP-based Application Link Enabling (ALE) port, a BizTalk-based receive function that is used for XML conversion. The document then passes the XML to a processing script (VBScript) that is running as a BizTalk Application Integration Component (AIC). The following figure shows how BizTalk Server acts as a hub between applications that reside in two different organizations:

The data is serialized to the customer/vendor XML format using the Extensible Stylesheet Language Transformations (XSLT) generated from the BizTalk Mapper using a BizTalk channel. The XML document is sent using synchronous Hypertext Transfer Protocol Secure (HTTPS) or another requested transport protocol such as the Simple Mail Transfer Protocol (SMTP), as specified by the customer.

The following figure shows steps for XML document transformation:

The total serialized XML result is passed back to the processing script that is running as a BizTalk AIC. An XML “receipt” document then is created and submitted to another BizTalk channel that serializes the XML status document into a SAP IDOC status message. Finally, a Remote Function Call (RFC) is triggered to the SAP instance/client using a compiled C++/VB program to update the SAP IDOC status record. A complete loop of document reconciliation is achieved. If the status is not successful, an e-mail message is created and sent to one of the Support Teams that own the customer/vendor business XML/EDI transactions so that the conflict can be resolved. All of this happens instantaneously in a completely event-driven infrastructure between SAP and BizTalk.

Integration Steps

Let’s talk about a very popular Order Entry and tracking scenario while discussing integration hereafter. The following sections describe the high-level steps required to transmit order information from Order Processing pipeline Component into the SAP/R3 application, and to receive order status update information from the SAP/R3 application.

The integration of AFS purchase order reception with SAP is achieved using the BizTalk Adapter for SAP (BTS-SAP). The IDOC handler is used by the BizTalk Adapter to provide the transactional support for bridging tRFC (Transactional Remote Function Calls) to MSMQ DTC (Distributed Transaction Coordinator). The IDOC handler is a COM object that processes IDOC documents sent from SAP through the Com4ABAP service, and ensures their successful arrival at the appropriate MSMQ destination. The handler supports the methods defined by the SAP tRFC protocol. When integrating purchase order reception with the SAP/R3 application, BizTalk Server (BTS) provides the transformation and messaging functionality, and the BizTalk Adapter for SAP provides the transport and routing functionality.

The following two sequential steps indicate how the whole integration takes place:

  • Purchase order reception integration
  • Order Status Update Integration

Purchase Order Reception Integration

  1. Suppose a new pipeline component is added to the Order Processing pipeline. This component creates an XML document that is equivalent to the OrderForm object that is passed through the pipeline. This XML purchase order is in Commerce Server Order XML v1.0 format and, once created, is sent to a special Microsoft Message Queue (MSMQ) queue created specifically for this purpose. Writing the order from the pipeline to MSMQ:

    The first step in sending order data to the SAP/R3 application involves building a new pipeline component to run within the Order Processing pipeline. This component must perform the following two tasks:

    A] Make an XML-formatted copy of the OrderForm object that is passing through the order processing pipeline. The GenerateXMLForDictionaryUsingSchema method of the DictionaryXMLTransforms object is used to create the copy.

    Private Function IPipelineComponent_Execute(ByVal objOrderForm As Object, _
        ByVal objContext As Object, ByVal lFlags As Long) As Long
    
    On Error GoTo ERROR_Execute
    
    Dim oXMLTransforms As Object
    Dim oXMLSchema As Object
    Dim oOrderFormXML As Object
    
    ' Return 1 for Success.
    IPipelineComponent_Execute = 1
    
    ' Create a DictionaryXMLTransforms object.
    Set oXMLTransforms = CreateObject("Commerce.DictionaryXMLTransforms")
    
    ' Create a PO schema object.
    Set oXMLSchema = oXMLTransforms.GetXMLFromFile(sSchemaLocation)
    
    ' Create an XML version of the order form.
    Set oOrderFormXML = oXMLTransforms.GenerateXMLForDictionaryUsingSchema _
        (objOrderForm, oXMLSchema)
    
    WritePO2MSMQ sQueueName, oOrderFormXML.xml, PO_TO_ERP_QUEUE_LABEL, _
        sBTSServerName, AFS_PO_MAXTIMETOREACHQUEUE
    
    Exit Function
    
    ERROR_Execute:
    App.LogEvent "QueuePO.CQueuePO -> Execute Error: " & _
    vbCrLf & Err.Description, vbLogEventTypeError
    
    ' Set warning level.
    IPipelineComponent_Execute = 2
    Resume Next
    
    End Function

    B] Send the newly created XML order document to the MSMQ queue defined for this purpose.

    Option Explicit
    
    ' MSMQ constants.
    
    ' Access modes.
    Const MQ_RECEIVE_ACCESS = 1
    Const MQ_SEND_ACCESS = 2
    Const MQ_PEEK_ACCESS = 32
    
    ' Sharing modes.
    Const MQ_DENY_NONE = 0
    Const MQ_DENY_RECEIVE_SHARE = 1
    
    ' Transaction options.
    Const MQ_NO_TRANSACTION = 0
    Const MQ_MTS_TRANSACTION = 1
    Const MQ_XA_TRANSACTION = 2
    Const MQ_SINGLE_MESSAGE = 3
    
    ' Error messages.
    Const MQ_ERROR_QUEUE_NOT_EXIST = -1072824317
    
    ' MQ Message ACKNOWLEDGEMENT.
    Const MQMSG_ACKNOWLEDGMENT_FULL_REACH_QUEUE = 5
    Const MQMSG_ACKNOWLEDGMENT_FULL_RECEIVE = 14
    Const DEFAULT_MAX_TIME_TO_REACH_QUEUE = 20
    
    Function WritePO2MSMQ(sQueueName As String, sMsgBody As String, _
        sMsgLabel As String, sServerName As String, _
        Optional MaxTimeToReachQueue As Variant) As Long
    
    Dim lMaxTime As Long
    
    If IsMissing(MaxTimeToReachQueue) Then
    lMaxTime = DEFAULT_MAX_TIME_TO_REACH_QUEUE
    Else
    lMaxTime = MaxTimeToReachQueue
    End If
    
    Dim objQueueInfo As MSMQ.MSMQQueueInfo
    Dim objQueue As MSMQ.MSMQQueue, objAdminQueue As MSMQ.MSMQQueue
    Dim objQueueMsg As MSMQ.MSMQMessage
    
    On Error GoTo MSMQ_Error
    
    Set objQueueInfo = New MSMQ.MSMQQueueInfo
    objQueueInfo.FormatName = "DIRECT=OS:" & sServerName & "\PRIVATE$\" & sQueueName
    
    Set objQueue = objQueueInfo.Open(MQ_SEND_ACCESS, MQ_DENY_NONE)
    
    Set objQueueMsg = New MSMQ.MSMQMessage
    
    objQueueMsg.Label = sMsgLabel ' Set the message label property
    objQueueMsg.Body = sMsgBody ' Set the message body property
    objQueueMsg.Ack = MQMSG_ACKNOWLEDGMENT_FULL_REACH_QUEUE
    objQueueMsg.MaxTimeToReachQueue = lMaxTime
    
    objQueueMsg.send objQueue, MQ_SINGLE_MESSAGE
    
    objQueue.Close
    
    On Error Resume Next
    Set objQueueMsg = Nothing
    Set objQueue = Nothing
    Set objQueueInfo = Nothing
    
    Exit Function
    
    MSMQ_Error:
    App.LogEvent "Error in WritePO2MSMQ: " & Error
    Resume Next
    
    End Function
    
  2. A BTS MSMQ receive function picks up the document from the MSMQ queue and sends it to a BTS channel that has been configured for this purpose. Receiving the XML order from MSMQ: The second step in sending order data to the SAP/R3 application involves BTS receiving the order data from the MSMQ queue into which it was placed at the end of the first step. You must configure a BTS MSMQ receive function to monitor the MSMQ queue to which the XML order was sent in the previous step. This receive function forwards the XML message to the configured BTS channel for transformation.
  3. The third step in sending order data to the SAP/R3 application involves BTS transforming the order data from Commerce Server Order XML v1.0 format into ORDERS01 IDOC format. A BTS channel must be configured to perform this transformation. After the transformation is complete, the BTS channel sends the resulting ORDERS01 IDOC message to the corresponding BTS messaging port. The BTS messaging port is configured to send the transformed message to an MSMQ queue called the 840 Queue. Once the message is placed in this queue, the BizTalk Adapter for SAP is responsible for further processing. 
  4. BizTalk Adapter for SAP sends the ORDERS01 document to the DCOM Connector (more information on the DCOM Connector is available at www.sap.com/bapi), which writes the order to the SAP/R3 application. The DCOM Connector is an SAP software product that provides a mechanism to send data to, and receive data from, an SAP system. When an IDOC message is placed in the 840 Queue, the DCOM Connector retrieves the message and sends it to SAP for processing. Although this processing is in the domain of the BizTalk Adapter for SAP, the steps involved are reviewed here as background information:
    • Determine the version of the IDOC schema in use and generate a BizTalk Server document specification.
    • Create a routing key from the contents of the Control Record of the IDOC schema.
    • Request a SAP Destination from the Manager Data Store given the constructed routing key.
    • Submit the IDOC message to the SAP System using the DCOM Connector 4.6D Submit functionality.

Order Status Update Integration

Order status update integration can be achieved by providing a mechanism for sending information about updates made within the SAP/R3 application back to the Commerce Server order system.

The following sequence of steps describes such a mechanism:

  1. BizTalk Adapter for SAP processing:
    After a user has updated a purchase order using the SAP client, and the IDOC has been submitted to the appropriate tRFC port, the BizTalk Adapter for SAP uses the DCOM connector to send the resulting information to the 840 Queue, packaged as an ORDERS01 IDOC message. The 840 Queue is an MSMQ queue into which the BizTalk Adapter for SAP places IDOC messages so that they can be retrieved and processed by interested parties. This process is within the domain of the BizTalk Adapter for SAP, and is used by this solution to achieve the order update integration.
  2. Receiving the ORDERS01 IDOC message from MSMQ:
    The second step in updating order status from the SAP/R3 application involves BTS receiving ORDERS01 IDOC message from the MSMQ queue (840 Queue) into which it was placed at the end of the first step. You must configure a BTS MSMQ receive function to monitor the 840 Queue into which the XML order status message was placed. This receive function must be configured to forward the XML message to the configured BTS channel for transformation.
  3. Transforming the order update from IDOC format:
    Using a BTS MSMQ receive function, the document is retrieved and passed to a BTS transformation channel. The BTS channel transforms the ORDERS01 IDOC message into Commerce Server Order XML v1.0 format, and then forwards it to the corresponding BTS messaging port. You must configure a BTS channel to perform this transformation. The following BizTalk Server (BTS) map, used in prototyping this solution, transforms an SAP ORDERS01 IDOC message into an XML document in Commerce Server Order XML v1.0 format. It allows a change to an order in the SAP/R3 application to be reflected in the Commerce Server orders database.

    This map used in the prototype only maps the order ID, demonstrating how the order in the SAP/R3 application can be synchronized with the order in the Commerce Server orders database. The mapping of other fields is specific to a particular implementation, and was not done for the prototype.

<xsl:stylesheet xmlns:xsl='http://www.w3.org/1999/XSL/Transform'
    xmlns:msxsl='urn:schemas-microsoft-com:xslt' xmlns:var='urn:var'
    xmlns:user='urn:user' exclude-result-prefixes='msxsl var user'
    version='1.0'>
<xsl:output method='xml' omit-xml-declaration='yes' />
<xsl:template match='/'>
    <xsl:apply-templates select='ORDERS01'/>
</xsl:template>
<xsl:template match='ORDERS01'>
    <orderform>

        <!-- Connection from source node "BELNR" to destination node "OrderID" -->
        <xsl:if test='E2EDK02/@BELNR'>
            <xsl:attribute name='OrderID'>
                <xsl:value-of select='E2EDK02/@BELNR'/>
            </xsl:attribute>
        </xsl:if>
    </orderform>
</xsl:template>
</xsl:stylesheet>

The BTS message port posts the transformed order update document to the configured ASP page for further processing. The configured ASP page retrieves the message posted to it and uses the Commerce Server OrderGroupManager and OrderGroup objects to update the order status information in the Commerce Server orders database.

  4. Updating the Commerce Server order system:
    The fourth step in updating order status from the SAP/R3 application involves updating the Commerce Server order system to reflect the change in status. This is accomplished by adding the page _OrderStatusUpdate.asp to the AFS Solution Site and configuring the BTS messaging port to post the transformed XML document to that page. The update is performed using the Commerce Server OrderGroupManager and OrderGroup objects.

    The routine ProcessOrderStatus is the primary routine in the page. It uses the DOM and XPath to extract enough information to find the appropriate order using the OrderGroupManager object. Once the correct order is located, it is loaded into an OrderGroup object so that any of the entries in the OrderGroup object can be updated as needed.

    The following code implements page _OrderStatusUpdate.asp:

    <%@ Language="VBScript" %>
    
    <%
    const TEMPORARY_FOLDER = 2
    
    call Main()
    
    Sub Main()
    call ProcessOrderStatus( ParseRequestForm() )
    End Sub
    
    Sub ProcessOrderStatus(sDocument)
    
    Dim oOrderGroupMgr 
    Dim oOrderGroup 
    Dim rs
    Dim sPONum
    Dim oAttr 
    Dim vResult
    Dim vTracking 
    Dim oXML
    Dim dictConfig
    Dim oElement
    
    Set oOrderGroupMgr = Server.CreateObject("CS_Req.OrderGroupManager")
    Set oOrderGroup = Server.CreateObject("CS_Req.OrderGroup")
    
    Set oXML = Server.CreateObject("MSXML.DOMDocument")
    oXML.async = False
    
    If oXML.loadXML (sDocument) Then
    
    ' Get the orderform element.
    Set oElement = oXML.selectSingleNode("/orderform")
    
    ' Get the poNum.
    sPONum = oElement.getAttribute("OrderID")
    
    Set dictConfig = Application("MSCSAppConfig").GetOptionsDictionary("")
    
    ' Use ordergroupmgr to find the order by OrderID.
    oOrderGroupMgr.Initialize (dictConfig.s_CatalogConnectionString)
    Set rs = oOrderGroupMgr.Find(Array("order_requisition_number='" & sPONum & "'"), _
        Array(""), Array(""))
    
    If rs.EOF And rs.BOF Then
    'Create a new one. - Not implemented in this version.
    Else
    ' Edit the current one.
    oOrderGroup.Initialize dictConfig.s_CatalogConnectionString, rs("User_ID")
    
    ' Load the found order.
    oOrderGroup.LoadOrder rs("ordergroup_id")
    
    ' For the purposes of prototype, we only update the status
    oOrderGroup.Value.order_status_code = 2 ' 2 = Saved order
    
    ' Save it
    vResult = oOrderGroup.SaveAsOrder(vTracking)
    
    End If
    Else
    WriteError "Unable to load received XML into DOM."
    End If
    
    End Sub
    
    Function ParseRequestForm()
    
    Dim PostedDocument
    Dim ContentType
    Dim CharSet
    Dim EntityBody
    Dim Stream
    Dim StartPos
    Dim EndPos
    
    ContentType = Request.ServerVariables( "CONTENT_TYPE" )
    
    ' Determine request entity body character set (default to us-ascii).
    CharSet = "us-ascii"
    StartPos = InStr( 1, ContentType, "CharSet=""", 1)
    If (StartPos > 0 ) then
    StartPos = StartPos + Len("CharSet=""")
    EndPos = InStr( StartPos, ContentType, """",1 )
    CharSet = Mid (ContentType, StartPos, EndPos - StartPos )
    End If
    
    ' Check for multipart MIME message.
    PostedDocument = ""
    
    if ( ContentType = "" or Request.TotalBytes = 0) then
    
    ' Content-Type is required as well as an entity body.
    Response.Status = "406 Not Acceptable"
    Response.Write "Content-type or Entity body is missing" & VbCrlf
    Response.Write "Message headers follow below:" & VbCrlf
    Response.Write Request.ServerVariables("ALL_RAW") & VbCrlf
    Response.End
    Else
    ' The original listing is truncated at this point; the following completion
    ' is a reconstruction that reads the posted entity body into a string.
    If ( InStr( 1, ContentType, "multipart/" ) > 0 ) Then
        ' Multipart MIME messages are not handled by this sample.
        Response.Status = "406 Not Acceptable"
        Response.Write "Multipart MIME content is not supported" & VbCrlf
        Response.End
    Else
        ' Read the entity body and convert it to a string using the
        ' character set determined above.
        EntityBody = Request.BinaryRead( Request.TotalBytes )
        Set Stream = Server.CreateObject("ADODB.Stream")
        Stream.Type = 1 ' adTypeBinary
        Stream.Open
        Stream.Write EntityBody
        Stream.Position = 0
        Stream.Type = 2 ' adTypeText
        Stream.Charset = CharSet
        PostedDocument = Stream.ReadText
        Stream.Close
        Set Stream = Nothing
    End If
    End If
    
    ParseRequestForm = PostedDocument
    
    End Function
    %>

    .NET Support

    This multi-tier application environment can be implemented successfully with the help of a Web portal that utilizes the Microsoft .NET Enterprise Server model. The Microsoft BizTalk Server Toolkit for Microsoft .NET provides the ability to leverage the power of XML Web services and Visual Studio .NET to build dynamic, transaction-based, fault-tolerant systems with full access to existing applications.

    Summary

    Microsoft BizTalk Server can help organizations quickly establish and manage Internet relationships with other organizations. It makes it possible for them to automate document interchange with any other organization, regardless of the conversion requirements and data formats used. This provides a cost-effective approach for integrating business processes across large Enterprise Resource Planning (ERP) systems. The integration process is designed to facilitate collaborative e-commerce business processes. It includes a document interchange engine, a business process execution engine, and a set of business document and server management tools. In addition, a business document editor and mapper tools are provided for managing trading partner relationships, administering server clusters, and tracking transactions.


    All my Web Parts and Apps are now making use of Knockout.JS !! Template also available at very low price!!

    After completing the development of my latest Web Part, the “List Search” Web Part I decided to update all my Web Parts and Apps to using Knockout.JS, starting with the “List Search” Web Part.

    This topic came up when I looked at some of my older products that include generic list and library web parts, which display a few common fields like ID, Title, Description, File Url, etc. Prior to this request we solved similar issues with the OOB list and library web parts and custom XSLT, by creating a Visual Studio web part for branding purposes only, or by using the Imtech content query web part (which is an XSLT solution by design).

    In the end, clients hated the XSLT solutions and I hated creating a new web part for every new list or library. That’s where Knockout popped up: why not use Knockout for the templates instead of XSLT?

    I’ll assume that whoever reads this article knows about creating a web part for SharePoint, SharePoint modules, JavaScript and HTML, so I will not go into those details.

    Background

    A bit about Knockout

    From Knockout web site: “Knockout is a JavaScript library that helps you to create rich, responsive display and editor user interfaces with a clean underlying data model. “

    From Wikipedia:

    Knockout is a standalone JavaScript implementation of the Model-View-ViewModel pattern with templates. The underlying principles are therefore:

    • a clear separation between domain data, view components and data to be displayed
    • the presence of a clearly defined layer of specialized code to manage the relationships between the view components

    Knockout includes the following features:

    • Declarative bindings
    • Automatic UI refresh (when the data model’s state changes, the UI updates automatically)
    • Dependency tracking
    • Templating (using a native template engine although other templating engines can be used, such as jquery.tmpl)

    So what’s the deal?

    First you have your view model:

     var myViewModel = {
         personName: 'Bob',
         personAge: 123
    };

    Then you have a view:

    The name is <span data-bind="text:personName"></span>

    Finally, just bind your view to the model:

     ko.applyBindings(myViewModel);

    We’ll talk about model later.

    Using the code

    Proof of concept

    I’ve created an HTML mock of our web part. This is useful because we can prepare the JavaScript, css files, models and views in advance and test them without SharePoint and Visual Studio.
    
    You can download the proof of concept as a separate download from the link above.

    References

    There are only two file references.
    
    One is the knockout library itself:
    
    <script type='text/javascript' src="http://knockoutjs.com/downloads/knockout-3.0.0.js"></script>
    
    and the other is the css file I’ve added to this project:

    <link href="css/controls.css" rel="stylesheet" type="text/css" />

    Model 

    I’ve designed the model as an Item class. Here it is:

    // Item class definition
    var Item = function (id, title, datecreated,url,description,thumbnail) {
       this.id = id;
       this.title = title;
       this.datecreated = datecreated;
       this.url=url;
       this.description=description;
       this.thumbnail=thumbnail;
    }

    It’s called item and it has 6 properties:

    1. id – ID of the item
    2. title – Title of the item
    3. datecreated – Creation date of the item
    4. url – Url of the item
    5. description – Description of the item
    6. thumbnail – Thumbnail of the item

     

    View model

    Here is the view model

    function viewModel1 (){
        var self = this;
        self.items = [
            new Item(2, 'News1 title', '21.10.2013', 'javascript:OpenDialog(2);',
                     'Description News 1', 'img/pic1.jpg'),
            new Item(1, 'News 2 title', '21.02.2013', 'javascript:OpenDialog(1);',
                     'Description News 2', 'img/pic2.jpg')
        ];
    }

    The view model has a property items, which is in fact a collection of Item objects. For mocking purposes we’ve added two Item objects to this collection (News 1 and News 2).

     

    View

    Here is the view:

    <div class="glwp glwp-central" id="k1">
      <div class="glwpLine"></div>
      <h5><img src="PublishingImages/siteIcon.png" 
              width="28" height="28" align="absmiddle" />
          News</h5>
      <div class="glwpLineGrey"></div>
        <ul data-bind="foreach:items">
          <li>
           <div class="glwpDate"><span data-bind="text: datecreated" ></span>
           <img class="glwpImage" data-bind="attr: { src: thumbnail }" />         
           </div>
           <div class="glwpText glwpText-central" >
            <a data-bind="attr: { href: url, title: title }" style="min-height:70px;">
             <span class="glwpTextTTL" data-bind="text:title"></span><br />
             <span data-bind="text: description"></span>
            </a>
           </div>
           <div class="glwpSep"></div>
          </li>
        </ul>
    </div>

    What we have here:

    It’s pretty simple. We have an unordered list bound to our model. One <li> element will be created for every item in our items collection (data-bind="foreach: items").

     

     

    Property binding: 

    • <span data-bind="text: datecreated"></span> – This is the simplest data binding. It writes the datecreated property of the Item object as the text of the span element (like: <span>11/11/2013</span>).
    • <img class="glwpImage" data-bind="attr: { src: thumbnail }" /> – This is a slightly more complicated binding. It takes the thumbnail property of the Item object and writes it to the src attribute of the img element.
    • <a data-bind="attr: { href: url, title: title }" style="min-height:70px;"> – This takes the url property and writes it as the href attribute of the a element, and the title property as the title attribute.
    • <span class="glwpTextTTL" data-bind="text:title"></span> – The title property is written as the text of the span element.
    • <span data-bind="text: description"></span> – The description property is written as the text of the span element.

    So anyone with a little knowledge of HTML and CSS can customize this template any way they like, as long as the required property bindings are provided.

     

    Binding

    ko.applyBindings(new viewModel1(), document.getElementById('k1'));

    Note the second parameter of the applyBindings method: document.getElementById('k1'). The same id is on the first div in our view (id="k1"). This is helpful if you want to have more than one view model on a single page. It tells Knockout to bind this specific model (viewModel1) to a specific template on the page (k1).

     

    What do we gain from this? We are going to create a web part from this code, and one of the web part features is that you can put the same web part on a page several times. So it would be possible to put one web part on a SharePoint page to display news and another to display projects or documents, and they will coexist.

    If you look at the source you will notice that we have 2 view models (viewModel1 and viewModel2) and two templates (k1 and k2), and two bindings of course. One binding is for news (with images and description) and one binding is for files (no images, and no descriptions). Templates are slightly different.
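
    To make that concrete, here is a minimal, hedged sketch of how the second binding might look (viewModel2, the k2 element id and the sample file items are assumptions based on the description above; only the Item constructor and ko.applyBindings come from the code shown earlier):

    // Second view model, holding file items (no images and no descriptions).
    function viewModel2() {
        var self = this;
        self.items = [
            new Item(10, 'Project plan.docx', '01.03.2013',
                     '/Shared Documents/Project plan.docx', '', ''),
            new Item(11, 'Budget.xlsx', '15.03.2013',
                     '/Shared Documents/Budget.xlsx', '', '')
        ];
    }

    // Bind each model to its own template by passing the root element.
    ko.applyBindings(new viewModel1(), document.getElementById('k1'));
    ko.applyBindings(new viewModel2(), document.getElementById('k2'));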

    Final result

    Here is the final result

    SharePoint Part

    As I said I will assume that you have some experience with SharePoint development so I will not explain how to create the project and add project items. Project type is standard Visual Studio 2010 SharePoint Empty Project template.

    SharePoint part consists of following items:

    • Web part item – KnockoutWp. Standard SharePoint Visual Web part project Item
    • Assets module. SharePoint module project item. We are going to use it for deploying images and css files (0.png – an empty placeholder image, and controls.css – the css file for our project).
    • Layouts mapped folder. We’ll put here editor page for template.

    And here is the solution explorer for project:

    Assets

    We are going to deploy 2 files:

    • 0.png – 1×1 pixel transparent image aka placeholder
    • Controls.css – css file for our template

    Both of these items are going to be deployed to Style Library of the SharePoint site collection, so content editors may change it later without need of solution redeployment.

    Here is the elements.xml file:
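
    The file itself is only shown as a screenshot in the original post, so here is a minimal, hedged sketch of what such an elements.xml might look like (the module name and file paths are assumptions; only the target folder and the two file names come from the text):

    <?xml version="1.0" encoding="utf-8"?>
    <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
      <!-- Hedged sketch: provisions both assets into Style Library/wp. -->
      <Module Name="Assets" Url="Style Library/wp" RootWebOnly="TRUE">
        <File Path="Assets\0.png" Url="0.png" Type="GhostableInLibrary"
              IgnoreIfAlreadyExists="TRUE" />
        <File Path="Assets\controls.css" Url="controls.css" Type="GhostableInLibrary"
              IgnoreIfAlreadyExists="TRUE" />
      </Module>
    </Elements>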

    So our assets will end up in the http://oursitecollectionurl/Style Library/wp folder.

    KnockoutWp

    This is Visual Studio 2010 Visual Web part.

    It consists of 4 items:

    • KnockoutWp.cs – web part class
    • KnockoutWpUserControl – User control of our web part
    • KnockoutWp.webpart – web part xml file
    • Elements.xml – manifest file

    Properties

    Web part has following properties:

    • ListUrl (string, required) – url of the list we are displaying.
    • TitleField (string, optional) – display name of the field that would be displayed as Title. If it’s blank Title field would be used.
    • DateField (string, optional) – display name of the field that would be displayed as date. If it’s blank Created field would be used.
    • DescriptionField (string, optional) – display name of the field that would be displayed as Description. If it’s blank it would be omitted.
    • ImageField (string, optional) – display name of the field that would be displayed as Thumbnail picture. If it’s blank it would be omitted.
    • NoOfItems (int) – how many items from the list would be displayed
    • ItemTemplate (string) – html template of the web part. Defines the look of our web part.
    • WpPosition (enum) – Used for three-column layouts. The web part has styles for three zones: right, central and left. The difference is in width, padding and margin. Everything is set in css so you can adapt it to your environment. (A property sketch follows this list.)
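
    A hedged sketch of how these properties might be declared on the KnockoutWp class follows (the attribute choices and the enum name are assumptions; only the property names and types come from the list above):

    using System.Web.UI.WebControls.WebParts;

    public enum WpPositionEnum { Left, Central, Right }   // assumed enum name

    public class KnockoutWp : WebPart
    {
        [WebBrowsable(true), WebDisplayName("List Url"),
         Personalizable(PersonalizationScope.Shared)]
        public string ListUrl { get; set; }

        [WebBrowsable(true), Personalizable(PersonalizationScope.Shared)]
        public string TitleField { get; set; }

        [WebBrowsable(true), Personalizable(PersonalizationScope.Shared)]
        public string DateField { get; set; }

        [WebBrowsable(true), Personalizable(PersonalizationScope.Shared)]
        public string DescriptionField { get; set; }

        [WebBrowsable(true), Personalizable(PersonalizationScope.Shared)]
        public string ImageField { get; set; }

        [WebBrowsable(true), Personalizable(PersonalizationScope.Shared)]
        public int NoOfItems { get; set; }

        // Edited through the custom editor part rather than the property grid.
        [WebBrowsable(false), Personalizable(PersonalizationScope.Shared)]
        public string ItemTemplate { get; set; }

        [WebBrowsable(true), Personalizable(PersonalizationScope.Shared)]
        public WpPositionEnum WpPosition { get; set; }
    }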

    On picture below you can see mapping between Field properties of web part and list item fields.

     

    EditorPart

    I’ve added one more thing to this web part: an EditorPart class, GenericListPartEditorPart. I’m not going to go deep into editor parts, but here is some quick info. When you create a public property for a web part, it is automatically displayed in the web part edit panel.

    That is a great concept when you need simple properties such as strings, numbers and short lists. If you want a more complicated scenario (as we do here for our web part), it’s not enough.

    What I wanted here is a template editor. The template could be reasonably large, so the idea was to have a button in the web part edit panel that opens a large dialog window with the editor. The user works with the template, clicks Apply, and the ItemTemplate web part property is updated, as sketched below.
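
    A minimal, hedged sketch of such an editor part (only the class name GenericListPartEditorPart and the portal_openTemplateEditor script function come from this article; everything else is an assumption):

    using System.Web.UI.WebControls;
    using System.Web.UI.WebControls.WebParts;

    public class GenericListPartEditorPart : EditorPart
    {
        protected override void CreateChildControls()
        {
            // A link that calls the JavaScript dialog opener defined in the
            // user control markup, passing the id of the web part being edited.
            HyperLink editLink = new HyperLink
            {
                Text = "Edit item template...",
                NavigateUrl = "javascript:portal_openTemplateEditor('" + WebPartToEdit.ID + "');"
            };
            Controls.Add(editLink);
        }

        // The dialog page writes the edited template back to the ItemTemplate
        // property itself, so there is nothing to synchronize here.
        public override bool ApplyChanges() { return true; }
        public override void SyncChanges() { }
    }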

    Template editor KnockoutWpUserControl

    This is the user control created by Visual Studio when we added the Visual web part project item to the project. It consists of a markup .ascx file and a code-behind .ascx.cs file. We will put our markup and our C# code here.

    Markup

    Here is the complete markup:

    <script type='text/javascript' src="http://knockoutjs.com/downloads/knockout-3.0.0.js">
    </script>
    <style type="text/css">  @import url("/Style Library/wp/controls.css");  </style>  
    <div class="glwp glwp-<%=PositionClass %>" id="k<%=WpId %>">
      <div class="glwpLine"></div>      
      <h5><img src="<%=Icon %>" width="28" 
        height="28" align="absmiddle"><%=Title %></h5>
        <div class="glwpLineGrey"></div>      
      <asp:Literal ID="LitLayout" runat="server"></asp:Literal>
    </div>  
    
    <script type="text/javascript">    
      function OpenDialog(Url) {
        var options = SP.UI.$create_DialogOptions();        
        options.resizable = 1;        
        options.scroll = 1;        
        options.url = Url;
        SP.UI.ModalDialog.showModalDialog(options);    
    }         
    // Item class         
      var Item = function (id, title, datecreated,url,description,thumbnail) {            
         this.id = id;            
         this.title = title;
         this.datecreated = datecreated;
         this.url=url;
         this.description=description;
         this.thumbnail=thumbnail;
      }         
     //ViewModel goes here (It's created on server)        
     <asp:Literal runat="server" ID="LitItems"></asp:Literal>
     
    //Function that opens Template editor. Used only in edit mode of web part       
     function portal_openTemplateEditor(wpid) {       
      var val="";              
      var options = SP.UI.$create_DialogOptions();              
      options.width = 600;             
      options.height = 500;                
      options.url = "/_layouts/KnockoutTemplate/TemplateEditor.aspx?c="+wpid;//"";
      options.dialogReturnValueCallback =
               Function.createDelegate(null,portal_openTemplateEditorClosedCallback);
      SP.UI.ModalDialog.showModalDialog(options);
    }
    </script>

    The first section of the markup (picture below) has the script reference (Knockout, on a remote server) and the style reference (controls.css in the local Style Library). Below that is the html markup that defines the container of the web part (top and bottom borders, width, icon and title). The markup is not the cleanest because I was a little lazy and left some public properties in it. Note <%=PositionClass %>, <%=WpId %> and so on.

    These are all public properties of the user control and they are used for presentation:

    • PositionClass – depending on the WpPosition web part property (right, central or left), adds the appropriate css class to the markup and that way defines the width, padding and margin of the web part.
    • WpId – the guid of the web part. It is used to uniquely identify the web part, because we can put several web parts of the same type on a page and everything would crash without this identifier.
    • Icon – is a url to icon that would be displayed on web part. Web part property Title Icon Image URL is used here (this is OOB property)
    • Title –title text of the web part. Text that was entered in the title area of the web part. Web part property Title is used here (this is OOB property)

    Last interesting thing here is Literal control LitLayout. This control would hold our ItemTemplate property (html template of our web part).

    The second section is a JavaScript function that opens a list item in a dialog window. It is used when the underlying list is not a document library.
    
    The third section contains the Knockout view model (JavaScript). The Item class definition is self-explanatory (it defines the 6 properties only). The rest of the model is created on the server side, so for now there is only the LitItems Literal control there.
    
    The fourth section is just a JavaScript function that is used when editing web part properties. This function opens the template editor in a dialog window.

    Code

    Properties:

    • Properties from web part
      • Icon – url to the icon
      • Title – title of the web part
      • ListUrl – url to the list
      • TitleField – Title field in the list
      • DateField – Date field in the list
      • ImageField – Image field in the list
      • DescriptionField – Description field in the list
      • NoOfItems – number of items to return
      • Position – position of the web part (right, left or central)
      • ItemTemplate – html template of the web part
      • WpId – guid id of the web part
    • UC’s properties
      • PositionClass – css class based on position
      • ColumnMap – dictionary that holds internal names of the list item fields.

    Methods: The file has only one method, Page_Load. The code executes with elevated privileges.

    In that method we:

    1. Resolve list by the supplied URL (ListUrl property) SPList annList = annWeb.GetList(ListUrl);
    2. Get internal names of the list columns by their Display names SpHelper.GetFieldsInternals(annWeb, annList.Title, TitleField, DateField, DescriptionField, ImageField, columnMap );
    3. Create CAML Query SpHelper.GetGenericQuery(annList, q, NoOfItems);
    4. Execute it
    5. Iterate over SPListItemCollection (coll) and create required JavaScript
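
    Putting the steps above together, a hedged sketch of that Page_Load might look like this (the SpHelper calls and the property names come from this article; the surrounding code, the GetFieldValue signature and the generated script are assumptions):

    // Assumes: using Microsoft.SharePoint; using System.Text;
    //          using System.Collections.Generic;
    protected void Page_Load(object sender, EventArgs e)
    {
        Dictionary<string, string> columnMap = new Dictionary<string, string>();
        StringBuilder items = new StringBuilder();

        SPSecurity.RunWithElevatedPrivileges(delegate()
        {
            using (SPSite site = new SPSite(SPContext.Current.Site.ID))
            using (SPWeb annWeb = site.OpenWeb(SPContext.Current.Web.ID))
            {
                // 1. Resolve the list by the supplied URL.
                SPList annList = annWeb.GetList(ListUrl);

                // 2. Get internal names of the list columns by their display names.
                SpHelper.GetFieldsInternals(annWeb, annList.Title, TitleField,
                    DateField, DescriptionField, ImageField, columnMap);

                // 3. + 4. Create the CAML query and execute it.
                SPQuery q = new SPQuery();
                SpHelper.GetGenericQuery(annList, q, NoOfItems);
                SPListItemCollection coll = annList.GetItems(q);

                // 5. Iterate over the items and emit the JavaScript Item entries.
                foreach (SPListItem item in coll)
                {
                    items.AppendFormat("new Item({0},'{1}','{2}','{3}','{4}','{5}'),",
                        item.ID,
                        SpHelper.GetFieldValue(item, columnMap[TitleField]),
                        SpHelper.GetFieldValue(item, columnMap[DateField]),
                        item.Url,
                        SpHelper.GetFieldValue(item, columnMap[DescriptionField]),
                        SpHelper.GetFieldValue(item, columnMap[ImageField]));
                }
            }
        });

        // Emit the server-side part of the Knockout view model into LitItems.
        LitItems.Text = "(function () { var vm = { items: [" +
            items.ToString().TrimEnd(',') + "] };" +
            " ko.applyBindings(vm, document.getElementById('k" + WpId + "')); })();";
    }
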
    Helper class

    SPHelper is a helper class and you can find it in the Helpers directory.

    It has 3 responsibilities:

    1. To retrieve List Columns Internal names based on supplied List Columns display names (WP properties – TitleField – Title field, DateField, ImageField , DescriptionField ) – GetFieldsInternals method
    2. To create Caml query for retrieving list items – GetGenericQuery method
    3. To retrieve values from SharePoint columns based on their types – GetFieldValue method
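
    For illustration, a hedged sketch of the first responsibility (the method signature matches the call shown earlier; the body is an assumption):

    // Maps the configured display names to SharePoint internal field names.
    public static void GetFieldsInternals(SPWeb web, string listTitle,
        string titleField, string dateField, string descriptionField,
        string imageField, Dictionary<string, string> columnMap)
    {
        SPList list = web.Lists[listTitle];
        string[] displayNames = { titleField, dateField, descriptionField, imageField };

        foreach (string displayName in displayNames)
        {
            if (string.IsNullOrEmpty(displayName))
                continue; // optional fields may be left blank
            if (list.Fields.ContainsField(displayName))
                columnMap[displayName] = list.Fields[displayName].InternalName;
        }
    }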

     

    SAP Weekend : Part 1 – ERPConnect Services for SharePoint 2010 (ECS)

    This weekend was spent completing my new “List Search Web Part” and also 2 free Web Parts that are included in the “List Web Part Pack” – more about this in a future blog.

    erp256-bc0e84ce

    In between, the “SAP Bug” bit me again and I decided to write a blog post series on the various adapters I have used in SharePoint and SAP integration projects, to give you a basic “run down” of how, and with which technologies, each adapter connects the two systems.

    ERPConnect was 1st on the list. ….

     

    Yes, I can hear the grumblings of those of us who have worked with SAP and SharePoint  Integration and the ERPConnect adapter before 🙂

    For starters, you need to have a SAP Developer Key to be allowed to use the SAP web service wizard, and also have the required SAP authorizations. In other cases it may not be allowed by IT operations to make any modification to the SAP environment, even if it’s limited to the full-automatic generation and activation of the BAPI webservice(s).

    Another reason, from a system architecture viewpoint, is that single BAPI and/or RFC calls may be of too low granularity. You actually want to perform a ‘business transaction’, consisting of multiple method invocations which must be treated as a Logical Unit of Work (LUW). SAP has introduced the concept of SAP Enterprise Services for this, and has delivered a first set of them. This set is far from complete yet, and SAP will augment it in the coming years.

    SharePoint 2010 provides developers with the capability to integrate external data sources like SAP business data via the Business Connectivity Services (BCS) into the SharePoint system. The concept of BCS is based on entities and associated stereotyped operations. This is a perfect fit for flat and simply structured data sets like SAP tables.

    Another, far more flexible option for using SAP data in SharePoint is the ERPConnect Services for SharePoint 2010 (ECS). The product suite consists of three components: the ERPConnect Services runtime, the BCS Connector application and the Xtract PPS for PerformancePoint Services.

    The runtime provides a Service Application that integrates with the new service architecture of SharePoint 2010. It offers a secure middle-tier layer to integrate different kinds of SAP objects into your SharePoint applications, like tables and function modules.

    The BCS Connector application allows developers to create BDC models for the BCS Services, without programming knowledge. You may export the BDC models created by the BCS Connector to Visual Studio 2010 for further customizing.

     

    The Xtract PPS component offers a SAP data source provider for the PerformancePoint Services of SharePoint 2010.
    
    This article gives you an overview of the ERPConnect Services runtime and shows how you can create and incorporate business data from SAP in different SharePoint application types, like Web Parts, Application Pages or Silverlight modules.

    This article does not introduce the other components.

    Background

    This section will give you a short explanation and background of the SAP objects that can be used in ERPConnect Services. The most important objects are SAP tables and function modules. A function module is basically similar to a normal procedure in conventional programming languages. Function modules are written in ABAP, the SAP programming language, and are accessible from any other program within a SAP system. They accept import and export parameters as well as other kinds of special parameters.

     

    In addition, BAPIs (Business APIs) are special function modules that are organized within the SAP Business Object Repository. In order to use function modules with the runtime they must be marked as Remote (RFC). SAP table data can also be retrieved. Tables in SAP are basically relational database tables. Other SAP objects like BW Cubes or SAP Queries can be accessed via the XtractQL query language (see below).

     

    Installation & Configuration

    Installing ERPConnect Services on a SharePoint 2010 server is done by an installer and is straightforward. The SharePoint Administration Service must run on the local server (see Windows Services).

    For more information see the product documentation. After the installation has completed successfully, navigate to the Service Applications screen within the Central Administration (CA) of SharePoint:

     

    Before creating your first Service Application a Secure Store must be created, where ERPConnect Services will save SAP user credentials. In the settings page for the “Secure Store Service” create a new Target Application and name the application “ERPConnect Services”. Click on the button “Next” to define the store fields as follows:

     

    Finish the creation process by clicking on “Next” and define application administrators. Then, mark the application, click “Set Credentials” and enter the SAP user credentials:

     

    Let’s go on and create a new ERPConnect Service Application!

    Click the “ERPConnect Service Application” link in the “New” menu of the Service Applications page (see also first screenshot above). This opens a dialog to define the name of the service application, the SAP connection data and the IIS application pool:

     

    Click “Create” after entering all data and you will see the following entries in the Service Applications screen:

     

    That’s it! You are now done setting up your first ERPConnect Service Application.

    Development

    The runtime functionality covers different programming demands such as generically retrievable interface functions. The service applications are managed by the Central Administration of SharePoint. The following service and function areas are provided:

    1. Executing and retrieving data directly from SAP tables
    2. Executing SAP function modules / BAPIs
    3. Executing XtractQL query statements

    The next sections show how to use these service and function areas and how to access different SAP objects from within your custom SharePoint applications using the ERPConnect Services. The runtime can be used in applications within the SharePoint context, like Web Parts or Application Pages.

    In order to do so, you need to reference the assembly ERPConnectServices.Server.Common.dll in the project. Before you can access data from the SAP system you must create an instance of the ERPConnectServiceClient class. This is the gate to all SAP objects and the generic API of the runtime overall. In the SharePoint context there are two options to create a client object instance:

    // Option #1
    ERPConnectServiceClient client = new ERPConnectServiceClient();
    
    // Option #2
    ERPConnectServiceApplicationProxy proxy = SPServiceContext.Current.GetDefaultProxy(
       typeof(ERPConnectServiceApplicationProxy)) as ERPConnectServiceApplicationProxy;
    ERPConnectServiceClient client = proxy.GetClient();

    For more details on using ECS in Silverlight or desktop applications see the specific sections below.

    Querying Tables

    Querying and retrieving table data is a common task for developers. The runtime allows retrieving data directly from SAP tables. The ERPConnectServiceClient class provides a method called ExecuteTableQuery with two overrides which query SAP tables in a simple way.

    The method also supports a way to pass miscellaneous parameters like row count and skip, custom function, where clause definition and a returning field list. These parameters can be defined by using the ExecuteTableQuerySettings class instance.

    DataTable dt = client.ExecuteTableQuery("T001");
    
    …
        
    ExecuteTableQuerySettings settings = new ExecuteTableQuerySettings {
      RowCount = 100,
      WhereClause = "ORT01 = 'Paris' AND LAND1 = 'FR'",
      Fields = new ERPCollection<string> { "BUKRS", "BUTXT", "ORT01", "LAND1" }
    };
    
    DataTable dt = client.ExecuteTableQuery("T001", settings);
    
    …
    
    // Sample 2
    DataTable dt = client.ExecuteTableQuery("MAKT",
                new ExecuteTableQuerySettings {
                    RowCount = 10,
                    WhereClause = "MATNR = '60-100C'",
                    OrderClause = "SPRAS DESC"
                });

    The first query reads all records from the SAP table T001 where the field ORT01 equals Paris and the field LAND1 equals FR (France). The query returns the top 100 records and the result set contains only the fields BUKRS, BUTXT, ORT01 and LAND1.

    The second query returns the top ten records of the SAP table MAKT, where the field MATNR equals the material number 60-100C. The result set is ordered by the field SPRAS.

    Executing Function Modules

    In addition to querying SAP tables, the runtime API can execute SAP function modules (BAPIs). Function modules must be marked as remote-enabled modules (RFC) within SAP.

    The ERPConnectServiceClient class provides a method called CreateFunction to create a structure of metadata for the function module. The method returns an instance of the data structure ERPFunction. This object instance contains all parameters types (import, export, changing and tables) that can be used with function modules.

    In the sample below we call the function SD_RFC_CUSTOMER_GET and pass a name pattern (T*) for the export parameter with name NAME1. Then we call the Execute method on the ERPFunction instance. Once the method has been executed the data structure is updated. The function returns all customers in the table CUSTOMER_T.

    ERPFunction function = client.CreateFunction("SD_RFC_CUSTOMER_GET");
    function.Exports["NAME1"].ParamValue = "T*";
    function.Execute();
    
    foreach(ERPStructure row in function.Tables["CUSTOMER_T"])
      Console.WriteLine(row["NAME1"] + ", " + row["ORT01"]);

    The following code shows an additional sample. Before we can execute this function module we need to define a table with HR data as input parameter.

    Which parameters you need and what values the function module returns depend on the implementation of the function module.

    ERPFunction function = client.CreateFunction("BAPI_CATIMESHEETMGR_INSERT");
    function.Exports["PROFILE"].ParamValue = "TEST";
    function.Exports["TESTRUN"].ParamValue = "X";
    
    ERPTable records = function.Tables["CATSRECORDS_IN"];
    ERPStructure r1 = records.AddRow();
    r1["EMPLOYEENUMBER"] = "100096";
    r1["WORKDATE"] = "20110704";
    r1["ABS_ATT_TYPE"] = "0001";
    r1["CATSHOURS"] = (decimal)8.0;
    r1["UNIT"] = "H";
    
    function.Execute();
    
    ERPTable ret = function.Tables["RETURN"]; 
    
    foreach(var i in ret)
      Console.WriteLine("{0} - {1}", i["TYPE"], i["MESSAGE"]);

    Executing XtractQL Query Statements

    The ECS runtime is offering a SAP query language called XtractQL. The XtractQL query language, also known as XQL, consists of ABAP and SQL syntax elements. XtractQL allows querying SAP tables, BW-Cubes, SAP Queries and executing function modules.

    It’s possible to return metadata for the objects, and MDX statements can also be executed with XQL. All XQL queries return a data table object as the result set. In case of the execution of function modules the caller must define the returning table (see sample below – INTO @RETVAL). XQL is very useful in situations where you need to handle dynamic statements. The following samples show a few typical XQL statements.

    SELECT TOP 5 * FROM T001W WHERE FABKL = 'US'

    This query selects the top 5 records of the SAP table T001W where the field FABKL equals the value US.

    SELECT * FROM MARA WITH-OPTIONS(CUSTOMFUNCTIONNAME = 'Z_XTRACT_IS_TABLE')
    
    This query selects all records of the SAP table MARA, executing the query through the custom function module Z_XTRACT_IS_TABLE as defined by the WITH-OPTIONS clause.

    SELECT MAKTX AS [ShortDesc], MANDT, SPRAS AS Language FROM MAKT

    This query selects all records of the SAP table MAKT. The result set will contain three fields named ShortDesc, MANDT and Language.

     

    EXECUTE FUNCTION 'SD_RFC_CUSTOMER_GET'
       EXPORTS KUNNR='0000003340'
       TABLES CUSTOMER_T INTO @RETVAL;

    This query executes the SAP function module SD_RFC_CUSTOMER_GET and returns as result the table CUSTOMER_T (defined as @RETVAL).

    DESCRIBE FUNCTION 'SD_RFC_CUSTOMER_GET' GET EXPORTS

    This query returns metadata about the export parameters of the function module SD_RFC_CUSTOMER_GET.

    SELECT TOP 30 LIPS-LFIMG, LIPS-MATNR, TEXT_LIKP_KUNNR AS CustomerID
       FROM QUERY 'S|ZTHEO02|ZLIKP'
       WHERE SP$00002 BT '0080011000'AND '0080011999'

    This statement executes the SAP Query “S|ZTHEO02|ZLIKP” (name includes the workspace, user group and the query name). As you can see XtractQL extends the SQL syntax with ABAP or SAP specific syntax elements. This way you can define fields using the LIPS-MATNR format and SAP-like where clauses like “SP$00002 BT ‘0080011000’AND ‘0080011999’”.

    ERPConnect Services provides a little helper tool, the XtractQL Explorer (see screenshot below), to learn more about the query language and to test XQL queries. You can use this tool independent of SharePoint, but you need access to a SAP system.

    To find out more about all XtractQL language syntax see the product manual.

    Silverlight And Desktop Applications

    So far all samples are using the assembly ERPConnectServices.Server.Common.dll as project reference and all code snippets shown run within the SharePoint context, e.g. Web Part.

    ERPConnect Services also provides client libraries for Silverlight and desktop applications:

    ERPConnectServices.Client.dll for Desktop applications
    ERPConnectServices.Client.Silverlight.dll for Silverlight applications

    You need to add the reference depending on what kind of project you are implementing.

    In Silverlight the implementation and design pattern is a little bit more complicated, since all web services are called asynchronously. It’s also not possible to use the DataTable class; it’s simply not implemented for Silverlight.
    
    The runtime provides a similar class called ERPDataTable, which is used by the API in such cases. The ERPConnectServiceClient class for Silverlight provides the method ExecuteTableQueryAsync and an event called ExecuteTableQueryCompleted as the callback delegate.

    public event EventHandler<ExecuteTableQueryCompletedEventArgs> ExecuteTableQueryCompleted;
    
    public void ExecuteTableQueryAsync(string tableName)
    public void ExecuteTableQueryAsync(string tableName, ExecuteTableQuerySettings settings)

    The following code sample shows a simple query of the SAP table T001 within a Silverlight client.

    First of all, an instance of the ERPConnectServiceClient is created using the URI of the ERPConnectService.svc, then a delegate is attached to handle the completion callback. Next, the query is executed, with a RowCount setting that limits the number of records in the result set.

    Once the result is returned the data set will be attached to a DataGrid control (see screenshot below) within the callback method.

     

    void OnGetTableDataButtonClick(object sender, RoutedEventArgs e)
    {
      ERPConnectServiceClient client = new ERPConnectServiceClient(
        new Uri("http://<SERVERNAME>/_vti_bin/ERPConnectService.svc"));
    
      client.ExecuteTableQueryCompleted += OnExecuteTableQueryCompleted;
      client.ExecuteTableQueryAsync("T001", 
        new ExecuteTableQuerySettings { RowCount = 150 });
    }
    
    void OnExecuteTableQueryCompleted(object sender, ExecuteTableQueryCompletedEventArgs e)
    {
      if(e.Error != null)
        MessageBox.Show(e.Error.Message);
      else
      {
        e.Table.View.GroupDescriptions.Add(new PropertyGroupDescription("ORT01"));
        TableGrid.ItemsSource = e.Table.View;
      }
    }

    The screenshot below shows the XAML of the Silverlight page:

    The final result can be seen below:

    ECS Designer

    ERPConnect Services includes a Visual Studio 2010 plugin, the ECS Designer, that allows developers to visually design SAP interfaces. It works similarly to the LINQ to SAP Designer I wrote about a while ago; see the article at CodeProject: LINQ to SAP.

    The ECS Designer is not automatically installed when you install the product. You need to run its installation program manually. The setup adds a new project item type to Visual Studio 2010 with the file extension .ecs and links it with the designer. The needed references are added automatically after adding an ECS project item.

    The designer generates source code to integrate with the ERPConnect Services runtime after the project item is saved. The generated context class contains methods and sub-classes that represent the defined SAP objects (see screenshots below).

     

    Before you access the SAP system for the first time you will be asked to enter the connection data. You may also load the connection data from SharePoint system. The designer GUI is shown in the screenshots below:

     

    The screenshot above for instance shows the tables dialog. After clicking the Add (+) button in the main designer screen and searching a SAP table in the search dialog, the designer opens the tables dialog.

     

    In this dialog you can change the name of the generated class, the class modifier and all needed properties (fields) the final class should contain.

     

    To preview your selection press the Preview button. The next screenshot shows the automatically generated classes in the file named EC1.Designer.cs:

     

    Using the generated code is simple. The project type we are using for this sample is a standard console application, therefore the designer references ERPConnectServices.Client.dll for desktop applications.

    Since we are not within the SharePoint context, we have to define the URI of the SharePoint system by passing this value into the constructor of the ERPConnectServicesContext class.

    The designer has generated the class MAKT and an access property MAKTList for the context class of the table MAKT. The type of this property is ERPTableQuery<MAKT>, which is a LINQ-queryable data type.

     

    This means you can use LINQ statements to define the underlying query. Internally, the ERPTableQuery<T> type will translate your LINQ query into a call to ExecuteTableQuery.
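
    A hedged sketch of what such a query might look like (whether Where/Take are supported and which field properties the generated MAKT class exposes depend on the designer selection, so treat the details as assumptions):

    // Query the MAKT table through the generated context class.
    ERPConnectServicesContext ctx = new ERPConnectServicesContext("http://<SERVERNAME>");

    var shortTexts = (from m in ctx.MAKTList
                      where m.MATNR == "60-100C"
                      select m).Take(10);

    foreach (var makt in shortTexts)
        Console.WriteLine("{0} - {1}", makt.MATNR, makt.MAKTX);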

     

    That’s it!

     

    Advanced Techniques

    There are situations when you have to use the exact same SAP connection while calling a series of function modules in order to receive the correct result. Let’s take the following code:

    ERPConnectServiceClient client = new ERPConnectServiceClient();
    
    using(client.BeginConnectionScope())
    {
      ERPFunction f = client.CreateFunction("BAPI_GOODSMVT_CREATE");
    
      ERPStructure s = f.Exports["GOODSMVT_HEADER"].ToStructure();
      s["PSTNG_DATE"] = "20110609"; // Posting Date in the Document
      s["PR_UNAME"] = "BAEURLE";    // UserName
      s["HEADER_TXT"] = "XXX";      // HeaderText
      s["DOC_DATE"] = "20110609";   // Document Date in Document
    
      f.Exports["GOODSMVT_CODE"].ToStructure()["GM_CODE"] = "01";
    
      ERPStructure r = f.Tables["GOODSMVT_ITEM"].AddRow();
      r["PLANT"] = "1000";          // Plant
      r["PO_NUMBER"] = "4500017210"; // Purchase Order Number
      r["PO_ITEM"] = "010";      // Item Number of Purchasing Document 
      r["ENTRY_QNT"] = 1;          // Quantity in Unit of Entry
      r["MOVE_TYPE"] = "101";        // Movement Type
      r["MVT_IND"] = "B";            // Movement Indicator
      r["STGE_LOC"] = "0001";        // Storage Location
    
      f.Execute();
    
      string matDocument = f.Imports["MATERIALDOCUMENT"].ParamValue as string;
      string matDocumentYear = f.Imports["MATDOCUMENTYEAR"].ParamValue as string;
    
      ERPTable ret = f.Tables["RETURN"]; //.ToADOTable();
    
      foreach(var i in ret)
        Console.WriteLine("{0} - {1}", i["TYPE"], i["MESSAGE"]);
    
      ERPFunction fCommit = client.CreateFunction("BAPI_TRANSACTION_COMMIT");
      fCommit.Exports["WAIT"].ParamValue = "X";
      fCommit.Execute();
    }

    In this sample we create a goods receipt for a goods movement with BAPI_GOODSMVT_CREATE. The final call to BAPI_TRANSACTION_COMMIT will only work if the system under the hood uses the same connection object.

     

    The runtime does not provide direct access to the underlying SAP connection, but the library offers a mechanism called connection scoping. You may create a new connection scope with the client library, telling ECS to use the same SAP connection until you close the connection scope. Within the connection scope every library call will use the same SAP connection.

    In order to create a new connection scope you need to call the BeginConnectionScope method of the class ERPConnectServiceClient.

    The method returns an IDisposable object, which can be used in conjunction with the using statement of C# to end the connection scope.

    Alternatively, you may call the EndConnectionScope method. It’s also possible to use function modules with nested structures as parameters.

    This is a special construct of SAP. The goods receipt sample above is using a nested structure for the export parameter GOODSMVT_CODE. For more detailed information about nested structures and tables see the product documentation.

    New Web Part released – List Search Web Part now available!!

    The List Search Web Part reads the entries from a Sharepoint List or Library (located anywhere in the site collection) and displays the selected user fields in a grid with an optional interactive search filter.

    It can be used for WSS3.0, MOSS 2007, Sharepoint 2010 and Sharepoint 2013.

     Image

    The following parameters can be configured:

    • Sharepoint Site
    • List Columns to be displayed
    • Filtering, Grouping, Searching, Paging and Sorting of rows
    • AZ Index
    • optional Header text

    Installation Instructions:

    1. download the List Search Web Part Installation Instructions
    2. either install the web part manually or deploy the feature to your server/farm as described in the instructions. 
    3. Security Note:
      if you get the following error message: “Only an administrator may enumerate through all user profiles“, you will need to grant the application pool account(s) for the web application(s) „Manage User Profiles” permissions within the User Profile Service (SSP in case of MOSS2007).  
      This ensures that the application pool is able to retrieve the list of user profiles. 
      To assign this permission, access your active “User Profile Service” (SP 2010 Server ) or the “Shared Services Provider” (MOSS2007) via Central Admin. 
      From the „User Profiles and My Sites” group, click “Personalization services permissions”.  
      Add the „Manage User Profiles” permission to  your application pool account(s).
    4. Configure the following Web Part properties in the Web Part Editor “Miscellaneous” pane section as needed:
      • Site Name: Enter the name of the site that contains the List or Library:
        – leave this field empty if the List is in the current site (eg. the Web Part is placed in the same site)
        – Enter a “/” character if the List is contained in the top site
        – Enter a path if the List is in a subsite of the current site (eg. in the form of “current site/subsite”)
      • List Name: Enter the name of the desired Sharepoint List or Library
        Example: Project Documents
      • View Name: Optionally enter the desired List View of the list specified above. A List View allows you to specify specific data filtering and sorting. 
        Leave this field empty if you want to use the List default view.
      • Field Template: Enter the List columns to be displayed (separated by semicolons).
        Pictures can be attached (via File Upload) to the Sharepoint List items and displayed using the symbolic “Picture” column name.
        If you want to allow users to edit their own entries, please add the symbolic “Username” column name to the Field Template. An “Edit” symbol will then be displayed to allow the user to navigate to the corresponding Edit Form. Example:
        Type;Name;Title;Modified;Modified By;Created By

        Friendly Header Names:
        If you would like to display a “friendly header name” instead of the default property name please append it to the User property, separated by the “|” pipe symbol.

        Example:
        Picture;LastName|Last Name;FirstName;Department;Email|Email Address

        Hiding individual columns:
        You can hide a column by prefixing it with a “!” character. 
        The following example hides the “Department” column: 
        LastName;FirstName;!Department;WorkEmail

        Suppress Column wrapping:
        You can suppress the wrapping of text inside a column by prefixing it with a “^” character.
        LastName;FirstName;Department;^AboutMe

        Showing the E-Mail address as plain text:
        You can opt to display the plain e-mail address (instead of the envelope icon) by appending “/plain” to the WorkEmail column:
        LastName;WorkEmail/plain;Department

      • Group By: enter an optional User property to group the rows.
      • Sort By: enter the List column(s) to define the default sort order. You can add multiple properties separated by commas. Append “/desc” to sort the column descending.
        Examples:
        Department
        Department,LastName
        Lastname/desc

        The columns headings can be clicked by the users to manually define the sort order.
      • AZ Index Column: enter an optional List column to display the AZ filter in the list header. 
        If an “!” character is appended to the property name, the “A” index will be forced when visiting the page.
        Example: LastName! 

         
         Image  
      • Search Box: enter one or more List columns (separated by semicolons) to allow for interactive searching.Example: LastName;FirstName

        If you want to display a search filter as a dropdown combo, please enter it with a leading “@” character:
        LastName;FirstName;Department;@Office

        Friendly Search Box Labels:
        If you would like to display a “friendly label” instead of the default property name please append it to the User property, separated by the “|” pipe symbol.
        Example:
        WorkPhone|Office Phone;Office|Office Nbr

         

      • Align Search Filters vertically: allows you to align the search input boxes vertically to save horizontal space.
      • Rows per page: the Staff Directory web part supports paging and lets you specify the desired number of rows per page. 
      • Image Height: specify the image height in pixels if you include the “Picture” property. 
        Enter “0” if you want to use the default picture size.
      • Header Text: enter an optional header text. Please note that you can embed HTML tags if needed. You can additionally specify the text to be displayed if the “Show all entries” option is unchecked and the user has not performed a search yet by appending a “|” character followed by the text.
        Example:
        This is the regular header text|This text is only shown if the user has not yet performed a search
      • Detail View Page: enter an optional column name prefixed by “detailview=” to link a column to the item detail view page. Append the “/popup” option if you want to open the detail page in a SharePoint 2010/2013 dialog popup window.
        Examples:
        detailview=LastName
        detailview/popup=Title
      • Alternating Row Color: enter the optional color of the alternating row background (leave blank to use default).
        Enter either an HTML color name (e.g. “red”) or hexadecimal RRGGBB coding (e.g. “#CCFFCC”). Enter the values without the double quotes.
        You can also change the default background color of the non-alternating rows by appending a second color value separated by a semicolon.
        Example: #ffffcc;#ffff99 

        The default Header style can be changed by adding the “AESD_Headerstyle” appSettings variable to the web.config “appSettings” section:

        <appSettings>
          <add key="AESD_Headerstyle" value="background:green;font-size:10pt;color:white" />
        </appSettings>

         

      • Show Column Headers: either show or suppress the List column header row.
      • Header Row CSS Style: enter the optional header row CSS style(s) as needed.
        Example:
        color:blue;white-space:nowrap
      • Show Groups collapsed: either show the groups (if you specify a column in the “Group By” setting) collapsed or expanded when entering the page.
      • Enforce Security: hides the web part if the user has no access to the site or the list. This avoids a login prompt if the user does not have at least “View” permission on the list or site containing the list.
      • Show all entries: either show all directory entries or none when first visiting the page. 
        You can append a specific text to the “Header Text” field (see above) which is only displayed if this option is unchecked and no search has yet been performed by the user.
      • Open Links in new window: either open the links in a new window or in the same browser window.
      • Link Documents to Office365: open the Word, Excel and Powerpoint documents in the Office365 web viewer.
      • Show ‘Add New Item’ Button: either show or suppress the “Add new item button” to let users add new items to the list (this option is security-trimmed).
      • Export to CSV: Show/hide the “Export” button for Excel CSV File Export
      • CSV Separator: Enter the desired CSV field separator character (Default=Comma). Use a semicolon in countries which use the comma as a decimal separator.
      • Localization: enter the following 4 values (separated by semicolons) in your local language if you need to override the English strings corresponding to the
        – Search button text,
        – A..Z menu “View all” option,
        – the text displayed for Hyperlink columns,
        – the optional “Group By” name (if grouping is enabled)
        Default:
        Search;View all;Visit

      • License Key: enter your Product License Key (as supplied after purchase of the “Staff Directory Web Part” license key).
        Leave this field empty if you are using the free 30 day evaluation version.

     Contact me now at tomas.floyd@outlook.com for the List Search Web Part and other Free & Paid Web Parts and Apps for SharePoint 2010, 2013, Azure, Office 365, SharePoint Online

    How to Create Custom SharePoint 2010 Page Layouts using SharePoint Designer 2010

    Scenario:

    You’re working with the Enterprise Wiki Site Template and you don’t really like where the “Last modified…” information is located (above the content). You want to move that information to the bottom of the page.

    Option 1: Modify the “EnterpriseWiki.aspx” Page Layout directly.

    Option 2: Create a new Page Layout based on the original one and then modify that one.

     We’ll go with Option 2, since we don’t want to modify the out-of-the-box template in case we need it later on.

    How To:

    Step 1

    Navigate to the top level site of the Site Collection > Site Actions > Site Settings > Master pages (Under the Galleries section). Then switch over to the Documents tab in the Ribbon and then click New > Page Layout.

    Step 2

     Select the Enterprise Wiki Page Content Type to associate with, give it a URL and Title. Note that there’s also a link on this page to create a new Content Type. You might be interested in doing this if you wanted to, say, add more editing fields or metadata properties to the layout. For example, if you wanted to add another Managed Metadata column to capture folksonomy aside from the already included “Wiki Categories” Managed Metadata column.

    Step 3

    SharePoint Designer time! Hover over your newly created Page Layout and “Edit in Microsoft SharePoint Designer.”

    Step 4

    Now you can choose to build your page manually by dragging your SharePoint Controls onto the page and laying them out as you’d like…

    … Or you can copy and paste the OOB Enterprise Wiki Page Layout. I think I’ll do that. 🙂

    Step 5

    Alright, so you’ve copied the contents of the EnterpriseWiki.aspx Page Layout and now it’s time for some customizing. I found the control I want to move, so I’ll simply do a copy or cut/paste to the new spot.

    Step 6

    Check-in, publish, and approve the new Page Layout. Side note: I like to add the Check-In/Check-Out/Discard or Undo-Checkout buttons to all of my Office Applications’ Quick Access Toolbars for convenience.

    Step 7

    Almost there! Navigate to your publishing site, in this case the Enterprise Wiki Site, then go to Site Actions > Site Settings > Page layouts and site templates (Under Look and Feel). Here you’ll be able to make the new Page Layout available for use within the site.

    Step 8

    Go back to your site and edit the page that you’d like to change the layout for. On the Page tab of the Ribbon, click on Page Layout and select your custom Page Layout.

    Et voila! You just created a custom Page Layout using SharePoint Designer 2010, re-arranged a SharePoint control and managed to plan for the future by not modifying the out of the box template. That was a really simple example but I hope it helped to give you some ideas on how else you can customize Page Layouts within SharePoint 2010!


    Great tool to handle storing constants in SharePoint Development

     

     

    Today I want to introduce something I’ve been working on recently which could be of use to you if you’re a SharePoint developer. Often when developing SharePoint solutions which require coding, the developer faces a decision about what to do with values he/she doesn’t want to ‘hardcode’ into the C# or VB.Net code. Example values for a SharePoint site/application/control could be:

    ‘AdministratorEmail’ – ‘bob@somewhere.com’
    ‘SendWorkflowEmails’ – ‘true’

    Generally we avoid hardcoding such values, since if the value needs to be changed we have to recompile the code, test and redeploy. So, alternatives to hardcoding which folks might consider could be:

    • store values in appSettings section of web.config
    • store values in different places, e.g. Feature properties, event handler registration data, etc.
    • store values in a custom SQL table
    • store values in a SharePoint list

    Personally, although I like the facility to store complex custom config sections in web.config, I’m not a big fan of appSettings. If I need to change a value, I have to connect to the server using Remote Desktop and open and modify the file – if I’m in a server farm with multiple front-ends, I need to repeat this for each, and I’m also causing the next request to be slow because the app domain will unload to refresh the config. Going through the other options, the second isn’t great because we’d need to be reactivating Features/event receiver registrations every time (even more hassle), though the third (using SQL) is probably fine, unless I want a front-end to make modifying the values easier, in which case we’d have some work to do.

    So storing config values in a SharePoint list could be nice, and is the basis for my solution. Now I’d bet lots of people are doing this already – it’s pretty quick to write some simple code to fetch the value, and this means we avoid the hardcoding problem. However, the Config Store ‘framework’ goes further than this – for one thing it’s highly-optimized so you can be sure there is no negative performance impact, but there are also some bonuses in terms of where it can used from and the ease of deployment. So what I hope to show is that the Config Store takes things quite a bit further than just a simple implementation of ‘retrieving config values from a list’.

    Details (reproduced from the Codeplex site)

    The list used to store config items looks like this (N.B. the items shown are my examples, you’d add your own):

    ConfigStoreList.jpg
    There is a special content type associated with the list, so adding a new item is easy:

    ConfigItemContentType.jpg

    ..and the content type is defined so that all the required fields are mandatory:

    AddNewConfigItem.jpg

    Retrieving values

    Once a value has been added to the Config Store, it can be retrieved in code as follows (you’ll also need to add a reference to the Config Store assembly and ‘using’ statement of course):

    string sAdminEmail = ConfigStore.GetValue("MyApplication", "AdminEmail");

     Note that there is also a method to retrieve multiple values with a single query. This avoids the need to perform multiple queries, so should be used for performance reasons if your control/page will retrieve many items from the Config Store – think of it as a best practice. The code is slightly more involved, but should make sense when you think it through. We create a generic List of ‘ConfigIdentifiers’ (a ConfigIdentifier specifies the category and name of the item e.g. ‘MyApplication’, ‘AdminEmail’) and pass it to the ‘GetMultipleItems()’ method:

    List<ConfigIdentifier> configIds = new List<ConfigIdentifier>();

    ConfigIdentifier adminEmail = new ConfigIdentifier("MyApplication", "AdminEmail");
    ConfigIdentifier sendMails = new ConfigIdentifier("MyApplication", "SendWorkflowEmails");
    configIds.Add(adminEmail);
    configIds.Add(sendMails);
    Dictionary<ConfigIdentifier, string> configItems = ConfigStore.GetMultipleValues(configIds);
    string sAdminEmail = configItems[adminEmail];
    string sSendMails = configItems[sendMails];

    ..the method returns a generic Dictionary containing the values, and we retrieve each one by passing the respective ConfigIdentifier we created earlier to the indexer.

    Other notes

     • All items are wrapped up in a Solution/Feature so there is no need to manually create site columns/content types/the Config Store list etc. There is also an install script for you to easily install the Solution.
     • Config items are cached in memory, so where possible there won’t even be a database lookup!
     • The Config Store is also designed to operate where no SPContext is present e.g. a list event receiver. In this scenario, it will look for values in your SharePoint web application’s web.config file to establish the URL for the site containing the Config Store (N.B. these web.config keys get automatically added when the Config Store is installed to your site). This also means it can be used outside your SharePoint application, e.g. a console app.
     • The Config Store can be moved from its default location of the root web for your site. For example, sites I create usually have a hidden ‘config’ web, so I put the Config Store in here, along with other items. (To do this, create a new list (in whatever child web you want) from the ‘Configuration Store list’ template (added during the install), and modify the ‘ConfigWebName’/’ConfigListName’ keys which were added to your web.config to point to the new location. As an alternative, if you already added 100 items which you don’t want to recreate, you could use my other tool, the SharePoint Content Deployment Wizard at http://www.codeplex.com/SPDeploymentWizard to move the list.)
     • All source code and Solution/Feature files are included, so if you want to change anything, you can.
     • Installation instructions are in the readme.txt in the download.

                Summary

                Hopefully you might agree that the Config Store goes beyond a simple implementation of ‘storing config values in a list’. For me, the key aspects are the caching and the fact that the entire solution is ‘joined up’, so it’s easy to deploy quickly and reliably as a piece of functionality. Many organizations are probably at the stage where their SharePoint codebase is becoming mature and perhaps based on a foundation of a ‘core API’ – the Config Store could provide a useful piece in such a toolbox.

                You can download the Config Store framework and all the source code from www.codeplex.com/SPConfigStore.

                Brand new 3 LINQ to Office Providers Available now!!

                The SPSamurai.Office.LINQ namespace contains 3 classes –

                 OutlookProvider (LINQ to Outlook), OneNoteProvider (LINQ to OneNote) and ExcelProvider (LINQ to Excel).

                 The OutlookProvider is a wrapper class which provides IEnumerable collections over the data exposed by the Outlook COM interface (appointments, contacts, mails, tasks, …).

                The OneNoteProvider provides collections of notebooks, sections and pages by manipulating the XML hierarchy tree of OneNote. And the ExcelProvider loads an Excel worksheet and provides column definition and row collections.

                All collections are IEnumerable so you can query them with LINQ. The full source code is provided.
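
                 As an illustration of the kind of query this enables, here is a minimal C# sketch against the OutlookProvider. The provider class name comes from the namespace above, but the collection and member names used here (Contacts, Department, LastName, Email) are assumptions for illustration only – check the accompanying articles and the included source code for the real signatures.

                 using System;
                 using System.Linq;
                 using SPSamurai.Office.LINQ;   // namespace from the article above

                 class OutlookQueryDemo
                 {
                     static void Main()
                     {
                         // Hypothetical usage: query Outlook contacts through the OutlookProvider wrapper.
                         // The property names below are placeholders, not confirmed API members.
                         var outlook = new OutlookProvider();

                         var itContacts = from c in outlook.Contacts           // assumed collection
                                          where c.Department == "IT"           // assumed item property
                                          orderby c.LastName
                                          select new { c.LastName, c.Email };

                         foreach (var contact in itContacts)
                         {
                             Console.WriteLine("{0} - {1}", contact.LastName, contact.Email);
                         }
                     }
                 }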

                Check out my articles where I describe the implementation of these 3 classes and how to use them. These articles also contain a lot of LINQ query examples.

                 Class diagrams:

                Features :
                Set flag with due date from predefined list: Today, Tomorrow, This Week, Next Week or Custom  
                Different options of follow-up visualization using combinations of flag, text and date  
                Support of sorting and filtering features  
                Support of different calendars (Gregorian, Japanese Emperor Era, Korean Tangun Era, Hijri, etc.)  
                Supported Datasheet view  
                Two-way conversion between ArtfulBits Follow-Up and standard Microsoft® SharePoint® Date and Time column  
                Language pack support (desired localization could be added by request)  

                 

                Contact me at tomas.floyd@outlook.com for these tools and more SharePoint, Azure and Office 365 Apps, Tools and Web Parts or for specialised custom SharePoint Development

                SharePoint 2013 Standards released!!

                SharePoint Development Standards for SharePoint 2013

                 These are meant only as application-specific standards. You should always review them along with your regular development standards, which cover things like naming conventions and approaches.


                 General Principles

                •          All new functionality and customizations must be documented.
                • Do not edit out of the box files.
                  • For a few well defined files such as the Web.config or docicon.xml files, the built-in files included with SharePoint Products and Technologies should never be modified except through official supported software updates, service packs, or product upgrades.
                • Do not modify the Database Schema.
                  • Any change made to the structure or object types associated with a database or to the tables included in it. This includes changes to the SQL processing of data such as triggers and adding new User Defined Functions.

                o    A schema change could be performed by a SQL script, by manual change, or by code that has appropriate permissions to access the SharePoint databases. Any custom code or installation script should always be scrutinized for the possibility that it modifies the SharePoint database.

                •          Do not directly access the databases.

                o    Any addition, modification, or deletion of the data within any SharePoint database using database access commands. This would include bulk loading of data into a database, exporting data, or directly querying or modifying data.

                 o    Directly querying or modifying the database could place extra load on a server, or could expose information to users in a way that violates security policies or personal information management policies. If server-side code must query data, then the process for acquiring that data should be through the built-in SharePoint object model, and not by using any type of query to the database. Client-side code that needs to modify or query data in SharePoint Products and Technologies can do this by using calls to the built-in SharePoint Web services that in turn call the object model. (A short sketch of querying a list through the object model follows this list.)

                •   Exception: In SharePoint 2010 the Logging database can be queried directly as this database was designed for that purpose.
                •          In SharePoint 2007 site and list templates must be created through code and features (site and list definitions). STP files are not allowed as they are not updatable.
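
                 For reference, a minimal C# sketch (URL, list and field names are placeholders) of reading list data through the server object model rather than querying the content database directly, as the principle above requires:

                 using System;
                 using Microsoft.SharePoint;

                 class ListQueryExample
                 {
                     static void Main()
                     {
                         // Placeholder URL and list/field names - adjust to your environment.
                         using (SPSite site = new SPSite("http://intranet/sites/demo"))
                         using (SPWeb web = site.OpenWeb())
                         {
                             SPList list = web.Lists["Projects"];
                             SPQuery query = new SPQuery();
                             query.Query = "<Where><Eq><FieldRef Name='Status'/><Value Type='Text'>Active</Value></Eq></Where>";
                             query.RowLimit = 100;

                             foreach (SPListItem item in list.GetItems(query))
                             {
                                 Console.WriteLine(item.Title);
                             }
                         }   // SPSite/SPWeb disposed here
                     }
                 }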

                Development Decisions:

                There are plenty of challenging decisions that go into defining what the right solution for a business or technical challenge will be. What follows is a chart meant to help developers when selecting their development approach.

                 When to use
                   • Sandbox: Deprecated, so it is unadvisable to build new sandboxed solutions.
                   • Apps: Best practice. Create apps whenever you can.
                   • Farm: Create farm solutions when you can’t do it in an app. See http://www.learningsharepoint.com/2012/07/20/sharepoint-2013-apps-vs-farm-solutions/ for more info.
                 Server-side code
                   • Sandbox: Runs under a strict CAS policy and is limited in what it can do.
                   • Apps: No SharePoint server-side code. When apps are hosted in an isolated SharePoint site, no server code whatsoever is allowed.
                   • Farm: Can run full trust code or run under fine-grained custom CAS policies.
                 Resource throttling
                   • Sandbox: Runs under an advanced resource management system that allows resource point allocation and automatic shutdown of troublesome solutions.
                   • Apps: Apps run isolated from the SharePoint farm, but can have an indirect impact by leveraging the client object model.
                   • Farm: Can impact SharePoint farm stability directly.
                 Runs cross-domain
                   • Sandbox: No, and there’s no need to since code runs within the SharePoint farm.
                   • Apps: Yes, which provides a very interesting way to distribute server load.
                   • Farm: No, and there’s no need to since code runs within the SharePoint farm.
                 Efficiency/Performance
                   • Sandbox: Runs on the server farm, but in a dedicated isolated process. The sandbox architecture adds overhead.
                   • Apps: Apps hosted on separate app servers (even cross-domain) or in the cloud may cause considerable overhead.
                   • Farm: Very efficient.
                 Safety
                   • Sandbox: Very safe.
                   • Apps: Apps rely on OAuth 2.0. The OAuth 2.0 standard is surrounded by some controversy (for example, check out what OAuth lead author Eran Hammer has to say about it here: http://hueniverse.com/2012/07/oauth-2-0-and-the-road-to-hell/). In fact, some SharePoint experts have gone on record stating that security for Apps will become a big problem. We’ll just have to wait and see how this turns out.
                   • Farm: Can be very safe, but this requires additional testing, validation and potentially monitoring.
                 Should IT pros worry over it?
                   • Sandbox: Due to the limited CAS permissions and the resource throttling system, IT pros don’t have to worry.
                   • Apps: Apps are able to do a lot via the client OM. There are some uncertainties concerning the safety of an App running on a page with other Apps. For now, this seems to be the most worrying option, but we’ll have to see how this plays out.
                   • Farm: Definitely. This type of solution runs on the SharePoint farm itself and can therefore have a profound impact.
                 Manageability
                   • Sandbox: Easy to manage within the SharePoint farm.
                   • Apps: Can be managed in a dedicated environment without SharePoint. Dedicated app admins can take care of this.
                   • Farm: Easy to manage within the SharePoint farm.
                 Cloud support
                   • Sandbox: Yes.
                   • Apps: Yes, with support for the App Marketplace.
                   • Farm: No, on-premises (or dedicated) only.

                App Development (SharePoint 2013):

                 •          When developing an app, select the most appropriate client API:

                o   Apps that offer Create/Read/Update/Delete (CRUD) actions against SharePoint or BCS external data, and are hosted on an application server separated by a firewall benefit most from using the JavaScript client object model.

                 o   Server-side code in Apps that offer Create/Read/Update/Delete (CRUD) actions against SharePoint or BCS external data, and are hosted on an application server but not separated by a firewall mainly benefits from using the managed .NET client object model, but the Silverlight client object model, JavaScript client object model or REST are also options (see the sketch after this list).

                o   Apps hosted on non-Microsoft technology (such as members of the LAMP stack) will need to use REST.

                o   Windows phone apps need to use the mobile client object model.

                o   If an App contains a Silverlight application, it should use the Silverlight client object model.

                o   Office Apps that also work with SharePoint need to use the JavaScript client object model.
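
                 As referenced in the list above, a minimal sketch of the managed .NET client object model (the URL and list title are placeholders, and authentication is omitted; a real provider-hosted app would typically build its context from the SPHostUrl/context token, and SharePoint Online would need SharePointOnlineCredentials):

                 using System;
                 using Microsoft.SharePoint.Client;

                 class CsomReadExample
                 {
                     static void Main()
                     {
                         // Placeholder URL - default credentials are used here.
                         using (ClientContext ctx = new ClientContext("https://contoso.sharepoint.com/sites/demo"))
                         {
                             List list = ctx.Web.Lists.GetByTitle("Documents");
                             ListItemCollection items = list.GetItems(CamlQuery.CreateAllItemsQuery(10));

                             ctx.Load(items);
                             ctx.ExecuteQuery();   // single round trip to the server

                             foreach (ListItem item in items)
                             {
                                 Console.WriteLine(item["Title"]);
                             }
                         }
                     }
                 }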

                Quality Assurance:

                 • Custom code must be checked for memory leaks using SPDisposeCheck (a short disposal sketch follows this list).
                  • False positives should be identified and commented.
                • Code should be carefully reviewed and checked. As a starting point use this code review checklist (and provide additional review as needed).
                • Provide an Installation Guide which contains the following items (Note this relates to SharePoint Deployment Standards):
                  • Solution name and version number.
                  • Targeted environments for installation.
                   • Software and hardware prerequisites: explicitly describe all updates, activities, configurations, packages, etc. that must be installed or performed before the package installation.
                  • Deployment steps: Detailed steps to deploy or retract the package.
                  • Deployment validation: How to validate that the package is deployed successfully.
                  • Describe all impacted scopes in the deployment environment and the type of impact.
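
                 The disposal sketch referenced above – a minimal example of the pattern SPDisposeCheck verifies (the URL is a placeholder): SPSite/SPWeb objects created by your own code are disposed, while objects owned by SharePoint, such as SPContext.Current.Web, are never disposed.

                 using System;
                 using Microsoft.SharePoint;

                 public static class DisposeExample
                 {
                     public static void PrintRootWebTitle()
                     {
                         // Objects created by our own code must be disposed (the using blocks handle this).
                         using (SPSite site = new SPSite("http://intranet"))   // placeholder URL
                         using (SPWeb web = site.OpenWeb())
                         {
                             Console.WriteLine(web.Title);
                         }   // SPSite and SPWeb released here
                     }

                     public static void UseContextWeb()
                     {
                         // Objects owned by SharePoint must NOT be disposed - no using block here.
                         SPWeb contextWeb = SPContext.Current.Web;
                         Console.WriteLine(contextWeb.Title);
                     }
                 }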

                Branding:

                • A consistent user interface should be leveraged throughout the site. If a custom application is created it should leverage the same master page as the site.
                • Editing out of the box master pages is not allowed. Instead, duplicate an existing master page; make edits, then ensure you add it to a solution package for feature deployment.
                • When possible you should avoid removing SharePoint controls from any design as this may impact system behavior, or impair SharePoint functionality.
                • All Master Pages should have a title, a description, and a preview image.
                • All Page Layouts should have a title, a description, and a preview image.

                Deployment:

                • All custom SharePoint work should be deployed through SharePoint Solution (.wsp) files.
                • Do not deploy directly into the SharePointRoot (12-Hive, 14-Hive) Folders. Instead deploy into a folder identified by Project Name.

                Features:

                • Features must have a unique GUID within the farm.
                 • Features with event receivers should clean up all changes made during activation as part of the deactivation routine (see the receiver sketch after this list).
                  • The exception to this is if the feature creates a list or library that contains user supplied data. Do not delete the list/library in this instance.
                • Features deployed at the Farm or Web Application level should never be hidden.
                • Site Collection and Site Features may be hidden if necessary.
                • Ensure that all features you develop have an appropriate name, description, updated version number and icon.
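
                 A minimal sketch of the clean-up pattern mentioned above, assuming a web-scoped feature: activation adds a custom action and deactivation removes exactly what was added (the title and URL are placeholders).

                 using Microsoft.SharePoint;

                 public class SampleFeatureReceiver : SPFeatureReceiver
                 {
                     private const string ActionTitle = "Contoso Sample Link";   // placeholder

                     public override void FeatureActivated(SPFeatureReceiverProperties properties)
                     {
                         SPWeb web = properties.Feature.Parent as SPWeb;   // web-scoped feature assumed
                         if (web == null) return;

                         SPUserCustomAction action = web.UserCustomActions.Add();
                         action.Location = "Microsoft.SharePoint.StandardMenu";
                         action.Group = "SiteActions";
                         action.Title = ActionTitle;
                         action.Url = "~site/_layouts/Contoso/Settings.aspx";   // placeholder URL
                         action.Update();
                     }

                     public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
                     {
                         // Undo everything FeatureActivated created
                         SPWeb web = properties.Feature.Parent as SPWeb;
                         if (web == null) return;

                         SPUserCustomAction toRemove = null;
                         foreach (SPUserCustomAction action in web.UserCustomActions)
                         {
                             if (action.Title == ActionTitle) { toRemove = action; break; }
                         }
                         if (toRemove != null)
                         {
                             toRemove.Delete();
                         }
                     }
                 }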

                SharePoint Designer:

                • SharePoint Designer 2007 updates are generally not allowed.
                  • The only exception to this rule is for creating DataForm Web Parts.
                  • The following is a recommended way of managing this aspect:
                    Create a temporary web part page (for managing the manipulation of a data view web part). Once the web part is ready for release and all modifications have been made export the .webpart and then delete the page. You can now import it onto a page elsewhere or place it in the gallery. This way none of your production pages are un-ghosted. The other advantage is that you can place the DVWP on a publishing page (as long as there are web part zones to accept them).
                  • DataForm Web Parts should be exported through the SharePoint GUI and solution packaged for deployment as a feature.
                  • This does not mean that SharePoint Designer should not be used for creating and testing branding artifacts such as master pages and page layouts.
                    • It is important for these artifacts to be deployed through solution files (WSPs) and typical build and deployment processes and not by manual methods.
                • SharePoint Designer 2010 updates are generally only allowed by a trained individual.
                  • The following is a recommended way of managing the creation of DataForm Web Parts:
                    Create a temporary web part page (for managing the manipulation of a data view web part). Once the web part is ready for release and all modifications have been made export the .webpart and then delete the page. You can now import it onto a page elsewhere or place it in the gallery. This way none of your production pages are un-ghosted. The other advantage is that you can place the DVWP on a publishing page (as long as there are web part zones to accept them).
                  • DataForm Web Parts should be exported through the SharePoint GUI and solution packaged for deployment as a feature.
                • SharePoint Designer workflows should not be used for Business Critical Processes.
                  • They are not portable and cannot be packaged for solution deployment.
                    • Exception Note: Based on the design and approach being used it may be viable in SharePoint 2010 for you to design a workflow that has more portability. This should be determined on a case by case basis as to whether it is worth the investment and is supportable in your organization.

                Site Definitions:

                • In SharePoint 2007 site and list templates must be created through code and features (site and list definitions).
                  • STP files are not allowed as they are not updatable.
                • Site definitions should use a minimal site definition with feature stapling.

                Solutions:

                • Solutions must have a unique GUID within the farm.
                • Ensure that the new solution version number is incremented (format V#.#.#).
                • The solution package should not contain any of the files deployed with SharePoint.
                 • Referenced assemblies should not be set to “Copy Local = true”.
                • All pre-requisites must be communicated and pre-installed as separate solution(s) for easier administration.

                Source Control:

                • All source code must be under a proper source control (like TFS or SVN).
                • All internal builds must have proper labels on source control.
                • All releases have proper labels on source control.

                InfoPath:

                • If an InfoPath Form has a code behind file or requires full trust then it must be packaged as a solution and deployed through Central Administration.
                • If an InfoPath form does not have code behind and does not need full trust the form can be manually published to a document library, but the process and location of the document library must be documented inside the form.
                   • Just add the documentation text into a section control at the top of the form and set conditional formatting on that section to always hide the section; that way users will never see it.

                WebParts

                • All WebParts should have a title, a description, and an icon.

                Application Configuration

                o   Web.config

                •   APIs such as the ConfigurationSection class and SPWebConfigModification class should always be used when making modifications to the Web.config file.
                •   HTTPModules, FBA membership and Role provider configuration must be made to the Web.config.
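
                 A minimal sketch of using SPWebConfigModification to add an appSettings entry so the change is applied consistently to every web front end (the URL, key and value are placeholders):

                 using System;
                 using Microsoft.SharePoint.Administration;

                 public static class WebConfigExample
                 {
                     public static void AddAppSetting()
                     {
                         // Placeholder web application URL, key and value.
                         SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://intranet"));

                         SPWebConfigModification mod = new SPWebConfigModification();
                         mod.Path = "configuration/appSettings";
                         mod.Name = "add[@key='ContosoSetting']";
                         mod.Sequence = 0;
                         mod.Owner = "ContosoSolution";
                         mod.Type = SPWebConfigModification.SPWebConfigModificationType.EnsureChildNode;
                         mod.Value = "<add key='ContosoSetting' value='42' />";

                         webApp.WebConfigModifications.Add(mod);
                         webApp.Update();
                         webApp.WebService.ApplyWebConfigModifications();   // pushes the change to all servers
                     }
                 }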

                o   Property Bags

                •   It is recommended that you create your own _layouts page for your own settings.
                •   It is also recommended that you use this codeplex tool to support this method http://pbs2010.codeplex.com/
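
                 For illustration, a minimal sketch of reading and writing a value in an SPWeb property bag (URL and key are placeholders); the codeplex tool linked above provides a settings page over the same storage.

                 using System;
                 using Microsoft.SharePoint;

                 public static class PropertyBagExample
                 {
                     public static void ReadAndWrite()
                     {
                         using (SPSite site = new SPSite("http://intranet"))   // placeholder URL
                         using (SPWeb web = site.OpenWeb())
                         {
                             // Write a value (placeholder key)..
                             web.Properties["Contoso_AdminEmail"] = "admin@contoso.com";
                             web.Properties.Update();

                             // ..and read it back
                             string value = web.Properties["Contoso_AdminEmail"];
                             Console.WriteLine(value);
                         }
                     }
                 }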

                o   Lists

                •   This is not a recommended option for Farm or Web Application level configuration data.
                •   It is also recommended that you use this codeplex tool to support this method http://spconfigstore.codeplex.com/

                o   Hierarchical Object Store (HOS) or SPPersistedObject

                •   Ensure that any trees or relationships you create are clearly documented.
                •   It is also recommended that you use the Manage Hierarchical Object Store feature at http://features.codeplex.com/. This feature only stores values in the Web Application. You can build a hierarchy of persisted objects but these objects don’t necessarily map to SPSites and SPWebs.
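
                 A minimal sketch, assuming farm scope, of a settings class derived from SPPersistedObject (class, field and key names are placeholders); the parent you choose defines the hierarchy that should be documented as noted above.

                 using System;
                 using Microsoft.SharePoint.Administration;

                 public class ContosoSettings : SPPersistedObject
                 {
                     [Persisted]
                     private string _adminEmail;

                     public ContosoSettings() { }   // required default constructor

                     public ContosoSettings(string name, SPPersistedObject parent)
                         : base(name, parent) { }

                     public string AdminEmail
                     {
                         get { return _adminEmail; }
                         set { _adminEmail = value; }
                     }
                 }

                 // Usage (e.g. from a feature receiver):
                 // var settings = new ContosoSettings("ContosoSettings", SPFarm.Local);
                 // settings.AdminEmail = "admin@contoso.com";
                 // settings.Update();
                 // var loaded = SPFarm.Local.GetChild<ContosoSettings>("ContosoSettings");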

                 

                SAP NetWeaver and Hyper-Threading on Windows Servers : To be or Not to be

                Here are the key messages related to SAP NetWeaver and Hyperthreading ( HT ) now called
                Simultaneous MultiThreading ( SMT ) derived from different sources as well as internal lab tests
                done for the WS2012 SAP First Customer Shipment program. In addition I incorporated very
                valuable input from Juergen Thomas who published many blogs and papers about SAP on the
                Microsoft platform :

                1. Always keep in mind : a CPU thread is NOT equal to a core !  ( also see walk-through section at the end )
                2. Sizing based on SAPS which is done by Hardware vendors for certain server models usually
                  includes SMT. Looking at the latest published SAP SD benchmarks one will realize immediately
                  that SMT was turned on according to the CPU Information ( # processors / # cores / # threads ).
                   The goal is to achieve the maximum amount of SD workload. Including SMT for sizing implicitly
                  means that customers will turn it on

                 3. To use Hyper-V on a server with more than 64 logical processors ( e.g. 40 cores with SMT turned on )
                   you need Windows Server 2012. Hyper-V on Windows 2008 ( R2 ) has a limit of 64 logical CPUs, while
                   Windows 2008 R2 is able to address 256 CPU threads in a bare-metal deployment

                4. When using the latest OS and application releases the general suggestion regarding SMT is to
                  always turn it on. It either helps or won’t hurt. Turning SMT on or off requires a reboot as it’s a
                  BIOS setting on the physical host

                5. How much SMT will help depends on the application workload. While there is a proven benefit in
                  SAP SD Benchmarks as well as in many other benchmarks, we know customer tests where there
                  was no difference between running a virtualized SAP application Server on WS2012 Hyper-V with
                  or without SMT. Conclusion is that the effect/impact of SMT is pretty much dependent on the
                  individual customer scenario
                   

                General Information about Hyper-Threading

                Being around for many years Hyper-Threading ( HT ) now called Simultaneous MultiThreading ( SMT )
                is a well-known feature of Intel processors. Looking on the Internet one can easily find a lot of information
                and technical descriptions about it like this one :

                http://software.intel.com/en-us/articles/performance-insights-to-intel-hyper-threading-technology/
                 While there have been some issues in the early days, the recommendation for more recent OS or
                 application releases is usually to have SMT turned on, as it does increase the throughput achievable
                 with an application on a single server. Nevertheless the question about the impact of SMT on SAP NetWeaver
                shows up again when looking at the execution speed of a single request handled by a single CPU thread.
                Especially as we are pushing WS2012 Hyper-V – customers wonder what the performance characteristics
                are with a combination of SMT and virtualization.

                In this blog I try to summarize the status quo and give some guidance based on all the statements and
                test results and experiences which are around.

                Single-Thread Performance

                I personally would like to separate the SMT discussion from the single-thread performance discussion.
                 When people talk about the latter, it is usually about the trend in processor technology to increase
                the number of cores instead of increasing the clock rate. In these discussions the often unspoken
                assumption is 1 thread per core. Sure – there are very good reasons for it. But as one can read under
                the following links only applications which are able to use parallelism will fully benefit from the multi-core
                design. As a consequence certain SAP batch jobs which are dependent on high single-thread performance
                might not improve a lot or not at all when the underlying hardware gets upgraded to the next processor
                version which has more cores per CPU. A customer example of this effect can be found here :
                http://blogs.msdn.com/b/saponsqlserver/archive/2010/01/24/performance-what-do-we-mean-in-regards-to-sap-workload.aspx

                 SAP NetWeaver is not a multi-threaded application. But in SAP it is of course possible to
                 parallelize processing at the business process level. Just think about payroll parallelism in SAP.

                Some basic articles around single-threaded CPU performance and multi-core processing
                can be found here :

                http://preshing.com/20120208/a-look-back-at-single-threaded-cpu-performance

                http://en.wikipedia.org/wiki/Multi-core_processor

                http://iet-journals.org/archive/2012/may_vol_2_no_5/846361133715321.pdf

                 Could SMT hurt in some cases?

                 That is, in principle, the wrong question. The correct question should be: what is the effect and impact
                of SMT with different applications and different configurations or scenarios ?
                Understanding these effects and impacts will make it possible to adapt and get the maximum out
                of an investment in a specific hardware model.
                There might be some outdated messages or opinions around based on experiences from the early
                days which are no longer valid. In other cases I personally wouldn’t call it an issue of SMT but
                wrong sizing or overlooking some documented restrictions. First let’s look again at a general statement :

                http://software.intel.com/en-us/articles/performance-insights-to-intel-hyper-threading-technology/

                “Ideal scheduling would be to place active threads on cores before scheduling on threads on
                the same core when maximum performance is the goal. This is best left to the operating system.
                All multi-threaded operating systems support Intel HT Technology, while later versions have more
                support for scheduling threads in the most ideal manner to maximize performance gains”

                 

                Here are some more details :

                1. single-thread / single-core performance

                In SAP note 1612283 section “1.1 Clock Speed” you will find the following statement :
                “If you need to speed up a single transaction or report you might try to switch off
                Hyperthreading”

                Based on some testing I would like to differentiate this a little bit further. As long as the
                number of running processes / threads is <= the number of cores the OS / hypervisor should
                be smart enough to distribute the workload over all the cores. In this case there shouldn’t be
                any effect/impact by the fact that SMT is on or off. Based on the basics of SMT as described
                in the Intel article named above, expectation is that with the number of running processes
                /threads exceeding the # of cores, the performance/throughput of a single CPU thread,
                dependent on the load, is decreasing.

                 2. parallelism
                   From an OS perspective one shouldn’t see any major issues with SMT anymore. It looks different
                   though when it comes to the application. One potential issue could arise if an application doesn’t realize
                   that the available logical CPUs are mapped to SMT threads and not cores. This could lead to wrong
                   assumptions. Here is an example from SQL Server :

                http://support.microsoft.com/kb/2023536

                “For servers that have hyper-threading enabled, the max degree of parallelism value should not
                exceed the number of physical processors”
                ( the term “processors” being used related to physical cores in this article )

                 It’s related to the two items above. Parallelism on a SQL statement level means that the SQL Server
                 Optimizer expects all logical CPUs to be of the same type. The important question is whether this is merely
                 not as fast as having as many cores as logical CPUs, or whether it in fact becomes slower than without
                 SMT.
                3. virtualization

                Another topic is running VMs on Hyper-V with SMT turned on on the underlying physical host. It’s
                again not different from the items mentioned before. Inside a VM an application might not be aware
                of the nature of a virtual CPU. It’s not just SMT. Depending on the configuration ( e.g. over-
                commitment ) and the capabilities of an OS/hypervisor a Virtual Processor will correspond only to a
                “fraction” of a real Physical CPU.

                Internal lab tests on WS2012 Hyper-V have proven the statement I quoted at the beginning of this
                section. As long as there are enough cores available the workload will be optimally distributed.
                A perfect way to show this is to increase the number of virtual CPUs inside a VM step by step
                while monitoring the CPU workload on the host.

                The CPU load screenshots further down were taken from perfmon on a WS2012 host where SMT
                was turned on. The server had 8 cores and due to SMT 16 logical processors. The server also had
                two NUMA nodes  ->  4 cores / 8 threads each. The SAP test running inside a VM ( guest OS was
                Windows 2008 R2 ) was absolutely CPU-bound. The scenario looked like this :

                a, the test started with two Virtual processors ( VP ) and the workload was increased until both VPs
                were 100% busy

                b, then the number of VPs was increased to four to see if it was possible to double the
                workload. Scalability was very good in this case because the workload could still be
                distributed over all four cores in one single NUMA node of the host server

                 c, but going from 4 VPs to 6 VPs changed the picture. Hyper-V has improved NUMA
                 support and per default sets max VPs per NUMA node to the # of Logical Processors
                 ( LP ) of the NUMA node ( 8 on the test hardware ).
                 And as 6 VPs is still < 8, the whole workload of the VM still ended up on one single NUMA
                 node. On the other side, due to SMT it was no longer possible to achieve an almost 1:1
                 VP-to-physical-core mapping. This is a situation where you will still see an improved
                 throughput compared to four VPs, but it’s far away from the linear scalability we achieved when
                 going from 2 VPs to 4 VPs.

                Keep in mind that it’s NOT possible to configure processor affinity on Hyper-V to achieve
                a fixed VP-physical-core mapping. But setting the “reserve” value in WS2012 Hyper-V
                Manager to 100 has basically the same effect. See also the blog from Ben Armstrong :

                http://blogs.msdn.com/b/virtual_pc_guy/archive/2009/09/21/processor-affinity-and-why-you-don-t-need-it-on-hyper-v.aspx

                 d, next step was to adapt the setting for the VM in Hyper-V Manager. It allows you to define the max
                 VPs per NUMA node. Setting this value to four forced Hyper-V to distribute the workload
                over two NUMA nodes when using 6 VPs in the VM. Now it was again possible to achieve
                basically a 1:1 VP-to-physical-core mapping. Scalability looked fine and because it was
                totally CPU-bound the disadvantage of potentially slower memory access didn’t matter.
                The advantage of getting more CPU power outweighed the memory access penalty by far.

                The following pictures and screenshots will visualize the four items above :

                 

                Figure 1 : as long as SMT is turned off and a VM won’t span multiple numa nodes the virtual processors
                will be mapped to the cores of one single numa node on the physical host

                 

                Figure 2 : once SMT is turned on the Virtual Processors of a VM will be mapped to Logical Processors
                on the physical host. By Default WS2012 Hyper-V Manager will set the maximum # of Virtual
                Processors per numa node according to the hardware layout. In this example it means a max
                of 8 Virtual Processors per numa node

                Figure 3 : configuring 4 Virtual Processors in a VM while SMT is turned on means that these 4 VPs
                will be mapped to 8 Logical Processors on the physical host. This allows the OS/Hypervisor
                to make sure that the workload will be distributed over all 4 physical cores in an optimal way
                similar to having SMT turned off


                Figure 4 : perfmon showed that the workload which kept four virtual processors busy inside a VM was
                distributed over eight Logical processors on four cores within one NUMA node on the
                physical host

                 

                Figure 5 : what happens when adding two additional Virtual Processors inside the
                single VM ? Because 6 VPs is still less than the default setting of a max
                of 8 VPs per single numa node the whole workload will be still mapped
                to 8 Logical Processors which correspond to 4 cores within one numa
                node


                 Figure 6 : increasing the workload the same way as before when going from two virtual processors to
                 four VPs by configuring six VPs inside the VM caused a super-busy single NUMA node using
                 almost all the threads 100%. This means each of the single physical cores of this NUMA node had
                 to engage the two SMT threads it represented to the OS/Hypervisor more heavily. Therefore
                 the scalability for going from four VPs to 6 VPs did not look great compared to going to four
                 VPs from two VPs.

                Figure 7 : the default setting regarding # of virtual processors per numa node in WS2012
                Hyper-V Manager can be changed. Setting the number low enough will force
                the Hypervisor to use more than one numa node

                 

                 

                 


                Figure 8 : changing the “max VPs per NUMA node” setting in Hyper-V Manager ( 2012 ) to four
                forced Hyper-V to use the second NUMA node. This allowed again basically a 1:1
                VP-to-physical-core mapping and scalability looked fine again
                Conclusion :

                 The references as well as the experiences shown make it obvious that the effects of SMT are related
                 to sizing and configuration of SAP deployments, as well as to setting expectations and SLAs accordingly.

                 It is proven that with SMT configured on a Hyper-V host or on an SAP bare-metal deployment,
                 the overall throughput of a specific server increases, as does the power/throughput ratio. Both
                 are usually the goals we follow when specifying hardware configurations for SAP deployments.

                 In terms of using SMT for Hyper-V hosts, one clearly needs to define the goals of deploying SAP
                 components in VMs. If the goal is again to maximize the available capacity of servers, then having
                 SMT enabled is the way to go. This means one would deploy as many VPs as there are Logical Processors
                 on the host server and accept that there might be performance variations dependent on the load over
                 all VMs or the fact that a VM has more VPs than the # of physical cores in one NUMA node of the host
                 server. SLAs towards the business units would then take such variations into account.

                Walk-through TTHS – Tray Table Hyper-Seating

                 

                One thing which will be repeated again and again in all the articles about SMT is the fact that a
                CPU thread is NOT equal to a core. To visualize this specific point and to make it easy to remember
                I would like to compare SMT with TTHS – Tray Table Hyper-Seating as shown on the following
                six Pictures :

                 

                Figure 1 :  you have four comfortable seats and four passengers. Everyone is happy.

                 

                Figure 2 :  now you want to get more than four passengers into the car and the idea is to introduce
                 TTHS – Tray Table Hyper Seating. This allows you to put two passengers on one seat. But
                it’s pretty obvious that it’s not so comfortable anymore. Especially one of the two
                passengers cannot enjoy the cozy seat surface.
                 

                Figure 3 :  therefore the driver should be smart enough to let passengers enjoy the cozy seat surface
                as long as seats are available despite the fact that TTHS is turned on

                 Figure 4 :   at some point, though, when you want to put six passengers into the 4-seat car, two of them
                 have to take the TTHS spots. This is when issues might arise

                 

                Figure 5 :  of course one could turn TTHS off again and share two seats the traditional way. While this
                might work too it’s very obvious that it’s not perfect

                 

                Figure 6 :  conclusion :  if it’s a hard requirement that every passenger has his own seat to fully enjoy
                the cozy seat surface then there is no other way than to take a different car with an
                 appropriate number of seats

                How To : Use Powershell Scripts in Office 365 through the SharePoint CSOM

                When we first started to work with Office 365, I remember being quite concerned at the lack of PowerShell cmdlets – basically all the commands we’re used to using do not exist there. Here’s a gratuitous graph to illustrate the point:


                 So yes, nearly 800 PowerShell commands in SP2013 (up from around 530 in SP2010) down to a measly 30 in SharePoint Online. And those 30 mainly cover basic operations with sites, users and permissions – no scripting of, say, Managed Metadata, user profiles, search and so on. It’s true to say that some of these things are now available down at site-collection scope (needed, of course, when you don’t have a true “Central Admin” site), but there are still “tenant-level” settings that you want to script rather than change manually through the UI.

                So what’s a poor developer/administrator to do?

                The answer is to write PowerShell as you always did, but embed CSOM code in there. More examples later, but here’s a small illustration:

                # get the site collection scoped Features collections (e.g. to activate one) – not showing how to obtain $clientContext here..
                $siteFeatures = $clientContext.Site.Features
                $clientContext.Load($siteFeatures)
                $clientContext.ExecuteQuery()

                 So we’re using the .NET CSOM, but instead of C# we are using PowerShell’s ability to call any .NET object (indeed, nearly every script will use PowerShell’s New-Object command). All the things we came to love about PowerShell are back on the table:

                • Scripts can be easily amended, no need to recompile (or open Visual Studio)
                • We can debug with PowerGui or PowerShell ISE
                • We can leverage other things PowerShell is good at e.g. easily reading from XML files, using other PowerShell modules and other APIs (including .NET) etc.

                Of course, we can only perform operations where the method exists in the .NET CSOM – that’s the boundary of what we can do.

                Getting started

                Step 1 – understand the landscape

                The first thing to understand is that there are actually 3 different approaches for scripting against Office 365/SharePoint Online, depending on what you need to do. It might just be me, but I think that when you start it’s easy to get confused between them, or not fully appreciate that they all exist. The 3 approaches I’m thinking of are:

                • SharePoint Online cmdlets
                • MSOL cmdlets
                • PowerShell + CSOM

                This post focuses on the last flavor. I also wrote a short companion post about the overall landscape and with some details/examples on the other flavors, at Using SharePoint Online and MSOL cmdlets in PowerShell with Office 365

                Step 2 – prepare the machine you will run scripts against SharePoint Online

                Option 1 – if you will NOT run scripts from a SP2013 box (e.g. a SP2013 VM):

                You need to obtain the SharePoint DLLs which comprise the .NET CSOM, and copy them to a folder on your machine – your scripts will reference these DLLs.

                 1. Go to any SharePoint 2013 server, and copy any DLL which starts with Microsoft.SharePoint.Client*.dll from the C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI folder.
                 2. Store them in a folder on your machine e.g. C:\Lib – make a note of this location.

                CSOM DLLs

                Option 2 – if you WILL run scripts from a SP2013 box (e.g. a SP2013 VM):

                In this case, there is no need to copy the DLLs – your scripts will reference them in the original SharePoint install location (C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI).

                The top of your script – referencing DLLs and authentication

                Each .ps1 file which calls the SharePoint CSOM needs to deal with two things before you can use the API – loading the CSOM types, and authenticating/obtaining a ClientContext object. So, you’ll need this at the top of your script:

                 # replace these details (also consider using Get-Credential to enter password securely as script runs)..
                 $username = "SomeUser@SomeOrg.onmicrosoft.com"
                 $password = "SomePassword"
                 $url = "https://SomeOrg.sharepoint.com/SomeSite"    # site URL placeholder - replace with your own site
                 $securePassword = ConvertTo-SecureString $Password -AsPlainText -Force
                 # the path here may need to change if you used e.g. C:\Lib..
                 Add-Type -Path "c:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.dll"
                 Add-Type -Path "c:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"
                 # note that you might need some other references (depending on what your script does) for example:
                 Add-Type -Path "c:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Taxonomy.dll"
                 # connect/authenticate to SharePoint Online and get ClientContext object..
                 $clientContext = New-Object Microsoft.SharePoint.Client.ClientContext($url)
                 $credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($username, $securePassword)
                 $clientContext.Credentials = $credentials
                 if (!$clientContext.ServerObjectIsNull.Value)
                 {
                 Write-Host "Connected to SharePoint Online site: '$Url'" -ForegroundColor Green
                }

                In the scripts which follow, we’ll include this “top of script” stuff by dot-sourcing TopOfScript.ps1 in every script below – you could follow a similar approach (perhaps with a different name!) or simply paste that stuff into every script you create. If you enter a valid set of credentials and URL, running the script above should see you ready to rumble:

                PS CSOM got context

                Script examples

                Activating a Feature in SPO

Something you might want to do at some point is enable or disable a Feature using script. The script below, like the others that follow it, references my TopOfScript.ps1 script above:

. .\TopOfScript.ps1
[bool]$enable = $true
[bool]$force = $false
# using the Minimal Download Strategy Feature here..
$featureId = [GUID]("87294C72-F260-42f3-A41B-981A2FFCE37A")
# ..and working with the web-scoped Features – use $clientContext.Site.Features for site-scoped Features
$webFeatures = $clientContext.Web.Features
$clientContext.Load($webFeatures)
$clientContext.ExecuteQuery()
if ($enable)
{
    $webFeatures.Add($featureId, $force, [Microsoft.SharePoint.Client.FeatureDefinitionScope]::None)
}
else
{
    $webFeatures.Remove($featureId, $force)
}
try
{
    $clientContext.ExecuteQuery()
    if ($enable)
    {
        Write-Host "Feature '$featureId' successfully activated.."
    }
    else
    {
        Write-Host "Feature '$featureId' successfully deactivated.."
    }
}
catch
{
    Write-Error "An error occurred whilst activating/deactivating the Feature. Error detail: $($_)"
}

                PS CSOM activate feature

                Enable side-loading (for app deployment)

                Along very similar lines (because it also involves activating a Feature), is the idea of enabling “side-loading” on a site. By default, if you’re developing a SharePoint app it can only be F5 deployed from Visual Studio to a site created from the Developer Site template, but by enabling “side-loading” you can do it on (say) a team site too. Since the Feature isn’t visible (in the UI), you’ll need a script like this:

. .\TopOfScript.ps1
[bool]$enable = $true
[bool]$force = $false
# this is the side-loading Feature ID..
$featureId = [GUID]("AE3A1339-61F5-4f8f-81A7-ABD2DA956A7D")
# ..and this one is site-scoped, so using $clientContext.Site.Features..
$siteFeatures = $clientContext.Site.Features
$clientContext.Load($siteFeatures)
$clientContext.ExecuteQuery()
if ($enable)
{
    $siteFeatures.Add($featureId, $force, [Microsoft.SharePoint.Client.FeatureDefinitionScope]::None)
}
else
{
    $siteFeatures.Remove($featureId, $force)
}
try
{
    $clientContext.ExecuteQuery()
    if ($enable)
    {
        Write-Host "Feature '$featureId' successfully activated.."
    }
    else
    {
        Write-Host "Feature '$featureId' successfully deactivated.."
    }
}
catch
{
    Write-Error "An error occurred whilst activating/deactivating the Feature. Error detail: $($_)"
}

                PS CSOM enable side loading

                Iterating webs

                Sometimes you might want to loop through all the webs in a site collection, or underneath a particular web:

. .\TopOfScript.ps1
$rootWeb = $clientContext.Web
$childWebs = $rootWeb.Webs
$clientContext.Load($rootWeb)
$clientContext.Load($childWebs)
$clientContext.ExecuteQuery()
function processWeb($web)
{
    $lists = $web.Lists
    $clientContext.Load($web)
    $clientContext.ExecuteQuery()
    Write-Host "Web URL is" $web.Url
}
foreach ($childWeb in $childWebs)
{
    processWeb $childWeb
}

                PS CSOM iterate webs

(Worth noting that you also see SharePoint-hosted app webs in the image above, since these are just subwebs – albeit ones which get accessed on the app domain URL rather than the actual host site's web application URL.)

                Iterating webs, then lists, and updating a property on each list

Or how about extending the sample above to not only iterate webs, but also the lists in each – the property I'm updating on each list is the EnableVersioning property, but you could easily use any other property or method in the same way:

. .\TopOfScript.ps1
$enableVersioning = $true
$rootWeb = $clientContext.Web
$childWebs = $rootWeb.Webs
$clientContext.Load($rootWeb)
$clientContext.Load($childWebs)
$clientContext.ExecuteQuery()
function processWeb($web)
{
    $lists = $web.Lists
    $clientContext.Load($web)
    $clientContext.Load($lists)
    $clientContext.ExecuteQuery()
    Write-Host "Processing web with URL " $web.Url
    foreach ($list in $web.Lists)
    {
        Write-Host "-- " $list.Title
        # leave the "Master Page Gallery" and "Site Pages" lists alone, since these have versioning enabled by default..
        if ($list.Title -ne "Master Page Gallery" -and $list.Title -ne "Site Pages")
        {
            Write-Host "---- Versioning enabled: " $list.EnableVersioning
            $list.EnableVersioning = $enableVersioning
            $list.Update()
            $clientContext.Load($list)
            $clientContext.ExecuteQuery()
            Write-Host "---- Versioning now enabled: " $list.EnableVersioning
        }
    }
}
foreach ($childWeb in $childWebs)
{
    processWeb $childWeb
}

                PS CSOM iterate lists enable versioning

                Import search schema XML

In SharePoint 2013 and Office 365, many aspects of search configuration (such as Managed Properties and Crawled Properties, Query Rules, Result Sources and Result Types) can be exported and imported between environments as an XML file. The sample below shows the import operation handled with PS + CSOM:

. .\TopOfScript.ps1
# need some extra types bringing in for this script..
Add-Type -Path "c:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Search.dll"
# TODO: replace this path with yours..
$pathToSearchSchemaXmlFile = "C:\COB\Cloud\PS_CSOM\XML\COB_TenantSearchConfiguration.xml"
# we can work with search config at the tenancy or site collection level:
#$configScope = "SPSiteSubscription"
$configScope = "SPSite"
$searchConfigurationPortability = New-Object Microsoft.SharePoint.Client.Search.Portability.SearchConfigurationPortability($clientContext)
$owner = New-Object Microsoft.SharePoint.Client.Search.Administration.SearchObjectOwner($clientContext, $configScope)
[xml]$searchConfigXml = Get-Content $pathToSearchSchemaXmlFile
$searchConfigurationPortability.ImportSearchConfiguration($owner, $searchConfigXml.OuterXml)
$clientContext.ExecuteQuery()
Write-Host "Search configuration imported" -ForegroundColor Green

                PS CSOM import search schema
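For the reverse direction, the same objects can export the configuration to a file. This is a hedged sketch – the ExportSearchConfiguration method name comes from the same CSOM search assembly, so verify it against your version of the DLLs, and the output path is just an example:

. .\TopOfScript.ps1
Add-Type -Path "c:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Search.dll"
$configScope = "SPSite"
$searchConfigurationPortability = New-Object Microsoft.SharePoint.Client.Search.Portability.SearchConfigurationPortability($clientContext)
$owner = New-Object Microsoft.SharePoint.Client.Search.Administration.SearchObjectOwner($clientContext, $configScope)
# ExportSearchConfiguration returns a ClientResult[string] which is populated after ExecuteQuery()
$exportedConfig = $searchConfigurationPortability.ExportSearchConfiguration($owner)
$clientContext.ExecuteQuery()
$exportedConfig.Value | Out-File "C:\Temp\ExportedSearchConfiguration.xml"   # example path only
Write-Host "Search configuration exported" -ForegroundColor Green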

                Summary

As you can hopefully see, there's lots you can accomplish with the PowerShell and CSOM combination. Anything that can be done with the CSOM API can be wrapped into a script, and you can build up a library of useful PowerShell snippets just like the old days. There are some interesting things that you CANNOT do with CSOM (such as automating the process of uploading/deploying a sandboxed WSP to Office 365), but there ARE approaches for solving even these problems, and I'll most likely cover this (and our experiences) in future posts.

A final thought on the PowerShell + CSOM front is that you can have "hybrid" scripts which can deal with both SharePoint Online and on-premises SharePoint. For example, on my current project everything we build must be deployable to both SPO and on-premises, and our scripts take a "DeploymentTarget" parameter where the values can be "Online" or "OnPremises". There are some differences (i.e. branching) in the scripts, but for many operations the same commands can be run.
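To illustrate the pattern (rather than our actual deployment scripts), here is a minimal, hedged sketch of what such a hybrid entry point can look like – the URL, DLL paths and credential handling are placeholders:

param(
    [ValidateSet("Online", "OnPremises")]
    [string]$DeploymentTarget = "Online",
    [string]$Url = "https://yourtenant.sharepoint.com/sites/yoursite"   # placeholder
)
# DLL paths are placeholders – adjust to C:\Lib or the 15\ISAPI folder as discussed earlier
Add-Type -Path "C:\Lib\Microsoft.SharePoint.Client.dll"
Add-Type -Path "C:\Lib\Microsoft.SharePoint.Client.Runtime.dll"
$clientContext = New-Object Microsoft.SharePoint.Client.ClientContext($Url)
if ($DeploymentTarget -eq "Online")
{
    # SharePoint Online – use SharePointOnlineCredentials, as in TopOfScript.ps1
    $credential = Get-Credential -Message "Enter your Office 365 credentials"
    $clientContext.Credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($credential.UserName, $credential.Password)
}
else
{
    # on-premises – fall back to the current Windows identity
    $clientContext.Credentials = [System.Net.CredentialCache]::DefaultNetworkCredentials
}
# ..from here, the same CSOM calls can run against either environment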

How To : Use the Content Search Web Part for SharePoint 2013 Search

Meeting client requirements with SharePoint often involves aggregating items somehow – often we want to display things like "all the overdue tasks across all finance sites", or "navigation links to all of the subsites of this area" or "related items (e.g. tagged with the same term)" and so on. In SharePoint 2010 there were two main ways of accomplishing this:


                • Content Query web part
                • Custom solution built on SPSiteDataQuery (site collection-scoped), SPQuery (list-scoped) or search API

                To a lesser extent, using the search web parts as part of a custom solution may also have been an option. Regardless, it was common to need custom code to meet such requirements. Maybe we needed to add paging to the results, or we needed to use some value obtained dynamically through code (e.g. from the current site/current page/current user/something else) – several Codeplex solutions arose from this gap, and lots of lines of code were written.

SharePoint 2013 presents the Content Search web part as a new option – its capabilities mean that simply using the web part (with some front-end work to meet look and feel requirements) will meet many needs, without the use of custom code. If you're a developer, the following screenshot should give you a clue as to why code won't be required too often (with one of my favorite options highlighted):

                CSWP_BasicsTab_AdvancedMode_PropertyFilterValues

                It’s incredibly powerful, and it’s a good idea to understand what it can do.

                Understanding the deal with search-based solutions

                As the name suggests, the Content Search web part is powered by SharePoint’s search function. As such, there are the following considerations:

                • The CSWP can be configured to “see” items anywhere in SharePoint (potential advantage)
                  • In contrast, the CQWP and related SPSiteDataQuery can only search within the current site collection – the site collection “boundary” is a factor
                • Results shown are not guaranteed to be 100% up-to-date (potential disadvantage) 
                  • Since a search crawl has to run before any content changes will be shown in search results (remember this can include titles, summaries, images and so on for pages/documents), if a user creates/edits an item it will not be shown immediately. This can be a critical point.
  • Furthermore, my understanding from a FAST engineer is that in SharePoint 2013 there is no longer any means of pushing a document directly into the search index – in previous FAST incarnations, including FAST for SharePoint 2010, there were options such as docpush.exe for "proactively" adding an item to the index, rather than waiting for the next search crawl.
  • That said, it should be possible to obtain much lower indexing latencies in SharePoint 2013 via the "Continuous Crawl" capability (a hedged on-premises sketch for enabling it follows this list). In most deployments, my guess would be that changes would be reflected within a few minutes at most if this is enabled (where previously you may have had an incremental crawl scheduled every 15, 30 or 60 minutes for a SharePoint sites content source).
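For on-premises farms, Continuous Crawl is enabled per content source. A minimal sketch, assuming the default "Local SharePoint sites" content source name and run from the SharePoint 2013 Management Shell (in Office 365 the crawl schedule is managed for you, so there is nothing to configure):

$searchServiceApp = Get-SPEnterpriseSearchServiceApplication
$contentSource = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $searchServiceApp -Identity "Local SharePoint sites"
$contentSource.EnableContinuousCrawls = $true
$contentSource.Update()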

                Summary – if the functionality you are creating needs fully up-to-date results (e.g. a user has created/edited something and it needs to be immediately reflected in the site) then you will probably need to stick with the original approaches (i.e. a query-based rather than search-based solution).

                Terminology – new concepts in SharePoint 2013 search

                So if we’re going to build solutions built on SP2013 search, we need to have a basic understanding of some concepts – we’ll run into these time and time again:

Concept / My quick definition

Result Source – Like a search 'scope' in SP2007/SP2010, but on steroids. Rules are specified to say what the scope consists of – e.g. DOCUMENTS in my TEAM SITES area (constraining on content type and path in this example).

Created centrally, or at the web level. Result Sources can be used in just about any search-related functionality, including the Content Search web part.

Query Rule – Like a 'best bet' on steroids. Ability to show specially formatted results at the top of the results list (e.g. Promoted Result) for highly-recommended content. In addition to a Promoted Result, we can also do a Result Block (an example could be a block of 5 image results within the main list of text links).

Another option is to Change the Ranked Results – i.e. put something at the top, promote or demote something by 1-10 (previously known as a 'boost' in FAST).

LOTS of flexibility in matching the user's query, including regular expressions and matching terms in the Managed Metadata store.

Display Template – A Display Template is a JavaScript template (similar to jQuery templates) which controls formatting – in the case of the CSWP, this effectively replaces the use of XSL for look and feel. There is a separate template to pick for the overall control and for the formatting of an individual item. The .js files for the templates are stored in the 'Content Web Parts' subfolder of the Master Page Gallery.

Side note – in the context of a search results page (rather than the CSWP), a Display Template is associated with a Result Type (e.g. Word doc, wiki page, PowerPoint file etc.) and so we have granular control over how each is displayed (and when). Extremely cool.

                So, lots of flexibility in the search infrastructure. Let’s see some of this in the context of the Content Search web part.

                Configuring the Content Search web part

                There are two main aspects to this:

                • Displaying the right items (Search Criteria)
                • Look and feel (Display Templates)

In terms of the search criteria, there is enormous flexibility in what the CSWP – and the underlying search capability – can do. For one thing, it's possible to configure the query entirely in the properties of this web part instance (e.g. show me all documents which meet criteria X), and/or start from a pre-existing Result Source to do some of the filtering. Combining the approaches will be fairly common – an example could be "search only on wiki pages" (an OOTB Result Source) but only show items tagged with X (this defined directly in the CSWP properties).

                Interestingly, configuring a centralized Result Source and a Content Search web part on a page are very similar, even though it would seem some sort of “reusable scope” and a web part are very different things in SharePoint. The overlap comes because underneath both there is a search query which does the work of isolating the desired results – indeed, as we’ll see later the same “Query Builder” UI is used in both places (with a couple of minor differences). So, if you’ve learnt how to configure a CSWP you’ve essentially also learned how to create  a custom Result Source.

                 

                Configuring the web part

                The first thing to understand is that the Content Search web part appears in different guises in the web part gallery. The ‘main’ web part is in the ‘Content Rollup’ category:

                CBS_MainWebPartInAdder

                But there are also many pre-configured versions available, each of which finds a specific type of content. This is great for end-users who don’t necessarily think in terms of needing a ‘Content Search’ web part:

                CBS_WebPartsInAdder
                And just to prove the point, the web parts above correspond to the following .webpart definition files in the Web Part Gallery:

                CBS_WebParts

Once the web part has been added to the page, it can be configured via its tool pane. The main configuration item is the query to use, and this can be started by clicking the 'Change query' button:

                CSWP_properties
This opens the "Build Your Query" dialog – this has tabs labeled BASICS, REFINERS, SORTING, SETTINGS and TEST. This thing is known (unsurprisingly) as the Query Builder – what you might not realize is that it's used in several places in SharePoint 2013:

                • Configuring a Content Search web part (obviously)
                • Creating a Result Source (specifically in the Query Transform section)
                • Configuring a Search Results web part

                There are some differences – for example, when configuring a Search Results web part there is no SORTING tab because this will be handled in the Result Source or the query. I’m going to talk about things from the perspective of the Content Search web part, but will call out any differences for the other usages – so hopefully by learning the CSWP, you also get to learn 75% of the search infrastructure.

                BASICS tab – Quick Mode

Although the first tab is labeled 'BASICS', I'd say it's actually the most involved – this is where the query itself is configured, and there is a 'Quick Mode' and 'Advanced Mode'. You'll also notice – and let me just say I'd personally be willing to give the Product Manager for this feature A BIG HUG for this – that there's a "live" results preview pane, permanently visible on the right-hand side of the Query Builder. This shows the first 10 results which would display from running the currently configured search against the current index, without the need to save the web part after each change:

                CSWP_BasicsTab_QuickMode

                Note that if you create your own query, then this preview pane is only able to show results when you are on the TEST tab. And we’ll talk about that towards the end.

                Let’s now walk through the various configuration steps in here.

                Select a query

                In Quick Mode, the dropdown contains the Result Sources (see my definition above if you’ve forgotten already :)) which come out-of-the-box with SharePoint 2013 – one of these may provide a good foundation for what you need:

                CSWP_BasicsTab_QuickMode_SelectQuery
                As you select a Result Source from the dropdown, other options may become available lower down. So if I want to find items matching a specific content type, I get this:

                RestrictByContentType
                In fact, this option to restrict by content type appears for many of the pre-defined Result Sources, not just “Items matching a content type” – which makes sense, because it’s a common thing to include as a filter. Similarly, “Items matching a tag” and several other queries give this interface for selecting a tag to filter on:

                RestrictByTag
                And, happy days, if I specify the tag by typing one I get auto-complete to help me pick the term – this is a fully-fledged Managed Metadata input field. Consequently there’s also full validation of the terms you type-in (though this takes a few seconds to show), so if an author accidentally enters something which isn’t a known term, he/she should spot the mistake immediately:

                TermValidation

Consider also that those middle options of using the navigation term associated with the current page are exactly what's needed to build many types of 'related items' functionality – again, no code needed now.

                Restrict results by app

                In the next section, I can restrict the scope of the results to a particular location (e.g. the current site). This enables me to get something like the Content Query web part behavior of only searching within the current site collection if needed – because although we now have the power, it won’t always make sense to go across the entire farm 🙂

                RestrictByApp

                Add additional filters

In the next section I can supplement the query with any valid query text, e.g. a property filter. In this example, I'm adding a filter to only present items which were created by the current user:

                AdditionalFilter

                Sort results

When we scope our query to a pre-defined Result Source (as we are here in the CSWP 'Quick Mode'), then sorting is usually pre-defined at that level. The CSWP does give us the opportunity to override sorting based on some popularity ranking models (around most viewed/most clicked) instead though – expect proper wording to appear in this dropdown in the RTM version, but you get the idea:

                SortResults
                So what happens if none of the options presented so far do what you want? An example could be wanting to use an existing Result Source (e.g. ‘wiki pages’) but sort on Last Modified in descending order. Obviously the dropdown above does not allow that. We could create a custom Result Source and implement the query/sorting there, but that only really makes sense if we expect it to be re-used in multiple places.

                In these cases, we can click into Advanced Mode (still on the BASICS tab).

                BASICS tab – Advanced Mode

                In Advanced Mode you basically get to specify the full query text yourself. In my mind, this is like building a solution with the search API in SP2007/SP2010 – I saw many custom solutions (and built several myself) which used the FullTextSqlQuery or KeywordQuery classes to find the right items. SharePoint 2013 makes it much easier to have this full control whilst still piggybacking onto the out-of-the-box web parts – meaning less work and more productivity.

                When switching to the Advanced Mode, a couple of things become available:

                • A SORTING tab (details later)
                • Controls to help you build the query (which you’d previously do essentially by hand in earlier versions), with ‘Keyword filter’ and ‘Property filter’ options. These can be combined as you like, and the resulting query text appears in the textbox at the bottom:

                CSWP_BasicsTab_AdvancedMode

                Avoid custom code by using tokens

                There are many tokens which can be used when building a query in this way – often you might want to pass something into the query, such as a URL (querystring) parameter, the value in a particular field on the page, and so on. Being able to do this unlocks a huge range of possibilities for building solutions. This is where the first image in this article comes from – here’s a reminder:

                CSWP_BasicsTab_AdvancedMode_PropertyFilterValues

                In summary, when using the Advanced Mode of the query builder you should be able to target just about any content in your SharePoint environment.
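If you want to experiment with query text outside of the web part (while working out keyword and property filters, say), one hedged option is to run the same KQL through the search CSOM from PowerShell. This reuses the TopOfScript.ps1 pattern from the PowerShell + CSOM section earlier; the KQL string is only an example:

. .\TopOfScript.ps1
# the search CSOM types live in a separate assembly
Add-Type -Path "c:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Search.dll"
$keywordQuery = New-Object Microsoft.SharePoint.Client.Search.Query.KeywordQuery($clientContext)
# example KQL only – paste in whatever the Query Builder generates for you
$keywordQuery.QueryText = 'ContentType:Document author:"John Smith"'
$keywordQuery.RowLimit = 10
$searchExecutor = New-Object Microsoft.SharePoint.Client.Search.Query.SearchExecutor($clientContext)
$results = $searchExecutor.ExecuteQuery($keywordQuery)
$clientContext.ExecuteQuery()
# the first table returned is typically RelevantResults
$results.Value[0].ResultRows | ForEach-Object { Write-Host $_["Title"] }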

                SORTING tab (Advanced Mode only)

                In SharePoint 2010 Enterprise Search, you could only sort by relevance/rank (the normal search engine approach) or date. FAST for SharePoint 2010 had more options (you could sort by a Managed Property). In SharePoint 2013, frankly the sort options alone are enough to blow your mind 🙂  If you don’t need anything specific around sorting then you can skip this bit, but if you do then here are your options:

                First you can sort by way more things than just rank and date:

                CSWP_SortTab
                One thing to note there – I’m unclear as to what makes it into that ‘Sort by’ list and what does not. It’s not Managed Properties as far as I can tell, so although the list is long many options may not be hugely useful. Still, better than before.

                Usefully, you can now do multi-level sorting (sort by this, then by that). The ‘Add sort level’ link in the image above adds another row, allowing me to do things like sorting by URL depth (so items higher up in the site hierarchy show at the top), and then by rank (that makes sense, because there’ll be lots of items at the same URL depth so I do need two levels of sorting):

                CSWP_SortTab_Custom

                Note that effectively what I’m doing here is building some sort of custom ranking model. This works great if I need something very specific on sorting, but also note SharePoint 2013 comes with several ranking models – the next section allows me to pick from these if I’ve left the ‘Sort by’ dropdown on ‘Rank’, unlike in the image above. This is because all these options are effectively different forms of rank – most are around People Search or popularity:

                CSWP_SortTab_RankingModel

                And for those occasions when the client is telling you that his/her strategic document really has to be on page 1 of the results (but not a Promoted Result/best bet), you have ‘Dynamic ordering’ – you can boost/demote results, including the option to promote to the top:

                CSWP_SortTab_DynamicOrdering

                REFINERS tab

                In the context of search, refiners are usually the links on the search engine’s results page (typically in the left nav) which allow the user to further filter the results. So if I do a search for “meeting minutes” and get lots of results, it would be nice to be able to filter by, say:

                • Date range
                • SharePoint site (since minutes might be stored in individual project sites)
                • Author
                • ..and so on

                However, in the context of the Content Search web part, refiners actually allow you to do this filtering as part of the initial query. The REFINERS tab is effectively a convenience to you, the person configuring the web part – what happens is that a search is performed whilst in edit mode, and all relevant refiners (e.g. managed properties) are presented as available refiners. These can be selected and moved over to the right-hand list:

                CSWP_RefinersTab
                The effect of this is that a further filter is added to my query. In the example above, this may be easier than using a Property Filter on the BASICS tab – since there I have little support, I just select the property and type the value:

                CSWP_BasicsTab_PropertyFilter
                In the REFINERS tab, SharePoint is doing the search for me (as it’s configured so far), and only coming back with values which have been found in the returned results.

                SETTINGS tab

                The SETTINGS tab controls some high-level options for running the search:

                CSWP_SettingsTab

                Query rules

                Since these can be defined at the parent site or search service, it could be the case that your CSWP gets affected by one of these. As the radio button shows, this can be overridden, but consider that some types of Query Rules may not have an effect anyway – as a reminder (from the table at the beginning), a Query Rule can either:

                • Add a promoted result
                • Add a result block
                • Change the ranked results somehow (by modifying the query)

                Out of these 3 actions, 1.5 of them could affect the results of a ‘default’ CSWP. This can be summarized:

Query Rule Action / Will affect CSWP results?

Add a promoted result – Not by default. When a search runs in SharePoint, multiple result sets are returned (e.g. 'main results', 'best bet results' and so on – in SP2013, the real names for these are 'RelevantResults', 'SpecialTermResults', 'PersonalFavoriteResults' and 'RefinementResults'). Although a CSWP can be configured to show any table, the default is 'RelevantResults' – and a promoted result gets added to 'SpecialTermResults'.

Add a result block – Yes, if the result block is configured to show 'ranked within core results' (the default), rather than 'shown above core results'.

Change ranked results – Yes.

                For completeness, here’s the place in the CSWP where you select which search result set to use (e.g. if you want to switch from the default of ‘RelevantResults’:

                CSWP_ResultTableSelection

                Options in the Results Table dropdown (shown to the left):

                CSWP_ResultTableSelectionOptions

                URL rewriting

                This one is fairly simple – if results are being returned from a catalog which is using “friendly” URLs, then the CSWP can override this to use the original URLs. It may not always make sense to use rewritten URLs in aggregations outside of the catalog pages, especially if you’ve implemented anything funky there.

                Loading behavior

                This is useful – specify whether the CSWP web part instance should load in the main page load (default) or in an AJAX manner after the main page has finished. Considering that a CSWP could either be the centerpiece of your landing page or merely some page footer navigation, it’s nice to be able to prioritize in this way.

                Priority

                Similarly, we can actually specify High, Medium or Low priority for each CSWP instance we use – great for the different usages we will have, although as per the description, note this only has any effect if the search service is overloaded.

                TEST tab

                The TEST tab is hugely useful – it provides you the ability:

                • To see the underlying query text (in Keyword Query Language [KQL]) which has been generated (though it must be edited in other tabs)
                • To see the preview when you are defining a query yourself (the preview pane will be empty on other tabs in this scenario)

                CSWP_TestTab_Less
                Which is all great, but at first glance it’s easy to miss some extra functionality – if the ‘Show more’ link is clicked, other information becomes visible including details on any refiners and Query Rules which have been applied. So below I can see that a custom Query Rule I created has indeed been used, so there’s no guesswork on (for example) whether a certain item is actually being promoted or not:

                CSWP_TestTab_More

                Sidenote – listing items from ONE site/list/library with the Content Search web part

Worthy of a quick note – if all you need to do is roll up content from one list/library, then you can do this with the CSWP – in the query, simply restrict the search using PATH:[URL to document library]. The Query Builder UI helps you do this by providing the 'Restrict by app' area:

                CSWPrestricttositeorlibrary_thumb2

N.B. One potential gotcha here is that you need 'HTTP' if your sites are browsed over HTTPS but crawled over HTTP (as in my case).

If you do want to filter by site/list/library, consider of course that the good ol' Content Query web part will work just fine here, and you'll get instant changes as content is changed. What you won't have is the Content Search web part's ability to automatically use tokens in the query (e.g. value of the current navigation category, value from the current user's profile etc.)

                Summary

The Content Search web part is a great tool in the SharePoint consultant's box of tricks. Configuration may prove quite simple for some scenarios, but there is also a huge amount of flexibility, and a certain degree of complexity comes with that. Many advanced scenarios which make use of SP2013 search capabilities (such as Result Sources, Query Rules, promoted results and so on) will be possible – knowing the details will help you identify whether the CSWP can be the answer to a particular problem or not.

How authentication works in Duet Enterprise 2.0

                Duet Enterprise 2.0 stores all business processes and data in the SAP system while letting SharePoint users access the processes and data from SharePoint websites and Outlook 2013. Because SAP and SharePoint authenticate users differently, Duet provides a single sign-on authentication model that authenticates each user individually.

                It’s helpful to understand the following things before you look at the overall authentication process.

                • A user logs on to SharePoint by using their SharePoint user identity. This can be either forms-based authentication or credentials stored in Active Directory Domain Services (AD DS), but is typically associated with a user account stored in AD DS.
                • The SAP environment can’t authenticate a user’s SharePoint identity. Instead, a Duet Enterprise component installed on the SharePoint Server 2013 farm swaps the user’s Windows credentials for a user certificate that SAP NetWeaver uses to authenticate the user. When Duet Enterprise 2.0 is installed, the SAP administrator creates a trust relationship with the DuetRoot Certificate (an X.509 Root Authority certificate), which is stored in the SharePoint Secure Store Service. This certificate is used to create a certificate for each individual user on the fly.
                • Information in an SAP environment can’t be secured with Windows credentials or SharePoint credentials (which in this case would be the user’s SharePoint identity). Instead, information is secured in SAP using SAP user accounts. When deploying Duet Enterprise 2.0, an SAP administrator maps each SharePoint user account to a unique SAP user. This way, a user who logs into a SharePoint website can access data that’s stored in SAP without getting an extra login prompt.

                The following picture shows a high-level view of authentication flow in a Duet Enterprise 2.0 environment. It shows the steps that occur when a SharePoint user accesses SAP information from a SharePoint site.

Tip:
                To see this picture and the following list that describes the process without having to scroll, download the Authentication flow in Duet Enterprise 2.0 poster.

                 

                Figure: Duet Enterprise 2.0 authentication

The following list describes the steps shown in the preceding picture. It assumes that a SharePoint user has requested data that's stored in the SAP environment.

                A.   A user logs on to a Duet Enterprise 2.0-enabled SharePoint website using his SharePoint user identity. Because the website contains an external list or Web Part that surfaces SAP data, the request is sent to the Business Connectivity Services runtime in the SharePoint farm.

                B.   The Business Connectivity Services runtime invokes the Duet Enterprise 2.0 OData Extension Provider.

                C.   The Duet Enterprise 2.0 OData Extension Provider gets the DuetRoot Certificate from the Secure Store.

                D.   The Duet Enterprise 2.0 OData Extension Provider uses the DuetRoot Certificate to create an X.509 user certificate and sends the certificate to the Business Connectivity Services runtime.

                E.   The Business Connectivity Services runtime sends the request with the user certificate to the SAP NetWeaver Gateway component of SAP NetWeaver in a request packet.

Tip:
                SAP NetWeaver with the SAP NetWeaver Gateway component installed is also known as SAP NetWeaver Gateway.

                 

                F.   Because SAP NetWeaver trusts the DuetRoot Certificate that was used to create the user certificate, SAP NetWeaver can authenticate the user and look up the SAP user who is mapped to the SharePoint user who is identified by the certificate.

                G.   The SAP user account that’s mapped to the SharePoint user is returned to SAP NetWeaver.

                H.   SAP NetWeaver uses the SAP user account to request access to the requested information in the SAP system and, if the user is authorized to access the information, the requested information is sent to SAP NetWeaver Gateway.

                I.   SAP NetWeaver Gateway sends the reply as a response packet to the Business Connectivity Services runtime on the on-premises SharePoint farm.

                J.   The Business Connectivity Services runtime passes the information to the SharePoint user. In this case, to the website from which the user has requested the information.

Note:
                The two-way connection between the SharePoint Server farm and SAP NetWeaver is secured by using two Secure Sockets Layer (SSL) certificates. One certificate is bound to a SharePoint web application and trusted by the SAP administrator. The other certificate is bound to SAP NetWeaver and trusted by the SharePoint administrator.

                 

                Using SAP roles to access SharePoint objects

                In the enterprise, the tasks that a user does are usually related to that user’s role. Because of this, it’s handy to grant permissions to resources, such as list items, websites, and documents, based on SAP roles. Conceptually, SAP roles are like SharePoint groups except that they’re created and managed in SAP.

                In SAP NetWeaver, users are assigned one or more roles, such as Sales Representative, Project Manager, Executive, and Human Resources Specialist. SAP roles can be broad, such as All Sales Managers, or narrow, such as Sales Managers Eastern Region.

                In Duet Enterprise 2.0, these SAP roles can be used to grant permissions in SharePoint Server. Anything you can set permissions on in SharePoint Server can be assigned permissions using SAP roles.

                This includes objects directly related to Duet Enterprise 2.0, such as SAP reports, external lists, actions on external content types, and any general and securable SharePoint Server objects, such as websites or document libraries.

                After a role is granted permissions to an object, any user who is assigned that role will then have permissions to use that object.

                If you remember nothing else about RoleSync, remember that SAP NetWeaver admins assign SAP users to roles and they also assign SharePoint users to SAP users. This effectively assigns one or more SAP roles to SharePoint users.

                Duet Enterprise 2.0 uses the Duet Enterprise Profile Synchronization Timer Job feature to bring the user role assignments from the SAP system into the SharePoint user profile store. Duet Enterprise 2.0 also uses the Duet Enterprise Claims Provider to help manage the role-based permissions to securable objects in SharePoint Server.

Note:
                Think of role synchronization as a one-way street. Users’ roles that are defined in the SAP system are brought into the SharePoint user profile store. No properties in the SharePoint user profiles are sent from SharePoint back to SAP.

                 

                During role synchronization, the set of SAP users is imported into the SharePoint user profile store by using Business Connectivity Services. For each SAP user who has a related user profile in SharePoint, all of the SAP roles assigned to that user are listed in the user profile store.

                Role synchronization connects from SharePoint Server to an external system on the SAP side named “SAPUsersService.” This external system sends the user-to-roles mappings to the SharePoint user profile store.

After role synchronization is completed, you'll see a new field, called SAP Roles, at the bottom of the User Profile page. The SAP roles assigned to the user are separated by semicolons.

SAP Roles as seen on a User Profile page in SharePoint.

Role synchronization is typically run on a schedule by using the Duet Enterprise Profile Synchronization Timer Job. You decide how often to synchronize roles and how many users to import at a time.
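If you'd rather kick off a synchronization on demand than wait for the schedule, here is a hedged sketch from the SharePoint 2013 Management Shell – the DisplayName filter below is an assumption based on the job name described above, so verify it in your farm first:

# find the Duet profile synchronization timer job by display name (pattern is an assumption) and start it
Get-SPTimerJob | Where-Object { $_.DisplayName -like "*Duet Enterprise Profile Synchronization*" } | Start-SPTimerJob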

After roles are synchronized with the SharePoint user profile store, users and administrators can grant access to SharePoint securable objects using the SAP roles. Before this capability is available, a SharePoint farm administrator needs to activate the Duet Enterprise SAP Roles Claims Provider feature at the farm level, which makes the claims provider available.
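Farm-scoped features can be activated from PowerShell as well as from Central Administration. This is only a hedged sketch – the feature name below is a placeholder, so look up the real internal name of the Duet Enterprise SAP Roles Claims Provider feature before running it:

# find the actual feature name first (the -like patterns are guesses/placeholders)
Get-SPFeature | Where-Object { $_.DisplayName -like "*Duet*" -or $_.DisplayName -like "*SapClaims*" }
# then activate it; farm-scoped features do not need a -Url parameter
Enable-SPFeature -Identity "<DuetEnterpriseSapRolesClaimsProviderFeatureName>"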

Note:
                When a user’s role is changed in the SAP system, the change can take some time (up to 10 hours) to be propagated to the SharePoint system. This might temporarily prevent users from being authorized if their roles have changed since the last sync job.

                Business Connectivity Services (BCS) client side logging for Office 2010 when working with Duet Enterprise

                Use tracing on the client (SharePoint Server 2010) 
                http://technet.microsoft.com/en-us/library/ff700209.aspx


                Here are the condensed steps that I use on the client when troubleshooting issues related to taking Duet Enterprise lists offline. 
                NOTE: You must be an Administrator on the computer to use the tracing.

                Create the Data Collector Set:

                1. Launch Perfmon (Start –> Run –> Perfmon) 
                2. Expand Data Collector Sets 
                3. Right-click on “User Defined” and choose “New Data Collector set” 
                4. Give it a name, I use “BCS” 
                5. Choose the “Create Manually” option and click Next 
                6. Check the box labeled “Event trace data” and click Next 
                7. Next to the Providers box, click the “Add…” button and wait for the list to load. 
                8. Select the item named “Microsoft-Office-Business Connectivity Services” and click “OK” 
                9. Leave everything at the default values and click “Finish” 
                10. You should now see your “BCS” Data Collector Set listed under “User Defined” in Perfmon.

                 

                Start collecting the trace information:

                1. Select the “BCS” data collector set you created previously. 
                2. Click the “Start the Data Collector Set” button. 
                3. Reproduce the issue. 
                4. Click the “Stop the Data Collector Set” button to stop the trace. 
                5. A trace file with a .etl extension should be created in a path like this: 
                    C:\PerfLogs\Admin\BCS\MACHINENAME_20110824-000001\DataCollector01.etl

                View the trace:

                1. Launch Event Viewer (Start –> Run –> eventvwr.msc) 
                2. Action menu  –> “Open Saved Log…” 
                3. In the “Open Saved Log” dialog, navigate to the .etl trace file you created earlier and choose Open. 
                4. When prompted to convert to the new event log format choose Yes. 
                5. Change the display name if you would like then click OK 
                6. You should see the trace information in event viewer.
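If you prefer the command line to the Perfmon UI, a roughly equivalent (hedged) approach uses the built-in logman.exe tool with the same provider name from step 8 above; the output path is just an example:

# create the output folder and an event trace data collector for the BCS provider
New-Item -ItemType Directory -Path "C:\PerfLogs\BCS" -Force | Out-Null
logman create trace BCS -p "Microsoft-Office-Business Connectivity Services" -o "C:\PerfLogs\BCS\DataCollector01.etl"
logman start BCS
# ..reproduce the issue, then stop the trace and open the .etl file in Event Viewer as described above
logman stop BCS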

Using SharePoint FAST to unlock SAP data and make it accessible to your entire business

                An important new mantra is search-driven applications. In fact, “search” is the new way of navigating through your information. In many organizations an important part of the business data is stored in SAP business suites.
                4336.SP2013SearchArchitecture[1]
A frequent requirement is to be able to navigate through the business data stored in SAP via a user-friendly and intuitive application context.
                For many organizations (78% according to Microsoft numbers), SharePoint is the basis for the integrated employee environment. Starting with SharePoint 2010, FAST Enterprise Search Platform (FAST ESP) is part of the SharePoint platform.
                All analyst firms assess FAST ESP as a leader in their scorecards for Enterprise Search technology. For organizations that have SAP and Microsoft SharePoint administrations in their infrastructure, the FAST search engine provides opportunities that one should not miss.

                SharePoint Search

Search is one of the supporting pillars in SharePoint. And an extremely important one, for realizing the SharePoint proposition of an information hub plus collaboration workplace. It is essential that information you put into SharePoint is easy to find again.

By yourself of course, but especially by your colleagues. However, from the context of a 'central information hub', more is needed. You must also be able to find and review, via the SharePoint workplace, the data that is administered outside SharePoint. Examples are the business data stored in Line-of-Business systems [SAP, Oracle, Microsoft Dynamics], but also data stored on network shares.
With the purchase of FAST ESP, the search power of Microsoft's SharePoint platform increased sharply. All analyst firms consider FAST, along with competitors Autonomy and Google Search Appliance, as 'best in class' for enterprise search technology.
For example, Gartner positioned FAST as a leader in the Magic Quadrant for Enterprise Search, just above Autonomy. In the SharePoint 2010 context, FAST is introduced as a standalone extension to the Enterprise Edition, parallel to SharePoint Enterprise Search.
                In SharePoint 2013, Microsoft has simplified the architecture. FAST and Enterprise Search are merged, and FAST is integrated into the standard Enterprise edition and license.

                SharePoint FAST Search architecture

                The logical SharePoint FAST search architecture provides two main responsibilities:

1. Build the search index administration: in bulk, automatically index all data and information which you want to be able to search later. Depending on the environmental context, the data sources include SharePoint itself, administrative systems (SAP, Oracle, custom), file shares, …
2. Execute search queries against the accumulated index administration, and expose the search results to the user.

In the indexation step, SharePoint FAST must thus retrieve the data from each of the linked systems. FAST Search supports this via the connector framework. There are standard connectors for (web)service invocation and for database queries. It is also supported to custom-build a .NET connector for other ways of unlocking an external system, and then plug this connector into the search indexation pipeline. Examples are connecting to SAP via RFC, or 'quick-and-dirty' integration access into an internally built system.
In this context of searching (or better: finding) SAP data, SharePoint FAST supports the indexation process via Business Connectivity Services, connecting to the SAP business system from the SharePoint environment and retrieving the business data. What still needs to be arranged is the runtime interoperability with the SAP landscape: authentication, authorization and monitoring.
An option is to build these typical plumbing aspects into a custom .NET connector. But this is not an easy matter. And more significantly, it is something that end-user organizations nowadays no longer aim to do themselves, due to the development and maintenance costs involved.
An alternative is to apply Duet Enterprise for the plumbing aspects listed. Combined with SharePoint FAST, Duet Enterprise plays a role in two ways: (1) first upon content indexing, for the connectivity to the SAP system to retrieve the data.
The SAP data is then available within the SharePoint environment (stored in the FAST index files); search query execution next happens outside of (without a link into) SAP. (2) Optionally, you go back from the SharePoint application to SAP if the use case requires that more detail is exposed per SAP entity selected from the search result. An example is a situation where it is absolutely necessary to show the actual status – as with a product in the warehouse: how many orders have been placed?

                Security trimmed: Applying the SAP permissions on the data

Duet Enterprise retrieves data under the SAP account of the individual SharePoint user. This ensures that also from the SharePoint application you can only view those SAP data entities to which you have rights according to the SAP authorization model. The retrieval of detail data is thus only allowed if you are allowed to see that data in the SAP system itself.

Due to the FAST architecture, matters are different for search query execution. As mentioned, the SAP data has by then already been brought into the SharePoint context, so there is no runtime link into the SAP system needed to execute the query. The consequence is that Duet Enterprise is not applied by default in this context.
In many cases this is fine (for instance in the customer example described below); in other cases it is absolutely mandatory to also respect the specific SAP permissions at the moment of query execution.
The FAST search architecture provides support for this by enabling you to augment the indexed SAP data with the SAP authorizations as metadata.
To do this, you extend the scope of the FAST indexing process with retrieval of the SAP permissions per data entity. This meta-information is used to compile ACL lists per data entity. FAST query execution processes this ACL meta-information, and checks for each item in the search result whether it is allowed to be exposed to this SharePoint [SAP] user.
This approach of assembling the ACL information is a static snapshot of the SAP authorizations at the time the FAST indexing process is executed. In case the SAP authorizations are dynamic, this is not sufficient.
For such a situation it is required that, at the time of FAST query execution, the SAP authorizations that apply at that moment can be retrieved dynamically. The FAST framework offers an option to achieve this. It does require custom code, but this is then plugged into the standard FAST processing pipeline.
SharePoint FAST combined with Duet Enterprise thus provides standard support and multiple options for implementing SAP security trimming. And in the typical cases the standard support is sufficient.

                lip_image002_2.png

                Applied in customer situation

The above is not only theory; we actually applied it in real practice. The context was that of opening up SAP Enterprise Learning functionality to the employees from their familiar SharePoint-based intranet. One of the use cases is that the employee searches the course catalog for a suitable training.

This is a striking example of a search-driven application. You want a classified list of available courses, to zoom in to relevant training through refinement, and per applied classification and refinement see how many trainings are available. And of course you also always want the ability to freely search in the complete texts of the courses.
In the solution, we make the SAP data available for FAST indexation via Duet Enterprise. Duet Enterprise here takes care of the connectivity, single sign-on, and the feed into SharePoint BCS. From there FAST takes over. Indexation of the exposed SAP data is done via the standard FAST index pipeline; searching and displaying the found results is done via the standard FAST query execution and display functionality.
In this application context, specific user authorization per SAP course element does not apply. Every employee is allowed to find and review all training data. As a result, we could suffice with the standard application of FAST and Duet Enterprise, without the need for additional customization.

                Conclusion

Microsoft SharePoint Enterprise Search and FAST are both very powerful tools for making SAP business data (and other Line of Business administrations) accessible. The rich feature set of FAST ESP thereby makes it possible to offer your employees an intuitive, search-driven user experience on top of the SAP data.

                COMING SOON – The “User Poll” Web Part for SharePoint 2010 & 2013

The User Poll Web Part provides your SharePoint environment with a set of web parts that allow your end users to create simple polls. It does this without the hassle of the standard SharePoint surveys, which are not intended for creating a simple one-question poll.

                The User Poll Web Part is a poll web part for SharePoint and it allows site users to quickly create polls anywhere in the Site Collection. The poll Web Part is designed to provide a user friendly interface: Important settings and actions are available from within the Web Part.

                There is no direct need to work with the SharePoint Web Part setting menu and poll data is managed from normal SharePoint lists. Administrators can manage and keep track of all created polls from a centralized list.

                A standard SharePoint installation also comes with a polling mechanism as part of the Survey Lists, but these surveys are complicated and require quite some time to configure.

The User Poll Web Part allows users to set up a single-topic poll within a few minutes.

                The roadmap for the project is provided below.

                Basic functionality

                • Poll settings are configured directly from the web part display or SharePoint lists
                • Publish and unpublish functionality

                 

                Project road map:

                • Release production build of The User Poll Web Part 2013
                • Automated security management on the poll response and answer list
                • Result view only web part
                • Add multiple HTML5 chart options (currently only horizontal bar)
                • Documentation

                Contact me at tomas.floyd@outlook.com!!

                https://sharepointsamurai.wordpress.com/

                Client-side PowerShell for SharePoint Online and Office 365

SharePoint PowerShell (SPPS) is a PowerShell API for SharePoint 2010, 2013 and Online. It is very useful for Office 365 and private clouds where you don't have access to the physical server.


The API uses the Managed .NET Client-Side Object Model (CSOM) of SharePoint 2013. It is a library of PowerShell scripts that, at its core, talks to the CSOM DLLs.
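To illustrate what these scripts wrap, here is a minimal, hypothetical C# sketch (not part of SPPS) that uses the same CSOM assemblies directly to connect to a site and read its title; the site URL and account are the same placeholders used in the PowerShell examples below.

// Requires references to Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll
using System;
using System.Security;
using Microsoft.SharePoint.Client;

class CsomSketch
{
    static void Main()
    {
        // Placeholder credentials – replace with real values
        var password = new SecureString();
        foreach (char c in "password") password.AppendChar(c);

        using (var ctx = new ClientContext("https://example.sharepoint.com/"))
        {
            ctx.Credentials = new SharePointOnlineCredentials(
                "sitecollectionadmin@example.onmicrosoft.com", password);

            ctx.Load(ctx.Web);   // queue the web object for retrieval
            ctx.ExecuteQuery();  // one round trip to the server

            Console.WriteLine(ctx.Web.Title);
        }
    }
}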

Examples:

Connect to a site collection:

Import-Module .\spps.psm1

Initialize-SPPS -siteURL "https://example.sharepoint.com/" -online $true -username "sitecollectionadmin@example.onmicrosoft.com" -password "password"

Full example:
                # Include SPPS
                Import-Module .\spps.psm1 
                
                # Setup SPPS
                Initialize-SPPS -siteURL "https://example.sharepoint.com/" -online $true -username "sitecollectionadmin@example.onmicrosoft.com" -password "password"
                
                # Activate Publishing Site Feature
                Activate-Feature -featureId "f6924d36-2fa8-4f0b-b16d-06b7250180fa" -force $false -featureDefinitionScope "Site"
                
                #Activate Publishing Web Feature
                Activate-Feature -featureId "94c94ca6-b32f-4da9-a9e3-1f3d343d7ecb" -force $false -featureDefinitionScope "Web"

                Features

                • Site Collection
                  • Test connection
                • Site
                  • Manage subsites
                  • Manage permissions
                • Lists and Document Libraries
                  • Create list and document library
                  • Manage fields
                  • Manage list and list item permissions
                  • Upload files to document library (including folders)
                  • Add items to a list with CSV
                  • Add and remove list items
                  • File check-in and check-out
                • Master pages
                  • Set system master page
                  • Set custom master page
                • Features
                  • Activate features
                • Web Parts
                  • Add Web Parts to page
                • Users and Groups
                  • Set site permissions
                  • Set list permissions
                  • Set document permissions
                  • Create SharePoint groups
                  • Add users and groups to SharePoint groups
                • Solutions
                  • Upload sandboxed solutions
                  • Activate sandboxed solutions

Contact me at tomas.floyd@outlook.com for this and more Azure, SharePoint & Office 365 tools, web parts and apps.

SharePoint development roles urgently need to be filled at an MS Gold Partner – contact me now for more information (sorry, no recruiters; I am filling private positions)

Senior SharePoint Developers needed urgently for an MS Gold Partner in Sandton/Bryanston:

• 3 – 5 years of development experience
• 2 years' experience in SharePoint
• 3 years' experience in C#
• A minimum of 3 years' experience in Visual Studio .NET 2005 – 2008
• A minimum of 3 years' experience in ASP.NET and HTML web development
• A minimum of 3 years' experience with JavaScript
• A minimum of 3 years' experience with Windows XP, Windows 2003 and Windows Vista
• A minimum of 3 years' experience in relational database design and implementation with SQL Server

Advantageous (nice-to-have):

• Windows SharePoint Server
• Microsoft Office SharePoint Server
                • BizTalk
                • Web Analytics
                • Microsoft CRM
                • K2

Various Senior .NET Developer positions available at an MS Gold Partner, part of the Britehouse Group! Contact me now! (Sorry, no recruiters – I am filling private positions)

Required (non-negotiable):

• A minimum of 4 years' experience developing code in C# and/or VB.NET and ASP.NET
• A minimum of 48 months' Visual Studio 2005 / 2006 and/or 2008 experience
• A minimum of 48 months' Transact-SQL (stored procedures, views and triggers) experience
• A minimum of 48 months' relational database design and implementation experience using MS SQL Server 2000 / 2005 and/or 2008
• A minimum of 48 months' HTML experience
• A minimum of 48 months' JavaScript experience
• 5 years' experience leading a development team
• Proficiency in technical architecture and high-level design, as well as test framework design and implementation

Senior Developers must be able to perform as Tech-Lead developers, with the following tasks:

• Technical lead for development, design and implementation of .NET based solutions as part of the projects team
• Collaborate with Developers, Account Managers and Project Managers
• Estimate development tasks and execute well on project schedules
• Interact with clients to create requirement specifications for projects
• Innovate new solutions and keep up with emerging technologies
• Mentoring of other developers
Advantageous (nice-to-have):

• 3-year computer science degree or equivalent
• Windows SharePoint Server
• Microsoft Office SharePoint Server
• BizTalk
• Microsoft CRM
• Experience in web analytics

                Using jQuery Mobile and ASP.NET to Pass Data from one Page to another

                ASP.NET developers working on either Web Forms or ASP.NET MVC can integrate jQuery Mobile into their Web sites to create rich mobile Web apps. jQuery Mobile is a lightweight JavaScript framework for developing cross-platform mobile/device Web applications.

In mobile application development, you as a developer must think about how to utilize the available space in the best possible manner. For this purpose the UI sometimes needs to be divided into separate pages, and you may need to transfer value(s) entered on one page to a second page for further processing.

                Consider a scenario where the end user is asked to select the Product Category on Page 1 and based upon that, Page 2 displays products under that category. To get this done, values must be passed from page 1 to page 2.

In jQuery Mobile, the $.mobile object provides the changePage() method, which accepts the page URL as the first parameter and the values to be transferred as JSON-based data in the second parameter. In this article, we will see how values are passed from one page to another in an ASP.NET application.

Step 1: Open Visual Studio and create a new empty ASP.NET web application; name it 'ASPNET_Mobile_PassingValuesAcrossPages'. In this application, add the latest jQuery and jQuery Mobile libraries using the NuGet Package Manager.

Step 2: Add a new class file to the project, name it 'ModelClasses.cs', and add the following classes to it:

// Requires: using System.Collections.Generic; using System.Linq;

public class Category
{
    public int CategoryId { get; set; }
    public string CategoryName { get; set; }
}

public class Product
{
    public int ProductId { get; set; }
    public string ProductName { get; set; }
    public int Price { get; set; }
    public int CategoryId { get; set; }
}

public class CategoryDataStore : List<Category>
{
    public CategoryDataStore()
    {
        Add(new Category() { CategoryId = 1, CategoryName = "Food Items" });
        Add(new Category() { CategoryId = 2, CategoryName = "Home Appliances" });
        Add(new Category() { CategoryId = 3, CategoryName = "Electronics" });
        Add(new Category() { CategoryId = 4, CategoryName = "Wear" });
    }
}

public class CategoryDataSource
{
    public List<Category> GetCategories()
    {
        return new CategoryDataStore();
    }
}

public class ProductDataStore : List<Product>
{
    public ProductDataStore()
    {
        Add(new Product() { ProductId = 111, ProductName = "Apples", Price = 300, CategoryId = 1 });
        Add(new Product() { ProductId = 112, ProductName = "Date", Price = 600, CategoryId = 1 });
        Add(new Product() { ProductId = 113, ProductName = "Fridge", Price = 34000, CategoryId = 2 });
        Add(new Product() { ProductId = 114, ProductName = "T.V.", Price = 30000, CategoryId = 2 });
        Add(new Product() { ProductId = 115, ProductName = "Laptop", Price = 72000, CategoryId = 3 });
        Add(new Product() { ProductId = 116, ProductName = "Mobile", Price = 40000, CategoryId = 3 });
        Add(new Product() { ProductId = 117, ProductName = "T-Shirt", Price = 800, CategoryId = 4 });
        Add(new Product() { ProductId = 118, ProductName = "Jeans", Price = 700, CategoryId = 4 });
    }
}

public class ProductDataSource
{
    public List<Product> GetProductsByCategoryId(int catId)
    {
        var Products = from p in new ProductDataStore()
                       where p.CategoryId == catId
                       select p;

        return Products.ToList();
    }
}

The above code provides the Category and Product entities with sample data stored in them. Each Product belongs to a Category via its CategoryId.
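As a quick, hypothetical sanity check (not part of the article's code), the data source classes above can be exercised directly; the names and values come from the classes just shown.

var categories = new CategoryDataSource().GetCategories();
// 4 categories: Food Items, Home Appliances, Electronics, Wear

var foodItems = new ProductDataSource().GetProductsByCategoryId(1);
// 2 products in category 1: Apples (300) and Date (600)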

                Step 3: In the project, add a WEB API controller class with the following code:

// Requires: using System.Collections.Generic; using System.Web.Http;

public class CategoryProductController : ApiController
{
    CategoryDataSource objCatDs;
    ProductDataSource objPrdDs;

    public CategoryProductController()
    {
        objCatDs = new CategoryDataSource();
        objPrdDs = new ProductDataSource();
    }

    public IEnumerable<Category> Get()
    {
        return objCatDs.GetCategories();
    }

    public IEnumerable<Product> Get(int id)
    {
        return objPrdDs.GetProductsByCategoryId(id);
    }
}

                Step 4: To define routing for the WEB API class, add a new Global Application Class (Global.asax) in the project and add the following code in the Application_Start method:

// In Global.asax.cs – requires: using System.Web.Http; using System.Web.Routing;

protected void Application_Start(object sender, EventArgs e)
{
    RouteTable.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{id}",
        defaults: new { id = System.Web.Http.RouteParameter.Optional }
    );
}
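With this default route in place, requests resolve to the controller actions from Step 3 roughly as sketched below. This is an illustrative, hypothetical snippet, not part of the article's code.

// GET /api/CategoryProduct    -> CategoryProductController.Get()   (all categories)
// GET /api/CategoryProduct/3  -> CategoryProductController.Get(3)  (products with CategoryId = 3)

// The same actions can also be called directly, e.g. from a quick test:
var controller = new CategoryProductController();
var allCategories = controller.Get();  // Food Items, Home Appliances, Electronics, Wear
var electronics = controller.Get(3);   // Laptop, Mobile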

Step 5: Add two HTML pages and name them Page_Category.html and Page_Products.html.

Step 6: In both HTML pages, add the jQuery and jQuery Mobile references:

<link href="Content/jquery.mobile.structure-1.3.1.min.css" rel="stylesheet" />
<link href="Content/jquery.mobile.theme-1.3.1.min.css" rel="stylesheet" />
<script src="Scripts/jquery-2.0.2.min.js"></script>
<script src="Scripts/jquery.mobile-1.3.1.min.js"></script>

                Add the following HTML in the Page_Category.html page:

<div data-role="page" id="catpage">
    <div data-role="header">
        <h1>Select Category from List</h1>
    </div>
    <div data-role="content">
        <div>
            <select id="lstcat" data-native-menu="false">
                <option>List of Categories...</option>
            </select>
        </div>
        <br />
        <br />
        <input type="button" data-icon="search"
               value="Get Products" data-inline="true" id="btngetproducts" />
    </div>
</div>

The above code defines a page using a <div> whose data-role attribute is set to page. The page contains a <select> for displaying the list of categories and an <input> button whose click event transfers control to the Page_Products.html page.

In Page_Category.html, add the following script. The loadlistview method makes a call to the WEB API and retrieves the categories, which are added to the <select> with id lstcat. This method runs when the pageinit event fires. On the click event of the button, the category value and name are passed to Page_Products.html.

<script type="text/javascript">
$(document).on("pageinit", "#catpage", function () {

    // Pass the data to the other page
    $("#btngetproducts").on('click', function () {
        var categoryId = $("#lstcat").val();
        var categoryName = $("#lstcat").find(":selected").text();
        $.mobile.changePage("Page_Products.html", { data: { "catid": categoryId, "catname": categoryName } });
    });

    loadlistview();

    // Function to load all categories
    function loadlistview() {
        $.ajax({
            type: "GET",
            url: "/api/CategoryProduct",
            contentType: "application/json; charset=utf-8",
            dataType: "json"
        }).done(function (data) {
            // Convert JSON into an array
            var array = $.map(data, function (i, idx) {
                return [[i.CategoryId, i.CategoryName]];
            });

            // Add each category name to the list
            var res = "";
            $.each(array, function (idx, cat) {
                res += '<option value="' + cat[0] + '">' + cat[1] + '</option>';
            });
            $("#lstcat").append(res);
            $("#lstcat").trigger("change");

        }).fail(function (err) {
            alert("Error " + err.status);
        });
    }
});
</script>

                Just look at this piece of code:

$.mobile.changePage("Page_Products.html", { data: { "catid": categoryId, "catname": categoryName } });

This code is responsible for passing the CategoryId and CategoryName to Page_Products.html as a JSON expression.

                Step 7: Open Page_Products.html and add the following HTML markup:

<div data-role="page" id="prodpage" data-add-back-btn="true">
    <div data-role="header">
        <h1>Products in Category</h1>
        <span id="catName"></span>
    </div>
    <table style="border:double" data-role="table" id="tblprd">
        <thead>
            <tr>
                <td style='width:100px'>ProductId</td>
                <td style='width:100px'>ProductName</td>
                <td style='width:100px'>Price</td>
            </tr>
        </thead>
        <tbody>
        </tbody>
    </table>
</div>

Here the <div> with id prodpage is given the attribute data-role="page". The attribute data-add-back-btn="true" adds a BACK button to the page; clicking it navigates back to Page_Category.html. This is the default behaviour provided by jQuery Mobile.

The important thing now is how to read, on Page_Products.html, the values sent in the URL from Page_Category.html. Unlike ASP.NET, jQuery offers no built-in way to read values passed from one page to another, so we create a readUrlHelper helper method. Using JavaScript string functions, it parses the URL and extracts the key/value pairs passed from Page_Category.html; for example, given the URL Page_Products.html?catid=1&catname=Food+Items, readUrlHelper(url, "catname") returns "Food Items". In Page_Products.html, add this method in a <script> block inside the page <div> with id prodpage.

// This helper function reads the URL as a string
// and returns the value of a named query parameter
function readUrlHelper(pageurl, queryParamName) {
    var queryParamValue = "";

    // Get the total URL length
    var stringLength = pageurl.length;

    // Get the index of ? in the URL
    var indexOfQM = pageurl.indexOf("?");

    // Get the length of the string after ?
    var stringAfterQM = stringLength - indexOfQM;

    // Get the remaining string after ?
    var strBeforeQM = pageurl.substr(indexOfQM + 1, stringAfterQM);

    // Split the remaining string on the & sign
    var data = strBeforeQM.split("&");

    // Iterate through the array of strings after the split
    $.each(data, function (idx, val) {
        // Split the string on the = sign
        var queryExpression = val.split("=");

        if (queryExpression[0] == queryParamName) {
            queryParamValue = queryExpression[1];
            // The query string encodes spaces as '+', so replace
            // every '+' with a blank space ' '
            queryParamValue = queryParamValue.replace(/\+/g, ' ');
        }
    });

    return queryParamValue;
}

To retrieve the products for the selected CategoryId, add a new method loadproducts inside prodpage. This method calls the WEB API service, passing the CategoryId, and then calls the generatetable helper method to build the HTML table rows that display the products from the JSON data it receives:

// Method to call the WEB API and retrieve
// products based upon the CategoryId
function loadproducts(id) {
    $.ajax({
        type: "GET",
        url: "/api/CategoryProduct/" + id,
        contentType: "application/json; charset=utf-8",
        dataType: "json"
    }).done(function (data) {
        generatetable(data);
    }).fail(function (err) { });
}

// Method to generate HTML table rows from the JSON data
function generatetable(data) {
    // Convert JSON into an array
    var array = $.map(data, function (i, idx) {
        return [[i.ProductId, i.ProductName, i.Price]];
    });
    var tblbody = $("#tblprd > tbody");

    var tblhtml = "";
    // Generate a table row for each product
    for (var i = 0; i < array.length; i++) {
        tblhtml = tblhtml + "<tr>";
        for (var j = 0; j < array[i].length; j++) {
            tblhtml = tblhtml + "<td style='width:100px'>" + array[i][j] + "</td>";
        }
        tblhtml = tblhtml + "</tr>";
    }

    tblbody.append(tblhtml);
    $("#tblprd").table("refresh");
}

Now, inside prodpage, subscribe to the pageshow event, which calls the readUrlHelper method and then the loadproducts method:

var catId;
$("#prodpage").on("pageshow", function (e) {
    var pgurl = $("#prodpage").attr("data-url");

    // Read the value for catid passed from Page_Category.html
    catId = readUrlHelper(pgurl, "catid");

    // Read the value for catname passed from Page_Category.html
    var catName = readUrlHelper(pgurl, "catname");

    $("#catName").text(catName);

    loadproducts(catId);
});

The above code reads the CategoryId and CategoryName passed by Page_Category.html and, based upon them, displays the products on the page.

Make Page_Category.html the startup page and run the application in a browser (IE9, Firefox or Chrome) or in the Opera Mobile Emulator.

On Page_Category.html, select a category as shown below:

[Screenshot: category list on Page_Category.html]

When you click Get Products, you are navigated to Page_Products.html with the following URL:

                http://localhost:5506/Page_Products.html?catid=1&catname=Food+Items

The URL contains the key/value pairs for CategoryId (catid=1) and CategoryName (catname=Food+Items).

In Page_Products.html, the readUrlHelper method reads the CategoryId and CategoryName and, based upon this data, the products are displayed as below:

[Screenshot: products for the selected category on Page_Products.html]

Conclusion: In a mobile application it is very important for the developer to manage the UI and take care of data communication across pages. If the data passed is complex (more than one key/value pair), the URL must be parsed carefully. In this article, we saw how to use jQuery Mobile and ASP.NET to transfer values from one page to another.

                SharePoint Samurai