Category Archives: SharePoint Online

How To : Plan the Deployment of Farm Solutions for SharePoint 2013

SharePoint 2013

While everyone is talking about Apps, there are still significant investments in Full Trust Solutions (a.k.a. Farm Solutions) and I am sure that many OnPrem deployments will want to carry these forward when upgrading to SharePoint 2013.  The new SharePoint 2013 upgrade model allows Sites to continue to run in 2010 mode after upgrading and each Site Collection explicitly has to be upgraded individually.

This is not the way it worked in 2010 with Visual Upgrade; this time there are actually both a 14 and a 15 Root folder deployed, and all the Features and Layout files from SharePoint 2010 are deployed as part of the 2013 installation.

For those of you new to SharePoint, the root folder is where SharePoint keeps most of its application files. The default location for this is “C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\[SharePoint Internal Version]”, where the versions for the last releases have been 60 (6.0), 12, 14, and now 15. The location is also known as “the xx hive”.

This is great in an upgrade scenario, where you may want to do a platform upgrade first or only want to share the new features of 2013 with a few users while maintaining an unchanged experience for the rest of the organization.  This also gives us the opportunity to have different functionality and features for sites running in 2010 and 2013 mode.  However, this requires some extra thought in the development and deployment process that I will give an introduction to here.

Because you can now have Sites running in both 2010 and 2013 mode, SharePoint 2013 introduces a new concept of a Compatibility Level.  Right now it can only be 14 or 15, but you can imagine that there is room for growth.  This Compatibility Level is available at Site Collection and Site (web) level and can be used in code constructs and PowerShell commands.  I will start by explaining how you use it while building and deploying wsp-files for SharePoint 2013 and then finish off with a few things to watch out for and some code tips.
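To make that concrete, here is a quick PowerShell sketch (the URLs are just placeholders) that lists the Compatibility Level of each site collection in a web application and then upgrades one of them to 2013 mode:

# List the compatibility level of every site collection in a web application.
Get-SPSite -WebApplication "http://intranet.contoso.com" -Limit All |
    Select-Object Url, CompatibilityLevel

# Upgrade a single site collection from 14 (2010) mode to 15 (2013) mode.
Upgrade-SPSite "http://intranet.contoso.com/sites/teamsite" -VersionUpgrade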

Deployment Considerations

If you take your wsp-files from SharePoint 2010 and just deploy these with Add-SPSolution -> Install-SPSolution as you did in 2010, then SharePoint will assume it is a 2010 solution or a “14” mode solution.  If the level is not specified in the PowerShell command, it determines the level based on the value of the SharePointProductVersion attribute in the Solution manifest file of the wsp-package.  The value can currently be 15.0 or 14.0. If this attribute is missing, it will assume 14.0 (SharePoint 2010) and since this attribute did not exist in 2010, only very well informed people will have this included in existing packages.

For PowerShell cmdlets related to installing solutions and features, there is a new parameter called CompatibilityLevel. This can override the settings of the package itself and can assume the following values: 14, 15, New, Old, All and “14,15” (the latter currently also means All).

The parameter is available for Install-SPSolution, Uninstall-SPSolution, Install-SPFeature and Uninstall-SPFeature.  There is no way to specify “All” versions in the package itself – only the intended target – and therefore these parameters need to be specified if you want to deploy to both targets.
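As a minimal sketch of what this looks like (the wsp name, path and web application URL are placeholders), deploying a package so that both 2010 and 2013 mode sites get the Template files could look like this:

# Add the package to the farm solution store.
Add-SPSolution -LiteralPath "C:\Deploy\Contoso.Intranet.wsp"

# Deploy the Template files to both the 14 and 15 Root folders.
Install-SPSolution -Identity "Contoso.Intranet.wsp" -GACDeployment `
    -WebApplication "http://intranet.contoso.com" -CompatibilityLevel All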

It is important to note that Compatibility Level impacts only files deployed to the Templates folder in the 14/15 Root folder. That is:  Features, Layouts-files, Images, ControlTemplates, etc.

This means that files outside of this folder (e.g. a WCF Service deployed to the ISAPI folder) will be deployed to the 15/ISAPI no matter what level is set in the manifest or PowerShell.  Files such as Assemblies in GAC/Bin and certain resource files will also be deployed to the same location regardless of the Compatibility Level.

It is possible to install the same solution in both 14 and 15 mode, but only if it is done in the same command – specifying Compatibility Level as either “All” or “14,15”.  If it is first deployed with 14 and then with 15, it will throw an exception.  It can be installed with the –Force parameter, but this is not recommended as it could hide other errors and lead to an unknown state for the system.

The following three diagrams illustrate where files go depending on the parameters and attributes set. Thanks to the Ignite Team for creating these; I made some small changes from the originals to emphasize a few points.

[Diagram: CompatibilityLevel Old]
[Diagram: CompatibilityLevel New]
[Diagram: CompatibilityLevel All]

When retracting the solutions, there is also an option to specify Compatibility Level.  If you do not specify this, it will retract all – both 14 and 15 files if installed.  When deployed to both levels, you can retract one, but the really important thing to understand here is that it will not only retract the files from the version folder, but also all version neutral files – such as Assemblies, ISAPI deployed files, etc. – leaving only the files from the Root folder you did not retract.
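A sketch of a scoped retraction (placeholder names again) – keep in mind that the version neutral files described above come out with it:

# Retract only the 14 mode Template files; version neutral files (GAC, ISAPI, etc.) are removed as well.
Uninstall-SPSolution -Identity "Contoso.Intranet.wsp" `
    -WebApplication "http://intranet.contoso.com" -CompatibilityLevel 14

# Remove the package from the solution store once it has been retracted from every level.
Remove-SPSolution -Identity "Contoso.Intranet.wsp"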

To plan for this, my suggestion would be the following during development/deployment:

  • If you want to only run sites in 2013 mode, then deploy the Solutions with CompatibilityLevel 15 or SharePointProductVersion 15.0.
  • If you want to run with both 2010 and 2013 mode, and want to share features and layout files, then deploy to both (All or “14,15”).
  • If you want to differentiate the files and features that are used in 2010 and 2013 mode, then the solutions should be split into two or three solutions:
    • One solution (“Xxx – SP2010”), which contains the files and features to be deployed to the 14 folder for 2010 mode, including code-behind (for things like feature activation and Application pages), but excluding shared assemblies and files.
    • One solution (“Xxx – SP2013”), which contains the files and features to be deployed to the 15 folder for 2013 mode, including code-behind (for things like feature activation and Application pages), but excluding shared assemblies and files.
    • One solution (“Xxx – Common”), which contains shared files (e.g. common assemblies or web services). This solution would also include all WebApplication scoped features such as bin-deployed assemblies and assemblies with SafeControl entries.
  • If you only want to have two solutions for various reasons, the Common solution can be joined with the SP2013 solution as this is likely to be the one you will keep the longest.
  • The assemblies used as code files for the artifacts in SP2010 and SP2013 need to have different names, or at least different versions, to differentiate them. Web Parts need to go in the Common package and should be shared across the versions; however, the installed Web Part templates can be unique to the version mode.

Things to watch out for…

There are a few issues that are worth being aware of that may be fixed in future updates, but you’ll need to watch out for these currently.  I’ve come across an issue where installing the same solution in both levels can go wrong.  If you install it with level All and then uninstall it with level 14 two times, the deployment logic will think that it completely removed the solution, but the files in the 15/Templates folder will still be there.

To recover from this, you can install it with –Force in the orphan level and then uninstall it.  Again, it is better to not get in this situation.
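If you do end up there, the recovery could look something like the following sketch (placeholder solution name; add -WebApplication if your package is web application scoped, and assuming 15 is the orphaned level):

# Force the solution back into the orphaned level, then retract it cleanly.
Install-SPSolution -Identity "Contoso.Intranet.wsp" -GACDeployment -CompatibilityLevel 15 -Force
Uninstall-SPSolution -Identity "Contoso.Intranet.wsp" -CompatibilityLevel 15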

Another scenario that can get you in trouble is if you install a solution in one Compatibility Level (either through PowerShell Parameter or manifest file attribute) and then uninstall with the other level.  It will then remove the common files but leave the specific 14 or 15 folder files and display the solution as fully retracted.

Unfortunately there is no public API to query which Compatibility Levels a package is deployed to.  So you need to get it right the first time or as quickly as possible move to native 2013 mode and packages (this is where we all want to be anyway).

Code patterns

An additional tip is to look for hard coded paths in your custom code, such as _layouts and _controltemplates.  The SPUtility class has been updated with static methods to help you build the correct location based on the upgrade status of the Site.  For example, SPUtility.ContextLayoutsFolder will give you the path to the correct layouts folder.  See the reference article on SPUtility properties for more examples.

Round up

I hope this gave you an insight into some of the things you need to consider when deploying Farm Solutions for SharePoint 2013. There are lots of scenarios that are not covered here. If you find some, please share them or share your concerns, and I will try to add them as comments or an additional post.

How To : Implement Business Data Connectivity in SharePoint 2013

Business Data Connectivity

Business Connectivity Services is a centralized infrastructure in SharePoint 2013 and Office 2013 that supports integrated data solutions. With Business Connectivity Services, you can use SharePoint 2013 and Office 2013 clients as interfaces into data that doesn’t live in SharePoint 2013 itself. For example, this external data may be in a database and it is accessed by using the out-of-the-box Business Connectivity Services connector for that database.


Business Connectivity Services can also connect to data that is available through a web service, data that is published as an OData source, and many other types of external data. Business Connectivity Services does this through out-of-the-box or custom connectors.

External Content Types in BCS

External content types are the core of BCS. They enable you to manage and reuse the metadata and behaviors of a business entity, such as Customer or Order, from a central location. They enable users to interact with that external data and process it in a more meaningful way.

For more information about using external content types in BCS, see External content types in SharePoint 2013.

How to Connect With SQL External Data Source

Open SharePoint Designer 2013 and click the Open Site icon:

Enter the URL of the site we need to open:

Enter your site credentials here:

Now we need to create the new external content type. Here we have options for changing the name of the content type and creating the connection to the external data source:

Click the hyperlink text “Click here to discover external data source operations”, and this window will open:

Click the “Add Connection” button to create a new connection. Here we have different options for the data source type: .NET Type, SQL Server, or WCF Service.

Here we selected SQL Server; now we need to provide the server credentials:

Now, we can see all the tables and views from the database.

In this screen, we have the options for creating different types of operations against the database:

Click on the next button:

Parameters Configurations:

Options for Filter parameters Configuration:

Now we need to add a new External List. Click “External List”:

Select the Site here and click ok button:

Enter the list name here and click ok button:

After that, refresh the SharePoint site. We can see the external list here; click on the list:

Here we have the error message “Access denied by Business Connectivity.”

Solution for this Error

In SharePoint Central Administration, click Manage service applications:

Click on the Business Data Connectivity Service:

Set the permission for this list:

Click ok after setting the permissions:

After that, refresh the site and hope this will work… but again, there is a problem. The error message is: Login failed for user “NT AUTHORITY\ANONYMOUS LOGON”.

Solution for this Error

We need to edit the connection properties and set the Authentication Mode to the value ‘BDC Identity’.

Then follow the steps below.

Open PowerShell and type the following lines:

$bdc = Get-SPServiceApplication |
    where {$_ -match "Business Data Connectivity Service"}
$bdc.RevertToSelfAllowed = $true
$bdc.Update()

Now it’s working fine.

And there is a chance for one more error like:

Database Connector has throttled the response.
The response from database contains more than '2000' rows. 
The maximum number of rows that can be read through Database Connector is '2000'. 
The limit can be changed via the 'Set-SPBusinessDataCatalogThrottleConfig' cmdlet

This is because the number of records in the table exceeds the default throttle limit.

Solution for this Error

Follow the below steps:

Open PowerShell and type the following lines and execute:

$bcs = Get-SPServiceApplicationProxy | where {
    $_.GetType().FullName -eq 'Microsoft.SharePoint.BusinessData.SharedService.BdcServiceApplicationProxy'
}
$BCSThrottle = Get-SPBusinessDataCatalogThrottleConfig -Scope database -ThrottleType items -ServiceApplicationProxy $bcs
Set-SPBusinessDataCatalogThrottleConfig -Identity $BCSThrottle -Maximum 1000000 -Default 20000

How to: Create a provider-hosted app for SharePoint to access SAP data via SAP Gateway for Microsoft

You can create an app for SharePoint that reads and writes SAP data, and optionally reads and writes SharePoint data, by using SAP Gateway for Microsoft and the Azure AD Authentication Library for .NET. This article provides the procedures for how you can design the app for SharePoint to get authorized access to SAP.



The following are prerequisites to the procedures in this article:


Code sample: SharePoint 2013: Using the SAP Gateway to Microsoft in an app for SharePoint

OAuth 2.0 in Azure AD enables applications to access multiple resources hosted by Microsoft Azure, and SAP Gateway for Microsoft is one of them. With OAuth 2.0, applications, in addition to users, are security principals. Application principals require authentication and authorization to protected resources in addition to (and sometimes instead of) users. The process involves an OAuth “flow” in which the application, which can be an app for SharePoint, obtains an access token (and refresh token) that is accepted by all of the Microsoft Azure-hosted services and applications that are configured to use Azure AD as an OAuth 2.0 authorization server. The process is very similar to the way that the remote components of a provider-hosted app for SharePoint get authorization to SharePoint, as described in Creating apps for SharePoint that use low-trust authorization and its child articles. However, that authorization system uses Microsoft Azure Access Control Service (ACS) as the trusted token issuer rather than Azure AD.

Tip
If your app for SharePoint accesses SharePoint in addition to accessing SAP Gateway for Microsoft, then it will need to use both systems: Azure AD to get an access token to SAP Gateway for Microsoft and the ACS authorization system to get an access token to SharePoint. The tokens from the two sources are not interchangeable. For more information, see Optionally, add SharePoint access to the ASP.NET application.

For a detailed description and diagram of the OAuth flow used by OAuth 2.0 in Azure AD, see Authorization Code Grant Flow. (For a similar description, and a diagram, of the flow for accessing SharePoint, see the steps in the Context Token flow.)

Create the Visual Studio solution

  1. Create an App for SharePoint project in Visual Studio with the following steps. (The continuing example in this article assumes you are using C#; but you can start an app for SharePoint project in the Visual Basic section of the new project templates as well.)
    1. In the New app for SharePoint wizard, name the project and click OK. For the continuing example, use SAP2SharePoint.
    2. Specify the domain URL of your Office 365 Developer Site (including a final forward slash) as the debugging site; for example, https://<O365_domain>.sharepoint.com/. Specify Provider-hosted as the app type. Click Next.
    3. Choose a project type. For the continuing example, choose ASP.NET Web Forms Application. (You can also make ASP.NET MVC applications that access SAP Gateway for Microsoft.) Click Next.
    4. Choose Azure ACS as the authentication system. (Your app for SharePoint will use this system if it accesses SharePoint. It does not use this system when it accesses SAP Gateway for Microsoft.) Click Finish.
  2. After the project is created, you are prompted to login to the Office 365 account. Use the credentials of an account administrator; for example Bob@<O365_domain>.onmicrosoft.com.
  3. There are two projects in the Visual Studio solution; the app for SharePoint proper project and an ASP.NET web forms project. Add the Active Directory Authentication Library (ADAL) package to the ASP.NET project with these steps:
    1. Right-click the References folder in the ASP.NET project (named SAP2SharePointWeb in the continuing example) and select Manage NuGet Packages.
    2. In the dialog that opens, select Online on the left. Enter Microsoft.IdentityModel.Clients.ActiveDirectory in the search box.
    3. When the ADAL library appears in the search results, click the Install button beside it, and accept the license when prompted.
  4. Add the Json.net package to the ASP.NET project with these steps:
    1. Enter Json.net in the search box. If this produces too many hits, try searching on Newtonsoft.json.
    2. When Json.net appears in the search results, click the Install button beside it.
  5. Click Close.

Register your web application with Azure AD

  1. Log in to the Azure Management portal with your Azure administrator account.
    Note
    For security purposes, we recommend against using an administrator account when developing apps.
  2. Choose Active Directory on the left side.
  3. Click on your directory.
  4. Choose APPLICATIONS (on the top navigation bar).
  5. Choose Add on the toolbar at the bottom of the screen.
  6. On the dialog that opens, choose Add an application my organization is developing.
  7. On the ADD APPLICATION dialog, give the application a name. For the continuing example, use ContosoAutomobileCollection.
  8. Choose Web Application And/Or Web API as the application type, and then click the right arrow button.
  9. On the second page of the dialog, use the SSL debugging URL from the ASP.NET project in the Visual Studio solution as the SIGN-ON URL. You can find the URL using the following steps. (You need to register the app initially with the debugging URL so that you can run the Visual Studio debugger (F5). When your app is ready for staging, you will re-register it with its staging Azure Web Site URL. See Modify the app and stage it to Azure and Office 365.)
    1. Highlight the ASP.NET project in Solution Explorer.
    2. In the Properties window, copy the value of the SSL URL property. An example is https://localhost:44300/.
    3. Paste it into the SIGN-ON URL on the ADD APPLICATION dialog.
  10. For the APP ID URI, give the application a unique URI, such as the application name appended to the end of the SSL URL; for example https://localhost:44300/ContosoAutomobileCollection.
  11. Click the checkmark button. The Azure dashboard for the application opens with a success message.
  12. Choose CONFIGURE on the top of the page.
  13. Scroll to the CLIENT ID and make a copy of it. You will need it for a later procedure.
  14. In the keys section, create a key. It won’t appear initially. Click SAVE at the bottom of the page and the key will be visible. Make a copy of it. You will need it for a later procedure.
  15. Scroll to permissions to other applications and select your SAP Gateway for Microsoft service application.
  16. Open the Delegated Permissions drop down list and enable the boxes for the permissions to the SAP Gateway for Microsoft service that your app for SharePoint will need.
  17. Click SAVE at the bottom of the screen.

Configure the application to communicate with Azure AD

  1. In Visual Studio, open the web.config file in the ASP.NET project.
  2. In the <appSettings> section, the Office Developer Tools for Visual Studio have added elements for the ClientID and ClientSecret of the app for SharePoint. (These are used in the Azure ACS authorization system if the ASP.NET application accesses SharePoint. You can ignore them for the continuing example, but do not delete them. They are required in provider-hosted apps for SharePoint even if the app is not accessing SharePoint data. Their values will change each time you press F5 in Visual Studio.) Add the following two elements to the section. These are used by the application to authenticate to Azure AD. (Remember that applications, as well as users, are security principals in OAuth-based authentication and authorization systems.)
    <add key="ida:ClientID" value="" />
    <add key="ida:ClientKey" value="" />
    
  3. Insert the client ID that you saved from your Azure AD directory in the earlier procedure as the value of the ida:ClientID key. Leave the casing and punctuation exactly as you copied it and be careful not to include a space character at the beginning or end of the value. For the ida:ClientKey key use the key that you saved from the directory. Again, be careful not to introduce any space characters or change the value in any way. The <appSettings> section should now look something like the following. (The ClientId value may have a GUID or an empty string.)
    <appSettings>
      <add key="ClientId" value="" />
      <add key="ClientSecret" value="LypZu2yVajlHfPLRn5J2hBrwCk5aBOHxE4PtKCjIQkk=" />
      <add key="ida:ClientID" value="4da99afe-08b5-4bce-bc66-5356482ec2df" />
      <add key="ida:ClientKey" value="URwh/oiPay/b5jJWYHgkVdoE/x7gq3zZdtcl/cG14ss=" />
    </appSettings>
    
    Note
    Your application is known to Azure AD by the “localhost” URL you used to register it. The client ID and client key are associated with that identity. When you are ready to stage your application to an Azure Web Site, you will re-register it with a new URL.
  4. Still in the appSettings section, add an Authority key and set its value to the Office 365 domain (some_domain.onmicrosoft.com) of your organizational account. In the continuing example, the organizational account is Bob@<O365_domain>.onmicrosoft.com, so the authority is <O365_domain>.onmicrosoft.com.
    <add key="Authority" value="<O365_domain>.onmicrosoft.com" />
    
  5. Still in the appSettings section, add an AppRedirectUrl key and set its value to the page that the user’s browser should be redirected to after the ASP.NET app has obtained an authorization code from Azure AD. Usually, this is the same page that the user was on when the call to Azure AD was made. In the continuing example, use the SSL URL value with “/Pages/Default.aspx” appended to it as shown below. (This is another value that you will change for staging.)
    <add key="AppRedirectUrl" value="https://localhost:44322/Pages/Default.aspx" />
    
  6. Still in the appSettings section, add a ResourceUrl key and set its value to the APP ID URI of SAP Gateway for Microsoft (not the APP ID URI of your ASP.NET application). Obtain this value from the SAP Gateway for Microsoft administrator. The following is an example.
    <add key="ResourceUrl" value="http://<SAP_gateway_domain>.cloudapp.net/" />
    

    The <appSettings> section should now look something like this:

    <appSettings>
      <add key="ClientId" value="06af1059-8916-4851-a271-2705e8cf53c6" />
      <add key="ClientSecret" value="LypZu2yVajlHfPLRn5J2hBrwCk5aBOHxE4PtKCjIQkk=" />
      <add key="ida:ClientID" value="4da99afe-08b5-4bce-bc66-5356482ec2df" />
      <add key="ida:ClientKey" value="URwh/oiPay/b5jJWYHgkVdoE/x7gq3zZdtcl/cG14ss=" />
      <add key="Authority" value="<O365_domain>.onmicrosoft.com" />
      <add key="AppRedirectUrl" value="https://localhost:44322/Pages/Default.aspx" />
      <add key="ResourceUrl" value="http://<SAP_gateway_domain>.cloudapp.net/" />
    </appSettings>
    
  7. Save and close the web.config file.
    Tip
    Do not leave the web.config file open when you run the Visual Studio debugger (F5). The Office Developer Tools for Visual Studio change the ClientId value (not the ida:ClientID) every time you press F5. This requires you to respond to a prompt to reload the web.config file, if it is open, before debugging can execute.

Add a helper class to authenticate to Azure AD

  1. Right-click the ASP.NET project and use the Visual Studio item adding process to add a new class file to the project named AADAuthHelper.cs.
  2. Add the following using statements to the file.
    using Microsoft.IdentityModel.Clients.ActiveDirectory;
    using System.Configuration;
    using System.Web.UI;
    
    
  3. Change the access keyword from public to internal and add the static keyword to the class declaration.
    internal static class AADAuthHelper
    
  4. Add the following fields to the class. These fields store information that your ASP.NET application uses to get access tokens from AAD.
    private static readonly string _authority = ConfigurationManager.AppSettings["Authority"];
    private static readonly string _appRedirectUrl = ConfigurationManager.AppSettings["AppRedirectUrl"];
    private static readonly string _resourceUrl = ConfigurationManager.AppSettings["ResourceUrl"];     
            
    private static readonly ClientCredential _clientCredential = new ClientCredential(
                               ConfigurationManager.AppSettings["ida:ClientID"],
                               ConfigurationManager.AppSettings["ida:ClientKey"]);
    
    private static readonly AuthenticationContext _authenticationContext = 
                new AuthenticationContext("https://login.windows.net/common/" + 
                                          ConfigurationManager.AppSettings["Authority"]);
    
  5. Add the following property to the class. This property holds the URL to the Azure AD login screen.
    private static string AuthorizeUrl
    {
        get
        {
            return string.Format("https://login.windows.net/{0}/oauth2/authorize?response_type=code&redirect_uri={1}&client_id={2}&state={3}",
                _authority,
                _appRedirectUrl,
                _clientCredential.OwnerId,
                Guid.NewGuid().ToString());
        }
    }
    
    
  6. Add the following properties to the class. These cache the access and refresh tokens and check their validity.
    public static Tuple<string, DateTimeOffset> AccessToken
    {
        get {
    return HttpContext.Current.Session["AccessTokenWithExpireTime-" + _resourceUrl] 
           as Tuple<string, DateTimeOffset>;
        }
    
        set { HttpContext.Current.Session["AccessTokenWithExpireTime-" + _resourceUrl] = value; }
    }
    
    private static bool IsAccessTokenValid
    {
       get 
       { 
           return AccessToken != null &&
           !string.IsNullOrEmpty(AccessToken.Item1) &&
           AccessToken.Item2 > DateTimeOffset.UtcNow;
       }
    }
    
    private static string RefreshToken
    {
        get { return HttpContext.Current.Session["RefreshToken" + _resourceUrl] as string; }
        set { HttpContext.Current.Session["RefreshToken-" + _resourceUrl] = value; }
    }
    
    private static bool IsRefreshTokenValid
    {
        get { return !string.IsNullOrEmpty(RefreshToken); }
    }
    
    
  7. Add the following methods to the class. These are used to check the validity of the authorization code and to obtain an access token from Azure AD by using either an authentication code or a refresh token.
    private static bool IsAuthorizationCodeNotNull(string authCode)
    {
        return !string.IsNullOrEmpty(authCode);
    }
    
    private static Tuple<Tuple<string,DateTimeOffset>,string> AcquireTokensUsingAuthCode(string authCode)
    {
        var authResult = _authenticationContext.AcquireTokenByAuthorizationCode(
                    authCode,
                    new Uri(_appRedirectUrl),
                    _clientCredential,
                    _resourceUrl);
    
        return new Tuple<Tuple<string, DateTimeOffset>, string>(
                    new Tuple<string, DateTimeOffset>(authResult.AccessToken, authResult.ExpiresOn), 
                    authResult.RefreshToken);
    }
    
    private static Tuple<string, DateTimeOffset> RenewAccessTokenUsingRefreshToken()
    {
        var authResult = _authenticationContext.AcquireTokenByRefreshToken(
                             RefreshToken,
                             _clientCredential.OwnerId,
                             _clientCredential,
                             _resourceUrl);
    
        return new Tuple<string, DateTimeOffset>(authResult.AccessToken, authResult.ExpiresOn);
    }
    
    
  8. Add the following method to the class. It is called from the ASP.NET code behind to obtain a valid access token before a call is made to get SAP data via SAP Gateway for Microsoft.
    internal static void EnsureValidAccessToken(Page page)
    {
        if (IsAccessTokenValid) 
        {
            return;
        }
        else if (IsRefreshTokenValid) 
        {
            AccessToken = RenewAccessTokenUsingRefreshToken();
            return;
        }
        else if (IsAuthorizationCodeNotNull(page.Request.QueryString["code"]))
        {
            Tuple<Tuple<string, DateTimeOffset>, string> tokens = null;
            try
            {
                tokens = AcquireTokensUsingAuthCode(page.Request.QueryString["code"]);
            }
            catch 
            {
                page.Response.Redirect(AuthorizeUrl);
            }
            AccessToken = tokens.Item1;
            RefreshToken = tokens.Item2;
            return;
        }
        else
        {
            page.Response.Redirect(AuthorizeUrl);
        }
    }
    
Tip
The AADAuthHelper class has only minimal error handling. For a robust, production quality app for SharePoint, add more error handling as described in this MSDN node: Error Handling in OAuth 2.0.

Create data model classes

  1. Create one or more classes to model the data that your app gets from SAP. In the continuing example, there is just one data model class. Right-click the ASP.NET project and use the Visual Studio item adding process to add a new class file to the project named Automobile.cs.
  2. Add the following code to the body of the class:
    public string Price;
    public string Brand;
    public string Model;
    public int Year;
    public string Engine;
    public int MaxPower;
    public string BodyStyle;
    public string Transmission;
    

Add code behind to get data from SAP via the SAP Gateway for Microsoft

  1. Open the Default.aspx.cs file and add the following using statements.
    using System.Net;
    using Newtonsoft.Json.Linq;
    
  2. Add a const declaration to the Default class whose value is the base URL of the SAP OData endpoint that the app will be accessing. The following is an example:
    private const string SAP_ODATA_URL = @"https://<SAP_gateway_domain>.cloudapp.net:8081/perf/sap/opu/odata/sap/ZCAR_POC_SRV/";
    
  3. The Office Developer Tools for Visual Studio have added a Page_PreInit method and a Page_Load method. Comment out the code inside the Page_Load method and comment out the whole Page_PreInit method. This code is not used in this sample. (If your app for SharePoint is going to access SharePoint, then you restore this code. See Optionally, add SharePoint access to the ASP.NET application.)
  4. Add the following line to the top of the Page_Load method. This will ease the process of debugging because your ASP.NET application is communicating with SAP Gateway for Microsoft using SSL (HTTPS), but your “localhost:port” server is not configured to trust the certificate of SAP Gateway for Microsoft. Without this line of code, you would get an invalid certificate warning before Default.aspx opens. Some browsers allow you to click past this error, but some will not let you open Default.aspx at all.
    ServicePointManager.ServerCertificateValidationCallback = (s, cert, chain, errors) => true;
    
    Important
    Delete this line when you are ready to deploy the ASP.NET application to staging. See Modify the app and stage it to Azure and Office 365.
  5. Add the following code to the Page_Load method. The string you pass to the GetSAPData method is an OData query.
    if (!IsPostBack)
    {
        GetSAPData("DataCollection?$top=3");
    }
    
    
  6. Add the following method to the Default class. This method first ensures that the cache for the access token has a valid access token in it that has been obtained from Azure AD. It then creates an HTTP GET Request that includes the access token and sends it to the SAP OData endpoint. The result is returned as a JSON object that is converted to a .NET List object. Three properties of the items are used in an array that is bound to the DataListView.
    private void GetSAPData(string oDataQuery)
    {
        AADAuthHelper.EnsureValidAccessToken(this);
    
        using (WebClient client = new WebClient())
        {
            client.Headers[HttpRequestHeader.Accept] = "application/json";
            client.Headers[HttpRequestHeader.Authorization] = "Bearer " + AADAuthHelper.AccessToken.Item1;
            var jsonString = client.DownloadString(SAP_ODATA_URL + oDataQuery);
            var jsonValue = JObject.Parse(jsonString)["d"]["results"];
            var dataCol = jsonValue.ToObject<List<Automobile>>();
    
            var dataList = dataCol.Select((item) => {
                return item.Brand + " " + item.Model + " " + item.Price;
                }).ToArray();
    
            DataListView.DataSource = dataList;
            DataListView.DataBind();
        }
    }
    
    

Create the user interface

  1. Open the Default.aspx file and add the following markup to the form of the page:
    <div>
      <h3>Data from SAP via SAP Gateway for Microsoft</h3>
    
      <asp:ListView runat="server" ID="DataListView">
        <ItemTemplate>
          <tr runat="server">
            <td runat="server">
              <asp:Label ID="DataLabel" runat="server"
                Text="<%# Container.DataItem.ToString()%>" /><br />
            </td>
          </tr>
        </ItemTemplate>
      </asp:ListView>
    </div>
    
  2. Optionally, give the web page the “look ‘n’ feel” of a SharePoint page with the SharePoint Chrome Control and the host SharePoint website’s style sheet.

Test the app with F5 in Visual Studio

  1. Press F5 in Visual Studio.
  2. The first time that you use F5, you may be prompted to login to the Developer Site that you are using. Use the site administrator credentials. In the continuing example, it is Bob@<O365_domain>.onmicrosoft.com.
  3. The first time that you use F5, you are prompted to grant permissions to the app. Click Trust It.
  4. After a brief delay while the access token is being obtained, the Default.aspx page opens. Verify that the SAP data appears.

Optionally, add SharePoint access to the ASP.NET application


Of course, your app for SharePoint doesn’t have to expose only SAP data in a web page launched from SharePoint. It can also create, read, update, and delete (CRUD) SharePoint data. Your code behind can do this using either the SharePoint client object model (CSOM) or the REST APIs of SharePoint. The CSOM is deployed as a pair of assemblies that the Office Developer Tools for Visual Studio automatically included in the ASP.NET project and set to Copy Local in Visual Studio so that they are included in the ASP.NET application package. For information about using CSOM, start with How to: Complete basic operations using SharePoint 2013 client library code. For information about using the REST APIs, start with Understanding and Using the SharePoint 2013 REST Interface.

Regardless of whether you use CSOM or the REST APIs to access SharePoint, your ASP.NET application must get an access token to SharePoint, just as it does to SAP Gateway for Microsoft. See Understand authentication and authorization to SAP Gateway for Microsoft and SharePoint above. The procedure below provides some basic guidance about how to do this, but we recommend that you first read the articles on low-trust authorization and OAuth tokens referenced elsewhere in this post.

  1. Open the Default.aspx.cs file and uncomment the Page_PreInit method. Also uncomment the code that the Office Developer Tools for Visual Studio added to the Page_Load method.
  2. If your app for SharePoint is going to access SharePoint data, then you have to cache the SharePoint context token that is POSTed to the Default.aspx page when the app is launched in SharePoint. This is to ensure that the SharePoint context token is not lost when the browser is redirected following the Azure AD authentication. (You have several options for how to cache this context. See OAuth tokens.) The Office Developer Tools for Visual Studio add a SharePointContext.cs file to the ASP.NET project that does most of the work. To use the session cache, you simply add the following code inside the “if (!IsPostBack)” block before the code that calls out to SAP Gateway for Microsoft:
    if (HttpContext.Current.Session["SharePointContext"] == null) 
    {
         HttpContext.Current.Session["SharePointContext"]
            = SharePointContextProvider.Current.GetSharePointContext(Context);
    }
    
  3. The SharePointContext.cs file makes calls to another file that the Office Developer Tools for Visual Studio added to the project: TokenHelper.cs. This file provides most of the code needed to obtain and use access tokens for SharePoint. However, it does not provide any code for renewing an expired access token or an expired refresh token. Nor does it contain any token caching code. For a production quality app for SharePoint, you need to add such code. The caching logic in the preceding step is an example. Your code should also cache the access token and reuse it until it expires. When the access token is expired, your code should use the refresh token to get a new access token. We recommend that you read OAuth tokens.
  4. Add the data calls to SharePoint using either CSOM or REST. The following example is a modification of CSOM code that Office Developer Tools for Visual Studio adds to the Page_Load method. In this example, the code has been moved to a separate method and it begins by retrieving the cached context token.
    private void GetSharePointTitle()
    {
        var spContext = HttpContext.Current.Session["SharePointContext"] as SharePointContext;
        using (var clientContext = spContext.CreateUserClientContextForSPHost())
        {
            clientContext.Load(clientContext.Web, web => web.Title);
            clientContext.ExecuteQuery();
            SharePointTitle.Text = "SharePoint web site title is: " + clientContext.Web.Title;
        }
    }
    
  5. Add UI elements to render the SharePoint data. The following shows the HTML control that is referenced in the preceding method:
    <h3>SharePoint title</h3><asp:Label ID="SharePointTitle" runat="server"></asp:Label><br />
    
Note
While you are debugging the app for SharePoint, the Office Developer Tools for Visual Studio re-register it with Azure ACS each time you press F5 in Visual Studio. When you stage the app for SharePoint, you have to give it a long-term registration. See the section Modify the app and stage it to Azure and Office 365.

Modify the app and stage it to Azure and Office 365


When you have finished debugging the app for SharePoint using F5 in Visual Studio, you need to deploy the ASP.NET application to an actual Azure Web Site.

Create the Azure Web Site

  1. In the Microsoft Azure portal, open WEB SITES on the left navigation bar.
  2. Click NEW at the bottom of the page and on the NEW dialog select WEB SITE | QUICK CREATE.
  3. Enter a domain name and click CREATE WEB SITE. Make a copy of the new site’s URL. It will have the form my_domain.azurewebsites.net.

Modify the code and markup in the application

  1. In Visual Studio, remove the line ServicePointManager.ServerCertificateValidationCallback = (s, cert, chain, errors) => true; from the Default.aspx.cs file.
  2. Open the web.config file of the ASP.NET project and change the domain part of the value of the AppRedirectUrl key in the appSettings section to the domain of the Azure Web Site. For example, change <add key="AppRedirectUrl" value="https://localhost:44322/Pages/Default.aspx" /> to <add key="AppRedirectUrl" value="https://my_domain.azurewebsites.net/Pages/Default.aspx" />.
  3. Right-click the AppManifest.xml file in the app for SharePoint project and select View Code.
  4. In the StartPage value, replace the string ~remoteAppUrl with the full domain of the Azure Web Site including the protocol; for example https://my_domain.azurewebsites.net. The entire StartPage value should now be: https://my_domain.azurewebsites.net/Pages/Default.aspx. (Usually, the StartPage value is exactly the same as the value of the AppRedirectUrl key in the web.config file.)

Modify the AAD registration and register the app with ACS

  1. Log in to the Azure Management portal with your Azure administrator account.
  2. Choose Active Directory on the left side.
  3. Click on your directory.
  4. Choose APPLICATIONS (on the top navigation bar).
  5. Open the application you created. In the continuing example, it is ContosoAutomobileCollection.
  6. For each of the following values, change the “localhost:port” part of the value to the domain of your new Azure Web Site:
    • SIGN-ON URL
    • APP ID URI
    • REPLY URL

    For example, if the APP ID URI is https://localhost:44304/ContosoAutomobileCollection, change it to https://<my_domain>.azurewebsites.net/ContosoAutomobileCollection.

  7. Click SAVE at the bottom of the screen.
  8. Register the app with Azure ACS. This must be done even if the app does not access SharePoint and will not use tokens from ACS, because the same process also registers the app with the App Management Service of the Office 365 subscription, which is a requirement. You perform the registration on the AppRegNew.aspx page of any SharePoint website in the Office 365 subscription. For details, see Guidelines for registering apps for SharePoint 2013. As part of this process you will obtain a new client ID and client secret. Insert these values in the web.config for the ClientId (not ida:ClientID) and ClientSecret keys.
    Caution
    If for any reason you press F5 after making this change, the Office Developer Tools for Visual Studio will overwrite one or both of these values. For that reason, you should keep a record of the values obtained with AppRegNew.aspx and always verify that the values in the web.config are correct just before you publish the ASP.NET application.

Publish the ASP.NET application to Azure and install the app to SharePoint

  1. There are several ways to publish an ASP.NET application to an Azure Web Site. For more information, see How to Deploy an Azure Web Site.
  2. In Visual Studio, right-click the SharePoint app project and select Package. On the Publish your app page that opens, click Package the app. File explorer opens to the folder with the app package.
  3. Login to Office 365 as a global administrator, and navigate to the organization app catalog site collection. (If there isn’t one, create it. See Use the App Catalog to make custom business apps available for your SharePoint Online environment.)
  4. Upload the app package to the app catalog.
  5. Navigate to the Site Contents page of any website in the subscription and click add an app.
  6. On the Your Apps page, scroll to the Apps you can add section and click the icon for your app.
  7. After the app has installed, click its icon on the Site Contents page to launch the app.

For more information about installing apps for SharePoint, see Deploying and installing apps for SharePoint: methods and options.

Deploying the app to production


When you have finished all testing you can deploy the app in production. This may require some changes.

  1. If the production domain of the ASP.NET application is different from the staging domain, you will have to change the AppRedirectUrl value in the web.config and the StartPage value in the AppManifest.xml file, and repackage the app for SharePoint. See the procedure Modify the code and markup in the application above.
  2. The change in domain also requires that you edit the app’s registration with AAD. See the procedure Modify the AAD registration and register the app with ACS above.
  3. The change in domain also requires that you re-register the app with ACS (and the subscription’s App Management Service) as described in the same procedure. (There is no way to edit an app’s registration with ACS.) However, it is not necessary to generate a new client ID or client secret on the AppRegNew.aspx page. You can copy the original values from the ClientId (not ida:ClientID) and ClientSecret keys of the web.config into the AppRegNew form. If you do generate new ones, be sure to copy the new values to the keys in web.config.

How To : Add a Promoted Links Web Part to SharePoint 2013 App Default page

This article shows you how to add a Promoted Links web part to your app’s default page.

 

To do this, follow these steps:
Open the shortcut menu for the project, and then choose Add, New Item

 

In the Templates pane, choose the List template, and then choose the Add button:

Enter the list name, choose the “Create a non-customizable list based on an existing list type” option button, then in its list choose Promoted links, and then choose the Finish button.

In Solution Explorer, under the list instance node, open the Elements.xml file.
Add the promoted links items as follows:
<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <ListInstance Title="MyPromotedLinks"
                OnQuickLaunch="TRUE"
                TemplateType="170"
                FeatureId="192efa95-e50c-475e-87ab-361cede5dd7f"
                Url="Lists/MyPromotedLinks"
                Description="My List Instance">
    <Data>
      <Rows>
        <Row>
          <Field Name="Title">Twitter</Field>
          <Field Name="BackgroundImageLocation">/PromotedLinksApp/Images/twitter.png</Field>
          <Field Name="Description">Muawiyah Shannak Twitter</Field>
          <Field Name="LinkLocation">https://twitter.com/MuShannak</Field>
          <Field Name="Order">1</Field>
        </Row>
        <Row>
          <Field Name="Title">Blogger</Field>
          <Field Name="BackgroundImageLocation">/PromotedLinksApp/Images/blogger.png</Field>
          <Field Name="Description">Muawiyah Shannak Blog</Field>
          <Field Name="LinkLocation">http://mushannak.blogspot.com</Field>
          <Field Name="Order">2</Field>
        </Row>
        <Row>
          <Field Name="Title">Linkedin</Field>
          <Field Name="BackgroundImageLocation">/PromotedLinksApp/Images/linkedin.png</Field>
          <Field Name="Description">Muawiyah Shannak Linkedin</Field>
          <Field Name="LinkLocation">http://ae.linkedin.com/in/shannak</Field>
          <Field Name="Order">3</Field>
        </Row>
      </Rows>
    </Data>
  </ListInstance>
</Elements>
In Solution Explorer, under the Pages node, open the Default.aspx file. Add the following tags inside the PlaceHolderMain content placeholder:
<WebPartPages:WebPartZone ID="WebPartZone" runat="server" FrameType="None">
  <WebPartPages:XsltListViewWebPart ID="XsltListViewAppPromotedList"
    runat="server" ListUrl="Lists/MyPromotedLinks" IsIncluded="True"
    NoDefaultStyle="TRUE" Title="Images used in switcher"
    PageType="PAGE_NORMALVIEW" Default="False"
    ViewContentTypeId="0x">
  </WebPartPages:XsltListViewWebPart>
</WebPartPages:WebPartZone>

Deploy the solution and you will find a nice Promoted Links web part on the app’s default page!

A Look At : DevOps and DNS – What Every Developer Should Know

Over the years I have had the opportunity to work alongside many really smart, switched-on people in the development community, and I’ve learnt many intermediate and advanced programming skills from them. Yet when it comes to understanding the very basics of how the internet functions using DNS, most of these very same experienced developers haven’t got a clue.

I wrote this post to hopefully help pay back some of the awesome karma they have earned helping me over the years, by teaching them something in return. Let’s learn about DNS.

DNS is a huge part of the inner workings of the internet. Web developers spend a considerable amount of man hours a year ensuring the sites they build are fast and respond well to user interaction – setting up expensive CDNs, recompressing images, minifying script files and much more – but what a lot of us don’t understand is that DNS server configuration can make a big difference to the speed of your site. Hopefully by the end of this post you’ll feel empowered to get the most out of this part of your website’s configuration.

What I will cover in this post:

Why does DNS matter to you?

Well it’s simple – if you are a developer it matters to you because:

  • You own a domain name, and up until now your webhost or ISP has taken care of your DNS for you – but you need to know what’s going on in case something bad happens…
  • Maybe the hosting provider you use allows you to manage your own DNS using a web interface, but you haven’t a clue what you are doing.
  • The DNS that your webhost or ISP offers you is probably not the fastest – if your website grows over time, you probably want to setup your own DNS or manage it through a dedicated service such as DNSMadeEasy, ZoneEdit or DynDNS.

First up: How the internet works (DNS)

If you already know how this works, feel free to skip ahead.


In really simple terms, when you enter a URL and hit enter, apart from magical unicorns rendering the requested page in your browser window, the interwebs works kinda like this:

  • You want to visit a domain name, so your PC first checks its internal DNS cache to see if it’s looked it up recently – if so, it uses this record.
  • Your PC then asks your DNS server (probably configured by your router or ISP when you first started your PC) for the IP address of the server hosting the domain name you want to visit.
  • Your ISP’s DNS server looks up the root DNS servers for the world to find out who takes care of the DNS configuration for the domain you want to visit.
  • Your ISP’s DNS server then asks this authoritative DNS server for the IP information of the domain name you want, fetches it, caches it, and then returns it to your PC.
  • Your browser connects to this IP address and asks for a web page.

There are a number of different scenarios that play a role in special circumstances with the above but I’m not really going to cover everything in this post.

What DNS does do:

  • Converts hostnames to IP addresses.
  • Stores mail delivery information for a domain.
  • Stores miscellaneous information against a domain name (TXT records).

What DNS doesn’t/cannot do

  • Redirects users to a different server/site.
  • Configure which port the client is connecting to (not entirely true; SRV records are used for protocol/port mappings for services).

Tools for the Job

One of the coolest things about the tools you’ll need for this blog post is that, independent of which operating system you are using, you almost certainly have everything you need to query and test the DNS configuration of your website installed right now – without even knowing it.

The Swiss Army Knife of DNS inspection is the command line tool NSLOOKUP. This is installed by default in nearly every OS you’ll ever need it on.

NSLOOKUP on Windows

NSLOOKUP on Unix/Linux/Mac OSX

Another cool thing is that the usage is the same on most platforms as well.

To run NSLOOKUP simply open a terminal/command prompt and type

nslookup


The first thing you’ll notice is that, upon launch, NSLOOKUP tells me the current DNS server that it will use for its lookups.

By default NSLOOKUP will use your current machine’s DNS settings for its DNS lookups. This can sometimes give you different results from the rest of the world as your internal DNS at your place of work/ISP may be returning different results so they can route, say your office mail, to the internal mail server IP rather than the external internet/DMZ IP address.

Let’s change this to use Google’s global DNS server, to get a better global view on our DNS queries (what others see when they surf the web outside my network), by typing:

server 8.8.8.8

Now if I query this blog’s domain name “diaryofaninja.com.” (ensure you place the additional period on the end of this query to avoid any internal DNS suffixes being appended) I should get back the A record for my domain. (A records are the default query type used by NSLOOKUP – I discuss DNS record types further below.)
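If you are on a recent version of Windows, PowerShell’s Resolve-DnsName cmdlet (from the DnsClient module) is a handy alternative to NSLOOKUP for the same kind of query, and it also shows the TTL of each record:

# Ask Google's public DNS server for the A record of this domain.
Resolve-DnsName -Name "diaryofaninja.com" -Type A -Server 8.8.8.8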


An overview of common DNS record types

Below is a simple overview of all the common types of DNS records and some example scenarios.

All records usually share the following common properties:

Value – this is usually the contents of the record. If it is an A record, this is the IP address for that hostname.

TTL – this is the “Time To Live” in seconds for a DNS record; it means that DNS clients or servers accessing the requested record should not cache the record any longer than this value. If this value is set to 3600, the returned record’s value is cached for an hour. (These values are usually the reason that IT people talk about DNS changes taking “24-48 hours”, as they are usually set quite high on hostnames that are quite static, so that they offer the best performance by being kept in cache.)

SOA (Start of Authority) records

SOA records (start of authority records) are the root of your domain’s registration. SOA records are created by your domain name registrar in the parent domain’s DNS servers (in the case of a .com domain, the SOA record is created in the DNS servers for the .com root domain). In an SOA record, the hostnames or IP addresses of your domain’s DNS servers are stored. These tell the internet’s root DNS servers (mother ship DNS servers) where to ask for the rest of your domain’s DNS configuration (such as A, MX and TXT records). When a client (a web browser, a mail server, an FTP client etc.) wants to connect to part of your website, it asks the locally configured DNS server for the record – the server in turn looks for the SOA records for your domain so it knows which DNS server to ask about it.

Consider these records as the source of “which DNS server stores all the information about the website I want to look up”.

Hostname (A and CNAME) records

A records store information about a hostname record for your domain name. These list the IP address that a client should talk to when using a certain hostname.

If you typed an address such as http://mywebsite.examplecompany.com into your web browser, this would refer to the A record “mywebsite” on the domain “examplecompany.com”.

If you have multiple A records with the same hostname, clients will receive a list of all the records. The order of this list rotates each time you query the DNS server – this is called round-robin DNS and it is a simple way to spread load across multiple servers.

AAAA records are the same as A records, only they store the 128-bit IPv6 address of a server instead of the IPv4 address – as the world shifts to using IPv6 these records will gain more relevance, but if your webhost supports IPv6 it’s worth setting these records up now, so that any visitors using IPv6 can access your website.

A CNAME record (canonical name) is basically an alias for an A record. It tells whoever is asking that the DNS information for the requested hostname is stored in another record somewhere else on the internet. This other record might not even be on the same domain name or on the same DNS server. CNAMEs are very powerful as they allow you to simplify your domain’s DNS records by centralising the information somewhere else. ISPs and webhosts commonly use CNAMEs to centralise the DNS configuration storage for things like mail or web servers by allowing you to keep all the configuration details on a parent domain name.

It is important to note that the root record for a domain name (i.e. the empty A record for mydomain.com) cannot be a CNAME. The simple hard and fast reason is that CNAMEs cannot live on the same node in a DNS forest as any other type of record – the very nature of a CNAME record defines that all configuration for that node is stored somewhere else, and given you store other information at the root of your domain besides your A record (MX records for mail etc.), this would break every other record’s functionality. This is mentioned specifically in the RFC for DNS, section 3.6.2.

An example of CNAME usage is when most webhosting company web servers have a hostname such as web0234.mywebhost.com.

When setting up your website, your webhost might for instance make the “www.yourwebsite.com” record for your website a CNAME with the value web0234.mywebhost.com, so that when trying to access “http://www.yourwebsite.com” DNS clients look up the IP address for “web0234.mywebhost.com”. This makes their life easier if the IP address for this web server changes, as they only have to update a single DNS record, instead of updating all their clients’ DNS records.

To reiterate, to make it crystal clear:
CNAMEs are not a redirection. They are a reference pointer for a hostname. All they tell DNS clients is that the configuration information for the hostname being queried is the same as what can be found by querying the other hostname.
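
You can see this behaviour for yourself outside of nslookup with a small script. The sketch below is an illustration only and uses Node.js with its built-in dns module (the hostname is a placeholder); note that the answer to the CNAME query is simply another hostname, not a redirect:

// cname-check.js - show that a CNAME is just a pointer to another hostname
const dns = require('dns').promises;

async function showCname(hostname) {
    // resolveCname returns the canonical hostname(s) the alias points at
    const targets = await dns.resolveCname(hostname);
    console.log(hostname + ' is an alias for ' + targets.join(', '));

    // resolving the target gives the actual IP address a client would connect to
    const addresses = await dns.resolve4(targets[0]);
    console.log(targets[0] + ' resolves to ' + addresses.join(', '));
}

showCname('www.yourwebsite.com').catch(console.error);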

Illustration – Visiting a website

When you want to visit http://www.google.com, your computer does the following:

  1. Using the local machine’s DNS client, your operating system talks to the locally configured DNS server for your local network/ISP’s network.
  2. This DNS server in turn looks up the DNS server for google.com by first looking up the delegation (SOA/NS) records for google.com and then connecting to the DNS server listed.
  3. Your local DNS server then asks the google.com DNS server for the A record listed for www – the google.com DNS server will return an IP address for http://www.google.com. Along with returning it to you, your ISP or local network’s DNS server will cache this record for as long as the TTL (time to live) property of the record allows.
  4. Your browser then connects to the returned IP address on port 80 and asks for the web page.

All of the above happens in milliseconds – but you can see that if the google.com DNS server is slow to respond, it negatively affects your browsing experience.

A records and CNAME records have a TTL (Time To Live) property to indicate how long they can be cached for.
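
The same lookup can be scripted. Below is a minimal sketch using Node.js and its built-in dns module (my own illustration, not part of the original walkthrough); it asks your locally configured DNS server for a hostname’s A records and prints the TTL your resolver will cache them for. www.google.com is only used as an example:

// a-record-lookup.js - resolve a hostname's A records and show their TTLs
const dns = require('dns').promises;

async function lookupA(hostname) {
    // { ttl: true } asks for the remaining time-to-live along with each address
    const records = await dns.resolve4(hostname, { ttl: true });
    for (const record of records) {
        console.log(hostname + ' -> ' + record.address + ' (cache for ' + record.ttl + 's)');
    }
}

lookupA('www.google.com').catch(console.error);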

Mail (MX)

MX records are the internet’s way of telling mail where to be delivered. They list the hostname or IP address of the mail server that handles mail for a given domain name. If a mail server needs to deliver mail to “examplecompany.com”, it will look up the MX records for this domain.

MX records have both a TTL (Time To Live) and a Priority (a weighting that determines the order in which mail servers should be tried).

Illustration – Sending an e-mail to a friend

When you send an email to your friend at myfriend@otherexamplecompany.com, your local SMTP mail server (usually at your ISP) does the following:

  1. Your mail server connects to its local network/ISP’s DNS server and asks for the MX record for otherexamplecompany.com.
  2. Your local DNS server or ISP’s DNS server looks up the SOA record for otherexamplecompany.com and then connects to the DNS server listed.
  3. It asks for the MX records for this domain and is returned a list of hostnames.
  4. It grabs the first hostname from the list (ordered ascending by Priority), runs a second query for the IP address of that mail server and returns this IP address to your mail server.
  5. Your mail server then connects to this IP address on the SMTP TCP port 25 and delivers your mail.
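
A sending mail server does essentially what the following sketch does. It is my own illustration, using Node.js and its built-in dns module, and fetches the MX records for a domain and sorts them by Priority (otherexamplecompany.com is a placeholder):

// mx-lookup.js - list a domain's mail servers in the order a sender would try them
const dns = require('dns').promises;

async function lookupMx(domain) {
    const records = await dns.resolveMx(domain);
    // lower priority value = try this server first
    records.sort((a, b) => a.priority - b.priority);
    for (const record of records) {
        console.log('priority ' + record.priority + ': ' + record.exchange);
    }
}

lookupMx('otherexamplecompany.com').catch(console.error);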

Text Records (TXT)

TXT records are a powerful addition to the DNS standard that allow miscellaneous information to be stored against a hostname. Web developers, system admins and the like commonly use TXT records to store information such as SPF records and DKIM public keys.

TXT records have a TTL (Time To Live) property to indicate how long they can be cached for.
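
For example, you can pull back a domain’s TXT records (including any SPF record) with a few lines of script. This sketch is my own illustration using Node.js and its built-in dns module; the domain is a placeholder:

// txt-lookup.js - print a domain's TXT records and highlight any SPF record
const dns = require('dns').promises;

async function lookupTxt(domain) {
    // each TXT record comes back as an array of string chunks, so join them
    const records = (await dns.resolveTxt(domain)).map(chunks => chunks.join(''));
    for (const record of records) {
        const label = record.startsWith('v=spf1') ? 'SPF: ' : 'TXT: ';
        console.log(label + record);
    }
}

lookupTxt('examplecompany.com').catch(console.error);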

Name Server Records (NS)

Name Server (NS) records are placed in your domain’s DNS when you wish to store the configuration of part of your domain’s DNS on a separate DNS server. This can be very handy if you want to give control of a subdomain to another person or entity.

e.g. my site is http://www.widgetsareus.com and I manage all of the DNS for this domain, but I would like support.widgetsareus.com (and any child subdomains beneath it) to be managed by the company we outsource our customer support to – therefore I have set up NS records for support.widgetsareus.com that point at our support partner’s DNS servers.

Setting up a domain from scratch

If you are setting up a domain you’ve just purchased from scratch you’ll need to do the following:

Setup your website (A records)

  1. Setup a DNS server to store the configuration for yourdomain.com
    This might be at your webhost, or might be a third party service such as DNSMadeEasy, ZoneEdit or DynDNS.
  2. Set the nameserver records for your domain name (at your domain registrar) to the above DNS server’s hostname or IP address – this creates the delegation for your domain.
  3. Create a new root record that points at your webserver’s IP address (this is simply an A record with an empty hostname) in your domain name’s DNS zone.
  4. Create a new www A record that points at your webserver’s IP address in your domain’s DNS server.
  5. Setup your webserver’s website to listen for the host-header of your domain name (IIS calls this a “binding”).
  6. Test your DNS as below.
  7. Try and access your site in a web browser.

Testing your website’s A record

In a command prompt/terminal type NSLOOKUP

Enter “yourdomainname.com.” (including the extra period on the end) and hit enter

Check that the returned record value/IP address is that of your web server.


Remember to do the same for “www.yourdomainname.com.” if you also use www. in your domain name.

Setup your website’s mail (MX record)

  1. Set up a DNS server to store the configuration for yourdomain.com (follow steps 1 and 2 from the website setup above if you haven’t already).
  2. Create a new MX record that points at your mail server’s IP address or hostname.
  3. Set up your mail server to listen for and receive mail for yourdomain.com.
  4. Test that all of the above is set up correctly using nslookup, as per below.
  5. Try to send and receive email to and from your domain name.
  6. Set up SPF records to verify your mail server’s ability to send mail on behalf of your domain name.

Testing your website’s MX record

In a command prompt/terminal type NSLOOKUP

Enter “set type=mx” and hit enter. This sets the query type to MX records.

Enter “yourdomainname.com.” (including the extra period on the end) and hit enter

Check that the returned record value/IP address(es) is that of your mail server.


Investigating Common Problems

How do I check which DNS server is authoritative for my domain name?

You’ve set up your website’s DNS and everything is fine; then one day, everyone visiting your site is directed to a site that isn’t yours!

To check which DNS server is authoritative for your domain name, first open a command prompt or terminal.

Type “NSLOOKUP” and hit enter

Type ”set type=ns” and hit enter. This sets the query type to NS (NameServer) records.

Type “yourdomainname.com.” and hit enter (make sure you put the extra dot on the end.)

Confirm that the nameservers returned are yours.


How do I check what IP address my site is currently pointing at?

In a command prompt/terminal launch NSLOOKUP

Enter “yourdomainname.com.” (including the extra period on the end) and hit enter

Check that the returned record value/IP address is that of your web server.


Remember to do the same for “www.yourdomainname.com.” if you also use www. in your domain name.

What is split DNS?

Split DNS is when you run a separate DNS zone for a domain name on your external DNS servers (for everyone else to see) and another internally (for staff or local users to see).

This allows you to do things like:

  • Ensure local users talk to your mail server (or any other internal server) using the internal IP address, and internet users talk to your mail server’s external DMZ IP address.
  • Block access to certain sites by giving incorrect or different DNS results for those sites’ domain names. This is often how many “net nanny”-style products work.

For some users my site seems to be served from a different address – how do I check “what the world sees” vs. “what I see”?

Many things can occur that result in some people seeing different DNS results to others:

  • Your ISP’s or company’s DNS server may have an older cached record rather than the current live record
  • Your local computer may be caching the DNS record you are requesting
  • Your local DNS server may be fetching the records for your domain from a different authoritative DNS server than the rest of the world.

How do you investigate these things?

The easiest way to investigate these things is to query an external DNS server that you know is good for the records you want, to get a better idea of how the rest of the world sees things.

A really good set of servers that is easy to remember are the ones run by Google. The primary and secondary DNS servers for Google’s Public DNS system are “8.8.8.8” and “8.8.4.4” respectively.

You can use whatever DNS servers you think are more likely to see the correct values.

To do this, open a command prompt/console.

Type “NSLOOKUP” and hit enter

Type “server 8.8.8.8” and hit enter. This sets the DNS server we will query to the Google Public DNS server’s address.

Type “yourdomainname.com.” and check the resulting record values.
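
The same comparison can be scripted so you can diff “what the world sees” against your local resolver. The sketch below is my own illustration using Node.js and its built-in dns module; it compares your default resolver with a Resolver pointed at Google’s 8.8.8.8 (the hostname is a placeholder):

// compare-resolvers.js - compare your local resolver's answer with Google Public DNS
const dns = require('dns').promises;

async function compare(hostname) {
    const local = await dns.resolve4(hostname);

    const googleResolver = new dns.Resolver();
    googleResolver.setServers(['8.8.8.8']);
    const global = await googleResolver.resolve4(hostname);

    console.log('Local resolver : ' + local.join(', '));
    console.log('Google 8.8.8.8 : ' + global.join(', '));
    console.log(local.sort().join() === global.sort().join()
        ? 'Answers match.'
        : 'Answers differ - likely caching or split DNS.');
}

compare('yourdomainname.com').catch(console.error);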


How To : SharePoint Cross-site Publishing and Free code for Web Part

Cross-site publishing is one of the powerful new capabilities in SharePoint 2013.  It enables the separation of data entry from display and breaks down the container barriers that have traditionally existed in SharePoint (ex: rolling up information across site collections). 


Cross-site publishing is delivered through search and a number of new features, including list/library catalogs, catalog connections, and the content search web part.  Unfortunately, SharePoint Online/Office 365 doesn’t currently support these features.  Until they are added to the service (possibly in a quarterly update), customers will be looking for alternatives to close the gap.  In this post, I will outline several alternatives for delivering cross-site and search-driven content in SharePoint Online, and how to template these views for reuse.

I’m a huge proponent of SharePoint Online.  After visiting several Microsoft data centers, I feel confident that Microsoft is better positioned to run SharePoint infrastructure than almost any organization in the world.  SharePoint Online has very close feature parity to SharePoint on-premise, with the primary gaps existing in cross-site publishing and advanced business intelligence.  Although these capabilities have acceptable alternatives in the cloud (as will be outlined in this post), organizations looking to maximize the cloud might consider SharePoint running in IaaS for immediate access to these features.

 

Apps for SharePoint

The new SharePoint app model is fully supported in SharePoint Online and can be used to deliver customizations to SharePoint using any web technology.  New SharePoint APIs can be used with the app model to deliver an experience similar to cross-site publishing.  In fact, the content search web part could be re-written for delivery through the app model as an “App Part” for SharePoint Online. 
Although the app model provides great flexibility and reuse, it does come with some drawbacks.  Because an app part is delivered through a glorified IFRAME, it would be challenging to navigate to a new page from within the app part.  A link within the app would only navigate within the IFRAME (not the parent of the IFRAME).  Secondly, there isn’t a great mechanism for templating a site to automatically leverage an app part on its page(s).  Apps do not work with site templates, so a site that contains an app cannot be saved as a template.  Apps can be “stapled” to sites, but the app installed event (which would be needed to add the app part to a page) only fires when the app is installed into the app catalog.

REST APIs and Script Editor

The script editor web part is a powerful new tool that can help deliver flexible customization into SharePoint Online.  The script editor web part allows a block of client-side script to be added to any wiki or web part page in a site.  Combined with the new SharePoint REST APIs, the script editor web part can deliver mash-ups very similar to cross-site publishing and the content search web part.  Unlike apps for SharePoint, the script editor isn’t constrained by IFRAME containers, app permissions, or templating limitations.  In fact, a well-configured script editor web part could be exported and re-imported into the web part gallery for reuse.

Cross-site publishing leverages “catalogs” for precise querying of specific content.  Any list/library can be designated as a catalog.  By making this designation, SharePoint will automatically create managed properties for the columns of the list/library and ultimately generate a search result source in the sites that consume the catalog.  Although SharePoint Online doesn’t support catalogs, it supports the building blocks, such as managed properties and result sources.  These can be manually configured to provide the same precise querying in SharePoint Online and exploited in the script editor web part for display.

Calling Search REST APIs

<div id="divContentContainer"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://tenant.sharepoint.com/sites/somesite/_api/";
        $.ajax({
            url: basePath + "search/query?Querytext='ContentType:News'",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //script to build UI HERE
            },
            error: function (data) {
                //output error HERE
            }
        });
    });
</script>

 

An easier approach might be to directly reference a list/library in the REST call of our client-side script.  This wouldn’t require manual search configuration and would provide real-time publishing (no waiting for new items to get indexed).  You could think of this approach as being similar to a content by query web part that works across site collections (possibly even farms) – and the REST API makes it all possible!

List REST APIs

<div id="divContentContainer"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://tenant.sharepoint.com/sites/somesite/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 0",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //script to build UI HERE
            },
            error: function (data) {
                //output error HERE
            }
        });
    });
</script>

 

The content search web part uses display templates to render search results in different arrangements (ex: list with images, image carousel, etc).  There are two types of display templates the content search web part leverages…the control template, which renders the container around the items, and the item template, which renders each individual item in the search results.  This is very similar to the way a Repeater control works in ASP.NET.  Display templates are authored using HTML, but are converted to client-side script automatically by SharePoint for rendering.  I mention this because our approach is very similar…we will leverage a container and then loop through and render items in script.  In fact, all the examples in this post were converted from display templates in a public site I’m working on. 

Item display template for content search web part

<!--#_
var encodedId = $htmlEncode(ctx.ClientControl.get_nextUniqueId() + "_ImageTitle_");
var rem = index % 3;
var even = true;
if (rem == 1)
    even = false;

var pictureURL = $getItemValue(ctx, "Picture URL");
var pictureId = encodedId + "picture";
var pictureMarkup = Srch.ContentBySearch.getPictureMarkup(pictureURL, 140, 90, ctx.CurrentItem, "mtcImg140", line1, pictureId);
var pictureLinkId = encodedId + "pictureLink";
var pictureContainerId = encodedId + "pictureContainer";
var dataContainerId = encodedId + "dataContainer";
var dataContainerOverlayId = encodedId + "dataContainerOverlay";
var line1LinkId = encodedId + "line1Link";
var line1Id = encodedId + "line1";
 _#-->
<div style="width: 320px; float: left; display: table; margin-bottom: 10px; margin-top: 5px;">
   <a href="_#= linkURL =#_">
      <div style="float: left; width: 140px; padding-right: 10px;">
         <img src="_#= pictureURL =#_" class="mtcImg140" style="width: 140px;" />
      </div>
      <div style="float: left; width: 170px">
         <div class="mtcProfileHeader mtcProfileHeaderP">_#= line1 =#_</div>
      </div>
   </a>
</div>

 

Script equivalent

<div id="divUnfeaturedNews"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://richdizzcom.sharepoint.com/sites/dallasmtcauth/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 0",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //get the details for each item
                var listData = data.d.results;
                var itemCount = listData.length;
                var processedCount = 0;
                var ul = $("<ul style='list-style-type: none; padding-left: 0px;' class='cbs-List'>");
                for (i = 0; i < listData.length; i++) {
                    $.ajax({
                        url: listData[i].__metadata["uri"] + "/FieldValuesAsHtml",
                        type: "GET",
                        headers: { "Accept": "application/json;odata=verbose" },
                        success: function (data) {
                            processedCount++;
                            var htmlStr = "<li style='display: inline;'><div style='width: 320px; float: left; display: table; margin-bottom: 10px; margin-top: 5px;'>";
                            htmlStr += "<a href='#'>";
                            htmlStr += "<div style='float: left; width: 140px; padding-right: 10px;'>";
                            htmlStr += setImageWidth(data.d.PublishingRollupImage, '140');
                            htmlStr += "</div>";
                            htmlStr += "<div style='float: left; width: 170px'>";
                            htmlStr += "<div class='mtcProfileHeader mtcProfileHeaderP'>" + data.d.Title + "</div>";
                            htmlStr += "</div></a></div></li>";
                            ul.append($(htmlStr))
                            if (processedCount == itemCount) {
                                $("#divUnfeaturedNews").append(ul);
                            }
                        },
                        error: function (data) {
                            alert(data.statusText);
                        }
                    });
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    });

    function setImageWidth(imgString, width) {
        var img = $(imgString);
        img.css('width', width);
        return img[0].outerHTML;
    }
</script>

 

Even one of the more complex carousel views from my site took less than 30min to convert to the script editor approach.

Advanced carousel script

<div id="divFeaturedNews">
    <div class="mtc-Slideshow" id="divSlideShow" style="width: 610px;">
        <div style="width: 100%; float: left;">
            <div id="divSlideShowSection">
                <div style="width: 100%;">
                    <div class="mtc-SlideshowItems" id="divSlideShowSectionContainer" style="width: 610px; height: 275px; float: left; border-style: none; overflow: hidden; position: relative;">
                        <div id="divFeaturedNewsItemContainer">
                        </div>
                    </div>
                </div>
            </div>
        </div>
    </div>
</div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://richdizzcom.sharepoint.com/sites/dallasmtcauth/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 1&$top=4",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                var listData = data.d.results;
                for (i = 0; i < listData.length; i++) {
                    getItemDetails(listData, i, listData.length);
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    });
    var processCount = 0;
    function getItemDetails(listData, i, count) {
        $.ajax({
            url: listData[i].__metadata["uri"] + "/FieldValuesAsHtml",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                processCount++;
                var itemHtml = "<div class='mtcItems' id='divPic_" + i + "' style='width: 610px; height: 275px; float: left; position: absolute; border-bottom: 1px dotted #ababab; z-index: 1; left: 0px;'>"
                itemHtml += "<div id='container_" + i + "' style='width: 610px; height: 275px; float: left;'>";
                itemHtml += "<a href='#' title='" + data.d.Caption_x005f_x0020_x005f_Title + "' style='width: 610px; height: 275px;'>";
                itemHtml += data.d.Feature_x005f_x0020_x005f_Image;
                itemHtml += "</a></div></div>";
                itemHtml += "<div class='titleContainerClass' id='divTitle_" + i + "' data-originalidx='" + i + "' data-currentidx='" + i + "' style='height: 25px; z-index: 2; position: absolute; background-color: rgba(255, 255, 255, 0.8); cursor: pointer; padding-right: 10px; margin: 0px; padding-left: 10px; margin-top: 4px; color: #000; font-size: 18px;' onclick='changeSlide(this);'>";
                itemHtml += data.d.Caption_x005f_x0020_x005f_Title;
                itemHtml += "<span id='currentSpan_" + i + "' style='display: none; font-size: 16px;'>" + data.d.Caption_x005f_x0020_x005f_Body + "</span></div>";
                $('#divFeaturedNewsItemContainer').append(itemHtml);

                if (processCount == count) {
                    allItemsLoaded();
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    }
    window.mtc_init = function (controlDiv) {
        var slideItems = controlDiv.children;
        for (var i = 0; i < slideItems.length; i++) {
            if (i > 0) {
                slideItems[i].style.left = '610px';
            }
        };
    };

    function allItemsLoaded() {
        var slideshows = document.querySelectorAll(".mtc-SlideshowItems");
        for (var i = 0; i < slideshows.length; i++) {
            mtc_init(slideshows[i].children[0]);
        }

        var div = $('#divTitle_0');
        cssTitle(div, true);
        var top = 160;
        for (i = 1; i < 4; i++) {
            var divx = $('#divTitle_' + i);
            cssTitle(divx, false);
            divx.css('top', top);
            top += 35;
        }
    }

 


 

    function cssTitle(div, selected) {
        if (selected) {
            div.css('height', 'auto');
            div.css('width', '300px');
            div.css('top', '10px');
            div.css('left', '0px');
            div.css('font-size', '26px');
            div.css('padding-top', '5px');
            div.css('padding-bottom', '5px');
            div.find('span').css('display', 'block');
        }
        else {
            div.css('height', '25px');
            div.css('width', 'auto');
            div.css('left', '0px');
            div.css('font-size', '18px');
            div.css('padding-top', '0px');
            div.css('padding-bottom', '0px');
            div.find('span').css('display', 'none');
        }
    }

    window.changeSlide = function (item) {
        //get all title containers
        var listItems = document.querySelectorAll('.titleContainerClass');
        var currentIndexVals = { 0: null, 1: null, 2: null, 3: null };
        var newIndexVals = { 0: null, 1: null, 2: null, 3: null };

        for (var i = 0; i < listItems.length; i++) {
            //current index
            currentIndexVals[i] = parseInt(listItems[i].getAttribute('data-currentidx'));
        }

        var selectedIndex = 0; //selected index will always be 0
        var leftOffset = '';
        var originalSelectedIndex = '';

        var nextSelected = '';
        var originalNextIndex = '';

        if (item == null) {
            var item0 = document.querySelector('[data-currentidx="' + currentIndexVals[0] + '"]');
            originalSelectedIndex = parseInt(item0.getAttribute('data-originalidx'));
            originalNextIndex = originalSelectedIndex + 1;
            nextSelected = currentIndexVals[0] + 1;
        }
        else {
            nextSelected = item.getAttribute('data-currentidx');
            originalNextIndex = item.getAttribute('data-originalidx');
        }

        if (nextSelected == 0) { return; }

        for (i = 0; i < listItems.length; i++) {
            if (currentIndexVals[i] == selectedIndex) {
                //this is the selected item, so move to bottom and animate
                var div = $('[data-currentidx="0"]');
                cssTitle(div, false);
                div.css('left', '-400px');
                div.css('top', '230px');

                newIndexVals[i] = 3;
                var item0 = document.querySelector('[data-currentidx="0"]');
                originalSelectedIndex = item0.getAttribute('data-originalidx');

                //animate
                div.delay(500).animate(
                    { left: '0px' }, 500, function () {
                    });
            }
            else if (currentIndexVals[i] == nextSelected) {
                //this is the NEW selected item, so resize and slide in as selected
                var div = $('[data-currentidx="' + nextSelected + '"]');
                cssTitle(div, true);
                div.css('left', '-610px');

                newIndexVals[i] = 0;

                //animate
                div.delay(500).animate(
                    { left: '0px' }, 500, function () {
                    });
            }
            else {
                //move up in queue
                var curIdx = currentIndexVals[i];
                var div = $('[data-currentidx="' + curIdx + '"]');

                var topStr = div.css('top');
                var topInt = parseInt(topStr.substring(0, topStr.length - 1));

                if (curIdx != 1 && nextSelected == 1 || curIdx > nextSelected) {
                    topInt = topInt - 35;
                    if (curIdx - 1 == 2) { newIndexVals[i] = 2 };
                    if (curIdx - 1 == 1) { newIndexVals[i] = 1 };
                }

                //move up
                div.animate(
                    { top: topInt }, 500, function () {
                    });
            }
        };

        if (originalNextIndex < 0)
            originalNextIndex = itemCount - 1;

        //adjust pictures
        $('#divPic_' + originalNextIndex).css('left', '610px');
        leftOffset = '-610px';

        $('#divPic_' + originalSelectedIndex).animate(
            { left: leftOffset }, 500, function () {
            });

        $('#divPic_' + originalNextIndex).animate(
            { left: '0px' }, 500, function () {
            });

        var item0 = document.querySelector('[data-currentidx="' + currentIndexVals[0] + '"]');
        var item1 = document.querySelector('[data-currentidx="' + currentIndexVals[1] + '"]');
        var item2 = document.querySelector('[data-currentidx="' + currentIndexVals[2] + '"]');
        var item3 = document.querySelector('[data-currentidx="' + currentIndexVals[3] + '"]');
        if (newIndexVals[0] != null) { item0.setAttribute('data-currentidx', newIndexVals[0]) };
        if (newIndexVals[1] != null) { item1.setAttribute('data-currentidx', newIndexVals[1]) };
        if (newIndexVals[2] != null) { item2.setAttribute('data-currentidx', newIndexVals[2]) };
        if (newIndexVals[3] != null) { item3.setAttribute('data-currentidx', newIndexVals[3]) };
    };
</script>

 

End-result of script editors in SharePoint Online

Separate authoring site collection

Final Thoughts

A Look At : SharePoint 2013 Site Templates

SharePoint 2013 offers a vast variety of out-of-the-box site templates. One of the success factors of your SharePoint deployment is choosing the most suitable site template that meets your business needs.

I’ve been asked many times which site template can serve particular required needs and what differs one template from another, so I decided to write a quick overview of all the available SharePoint 2013 site templates and their common uses.

Collaboration Site Templates

  • Team Site – The most common SharePoint site template, mainly used by teams to collaborate, organize, create, and share information and documents.

  • Blog – a site on which a user or group of users write opinions and share information.

  • Developer Site – this site template is focused on Apps for Office development. Developers can build, test and publish their apps here.

  • Project Site – this site template is used for managing and collaborating on a project. Project site coordinates project status and all additional information relevant to the project.

  • Community Site – a site where the community members can explore, discover content and discuss common topics.

 

Enterprise Site Templates

  • Document Center – this site is used to centrally manage documents in your enterprise.

  • eDiscovery Center – this site is used to manage, search and export content for legal investigations.

  • Records Center – this site is used to submit and find important documents that should be stored for long-term archival.

  • Business Intelligence Center – this site is used for providing access to Business Intelligence content in SharePoint.

  • Enterprise Search Center – this site delivers an enterprise search experience.  Users can access the enterprise search center to perform general searches, people searches, conversation or video searches, all in one place. You can easily customize search results pages.

  • My Site Host – this site is used for hosting public profile pages and personal sites. This site can be available after configuration of the User Profile Service Application.

  • Community Portal – this site is used for discovering new communities across the enterprise.

  • Basic Search Center – this site delivers the basic search experience.

  • Visio Process Repository – this site allows you to share and view Visio process diagrams.

Publishing Site Templates

  • Publishing Portal – this site template is used for internet-facing sites or large intranet portals.

  • Enterprise Wiki – this site is used for publishing knowledge that you want to share across the enterprise.

  • Product Catalog – this site is used for managing product catalogs.

If none of those SharePoint site templates meets your needs, you can always create custom templates.
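
Whichever template you choose, sites can also be created from it programmatically – each template is identified by a web template code that the APIs accept. Below is a minimal JavaScript CSOM (JSOM) sketch of my own, assuming it runs on a SharePoint page where sp.js is already loaded; “STS#0” is the code for a Team Site, and the title and URL are placeholders:

// create a subsite from a built-in template using the JavaScript object model (JSOM)
var ctx = SP.ClientContext.get_current();
var creationInfo = new SP.WebCreationInformation();
creationInfo.set_title('Team Collaboration');
creationInfo.set_url('teamcollab');                 // relative URL of the new subsite
creationInfo.set_webTemplate('STS#0');              // 'STS#0' = Team Site template code
creationInfo.set_useSamePermissionsAsParentSite(true);

var newWeb = ctx.get_web().get_webs().add(creationInfo);
ctx.load(newWeb);
ctx.executeQueryAsync(
    function () { console.log('Created: ' + newWeb.get_serverRelativeUrl()); },
    function (sender, args) { console.log('Error: ' + args.get_message()); }
);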

 

Creating custom templates will be the focus of a future blog post, as I am busy finishing a FREE Custom Knowledge Base Site Template.

Some of the features will include :

  • Creating an ALM web and site template, setup life cycle management and deployment
  • Advanced functionality using Managed Metadata and BCS
  • Document Conversion using Word Automation Services
  • Using the search to build out our feature functionality
  • An Office 365 and SharePoint Online version

 

A Look At : The New Search Functionality in SharePoint Online and how Developers can make use of it


 

Search functionality in SharePoint 2013 includes several enhancements, custom content processing and a new framework for presenting search result types. SharePoint Server 2013 presents a new search architecture that includes substantial changes and additions to the search components and databases.

Also, there have been significant enhancements made to the Keyword Query Language (KQL).

Some features and functionality have been deprecated since the previous version of SharePoint. There have also been search user interface improvements that make working with search results more interactive. For example, users can rest the pointer over a search result to see a content preview in the hover panel to the right of the result.

Office 365’s SharePoint 2013 now exposes the admin features of the Search Service Application. This is a significant step forward; nearly all of the new features listed here were missing from the SharePoint 2010 version of Office 365. The following screen capture shows the SharePoint admin center view for the Search section.

Here you can manage all aspects of the search experience for your end users, improving the relevancy of results based on your content and metadata.

Search helps users quickly return to important sites and documents by remembering what they have previously searched and clicked. The results of previously searched and clicked items are displayed as query suggestions at the top of the results page.

In addition to the default manner in which search results are differentiated, site collection administrators and site owners can create and use result types to customize how results are displayed for important documents. A result type is a rule that identifies a type of result and a way to display it.

 

Manage Search Schema

Managed properties are used to restrict search results, and present the content of the properties in search results. Crawled properties are automatically extracted from crawled content. All the changes to properties will take effect only after the next full crawl.

Under the search schema section, the administrator can:

  • View, create, or modify Managed Properties and map crawled properties to managed properties
  • View or modify Crawled Properties, or to view crawled properties in a particular category
  • View or modify Categories, or view crawled properties in a particular category.

While creating a new managed property, the ‘Mappings to crawled properties’ setting is one of the key attributes to configure for the new property.

 

 

Manage Search Dictionaries

Under the Taxonomy Term Store, the search dictionaries are grouped as follows:

  • People – Department, Job Title, Location
  • Search Dictionaries – Company Exclusions, Company Inclusions, Query Spelling Exclusions, Query Spelling Inclusions
  • System – Hashtags, Keywords, Orphaned terms

 

Manage Authoritative Pages

Search in SharePoint 2013 analyzes the collection of authoritative and non-authoritative pages to determine the ranking of search results. Authoritative pages come in two kinds:

  • Authoritative Site Pages
  • Non-authoritative Site Pages

Authoritative site pages are the links that the administrator has designated as pointing to the most relevant information. There can be multiple authoritative pages in each environment, and there is an option for specifying second and third-level authorities for search ranking. Non-authoritative site pages let the content from certain sites be ranked lower than the rest of the content in the site.

 

Query Suggestion Settings

SharePoint Search comprises various features that you can leverage for building productivity solutions. One of the more interesting and useful capabilities is query suggestions. Query suggestions are administered by two options:

  • Always Suggest Phrases
  • Never Suggest Phrases

Manage Result Sources

Result Sources are used to scope the search results and to federate queries to external sources, such as internet search engines. Once a result source is defined, we can configure search web parts and query rule actions to use it.

How are Result Sources managed? A SharePoint Online administrator of a SharePoint Online tenant can manage result sources for all site collections and sites residing under the same tenant. A site collection administrator or a site owner can manage result sources for a site collection or a site, respectively.

SharePoint 2013 provides 16 pre-defined result sources. The pre-configured default result source is Local SharePoint Results. We can set a different result source as the default as per our requirements.

While creating a new Result Source, Protocol and Query Transform are the two important parameters that tell the Result Source what to do in SharePoint.

Protocol – Local SharePoint for results from the index of this Search Service. OpenSearch 1.0/1.1 for results from a search engine that uses that protocol. Exchange for results from an exchange source. Remote SharePoint for results from the index of a search service hosted in another farm.

Query Transform – Change incoming queries to use this new query text instead. Include the incoming query in the new text by using the query variable “{searchTerms}“.

Use this to scope results. For example, to only return OneNote items, set the new text to “{searchTerms} fileextension=one“. Then, an incoming query “sharepoint” becomes “sharepoint fileextension=one“. Launch the Query Builder for additional options.
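
The effect of such a scoped query is easy to see from client-side script as well. The snippet below is a rough sketch of my own against the search REST endpoint (the tenant URL is a placeholder, and jQuery is assumed to be available on the page):

<div id="divOneNoteResults"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://tenant.sharepoint.com/sites/somesite/_api/";
        // same idea as the query transform above: scope the incoming terms to OneNote files
        $.ajax({
            url: basePath + "search/query?querytext='sharepoint fileextension=one'",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                var rows = data.d.query.PrimaryQueryResult.RelevantResults.Table.Rows.results;
                $("#divOneNoteResults").text(rows.length + " OneNote results returned");
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    });
</script>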

 

Manage Query Rules

Query rules conditionally shape the search results and show blocks of supplementary results based on the rules created in SharePoint. In a query rule, you can specify conditions and correlated actions without writing any code. Users with the Site Collection administrator or Site Owner permission level can create and manage query rules.

 

Manage Query Client Types

Query Client Types are one of the new search features in SharePoint 2013. A client type identifies the application a search query is sent from. Applications are prioritized by tiers, and the top tier has the highest priority. When the resource limit is reached, query throttling turns on, and the search system processes queries from the top tier down to the bottom tier.

System client types are available out-of-the-box and cannot be deleted. We can add a new custom client type by clicking New Client Type.

 

Remove Search Results

To remove data from the search results, type the URLs that need to be removed. All the URLs listed in the textbox will be removed from the search results immediately after the Remove Now button is clicked.

View Usage Reports

Here the administrator can see usage reports and search-related reports, for example Query Rules Usage by Day, Top Queries by Day, etc.

Search Center Settings

In this setting, the default search center is mapped – usually the Enterprise Search Center site that has been created for searching across all SharePoint sites in the organization.

Export Search Configuration

Create a file that includes all customized query rules, result sources, result types, ranking models and site search settings (but not any that shipped with SharePoint) in the current tenant, which can then be imported into other tenants.

Import Search Configuration

If you have a search configuration you’d like to import, browse for it below. Settings imported from the file will be created and activated as part of the site. You can modify any of the settings after import.

Crawl Log Permissions

Grant users read access to crawl log information for this tenant.

Search Client Object Model

SharePoint 2013 Search includes a client object model (CSOM) that enables access to most of the Query object model functionality for online, on-premises, and mobile development. You can use the Search CSOM to create client applications that run on a machine that does not have SharePoint 2013 installed and return SharePoint 2013 search results.

The Search CSOM includes a Microsoft .NET Framework managed client object model and JavaScript object model, and it is built on SharePoint 2013. First, client code accesses the SharePoint CSOM. Then, client code accesses the Search CSOM.
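
As a rough illustration of my own, the JavaScript flavour of the Search CSOM can issue a KQL query in just a few lines. The sketch assumes it runs on a SharePoint page where sp.js and sp.search.js have been loaded:

// minimal keyword query through the Search client object model (JSOM)
var ctx = SP.ClientContext.get_current();
var keywordQuery = new Microsoft.SharePoint.Client.Search.Query.KeywordQuery(ctx);
keywordQuery.set_queryText('ContentType:News');          // KQL query text

var executor = new Microsoft.SharePoint.Client.Search.Query.SearchExecutor(ctx);
var results = executor.executeQuery(keywordQuery);

ctx.executeQueryAsync(
    function () {
        // results.m_value holds the result tables returned by the search service
        var rows = results.m_value.ResultTables[0].ResultRows;
        console.log(rows.length + ' results returned');
    },
    function (sender, args) { console.log('Search failed: ' + args.get_message()); }
);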

NOTE: Custom search solutions in SharePoint Server 2013 do not support SQL syntax. Search in SharePoint 2013 supports FQL syntax and KQL syntax for custom search solutions.

We can configure crawled and managed properties, and we can configure Result Sources, which replace the Federated Locations / Scopes from SharePoint Search 2010.

 

Introduction to Business Connectivity Services (BCS)

Microsoft Business Connectivity Services (BCS) is included in both the Office 2013 and SharePoint 2013 suites. With Business Connectivity Services, you can use SharePoint 2013 and Office 2013 clients as an interface into data that doesn’t live in SharePoint 2013 itself. It does this by making a connection to the data source, running a query, and returning the results.

Business Connectivity Services returns the results to the user through an external list, an app for SharePoint, or Office 2013, where you can perform different operations against them, such as Create, Read, Update, Delete, and Query (CRUDQ). It can access external data sources through the Open Data Protocol (OData), Windows Communication Foundation (WCF) endpoints, web services, cloud-based services, and .NET assemblies, or through custom connectors. OData is an open web protocol for querying and updating data.
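
Once an external content type has been surfaced as an external list, client code can read it much like any other list. The sketch below is an assumption-heavy illustration of my own: it presumes an external list named “Customers” already exists on the site, that the REST list endpoint is permitted for that external system, and that jQuery is loaded on the page:

<script type="text/javascript">
    $(document).ready(function ($) {
        // read items from an external (BCS-backed) list the same way as a normal list
        $.ajax({
            url: _spPageContextInfo.webAbsoluteUrl +
                 "/_api/web/lists/GetByTitle('Customers')/items?$top=10",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                $.each(data.d.results, function (index, item) {
                    console.log(item.Title);
                });
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    });
</script>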

The following screen capture shows the BCS features and configuration options available under the SharePoint admin center in Office 365.

How To : Use the Office 365 API Client Libraries (Javascript and .Net)


One of the cool things with today’s Office 365 API Tooling update is that you can now access the Office 365 APIs using libraries available for .NET and JavaScript.

 

These libraries make it easier to interact with the REST APIs from the device or platform of your choice. And when I say platform of your choice, it really is! The Office 365 APIs and the client libraries support the following project types in Visual Studio today:

  1. .NET Windows Store Apps
  2. .NET Windows Store Universal Apps
  3. Windows Forms Applications
  4. WPF Applications
  5. ASP.NET MVC Web Applications
  6. ASP.NET Web Forms Applications
  7. Xamarin Android and iOS Applications
  8. Multi-device Hybrid Apps

P.S.: support for more project types is on the way…

Few Things Before We Get Started

  • The authentication library is released as “alpha”.
    • If you don’t see something you want or if you think we missed addressing some scenarios/capabilities, let us know!
    • In this initial release of the authentication library, we focused on simplifying the getting started experience, especially for Office 365 services and not so much on the interoperability across other services (that support OAuth) but that’s something we can start looking for next updates to make it more generic.
  • The library is not meant to replace Active Directory Authentication Library (ADAL) but it is a wrapper over it (where it exists) which gives you a focused getting started experience.
    • However, If you want to opt out and go “DIY”, you still can.

Setting Up Authentication

The first step to accessing Office 365 APIs via the client library is to get authenticated with Office 365.

Once you configure the required Office 365 service and its permissions, the tool will add the required client libraries for authentication and the service into your project.

Let’s quickly look at what authenticating your client looks like.

Getting Authenticated

Office 365 APIs use OAuth Common Consent Framework for authentication and authorization.

Below is the code to authenticate your .NET application:

Authenticator authenticator = new Authenticator();

AuthenticationInfo authInfo =
await authenticator.AuthenticateAsync(ExchangeResourceId);

Below is the JS code snippet used for authentication in Cordova projects:

var authContext = new O365Auth.Context();
authContext.getIdToken('https://outlook.office365.com/')
.then((function (token) {
    var client = new Exchange.Client('https://outlook.office365.com/ews/odata', 
                         token.getAccessTokenFn('https://outlook.office365.com'));
    client.me.calendar.events.getEvents().fetch()
        .then(function (events) {
            // get currentPage of events and logout
            var myevents = events.currentPage;
            authContext.logOut();
        }, function (reason) {
            // handle error
        });
}).bind(this), function (reason) {
    // handle error
});

Authenticator Class

The Authenticator class initializes the key stuff required for authentication:

1) Office 365 app client Id

2) Redirect URI

3) Authentication URI

You can find these settings in:

– For Web Applications – web.config

– For Windows Store Apps – App.xaml

– For Desktop Applications (Windows Forms & WPF) – AssemblyInfo.cs/.vb

– For Xamarin Applications – AssemblyInfo.cs

If you would like to provide these values at runtime and not from the config files, you can do so by using the alternate constructor:


To authenticate, you call the AuthenticateAsync method by passing the service’s resource Id:

AuthenticationInfo authInfo = await authenticator.AuthenticateAsync(ExchangeResourceId);

If you are using the discovery service, you can specify the capability instead of the resource Id:

AuthenticationInfo authInfo =
await authenticator.AuthenticateAsync("Mail", ServiceIdentifierKind.Capability);

The strings to use for other services with the discovery service are: Calendar, Contacts and MyFiles.

NOTE:

– For now, if you want to use the discovery service, you will also need to configure a SharePoint resource, either Sites or My Files. This is because the discovery service currently uses SharePoint resource Id.

– Active Directory Graph & Sites do not support discovery service yet

Depending on your client, the AuthenticateAsync will open the appropriate window for you to authenticate:

– For web applications, you will be redirected to login page to authenticate

– For Windows Store Apps, you will get dialog box to authenticate

– For desktop apps, you will get a dialog window to authenticate


AuthenticationInfo Class

Once successfully authenticated, the method returns an AuthenticationInfo object which helps you get the required access token:

ExchangeClient client =
new ExchangeClient(new Uri(ExchangeServiceRoot), authInfo.GetAccessToken);

It also helps you re-authenticate for a different resource when you create the service client.

AuthenticationInfo graphAuthInfo =
    await authInfo.ReauthenticateAsync("https://graph.windows.net/");

The library automatically handles token lifetime management by monitoring the expiration time of the access token and performing a refresh automatically.

That’s it! Now you can make subsequent calls to the service to return the items you want.

Authentication Library

For .NET projects:

The library is available as a NuGet package, so if you want to add it manually to your project without the tool, you can do so. However, you will have to manually register an app in Azure Active Directory to authenticate against AAD.

Microsoft Office 365 Authentication Library for ASP.NET

Microsoft Office 365 Authentication Library for .NET (Android and iOS)


For Cordova projects:

You will need to use the Office 365 API tool which generates the aadgraph.js under the Scripts folder that handles authentication.

A Look At : Application Management and Governance in SharePoint 2013

Summary: Learn how to govern applications for SharePoint 2013 by creating a customization policy and understanding the app model, branding, and life-cycle management.


How will you manage the applications that are developed for your environment? What customizations do you allow in your applications, and what are your processes for managing those applications?

 

For effective and manageable applications, your organization should consider the following:

  • Customization policy   SharePoint 2013 includes customizable features and capabilities that span multiple product areas, such as business intelligence, forms, workflow, and content management. Customization can introduce risks to the stability, maintenance, and security of the environment. To support customization while controlling its scope, you should develop a customization policy.
  • Life-cycle management   Follow best practices to manage applications and keep your environments in sync.
  • Branding   If you are designing an information architecture and a set of sites to use across an organization, consider including branding in your governance plan. A formal set of branding policies helps ensure that sites consistently use enterprise imagery, fonts, themes, and other design elements.
  • Solutions or apps for SharePoint?   Decide whether a solution or an app for SharePoint would be the best choice for specific customizations.

Get developer guidance about customizing and branding SharePoint 2013 on MSDN: Build sites for SharePoint 2013.

This article is part of a set of articles about governance; the other articles in the set describe additional aspects of governance.

The What is governance? poster gives a summary of this content. Download the PDF version or Visio version, or Zoom into the model in full detail with Zoom.it from Microsoft.

Determine the types of customizations you want to allow and how to manage them. Your customization policy should include:

  • Service-level descriptions   What are the parameters for supporting and managing customizations in your environments? See Service-level agreements.
  • Guidelines for updating customizations   How do you manage changes to customizations, and how do you roll out those changes to your environments? Consider ways to manage source code, such as a source control system and standards for documenting the code.
  • Processes for analyzing   How do you understand whether a particular customization is working well in your environment, or how do you decide which ones to create, change, or retire?
  • Approved tools for customization   Consider development standards, such as coding best practices and the tools that you will use across your organization. For example, you should decide whether to allow the use of SharePoint Designer 2013 and Design Manager, and specify which site elements can be customized and by whom.
  • Process for piloting and testing customizations   How do you test and deploy customizations? How many people should be in a pilot testing group? What are your standards for testing and validating customizations?
  • Who is responsible for ongoing support   Who will be responsible for supporting customizations in your environments—individual teams or a central group?
  • Guidelines for packaging and deploying customizations   Do you have individual packages for each, or do you include several in a feature or solution? Which customizations should be apps for SharePoint instead of solutions? How do you ensure that customizations in one environment do not affect the rest of your SharePoint implementation?
  • Specific policies regarding each potential type of customization   What types of customizations do you allow?

    For more information about kinds of customizations and their potential risks, see the Customizations table later in this article. For more information about processes for managing customizations, see the white paper SharePoint Products and Technologies customization policy. Most of this content still applies to SharePoint 2013.

  • Policies around using the App Catalog and SharePoint Store Which apps for SharePoint do you want to make available to your organization? Can users purchase apps directly? See Solutions or apps for SharePoint? later in this article for more information.

The highly customizable design of SharePoint products enables you to provide the look, behavior, or functionality that meets your business needs. Customizations can introduce risk to your environment, whether that risk is to the environment’s performance, availability, or supportability. Conversely, a “no customizations” policy severely restricts your organization’s ability to take advantage of the SharePoint platform.

All customizations are not the same. You must decide carefully which kinds of customizations to allow in your environment. You must ensure the customizations support the performance, availability, and supportability you want for your environment. Your governance policy should balance a level of acceptable risk against the business needs for your organization.

What is considered a customization? All of the following are considered kinds of customizations in SharePoint products:

  • Configuration   Using the SharePoint user interface to configure SharePoint products.
  • Branding   Changing logos, styles, colors, master pages and page layouts, and so on to create a custom look for your SharePoint sites. See more about branding.
  • Custom code   Using developer tools to add or change functionality in SharePoint products or to interact with other applications. Risk can vary depending on kind of functionality and level of trust (full trust solutions should be rarely used; consider apps for SharePoint first).
    Tip:
    Sandboxed solutions are deprecated in this release, so they are not the best option for custom code in the long term.

Some customizations have very little risk or impact on your environment. Others have the potential for much higher risk and impact. The following table provides examples of different kinds of customizations, the risk level associated with that kind of customization, and potential issues that you might face if you allow that kind of customization.

Customizations

Customizations are listed below by risk level, with examples and their considerations or impact.

Unsupported/High
  • Types of customizations and examples: Unsupported customizations, such as direct changes to the database schema or modifying files on the file system.
  • Considerations or impact: Will not be supported through Microsoft Customer Support; will be unable to upgrade. Do not use.

Moderate to high
  • Types of customizations and examples: Creating applications that interact with or redirect actions in key pipelines, such as events, claims, and so on.
  • Considerations or impact: Potential for service outage or performance issues; might require rework at upgrade.

Moderate to low
  • Types of customizations and examples: Using a custom Web Part outside a sandbox environment, creating custom actions such as adding a menu item, or creating a custom site provisioning process.
  • Considerations or impact: Short or long-term performance issues or page errors; might require rework at upgrade.

Low
  • Types of customizations and examples: Using solutions in a sandbox environment.
  • Considerations or impact: Short-term performance issues; you can avoid some performance issues by using resource throttling and quotas.

Very low to no risk
  • Types of customizations and examples: Using apps for SharePoint, or using functionality within the product or configurations, such as associating a workflow with a list or using an instance of a built-in Web Part.
  • Considerations or impact: Minor configuration or page errors that would have to be addressed. Apps can be uninstalled or updated.
Note:
For more information about customizations and upgrade, see Considerations for specific customizations.

 

 

Also, when you think through the customizations to allow in your environment, consider carefully whether a particular customization is necessary. If it recreates functionality that is already available in the product (such as creating a Web Part that does the same thing as the Content Editor Web Part or the Content by Query Web Part), then that might be unnecessary work.

Consider first whether the standard functionality can do what you want, or check the SharePoint Store to see if there is an app for SharePoint available that does what you need.

Follow these best practices to manage applications based on SharePoint 2013 throughout their life cycle:

  • Use separate development, preproduction, and production environments, and keep these environments as synchronized as possible so that you can accurately test your customizations.
  • Test all customizations before releasing the first time and after any updates have been made before you release them to your production environment.
  • Use source code control and solution and feature versioning to track changes to code.

Development, test, and production environments

Consistent branding with a corporate style guide makes for more cohesive-looking sites and easier development. Store approved themes in the theme gallery for consistency so that users will know when they visit the site that they are in the right place.

SharePoint 2013 includes a new feature to use for branding, Design Manager. By using Design Manager, you can create a visual design for your website with whatever web design tool or HTML editor you prefer and then upload that design into SharePoint. Design Manager is the central hub and interface where you manage all aspects of a custom design.

Creating the visual design of a site often fits into a larger process, in which multiple people or organizations are involved. For a roadmap of the tasks from a larger perspective, see Design and branding in SharePoint 2013.

SharePoint 2013 has a new development model based on apps for SharePoint. Apps for SharePoint are self-contained pieces of functionality that extend the capabilities of a SharePoint website. An app may include SharePoint features such as lists, workflows, and site pages, but it can also use a remote web application and remote data in SharePoint. An app has few or no dependencies on any other software on the device or platform where it is installed, other than what is built into the platform. Apps have no custom code that runs on the SharePoint servers.

The guidance for whether to use apps for SharePoint or SharePoint solutions is to:

  • Design apps for end users

    Apps for SharePoint:

    • Are easy for users (tenant administrators and site owners) to discover and install.
    • Use safe SharePoint extensions.
    • Provide the flexibility to develop future upgrades.
    • Can integrate with cloud-based resources.
    • Are available for both SharePoint Online and on-premises SharePoint sites.
  • Use farm solutions for administrators

    SharePoint solutions:

    • Can access the server-side object-model APIs that are needed to extend SharePoint management, configuration, and security.
    • Can extend Central Administration, Windows PowerShell cmdlets, timer jobs, custom backups, and so on.
    • Are installed by administrators.
    • Can have farm, web application, or site-collection scope.

Go to MSDN to get more information about the new development model, Apps for SharePoint compared with SharePoint solutions, and Deciding between apps for SharePoint and SharePoint solutions.

Set a policy for using apps for SharePoint in your organization. Can users purchase and download apps? How do you make your organization’s apps available? How do you tell if they’re being used?

  • SharePoint Store   Determine whether users can purchase or download apps from the SharePoint Store.
  • App Catalog   Make specific apps for SharePoint available to your users by adding them to the App Catalog.
  • App requests   Configure app requests to control which apps are purchased and how many licenses are available.
  • Monitor apps   Monitor specific apps in SharePoint Server 2013 to check for errors and to track usage.


How To : Use the CSOM to Update SharePoint Web Part Properties


I wanted to share two methods I developed for retrieving and updating web part properties from JavaScript using CSOM in SharePoint 2013 (I haven’t seen a reference for getting a page’s web part manager through REST).

The web part ID should be available through the “webpartid” attribute included in the page markup.

The methods use the jQuery deferred object, but that could easily be replaced with anything else that handles the asynchronous events. Using this, I'm hoping to create configurable client-side web parts, which is a problem I've recently had to tackle.

View on GitHub

app.js

//pass in the web part ID as a string (guid)
function getWebPartProperties(wpId) {
    var dfd = $.Deferred();

    //get the client context
    var clientContext =
        new SP.ClientContext(_spPageContextInfo.webServerRelativeUrl);
    //get the current page as a file
    var oFile = clientContext.get_web()
        .getFileByServerRelativeUrl(_spPageContextInfo.serverRequestPath);
    //get the limited web part manager for the page
    var limitedWebPartManager =
        oFile.getLimitedWebPartManager(SP.WebParts.PersonalizationScope.shared);
    //get the web parts on the current page
    var collWebPart = limitedWebPartManager.get_webParts();

    //request the web part collection and load it from the server
    clientContext.load(collWebPart);
    clientContext.executeQueryAsync(Function.createDelegate(this, function () {
        var webPartDef = null;
        //find the web part on the page by comparing IDs
        for (var x = 0; x < collWebPart.get_count() && !webPartDef; x++) {
            var temp = collWebPart.get_item(x);
            if (temp.get_id().toString() === wpId) {
                webPartDef = temp;
            }
        }
        //if the web part was not found
        if (!webPartDef) {
            dfd.reject("Web Part: " + wpId + " not found on page: "
                + _spPageContextInfo.webServerRelativeUrl);
            return;
        }

        //get the web part properties and load them from the server
        var webPartProperties = webPartDef.get_webPart().get_properties();
        clientContext.load(webPartProperties);
        clientContext.executeQueryAsync(Function.createDelegate(this, function () {
            dfd.resolve(webPartProperties, webPartDef, clientContext);
        }), Function.createDelegate(this, function () {
            dfd.reject("Failed to load web part properties");
        }));
    }), Function.createDelegate(this, function () {
        dfd.reject("Failed to load web part collection");
    }));

    return dfd.promise();
}

//pass in the web part ID and a JSON object with the properties to update
function saveWebPartProperties(wpId, obj) {
    var dfd = $.Deferred();

    getWebPartProperties(wpId).done(
        function (webPartProperties, webPartDef, clientContext) {
            //set web part properties
            for (var key in obj) {
                webPartProperties.set_item(key, obj[key]);
            }
            //save web part changes
            webPartDef.saveWebPartChanges();
            //execute the update on the server
            clientContext.executeQueryAsync(Function.createDelegate(this, function () {
                dfd.resolve();
            }), Function.createDelegate(this, function () {
                dfd.reject("Failed to save web part properties");
            }));
        }).fail(function (err) { dfd.reject(err); });

    return dfd.promise();
}
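
For reference, here is a minimal usage sketch of the two functions above. It assumes jQuery and the SharePoint JSOM are already loaded on the page; the jQuery selector and the "Title" property name are illustrative assumptions rather than part of the original sample.

//read the web part ID from the "webpartid" attribute in the page markup
//(the selector is an illustrative assumption; adjust it to target the right web part)
var wpId = $('div[webpartid]').first().attr('webpartid');

//dump one of the loaded properties to the console
getWebPartProperties(wpId).done(function (webPartProperties) {
    console.log('Current Title: ' + webPartProperties.get_item('Title'));
}).fail(function (err) { console.log(err); });

//update one or more properties and persist the change back to the page
saveWebPartProperties(wpId, { 'Title': 'Updated via CSOM' })
    .done(function () { console.log('Web part properties saved.'); })
    .fail(function (err) { console.log(err); });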

Latest SharePoint 2013 Resources

Introduction


Best practices are, and rightfully so, always a much sought-after topic. There are various kinds of best practices:

 

•Microsoft best practices. In real life, these are the most important ones to know, as most companies implementing SharePoint have a tendency to follow as many of these as they possibly can. Independent consultants doing architecture and code reviews will certainly take a look at these as well. In general, you can safely say that best practices endorsed by Microsoft have an added bonus, and it will be mentioned whenever this is the case.

 
•Best practices. These practices are patterns that have proven themselves over and over again as a way to achieve high-quality solutions, and it's completely irrelevant who proposed them. Often MS best practices will also fall into this category. In real life, these practices should be the most important ones to follow.

 
•Practices. These are just approaches that are reused over and over again, but not necessarily the best ones. Wikis are a great way to discern best practices from practices. It's certainly possible that this page refers to these "practices of the 3rd kind", but hopefully, the SharePoint community will eventually filter them out. Therefore, everybody is invited and encouraged to actively participate in the various best practices discussions.
This Wiki page contains an overview of SharePoint 2013 Best Practices of all kinds, divided by categories.

Performance

This section discusses best practices regarding performance issues.
•http://gallery.technet.microsoft.com/The-SharePoint-Flavored-5b03f323     , the SharePoint Flavored Weblog Reader (SFWR) helps troubleshooting performance problems by analyzing the IIS log files of SharePoint WFEs.
•http://gallery.technet.microsoft.com/office/PressurePoint-Dragon-for-87572ee1   , PressurePoint Dragon for SharePoint 2013 helps executing performance tests.
•http://gallery.technet.microsoft.com/Maxer-for-SharePoint-2013-52208636     , a tool for checking capacity planning limits.
•http://gallery.technet.microsoft.com/Ping-Dragon-for-SharePoint-70fb299e   , a command line tool for pinging SharePoint and getting the response time of a SharePoint page.
•http://gallery.technet.microsoft.com/WinPing-Dragon-for-eefb6dd3   , a WPF client for pinging SharePoint and getting the response time of a SharePoint page.
•http://social.technet.microsoft.com/wiki/contents/articles/16218.sharepoint-2013-best-practices-in-depth-performance-counters.aspx , in-depth info about performance counters relevant to SharePoint 2013.
•http://technet.microsoft.com/en-us/library/ff758658.aspx   , TechNet performance monitoring tips.
•http://www.iis.net/downloads/community/2007/05/wcat-63-(x64)   , the Web Capacity Analysis Tool (WCAT) is a lightweight HTTP load generation tool to measure the performance of a web server. Used by MS support in various capacity analysis plans.
•Improve SharePoint Speed by fixing a SSL Trust Issue,  http://sharepoint-community.net/profiles/blogs/how-to-improve-speed-on-sharepoint-2013
•http://technet.microsoft.com/en-us/library/cc262813.aspx   , Large Lists.
•http://technet.microsoft.com/en-us/library/hh395916.aspx   , Estimating performance and capacity.

SharePoint Server 2013 Build Numbers

 

Version | Build # | Type | Server Package (KB) | Foundation Package (KB) | Language specific | Notes
Public Beta Preview | 15.0.4128.1014 | Beta | n/a | n/a | yes | Known issues
SPS 2013 RTM | 15.0.4420.1017 | RTM | n/a | n/a | yes | Setup, Install
Dec. 2012 Fix | 15.0.4433.1506 | update | 2752058, 2752001 | n/a | yes | Known Issue
March 2013 | 15.0.4481.1005 | PU | 2767999 | 2768000 | global | New Baseline
April 2013 | 15.0.4505.1002 | CU | – | 2751999 | global | Known Issue
April 2013 | 15.0.4505.1005 | CU | 2726992 | – | global | Known Issue
June 2013 | 15.0.4517.1003 | CU | – | 2817346 | global | Known Issue 1, Known Issue 2
June 2013 | 15.0.4517.1005 | CU | 2817414 | – | global | Known Issue 1, Known Issue 2
August 2013 | 15.0.4535.1000 | CU | 2817616 | 2817517 | global | –
October 2013 | 15.0.4551.1001 | CU | – | 2825674 | global | –
October 2013 | 15.0.4551.1005 | CU | 2825647 | – | global | –
December 2013 | 15.0.4551.1508 | CU | – | 2849961 | global | –
December 2013 | 15.0.4551.1511 | CU | 2850024 | – | global | see KB
Feb. 2014 | skipped | – | – | – | – | –
SP1, released Apr. 2014 | 15.0.4569.1000 (15.0.4569.1506) | SP | 2817429 | 2880552 | yes | Re-released SP
SP1, re-released Apr. 2014 (15.0.4569.1509) | fixed build: 15.0.4571.1502 | SP | 2817439, 2760625 (Fix) | 2880551 (Current) | yes | Known Issue, Re-released SP
April 2014 | 15.0.4605.1004 | CU | 2878240 | 2863892 | global | Known Issue
MS14-022 | 15.0.4615.1001 | PU | 2952166 | 2952166 | n/a | Security fix
June 2014 | 15.0.4623.1001 | CU | 2881061 | 2881063 | global | n/a

reference: http://blogs.technet.com/b/steve_chen/archive/2013/03/26/3561010.aspx

Feature Overview

This section discusses best places to get SharePoint feature overviews.
•http://www.apps4rent.com/sharepoint-2013-features-comparison.html   , nice feature comparison.
•http://technet.microsoft.com/en-us/library/jj819267.aspx   , extensive SharePoint Online overview.
•http://technet.microsoft.com/en-us/library/ff607742(v=office.15).aspx   , deprecated features.
•http://www.andrewconnell.com/blog/archive/2013/01/11/sharepoint-2013-amp-office-365-feature-matrixndashan-easier-way-to.aspx   , matrix overview.
•http://www.rharbridge.com/www.rharbridge.com/?page_id=966   , nice overview including SharePoint 2013, 2010, 2007, and Office 365.
•http://www.fpweb.net/sharepoint-hosting/2013/compare-sharepoint-server-standard-enterprise/   , 2013 standard vs enterprise.
•http://www.khamis.net/Blog/Post/275/SharePoint-2013-Standard-vs–Enterprise-vs–Foundation-Feature-Comparison-Matrix  , 2013 standard vs enterprise vs foundation.
•http://blog.blksthl.com/2013/01/14/sharepoint-2013-feature-comparison-chart-all-editions/#SIT   , overview of all 2013 versions.

Capacity Planning
•http://technet.microsoft.com/en-us/library/cc261834.aspx   , excellent planning resource.
•http://technet.microsoft.com/en-us/library/cc263199.aspx   , overview of various technical diagrams.
•http://technet.microsoft.com/en-us/library/jj219628.aspx#HW_Enterprise   , info about scaling search.
•http://technet.microsoft.com/en-us/library/cc262787.aspx   , capacity boundaries.

Installation

This section discusses installation best practices.
•http://social.technet.microsoft.com/wiki/contents/articles/15289.sharepoint-2013-best-practices-creating-a-development-environment.aspx , provides a detailed explanation how to create a SharePoint 2013 development environment.
•http://technet.microsoft.com/en-us/library/cc262749.aspx   , system requirements overview.
•http://technet.microsoft.com/en-us/library/ee662513.aspx   , provides an overview of the administrative and service accounts you need for a SharePoint 2013 installation.
•http://technet.microsoft.com/en-us/library/cc678863.aspx   , describes SharePoint 2013 administrative and service account permissions for SQL Server, the File System, File Shares, and Registry entries.
•http://social.technet.microsoft.com/wiki/contents/articles/14500.sharepoint-2013-best-practices-service-accounts.aspx , naming conventions and permission overview for service accounts.
•http://www.slideshare.net/michaeltnoel/spcsea-2013-upgrading-to-sharepoint-2013  , a methodical approach to upgrading to SharePoint 2013.
•http://autospinstaller.codeplex.com/   , Automated SharePoint 2010/2013 installation using PowerShell and XML configuration.
•http://autospinstallergui.codeplex.com/   , GUI tool for configuring the AutoSPInstaller configuration XML.
•http://social.technet.microsoft.com/wiki/contents/articles/16343.sharepoint-2013-best-practices-setting-up-a-dev-environment-for-windows-apps-and-sharepoint.aspx , describes how to set up a dev environment needed for creating Windows Apps that leverage SharePoint.
•http://technet.microsoft.com/en-us/library/jj658588.aspx   , installing workflows.
•Install SharePoint 2013 on a single server with SQL Server
•Install SharePoint 2013 on a single server with a built-in database
•Install SharePoint 2013 across multiple servers for a three-tier farm
•Install and configure a virtual environment for SharePoint 2013
•Install or uninstall language packs for SharePoint 2013
•Add web or application servers to farms in SharePoint 2013
•Add a database server to an existing farm in SharePoint 2013
•Remove a server from a farm in SharePoint 2013
•Uninstall SharePoint 2013

Upgrade and Migration

This section discusses how to upgrade to SharePoint 2013 from a previous version.
•http://blogs.msdn.com/b/russmax/archive/2013/04/01/why-sharepoint-2013-cumulative-update-takes-5-hours-to-install.aspx?CommentPosted=true#commentmessage   , explains why a SharePoint 2013 Cumulative Update takes 5 hours to install and how to improve CU (patch) installation times from 5 hours to 30 minutes.
•http://social.technet.microsoft.com/wiki/contents/articles/15743.sharepoint-2013-best-practices-upgrading-from-sharepoint-2007.aspx discusses best practices for upgrading from SharePoint 2007 to 2013.
•http://social.technet.microsoft.com/wiki/contents/articles/16033.sharepoint-2013-best-practices-migrate-from-sharepoint-foundation-2013-to-sharepoint-server-2013.aspx , upgrade SharePoint Foundation 2013 to SharePoint Server 2013.
•http://technet.microsoft.com/en-us/library/cc262483.aspx   , SharePoint 2010 to 2013.
•http://technet.microsoft.com/en-us/library/cc303436.aspx   , upgrade databases from SharePoint 2010 to 2013.
•http://www.google.nl/url?sa=t&rct=j&q=download%20proven%20practices%20for%20upgrading%20or%20migrating%20to%20sharepoint%202013&source=web&cd=1&ved=0CEgQFjAA&url=http%3A%2F%2Feu.avepoint.com%2Fassets%2Fpdf%2Fwhite-papers%2Femea%2FSharePoint-2013-Migration-White-Paper.pdf&ei=L2FRUdPHJoqX1AWy44CgBw&usg=AFQjCNHA6Iuoigex0xyHb-EuPdBDIiLrhw&bvm=bv.44158598,d.d2k   , PDF document containing extensive info about Proven Practices for Upgrading or Migrating to SharePoint 2013.
•http://technet.microsoft.com/en-us/library/ee947141.aspx   , upgrade from SharePoint 2007 or WSS 3.0 to SharePoint 2013.

Infrastructure

This section discusses infrastructure best practices.
•http://technet.microsoft.com/en-us/library/cc263199(v=office.15)   , infrastructure diagrams.
•http://social.technet.microsoft.com/wiki/contents/articles/16180.sharepoint-2013-best-practices-dealing-with-geographically-dispersed-locations.aspx , dealing with geographically dispersed locations.

Backup and Recovery
This section deals with best practices for the backup and restore of SharePoint environments.
•http://technet.microsoft.com/en-us/library/ee663490.aspx   , general overview of backup and recovery.
•http://technet.microsoft.com/en-us/library/ee428315.aspx   , backup solutions for specific parts of SharePoint.
•http://www.slideshare.net/thomasvochten/sharepoint-high-availability-disaster-recovery   , good info about disaster recovery.
•http://technet.microsoft.com/en-us/library/cc748824.aspx   , high availability architectures.
•http://social.technet.microsoft.com/wiki/contents/articles/17195.sharepoint-2013-best-practices-back-up-sharepoint-online.aspx , how to back up SharePoint Online.

Database
•http://technet.microsoft.com/en-us/library/cc678868.aspx   , great resource about SharePoint databases.
•http://technet.microsoft.com/en-us/library/ff851878.aspx   , removing ugly GUIDs from SharePoint database names.

Implementation and Maintenance

This section deals with best practices about implementing SharePoint.
•http://social.technet.microsoft.com/wiki/contents/articles/6575.ten-steps-to-a-successful-sharepoint-implementation-en-us.aspx explains how to implement SharePoint.
•http://technet.microsoft.com/en-us/library/ff851878.aspx   , rename service applications.

Apps

This section deals with best practices regarding SharePoint Apps.
•http://technet.microsoft.com/en-us/library/fp161237(v=office.15).aspx   , great resource for planning Apps.
•http://msdn.microsoft.com/en-us/library/jj163230.aspx  ,  a resource for building apps for SharePoint.
•http://msdn.microsoft.com/en-us/library/jj163264.aspx   , Best practices and design patterns for app license checking.

Every day use
•http://social.technet.microsoft.com/wiki/contents/articles/16166.sharepoint-2013-best-practices-using-folders.aspx , using folders
•http://social.technet.microsoft.com/wiki/contents/articles/17829.sharepoint-2013-going-up-in-the-navigation.aspx , discusses options for navigating up
•http://social.technet.microsoft.com/wiki/contents/articles/17997.sharepoint-2013-best-practice-choosing-between-a-choice-lookup-or-taxonomy-managed-metadata-column.aspx , discusses best practices for choosing between choice, lookup or taxonomy column

Add-ons

This section deals with useful SharePoint add-ons.
•http://www.infragistics.com/products/sharepoint/  , a collection of web parts for an enterprise dashboard.
•http://harmon.ie/Products/Mobile  , an app for iPhone/iPad that enhances mobile access to SharePoint documents.

Development
This section covers best practices targeted towards software developers.
•http://social.technet.microsoft.com/wiki/contents/articles/13373.sharepoint-2013-what-to-do-farm-solution-vs-sandbox-vs-app.aspx , discusses when to use farm solutions, sandbox solutions, or SharePoint apps.
•http://social.technet.microsoft.com/wiki/contents/articles/13637.sharepoint-2013-best-practices-what-client-api-should-you-choose-when-building-apps.aspx , guidelines to help you pick the correct client API to use with your app.
•http://msdn.microsoft.com/en-us/library/jj164060(v=office.15).aspx   , guidelines to help you pick the correct client API for your SharePoint solution.
•http://social.technet.microsoft.com/wiki/contents/articles/16343.sharepoint-2013-best-practices-setting-up-a-dev-environment-for-windows-apps-and-sharepoint.aspx , describes how to set up a dev environment needed for creating Windows Apps that leverage SharePoint.
•http://social.technet.microsoft.com/wiki/contents/articles/16353.sharepoint-2013-best-practices-working-with-connection-strings-in-auto-hosted-sharepoint-apps.aspx , discusses how to deal with connection strings in auto-hosted apps.

Debugging

This section contains debugging tips for SharePoint.
•Use WireShark to capture traffic on the SharePoint server.
•Use a Text Differencing tool to compare if web.config files on WFEs are identical.
•Use Fiddler to monitor web traffic while using the People Picker. This provides insight into how to use the People Picker for custom development; a minimal sketch follows this list. Please note: the client People Picker web service interface is located in SP.UI.ApplicationPages.ClientPeoplePickerWebServiceInterface.
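
As a companion to the Fiddler tip above, here is a minimal sketch of calling the client People Picker web service through its REST endpoint from JavaScript. It assumes jQuery is available and that the code runs on a SharePoint page (so _spPageContextInfo and the __REQUESTDIGEST element exist); the query parameter values shown are illustrative.

//minimal sketch: search for users via the client People Picker REST endpoint
function searchPeople(searchText) {
    var url = _spPageContextInfo.webAbsoluteUrl +
        "/_api/SP.UI.ApplicationPages.ClientPeoplePickerWebServiceInterface.clientPeoplePickerSearchUser";

    return $.ajax({
        url: url,
        type: "POST",
        contentType: "application/json;odata=verbose",
        headers: {
            "Accept": "application/json;odata=verbose",
            "X-RequestDigest": $("#__REQUESTDIGEST").val()
        },
        data: JSON.stringify({
            queryParams: {
                __metadata: { type: "SP.UI.ApplicationPages.ClientPeoplePickerQueryParameters" },
                AllowEmailAddresses: true,
                AllowMultipleEntities: false,
                MaximumEntitySuggestions: 25,
                PrincipalSource: 15, //all sources
                PrincipalType: 1,    //users only
                QueryString: searchText
            }
        })
    }).then(function (data) {
        //the endpoint returns the matches as a JSON-encoded string that needs a second parse
        return JSON.parse(data.d.ClientPeoplePickerSearchUser);
    });
}

//usage: searchPeople("john").done(function (results) { console.log(results); });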

Troubleshooting
•Troubleshooting Office Web Apps
•http://social.technet.microsoft.com/wiki/contents/articles/16640.sharepoint-2013-tips-for-troubleshooting-search-suggestions.aspx , troubleshooting search suggestions.
•http://technet.microsoft.com/en-us/library/jj906556.aspx   , troubleshooting claims authentication.
•http://technet.microsoft.com/en-us/library/dn169566.aspx   , troubleshooting fine grained permissions.
•http://social.technet.microsoft.com/Forums/sharepoint/en-US/02b78299-bc7f-448b-b233-f9cae0da8466/sharepoint-2013-alerts-are-not-firing-any-mails-for-the-normal-alerts-and-search-alerts-can-someone , troubleshooting email alerts.

Farms

This section discusses best practices regarding SharePoint 2013 farm topologies.
•Office Web Apps topologies
•How to configure SharePoint Farm
•How to install SharePoint Farm
•Overview of farm virtualization and architectures

Accessibility

This section discusses SharePoint accessibility topics.
•http://office.microsoft.com/en-us/sharepoint-foundation-help/keyboard-shortcuts-for-sharepoint-products-HA102772894.aspx   , shortcuts for SharePoint.
•http://technet.microsoft.com/en-us/library/ff852108.aspx   , conformance statement A-level (WCAG 2.0).
•http://technet.microsoft.com/en-us/library/ff852107.aspx   , conformance statement AA-level (WCAG 2.0).

Top 10 Blogs to Follow
It's certainly a best practice to keep up to date with the latest SharePoint news. Therefore, a top 10 of blog suggestions to follow is included.
1. Corey Roth at http://www.dotnetmafia.com/blogs/dotnettipoftheday/
2. Jeremy Thake at http://jeremythake.com
3. Nik Patel at http://nikspatel.wordpress.com/
4. Yaroslav Pentsarskyy at http://www.sharemuch.com/
5. Giles Hamson at http://spandps.com/author/ghamson/
6. Danny Jessee at http://www.dannyjessee.com/blog/
7. Marc D Anderson at http://sympmarc.com/
8. Andrew Connell at http://www.andrewconnell.com/blog
9. Geoff Evelyn at http://www.sharepointgeoff.com/
10. Nikander & Margriet on SharePoint at http://sharepointdragons.com/

Recommended SharePoint Related Tools

What to put in your bag of tools?
1. http://gallery.technet.microsoft.com/The-SharePoint-Flavored-5b03f323    , the SharePoint Flavored Weblog Reader (SFWR) helps troubleshooting performance problems by analyzing the IIS log files of SharePoint WFEs.
2. http://gallery.technet.microsoft.com/PressurePoint-Dragon-for-87572ee1   , PressurePoint Dragon for SharePoint 2013 helps executing performance tests.
3. http://gallery.technet.microsoft.com/Maxer-for-SharePoint-2013-52208636   , a tool for checking capacity planning limits.
4. http://visualstudiogallery.msdn.microsoft.com/36a6eb45-a7b1-47c3-9e85-09f0aef6e879    , Muse.VSExtensions, a great tool for referencing assemblies located in the GAC.
5. http://www.quest.com/powergui-freeware/   , helps with all your PowerShell development. In a SharePoint environment, there usually will be some.
6. http://powerguivsx.codeplex.com/   , Visual Studio extension based on PowerGUI that adds PowerShell IntelliSense support to Visual Studio.
7. http://visualstudiogallery.msdn.microsoft.com/4784e790-32f4-455f-9228-53f537c03787   , FishBurn Systems provides some sort of CKSDev lite for VS.NET 2012/SharePoint 2013. Very useful.
8. http://visualstudiogallery.msdn.microsoft.com/6ed4c78f-a23e-49ad-b5fd-369af0c2107f   , web extensions make creating CSS in VS.NET a lot easier and support CSS generation for multiple platforms.
9. http://technet.microsoft.com/en-us/library/cc508851  , the SharePoint 2010 Administration Toolkit (works on 2013).
10. http://clumsyleaf.com/products/cloudxplorer   , a great tool when you've installed your SharePoint farm on Azure.

Training

If you want to learn about SharePoint 2013, there are valuable resources out there to get started.
•http://technet.microsoft.com/en-us/sharepoint/fp123606.aspx  , basic training for IT Pros.
•http://www.microsoft.com/en-us/download/details.aspx?id=35396   , free eBook.
•www.MicrosoftVirtualAcademy.com   , great resource with advanced online and interactive sessions.
•http://technet.microsoft.com/en-us/library/gg609831.aspx   , at the end there's a nice overview of training resources.

See Also
•SharePoint 2013 Portal
•SharePoint 2013 – Service Applications
•SharePoint 2013 – Resources for Developers
•SharePoint 2013 – Resources for IT Pros

 

HTML5 SharePoint Pic Web Part Released and Available!

This is a Sandbox web part control to display a matrix of image thumbnails.

Use it to build a Metro-style gallery or a picture gallery to show products, news, a social team page that integrates with pictures, and so on. All this, from any SharePoint picture library.

Supports: SharePoint 2010 & 2013 on-premises web part, SharePoint Online web part.

Features of the Web Part (ver. 1.0) and a preview example of the control are shown as images in the original post.

How To : Use the Modelling SDK to create UML Diagrams

Use Case Diagrams

A use case diagram is a summary of who uses your application and what they can do with it. It
describes the relationships among requirements, users, and the major components of the system, and
provides an overall view of how the system is used.

Activity Diagrams
Use case diagrams can be broken down into activity diagrams. An activity diagram shows the software
process as the flow of work through a series of actions. It can be a useful exercise to draw an
activity diagram showing the major tasks that a user will perform with the software application.

 

Sequence Diagrams

 

Sequence diagrams display interactions between different objects. This interaction usually takes
place as a series of messages between the different objects. Sequence diagrams can be considered an
alternate view to the activity diagram. A sequence diagram can show a clear view of the steps in a
use case. Figure 14-3 shows an example of a sequence diagram.
Component Diagrams

 

Component diagrams help visualize the high-level structure of the software system. They show the
major parts of a system and how those parts interact and depend on each other. One nice feature of
component diagrams is that they show how the different parts of the design interact with each other,
regardless of how those individual parts are actually implemented. Figure 14-4 shows an example of
a component diagram.

 

Class Diagrams

 

Class diagrams describe the objects in the application system. They do this without referencing any
particular implementation of the system itself. This type of UML modeling diagram is also referred
to as a conceptual class diagram. Figure 14-5 shows an example of a class diagram.

How to: Export UML Diagrams to Image Files

You can export a UML document from Visual Studio to an image file under program control. For example, you might want to do this as part of automatic document generation.

If you want to export a document to an image manually, you can copy and paste the shapes from a diagram into other programs such as Word. You can also print documents to XPS format. For more information, see Export Images of Diagrams.

The following code defines a shortcut menu command, also known as a context menu command, that saves an image to a file.

Note:

To make this code work as a menu command, you must incorporate it into a MEF component. For more information, see How to: Define a Menu Command on a Modeling Diagram.

The code first uses GetObject<T> to get the Diagram of the underlying implementation. This type has a method, CreateBitmap.

namespace SaveToImage
{
  using System.ComponentModel.Composition; // for [Import], [Export]
  using System.Drawing; // for Bitmap
  using System.Drawing.Imaging; // for ImageFormat
  using System.Linq; // for collection extensions
  using System.Windows.Forms; // for SaveFileDialog
  using Microsoft.VisualStudio.Modeling.Diagrams;
    // for Diagram
  using Microsoft.VisualStudio.Modeling.ExtensionEnablement;
    // for IGestureExtension, ICommandExtension, ILinkedUndoContext
  using Microsoft.VisualStudio.ArchitectureTools.Extensibility.Presentation;
    // for IDiagramContext
  using Microsoft.VisualStudio.ArchitectureTools.Extensibility.Uml;
    // for designer extension attributes


  /// <summary>
  /// Called when the user clicks the menu item.
  /// </summary>
  // Context menu command applicable to any UML diagram 
  [Export(typeof(ICommandExtension))]
  [ClassDesignerExtension]
  [UseCaseDesignerExtension]
  [SequenceDesignerExtension]
  [ComponentDesignerExtension]
  [ActivityDesignerExtension]
  class CommandExtension : ICommandExtension
  {
    [Import]
    IDiagramContext Context { get; set; }

    public void Execute(IMenuCommand command)
    {
      // Get the diagram of the underlying implementation.
      Diagram dslDiagram = Context.CurrentDiagram.GetObject<Diagram>();
      if (dslDiagram != null)
      {
        string imageFileName = FileNameFromUser();
        if (!string.IsNullOrEmpty(imageFileName))
        {
          Bitmap bitmap = dslDiagram.CreateBitmap(
           dslDiagram.NestedChildShapes,
           Diagram.CreateBitmapPreference.FavorClarityOverSmallSize);
          bitmap.Save(imageFileName, GetImageType(imageFileName));
        }
      }
    }

    /// <summary>
    /// Called when the user right-clicks the diagram.
    /// Set Enabled and Visible to specify the menu item status.
    /// </summary>
    /// <param name="command"></param>
    public void QueryStatus(IMenuCommand command)
    {
      command.Enabled = Context.CurrentDiagram != null 
        && Context.CurrentDiagram.ChildShapes.Count() > 0;
    }

    /// <summary>
    /// Menu text.
    /// </summary>
    public string Text
    {
      get { return "Save To Image..."; }
    }


    /// <summary>
    /// Ask the user for the path of an image file.
    /// </summary>
    /// <returns>The image file path, or null.</returns>
    private string FileNameFromUser()
    {
      SaveFileDialog dialog = new SaveFileDialog();
      dialog.AddExtension = true;
      dialog.DefaultExt = "image.bmp";
      dialog.Filter = "Bitmap ( *.bmp )|*.bmp|JPEG File ( *.jpg )|*.jpg|Enhanced Metafile (*.emf )|*.emf|Portable Network Graphic ( *.png )|*.png";
      dialog.FilterIndex = 1;
      dialog.Title = "Save Diagram to Image";
      return dialog.ShowDialog() == DialogResult.OK ? dialog.FileName : null;
    }

    /// <summary>
    /// Return the appropriate image type for a file extension.
    /// </summary>
    /// <param name="fileName">The image file name.</param>
    /// <returns>The corresponding ImageFormat.</returns>
    private ImageFormat GetImageType(string fileName)
    {
      string extension = System.IO.Path.GetExtension(fileName).ToLowerInvariant();
      ImageFormat result = ImageFormat.Bmp;
      switch (extension)
      {
        case ".jpg":
          result = ImageFormat.Jpeg;
          break;
        case ".emf":
          result = ImageFormat.Emf;
          break;
        case ".png":
          result = ImageFormat.Png;
          break;
      }
      return result;
    }
  }
}

How To : Design the Physical Architecture to Support Collaborative Development and ALM of SharePoint Foundation 2010 Application

Introduction

This article explains the physical architecture that best supports collaborative development and ALM of a SharePoint Foundation 2010 application, the servers and tools that are needed, and how they play key roles in the ALM of SharePoint Foundation 2010. The purpose of this article is to provide an overall understanding of the various servers and farms connected to each other in SharePoint Foundation.

Background

A basic understanding of the different server operating systems and SharePoint Foundation 2010 is required.

Solution

Application Life-cycle Management (ALM) is the co-ordination of development life-cycle activities—including requirements, modeling, development, build, and testing. Recently, ALM has expanded beyond the application and the software development life cycle to also include business solution governance, infrastructure management, operations, and support.

You can use ALM to help align your organization in the context of a software solution in business, development, and operations. With an application development platform that supports ALM, you can provide integration between the various tools used and activities performed within each of these capabilities.

There are four main types of staging environments, plus a standalone developer environment, that play a key role in the ALM of a SharePoint 2010 application:

  1. Development SharePoint Farm
  2. Team foundation server
  3. Integration/Testing Farm
  4. Production Farm
    +
    Developer’s Workstation

The figure below shows a physical architecture that depicts how each server is interconnected to support collaborative development and ALM for a SharePoint Foundation 2010 application:


Development SharePoint Farm

A SharePoint farm is fundamentally a collection of SharePoint role servers that provide for the base infrastructure required to house SharePoint sites. The farm level is the highest level of SharePoint architecture, providing a distinct operational boundary for a SharePoint environment. Each farm in an environment is a self-encompassing unit made up of one or more servers, such as web servers, service application servers, and SharePoint database servers.

A SharePoint development farm is needed because developers in an organization that makes heavy use of SharePoint often need environments to test new applications, web parts, solutions, and other SharePoint customizations. These developers often need a sandbox area where farm-level features and solutions can be tested.

I have considered a two-tier topology for the SharePoint Foundation 2010 farm; however, the topology will depend entirely on the needs of your application. If your application is a relatively small intranet application, you can choose a single-tier topology, or if you are going to integrate a separate search server with Foundation, you can choose a three-tier topology with an application server as the middle tier (remember that SharePoint Foundation 2010 doesn't include enterprise search). It may make sense to deploy one or more development farms so that developers have the opportunity to run their tests and develop software for SharePoint independent of the existing production environment.

There are basically two types of servers included in two-tier development farm of SharePoint Foundation 2010:

  1. Web server
  2. Content database server

In the above figure, there are three front-end web servers and one SharePoint content database server. However, you can choose a single front-end web server connected to the content database server, based on your application's needs and the architecture of the production environment. All web servers share the same content database. This is called a two-tier deployment farm, where the SharePoint server components and the content database are installed on separate servers. As I mentioned before, you can choose a one-tier, two-tier, or three-tier deployment topology based on your application architecture and the topology of the production environment.

Each web server has SharePoint Foundation 2010 and the SharePoint extensions for TFS 2010 installed on it. The SharePoint extensions for TFS 2010 are needed to connect to Team Foundation Server for source control, build management, and project management.

Advantages of a Development SharePoint Farm:

  1. It provides a single place where the SharePoint admin can integrate all the final artifacts from multiple developers.
  2. Developers can sync the latest SharePoint site to their standalone developer workstations.
  3. The admin can easily approve artifacts and migrate them to the integration server.
  4. It is a unit testing environment for developers, where they can test dependent functionality or farm-level features.

Team Foundation Server

Team Foundation Server plays a key role in ALM by providing source control, build management, and work item tracking. You can have TFS installed on the same server that hosts the content database, but if you are going to use TFS build management, it is advisable to have a separate Team Foundation Server, because it uses the CPU intensively when it processes builds.

As per the above figure, there are separate Team Foundation Servers that are connected to the SharePoint farm as well as to the standalone development workstations, so that they can provide source control for customized content as well as for the developers' artifacts and resources.

Advantages of TFS
  1. Source control for SharePoint artifacts and customization
  2. Build management for SharePoint
  3. Work item and bug tracking tool for SharePoint
  4. Admin console for all management activity
  5. Easy integration with SharePoint foundation server and VS 2010
  6. Easy check-in & check-out
  7. Web based console to manage ALM activity

Developer’s Workstation

As per the above figure, the developer environment includes two developer workstations. In practice, you can have as many workstations as your development team requires.

Each developer workstation should have the Windows 7 or Windows Vista operating system with a standalone SharePoint Foundation server and a local content database, so that one developer's work doesn't affect another's and each developer can debug artifacts locally.

Developer workstation will include the following stuff installed:

  1. Windows 7 or Windows Vista 64-bit OS
  2. Standalone SharePoint Foundation 2010 server
  3. SharePoint Designer 2010
  4. Visual Studio 2010 (connected to TFS)

Each developer workstation should be connected to Team Foundation Server 2010 so that when a developer completes an artifact, it can be checked in to TFS and other developers can take the latest code from TFS if needed. This way, parallel development can happen without affecting other developers' work.

Integration/Testing Farm

Any production SharePoint environment should have a test environment in which new SharePoint web parts, solutions, service packs, patches, and add-ons can be tested. It is critical to deploy test farms, because many SharePoint add-ons could potentially disrupt or corrupt the formatting or structure of a production environment, and trying to test these new solutions on site collections or different web applications is not enough because the solutions often install directly on the SharePoint servers themselves. If there is an issue, the issue will be reflected in the entire farm.

Integration or testing server farm should be similar to the existing environments, with the same add-ons and solutions installed and should ideally include restores of production site collections to make it as similar as possible to the existing production environment. All changes and new products or solutions installed into an environment should subsequently be tested first in this environment.

The integration/testing servers will host the final SharePoint sites and site collections as per the business requirements. QA will test all the business functionality here. The customer can also perform their user acceptance test (UAT) before going live on the production server.

After the user acceptance test has passed, all the sites and site collections will be deployed to the production server.

Advantages of an integration/testing server:

  1. Clean environments and same physical architecture as production
  2. QA can test all dependent business functionality at one place
  3. Customer can participate in UAT
  4. Easy deployment/migration from integration testing server to production server

Production Farm

The final stage is rolling your farm into a production environment. At this stage, you will have incorporated the necessary solution and infrastructure adjustments that were identified during the user acceptance test stage. These servers are generally in the customer’s premises. Development team and testing team do not have control over it.

There are various 3rd party tools available in the market for SharePoint data protection, administration, migration, compliance and integration.


Summary

In this way, you can design a physical architecture where the development SharePoint farm and the developer workstations are integrated with TFS 2010. TFS and the content database are connected to the testing server or testing farm, where all the artifacts and content are integrated for QA and UAT. Finally, after UAT, everything is deployed to the production farm.

You can use virtual machines (VMs) for all the servers and workstations for a more effective infrastructure, because if a server crashes for some reason, you can quickly create a new VM with the needed OS from images.

Note: In the above figure, the integration/testing farm and the production farm are shown as single servers just for clarity, but in reality each will be as large as the development farm, with multiple front-end web servers and a content database server. All the servers run the Windows Server 2008 R2 SP2 64-bit OS. Please visit here for more information on hardware and software requirements for SharePoint Foundation 2010.

How to: Customize the SharePoint HTML Editor Field Control using ECM

You can use the HTML Editor field control to insert HTML content into a publishing page. Page templates that include a Publishing HTML column type also include the HTML Editor field control.

This editor has special capabilities, such as customized styles, editing constraints, reusable content support, a spelling checker, and use of asset pickers to select documents and images to insert into a page’s content. This topic describes how to modify some features and attributes of the HTML Editor field control.


If the content type of a page layout supports the Page Content column, you can add a Rich HTML field control to your page layout by using markup such as the following.

<PublishingWebControls:RichHtmlField id="ArticleAbstract" FieldName="ArticleAbstract" 
          AllowExternalUrls="false" 
          AllowFonts="true" 
          AllowReusableContent="false" 
          AllowHeadings="false"
          AllowHyperlinks="false"
          AllowImages="false"
          AllowLists="false"
          AllowTables="false"
          AllowTextMarkup="false" 
          AllowHTMLSourceEditing="false"
          DisableBasicFormattingButtons="false"
          runat="server"/>

In the example above, RichHtmlField is the name of the field control that provides the richer HTML editing experience. Attributes such as AllowFonts and AllowTables specify restrictions on the field.

The HTML field control allows font tags, but the control does not allow URLs that are external to the current site collection, reusable content stored in a centralized list, standard HTML heading tags, hyperlinks, images, numbered or bulleted lists, tables, or text markup.

Table 1. HTML editor field control properties
Attribute   Description
  • AllowExternalUrls   Only URLs internal to the current site collection are allowed to be referenced in a link or an image.
  • AllowFonts   Content may contain Font tags.
  • AllowHeadings   Content may contain HTML heading tags (H1, H2, and so on).
  • AllowHtmlSourceEditing   The HTML editor can be switched into a mode that allows the HTML to be edited directly. When set to false, the editor cannot be switched to HTML source editing mode.
  • AllowHyperlinks   Gets or sets the constraint that allows hyperlinks (links to other URLs) to be added to the HTML. If this flag is set to false, <A>, <AREA>, and <MAP> tags are removed from the HTML. Default is true. This property also determines whether the editing user interface (UI) enables these operations.
  • AllowImageFormatting   Gets or sets image formatting items. This restriction disables only menus and does not force the content to adhere to this restriction.
  • AllowImagePositioning   Gets or sets the position of the image. This restriction disables only menus and does not force the content to adhere to this restriction.
  • AllowImages   Content may contain images.
  • AllowImageStyles   Gets or sets whether the Image Styles menu is enabled. This restriction disables only the menu and does not force the content to adhere to this restriction.
  • AllowInsert   Gets or sets whether Insert options are shown. This restriction disables only the menu and does not force the content to adhere to this restriction.
  • AllowLists   Gets or sets the constraint that allows list tags (numbered or bulleted lists) to be added to the HTML. If this flag is set to false, <LI>, <OL>, <UL>, <DD>, <DL>, <DT>, and <MENU> tags are removed from the HTML. Default is true. This also determines whether the editing UI enables these operations.
  • AllowParagraphFormatting   Gets or sets whether paragraph formatting items are enabled. This restriction disables only menus and does not force the content to adhere to this restriction.
  • AllowReusableContent   Content may contain reusable content fragments stored in a centralized list.
  • AllowStandardFonts   Gets or sets whether standard fonts are enabled. This restriction disables only menus and does not force the content to adhere to this restriction.
  • AllowStyles   Gets or sets whether the Styles menu is enabled. This restriction disables only the menu and does not force the content to adhere to this restriction.
  • AllowTables   Gets or sets the constraint that allows tables (and table-related tags such as <table>, <tr>, and <td>) to be added when editing this field.
  • AllowTableStyles   Gets or sets whether the Table Styles menu is enabled. This restriction disables only the menu and does not force the content to adhere to this restriction.
  • AllowTextMarkup   Gets or sets the constraint that allows text markup (bold, italic, and underlined text) to be added when editing this field.
  • AllowThemeFonts   Gets or sets whether theme fonts are enabled. This restriction disables only menus and does not force the content to adhere to this restriction.
Predefined Table Formats

The HTML editor includes a set of predefined table formats, but it can be customized to fit the styling of an individual page. Each table format is a collection of cascading style sheet (CSS) classes for each table tag. You can define styling for the first and last row, odd and even rows, first and last column, and so on.

The HTML Editor dynamically applies certain styles from the referenced style sheets on the page and makes them available to users when formatting a table. For a custom style to be available when formatting a table, the relevant class names must follow the PREFIXTableXXX-NNN format, where:

  • PREFIX is ms-rte by default, but you can override the default by using the PrefixStyleSheet property of the RichHtmlField control.
  • XXX is the specific table section, such as EvenRow or OddRow.
  • NNN is the name to identify the table styling.

The following example presents a complete set of classes for a table styling format.

.ms-rteTable-1 {border-collapse:collapse;border-top:gray 1.5pt;
    border-left:gray 1.5pt;border-bottom:gray 1.5pt;
    border-right:gray 1.5pt;border-style:solid;}
.ms-rteTableHeaderRow-1 {color:Green;background:yellow;text-align:left}
.ms-rteTableHeaderFirstCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableHeaderLastCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableHeaderOddCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableHeaderEvenCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableOddRow-1 {color:black;background:#FFFFDD;}
.ms-rteTableEvenRow-1 {color:black;background:#FFB4B4;}
.ms-rteTableFirstCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableLastCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableOddCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableEvenCol-1 {padding:0in 5.4pt 0in 5.4pt;}
.ms-rteTableFooterRow-1 {color:blue;font-style:bold;
    font-weight:bold;background:white;border-top:solid gray 1.0pt;
    border-bottom:solid gray 1.0pt;border-right:solid silver 1.0pt; 
    border-style:solid;}
.ms-rteTableFooterFirstCol-1 {padding:0in 5.4pt 0in 5.4pt;
    border-top:solid gray 1.0pt;text-align:left}
.ms-rteTableFooterLastCol-1 {padding:0in 5.4pt 0in 5.4pt;
    border-top:solid gray 1.0pt;text-align:left}
.ms-rteTableFooterOddCol-1 {padding:0in 5.4pt 0in 5.4pt;
    text-align:left;border-top:solid gray 1.0pt;}
.ms-rteTableFooterEvenCol-1 {padding:0in 5.4pt 0in 5.4pt;
    text-align:left;border-top:solid gray 1.0pt;}

Microsoft SharePoint Server 2010 includes a set of default table styles. However, if the system detects new styles that did not originate in the default .css file, it removes the default set and presents only those newly defined styles in the HTML editor dialog box.

Spelling Checker

In SharePoint Server 2010, the HTML editor includes a spelling checker, which can be customized by developers by using the SpellCheckV4Action Web control and the SpellCheckToolbarButton Web control. The spelling checker action registers client files and data during a spelling check.

It also includes a method to get the console tab, and it checks user rights to verify that the current user has permission to perform a spelling check operation on the selected item. The spelling checker action calls the appropriate ECMAScript (JavaScript, JScript) code and sends information to the client about available spellings and the default language to use for the request.

OneDrive and Yammer take Social Collaboration to a new level on SharePoint Online

Yammer brings conversations to your OneDrive and SharePoint Online files

Christophe Fiessinger is a group product manager on the enterprise social team.


At SharePoint Conference 2014 we announced new enterprise social experiences across Office 365 helping businesses work more like networks by leveraging the power of the cloud to bring people together, gain quicker access to relevant insights and help make smarter decisions, faster.

Today we’re announcing the release of one of those features–document conversations–which essentially embeds the social collaboration capabilities of Yammer into the Office apps you use to get work done every day. Get ready for a new, simple way to collaborate on the content you produce with Office Online and store in the cloud in SharePoint Online or OneDrive for Business.

 

Document conversations enable people to share their ideas and expertise around Office documents, images and videos right from within the content they are editing or reviewing. Imagine being able to ask questions, find expertise and offer feedback about content without having to leave the application you’re working in!

 

Because it’s Yammer you can also view and participate in conversations outside your document, on your mobile device, in Microsoft Dynamics CRM or any app where a Yammer feed is embedded! Get ready for a totally new way to produce incredible content!

Here’s how document conversations work. When you open a file in your browser from your cloud store, you see the file on the left with a contextual Yammer conversation in a pane on the right. You can collapse and expand the Yammer pane as needed.

You can do more than join in a conversation from the Yammer pane. You can also post a message, @mention your coworkers, and publish to a Yammer group—either public or private.

Document conversations are easy to join in Yammer as well. If you’re working in Yammer, you’ll see a threaded conversation in the group the post was published in with an icon that enables you to open the file from the cloud location where it lives. The Yammer conversations about files are visible to users in the group but only users who have permission to view or edit the file can open it.

Document conversations are progressively being rolled out to customers over the course of this summer, after which they will be available across all sites within a tenant. To leverage document conversations, you will need to enable Yammer as your default social network.

 

For additional information, see this post: Make Yammer your default social network in Office 365. Get started today by storing your files in the cloud on OneDrive for Business or SharePoint Online, and harness social collaboration across your company with Yammer. In the coming weeks, document conversations will be activated in your organization and ready to use! Because we continue to innovate and integrate, subscribe to this blog to get the latest updates across Office 365. And don't hesitate to reach out to us on Twitter or Facebook with your questions or suggestions.

Christophe Fiessinger @cfiessinger

Frequently asked questions

Q: What file types can be used for document conversations?

A: Document Conversations supports over 30 common file types, including .doc, .xls, .ppt, .pdf, .png, .gif, .mp4, .avi, and more.

Q: Can I see conversations in Office desktop or when I send a document as a mail attachment?

A: No. Currently Yammer threads are visible only in files stored in a SharePoint Online document library or OneDrive for Business in Office 365.

Q: What happens if a file is renamed?

A: Document Conversations uses Yammer’s Open Graph protocol, so when a post is published it also contains a link to the file. This link serves as the glue between the file and its associated conversations. Because the link changes according to the file name, when a file is renamed, the link changes, causing the Yammer conversations to become disassociated from the new file name.

Q: Can I start conversations in a Yammer external network?

A: No. We set our initial goal to build Document Conversations to help teams work better internally. While document conversations cannot be started in Yammer external networks today, we are exploring ways to extend collaboration around content beyond your firewall.

How To : Peel back the layers of data and information and reveal meaningful BI with SharePoint

Business Intelligence (BI) often takes on the mantle of exotic, rare, and almost unattainable technology. But at its core, business intelligence is simply a method of reporting on what happened.



Granted it is a type of reporting that reaches beyond an ordinary peek into the rearview mirror of past business events; business intelligence helps to spot future trends, make informed go/no-go decisions, or identify potential threats. BI technology is strongest when it rests on a large supply of valid, diverse and current data, and can leverage the proper tools to help users understand and visualize queries about that data.

This blog post is about how SharePoint 2013 can help users solve practical business information problems, even though they don't have the time or the budget to custom build an enterprise-scale BI system. The underlying premise of this blog is to show how SharePoint 2013 can provide a reasonable cost-benefit ratio and justify investing in BI technology.


Before we jump into SharePoint 2013 and its capabilities, let’s take a high-level look at Business Intelligence.

What Problems Can BI Solve?


If the only tool you have in your toolbox is a hammer, then every problem might look like a nail. The fact is, most businesses are able to solve most problems without spending a dime on more technology. In other words, the ‘hammer’ most businesses have been using works just fine, because most of their problems look like nails. The challenge they face only comes into focus when their competition is able to solve the same type of problems, but they do it faster, cheaper, and with less effort. Obviously, this can be a doomsday scenario for the company falling behind, technologically speaking.

That said, Business Intelligence is a great tool…but what problems will it solve? Perhaps a better question would be…how do I figure out if BI can help my company? You are not alone in asking these questions. Just because we have the tools to do something amazing like BI doesn't mean you need it or can afford it. But it certainly would be beneficial for you to find out if and how a Business Intelligence capability would help your business.

The starting-line to find out if BI makes sense for your organization runs right through your own conference room. You need to sit down with your senior executives and managers and talk to them about the information they rely on to run their part of the business. What information do they need, when do they need it, what do they do with it, what information are they missing, and so on? Initiate this type of conversation and you will, undoubtedly, open up a window of opportunity to discuss the merits of Business Intelligence.

SharePoint 2013 and Business Intelligence

Assuming that you see value in establishing BI capabilities in your organization, a very good first step would be to evaluate Microsoft’s SharePoint 2013. Because Microsoft products are generally used throughout both the back-office and front-office of most businesses, SharePoint 2013 is a very powerful tool to integrate the data with the technical systems required to build BI capabilities.

The main theme for BI is aggregation of data from multiple sources and then making that data available when, where, and how it is needed. BI must also be in complete alignment with all corporate goals while it supports the needs of individual managers who are responsible for achieving those goals. SharePoint 2013 is designed to access information and put it in the hands of employees when and where they need it. Because of SharePoint 2013’s capabilities to enable collaboration and teamwork, its very nature aligns the goals of the business with the goals of the employees.

Data Warehousing Measures and Dimensions

Perhaps the most fundamental requirement of BI is the need for information or data. Often this data is distributed throughout multiple databases and must be aggregated in some form.

In data warehousing, which is the term used to describe the functions necessary to aggregate, store and access data for the purpose of Business Intelligence and analytics, the data is often loaded into Online Analytical Processing (OLAP) cubes. The data stored in a cube can be sorted and filtered based on measures and dimensions. This technique lets users query the cube based on practical business categories. A calculation made across that data, such as sum, count, average, or min/max, is called a measure.

The other characteristic used in a cube is called a dimension. Dimensions are collections of reference information that describe a measurable event, and each measure can be broken down by these dimensions.
For example, let’s say you wanted to run a report that gives you an up-to-the-minute total on sales volume and the number of units sold for each region of your company. In this example, the region is the dimension, and the sales volume and number of units are the measures.

SharePoint is designed to access cubes and work with the data stored in the cube, based on the available measures and dimensions.
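To make this more concrete, the following is a minimal sketch of how a cube can be queried by measures and dimensions through ADOMD.NET, the Analysis Services client library. The cube name ([Sales]), the measures ([Sales Amount], [Unit Count]) and the [Geography].[Region] dimension are illustrative assumptions only; in a SharePoint BI solution, Excel Services or PerformancePoint would normally issue this kind of query for you.

// A minimal sketch, assuming an Analysis Services cube named [Sales] with
// [Sales Amount] and [Unit Count] measures and a [Geography].[Region] dimension.
using System;
using Microsoft.AnalysisServices.AdomdClient;

class CubeQuerySample
{
    static void Main()
    {
        using (var connection = new AdomdConnection("Data Source=localhost;Catalog=SalesDW"))
        {
            connection.Open();

            // MDX: measures on columns, the Region dimension members on rows.
            string mdx = @"SELECT { [Measures].[Sales Amount], [Measures].[Unit Count] } ON COLUMNS,
                                  { [Geography].[Region].Members } ON ROWS
                           FROM   [Sales]";

            using (var command = new AdomdCommand(mdx, connection))
            {
                CellSet result = command.ExecuteCellSet();

                // Each row position is a region; each cell holds a measure value.
                for (int row = 0; row < result.Axes[1].Positions.Count; row++)
                {
                    string region = result.Axes[1].Positions[row].Members[0].Caption;
                    object salesAmount = result.Cells[0, row].Value;
                    object unitCount = result.Cells[1, row].Value;
                    Console.WriteLine("{0}: {1} units, {2} sales", region, unitCount, salesAmount);
                }
            }
        }
    }
}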

Key Performance Indicators

Business Intelligence enables visualization of raw data in the form of charts, graphs, pictures, etc. Typically Key Performance Indicators (KPIs), scorecards, and dashboards take the raw data and turn it into something that can be easily consumed by a viewer. For example, a project status KPI is commonly displayed as a green, yellow or red light to indicate that the project is on target with no issues, has minor issues, or is in trouble. This BI technique is an easy way to visualize the data, cut through the non-essential information and get to the point. It also allows the viewer to quickly gauge whether corporate goals are being met or are in jeopardy.

SharePoint 2013 Business Intelligence Solutions

SharePoint 2013 has several products that may be used as part of a BI system. The following is a list of commonly used Microsoft components; all or just some of them can be used to create a practical and powerful BI system:

  • BI Data Services – MS SQL Server Data Services and Integration Services (both used to extract, transform and load data from disparate sources)
  • BI Engine – MS SQL Server Analysis Services (supports OLAP cubes by letting you design, create, and manage multidimensional structures that contain data aggregated from other data sources, such as relational databases)
  • PowerShell (a Microsoft task automation framework consisting of a command-line shell and an associated scripting language built on .NET)
  • PowerPivot for SharePoint (an Analysis Services server running in SharePoint mode that provides server-side hosting of PowerPivot data)
  • Microsoft Excel (a commonly used spreadsheet with PivotTables and PivotCharts that can be used with SharePoint)
  • Microsoft PerformancePoint Designer (integrated with SharePoint to create dashboards, scorecards, and analytics)

Setting Up SharePoint 2013


When SharePoint 2013 is installed and configured, Central Administration (CA) is provisioned. Central Administration is where you control the settings and features of your SharePoint web applications and the service applications they use, such as Excel Services or PerformancePoint Services. CA is a convenient tool that helps in linking the applications and tools required by SharePoint to set up a BI system. You will also use Microsoft’s PowerShell to set up the infrastructure for SharePoint sites so they can run in a multi-tenant environment on a single physical or virtual server.

Excel Services or Performance Point

You can use either or both of these tools to create dashboards. Either one will help you establish trusted locations (e.g. http:// links), data providers, libraries, and databases.
Excel is often the easiest and most familiar tool to display and analyze BI data. Since Excel has been around a long time and so many people are experienced in using it, it is a good choice as the front-end tool for your BI environment.

With Excel you can add measures and dimensions from a source data cube (created by Analysis Services) and then use the PivotChart capabilities in Excel to select the fields you want to display, such as sales amount, product categories, sales by geography, etc. You can also create PivotTables if you want to display a spreadsheet with multiple columns and rows, also using the fields from the cube.

SharePoint’s Practical Solution


Microsoft and SharePoint have all the tools you need to create a very robust and practical BI solution. It is probable that you currently own licenses to many of the components, if not all, that are required to build a solution. If you are interested in Business Intelligence and you would consider a Microsoft-based solution, you might find that you can be up and running in a matter of days with a minimal investment.

How To : Use a Site mailbox to collaborate with your team

Share documents with others

Image

Every team has documents of some kind that need to be stored somewhere, and usually need to be shared with others. If you store your team’s documents on your SharePoint site, you can easily leverage the Site Mailbox app to share those documents with those who have site access.

 Important    When users view a site mailbox in Outlook, they will see a list of all the documents in that site’s document libraries. Site mailboxes present the same list of documents to all users, so some users may see documents they do not have access to open.

If you’re using Exchange, your documents will also appear in a folder in Outlook, making it even easier to forward documents to others.

Forwarding a document from the site mailbox

Organizations, and teams within organizations, often have several different email threads going in all directions at one time. It’s easy for lines to cross, information to get lost or overlooked, and for communication to break down. Site mailboxes enable you to store team or project-related email in one place, so that everyone on the team can see all communication.

On the Quick Launch, click Mailbox.

Mailbox on the Quick Launch

The site mailbox opens as a second, separate inbox and folder structure, next to your personal email account. Mail sent to and from the site mailbox account will be shared between all those who have Contributor permissions on the SharePoint site.

 Tip    Did you know you can also use a site mailbox to collaborate on documents?

Add a site mailbox as a mail recipient

By including the site mailbox on an important email thread, you ensure that a copy of the information in that thread is stored in a location that can be accessed by anyone on the team.

Simply add the site mailbox in the To, CC, or BCC line of an email message.

Email message with site mailbox included in CC field.

You could even consider adding the site mailbox email address to any team contact groups or distribution lists. That way, relevant email automatically gets stored in the team’s site mailbox.

Send email from the site mailbox

When you write and send email from the site mailbox, it will look as though it came from you.

Because everyone with Contributor permissions on a site can access the site mailbox, several people can work together to draft an email message.

To compose a message, simply click New Mail.

New mail button for site mailboxes.

This will open a new message in your site mailbox.

New mail message in a site mailbox.

SharePoint 2013 and CRM 2011 integration. A customer portal approach

A Look At : Federated Authentication

More and more organisations are looking to collaborate with partners and customers in their ecosystem to help them achieve mutual goals. SharePoint is a great tool for enabling this collaboration but many organisations are reluctant to create and maintain identities for users from other organisations just to allow access to their own SharePoint farm. It’s hardly surprising; identity management is complex and expensive.

You have to pay for servers to host your identity provider (Microsoft Active Directory if you are using Windows); you have to keep it secure; you have to back it up and ensure that it is always available, and you have to pay for someone to maintain and administer it. Identity management becomes even more complicated when your organisation wants to give external users access to SharePoint; you have to ensure that they can only access SharePoint and can’t gain access to other systems; you have to buy additional client access licenses (CALs) for each external user because by adding them to your Active Directory you are making them an internal user.

 

Image

Microsoft, Google and others all offer identity providers (also known as IdPs or claims providers) that are free to use, and by federating with a third party IdP you shift the ownership and management of identities on to them. You may even find that the partner or customer you are looking to collaborate with may offer their own IdP (most likely Active Directory Federation Services if they themselves run Windows). Of course, you have to trust whichever IdP you choose; they will be responsible for authenticating the user instead of you so you must be confident that they will do a good job. You must also check what pieces of information about a user (also known as claims; for example, name, email address etc) IdPs offer to ensure they can tell you enough about a user for your purposes as they don’t all offer the same.

Having introduced support for federated authentication in SharePoint 2010, Microsoft paved the way for us to federate with third party IdPs within SharePoint itself. Unfortunately, configuring SharePoint to do this is fiddly and there is no user interface for doing so (a task made more onerous if you want to federate with multiple IdPs or tweak the configuration at a later date). Fortunately Microsoft has also introduced Azure Access Control Services (ACS) which makes the process of federating with one or more IdPs simple and easy to maintain. ACS is a cloud-based service that enables you to manage the IdPs used by your applications. The following diagram illustrates, at a high-level, the components of ACS.

An ACS namespace is a container for mappings between IdPs and one or more relying parties (the applications that want to use ACS), in our case SharePoint. Associated with each mapping is a rule group which defines how the relying party handles the individual claims associated with an identity. Using rule groups you can choose to hide or expose certain claims to specific relying parties within the namespace.

So by creating an ACS namespace you are in effect creating your own unique IdP that encapsulates the configuration for federating with one or more additional IdPs. A key point to remember is that your ACS namespace can be used by other applications (relying parties) that want to share the same identities, not just SharePoint. 

Once your ACS namespace has been created you need to configure SharePoint to trust it, which most of the time will be a one off task and from that point on you can manage and maintain the IdPs you support from within ACS. The following diagram illustrates, at a high-level, the typical architecture for integrating SharePoint and ACS.

 

In the scenario above the SharePoint web application is using two different claims providers (they are referred to as claims providers in SharePoint rather than IdPs). One is for internal users and trusts an internal AD domain and another is for external users and trusts an ACS namespace.

When a user tries to access a site within the web application they will get the default SharePoint Sign In page asking them which provider they want to use.

This page can be customised and branded as required. If the user selects Windows Authentication they will get the standard authentication dialog. If they select Azure Provider (or whatever you happen to have called your claims provider) they will be redirected to your ACS Sign In page.

Again this page can be customised and branded as required. By clicking on one of the IdPs the user will be redirected to the appropriate Sign In page. Once they have been successfully authenticated by the IdP they will be redirected back to SharePoint.

 

Conclusion

By integrating SharePoint with ACS you can simplify the process of giving external users access to SharePoint. It could also save you money in licence fees and administration costs[i].

An important point to bear in mind when planning federated authentication for SharePoint is that in order for Search to be able to index content within SharePoint, you must enable Windows authentication on at least one zone within your web application. Also, if you use a reverse proxy to perform authentication, such as Microsoft Threat Management Gateway, before allowing traffic to hit your SharePoint servers, you will need to disable those authentication checks.

 

[i] The licensing model for external users differs between SharePoint 2010 and SharePoint 2013. With SharePoint 2010, if you expose your farm to external users, either anonymously or not, you have to purchase a separate licence for each server. That licence covers you for any number of external users and you do not need to buy a CAL for each user. With SharePoint 2013, Microsoft did away with the server licence for external users, and you still don’t need to buy CALs for them.

A Look At : The importance of people in a SharePoint project

Image

As with all other sizeable new business software implementations, a successful SharePoint deployment is one that is well thought-out and carefully managed every step of the way.

However in one key respect a SharePoint deployment is different from most others in the way it should be carried out. Whereas the majority of ERP solutions are very rigid in terms of their functionality and in the nature of the business problems they solve, SharePoint is far more of a jack-of-all-trades type of system. It’s a solution that typically spreads its tentacles across several areas within an organisation, and which has several people putting in their two cents worth about what functions SharePoint should be geared to perform.

So what is the best approach? And what makes for a good SharePoint project manager?

From my experience with SharePoint implementations, I would say first and foremost that a SharePoint deployment should be approached from a business perspective, rather than from a strictly technology standpoint. A SharePoint project delivered within the allotted time and budget can still fail if it’s executed without the broader business objectives in mind. If the project manager understands, and can effectively demonstrate, how SharePoint can solve the organisation’s real-world business problems and increase business value, SharePoint will be a welcome addition to the organisation’s software arsenal.

Also crucial is an understanding of people. An effective SharePoint project manager understands the concerns, limitations and capabilities of those who will be using the solution once it’s implemented. No matter how technically well-executed your SharePoint implementation is, it will amount to little if hardly anyone’s using the system. The objective here is to maximise user adoption and engagement, and this can be achieved by maximising user involvement in the deployment process.

 

Rather than only talk to managers about SharePoint and what they want from the system, also talk to those below them who will be using the product on a day-to-day basis. This means not only collaborating with, for example, the marketing director but also with the various marketing executives and co-ordinators.

 

It means not only talking with the human resources manager but also with the HR assistant, and so on. By engaging with a wide range of (what will be) SharePoint end-users and getting them involved in the system design process, the rate of sustained user adoption will be a lot higher than it would have been otherwise.

 

An example of user engagement in action concerns a SharePoint implementation I oversaw for an insurance company. The business wanted to improve the tracking of its documentation using a SharePoint-based records management system. Essentially the system was deployed to enhance the management and flow of health insurance and other key documentation within the organisation to ensure that the company meets its compliance obligations.

 

The project was a great success, largely because we ensured that there was a high level of end-user input right from the start. We got all the relevant managers and staff involved from the outset, we began training people on SharePoint early on and we made sure the change management part of the process was well-covered.

 

Also, and very importantly, the business value of the project was sharply defined and clearly explained from the get-go. As everyone set about making the transition to a SharePoint-driven system, they knew why it was important to the company and why it was going to be good for them too.

By contrast a follow-up SharePoint project for the company some months later was not as successful. Why? Because with that project, in which the company abandoned its existing intranet and developed a new one, the business benefits were poorly defined and were not effectively communicated to stakeholders. That particular implementation was driven by the company’s IT department which approached the project from a technical, rather than a business, perspective. User buy-in was not sought and was not achieved.

 

When the SharePoint solution went live hardly anyone used it because they didn’t see why they should. No-one had educated them on that. That’s the danger when you don’t engage all your prospective system end-users throughout every phase of a SharePoint implementation project.

As can be seen, while it is of course critical that the technical necessities of a SharePoint deployment be met, that’s only part of the picture. Without people using the system, or with people using the system to less than its maximum potential, the return on your SharePoint investment will never materialize.

Comprehensive engagement with all stakeholders, that’s where the other part of the picture comes in. That’s where a return on investment, an investment of time and effort, will most assuredly be achieved.

How To : 8 Steps for a successful SharePoint Change Management

As with virtually any other significant IT implementation project, a SharePoint deployment is as dependent on people as it is on technology for its success. If your system end users are in fact not using the system, or are not using it correctly or to its full potential, you will never achieve that all-important return on investment.


One hundred percent adoption by users who are proficient with SharePoint and are committed to gaining the greatest value from the software should be a key objective for all SharePoint project leaders. To bring this about it is crucial to develop and execute an effective change management strategy as a key component of your SharePoint implementation.

At Professional Advantage we have had great success with our SharePoint clients by informally following a change management process developed by Dr John Kotter, an American professor whose 1996 book, Leading Change, is still highly influential in the world of change management theory.

In his book Dr Kotter puts forward an eight step process that change leaders can follow to avoid failure and adjust successfully to change. These steps, which can be usefully applied to a SharePoint deployment project, are:

 

1. Create a sense of urgency

No SharePoint project will get off the ground, let alone become successful, if there is no buy-in at the executive level. Here it is important to put a strong case forward as to why the move to SharePoint is very much in the organisation’s best interests. Begin by doing lots of research. Examine, for example, the competitive disadvantages that will be suffered if no change is made. Also highlight those business functions and processes within the organisation that could be significantly improved with SharePoint. Tie the benefits of SharePoint to the organisation’s broad business goals and ongoing strategic objectives. Explain as persuasively as possible why the current situation is unsustainable and why, when it comes to moving to SharePoint, it’s a case of ‘the sooner the better.’ The stronger your business case for a SharePoint implementation, the more likely it is that it will get the green light.

 

2. Create a guiding coalition

Once you’ve received the go-ahead for the SharePoint deployment the next step is to put together a coalition of people with the power and commitment to lead the change. This team will ideally be comprised of a wide variety of motivated individuals: department managers, technical experts and those at the coalface who will be using SharePoint on a day-to-day basis should all form part of the coalition. They should also be people who have grasped the urgency of the task ahead, who understand the business goals that will be achieved with a successful implementation and who recognise that 100% user adoption is a central goal of the project.

Crucial to the success of the coalition’s efforts is that its members all work well together. As the project evolves these change-drivers will be sharing ideas, making decisions and identifying and solving problems. Team members must be able to trust each other and collaborate effectively; if this does not occur the project will almost certainly stall.

 

3. Develop a change vision

By developing a clear vision for the project you give those involved a direction to follow and a goal to achieve. Ideally the vision will be easy to comprehend, achievable, flexible and something that all stakeholders can get enthusiastic about.

While the vision will by definition be broad, the strategies that underpin it will be specific. Priorities for the project should be defined and acted upon, with priority given to ‘low hanging fruit’, ie tasks that can be easily achieved and which will deliver visible, measurable and meaningful change within the organisation. This approach will add momentum to the project by enabling stakeholders to gain a real-world perspective on the changes that are in progress and why they’re good both for the organisation and for individual SharePoint users.

 

4. Communicate the vision for buy-in

Communicating your vision and promoting the behavioural changes that will drive it are critical for a successful SharePoint deployment. This step requires a top-down communications strategy that is consistent, creative, inspiring and ongoing.

At Professional Advantage our communication strategy forms part of our SharePoint adoption plan and includes a variety of tactics designed to get staff using SharePoint, and using it properly. In the past such tactics have included SharePoint launch parties, lunch sessions, system design competitions amongst staff, social media, blogs and the putting up of posters around the office promoting the use of SharePoint. The objective here is, of course, to get users educated and engaged. The more creative you are, the better. And always keep in mind that user adoption will likely be low unless you can answer the ‘What’s in it for me’ question.

 

5. Empower broad-based action

To achieve the highest possible level of SharePoint user adoption it’s best to remove any barriers that might impede that objective. This particularly applies to the laggards, ie those who are most resistant to change and least likely to make full use of the system.

Typically this will involve removing software and other technologies that make it easy for workers to continue doing things the old way. Too often organisations include this as an afterthought, resulting in smaller and slower user adoption. Here it is important to plan from the beginning, anticipate what systems will be made redundant (or scaled down) and schedule that in to the SharePoint implementation plan.

Also important here is encouragement from above. Supported by proper ongoing training, those who will be using SharePoint need to be encouraged to step out of their comfort zone and embrace the new system.

 

6. Celebrate short-term wins

Short-term wins are essential to the success of your SharePoint deployment, as is the active celebration of these wins when they occur. The transition to a SharePoint environment is a long-term process and momentum must be maintained every step of the way. Perhaps, as a result of SharePoint, a new level of intra-office collaboration has been achieved, or the organisation has experienced dramatic time savings with particular processes, or has achieved new standards of compliance. Whatever the win, the broadcasting of it should form part of the SharePoint communications plan. If people can see how and why SharePoint is working, they will be more likely to embrace the system and, in so doing, contribute to the achievement of the organisation’s business goals.

 

7. Consolidate gains and generate more change

You’ve scored some wins and people are now comfortable using SharePoint. While that is a wonderful thing, the danger at this stage is complacency. Rather than take your foot off the accelerator it’s important to build on what’s been achieved and pursue larger, more ambitious objectives. To fully ingrain SharePoint into your organisation’s culture (and to avoid regression) ramp things up with new projects and initiatives.

 

8. Making it stick

To fully embed SharePoint into your organisation’s culture and business practices everyone needs to be on board. Just as during a life-threatening cyclone there are always some residents who refuse to heed advice to leave town, with a SharePoint deployment there will always be some who are unwilling to move. Here it is important to reinforce, and continue to celebrate, the victories that have been achieved and communicate how important it is that everyone adopt the system.

As the SharePoint project continues to evolve so too will its vision and purpose. With the right planning and execution, and with the right leadership, people will, over time, forget the old ways of doing things and fully embrace the new.

New Office 365 API VS.Net Add-In exposes Javascript Client model

You can now access the Office 365 APIs using libraries available for .NET and JavaScript. These libraries make it easier to interact with the REST APIs from the device or platform of your choice.

 

Office365

The libraries are included in the latest update for Office 365 API Tools for Visual Studio Preview. Along with the libraries, this release also brings you some key updates to the tooling experience, making it easier to interact with Office 365 services.

Client libraries

Office 365 provides REST-based APIs that enable developers to access Office resources such as calendar, contacts, mail, files, and more.

The client libraries will let you:

  • Perform authentication and discovery
  • Use the Mail, Calendar and Contacts API
  • Use the My Files and Sites API (currently .NET only, with JavaScript coming soon)
  • Use the Users and Groups API

 

You can program directly against the REST APIs to interact with Office 365, but it requires you to write and maintain code around managing authentication tokens, constructing the right URLs and queries for the API you want to access, and performing other such tasks.
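To illustrate that point, here is a hedged sketch of what a “raw” REST call for calendar events might look like using HttpClient. The OData path is an assumption based on the service URL used in the library examples below, and acquiring the Azure AD access token is not shown; treat it as an outline rather than a definitive API reference.

// A hand-rolled REST call (sketch only). The endpoint path, query option and
// token acquisition are assumptions; with the client libraries the same call
// becomes a LINQ query against client.Me.Events.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class RawRestSample
{
    static async Task Main()
    {
        string accessToken = "<access token acquired from Azure AD>";

        using (var http = new HttpClient())
        {
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);
            http.DefaultRequestHeaders.Accept.Add(
                new MediaTypeWithQualityHeaderValue("application/json"));

            // Hand-built URL and OData query string.
            string url = "https://outlook.office365.com/ews/odata/Me/Events?$top=10";

            string json = await http.GetStringAsync(url);
            Console.WriteLine(json);
        }
    }
}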

By using client libraries to access the Office 365 APIs, you can reduce the complexity of the code you need to write to access the APIs. We’re providing these libraries for .NET as well as JavaScript developers for use with the just-announced multi-device hybrid applications.

Here are some examples of how easy it is to access the Office 365 APIs using these libraries.

.NET C# code to authenticate and get upcoming events from your Office 365 calendar:

// Shows UI to authenticate
Authenticator authenticator = new Authenticator();
AuthenticationInfo result = await authenticator.AuthenticateAsync("https://outlook.office365.com");

The AuthenticateAsync method will prompt for a username and password and authenticate against the specified resource url, like outlook.office365.com in this case. Once you have the authentication information, you can create a client object that serves as the base for accessing all the APIs for Exchange:


// Create a client object
ExchangeClient client =
    new ExchangeClient(new Uri("https://outlook.office365.com/ews/odata"),
                       result.GetAccessToken);

Because we’re using .NET here, we get to take advantage of the native language capabilities, like LINQ, so querying the Office 365 calendar is as simple as writing a LINQ query and executing it:

// Obtain calendar event data
var eventsResults = await (from i in client.Me.Events
                           where i.End >= DateTimeOffset.UtcNow
                           select i).Take(10).ExecuteAsync();

With just those four lines of code you can start making calls to the Office 365 APIs!

We wanted to make sure that you can reach multiple device and service platforms with a consistent API, so the client libraries are portable .NET libraries, which means they also work with Android and iOS devices through Xamarin. Because authentication needs to display a UI that is different on the various platforms, we also provide platform-specific authentication libraries, which can then be used with the portable ones to provide an end-to-end experience.

For developers creating multi-device hybrid applications that target multiple device platforms through JavaScript, we also have JavaScript versions of these libraries that provide a similar experience while adopting JavaScript’s patterns and practices, such as using the promises pattern instead of await.

 

Here is the same example to authenticate and get calendar events in JavaScript:

var authContext = new O365Auth.Context();
authContext.getIdToken('https://outlook.office365.com/')
    .then((function (token) {
        // authentication succeeded
        var client = new Exchange.Client('https://outlook.office365.com/ews/odata',
            token.getAccessTokenFn('https://outlook.office365.com'));
        client.me.calendar.events.getEvents().fetch()
            .then(function (events) {
                // get the current page of calendar events
                var myevents = events.currentPage;
            }, function (reason) {
                // handle error
            });
    }).bind(this), function (reason) {
        // authentication failed
    });

The flow to authenticate and create a client object is similar across .NET and JavaScript, but you’re doing it in a way that should be natural to the language.

Along with the JavaScript files for these libraries, we are also including the TypeScript type definition (.d.ts)—in case you choose to develop your apps in TypeScript.

As you get started using these libraries, there are a few things to keep in mind. This is a very early preview release of the libraries that is meant to prove out the concept and get feedback on it. The libraries do not currently cover all the APIs provided by the services and some of the APIs in the library may not work. The APIs in the libraries themselves will definitely change in future updates.

Note that while we tend to call these “client” libraries, they also work with .NET server technologies like ASP.NET Web Forms and MVC, so you really get to target the breadth of the .NET platform.

 

Tooling updates

With today’s update of our Office 365 API Tools for Visual Studio 2013, the tool displays the available Office 365 services that you can add to your project. Once you’ve signed in with your Office 365 credentials, adding a service to your project is as easy as selecting the appropriate service and applying the required permissions.

Image

Once you submit the changes, Visual Studio performs the following:

  1. Registers an application (if there isn’t an application registered yet) in Microsoft Azure Active Directory to consume Office 365 services.
  2. Adds the following to the project:
    1. Client libraries from Nuget for the configured services.
    2. Sample code files that use the Client Libraries.

Project types supported

With the broad reach of the client libraries, the Office 365 API tool is now available for a variety of project types (client, desktop, and web) in Visual Studio. Here are all the project types supported with the May update:

  • .NET Windows Store Apps
  • Windows Forms Application
  • WPF Application
  • ASP.NET MVC Web Application
  • ASP.NET Web Forms Application
  • Xamarin Android and iOS Applications
  • Multi-device hybrid apps

Installing the latest update

To install the latest update, you can either:

  • Check for updates within Visual Studio. To do so, follow these steps:
    1. In the Visual Studio menu, click Tools->Extensions and Updates->Updates.
    2. You should see the update available for Office 365 API Tools.
    3. Click Update to update to the latest version.

–OR–

  • Download the extension and install it manually.

Once you’ve updated, you can invoke the Office 365 API tool as usual, that is, by going to your project node in the Solution Explorer and selecting Add->Connected Service from the context menu.

Looking forward to seeing your Apps out there when I visit the stores!!


MSDN references

Check out also the new SharePoint Online Solution Pack for branding and provisioning. This package also contains some examples, which originate from the AMS reference implementations. Here are the direct links for the Solution Pack.

You can find introduction to this SharePoint Online Solution Pack for branding and provisioning from following blog post – Introduction to SharePoint Online Solution Pack for branding and provisioning released.

The “Hybrid” SharePoint Online Model

Hybrid

The hybrid approach is not about merging information from two different site collections into one, or about making sure an on-premise document library has the same content as the document library in an online environment. So what does hybrid technically mean then? It basically means we have two separate environments that act and operate completely independently of each other.

SharePointOnline

 

Even the SharePoint service applications such as the user profile service, managed metadata service, and search cannot be shared between the on-premises farm(s) and SharePoint Online environment. Instead, administrators should choose to either fully deploy a service application in only one location, or configure an instance of the service in each environment. But still there are ways to integrate functionality between the two environments.

The idea is that you first segment the different workloads from SharePoint across the on-premise and online environments. You often see that commodity services like collaboration on team sites, news sites, project sites and so on are stored in the Online environment, while the more advanced scenarios often remain on-premise (think of BI capabilities, FAST Search or advanced custom solutions).

 

So where does the hybrid word come from then? It basically means that we stitch these two environments together using the same look and feel, so that end users have a completely transparent and rich experience and do not notice the difference between working in the on-premise environment or in the online environment. They can only see the difference by looking at the URL.

Single Sign On

In order to have such a completely transparent and rich experience from an end-user perspective, it is important that end users only need to authenticate once. This can be accomplished by implementing and configuring single sign on. Once this has been set up, there is a trust relationship between the on-premise and online environments, which ensures that end users who have already authenticated in the on-premise environment (Active Directory) don’t need to re-enter their password in the online environment. Navigating between the on-premise and online environments will therefore be transparent, without password prompts. Should you require more information on how this technology works or on how to implement it, please see the following links:

 

How Single Sign-On Works in Office 365
http://community.office365.com/en-us/w/sso/727.aspx

Prepare for Single Sign on:
http://onlinehelp.microsoft.com/en-us/office365-enterprises/ff652540.aspx

Plan for and deploy Active Directory Federation Services 2.0 for use with single sign-on
http://onlinehelp.microsoft.com/en-us/office365-enterprises/ff652539.aspx

Single sign-on: Roadmap
http://onlinehelp.microsoft.com/en-us/office365-enterprises/hh125004.aspx

Deploying and Configuring ADFS 2.0
http://www.youtube.com/watch?v=fwHIKlAPV0g

Questions about Single Sign On (SSO) with Office 365 for Education
http://blogs.technet.com/b/educloud/archive/2011/09/23/questions-about-single-sign-on-sso-with-office-365-for-education.aspx

Video Screencast: Complete setup details for federated identity access from on-premise AD to Office 365
http://blogs.msdn.com/b/plankytronixx/archive/2011/01/24/video-screencast-complete-setup-details-for-federated-identity-access-from-on-premise-ad-to-office-365.aspx

Branding

So how do we give these two environments the same look and feel (branding), so that the end user doesn’t notice the difference? This is not as simple as it sounds. In order to make the environments look and feel the same, you would need to design and apply the same master pages, use the same icons, images and style sheets. Next to that you need to make sure the global navigation of both environments will integrate seamlessly by linking to each other’s environment.

Image

More detailed information and things to consider when branding a SharePoint Online environment can be found here.

Search

Search is one area which has some integration capabilities. The integration is not ideal, as we can’t share the relevance of the search results between the two environments. But what we can do is have either two search boxes, one for on-premise content and one for online content, or use federated search. With federated search you issue one search query but get results from two different content sources, shown in two separate result sets. Below is a screenshot of search results from SharePoint and search results from Bing.

Image

Obviously you can customize the search results page and its layout so that it fits your needs. Bear in mind, though, that you can only set up federated search in an on-premise environment; it is not available in the Online environment (see also the Microsoft SharePoint Online for Enterprises Service Description). More info about the search integration capabilities can be found in the whitepaper “Hybrid SharePoint Environments with Office 365”.

 

 

User profile

A user’s my site and my profile should exist in a single environment only to ensure that there is a single correct and complete source of user data. Although the user profile service cannot be shared between environments, it is possible to link on-premises SharePoint User Profiles to Office 365 and vice versa. So whichever environment a user is currently browsing, if they access their own or another user’s profile, it will redirect to the environment that is hosting the service. More information on how to implement user profiles and my sites in a hybrid environment can be found in the whitepaper “Hybrid SharePoint Environments with Office 365”.

 

Business Connectivity Services

Since the November update of SharePoint Online, we can connect to Line Of Business (LOB) data stored either in your on-premise environment or in Azure using the Business Connectivity Services (BCS) component. As long as you have your LOB application exposed to the web, you should be able to hook the data up into SharePoint Online. For more information about BCS in SharePoint Online, please see the following resources:

Introduction to Business Connectivity Services in SharePoint Online
http://msdn.microsoft.com/en-us/library/hh412217.aspx

What’s New for BCS in SharePoint Online
http://msdn.microsoft.com/en-us/library/hh418045.aspx

SharePoint Online Developer Resource Center
http://msdn.microsoft.com/en-us/sharepoint/gg153540.aspx

 

 

 

Integrating other components

Though it can be challenging to accomplish forms of integration for other SharePoint components between the two environments, there are techniques and strategies to take into account when you are planning and designing a hybrid environment. A lot more detail about these techniques and strategies can be found in a blog post soon to follow.

Features from SharePoint 2010 Integration with SAP BusinessObjects BI 4.0

Image

One of the core concepts of Business Connectivity Services (BCS) for SharePoint 2010 is the external content type. External content types are reusable metadata descriptions of connectivity information and behaviours (stereotyped operations) applied to external data. SharePoint offers developers several ways to create external content types and integrate them into the platform.

 

The SharePoint Designer 2010, for instance, allows you to create and manage external content types that are stored in supported external systems. Such an external system could be SQL Server, WCF Data Service, or a .NET Assembly Connector.

This article shows you how to create an external content type for SharePoint named Customer based on given SAP customer data. The definition of the content type will be provided as a .NET assembly, and the data are displayed in an external list in SharePoint.

The SAP customer data are retrieved from the function module SD_RFC_CUSTOMER_GET. In general, function modules in a SAP R/3 system are comparable with public and static C# class methods, and can be accessed from outside of SAP via RFC (Remote Function Call). Fortunately, we do not need to program RFC calls manually. We will use the very handy ERPConnect library from Theobald Software. The library includes a LINQ to SAP provider and designer that makes our lives easier.

.NET Assembly Connector for SAP

The first step in providing a custom connector for SAP is to create a SharePoint project with the SharePoint 2010 Developer Tools for Visual Studio 2010. Those tools are part of Visual Studio 2010. We will use the Business Data Connectivity Model project template to create our project:

After defining the Visual Studio solution name and clicking the OK button, the project wizard will ask what kind of SharePoint 2010 solution you want to create. The solution must be deployed as a farm solution, not as a sandboxed solution. Visual Studio then creates a new SharePoint project with a default BDC model (BdcModel1). You can also create an empty SharePoint project and add a Business Data Connectivity Model project item manually afterwards. This will also generate a new node in the Visual Studio Solution Explorer called BdcModel1. The node contains a couple of project files: the BDC model file (file extension .bdcm), and the Entity1.cs and EntityService.cs class files.

Next, we add a LINQ to SAP file to handle the SAP data access logic by selecting the LINQ to ERP item from the Add New Item dialog in Visual Studio. This will add a file called LINQtoERP1.erp to our project. The LINQ to SAP provider is internally called LINQ to ERP. Double click LINQtoERP1.erp to open the designer. Now, drag the Function object from the designer toolbox onto the design surface. This will open the SAP connection dialog since no connection data has been defined so far:

Enter the SAP connection data and your credentials. Click the Test Connection button to test the connectivity. If you could successfully connect to your SAP system, click the OK button to open the function module search dialog. Now search for SD_RFC_CUSTOMER_GET, then select the found item, and click OK to open the RFC Function Module /BAPI dialog:

Image

The dialog provides you the option to define the method name and parameters you want to use in your SAP context class. The context class is automatically generated by the LINQ to SAP designer including all SAP objects defined. Those objects are either C# (or VB.NET) class methods and/or additional object classes used by the methods.

For our project, we need to select the export parameters KUNNR and NAME1 by clicking the checkboxes in the Pass column. These two parameters become our input parameters in the generated context class method named SD_RFC_CUSTOMER_GET. We also need to return the customer list for the given input selection. Therefore, we select the table parameter CUSTOMER_T on the Tables tab and change the structure name to Customer. Then, click the OK button on the dialog, and the new objects get added to the designer surface.

IMPORTANT: The flag “Create Objects Outside Of Context Class” must be set to TRUE in the property editor of the LINQ designer, otherwise LINQ to SAP generates the Customer class as nested class of the SAP context class. This feature and flag is only available in LINQ to SAP for Visual Studio 2010.

The LINQ designer has also automatically generated a class called Customer within the LINQtoERP1.Designer.cs file. This class will become our BDC model entity or external content type. But first, we need to adjust and rename our BDC model that was created by default from Visual Studio. Currently, the BDC model looks like this:

Rename the BdcModel1 node and file to CustomerModel. Since we already have an entity class (Customer), delete the file Entity1.cs and rename the EntityService.cs file to CustomerService.cs. Next, open the CustomerModel file and rename the designer object Entity1 to Customer. Then, change the entity identifier name from Identifier1 to KUNNR. You can also use the BDC Explorer for renaming. The final adjustment result should look as follows:

Image

The last step we need to do in our Visual Studio project is to change the code in the CustomerService class. The BDC model methods ReadItem and ReadList must be implemented using the automatically generated LINQ to SAP code. First of all, take a look at the code:

Image

As you can see, we basically have just a few lines of code. All of the SAP data access logic is encapsulated within the SAP context class (see the LINQtoERP1.Designer.cs file). The CustomerService class just implements a static constructor to set the ERPConnect license key and to initialize the static variable _sc with the SAP credentials as well as the two BDC model methods.

The ReadItem method, BCS stereotyped operation SpecificFinder, is called by BCS to fetch a specific item defined by the identifier KUNNR. In this case, we just call the SD_RFC_CUSTOMER_GET context method with the passed identifier (variable id) and return the first customer object we get from SAP.

The ReadList method, BCS stereotyped operation Finder, is called by BCS to return all entities. In this case, we just return all customer objects the SD_RFC_CUSTOMER_GET context method returns. The returned result is already of type IEnumerable<Customer>.
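Since that code is only shown as a screenshot above, here is a minimal sketch of what the CustomerService class might look like. The SAP context class, its generated SD_RFC_CUSTOMER_GET method and the Customer entity all come from the LINQ to SAP designer; the context class name, connection string format and the licensing call shown here are assumptions and will differ in your generated code.

using System.Collections.Generic;
using System.Linq;

namespace CustomerModel
{
    public partial class CustomerService
    {
        // SAP context class generated by the LINQ to SAP designer
        // (class name and connection string below are illustrative only).
        private static readonly SAPContext _sc;

        static CustomerService()
        {
            // Set the ERPConnect license key here (see the ERPConnect documentation)
            // and initialize the context with your SAP connection data.
            _sc = new SAPContext("ASHOST=sapserver;SYSNR=00;CLIENT=800;USER=demo;PASSWD=secret;LANG=EN");
        }

        // SpecificFinder: fetch the single customer identified by KUNNR.
        public static Customer ReadItem(string id)
        {
            return _sc.SD_RFC_CUSTOMER_GET(id, string.Empty).FirstOrDefault();
        }

        // Finder: return all customers for a wildcard name selection.
        public static IEnumerable<Customer> ReadList()
        {
            return _sc.SD_RFC_CUSTOMER_GET(string.Empty, "*");
        }
    }
}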

The final step is to deploy the SharePoint solution. Right-click on the project node in Visual Studio Solution Explorer and select Deploy. This will install and deploy the SharePoint solution on the server. You can also debug your code by just setting a breakpoint in the CustomerService class and executing the project with F5.

That’s all we have to do!

Now, start the SharePoint Central Administration panel and follow the link “Manage Service Applications”, or navigate directly to the URL http://<SERVERNAME>/_admin/ServiceApplications.aspx. Click on Business Data Connectivity Service to show all the available external content types:

On this page, we find our deployed BDC model including the Customer entity. You can click on the name to retrieve more details about the entity. Right now, there is just one issue open. We need to set permissions!

Mark the checkbox for our entity and click on Set Object Permissions in the Ribbon menu bar. Now, define the permissions for the users you want to allow to access the entity, and click the OK button. In the screen shown above, the user administrator has all the permissions possible.

In the next and final step, we will create an external list based on our entity. To do this, we open SharePoint Designer 2010 and connect to the SharePoint website.

Click on External Content Types in the Site Objects panel to display all the content types (see above). Double click on the Customer entity to open the details. The SharePoint Designer is reading all the information available by BCS.

In order to create an external list for our entity, click on Create Lists & Form on the Ribbon menu bar (see screenshot below) and enter CustomerList as the name for the external list.

OK, now we are done!

Open the list, and you should get the following result:

The external list shows all the defined fields for our entity, even though our Customer class, automatically generated by LINQ to SAP, has more than those four fields. This means you can display just a subset of the information for your entity.

Another option is to just select those fields required within the LINQ to SAP designer. With the LINQ designer, you can access not just the SAP function modules. You can integrate other SAP objects, like tables, BW cubes, SAP Query, or IDOCs. A demo version of the ERPConnect library can be downloaded from the Theobald Software homepage.

If you click the associated link of one of the customer numbers in the column KUNNR (see screenshot above), SharePoint will open the details view:

Image

 

 

How To : A library to create .mht files (available at request)

There are a number of ways to do this, including hosting Word or Excel on the Web Server and dealing with COM Interop issues, or purchasing third-party MIME encoding libraries, some of which sell for $250.00 or more. But there is no native .NET solution. So, being the curious soul that I am, I decided to investigate a bit and see what I could come up with. Internet Explorer offers a File / Save As option to save a web page as “Web Archive, single file (*.mht)”.

Image

What this does is create an RFC-compliant multipart MIME message. Resources such as images are serialized to their Base64 inline encoding representations, and each resource is demarcated with the standard multipart MIME header breaks. Internet Explorer, Word, Excel and most newsreader programs all understand this format. The format, if saved with the file extension “.eml”, will come up as a web page inside Outlook Express; if saved with “.mht”, it will come up in Internet Explorer when the file is double-clicked out of Windows Explorer; and — what many do not know — if saved with a “.doc” extension, it will load in MS Word, each with all the images intact, and in the case of the EML and MHT formats, with all of the hyperlinks fully functioning. The primary advantage of the format is, of course, that all the resources can be consolidated into a single file, making distribution and archiving much easier — including database storage in an NVarchar or NText type field.

 

System.Web.Mail, which .NET provides as a convenient wrapper around the CDO for Windows COM library, offers only a subset of the functionality exposed by the CDO library, and multipart MIME encoding is not a part of that functionality. However, through the wonders of COM Interop, we can create our own COM reference to CDO in the Visual Studio IDE, allowing it to generate a Runtime Callable Wrapper, and help ourselves to the entire rich set of functionality of CDO as we see fit.

 

One method in the CDO library that immediately came to my notice was the CreateMHTMLBody method. That’s MHTMLBody, meaning “Multipurpose Internet Mail Extension HTML (MHTML) Body”. Well, when I saw that, my eyes lit up like the LEDs on a 32-way Unisys box! This is a method on the CDO Message class; the method accepts a URI to the requested resource, along with some enumerations, and creates a multipart MIME-encoded email message out of the requested URI responses — including images, css and script — in one fell swoop.

 

“Ah”, you say, “How convenient”! Yes, and not only that, but we also get a free “multipart COM Interop Baggage” reference to the ADODB.Stream object – and by simply calling the GetStream method on the Message Class, and then using the Stream’s SaveToFile method, we can grab any resource including images, javascript, css and everything else (except video) and save it to a single MHT Web Archive file just as if we chose the “Save As” option out of Internet Explorer.

 

If we choose not to save the file, but instead want to get back the stream contents, no problem. We just call Stream.ReadText(Stream.Size) and it returns a string containing the entire MHT-encoded content. At that point we can do whatever we want with it – set a content header and Response.Write the content to the browser, for instance — or whatever.

 

For example, when we get back our “MHT” string, we can write the following code:

Response.ContentType = "application/msword";
Response.AddHeader("Content-Disposition", "attachment;filename=NAME.doc");
Response.Write(myDataString);

 

— and the browser will dutifully offer to save the file as a Word Document. It will still be Multipart MIME encoded, but the .doc extension on the filename allows Word to load it, and Word is smart enough to be able to parse and render the file very nicely. “Ah”, you are saying, “this is nice, and so is the price!”. Yup!

And, if you are serving this MIME-encoded file from out of your database, for example, and you would like it to be able to be displayed in the browser, just change the “NAME.doc” to “NAME.MHT”, and don’t set a content-type header. Internet Explorer will prompt the user to either save or open the file. If they choose “open”, it will be saved to the IE Temporary files and open up in the browser just as if they had loaded it from their local file system.

 

So, to answer a couple of questions that came up recently, yes — you can use this method to MHTML – encode any web page – even one that is dynamically generated as with a report — provided it has a URL, and save the MIME-encoded content as a string in either an NVarchar or NText column in your database. You can then bring this string back out and send it to the browser, images,css, javascript and all.

Now here is the code for a small, very basic “Converter” class I’ve written to take advantage of the two scenarios specified above. Bear in mind, there is much more available in CDO, but I leave this wondrous trail of ecstatic discovery to your whims of fancy:

using System;
using System.Web;
using CDO;
using ADODB;
using System.Text;

namespace PAB.Web.Utils
{
    public class MIMEConverter
    {
        // private ctor as our methods are all static here
        private MIMEConverter()
        {
        }

        public static bool SaveWebPageToMHTFile(string url, string filePath)
        {
            bool result = false;
            CDO.Message msg = new CDO.MessageClass();
            ADODB.Stream stm = null;
            try
            {
                msg.MimeFormatted = true;
                msg.CreateMHTMLBody(url, CDO.CdoMHTMLFlags.cdoSuppressNone, "", "");
                stm = msg.GetStream();
                stm.SaveToFile(filePath, ADODB.SaveOptionsEnum.adSaveCreateOverWrite);
                msg = null;
                stm.Close();
                result = true;
            }
            catch
            {
                throw;
            }
            finally
            {
                // cleanup here
            }
            return result;
        }

        public static string ConvertWebPageToMHTString(string url)
        {
            string data = String.Empty;
            CDO.Message msg = new CDO.MessageClass();
            ADODB.Stream stm = null;
            try
            {
                msg.MimeFormatted = true;
                msg.CreateMHTMLBody(url, CDO.CdoMHTMLFlags.cdoSuppressNone, "", "");
                stm = msg.GetStream();
                data = stm.ReadText(stm.Size);
            }
            catch
            {
                throw;
            }
            finally
            {
                // cleanup here
            }
            return data;
        }
    }
}

 

NOTE: When using this type of COM Interop from an ASP.NET web page, it is important to remember that you must set the AspCompat="true" directive in the Page declaration or you will be very disappointed at the results! This forces the ASP.NET page to run in the STA threading model, which permits “classic ASP” style COM calls. There is, of course, a significant performance penalty incurred, but realistically, this type of operation would only be performed upon user request and not on every page request.
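For reference, the attribute goes in the @ Page directive at the top of the .aspx page, something like this (the rest of the directive is omitted here):

<%@ Page Language="C#" AspCompat="true" %>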

The downloadable zip file below contains the entire class library and a web solution that will exercise both methods when you fill in a valid URI with protocol, and a valid file path and filename for saving on the server. Unzip this to a folder that you have named “ConvertToMHT” and then mark the folder as an IIS Application so that a request such as “http://localhost/ConvertToMHT/WebForm1.aspx” will function correctly. You can then load the Solution file and it should work “out of the box”. And, don’t forget – if you have an ASP.NET web application that wants to write a file to the file system on the server, it must be running under an identity that has been granted this permission.

In Depth Look : Private Cloud Infrastructure as a Service Capabilities

saas[1]

 

The primary purpose of a Private Cloud Infrastructure as a Service capability is to provide well managed infrastructure services to the Platform and Software Layers. To achieve this, the Infrastructure Layer, highlighted in the Private Cloud Reference Model diagram below, includes five capabilities.


Figure 1: Private Cloud Reference Model

This document describes these Infrastructure Layer capabilities and the impact of Private Cloud Infrastructure as a Service (IaaS) patterns on their planning and design. These patterns are defined in the Private Cloud Principles, Concepts, and Patterns document and are summarized here:

  • Resource Pooling: Divides resources into partitions for management purposes.
  • Physical Fault Domain: The group of physical resources dependent on a single point of failure such as an Uninterruptible Power Supply (UPS).
  • Upgrade Domain: A group of resources upgraded as a single unit.
  • Reserve Capacity: Unallocated resources, which take over service in the event of a failed Physical Fault Domain.
  • Scale Unit: A collection of resources treated as a single unit of additional capacity.
  • Capacity Plan: A model that enables a private cloud to deliver the perception of infinite capacity.
  • Health Model: Defines how a service or system may remain healthy.
  • Service Class: Defines services delivered by Infrastructure as a Service.
  • Cost Model: The financial breakdown of a private cloud and its services.

The Health Model, Service Class, and Scale Unit patterns directly affect Infrastructure and are detailed in the relevant sections later. Conversely, private cloud infrastructure design directly affects Physical Fault Domains, Upgrade Domains, and the Cost Model. These relationships are shown in Figure 2 below.


Figure 2: Infrastructure Relationship with Patterns

Background

The private cloud principles “perception of continuous availability” and “resiliency over redundancy mindset” are designed to make a private cloud architect think differently.

Traditional solutions rely heavily on redundancy to achieve high availability and avoid failure. But redundancy at the facility (power) and infrastructure (network, server, and storage) layers is very costly. Modern cloud applications are designed with a different, holistic approach to achieving availability. This means shifting focus from building redundancy into the facility and physical infrastructure to engineering the entire solution to handle failures — eliminating them, or at least minimizing their impact.

This approach to availability relies on resilience as well as redundancy. Resilience means rapid, and ideally automatic, recovery from a failure. Redundancy is typically achieved at the application level. (A non-cloud example is Active Directory®, where redundancy is achieved by providing more domain controllers than are needed to handle the load.)

Customer interest in cost reduction will help drive adoption of this approach over the medium term. Removing power redundancy from racks or co-location rooms has a big impact on operational expenses, but this typically occurs only when the hosted application doesn’t have to be highly available, or when high availability is achieved through redundancy at the application layer – for example, Active Directory replication, or application layer mirroring such as SQL Server™ mirroring. Combining reductions in physical redundancy with virtualization results in lower capital and operational expenditure compared to a highly redundant infrastructure.

Applications that depend on a highly available infrastructure will not achieve their Service-Level Agreement (SLA) when placed on the type of infrastructure defined earlier. Customers are therefore likely to develop two environments when designing their private cloud: a standard environment with reduced facility and infrastructure redundancy, and a high-availability environment with traditional levels of redundancy.

Standard Environment:

  • No power redundancy to the rack (for example, one in-rack UPS)
  • No network redundancy to the servers (redundant core network)
  • Local storage, possibly redundant storage and storage network
  • Ideally no migration, or possibly quick migration

High-Availability Environment:

  • Redundant power to each server
  • Redundant network connections to each server
  • Redundant storage presented to each server
  • Live Migration

These two environments allow an Architect to differentiate service classifications from a high-availability perspective. The standard environment is appropriate for stateless workloads; stateful workloads will require the high-availability environment. Stateful and stateless machines are managed differently. Statefulness will likely appear as a characteristic of the service classifications.

Stateless workloads (web servers, for example) are typically redundant at the server level via a load-balanced farm. These servers could easily be hosted in the Standard Environment. If all stateless workloads had an automated build, the Standard Environment could do away with any form of VM migration – and simply deploy another VM after destroying the existing one, thereby saving the cost of shared storage.

Stateful workloads, on the other hand, require a specific management approach and impose higher costs on the consumer. Unless designed for high availability at the application level, they will require some form of redundancy in the infrastructure. Further, the High-Availability Environment requires Live Migration to enable maintenance of the underlying fabric and load balancing of the VMs.

Security

The number one concern of customers considering moving services to the cloud is security. Recent concerns expressed in industry forums are well founded, and they are good reasons to think through the end-to-end scenarios and attack surfaces presented when deploying multiple services from various departments in an organization on a private cloud.

In a cloud-based platform, regardless of whether it is a private or public cloud, customers will be working in an essentially virtualized environment. The platform or software will run on top of a shared physical infrastructure managed internally or by the service provider. The security architecture used by the applications will need to move up from the infrastructure to the platform and application layers. In a private cloud, this provides security in addition to that of the perimeter network.

Public cloud involves handing over control to a third party and sharing services with unrelated business entities or even competitors, and it requires a high degree of trust in the provider’s security model and practices. In many ways the security concerns of a private cloud are similar to those of a self-hosted or outsourced datacenter; however, the move to the virtualized, self-service, service-oriented paradigm inherent in private cloud computing introduces some additional security concerns.

First is the isolation of tenants from each other and the hosting infrastructure at both the compute and network layers. Virtualization is a part of any private cloud strategy and the security of this model is totally dependent on the ability to isolate one tenant from another and prevent the careless or malicious tenant from impacting the stability of the core infrastructure upon which all tenants rely.

Another concern is Authentication, Authorization and Auditing of access to the cloud services. Self-service implies that tenant administrators can initiate management processes and workflows that previously were carried out by IT. Any misconfiguration or excessive permissions granted to these users can impact the stability or security of the cloud solution.

Many private cloud security concerns are also shared by the traditional datacenter environment, which is not surprising since the private cloud is just an evolution of the traditional datacenter model. These include:

  • Impact to confidentiality, integrity or availability through exploitation of software vulnerabilities.
  • Unauthorized access due to weak controls or misconfiguration.
  • Impact to confidentiality, integrity or availability by malicious code.
  • Impact to confidentiality, integrity or availability of data.
  • Compliance with internal or industry specific regulations and standards.

Secure Virtualization Platform

The biggest risk in running in a multi-tenant virtualized environment is that a tenant running services on the same physical infrastructure as you can break out of its isolating partition and impact the confidentiality, integrity or availability of your workload and data. The security of the virtualization platform is therefore key to the isolation of, and non-interference between, the individual virtual machines running on the infrastructure.

Highly Automated Management, Monitoring and Reporting

Many management tasks involve multiple steps that must be completed in the proper sequence by multiple administrators across multiple systems. Any shortcuts, omissions or errors can leave assets vulnerable to unauthorized access or affect the reliability of components within a solution. Orchestrating discrete management and monitoring tasks into workflows that require proper authorization and approval greatly diminishes the chance of mistakes that affect the security of the solution.

Authentication, Authorization and Auditing

Most organizations have a common capability that provides an overarching framework for authentication and access control. A private cloud adds new parts to this picture: the hosting infrastructure and the virtual machine workloads that run on that infrastructure. This framework must be designed, and possibly extended, to provide a single point for managing identities and credentials, authentication services and a common security model for access to resources across the private cloud.

Multi-layer Security

Moving to a cloud-based platform requires a change in mind-set of developers and IT security professionals. Some of the risks of the public cloud are mitigated by using a private cloud architecture; however, the perimeter security protecting a private cloud should be seen as an addition to public cloud security practices, not an alternative. You cannot apply traditional defense-in-depth security models directly to cloud computing, but you should still apply the principle of multiple layers of security. By taking a fresh look at security when you move to a cloud-based model, you should aim for a more secure system rather than simply accepting the current level of security.

Security Governance

Enterprise IT systems are now typically well regulated and controlled. The security risks are well documented and therefore proper processes are put in place to develop new applications and systems, or to provision them from 3rd party vendors. It is very unlikely that a department manager would be able to purchase and install software without approval from the IT department.

With public cloud systems and Web browser clients however, it is possible that individual department managers could bypass the IT department and provision public cloud-based software. Indeed, they might use free cloud storage systems as a convenient means to synchronize documents without even considering that they are using public cloud services. Public cloud systems might be appealing to a manager as they could very quickly provision a new system and remove what they might see as unnecessary bureaucracy. They may even be unaware of the security and compliance policies that are in place to protect the organization. In a cloud-based landscape, we must protect corporate systems and data from these unauthorized, untested systems.

Facilities

Facilities represent the physical components – buildings, racks, power, cooling, and physical interconnects – that house or support a private cloud. It is beyond the scope of this document to provide detailed guidance on facilities, but the private cloud principles affect facility design.

The definition of a Scale Unit impacts power, cooling, space, racking, and cabling requirements. The team that defines a Scale Unit should include the personnel who design and manage these aspects of the facility, in addition to the procurement, Capacity Planning, and Service Delivery teams. The following lists some ramifications of Scale Unit size choices from a facilities perspective.

Small Scale Unit:

  • Benefit: Lower amount of physical labor needed to add a Scale Unit
  • Trade-offs: Complicates the Resource Pool, Fault Domain, and Reserve Capacity equation; inefficient; stranded (un-utilized) power; un-utilized space

Large Scale Unit:

  • Benefits: Allocation of full facilities units (for example, UPS, Rack, and Co-location Room) is easy to cost and engineer; reduces under-utilization of power, cooling, and space
  • Trade-off: Higher amount of labor to commission

Knowing how much power, cooling, and space each Scale Unit will consume enables the facilities team to perform effective Capacity Planning and the engineering team to effectively plan resources.

Compute, Network, and Storage Fabric

The term Fabric defines a collection of interconnected compute, network, and storage resources.

The concept of homogenous physical infrastructure, introduced in the Private Cloud Principles, Concepts, and Patterns guide, stipulates that all servers in a Resource Pool should be identical. Homogenizing the compute, storage, and network components in servers allows for predictable scale and performance. In other words, every server in a Resource Pool should have the same processor characteristics such as family (Intel/AMD), number of cores/CPUs, and generation (Xeon 2.6 Gigahertz (GHz)). The homogenized compute concept also stipulates that each server have the same amount of Random Access Memory (RAM) and the same number of connections to Resource Pool storage and networks. With these specifications met, any virtualized service could relocate from one failing or failed physical server to another physical server and continue to function identically.

Physical Server

The physical server hosts the hypervisor and provides access to the network and shared storage. In the Standard Environment, the facilities do not provide power redundancy, so the servers do not require dual power supplies.

Every server will be a member of a single compute Resource Pool and a single Physical Fault Domain. Assuming all servers are homogeneous (as recommended), they will all be members of a single Upgrade Domain.

Capacity Planning must be done for each server specification, as its size (CPU and RAM specification) will determine how many virtual machines it is able to host. This is covered in greater detail in the Private Cloud Planning Guide for Service Delivery.
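As a simplified, hypothetical illustration of that sizing exercise (the figures below are invented, and a real plan would also model CPU oversubscription):

// Hypothetical capacity sketch: how many VMs fit on one host, based on RAM alone.
int hostRamGb = 192;        // physical RAM per server (assumed)
int parentPartitionGb = 8;  // reserved for the parent partition and hypervisor (assumed)
int vmRamGb = 8;            // RAM per VM in this service class (assumed)
int vmsPerHost = (hostRamGb - parentPartitionGb) / vmRamGb;  // = 23 VMs per host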

Server specification selection impacts the Scale Unit, Cost Model, and service class. Scale Units have a finite amount of power and cooling, so server efficiency has an impact on a private cloud. It may be that all power in a Scale Unit is consumed before all physical space. The cost of servers impacts the Cost Model irrespective of whether this cost is passed onto the consumer. Selecting only small one-unit servers will limit the architect’s ability to define a range of service classifications. The server needs to accommodate the largest service classification after the parent partition and hypervisor consume their resources.

Microsoft research shows servers with processors one or two models behind the latest versions offer a better price, performance, and power consumption ratio than the newer processors.

The Private Cloud Reference Architecture dictates that the “concept of homogenization of physical infrastructure” be adopted for each Resource Pool. Server specifications (CPU, RAM) may vary between Resource Pools, but this complicates Fabric Management (defined in the Private Cloud Planning Guide for Systems Management), which spans Resource Pools and Capacity Planning, and may necessitate different service classes for each pool.

Delivering IaaS requires that the service is pre-defined and delivered consistently. To achieve consistent performance, the VMs must have equal resources available to them from each server, in other words, the same CPU cycles and RAM. If servers within a Resource Pool do not provide homogeneous performance and RAM, consistent performance cannot be guaranteed.

Absolute homogenization may be hard to maintain over the long term as server models may be discontinued by the vendor; therefore relationships between Resource Pools, Scale Units, and server model longevity must be considered carefully.

The following lays out some of the benefits and trade-offs of homogeneous and heterogeneous Resource Pools.

Homogeneous Physical Infrastructure:

  • Benefits: Predictable performance within a Resource Pool; guaranteed Live Migration across the fabric
  • Trade-off: Reuse of existing equipment may not be possible

Heterogeneous Physical Infrastructure:

  • Benefits: Possible reuse of existing equipment; allows for a broader range of server classes
  • Trade-offs: VMs cannot be moved between Resource Pools; more upfront work to make sure Live Migration will work appropriately

In addition, servers should support the following requirements to achieve an automated infrastructure and resiliency:

Automated Infrastructure

  • Wake On Local Area Network
  • Remote BIOS Upgrades/Configurations
  • Boot from Flash
  • Pre-Boot Execution Environment (PXE) for remote imaging
  • Virtualization Support
    • Data Execution Prevention (DEP)
    • 64 bit CPUs
  • Standard Environment: 2 Network adapters that support TCP offload (TOE)
    • Management x 1
    • Consumer x 1
  • High-Availability Environment: 4 or 6 redundant network adapters that support TOE
    • Management x 2: Could be teamed for redundancy
    • Live Migration x 2: Could be teamed for redundancy
    • Consumer x 2: Could be teamed for resiliency
  • Standard Environment: Storage connections that meet the required service classification
    • For Internet Small Computer System Interface (iSCSI), 1 x Hardware iSCSI initiators: Could use vendor-specific software to achieve resiliency
    • For Fiber Channel, 1 x Fiber Channel host bus adapter (HBA): Could use vendor-specific software to achieve resiliency
  • High-Availability Environment: Redundant storage connections that meet the required service classification
    • For iSCSI, 2 x Hardware iSCSI initiators: Could use vendor-specific software to achieve resiliency
    • For Fiber Channel, 2x Fiber Channel HBAs: Could use vendor-specific software to achieve resiliency

To dynamically initiate remediation events in case of failure or impending failure of server components, each server is required to display warnings, errors, and state information for the following:

  • CPU
    • State (Busy/Ready)
    • Utilization
    • Heat
    • Fans
  • RAM
    • Utilization
    • Error-Correcting Code (ECC) Errors
  • Storage
    • Read/Write Failures
    • Predictive Failures
  • Network Interface Cards (NICs)
    • Port State
    • Send/Receive Errors
  • Motherboard
    • Server Post Errors
  • Power Supply
    • State
    • Active / Passive
    • Power Output Variations
  • Fans
    • Speed
    • State

Storage

To achieve the perception of infinite capacity, proactive Capacity Management must be performed, and storage capacity added ahead of demand. The amount of storage added as a single unit (a Storage Scale Unit) will depend on the rate of storage consumption, hardware vendor lead time, and the level of risk the business wishes to assume (that is, weighing remaining unallocated capacity against the possibility of exhausting all capacity). This is detailed in Private Cloud Planning Guide for Service Delivery.

Storage will be placed in Storage Resource Pools, from which it is automatically allocated to consumers. Though Resource Pools are not a new concept for Storage Area Networks (SANs), allowing the infrastructure to allocate storage on-demand based on policy may be a new approach for many organizations. Further, the SAN must present an application programming interface (API) to Fabric Management to allow automation of allocation and provisioning.

The storage provided within a private cloud must be consistent in performance and availability. This means the Input/output (I/O) Operations per Second (IOPS) cannot vary significantly. If there is a need to make different levels of storage performance available to users of a private cloud, it can be accomplished through multiple service classifications. A private cloud is intended, however, to provide a limited set of standardized services; therefore, variances should be carefully considered.

The cost of providing the storage within a private cloud should be clearly defined. This permits metering, and possibly allocation of costs to consumers. If different classes of storage are provided for different levels of performance, their costs should be differentiated. For example, if a SAN is being used in an environment, it is possible to have storage tiers where faster Solid State Drives (SSD) are used for more critical workloads. Less-critical workloads can be placed on Tier 2 Serial Attached SCSI (SAS) drives, and even less-critical workloads on Tier 3 SATA drives.

The Private Cloud Reference Architecture assumes the storage arrays and the storage network are redundant, with no single point of failure beyond the array itself. In this regard, the storage array can be considered a Fault Domain.

The design should adopt some form of de-duplication technology to reduce storage consumption.

As the storage array is a single point of failure, it should display health information to the systems monitoring service to make sure that any outages and their impact are quickly identified. Providing snapshots and mirroring between arrays for continuity is beyond the scope of this guidance.

Physical Storage Switches

If an Architect follows the recommendation to allow any VM to execute on any server in a Resource Pool, Virtual Hard Disks (VHDs) should reside on a SAN. While it is possible to host VHDs locally, the guidance assumes that they are hosted on a SAN.

A key decision in private cloud design is whether to use iSCSI or Fiber Channel for storage. If iSCSI is utilized to house virtual workload storage, it is suggested that each virtualization host include iSCSI HBAs instead of standard NICs for performance reasons.

The purpose of a storage switch is to provide resilient and flexible connectivity between shared storage and physical servers. The storage switch must meet peak storage I/O requirements for the virtual services. In addition, the interconnect speeds between switches should be evaluated to determine the maximum throughput for switch-to-switch communications. This may limit the maximum number of hosts that can be placed on each switch.

While switch throughput is important, attention should also be paid to the number of available switch ports needed to support the physical virtualization hosts. Refer to the switch hardware vendor to make sure it meets these requirements.

Physical storage switch requirements include:

  • Dedicated switch port on each switch for each host and storage processor connection. This is needed for redundancy and I/O optimization.
  • iSCSI traffic separated from all other IP traffic, preferably on its own switched infrastructure or logically through a virtual local area network (VLAN) on a shared IP switch. This segregates data access from traditional network communications for host-to-host and workload operations and provides data security.
  • Redundant power supplies and cooling fans increase the number of faults the storage switch can withstand.
  • Programmatic interface to support firmware upgrades and configuration.

Physical Storage Subsystem

Stateless workloads can be hosted on Direct-Attached Storage (DAS) instead of SAN, driving down the cost of service. The downside is that Fabric Management has to handle transitioning active user connections between VMs homed on different hosts, as VM migration is impossible. This may mean tighter integration with the network than is specified in this document (in order to know when all connections to a VM have been abandoned or terminated before stopping the VM, for example).

SAN storage, while more expensive, provides advantages:

  • The VM can be re-homed to other servers.
  • Live Migration can be employed.
  • Backup (of the VM) can occur out of band (for example, taking snapshots).
  • Capacity can be increased almost limitlessly.

The logical storage configuration (or storage classification) should be designed to meet requirements in the following areas:

  • Capacity: To provide the required storage space for the virtual service data and backups.
  • Performance Delivery: To support the required number of IOPS and throughput.
  • Fault Tolerance: To provide the desired level of protection against hardware failures. If a SAN is used, this may include redundant HBA and switches.
  • Manageability: To provide a high degree of platform self-management. This requires a programmatic interface to provide automated configuration and firmware upgrades.

Additionally, a private cloud must meet the following requirements to make sure that it is highly available and well-managed:

  • Multiple paths to the disk array for redundancy. Should a disk fail, hot or warm spare disks can provide resiliency in the provisioned storage. Consult the storage vendor for specific recommendations.
  • A storage system with automatic data recovery, to allow an automatic background process to rebuild data onto a spare or replacement disk drive when another disk drive in the array fails.
  • Redundant power supplies and cooling fans, to increase the number of faults the Storage Array can withstand.

Network

The Private Cloud Reference Architecture assumes that the network presented to servers is not redundant for the Standard Environment and is redundant for the High-Availability Environment.

The network is tightly coupled with physical servers. Each Compute Resource Pool includes the network switches necessary for the servers to operate; each Scale Unit includes a pre-defined and fixed number of servers and switches.

The switches must be monitored to make sure no workloads saturate the network. A private cloud is designed as a general-purpose infrastructure. Workloads that challenge the network with high utilization may not be good candidates for a private cloud unless separate Resource Pools are created specifically to handle these workloads.

Switches are members of network upgrade domains, but the definition and membership of upgrade domains will likely vary depending on the nature of the upgrade. If switches are not redundant (for example, in the Standard Environment), the whole Resource Pool will need to be taken offline for switch maintenance, which requires switch reboots.

Network hardware (switches and load balancers) must display an API to Fabric Management that enables automated management of networks such as creation of VLANs, Virtual IP addresses (VIP), and addition or removal of hosts from the VIP.

Physical Network

Some key decisions that should be made to increase the bandwidth of the physical networks relate to the use of Live Migration, the requirements for port security, and the need for link aggregation. The benefits and trade-offs of using Live Migration are:

Use Live Migration:

  • Benefits: Transparent movement of stateful applications; transparent infrastructure upgrades
  • Trade-offs: Additional network switch ports will be required; more network adapters are required per virtualization host; greater Reserve Capacity may be required because of the cluster size limitation of 16 nodes

Do Not Use Live Migration:

  • Benefits: Fewer switch ports are required; fewer network adapters are required per virtualization host; ideal for stateless applications
  • Trade-offs: No transparent movement of stateful applications; for stateful applications, infrastructure upgrades will need to be coordinated with VM owners

To support the dynamic characteristics of a private cloud, a network switch should support a remote programmatic interface – for firmware upgrades, and prioritization of traffic for quality of service. These switches should be dedicated for a private cloud to maintain predictable performance and to minimize risks associated with human interaction. As defined earlier, the servers need to be connected to at least two networks, management and consumer, with live migration (if required). The connections should always be the same; for example, network adapter 1 to management, network adapter 2 to consumer, and network adapter 3 to Live Migration.

If iSCSI is chosen for the storage interconnects, iSCSI traffic should reside in an isolated VLAN in order to maintain security and performance levels. This iSCSI traffic should not share a network adaptor with other traffic, for example the management or consumer network traffic.

The interconnect speeds between switches should be evaluated to determine the maximum bandwidth for communications. This could affect the maximum number of hosts which can be placed on each switch.

When designing network connectivity for a well-managed infrastructure, the virtualization hosts should have the following specific networking requirements:

  • Support for 802.1Q VLAN Tagging: To provide network segmentation for the virtualization hosts, supporting management infrastructure and workloads. This is the preferred method to help secure and isolate data traffic for a private cloud.
  • Remote Out-of-band Management Capability: To monitor and manage servers remotely over the network regardless of whether the server is turned on or off.
  • Support for PXE Version 2 or Later: To facilitate automated server provisioning.

To dynamically initiate remediation events in response to the failure or impending failure of network switch components, each switch is required to display warnings, errors, and state information for the following:

  • CPU
    • Utilization
    • Temperature
  • Flash Memory
    • Utilization
  • Interface Details
    • Port State
    • Port Errors
    • Bandwidth Utilization
  • Power Supply
    • State
    • Active / Passive
    • Power Output Variations
  • Fans
    • Speed
    • State

Storage Switch/Subsystem Health Model

To dynamically initiate remediation events in response to either the failure or impending failure of storage switches and storage subsystem components, each component is required to display warnings, errors, and state information for the following:

Storage Switch

  • CPU
    • Utilization
    • Temperature
  • Flash Memory
    • Utilization
  • Interface Details
    • Port State
    • Port Errors
    • Bandwidth Utilization
  • Power Supply
    • State
    • Active / Passive
    • Power Output Variation
  • Fans
    • Speed
    • State

Storage Subsystem

  • CPU
    • Utilization
    • Temperature
  • Flash Memory
    • Utilization
  • Service Processor
    • State
    • Errors
    • IOPS
  • Disks
    • Read / Write Failures
    • Predictive Failures
  • Power Supply
    • State
    • Active / Passive
    • Power Output Variations
  • Fans
    • Speed
    • State

Hypervisor

The hypervisor exposes the VM services to consumers. It needs to be configured identically on all hosts in a Resource Pool, and ideally all hosts in the private cloud. Fabric Management will orchestrate the addition of virtual switches, machines, and disks.

An architect needs to decide whether the private cloud should use CPU Resource Reservations to ensure predictable performance of VMs. The benefits and trade-offs are:

Use CPU Resource Reservations:

  • Benefit: Consistent VM performance for consumers
  • Trade-offs: A fixed number of VMs per host might lead to low utilization of resources; the toolset may not support setting resource reservations

Do Not Use CPU Resource Reservations:

  • Benefit: A variable number of VMs per host means resource utilization can be maximized
  • Trade-offs: Consumers do not experience consistent VM performance; one VM can adversely affect the processing performance of others

The decision is driven by whether efficiency or consistency is more important for the private cloud.

The architect could elect to provide different classes of services – one which uses resource reservations to deliver predictability, and another which shares the resources. Separate Resource Pools could be deployed accordingly, along with differential pricing to incent the consumers to exhibit desired behavior.
Resource reservations will not prevent a host from saturating the network and crippling the performance of other hosts. As stated in the Network section earlier, this needs monitoring.

Parent Partition

The parent partition provides the hypervisor with access to physical resources such as network and storage. It also hosts the hypervisor management interfaces. The parent partition needs to be configured identically on all servers in a Resource Pool.

If an architect elects to create a service classification which depends on consuming LUNs directly (not via the parent partition), the parent partition must be configured to present the pass-through for this storage. Further, this storage must be available to all parent partitions in that Resource Pool to enable VM portability between hosts.

The parent partition displays health information for the server, the parent partition operating system, and the hypervisor. The health monitoring system, in turn, consumes this information to enable Capacity Management and Fabric Management.

Management Layers

Task Execution

Task execution comprises the low-level management operations that can be performed on a platform, generally surfaced through the command line or an Application Programming Interface (API). The capability to execute tasks must not only exist, but the usage semantics should be consistent across members of a fault domain to enable automation using a common format. When differences in semantics exist, the automation layer is forced to compensate for them through custom code in the orchestration, or even to use different execution hosts or engines within a fault domain.

Automation

The automation layer is made up of the foundational automation technology plus a series of single purpose commands and scripts that perform operations such as starting or stopping a virtual machine, restarting a server, or applying a software update. These atomic units of automation are combined and executed by higher-level management systems. The modularity of this layered approach dramatically simplifies development, debugging, and maintenance.

Orchestration

In much the same way that an enterprise resource planning (ERP) system manages a business process such as order fulfillment and handles exceptions such as inventory shortages, the orchestration layer provides an engine for IT-process automation and workflow. The orchestration layer is the critical interface between the IT organization and its infrastructure and transforms intent into workflow and automation.

Ideally, the orchestration layer provides a graphical user interface in which complex workflows that consist of events and activities across multiple management-system components can be combined, to form an end-to-end IT business process such as automated patch management or automatic power management. The orchestration layer must provide the ability to design, test, implement, and monitor these IT workflows.

Service Management

Service management provides the means for automating and adapting IT service management best practices, such as those found in the IT Infrastructure Library (ITIL), to provide built-in processes for incident resolution, problem resolution, and change control.

Self Service

Self Service capability is a characteristic of private cloud computing and must be present in any implementation. The intent is that users can approach a self-service capability and be presented with the options available for provisioning in an organization. The capability may be basic, where only a virtual machine with a pre-defined configuration can be provisioned, or it may be more advanced, allowing configuration options on top of the base configuration and leading up to a platform capability or service.

Self service capability is a critical business driver that enables members of an organization to become more agile in responding to business needs with IT capabilities, in a manner that aligns and conforms with internal business IT requirements and governance.

This means the interface between IT and the business is abstracted to a simple, well defined and approved set of service options that are presented as a menu in a portal or available from the command line. The business selects these services from the catalog, begins the provisioning process, and is notified upon completion; the business is then only charged for what it actually uses.

This is analogous to capability available on Public Cloud platforms.

The entities that consume self service capabilities in an organization are individual business units, project teams, or any other department in the organization that have a need to provision IT resources. These entities are referred to as Tenants. In a private cloud tenants are granted the ability to provision compute and storage resources as they need them to run their workload. Connectivity to these resources is managed behind the scenes by the fabric management layers of the private cloud.

Tenant administrators are granted access to a self-service portal where they can initiate workflows to provision virtualized services in the appropriate configuration and capacity. For example compute resources may be available in small, medium or large instance capacities and also storage of the appropriate size and performance characteristics. Resources are provisioned without any intervention from infrastructure personnel in IT and the overall progress is tracked and reported by the fabric management layer and reported through the portal.

A chargeback model defines how tenants will be charged for using the cloud resources. The charge is typically the number and size of resources provisioned multiplied by the amount of time they are provisioned for. This information is available to tenant administrators through the self-service portal, along with the ability to produce cost reports.
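A minimal sketch of that arithmetic (the rates, sizes and counts below are invented purely for illustration):

// Hypothetical chargeback: resources provisioned x rate x hours provisioned.
decimal vmRatePerHour = 0.12m;             // assumed rate for one "medium" VM instance
decimal storageRatePerGbPerHour = 0.0002m; // assumed rate for provisioned storage
int vmCount = 4, storageGb = 500, hoursProvisioned = 720;

decimal charge = (vmCount * vmRatePerHour * hoursProvisioned)
               + (storageGb * storageRatePerGbPerHour * hoursProvisioned);  // = 417.60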

Tenants are granted the ability to manage, monitor and report on the resources that they have provisioned.

How To : Use JSON and SAP NetWeaver together

Background

Imagesap2[1]
In this example, SAP is used as the backend data source, with the NWGW (NetWeaver Gateway) adapter making it consumable from a .NET client in OData format.

Since the NWGW component is hosted on premise and our .NET client is hosted in Azure, we are consuming this data from Azure through the Service Bus relay. While transferring data from on premise to Azure over the SB relay, we faced performance issues for a single user with large volumes of data, as well as for concurrent users with relatively small data. So I did a POC for improving performance by consuming the OData service in JSON format.

What I Did?

I’ve created a simple WCF Data Service which has no underlying data source connectivity. In this service when the context is initializing, a list of text messages is generated and exposed as OData.

Here is that simple service code:

[Serializable]
public class Message
{
    public int ID { get; set; }
    public string MessageText { get; set; }
}

public class MessageService
{
    List<Message> _messages = new List<Message>();

    public MessageService()
    {
        for (int i = 0; i < 100; i++)
        {
            Message msg = new Message
            {
                ID = i,
                MessageText = string.Format("My Message No. {0}", i)
            };
            _messages.Add(msg);
        }
    }

    public IQueryable<Message> Messages
    {
        get
        {
            return _messages.AsQueryable<Message>();
        }
    }
}

[ServiceBehavior(IncludeExceptionDetailInFaults = true)]
public class WcfDataService1 : DataService<MessageService>
{
    // This method is called only once to initialize service-wide policies.
    public static void InitializeService(DataServiceConfiguration config)
    {
        // TODO: set rules to indicate which entity sets
        // and service operations are visible, updatable, etc.
        // Examples:
        config.SetEntitySetAccessRule("Messages", EntitySetRights.AllRead);
        config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V3;
    }
}
I expose one endpoint to the Azure Service Bus so that the client can consume this service through the SB endpoint. After hosting the service, I’m able to fetch data with a simple OData query from the browser.
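For illustration, the browser queries look something like the following (the host name is the placeholder used in the client code later in this post, and the $format=json option assumes a WCF Data Services version that supports JSON light):

https://abc.servicebus.windows.net/SimpleService/WcfDataService1/Messages
https://abc.servicebus.windows.net/SimpleService/WcfDataService1/Messages?$format=json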

I’m also able to fetch the data in JSON format.

After that, I create a console client application and consume the service from there.

Sample Client Code

class Program
{
    static void Main(string[] args)
    {
        List<Thread> lst = new List<Thread>();

        for (int i = 0; i < 100; i++)
        {
            Thread person = new Thread(new ThreadStart(MyClass.JsonInvokation));
            person.Name = string.Format("person{0}", i);
            lst.Add(person);
            Console.WriteLine("before start of {0}", person.Name);
            person.Start();
            //Console.WriteLine("{0} started", person.Name);
        }
        Console.ReadKey();
        foreach (var item in lst)
        {
            item.Abort();
        }
    }
}

public class MyClass
{
    // svc_SendingRequest is an event handler defined elsewhere in the client (not shown in this post).
    public static void JsonInvokation()
    {
        string personName = Thread.CurrentThread.Name;
        Stopwatch watch = new Stopwatch();
        watch.Start();
        try
        {
            SimpleService.MessageService svcJson =
                new SimpleService.MessageService(new Uri
                ("https://abc.servicebus.windows.net/SimpleService/WcfDataService1"));
            svcJson.SendingRequest += svc_SendingRequest;
            svcJson.Format.UseJson();
            var jdata = svcJson.Messages.ToList();

            watch.Stop();
            Console.WriteLine("Person: {0} – JsonTime First Call time: {1}",
                personName, watch.ElapsedMilliseconds);

            for (int i = 1; i <= 10; i++)
            {
                watch.Reset(); watch.Start();
                jdata = svcJson.Messages.ToList();
                watch.Stop();
                Console.WriteLine("Person: {0} – Json Call {1} time: {2}",
                    personName, 1 + i, watch.ElapsedMilliseconds);
            }

            Console.WriteLine(jdata.Count);
        }
        catch (Exception ex)
        {
            Console.WriteLine(personName + ": " + ex.Message);
        }
        Thread.Sleep(100);
    }

    public static void AtomInvokation()
    {
        string personName = Thread.CurrentThread.Name;

        try
        {
            Stopwatch watch = new Stopwatch();
            watch.Start();
            SimpleService.MessageService svc =
                new SimpleService.MessageService(new Uri
                ("https://abc.servicebus.windows.net/SimpleService/WcfDataService1"));
            svc.SendingRequest += svc_SendingRequest;
            var data = svc.Messages.ToList();

            watch.Stop();
            Console.WriteLine("Person: {0} – XmlTime First Call time: {1}",
                personName, watch.ElapsedMilliseconds);

            for (int i = 1; i <= 10; i++)
            {
                watch.Reset(); watch.Start();
                data = svc.Messages.ToList();
                watch.Stop();
                Console.WriteLine("Person: {0} – Xml Call {1} time: {2}",
                    personName, 1 + i, watch.ElapsedMilliseconds);
            }

            Console.WriteLine(data.Count);
        }
        catch (Exception ex)
        {
            Console.WriteLine(personName + ": " + ex.Message);
        }
        Thread.Sleep(100);
    }
}

 

What I Test After That
I tested two separate scenarios:

Scenario I: Single user with small and large volume of data
Measuring the data transfer time periodically in XML format and then in JSON format. You might notice that I’ve printed the first call separately in each screen shot, as it takes additional time to connect to the SB endpoint; the secret key authentication happens in the first call.

Small data set (array size 10): consume in XML format.

 

Consume in JSON format:

 

For a small set of data, the JSON and XML response times over the Service Bus relay are almost the same.

Consuming Large volume of data (Array Size 100)

 

Here the XML message size is around 51 KB. Now I’m going to consume the same list of data (Array size 100) in JSON format.

 

So from the above test scenario, it is very clear that the JSON response time is much faster than the XML response time, and the reason for that is message size. In this test, when I get the list of 100 records in XML format the message size is 51.2 KB, but the JSON message size is only 4.4 KB.

Scenario II: 100 concurrent users with a large volume of data (array size 100)
In this concurrent user load test, I haven’t done any service throttling or max concurrent connection configuration.

 

In the above screen shot, you will find some time-out errors in the XML response, which happen due to the high response time over the relay. But when I execute the same test with the JSON response, I find the response time is quite stable and faster than the XML response, and I don’t get any time-outs.

 

How Easy to Use UseJson()
If you are using WCF Data Services 5.3 or above and VS2012 Update 3, then to consume the JSON structure from the client you have to instantiate the proxy/context with .Format.UseJson().

Here you don’t need to load the Edmx structure separately by writing any custom code; .NET CodeGen will generate that code when you add the service reference.

 

But if that code is not generated from your environment, then you have to write a few lines of code to load the edmx and use it as .Format.UseJson(LoadEdmx());

Sample Code for Loading Edmx

public static IEdmModel LoadEdmx(string srvName)
{
    string executionPath = Directory.GetCurrentDirectory();
    DirectoryInfo di = new DirectoryInfo(executionPath).Parent;
    var parent1 = di.Parent;
    var srv = parent1.GetDirectories("Service References\\" +
        srvName)[0].GetFiles("service.edmx")[0].FullName;

    XmlDocument doc = new XmlDocument();
    doc.Load(srv);
    var xmlreader = XmlReader.Create(new StringReader(doc.DocumentElement.OuterXml));

    IEdmModel edmModel = EdmxReader.Parse(xmlreader);
    return edmModel;
}
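For example, with the client context from the sample above, the call might look something like this (assuming the service reference is named SimpleService, as in the earlier code):

var svcJson = new SimpleService.MessageService(
    new Uri("https://abc.servicebus.windows.net/SimpleService/WcfDataService1"));
svcJson.Format.UseJson(LoadEdmx("SimpleService"));
var jdata = svcJson.Messages.ToList();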

New version of Prism released – Get it now free!!

Prism helps developers who want to create a Windows Store business app using C#, XAML, the Windows Runtime, and development patterns such as Model-View-ViewModel (MVVM) and event aggregation.

Prism includes two libraries, a reference implementation called AdventureWorks Shopper, Quickstarts and associated documentation.

PrimForWindowsRuntime[1]

 

This is an update from the version released in May for Windows 8.

The guidance demonstrates:

  • How to implement pages, touch, navigation, settings, suspend/resume, search, tiles, and tile notifications.
  • How to implement the Model-View-ViewModel (MVVM) pattern.
  • How to validate user input for correctness.
  • How to manage application data.
  • How to test your app and tune its performance.

What’s new in the Windows 8.1 version

Documentation

  • Created a developer task topic to help developers learn how to complete key Windows Store dev tasks for validation, creating pages, navigation, touch, tiles, search, performance, testing, deployment, extended splash screen, incremental loading, Model-View-ViewModel (MVVM),  loosely coupled communication, and using the Prism libraries.
  • Added the AdventureWorks Shopper logical architecture to help you understand what code you need to write for a Windows Store app vs the code the Prism library provides.
  • Updated PDF for Windows 8.1.
  • Provided release notes on CodePlex including a change log and late breaking news.

AdventureWorks Shopper Reference Implementation

  • Created AutoRotatingGridView grid control to create a fluid page layout that responds to user requests to change the page’s size and orientation
  • Demonstrated using the IncrementalUpdateBehavior Blend Behavior for large data to improve user perceived performance
  • Cleaned up styles
  • Used Flyout/MenuFlyout instead of popup
  • Changed FlyoutViews to use SettingsFlyout
  • Used out of the box control for Watermark
  • Used Blend Behaviors
  • Used SearchBox & new search APIs
  • Updated top/bottom app bars to use CommandBars and Action Buttons
  • Used Windows.Web.Http.HttpClient instead of System.Net.Http.HttpClient

Prism for Windows Runtime

  • Updated VisualStateAwarePage to detect page size and orientation
  • Removed FlyoutService and FlyoutView
  • Removed SearchPaneService and SearchQueryArguments. Used new SearchBox control instead.
  • Added support for an extended splashscreen

Quickstarts

  • Created Incremental Loading Quickstart to demonstrate how to improve end user perceived performance for a large grid by handling the ContainerContentChanging event, or by using the IncrementalUpdateBehavior Blend Behavior vs traditional data binding.
  • Created Extended Splash Screen Quickstart to demonstrate how to use the Prism library to create an extended splash screen.

Where to get it?

  • Documentation on the Windows Development Center.
  • PDF version of the documentation will be available later this month.
  • Source code for AdventureWorks Shopper reference implementation and the Prism libraries.
  • Source code for the associated quickstarts.
  • Via NuGet – use the NuGet Package Manager in Visual Studio and search online for Prism.StoreApps and Prism.PubSubEvents

If you need the source code for the AdventureWorks Shopper reference implementation and the Prism library that runs on Windows 8 we moved it to our CodePlex site.

Where to start?

  • Review the AdventureWorks reference implementation. After you download the code, see Getting started with Prism library for instructions on how to compile and run the reference implementation, as well as understand the Visual Studio solution structure.
  • Review Quickstarts. The Quickstart samples focus on specific tasks such as validation, event aggregation, bootstrapping an MVVM app, and adding an extended splash screen to your app.
  • Create an app. If you want to create your own app using Prism see Using Prism for the Windows Runtime.
  • Explore developer tasks. Learn how the Prism team implemented many of the tasks required to create a Windows Store app.
  • Review the documentation. The associated documentation outlines the key decisions and lessons learned to create a Windows Store business app.
  • Review the release notes. The release notes provide late breaking updates and a more detailed log of the changes in this release.

 

What code do I write and what does Prism library provide?

We included the AdventureWorks logical architecture in the documentation to help you understand what code is provided by the Prism library and what code you will need to create for your Windows Store business app.

 

Logical architecture of a Windows Store business app that uses Prism

Community

Prism for the Windows Runtime has a community site where you can post questions, provide feedback, connect with other users to share ideas, and find additional content such as extensions and training material. Community members can also help Microsoft plan and test future releases of Prism for the Windows Runtime. For more info see patterns & practices: Prism for the Windows Runtime.

So go download the code and get started creating your Windows Store app with Prism. We want to hear about your successes and challenges on our CodePlex site. What else do we need to add to the library and associated documentation? Many of the additions to this release came from user feedback from the CodePlex site.

How To : Use SharePoint Dashboards & MSRS Reports for your Agile Development Life Cycle

The Problem We Solve

Agile BI is not a term many would associate with MSRS Reports and SharePoint Dashboards. While many organizations first turn to the Microsoft BI stack because of its familiarity, stitching together Microsoft’s patchwork of SharePoint, SQL Server, SSAS, MSRS, and Office creates administrative headaches and requires considerable time spent integrating and writing custom code.

This Showcase outlines the ease of accomplishing three of the most fundamental BI tasks with LogiXML technology as compared to MSRS and SharePoint:

  • Building a dashboard with multiple data sources
  • Creating interactive reports that reduce the load on IT by providing users self-service
  • Integrating disparate data sources

Read below to learn how an agile BI methodology can make your life much easier when it comes to dashboards and reports.

Building a Dashboard with LogiXML vs. MSRS + SharePoint

Microsoft’s only solution for dashboards is to either write your own code from scratch, manipulate SharePoint to serve a purpose for which it wasn’t initially designed, or look to third party apps. Below are some of the limitations to Microsoft’s approach to dashboards:

  • Limited Pre-Built Elements: Microsoft components come with only limited libraries of pre-built elements. In addition to actual development work, you will need to come up with an idea of how everything will work together. This necessitates becoming familiar with best practices in dashboards and reporting.
  • Sophisticated Development Expertise Required: While Microsoft components provide basic capabilities, anything more sophisticated is development resource-intensive and requires you to take on design, execution, and delivery. Any complex report visualizations and logic, such as interactive filters, must be written in code by the developer.
  • Limited Charts and Visualizations: Microsoft has a smaller sub-set of charts and visualization tools. If you want access to the complete library of .NET-capable charts, you’ll still need to OEM another charting solution at additional expense.
  • Lack of Integrated Workflow: Microsoft does not include workflow features sets out of the box in their BI offering.

LogiXML technology is centered on Logi Studio: an elemental, agile BI design environment which lets you simply choose from hundreds of powerful and configurable pre-built elements. Logi’s pre-built elements equip developers with tools to speed development, as well as the processes and logic required to build and manage BI projects. Below is a screen shot of the Logi Studio while building new dashboards.

agile-bi.jpg


Logi developers can easily create static or user-customizable dashboards using the Dashboard element. A dashboard is a collection of panels containing Logi reports, which in turn contain table, charts, images, etc. At runtime, the user can customize the dashboard by rearranging these panels on the browser page, by showing or hiding them, and even by changing their contents using adjustable reporting criteria. The data displayed within the panels can be configured, as in any Logi report, to link to other reports, providing drill-down functionality.

 

[Screenshot: a Logi dashboard with tabs and user customization enabled]

The dashboard displayed above has tabs and user customization enabled. The Dashboard element provides customization features, such as drag-and-drop panel positioning, support for built-in parameters the user can access to adjust the panel’s data contents, and a panel selection list that determines which panels will be displayed. AJAX techniques are utilized for web server interactions, allowing selective updates of portions of the dashboard. Dashboard customizations can be saved on an individual-user basis to create a highly personalized view of the data.

The Dashboard Wizard

The ‘Create a Dashboard’ wizard assists developers in creating dashboards by populating the report definition with the necessary dashboard-related elements. You can easily point to any data source by selecting from a variety of DataLayer types, including SQL, StoredProcedures, Web Services, Files, and more. A simple to use drag and drop SQL Query builder is also integrated, to offer a guided approach to constructing queries when connecting to your database.

[Screenshot: the ‘Create a Dashboard’ wizard]

Using the Dashboard Element

The Dashboard element is used to create the top level structure for all of your interactive panels within the final output. Under your dashboards, you can optionally add any number of Dashboard Panels, Panel Parameters for dynamic filtering, and even automatic refresh features with AJAX-based refresh timers.

[Screenshot: the Dashboard element with Dashboard Panels and Panel Parameters]

Changing Appearance Using Themes and Style Sheets

The appearance of a dashboard can be changed easily by assigning a theme to your report. In addition, or as an alternative, you can change dashboard appearance using style. The Dashboard element has its own Cascading Style Sheet (CSS) file containing predefined classes that affect the display colors, font sizes, button labels, and spacing seen when the dashboard is displayed. You can override these classes by adding classes with the same name to your own style sheet file.


Ad Hoc Reporting Creation with LogiXML: Analysis Grid

The Analysis Grid is a managed reporting feature giving end users virtual ad hoc capability. It is an easy to use tool that allows business users to analyze and manipulate data and outputs in multiple and powerful ways.

[Screenshot: the Analysis Grid]


Create an Analysis Grid by using the “Create Analysis Grid” wizard, or by simply adding the AnalysisGrid element into your definition file. Like the dashboard, data for the Analysis Grid can be accessed from any of the data options, including SQL databases, web sources, or files. You also have the option to launch the interactive query builder wizard for easy, drag-drop, SQL query creation.

The Analysis Grid is composed of three main parts: the data grid itself, i.e. a table of data to be analyzed; various action buttons at the top, allowing the user to perform actions such as create new columns with custom calculations, sort columns, add charts, and perform aggregations; and the ability to export the grid to Excel, CSV, or PDF format.

The Analysis Grid makes it easy to perform what-if analyses through features like filtering. The Grid also makes data-presentation impactful through visualization features including data driven color formatting, inline gauges, and custom formula creation.

Ad Hoc Reporting Creation with Microsoft

While simple ad hoc capabilities, such as enabling the selection of parameters like date ranges, can be accomplished quickly and easily with Microsoft, more sophisticated ad hoc analysis is challenging due to the following shortcomings.

Platform Integration Problems

Microsoft’s BI strategy is not unified and is strongly tied to SQL Server. To obtain analysis capabilities, you must build cubes in Analysis Services, a separate product with its own, different security architecture. Next, you will need to build reports that talk to SQL Server, again using separate products.

Dashboards require a SharePoint portal which is, again, a separate product with separate requirements and licensing. If you don’t use this, you must completely code your dashboards from scratch. Unfortunately, Microsoft Reporting Services doesn’t play well with Analysis Services or SharePoint since these were built on different technologies.

SharePoint itself offers an out of the box portal and dashboard solution but unfortunately with a number of significant shortcomings. SharePoint was designed as a document management and collaboration tool as opposed to an interactive BI dashboard solution. Therefore, in order to have a dashboard solution optimized for BI, reporting, and interactivity you are faced with two options:

  • Build it yourself using .NET and a combination of third party components
  • Buy a separate third party product

Many IT professionals find these to be rather unappealing options, since they require evaluating a new product or components, and/or a lot of work to build and make sure it integrates with the rest of the Microsoft stack.

Additionally, while SQL Server and other products support different types of security architectures, Analysis Services only has support for using integrated Windows NT security models to access cubes and therefore creates integration challenges.

Moreover, for client/ad hoc tools, you need Report Writer, a desktop product, or Excel – another desktop application. In addition to requiring separate licenses, these products don’t even talk to one another in the same ways, as they were built by different companies and subsequently acquired by Microsoft.

Each product requires a separate and often disconnected development environment with different design and administration features. Therefore to manage Microsoft BI, you must have all of these development environments available and know how to use them all.

Integration of Various Data Sources: LogiXML vs. Microsoft

LogiXML is data neutral, allowing you to easily connect to all of your organization’s data spread across multiple applications and databases. You can connect with any data source or data model and even combine data sources such as current data accessed through a web service with past data in spreadsheets.

Integration of Various Data Sources with Microsoft

Working with Microsoft components for BI means you will be faced with the challenge of limited support for non-Microsoft based databases and outside data sources. The Microsoft BI stack is centered on SQL Server databases and therefore the data source is optimized to work with SQL Server. Unfortunately, if you need outside content it can be very difficult to integrate.

Finally, Microsoft BI tools are designed with the total Microsoft experience in mind and are therefore optimized for Internet Explorer. While other browsers and devices might be usable, the experience isn’t optimized and may lack features or render differently.

 

Free & Licensed Windows 8, Azure, Office 365, SharePoint On-Premise and Online Tools, Web Parts, Apps available.
For more detail visit https://sharepointsamurai.wordpress.com or contact me at tomas.floyd@outlook.com

FREE Microsoft Dynamics CRM 2011 List Component for Microsoft SharePoint Server 2010

 

 

CRM2011 – SharePoint 2010 Integration? Glue CRM 2011 & SharePoint 2010 together? Make CRM 2011 and SharePoint 2010 converse? I wasn’t sure what to call this exactly. “Hooking together” works for me!

Now that we have a CRM 2011 instance and a SharePoint site working, let’s get them connected up! Download the Microsoft Dynamics CRM 2011 List Component for Microsoft SharePoint Server 2010.

Accept the License Terms.

Extract the files to a folder (I chose C:\CRM List).

You will get a prompt “The Installation is complete.” Click OK.

Let’s go over to the SharePoint Central Administration server to install the list component we just extracted. Connect to http://localhost:48835/ (your port might be different, so be aware of this). Click Manage web applications.

Click the new SharePoint site’s web application, and then “General Settings” (the blue cogs).

Scroll down to Browser File Handling, choose Permissive, and click OK.
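If you would rather script this step than click through Central Administration, a minimal PowerShell sketch along these lines should do the same thing (the web application URL below is an assumption; substitute your own):

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Assumed URL of the content web application (adjust for your farm).
$webApp = Get-SPWebApplication "http://localhost:39083"

# Switch Browser File Handling from Strict to Permissive and persist the change.
$webApp.BrowserFileHandling = "Permissive"
$webApp.Update()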

Let’s head back over to our new SharePoint site. Click Site Actions at the top left, and then “Site Settings”.

Under Galleries click “Solutions”.


Click the word “Solutions” up top (you have to click the word “Solutions”, even though it looks selected), and then click “Upload Solution”.

Select the .wsp component that we extracted wayyy back at the top of this. I used C:\CRM List as my extract folder. Click OK.

You’ll get prompted at this point; I couldn’t activate the control on this screen (but it still needs to be done). We need to make sure some services are running before we can activate the solution. Click Close.

Head back to SharePoint Central Administration (http://localhost:48835).

Click System Settings –> Manage Services on this server

Click Start beside “SharePoint Foundation Sandboxed Code Service”. I also started “Microsoft SharePoint Foundation Subscription Settings Service” (by accident), which is why that one shows as Started too.
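(If you prefer PowerShell over clicking through Central Administration, a rough equivalent from the SharePoint 2010 Management Shell would be something like the following; the wildcard filter is an assumption based on the service’s display name.)

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Find the Sandboxed Code Service instance on this server and start it if it is not online yet.
Get-SPServiceInstance -Server $env:COMPUTERNAME |
    Where-Object { $_.TypeName -like "*Sandboxed Code*" -and $_.Status -ne "Online" } |
    Start-SPServiceInstance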

Now head back to our SharePoint site: http://localhost:39083/

Under Galleries click “Solutions”.

Click Solutions again, select crmlistcomponent, and then click “Activate” up top. Activate is now un-greyed out! Click Activate!

The solution has now been activated! Hurray!

There seems to be some confusion about whether or not you need to run a PowerShell script (AllowHtcExtn.ps1) to enable activation of SharePoint 2010 solutions. From what I’ve read, you would need to run it if SharePoint 2010 is running on a domain controller. I didn’t have to do this (and we’re on a domain controller), and I’ve yet to run into a problem with .htc stuff. Even the Microsoft Dynamics CRM 2011 Readme says:
“If you are using Microsoft SharePoint Server 2010 (On-Premises), you must add .htc extensions to the list of allowed file types:
a. Copy the AllowHtcExtn.ps1 script file to the server that is running Microsoft SharePoint Server 2010.
b. In the Windows PowerShell window or in the SharePoint Management Console, run the command: AllowHtcExtn.ps1 <SharePoint site URL>.
Example: AllowHtcExtn.ps1 http://servername”

Some people say the script works for them, and some say that just using the blog method (what we did) works.
The SharePoint configuration is complete at this point. You probably want to take a snapshot and name it “After SharePoint Configuration”. Let’s head over to our CRM server (localhost:85).

In CRM Click Settings –> Document Management –> Document Management Settings

Select the entities that you want to have documents enabled on. This will create a “Documents” area when you open an instance of the entity. I’ll just leave the defaults for now. At the bottom, enter the SharePoint site that you’ve created and click Next. This is the SharePoint server we installed the list component on. You’re not allowed to use localhost:port; use the computer name:port instead, as below.

Don’t select the checkbox; otherwise it will relate the document folders to those entities. Without checking the box you will end up with a structure like Site/EntityName/Record Name (which is what I want, especially if you’re using custom entities). Click Next.

If “Libraries are being created in the path”, click Next.

Everything should “Succeed”, Click Finish.

Let’s test this bad boy out now.

Create a new account called “Test”.

Click Save! Click “Documents” on the left side. You’ll get a prompt saying that the folder (Test) is being created under “Account”. Click OK.

Click Add.

Now you’ll probably get these errors: /crmgrid/scripts/DialogContainer.js and 403 FORBIDDEN! Depressing. The only real information I could find on this error wasn’t very clear, but I stumbled through it. It seems that CRM 2011 doesn’t enjoy being called localhost. Let’s fix this up.

The fix for this was to run inetmgr –> Click Microsoft Dynamics CRM –> click Stop

Click “Bindings…” on the right side. Click “Edit” on the items that show “localhost” and change them to my machine name: “win-b80icqrvluf”. This is so it has a “real” name to connect to.

Before:

After:

Now click “Start” on the right side.

Head back over to CRM (http://win-b80icqrvluf:85/CRMTest/main.aspx); make sure to use the host name, as it might give you the error again if you use localhost. Open your Test Account again.

Click Documents –> Add, you should now see this popup (it can take a while to load for the first time on the VM). If you continue to get the error, stop both CRM 2011 and Share Point 2010 servers and restart them. If that doesn’t work, try restarting the whole server.

Pick a file, and click OK.

The file should be uploaded to SharePoint now.

Head over to SharePoint at http://win-b80icqrvluf:39083 and click “All Site Content” or “Libraries”.

Click Account.

You can see that CRM has created a folder “Test” (for our record). It creates one folder per record. Click it to see the files associated with that record!

The files associated to the record “Test” in Accounts.

SharePoint and CRM have combined into a super awesome force of doom. But we’re still missing one core piece of functionality (due to not picking a port when we installed CRM).

 

 

Resource – Office 365 Powershell Commandlets

Before you can start working with the SharePoint Online cmdlets you must first download those cmdlets. Having the cmdlets as a separate download (separate from SharePoint on-premises that is) allows you to use any machine to run the cmdlets.


 

All we have to do is make sure we have PowerShell V3 installed along with the .NET Framework v4 or better (required by PowerShell V3). With these prerequisites in place simply download and install the cmdlets from Microsoft: http://www.microsoft.com/en-us/download/details.aspx?id=35588.

Once installed open the SharePoint Online Management Shell by clicking Start > All Programs > SharePoint Online Management Shell > SharePoint Online Management Shell.

Just like with the SharePoint Management Shell for on-premises deployments the SharePoint Online Management Shell is just a standard PowerShell window. You can see this by looking at the target attribute of the shortcut properties:

C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoExit -Command "Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking;"

As you can see from the shortcut, a PowerShell module is loaded: Microsoft.Online.SharePoint.PowerShell. Unlike with SharePoint on-premises, this is not a snap-in but a module, which is basically the new, better way of loading cmdlets. The nice thing about this is that, like with the snap-in, you can load the module in any PowerShell window and are not limited to using the SharePoint Online Management Shell.

(The -DisableNameChecking parameter of the Import-Module cmdlet simply tells PowerShell not to bother checking for valid verbs used by the loaded cmdlets, and avoids the warnings that would otherwise be generated by the fact that the module does use an invalid verb – specifically, Upgrade.) Note that unlike with the snap-in, there’s no need to specify threading options, because the cmdlets don’t use any unmanaged resources that need disposal.

Getting Connected

Now that you’ve got the SharePoint Online Management Shell installed you are now ready to connect to your tenant administration site. This initial connection is necessary to establish a connection context which stores the URL of the tenant administration site and the credentials used to connect to the site. To establish the connection use the Connect-SPOService cmdlet:

Connect-SPOService -Url https://contoso-admin.sharepoint.com -Credential gary@contoso.com

 

Running this cmdlet basically just stores a Microsoft.SharePoint.Client.ClientContext object in an internal static variable (or a sub-classed version of it at least). Future cmdlet calls then use this object to connect to the site, thereby negating the need to constantly provide the URL and credentials. (The downside of this object being internal is that we can’t extend the cmdlets to add our own, unless we want to use reflection which would be unsupported). To clear this internal variable (and make the session secure against other code that may attempt to use it) you can run the Disconnect-SPOService cmdlet. This cmdlet takes no parameters.

One tip to help make loading the module and then connecting to the site a tad bit easier would be to encapsulate the commands into a single helper method. In the following example I created a simple helper method named Connect-SPOSite which takes in the user and the tenant administration site to connect to, however, I default those values so that I only have to provide the password when I wish to get connected. I then put this method in my profile file (which you can edit by typing “ise $profile.CurrentUsersAllHosts”):

function Connect-SPOSite() {
    param (
        $user = "gary@contoso.com",
        $site = "https://contoso-admin.sharepoint.com"
    )
    if ((Get-Module Microsoft.Online.SharePoint.PowerShell).Count -eq 0) {
        Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking
    }
    $cred = Get-Credential $user
    Connect-SPOService -Url $site -Credential $cred
}
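With the helper function in place, getting connected is then just a matter of calling it (you are prompted only for the password), or overriding the defaults when needed:

# Use the defaults baked into the profile function; only the password is prompted for.
Connect-SPOSite

# Or override the defaults explicitly.
Connect-SPOSite -user "gary@contoso.com" -site "https://contoso-admin.sharepoint.com"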

 

SPO Cmdlets

Now that you’re connected you can finally do something interesting. First let’s look at the cmdlets that are available. There are currently only 30 cmdlets available to us and you can see the list of those cmdlets by typing “Get-Command -Module Microsoft.Online.SharePoint.PowerShell”. Note that all of the cmdlets will have a noun which starts with “SPO”. The following is a list of all the available cmdlets:

  • Site Groups
    • Get-SPOSiteGroup – Get the groups in an existing Site Collection.
    • New-SPOSiteGroup – Create a new group in a Site Collection (an example appears later in this article).
    • Remove-SPOSiteGroup – Remove an existing group from a Site Collection.
    • Set-SPOSiteGroup – Update the name, owner, or permission levels of an existing group.
  • Users
    • Add-SPOUser – Add a user to an existing Site Collection Site Group.
    • Get-SPOUser – Get an existing user.
    • Remove-SPOUser – Remove an existing user from the Site Collection or from an existing Site Collection Group.
    • Set-SPOUser – Set whether an existing Site Collection user is a Site Collection administrator or not.
    • Get-SPOExternalUser – Returns external users from the tenant’s folder.
    • Remove-SPOExternalUser – Removes a collection of external users from the tenancy’s folder.
  • Site Collections
    • Get-SPOSite – Retrieve an existing Site Collection.
    • New-SPOSite – Create a new Site Collection.
    • Remove-SPOSite – Move an existing Site Collection to the recycle bin.
    • Repair-SPOSite – If any failed Site Collection scoped health check rules can perform an automatic repair then initiate the repair.
    • Set-SPOSite – Set the Owner, Title, Storage Quota, Storage Quota Warning Level, Resource Quota, Resource Quota Warning Level, Locale ID, and/or whether the Site Collection allows Self Service Upgrade.
    • Test-SPOSite – Run all Site Collection health check rules against the specified Site Collection.
    • Upgrade-SPOSite – Upgrade the Site Collection. This can do a build-to-build (e.g., RTM to SP1) upgrade or a version-to-version (e.g., 2010 to 2013) upgrade. Use the -VersionUpgrade parameter for a version-to-version upgrade (see the example after this list).
    • Get-SPODeletedSite – Get a Site Collection from the recycle bin.
    • Remove-SPODeletedSite – Remove a Site Collection from the recycle bin (permanently deletes it).
    • Restore-SPODeletedSite – Restores an item from the recycle bin.
    • Request-SPOUpgradeEvaluationSite  – Creates a copy of the specified Site Collection and performs an upgrade on that copy.
    • Get-SPOWebTemplate – Get a list of all available web templates.
  • Tenants
    • Get-SPOTenant – Retrieves information about the subscription tenant. This includes the Storage Quota size, Storage Quota Allocated (used), Resource Quota size, Resource Quota Allocated (used), Compatibility Range (14-14, 14-15, or 15-15), whether External Services are enabled, and the No Access Redirect URL.
    • Get-SPOTenantLogEntry – Retrieves company logs (as of B2 only BCS logs are available).
    • Get-SPOTenantLogLastAvailableTimeInUtc – Returns the time when the logs are collected.
    • Set-SPOTenant – Sets the Minimum and Maximum Compatibility Level, whether External Services are enabled, and the No Access Redirect URL.
  • Apps
    • Get-SPOAppErrors – Retrieve monitoring error information for an installed app.
    • Get-SPOAppInfo – Return information about installed apps.
  • Connections
    • Connect-SPOService – Connect to the tenant administration site and store the connection context used by the other cmdlets.
    • Disconnect-SPOService – Clear the stored connection context.
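For example, the Upgrade-SPOSite cmdlet mentioned above could be used as follows to move a Site Collection from 2010 mode to 2013 mode (the URL is hypothetical):

Upgrade-SPOSite -Identity https://contoso.sharepoint.com/sites/Test -VersionUpgrade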

It’s important to understand that when working with all of the cmdlets which retrieve an object you will only ever be getting a simple data object which has no ability to act upon the source object. For example, the Get-SPOSite cmdlet returns an SPOSite object which has no methods and, though some properties do have a setter, they are completely useless and the object and its properties are not used by any other cmdlet (such as Set-SPOSite). This also means that there is no ability to access child objects (such as SPWeb or SPList items, to name just a couple).

The other thing to note is the lack of cmdlets for items at a lower scope than the Site Collection. Specifically, there is no Get-SPOWeb or Get-SPOList cmdlet, or anything of the sort. This can potentially be quite limiting for most real-world uses of PowerShell and, in my opinion, limits the usefulness of these new cmdlets to the initial setup of a subscription rather than its long-term maintenance.
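If you do need to reach below the Site Collection level today, one workaround is to drop down to the client object model from PowerShell yourself. The following is only a rough sketch, not part of the SPO module; it assumes the SharePoint client assemblies are installed at the paths shown and uses a hypothetical site URL:

# Load the client object model assemblies (paths are an assumption; adjust to where the CSOM is installed).
Add-Type -Path "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.dll"
Add-Type -Path "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"

$ctx = New-Object Microsoft.SharePoint.Client.ClientContext("https://contoso.sharepoint.com/sites/Test")
$cred = Get-Credential "gary@contoso.com"
$ctx.Credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($cred.UserName, $cred.Password)

# Load the web and its lists, then execute the batched request.
$web = $ctx.Web
$ctx.Load($web)
$ctx.Load($web.Lists)
$ctx.ExecuteQuery()

$web.Lists | Select-Object Title, ItemCount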

In the following examples I’ll walk through some examples of just a few of the more common cmdlets so that you can get an idea of the general usage of them.

Get a Site Collection

To see the list of Site Collections associated with a subscription or to see the details for a specific Site Collection use the Get-SPOSite cmdlet. This cmdlet has two parameter sets:

Get-SPOSite [[-Identity] <SpoSitePipeBind>] [-Limit <string>] [-Detailed] [<CommonParameters>]

Get-SPOSite [-Filter <string>] [-Limit <string>] [-Detailed] [<CommonParameters>]

The parameter that you’ll want to pay the most attention to is the -Detailed parameter. If this optional switch parameter is omitted, the SPOSite objects that are returned will only have their properties partially set. You might think that this is done to reduce the traffic between the server and the client; however, all the properties are still sent over the wire, they simply have default values for everything other than a couple of core properties (so I would assume the only performance improvement is in the query on the server). You can see the difference in the values that are returned by looking at a Site Collection with and without the details:

PS C:\> Get-SPOSite https://contoso.sharepoint.com/ | select *

LastContentModifiedDate   : 1/1/0001 12:00:00 AM
Status                    : Active
ResourceUsageCurrent      : 0
ResourceUsageAverage      : 0
StorageUsageCurrent       : 0
LockIssue                 :
WebsCount                 : 0
CompatibilityLevel        : 0
Url                       : https://contoso.sharepoint.com/
LocaleId                  : 1033
LockState                 : Unlock
Owner                     :
StorageQuota              : 1000
StorageQuotaWarningLevel  : 0
ResourceQuota             : 300
ResourceQuotaWarningLevel : 255
Template                  : EHS#1
Title                     :
AllowSelfServiceUpgrade   : False

PS C:\> Get-SPOSite https://contoso.sharepoint.com/ -Detailed | select *

LastContentModifiedDate   : 11/2/2012 4:58:50 AM
Status                    : Active
ResourceUsageCurrent      : 0
ResourceUsageAverage      : 0
StorageUsageCurrent       : 1
LockIssue                 :
WebsCount                 : 1
CompatibilityLevel        : 15
Url                       : https://contoso.sharepoint.com/
LocaleId                  : 1033
LockState                 : Unlock
Owner                     : s-1-5-21-3176901541-3072848581-1985638908-189897
StorageQuota              : 1000
StorageQuotaWarningLevel  : 0
ResourceQuota             : 300
ResourceQuotaWarningLevel : 255
Template                  : STS#0
Title                     : Contoso Team Site
AllowSelfServiceUpgrade   : True

Create a Site Collection

When we’re ready to create a Site Collection we can use the New-SPOSite cmdlet. This cmdlet is very similar to the New-SPSite cmdlet that we have for on-premises deployments. The following shows the syntax for the cmdlet:

New-SPOSite [-Url] <UrlCmdletPipeBind> -Owner <string> -StorageQuota <long> [-Title <string>] [-Template <string>] [-LocaleId <uint32>] [-CompatibilityLevel <int>] [-ResourceQuota <double>] [-TimeZoneId <int>] [-NoWait] [<CommonParameters>]

The following example demonstrates how we would call the cmdlet to create a new Site Collection called “Test”:

New-SPOSite -Url https://contoso.sharepoint.com/sites/Test -Title "Test" -Owner "gary@contoso.com" -Template "STS#0" -TimeZoneId 10 -StorageQuota 100

 

Note that the cmdlet also takes in a -NoWait parameter; this parameter tells the cmdlet to return immediately and not wait for the creation of the Site Collection to complete. If not specified then the cmdlet will poll the environment until it indicates that the Site Collection has been created. Using the -NoWait parameter is useful, however, when creating batches of Site Collections thereby allowing the operations to run asynchronously.
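For example, a batch of (hypothetical) Site Collections could be kicked off like this, with each call returning immediately:

# Hypothetical site names; each New-SPOSite call returns immediately because of -NoWait.
"HR", "Finance", "Legal" | ForEach-Object {
    New-SPOSite -Url "https://contoso.sharepoint.com/sites/$_" -Title $_ -Owner "gary@contoso.com" `
        -Template "STS#0" -StorageQuota 100 -NoWait
}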

One issue you might bump into is in knowing which templates are available for your use. In the preceding example we are using the “STS#0” template, however, there are other templates available for our use and we can discover them using the Get-SPOWebTemplate cmdlet, as shown below:

PS C:\> Get-SPOWebTemplate

Name                     Title                         LocaleId  CompatibilityLevel
----                     -----                         --------  ------------------
STS#0                    Team Site                         1033                  15
BLOG#0                   Blog                              1033                  15
BDR#0                    Document Center                   1033                  15
DEV#0                    Developer Site                    1033                  15
DOCMARKETPLACESITE#0     Academic Library                  1033                  15
OFFILE#1                 Records Center                    1033                  15
EHS#1                    Team Site – SharePoint Onl…     1033                  15
BICenterSite#0           Business Intelligence Center      1033                  15
SRCHCEN#0                Enterprise Search Center          1033                  15
BLANKINTERNETCONTAINER#0 Publishing Portal                 1033                  15
ENTERWIKI#0              Enterprise Wiki                   1033                  15
PROJECTSITE#0            Project Site                      1033                  15
COMMUNITY#0              Community Site                    1033                  15
COMMUNITYPORTAL#0        Community Portal                  1033                  15
SRCHCENTERLITE#0         Basic Search Center               1033                  15
visprus#0                Visio Process Repository          1033                  15

Give Access to a Site Collection

Once your Site Collection has been created you may wish to grant users access to the Site Collection. First you may want to create a new SharePoint group (if an appropriate one is not already present) and then you may want to add users to that group (or an existing one). To accomplish these tasks you use the New-SPOSiteGroup cmdlet and the Add-SPOUser cmdlet, respectively.

Looking at the New-SPOSiteGroup cmdlet you can see that it takes only three parameters, the name of the group to create, the permissions to add to the group, and the Site Collection within which to create the group:

New-SPOSiteGroup [-Site] <SpoSitePipeBind> [-Group] <string> [-PermissionLevels] <string[]> [<CommonParameters>]

In the following example I’m creating a new group named “Designers” and giving it the “Design” permission:

$site = Get-SPOSite https://contoso.sharepoint.com/sites/Test -Detailed

$group = New-SPOSiteGroup -Site $site -Group "Designers" -PermissionLevels "Design"

(Note that I’m saving the Site Collection to a variable just to keep the commands a little shorter; you could just as easily provide the string URL directly.)

Once the group is created we can then use the Add-SPOUser cmdlet to add a user to the group. Like the New-SPOSiteGroup cmdlet this cmdlet takes three parameters:

Add-SPOUser [-Site] <SpoSitePipeBind> [-LoginName] <string> [-Group] <string> [<CommonParameters>]

In the following example I’m adding a new user to the previously created group:

Add-SPOUser -Site $site -Group $group.LoginName -LoginName "tessa@contoso.com"

Delete and Recover a Site Collection

If you’ve created a Site Collection that you now wish to delete you can easily accomplish this by using the Remove-SPOSite cmdlet. When this cmdlet finishes the Site Collection will have been moved to the recycle bin and not actually deleted.

If you wish to permanently delete the Site Collection (and thus remove it from the recycle bin) then you must use the Remove-SPODeletedSite cmdlet. So to do a permanent delete it’s actually a two step process, as shown in the example below where I first move the “Test” Site Collection to the recycle bin and then delete it from the recycle bin:

Remove-SPOSite https://contoso.sharepoint.com/sites/Test -Confirm:$false

Remove-SPODeletedSite -Identity https://contoso.sharepoint.com/sites/Test -Confirm:$false

 

If you decide that you’d actually like to restore the Site Collection from the recycle bin you can simply use the Restore-SPODeletedSite cmdlet:

Restore-SPODeletedSite https://contoso.sharepoint.com/sites/Test

Both the Remove-SPOSite and the Restore-SPODeletedSite cmdlets accept a -NoWait parameter which you can provide to tell the cmdlet to return immediately.

Parting Thoughts

There are obviously many other cmdlets available to explore (per the previous list), however, I hope that in the simple samples shown in this article you will find that working with the cmdlets is quite easy and fairly intuitive.

The key thing to remember is that you are working in a stateless environment, so changes to an object such as SPOSite will not affect the actual Site Collection in any way, and cmdlets like Set-SPOSite will not honor changes made to those properties; Set-SPOSite uses nothing more than the URL property to know which Site Collection you are updating.
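In other words, you pass the URL and the new values directly to Set-SPOSite rather than modifying a retrieved SPOSite object, for example:

# The -Identity URL is what matters; changes made to a previously retrieved SPOSite object are ignored.
Set-SPOSite -Identity https://contoso.sharepoint.com/sites/Test -Title "Test Site (Renamed)" -StorageQuota 2000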

Though the existence of these cmdlets is definitely a good start and absolutely better than nothing, I have to say that I’m extraordinarily displeased with the number of available cmdlets and with how the module was implemented.

My biggest gripe is that the module is not extensible in any way, so if I wished to add cmdlets for the management of SPWeb objects or SPList objects I’d have to create a whole new framework, which would require an additional login because I wouldn’t be able to leverage the context object created by the Connect-SPOService cmdlet.

This results in a severely limiting product that prevents community and ISV generated solutions from “fitting in” to the existing model. Perhaps one day I’ll create my own set of cmdlets to show Microsoft how it should have been done…perhaps one day I’ll have time for such frivolities :) .

 

Select Master Page App for SharePoint 2013 now available!! (Get the SharePoint 2010 Select Master Page Web Part Free)

In Publishing sites, there is a layouts (application) page through which we can set a custom or other master page as the default master page. Unfortunately, this is missing in Team Sites.

This is what this solution is all about. It is targeted mainly for Team sites, since publishing sites already have a provision.

It adds a custom ribbon button in the Share and Track group of the Files ribbon tab of the Master Page Gallery. This is a SharePoint 2013 Hosted App. Refer to the documentation for the technical details.

 

The following screen shots depict the functionality.







 

The custom ribbon button will not be enabled if a folder is selected or if more than one item is selected.
If a single file is selected, the button is enabled irrespective of the file extension. Upon selecting a file and clicking the ribbon button, a pop-up dialog appears with the text “Working on it..”.

Then a confirmation alert appears, asking “Are you sure?”. Once the user confirms, a progress message is displayed in the pop-up dialog. If the selected file does not have a .master extension, the user sees the alert “This will work only for master pages.”.

If the selected master page is already set as the default and the ribbon button is clicked, the user sees the alert “The file at <url> is the current default master page. So please select another master page.”. If another master page is selected, the user sees the alert “Master Page Changed Successfully.

Please press CTRL + F5 for changes to reflect.”. Once the user clicks OK on the alert, the pop-up dialog closes, and pressing CTRL + F5 shows the updated master page. Any time the user clicks OK or Cancel on the alert screens, the parent screen is refreshed and the current selection is cleared.

The app requires Full Control on the host web, since this is required for setting the master page, and that’s precisely the reason why I couldn’t publish it in the Office Store.

The app has been tested on IE9 and the latest versions of Chrome and Firefox. It may not work on IE8 or on lower versions of other browsers if they don’t support HTML5. The app currently supports only English, and it will set the default master page only on the host web (where the app is installed), not on sub webs.

The app uses jQuery AJAX and REST APIs of SharePoint 2013.

To use the app, just upload the .app file to the App Catalog, add/install it to the host team site, trust it, and then navigate to the Master Page Gallery; you are good to go.

 

With this App, you will also receive the FREE SharePoint 2010 Select Master Page Web Part!!

It adds a custom ribbon button in the Share and Track group of the Documents group of Master Page Gallery.

It is a Sandbox solution and is implemented to set the master page of only the root site of a site collection, though it can be customized / extended for sub sites. It requires the user to be at least a Site Owner, to avoid unnecessary manipulation of the master page by contributors or other users. Refer to the documentation for the technical details.

The following screen shots depict the functionality.





 

 

How To : Setup MyTask List in SharePoint 2013

Overview

You are using SharePoint 2013 and you have deployed My Sites. You or your users have tasks assigned, but when they visit their My Site, they see the screen below. Despite the users having tasks assigned elsewhere in the system, My Site still shows no tasks, which is incorrect.

[Screenshot: the My Tasks page on the user’s My Site showing no tasks]

 

What is My Task List in SharePoint 2013?

By the architecture of the Newsfeed site in SharePoint 2013, the My Tasks list aggregates and displays all SharePoint and Project Server (if installed) task assignments right on the user’s My Site page. The tasks can be either private tasks or public tasks.

Prerequisites for a proper sync of My Tasks

  • Search Service Application – it is very important to have this service enabled and running. The aggregator checks every 3 hours for any new Tasks lists. Although the aggregator is supposed to react to SharePoint events / hints, these are known to not always trigger an aggregation, hence the importance of the indexer. It is very important to have an incremental / continuous crawl running.
  • Work Management Service Application (WMA) and the service running on the server.
  • User Profile Synchronization Service

Refreshing the My Tasks Page

The aggregator behind this page is triggered simply by visiting the page within the Newsfeed site, as long as the last trigger was more than 5 minutes ago. This delay is there to preserve the performance of the SharePoint farm. It can be changed using PowerShell, but I highly recommend against doing so for large farm deployments.

Possible problems causing sync not to work?

  1. The Work Management Service wasn’t running.
  2. Search wasn’t indexing anything yet. With no indexer, the aggregator could potentially not be performing any aggregation either.


Solution

  1. The Work Management Service should run on an App server; if required, create one from Central Admin (a quick PowerShell check for this is sketched after this list).
  2. The Work Management Service Application should be created with an app pool that runs under the profile app pool account.
  3. Create/ensure incremental crawls across all content sources, and set up people search and My Sites search.
  4. Ensure that continuous crawl is running.
  5. Wait until the crawl completes.
  6. Review the permissions of the profile app pool and portal app pool accounts on the specific databases; they need db_owner permissions on:
  • social DB
  • sync DB
  • profile DB
  • state service DB
  • managed metadata DB
  • My Site DB
  • portal content DB
  • projects content DB
  • teams content DB
  • communities content DB
  • search DB
  7. The User Profile Synchronization Service should be running.
  8. Run an IIS reset on all App and WFE servers at the same time.
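As a quick sanity check for items 1 and 2, the following sketch (run from the SharePoint 2013 Management Shell; the wildcard filters are assumptions based on the default type names) shows where the Work Management Service instance is running and which application pool the service application uses:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Is the Work Management Service instance online, and on which servers?
Get-SPServiceInstance |
    Where-Object { $_.TypeName -like "*Work Management*" } |
    Select-Object Server, TypeName, Status

# Does a Work Management Service Application exist, and which application pool does it use?
Get-SPServiceApplication |
    Where-Object { $_.TypeName -like "*Work Management*" } |
    Select-Object DisplayName, ApplicationPool, Status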


Introduction to the Unified Logging Service and Creating a Javascript Logging System

Microsoft SharePoint Foundation exposes a rich logging mechanism known as the Unified Logging Service (ULS) that enables developers to write useful information helping them to identify and troubleshoot issues during the application lifecycle. The ULS writes SharePoint Foundation events to the SharePoint Trace Log and stores them in the file system, typically inside the SharePoint root folder, in files named SERVER-YYYYMMDD-HHMM.log under \14\LOGS.

ULS exposes a rich managed object model enabling developers to specify their own configurations such as categories and severity while writing exceptions or trace message to the ULS logs. You can find more details on the managed API in the article Writing to the Trace Log from Custom Code.

With the evolution of a rich client object model in SharePoint 2010 that enables developers to build complex client applications, it is very important to write useful information that is not visible in the user interface but is recorded on the server so it can be monitored by administrators and developers.

To address these scenarios for applications running in thin-client browsers, SharePoint Foundation provides a web service named SharePoint Diagnostics (diagnostics.asmx). This web service enables a client application to submit diagnostic reports directly to the ULS logs.

This article focuses on how you can leverage the SharePoint Diagnostics web service to write trace messages from a custom JavaScript application into the ULS logs.

The following points are discussed:

  • Overview of the SendClientScriptErrorReport web method
  • Creating a simple JavaScript application to log trace messages by using SharePoint Diagnostics web service
  • Setting up the required configurations for enabling logging via the Diagnostics web service
  • Using the application
  • Using the ULS logging script with sandboxed solutions

The Diagnostics web service exposes a single method named SendClientScriptErrorReport that enables client applications to report errors to the ULS service. The following list summarizes the parameters required by the SendClientScriptErrorReport method, along with example values.

  • Message – A string containing the message to display to the client. Example: The value of the displaypage property is null or undefined; not a function object.
  • File – The URL file name associated with the current error. Example: customscript.js
  • Line – A string containing the line of code from which the error is being generated. Example: 9
  • Client – A string containing the client name that is experiencing the error. Example: <client><browser name=’Internet Explorer’ version=’9.0′></browser><language>en-us</language></client>
  • Stack – A string containing the call-stack information from the generated error. Example: <stack><function depth=’0′ signature=’myFunction()’>function myFunction() { displaypage();}</function></stack>
  • Team – A string containing a team or product name. Example: Custom SharePoint Application
  • originalFile – The physical file name associated with the current error. Example: customscript.js

In the list above, notice that the example values for Client and Stack are XML fragments, not single lines of text. This format is stated in the protocol specification documented in 3.1.4.1.2.1 SendClientScriptErrorReport. Even though the protocol specification requires a valid XML fragment for these parameters, the web-service call to this method still succeeds if the values supplied do not follow this schema; building the client and stack values in this way simply adds more information to the trace.

The parameter list also shows that, unlike the managed API, the SendClientScriptErrorReport web method does not provide any option to specify the category or severity of the message being logged. Looking at the method name and description, you might also expect the logged exception to carry a severity level of Error. However, any message logged through the SharePoint Diagnostics web service is always displayed under the category Unified Logging Service and has a trace log severity level of Verbose.

Later in this article, you will see the steps required to view the traces written through the SharePoint Diagnostics web service.

In this section, you create a JavaScript application that uses the Diagnostics web service to report errors to the ULS. The application contains a JavaScript file named ULSLogScript.js that contains the necessary functions to communicate and log traces to the Diagnostics web service. These functions are then called directly from any consumer script.

Note
This is a relatively simple application with just one file, so you are not creating a formal SharePoint solution; instead, you save the files directly to the Layouts directory in the SharePoint hive structure.

To create a JavaScript library containing the ULS logging logic

  1. Start Microsoft Visual Studio 2010.
  2. From the File menu, create a new JScript file and save it in the following path: <SharePoint Installation Folder>\14\TEMPLATE\LAYOUTS\LoggingSample\ULSLogScript.js.

    For example, C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\TEMPLATE\LAYOUTS\LoggingSample\ULSLogScript.js.

    Note
    You need to create a new directory named LoggingSample in the Layouts folder.
  3. Because you are using the JQuery library in the application, download the jquery-1.6.4.min.js file from the JQuery portal and add it to the LoggingSample folder created previously.
  4. Type or paste the following code into the ULSLogScript.js file.
    // Creates a custom ulslog object 
    // with the required properties.
    function ulsObject() {
        this.message = null;
        this.file = null;
        this.line = null;
        this.client = null;
        this.stack = null;
        this.team = null;
        this.originalFile = null;
    }
    

    The ulsObject function returns a new instance of a custom object with properties mapped to the parameters required by the SendClientScriptErrorReport method. This object is used throughout the script for performing various operations.

  5. Define the methods that populate the property values specified in the ulsObject method. Begin by defining the function that retrieves the client property. Following the ulsObject method, type or paste the following code.
    // Detecting the browser to create the client information
    // in the required format.
    function getClientInfo() {
        var browserName = '';
    
        if (jQuery.browser.msie)
            browserName = "Internet Explorer";
        else if (jQuery.browser.mozilla)
            browserName = "Firefox";
        else if (jQuery.browser.safari)
            browserName = "Safari";
        else if (jQuery.browser.opera)
            browserName = "Opera";
        else
            browserName = "Unknown";
    
        var browserVersion = jQuery.browser.version;
        var browserLanguage = navigator.language;
        if (browserLanguage == undefined) {
            browserLanguage = navigator.userLanguage;
        }
    
        var client = "<client><browser name='{0}' version='{1}'></browser><language>{2}</language></client>";
        client = String.format(client, browserName, browserVersion, browserLanguage);
     
        return client;
    }
    
    // Utility function to assist string formatting.
    String.format = function () {
        var s = arguments[0];
        for (var i = 0; i < arguments.length - 1; i++) {
            var reg = new RegExp("\\{" + i + "\\}", "gm");
            s = s.replace(reg, arguments[i + 1]);
        }
    
        return s;
    }
    

    The getClientInfo function uses the JQuery library to detect the current browser properties, such as the name and version, and then creates an XML fragment (as discussed previously) describing the browser details where the application is currently running. Additionally, a utility function named String.format assists with string formatting throughout the code.

  6. Next, you need a function to create the call stack for the exception raised in the script. Add the following functions to the ULSLogScript.js code.
    // Creates the callstack in the required format 
    // using the caller function definition.
    function getCallStack(functionDef, depth) {
        if (functionDef != null) {
            var signature = '';
            functionDef = functionDef.toString();
            signature = functionDef.substring(0, functionDef.indexOf("{"));
            if (signature.indexOf("function") == 0) {
                signature = signature.substring(8);
            }
    
            if (depth == 0) {
                var stack = "<stack><function depth='0' signature='{0}'>{1}</function></stack>";
                stack = String.format(stack, signature, functionDef);
            }
            else {
                var stack = "<stack><function depth='1' signature='{0}'></function></stack>";
                stack = String.format(stack, signature);
            }
    
            return stack;
        }
    
        return "";
    }
    

    The getCallStack function receives the function definition where the exception occurred and a depth as parameters. The depth parameter is used by the function to decide whether only the caller function signature is required or the complete function definition should be included. Based on the caller function definition, the getCallStack function extracts the required information, such as the signature and body, and creates an XML fragment as described in the protocol specification.

  7. Next, create a function that creates a SOAP packet in the format expected by the Diagnostics web service SendClientScriptErrorReport method. Type or paste the following functions in the ULSLogScript.js file.
    // Creates the SOAP packet required by SendClientScriptErrorReport
    // web method.
    function generateErrorPacket(ulsObj) {
        var soapPacket = "<?xml version=\"1.0\" encoding=\"utf-8\"?>" +
                            "<soap:Envelope xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" " +
                                           "xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" "+
                                           "xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                              "<soap:Body>" +
                                "<SendClientScriptErrorReport " +
                                  "xmlns=\"http://schemas.microsoft.com/sharepoint/diagnostics/\">" +
                                  "<message>{0}</message>" +
                                  "<file>{1}</file>" +
                                  "<line>{2}</line>" +
                                  "<stack>{3}</stack>" +
                                  "<client>{4}</client>" +
                                  "<team>{5}</team>" +
                                  "<originalFile>{6}</originalFile>" +
                                "</SendClientScriptErrorReport>" +
                              "</soap:Body>" +
                            "</soap:Envelope>";
     
        soapPacket = String.format(soapPacket, encodeXmlString(ulsObj.message), encodeXmlString(ulsObj.file), 
                     ulsObj.line, encodeXmlString(ulsObj.stack), encodeXmlString(ulsObj.client), 
                     encodeXmlString(ulsObj.team), encodeXmlString(ulsObj.originalFile));
     
        return soapPacket;
    }
    
    // Utility function to encode special characters in XML.
    function encodeXmlString(txt) {
        txt = String(txt);
        txt = jQuery.trim(txt);
        txt = txt.replace(/&/g, "&amp;");
        txt = txt.replace(/</g, "&lt;");
        txt = txt.replace(/>/g, "&gt;");
        txt = txt.replace(/'/g, "&apos;");
        txt = txt.replace(/"/g, "&quot;");
     
        return txt;
    }
    

    The generateErrorPacket function receives an instance of the ulsObj object and returns the SOAP packet for the SendClientScriptErrorReport method as a string in the expected format. Because the values for some parameters are expected as XML fragments, the encodeXmlString function is used to encode the special characters.

  8. When the SOAP packet has been defined, you need a function to issue an asynchronous request to the Diagnostics web service. Add the code below to the ULSLogScript.js file.
    // Function to form the Diagnostics service URL.
    function getWebSvcUrl() {
        var serverurl = location.href;
        if (serverurl.indexOf("?") != -1) {
            serverurl = serverurl.replace(location.search, '');
        }
     
        var index = serverurl.lastIndexOf("/");
        serverurl = serverurl.substring(0, index); // Keep everything up to (but not including) the last '/'.
        serverurl = serverurl.concat('/_vti_bin/diagnostics.asmx');
     
        return serverurl;
    }
    
    // Method to post the SOAP packet to the Diagnostic web service.
    function postMessageToULSSvc(soapPacket) {
        $(document).ready(function () {
            $.ajax({
                url: getWebSvcUrl(),
                type: "POST",
                dataType: "xml",
                data: soapPacket, //soap packet.
                contentType: "text/xml; charset=\"utf-8\"",
                success: handleResponse, // Invoke when the web service call is successful.
                error: handleError// Invoke when the web service call fails.
            });
        });
    }
    
    // Invoked when the web service call succeeds.
    function handleResponse(data, textStatus, jqXHR) {
        // Custom code...
        alert('Successfully logged trace to ULS');
     }
     
    // Invoked when the web service call fails.
    function handleError(jqXHR, textStatus, errorThrown) {
        //Custom code...
            alert('Error occurred in executing the web request');
    }
    

    The postMessageToULSSvc function performs an asynchronous HTTP request and posts the SOAP packet to the Diagnostics web service. The URL of the Diagnostics web service is dynamically constructed in the getWebSvcUrl function. The postMessageToULSSvc function also defines respective handlers for success and error responses. Instead of displaying alerts in the handlers, other logic can be written as required by the application.

  9. Finally, you need a function that is invoked automatically when an error occurs in the code. To register this function globally for all the JavaScript functions on the page, you attach this function to the window.onerror event. Add the following lines of code as the first line of the ULSLogScript.js file.
    // Registering the ULS logging function on a global level.
    window.onerror = logErrorToULS;
    
    // Set default value for teamName.
    var teamName = "Custom SharePoint Application";
    
    // Further add the logErrorToULS method at the end of the script.
    
    // Function to log messages to Diagnostic web service.
    // Invoked by the window.onerror message.
    function logErrorToULS(msg, url, linenumber) {
        var ulsObj = new ulsObject();
        ulsObj.message = "Error occurred: " + msg;
        ulsObj.file = url.substring(url.lastIndexOf("/") + 1); // Get the current file name.
        ulsObj.line = linenumber;
        ulsObj.stack = getCallStack(logErrorToULS.caller); // Create error call stack.
        ulsObj.client = getClientInfo(); // Create client information.
        ulsObj.team = teamName; // Declared in the consumer script.
        ulsObj.originalFile = ulsObj.file;
    
        var soapPacket = generateErrorPacket(ulsObj); // Create the soap packet.
        postMessageToULSSvc(soapPacket); // Post to the web service.
    
        return true;
    }
    

    The line window.onerror = logErrorToULS links the function logErrorToULS with the window.onerror event. This enables you to capture the required information such as the error message, line number, and error file. The teamName variable enables you to set a unique value with respect to the calling application. This can be overridden in the consumer scripts. The logErrorToULS function creates an instance of the ulsObj object and populates all of its properties. Here, you see that the stack property of the ulsObj object is set to logErrorToULS.caller. This provides the function definition of the method that invoked this function. The postMessageToULSSvc function is called to write the error information to the trace logs.

    Note
    Because you cannot specify the severity level of the trace message in the SendClientScriptErrorReport method, the message property of the ulsObj object is prepended with text indicating that the message logged is part of an exception.
  10. The logErrorToULS function is called automatically when an error occurs on the page, but to intentionally write a trace message to the ULS, you need one more function which can be called specifically. Add the following function just below the logErrorToULS function.
    // Function to log message to Diagnostic web service.
    // Specifically invoked by a consumer method.
    function logMessageToULS(message, fileName) {
        if (message != null) {
            var ulsObj = new ulsObject();
            ulsObj.message = message;
            ulsObj.file = fileName;
            ulsObj.line = 0; // We don't know the line, so we set it to zero.
            ulsObj.stack = getCallStack(logMessageToULS.caller);
            ulsObj.client = getClientInfo();
            ulsObj.team = teamName;
            ulsObj.originalFile = ulsObj.file;
    
            var soapPacket = generateErrorPacket(ulsObj);
            postMessageToULSSvc(soapPacket);
        }
    }
    

    Unlike the logErrorToULS function, the logMessageToULS function accepts the message to be logged and the file name where the error occurred as parameters.

So far, you have created the required logic to write trace messages or exceptions to the ULS logs. Now you need to write a function that consumes the logErrorToULS or logMessageToULS functions.

To create the consumer application

  1. Navigate to your SharePoint site.
  2. Create a new Web Parts page.
  3. Add a Content Editor Web Part in any of the available Web Part zones.
  4. Edit the Web Part and type or paste the following text in the HTML source.
    <script src="/_layouts/LoggingSample/jquery-1.6.4.min.js" type="text/javascript"></script>
     <script src="/_layouts/LoggingSample/ULSLogScript.js" type="text/javascript"></script>
     <script type="text/javascript">
            var teamName = "Simple ULS Logging";
            function doWork() {
                unknownFunction();
            }
            function logMessage() {
                logMessageToULS('This is a trace message from CEWP', 'loggingsample.aspx');
            }
     </script>
    
    <input type="button" value="Log Exception" onclick="doWork();" />
        <br /><br />
      <input type="button" value="Log Trace" onclick="logMessage();" />
    
    

    This HTML code contains the required script references to include the JQuery library and the ULSLogScript.js file that you created in the previous section. It also contains two inline JavaScript functions and the respective input buttons to invoke them.

    To demonstrate exception handling, the doWork function makes a call to an unknownFunction function that does not exist. This invokes an exception that is intercepted and logged by the ULSLogScript.js code. To demonstrate message logging, the logMessage function calls the logMessageToULS function to write trace messages to ULS.

  5. Exit the web page design mode.
  6. Save the Web Parts page.
Finally, you need to configure the Diagnostic Logging Service in SharePoint Central Administration to ensure that the traces and exceptions logged from the Diagnostics web service are visible in the ULS logs.

To configure the Diagnostic Logging Service

  1. Open SharePoint Central Administration.
  2. From the Quick Launch, click Monitoring.
    Figure 1. Click the Monitoring option

  3. On the monitoring page, in the Reporting section, click Configure diagnostic logging.
    Figure 2. Click Configure diagnostic logging

  4. From all categories, expand the SharePoint Foundation category.
    Figure 3. Expand the SharePoint Foundation category

  5. Select the Unified Logging Service category.
    Figure 4. Select Unified Logging Service

  6. In the Least critical event to report to the trace log list, select Verbose.
    Figure 5. In the dropdown list, select Verbose

  7. Click OK to save the configuration.

The server is now ready to log traces sent by the Diagnostics web service to ULS. These traces appear under the category Unified Logging Service with a severity set to Verbose.

In this section, you test the application by raising an alert that is logged to the ULS.

To test the logging application

  1. Click the Log Exception button inside the Content Editor Web Part (CEWP).
    Figure 6. Click the Log Exception button

  2. An alert indicates that the message has been logged successfully to ULS.
    Figure 7. Confirmation message

  3. To see the exception details in the ULS logs, navigate to the Logs folder in the SharePoint hive ({SP Installation Path}\14\LOGS\).
  4. Because multiple log files can be present in the Logs folder, perform a descending sort on the Date modified field.
  5. Open the most recent log file in a text editor such as Notepad and search for Simple ULS Logging (the team name specified earlier). You should see all the web service parameters supplied by the client application, from Message to OriginalFileName, in entries like the following:

    10/14/2011 21:00:37.87  w3wp.exe (0x097C)  0x14DC  SharePoint Foundation  Unified Logging Service  a084  Verbose  Message: Error occured: The value of the property 'unknownFunction' is null or undefined, not a Function object  543a6672-9078-452f-93bd-545c4babefd5
    10/14/2011 21:00:37.87  w3wp.exe (0x097C)  0x14DC  SharePoint Foundation  Unified Logging Service  a085  Verbose  File: ULS%20Logging%20Sample.aspx  543a6672-9078-452f-93bd-545c4babefd5
    10/14/2011 21:00:37.87  w3wp.exe (0x097C)  0x14DC  SharePoint Foundation  Unified Logging Service  a086  Verbose  Line: 676  543a6672-9078-452f-93bd-545c4babefd5
    10/14/2011 21:00:37.87  w3wp.exe (0x097C)  0x14DC  SharePoint Foundation  Unified Logging Service  a087  Verbose  Client: <client><browser name='Internet Explorer' version='8.0'></browser><language>en-us</language></client>  543a6672-9078-452f-93bd-545c4babefd5
    10/14/2011 21:00:37.87  w3wp.exe (0x097C)  0x14DC  SharePoint Foundation  Unified Logging Service  a088  Verbose  Stack: <stack><function depth='0' signature=' doWork() '>function doWork() { unknownFunction(); }</function></stack>  543a6672-9078-452f-93bd-545c4babefd5
    10/14/2011 21:00:37.87  w3wp.exe (0x097C)  0x14DC  SharePoint Foundation  Unified Logging Service  a089  Verbose  TeamName: Simple ULS Logging  543a6672-9078-452f-93bd-545c4babefd5
    10/14/2011 21:00:37.87  w3wp.exe (0x097C)  0x14DC  SharePoint Foundation  Unified Logging Service  a08a  Verbose  OriginalFileName: ULS%20Logging%20Sample.aspx  543a6672-9078-452f-93bd-545c4babefd5

    Looking at the log message, you can easily determine that the exception occurred because unknownFunction was not defined, along with other relevant details such as the line number.

  6. Similarly, clicking Log Trace on the CEWP writes the following trace message:

    10/14/2011 21:29:55.76  w3wp.exe (0x097C)  0x0F6C  SharePoint Foundation  Unified Logging Service  a084  Verbose  Message: This is a trace message from CEWP  8c182889-c323-46f3-a287-a538c379f152
    10/14/2011 21:29:55.76  w3wp.exe (0x097C)  0x0F6C  SharePoint Foundation  Unified Logging Service  a085  Verbose  File: loggingsample.aspx  8c182889-c323-46f3-a287-a538c379f152
    10/14/2011 21:29:55.76  w3wp.exe (0x097C)  0x0F6C  SharePoint Foundation  Unified Logging Service  a086  Verbose  Line: 0  8c182889-c323-46f3-a287-a538c379f152
    10/14/2011 21:29:55.76  w3wp.exe (0x097C)  0x0F6C  SharePoint Foundation  Unified Logging Service  a087  Verbose  Client: <client><browser name='Internet Explorer' version='8.0'></browser><language>en-us</language></client>  8c182889-c323-46f3-a287-a538c379f152
    10/14/2011 21:29:55.76  w3wp.exe (0x097C)  0x0F6C  SharePoint Foundation  Unified Logging Service  a088  Verbose  Stack: <stack><function depth='1' signature=' logMessage() '></function></stack>  8c182889-c323-46f3-a287-a538c379f152
    10/14/2011 21:29:55.76  w3wp.exe (0x097C)  0x0F6C  SharePoint Foundation  Unified Logging Service  a089  Verbose  TeamName: Simple ULS Logging  8c182889-c323-46f3-a287-a538c379f152
    10/14/2011 21:29:55.76  w3wp.exe (0x097C)  0x0F6C  SharePoint Foundation  Unified Logging Service  a08a  Verbose  OriginalFileName: loggingsample.aspx  8c182889-c323-46f3-a287-a538c379f152

    In this log, you see that a trace message was sent by the logMessage function.

In a sandboxed solution, you cannot deploy any file to the server file system (the Layouts folder), so to make the ULS logging script work, you need to make the following two changes:

  1. Provision the jquery-1.6.4.min.js and ULSLogScript.js files to the Style Library of the site collection (or to any other library where users have read access).
  2. Update the script references in the consumer Content Editor Web Part (CEWP) accordingly, as shown in the sketch below.

The remaining functionality should work as is.
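
For example, assuming the two files are uploaded to a folder named LoggingSample in the Style Library of the site collection root, the script references in the Content Editor Web Part would look roughly like the following (adjust the paths to wherever you actually provision the files):

    <!-- Script references for a sandboxed deployment; these paths are only an example -->
    <script src="/Style Library/LoggingSample/jquery-1.6.4.min.js" type="text/javascript"></script>
    <script src="/Style Library/LoggingSample/ULSLogScript.js" type="text/javascript"></script>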

What is Kendo UI

Kendo UI is an HTML5, jQuery-based framework for building modern web apps. The framework features a large set of UI widgets, a rich data visualization framework, an auto-adaptive mobile framework, and all of the tools needed for HTML5 app development, such as Data Binding, Templating, a Drag-and-Drop API, and more (a short templating sketch follows the Kendo UI Web example below).


Kendo UI comes in different bundles:

  • Kendo UI Web – HTML5 widgets for desktop browsing experience.
  • Kendo UI DataViz – HTML5 data visualization widgets.
  • Kendo UI Mobile – HTML5 framework for building hybrid mobile applications.
  • Kendo UI Complete – includes Kendo UI Web, Kendo UI DataViz and Kendo UI Mobile.
  • Telerik UI for ASP.NET MVC – Kendo UI Complete plus ASP.NET MVC wrappers for Kendo UI Web, DataViz and Mobile.
  • Telerik UI for JSP – Kendo UI Complete plus JSP wrappers for Kendo UI Web and Kendo UI DataViz.
  • Telerik UI for PHP – Kendo UI Complete plus PHP wrappers for Kendo UI Web and Kendo UI DataViz.

Installing and Getting Started with Kendo UI

You can download all Kendo UI bundles from the download page.

The distribution zip file contains the following:

  • /examples – quick start demos.
  • /js – minified JavaScript files.
  • /src – complete source code. Not available in the trial distribution.
  • /styles – minified CSS files and theme images.
  • /wrappers – server-side wrappers. Available in Telerik UI for ASP.NET MVC, JSP or PHP.
  • changelog.html – Kendo UI release notes.

Using Kendo UI

To use Kendo UI in your HTML page, you need to include the required JavaScript and CSS files.

Kendo UI Web

  1. Download Kendo UI Web and extract the distribution zip file to a convenient location.
  2. Copy the /js and /styles directories of the Kendo UI Web distribution to your web application root directory.
  3. Include the Kendo UI Web JavaScript and CSS files in the head tag of your HTML page. Make sure the common CSS file is registered before the theme CSS file. Also make sure only one combined script file is registered. For more information, please refer to the JavaScript Dependencies page.
    <!-- Common Kendo UI Web CSS -->
    <link href="styles/kendo.common.min.css" rel="stylesheet" />
    <!-- Default Kendo UI Web theme CSS -->
    <link href="styles/kendo.default.min.css" rel="stylesheet" />
    <!-- jQuery JavaScript -->
    <script src="js/jquery.min.js"></script>
    <!-- Kendo UI Web combined JavaScript -->
    <script src="js/kendo.web.min.js"></script>
    
  4. Initialize a Kendo UI Web widget (the Kendo DatePicker in this example):
    <!-- HTML element from which the Kendo DatePicker would be initialized -->
    <input id="datepicker" />
    <script>
    $(function() {
        // Initialize the Kendo DatePicker by calling the kendoDatePicker jQuery plugin
        $("#datepicker").kendoDatePicker();
    });
    </script>
    

Here is the complete example:

<!doctype html>
<html>
    <head>
        <title>Kendo UI Web</title>
        <link href="styles/kendo.common.min.css" rel="stylesheet" />
        <link href="styles/kendo.default.min.css" rel="stylesheet" />
        <script src="js/jquery.min.js"></script>
        <script src="js/kendo.web.min.js"></script>
    </head>
    <body>
        <input id="datepicker" />
        <script>
            $(function() {
                $("#datepicker").kendoDatePicker();
            });
        </script>
    </body>
</html>
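
The introduction above also mentions templating and data binding. As a small taste of that part of the framework, the following sketch renders a Kendo UI template against a plain JavaScript object; it assumes jQuery and kendo.web.min.js are registered exactly as in the example above, and the greeting markup is just an illustration:

<script>
    // Define a Kendo UI template; #: name # is an HTML-encoded placeholder
    var greetingTemplate = kendo.template("<p>Hello, #: name #!</p>");

    // Render the template with a data object and append the result to the page
    $(function() {
        $("body").append(greetingTemplate({ name: "Kendo UI" }));
    });
</script>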

Kendo UI DataViz

  1. Download Kendo UI DataViz and extract the distribution zip file to a convenient location.
  2. Copy the /js and /styles directories of the Kendo UI DataViz distribution to your web application root directory.
  3. Include the Kendo UI DataViz JavaScript and CSS files in the head tag of your HTML page:
    <!-- Kendo UI DataViz CSS -->
    <link href="styles/kendo.dataviz.min.css" rel="stylesheet" />
    <!-- jQuery JavaScript -->
    <script src="js/jquery.min.js"></script>
    <!-- Kendo UI DataViz combined JavaScript -->
    <script src="js/kendo.dataviz.min.js"></script>
    
  4. Initialize a Kendo UI DataViz widget (the Kendo Radial Gauge in this example):
    <!-- HTML element from which the Kendo Radial Gauge would be initialized -->
    <div id="gauge"></div>
    <script>
    $(function() {
        $("#gauge").kendoRadialGauge();
    });
    </script>
    

Here is the complete example:

<!doctype html>
<html>
    <head>
        <title>Kendo UI DataViz</title>
        <link href="styles/kendo.dataviz.min.css" rel="stylesheet" />
        <script src="js/jquery.min.js"></script>
        <script src="js/kendo.dataviz.min.js"></script>
    </head>
    <body>
        <div id="gauge"></div>
        <script>
        $(function() {
            $("#gauge").kendoRadialGauge();
        });
        </script>
    </body>
</html>

Kendo UI Mobile

  1. Download Kendo UI Mobile and extract the distribution zip file to a convenient location.
  2. Copy the /js and /styles directories of the Kendo UI Mobile distribution to your web application root directory.
  3. Include the Kendo UI Mobile JavaScript and CSS files in the head tag of your HTML page:
    <!-- Kendo UI Mobile CSS -->
    <link href="styles/kendo.mobile.all.min.css" rel="stylesheet" />
    <!-- jQuery JavaScript -->
    <script src="js/jquery.min.js"></script>
    <!-- Kendo UI Mobile combined JavaScript -->
    <script src="js/kendo.mobile.min.js"></script>
    
  4. Initialize a Kendo Mobile Application:
    <!-- Kendo Mobile View -->
    <div data-role="view" data-title="View" id="index">
        <!--Kendo Mobile Header -->
        <header data-role="header">
            <!--Kendo Mobile NavBar widget -->
            <div data-role="navbar">
                <span data-role="view-title"></span>
            </div>
        </header>
        <!--Kendo Mobile ListView widget -->
        <ul data-role="listview">
          <li>Item 1</li>
          <li>Item 2</li>
        </ul>
        <!--Kendo Mobile Footer -->
        <footer data-role="footer">
            <!-- Kendo Mobile TabStrip widget -->
            <div data-role="tabstrip">
                <a data-icon="home" href="#index">Home</a>
                <a data-icon="settings" href="#settings">Settings</a>
            </div>
        </footer>
    </div>
    <script>
    // Initialize a new Kendo Mobile Application
    var app = new kendo.mobile.Application();
    </script>
    

Here is the complete example:

<!doctype html>
<html>
    <head>
        <title>Kendo UI Mobile</title>
        <link href="styles/kendo.mobile.all.min.css" rel="stylesheet" />
        <script src="js/jquery.min.js"></script>
        <script src="js/kendo.mobile.min.js"></script>
    </head>
    <body>
        <div data-role="view" data-title="View" id="index">
            <header data-role="header">
                <div data-role="navbar">
                    <span data-role="view-title"></span>
                </div>
            </header>
            <ul data-role="listview">
              <li>Item 1</li>
              <li>Item 2</li>
            </ul>
            <footer data-role="footer">
                <div data-role="tabstrip">
                    <a data-icon="home" href="#index">Home</a>
                    <a data-icon="settings" href="#settings">Settings</a>
                </div>
            </footer>
        </div>
        <script>
        var app = new kendo.mobile.Application();
        </script>
    </body>
</html>

Server-side wrappers

Kendo UI provides server-side wrappers for ASP.NET, PHP and JSP. These are classes (ASP.NET and PHP) or XML tags (JSP) that let you configure the Kendo UI widgets with server-side code.

You can find more info about the server-side wrappers here:

  • Get Started with Telerik UI for ASP.NET MVC
  • Get Started with Telerik UI for JSP
  • Get Started with Telerik UI for PHP

Next Steps

Kendo UI videos

You can watch the videos in the Kendo UI YouTube channel.

Kendo UI Dojo

A lot of interactive tutorials are available in the Kendo UI Dojo.

Further reading

  1. Kendo UI Widgets
  2. Data Attribute Initialization
  3. Requirements

Examples

  1. Online demos
  2. Code library projects
  3. Examples available on GitHub
    • ASP.NET MVC examples
    • ASP.NET MVC Kendo UI Music Store
    • ASP.NET WebForms examples
    • JSP examples
    • Kendo Mobile Sushi
    • PHP examples
    • Ruby on Rails examples

Help Us Improve Kendo UI Documentation, Samples, Tutorials and Demos

The Kendo UI team would LOVE your help to improve our documentation. We encourage you to contribute in the way that you choose:

Submit a New Issue at GitHub

Open a new issue on the topic if it does not exist already. When creating an issue, please provide a descriptive title, be as specific as possible and link to the document in question. If you can provide a link to the closest anchor to the issue, that is even better.

Update the Documentation at GitHub

This is the most direct method. Follow the contribution instructions. The basic steps are that you fork our documentation and submit a pull request. That way you can contribute to exactly where you found the error and our technical writing team just needs to approve your change request. Please use only standard Markdown and follow the directions at the link. If you find an issue in the docs, or even feel like creating new content, we are happy to have your contributions!

Forums

You can also go to the Kendo UI Forums and leave feedback. This method will take a bit longer to reach our documentation team, but if you like the accountability of forums and you want a fast reply from our amazing support team, leaving feedback in the Kendo UI forums guarantees that your suggestion has a support number and that we’ll follow up on it. Thank you for contributing to the Kendo UI community!

NEW “Filter My Lists” Web Part now available + FREE Metro UI Master Page when ordering

“Filter My Lists” Web Part

Saves you time with optimal performance

Find what you are looking for with a few clicks, even in cluttered sites and lists with masses of items and documents.

Find exactly what you need and stop wasting your time browsing SharePoint.
Filter the content of multiple lists and libraries in a single step.

Combine search and metadata filters

In a single panel combine item, document and attachment searches with metadata keyword searches and managed metadata filters.

Select multiple filter values from drop-down lists or alternatively use the keyword search of metadata fields with the help of wildcard characters and logical operators.

Export filtered views to Excel

Export filtered views and data to Excel. A print view enables you to print your results in a clear printable format with a single click.

Keep views clear and concise

Provides a complete set of filters without cluttering list views and keeps your list views clear, concise and speedy. Enables you to filter SharePoint using columns which aren’t visible in list views.

Refine filters and save them for future use, whether private, to share with others or to use as default filters.

FREE Metro Style UI Master Page

 


Modern UI Master Page and Styles for SharePoint 2010.

This will give the Metro/Modern UI styling of SharePoint 2013 to your SharePoint 2010 team sites.

Features include:
– Quick launch styling
– Global navigation and drop-down styling
– Search box styling and layout change
– Web part header styling
– Segoe UI font

New Web Part released – List Search Web Part now available!!

The List Search Web Part reads the entries from a SharePoint List or Library (located anywhere in the site collection) and displays the selected fields in a grid with an optional interactive search filter.

It can be used with WSS 3.0, MOSS 2007, SharePoint 2010 and SharePoint 2013.


The following parameters can be configured:

  • SharePoint Site
  • List Columns to be displayed
  • Filtering, Grouping, Searching, Paging and Sorting of rows
  • AZ Index
  • Optional header text

Installation Instructions:

  1. Download the List Search Web Part Installation Instructions.
  2. Either install the web part manually or deploy the feature to your server/farm as described in the instructions.
  3. Security Note:
    If you get the error message “Only an administrator may enumerate through all user profiles”, you will need to grant the application pool account(s) for the web application(s) “Manage User Profiles” permissions within the User Profile Service (SSP in the case of MOSS 2007).
    This ensures that the application pool is able to retrieve the list of user profiles.
    To assign this permission, access your active “User Profile Service” (SharePoint 2010 Server) or the “Shared Services Provider” (MOSS 2007) via Central Administration.
    From the “User Profiles and My Sites” group, click “Personalization services permissions”.
    Add the “Manage User Profiles” permission to your application pool account(s).
  4. Configure the following Web Part properties in the Web Part Editor “Miscellaneous” pane as needed:
    • Site Name: Enter the name of the site that contains the List or Library:
      – Leave this field empty if the List is in the current site (e.g. the Web Part is placed in the same site)
      – Enter a “/” character if the List is contained in the top site
      – Enter a path if the List is in a subsite of the current site (e.g. in the form of “current site/subsite”)
    • List Name: Enter the name of the desired Sharepoint List or Library
      Example: Project Documents
    • View Name: Optionally enter the desired List View of the list specified above. A List View allows you to specify data filtering and sorting.
      Leave this field empty if you want to use the List default view.
    • Field Template: Enter the List columns to be displayed (separated by semicolons).
      Pictures can be attached (via File Upload) to the SharePoint List items and displayed using the symbolic “Picture” column name.
      If you want to allow users to edit their own entries, please add the symbolic “Username” column name to the Field Template. An “Edit” symbol will then be displayed to allow the user to navigate to the corresponding Edit Form. Example:
      Type;Name;Title;Modified;Modified By;Created By

      Friendly Header Names:
      If you would like to display a “friendly header name” instead of the default column name, please append it to the column name, separated by the “|” pipe symbol.

      Example:
      Picture;LastName|Last Name;FirstName;Department;Email|Email Address

      Hiding individual columns:
      You can hide a column by prefixing it with a “!” character. 
      The following example hides the “Department” column: 
      LastName;FirstName;!Department;WorkEmail

      Suppress Column wrapping:
      You can suppress the wrapping of text inside a column by prefixing it with a “^” character.
      LastName;FirstName;Department;^AboutMe

      Showing the E-Mail address as plain text:
      You can opt to display the plain e-mail address (instead of the envelope icon) by appending “/plain” to the WorkEmail column:
      LastName;WorkEmail/plain;Department

    • Group By: enter an optional List column to group the rows.
    • Sort By: enter the List column(s) to define the default sort order. You can add multiple properties separated by commas. Append “/desc” to sort the column descending.
      Examples:
      Department
      Department,LastName
      LastName/desc

      The column headings can be clicked by users to manually define the sort order.
    • AZ Index Column: enter an optional List column to display the AZ filter in the list header. 
      If an “!” character is appended to the property name, the “A” index will be forced when visiting the page.
      Example: LastName! 

       
    • Search Box: enter one or more List columns (separated by semicolons) to allow for interactive searching. Example: LastName;FirstName

      If you want to display a search filter as a dropdown combo, please enter it with a leading “@” character:
      LastName;FirstName;Department;@Office

      Friendly Search Box Labels:
      If you would like to display a “friendly label” instead of the default column name, please append it to the column name, separated by the “|” pipe symbol.
      Example:
      WorkPhone|Office Phone;Office|Office Nbr

       

    • Align Search Filters vertically: allows you to align the search input boxes vertically to save horizontal space.
    • Rows per page: the List Search Web Part supports paging and lets you specify the desired number of rows per page.
    • Image Height: specify the image height in pixels if you include the “Picture” property. 
      Enter “0” if you want to use the default picture size.
    • Header Text: enter an optional header text. Please note that you can embed HTML tags if needed. You can additionally specify the text to be displayed if the “Show all entries” option is unchecked and the user has not yet performed a search, by appending a “|” character followed by that text.
      Example:
      This is the regular header text|This text is only shown if the user has not yet performed a search
    • Detail View Page: enter an optional column name prefixed by “detailview=” to link a column to the item detail view page. Append the “/popup” option if you want to open the detail page in a SharePoint 2010/2013 dialog popup window.
      Examples:
      detailview=LastName
      detailview/popup=Title
    • Alternating Row Color: enter the optional color of the alternating row background (leave blank to use default).
      Enter either an HTML color name (e.g. “red”) or use hexadecimal RRGGBB coding (e.g. “#CCFFCC”). Enter the values without the double quotes.
      You can also change the default background color of the non-alternating rows by appending a second color value separated by a semicolon.
      Example: #ffffcc;#ffff99 

      The default Header style can be changed by adding the “AESD_Headerstyle” appSettings variable to the web.config “appSettings” section:

      <appSettings>
        <add key="AESD_Headerstyle" value="background:green;font-size:10pt;color:white" />
      </appSettings>

       

    • Show Column Headers: either show or suppress the List column header row.
    • Header Row CSS Style: enter the optional header row CSS style(s) as needed.
      Example:
      color:blue;white-space:nowrap
    • Show Groups collapsed: either show the groups (if you specify a column in the “Group By” setting) collapsed or expanded when entering the page.
    • Enforce Security: hides the web part if user has no access to the site or the list. This avoids a login prompt if the user has not at least “View” permission on the list or site containing the list.
    • Show all entries: either show all directory entries or none when first visiting the page. 
      You can append a specific text to the “Header Text” field (see above) which is only displayed if this option is unchecked and no search has yet been performed by the user.
    • Open Links in new window: either open the links in a new window or in the same browser window.
    • Link Documents to Office365: open the Word, Excel and Powerpoint documents in the Office365 web viewer.
    • Show ‘Add New Item’ Button: either show or suppress the “Add new item button” to let users add new items to the list (this option is security-trimmed).
    • Export to CSV: Show/hide the “Export” button for Excel CSV File Export
    • CSV Separator: Enter the desired CSV field separator character (Default=Comma). Use a semicolon in countries that use the comma as a decimal separator.
    • Localization: enter the following 4 values (separated by semicolons) in your local language if you need to override the English strings corresponding to:
      – the Search button text,
      – the A..Z menu “View all” option,
      – the text displayed for Hyperlink columns,
      – the optional “Group By” name (if grouping is enabled).
      Default:
      Search;View all;Visit

    • License Key: enter your Product License Key (as supplied after purchase of the “List Search Web Part” license).
      Leave this field empty if you are using the free 30 day evaluation version.

 Contact me now at tomas.floyd@outlook.com for the List Search Web Part and other Free & Paid Web Parts and Apps for SharePoint 2010, 2013, Azure, Office 365, SharePoint Online

Thoughts on : Customizing the Public Website of Office 365


 

Recently, I attempted a migration from my ASP.NET based Azure website to Office 365. The reason was that I wanted to use SharePoint 2013 for in-page editing and simply try to get the platform to take care of all my business needs.

After a few days, I reverted to the Azure web host, as I am not satisfied that the service will fulfill my requirements. Here is a recollection of my experience of the shortcomings in the platform and the points that should be addressed.

Master page editing in the public Office 365 site is not much different from the rest of Office 365 and SharePoint 2013. You have access to the Design Manager and you can open the site with SharePoint Designer.


On the up-side, you can create master pages, create page layouts and add Rich Text areas using the “Multi-Area Page” that allows up to four separate rich text areas. I managed to get the site to look virtually the same when published.

On the down-side, the page contained all the scripts and CSS styles from standard SharePoint, which caused the responsive design to break on tablets and phones. I could probably have fixed some of the issues, but the difference in page weight and load time is as follows:

                                           Azure .NET    Office 365
Total page weight                          305.2K        727K
Total non-cached file size                 7.2K          54K
Total number of script files               7             12
Average page load time during load test    1.67 sec      3.46 sec

I then amended the blog layout. The comments feature from blogs in standard SharePoint is not available, so it uses Facebook instead. I replaced this with a Disqus control. Later on, I started running into several issues when trying to add features.

Issue #1: You cannot define your own content types

The site administration does not contain a link to allow modification of content types or site fields. Trying to navigate to the URL manually presents you with a 403 error. Adding custom content types for your page layouts seems like a simple request. I then tried to inject these using sandbox solutions.

Issue # 2: Sandboxed solutions are not supported

Yes, this link is also gone. You cannot navigate to “Solutions” but you can manually enter the URL. I found a helpful and informative post by Jason Cribbet on the topic and was able to activate my feature. This is, however, not supported by Microsoft and I am now in “not supported” land with my website.

Issue # 3: You cannot create subsites

I was fairly happy until I started to create more content and restricted areas. There is no way to create subsites using the interface. You need to use SharePoint Designer. Again, this is not supported by Microsoft.

Issue # 4: You cannot control feature activation

Yes, features cannot be changed either. This means that you cannot add or remove any functionality outside of apps on the site.

Issue # 5: What is going on with the blog framework and managed navigation?

I could live with the “hacks” and continued to style the blog area. This, in itself, has a number of very strange issues:

  • If you remove the “Blog Tools” web part from the page then the links to blog posts will not work.
  • The pages do not seem to handle changing page layouts. I first had to change the page layout, and then disconnect the page from the layout in SharePoint Designer.
  • Managed navigation allows you to use the blog as “/Blog/Post/1/My-Blog-Title”, “/Blog/Date/2013/” and so on. The page configuration, however, cannot be changed. If you rename a page, the entire navigation framework stops working. Just don’t.
  • The blog and blog category lists can still be accessed using the forms URL at “/Lists/Posts/AllItems.aspx” and you cannot change the anonymous access behavior. As you cannot change features, the lockdown feature is out of bounds. I guess you could inject redirects on the pages or try to use PowerShell to reactivate the forms lockdown page feature, but I did not attempt this.

Issue # 6: You cannot recreate the site

So finally, you have hacked this puppy to pieces. You want to recreate the site, so you go into SharePoint administration for Office 365 and delete the site collection. But wait… there is no option to recreate the site? This rectified itself on my test tenant after 24 hours and allowed me to create the public site. However, it did not fully recreate: the site has no web template applied and I get the error message “Sorry, something went wrong: There is no site in the current site subscription matching the HiddenSiteSelection control’s value.”

Summary

Office 365 has a long way to go before it can offer any kind of enterprise solution for public websites. In a sense, it seems that they are just about there but have intentionally limited the offering to basic usage only. But if that is the case, why allow SharePoint Designer and Design Manager access at all?

I hope that the public website will be improved in upcoming releases and would really like to run my site and blog using SharePoint technology.

Visio for Developers in Office 365

In this post, I’ll introduce some of the new features of interest to developers in Visio 2013. Among these features are:

  • New file format
  • Robust updates to themes
  • The change shape feature (that allows you to replace one shape with another while maintaining the shape text)
  • New shape effects
  • Improvements to commenting
  • Coauthoring on SharePoint Server 2013
  • Customizable image clipping
  • Relative geometry
  • Support for Business Connectivity Services (BCS) data
  • Updates to Visio Services in Microsoft SharePoint Server 2013
  • Duplicate page feature

At the end of the post, I provide you with some additional resources for both Visio and general Office development.

 

New file format

Visio 2013 introduces a new file format, based on the Open Packaging Conventions (OPC) standard (ISO 29500, Part 2) and the XML elements from the previous Visio XML file format (.vdx). It is a zipped, XML-based file format similar to the file formats used in other applications.

Because the new file format is supported by both Visio 2013 and Visio Services in Microsoft SharePoint Server 2013, you can save a Visio drawing directly to a SharePoint Server library without having to first publish the file as a Visio Web Drawing (.vdw). Even so, Visio Services can still read and display Visio Web Drawing files.

The new file format includes the following file types (by extension):

  • .vsdx (Visio drawing)
  • .vsdm (Visio macro-enabled drawing)
  • .vssx (Visio stencil)
  • .vssm (Visio macro-enabled stencil)
  • .vstx (Visio template)
  • .vstm (Visio macro-enabled template)

By using existing support for reading and writing to the file format package (such as System.IO.Packaging) and for parsing XML (System.Xml.Linq), you can work with the new file formats programmatically.

Visio 2013 retains the ability to read the old file formats (.vsd, .vss, .vst, .vdx, .vsx, .vtx, .vdw, .vwi). Visio 2013 does not save to the previous Visio XML file format (.vdx). Solutions or tools that consume the previous Visio XML file format (.vdx) files may need to be refactored in order to read the new file format and its schemas.

Visio Services retains the ability to display the Visio Web Drawing (.vdw) format in the browser. It now also renders the new Visio drawing (.vsdx) and Visio macro-enabled drawing (.vsdm) formats.

For more information about the new file format, see the article How to: Manipulate the Visio 2013 file format programmatically.

Themes

Themes have been redesigned in Visio 2013, making use of a greater variety of effects and styles including the integration of Shape Art effects. Users can now decide on an overarching style by applying a theme, personalize the diagram with theme variants, and highlight individual shapes with Quick Styles. ShapeSheet developers can take advantage of these features with new functions and cells in the ShapeSheet.

The user interface for applying theme variants is shown in the following figure.

 

 

You can also manipulate themes at the Page, Shape, and Selection object level. New APIs for working with themes include Page.SetTheme method, Page.SetThemeVariant method, Shape.SetQuickStyle method, and the Selection.SetQuickStyle method.

For more information about new VBA objects and members in Visio 2013, see the Visio Automation reference. For more information about the new ShapeSheet cells in Visio 2013, see the article What’s new for ShapeSheet developers in Visio 2013.

Change Shape

Visio 2013 includes a shape replacement API that enables you to swap one or more shapes for another shape contained in a stencil, while retaining some of the local values from the original shape, like the shape text, shape data, or shape formatting. Shape developers can update the ShapeSheet settings of their custom shapes to specify the Change Shape behavior for their shapes. Among the new APIs for Change Shape are the Shape.ReplaceShape and Selection.ReplaceShape methods and the ReplaceShapesEvent object.

The Change Shape feature lets you easily change a shape (in this case, the green rectangle)…

 

…to another shape, the green diamond.

For more information about the Change Shape feature, see Eric Schmidt’s blog post, Change shapes in Visio 2013.

For more information about new VBA objects and members in Visio 2013, see the Visio Automation reference. For more information about the new ShapeSheet cells in Visio 2013, see the article What’s new for ShapeSheet developers in Visio 2013.

Shape effects

New shape effects such as bevel, 3-D rotation, glow, reflection, and sketching have been added to Visio 2013. The ShapeSheet includes new cells for working with these effects. The following figure shows a shape to which effects have been applied.

You can also use Office VBA objects such as TextFrame2, GlowFormat, and ReflectionFormat and their members to apply shape effects.

For more information about the new ShapeSheet cells in Visio 2013, see the article What’s new for ShapeSheet developers in Visio 2013.

Commenting

Visio 2013 includes a new commenting framework. Comments can now be associated with a particular shape or page. Visio 2013 includes two new objects, Comments and Comment. New APIs for accessing comments programmatically include the Document.Comments, Page.Comments, Shape.Comments, and Page.ShapeComments properties.

The following images show what comments looked like in Visio 2010 and what they look like in Visio 2013.

 

 

Visio Services includes JavaScript APIs to read the comments from a page or shape in a diagram.

Note: You can no longer access comments in the ShapeSheet.

Coauthoring

Visio 2013 includes the ability to co-author diagrams stored on SharePoint or OneDrive. Developers have access to the Document.AfterDocumentMerge event which provides information about diagram changes due to coauthoring. Solution developers also have the ability to disable coauthoring to suit their custom needs by using the NoCoauth cell on the Document ShapeSheet.

Customizable image clipping

Visio 2013 supports defining a Custom Image Clipping path to crop images to any shape. This extends the capabilities of Visio 2010, which supported clipping images only in a rectangular way. This functionality is available in the ShapeSheet by using the ClippingPath cell in the Foreign Image Info section.

Relative geometries

In previous versions of Visio, shape geometry was defined by formulas that depended on the height or width of the shape. For example, in Visio 2010 the vertices of many built-in Visio shapes were defined by multiplying the height or width of the shape by a constant. These shapes had Geometry sections that included MoveTo or LineTo rows (for example) with formulas like Width*1 and Height*0.

Visio 2013 now supports relative geometry in the ShapeSheet. Shape developers can now use relative geometries to specify geometries as simple values or formulas, which multiply by the height or width automatically. You can now express Shape vertices by using constants, for instance—you no longer need to express vertices as multiples of the shape width or height. This makes it easier for you to create shapes that have better performance and smaller file sizes. New rows include the RelMoveTo and RelLineTo rows where the X and Y cell values are automatically multiplied by the width or height of the shape (respectively).

Support for Business Connectivity Services (BCS) data

Visio 2013 diagrams can now be connected to external lists on SharePoint Server 2013 servers. An external list is a content source external to SharePoint (for example, a SQL Server table) that has been connected to a SharePoint list by using Microsoft Business Connectivity Services (BCS). Visio Services supports the ability to refresh the Visio diagrams as the data updates.

For more information about what’s new in Visio Services, see the article Visio Services in SharePoint 2013. For more information about Business Connectivity Services (BCS), see Business Connectivity Services in SharePoint 2013.

Improvements in Visio Services

Visio Services in Microsoft SharePoint Server 2013 includes many improvements. As mentioned previously, Visio Services supports the new Visio file format (.vsdx and .vsdm). Visio Services has expanded data refresh and recalculation, including the ability to recalculate formulas across an entire diagram.

For more information about what’s new in Visio Services, see the article Visio Services in SharePoint 2013.

Duplicate page

You can now copy a page and all of its shapes within the same document in Visio 2013. Accordingly, the Page object has a new method, Duplicate, which duplicates the page and returns a new Page object.

Additional resources

Brand new 3 LINQ to Office Providers Available now!!

The SPSamurai.Office.LINQ namespace contains 3 classes –

OutlookProvider (LINQ to Outlook), OneNoteProvider (LINQ to OneNote) and ExcelProvider (LINQ to Excel).

The OutlookProvider is a wrapper class that provides IEnumerable collections over the data of the Outlook COM interface (appointments, contacts, mails, tasks, …).

The OneNoteProvider provides collections of notebooks, sections and pages by manipulating the XML hierarchy tree of OneNote. And the ExcelProvider loads an Excel worksheet and provides column definition and row collections.

All collections are IEnumerable so you can query them with LINQ. The full source code is provided.

Check out my articles where I describe the implementation of these 3 classes and how to use them. These articles also contain a lot of LINQ query examples.

Class diagrams:

 

 

 

 

 

Features:
Set flag with due date from predefined list: Today, Tomorrow, This Week, Next Week or Custom  
Different options of follow-up visualization using combinations of flag, text and date  
Support of sorting and filtering features  
Support of different calendars (Gregorian, Japanese Emperor Era, Korean Tangun Era, Hijri, etc.)  
Supported Datasheet view  
Two-way conversion between ArtfulBits Follow-Up and standard Microsoft® SharePoint® Date and Time column  
Language pack support (desired localization could be added by request)  

 

Contact me at tomas.floyd@outlook.com for these tools and more SharePoint, Azure and Office 365 Apps, Tools and Web Parts or for specialised custom SharePoint Development

How To : Reserve Resources on the Calendar in SharePoint 2013 / Online

I suppose many of you know about a great calendar feature in SharePoint 2010 called resource reservation. It lets you organize meetings through a useful interface where you select multiple resources, such as meeting rooms, projectors and other facilities, along with the required participants, and then pick a time frame that is free for all participants and facilities in the calendar view.

You can switch between week and day views.

Here is a screenshot of the calendar with resource reservation and member scheduling features:

You can change resources and participants in the meeting form, find free time frames in the diagram and check for double bookings:

There are two ways to add the resource reservation feature into SharePoint 2010 calendar:

  1. Enable the ‘Group Work Lists’ web feature, add a calendar and go to its settings. Click the ‘Title, description and navigation’ link in the ‘General Settings’ section. There, check ‘Use this calendar to share member’s schedule?’ and ‘Use this calendar for Resource Reservation?’.
  2. Create a site based on the ‘Group Work Site’ template.

Here are the detailed instructions: http://office.microsoft.com/en-us/sharepoint-server-help/enable-reservation-of-resources-in-a-calendar-HA101810595.aspx

SharePoint 2013 on-premise

After migrating to SharePoint 2013, I discovered that these features were excluded from the new platform and kept only for backward compatibility.

So, you can migrate an application with the booking calendar installed from SharePoint 2010 to SharePoint 2013 and keep the resource reservation functionality, but you cannot activate it on a new SharePoint 2013 application through the default interface.

Microsoft officially explained this restriction by the unpopularity of the resource reservation feature: http://technet.microsoft.com/en-us/library/ff607742(v=office.15).aspx#section1

First, I found a solution for SharePoint 2013 on-premise. It is possible to display the missing site templates, including ‘Group Work Site’. Then you just need to create a site based on this template and you will get the resource calendar.

Go to C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\TEMPLATE\1033\XML, open the WEBTEMP.XML file, find the configuration element whose Title attribute is ‘Group Work Site’ and change its Hidden attribute from TRUE to FALSE.
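
As an illustration only, the relevant fragment of WEBTEMP.XML looks roughly like the following after the change (the template name and ID shown here are from memory and other attributes are omitted, so verify them against your own file):

<!-- WEBTEMP.XML fragment: Hidden="FALSE" makes the Group Work Site template visible again -->
<Template Name="SGS" ID="40">
  <Configuration ID="0" Title="Group Work Site" Hidden="FALSE" />
</Template>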

SharePoint 2013 Online in Office 365

Perfect, now we can use a free SharePoint booking system based on the standard calendar. But what about SharePoint Online in Office 365? We do not have access to WEBTEMP.XML in its file system.

After some research, I developed a sandboxed solution that enables the hidden ‘Group Work Lists’ feature and adds a calendar with the resource reservation and member scheduling features. Please download it and follow the instructions to install it:

  1. Go to the site collection settings.
  2. Open ‘Solutions’ area from ‘Web Designer Galleries’ section.
  3. Upload CalendarWithResources.wsp package and activate it.
  4. Now, navigate into the site where you wish to add the calendar with the enabled resource reservation feature.
  5. Open site settings -> site features.
  6. Activate ‘Calendar With Resources’ feature.

Great, now you have a Group Calendar with the ability to book resources and schedule meetings.

This solution works for SharePoint 2013 on-premise as well, so you can use it instead of modifying the WEBTEMP.XML file.

Free download:

WSP File – http://1drv.ms/1f7ZSqO

 

SharePoint 2013 Standards released!!

SharePoint Development Standards for SharePoint 2013

These are meant only as application-specific standards. You should always review them along with regular development standards that cover things like naming conventions and approaches.


General Principles

  • All new functionality and customizations must be documented.
  • Do not edit out of the box files.
    • For a few well-defined files such as the Web.config or docicon.xml files, the built-in files included with SharePoint Products and Technologies should never be modified except through official supported software updates, service packs, or product upgrades.
  • Do not modify the Database Schema.
    • This means any change made to the structure or object types associated with a database or to the tables included in it, including changes to the SQL processing of data such as triggers and adding new User Defined Functions.
    • A schema change could be performed by a SQL script, by manual change, or by code that has appropriate permissions to access the SharePoint databases. Any custom code or installation script should always be scrutinized for the possibility that it modifies the SharePoint database.
  • Do not directly access the databases.
    • This means any addition, modification, or deletion of the data within any SharePoint database using database access commands. This would include bulk loading of data into a database, exporting data, or directly querying or modifying data.
    • Directly querying or modifying the database could place extra load on a server, or could expose information to users in a way that violates security policies or personal information management policies. If server-side code must query data, then the process for acquiring that data should be through the built-in SharePoint object model, and not by using any type of query to the database. Client-side code that needs to modify or query data in SharePoint Products and Technologies can do this by using calls to the built-in SharePoint Web services that in turn call the object model.
    • Exception: In SharePoint 2010 the Logging database can be queried directly, as this database was designed for that purpose.
  • In SharePoint 2007, site and list templates must be created through code and features (site and list definitions). STP files are not allowed as they are not updatable.

Development Decisions:

There are plenty of challenging decisions that go into defining what the right solution for a business or technical challenge will be. What follows is a chart meant to help developers when selecting their development approach.

When to use
  • Sandbox: Deprecated. Therefore, it’s unadvisable to build new sandboxed solutions.
  • Apps: Best practice. Create apps whenever you can.
  • Farm: Create farm solutions when you can’t do it in an app. See http://www.learningsharepoint.com/2012/07/20/sharepoint-2013-apps-vs-farm-solutions/ for more info.

Server-side code
  • Sandbox: Runs under a strict CAS policy and is limited in what it can do.
  • Apps: No SharePoint server-side code. When apps are hosted in an isolated SharePoint site, no server-side code whatsoever is allowed.
  • Farm: Can run full trust code or run under fine-grained custom CAS policies.

Resource throttling
  • Sandbox: Runs under an advanced resource management system that allows resource point allocation and automatic shutdown of troublesome solutions.
  • Apps: Apps run isolated from the SharePoint farm, but can have an indirect impact by leveraging the client object model.
  • Farm: Can impact SharePoint farm stability directly.

Runs cross-domain
  • Sandbox: No, and there’s no need to since code runs within the SharePoint farm.
  • Apps: Yes, which provides a very interesting way to distribute server loads.
  • Farm: No, and there’s no need to since code runs within the SharePoint farm.

Efficiency/Performance
  • Sandbox: Runs on the server farm, but in a dedicated isolated process. The sandbox architecture adds overhead.
  • Apps: Apps hosted on separate app servers (even cross-domain) or in the cloud may cause considerable overhead.
  • Farm: Very efficient.

Safety
  • Sandbox: Very safe.
  • Apps: Apps rely on OAuth 2.0. The OAuth 2.0 standard is surrounded by some controversy (for example, check out what OAuth lead author Eran Hammer has to say about it here: http://hueniverse.com/2012/07/oauth-2-0-and-the-road-to-hell/). In fact, some SharePoint experts have gone on record stating that security for Apps will become a big problem. We’ll just have to wait and see how this turns out.
  • Farm: Can be very safe, but this requires additional testing, validation and potentially monitoring.

Should IT pros worry over it?
  • Sandbox: Due to the limited CAS permissions and the resource throttling system, IT pros don’t have to worry.
  • Apps: Apps are able to do a lot via the client OM. There are some uncertainties concerning the safety of an App running on a page with other Apps. For now, this seems to be the most worrisome option, but we’ll have to see how this plays out.
  • Farm: Definitely. This type of solution runs on the SharePoint farm itself and can therefore have a profound impact.

Manageability
  • Sandbox: Easy to manage within the SharePoint farm.
  • Apps: Can be managed in a dedicated environment without SharePoint; dedicated app admins can take care of this.
  • Farm: Easy to manage within the SharePoint farm.

Cloud support
  • Sandbox: Yes.
  • Apps: Yes, with support for the App Marketplace.
  • Farm: No, on-premises (or dedicated) only.

App Development (SharePoint 2013):

  • When developing an app, select the most appropriate client API:

o   Apps that offer Create/Read/Update/Delete (CRUD) actions against SharePoint or BCS external data, and are hosted on an application server separated by a firewall, benefit most from using the JavaScript client object model (a minimal sketch follows this list).

o   Server-side code in Apps that offer Create/Read/Update/Delete (CRUD) actions against SharePoint or BCS external data, and are hosted on an application server but not separated by a firewall, benefits mainly from using the .NET managed client object model, but the Silverlight client object model, the JavaScript client object model or REST are also options.

o   Apps hosted on non-Microsoft technology (such as members of the LAMP stack) will need to use REST.

o   Windows phone apps need to use the mobile client object model.

o   If an App contains a Silverlight application, it should use the Silverlight client object model.

o   Office Apps that also work with SharePoint need to use the JavaScript client object model.
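
For apps that use the JavaScript client object model (the first case above), the call pattern looks roughly like the following sketch. It assumes a SharePoint-hosted app page where sp.js is already loaded, and the list title is just an illustration:

    // Minimal JavaScript client object model (JSOM) sketch: read items from a list.
    var context = SP.ClientContext.get_current();
    var list = context.get_web().get_lists().getByTitle('Announcements');

    var query = new SP.CamlQuery();      // an empty CAML query returns all items
    var items = list.getItems(query);

    context.load(items);
    context.executeQueryAsync(
        function () {
            // Success: enumerate the items that were returned
            var enumerator = items.getEnumerator();
            while (enumerator.moveNext()) {
                console.log(enumerator.get_current().get_item('Title'));
            }
        },
        function (sender, args) {
            // Failure: surface the error message
            console.log('Request failed: ' + args.get_message());
        }
    );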

Quality Assurance:

  • Custom code must be checked for memory leaks using SPDisposeCheck.
    • False positives should be identified and commented.
  • Code should be carefully reviewed and checked. As a starting point use this code review checklist (and provide additional review as needed).
  • Provide an Installation Guide which contains the following items (Note this relates to SharePoint Deployment Standards):
    • Solution name and version number.
    • Targeted environments for installation.
    • Software and hardware prerequisites: explicitly describe all needed updates, activities, configurations, packages, etc. that should be installed or performed before the package installation.
    • Deployment steps: Detailed steps to deploy or retract the package.
    • Deployment validation: How to validate that the package is deployed successfully.
    • Describe all impacted scopes in the deployment environment and the type of impact.

Branding:

  • A consistent user interface should be leveraged throughout the site. If a custom application is created it should leverage the same master page as the site.
  • Editing out of the box master pages is not allowed. Instead, duplicate an existing master page; make edits, then ensure you add it to a solution package for feature deployment.
  • When possible you should avoid removing SharePoint controls from any design as this may impact system behavior, or impair SharePoint functionality.
  • All Master Pages should have a title, a description, and a preview image.
  • All Page Layouts should have a title, a description, and a preview image.

Deployment:

  • All custom SharePoint work should be deployed through SharePoint Solution (.wsp) files.
  • Do not deploy directly into the SharePoint root (12-hive, 14-hive) folders. Instead, deploy into a folder identified by the project name.

Features:

  • Features must have a unique GUID within the farm.
  • Features with event receivers should clean up all changes created in the activation as part of the deactivation routine.
    • The exception to this is if the feature creates a list or library that contains user supplied data. Do not delete the list/library in this instance.
  • Features deployed at the Farm or Web Application level should never be hidden.
  • Site Collection and Site Features may be hidden if necessary.
  • Ensure that all features you develop have an appropriate name, description, updated version number and icon.

SharePoint Designer:

  • SharePoint Designer 2007 updates are generally not allowed.
    • The only exception to this rule is for creating DataForm Web Parts.
    • The following is a recommended way of managing this aspect:
      Create a temporary web part page (for managing the manipulation of a data view web part). Once the web part is ready for release and all modifications have been made, export the .webpart file and then delete the page. You can now import it onto a page elsewhere or place it in the gallery. This way none of your production pages are un-ghosted. The other advantage is that you can place the DVWP on a publishing page (as long as there are web part zones to accept them).
    • DataForm Web Parts should be exported through the SharePoint GUI and solution packaged for deployment as a feature.
    • This does not mean that SharePoint Designer should not be used for creating and testing branding artifacts such as master pages and page layouts.
      • It is important for these artifacts to be deployed through solution files (WSPs) and typical build and deployment processes and not by manual methods.
  • SharePoint Designer 2010 updates are generally only allowed by a trained individual.
    • The following is a recommended way of managing the creation of DataForm Web Parts:
      Create a temporary web part page (for managing the manipulation of a data view web part). Once the web part is ready for release and all modifications have been made, export the .webpart file and then delete the page. You can now import it onto a page elsewhere or place it in the gallery. This way none of your production pages are un-ghosted. The other advantage is that you can place the DVWP on a publishing page (as long as there are web part zones to accept them).
    • DataForm Web Parts should be exported through the SharePoint GUI and solution packaged for deployment as a feature.
  • SharePoint Designer workflows should not be used for Business Critical Processes.
    • They are not portable and cannot be packaged for solution deployment.
      • Exception Note: Based on the design and approach being used it may be viable in SharePoint 2010 for you to design a workflow that has more portability. This should be determined on a case by case basis as to whether it is worth the investment and is supportable in your organization.

Site Definitions:

  • In SharePoint 2007 site and list templates must be created through code and features (site and list definitions).
    • STP files are not allowed as they are not updatable.
  • Site definitions should use a minimal site definition with feature stapling.

Solutions:

  • Solutions must have a unique GUID within the farm.
  • Ensure that the new solution version number is incremented (format V#.#.#).
  • The solution package should not contain any of the files deployed with SharePoint.
  • Referenced assemblies should not be set to “Copy Local = true”.
  • All pre-requisites must be communicated and pre-installed as separate solution(s) for easier administration (a typical deployment sequence is sketched below).
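
As a hedged illustration of these solution rules, a typical on-premises deployment sequence from the SharePoint Management Shell might look like the sketch below; the package name, path and web application URL are placeholders:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
# add the package to the farm solution store (path is a placeholder)
Add-SPSolution -LiteralPath "C:\Deploy\MyProject.Intranet.wsp"
# deploy it - scope to a web application if the package contains web application resources
Install-SPSolution -Identity "MyProject.Intranet.wsp" -WebApplication "http://intranet" -GACDeployment
# check the resulting deployment state
Get-SPSolution -Identity "MyProject.Intranet.wsp" | Select-Object Name, Deployed, DeploymentState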

Source Control:

  • All source code must be kept under proper source control (such as TFS or SVN).
  • All internal builds must have proper labels in source control.
  • All releases must have proper labels in source control.

InfoPath:

  • If an InfoPath form has a code-behind file or requires full trust, then it must be packaged as a solution and deployed through Central Administration.
  • If an InfoPath form does not have code-behind and does not need full trust, the form can be manually published to a document library, but the process and the location of the document library must be documented inside the form.
    • Just add the documentation text into a section control at the top of the form and set conditional formatting on that section to always hide it; that way users will never see it.

Web Parts:

  • All WebParts should have a title, a description, and an icon.

Application Configuration:

Web.config:

  • APIs such as the ConfigurationSection class and the SPWebConfigModification class should always be used when making modifications to the web.config file (a scripted sketch follows below).
  • HTTP module, FBA membership, and role provider configuration must be made in the web.config.
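
As a sketch of the SPWebConfigModification approach (the web application URL, owner and setting below are placeholder values), a scripted change could look like this:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
$webApp = Get-SPWebApplication "http://intranet"    # placeholder URL
$mod = New-Object Microsoft.SharePoint.Administration.SPWebConfigModification
$mod.Path = "configuration/appSettings"
$mod.Name = "add[@key='MyProject.SettingKey']"      # XPath-style name so the change can be located and retracted later
$mod.Owner = "MyProject"                            # recording an owner makes clean-up possible
$mod.Sequence = 0
$mod.Type = [Microsoft.SharePoint.Administration.SPWebConfigModificationType]::EnsureChildNode
$mod.Value = "<add key='MyProject.SettingKey' value='SomeValue' />"
$webApp.WebConfigModifications.Add($mod)
$webApp.Update()
$webApp.WebService.ApplyWebConfigModifications()    # pushes the change out to the web.config files in the farm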

Property Bags:

  • It is recommended that you create your own _layouts page for managing your own settings (a minimal read/write sketch follows below).
  • It is also recommended that you use this CodePlex tool to support this method: http://pbs2010.codeplex.com/
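
For web-scoped settings, the property bag can also be read and written directly from the SharePoint Management Shell; a minimal sketch (the URL and key are placeholders):

$web = Get-SPWeb "http://intranet/sites/teamsite"   # placeholder URL
# write a setting into the web's property bag
$web.AllProperties["MyProject_SettingKey"] = "SomeValue"
$web.Update()
# read it back
$web.AllProperties["MyProject_SettingKey"]
$web.Dispose()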

Lists:

  • This is not a recommended option for Farm or Web Application level configuration data.
  • It is also recommended that you use this CodePlex tool to support this method: http://spconfigstore.codeplex.com/

Hierarchical Object Store (HOS) or SPPersistedObject:

  • Ensure that any trees or relationships you create are clearly documented.
  • It is also recommended that you use the Manage Hierarchical Object Store feature at http://features.codeplex.com/. Note that this feature only stores values at the Web Application level; you can build a hierarchy of persisted objects, but these objects don’t necessarily map to SPSites and SPWebs.

 

When should I choose to create a mail app versus an add-in for Outlook?


Some of you may or may not be aware that alongside the legacy COM-based Office client object models, Office 2013 supports a new apps for Office developer platform. This blog post is intended to help new and existing Office developers understand the main differences between the COM-based object models and the apps for Office platform. In particular, this post focuses on Outlook, suggests why you should consider developing solutions as mail apps, and identifies those exceptional scenarios where add-ins may still be the more appropriate choice.

Contents:

An introduction to the apps for Office platform

Architectural differences between add-in model and apps for Office platform

Main features available to mail apps

Major objects for mail apps

Reasons to create mail apps instead of add-ins for Outlook

Reasons to choose add-ins

Conclusion

Further references

An introduction to the apps for Office platform

The apps for Office platform includes a JavaScript API for Office and a schema for apps for Office manifests. You can use this platform to extend web services and content into the context of rich and web clients of Office. An app for Office is a webpage that is developed using common web technologies, hosted inside an Office client application (such as Outlook) on-premises or in the cloud. Of the three types of apps for Office, the type that Outlook supports is called mail apps. While you use the legacy APIs—the object model, PIA, and MAPI—to automate Outlook at an application level, you can use the JavaScript API for Office in a mail app to interact at an item level with the content and properties of an email message, meeting request, or appointment. You can publish mail apps in the Office Store or in an internal Exchange catalog. End users and administrators can install mail apps for an Exchange 2013 mailbox, and use mail apps in the Outlook rich client as well as Outlook Web App. As a developer, you can choose to make your mail app available for end users on only the desktop, or also on the tablet or smart phone. You can find more information about the apps for Office platform by starting here: Overview of apps for Office.

Architectural differences between add-in model and apps for Office platform

Add-in model

The Office add-in model offers individual object models for most of the Office rich clients. Each object model is intended to automate the corresponding Office client, and allows an add-in to integrate closely with the behavior of that client. The same add-in can integrate with one or multiple Office applications, such as Outlook, Word, and Excel, by calling into each of the Outlook, Word, and Excel object models. Figure 1 describes a few examples of 1:1 relationships between an Office rich client and its object model.

Figure 1. The legacy Office development architecture is composed of individual client object models.

 

Apps for Office platform

The apps for Office platform includes an apps for Office schema. Using this schema, each app specifies a manifest that describes the permissions it requests, its requirements for its host applications (for example, a mail app requires the host to support the mailbox capability), its support for the default and any extra locales, display details for one or more form factors, and activation rules for a mail app to be available in the app bar.

In addition to the schema, the apps for Office platform includes the JavaScript API for Office. This API spans across all supporting Office clients and allows apps to move toward a single code base. Rather than automating or extending a particular Office client at the application level, the apps for Office platform allows apps to connect to services and extend them into the context of a document, message, or appointment item in a rich or web client. Figure 2 shows Office applications with their rich and web clients sharing a common app platform.

Figure 2. The apps for Office development architecture is composed of a common platform and individual object models.

 

One main difference of note is that the object models were designed to integrate tightly with the corresponding Office client applications. However, this tight integration has a side effect of requiring an add-in to run in the same process as the rich client. The reliability and performance of an add-in often affects the perceived performance of the rich client. Unlike client add-ins, an app for Office doesn’t integrate as tightly with the host application, does not share the same process as the rich client, and instead runs in its own isolated runtime environment. This environment offers a privacy and permission model that allows users and IT administrators to monitor their ecosystem of apps and enjoy enhanced security.

Main features available to mail apps

Contextual activation: Mail app activation is contextual, based on the app’s activation rules and current circumstances, including the item that is currently displayed in the Reading Pane or inspector. A mail app is activated and becomes available to end users when such circumstances satisfy the activation rules in the app manifest.

Matching known entities or regular expression: A mail app can specify certain entities (such as a phone number or address) or regular expressions in its activation rules. If a match for entities or regular expressions occurs in the item’s subject or body, the mail app can access the match for further processing.

Roaming settings: A mail app can save data that is specific to Outlook and the user’s Exchange mailbox for access in a subsequent Outlook session.

Accessing item properties: A mail app can access built-in properties of the current item, such as the sender, recipients, and subject of a message, or the location, start, end, organizer, and attendees of a meeting request.

Creating item-level custom properties: A mail app can save item-specific data in the user’s Exchange mailbox for access in a subsequent Outlook session.

Accessing user profile: A mail app can access the display name, email address, and time zone in the user’s profile.

Authentication by identity tokens: A mail app can authenticate a user by using a token that identifies the user’s email account on an Exchange Server.

Using Exchange Web Services: A mail app can perform more complex operations or get further data about an item through Exchange Web Services.

Permissions model and governance: Mail apps support a three-tier permissions model. This model provides the basis for privacy and security for end users of mail apps.

Major objects for mail apps

For mail apps, you can look at the JavaScript API for Office object model in three layers:

  1. In the first layer, there are a few objects shared by all three types of apps for Office: Office, Context, and AsyncResult.
  2. The second layer in the API that is applicable and specific to mail apps includes the Mailbox, Item, and UserProfile objects, which support accessing information about the user and the item currently selected in the user’s mailbox.
  3. The third layer describes the data-level support for mail apps:
    1. There are CustomProperties and RoamingSettings that support persisting properties set up by the mail app for the selected item and for the user’s mailbox, respectively.
    2. There are the supported item objects, Appointment and Message, that inherit from Item, and the MeetingRequest object that inherits from Message. These objects represent the types of Outlook items that support mail apps: calendar items of appointments and meetings, and message items such as email messages, meeting requests, responses, and cancellations.
    3. Then there are the item-level properties (such as Appointment.subject) as well as objects and properties that support certain known Entities objects (for example Contact, MeetingSuggestion, PhoneNumber, and TaskSuggestion).

Figure 3 shows the major objects: Mailbox, Item, UserProfile, Appointment, Message, Entities, and their members.

Figure 3. Major objects and their members used by mail apps in the JavaScript API for Office.

Figure 4 shows all of the objects and enumerations in the JavaScript API for Office that pertain to mail apps.

Figure 4. All objects for mail apps in the JavaScript API for Office.

Figure 5 is a thumbnail of a diagram with all the objects and members that mail apps use. Zoom into the diagram at http://go.microsoft.com/fwlink/?LinkId=317271.

Figure 5. All objects and members used by mail apps in the JavaScript API for Office.

Reasons to create mail apps instead of add-ins for Outlook

The following are common reasons why mail apps are a better choice for developers than add-ins:

  • You can use your existing knowledge of, and the benefits of, web technologies such as HTML, JavaScript, and CSS. For power users and new developers, XML, HTML, and JavaScript require less significant ramp-up time than COM-based APIs such as the Outlook object model.
  • You can use a simple web deployment model to update your mail app (including the web services that the app uses) on your web server without any complex installation on the Outlook client. In fact, any updates to the mail app, with the exception of the app manifest, do not require any updating on the Office client. You can update the code or user interface of the mail app conveniently just on the web server. This presents a significant advantage over the administrative overhead involved in updating add-ins.
  • You can use a common web development platform for mail apps that can roam across the Outlook rich client and Outlook Web App on the desktop, tablet, and smartphone. On the other hand, add-ins use the object model for the Outlook rich client and, hence, can run on only that rich client on a desktop form factor.
  • You can enjoy rapid turnaround of building and releasing apps via the Office Store.
  • Because of the three-tier permissions model, users and administrators can appreciate better security and privacy in mail apps than add-ins, which have full access to the content of each account in the user’s profile. This, in turn, encourages user consumption of apps.
  • Depending on your scenarios, there are features unique to mail apps that you can take advantage of and that are not supported by add-ins:
    • You can specify a mail app to activate only for certain contexts (for example, Outlook displays the app in the app bar only if the message class of the user-selected appointment is IPM.Appointment.Contoso, or if the body of an email contains a package tracking number or a customer identifier).
    • You can activate a mail app if the selected message contains some known entities, such as an address, contact, email address, meeting suggestion, or task suggestion.
    • You can take advantage of authentication by identity tokens and of Exchange Web Services.

Reasons to choose add-ins

The following features are unique to add-ins and may make them a more appropriate choice than mail apps in some circumstances:

  • You can use add-ins to extend or automate Outlook at an application-level, because the object model and PIA have extensive integration with Outlook features (such as all Outlook item types, user interface, sessions, and rules). At the item-level, add-ins can interact with an item in read or compose mode. With mail apps, you cannot automate Outlook at the application level, and you can extend Outlook’s functionality in the context of only the read-mode of the supported items (messages and appointments) in the user’s mailbox.
  • You can specify custom business logic for a new item type.
  • You can modify and add custom commands in the ribbon and Backstage view.
  • You can display a custom form page or form region.
  • You can detect events such as sending an item or modifying properties of an item.
  • You can use add-ins on Outlook 2013 and Exchange Server 2013, as well as earlier versions of Outlook and Exchange. On the other hand, mail apps work with Outlook and Exchange starting in Outlook 2013 and Exchange Server 2013, but not earlier versions.

Conclusion

When you are considering creating a solution for Outlook, first verify whether the supported major features and objects of the apps for Office platform meet your needs. Develop your solution as a mail app, if possible, to take advantage of the platform’s support across Outlook clients over the desktop, tablet, and smartphone form factors. Note that there are still some circumstances where add-ins are more appropriate, and you should prioritize the goals of your solution before making a decision.

Further references

Apps for Office and mail apps

SAP NetWeaver and Hyper-Threading on Windows Servers : To be or Not to be

Here are the key messages related to SAP NetWeaver and Hyper-Threading (HT), now called Simultaneous MultiThreading (SMT), derived from different sources as well as internal lab tests done for the WS2012 SAP First Customer Shipment program. In addition, I have incorporated very valuable input from Juergen Thomas, who has published many blogs and papers about SAP on the Microsoft platform:

  1. Always keep in mind: a CPU thread is NOT equal to a core! (Also see the walk-through section at the end and the quick check below.)

  2. Sizing based on SAPS, which is done by hardware vendors for certain server models, usually includes SMT. Looking at the latest published SAP SD benchmarks, one will realize immediately that SMT was turned on according to the CPU information (# processors / # cores / # threads). The goal is to achieve the maximum amount of SD workload. Including SMT for sizing implicitly means that customers will turn it on.

  3. Using Hyper-V on a server with more than 64 logical processors (e.g. 40 cores with SMT turned on) requires Windows Server 2012. Hyper-V in Windows Server 2008 (R2) has a limit of 64 logical CPUs, although Windows Server 2008 R2 itself can address 256 CPU threads in a bare-metal deployment.

  4. When using the latest OS and application releases, the general suggestion regarding SMT is to always turn it on. It either helps or won’t hurt. Turning SMT on or off requires a reboot, as it’s a BIOS setting on the physical host.

  5. How much SMT will help depends on the application workload. While there is a proven benefit in SAP SD benchmarks as well as in many other benchmarks, we know of customer tests where there was no difference between running a virtualized SAP application server on WS2012 Hyper-V with or without SMT. The conclusion is that the effect/impact of SMT is pretty much dependent on the individual customer scenario.
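
A quick way to see the core-to-thread ratio of a given host is a WMI query; the following sketch should work on any recent Windows Server:

# shows physical cores vs. logical processors (threads) per socket - with SMT on, threads = 2 x cores
Get-WmiObject Win32_Processor |
    Select-Object Name, NumberOfCores, NumberOfLogicalProcessors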
     

General Information about Hyper-Threading

Having been around for many years, Hyper-Threading (HT), now called Simultaneous MultiThreading (SMT), is a well-known feature of Intel processors. Looking on the Internet, one can easily find a lot of information and technical descriptions about it, like this one:

http://software.intel.com/en-us/articles/performance-insights-to-intel-hyper-threading-technology/

While there have been some issues in the early days, the recommendation for more recent OS and application releases is usually to have SMT turned on, as it does increase the throughput achievable with an application on a single server. Nevertheless, the question about the impact of SMT on SAP NetWeaver shows up again when looking at the execution speed of a single request handled by a single CPU thread. Especially as we are pushing WS2012 Hyper-V, customers wonder what the performance characteristics are with a combination of SMT and virtualization.

In this blog I try to summarize the status quo and give some guidance based on the statements, test results and experiences which are around.

Single-Thread Performance

I personally would like to separate the SMT discussion from the single-thread performance discussion. When people talk about the latter, it is usually about the trend in processor technology to increase the number of cores instead of increasing the clock rate. In these discussions the often unspoken assumption is one thread per core. Sure – there are very good reasons for it. But as one can read under the following links, only applications which are able to use parallelism will fully benefit from the multi-core design. As a consequence, certain SAP batch jobs which depend on high single-thread performance might not improve much, or at all, when the underlying hardware gets upgraded to the next processor version with more cores per CPU. A customer example of this effect can be found here:
http://blogs.msdn.com/b/saponsqlserver/archive/2010/01/24/performance-what-do-we-mean-in-regards-to-sap-workload.aspx

SAP NetWeaver is not a multi-threaded application. But in SAP it is of course possible to try to parallelize processing at a business process level – just think about payroll parallelism in SAP.

Some basic articles around single-threaded CPU performance and multi-core processing
can be found here :

http://preshing.com/20120208/a-look-back-at-single-threaded-cpu-performance

http://en.wikipedia.org/wiki/Multi-core_processor

http://iet-journals.org/archive/2012/may_vol_2_no_5/846361133715321.pdf

Could SMT hurt in some cases?

is in principle the wrong question. The correct question should be: what is the effect and impact of SMT with different applications and different configurations or scenarios? Understanding these effects and impacts will make it possible to adapt and get the maximum out of an investment in a specific hardware model. There might be some outdated messages or opinions around, based on experiences from the early days, which are no longer valid. In other cases I personally wouldn’t call it an issue of SMT but rather one of wrong sizing or of overlooking some documented restrictions. First, let’s look again at a general statement:

http://software.intel.com/en-us/articles/performance-insights-to-intel-hyper-threading-technology/

“Ideal scheduling would be to place active threads on cores before scheduling on threads on
the same core when maximum performance is the goal. This is best left to the operating system.
All multi-threaded operating systems support Intel HT Technology, while later versions have more
support for scheduling threads in the most ideal manner to maximize performance gains”

 

Here are some more details :

  1. single-thread / single-core performance

In SAP note 1612283 section “1.1 Clock Speed” you will find the following statement :
“If you need to speed up a single transaction or report you might try to switch off
Hyperthreading”

Based on some testing, I would like to differentiate this a little bit further. As long as the number of running processes/threads is <= the number of cores, the OS/hypervisor should be smart enough to distribute the workload over all the cores. In this case there shouldn’t be any effect/impact from SMT being on or off. Based on the basics of SMT as described in the Intel article named above, the expectation is that once the number of running processes/threads exceeds the number of cores, the performance/throughput of a single CPU thread decreases, depending on the load.

  2. parallelism
    From an OS perspective one shouldn’t see any major issues with SMT anymore. It looks different though when it comes to the application. One potential issue could arise if an application doesn’t realize that the available logical CPUs are mapped to SMT threads and not cores. This could lead to wrong assumptions. Here is an example from SQL Server:

http://support.microsoft.com/kb/2023536

“For servers that have hyper-threading enabled, the max degree of parallelism value should not
exceed the number of physical processors”
( the term “processors” being used related to physical cores in this article )

It’s related to the two items above. Parallelism at the SQL statement level means that the SQL Server optimizer expects all logical CPUs to be of the same type. The important question is whether this is merely not as fast as having as many cores as logical CPUs, or whether it in fact becomes slower than without SMT.
3. virtualization

Another topic is running VMs on Hyper-V with SMT turned on on the underlying physical host. It’s again not different from the items mentioned before. Inside a VM, an application might not be aware of the nature of a virtual CPU. It’s not just SMT: depending on the configuration (e.g. over-commitment) and the capabilities of the OS/hypervisor, a virtual processor may correspond only to a “fraction” of a real physical CPU.

Internal lab tests on WS2012 Hyper-V have proven the statement I quoted at the beginning of this
section. As long as there are enough cores available the workload will be optimally distributed.
A perfect way to show this is to increase the number of virtual CPUs inside a VM step by step
while monitoring the CPU workload on the host.

The CPU load screenshots further down were taken from perfmon on a WS2012 host where SMT
was turned on. The server had 8 cores and due to SMT 16 logical processors. The server also had
two NUMA nodes  ->  4 cores / 8 threads each. The SAP test running inside a VM ( guest OS was
Windows 2008 R2 ) was absolutely CPU-bound. The scenario looked like this :

a, the test started with two Virtual processors ( VP ) and the workload was increased until both VPs
were 100% busy

b, then the number of VPs was increased to four to see if it was possible to double the
workload. Scalability was very good in this case because the workload could still be
distributed over all four cores in one single NUMA node of the host server

c, but going from 4 VPs to 6 VPs changed the picture. Hyper-V has improved NUMA support and by default sets the max VPs per NUMA node to the # of Logical Processors (LP) of the NUMA node (8 on the test hardware). And as 6 VPs is still < 8, the whole workload of the VM still ended up on one single NUMA node. On the other side, due to SMT it was no longer possible to achieve an almost 1:1 VP-to-physical-core mapping. This is a situation where you will still see an improved throughput compared to four VPs, but it’s far away from the linear scalability we achieved when going from 2 VPs to 4 VPs.

Keep in mind that it’s NOT possible to configure processor affinity on Hyper-V to achieve
a fixed VP-physical-core mapping. But setting the “reserve” value in WS2012 Hyper-V
Manager to 100 has basically the same effect. See also the blog from Ben Armstrong :

http://blogs.msdn.com/b/virtual_pc_guy/archive/2009/09/21/processor-affinity-and-why-you-don-t-need-it-on-hyper-v.aspx

d, the next step was to adapt the setting for the VM in Hyper-V Manager. It allows you to define the max VPs per NUMA node. Setting this value to four forced Hyper-V to distribute the workload over two NUMA nodes when using 6 VPs in the VM. Now it was again possible to achieve basically a 1:1 VP-to-physical-core mapping. Scalability looked fine, and because the test was totally CPU-bound, the disadvantage of potentially slower memory access didn’t matter. The advantage of getting more CPU power outweighed the memory access penalty by far.
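
The same settings can presumably also be applied through the Hyper-V PowerShell module in WS2012 instead of Hyper-V Manager; the VM name and values below are just placeholders matching the scenario described above:

# give the VM six virtual processors but allow at most four VPs per NUMA node,
# which forces Hyper-V to spread the VM across two physical NUMA nodes
Set-VMProcessor -VMName "SAPAPP01" -Count 6 -MaximumCountPerNumaNode 4
# reserving 100% of host CPU per VP has roughly the effect of a fixed VP-to-core mapping
Set-VMProcessor -VMName "SAPAPP01" -Reserve 100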

The following pictures and screenshots will visualize the four items above :

 

Figure 1 : as long as SMT is turned off and a VM won’t span multiple numa nodes the virtual processors
will be mapped to the cores of one single numa node on the physical host

 

Figure 2 : once SMT is turned on the Virtual Processors of a VM will be mapped to Logical Processors
on the physical host. By Default WS2012 Hyper-V Manager will set the maximum # of Virtual
Processors per numa node according to the hardware layout. In this example it means a max
of 8 Virtual Processors per numa node

Figure 3 : configuring 4 Virtual Processors in a VM while SMT is turned on means that these 4 VPs
will be mapped to 8 Logical Processors on the physical host. This allows the OS/Hypervisor
to make sure that the workload will be distributed over all 4 physical cores in an optimal way
similar to having SMT turned off


Figure 4 : perfmon showed that the workload which kept four virtual processors busy inside a VM was
distributed over eight Logical processors on four cores within one NUMA node on the
physical host

 

Figure 5 : what happens when adding two additional Virtual Processors inside the
single VM ? Because 6 VPs is still less than the default setting of a max
of 8 VPs per single numa node the whole workload will be still mapped
to 8 Logical Processors which correspond to 4 cores within one numa
node


Figure 6 : increasing the workload the same way as before (from two virtual processors to four VPs) by configuring six VPs inside the VM caused a super-busy single NUMA node using almost all of its threads at 100%. This means each of the physical cores of this NUMA node had to engage the two SMT threads it represents to the OS/hypervisor much more heavily. Therefore the scalability of going from four VPs to six VPs did not look great compared to going from two VPs to four VPs.

Figure 7 : the default setting regarding # of virtual processors per numa node in WS2012
Hyper-V Manager can be changed. Setting the number low enough will force
the Hypervisor to use more than one numa node

 

 

 


Figure 8 : changing the “max VPs per NUMA node” setting in Hyper-V Manager ( 2012 ) to four forced Hyper-V to use the second NUMA node. This allowed again basically a 1:1 VP-to-physical-core mapping and scalability looked fine again.

Conclusion :

The references as well as the experiences shown make it obvious that the effects of SMT are related to the sizing and configuration of SAP deployments, and that expectations and SLAs should be set accordingly.

It is proven that with SMT configured on a Hyper-V host or on an SAP bare-metal deployment, the overall throughput of a specific server increases, as does the power/throughput ratio. Both are usually the goals we follow when specifying hardware configurations for SAP deployments.

In terms of using SMT for Hyper-V hosts, one clearly needs to define the goals of deploying SAP components in VMs. If the goal is again to maximize the available capacity of the servers, then having SMT enabled is the way to go. That means one would deploy as many VPs as there are logical processors on the host server and accept that there might be performance variations depending on the load across all VMs, or on the fact that a VM has more VPs than the number of physical cores in one NUMA node of the host server. SLAs towards the business units would then take such variations into account.

Walk-through TTHS – Tray Table Hyper-Seating

 

One thing which will be repeated again and again in all the articles about SMT is the fact that a CPU thread is NOT equal to a core. To visualize this specific point and to make it easy to remember, I would like to compare SMT with TTHS – Tray Table Hyper-Seating – as shown in the following six pictures:

 

Figure 1 :  you have four comfortable seats and four passengers. Everyone is happy.

 

Figure 2 :  now you want to get more than four passengers into the car, and the idea is to introduce TTHS – Tray Table Hyper-Seating. This allows two passengers to share one seat. But it’s pretty obvious that it’s not so comfortable anymore. Especially one of the two passengers cannot enjoy the cozy seat surface.
 

Figure 3 :  therefore the driver should be smart enough to let passengers enjoy the cozy seat surface
as long as seats are available despite the fact that TTHS is turned on

Figure 4 :   at some point, though, when you want to put six passengers into the 4-seat car, two of them have to take the TTHS spots. This is when issues might arise.

 

Figure 5 :  of course one could turn TTHS off again and share two seats the traditional way. While this might work too, it’s very obvious that it’s not perfect.

 

Figure 6 :  conclusion: if it’s a hard requirement that every passenger has his own seat to fully enjoy the cozy seat surface, then there is no other way than to take a different car with an appropriate number of seats.

How To : Use Powershell Scripts in Office 365 through the SharePoint CSOM

When we first started to work with Office 365, I remember being quite concerned at the lack of PowerShell cmdlets – basically all the commands we’re used to using do not exist there. Here’s a gratuitous graph to illustrate the point:

image

So yes, nearly 800 PowerShell commands in SP2013 (up from around 530 in SP2010) down to a measly 30 in SharePoint Online. And those 30 mainly cover basic operations with sites, users and permissions – no scripting of, say, Managed Metadata, user profiles, search and so on. It’s true to say that some of these things are now available down at site-collection scope (needed, of course, when you don’t have a true “Central Admin” site), but there are still “tenant-level” settings that you want to script rather than change manually through the UI.

So what’s a poor developer/administrator to do?

The answer is to write PowerShell as you always did, but embed CSOM code in there. More examples later, but here’s a small illustration:

# get the site collection scoped Features collections (e.g. to activate one) – not showing how to obtain $clientContext here..
$siteFeatures = $clientContext.Site.Features
$clientContext.Load($siteFeatures)
$clientContext.ExecuteQuery()

So we’re using the .NET CSOM, but instead of C# we are using PowerShell’s ability to call any .NET object (indeed, nearly every script will use PowerShell’s New-Object command). All the things we came to love about PowerShell are back on the table:

  • Scripts can be easily amended, no need to recompile (or open Visual Studio)
  • We can debug with PowerGui or PowerShell ISE
  • We can leverage other things PowerShell is good at e.g. easily reading from XML files, using other PowerShell modules and other APIs (including .NET) etc.

Of course, we can only perform operations where the method exists in the .NET CSOM – that’s the boundary of what we can do.

Getting started

Step 1 – understand the landscape

The first thing to understand is that there are actually 3 different approaches for scripting against Office 365/SharePoint Online, depending on what you need to do. It might just be me, but I think that when you start it’s easy to get confused between them, or not fully appreciate that they all exist. The 3 approaches I’m thinking of are:

  • SharePoint Online cmdlets
  • MSOL cmdlets
  • PowerShell + CSOM

This post focuses on the last flavor. I also wrote a short companion post about the overall landscape, with some details/examples on the other flavors, at Using SharePoint Online and MSOL cmdlets in PowerShell with Office 365.

Step 2 – prepare the machine from which you will run scripts against SharePoint Online

Option 1 – if you will NOT run scripts from a SP2013 box (e.g. a SP2013 VM):

You need to obtain the SharePoint DLLs which comprise the .NET CSOM, and copy them to a folder on your machine – your scripts will reference these DLLs.

  1. Go to any SharePoint 2013 server, and copy any DLL which starts with Microsoft.SharePoint.Client*.dll from the C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI folder.
  2. Store them in a folder on your machine, e.g. C:\Lib – make a note of this location.
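
As a small sketch of those two steps, run on a SharePoint 2013 server (using C:\Lib as the destination, matching the note above):

# run on a SharePoint 2013 server - grabs the client DLLs and drops them into C:\Lib
New-Item -Path "C:\Lib" -ItemType Directory -Force | Out-Null
Copy-Item "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client*.dll" -Destination "C:\Lib"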

CSOM DLLs

Option 2 – if you WILL run scripts from a SP2013 box (e.g. a SP2013 VM):

In this case, there is no need to copy the DLLs – your scripts will reference them in the original SharePoint install location (C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI).

The top of your script – referencing DLLs and authentication

Each .ps1 file which calls the SharePoint CSOM needs to deal with two things before you can use the API – loading the CSOM types, and authenticating/obtaining a ClientContext object. So, you’ll need this at the top of your script:

# replace these details (also consider using Get-Credential to enter the password securely as the script runs)..
$username = "SomeUser@SomeOrg.onmicrosoft.com"
$password = "SomePassword"
$url = "https://SomeOrg.sharepoint.com/sites/SomeSite"
$securePassword = ConvertTo-SecureString $password -AsPlainText -Force
# the path here may need to change if you used e.g. C:\Lib..
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.dll"
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"
# note that you might need some other references (depending on what your script does), for example:
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Taxonomy.dll"
# connect/authenticate to SharePoint Online and get a ClientContext object..
$clientContext = New-Object Microsoft.SharePoint.Client.ClientContext($url)
$credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($username, $securePassword)
$clientContext.Credentials = $credentials
if (!$clientContext.ServerObjectIsNull.Value)
{
    Write-Host "Connected to SharePoint Online site: '$url'" -ForegroundColor Green
}
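
As the comment at the top of the script suggests, you may prefer prompting for the password rather than hard-coding it – a hedged variation on the authentication lines above:

# prompt for the password securely instead of storing it in the script
$cred = Get-Credential $username
$credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($cred.UserName, $cred.Password)
$clientContext.Credentials = $credentials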

In the scripts which follow, we’ll include this “top of script” stuff by dot-sourcing TopOfScript.ps1 in every script below – you could follow a similar approach (perhaps with a different name!) or simply paste that stuff into every script you create. If you enter a valid set of credentials and URL, running the script above should see you ready to rumble:

PS CSOM got context

Script examples

Activating a Feature in SPO

Something you might want to do at some point is enable or disable a Feature using script. The script below, like the others that follow it, all reference my TopOfScript.ps1 script above:

. .\TopOfScript.ps1
[bool]$enable = $true
[bool]$force = $false
# using the Minimal Download Strategy Feature here..
$featureId = [GUID]("87294C72-F260-42f3-A41B-981A2FFCE37A")
# ..and working with the web-scoped Features – use $clientContext.Site.Features for site-scoped Features
$webFeatures = $clientContext.Web.Features
$clientContext.Load($webFeatures)
$clientContext.ExecuteQuery()
if ($enable)
{
    $webFeatures.Add($featureId, $force, [Microsoft.SharePoint.Client.FeatureDefinitionScope]::None)
}
else
{
    $webFeatures.Remove($featureId, $force)
}
try
{
    $clientContext.ExecuteQuery()
    if ($enable)
    {
        Write-Host "Feature '$featureId' successfully activated.."
    }
    else
    {
        Write-Host "Feature '$featureId' successfully deactivated.."
    }
}
catch
{
    Write-Error "An error occurred whilst activating/deactivating the Feature. Error detail: $($_)"
}

PS CSOM activate feature

Enable side-loading (for app deployment)

Along very similar lines (because it also involves activating a Feature), is the idea of enabling “side-loading” on a site. By default, if you’re developing a SharePoint app it can only be F5 deployed from Visual Studio to a site created from the Developer Site template, but by enabling “side-loading” you can do it on (say) a team site too. Since the Feature isn’t visible (in the UI), you’ll need a script like this:

. .\TopOfScript.ps1
[bool]$enable = $true
[bool]$force = $false
# this is the side-loading Feature ID..
$featureId = [GUID]("AE3A1339-61F5-4f8f-81A7-ABD2DA956A7D")
# ..and this one is site-scoped, so using $clientContext.Site.Features..
$siteFeatures = $clientContext.Site.Features
$clientContext.Load($siteFeatures)
$clientContext.ExecuteQuery()
if ($enable)
{
    $siteFeatures.Add($featureId, $force, [Microsoft.SharePoint.Client.FeatureDefinitionScope]::None)
}
else
{
    $siteFeatures.Remove($featureId, $force)
}
try
{
    $clientContext.ExecuteQuery()
    if ($enable)
    {
        Write-Host "Feature '$featureId' successfully activated.."
    }
    else
    {
        Write-Host "Feature '$featureId' successfully deactivated.."
    }
}
catch
{
    Write-Error "An error occurred whilst activating/deactivating the Feature. Error detail: $($_)"
}

PS CSOM enable side loading

Iterating webs

Sometimes you might want to loop through all the webs in a site collection, or underneath a particular web:

. .\TopOfScript.ps1
$rootWeb = $clientContext.Web
$childWebs = $rootWeb.Webs
$clientContext.Load($rootWeb)
$clientContext.Load($childWebs)
$clientContext.ExecuteQuery()
function processWeb($web)
{
    $clientContext.Load($web)
    $clientContext.ExecuteQuery()
    Write-Host "Web URL is" $web.Url
}
foreach ($childWeb in $childWebs)
{
    processWeb($childWeb)
}

PS CSOM iterate webs

(Worth noting that you also see SharePoint-hosted app webs in the image above, since these are just subwebs – albeit ones which get accessed on the app domain URL rather than the actual host site’s web application URL.)

Iterating webs, then lists, and updating a property on each list

Or how about extending the sample above to not only iterate webs, but also the lists in each – the property I’m updating on each list is the EnableVersioning property, but you could easily use any other property or method in the same way:

. .\TopOfScript.ps1
$enableVersioning = $true
$rootWeb = $clientContext.Web
$childWebs = $rootWeb.Webs
$clientContext.Load($rootWeb)
$clientContext.Load($childWebs)
$clientContext.ExecuteQuery()
function processWeb($web)
{
    $lists = $web.Lists
    $clientContext.Load($web)
    $clientContext.Load($lists)
    $clientContext.ExecuteQuery()
    Write-Host "Processing web with URL" $web.Url
    foreach ($list in $lists)
    {
        Write-Host "--" $list.Title
        # leave the "Master Page Gallery" and "Site Pages" lists alone, since these have versioning enabled by default..
        if ($list.Title -ne "Master Page Gallery" -and $list.Title -ne "Site Pages")
        {
            Write-Host "---- Versioning enabled:" $list.EnableVersioning
            $list.EnableVersioning = $enableVersioning
            $list.Update()
            $clientContext.Load($list)
            $clientContext.ExecuteQuery()
            Write-Host "---- Versioning now enabled:" $list.EnableVersioning
        }
    }
}
foreach ($childWeb in $childWebs)
{
    processWeb($childWeb)
}

PS CSOM iterate lists enable versioning

Import search schema XML

In SharePoint 2013 and Office 365, many aspects of search configuration (such as Managed Properties and Crawled Properties, Query Rules, Result Sources and Result Types) can be exported and imported between environments as an XML file. The sample below shows the import operation handled with PS + CSOM:

. .\TopOfScript.ps1
# need some extra types bringing in for this script..
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Search.dll"
# TODO: replace this path with yours..
$pathToSearchSchemaXmlFile = "C:\COB\Cloud\PS_CSOM\XML\COB_TenantSearchConfiguration.xml"
# we can work with search config at the tenancy or site collection level:
#$configScope = "SPSiteSubscription"
$configScope = "SPSite"
$searchConfigurationPortability = New-Object Microsoft.SharePoint.Client.Search.Portability.SearchConfigurationPortability($clientContext)
$owner = New-Object Microsoft.SharePoint.Client.Search.Administration.SearchObjectOwner($clientContext, $configScope)
[xml]$searchConfigXml = Get-Content $pathToSearchSchemaXmlFile
$searchConfigurationPortability.ImportSearchConfiguration($owner, $searchConfigXml.OuterXml)
$clientContext.ExecuteQuery()
Write-Host "Search configuration imported" -ForegroundColor Green

PS CSOM import search schema

Summary

As you can hopefully see, there’s lots you can accomplish with the PowerShell and CSOM combination. Anything that can be done with CSOM API can be wrapped into a script, and you can build up a library of useful PowerShell snippets just like the old days. There are some interesting things that you CANNOT do with CSOM (such as automating the process of uploading/deploying a sandboxed WSP to Office 365), but there ARE approaches for solving even these problems, and I’ll most likely cover this (and our experiences) in future posts.

A final idea on the PowerShell + CSOM front is the idea that you can have “hybrid” scripts which can deal with both SharePoint Online and on-premises SharePoint. For example, on my current project everything we build must be deployable to both SPO and on-premises, and our scripts take a “DeploymentTarget” parameter where the values can be “Online” or “OnPremises”. There are some differences (i.e. branching) in the scripts, but for many operations the same commands can be run.
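
As a rough sketch of that idea (the parameter name and branching below are an assumption about how such a script could look, not the exact scripts from that project):

# assumes the CSOM assemblies have already been loaded (see TopOfScript.ps1 above)
param(
    [ValidateSet("Online","OnPremises")]
    [string]$DeploymentTarget = "Online",
    [string]$Url,
    [string]$Username,
    [string]$Password
)
$securePassword = ConvertTo-SecureString $Password -AsPlainText -Force
$clientContext = New-Object Microsoft.SharePoint.Client.ClientContext($Url)
if ($DeploymentTarget -eq "Online")
{
    $clientContext.Credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($Username, $securePassword)
}
else
{
    # on-premises: a standard Windows credential (or omit this line to use the current user)
    $clientContext.Credentials = New-Object System.Net.NetworkCredential($Username, $securePassword)
}
# ...the rest of the script can now run the same CSOM calls against either environment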

How To : Use the Content Query Web Part for SharePoint 2013 Search

Meeting client requirements with SharePoint often involves aggregating items somehow – often we want to display things like “all the overdue tasks across all finance sites”, or “navigation links to all of the subsites of this area” or “related items (e.g. tagged with the same term)” and so on. In SharePoint 2010 there have been two main ways of accomplishing this:


  • Content Query web part
  • Custom solution built on SPSiteDataQuery (site collection-scoped), SPQuery (list-scoped) or search API

To a lesser extent, using the search web parts as part of a custom solution may also have been an option. Regardless, it was common to need custom code to meet such requirements. Maybe we needed to add paging to the results, or we needed to use some value obtained dynamically through code (e.g. from the current site/current page/current user/something else) – several Codeplex solutions arose from this gap, and lots of lines of code were written.

SharePoint 2013 presents the Content Search web part as a new option – its capabilities mean that simply using the web part (with some front-end work to meet look and feel requirements) will meet many needs, without use of custom code. If you’re a developer, the following screenshot should give you a clue as to why code won’t be required too often (with one of my favorite options highlighted):

CSWP_BasicsTab_AdvancedMode_PropertyFilterValues

It’s incredibly powerful, and it’s a good idea to understand what it can do.

Understanding the deal with search-based solutions

As the name suggests, the Content Search web part is powered by SharePoint’s search function. As such, there are the following considerations:

  • The CSWP can be configured to “see” items anywhere in SharePoint (potential advantage)
    • In contrast, the CQWP and related SPSiteDataQuery can only search within the current site collection – the site collection “boundary” is a factor
  • Results shown are not guaranteed to be 100% up-to-date (potential disadvantage) 
    • Since a search crawl has to run before any content changes will be shown in search results (remember this can include titles, summaries, images and so on for pages/documents), if a user creates/edits an item it will not be shown immediately. This can be a critical point.
    • Furthermore, my understanding from a FAST engineer is that in SharePoint 2013 there is no longer any means of pushing a document directly into the search index – in previous FAST incarnations, including FAST for SharePoint 2010, there were options such as docpush.exe for “proactively” adding an item to the index, rather than waiting for the next search crawl.
    • That said, it should be possible to obtain much lower indexing latencies in SharePoint 2013 via the “Continuous Crawl” capability. In most deployments, my guess would be that changes would be reflected within a few minutes at most if this is enabled (where previously you may have had an incremental crawl scheduled every 15, 30 or 60 minutes for a SharePoint sites content source).

Summary – if the functionality you are creating needs fully up-to-date results (e.g. a user has created/edited something and it needs to be immediately reflected in the site) then you will probably need to stick with the original approaches (i.e. a query-based rather than search-based solution).

Terminology – new concepts in SharePoint 2013 search

So if we’re going to build solutions built on SP2013 search, we need to have a basic understanding of some concepts – we’ll run into these time and time again:

Concept – my quick definition:

Result Source – Like a search ‘scope’ in SP2007/SP2010, but on steroids. Rules are specified to say what the scope consists of – e.g. DOCUMENTS in my TEAM SITES area (constraining on content type and path in this example). Created centrally, or at the web level. Result Sources can be used in just about any search-related functionality, including the Content Search web part.

Query Rule – Like a ‘best bet’ on steroids. Ability to show specially formatted results at the top of the results list (e.g. a Promoted Result) for highly-recommended content. In addition to a Promoted Result, we can also do a Result Block (an example could be a block of 5 image results within the main list of text links). Another option is to Change the Ranked Results – i.e. put something at the top, or promote or demote something by 1-10 (previously known as a ‘boost’ in FAST). LOTS of flexibility in matching the user’s query, including regular expressions and matching terms in the Managed Metadata store.

Display Template – A JavaScript template (similar to jQuery templates) which controls formatting – in the case of the CSWP, this effectively replaces the use of XSL for look and feel. There is a separate template to pick for the overall control and for the formatting of an individual item. The .js files for the templates are stored in the ‘Content Web Parts’ subfolder of the Master Page Gallery. Side note – in the context of a search results page (rather than the CSWP), a Display Template is associated with a Result Type (e.g. Word doc, wiki page, PowerPoint file etc.) and so we have granular control over how each is displayed (and when). Extremely cool.

So, lots of flexibility in the search infrastructure. Let’s see some of this in the context of the Content Search web part.

Configuring the Content Search web part

There are two main aspects to this:

  • Displaying the right items (Search Criteria)
  • Look and feel (Display Templates)

In terms of the search criteria, there is enormous flexibility in what the CSWP – and the underlying search capability – can do. For one thing, it’s possible to either directly configure the query entirely in the properties of this web part instance (e.g. show me all documents which meet criteria X), and/or start from a pre-existing Result Source to do some of the filtering. Combining the approaches will be fairly common – an example could be “search only on wiki pages” (an OOTB Result Source) but only show items tagged with X (this defined directly in the CSWP properties).

Interestingly, configuring a centralized Result Source and a Content Search web part on a page are very similar, even though it would seem some sort of “reusable scope” and a web part are very different things in SharePoint. The overlap comes because underneath both there is a search query which does the work of isolating the desired results – indeed, as we’ll see later the same “Query Builder” UI is used in both places (with a couple of minor differences). So, if you’ve learnt how to configure a CSWP you’ve essentially also learned how to create  a custom Result Source.

 

Configuring the web part

The first thing to understand is that the Content Search web part appears in different guises in the web part gallery. The ‘main’ web part is in the ‘Content Rollup’ category:

CBS_MainWebPartInAdder

But there are also many pre-configured versions available, each of which finds a specific type of content. This is great for end-users who don’t necessarily think in terms of needing a ‘Content Search’ web part:

CBS_WebPartsInAdder
And just to prove the point, the web parts above correspond to the following .webpart definition files in the Web Part Gallery:

CBS_WebParts

Once the web part has been added to the page, it can be configured via its tool pane. The main configuration item is the query to use, and this can be started by clicking the ‘Change query’ button:

CSWP_properties
This opens the “Build Your Query” dialog – this has tabs labeled BASICS, REFINERS, SORTING, SETTINGS and TEST. This thing is known (unsurprisingly) as the Query Builder – what you might not realize is that it’s used in several places in SharePoint 2013:

  • Configuring a Content Search web part (obviously)
  • Creating a Result Source (specifically in the Query Transform section)
  • Configuring a Search Results web part

There are some differences – for example, when configuring a Search Results web part there is no SORTING tab because this will be handled in the Result Source or the query. I’m going to talk about things from the perspective of the Content Search web part, but will call out any differences for the other usages – so hopefully by learning the CSWP, you also get to learn 75% of the search infrastructure.

BASICS tab – Quick Mode

Although the first tab is labeled ‘BASICS’, I’d say it’s actually the most involved – this is where the query itself is configured, and there is a ‘Quick Mode’ and an ‘Advanced Mode’. You’ll also notice – and let me just say I’d personally be willing to give the Product Manager for this feature A BIG HUG for this – that there’s a “live” results preview pane, permanently visible on the right-hand side of the Query Builder. This shows the first 10 results which would display from running the currently configured search against the current index, without the need to save the web part after each change:

CSWP_BasicsTab_QuickMode

Note that if you create your own query, then this preview pane is only able to show results when you are on the TEST tab. And we’ll talk about that towards the end.

Let’s now walk through the various configuration steps in here.

Select a query

In Quick Mode, the dropdown contains the Result Sources (see my definition above if you’ve forgotten already :)) which come out-of-the-box with SharePoint 2013 – one of these may provide a good foundation for what you need:

CSWP_BasicsTab_QuickMode_SelectQuery
As you select a Result Source from the dropdown, other options may become available lower down. So if I want to find items matching a specific content type, I get this:

RestrictByContentType
In fact, this option to restrict by content type appears for many of the pre-defined Result Sources, not just “Items matching a content type” – which makes sense, because it’s a common thing to include as a filter. Similarly, “Items matching a tag” and several other queries give this interface for selecting a tag to filter on:

RestrictByTag
And, happy days, if I specify the tag by typing one I get auto-complete to help me pick the term – this is a fully-fledged Managed Metadata input field. Consequently there’s also full validation of the terms you type-in (though this takes a few seconds to show), so if an author accidentally enters something which isn’t a known term, he/she should spot the mistake immediately:

TermValidation

Consider also that those middle options of using the navigation term associated with the current page is exactly what’s needed to build many types of ‘related items’ functionality – again, no code needed now.

Restrict results by app

In the next section, I can restrict the scope of the results to a particular location (e.g. the current site). This enables me to get something like the Content Query web part behavior of only searching within the current site collection if needed – because although we now have the power, it won’t always make sense to go across the entire farm 🙂

RestrictByApp

Add additional filters

In the next section I can supplement the query with any valid query text, e.g. a property filter. In this example, I’m adding a filter to only present items which were created by the current user:

AdditionalFilter

Sort results

When we scope our query to a pre-defined Result Source (as we are here in the CSWP ‘Quick Mode’), then sorting is usually pre-defined at that level. The CSWP does give us the opportunity to override sorting based on some popularity ranking models (around most viewed/most clicked) instead though – expect proper wording to appear in this dropdown in the RTM version, but you get the idea:

SortResults
So what happens if none of the options presented so far do what you want? An example could be wanting to use an existing Result Source (e.g. ‘wiki pages’) but sort on Last Modified in descending order. Obviously the dropdown above does not allow that. We could create a custom Result Source and implement the query/sorting there, but that only really makes sense if we expect it to be re-used in multiple places.

In these cases, we can click into Advanced Mode (still on the BASICS tab).

BASICS tab – Advanced Mode

In Advanced Mode you basically get to specify the full query text yourself. In my mind, this is like building a solution with the search API in SP2007/SP2010 – I saw many custom solutions (and built several myself) which used the FullTextSqlQuery or KeywordQuery classes to find the right items. SharePoint 2013 makes it much easier to have this full control whilst still piggybacking onto the out-of-the-box web parts – meaning less work and more productivity.

When switching to the Advanced Mode, a couple of things become available:

  • A SORTING tab (details later)
  • Controls to help you build the query (which you’d previously do essentially by hand in earlier versions), with ‘Keyword filter’ and ‘Property filter’ options. These can be combined as you like, and the resulting query text appears in the textbox at the bottom:

CSWP_BasicsTab_AdvancedMode

Avoid custom code by using tokens

There are many tokens which can be used when building a query in this way – often you might want to pass something into the query, such as a URL (querystring) parameter, the value in a particular field on the page, and so on. Being able to do this unlocks a huge range of possibilities for building solutions. This is where the first image in this article comes from – here’s a reminder:

[Image: CSWP_BasicsTab_AdvancedMode_PropertyFilterValues]
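
To make the token idea more concrete, here is a hedged sketch of query text combining several of the standard SP2013 query variables ({Site.URL}, {QueryString.<parameter>}, {User.Name} and {Today-<n>}); the ‘ProjectID’ managed property and querystring parameter names are purely hypothetical examples:

# Sketch of query text you might assemble in Advanced Mode (hypothetical property names)
$queryTemplate = 'ContentType:"Project Document" path:{Site.URL} ProjectID:{QueryString.ProjectID} Author:{User.Name} write>={Today-30}'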

In summary, when using the Advanced Mode of the query builder you should be able to target just about any content in your SharePoint environment.

SORTING tab (Advanced Mode only)

In SharePoint 2010 Enterprise Search, you could only sort by relevance/rank (the normal search engine approach) or date. FAST for SharePoint 2010 had more options (you could sort by a Managed Property). In SharePoint 2013, frankly the sort options alone are enough to blow your mind 🙂  If you don’t need anything specific around sorting then you can skip this bit, but if you do then here are your options:

First you can sort by way more things than just rank and date:

[Image: CSWP_SortTab]
One thing to note there – I’m unclear as to what makes it into that ‘Sort by’ list and what does not. It’s not Managed Properties as far as I can tell, so although the list is long many options may not be hugely useful. Still, better than before.

Usefully, you can now do multi-level sorting (sort by this, then by that). The ‘Add sort level’ link in the image above adds another row, allowing me to do things like sorting by URL depth (so items higher up in the site hierarchy show at the top), and then by rank (that makes sense, because there’ll be lots of items at the same URL depth so I do need two levels of sorting):

[Image: CSWP_SortTab_Custom]

Note that effectively what I’m doing here is building some sort of custom ranking model. This works great if I need something very specific on sorting, but also note SharePoint 2013 comes with several ranking models – the next section allows me to pick from these if I’ve left the ‘Sort by’ dropdown on ‘Rank’, unlike in the image above. This is because all these options are effectively different forms of rank – most are around People Search or popularity:

[Image: CSWP_SortTab_RankingModel]

And for those occasions when the client is telling you that his/her strategic document really has to be on page 1 of the results (but not a Promoted Result/best bet), you have ‘Dynamic ordering’ – you can boost/demote results, including the option to promote to the top:

[Image: CSWP_SortTab_DynamicOrdering]

REFINERS tab

In the context of search, refiners are usually the links on the search engine’s results page (typically in the left nav) which allow the user to further filter the results. So if I do a search for “meeting minutes” and get lots of results, it would be nice to be able to filter by, say:

  • Date range
  • SharePoint site (since minutes might be stored in individual project sites)
  • Author
  • ..and so on

However, in the context of the Content Search web part, refiners actually allow you to do this filtering as part of the initial query. The REFINERS tab is effectively a convenience to you, the person configuring the web part – what happens is that a search is performed whilst in edit mode, and all relevant refiners (e.g. managed properties) are presented as available refiners. These can be selected and moved over to the right-hand list:

[Image: CSWP_RefinersTab]
The effect of this is that a further filter is added to my query. In the example above, this may be easier than using a Property Filter on the BASICS tab – since there I get little assistance and simply have to select the property and type the value by hand:

[Image: CSWP_BasicsTab_PropertyFilter]
In the REFINERS tab, SharePoint is doing the search for me (as it’s configured so far), and only coming back with values which have been found in the returned results.

SETTINGS tab

The SETTINGS tab controls some high-level options for running the search:

[Image: CSWP_SettingsTab]

Query rules

Since these can be defined at the parent site or search service, it could be the case that your CSWP gets affected by one of these. As the radio button shows, this can be overridden, but consider that some types of Query Rules may not have an effect anyway – as a reminder (from the table at the beginning), a Query Rule can either:

  • Add a promoted result
  • Add a result block
  • Change the ranked results somehow (by modifying the query)

Out of these 3 actions, 1.5 of them could affect the results of a ‘default’ CSWP. This can be summarized:

Query Rule Action – Will it affect CSWP results?

  • Add a promoted result – Not by default. When a search runs in SharePoint, multiple result sets are returned (e.g. ‘main results’, ‘best bet results’ and so on – in SP2013, the real names for these are ‘RelevantResults’, ‘SpecialTermResults’, ‘PersonalFavoriteResults’ and ‘RefinementResults’). Although a CSWP can be configured to show any of these tables, the default is ‘RelevantResults’ – and a promoted result gets added to ‘SpecialTermResults’.
  • Add a result block – Yes, if the result block is configured to be ‘ranked within core results’ (the default), rather than ‘shown above core results’.
  • Change ranked results – Yes.

For completeness, here’s the place in the CSWP where you select which search result set to use (e.g. if you want to switch from the default of ‘RelevantResults’):

[Image: CSWP_ResultTableSelection]

Options in the Results Table dropdown (shown to the left):

[Image: CSWP_ResultTableSelectionOptions]

URL rewriting

This one is fairly simple – if results are being returned from a catalog which is using “friendly” URLs, then the CSWP can override this to use the original URLs. It may not always make sense to use rewritten URLs in aggregations outside of the catalog pages, especially if you’ve implemented anything funky there.

Loading behavior

This is useful – specify whether the CSWP web part instance should load in the main page load (default) or in an AJAX manner after the main page has finished. Considering that a CSWP could either be the centerpiece of your landing page or merely some page footer navigation, it’s nice to be able to prioritize in this way.

Priority

Similarly, we can specify High, Medium or Low priority for each CSWP instance we use – great for the different usages we will have – although, as the description notes, this only has an effect if the search service is overloaded.

TEST tab

The TEST tab is hugely useful – it gives you the ability:

  • To see the underlying query text (in Keyword Query Language [KQL]) which has been generated (though it must be edited in other tabs)
  • To see the preview when you are defining a query yourself (the preview pane will be empty on other tabs in this scenario)

[Image: CSWP_TestTab_Less]
This is all great, but at first glance it’s easy to miss some extra functionality – if the ‘Show more’ link is clicked, other information becomes visible, including details of any refiners and Query Rules which have been applied. So below I can see that a custom Query Rule I created has indeed been used, so there’s no guesswork on (for example) whether a certain item is actually being promoted or not:

[Image: CSWP_TestTab_More]

Sidenote – listing items from ONE site/list/library with the Content Search web part

Worthy of a quick note – if all you need to do is roll up content from one list/library, then you can do this with the CSWP – in the query, simply restrict the search using PATH:[URL to document library]. The Query Builder UI helps you do this by providing the ‘Restrict by app’ area:

[Image: CSWP restrict to site or library]

N.B. one potential gotcha here is that you may need ‘http’ in the path if your sites are browsed over HTTPS but crawled on HTTP (as in my case).
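
If you want to check which URL scheme the index actually holds (and preview what a path restriction returns), one quick way is to run the same KQL through the SharePoint 2013 search REST endpoint – a minimal sketch, assuming hypothetical on-prem URLs:

# Send a path-restricted query to the SP2013 search REST API (hypothetical URLs)
$kql = 'path:"http://intranet/sites/projects/Shared Documents"'
$url = "http://intranet/_api/search/query?querytext='" + [uri]::EscapeDataString($kql) + "'"
$response = Invoke-WebRequest -Uri $url -UseDefaultCredentials

# Quick-and-dirty inspection: list the distinct URLs in the XML response
# to see whether items come back as http or https
([regex]::Matches($response.Content, 'https?://[^"<]+')).Value | Select-Object -Unique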

If you do want to filter by site/list/library, consider of course that the good ol’ Content Query web part will work just fine here, and you’ll get instant changes as content is changed. What you won’t have is the Content Search web part’s ability to automatically use tokens in the query (e.g. the value of the current navigation category, a value from the current user’s profile, etc.).

Summary

The Content Search web part is a great tool in the SharePoint consultant’s box of tricks. Configuration may prove quite simple for some scenarios, but there is also a huge amount of flexibility, and a certain degree of complexity comes with that. Many advanced scenarios which make use of SP2013 search capabilities (such as Result Sources, Query Rules, promoted results and so on) will be possible – knowing the details will help you identify whether the CSWP can be the answer to a particular problem or not.

How To : Update SharePoint Social Rating more than once per hour

These two Timer Jobs are responsible for Social Rating Updates:

(http://technet.microsoft.com/en-us/library/cc678870.aspx)

  • User Profile service application – social data maintenance
    • Aggregates social tags and ratings and cleans the social data change log.
  • User Profile service application proxy – social rating synchronization
    • Synchronizes rating values between the social database and content database.

By default, these jobs run hourly. You can modify them to run more often than once an hour – as frequently as once per minute.
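
As a minimal sketch (run from the SharePoint 2013 Management Shell, and with the performance caveat below in mind), both jobs can be rescheduled with Set-SPTimerJob – here down to the one-minute minimum:

# Reschedule the social data maintenance and social rating synchronization jobs
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

Get-SPTimerJob | Where-Object { $_.Name -like "*social*" } | ForEach-Object {
    Write-Host "Rescheduling" $_.DisplayName
    Set-SPTimerJob -Identity $_ -Schedule "Every 1 minutes between 0 and 59"
}

If the wildcard match on the internal job name picks up more jobs than expected in your farm, select the two jobs explicitly by their display names instead.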


Performance considerations when you let these timer jobs run, for example, once per minute:

You should pay particular attention to the performance of your social database, because it is the single place where every rating is written.

How to find out how long these timer jobs currently take to run:

# Get the social timer jobs (social data maintenance and social rating synchronization)
$social_timers = Get-SPTimerJob | Where-Object { $_.Name -like "*social*" }

foreach ($timer in $social_timers)
{
    Write-Host "Duration of timer job:" $timer.Name "(hh:mm:ss,ms)" -ForegroundColor Yellow

    # Each history entry records one past run of the job
    foreach ($hist in $timer.HistoryEntries)
    {
        $duration = $hist.EndTime - $hist.StartTime
        Write-Host ("{0}:{1}:{2},{3}" -f $duration.Hours, $duration.Minutes, $duration.Seconds, $duration.Milliseconds) -ForegroundColor Green
    }
}

SharePoint Samurai